Deep Learning-based Model for Wildlife Species Classification
Shailendra Singh Kathait, Ashish Kumar, Piyush Dhuliya and Ikshu Chauhan
Motion-activated cameras have become ubiquitous in ecological parks and wildlife sanctuaries, capturing images whenever their sensors detect movement, including infrared images that were once impractical to collect. Despite this technological leap, extracting relevant wildlife information from the resulting vast image datasets remains time- and labor-intensive. This paper presents a deep learning solution, applying the VGG16 ConvNet architecture through transfer learning, to achieve near-human-level accuracy in information extraction.
The study focuses on a dataset of 33,511 images representing 19 species from the Ladakh region of India. The trained model achieved an accuracy of 89.12%. The established pipeline shows strong potential for wildlife monitoring in other national parks, advancing ecological research and conservation.
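The transfer-learning setup described above can be sketched as follows, assuming the Keras implementation of VGG16; the head layers and hyperparameters here are illustrative choices, not the authors' exact configuration (in practice one would pass `weights="imagenet"` to load the pre-trained features; `weights=None` is used here only to avoid the download).

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

NUM_CLASSES = 19  # species in the Ladakh dataset

# Load the VGG16 convolutional base without its original 1000-class
# ImageNet classifier head (use weights="imagenet" in practice).
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze convolutional features for transfer learning

# Attach a new classifier head for the 19 wildlife species.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),  # illustrative head size
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Freezing the convolutional base lets the small wildlife dataset train only the new classifier head, which is the usual transfer-learning recipe when labeled data is limited.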
The methodology used 80% of the dataset for training and 20% for validation. The model was then evaluated on 3,309 unseen images, and the resulting confusion matrix shows accurate species classification, for example correctly identifying 284 out of 302 bird images.
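The evaluation protocol above (an 80/20 split followed by a per-species confusion matrix on held-out images) can be sketched with scikit-learn; the features, labels, and predictions below are synthetic placeholders, not the paper's data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.random((1000, 8))            # stand-in for image features
y = rng.integers(0, 19, size=1000)   # labels for the 19 species

# 80% training / 20% validation, as in the paper.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# After training, predictions on unseen images yield a 19x19 confusion
# matrix; its diagonal counts correct classifications per species
# (e.g. 284 of 302 bird images in the paper).
y_pred = rng.integers(0, 19, size=len(y_val))  # placeholder predictions
cm = confusion_matrix(y_val, y_pred, labels=np.arange(19))
per_class_accuracy = cm.diagonal() / cm.sum(axis=1)
```

The per-class accuracies read off the matrix diagonal are what make statements like "284 out of 302 bird images" directly verifiable for every species.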
However, the study acknowledges geographical limitations, emphasizing the need for region-specific models. Larger and more diverse training data is expected to improve overall test accuracy and extend the model's applicability beyond the Ladakh region.