Title:
Artificial Intelligence for Human-Animal Conflict Mitigation: Image Classification and Human Tracking in Tadoba Andhari Tiger Reserve
Authors:
Mothukuri Sujith, Shailendra Singh Kathait, Piyush Dhuliya
Valiance Analytics Private Limited
Summary:
This study presents an AI-driven approach to mitigating human-animal conflict within the Tadoba Andhari Tiger Reserve (TATR), located in the Chandrapur district. The area faces significant conflict pressure because it harbors diverse flora and fauna, including tigers, leopards, and bears that frequently come into contact with surrounding communities. The Human-Animal Conflict Mitigation System (HACMS) developed for TATR utilizes edge AI cameras, deep learning-based image classification, and human tracking to predict and prevent potential conflict scenarios.
Central to this approach are daytime-specific deep learning models, built on the YOLO v5 architecture, that detect and classify animals in real time. Three distinct models comprise the system: a custom detection model trained on species-specific data, a pre-trained YOLO model trained on public datasets, and a segmentation model that resolves the specific challenge of distinguishing visually similar animals such as bears and bison. Each model serves a dedicated function within the detection pipeline, achieving robust accuracy in species identification and human recognition.
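The three-model pipeline described above can be sketched as follows. This is a minimal illustration of the routing logic only; the class names, confidence threshold, and merge rule are assumptions for this sketch, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # predicted species or 'Human'
    confidence: float
    box: tuple        # (x1, y1, x2, y2) in pixels

# Classes the segmentation model is asked to disambiguate (illustrative).
AMBIGUOUS = {"bear", "bison"}

def run_pipeline(frame, custom_model, pretrained_model, segmentation_model):
    """Route a frame through the detection pipeline.

    1. The custom model contributes species-specific detections.
    2. The pre-trained model contributes general classes (e.g. 'Human').
    3. Ambiguous detections (bear vs. bison) are re-checked by the
       segmentation model, whose refined label takes precedence.
    """
    detections = custom_model(frame) + pretrained_model(frame)
    resolved = []
    for det in detections:
        if det.label in AMBIGUOUS:
            det = segmentation_model(frame, det)  # refine label via masks
        resolved.append(det)
    # Keep only confident predictions for downstream alerting.
    return [d for d in resolved if d.confidence >= 0.5]
```

In this sketch the segmentation model acts as a second-stage arbiter rather than a parallel detector, which is one plausible reading of how the bear/bison ambiguity is resolved.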
To build the models, a custom dataset of 7,959 images from TATR was used, with 73% allocated for training, 16% for validation, and 11% for testing. Data augmentation techniques such as rotation, brightness adjustment, and image preprocessing were applied to improve model generalization across varied lighting and forest conditions. The YOLO v5 architecture's anchor-based detection and batch normalization boosted efficiency and precision, allowing the model to adapt to objects of varying shape and size. With this setup, the system achieved an overall test accuracy of 94.82%, with near-perfect accuracy for critical species such as tigers, leopards, and bears, meeting forest authorities' requirements for reliable animal identification and alerting.
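The 73/16/11 split of the 7,959-image dataset can be reproduced with a simple shuffle-and-partition routine. The seed and function name below are illustrative assumptions; the study does not specify how the split was performed.

```python
import random

def split_dataset(image_paths, train_frac=0.73, val_frac=0.16, seed=42):
    """Shuffle and partition image paths into the train/validation/test
    fractions reported in the study (73% / 16% / 11%). Whatever remains
    after the train and validation slices becomes the test set."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # deterministic shuffle
    n = len(paths)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (paths[:n_train],                    # training set
            paths[n_train:n_train + n_val],     # validation set
            paths[n_train + n_val:])            # test set
```

Applied to 7,959 images, this yields roughly 5,810 training, 1,273 validation, and 876 test images.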
For human detection, the system integrates the NanoTrack tracker from OpenCV, which provides lightweight, real-time tracking of human movement within forest areas. When the AI-enabled cameras detect a human presence, the tracker is initialised on the detection and follows the individual's movement with bounding boxes across frames. This supports monitoring of human entry into restricted zones, alerting authorities when a person approaches potentially dangerous wildlife. Additionally, the pre-trained model was adjusted by replacing common vehicle classes with a 'Human' class, improving detection accuracy by focusing on forest-relevant categories.
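OpenCV's NanoTrack (`cv2.TrackerNano`) requires pretrained ONNX backbone and head files, so the self-contained sketch below substitutes a toy centroid-based tracker to illustrate the per-frame bounding-box association described above. All names and thresholds are illustrative assumptions, not the deployed implementation.

```python
class SimpleHumanTracker:
    """Toy stand-in for the NanoTrack-based human tracker: once a
    'Human' detection initialises the track, each new frame's candidate
    boxes are matched to the track by centroid distance."""

    def __init__(self, max_jump=50.0):
        self.box = None           # current (x1, y1, x2, y2), or None
        self.max_jump = max_jump  # max centroid movement between frames

    @staticmethod
    def _centroid(box):
        x1, y1, x2, y2 = box
        return ((x1 + x2) / 2, (y1 + y2) / 2)

    def init(self, box):
        """Start a track from an initial 'Human' detection box."""
        self.box = box

    def update(self, candidate_boxes):
        """Match the track to the nearest candidate box in the new frame.
        Returns the matched box, or None if the person was lost."""
        if self.box is None or not candidate_boxes:
            return None
        cx, cy = self._centroid(self.box)

        def dist(b):
            bx, by = self._centroid(b)
            return ((bx - cx) ** 2 + (by - cy) ** 2) ** 0.5

        best = min(candidate_boxes, key=dist)
        if dist(best) > self.max_jump:
            return None  # track lost; a fresh detection must re-initialise
        self.box = best
        return best
```

In the deployed system this role is played by NanoTrack, which tracks appearance features on-device rather than bare centroids, making it robust enough for edge cameras with sporadic connectivity.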
This paper emphasizes that effective conflict mitigation relies not only on accurate animal classification but also on tracking human activities to preemptively raise alerts and deter risky encounters. By harnessing edge analytics, the HACMS operates with limited dependence on cloud computing, making it well-suited to remote areas where connectivity may be sporadic. The system’s design is both scalable and adaptive, offering a template for future implementations in other high-conflict zones.
Ultimately, this research demonstrates the transformative potential of AI and deep learning in human-animal conflict management, combining real-time image analysis with proactive alerting to create a safer environment for both humans and animals. The solution offers a promising step toward sustainable coexistence, supporting local communities, wildlife authorities, and conservation efforts by leveraging innovative technology to address the complex dynamics of shared ecosystems.