Title:
Analyzing the enhancement of CNN-YOLO and transformer based architectures for real-time animal detection in complex ecological environments.
Source:
Scientific Reports; 11/7/2025, Vol. 15 Issue 1, p1-33, 33p
Database:
Complementary Index


Automatic animal detection has become a critical capability in ecology, conservation, agriculture, and public safety, driven by the rapid growth of visual data collected through camera traps, UAVs, and remote sensors. The necessity of this study arises from the increasing demand to understand and apply these underlying detection techniques in practical domains such as animal husbandry, farming, and livestock management, where timely and accurate animal identification directly impacts productivity, welfare, and safety. Traditional convolutional neural networks (CNNs) have demonstrated strong accuracy in static or controlled environments but often face limitations in computational cost and inference speed. In contrast, the You Only Look Once (YOLO) family of one-stage detectors has revolutionized animal detection by achieving real-time performance while maintaining competitive accuracy across challenging geospatial environments. This review provides a chronological synthesis of detection approaches, tracing the evolution from handcrafted features and two-stage CNN-based models to modern YOLO architectures and transformer-enhanced frameworks. A detailed comparative analysis is presented, highlighting trade-offs in accuracy, speed, robustness, and deployment feasibility across diverse datasets, including camera trap imagery, UAV-based surveys, and satellite observations. Persistent challenges such as small-object detection, class imbalance, and limited cross-geographical generalization are discussed alongside enhancement strategies, including attention mechanisms, few-shot learning, and domain adaptation. Furthermore, practical deployment considerations are explored, with emphasis on edge computing platforms such as Jetson Nano, Coral TPU, and UAV-embedded systems. This review adopts a systematic methodology following PRISMA guidelines, covering studies published between 2015 and 2025, from which 142 were included after screening. 
Comparative findings show that on camera-trap datasets, transformer-augmented YOLO variants achieve up to 94% mAP under controlled illumination, while lightweight YOLOv7-SE and YOLOv8 architectures offer superior real-time performance (>60 FPS) on UAV-based imagery. However, large-scale deployment remains constrained by edge-device memory limits and cross-domain generalization challenges. [ABSTRACT FROM AUTHOR]
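The mAP figures quoted above are ultimately built on intersection-over-union (IoU) matching between predicted and ground-truth boxes. As a minimal illustrative sketch (not the paper's evaluation code; real COCO/VOC mAP additionally averages precision over recall levels, classes, and IoU thresholds), the core IoU step looks like this:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detection counts as a true positive when IoU meets a threshold,
# typically 0.5 -- the convention behind "mAP@0.5" figures.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # partially overlapping boxes
```

A threshold of 0.5 is the common default; stricter thresholds (0.5:0.95) penalize loose localization, which matters for the small-object detection challenges the review highlights.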

Copyright of Scientific Reports is the property of Springer Nature and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)