Title:
YOLO-RAPD: Enhanced YOLOv8s-Based Automated Detection of Road Assets and Pavement Distress.
Source:
Journal of Computing in Civil Engineering; Nov2025, Vol. 39 Issue 6, p1-20, 20p
Database:
Complementary Index

This paper introduces an object detection algorithm called you only look once-road assets and pavement distress (YOLO-RAPD), developed to automatically identify road assets and pavement distresses. YOLO-RAPD is an improvement based on you only look once version 8 small (YOLOv8s). Unlike YOLOv8s, YOLO-RAPD utilizes a backbone network with multilayer fusion and an enhanced feature extraction network that autonomously selects the number of cycles to optimize feature extraction. Additionally, a linking method is employed to aggregate results from multiple loop layers. In tests using 6,333 images, YOLO-RAPD-1 (the smallest configuration) achieved a mean average precision (mAP) of 68.06% at a confidence level of 0.5, with a processing speed of 102 frames/s (FPS). YOLO-RAPD-4 (the larger configuration) achieved a mAP of 75.36% at a confidence level of 0.5, with a processing speed of 40 FPS. Compared with several advanced models [you only look once version 8 nano (YOLOv8n), YOLOv8s, you only look once version 8 medium (YOLOv8m), you only look once version 8 large (YOLOv8l), you only look once version 8 extra-large (YOLOv8x), and fully convolutional one-stage object detection (FCOS)], YOLO-RAPD demonstrated superior detection accuracy on the same test set. To further validate the model's generalizability and performance under complex conditions, the authors trained and tested it on the publicly available CeyMo data set. The results showed that YOLO-RAPD maintains high detection efficiency across different scenarios, highlighting its strong generalization capability. Notably, the detection accuracy and speed of YOLO-RAPD can be fine-tuned by adjusting the number of autonomously selected loop layers, suggesting that this model holds significant potential for the automated detection of road assets and pavement distresses.
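The loop-layer mechanism described above can be illustrated with a minimal, hypothetical sketch: one feature-extraction block is applied a configurable number of cycles, and a linking step aggregates the outputs of all loop iterations. All names here (`extract_block`, `link_outputs`, `LoopExtractor`) and the toy arithmetic are illustrative assumptions, not the paper's actual layers; the point is only the structure in which a larger loop count trades speed for accuracy, as in the YOLO-RAPD-1 through YOLO-RAPD-4 configurations.

```python
def extract_block(features):
    """Stand-in for one pass of the enhanced feature-extraction block
    (a toy affine transform; the real block is a learned network)."""
    return [x * 0.5 + 1.0 for x in features]


def link_outputs(outputs):
    """Stand-in for the linking method: here, an element-wise mean
    over the outputs of all loop layers."""
    n = len(outputs)
    return [sum(vals) / n for vals in zip(*outputs)]


class LoopExtractor:
    def __init__(self, num_loops):
        # num_loops plays the role of the autonomously selected cycle
        # count; more loops means more computation per image.
        self.num_loops = num_loops

    def forward(self, features):
        outputs = []
        for _ in range(self.num_loops):
            features = extract_block(features)  # refine features each cycle
            outputs.append(features)            # keep every loop layer's output
        return link_outputs(outputs)            # aggregate across loop layers


# A 1-loop configuration is faster; a 4-loop configuration does more work
# per image, mirroring the accuracy/speed trade-off reported in the abstract.
fast = LoopExtractor(num_loops=1).forward([2.0, 4.0])
accurate = LoopExtractor(num_loops=4).forward([2.0, 4.0])
```

In the real model the aggregation would operate on feature maps rather than scalars, but the control flow, repeated refinement with a final cross-loop fusion, is the same shape.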
Practical Applications: Pavement distresses such as cracks, potholes, spalling, and missing markings are the most common forms of damage addressed in highway maintenance, directly affecting the service life of the road and driving safety. As urban traffic grows, traditional manual inspection can no longer meet the demand for efficient and accurate inspection, and intelligent inspection technology has in recent years become the mainstream choice in highway maintenance. This study proposes an automatic detection method for pavement distress and traffic assets based on the YOLO-RAPD model, aiming to improve detection accuracy and speed. Compared with existing traditional image-processing methods, the model not only detects pavement distresses such as cracks and potholes in real time but also identifies traffic assets such as pavement markings, traffic signs, and noise barriers, covering the intelligent identification of a wide range of pavement problems and facility assets. In applications across several real projects, the YOLO-RAPD model achieved high detection accuracy and speed in various complex environments and operated stably under different road conditions, effectively reducing the cost of manual inspection and improving the timeliness and efficiency of pavement maintenance, demonstrating strong engineering practicality and reliability. [ABSTRACT FROM AUTHOR]