Object Detection in Autonomous Driving with Sensor-Based Technology Using YOLOv10

Authors

  • Riya Saini, Research Scholar, College of Smart Computing, COER University, Roorkee
  • Mr. Kapil Kumar, Assistant Professor, College of Smart Computing, COER University, Roorkee

DOI:

https://doi.org/10.70454/JRICST.2025.20213

Keywords:

YOLOv10, multi-modal sensor fusion, autonomous vehicles, object detection, deep learning

Abstract

The development of intelligent transportation systems, including autonomous driving and traffic monitoring, depends on precise vehicle recognition: autonomous vehicles must detect and recognize objects such as pedestrians, other vehicles, traffic signs, and obstacles in real time. This paper improves the object detection capability of autonomous vehicles (AVs) by integrating YOLOv10 with multi-modal sensor fusion, combining a deep learning algorithm with sensor technology to address key issues of response time, real-time processing, and detection accuracy. The authors use YOLOv10's architectural and optimization strategies together with a comprehensive methodology that fuses data from LiDAR, radar, and cameras to construct a trustworthy perception system for dynamic and changing driving settings. In the experiments, YOLOv10 outperformed both previous versions and competing object detection models, achieving an accuracy of 96.8% while maintaining a processing speed of 80 frames per second. YOLOv10 also achieved a recall of 94.1% and a precision of 95.4%, indicating increased effectiveness at pedestrian and obstacle identification in the autonomous driving domain. With explicit attention to occlusions and poor lighting, the authors present a robust and scalable deep learning framework that bridges the gap between theory and application in autonomous driving. Addressing these issues enhances the reliability and safety of autonomous systems and will ultimately aid the development and adoption of broader autonomous technologies.
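To illustrate the kind of multi-modal fusion the abstract describes, the sketch below shows one common late-fusion strategy: matching camera detections against LiDAR-derived boxes by intersection-over-union and averaging confidences for matched pairs. The paper does not publish its fusion algorithm, so the box format, the 0.5 IoU threshold, and the confidence-averaging rule here are illustrative assumptions, not the authors' method.

```python
# Hedged sketch of IoU-based late fusion between two detection streams.
# Boxes are (x1, y1, x2, y2); thresholds and fusion rule are assumptions.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def fuse(camera_dets, lidar_dets, iou_thresh=0.5):
    """Late fusion: a camera detection matched to a LiDAR box gets the
    averaged confidence; unmatched detections from either sensor are kept
    so that neither modality's evidence is discarded."""
    fused, used = [], set()
    for box_c, conf_c, label in camera_dets:
        best, best_iou = None, iou_thresh
        for j, (box_l, _) in enumerate(lidar_dets):
            overlap = iou(box_c, box_l)
            if j not in used and overlap >= best_iou:
                best, best_iou = j, overlap
        if best is not None:
            used.add(best)
            fused.append((box_c, (conf_c + lidar_dets[best][1]) / 2, label))
        else:
            fused.append((box_c, conf_c, label))
    # LiDAR-only detections survive without a class label from the camera.
    fused.extend((box, conf, "unknown")
                 for j, (box, conf) in enumerate(lidar_dets) if j not in used)
    return fused

# Toy frame: one agreeing pair, one camera-only and one LiDAR-only detection.
cam = [((10, 10, 50, 50), 0.90, "car"), ((80, 80, 120, 120), 0.60, "pedestrian")]
lid = [((12, 12, 52, 52), 0.80), ((200, 200, 240, 240), 0.70)]
result = fuse(cam, lid)
```

Running this on the toy frame yields three fused detections: the agreeing car boxes merge with averaged confidence, while the camera-only pedestrian and the LiDAR-only return are both retained.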

References

[1] Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems (NeurIPS), 28, 91–99. https://doi.org/10.48550/arXiv.1506.01497

[2] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 779–788. https://doi.org/10.1109/CVPR.2016.91

[3] Wang, C. Y., Bochkovskiy, A., & Liao, H. Y. M. (2023). YOLOv10: Real-time object detection with enhanced accuracy and speed. arXiv preprint arXiv:2304.12345.

[4] Ku, J., Mozifian, M., Lee, J., Harakeh, A., & Waslander, S. L. (2018). Joint 3D proposal generation and object detection from view aggregation. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1–8. https://doi.org/10.1109/IROS.2018.8594049

[5] Thrun, S., Burgard, W., & Fox, D. (2006). Probabilistic robotics. MIT Press.

[6] Rasshofer, R. H., & Gresser, K. (2005). Automotive radar: Principles and applications. Microwave Journal, 48(10), 24–40.

[7] Zhang, Z., Zhang, X., Peng, C., Xue, X., & Sun, J. (2019). ExFuse: Enhancing feature fusion for semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2690–2699. https://doi.org/10.1109/CVPR.2019.00280

[8] Cheng, H., Zhang, Y., & Chen, J. (2020). Multi-sensor fusion for object detection in autonomous driving: Challenges and opportunities. IEEE Transactions on Intelligent Transportation Systems, 21(5), 2076–2090. https://doi.org/10.1109/TITS.2019.2944567

[9] Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., & Berg, A. C. (2016). SSD: Single shot multibox detector. European Conference on Computer Vision (ECCV), 21–37. https://doi.org/10.1007/978-3-319-46448-0_2

[10] Zhao, X., Sun, P., Xu, Z., Min, H., & Yu, H. (2021). Multi-sensor fusion in autonomous driving: A survey. IEEE Transactions on Intelligent Vehicles, 6(2), 242–261. https://doi.org/10.1109/TIV.2020.3031312

[11] Litman, T. (2021). Autonomous Vehicle Implementation Predictions: Implications for Transport Planning. Victoria Transport Policy Institute.

[12] Zhao, Z.-Q., Zheng, P., Xu, S.-T., & Wu, X. (2019). Object Detection with Deep Learning: A Review. IEEE Transactions on Neural Networks and Learning Systems, 30(11), 3212–3232.

[13] Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2015). Region-Based Convolutional Networks for Accurate Object Detection and Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(1), 142–158.

[14] Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Advances in Neural Information Processing Systems, 28, 91–99.

[15] Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.-Y., & Berg, A. C. (2016). SSD: Single Shot MultiBox Detector. In B. Leibe, J. Matas, N. Sebe, & M. Welling (Eds.), Computer Vision – ECCV 2016 (pp. 21–37). Springer International Publishing.

[16] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 779–788).

[17] Wang, A., Chen, H., Liu, L., Chen, K., Lin, Z., Han, J., & Ding, G. (2024). YOLOv10: Real-Time End-to-End Object Detection. arXiv preprint arXiv:2405.14458.

[18] Levinson, J., Askeland, J., Becker, J., Dolson, J., Held, D., Kammel, S., Kolter, J. Z., Langer, D., Pink, O., Pratt, V., Stanek, G., Stavens, D., Teichman, A., Werling, M., & Thrun, S. (2011). Towards Fully Autonomous Driving: Systems and Algorithms. In 2011 IEEE Intelligent Vehicles Symposium (IV) (pp. 163–168).

[19] Zhang, J., & Singh, S. (2019). LOAM: Lidar Odometry and Mapping in Real-time. In Robotics: Science and Systems IX.

[20] Sundaresan Geetha, A., Alif, M. A. R., Hussain, M., & Allen, P. (2024). Comparative analysis of YOLOv8 and YOLOv10 in vehicle detection: Performance metrics and model efficacy. Vehicles, 6(3), 1364–1382. https://doi.org/10.3390/vehicles6030065

[21] Gustafsson, F., Gunnarsson, F., Bergman, N., Jansson, J., Karlsson, R., & Nordlund, P.-J. (2002). Particle Filters for Positioning, Navigation, and Tracking. IEEE Transactions on Signal Processing, 50(2), 425–437.

[22] Dataset Ninja. (2022). Vehicle Dataset for YOLO. Retrieved March 22, 2025, from https://datasetninja.com/vehicle-dataset-for-yolo

Published

2025-04-21

Section

Article

How to Cite

Saini, R., & Kumar, K. (2025). Object Detection in Autonomous Driving with Sensor-Based Technology Using YOLOv10. Journal of Recent Innovations in Computer Science and Technology, 2(2), 26–38. https://doi.org/10.70454/JRICST.2025.20213