IECE Transactions on Emerging Topics in Artificial Intelligence
Volume 1, Issue 1, 2024
CiteScore: 3.77
Academic Editor
Teerath Kumar, National College of Ireland, Ireland
IECE Transactions on Emerging Topics in Artificial Intelligence, Volume 1, Issue 1, 2024: 1-16

Open Access | Research Article | 07 April 2024
YOLOv8-Lite: A Lightweight Object Detection Model for Real-time Autonomous Driving Systems
1 College of Architecture and Design, Tongmyong University, Busan 608-711, Republic of Korea
* Corresponding Author: Ming Yang, [email protected]
Received: 17 December 2023, Accepted: 02 April 2024, Published: 07 April 2024  
Cited by: 13 (Web of Science), 14 (Google Scholar)
Abstract
With the rapid development of autonomous driving technology, the demand for real-time, efficient object detection systems has been increasing to ensure that vehicles can accurately perceive and respond to their surroundings. Traditional object detection models often suffer from large parameter counts and high computational resource consumption, limiting their applicability on edge devices. To address these issues, we propose YOLOv8-Lite, a lightweight object detection model based on the YOLOv8 framework and improved through several enhancements, including the adoption of the FastDet structure, the TFPN pyramid structure, and the CBAM attention mechanism. These improvements effectively enhance both the performance and the efficiency of the model. Experimental results demonstrate significant performance gains on the NEXET and KITTI datasets. Compared with traditional methods, our model achieves higher accuracy and robustness in object detection tasks, better addressing the challenges of fields such as autonomous driving and contributing to the advancement of intelligent transportation systems.
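The CBAM attention mechanism named in the abstract combines channel attention (which channels matter) with spatial attention (where in the feature map to look). The sketch below is an illustration only, not the authors' implementation: it is a simplified NumPy version in which the learned 7×7 convolution of real CBAM is replaced by a fixed average of the pooled maps, and `w1`/`w2` are hypothetical shared-MLP weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """CBAM channel attention: a shared two-layer MLP scores avg- and
    max-pooled channel descriptors; their sum gates the channels."""
    avg = x.mean(axis=(1, 2))                     # (C,) average-pooled descriptor
    mx = x.max(axis=(1, 2))                       # (C,) max-pooled descriptor
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0)  # MLP on avg descriptor (ReLU hidden)
                  + w2 @ np.maximum(w1 @ mx, 0.0))  # same MLP on max descriptor
    return x * att[:, None, None]                 # rescale each channel

def spatial_attention(x):
    """CBAM spatial attention: pool across channels, then gate each
    spatial location. A fixed average stands in for the learned 7x7 conv."""
    avg = x.mean(axis=0)                          # (H, W)
    mx = x.max(axis=0)                            # (H, W)
    att = sigmoid((avg + mx) / 2.0)               # simplification of the conv
    return x * att[None, :, :]

def cbam(x, w1, w2):
    """Apply channel attention, then spatial attention, as in CBAM."""
    return spatial_attention(channel_attention(x, w1, w2))
```

Because both attention maps pass through a sigmoid, every gate lies in (0, 1), so the module reweights features without changing the tensor shape, which is why it can be dropped into a YOLO-style backbone at negligible parameter cost.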

Graphical Abstract
YOLOv8-Lite: A Lightweight Object Detection Model for Real-time Autonomous Driving Systems

Keywords
autonomous driving
object detection
YOLOv8
real-time performance
intelligent transportation

Data Availability Statement
Data will be made available on request.

Funding
This work received no external funding.

Conflicts of Interest
The authors declare no conflicts of interest. 

Ethical Approval and Consent to Participate
Not applicable.

References
  1. Huang, Y., & Chen, Y. (2020). Autonomous driving with deep learning: A survey of state-of-art technologies. arXiv preprint arXiv:2006.06091.
    [Google Scholar]
  2. Pang, Z., Chen, Z., Lu, J., Zhang, M., Feng, X., Chen, Y., ... & Cao, Y. (2023). A survey of decision-making safety assessment methods for autonomous vehicles. IEEE Intelligent Transportation Systems Magazine, 16(1), 74-103.
    [CrossRef]   [Google Scholar]
  3. Cunningham, A. G., Galceran, E., Eustice, R. M., & Olson, E. (2015, May). MPDM: Multipolicy decision-making in dynamic, uncertain environments for autonomous driving. In 2015 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1670-1677). IEEE.
    [CrossRef]   [Google Scholar]
  4. Feng, D., Harakeh, A., Waslander, S. L., & Dietmayer, K. (2021). A review and comparative study on probabilistic object detection in autonomous driving. IEEE Transactions on Intelligent Transportation Systems, 23(8), 9961-9980.
    [CrossRef]   [Google Scholar]
  5. Mao, J., Shi, S., Wang, X., & Li, H. (2023). 3D object detection for autonomous driving: A comprehensive survey. International Journal of Computer Vision, 131(8), 1909-1963.
    [CrossRef]   [Google Scholar]
  6. Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 779-788).
    [Google Scholar]
  7. Zou, H., Zhan, H., & Zhang, L. (2022). Neural network based on multi-scale saliency fusion for traffic signs detection. Sustainability, 14(24), 16491.
    [CrossRef]   [Google Scholar]
  8. Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28.
    [Google Scholar]
  9. Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., & Berg, A. C. (2016). SSD: Single shot multibox detector. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I 14 (pp. 21-37). Springer International Publishing.
    [CrossRef]   [Google Scholar]
  10. Zimmermann, R. S., & Siems, J. N. (2019). Faster training of Mask R-CNN by focusing on instance boundaries. Computer Vision and Image Understanding, 188, 102795.
    [CrossRef]   [Google Scholar]
  11. He, K., Gkioxari, G., Dollár, P., & Girshick, R. (2017). Mask R-CNN. In Proceedings of the IEEE international conference on computer vision (pp. 2961-2969).
    [Google Scholar]
  12. Howard, A. G. (2017). MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
    [Google Scholar]
  13. Tan, M., Pang, R., & Le, Q. V. (2020). EfficientDet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 10781-10790).
    [Google Scholar]
  14. Zhang, S., Wen, L., Lei, Z., & Li, S. Z. (2020). RefineDet++: Single-shot refinement neural network for object detection. IEEE Transactions on Circuits and Systems for Video Technology, 31(2), 674-687.
    [CrossRef]   [Google Scholar]
  15. Han, S., Mao, H., & Dally, W. J. (2015). Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149.
    [Google Scholar]
  16. Chen, Y. H., Krishna, T., Emer, J. S., & Sze, V. (2016). Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE Journal of Solid-State Circuits, 52(1), 127-138.
    [CrossRef]   [Google Scholar]
  17. Carranza-García, M., Lara-Benítez, P., García-Gutiérrez, J., & Riquelme, J. C. (2021). Enhancing object detection for autonomous driving by optimizing anchor generation and addressing class imbalance. Neurocomputing, 449, 229-244.
    [CrossRef]   [Google Scholar]
  18. Arora, N., Kumar, Y., Karkra, R., & Kumar, M. (2022). Automatic vehicle detection system in different environment conditions using fast R-CNN. Multimedia Tools and Applications, 81(13), 18715-18735.
    [Google Scholar]
  19. Soylu, E., & Soylu, T. (2024). A performance comparison of YOLOv8 models for traffic sign detection in the Robotaxi-full scale autonomous vehicle competition. Multimedia Tools and Applications, 83(8), 25005-25035.
    [Google Scholar]
  20. Redmon, J., & Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767.
    [Google Scholar]
  21. Tian, Z., Shen, C., Chen, H., & He, T. (2020). FCOS: A simple and strong anchor-free object detector. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(4), 1922-1933.
    [CrossRef]   [Google Scholar]
  22. Zou, Z., Chen, K., Shi, Z., Guo, Y., & Ye, J. (2023). Object detection in 20 years: A survey. Proceedings of the IEEE, 111(3), 257-276.
    [CrossRef]   [Google Scholar]
  23. Reis, D., Kupec, J., Hong, J., & Daoudi, A. (2023). Real-time flying object detection with YOLOv8. arXiv preprint arXiv:2305.09972.
    [Google Scholar]
  24. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30.
    [Google Scholar]
  25. Lin, T. Y., Dollár, P., Girshick, R., He, K., Hariharan, B., & Belongie, S. (2017). Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2117-2125).
    [Google Scholar]
  26. Wang, C. Y., Liao, H. Y. M., Wu, Y. H., Chen, P. Y., Hsieh, J. W., & Yeh, I. H. (2020). CSPNet: A new backbone that can enhance learning capability of CNN. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops (pp. 390-391).
    [Google Scholar]
  27. Nexar Team. (2020). The NEXET Dataset: A Large-Scale Benchmark for Autonomous Driving Perception. Nexar Technical Report TR-2020-001. Retrieved from https://www.getnexar.com/nexet-dataset
    [Google Scholar]
  28. Geiger, A., Lenz, P., & Urtasun, R. (2012, June). Are we ready for autonomous driving? The KITTI vision benchmark suite. In 2012 IEEE conference on computer vision and pattern recognition (pp. 3354-3361). IEEE.
    [CrossRef]   [Google Scholar]
  29. Geiger, A., Lenz, P., Stiller, C., & Urtasun, R. (2013). Vision meets robotics: The KITTI dataset. The International Journal of Robotics Research, 32(11), 1231-1237.
    [Google Scholar]
  30. Chen, X. (2022, October). Traffic lights detection method based on the improved YOLOv5 network. In 2022 IEEE 4th International Conference on Civil Aviation Safety and Information Technology (ICCASIT) (pp. 1111-1114). IEEE.
    [CrossRef]   [Google Scholar]
  31. Li, S., Wang, S., & Wang, P. (2023). A small object detection algorithm for traffic signs based on improved YOLOv7. Sensors, 23(16), 7145.
    [CrossRef]   [Google Scholar]
  32. Mohandas, P. (2023). SAD: Sensor-based anomaly detection system for smart junctions. IEEE Sensors Journal, 23(17), 20368-20378.
    [CrossRef]   [Google Scholar]
  33. Talaat, F. M., & ZainEldin, H. (2023). An improved fire detection approach based on YOLO-v8 for smart cities. Neural Computing and Applications, 35(28), 20939-20954.
    [Google Scholar]

Cite This Article
APA Style
Yang, M., & Fan, X. (2024). YOLOv8-Lite: A Lightweight Object Detection Model for Real-time Autonomous Driving Systems. IECE Transactions on Emerging Topics in Artificial Intelligence, 1(1), 1–16. https://doi.org/10.62762/TETAI.2024.894227

Article Metrics
Citations: Crossref 2 | Scopus 14 | Web of Science 13
Article Access Statistics:
Views: 5689
PDF Downloads: 658

Publisher's Note
IECE stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions
CC BY Copyright © 2024 by the Author(s). Published by Institute of Emerging and Computer Engineers. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
IECE Transactions on Emerging Topics in Artificial Intelligence

ISSN: 3066-1676 (Online) | ISSN: 3066-1668 (Print)

Email: [email protected]

Portico

All published articles are preserved here permanently:
https://www.portico.org/publishers/iece/

Copyright © 2024 Institute of Emerging and Computer Engineers Inc.