Optimisation of Decision Efficiency for Autonomous Driving at Unsignalised Intersections Based on DRL and GPT
The rapid growth in the number of urban vehicles has intensified traffic congestion and safety challenges, and unsignalised intersections pose particular difficulties for autonomous-vehicle decision-making. To improve decision efficiency and safety in such scenarios, this study proposes a decision optimisation method for autonomous driving at unsignalised intersections. The approach first employs a generative pre-trained transformer (GPT) to learn complex interactive behaviour patterns from driving data and acquire prior knowledge. This prior knowledge is then used to initialise the policy network of a deep reinforcement learning (DRL) agent, specifically a deep Q-network (DQN), which is further optimised through interaction with a simulated environment. The framework thereby combines the sequence-modelling capability of GPT with the goal-directed optimisation strength of DRL. Experimental results demonstrate that the proposed method achieves superior performance: the median safe distance reaches 19.58 m (maximum 32.50 m, minimum 8.46 m), the collision rate is as low as 1.07%, and the success rate exceeds 98%. Compared with baseline methods, the proposed approach significantly improves decision-making efficiency and safety for autonomous vehicles at unsignalised intersections, validating its effectiveness.
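The warm-start idea in the abstract, initialising a DQN policy from knowledge distilled by a pretrained model and then refining it with Q-learning, can be sketched as follows. This is a minimal illustration only: the state dimension, action set, linear Q-network, and the `prior` weights standing in for GPT-derived knowledge are all assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a small intersection state vector and a discrete
# action set such as {yield, proceed, stop}.
STATE_DIM, N_ACTIONS = 8, 3

def init_policy(pretrained_w=None):
    """Initialise DQN weights, optionally warm-started from prior knowledge."""
    w = rng.normal(0.0, 0.1, (STATE_DIM, N_ACTIONS))
    if pretrained_w is not None:
        # Warm start: copy weights derived from the pretrained sequence model
        # instead of starting from random initialisation.
        w = pretrained_w.copy()
    return w

def q_values(w, s):
    # A linear Q-network keeps the sketch short; the paper's agent is deeper.
    return s @ w

def dqn_update(w, s, a, r, s_next, gamma=0.99, lr=1e-2):
    """One temporal-difference update on the linear Q-network."""
    target = r + gamma * np.max(q_values(w, s_next))
    td_err = target - q_values(w, s)[a]
    w[:, a] += lr * td_err * s  # gradient of the linear Q w.r.t. column a
    return td_err

# Stand-in for GPT prior knowledge, then fine-tuning in "simulation".
prior = rng.normal(0.0, 0.1, (STATE_DIM, N_ACTIONS))
w = init_policy(prior)
s, s_next = rng.normal(size=STATE_DIM), rng.normal(size=STATE_DIM)
dqn_update(w, s, a=1, r=1.0, s_next=s_next)
```

In practice the warm start would copy (or partially load) pretrained network layers rather than a single weight matrix, and the fine-tuning loop would run with experience replay and an exploration schedule; the sketch only shows where the prior enters.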
Copyright (c) 2026 Bojun LIU

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.