IALight: Importance-Aware Multi-Agent Reinforcement Learning for Arterial Traffic Cooperative Control

Authors

  • Lu WEI Beijing Polytechnic College, School of Information Engineering
  • Xiaoyan ZHANG Beijing Polytechnic College, School of Information Engineering
  • Lijun FAN Beijing Polytechnic College, School of Information Engineering
  • Lei GAO North China University of Technology, School of Computer Science and Technology
  • Jian YANG North China University of Technology, School of Computer Science and Technology

DOI:

https://doi.org/10.7307/ptt.v37i1.650

Keywords:

traffic signal control, intersection importance, multi-agent reinforcement learning, arterial cooperative control

Abstract

Multi-intersection cooperative control in arterial or network scenarios is a crucial issue in urban traffic management. Multi-agent reinforcement learning (MARL) has been recognised as an efficient solution and has shown strong results. However, most existing MARL-based methods treat intersections equally, ignoring the differing importance of each intersection, such as carrying high traffic volume, connecting multiple main roads, or serving as an entry or exit point for highways or commercial areas. Besides, learning efficiency and practicality remain challenges. To address these issues, this paper proposes a novel importance-aware MARL-based method named IALight for traffic optimisation control. First, a normalised traffic pressure is introduced so that the state and reward design accurately reflects the status of intersection traffic flow. Second, a reward adjustment module is designed to modify the reward according to intersection importance. To enhance practicality and safety in real-world applications, we adopt a green-duration optimisation strategy under a cyclic fixed phase sequence. Comprehensive experiments on both synthetic and real-world traffic scenarios demonstrate that the proposed IALight outperforms traditional and deep reinforcement learning baselines by more than 20.41% and 17.88% in average vehicle travel time, respectively.
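The two core ideas in the abstract (a capacity-normalised traffic pressure, and a reward scaled by intersection importance) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the per-lane capacity normalisation, and the `alpha` scaling factor are assumptions for illustration only.

```python
def normalised_pressure(incoming, outgoing, capacity):
    """Illustrative normalised pressure of an intersection (assumption: a
    max-pressure-style definition): the difference between queue occupancy on
    incoming and outgoing lanes, with each queue divided by its lane capacity
    so that lanes of different lengths contribute comparably."""
    inc = sum(q / c for q, c in zip(incoming, capacity["in"]))
    out = sum(q / c for q, c in zip(outgoing, capacity["out"]))
    return inc - out


def adjusted_reward(pressure, importance, alpha=1.0):
    """Illustrative reward adjustment module: scale the negative pressure by an
    importance weight so that relieving congestion at critical intersections
    (high volume, main-road junctions, highway entries) earns a larger reward."""
    return -alpha * importance * pressure
```

For example, an intersection with queues of 4 and 2 vehicles on two incoming lanes and 1 vehicle on each of two outgoing lanes (all capacities 10) has a normalised pressure of 0.4; with an importance weight of 2.0, its adjusted reward is -0.8, twice the penalty an ordinary intersection would receive for the same congestion.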

References

Mao F, Li Z, Li L. A comparison of deep reinforcement learning models for isolated traffic signal control. IEEE Intelligent Transportation Systems Magazine. 2023;15(1):169-180. DOI: 10.1109/MITS.2022.3144797.

Haddad TA, Hedjazi D, Aouag S. A deep reinforcement learning-based cooperative approach for multi-intersection traffic signal control. Engineering Applications of Artificial Intelligence. 2022;114:105019. DOI: 10.1016/j.engappai.2022.105019.

Wang T, Cao J, Hussain A. Adaptive traffic signal control for large-scale scenario with cooperative group-based multi-agent reinforcement learning. Transportation Research Part C: Emerging Technologies. 2021;125:103046. DOI: 10.1016/j.trc.2021.103046.

Liu J, Zhang H, Fu Z, Wang Y. Learning scalable multi-agent coordination by spatial differentiation for traffic signal control. Engineering Applications of Artificial Intelligence. 2021;100:104165. DOI: 10.1016/j.engappai.2021.104165.

Zhao W, et al. IPDALight: Intensity- and phase duration-aware traffic signal control based on reinforcement learning. Journal of Systems Architecture. 2022;123:102374. DOI: 10.1016/j.sysarc.2021.102374.

Chandan K, Seco AM, Silva AB. Real-time traffic signal control for isolated intersection, using car-following logic under connected vehicle environment. Transportation Research Procedia. 2017;25:1610-1625. DOI: 10.1016/j.trpro.2017.05.207.

Kustija J. SCATS (Sydney Coordinated Adaptive Traffic System) as a solution to overcome traffic congestion in big cities. International Journal of Research and Applied Technology (INJURATECH). 2023;3(1):1-14. DOI: 10.34010/injuratech.v3i1.7875.

Studer L, Ketabdari M, Marchionni G. Analysis of adaptive traffic control systems design of a decision support system for better choices. Journal of Civil & Environmental Engineering. 2015;5(6):1-10.

Li L, Lv Y, Wang FY. Traffic signal timing via deep reinforcement learning. IEEE/CAA Journal of Automatica Sinica. 2016;3:247–254. DOI: 10.1109/JAS.2016.7508798.

Genders W, Razavi S. Using a deep reinforcement learning agent for traffic signal control. arXiv preprint arXiv:1611.01142, 2016. DOI: 10.48550/arXiv.1611.01142.

Liang X, Du X, Wang G, Han Z. A deep reinforcement learning network for traffic light cycle control. IEEE Transactions on Vehicular Technology. 2019;68:1243–1253. DOI: 10.1109/TVT.2018.2890726.

Zhang L, Deng J. Data might be enough: Bridge real-world traffic signal control using offline reinforcement learning. arXiv preprint arXiv:2303.10828, 2023. DOI: 10.48550/arXiv.2303.10828.

Wei H, et al. Presslight: Learning max pressure control to coordinate traffic signals in arterial network. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2019:1290–1298. DOI: 10.1145/3292500.3330949.

Li S. Multi-agent deep deterministic policy gradient for traffic signal control on urban road network. 2020 IEEE International Conference on Advances in Electrical Engineering and Computer Applications (AEECA). IEEE. 2020:896–900. DOI: 10.1109/AEECA49918.2020.9213523.

Ma D, Chen X, Wu X, Jin S. Mixed-coordinated decision-making method for arterial signals based on reinforcement learning. Journal of Transportation Systems Engineering and Information Technology. 2022;22:145. DOI: 10.16097/j.cnki.1009-6744.2022.02.014.

Wei H, et al. Colight: Learning network-level cooperation for traffic signal control. Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 2019:1913–1922. DOI: 10.1145/3357384.3357902.

Chen C, et al. Toward a thousand lights: Decentralized deep reinforcement learning for large-scale traffic signal control. Proceedings of the AAAI Conference on Artificial Intelligence. 2020;34(4):3414–3421. DOI: 10.1609/aaai.v34i04.5744.

Xu M, et al. Discovery of critical nodes in road networks through mining from vehicle trajectories. IEEE Transactions on Intelligent Transportation Systems. 2018;20:583–593. DOI: 10.1109/TITS.2018.2817282.

Xu M, et al. Networkwide traffic signal control based on the discovery of critical nodes and deep reinforcement learning. Journal of Intelligent Transportation Systems. 2020;24:1–10. DOI: 10.1080/15472450.2018.1527694.

Zhang W, et al. Distributed signal control of arterial corridors using multi-agent deep reinforcement learning. IEEE Transactions on Intelligent Transportation Systems. 2023;24(1):178-190. DOI: 10.1109/TITS.2022.3216203.

Zeng J, et al. Halight: Hierarchical deep reinforcement learning for cooperative arterial traffic signal control with cycle strategy. 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC). 2022:479–485. DOI: 10.1109/ITSC55140.2022.9921819.

Fang Z, et al. MonitorLight: Reinforcement learning-based traffic signal control using mixed pressure monitoring. Proceedings of the 31st ACM International Conference on Information & Knowledge Management. 2022:478–487. DOI: 10.1145/3511808.3557400.

Yang H, et al. Deep reinforcement learning based strategy for optimizing phase splits in traffic signal control. 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC). 2022:2329–2334. DOI: 10.1109/ITSC55140.2022.9922531.

Ibrokhimov B, Kim YJ, Kang S. Biased pressure: Cyclic reinforcement learning model for intelligent traffic signal control. Sensors. 2022;22:2818. DOI: 10.3390/s22072818.

Barman S, Levin MW. Performance evaluation of modified cyclic max-pressure controlled intersections in realistic corridors. Transportation Research Record. 2022;2676:110–128. DOI: 10.1177/03611981211072807.

Huang X, et al. Traffic node importance evaluation based on clustering in represented transportation networks. IEEE Transactions on Intelligent Transportation Systems. 2022;23:16622–16631. DOI: 10.1109/TITS.2022.3163756.

Liu J, Li X, Dong J. A survey on network node ranking algorithms: Representative methods, extensions, and applications. Science China Technological Sciences. 2021;64:451–461. DOI: 10.1007/s11431-020-1683-2.

Lowe R, et al. Multi-agent actor-critic for mixed cooperative-competitive environments. Advances in neural information processing systems. 2017;30.

Monga R, Mehta D. Sumo (Simulation of Urban Mobility) and OSM (Open Street Map) implementation. 2022 11th International Conference on System Modeling & Advancement in Research Trends (SMART). IEEE. 2022:534–538. DOI: 10.1109/SMART55829.2022.10046720.

Ferreira M, et al. Self-organized traffic control. Proceedings of the Seventh ACM International Workshop on VehiculAr InterNETworking. 2010:85–90. DOI: 10.1145/1860058.1860077.

Varaiya P. Max pressure control of a network of signalized intersections. Transportation Research Part C: Emerging Technologies. 2013;36:177–195. DOI: 10.1016/j.trc.2013.08.014.

Liu P, et al. Traffic signal timing optimization based on intersection importance in vehicle-road collaboration. Machine Learning for Cyber Security. 2023;14541. DOI: 10.1007/978-981-97-2458-1_6.

Published

06-02-2025

How to Cite

WEI, L., ZHANG, X., FAN, L., GAO, L., & YANG, J. (2025). IALight: Importance-Aware Multi-Agent Reinforcement Learning for Arterial Traffic Cooperative Control. Promet - Traffic&Transportation, 37(1), 151–169. https://doi.org/10.7307/ptt.v37i1.650

Issue

Section

Articles