Computer Systems & Applications, 2024, 33(4): 82-92
Transformer Traffic Flow Prediction Model Integrating Multiple Spatiotemporal Self-attention Mechanisms
(1.College of Computer and Cyber Security, Fujian Normal University, Fuzhou 350117, China;2.Digital Fujian Institute of Big Data Security Technology, Fujian Normal University, Fuzhou 350117, China;3.Fujian Key Laboratory of Automotive Electronics and Electric Drive, Fujian University of Technology, Fuzhou 350118, China)
Received:October 08, 2023    Revised:November 09, 2023
Abstract: Traffic flow prediction is an important method for achieving urban traffic optimization in intelligent transportation systems, and accurate traffic flow prediction is of great significance for traffic management and guidance. However, owing to its strong spatiotemporal dependencies, traffic flow exhibits complex nonlinear characteristics. Existing methods mainly consider the local spatiotemporal features of nodes in the road network, overlooking the long-term spatiotemporal characteristics of all nodes in the network. To fully exploit the complex spatiotemporal dependencies in traffic flow data, this study proposes a Transformer-based traffic flow prediction model called the multi-spatiotemporal self-attention Transformer (MSTTF). The model embeds temporal and spatial information through position encoding in the embedding layer and, in the attention layer, integrates multiple self-attention mechanisms, including adjacent spatial self-attention, similar spatial self-attention, temporal self-attention, and spatiotemporal self-attention, to uncover latent spatiotemporal dependencies in the data. Predictions are then made in the output layer. The results show that the MSTTF model reduces MAE by 10.36% on average compared with the traditional spatiotemporal Transformer. In particular, compared with the state-of-the-art PDFormer model, it achieves an average MAE reduction of 1.24%, indicating superior predictive performance.
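The abstract names two of the model's spatial branches: an adjacent spatial self-attention restricted to graph neighbours, and a similar spatial self-attention over all node pairs. The paper's actual implementation is not given here, so the sketch below is purely illustrative: it shows how masked vs. unmasked scaled dot-product self-attention can realize the two branches, with a simple average standing in for whatever fusion the model actually uses. All function names, the toy adjacency matrix, and the fusion weights are assumptions, not the authors' code.

```python
import numpy as np

def self_attention(x, mask=None):
    """Scaled dot-product self-attention.

    x: (num_nodes, d) feature matrix; mask: optional boolean
    (num_nodes, num_nodes) array, True where attention is allowed.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)              # pairwise similarity
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # block disallowed pairs
    # Numerically stable row-wise softmax.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

# Toy setup: 4 road-network nodes with feature dimension 8.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))

# Adjacent-spatial branch: attention restricted to graph neighbours.
adjacency = np.array([[1, 1, 0, 0],
                      [1, 1, 1, 0],
                      [0, 1, 1, 1],
                      [0, 0, 1, 1]], dtype=bool)
adjacent_out = self_attention(x, mask=adjacency)

# Similar-spatial branch: unrestricted attention lets every node attend
# to every other node, capturing long-range similarity between nodes.
similar_out = self_attention(x)

# Fuse the branches (here: a plain average) before the output layer.
fused = 0.5 * (adjacent_out + similar_out)
```

The temporal and spatiotemporal branches described in the abstract would follow the same pattern, with attention computed along the time axis or over flattened (node, time) pairs.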
Fund project: External Cooperation Project of the Fujian Provincial Department of Science and Technology (2020I0014)
Citation:
CAO Wei, WANG Xing, ZOU Fu-Min, JIN Biao, WANG Xiao-Jun. Transformer Traffic Flow Prediction Model Integrating Multiple Spatiotemporal Self-attention Mechanisms. Computer Systems & Applications, 2024, 33(4): 82-92