Time Series Anomaly Detection With External Autoencoder Based on Graph Deviation Network
Author:
Funding: National Natural Science Foundation of China (61972210)

    Abstract:

    With the advancement of the Internet and connectivity technology, the data generated by sensors are becoming increasingly complex. Deep learning methods have made good progress in anomaly detection for high-dimensional data; the graph deviation network (GDN) learns the relationships between sensor nodes to predict anomalies and has achieved promising results. Since the GDN model does not handle temporal dependence or the instability of anomalous data, an external attention autoencoder based on GDN (AEEA-GDN) is proposed to extract deeper representations. In addition, an adaptive learning mechanism is introduced during training to help the network better adapt to changes in anomalous data. Experimental results on three real-world sensor datasets show that AEEA-GDN detects anomalies more accurately than baseline methods and achieves better overall performance.
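The external attention named in the model (AEEA-GDN) replaces self-attention with two small learnable external memory matrices and a double normalization, following Guo et al. A minimal NumPy sketch of that computation; the dimensions, memory size `S`, and random weights below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def external_attention(x, m_k, m_v):
    """External attention: score inputs against a learnable key memory
    m_k, double-normalize the attention map, then read from a value
    memory m_v.  x: (n, d) features; m_k, m_v: (S, d) memories."""
    attn = x @ m_k.T                                   # (n, S) similarities
    # Double normalization: softmax over the token axis, then
    # l1-normalization over the memory-slot axis.
    attn = np.exp(attn - attn.max(axis=0, keepdims=True))
    attn /= attn.sum(axis=0, keepdims=True)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ m_v                                  # (n, d) output

rng = np.random.default_rng(0)
n, d, S = 6, 8, 4        # 6 sensors, 8 features, 4 memory slots (illustrative)
x = rng.standard_normal((n, d))
m_k = rng.standard_normal((S, d))
m_v = rng.standard_normal((S, d))
out = external_attention(x, m_k, m_v)
print(out.shape)
```

Because the memories are shared across all samples, external attention costs linear time in the number of inputs, which is one reason it suits long multivariate sensor sequences.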

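For context, the underlying GDN flags anomalies by comparing each sensor's forecast against its observed value and robustly normalizing the error. A hedged sketch of that deviation scoring, assuming the median/IQR normalization described in Deng and Hooi's GDN paper; the data below are synthetic:

```python
import numpy as np

def gdn_anomaly_score(actual, predicted):
    """Deviation scoring in the style of GDN: per-sensor forecast errors
    are normalized with robust statistics (median and inter-quartile
    range), and the score at each tick is the maximum normalized
    deviation across sensors.  actual, predicted: (T, N) arrays."""
    err = np.abs(actual - predicted)              # (T, N) forecast errors
    med = np.median(err, axis=0)                  # per-sensor median
    q1, q3 = np.percentile(err, [25, 75], axis=0)
    iqr = (q3 - q1) + 1e-9                        # robust scale, per sensor
    norm_err = (err - med) / iqr                  # robust z-like score
    return norm_err.max(axis=1)                   # (T,) max over sensors

rng = np.random.default_rng(1)
T, N = 100, 5
actual = rng.standard_normal((T, N))
pred = actual + 0.1 * rng.standard_normal((T, N))  # near-perfect forecasts
actual[50, 2] += 5.0                               # inject one anomaly
score = gdn_anomaly_score(actual, pred)
print(int(np.argmax(score)))  # → 50, the injected anomaly
```

The max over sensors keeps a single deviating sensor visible even when the other channels behave normally, which is the property a point anomaly detector needs.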
Cite this article:

Zhang FR, Gu L. Time series anomaly detection with external autoencoder based on graph deviation network. Computer Systems & Applications, 2024, 33(3): 24–33.

History
  • Received: 2023-09-02
  • Revised: 2023-10-08
  • Published online: 2023-12-25