Time Series Anomaly Detection With External Autoencoder Based on Graph Deviation Network
    Abstract:

    With advances in the Internet and connection technology, the data generated by sensors are becoming increasingly complex. Deep learning methods have made great progress in anomaly detection for high-dimensional data. The graph deviation network (GDN) learns the relationships between sensor nodes to predict anomalies and has achieved promising results. However, since the GDN model fails to capture the time dependence and instability of abnormal data, an autoencoder with external attention based on GDN (AEEA-GDN) is proposed to extract features in depth. In addition, an adaptive learning mechanism is introduced during model training to help the network better adapt to changes in abnormal data. Experimental results on three real-world sensor datasets show that the AEEA-GDN model detects anomalies more accurately than baseline methods and has better overall performance.
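    The external attention used in AEEA-GDN (in the sense of Guo et al.'s "external attention") replaces self-attention's pairwise token interactions with two small learnable external memory units shared across the dataset. The sketch below is a minimal plain-Python illustration of that idea, not the paper's exact architecture: the memory matrices `Mk` and `Mv`, all shapes, and the simplified double normalization are assumptions for clarity.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def external_attention(F, Mk, Mv):
    """Sketch of external attention: A = norm(F @ Mk^T), out = A @ Mv.

    F:  n x d input features (e.g. sensor embeddings).
    Mk: s x d external key memory (learnable in a real model).
    Mv: s x d external value memory (learnable in a real model).
    Returns an n x d output. Shapes are illustrative assumptions.
    """
    n, d, s = len(F), len(F[0]), len(Mk)
    # Similarity of each input row to each external memory slot.
    scores = [[sum(F[i][k] * Mk[j][k] for k in range(d)) for j in range(s)]
              for i in range(n)]
    # Simplified double normalization: softmax over memory slots,
    # then l1-normalize each memory column across inputs.
    A = [softmax(row) for row in scores]
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(s)]
    A = [[A[i][j] / col_sums[j] for j in range(s)] for i in range(n)]
    # Aggregate the value memory: out = A @ Mv, shape n x d.
    return [[sum(A[i][j] * Mv[j][k] for j in range(s)) for k in range(d)]
            for i in range(n)]
```

    Because the memories `Mk` and `Mv` are fixed-size regardless of the number of inputs, the cost is linear in n rather than quadratic as in self-attention, which is why external attention suits long sensor sequences.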

Get Citation

Zhang FR, Gu L. Time series anomaly detection with external autoencoder based on graph deviation network. Computer Systems & Applications (计算机系统应用), 2024, 33(3): 24-33.
History
  • Received: September 02, 2023
  • Revised: October 08, 2023
  • Online: December 25, 2023
Copyright: Institute of Software, Chinese Academy of Sciences