Abstract: Distributed cluster environments make real-time data computation more complex, and the correctness of streaming big data processing systems is difficult to guarantee. Existing big data benchmarking frameworks can test the performance of streaming big data processing systems, but they suffer from shortcomings such as simplistic application-scenario design and inadequate evaluation metrics. To address this challenge, this study constructs a streaming big data benchmarking framework for stock-trading scenarios: it generates high-frequency stock-trading data with a stream-oriented data generator and evaluates system performance under high-rate scenarios in terms of latency, throughput, GC time, CPU usage, and related metrics. The scalability of the streaming big data system is also verified through horizontal scale-out tests. Apache Spark Streaming is used as the system under test. The experimental results show that, as the input rate and the parallelism of the system increase, performance degradation such as higher latency and longer GC time occurs under high-rate scenarios.
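
The workload outlined above can be illustrated with a minimal sketch of a Spark Streaming job consuming synthetic trade records. This is not the paper's actual benchmark implementation; it assumes a hypothetical generator that writes CSV trade lines (symbol,price,volume,timestamp) to a local socket, and the object name, host, port, and batch interval are illustrative.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Minimal illustrative workload: count trades per stock symbol in each micro-batch.
// Assumption: a data generator sends CSV lines "symbol,price,volume,timestamp" to localhost:9999.
object TradeCountSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("TradeCountSketch")
      .setMaster("local[2]") // assumption: local run; a cluster master URL would be used in real tests
    val ssc = new StreamingContext(conf, Seconds(1)) // 1-second batch interval (assumed)

    val trades = ssc.socketTextStream("localhost", 9999)

    val countsPerSymbol = trades
      .map(_.split(","))
      .filter(_.length >= 4)          // drop malformed records
      .map(fields => (fields(0), 1L)) // key by stock symbol
      .reduceByKey(_ + _)

    countsPerSymbol.print() // per-batch output; processing delay is visible in the Spark streaming UI

    ssc.start()
    ssc.awaitTermination()
  }
}
```

In a benchmark of this kind, latency and throughput would typically be read from Spark's streaming metrics (batch processing time and input rate), while GC time and CPU usage would be collected from executor-level JVM and OS monitoring; varying the generator's emit rate and the job's parallelism reproduces the high-rate conditions described in the abstract.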