Computer Systems & Applications, 2020, Vol. 29, Issue (5): 270-274

Code Change Impact Metric Model for Regression Test
ZHOU Hai-Xu
Beijing Engineering Research Center of Civil Aviation Big Data, China Civil Aviation Information Network Inc., Beijing 101318, China
Foundation item: National Science and Technology Major Program (2014ZX010450101); 2014 Cloud Computing Project of the National Development and Reform Commission ([2014]1799)
Abstract: Code change introduces risk into software quality and is closely tied to regression test case prioritization. Evaluating the impact of code change on regression test case prioritization is therefore an important topic, and plays a significant role in software quality assurance. Based on an existing test case prioritization evaluation model, this study analyzes the relationship between regression test cases and code change from the perspectives of test coverage and coupling, and presents a new code change impact metric model that combines a dominant and a recessive impact level. Experiments indicate that the quantitative results this model computes for the impact of code change on regression test case prioritization are comprehensive and objective, and can provide effective support for regression test case prioritization evaluation.
Key words: regression test     code change impact     metric model     test coverage     dominant impact level     recessive impact level

1 Relationship Between Code Change and Regression Test Case Prioritization

(1) The more strongly the requirement feature corresponding to a test case is affected by the code change, the higher that case's priority should be;

(2) The more important the requirement feature corresponding to a test case, the higher that case's priority should be;

(3) The lower the cost of executing a test case (including the costs of running and maintaining it), the higher that case's priority should be.

 ${c_i} = \dfrac{{N \cdot R \cdot {v_i} \cdot {a_i}}}{{\left( {{e_i} + {m_i}} \right) \cdot \displaystyle\sum\limits_{j = 1}^N {\left( {{e_j} + {m_j}} \right)} \cdot \displaystyle\sum\limits_{j = 1}^N {\dfrac{{{v_j} \cdot {a_j}}}{{{e_j} + {m_j}}}} }}$ (1)
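Eq. (1) can be transcribed directly into code. The sketch below assumes, from context, that $v_i$, $a_i$, $e_i$, $m_i$ hold a case's importance, change impact, execution cost and maintenance cost, that $N$ is the number of cases, and that $R$ is a constant scale factor of the model; these symbol readings and the function name are assumptions, not definitions from this paper.

```python
def case_priority(i, v, a, e, m, R=1.0):
    """Priority c_i from Eq. (1). v/a/e/m are equal-length sequences of
    per-case importance, change impact, execution cost and maintenance
    cost (assumed meanings); R is treated as a constant scale factor."""
    N = len(v)
    cost_sum = sum(e[j] + m[j] for j in range(N))                    # sum of (e_j + m_j)
    ratio_sum = sum(v[j] * a[j] / (e[j] + m[j]) for j in range(N))   # sum of v_j*a_j/(e_j+m_j)
    return (N * R * v[i] * a[i]) / ((e[i] + m[i]) * cost_sum * ratio_sum)
```

Because both sums range over all cases, they act as normalization terms shared by every $c_i$; only the factors indexed by $i$ differentiate the cases.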

2 Code Change Impact Metric Model for Regression Test

2.1 Design of the Metric Model Based on Test Coverage

 Figure 1 Relationship among code change, requirement features, and test cases

2.2 Dominant Impact Level

(1) Instrument the code under test, i.e., insert probe code that collects runtime information, while keeping the semantics and functionality fully equivalent to the original code;

(2) Compile and link the instrumented code to obtain an executable program;

(3) Execute the test cases and process the information emitted by the probes to obtain each case's coverage of the code, i.e., the association between test cases and code.
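The three steps above can be approximated in pure Python without rewriting any source: the interpreter's tracing hook plays the role of the inserted probes. This is only a sketch of the idea; production coverage tools instrument at the compiler or bytecode level instead.

```python
import sys

def run_with_coverage(test_case, *args):
    """Execute one test case and return the set of (file, line) pairs it
    exercised -- the case-to-code association of step (3). sys.settrace
    stands in for the inserted probes and does not change the semantics
    of the code under test."""
    covered = set()

    def probe(frame, event, arg):
        if event == "line":
            covered.add((frame.f_code.co_filename, frame.f_lineno))
        return probe  # keep tracing inside this frame

    sys.settrace(probe)
    try:
        test_case(*args)
    finally:
        sys.settrace(None)  # always remove the probe
    return covered
```

Running each regression case through such a wrapper yields, per case $S_i$, the covered-line set $L_i$ used in the next subsection.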

 $J\left( {{L_i},{L_D}} \right) = \frac{{\left| {{L_i} \cap {L_D}} \right|}}{{\left| {{L_i} \cup {L_D}} \right|}}$ (2)
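Eq. (2) is the Jaccard coefficient between $L_i$, the set of lines covered by test case $S_i$, and $L_D$, the set of changed lines. With line sets in hand it is a one-liner (the function name is illustrative):

```python
def dominant_impact(covered: set, changed: set) -> float:
    """J(L_i, L_D): fraction of lines in the union that lie in both the
    case's coverage set and the change set."""
    union = covered | changed
    return len(covered & changed) / len(union) if union else 0.0
```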

2.3 Recessive Impact Level

 $COF({S_i}) = \dfrac{{\displaystyle\sum\limits_{m = 1}^{\left| {{C_{{S_i}}}} \right|} {\displaystyle\sum\limits_{n = 1}^{\left| C \right|} {isclient\left( {c_m^{{S_i}},{c_n}} \right)} } }}{{\left| {{C_{{S_i}}}} \right| \cdot \left( {\left| C \right| - 1} \right) - 2\displaystyle\sum\limits_{m = 1}^{\left| {{C_{{S_i}}}} \right|} {\left| {Descendents\left( {c_m^{{S_i}}} \right)} \right|} }}$ (3)

$COF({S_i})$ captures the part of the code's global coupling that is related to test case ${S_i}$, and this indicator can, to some extent, represent the indirect impact of code changes on the test case. However, it is only weakly related to the degree of code change: as long as there is no class-level change, such as adding or deleting a class, or altering derivation relationships or inter-class references, $COF({S_i})$ remains constant. Intuitively, though, the more code is modified, the more likely it is to indirectly affect the execution results of test cases and the related requirement features. We therefore need to supplement the model with another dimension that measures the degree of code change.
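Eq. (3) restricts a MOOD-style coupling factor to the classes $C_{S_i}$ exercised by test case $S_i$. A minimal sketch, under the assumption that the class model is given as plain dictionaries; the data shapes and names are illustrative, not defined by the paper:

```python
def cof_for_case(case_classes, all_classes, references, descendants):
    """COF(S_i) from Eq. (3).
    case_classes  : classes covered by the test case (C_{S_i})
    all_classes   : every class in the system (C)
    references[c] : classes that c uses as a client (the isclient relation)
    descendants[c]: classes derived from c -- inheritance pairs are
                    excluded from the count of possible couplings."""
    numerator = sum(1 for c in case_classes
                      for d in all_classes
                      if d != c and d in references.get(c, set()))
    possible = (len(case_classes) * (len(all_classes) - 1)
                - 2 * sum(len(descendants.get(c, set())) for c in case_classes))
    return numerator / possible if possible > 0 else 0.0
```

As the text notes, nothing here depends on how many lines were edited: unless classes, derivations, or references change, the result is a constant for a given test case.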

 $J\left( {L,{L_D}} \right) = \frac{{\left| {L \cap {L_D}} \right|}}{{\left| {L \cup {L_D}} \right|}} = \frac{{\left| {{L_D}} \right|}}{{\left| L \right|}}$ (4)

2.4 Composition and Properties of the Metric Model

 ${a_i}= \lambda \cdot J\left( {{L_i},{L_D}} \right) + COF({S_i}) + J\left( {L,{L_D}} \right)= \lambda \cdot \dfrac{{\left| {{L_i} \cap {L_D}} \right|}}{{\left| {{L_i} \cup {L_D}} \right|}}+\dfrac{{\displaystyle\sum\limits_{m = 1}^{\left| {{C_{{S_i}}}} \right|} {\displaystyle\sum\limits_{n = 1}^{\left| C \right|} {isclient\left( {c_m^{{S_i}},{c_n}} \right)} } }}{{\left| {{C_{{S_i}}}} \right| \cdot \left( {\left| C \right| - 1} \right) - 2\displaystyle\sum\limits_{m = 1}^{\left| {{C_{{S_i}}}} \right|} {\left| {Descendents\left( {c_m^{{S_i}}} \right)} \right|} }}+ \dfrac{{\left| {{L_D}} \right|}}{{\left| L \right|}}$ (5)

 Figure 2 Composition of the metric model

(1) The code covered by the regression test case has changed. In this situation, the dominant impact level in the model is greater than 0 and dominates the metric result;

(2) The code covered by the regression test case has not changed. In this situation, the dominant impact level in the model equals 0, and the metric result is determined by the recessive impact level, i.e., the degree of coupling between the code the case covers and the changed code.
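Putting the three terms together, Eq. (5) and the two situations above can be sketched as a single function. The helper logic is inlined so the example is self-contained; all names and data shapes are illustrative assumptions.

```python
def change_impact(lam, covered, changed, all_lines,
                  case_classes, all_classes, references, descendants):
    """a_i = lam*J(L_i, L_D) + COF(S_i) + J(L, L_D)   (Eq. (5))."""
    # Dominant term: Jaccard of the case's covered lines vs. changed lines.
    union = covered | changed
    dominant = len(covered & changed) / len(union) if union else 0.0
    # Recessive term 1: coupling factor restricted to the case (Eq. (3)).
    num = sum(1 for c in case_classes for d in all_classes
              if d != c and d in references.get(c, set()))
    possible = (len(case_classes) * (len(all_classes) - 1)
                - 2 * sum(len(descendants.get(c, set())) for c in case_classes))
    cof = num / possible if possible > 0 else 0.0
    # Recessive term 2: change degree |L_D| / |L|, since L_D is a subset of L.
    change_ratio = len(changed) / len(all_lines) if all_lines else 0.0
    return lam * dominant + cof + change_ratio
```

When the case's coverage is disjoint from the change set, `dominant` is 0 and the result reduces to the two recessive terms, matching situation (2).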

3 Experimental Analysis

4 Conclusions and Future Work
