Measurement and Impact Factors of Speed of Reviews and Integration in Continuous Software Engineering


Abstract

Continuous integration and continuous software deployment depend on a mix of automated and manual activities. The automated build and test processes are often intertwined with manual reviews and bug-fixing activities. In this paper, we set out to study how these manual and automated activities influence the speed of reviews and integration. We conduct a case study of two companies developing embedded software, measure the time required for reviewing and integrating software code (which we call speed), and conduct a workshop to identify factors that explain the quantitative results. Our results show that measuring speed is a good proxy for calendar time and triggers improvements more effectively than measuring velocity. We also find that the distribution of code repositories, frequent reminders, and team proximity decrease the time needed to deploy the software. Finally, we observe a difference in the structure of code repositories between the fast and slow integration cases, which contributes to the debate on the pros and cons of different repository structures in modern companies.
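To illustrate the kind of measurement the abstract refers to, the sketch below computes speed as the calendar time elapsed between submitting a change for review and integrating it into the main branch. This is a minimal, hypothetical example, not the authors' tooling: the function name, timestamp format, and sample data are assumptions made for illustration only.

```python
from datetime import datetime

# Hypothetical sketch: "speed" as calendar time from review submission
# to integration of a change (lower values mean faster flow).
def review_and_integration_speed(submitted: str, integrated: str) -> float:
    """Return elapsed calendar time in hours between submission and integration."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(integrated, fmt) - datetime.strptime(submitted, fmt)
    return delta.total_seconds() / 3600.0

if __name__ == "__main__":
    # Example timestamps are invented; in practice they would come from the
    # review system (e.g. when a change was pushed and when it was merged).
    changes = [
        ("2019-03-04 09:15", "2019-03-06 14:30"),
        ("2019-03-05 11:00", "2019-03-05 16:45"),
    ]
    speeds = [review_and_integration_speed(s, i) for s, i in changes]
    print(f"Mean speed: {sum(speeds) / len(speeds):.1f} hours")
```

Because the measure is anchored in calendar time rather than effort estimates, it can be aggregated per change, per team, or per repository, which is what makes it usable as a trigger for process improvements.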


