
Cost Effectiveness of Software Defect Prediction in an Industrial Project



[1] Arisholm, E., Briand, L.C., Johannessen, E.B.: A Systematic and Comprehensive Investigation of Methods to Build and Evaluate Fault Prediction Models. The Journal of Systems and Software 83(1), 2–17 (2010), DOI: 10.1016/j.jss.2009.06.055

[2] Atlassian: JIRA Homepage (2016), https://www.atlassian.com/software/jira/, accessed: 2016.01.06

[3] Bell, T.E., Thayer, T.A.: Software Requirements: Are They Really a Problem? In: Proceedings of the 2nd International Conference on Software Engineering. pp. 61–68. IEEE Computer Society Press (1976), DOI: 10.1109/TSE.1976.233529

[4] Boehm, B.W.: Software Engineering. IEEE Transactions on Computers 25(12), 1226–1241 (1976), DOI: 10.1109/TC.1976.1674590

[5] Boissier, G., Cassell, K.: Eclipse Metrics 2 Homepage (2016), http://metrics2.sourceforge.net/, accessed: 2016.01.06

[6] Endres, A., Rombach, D.: A Handbook of Software and Systems Engineering. Addison-Wesley (2003)

[7] Hall, T., Beecham, S., Bowes, D., Gray, D., Counsell, S.: A Systematic Literature Review on Fault Prediction Performance in Software Engineering. IEEE Transactions on Software Engineering 38(6), 1276–1304 (2012), DOI: 10.1109/TSE.2011.103

[8] Hryszko, J., Madeyski, L.: Bottlenecks in Software Defect Prediction Implementation in Industrial Projects. Foundations of Computing and Decision Sciences 40(1), 17–33 (2015), DOI: 10.1515/fcds-2015-0002

[9] Hryszko, J., Madeyski, L.: Assessment of the Software Defect Prediction Cost Effectiveness in an Industrial Project. In: Software Engineering: Challenges and Solutions, Advances in Intelligent Systems and Computing, vol. 504, pp. 77–90. Springer (2017), DOI: 10.1007/978-3-319-43606-7_6

[10] Jureczko, M., Madeyski, L.: Towards Identifying Software Project Clusters with Regard to Defect Prediction. In: Proceedings of the 6th International Conference on Predictive Models in Software Engineering. pp. 9:1–9:10. PROMISE ’10, ACM, New York, USA (2010), DOI: 10.1145/1868328.1868342

[11] Jureczko, M., Madeyski, L.: A Review of Process Metrics in Defect Prediction Studies. Metody Informatyki Stosowanej 30(5), 133–145 (2011), http://madeyski.e-informatyka.pl/download/Madeyski11.pdf

[12] Jureczko, M., Madeyski, L.: Cross-Project Defect Prediction with Respect to Code Ownership Model: An Empirical Study. e-Informatica Software Engineering Journal 9(1), 21–35 (2015), DOI: 10.5277/e-Inf150102

[13] Khoshgoftaar, T.M., Allen, E.B., Hudepohl, J.P., Aud, S.J.: Application of Neural Networks to Software Quality Modelling of a Very Large Telecommunications System. IEEE Transactions on Neural Networks 8(4), 902–909 (1997), DOI: 10.1109/72.595888

[14] Khoshgoftaar, T.M., Pandya, A.S., Lanning, D.L.: Application of Neural Networks for Predicting Faults. Annals of Software Engineering 1(1), 141–154 (1995), DOI: 10.1007/BF02249049

[15] Khoshgoftaar, T.M., Seliya, N.: Comparative Assessment of Software Quality Classification Techniques: An Empirical Case Study. Empirical Software Engineering 9(3), 229–257 (2004), DOI: 10.1023/B:EMSE.0000027781.18360.9b

[16] Khoshgoftaar, T.M., Seliya, N.: Assessment of a New Three-Group Software Quality Classification Technique: An Empirical Case Study. Empirical Software Engineering 10(2), 183–218 (2005), DOI: 10.1007/s10664-004-6191-x

[17] Kläs, M., Nakao, H., Elberzhager, F., Münch, J.: Predicting Defect Content and Quality Assurance Effectiveness by Combining Expert Judgment and Defect Data – A Case Study. In: Proceedings of the 19th International Symposium on Software Reliability Engineering. pp. 17–26 (2008), DOI: 10.1109/ISSRE.2008.43

[18] KNIME.COM AG: KNIME Framework Documentation (2016), https://tech.knime.org/documentation/, accessed: 2016.11.06

[19] Li, P.L., Herbsleb, J., Shaw, M., Robinson, B.: Experiences and Results from Initiating Field Defect Prediction and Product Test Prioritization Efforts at ABB Inc. In: Proceedings of the 28th International Conference on Software Engineering. pp. 413–422 (2006), DOI: 10.1145/1134285.1134343

[20] Madeyski, L., Jureczko, M.: Which Process Metrics Can Significantly Improve Defect Prediction Models? An Empirical Study. Software Quality Journal 23(3), 393–422 (2015), DOI: 10.1007/s11219-014-9241-7

[21] Madeyski, L., Majchrzak, M.: Software Measurement and Defect Prediction with DePress Extensible Framework. Foundations of Computing and Decision Sciences 39(4), 249–270 (2014), DOI: 10.2478/fcds-2014-0014

[22] Madeyski, L., Majchrzak, M.: ImpressiveCode DePress (Defect Prediction for software systems) Extensible Framework (2016), https://github.com/ImpressiveCode/ic-depress

[23] Menzies, T., Jalali, O., Hihn, J., Baker, D., Lum, K.: Stable Rankings for Different Effort Models. Automated Software Engineering 17(4), 409–437 (2010), DOI: 10.1007/s10515-010-0070-z

[24] Monden, A., Shinoda, S., Shirai, K., Yoshida, J., Barker, M., Matsumoto, K.: Assessing the Cost Effectiveness of Fault Prediction in Acceptance Testing. IEEE Transactions on Software Engineering 39(10), 1345–1357 (2013), DOI: 10.1109/TSE.2013.21

[25] Moser, R., Pedrycz, W., Succi, G.: A Comparative Analysis of the Efficiency of Change Metrics and Static Code Attributes for Defect Prediction. In: Proceedings of the 30th ACM/IEEE International Conference on Software Engineering (ICSE ’08). pp. 181–190 (2008), DOI: 10.1145/1368088.1368114

[26] Müller, M.M., Padberg, F.: About the Return on Investment of Test-Driven Development. In: International Workshop on Economics-Driven Software Engineering Research (EDSER-5). pp. 26–31 (2003)

[27] Munson, J.C., Khoshgoftaar, T.M.: The Detection of Fault-Prone Programs. IEEE Transactions on Software Engineering 18(5), 423–433 (1992), DOI: 10.1109/32.135775

[28] Oracle Corporation: Java EE Homepage (2016), http://www.oracle.com/technetwork/java/javaee/overview/index.html, accessed: 2016.01.06

[29] Ostrand, T.J., Weyuker, E.J.: The Distribution of Faults in a Large Industrial Software System. SIGSOFT Software Engineering Notes 27, 55–64 (2002), DOI: 10.1145/566171.566181

[30] Ostrand, T.J., Weyuker, E.J., Bell, R.M.: Predicting the Location and Number of Faults in Large Software Systems. IEEE Transactions on Software Engineering 31(4), 340–355 (2005), DOI: 10.1109/TSE.2005.49

[31] Ostrand, T.J., Weyuker, E.J., Bell, R.M.: Programmer-Based Fault Prediction. In: Proceedings of the Sixth International Conference on Predictive Models in Software Engineering. pp. 1–10 (2010), DOI: 10.1145/1868328.1868357

[32] Pendharkar, P.C.: Exhaustive and Heuristic Search Approaches for Learning a Software Defect Prediction Model. Engineering Applications of Artificial Intelligence 23, 34–40 (2010), DOI: 10.1016/j.engappai.2009.10.001

[33] Pressman, R.: Software Engineering: A Practitioner’s Approach. McGraw-Hill (2010)

[34] Rahman, F., Khatri, S., Barr, E.T., Devanbu, P.: Comparing Static Bug Finders and Statistical Prediction. In: Proceedings of the ACM/IEEE International Conference on Software Engineering (ICSE ’14). ACM (2014), DOI: 10.1145/2568225.2568269

[35] Rijsbergen, C.J.V.: Information Retrieval. Butterworth-Heinemann, Newton (1979)

[36] Rizwan, M., Iqbal, M.: Application of 80/20 Rule in Software Engineering Waterfall Model. In: Proceedings of the International Conference on Information and Communication Technologies ’09 (2009)

[37] Sauer, F.: Eclipse Metrics Homepage (2016), http://metrics.sourceforge.net/, accessed: 2016.01.06

[38] Selby, R.W., Porter, A.: Learning from Examples: Generation and Evaluation of Decision Trees for Software Resource Analysis. IEEE Transactions on Software Engineering 14(12), 1743–1756 (1988), DOI: 10.1109/32.9061

[39] Slaughter, S.A., Harter, D.E., Krishnan, M.S.: Evaluating the Cost of Software Quality. Communications of the ACM 41(8), 67–73 (1998), DOI: 10.1145/280324.280335

[40] Source of Information on Salaries in Poland (2015), http://wynagrodzenia.pl/, accessed: 2015.02.28

[41] Succi, G., Pedrycz, W., Stefanovic, M., Miller, J.: Practical Assessment of the Models for Identification of Defect-Prone Classes in Object-Oriented Commercial Systems Using Design Metrics. Journal of Systems and Software 65(1), 1–12 (2003), DOI: 10.1016/S0164-1212(02)00024-9

[42] The Apache Foundation: Apache Maven Homepage (2016), https://maven.apache.org/, accessed: 2016.01.06

[43] The Apache Foundation: Apache Subversion Homepage (2016), https://subversion.apache.org/, accessed: 2016.01.06

[44] The Eclipse Foundation: Eclipse IDE Homepage (2016), https://eclipse.org/, accessed: 2016.01.06

[45] Tosun, A., Bener, A., Turhan, B., Menzies, T.: Practical Considerations in Deploying Statistical Methods for Defect Prediction: A Case Study within the Turkish Telecommunications Industry. Information and Software Technology 52(11), 1242–1257 (2010), DOI: 10.1016/j.infsof.2010.06.006

[46] Tosun, A., Turhan, B., Bener, A.: Practical Considerations in Deploying AI for Defect Prediction: A Case Study within the Turkish Telecommunication Industry. In: Proceedings of the Fifth International Conference on Predictor Models in Software Engineering. p. 11 (2009), DOI: 10.1145/1540438.1540453

[47] Turhan, B., Kocak, G., Bener, A.: Data Mining Source Code for Locating Software Bugs: A Case Study in Telecommunication Industry. Expert Systems with Applications 36(6), 9986–9990 (2009), DOI: 10.1016/j.eswa.2008.12.028

[48] Turhan, B., Menzies, T., Bener, A., Stefano, J.D.: On the Relative Value of Cross-Company and Within-Company Data for Defect Prediction. Empirical Software Engineering 14(5), 540–578 (2009), DOI: 10.1007/s10664-008-9103-7

[49] Witten, I.H., Frank, E., Hall, M.A.: Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann (2005)

[50] Wong, W.E., Horgan, J., Syring, M., Zage, W., Zage, D.: Applying Design Metrics to Predict Fault-Proneness: A Case Study on a Large-Scale Software System. Software: Practice and Experience 30(14), 1587–1608 (2000)
