Multiclassifier systems (MCSs) are now widely applied to a broad range of machine learning problems across many domains. Over the last two decades a variety of ensemble systems have been developed, but there is still room for improvement. This paper focuses on developing competence and interclass cross-competence measures that can serve as a method of classifier combination. The cross-competence measure allows an ensemble to harness information obtained from incompetent classifiers instead of removing them from the ensemble. The cross-competence measure, initially determined from a validation set (static mode), can also be easily updated using additional feedback on correct/incorrect classifications gathered during the recognition process (dynamic mode). An analysis of the computational and storage complexity of the proposed method is presented. The performance of the MCS with the proposed cross-competence function was experimentally compared against five reference MCSs in the static mode and one reference MCS in the dynamic mode. Results for the static mode show that the proposed technique is comparable with the reference methods in terms of classification accuracy. For the dynamic mode, the developed system achieves the highest classification accuracy, demonstrating the potential of the MCS for practical applications in which feedback information is available.