Interpretable decision-tree induction in a big data parallel framework

Abstract

When running data-mining algorithms on big data platforms, a parallel, distributed framework such as MAPREDUCE may be used. However, in a parallel framework each individual model fits only the data allocated to its own computing node, without necessarily fitting the entire dataset. To induce a single consistent model, ensemble algorithms, such as majority voting, aggregate the local models rather than analyzing the entire dataset directly. Our goal is to develop an efficient algorithm for choosing one representative model from multiple, locally induced decision-tree models. The proposed SySM (syntactic similarity method) algorithm computes the pairwise similarity between the models produced by the parallel nodes and chooses the model most similar to all the others as the best representative of the entire dataset. In 18.75% of 48 experiments on four big datasets, SySM accuracy is significantly higher than that of the ensemble; in 43.75% of the experiments it is significantly lower; in one case the results are identical; and in the remaining 35.42% of the cases the difference is not statistically significant. Compared with ensemble methods, the representative tree models selected by the proposed methodology are more compact and interpretable, their induction consumes less memory, and, as confirmed by the empirical results, they allow faster classification of new records.
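The selection step described above can be pictured as a medoid search over the locally induced trees: the chosen model is the one with the smallest total syntactic distance to all the others. Below is a minimal sketch of that idea, assuming the pairwise distances between trees are supplied by some routine (e.g., a tree-edit distance); the names `trees`, `tree_distance`, and `select_representative` are illustrative assumptions, not the paper's exact procedure.

```python
from itertools import combinations

def select_representative(trees, tree_distance):
    """Pick the tree with the smallest total syntactic distance
    to all other locally induced trees (a medoid search).

    trees         -- list of decision-tree models, one per computing node
    tree_distance -- function(tree_a, tree_b) -> non-negative float,
                     e.g., a tree-edit distance between the two structures
    """
    n = len(trees)
    totals = [0.0] * n
    # Distances are symmetric, so each unordered pair is evaluated once.
    for i, j in combinations(range(n), 2):
        d = tree_distance(trees[i], trees[j])
        totals[i] += d
        totals[j] += d
    # The most "central" tree is the one closest, in total, to the rest.
    best = min(range(n), key=lambda i: totals[i])
    return trees[best]
```

The design rationale follows the abstract: returning a single central tree keeps the output compact and interpretable, whereas a voting ensemble must retain, and evaluate, every local tree at classification time.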
