Comparison of Prototype Selection Algorithms Used in Construction of Neural Networks Learned by SVD


