Accumulative Information Enhancement in the Self-Organizing Maps and Its Application to the Analysis of Mission Statements


[1] R. Linsker, “Self-organization in a perceptual network,” Computer, vol. 21, pp. 105–117, 1988. doi: 10.1109/2.36.

[2] R. Linsker, “How to generate ordered maps by maximizing the mutual information between input and output,” Neural Computation, vol. 1, pp. 402–411, 1989. doi: 10.1162/neco.1989.1.3.402.

[3] R. Linsker, “Local synaptic rules suffice to maximize mutual information in a linear network,” Neural Computation, vol. 4, pp. 691–702, 1992. doi: 10.1162/neco.1992.4.5.691.

[4] R. Linsker, “Improved local learning rule for information maximization and related applications,” Neural Networks, vol. 18, pp. 261–265, 2005. doi: 10.1016/j.neunet.2005.01.002.

[5] Z. Nenadic, “Information discriminant analysis: Feature extraction with an information-theoretic objective,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 8, pp. 1394–1407, 2007.

[6] K. Torkkola, “Feature extraction by non-parametric mutual information maximization,” Journal of Machine Learning Research, vol. 3, pp. 1415–1438, 2003.

[7] J. M. Leiva-Murillo and A. Artes-Rodriguez, “Maximization of mutual information for supervised linear feature extraction,” IEEE Transactions on Neural Networks, vol. 18, no. 5, pp. 1433–1441, 2007.

[8] D. E. Rumelhart and D. Zipser, “Feature discovery by competitive learning,” in Parallel Distributed Processing (D. E. Rumelhart, J. L. McClelland, et al., eds.), vol. 1, pp. 151–193, Cambridge: MIT Press, 1986.

[9] T. Kohonen, Self-Organization and Associative Memory. New York: Springer-Verlag, 1988. doi: 10.1007/978-3-662-00784-6.

[10] T. Kohonen, Self-Organizing Maps. Springer-Verlag, 1995. doi: 10.1007/978-3-642-97610-0.

[11] R. Kamimura and T. Kamimura, “Structural information and linguistic rule extraction,” in Proceedings of ICONIP-2000, pp. 720–726, 2000.

[12] R. Kamimura, T. Kamimura, and O. Uchida, “Flexible feature discovery and structural information control,” Connection Science, vol. 13, no. 4, pp. 323–347, 2001. doi: 10.1080/09540090110108679.

[13] R. Kamimura, “Information-theoretic competitive learning with inverse Euclidean distance output units,” Neural Processing Letters, vol. 18, pp. 163–184, 2003. doi: 10.1023/B:NEPL.0000011136.78760.22.

[14] R. Kamimura, “Teacher-directed learning: information-theoretic competitive learning in supervised multi-layered networks,” Connection Science, vol. 15, pp. 117–140, 2003. doi: 10.1080/09540090310001611136.

[15] R. Kamimura, “Progressive feature extraction by greedy network-growing algorithm,” Complex Systems, vol. 14, no. 2, pp. 127–153, 2003.

[16] R. Kamimura, “Information theoretic competitive learning in self-adaptive multi-layered networks,” Connection Science, vol. 13, no. 4, pp. 323–347, 2003.

[17] R. Kamimura, “Feature discovery by enhancement and relaxation of competitive units,” in Intelligent Data Engineering and Automated Learning (IDEAL 2008), LNCS vol. 5326, pp. 148–155, Springer, 2008. doi: 10.1007/978-3-540-88906-9_19.

[18] R. Kamimura, “Information-theoretic enhancement learning and its application to visualization of self-organizing maps,” Neurocomputing, vol. 73, no. 13-15, pp. 2642–2664, 2010.

[20] R. Kamimura, “Double enhancement learning for explicit internal representations: unifying self-enhancement and information enhancement to incorporate information on input variables,” Applied Intelligence, pp. 1–23, 2011. doi: 10.1007/s10489-011-0300-5.

[21] R. Kamimura, “Selective information enhancement learning for creating interpretable representations in competitive learning,” Neural Networks, vol. 24, no. 4, pp. 387–405, 2011. doi: 10.1016/j.neunet.2010.12.009.

[22] B. Bartkus, M. Glassman, and B. McAfee, “Mission statement quality and financial performance,” European Management Journal, vol. 24, no. 1, pp. 86–94, 2006. doi: 10.1016/j.emj.2005.12.010.

[23] B. R. Bartkus, M. Glassman, and R. B. McAfee, “A comparison of the quality of European, Japanese and US mission statements: A content analysis,” European Management Journal, vol. 22, no. 4, pp. 393–401, 2004. doi: 10.1016/j.emj.2004.06.013.

[24] E. Oda and H. Mitsuhashi, “Experimental study of management principle and company performance by text mining (in Japanese),” Management Philosophy, vol. 7, no. 2, pp. 22–37, 2010.

[25] Ryozo Kamimura and Ryotaro Kamimura, “Company policy analysis by information theoretical neural networks,” in Proceedings of the 40th Fuzzy Workshop, pp. 13–14, 2014.

[26] R. Kamimura, T. Kamimura, and T. R. Shultz, “Information theoretic competitive learning and linguistic rule acquisition,” Transactions of the Japanese Society for Artificial Intelligence, vol. 16, no. 2, pp. 287–298, 2001. doi: 10.1527/tjsai.16.287.

[27] D. E. Rumelhart and D. Zipser, “Feature discovery by competitive learning,” Cognitive Science, vol. 9, pp. 75–112, 1985. doi: 10.1207/s15516709cog0901_5.

[28] M. M. Van Hulle, “Topographic map formation by maximizing unconditional entropy: a plausible strategy for ‘on-line’ unsupervised competitive learning and nonparametric density estimation,” IEEE Transactions on Neural Networks, vol. 7, no. 5, pp. 1299–1305, 1996.

[29] M. M. Van Hulle, “The formation of topographic maps that maximize the average mutual information of the output responses to noiseless input signals,” Neural Computation, vol. 9, no. 3, pp. 595–606, 1997. doi: 10.1162/neco.1997.9.3.595.

[30] M. M. Van Hulle, “Topology-preserving map formation achieved with a purely local unsupervised competitive learning rule,” Neural Networks, vol. 10, no. 3, pp. 431–446, 1997. doi: 10.1016/S0893-6080(96)00107-4.

[31] M. M. Van Hulle, “Faithful representations with topographic maps,” Neural Networks, vol. 12, no. 6, pp. 803–823, 1999. doi: 10.1016/S0893-6080(99)00041-6.

[32] M. M. Van Hulle, “Entropy-based kernel modeling for topographic map formation,” IEEE Transactions on Neural Networks, vol. 15, no. 4, pp. 850–858, 2004. doi: 10.1109/TNN.2004.828763.

[34] S. C. Ahalt, A. K. Krishnamurthy, P. Chen, and D. E. Melton, “Competitive learning algorithms for vector quantization,” Neural Networks, vol. 3, pp. 277–290, 1990. doi: 10.1016/0893-6080(90)90071-R.

[35] L. Xu, “Rival penalized competitive learning for clustering analysis, RBF net, and curve detection,” IEEE Transactions on Neural Networks, vol. 4, no. 4, pp. 636–649, 1993. doi: 10.1109/72.238318.

[36] A. Luk and S. Lien, “Properties of the generalized lotto-type competitive learning,” in Proceedings of the International Conference on Neural Information Processing, pp. 1180–1185, San Mateo, CA: Morgan Kaufmann Publishers, 2000.

[37] Y. J. Zhang and Z. Q. Liu, “Self-splitting competitive learning: a new on-line clustering paradigm,” IEEE Transactions on Neural Networks, vol. 13, no. 2, pp. 369–380, 2002. doi: 10.1109/72.991422.

[38] H. Xiong, M. N. S. Swamy, and M. O. Ahmad, “Competitive splitting for codebook initialization,” IEEE Signal Processing Letters, vol. 11, pp. 474–477, 2004. doi: 10.1109/LSP.2004.824054.

[39] J. C. Yen, J. I. Guo, and H. C. Chen, “A new k-winners-take-all neural network and its array architecture,” IEEE Transactions on Neural Networks, vol. 9, no. 5, pp. 901–912, 1998. doi: 10.1109/72.712163.

[40] S. Ridella, S. Rovetta, and R. Zunino, “K-winner machines for pattern classification,” IEEE Transactions on Neural Networks, vol. 12, no. 2, pp. 371–385, 2001. doi: 10.1109/72.914531.

[41] S. Kurohashi and D. Kawahara, “JUMAN morphological analyzer,” http://nlp.ist.i.kyoto-u.ac.jp/index.php?juman.

[42] E. Merényi, K. Tasdemir, and L. Zhang, “Learning highly structured manifolds: harnessing the power of SOMs,” in Similarity-Based Clustering, pp. 138–168, Springer, 2009. doi: 10.1007/978-3-642-01805-3_8.

[43] K. Tasdemir and E. Merényi, “Exploiting data topology in visualization and clustering of self-organizing maps,” IEEE Transactions on Neural Networks, vol. 20, no. 4, pp. 549–562, 2009. doi: 10.1109/TNN.2008.2005409.

[44] I. Guyon and A. Elisseeff, “An introduction to variable and feature selection,” Journal of Machine Learning Research, vol. 3, pp. 1157–1182, 2003.

[45] A. Rakotomamonjy, “Variable selection using SVM-based criteria,” Journal of Machine Learning Research, vol. 3, pp. 1357–1370, 2003.

[46] S. Perkins, K. Lacker, and J. Theiler, “Grafting: Fast, incremental feature selection by gradient descent in function space,” Journal of Machine Learning Research, vol. 3, pp. 1333–1356, 2003.
