A Multilayered Self-Learning Spiking Neural Network and its Learning Algorithm Based on ‘Winner-Takes-More’ Rule in Hierarchical Clustering
This paper introduces an architecture of a multilayered self-learning spiking neural network for hierarchical data clustering. It consists of a population-coding layer and several layers of spiking neurons. In contrast to the originally proposed multilayered spiking neural network, the proposed one does not require a separate learning algorithm for lateral connections. The capability to detect irregular clusters is achieved by improving the temporal Hebbian learning algorithm, which is generalized by replacing the ‘Winner-Takes-All’ rule with the ‘Winner-Takes-More’ one. It is shown that the layer of receptive neurons can be treated as a fuzzification layer, where a pool of receptive neurons corresponds to a linguistic variable and a receptive neuron within a pool to a linguistic term. The network architecture is described in terms of control systems theory. Using the Laplace transform, a spiking neuron synapse is presented as a second-order critically damped response unit, and the spiking neuron soma is modeled, on the basis of bang-bang control systems theory, as a threshold detection system. A simulation experiment confirms that the proposed architecture is effective in detecting irregular clusters.
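As an illustration of the synapse model mentioned in the abstract, a second-order critically damped unit with transfer function G(s) = 1/(τs + 1)² has the impulse response h(t) ∝ t·e^(−t/τ), which peaks at t = τ. The sketch below (an assumption for illustration; the normalization, the parameter name `tau` and its value are not from the source) evaluates this kernel normalized to a peak value of 1:

```python
import numpy as np

def synapse_kernel(t, tau=5.0):
    """Impulse response of a second-order critically damped unit,
    G(s) = 1/(tau*s + 1)^2, normalized so it peaks at 1 when t = tau.
    The time constant tau = 5.0 is an illustrative choice."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

t = np.linspace(0.0, 50.0, 5001)   # time grid with step 0.01
y = synapse_kernel(t, tau=5.0)
peak_t = float(t[np.argmax(y)])    # should be close to tau
```

The delayed, smoothly rising and decaying shape of this kernel is what lets the synapse convert an input spike into a graded postsynaptic potential before the soma's threshold detector fires.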
Autoassociative Memory Evolving System Based on Fuzzy Basis Functions
This paper proposes a neuro-fuzzy architecture of an evolving autoassociative memory network. Learning of the given construction is based on fuzzy basis functions. An algorithm for determining the centres of the membership functions is described, and the processes of accumulating fundamental memory patterns and retrieving them are considered. This hybrid neuro-fuzzy system combines the advantages of artificial neural networks, fuzzy inference systems and evolving systems, and its use allows researchers to increase autoassociative memory capacity without considerably complicating its construction.
In this article, the problem of clustering massive data sets represented in matrix form is considered. The article presents the 2-D self-organizing Kohonen map and its self-learning algorithms based on the winner-takes-all (WTA) and winner-takes-more (WTM) rules, with Gaussian and Epanechnikov functions as the fuzzy membership functions, as well as a variant without a winner. Fuzzy inference for processing data with overlapping classes in a neural network is introduced, which allows one to estimate the membership level of every sample in every class. This network is a generalization of the vector neuro- and neuro-fuzzy Kohonen networks and allows data to be processed as they arrive in online mode.
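To make the WTA/WTM distinction concrete, the sketch below (a minimal illustration, not the authors' algorithm; the learning rate, grid size and Gaussian width are assumed values) performs one WTM self-learning step on a 1-D Kohonen layer: unlike WTA, where only the winning neuron's weight vector is adjusted, every neuron moves toward the input, scaled by a neighborhood function of its grid distance to the winner:

```python
import numpy as np

def wtm_update(weights, x, lr=0.1, sigma=1.0):
    """One 'Winner-Takes-More' step on a 1-D Kohonen layer.
    weights: (n_neurons, dim) array of weight vectors; x: input vector.
    A Gaussian neighborhood is used here; an Epanechnikov alternative
    would be h = max(0, 1 - ((grid - winner) / sigma) ** 2)."""
    dists = np.linalg.norm(weights - x, axis=1)
    winner = int(np.argmin(dists))                 # competition stage
    grid = np.arange(len(weights))
    h = np.exp(-((grid - winner) ** 2) / (2.0 * sigma ** 2))
    # WTA is the special case where h is 1 at the winner and 0 elsewhere.
    return weights + lr * h[:, None] * (x - weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(5, 2))        # 5 neurons in a 2-D input space
x = np.array([1.0, 1.0])
w_new = wtm_update(w, x)
```

Because the Gaussian neighborhood is strictly positive, every weight vector is pulled at least slightly toward the input, which is exactly what lets WTM estimate graded membership levels for overlapping classes.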
This contribution presents a framework for a hybrid cascade neural network based on a specific sort of neo-fuzzy element and a new adaptive training rule. The main trait of the offered system is its ability to keep growing its cascades until the required accuracy is reached. A distinctive rapid training procedure is also covered, which makes it possible to operate with non-stationary data streams and provide online training of multiple parametric variables. A new training criterion is examined for handling non-stationary objects. Additionally, it is always possible to increase the inference order and the number of membership functions inside the extended neo-fuzzy neuron.
Yevgeniy Bodyanskiy, Olena Vynokurova, Iryna Pliss and Yuliia Tatarinova
In the paper, a new hybrid system of computational intelligence is proposed. This system combines the advantages of the Takagi-Sugeno-Kang neuro-fuzzy system, type-2 fuzzy logic, wavelet neural networks and the generalised additive models of Hastie and Tibshirani. The proposed system has the universal approximation properties and learning capability from experimental data that pertain to neural networks and neuro-fuzzy systems; the interpretability and transparency of results characteristic of soft computing systems and, first of all, of type-2 fuzzy systems; the possibility of effectively describing local signal and process features due to systems based on the wavelet transform; and the simplicity and speed of learning of generalised additive models. The proposed system can be used for solving a wide class of dynamic data mining tasks connected with non-stationary, nonlinear stochastic and chaotic signals. Such a system is sufficiently simple in numerical implementation and is characterised by a high speed of learning and information processing.
Yevgeniy Bodyanskiy, Olena Vynokurova, Ilya Kobylin and Oleg Kobylin
In the paper, adaptive modifications of fuzzy clustering methods are proposed for solving the problem of data stream mining in online mode. The clustering-segmentation task for short time series with unevenly distributed observations (occurring at the same time points in all samples) is considered. The proposed approach to adaptive fuzzy clustering of data streams is sufficiently simple in numerical implementation and is characterised by a high speed of information processing. Computational experiments have confirmed the effectiveness of the developed approach.
In the paper, a two-layer encoder is proposed. The nodes of the encoder under consideration are neo-fuzzy neurons, which are characterised by a fast learning process and effective approximation properties. The proposed neo-fuzzy encoder has a two-layer “bottleneck” architecture, and its learning algorithm is based on error backpropagation. The learning algorithm has a high rate of convergence because the output signals of the encoder's nodes (neo-fuzzy neurons) depend linearly on the tuning parameters. The proposed learning algorithm can tune both the synaptic weights and the centres of the membership functions. Thus, the paper proposes a hybrid neo-fuzzy system-encoder that has essential advantages over conventional neurocompressors.
Yevgeniy Bodyanskiy, Iryna Pliss and Olena Vynokurova
In the paper, a new flexible modification of the neo-fuzzy neuron, a neuro-fuzzy network based on these neurons, and adaptive learning algorithms for tuning all of their parameters are proposed. The algorithms are of interest because they ensure the online tuning not only of the synaptic weights and membership function parameters but also of the forms of these functions, which improves the approximation properties and avoids the occurrence of “gaps” in the input space.
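For readers unfamiliar with the basic construction being modified, a neo-fuzzy neuron replaces each ordinary synapse with a nonlinear synapse: a weighted sum of overlapping membership functions of that single input, with the neuron output being the sum over all inputs. The sketch below shows only this standard structure with fixed triangular membership functions on a uniform grid (the centres, weights and input values are illustrative assumptions), not the paper's flexible variant with tunable function forms:

```python
import numpy as np

def triangular_mf(x, centers):
    """Triangular membership functions on a uniform grid of centers;
    neighbors overlap so that activations sum to one inside the grid."""
    step = centers[1] - centers[0]
    return np.maximum(0.0, 1.0 - np.abs(x - centers) / step)

def neo_fuzzy_neuron(x, weights, centers):
    """Neo-fuzzy neuron: each input x_i passes through a nonlinear synapse
    f_i(x_i) = sum_j w_ij * mu_j(x_i); the output is the sum over inputs.
    Note the output is linear in the weights w_ij, which is what makes
    fast learning of the synaptic weights possible."""
    return sum(float(np.dot(weights[i], triangular_mf(xi, centers)))
               for i, xi in enumerate(x))

centers = np.linspace(0.0, 1.0, 5)   # 5 membership functions per input
weights = np.ones((2, 5))            # all w_ij = 1 -> output = n_inputs
y = neo_fuzzy_neuron([0.3, 0.7], weights, centers)
```

With all weights equal to 1, the partition-of-unity property of the triangular grid makes each synapse output exactly 1, so the neuron output equals the number of inputs; tuning the centres and function forms, as the paper proposes, changes how this partition covers the input space.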
Yevgeniy Bodyanskiy, Olena Vynokurova and Oleksandra Kharchenko
In this paper, a simple wavelet-neuro-system is developed that implements learning ideas based on the minimization of empirical risk and is oriented toward information processing in online mode. As an elementary block of such systems, we propose using a wavelet neuron that has improved approximation properties, computational simplicity, a high learning rate and the capability of identifying local features in data processing. The architecture and learning algorithm for least squares wavelet support machines, characterized by high speed of operation and the possibility of learning with a short training set, are proposed.
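The local feature identification capability mentioned above comes from using wavelets, which are localized in both time and frequency, as the neuron's activation functions. The sketch below is a generic illustration of this idea (the choice of the “Mexican hat” wavelet and all parameter values are assumptions, not taken from the paper): a wavelet neuron output formed as a weighted sum of dilated and translated wavelets.

```python
import numpy as np

def mexican_hat(t):
    """'Mexican hat' wavelet (negative second derivative of a Gaussian,
    up to scale): localized in both time and frequency."""
    return (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)

def wavelet_neuron(x, w, centers, widths):
    """Wavelet neuron: a weighted sum of dilated/translated wavelets.
    centers (translations) and widths (dilations) are per-wavelet
    parameters that a learning algorithm would tune alongside w."""
    phi = mexican_hat((x - centers) / widths)
    return float(np.dot(w, phi))

y = wavelet_neuron(0.0,
                   w=np.array([1.0, 0.5]),
                   centers=np.array([0.0, 2.0]),
                   widths=np.array([1.0, 1.0]))
```

Because each wavelet contributes appreciably only near its own centre, a short training set can still pin down the weights of the wavelets covering the region where the data actually lie.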