Search Results

1 - 5 of 5 items

  • Author: Simone A. Ludwig

Abstract

An intrusion detection system (IDS) is an important feature to employ in order to protect a system against network attacks. An IDS monitors the activity within a network of connected computers and analyzes it for intrusive patterns. In the event of an ‘attack’, the system has to respond appropriately. Different machine learning techniques have been applied to this problem in the past; these techniques fall into either the clustering or the classification category. In this paper, the classification approach is used: a neural network ensemble method is employed to classify the different types of attacks. The neural network ensemble consists of an autoencoder, a deep belief neural network, a deep neural network, and an extreme learning machine. The data used for the investigation is the NSL-KDD data set. In particular, the detection rate and false alarm rate, among other measures (confusion matrix, classification accuracy, and AUC), of the implemented neural network ensemble are evaluated.
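
As a rough illustration of how such an ensemble's outputs could be combined and scored, the sketch below averages the class-probability predictions of several base networks and derives the detection rate and false alarm rate from a binary attack-vs-normal confusion matrix. The averaging rule, the function names, and the `normal_class` convention are illustrative assumptions; the abstract does not specify the ensemble's actual combination scheme.

```python
# Hypothetical sketch: combine class-probability outputs of several base
# classifiers by simple averaging, then derive detection rate and false
# alarm rate from a binary (attack vs. normal) confusion matrix.
import numpy as np

def ensemble_predict(prob_list):
    """Average per-model class probabilities and pick the argmax class."""
    avg = np.mean(np.stack(prob_list), axis=0)   # (n_samples, n_classes)
    return avg.argmax(axis=1)

def detection_and_false_alarm_rate(y_true, y_pred, normal_class=0):
    """Treat every non-normal class as 'attack' and compute DR and FAR."""
    attack_true = y_true != normal_class
    attack_pred = y_pred != normal_class
    tp = np.sum(attack_true & attack_pred)    # attacks correctly flagged
    fn = np.sum(attack_true & ~attack_pred)   # attacks missed
    fp = np.sum(~attack_true & attack_pred)   # normal traffic flagged
    tn = np.sum(~attack_true & ~attack_pred)  # normal traffic passed
    detection_rate = tp / (tp + fn)
    false_alarm_rate = fp / (fp + tn)
    return detection_rate, false_alarm_rate
```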

Abstract

Adaptive Particle Swarm Optimization (PSO) variants have become popular in recent years. The main idea of these adaptive PSO variants is that they adaptively change their search behavior during the optimization process based on information gathered during the run. Adaptive PSO variants have been shown to solve a wide range of difficult optimization problems efficiently and effectively. In this paper, we propose a Repulsive Self-adaptive Acceleration PSO (RSAPSO) variant that adaptively optimizes the velocity weights of every particle at every iteration. The velocity weights include the acceleration constants as well as the inertia weight, which are responsible for the balance between exploration and exploitation. Our proposed RSAPSO variant optimizes the velocity weights that are then used to search for the optimal solution of the problem (e.g., a benchmark function). We compare RSAPSO to four known adaptive PSO variants (decreasing weight PSO, time-varying acceleration coefficients PSO, guaranteed convergence PSO, and attractive and repulsive PSO) on twenty benchmark problems. The results show that RSAPSO achieves better results than the known PSO variants on difficult optimization problems that require large numbers of function evaluations.
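
For readers unfamiliar with the velocity weights being adapted, the sketch below shows a standard PSO velocity/position update in which the inertia weight and acceleration constants are stored per particle, since RSAPSO tunes them for every particle at every iteration. The per-particle storage and the function signature are assumptions for illustration; the abstract does not give RSAPSO's actual adaptation rule, so it is omitted here.

```python
# Hypothetical sketch of one PSO update step with per-particle velocity
# weights (inertia w and acceleration constants c1, c2).
import numpy as np

rng = np.random.default_rng(0)

def pso_step(pos, vel, pbest, gbest, w, c1, c2):
    """pos, vel, pbest: (n_particles, dim); gbest: (dim,);
    w, c1, c2: (n_particles, 1) per-particle velocity weights."""
    r1 = rng.random(pos.shape)
    r2 = rng.random(pos.shape)
    # cognitive pull toward each particle's best, social pull toward the swarm best
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```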

Abstract

Differential Evolution (DE) is a simple, yet highly competitive real-parameter optimizer in the family of evolutionary algorithms. Much of its robust performance is attributed to its control parameters and the mutation strategy employed; proper settings of these generally lead to good solutions. Finding the best parameters for a given problem through trial and error is time-consuming and sometimes impractical, which calls for the development of adaptive parameter control mechanisms. In this work, we investigate the impact and efficacy of adapting mutation strategies with or without adapting the control parameters, and report on the plausibility of this scheme. Backed by empirical evidence from this and previous works, we first build a case for strategy adaptation in the presence as well as in the absence of parameter adaptation. Afterwards, we propose a new mutation strategy and an adaptive variant, SA-SHADE, which is based on SHADE, a recently proposed self-adaptive memory-based variant of Differential Evolution. We report the performance of SA-SHADE on 28 benchmark functions of varying complexity and compare it with the classic DE algorithm (DE/Rand/1/bin) and other state-of-the-art adaptive DE variants, including CoDE, EPSDE, JADE, and SHADE itself. Our results show that adaptation of the mutation strategy improves the performance of DE in both the presence and absence of control parameter adaptation, and should thus be employed frequently.
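
As a point of reference for the comparison, the sketch below implements the classic DE/Rand/1/bin trial-vector generation (rand/1 mutation plus binomial crossover) that SA-SHADE is compared against. The fixed F and CR values are illustrative only; SHADE and SA-SHADE adapt these from a memory of successful settings, and SA-SHADE's new mutation strategy is not detailed in the abstract, so neither is reproduced here.

```python
# Hypothetical sketch of the baseline DE/Rand/1/bin trial-vector step.
import numpy as np

rng = np.random.default_rng(1)

def de_rand_1_bin(pop, i, F=0.5, CR=0.9):
    """Create a trial vector for individual i from population pop (NP x D)."""
    NP, D = pop.shape
    # pick three distinct individuals different from i
    r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])     # DE/rand/1 mutation
    cross = rng.random(D) < CR
    cross[rng.integers(D)] = True                  # guarantee one crossed gene
    return np.where(cross, mutant, pop[i])         # binomial crossover
```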

Abstract

Fuzzy clustering is a popular unsupervised learning method used in cluster analysis. Fuzzy clustering allows a data point to belong to two or more clusters. Fuzzy c-means is the most well-known method applied to cluster analysis; its shortcoming, however, is that the number of clusters needs to be predefined. This paper proposes a clustering approach based on Particle Swarm Optimization (PSO). This PSO approach determines the optimal number of clusters automatically with the help of a threshold vector. The algorithm first randomly partitions the data set within a preset number of clusters, and then uses a reconstruction criterion to evaluate the performance of the clustering results. The experiments conducted demonstrate that the proposed algorithm automatically finds the optimal number of clusters. Furthermore, to visualize the results, principal component analysis projection, conventional Sammon mapping, and fuzzy Sammon mapping were used.
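
One common way to let a swarm determine the number of clusters is to encode candidate cluster centres together with a threshold vector in each particle, where only centres whose threshold exceeds a cut-off are active; the sketch below illustrates this idea. The encoding, the 0.5 cut-off, and the nearest-centre assignment are assumptions for illustration, and the reconstruction criterion used as the fitness measure in the paper is not reproduced here.

```python
# Hypothetical sketch: decode a particle into active cluster centres
# via a threshold vector, so the cluster count emerges from the swarm.
import numpy as np

def decode_particle(particle, max_clusters, dim):
    """particle = [thresholds (max_clusters) | centres (max_clusters * dim)]."""
    thresholds = particle[:max_clusters]
    centres = particle[max_clusters:].reshape(max_clusters, dim)
    active = thresholds > 0.5                  # centres switched on by the threshold vector
    if not active.any():                       # keep at least two clusters active
        active[np.argsort(thresholds)[-2:]] = True
    return centres[active]

def assign(data, centres):
    """Assign each data point to its nearest active centre."""
    d = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
    return d.argmin(axis=1)
```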

Abstract

Deep Neural Networks (DNNs) are neural networks with many hidden layers. DNNs are becoming popular in automatic speech recognition tasks, which combine a good acoustic model with a language model. Standard feedforward neural networks cannot handle speech data well since they do not have a way to feed information from a later layer back to an earlier layer. Thus, Recurrent Neural Networks (RNNs) have been introduced to take temporal dependencies into account. However, the shortcoming of RNNs is that they cannot handle long-term dependencies due to the vanishing/exploding gradient problem. Therefore, Long Short-Term Memory (LSTM) networks were introduced; they are a special case of RNNs that takes long-term dependencies in speech, in addition to short-term dependencies, into account. Similarly, GRU (Gated Recurrent Unit) networks are an improvement of LSTM networks that also take long-term dependencies into consideration. Thus, in this paper, we evaluate RNN, LSTM, and GRU models to compare their performance on a reduced TED-LIUM speech data set. The results show that LSTM achieves the best word error rates; however, the GRU optimization is faster while achieving word error rates close to those of LSTM.
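
One reason GRU optimization is typically faster than LSTM is that the GRU uses fewer gates and therefore fewer parameters per layer. The sketch below is a minimal PyTorch comparison (with illustrative layer sizes not taken from the paper) that instantiates plain RNN, LSTM, and GRU layers of the same size and prints their parameter counts.

```python
# Hypothetical sketch: compare parameter counts of same-sized RNN, LSTM,
# and GRU layers; sizes are illustrative, not those used in the paper.
import torch.nn as nn

def count_params(module):
    return sum(p.numel() for p in module.parameters())

input_size, hidden_size = 40, 256   # e.g. 40 acoustic features per frame
for name, cls in [("RNN", nn.RNN), ("LSTM", nn.LSTM), ("GRU", nn.GRU)]:
    layer = cls(input_size, hidden_size, batch_first=True)
    print(f"{name}: {count_params(layer):,} parameters")
```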