In the presented research, two Deep Neural Network (DNN) models for face image analysis were developed. The first, based on a moderate-size Convolutional Neural Network (CNN), detects the eyes, nose and mouth, while the second identifies 68 landmarks, resulting in a novel Face Alignment Network composed of a CNN and a recurrent neural network. The Face Parts Detector takes a face image as input and outputs the pixel coordinates of bounding boxes for the detected facial parts. The Face Alignment Network extracts deep features in its CNN module, while in the recurrent module it generates the 68 facial landmarks using not only these deep features but also the geometry of the facial parts. Both methods are robust to varying head poses and changing lighting conditions.
Convolutional neural networks (CNNs) are a contemporary technique for computer vision applications, in which pooling is an integral part of the deep CNN. Pooling provides the ability to learn invariant features and also acts as a regularizer that further reduces the problem of overfitting. Additionally, pooling techniques significantly reduce the computational cost and training time of networks, which are equally important considerations. Here, the performance of pooling strategies on different datasets is analyzed and discussed qualitatively. This study presents a detailed review of conventional and recent strategies, acquainting readers with the advantages and drawbacks of each. We also identify four fundamental factors, namely network architecture, activation function, overlapping and regularization approaches, which strongly affect the performance of pooling operations. It is believed that this work will help extend the understanding of CNNs, together with pooling regimes, for solving computer vision problems.
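The two most conventional strategies discussed in such reviews, max and average pooling, can be sketched in a few lines. The following is a minimal, framework-free illustration (the function names and the toy feature map are ours, not from the paper); real CNNs apply the same windowed reduction per channel.

```python
# Illustrative sketch of two common pooling strategies (max and
# average) with a 2x2 window and stride 2, in pure Python.

def pool2d(feature_map, op, size=2, stride=2):
    """Apply `op` (e.g. max or mean) over size x size windows."""
    h, w = len(feature_map), len(feature_map[0])
    out = []
    for i in range(0, h - size + 1, stride):
        row = []
        for j in range(0, w - size + 1, stride):
            window = [feature_map[i + di][j + dj]
                      for di in range(size) for dj in range(size)]
            row.append(op(window))
        out.append(row)
    return out

def mean(xs):
    return sum(xs) / len(xs)

fm = [[1, 3, 2, 0],
      [4, 2, 1, 5],
      [0, 1, 3, 2],
      [2, 2, 4, 1]]

print(pool2d(fm, max))   # max pooling keeps the strongest activation
print(pool2d(fm, mean))  # average pooling smooths the response
```

Both variants halve each spatial dimension here, which is the computational saving the review refers to; the choice of `op` is what the surveyed strategies differ on.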
A novel technique for deep learning of image classifiers is presented. The learned CNN models offer better separation of deep features (also known as embedded vectors), as measured by Euclidean proximity, with no deterioration of the classification results by class membership probability. The latter property can be used for enhancing image classifiers whose classes at the model's exploitation stage differ from the classes used during training. While the Shannon information of the SoftMax probability for the target class is extended, per mini-batch, by the intra-class variance, the trained network itself is extended by a Hadamard layer whose parameters represent the class centers. Contrary to existing solutions, this extra neural layer enables interfacing the training algorithm with standard stochastic gradient optimizers, e.g. the AdaM algorithm. Moreover, this approach makes the computed centroids adapt immediately to the updated embedded vectors, finally reaching comparable accuracy in fewer epochs.
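The intra-class variance term at the heart of this loss can be sketched as follows. This is not the paper's implementation: the embeddings, labels and centers below are hypothetical, and in the actual network the centers are realized as parameters of the extra Hadamard layer so that a standard optimizer such as AdaM updates them alongside the rest of the model.

```python
# Hedged sketch: intra-class variance penalty for a mini-batch of
# embedded vectors, with one trainable center per class.

def intra_class_variance(embeddings, labels, centers):
    """Mean squared Euclidean distance of each embedding to its class center."""
    total = 0.0
    for vec, lab in zip(embeddings, labels):
        center = centers[lab]
        total += sum((v - c) ** 2 for v, c in zip(vec, center))
    return total / len(embeddings)

# toy mini-batch (assumed values, for illustration only)
emb = [[1.0, 0.0], [0.8, 0.2], [0.0, 1.0]]
labs = [0, 0, 1]
centers = {0: [0.9, 0.1], 1: [0.0, 1.0]}
print(intra_class_variance(emb, labs, centers))
```

Minimizing this term pulls embeddings of the same class toward a shared centroid, which is what yields the improved Euclidean separation the abstract reports.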
In this research note, the satisficing newsvendor problem is considered, defined as the maximization of the probability of exceeding the expected profit multiplied by a positive constant. This constant, called the optimism coefficient, can be chosen by the firm's management based either on their preference or on market conditions. The coefficient indicates whether the decision maker is weakly or strongly optimistic. For a general demand distribution, the results depend significantly on this coefficient.
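For a discrete demand distribution the objective can be evaluated directly. The following sketch is ours (the prices, the toy demand distribution and the value of the optimism coefficient are assumptions, not from the note): for each order quantity, it computes the probability that realized profit reaches the aspiration level, i.e. the optimism coefficient times the expected profit at that quantity.

```python
# Hedged sketch of the satisficing newsvendor objective with a toy
# discrete demand distribution (all numbers below are assumed).

p, c = 10.0, 6.0                       # unit price and unit cost
demand_pmf = {1: 0.2, 2: 0.5, 3: 0.3}  # toy demand distribution
alpha = 0.9                            # optimism coefficient

def profit(q, d):
    return p * min(q, d) - c * q

def satisficing_objective(q):
    expected = sum(prob * profit(q, d) for d, prob in demand_pmf.items())
    # probability that realized profit meets the aspiration level
    return sum(prob for d, prob in demand_pmf.items()
               if profit(q, d) >= alpha * expected)

best_q = max(demand_pmf, key=satisficing_objective)
print(best_q, satisficing_objective(best_q))
```

Note how the satisficing solution can differ from the classical expected-profit maximizer: a small order is certain to meet a modest aspiration level, while a larger order trades that certainty for higher but riskier profit.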
This paper addresses the problem of effective processing using third-generation neural networks. The article features two new models of spiking neurons based on cusp catastrophe theory. The effectiveness of the models is demonstrated with an example of a network composed of three neurons solving the problem of the linear inseparability of the XOR function. The proposed solutions are dedicated to hardware implementation following the edge computing strategy. The paper presents simulation results and outlines further research directions in the field of practical applications and implementations using nanometer CMOS technologies and the current processing mode.
In this note, we propose a game-theoretic approach for benchmarking computational problems and their solvers. The approach takes an assessment matrix as the payoff matrix of a zero-sum matrix game in which the first player chooses a problem and the second player chooses a solver. The solution of this game in mixed strategies is used to construct a notionally objective ranking of the problems and solvers under consideration. The proposed approach is illustrated with an example to demonstrate its viability and its suitability for applications.
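The mechanics can be illustrated on the smallest non-trivial case. The sketch below is ours, with a hypothetical 2×2 assessment matrix: for a 2×2 zero-sum game with no saddle point, the mixed-strategy solution has a closed form, while larger assessment matrices would require linear programming.

```python
# Hedged sketch of the game-theoretic ranking idea on a 2x2
# assessment matrix (rows = problems, columns = solvers).

def solve_2x2_zero_sum(m):
    """Closed-form mixed-strategy solution of a 2x2 zero-sum game
    with no saddle point: returns (row strategy, column strategy, value)."""
    (a, b), (c, d) = m
    denom = a - b - c + d
    p = (d - c) / denom          # probability the row player picks row 0
    q = (d - b) / denom          # probability the column player picks col 0
    v = (a * d - b * c) / denom  # value of the game
    return (p, 1 - p), (q, 1 - q), v

# Hypothetical assessment matrix: entry [i][j] scores problem i
# against solver j (higher = harder for that solver).
payoff = [[3.0, 0.0],
          [1.0, 2.0]]

problems, solvers, value = solve_2x2_zero_sum(payoff)
print(problems, solvers, value)
```

The mixed-strategy weights then induce the ranking: a problem (or solver) receiving more probability mass in the equilibrium is ranked as the more discriminating one, and the game value summarizes the benchmark as a whole.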
Pixel art is an aesthetic that emulates the graphical style of old computer systems. Graphics created in this style need to be scaled up for presentation on modern displays. The authors propose two new modifications of image scaling for this purpose: a proximity-based coefficient correction and a transition-area restriction. Moreover, a new interpolation kernel is introduced. The presented approaches aim at reliable and flexible bitmap scaling while overcoming the limitations of existing methods. The new techniques were implemented in an extensible .NET application that serves as both an executable program and a library. The project is designed for prototyping and testing interpolation operations and can easily be extended with new functionality, either by adding it to the code or by using the provided interface.
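For context, the classic baseline that such interpolation methods refine is nearest-neighbour upscaling, which preserves hard pixel edges but cannot smooth transitions. This sketch is only that baseline (the paper's proximity-based correction and transition-area restriction go beyond it and are not reproduced here):

```python
# Baseline sketch only: nearest-neighbour upscaling, the standard
# starting point for pixel-art magnification by an integer factor.

def nearest_neighbour_scale(img, factor):
    """Upscale a 2-D list of pixel values by an integer factor."""
    return [[img[y // factor][x // factor]
             for x in range(len(img[0]) * factor)]
            for y in range(len(img) * factor)]

sprite = [[0, 1],
          [1, 0]]
print(nearest_neighbour_scale(sprite, 2))
```

Each source pixel simply becomes a factor×factor block; interpolation kernels such as the one proposed in the paper instead blend neighbouring pixels under controlled conditions.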
In the 1970s, computer scientists began to engage in research in the field of structural biology. The first structural databases, as well as models and methods supporting the analysis of biomolecule structures, started to be created. RNA was put at the centre of scientific interest quite late; however, more and more methods dedicated to this molecule are currently being developed. This paper presents RNApolis, a new computing platform offering access to seven bioinformatic tools developed to support the study of RNA structure. The set of tools includes a structural database and systems for predicting, modelling, annotating and evaluating RNA structure. RNApolis supports research at different structural levels and allows the discovery, establishment, and validation of relationships between the primary, secondary and tertiary structure of RNAs. The platform is freely available at http://rnapolis.pl
Sentiment classification is an important task that has gained extensive attention both in academia and in industry. Many issues related to this task, such as the handling of negation or of sarcastic utterances, have been analyzed and addressed in previous works. However, the issue of class imbalance, which often compromises the prediction capabilities of learning algorithms, has scarcely been studied. In this work, we aim to bridge the gap between imbalanced learning and sentiment analysis. An experimental study including twelve imbalanced-learning preprocessing methods, four feature representations, and a dozen datasets is carried out in order to analyze the usefulness of imbalanced-learning methods for sentiment classification. Moreover, the data difficulty factors, commonly studied in imbalanced learning, are investigated on sentiment corpora to evaluate the impact of class imbalance.
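The simplest of the preprocessing methods in this family is random oversampling, sketched below on a toy sentiment corpus (the example data and function name are ours; the study compares twelve such methods, of which this is only the most basic):

```python
# Minimal sketch of one imbalanced-learning preprocessing method:
# random oversampling duplicates minority-class examples until
# every class matches the majority-class count.
import random
from collections import Counter

def random_oversample(samples, labels, seed=0):
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_s, out_l = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [s for s, l in zip(samples, labels) if l == cls]
        for _ in range(target - n):
            out_s.append(rng.choice(pool))
            out_l.append(cls)
    return out_s, out_l

# toy imbalanced sentiment corpus (assumed, for illustration)
texts = ["great film", "loved it", "superb", "awful"]
sentiment = ["pos", "pos", "pos", "neg"]
bal_texts, bal_labels = random_oversample(texts, sentiment)
print(Counter(bal_labels))   # classes are now balanced
```

Applied before training, this prevents the classifier from trivially favouring the majority sentiment class, at the cost of repeated examples; the more elaborate methods in the study generate synthetic minority examples instead.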
Atefeh Moghaddam, Jacques Teghem, Daniel Tuyttens, Farouk Yalaoui and Lionel Amodeo
We consider a single-machine bi-objective scheduling problem with rejection, in which it is possible to reject some jobs. The two objectives are the total weighted completion time and the total rejection cost, and the aim is to determine the set of efficient solutions. Four heuristic algorithms are provided to solve this problem; they are implicit enumeration algorithms forming a branching tree, each with two versions depending on whether the root of the tree corresponds to acceptance or rejection of all the jobs. The algorithms are first illustrated by a didactic example and then compared on a large set of instances of various dimensions, and their respective performances are analysed.
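On tiny instances the efficient set can be obtained by brute force, which clarifies what the branching algorithms compute. The sketch below is ours, not the paper's method (the job data are hypothetical): it enumerates every accept/reject split, schedules accepted jobs by the weighted-shortest-processing-time (WSPT) rule, which is optimal for total weighted completion time on one machine, and keeps the Pareto-efficient objective pairs.

```python
# Hedged brute-force sketch (not the paper's implicit enumeration):
# efficient set of a single-machine scheduling problem with rejection.
from itertools import combinations

# hypothetical jobs: (processing time, weight, rejection cost)
jobs = [(2, 3, 4), (1, 1, 3), (3, 2, 5)]

def evaluate(accepted):
    """Objective pair (total weighted completion time, rejection cost)."""
    order = sorted(accepted, key=lambda j: j[0] / j[1])  # WSPT rule
    t = twct = 0
    for p, w, _ in order:
        t += p
        twct += w * t
    rej = sum(j[2] for j in jobs if j not in accepted)
    return twct, rej

def pareto_front():
    points = set()
    for k in range(len(jobs) + 1):
        for subset in combinations(jobs, k):
            points.add(evaluate(list(subset)))
    # keep points not weakly dominated by any other point
    return sorted(p for p in points
                  if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                             for q in points))

print(pareto_front())
```

The two extreme points correspond to rejecting all jobs (zero completion-time cost, maximal rejection cost) and accepting all jobs (the reverse), which is exactly why the paper's branching trees are rooted at one of those two configurations.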