Search Results

1 - 9 of 9 items

  • Author: Vadim V. Romanuke

Abstract

A 2×2 MIMO wireless communication system with channel estimation is simulated, in which two transmit and two receive antennas are employed. The orthogonal pilot signal approach is used for the channel estimation, where Hadamard sequences are used for piloting. Data are modulated by coherent binary phase-shift keying, whereupon an orthogonal space-time block coding subsystem encodes information symbols using the Alamouti code. Based on the simulation, it is ascertained that the bit-error rate can be decreased by substituting sequences having irregular structures, which constitute the eight known orthogonal bases, for the Hadamard sequences. Considering a de-orthogonalization caused by any two pilot sequence symbol errors, the bit-error rate is decreased by almost 2.9 %. If de-orthogonalizations are caused by two repeated indefinite and definite pilot sequence symbol errors, the decrements are almost 16 % and 10 %, respectively. Whichever sequences are used for piloting, the 2×2 MIMO system is ascertained to be resistant to de-orthogonalization if the frame consists of 128 to 256 symbols piloted with 32 to 64 symbols, respectively.
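
The following is a minimal sketch (not the paper's simulator) of the building blocks named in the abstract: coherent BPSK modulation, Alamouti 2×2 space-time block encoding, and Hadamard pilot sequences built by the Sylvester recursion. The pilot length of 32 and the frame of 128 bits are illustrative assumptions taken from the stated ranges.

```python
import numpy as np

def bpsk(bits):
    """Coherent BPSK: bit 0 -> +1, bit 1 -> -1."""
    return 1.0 - 2.0 * np.asarray(bits, dtype=float)

def hadamard(n):
    """Sylvester-type Hadamard matrix of order n (n must be a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def alamouti_encode(symbols):
    """Map symbol pairs (s1, s2) to 2x2 Alamouti blocks
    [[s1, s2], [-conj(s2), conj(s1)]]; rows are time slots, columns are transmit antennas."""
    s = np.asarray(symbols, dtype=complex).reshape(-1, 2)
    blocks = np.empty((s.shape[0], 2, 2), dtype=complex)
    blocks[:, 0, 0] = s[:, 0]
    blocks[:, 0, 1] = s[:, 1]
    blocks[:, 1, 0] = -np.conj(s[:, 1])
    blocks[:, 1, 1] = np.conj(s[:, 0])
    return blocks

# Two orthogonal 32-symbol pilot sequences (one per transmit antenna) from a Hadamard matrix,
# followed by Alamouti-encoded BPSK data for a 128-bit frame.
pilot = hadamard(32)[:2]
data_bits = np.random.randint(0, 2, size=128)
tx_blocks = alamouti_encode(bpsk(data_bits))
print(pilot.shape, tx_blocks.shape)
```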

Abstract

A problem of reducing interval uncertainty is considered via an approach of cutting off equal parts from the left and right. The interval contains admissible values of an observed object’s parameter. The object’s parameter cannot be measured directly or deductively computed, so it is estimated by expert judgments. Terms of observations are short, and the object’s statistical data are poor. Thus an algorithm of flexibly reducing interval uncertainty is designed via adjusting the parameter by expert procedures and allowing the cutting off to be controlled. While the parameter is adjusted forward, the interval becomes progressively narrowed after every next expert procedure. The narrowing is performed via division-by-q dichotomization, cutting off the q⁻¹-th parts from the left and right. If the current parameter’s value falls outside of the interval, forward adjustment is canceled. Then backward adjustment is executed, where one of the endpoints is moved backwards. Adjustment is not executed when the current parameter’s value, although enclosed within the interval, is simultaneously too close to both the left and right endpoints. If the value is “trapped” like that a definite number of times in succession, the early stop fires.
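
A minimal sketch of the narrowing loop described above, assuming the caller supplies the expert procedure as a function returning the current parameter value. The division-by-q cut, the backward adjustment of one endpoint, and the early-stop counter follow the abstract; the absolute closeness threshold and the size of the backward step are illustrative assumptions.

```python
def reduce_interval(expert_procedure, a, b, q=3, closeness=0.05, max_trapped=3, max_steps=50):
    """Flexibly narrow the interval [a, b] of admissible parameter values."""
    trapped = 0
    for _ in range(max_steps):
        x = expert_procedure(a, b)      # current parameter value from expert judgments
        width = b - a
        if a <= x <= b:
            near_left = (x - a) < closeness
            near_right = (b - x) < closeness
            if near_left and near_right:      # too close to both endpoints: no adjustment
                trapped += 1
                if trapped >= max_trapped:    # the early stop fires
                    break
                continue
            trapped = 0
            # forward adjustment: division-by-q dichotomization cuts 1/q off each side
            a, b = a + width / q, b - width / q
        else:
            # forward adjustment is canceled; backward adjustment moves one endpoint backwards
            if x < a:
                a -= width / q
            else:
                b += width / q
    return a, b
```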

Abstract

Adjustment of an unknown parameter of the multistage expert procedure is considered. The lower and upper boundaries of the parameter are assumed to be known. A key condition showing that the experts’ estimations are satisfactory in the current procedure is an inequality in which the value based on the estimations is not greater than the parameter. Algorithms of hard and soft adjusting are developed. If the inequality holds and both of its terms remain too close over a long sequence of expert procedures, the adjusting can be stopped early. The algorithms are reversible, implying inversion to the reverse inequality and sliding up off the lower boundary.
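
A minimal sketch of the adjustment loop implied by the abstract: the parameter slides down from its known upper boundary while the key inequality (value based on estimations not greater than the parameter) holds, and the loop stops early once both terms stay too close for a run of procedures. The concrete "hard" and "soft" step rules shown here are illustrative assumptions, not the paper's exact algorithms.

```python
def adjust_parameter(expert_value, lower, upper, soft=True,
                     closeness=1e-2, patience=5, max_steps=100):
    """Adjust the unknown parameter over a multistage expert procedure."""
    p = upper                       # start from the known upper boundary
    close_run = 0
    for _ in range(max_steps):
        v = expert_value()          # value computed from the experts' estimations
        if v > p:                   # key inequality violated: estimations unsatisfactory
            break
        if p - v < closeness:       # both terms of the inequality are too close
            close_run += 1
            if close_run >= patience:
                break               # early stop
        else:
            close_run = 0
        # soft adjusting moves the parameter part-way towards the value;
        # hard adjusting snaps it onto the value (both are illustrative choices)
        p = max(lower, (p + v) / 2 if soft else v)
    return p
```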

Abstract

The problem of classifying diversely distorted objects is considered. The classifier is a 2-layer perceptron capable of classifying a greater number of objects per unit of time. This is an advantage of the 2-layer perceptron over more complex neural networks like the neocognitron, the convolutional neural network, and deep learning neural networks. Distortion types are scaling, turning, and shifting. The object model is a monochrome 60 × 80 image of an enlarged English alphabet capital letter. Consequently, there are 26 classes of 4800-featured objects. Training sets have a parameter, the ratio of the pixel-to-scale-turn-shift standard deviations, which allows controlling the normally distributed feature distortion. An optimal ratio is found, at which the performance of the 2-layer perceptron is still unsatisfactory. Then, the best classifier is further trained with an additional 438 passes of the training sets by increasing the training smoothness tenfold. This helps decrease the ultimate classification error percentage from 35.23 % down to 12.92 %. However, the expected practicable distortions are smaller, so the percentage corresponding to them becomes just 1.64 %, meaning that only one object out of 61 is misclassified. Such a solution scheme is directly applicable to other classification problems, where the number of features is a thousand or a few thousand and the number of classes is a few tens.
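
A minimal sketch, assuming scikit-learn and SciPy, of the kind of setup named in the abstract: 26 classes of 60 × 80 monochrome images (4800 features) distorted by pixel noise, shifting, and turning, with a single ratio of pixel-to-geometric standard deviations controlling the distortion. The random prototypes, hidden-layer size, deviation scales, and sample counts are illustrative assumptions standing in for the paper's letter images and training schedule.

```python
import numpy as np
from scipy.ndimage import shift, rotate
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
prototypes = rng.random((26, 60, 80)) > 0.5     # stand-ins for the 26 clean letter images

def distorted_sample(letter, pixel_std, geom_std):
    """One training object: shifted, turned, and pixel-noised version of a prototype."""
    img = prototypes[letter].astype(float)
    img = shift(img, rng.normal(0.0, geom_std, size=2), order=1, mode="constant")
    img = rotate(img, rng.normal(0.0, 10.0 * geom_std), reshape=False, order=1, mode="constant")
    img += rng.normal(0.0, pixel_std, size=img.shape)
    return img.ravel()                           # a 4800-featured object

ratio = 0.5                                      # pixel-to-scale-turn-shift standard deviation ratio
geom_std = 1.0
X = np.array([distorted_sample(c, ratio * geom_std, geom_std)
              for c in range(26) for _ in range(20)])
y = np.repeat(np.arange(26), 20)

# A 2-layer perceptron: one hidden layer followed by the 26-unit output layer.
clf = MLPClassifier(hidden_layer_sizes=(300,), max_iter=300).fit(X, y)
print(clf.score(X, y))
```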

Abstract

An attempt at finding an appropriate number of convolutional layers in convolutional neural networks is made. The benchmark datasets are CIFAR-10, NORB, and EEACL26, whose diversity and heterogeneousness are meant to ensure the general applicability of a rule presumed to yield that number. The rule is drawn from the best performances of convolutional neural networks built with 2 to 12 convolutional layers. It is not an exact best number of convolutional layers but rather the result of a short process of trying a few versions of such numbers. For small images (like those in CIFAR-10), the initial number is 4. For datasets that have a few tens of image categories or more, initially setting five to eight convolutional layers is recommended, depending on the complexity of the dataset. The fuzziness in the rule is not removable because of the required diversity and heterogeneousness.
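
A small sketch expressing the stated rule as a helper function. The image-size threshold and the complexity scale are illustrative assumptions; the abstract only fixes the initial number 4 for small images and the range of five to eight layers for datasets with a few tens of categories or more.

```python
def initial_conv_layers(image_side, num_categories, complexity=0.0):
    """Suggest an initial number of convolutional layers to start the trial process from.

    complexity is a rough dataset-complexity score in [0, 1] (an assumed parameterization)."""
    if image_side <= 32 and num_categories <= 10:     # small images, e.g. CIFAR-10
        return 4
    # a few tens of image categories and more: start with five to eight layers
    return 5 + round(3 * min(max(complexity, 0.0), 1.0))

print(initial_conv_layers(32, 10))         # CIFAR-10-like dataset: 4
print(initial_conv_layers(96, 26, 0.7))    # richer, more complex dataset: 7
```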

Abstract

The present paper considers an open problem of setting hyperparameters for convolutional neural networks aimed at image classification. Since selecting filter spatial extents for convolutional layers is a topical problem, it is approximately solved by accumulating statistics of the neural network performance. The network architecture is taken on the basis of the MNIST database experience. The eight-layer architecture having four convolutional layers is nearly the best suited for classifying small and medium-size images. Image databases are formed of grayscale images whose sizes range from 28 × 28 to 64 × 64 with a step of 2. Except for the filter spatial extents, the rest of the hyperparameters of those eight layers are unalterable, and they are chosen scrupulously based on rules of thumb. A sequence of possible filter spatial extents is generated for each size. Then the sets of four filter spatial extents producing the best performance are extracted. The rule of this extraction, which allows selecting the best filter spatial extents, is formalized with two conditions. Mainly, the difference between the maximal and minimal extents must be as small as possible. No unit filter spatial extent is recommended. The secondary condition is that the filter spatial extents should constitute a non-increasing set. Validation on the MNIST and CIFAR-10 databases justifies such a solution, which can be extended for building convolutional neural network classifiers of colour and larger images.
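
A minimal sketch of the extraction rule formalized above. It assumes the caller already has the sets of four filter spatial extents that produced the best performance for a given image size, and applies the stated conditions: discard sets containing a unit extent or failing to be non-increasing, then prefer the smallest difference between the maximal and minimal extents. The example input sets are hypothetical.

```python
def select_extent_sets(best_performing_sets):
    """Apply the two-condition extraction rule to candidate sets of filter spatial extents."""
    admissible = [
        s for s in best_performing_sets
        if all(e > 1 for e in s)                               # no unit filter spatial extent
        and all(s[i] >= s[i + 1] for i in range(len(s) - 1))   # non-increasing set (secondary)
    ]
    min_spread = min(max(s) - min(s) for s in admissible)      # primary: minimal max-min difference
    return [s for s in admissible if max(s) - min(s) == min_spread]

# Hypothetical best-performing sets for one image size:
print(select_extent_sets([(5, 5, 3, 3), (7, 5, 3, 1), (5, 4, 4, 3), (3, 5, 5, 3)]))
```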

Abstract

A technique of DropOut for preventing overfitting of convolutional neural networks for image classification is considered in the paper. The goal is to find a rule of rationally allocating DropOut layers of 0.5 rate to maximise performance. To achieve the goal, two common network architectures having either 4 or 5 convolutional layers are used. Benchmarking is fulfilled with the CIFAR-10, EEACL26, and NORB datasets. Initially, series of all admissible versions of DropOut layer allocation are generated. After the performance against the series is evaluated, normalized, and averaged, the compromise rule is found. It consists in non-compactly inserting a few DropOut layers before the last convolutional layer. It is likely that the scheme with two or more DropOut layers fits networks with many convolutional layers for image classification problems with plenty of features. Such a scheme should also fit simple datasets prone to overfitting. In fact, the rule “prefers” a smaller number of DropOut layers. The exemplary gain from applying the rule is roughly between 10 % and 50 %.
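
A minimal sketch enumerating admissible allocations of rate-0.5 DropOut layers over the gaps following the convolutional layers, in the spirit of the series generated in the paper, together with a tiny predicate for the compromise rule (a few DropOut layers inserted non-compactly before the last convolutional layer). The "one or two layers" bound and the non-adjacency test are illustrative interpretations of "a few" and "non-compactly".

```python
from itertools import product

def dropout_allocations(num_conv_layers):
    """All 0/1 masks saying whether a DropOut(0.5) layer follows each convolutional layer."""
    return list(product((0, 1), repeat=num_conv_layers))

def follows_compromise_rule(mask):
    placed = [i for i, m in enumerate(mask) if m]
    if not (1 <= len(placed) <= 2):           # "a few" DropOut layers, here one or two
        return False
    if placed[-1] >= len(mask) - 1:           # keep them before the last convolutional layer
        return False
    return all(b - a > 1 for a, b in zip(placed, placed[1:]))   # non-compact placement

masks = dropout_allocations(5)                # the 5-convolutional-layer architecture
print([m for m in masks if follows_compromise_rule(m)])
```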

Abstract

An optimization problem of classifying shift-distorted objects is studied. The classifier is a 2-layer perceptron, and the object model is a monochrome 60 × 80 image. Based on the fact that the perceptron has previously been applied successfully to classifying shifted objects using a pixel-to-shift standard deviation ratio for training, this ratio is optimized. The optimization criterion is the minimization of the classification error percentage. A classifier trained under the found optimal ratio is optimized additionally. It then effectively classifies shift-distorted images, erring in only one case out of eight at the maximal shift distortion. On average, the classification error percentage appears to be less than 2.5 %. Thus, the optimized 2-layer perceptron outruns the much slower neocognitron. The found optimal ratio should be nearly the same for other object classification problems, where the number of object features varies around 4800 and the number of classes is between twenty and thirty.
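
A minimal sketch of the ratio optimization described above: the pixel-to-shift standard deviation ratio used when generating the training set is swept over a grid, a classifier is trained and evaluated at each value, and the ratio giving the smallest classification error percentage is kept. The grid and the stand-in error curve in the usage line are illustrative assumptions; a real run would plug in the perceptron training and testing routine.

```python
import numpy as np

def optimize_shift_ratio(error_percentage, grid=np.linspace(0.1, 2.0, 20)):
    """error_percentage(ratio) must train the 2-layer perceptron on images shifted with the
    given pixel-to-shift ratio and return its classification error percentage."""
    errors = np.array([error_percentage(r) for r in grid])
    best = int(np.argmin(errors))
    return float(grid[best]), float(errors[best])

# Usage with a dummy error curve standing in for the training/testing routine:
best_ratio, best_error = optimize_shift_ratio(lambda r: (r - 0.8) ** 2 * 10 + 2.3)
print(best_ratio, best_error)
```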

Abstract

Approximation in solving infinite two-person non-cooperative games is studied in the paper. An approximation approach converting an infinite game into a finite one is suggested. The conversion is fulfilled in three stages. Primarily, the players’ payoff functions are sampled variously according to the stated sampling requirements. These functions are defined on the unit hypercube of the appropriate finite-dimensional Euclidean space. The sampling step along each of the hypercube dimensions is constant. At the second stage, the players’ multidimensional payoff matrices are reshaped into ordinary two-dimensional matrices using the reversible index-to-index reshaping. Thus, a bimatrix game is obtained as an initial approximation of the infinite game. At the third stage of the conversion, the player’s finite equilibrium strategy support is checked for weak consistency, defined by five types of inequalities within a minimal neighbourhood of every specified sampling step. If necessary, the weakly consistent solution of the bimatrix game is checked for consistency, strengthened in that the cardinality of every player’s equilibrium strategy support and their densities shall be non-decreasing within minimal neighbourhoods of the sampling steps. Eventually, the consistent solution certifies the acceptability of the game approximation, letting even games without any equilibrium situations, including those isomorphic to the unit hypercube game, be solved. A case of a lightened consistency check is stated for the completely mixed Nash equilibrium situation.
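
A minimal sketch of the first two conversion stages described above, assuming example payoff functions of one-dimensional strategies on the unit square of the unit hypercube. Each payoff function is sampled with a constant step along every dimension, and the resulting multidimensional payoff arrays are reshaped into ordinary two-dimensional matrices by the reversible row-major index-to-index mapping, giving a bimatrix game as the initial approximation. The concrete payoff functions and the sampling step 0.1 are assumptions; the consistency checks of the third stage are not sketched.

```python
import numpy as np

def sample_payoff(payoff, dims_first, dims_second, step):
    """Sample payoff(x, y), where x and y are the first and second player's strategy points
    in the unit hypercube, on a grid with the given constant sampling step per dimension."""
    n = int(round(1.0 / step)) + 1
    grid = np.linspace(0.0, 1.0, n)
    axes = np.meshgrid(*([grid] * (dims_first + dims_second)), indexing="ij")
    points = np.stack(axes, axis=-1)
    x, y = points[..., :dims_first], points[..., dims_first:]
    return payoff(x, y)

def to_bimatrix(K1, K2, dims_first):
    """Reversible index-to-index reshaping of the multidimensional payoff matrices."""
    rows = int(np.prod(K1.shape[:dims_first]))
    return K1.reshape(rows, -1), K2.reshape(rows, -1)

# Example payoff functions of one-dimensional strategies x, y in [0, 1] (an assumption):
K1 = sample_payoff(lambda x, y: np.sin(np.pi * x[..., 0]) - (x[..., 0] - y[..., 0]) ** 2, 1, 1, 0.1)
K2 = sample_payoff(lambda x, y: np.cos(np.pi * y[..., 0]) - (x[..., 0] - y[..., 0]) ** 2, 1, 1, 0.1)
A, B = to_bimatrix(K1, K2, dims_first=1)
print(A.shape, B.shape)    # an 11 x 11 bimatrix game approximating the infinite game
```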