
Classifying Scaled-Turned-Shifted Objects with Optimal Pixel-to-Scale-Turn-Shift Standard Deviations Ratio in Training 2-Layer Perceptron on Scaled-Turned-Shifted 4800-Featured Objects under Normally Distributed Feature Distortion



The problem of classifying diversely distorted objects is considered. The classifier is a 2-layer perceptron, which can classify more objects per unit of time; this is its advantage over more complex neural networks such as the neocognitron, the convolutional neural network, and deep learning neural networks. The distortion types are scaling, turning, and shifting. The object model is a monochrome 60 × 80 image of an enlarged English alphabet capital letter, so there are 26 classes of 4800-featured objects. The training sets have a parameter, the pixel-to-scale-turn-shift standard deviations ratio, which controls the normally distributed feature distortion. An optimal ratio is found, at which the performance of the 2-layer perceptron is nonetheless still unsatisfactory. The best classifier is therefore trained further with an additional 438 passes of the training sets while the training smoothness is increased tenfold. This decreases the ultimate classification error percentage from 35.23 % to 12.92 %. The distortions expected in practice are smaller, however, so the corresponding error percentage becomes just 1.64 %, meaning that only one object out of 61 is misclassified. The solution scheme is directly applicable to other classification problems in which the number of features is about a thousand or a few thousand and the number of classes is a few tens.
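
For illustration only, the sketch below (not the authors' code) mirrors the object model described above: a 60 × 80 monochrome image is randomly scaled, turned, and shifted, then corrupted by normally distributed pixel noise whose standard deviation is coupled to the scale-turn-shift standard deviations by a single ratio, and a 2-layer perceptron with 4800 inputs and 26 outputs classifies the flattened result. All parameter names and values (hidden-layer size, standard deviations, and the way the ratio enters the noise model) are assumptions, not the values used in the paper.

```python
import numpy as np
from scipy import ndimage
from sklearn.neural_network import MLPClassifier

H, W = 60, 80          # image size, so each object has 4800 features
NUM_CLASSES = 26       # English alphabet capital letters

def distort(image, sd_scale=0.05, sd_turn=2.0, sd_shift=1.0, ratio=0.1):
    """Randomly scale, turn (rotate), and shift a 60 x 80 image, then add
    normally distributed pixel noise.  The 'ratio' argument stands in for the
    pixel-to-scale-turn-shift standard deviations ratio; the exact way it
    enters the noise model here is an assumption."""
    scale = 1.0 + np.random.normal(0.0, sd_scale)      # scaling factor
    angle = np.random.normal(0.0, sd_turn)             # turning, degrees
    shift = np.random.normal(0.0, sd_shift, size=2)    # shifting, pixels

    out = ndimage.zoom(image, scale, order=1)
    out = ndimage.rotate(out, angle, reshape=False, order=1)
    out = ndimage.shift(out, shift, order=1)

    # Pad or crop back to 60 x 80 so the feature count stays 4800.
    out = np.pad(out, ((0, max(0, H - out.shape[0])),
                       (0, max(0, W - out.shape[1]))))[:H, :W]

    sd_pixel = ratio * (sd_scale + sd_turn + sd_shift)  # illustrative coupling
    return out + np.random.normal(0.0, sd_pixel, size=out.shape)

# A 2-layer perceptron in the sense used above: one hidden layer between the
# 4800 inputs and the 26 class outputs.  The hidden-layer size and training
# settings are placeholders, not the paper's values.
mlp = MLPClassifier(hidden_layer_sizes=(300,), activation="logistic", max_iter=200)
# mlp.fit(X.reshape(len(X), H * W), y)   # X: stack of distorted images, y: labels
```

In this sketch, a training set would be generated by drawing many distorted copies of each letter image at a chosen ratio, flattening them to 4800-element vectors, and fitting the perceptron; sweeping the ratio reproduces the kind of parameter search the abstract describes.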
