Classifying Scaled-Turned-Shifted Objects with Optimal Pixel-to-Scale-Turn-Shift Standard Deviations Ratio in Training 2-Layer Perceptron on Scaled-Turned-Shifted 4800-Featured Objects under Normally Distributed Feature Distortion


References

[1] K. Fukushima, “Artificial vision by multi-layered neural networks: Neocognitron and its advances,” Neural Networks, vol. 37, pp. 103–119, Jan. 2013. https://doi.org/10.1016/j.neunet.2012.09.016

[2] S. Kim, Y. Choi, and M. Lee, “Deep learning with support vector data description,” Neurocomputing, vol. 165, pp. 111–117, Oct. 2015. https://doi.org/10.1016/j.neucom.2014.09.086

[3] V. V. Romanuke, “Two-layer perceptron for classifying scaled-turned-shifted objects by 26 classes general totality of monochrome 60-by-80-images via training with pixel-distorted scaled-turned-shifted images,” Information Processing Systems, iss. 7 (132), pp. 98–107, 2015.

[4] V. V. Romanuke, “Optimal Pixel-to-Shift Standard Deviation Ratio for Training 2-Layer Perceptron on Shifted 60 × 80 Images with Pixel Distortion in Classifying Shifting-Distorted Objects,” Applied Computer Systems, vol. 19, no. 1, Jan. 2016. https://doi.org/10.1515/acss-2016-0008

[5] V. V. Romanuke, “Boosting ensembles of heavy two-layer perceptrons for increasing classification accuracy in recognizing shifted-turned-scaled flat images with binary features,” Journal of Information and Organizational Sciences, vol. 39, no. 1, pp. 75–84, 2015.

[6] V. V. Romanuke, “Accuracy improvement in wear state discontinuous tracking model regarding statistical data inaccuracies and shifts with boosting mini-ensemble of two-layer perceptrons,” Problems of Tribology, no. 4, pp. 55–58, 2014.

[7] V. V. Romanuke, “Two-layer perceptron for classifying flat scaled-turned-shifted objects by additional feature distortions in training,” Journal of Uncertain Systems, vol. 9, no. 4, pp. 286–305, 2015.

[8] K. Hagiwara, T. Hayasaka, N. Toda, S. Usui, and K. Kuno, “Upper bound of the expected training error of neural network regression for a Gaussian noise sequence,” Neural Networks, vol. 14, no. 10, pp. 1419–1429, Dec. 2001. https://doi.org/10.1016/s0893-6080(01)00122-8

[9] V. Romanuke, “Setting the Hidden Layer Neuron Number in Feedforward Neural Network for an Image Recognition Problem under Gaussian Noise of Distortion,” Computer and Information Science, vol. 6, no. 2, Mar. 2013. https://doi.org/10.5539/cis.v6n2p38

[10] C.-H. Yoo, S.-W. Kim, J.-Y. Jung, and S.-J. Ko, “High-dimensional feature extraction using bit-plane decomposition of local binary patterns for robust face recognition,” Journal of Visual Communication and Image Representation, vol. 45, pp. 11–19, May 2017. https://doi.org/10.1016/j.jvcir.2017.02.009

[11] C. Zhu and Y. Peng, “Discriminative latent semantic feature learning for pedestrian detection,” Neurocomputing, vol. 238, pp. 126–138, May 2017. https://doi.org/10.1016/j.neucom.2017.01.043

[12] V. V. Romanuke, “An attempt for 2-layer perceptron high performance in classifying shifted monochrome 60-by-80-images via training with pixel-distorted shifted images on the pattern of 26 alphabet letters,” Radio Electronics, Computer Science, Control, no. 2, pp. 112–118, 2013. https://doi.org/10.15588/1607-3274-2013-2-18

[13] V. V. Romanuke, “A 2-layer perceptron performance improvement in classifying 26 turned monochrome 60-by-80-images via training with pixel-distorted turned images,” Research Bulletin of the National Technical University of Ukraine “Kyiv Polytechnic Institute”, no. 5, pp. 55–62, 2014.

[14] A. Y. Alanis, J. D. Rios, J. Rivera, N. Arana-Daniel, and C. Lopez-Franco, “Real-time discrete neural control applied to a Linear Induction Motor,” Neurocomputing, vol. 164, pp. 240–251, Sep. 2015. https://doi.org/10.1016/j.neucom.2015.02.065

[15] A. B. Asghar and X. Liu, “Estimation of wind turbine power coefficient by adaptive neuro-fuzzy methodology,” Neurocomputing, vol. 238, pp. 227–233, May 2017. https://doi.org/10.1016/j.neucom.2017.01.058

[16] D. Costarelli and R. Spigler, “Approximation results for neural network operators activated by sigmoidal functions,” Neural Networks, vol. 44, pp. 101–106, Aug. 2013. https://doi.org/10.1016/j.neunet.2013.03.015

[17] Z. Chen, F. Cao, and J. Hu, “Approximation by network operators with logistic activation functions,” Applied Mathematics and Computation, vol. 256, pp. 565–571, Apr. 2015. https://doi.org/10.1016/j.amc.2015.01.049

[18] M. F. Møller, “A scaled conjugate gradient algorithm for fast supervised learning,” Neural Networks, vol. 6, no. 4, pp. 525–533, Jan. 1993. https://doi.org/10.1016/s0893-6080(05)80056-5

[19] T. Kathirvalavakumar and S. Jeyaseeli Subavathi, “Neighborhood based modified backpropagation algorithm using adaptive learning parameters for training feedforward neural networks,” Neurocomputing, vol. 72, no. 16–18, pp. 3915–3921, Oct. 2009. https://doi.org/10.1016/j.neucom.2009.04.010

[20] A. Nied, S. I. Seleme, G. G. Parma, and B. R. Menezes, “On-line neural training algorithm with sliding mode control and adaptive learning rate,” Neurocomputing, vol. 70, no. 16–18, pp. 2687–2691, Oct. 2007. https://doi.org/10.1016/j.neucom.2006.07.019

[21] S. J. Yoo, J. B. Park, and Y. H. Choi, “Indirect adaptive control of nonlinear dynamic systems using self recurrent wavelet neural networks via adaptive learning rates,” Information Sciences, vol. 177, no. 15, pp. 3074–3098, Aug. 2007. https://doi.org/10.1016/j.ins.2007.02.009

[22] M. Egmont-Petersen, D. de Ridder, and H. Handels, “Image processing with neural networks – a review,” Pattern Recognition, vol. 35, no. 10, pp. 2279–2301, Oct. 2002. https://doi.org/10.1016/s0031-3203(01)00178-9

[23] C. Yu, M. T. Manry, J. Li, and P. Lakshmi Narasimha, “An efficient hidden layer training method for the multilayer perceptron,” Neurocomputing, vol. 70, no. 1–3, pp. 525–535, Dec. 2006. https://doi.org/10.1016/j.neucom.2005.11.008

[24] V. V. Romanuke, “Pixel-to-scale standard deviations ratio optimization for two-layer perceptron training on pixel-distorted scaled 60-by-80-images in scaled objects classification problem,” Visnyk of Kremenchuk National University of Mykhaylo Ostrogradskyy, iss. 2 (85), pp. 96–105, 2014.

[25] V. V. Romanuke, “Classification error percentage decrement of two-layer perceptron for classifying scaled objects on the pattern of monochrome 60-by-80-images of 26 alphabet letters by training with pixel-distorted scaled images,” Scientific Bulletin of Chernivtsi National University of Yuriy Fedkovych. Series: Computer Systems and Components, vol. 4, iss. 3, pp. 53–64, 2013.

[26] V. V. Romanuke, “Optimal hidden layer neurons number in two-layer perceptron and pixel-to-turn standard deviations ratio for its training on pixel-distorted turned 60 × 80 images in turned objects classification problem,” Visnyk of Kremenchuk National University of Mykhaylo Ostrogradskyy, iss. 5 (94), pp. 86–93, 2015.

[27] R. E. Walpole, R. H. Myers, S. L. Myers, and K. Ye, Probability & Statistics for Engineers & Scientists, 9th ed. Boston, Massachusetts: Prentice Hall, 2012.

[28] V. V. Romanuke, “Optimal Training Parameters and Hidden Layer Neuron Number of Two-Layer Perceptron for Generalised Scaled Object Classification Problem,” Information Technology and Management Science, vol. 18, no. 1, Jan. 2015. https://doi.org/10.1515/itms-2015-0007

[29] V. V. Romanuke, “Dependence of performance of feed-forward neuronet with single hidden layer of neurons against its training smoothness on noised replicas of pattern alphabet,” Herald of Khmelnytskyi National University. Technical Sciences, no. 1, pp. 201–206, 2013.

[30] M. J. Kochenderfer, C. Amato, G. Chowdhary, J. P. How, H. J. Davison Reynolds, J. R. Thornton, P. A. Torres-Carrasquillo, N. K. Üre, and J. Vian, Decision Making Under Uncertainty: Theory and Application. Cambridge, Massachusetts; London, England: The MIT Press, 2015.

[31] H. R. Tavakoli, A. Borji, J. Laaksonen, and E. Rahtu, “Exploiting inter-image similarity and ensemble of extreme learners for fixation prediction using deep features,” Neurocomputing, vol. 244, pp. 10–18, Jun. 2017. https://doi.org/10.1016/j.neucom.2017.03.018
