Smooth Non-increasing Square Spatial Extents of Filters in Convolutional Layers of CNNs for Image Classification Problems

Vadim V. Romanuke 1
  • 1 Polish Naval Academy, Gdynia, Poland


The present paper addresses an open problem of setting hyperparameters for convolutional neural networks aimed at image classification. Since selecting filter spatial extents for convolutional layers remains a topical problem, it is solved approximately by accumulating statistics of neural network performance. The network architecture is chosen on the basis of experience with the MNIST database: an eight-layered architecture with four convolutional layers is nearly the best suited for classifying small and medium-sized images. Image databases are formed of grayscale images whose sizes range from 28 × 28 to 64 × 64 in steps of 2. Except for the filter spatial extents, the hyperparameters of those eight layers are held fixed and are chosen carefully following rules of thumb. For each image size, a sequence of possible filter spatial extents is generated, and the sets of four filter spatial extents producing the best performance are extracted. The extraction rule that selects the best filter spatial extents is formalized with two conditions. The main condition is that the difference between the maximal and minimal extents must be as small as possible, and no unit filter spatial extent is recommended. The secondary condition is that the filter spatial extents should constitute a non-increasing set. Validation on the MNIST and CIFAR-10 databases justifies this solution, which can be extended to building convolutional neural network classifiers for colour and larger images.
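The selection rule summarized above can be illustrated with a short sketch. The following Python fragment is not the author's code: the candidate extent range, the attached error rates, and the names candidate_extent_sets and select_best are illustrative assumptions. It merely enumerates non-increasing sets of four filter spatial extents without a unit extent and, among the best-performing sets, keeps the one with the smallest difference between the maximal and minimal extents.

```python
# Minimal sketch (illustrative only) of the filter-extent selection rule:
# four convolutional layers, square filters, no unit extent, non-increasing
# sequence, and the smallest max-min spread among the best-performing sets.
from itertools import product

def candidate_extent_sets(min_extent=2, max_extent=9):
    """Yield 4-tuples of filter spatial extents that form a
    non-increasing sequence and contain no unit extent."""
    for extents in product(range(min_extent, max_extent + 1), repeat=4):
        if all(a >= b for a, b in zip(extents, extents[1:])):
            yield extents

def select_best(scored_sets):
    """Among the sets with the lowest error rate, prefer the one whose
    difference between maximal and minimal extents is smallest."""
    best_error = min(error for _, error in scored_sets)
    finalists = [s for s, error in scored_sets if error == best_error]
    return min(finalists, key=lambda s: max(s) - min(s))

if __name__ == "__main__":
    # Hypothetical error rates attached to a few candidate sets.
    scored = [((5, 5, 4, 3), 0.012), ((7, 5, 3, 2), 0.012), ((6, 4, 4, 4), 0.015)]
    print(select_best(scored))  # (5, 5, 4, 3): tied on error, smallest spread
```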



