Search Results

1–10 of 78 items for "overfitting"

networks, Neural Networks, 83, 2016, 21-31. [46] Springenberg J. T., Dosovitskiy A., Brox T., Riedmiller M., Striving for simplicity: The all convolutional net, arXiv preprint arXiv:1412.6806, 2014. [47] Srivastava N., Hinton G., Krizhevsky A., Sutskever I., Salakhutdinov R., Dropout: a simple way to prevent neural networks from overfitting, The Journal of Machine Learning Research, 15, 1, 2014, 1929-1958. [48] Szegedy C., Liu W., Jia Y., Sermanet P., Reed S., Anguelov D., Erhan D., Vanhoucke V., Rabinovich A., Going deeper with convolutions, in Proceedings of the

References [1] K. Hagiwara and K. Fukumizu, “Relation Between Weight Size and Degree of Over-Fitting in Neural Network Regression,” Neural Networks, vol. 21, no. 1, pp. 48–58, Jan. 2008. [2] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, Improving Neural Networks by Preventing Co-Adaptation of Feature Detectors, 2012. [3] H. Wu and X. Gu, “Towards Dropout Training for Convolutional Neural Networks,” Neural Networks, vol. 71, pp. 1–10, Nov. 2015.


This article aims to extend the evaluation of the classic multifactor model of Carhart (1997) to the case of global equity indices and to expand the analysis performed in Sakowski et al. (2015). Our intention is to test several modifications of these models that take into account the different dynamics of equity excess returns between emerging and developed equity indices. The proposed extensions include a volatility regime-switching mechanism (using dummy variables and the Markov approach) and a fifth risk factor based on the realized volatility of index returns.

Moreover, instead of using data for stocks of a particular market (a common approach in the literature), we check the performance of these models on weekly data for 81 world investable equity indices over the period 2000–2015. Such an approach is proposed in order to estimate an equity risk premium for a single country.

Empirical evidence reveals important differences between results for classical models estimated on single stocks (in either international or US-only frameworks) and models evaluated on equity indices. Additionally, we observe substantial discrepancies between results for developed countries and emerging markets. Finally, using weekly data for the last 15 years, we illustrate the importance of model risk and data-overfitting effects when drawing conclusions from the results of multifactor models.
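The dummy-variable regime extension described in this abstract can be sketched as an ordinary least-squares regression in which the market beta is allowed to shift during high-volatility weeks. Everything below — the factor values, the loadings, and the regime indicator — is a synthetic illustration, not the paper's data or its exact specification:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 800  # weekly observations, roughly 15 years

# Hypothetical weekly factor returns (MKT, SMB, HML, WML) and a
# high-volatility regime dummy (1 in high-volatility weeks)
factors = rng.normal(0.0, 0.02, size=(T, 4))
high_vol = (rng.random(T) < 0.3).astype(float)

# True loadings; the interaction term lets the market beta shift across regimes
beta = np.array([0.9, 0.2, -0.1, 0.05])
shift = 0.4  # extra market beta in the high-volatility regime
y = factors @ beta + shift * high_vol * factors[:, 0] + rng.normal(0, 0.005, T)

# Design matrix: intercept, the four factors, and the regime interaction
X = np.column_stack([np.ones(T), factors, high_vol * factors[:, 0]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))
```

A significant coefficient on the interaction term would indicate that the market loading genuinely differs between volatility regimes, which is the effect the dummy-variable mechanism is designed to capture.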

risk of overfitting than the strategies resulting from the Exhaustive Search procedure. The distributions of the optimization criteria and the computation time of 1000 executions of the different methods were compared and presented along with the Exhaustive Search results. The adjustment quality was assessed on in-sample data and on additional out-of-sample data in order to test the tendency to overfit. Let us emphasise that the purpose of this paper is not to design the most profitable strategy, but to compare the efficiency of different machine learning methods and the
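The in-sample/out-of-sample assessment mentioned above can be illustrated with a minimal sketch: fit models of different capacity on one part of the data only, then compare errors on held-out data. The polynomial setting and all numbers here are illustrative assumptions, not the paper's strategies:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 60)
y = np.sin(3 * x) + rng.normal(0, 0.2, 60)

# In-sample / out-of-sample split, mirroring the adjustment-quality check
x_in, y_in = x[:40], y[:40]
x_out, y_out = x[40:], y[40:]

def mse(deg):
    coefs = np.polyfit(x_in, y_in, deg)  # fit on in-sample data only
    err_in = np.mean((np.polyval(coefs, x_in) - y_in) ** 2)
    err_out = np.mean((np.polyval(coefs, x_out) - y_out) ** 2)
    return err_in, err_out

for deg in (3, 15):
    print(deg, mse(deg))
```

The high-degree model always achieves the lower in-sample error, but its out-of-sample error exceeds its in-sample error by a wide margin — exactly the gap that the out-of-sample test is designed to expose.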

frequently arises is overfitting, which is quite common in machine learning (Cawley et al. 2010). That is why the choice of meta-parameters is a major question. The meaning of the meta-parameters was described in the theoretical part in section 2. As a short reminder, the general meaning of C (cost) and γ (gamma) is as follows. High values of C cause the cost of misclassification to be large; therefore, the SVM is forced to classify the input data more strictly, and the problem of overfitting may arise. Small values of C mean lower variance and higher bias
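The effect of C and γ described here can be reproduced on toy data with any RBF-kernel SVM implementation. The sketch below uses scikit-learn's `SVC` — an assumption, since the snippet does not say which implementation the authors used — with a loose setting and an aggressive setting of the two meta-parameters:

```python
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_moons

# Noisy toy data: the labels are not perfectly separable
X, y = make_moons(n_samples=300, noise=0.3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Low C / low gamma: high bias, smooth boundary.
# High C / high gamma: the SVM classifies the training data very strictly,
# so the training score rises while the train/test gap widens (overfitting).
for C, gamma in [(0.1, 0.1), (1000.0, 10.0)]:
    clf = SVC(C=C, gamma=gamma).fit(X_tr, y_tr)
    print(C, gamma, clf.score(X_tr, y_tr), clf.score(X_te, y_te))
```

The widening gap between training and test accuracy at high C and γ is the overfitting symptom that makes meta-parameter selection (e.g. by cross-validation) so important.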


In his article “Embracing Noise and Error”, Bálint L. Bálint argues that human society is going through a profound change as mathematical models are used to predict human behavior both on a personal level and on the level of society as a whole. An inherent component of mathematical models is the concept of error or noise, which describes the level of unpredictability of a system under a specific mathematical model. The author reveals the educational origin of the abstract world that can be described by pure mathematics and considered an ideal world without errors. While human perception of the world differs from the abstractions we were taught, mathematical models need to integrate the error factor to deal with the unpredictability of reality. While scientific thinking developed the statistical-probabilistic model to define the limits of predictability, here we present that, in a flow of time driven by entropy, stochastic variability is an in-built characteristic of the material world and ultimately represents the singularity of each individual moment in time and the chance for our freedom of choice.


Deconvolutional neural networks are a highly accurate tool for semantic image segmentation. Segmenting curvilinear meandering regions is a typical task in computer vision applied to navigation, civil engineering, and defence problems. In this study, such regions of interest are modelled as meandering transparent stripes of non-constant width. The stripe on a white background is formed by upper and lower non-parallel black curves, so that the upper and lower image parts are completely separated. An algorithm for generating datasets of such regions is developed. It is revealed that deeper networks segment the regions more accurately; however, segmentation becomes harder as the regions become bigger. This is why an alternative method of region segmentation, which segments the upper and lower image parts and subsequently unifies the results, is not effective. If the region of interest becomes bigger, it must be squeezed in order to avoid segmenting the empty image. Once the squeezed region is segmented, the image is rescaled back to its original size. To control accuracy, the mean BF score, which has the lowest value among the accuracy indicators, should be maximised first.
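The BF (boundary F1) score used here as the strictest accuracy indicator matches boundary pixels of the prediction and the ground truth within a small distance tolerance. The following is a minimal numpy sketch of that idea; the `tol` value, the 4-neighbourhood boundary definition, and the toy stripe masks are illustrative assumptions:

```python
import numpy as np

def boundary(mask):
    # A foreground pixel lies on the boundary if any 4-neighbour is background
    m = np.pad(mask.astype(bool), 1, constant_values=False)
    interior = m[:-2, 1:-1] & m[2:, 1:-1] & m[1:-1, :-2] & m[1:-1, 2:]
    return mask.astype(bool) & ~interior

def bf_score(pred, gt, tol=2.0):
    pb, gb = np.argwhere(boundary(pred)), np.argwhere(boundary(gt))
    if len(pb) == 0 or len(gb) == 0:
        return 0.0
    d = np.linalg.norm(pb[:, None, :] - gb[None, :, :], axis=-1)
    precision = np.mean(d.min(axis=1) <= tol)  # predicted boundary matched to GT
    recall = np.mean(d.min(axis=0) <= tol)     # GT boundary matched to prediction
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Ground-truth stripe vs. a slightly shifted and a badly shifted prediction
gt = np.zeros((12, 12), dtype=int); gt[4:8] = 1
near = np.zeros((12, 12), dtype=int); near[5:9] = 1
far = np.zeros((12, 12), dtype=int); far[8:12] = 1
print(bf_score(near, gt), bf_score(far, gt))
```

Because the score is computed only on boundary pixels, a segmentation whose contour drifts even slightly is penalised, which is why the mean BF score tends to be the lowest of the usual accuracy indicators and the most informative one to maximise.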

. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 2014. [60] L. Theis, W. Shi, A. Cunningham, and F. Huszár. Lossy image compression with compressive autoencoders. In ICLR, 2017. [61] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart. Stealing machine learning models via prediction APIs. In USENIX Security, 2016. [62] A. Triastcyn and B. Faltings. Generating differentially private datasets using GANs. arXiv preprint arXiv

, spatial autocorrelation and overfitting in habitat suitability modelling. - Ecol. Modelling 222: 588-597. Metz, C.E. 1978. Basic principles of ROC curve analysis. - Semin. nucl. Med. 8: 283-298. Michel, P., Overton, J.M., Mason, N.W.H., Hurst, J.M. & Lee, W.G. 2011. Species-environment relationships of mosses in New Zealand indigenous forest and shrubland ecosystems. - Pl. Ecol. 212: 353-367. Minchin, P.R. 1989. Montane vegetation of the Mt. Field massif, Tasmania: a test of some hypotheses about properties of community patterns. - Vegetatio 83: 97-110. *Monterroso, P


A grey-box framework is applied to model ship maneuvering using a reference model (RM) and a support vector machine (SVM) (RM-SVM). First, the nonlinear characteristics of the target ship are determined using the RM and the similarity rule. Then, a linear SVM adaptively fits the errors between the acceleration variables of the RM and the target ship. Finally, the accelerations of the target ship are predicted using the RM and the linear SVM. The parameters of the RM are known and conveniently acquired, thus avoiding the modeling process. The SVM has the advantages of fast training, quick simulation, and no overfitting. Testing and validation are conducted using ship model test data. The test case reveals the practicability of the RM-SVM based modeling method, while the validation cases confirm the generalization ability of the grey-box framework.
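The grey-box idea — a fixed reference model plus a linear SVM that learns only the residual error — can be sketched in a few lines. The reference model, the two input variables, and the error structure below are synthetic stand-ins (the real RM comes from the similarity rule, not from this toy linear form), and scikit-learn's `LinearSVR` is used as the linear SVM:

```python
import numpy as np
from sklearn.svm import LinearSVR

rng = np.random.default_rng(2)

# Hypothetical inputs (e.g. rudder angle, surge speed); the measured
# acceleration is the RM output plus a structured error plus noise
X = rng.uniform(-1, 1, size=(200, 2))

def reference_model(X):
    # Stand-in linear RM with known, conveniently acquired parameters
    return 0.8 * X[:, 0] - 0.3 * X[:, 1]

true_error = 0.25 * X[:, 0] + 0.1 * X[:, 1]
y = reference_model(X) + true_error + rng.normal(0, 0.01, 200)

# The linear SVM adaptively fits the residual between RM output and measurement
svm = LinearSVR(C=10.0, max_iter=10000).fit(X, y - reference_model(X))

# Grey-box prediction = reference model + learned correction
pred = reference_model(X) + svm.predict(X)
print(float(np.mean((pred - y) ** 2)))
```

Because the SVM only has to model the (small, structured) residual rather than the full dynamics, training is fast and the combined prediction tracks the measurements closely.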