In highly developed countries, research in the field of bankruptcy risk prediction has been conducted for many years. For example, in the United States, which can be considered a pioneering country, the first publications appeared in the early twentieth century. In Poland, for political and economic reasons, interest in this issue dates back only to the early 1990s. This publication therefore attempts to answer the following questions: 1) What is the level of advancement of research into predicting bankruptcies of enterprises in Poland? 2) How does it compare to worldwide trends? Accordingly, the main aim of this study is to present and evaluate the scientific achievements of Polish authors in the field of corporate bankruptcy prediction and to compare them to global trends. Literature analysis was adopted as the research method. It shows that initially only very simple tools were used in Poland to assess the bankruptcy risk of enterprises. Over time, however, advanced methods were introduced and new models began to include non-financial variables. Research on sample selection was also conducted. Currently, neither the level of research nor the applied tools differ from those used in highly developed countries.
This article concerns the problem of formulating assumptions in scenario analysis for investments consisting of renting out an apartment. It attempts to indicate foundations for the formulation of assumptions on the basis of observed retrospective regularities. It includes theoretical considerations regarding scenario design, as well as the results of studies on how the quantities which determined, or allowed a closer estimation of, the values of the individual explanatory variables of a chosen measure of investment profitability (MIRRFCFE) were shaped in the past. The dynamics of, and correlations between, the variables were studied. The research was based on quarterly data from local residential real estate markets in Poland (the six largest cities) in the years 2006–2014, as well as on data from the financial market.
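The profitability measure mentioned above, MIRRFCFE, is a modified internal rate of return computed on free cash flows to equity. For reference, the standard textbook MIRR definition (a general formula, not reproduced from the article itself) is:

```latex
\mathrm{MIRR} = \left(
  \frac{\sum_{t=0}^{n} \max(CF_t, 0)\,(1 + r_{re})^{\,n-t}}
       {-\sum_{t=0}^{n} \min(CF_t, 0)\,(1 + r_{f})^{-t}}
\right)^{1/n} - 1
```

where \(CF_t\) are the free cash flows to equity, \(r_{re}\) is the reinvestment rate applied to cash inflows, and \(r_f\) is the financing rate applied to cash outflows.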
The last financial crisis affected the SME sector in different countries to different degrees. SMEs represent the backbone of the economy of every country; therefore, they need bankruptcy prediction models that are easily adaptable to their characteristics. In our analysis we verified the hypothesis that including information about macroeconomic conditions significantly increases the effectiveness of a bankruptcy model. The data set used in our research contained information about 1,138 SMEs. All information was taken from financial statements covering the period 2002–2010. The sample included enterprises from the industry, trade and services sectors. Selected financial ratios were used to build the model, and three macroeconomic variables were added: GDP, inflation, and the unemployment rate. Logistic regression was applied as the research method. Our study showed that incorporating the macro variables improved the prediction of SME bankruptcy risk.
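As a rough illustration of the modelling step described above, the sketch below fits a plain logistic regression by gradient descent on synthetic data and compares a ratios-only specification with one augmented by macro variables. All data, coefficients and variable choices are hypothetical assumptions, not the study's 1,138 real SMEs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: each row is one SME-year; columns stand in for two
# financial ratios (e.g. liquidity, debt ratio) and two macro variables
# (e.g. GDP growth, unemployment). All standardised for simplicity.
n = 500
X = rng.normal(size=(n, 4))
true_w = np.array([-1.2, 1.5, -0.8, 0.9])      # assumed coefficients
p = 1.0 / (1.0 + np.exp(-(X @ true_w)))
y = rng.binomial(1, p)                          # 1 = bankrupt, 0 = healthy

def fit_logit(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression (no intercept for brevity)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p_hat = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p_hat - y) / len(y)
    return w

w_fin = fit_logit(X[:, :2], y)   # financial ratios only
w_all = fit_logit(X, y)          # financial ratios + macro variables

def accuracy(w, X, y):
    """Share of correct bankrupt/healthy classifications at a 0.5 cutoff."""
    return np.mean((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y)

print(accuracy(w_fin, X[:, :2], y), accuracy(w_all, X, y))
```

On this synthetic sample the macro-augmented model classifies more firms correctly, mirroring the direction of the paper's finding.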
This paper focuses on modelling environmental changes in a way that allows weather derivatives to be priced flexibly and efficiently. The applications and importance of climate and weather contracts extend beyond financial markets and hedging, as they can be used as complementary tools for risk assessment. In addition, an option-based approach to resource management can offer valuable insights into rare events and allows derivative-pricing methods to be reused to improve natural resources management. To demonstrate this general concept, we use Monte Carlo simulation and stochastic modelling of temperatures to evaluate weather options. The research results are accompanied by R and Python code.
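A minimal sketch of the Monte Carlo approach mentioned above (the paper itself ships its own R and Python code; the model and every parameter below are illustrative assumptions): daily temperatures follow a mean-reverting process around a seasonal mean, and a heating-degree-day (HDD) call option is priced by averaging simulated payoffs:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical mean-reverting daily temperature model:
# T_{t+1} = T_t + kappa * (theta_t - T_t) + sigma * eps,
# where theta_t is a sinusoidal seasonal mean (in degrees Celsius).
days = np.arange(365)
theta = 10.0 + 10.0 * np.sin(2 * np.pi * (days - 100) / 365)
kappa, sigma = 0.3, 2.0
n_paths = 5000

def simulate_paths():
    T = np.empty((n_paths, 365))
    T[:, 0] = theta[0]
    for t in range(364):
        eps = rng.standard_normal(n_paths)
        T[:, t + 1] = T[:, t] + kappa * (theta[t] - T[:, t]) + sigma * eps
    return T

T = simulate_paths()

# HDD index: sum of max(18 - T, 0) over a 60-day winter window.
hdd = np.maximum(18.0 - T[:, :60], 0.0).sum(axis=1)

# Monte Carlo price of a call on the HDD index:
# payoff = tick * max(HDD - strike, 0), averaged over paths.
tick, strike = 20.0, 500.0
price = tick * np.maximum(hdd - strike, 0.0).mean()
print(round(price, 2))
```

The same simulated paths could be reused for the resource-management applications the abstract mentions, e.g. by swapping the HDD payoff for a rare-event indicator.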
The paper presents an empirical verification of the main assumptions underlying the calculation of terminal value in DCF valuation models. The test results suggest that the volatility of free cash flows and the dynamism of the operating environment do not allow us to make a reliable long-term forecast of the value-creation potential of public companies in Poland. Regardless of their organic-growth phase, the overwhelming majority of the sampled firms are shown to exhibit extreme year-on-year fluctuations of sales, investments and cash flows over the short- and medium-term observation windows. The variability of operating results and the probabilistic nature of company-level fundamentals may preclude the construction of a reliable cash flow forecast for the purposes of a DCF valuation. This methodological issue appears to pose a particular challenge in the calculation of terminal value, which is heavily dependent on highly subjective and uncertain steady-state fundamentals. As a result, the predictive power of deterministic DCF models may be reduced to a snapshot of the current market sentiment regarding a particular stock. The paper postulates that further discussion of the tenets of terminal value calculation may be necessary in order to overcome the existing flaws and increase the accuracy of valuation models. We contribute to this discussion by outlining the principal methodological and theoretical issues which challenge practising valuators at the stage of terminal value calculation. Our conclusions may help to shed light on the problems of market short-termism and the high inconstancy of investment recommendations.
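The terminal-value calculation discussed above is usually based on the standard growing-perpetuity (Gordon growth) formula; its dependence on the assumed steady-state growth rate \(g\) and discount rate illustrates why subjective inputs weigh so heavily (the formula is a valuation-textbook standard, not reproduced from the paper):

```latex
TV_n = \frac{FCF_{n+1}}{WACC - g} = \frac{FCF_n \,(1 + g)}{WACC - g}
```

For example, with \(WACC = 9\%\), raising the assumed \(g\) from 2% to 4% shrinks the denominator from 7% to 5% and inflates the terminal value by roughly 40%, even though both growth assumptions may look equally defensible ex ante.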
Since deterministic chaos appeared in the literature, a huge increase in interest in the theory of nonlinear dynamic systems has been observed among researchers, which has led to new methods of time series prediction, e.g. the largest Lyapunov exponent method and the nearest neighbor method. Real time series are usually disturbed by random noise, which complicates the forecasting problem. Since the presence of noise in the data can significantly affect the quality of forecasts, the aim of the paper is to evaluate the accuracy of predictions, obtained with the nearest neighbor method, for time series filtered of noise. The test is conducted on selected financial time series.
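A minimal sketch of the nearest neighbor prediction idea referred to above: the series is embedded in delay vectors, the historical vectors closest to the most recent one are found, and their successors are averaged. The embedding dimension, noise level and test series below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def nn_forecast(series, dim=3, k=1):
    """One-step-ahead forecast by the nearest neighbor method:
    embed the series in delay vectors of length `dim`, find the `k`
    historical vectors closest to the most recent one, and average
    their successors."""
    x = np.asarray(series, dtype=float)
    # Delay vectors x[i:i+dim], each with a known successor x[i+dim].
    vecs = np.array([x[i:i + dim] for i in range(len(x) - dim)])
    query = x[-dim:]                     # the most recent state
    d = np.linalg.norm(vecs - query, axis=1)
    idx = np.argsort(d)[:k]              # k nearest historical states
    return x[idx + dim].mean()

# Noisy sine wave: the method should recover the underlying cycle.
rng = np.random.default_rng(1)
t = np.arange(300)
series = np.sin(2 * np.pi * t / 50) + 0.05 * rng.standard_normal(300)
pred = nn_forecast(series, dim=5, k=3)
actual = np.sin(2 * np.pi * 300 / 50)    # next value of the clean signal
print(pred, actual)
```

Averaging over several neighbors (`k > 1`) is one simple way to dampen the effect of the noise the abstract is concerned with; the paper's filtering step goes further by cleaning the series before embedding.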
Research background: Market participants have been trying to forecast future price movements and to create tools that facilitate making the right investment decisions since stock exchanges began operating. As a result, there is an ever-increasing number of methods, tools, strategies and models, which makes the decision process extremely complicated.
Purpose: to simplify trading rules as much as possible and to check whether transactions can be parameterized on the basis of the length of price movements in such a way that the resulting system generates profits.
Research methodology: empirical research was conducted on data from the period between 20/01/1998 and 29/06/2018, covering quotations of futures contracts on the WIG20 index. First, the lengths of the price movements were determined on the basis of closing prices; then the frequency of the individual lengths was established, and on this basis the transaction parameters were fixed. Next, the parameters were optimized and the rates of return of the tested variants were examined.
Result: It is possible to parameterize transactions based on the length of price movements and to create a simple investment strategy which generates profits. In the examined period, the optimal length of the price movement was 25 points, combined with a profit/loss ratio of 1:1, 1:2 or 1:3.
Novelty: an original investment strategy in which transactions are parameterized by the length of the price movement and the profit/loss ratio.
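The trading rule summarised above can be sketched as follows; the exact entry logic, the synthetic random-walk prices and the parameter values are simplifying assumptions for illustration only (the study used real WIG20 futures quotes):

```python
import numpy as np

def backtest(prices, move=25.0, pl_ratio=2.0):
    """Toy version of the rule: after the price rises by `move` points
    from a local reference low, open a long position with a stop-loss
    of `move` points and a take-profit of `pl_ratio * move` points."""
    pnl, ref, pos_entry = 0.0, prices[0], None
    for p in prices[1:]:
        if pos_entry is None:
            if p - ref >= move:
                pos_entry = p           # momentum entry
            elif p < ref:
                ref = p                 # track a new local low
        else:
            if p - pos_entry >= pl_ratio * move:
                pnl += pl_ratio * move  # take profit
                pos_entry, ref = None, p
            elif pos_entry - p >= move:
                pnl -= move             # stop loss
                pos_entry, ref = None, p
    return pnl

# Synthetic random-walk prices standing in for WIG20 futures quotes.
rng = np.random.default_rng(7)
prices = 2000 + np.cumsum(rng.normal(0, 10, 5000))
print(backtest(prices, move=25.0, pl_ratio=2.0))
```

Varying `move` and `pl_ratio` over a grid is exactly the optimization step the methodology section describes; on a pure random walk no setting has an edge, which is why the study's test on real quotes is the informative one.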
In this study, two-step EWS-GARCH models for forecasting Value-at-Risk are presented. The EWS-GARCH approach allows different distributions of returns, or different Value-at-Risk forecasting models, to be used depending on the forecasted state of the financial time series. The study considered EWS-GARCH models in which GARCH(1,1), or GARCH(1,1) amended with the empirical distribution of random errors, served as the Value-at-Risk model in a state of tranquillity, while the empirical tail, exponential or Pareto distribution was used to forecast Value-at-Risk in a state of turbulence. The Value-at-Risk forecasts were evaluated on the basis of their adequacy and the analysis of loss functions. The results obtained indicate that EWS-GARCH models may improve the quality of Value-at-Risk forecasts generated by the benchmark models. However, the choice of the best assumptions for an EWS-GARCH model should depend on the goals of the Value-at-Risk forecasting model; the final selection may depend on the expected level of adequacy, conservatism and cost of the model.
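A minimal sketch of the tranquillity-state building block described above: a GARCH(1,1) variance recursion with the Value-at-Risk quantile taken from the empirical distribution of standardised residuals (the "amendment" mentioned in the abstract). The parameters are assumed to be pre-estimated and the return series is synthetic, so this illustrates the mechanics only, not the paper's full two-step model:

```python
import numpy as np

def garch_var(returns, alpha0, alpha1, beta1, q=0.01):
    """One-step-ahead Value-at-Risk from a GARCH(1,1) variance recursion
    h_{t+1} = alpha0 + alpha1 * r_t^2 + beta1 * h_t, with the q-quantile
    taken from the empirical distribution of standardised residuals."""
    r = np.asarray(returns, dtype=float)
    h = np.empty(len(r) + 1)
    h[0] = r.var()                           # initialise at sample variance
    for t in range(len(r)):
        h[t + 1] = alpha0 + alpha1 * r[t] ** 2 + beta1 * h[t]
    z = r / np.sqrt(h[:-1])                  # standardised residuals
    # Negative number: the loss bound exceeded with probability q.
    return np.quantile(z, q) * np.sqrt(h[-1])

# Synthetic daily returns; GARCH parameters assumed already estimated.
rng = np.random.default_rng(3)
rets = rng.standard_normal(1000) * 0.01
var_1pct = garch_var(rets, alpha0=5e-6, alpha1=0.05, beta1=0.90, q=0.01)
print(var_1pct)
```

In the full EWS-GARCH setup this forecast would only be used when the series is classified as tranquil; in a turbulent state it would be replaced by a quantile from the empirical tail, exponential or Pareto distribution.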
The main aim of this paper was to formulate and analyse machine learning methods fitted to the specificity of strategy-parameter optimization. The most important problems are the sensitivity of strategy performance to small parameter changes and the numerous local extrema distributed irregularly over the solution space. The methods were designed to shorten computation time significantly without a substantial loss of strategy quality. Their efficiency was compared for three pairs of assets in the case of a moving-average crossover system. In the first case, the strategy traded the SPX and DAX index futures; in the second, the AAPL and MSFT stocks; and in the third, the HGF and CBF commodity futures. The methods operated on in-sample data containing 16 years of daily prices between 1998 and 2013 and were validated on the out-of-sample period between 2014 and 2017. The major hypothesis verified in this paper is that machine learning methods select strategies with an evaluation criterion close to the highest one, but in significantly lower execution time than the brute-force method (exhaustive search).
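To illustrate the trade-off in that hypothesis, the sketch below runs an exhaustive grid search over moving-average window pairs and compares it with a search over a random subset of the grid. Random-subset search here merely stands in for the paper's machine-learning selection methods, and the prices, grid and windows are illustrative assumptions:

```python
import numpy as np

def ma(x, w):
    """Simple moving average of window w, via cumulative sums."""
    c = np.cumsum(np.insert(x, 0, 0.0))
    return (c[w:] - c[:-w]) / w

def strategy_return(prices, fast, slow):
    """Total log-return of a long-only moving-average crossover system:
    hold while yesterday's fast MA is above the slow MA."""
    if fast >= slow:
        return -np.inf                        # invalid parameter pair
    f = ma(prices, fast)[slow - fast:]        # align both MAs
    s = ma(prices, slow)
    pos = (f > s).astype(float)[:-1]          # yesterday's signal
    rets = np.diff(np.log(prices[slow - 1:]))
    return float((pos * rets).sum())

# Synthetic daily prices standing in for one of the traded assets.
rng = np.random.default_rng(5)
prices = np.exp(np.cumsum(rng.normal(0.0002, 0.01, 3000)))

grid = [(f, s) for f in range(5, 50, 5) for s in range(20, 200, 20)]

# Exhaustive search: evaluate every parameter pair (the benchmark).
best_exh = max(grid, key=lambda p: strategy_return(prices, *p))

# Random-subset search: evaluates only a quarter of the grid.
sample = rng.choice(len(grid), size=len(grid) // 4, replace=False)
best_rnd = max((grid[i] for i in sample),
               key=lambda p: strategy_return(prices, *p))

print(best_exh, best_rnd)
```

By construction the subset search cannot beat the exhaustive optimum, but it needs only a quarter of the evaluations; the paper's methods aim at the same time saving while keeping the selected criterion close to the exhaustive-search maximum.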