Krzysztof Dmytrów and Sebastian Gnat
It is believed that the ad valorem tax will increase fiscal burdens. To verify this statement, land plots were appraised with the use of the Szczecin Algorithm of Real Estate Mass Appraisal and the ad valorem tax was calculated. Next, a training set was sampled, for which a composite variable was calculated by means of three approaches: the TOPSIS method, the Generalised Distance Measure used as a composite measure of development (GDM2), and the quasi-TOPSIS method. These composite variables served as the explanatory variables in a logistic regression model. Next, changes in the tax burden were forecast for the test set. The aim of the research was to check the effectiveness of the presented approach for estimating the consequences of introducing the ad valorem tax. The results showed that all three approaches yielded similar results, with GDM2 performing best. The main finding is that these approaches can be used to predict changes in the tax burden of land plots.
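The TOPSIS step of building a composite variable can be sketched as follows. The attribute matrix, the weights, and the assumption that all criteria are benefit-type are purely illustrative; none of the paper's appraisal data are reproduced here.

```python
# Minimal TOPSIS sketch: score alternatives by relative closeness to an ideal
# solution. The plot attributes and weights below are invented for illustration.
import math

def topsis(matrix, weights):
    """matrix: rows = alternatives, cols = benefit criteria; returns closeness scores."""
    n_cols = len(matrix[0])
    # Vector-normalize each column of the decision matrix.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_cols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_cols)] for row in matrix]
    # Ideal (best) and anti-ideal (worst) points, assuming all criteria are benefits.
    ideal = [max(col) for col in zip(*v)]
    anti = [min(col) for col in zip(*v)]
    scores = []
    for row in v:
        d_plus = math.dist(row, ideal)    # distance to the ideal point
        d_minus = math.dist(row, anti)    # distance to the anti-ideal point
        scores.append(d_minus / (d_plus + d_minus))
    return scores

plots = [[3, 120, 2], [5, 80, 4], [4, 100, 3]]   # hypothetical plot attributes
scores = topsis(plots, [0.5, 0.3, 0.2])
```

Each score lies in [0, 1]; higher means closer to the ideal, and the scores can then serve as a composite explanatory variable.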
The aim of the paper is to assess the potential of selected PCA-based methods for analyzing the spatial diversity of crime in Poland during 2000–2017. Classical principal component analysis (PCA) deals with two-way matrices, usually taking into account objects and variables. In the case of the data analyzed in the study, apart from two dimensions (objects – voivodships, variables – criminal offences), there is also the dimension of time, so the dataset can be seen as a data cube: objects × variables × time. Therefore, this type of data requires methods that handle three-way data structures. In the paper, the variability of selected categories of criminal offences in time (2000–2017) and space (across voivodships) is analyzed using between-class and within-class principal component analysis. One advantage of these methods is, among others, the possibility of presenting the results graphically in two-dimensional space with the use of factorial maps.
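A minimal sketch of the between-class variant, which amounts to an ordinary PCA applied to class means, is shown below on synthetic data. The classes stand in for years (or voivodships); none of the study's crime figures are used.

```python
# Between-class PCA sketch: center the data, average within classes, and run
# PCA on the class-mean matrix. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))              # 60 observations, 4 variables
classes = np.repeat(np.arange(3), 20)     # 3 equal-sized classes (e.g. years)

X = X - X.mean(axis=0)                    # center over all observations
means = np.vstack([X[classes == c].mean(axis=0) for c in range(3)])

# PCA of the class means: eigendecomposition of their covariance matrix.
cov = np.cov(means, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]          # components sorted by explained variance
scores = means @ eigvec[:, order]         # class coordinates for a factorial map
```

With only three class means the covariance has rank at most two, so a two-dimensional factorial map captures all between-class variability, which is exactly what makes the graphical presentation possible.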
Ewa Roszkowska and Tomasz Wachowicz
The paper discusses the impact of decision-making profiles on the consistency of the rankings obtained by three multiple criteria methods, i.e. DR, AHP and TOPSIS. An online decision-making experiment was organized, based on an electronic questionnaire that is a hybrid of an internet survey system and a decision support system. The participants of the experiment were 418 students of Polish universities. To describe the decision-making profile, the REI test was used, which distinguishes two decision-making styles: rational and intuitive. The Kendall rank correlation coefficient was used to test the consistency of the rankings obtained by the considered methods. Using different grouping methods, the relationship was examined between the decision profile and the ability to express one’s preferences by means of these methods, which differ in their cognitive requirements. The results of the research may be helpful in supporting decision-makers by choosing the method that fits their profile best.
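The consistency measure used above, Kendall's rank correlation, can be illustrated with a toy example; the two rankings and the method labels are hypothetical, not results from the experiment.

```python
# Kendall's tau for two rankings of the same items (no ties): the normalized
# difference between concordant and discordant item pairs.
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall rank correlation coefficient for two tie-free rankings."""
    n = len(rank_a)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
        if s > 0:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical ranks of five alternatives produced by two MCDA methods.
ahp_rank = [1, 2, 3, 4, 5]
topsis_rank = [1, 3, 2, 4, 5]
tau = kendall_tau(ahp_rank, topsis_rank)   # one discordant pair out of ten
```

A tau of 1 means two methods produced identical rankings, and -1 means fully reversed ones, so tau directly quantifies how consistently a participant's preferences carry across methods.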
One of the central tasks of credit institutions is credit risk assessment, in which the estimation of the probability of default is an important element. The size of an institution’s credit portfolio can decrease as a result of early repayments, which changes the probability of default over time. The prognosis of the probability of default should therefore also take into consideration the prognosis of early repayments. In this paper, methods of evaluating the probability of default over time using competing risks regression models are considered, and methods of evaluating such models are proposed. A sample of retail credits provided by a Polish financial institution was examined empirically.
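A hedged sketch of the nonparametric building block behind competing-risks analysis, the Aalen–Johansen cumulative incidence estimator, is given below on invented credit data, with default and early repayment as the competing events. The paper's regression models themselves are not reproduced here.

```python
# Cumulative incidence under competing risks: at each event time, add the
# probability of being event-free just before it times the cause-specific
# hazard. Event codes: 0 = censored, 1 = default, 2 = early repayment.
def cumulative_incidence(times, events, cause):
    """Aalen-Johansen cumulative incidence of `cause` after the last observed time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0                         # overall event-free survival S(t-)
    cif = 0.0
    i = 0
    while i < len(data):
        t = data[i][0]
        d_cause = sum(1 for tt, e in data[i:] if tt == t and e == cause)
        d_any = sum(1 for tt, e in data[i:] if tt == t and e != 0)
        n_t = sum(1 for tt, e in data[i:] if tt == t)
        cif += surv * d_cause / n_at_risk
        surv *= 1 - d_any / n_at_risk
        n_at_risk -= n_t
        i += n_t
    return cif

# Invented observation times (e.g. months) with mixed outcomes.
times = [2, 3, 5, 5, 7, 8, 9, 11]
events = [1, 2, 1, 2, 0, 1, 2, 1]
p_default = cumulative_incidence(times, events, cause=1)
```

The point of the construction is visible here: every early repayment removes a credit from the risk set, so ignoring the competing event would overstate the probability of default.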
The beta parameter is a popular tool for the evaluation of portfolio performance. The Sharpe single-index model is a simple regression model in which the stock’s returns are regressed against the returns of a broader index. The beta parameter is a measure of the strength of this relation. Extensive recent research has shown that beta is not constant in time and should be modelled as a time-varying coefficient. One of the most popular methods of estimating a time-varying beta is the Kalman filter. As the output of the Kalman filter, one obtains a sequence of estimates of the time-varying beta. This sequence shows the historical dynamics of the sensitivity of a company’s returns to the variations of market returns. The article proposes a method of clustering companies listed on the Warsaw Stock Exchange according to their time-varying betas.
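The Kalman-filter estimation of a time-varying beta can be sketched as a scalar filter for the single-index model with a random-walk state. The simulated return series, the noise variances, and the initial values below are assumptions for illustration, not estimates from Warsaw Stock Exchange data.

```python
# Scalar Kalman filter for r_stock,t = beta_t * r_market,t + eps_t, with the
# state equation beta_t = beta_{t-1} + w_t. All series here are simulated.
import random

random.seed(1)

T = 500
state_var = 1e-5   # variance of the beta random-walk innovation (assumed known)
obs_var = 1e-6     # variance of the observation noise (assumed known)

market = [random.gauss(0, 0.01) for _ in range(T)]
true_beta = [0.8 + 0.4 * t / (T - 1) for t in range(T)]   # beta drifts 0.8 -> 1.2
stock = [true_beta[t] * market[t] + random.gauss(0, obs_var ** 0.5) for t in range(T)]

beta, p = 1.0, 1.0                         # initial state estimate and its variance
betas = []
for t in range(T):
    p += state_var                         # predict step: variance grows
    h = market[t]                          # the market return acts as the design H_t
    gain = p * h / (h * h * p + obs_var)   # Kalman gain
    beta += gain * (stock[t] - h * beta)   # correct with the prediction error
    p *= 1 - gain * h
    betas.append(beta)
```

The resulting sequence `betas` is the filter output the abstract refers to: a historical path of sensitivity estimates, one per return observation, which can then be fed into a clustering procedure.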
Zdzisław Kes and Łukasz Kuźmiński
This paper presents methods for the evaluation of budget variance risk, i.e. the risk of a difference between the budgeted and actual figures. The postulated approach is based on extreme value analysis (EVA), which offers, among other things, the estimation of the distribution parameters of the maxima of the studied phenomena. Proper recognition of these parameters makes it possible to calculate the probability that the budget variance will exceed certain levels established as critical. This methodology can be used to evaluate deviation levels by time period, and to compare them against historical data. The main objective of this paper was to examine the utility of the theory of extreme values in the estimation of budget deviation risk. The study presents the results of probabilistic analyses of data obtained from the budgetary cost control unit of a production company located in eastern Poland, for the period 2011–2012. The developed method of analysis and assessment of budget deviations is in line with the development of the concepts and methods of management accounting.
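The block-maxima step of EVA can be sketched as below, assuming a Gumbel model for the maxima fitted by the method of moments. The monthly maxima and the critical level are invented figures, not the company's data.

```python
# Fit a Gumbel distribution to block maxima of budget deviations and compute
# the probability of exceeding a critical level. Figures are illustrative only.
import math
import statistics

monthly_max_deviation = [4.1, 5.3, 3.8, 6.0, 4.9, 5.5, 4.4, 7.2, 5.1, 4.7, 6.3, 5.8]

# Method-of-moments Gumbel fit: scale from the standard deviation,
# location from the mean minus Euler-Mascheroni times the scale.
scale = statistics.stdev(monthly_max_deviation) * math.sqrt(6) / math.pi
loc = statistics.mean(monthly_max_deviation) - 0.5772 * scale

def prob_exceed(x):
    """P(monthly maximum deviation > x) under the fitted Gumbel distribution."""
    return 1 - math.exp(-math.exp(-(x - loc) / scale))

p_critical = prob_exceed(8.0)   # chance the deviation passes a critical level
```

Once the maxima distribution parameters are recognized, any critical level can be translated into an exceedance probability in this way and compared across budgeting periods.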
The aim of the research was to assess the relative risk of liquidation of a company depending on its age. The research covered economic entities established in Szczecin in the period 1990–2010. The analysis was carried out with the use of a logit model. The risk of company liquidation was examined depending on the entity’s age, expressed both in months (a continuous variable) and in grouped intervals (year, half-year). In this way, attention was drawn to the benefits of recoding a continuous variable (rank and 0-1 coding). The research covered companies established during 1990–2010 as a whole (over 120 thousand) and in time periods resulting from the cyclical character of company liquidation (in accordance with earlier research findings). The research showed that the risk of company liquidation decreases as the company grows older (when a continuous variable or a rank variable is used). On the other hand, the risk estimated for successive age groups (using 0-1 variables) turns out not to be monotonic.
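The three codings of age compared above, continuous, rank, and 0-1, can be illustrated with a small design-matrix sketch. The company ages and interval boundaries below are made up for illustration and are not the study's groupings.

```python
# Three ways of coding company age as a logit regressor: the raw number of
# months, the rank of the age interval, and 0-1 dummies per interval.
age_months = [3, 10, 25, 40, 70, 130]   # continuous coding: use these directly
bounds = [12, 24, 60, 120]              # hypothetical interval boundaries (months)

def group_rank(age):
    """Rank coding: index of the age interval the company falls into."""
    return sum(age > b for b in bounds)

def dummies(age):
    """0-1 coding: one indicator per age group, with the youngest group
    as the omitted reference category."""
    g = group_rank(age)
    return [1 if g == k else 0 for k in range(1, len(bounds) + 1)]

ranks = [group_rank(a) for a in age_months]     # rank-coded regressor
dummy_rows = [dummies(a) for a in age_months]   # design matrix for 0-1 coding
```

The contrast the abstract draws follows from these codings: continuous and rank coding force the age effect to be monotone in the fitted model, whereas separate 0-1 dummies let each age group carry its own risk level, which is how the non-monotonic pattern can appear.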
Maria José Sá, Carlos Miguel Ferreira and Sandro Serpa
Academic conferences have always been privileged spaces and moments for the dissemination of new scientific knowledge, as well as for social interaction and for the establishment and development of social networks among scientists. However, the virtual dimension of conferences, in which individuals are not physically present in the same place, is beginning to emerge as an increasingly used possibility, which implies a different framing of these scientific events. This paper seeks to comparatively analyse several models of academic conferences, setting out their advantages, limitations and potential. Furthermore, it also seeks to envision the importance of, and the challenges to be faced by, such conferences in the near future. The analysis leads to the conclusion that virtual conferences tend to take on an increasingly central role in this type of scientific dissemination, but without totally displacing the conference mode based on face-to-face interaction. Moreover, conferences may emerge as a hybrid of these two types, in an attempt to provide their main benefits to the various participants. However, the insufficient literature on this topic calls for studies that develop and deepen the understanding of this academic, social, and also economic phenomenon in its broader implications.
Jan Gresil S. Kahambing and Jabin J. Deguma
The film Bicentennial Man (1999) pictures, in a nutshell, a robot who becomes human through his personality by plunging into the realities of freedom and death. The aim of this paper is to reflect on the notion of personality in the case of what this paper coins a ‘robot-incarnate’ named Andrew, the first man to live for two hundred years from his inception as an artificial machine. The method of exposition proceeds by (1) offering a philosophical reflection on the film concerning the determinacy of Andrew as a person and (2) then anchoring his case as a subject for the understanding of machine ethics. Regarding the first, the paper focuses on the questions of personality, death, and freedom. Regarding the second, the paper examines the discussions of machine ethics and the issue of moral agency. Drawing on the existing literature on the matter, the paper concludes that machine ethics must stand as the principle that serves as law and limitation to any scientific machine advancement that shows promising potential.