Abstract

In this work, we introduce a simple multi-agent simulation model with two agent roles corresponding to moral and immoral attitudes. The model is given explicitly by a set of mathematical equations with continuous variables and is characterized by four parameters: morality, protection, and two efficiency parameters. Agents are free to adjust their roles to maximize individual gains. The model is analyzed theoretically to find conditions for its stability, i.e., the fractions of agents in both roles that lead to an equilibrium in their gains. A multi-agent simulation is also developed to verify the dynamics of the model for all values of the morality and protection parameters, and to identify potential discrepancies with the theoretical analysis.
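
As an illustration of the setting this abstract describes, the sketch below simulates a population of agents that repeatedly switch to whichever role currently yields the higher gain. The payoff functions, parameter values, and update rule are hypothetical placeholders, since the abstract does not reproduce the model's equations.

```python
import random

# Illustrative stand-in payoffs, parameterised by morality (m),
# protection (p) and two efficiency parameters (e1, e2). The actual
# equations are given in the paper itself, not here.
def gain_moral(x_immoral, m, p, e1):
    return e1 * m * (1.0 - x_immoral) + p * x_immoral

def gain_immoral(x_immoral, m, p, e2):
    return e2 * (1.0 - m) * (1.0 - x_immoral) * (1.0 - p)

def simulate(n=1000, steps=500, m=0.5, p=0.5, e1=1.0, e2=1.2, seed=0):
    rng = random.Random(seed)
    roles = [rng.random() < 0.5 for _ in range(n)]   # True = immoral role
    for _ in range(steps):
        x = sum(roles) / n                           # fraction of immoral agents
        better_immoral = gain_immoral(x, m, p, e2) > gain_moral(x, m, p, e1)
        # in each step a small random subset of agents reconsiders its role
        for i in rng.sample(range(n), n // 20):
            roles[i] = better_immoral
    return sum(roles) / n                            # equilibrium fraction

if __name__ == "__main__":
    print(simulate())
```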

Abstract

In the last decade of research into the origins of life, there has been increasing interest in theoretical molecular modeling methods aimed at improving the accuracy and speed of the algorithms that solve the molecular mechanics and chemical reactions of matter. Research on the scenarios of prebiotic chemistry has also advanced. The present work discusses the latest computational techniques and trends implemented so far. Although it is difficult to cover the full extent of current publications, we try to orient the reader toward the modern tendencies and challenges faced by those working in the origins-of-life field.

Abstract

The coupled tasks scheduling problem is a class of scheduling problems in which each task consists of two operations and a separation gap between them. High-multiplicity is a compact encoding in which identical tasks are grouped together and the group is specified instead of each individual task. Consequently, the size of the encoded problem instance decreases significantly. In this article, we derive a lower bound for this problem variant and propose an asymptotically optimal algorithm. The theoretical results are complemented with a computational experiment in which the new algorithm is compared with three other implemented algorithms.
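
To make the encoding idea concrete, the sketch below contrasts listing every coupled task with storing (task type, multiplicity) pairs. The field names and the exact-gap interpretation are assumptions made for the example, not the paper's notation.

```python
from collections import Counter
from typing import NamedTuple

class CoupledTask(NamedTuple):
    a: int    # length of the first operation
    gap: int  # separation between the two operations
    b: int    # length of the second operation

# Standard encoding: one entry per task, so the instance size grows
# linearly with the number of tasks.
tasks = [CoupledTask(2, 5, 3)] * 1000 + [CoupledTask(1, 4, 1)] * 500

# High-multiplicity encoding: identical tasks are grouped, so the
# instance is described only by (task type, multiplicity) pairs.
hm_instance = Counter(tasks)
for task_type, count in hm_instance.items():
    print(task_type, "x", count)
```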

Abstract

The risk classifications made by international rating agencies aim to guide investors regarding the capacity and willingness of the evaluated countries to honor their public debt commitments. In this study, the analysis of economic variables of sovereign rating, in a context of vagueness and uncertainty, leads to the inference of patterns (multi-criteria rules) following the Dominance-based Rough Set Approach (DRSA). The discovery of patterns in data may be useful for supporting foreign investment decisions in countries, and this knowledge base may be used in rule-based expert systems (learning from training examples). The present study seeks to complement the analysis produced by an international credit rating agency, Standard & Poor’s (S&P), for the year 2018.
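
The flavour of the multi-criteria rules that DRSA induces can be sketched as "at least"/"at most" conditions on ordered attributes. The attributes, thresholds, and rating class below are purely hypothetical illustrations and are not the rules inferred in the study.

```python
# Hypothetical dominance-based rule: if a country is at least as good as
# the thresholds on all condition attributes, its rating belongs to at
# least the given class. Values are made up for illustration.
def rule_at_least_investment_grade(country):
    return (country["gdp_growth"] >= 2.0 and      # "at least" condition
            country["debt_to_gdp"] <= 60.0)       # "at most" condition

country = {"gdp_growth": 2.5, "debt_to_gdp": 45.0}
if rule_at_least_investment_grade(country):
    print("rating at least: investment grade")
```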

Abstract

To distinguish individuals with dangerous abnormal behaviours from the crowd, human characteristics (e.g., speed and direction of motion, interaction with other people), crowd characteristics (such as flow and density), the space available to individuals, etc. must be considered. The paper proposes an approach that considers individual and crowd metrics to detect anomalies. An individual’s abnormal behaviour alone cannot indicate a threat toward other individuals, as such behaviour can also be triggered by positive emotions or events. To exclude individuals whose abnormal behaviour is unrelated to aggression and poses no danger to the environment, it is suggested to also use the emotional state of individuals. The aim of the proposed approach is to automate video surveillance systems by enabling them to automatically detect potentially dangerous situations.
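
A minimal sketch of combining individual, crowd, and emotional-state metrics into a single anomaly score is given below. The features, weights, and threshold are illustrative assumptions, not values from the paper.

```python
# Hypothetical weighted combination of individual metrics (speed and
# direction deviation), crowd metrics (density, available personal space)
# and an emotional-state estimate into one anomaly score in [0, 1].
def anomaly_score(speed_dev, direction_dev, crowd_density,
                  personal_space, negative_emotion):
    individual = 0.4 * speed_dev + 0.2 * direction_dev
    crowd = 0.2 * crowd_density + 0.1 * (1.0 - personal_space)
    return individual + crowd + 0.1 * negative_emotion

score = anomaly_score(speed_dev=0.9, direction_dev=0.7,
                      crowd_density=0.8, personal_space=0.2,
                      negative_emotion=0.95)
print("potentially dangerous" if score > 0.7 else "normal")
```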

Abstract

A stable reference to Internet resources is crucial not only to identify a resource in a trustworthy and certified way but also to guarantee continuous access to it over time. The current practice in scientific publication, namely the use of a Persistent Identifier (PID) such as a DOI or Handle, is becoming attractive for datasets as well. In fact, in the era of Big Data, the aspects of replicability and verification of scientific results are paramount. In this paper we verify the functional feasibility of permissioned blockchain technology as a tool to implement a Trustworthy Persistent Identifier (T-PID) system for datasets in the scientific domain.
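
As a rough illustration of the idea, the sketch below binds a persistent identifier to a dataset fingerprint in an append-only, hash-linked record list. The record fields, the example PID, and the URL are assumptions for illustration, not the T-PID schema from the paper, and a real deployment would use a permissioned blockchain rather than an in-memory list.

```python
import hashlib
import json
import time

ledger = []  # stand-in for the permissioned ledger

def register_pid(pid, dataset_bytes, landing_url):
    prev_hash = None
    if ledger:
        prev_hash = hashlib.sha256(
            json.dumps(ledger[-1], sort_keys=True).encode()).hexdigest()
    record = {
        "pid": pid,
        "sha256": hashlib.sha256(dataset_bytes).hexdigest(),  # dataset fingerprint
        "url": landing_url,
        "timestamp": time.time(),
        "prev": prev_hash,  # links records into a tamper-evident chain
    }
    ledger.append(record)
    return record

register_pid("21.T12345/demo-dataset", b"raw dataset bytes",
             "https://example.org/dataset/1")
print(ledger[-1]["sha256"])
```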

Abstract

Cloud computing has become one of the major computing paradigms. Not only has the number of offered cloud services grown exponentially, but many different providers also compete by proposing very similar services. This situation should eventually be beneficial for customers, but considering that these services differ slightly both functionally and non-functionally (e.g., in performance, reliability, and security), consumers may be confused and unable to make an optimal choice. The emergence of cloud service brokers addresses these issues. A broker gathers information about services from providers and about the needs and requirements of the customers, with the final goal of finding the best match.

In this paper, we formalize and study a novel problem that arises in the area of cloud brokering. In its simplest form, brokering is a trivial assignment problem, but in more complex and realistic cases this no longer holds. The novelty of the presented problem lies in considering services that can be sold in bundles. Bundling is a common business practice in which a set of services is sold together for a lower price than the sum of the prices of the included services. This work introduces a multi-criteria optimization problem that can help customers determine the best IT solutions according to several criteria. The Cloud Brokering with Bundles (CBB) problem models the different IT packages (or bundles) found on the market while minimizing (maximizing) different criteria. A proof of complexity is given for the single-objective case, and experiments have been conducted with a special case of two criteria: the first being the cost and the second artificially generated. We also designed and developed a benchmark generator based on real data gathered from 19 cloud providers. The problem is solved using an exact optimizer relying on a dichotomic search method. The results show that the dichotomic search can be successfully applied to small instances corresponding to typical cloud-brokering use cases and returns results within seconds. For larger problem instances, solving times are not prohibitive, and solutions for large corporate clients could be obtained within minutes.
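
A toy sketch of the bundle-selection core of such a problem is shown below: choose a set of bundles that covers all requested services at minimum cost. The bundle contents and prices are hypothetical, and the brute-force enumeration is only for illustration; the paper itself addresses the multi-criteria variant with an exact dichotomic-search optimizer.

```python
from itertools import combinations

requested = {"compute", "storage", "database"}
bundles = {                       # hypothetical offers, not real prices
    "A": ({"compute"}, 10.0),
    "B": ({"storage", "database"}, 14.0),
    "C": ({"compute", "storage", "database"}, 22.0),
}

# Enumerate every subset of bundles and keep the cheapest one that
# covers all requested services (single-objective cost minimization).
best = None
for r in range(1, len(bundles) + 1):
    for combo in combinations(bundles, r):
        covered = set().union(*(bundles[b][0] for b in combo))
        cost = sum(bundles[b][1] for b in combo)
        if requested <= covered and (best is None or cost < best[1]):
            best = (combo, cost)

print(best)  # cheapest covering set of bundles and its total cost
```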

Abstract

In recent years, numerous efforts have been made in the area of software development effort estimation to calculate software costs in the preliminary development stages. These studies have resulted in a great many models. Despite this substantial effort, the main problems of the proposed methods are their dependency on the data set used and, sometimes, their lack of appropriate efficiency. The current article presents a model for software development effort estimation that makes use of evolutionary algorithms and neural networks. The distinctive characteristic of this model is its lack of dependency on the data set used as well as its high efficiency. To evaluate the proposed model, six different data sets from the area of software effort estimation have been used. Several data sets are used in order to investigate whether the model’s performance is independent of the data set used. The evaluation measures are MMRE, MdMRE, and PRED(0.25). The results indicate that the proposed model, besides delivering high efficiency in contrast to its counterparts, produces the best results for all of the data sets used.
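
The evaluation measures named here are standard in effort estimation: MRE_i = |actual_i - predicted_i| / actual_i, MMRE is the mean of the MREs, MdMRE their median, and PRED(0.25) the fraction of projects whose MRE does not exceed 0.25. The sketch below computes them; the sample effort values are made up.

```python
from statistics import mean, median

def evaluate(actual, predicted, level=0.25):
    # magnitude of relative error per project
    mre = [abs(a - p) / a for a, p in zip(actual, predicted)]
    return {
        "MMRE": mean(mre),                                   # mean MRE
        "MdMRE": median(mre),                                # median MRE
        f"PRED({level})": sum(m <= level for m in mre) / len(mre),
    }

# hypothetical actual vs. predicted effort values (e.g., person-hours)
print(evaluate([100, 250, 80, 400], [110, 230, 120, 380]))
```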

Abstract

Competence management is a discipline that has recently regained popularity due to the growing demand for ever-higher competences of employees as well as graduates. One of the main implementation challenges of competence management is that, as a rule, it is based on experts’ implicit knowledge. As a consequence, the transformation of implicit knowledge into explicit knowledge is practically unmanageable, which limits the ability to transfer already existing knowledge from one organisation to another.

The paper proposes an ontology-based competence model that allows the reuse of existing competence frameworks in the field of non-formal education, where different competence frameworks need to be used together for the identification, assessment, and development of customers’ competences without forcing organisations to change their routine competence management processes. The proposed competence model is used as a basis for the development of a competence management model on which IT tools that support competence management processes may be built. Several existing frameworks have been analysed, and the terminology used in them has been combined into a single model. The usage of the proposed model is discussed, and possible IT tools to support the competence management process are identified in the paper.
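
A minimal sketch of how competences from different frameworks could be aligned under a single shared concept is given below. The framework names, reference identifiers, and level scale are illustrative assumptions, not the ontology or terminology defined in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Competence:
    name: str
    # framework name -> identifier of the equivalent entry in that framework
    framework_refs: dict = field(default_factory=dict)
    level: int = 0  # assessed proficiency level of a particular customer

# One shared concept referencing equivalent entries in two (made-up) frameworks.
teamwork = Competence(
    name="Teamwork",
    framework_refs={"FrameworkA": "A.3.2", "FrameworkB": "soft-07"},
)
teamwork.level = 3  # result of an assessment for one customer
print(teamwork)
```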