Browse

Affective State Based Anomaly Detection in Crowd

Abstract

To distinguish individuals exhibiting dangerous abnormal behaviour from the rest of the crowd, human characteristics (e.g., speed and direction of motion, interaction with other people), crowd characteristics (such as flow and density), the space available to individuals, etc. must be considered. The paper proposes an approach that combines individual and crowd metrics to detect anomalies. An individual’s abnormal behaviour alone does not necessarily indicate a threat to other people, as such behaviour can also be triggered by positive emotions or events. To filter out individuals whose abnormal behaviour is unrelated to aggression and poses no danger to the surroundings, it is suggested to take the emotional state of individuals into account. The aim of the proposed approach is to automate video surveillance systems by enabling them to detect potentially dangerous situations automatically.
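
As a rough illustration of how such individual, crowd and affective metrics might be combined, the sketch below flags a person only when their motion deviates from the crowd and their estimated emotional valence is negative. All thresholds, field names and the valence scale are illustrative assumptions, not taken from the paper.

```python
import statistics
from dataclasses import dataclass

@dataclass
class PersonState:
    speed: float          # m/s, estimated by the tracker (assumed input)
    dir_deviation: float  # rad, deviation from the dominant crowd flow
    valence: float        # estimated emotional valence in [-1, 1]

def flag_threats(people: list[PersonState],
                 z_thresh: float = 2.5,      # hypothetical thresholds
                 dir_thresh: float = 1.0,
                 valence_thresh: float = -0.3) -> list[int]:
    """Return indices of people whose motion is anomalous relative to the
    crowd AND whose estimated affect is negative."""
    speeds = [p.speed for p in people]
    mu, sigma = statistics.mean(speeds), statistics.pstdev(speeds) or 1.0
    flagged = []
    for i, p in enumerate(people):
        motion_anomaly = ((p.speed - mu) / sigma > z_thresh
                          or p.dir_deviation > dir_thresh)
        if motion_anomaly and p.valence < valence_thresh:
            flagged.append(i)
    return flagged
```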

A blockchain based Trusted Persistent Identifier system for Big Data in Science

Abstract

A stable reference to Internet resources is crucial not only to identify a resource in a trustworthy and certified way but also to guarantee continuous access to it over time. The current practice in scientific publishing, namely the use of a Persistent Identifier (PID) such as a DOI or Handle, is becoming attractive for datasets as well. Indeed, in the era of Big Data, replicability and verification of scientific results are paramount. In this paper, we verify the functional feasibility of permissioned blockchain technology as a tool to implement a Trustworthy Persistent Identifier (T-PID) system for datasets in the scientific domain.
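
A minimal sketch of the core idea, assuming the essential operation is binding a dataset's content hash to a PID on an append-only, tamper-evident ledger. A toy hash-chained list stands in for the permissioned blockchain, and the PID value is invented:

```python
import hashlib, json, time

class ToyLedger:
    """Append-only, hash-chained list standing in for a permissioned blockchain."""
    def __init__(self):
        self.blocks = []

    def append(self, record: dict) -> str:
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps(record, sort_keys=True)
        block_hash = hashlib.sha256((prev + body).encode()).hexdigest()
        self.blocks.append({"record": record, "prev": prev, "hash": block_hash})
        return block_hash

def register_pid(ledger: ToyLedger, pid: str, dataset: bytes) -> str:
    """Bind a PID to the dataset's content hash, so later resolutions can
    verify that the referenced data have not changed."""
    return ledger.append({
        "pid": pid,
        "sha256": hashlib.sha256(dataset).hexdigest(),
        "registered_at": time.time(),
    })

ledger = ToyLedger()
receipt = register_pid(ledger, "hdl:21.example/dataset-001", b"dataset bytes")
```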

Cloud Brokering with Bundles: Multi-objective Optimization of Services Selection

Abstract

Cloud computing has become one of the major computing paradigms. Not only has the number of offered cloud services grown exponentially, but many different providers also compete by proposing very similar services. This situation should eventually be beneficial for customers, but because these services differ slightly in functional and non-functional properties (e.g., performance, reliability, security), consumers may be confused and unable to make an optimal choice. The emergence of cloud service brokers addresses these issues. A broker gathers information about services from providers and about the needs and requirements of customers, with the final goal of finding the best match.

In this paper, we formalize and study a novel problem that arises in the area of cloud brokering. In its simplest form, brokering is a trivial assignment problem, but in more complex and realistic cases this no longer holds. The novelty of the presented problem lies in considering services that can be sold in bundles. Bundling is a common business practice in which a set of services is sold together for a lower price than the sum of the prices of the individual services it contains. This work introduces a multi-criteria optimization problem that can help customers determine the best IT solutions according to several criteria. The Cloud Brokering with Bundles (CBB) problem models the different IT packages (or bundles) found on the market while minimizing (maximizing) the different criteria. A proof of complexity is given for the single-objective case, and experiments have been conducted with a special case of two criteria: the first being the cost and the second artificially generated. We also designed and developed a benchmark generator based on real data gathered from 19 cloud providers. The problem is solved using an exact optimizer relying on a dichotomic search method. The results show that the dichotomic search can be successfully applied to small instances corresponding to typical cloud-brokering use cases and returns results within seconds. For larger problem instances, solving times are not prohibitive, and solutions for large corporate clients could be obtained within minutes.
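
To make the bundle setup concrete, the toy sketch below enumerates all bundle combinations that cover a requested set of services and keeps the Pareto-optimal (cost, second criterion) pairs. The data are invented, and brute-force enumeration only works for toy instances; the paper solves the problem with an exact dichotomic-search optimizer instead.

```python
from itertools import combinations

# Each bundle: (name, services covered, cost, second criterion) - invented data
bundles = [
    ("B1", {"vm", "storage"}, 40.0, 12.0),
    ("B2", {"vm"}, 25.0, 5.0),
    ("B3", {"storage", "cdn"}, 30.0, 9.0),
    ("B4", {"cdn"}, 15.0, 4.0),
]
required = {"vm", "storage", "cdn"}

def pareto_front(required, bundles):
    """Enumerate bundle subsets covering the request; keep non-dominated ones."""
    feasible = []
    for r in range(1, len(bundles) + 1):
        for combo in combinations(bundles, r):
            if required <= set().union(*(b[1] for b in combo)):
                cost = sum(b[2] for b in combo)
                crit = sum(b[3] for b in combo)
                feasible.append((cost, crit, [b[0] for b in combo]))
    return sorted(f for f in feasible
                  if not any(g[0] <= f[0] and g[1] <= f[1] and g[:2] != f[:2]
                             for g in feasible))

print(pareto_front(required, bundles))
```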

A Dataset-Independent Model for Estimating Software Development Effort Using Soft Computing Techniques

Abstract

In recent years, numerous efforts have been made in the area of software development effort estimation to calculate software costs in the preliminary development stages. These studies have resulted in a great many models. Despite this large body of work, the substantial problems of the proposed methods are their dependency on the data set used and, in some cases, their insufficient accuracy. The present article presents a model for software development effort estimation that makes use of evolutionary algorithms and neural networks. The distinctive characteristics of this model are its independence of the data set used and its high accuracy. To evaluate the proposed model, six different software effort estimation data sets have been used. Several data sets were applied in order to verify that the model's performance does not depend on the data set used. The evaluation measures were MMRE, MdMRE and PRED(0.25). The results indicate that the proposed model, besides delivering higher accuracy than its counterparts, produces the best results on all of the data sets used.
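
The three evaluation measures are standard and easy to state precisely: MRE is |actual - estimated| / actual per project, MMRE and MdMRE are its mean and median, and PRED(0.25) is the fraction of projects with MRE not exceeding 0.25. A small sketch (toy effort values in person-months, purely illustrative):

```python
import statistics

def evaluate(actuals, estimates, level=0.25):
    """Compute MMRE, MdMRE and PRED(level) from actual vs estimated effort."""
    mres = [abs(a - e) / a for a, e in zip(actuals, estimates)]
    mmre = statistics.mean(mres)                       # mean MRE
    mdmre = statistics.median(mres)                    # median MRE
    pred = sum(m <= level for m in mres) / len(mres)   # PRED(0.25)
    return mmre, mdmre, pred

print(evaluate([12.0, 30.0, 7.5], [10.0, 33.0, 9.0]))
```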

Development of Ontology Based Competence Management Model for Non-Formal Education Services

Abstract

Competence management is a discipline that has recently regained popularity due to the growing demand for ever higher competences of employees and graduates. One of the main challenges in implementing competence management is that, as a rule, it is based on experts’ implicit knowledge. This is why transforming implicit knowledge into explicit knowledge is practically unmanageable and, as a consequence, limits the ability to transfer existing knowledge from one organisation to another.

The paper proposes an ontology-based competence model that allows the reuse of existing competence frameworks in the field of non-formal education, where different competence frameworks need to be used together to identify, assess and develop customers’ competences, without forcing organisations to change their routine competence management processes. The proposed competence model serves as the basis for developing a competence management model on which IT tools supporting competence management processes may be built. Several existing frameworks have been analysed, and the terminology used in them has been combined into a single model. The paper discusses the usage of the proposed model and identifies possible IT tools to support the competence management process.
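
As a small illustration of what combining the terminology of several frameworks in a single ontology might look like, the sketch below uses rdflib to declare one shared Competence class and align two hypothetical framework-specific concepts via skos:closeMatch; the namespace and concept names are invented:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/competence#")  # illustrative namespace
g = Graph()
g.bind("ex", EX)
g.bind("skos", SKOS)

# One shared Competence class; concepts come from two hypothetical frameworks
g.add((EX.Competence, RDF.type, RDFS.Class))
for concept, label in [(EX.TeamLeadership_FwA, "Team leadership (framework A)"),
                       (EX.LeadingOthers_FwB, "Leading others (framework B)")]:
    g.add((concept, RDF.type, EX.Competence))
    g.add((concept, RDFS.label, Literal(label)))

# Align equivalent competences without changing either source framework
g.add((EX.TeamLeadership_FwA, SKOS.closeMatch, EX.LeadingOthers_FwB))

print(g.serialize(format="turtle"))
```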

Extracting TFM Core Elements From Use Case Scenarios by Processing Structure and Text in Natural Language

Abstract

Extracting the core elements of a Topological Functioning Model (TFM) from use case scenarios requires processing both the structure and the natural language constructs of use case step descriptions. The processing steps are discussed in the present paper. Analysis of natural language constructs is based on outcomes provided by Stanford CoreNLP, a natural language processing pipeline that allows analysing text at the paragraph, sentence and word levels. The proposed technique allows extracting actions, objects, results, preconditions, post-conditions and executors of the functional features, as well as cause-effect relations between them. However, its accuracy depends on the language constructs used and on the accuracy of the specification of event flows. The analysis of the results leads to the conclusion that even use case specifications require a rigorous, or even uniform, structure of paths and sentences, as well as awareness of possible parsing errors.
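
The paper relies on Stanford CoreNLP; purely as an illustration of the kind of dependency-based extraction involved, the sketch below pulls a rough (executor, action, object) triple from a single use case step, using spaCy as a stand-in pipeline:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # stand-in for the Stanford CoreNLP pipeline

def extract_functional_feature(step: str):
    """Pull a rough (executor, action, object) triple from one use case step."""
    doc = nlp(step)
    for token in doc:
        if token.pos_ == "VERB":
            executors = [c.text for c in token.children
                         if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c.text for c in token.children if c.dep_ in ("dobj", "obj")]
            return {"executor": executors, "action": token.lemma_, "object": objects}
    return None

print(extract_functional_feature("The system validates the submitted order."))
# e.g. {'executor': ['system'], 'action': 'validate', 'object': ['order']}
```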

Fuzzy Expert System Generalised Model for Medical Applications

Abstract

Over the past two decades, an exponential growth of medical fuzzy expert systems has been observed. These systems address specific forms of medical and health problems, resulting in differentiated models that are application-dependent and may lack adaptability. This research proposes a generalised model encompassing the major features of existing specialised fuzzy systems. Generalisation was achieved through modelling by design, in which the major components of the differentiated systems were identified and used as the components of the general model. The prototype shows that the proposed model allows medical experts to define fuzzy variables (rule base) for any medical application and allows users to enter symptoms (fact base) and query their medical conditions through the designed generalised core inference engine. Further research may include adding more composition conditions, more combining techniques and more tests in several environments in order to check the model’s precision, sensitivity and specificity.
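
A minimal sketch of the rule-base/fact-base split described above, with triangular membership functions and min-max inference; the medical variable, fuzzy sets and values are invented for illustration:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Expert-defined fuzzy variables and rules (the "rule base"); toy values
temp_sets = {"normal": (35.5, 36.8, 37.5), "fever": (37.0, 39.0, 41.0)}
rules = [
    ({"temperature": "fever"}, "consult_doctor"),
    ({"temperature": "normal"}, "no_action"),
]

def infer(facts: dict) -> dict:
    """Min-max inference: each rule fires at the degree its antecedents hold."""
    conclusions = {}
    for antecedent, conclusion in rules:
        degree = min(tri(facts[var], *temp_sets[label])
                     for var, label in antecedent.items())
        conclusions[conclusion] = max(conclusions.get(conclusion, 0.0), degree)
    return conclusions

print(infer({"temperature": 38.2}))  # {'consult_doctor': 0.6, 'no_action': 0.0}
```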

Genetic Algorithm Based Feature Selection Technique for Electroencephalography Data

Abstract

High dimensionality is a well-known problem: a data set may contain a huge number of features, yet not all of them are helpful for a particular data mining task, such as classification or clustering. Therefore, feature selection is frequently used to reduce data set dimensionality. Feature selection is a multi-objective task that reduces dimensionality, decreases running time, and also improves the expected accuracy. In this study, our goal is to reduce the number of features of electroencephalography data for eye state classification and to achieve the same or even better classification accuracy with the smallest number of features. We propose a genetic algorithm-based feature selection technique with the KNN classifier. With the feature subset selected by the proposed technique, accuracy is improved compared to the full feature set. The results show that the classification accuracy of the proposed strategy improves by 3 % on average compared to the accuracy without feature selection.
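
A compact sketch of the technique under stated assumptions: individuals are feature bitmasks, fitness is cross-validated KNN accuracy on the selected columns, and evolution uses truncation selection, one-point crossover and bit-flip mutation. This is a generic GA, not necessarily the paper's exact operator choices:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Cross-validated KNN accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(n_neighbors=5),
                           X[:, mask], y, cv=3).mean()

def ga_select(X, y, pop_size=20, generations=30, p_mut=0.05):
    n = X.shape[1]
    pop = rng.random((pop_size, n)) < 0.5                    # random bitmasks
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                         # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut                   # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()]  # best feature mask found

# X, y would be the EEG eye-state samples and labels:
# best_mask = ga_select(X, y); X_reduced = X[:, best_mask]
```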

Integration of Relational and Graph Databases Functionally

Abstract

In today’s multi-model database world, there is an effort to integrate databases expressed in different data models. The aim of the article is to show the possibilities of integrating relational and graph databases with the help of a functional data model and its formal language, a typed lambda calculus. We assume that a data schema exists for both the relational and the graph database. In this approach, relations are considered characteristic functions, and property graphs are considered sets of single-valued and multivalued functions. A query over such an integrated heterogeneous database can then be expressed as a single query expression in a version of the typed lambda calculus. A more user-friendly version of such a language could serve as a powerful query tool in practice. We also discuss queries sent to the integrated system and translated into queries in SQL and Cypher, the graph query language for Neo4j.
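
As a hand-written illustration of the translation targets (the paper derives them from a single lambda-calculus expression), the sketch below answers one logical query, "friends of customers from Prague", by combining an SQL query via sqlite3 with a Cypher query via the neo4j driver; schema, credentials and data are invented:

```python
import sqlite3
from neo4j import GraphDatabase  # pip install neo4j

# Relational part: customers are stored in an SQL database (illustrative schema)
conn = sqlite3.connect("shop.db")
emails = [row[0] for row in conn.execute(
    "SELECT email FROM customer WHERE city = ?", ("Prague",))]

# Graph part: the friendship network is stored in Neo4j (illustrative schema)
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))
with driver.session() as session:
    result = session.run(
        "MATCH (c:Customer)-[:FRIEND_OF]->(f:Customer) "
        "WHERE c.email IN $emails RETURN DISTINCT f.email AS friend",
        emails=emails)
    friends = [record["friend"] for record in result]
```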

Minimal Total Weighted Tardiness in Tight-Tardy Single Machine Preemptive Idling-Free Scheduling

Abstract

Two possibilities of obtaining the minimal total weighted tardiness in tight-tardy single machine preemptive idling-free scheduling are studied. The Boolean linear programming model, which yields the exactly minimal tardiness, becomes too time-consuming as either the number of jobs or the number of job parts increases. Therefore, a heuristic based on remaining available and processing periods is used instead. The heuristic always schedules 2 jobs with the minimal tardiness. In scheduling 3 to 7 jobs, the risk of missing the minimal tardiness is just 1.5 % to 3.2 %. It is expected that scheduling 12 or more jobs carries at most the same risk, or an even lower one. In scheduling 10 jobs without a timeout, the heuristic is almost 1 million times faster than the exact model. The exact model is still applicable for scheduling 3 to 5 jobs, where the average computation time varies from 0.1 s to 1.02 s. However, the maximal computation time for 6 jobs is close to 1 minute. Further increasing the number of jobs may delay obtaining the minimal tardiness by at least a few minutes, but 7 jobs can still be scheduled within 7 minutes at worst. When scheduling 8 or more jobs, the exact model should be substituted with the heuristic.
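
For reference, the objective is total weighted tardiness: the sum of w_j * max(0, C_j - d_j) over jobs j with completion times C_j and due dates d_j. The sketch below evaluates that objective and runs a simple unit-time greedy stand-in (least remaining slack first, weight as tie-breaker); it is not the paper's exact heuristic, only an illustration of the preemptive idling-free setting on an invented instance:

```python
def weighted_tardiness(schedule, jobs):
    """Total weighted tardiness: sum of w * max(0, completion - due)."""
    completion = {}
    for t, j in enumerate(schedule, start=1):
        completion[j] = t  # last unit time slot in which job j runs
    return sum(w * max(0, completion[j] - d) for j, (p, d, w) in jobs.items())

def greedy_schedule(jobs):
    """Unit-time preemptive idling-free greedy: each slot runs the unfinished
    job with the least remaining slack, higher weight breaking ties."""
    remaining = {j: p for j, (p, d, w) in jobs.items()}
    schedule, t = [], 0
    while any(remaining.values()):
        t += 1
        ready = [j for j, r in remaining.items() if r > 0]
        j = min(ready, key=lambda j: (jobs[j][1] - t - remaining[j], -jobs[j][2]))
        schedule.append(j)
        remaining[j] -= 1
    return schedule

# jobs: id -> (processing time, due date, weight); toy instance
jobs = {1: (2, 3, 2.0), 2: (3, 4, 1.0), 3: (1, 2, 3.0)}
s = greedy_schedule(jobs)
print(s, weighted_tardiness(s, jobs))
```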
