The evolution of the software development process and the increasing complexity of software systems call for developers to pay close attention to the evolution of CASE tools for software development. This, in turn, drives the appearance of a new wave (or new generation) of such CASE tools. The authors of the paper have been working on the development of the so-called two-hemisphere model-driven approach and its supporting BrainTool for the past 10 years. This paper is a step forward in the research on the ability to use the two-hemisphere model-driven approach for system modelling at the problem domain level and to generate UML diagrams and software code from the two-hemisphere model. The paper discusses the usage of an anemic domain model instead of a rich domain model and presents the main principle of transforming the two-hemisphere model into the former.
The achievement of high-precision segmentation in medical image analysis has been an active direction of research over the past decade. Significant success in medical imaging tasks has become feasible due to the employment of deep learning methods, including convolutional neural networks (CNNs). Convolutional architectures have mostly been applied to homogeneous medical datasets with separate organs. Nevertheless, the segmentation of volumetric medical images of several organs remains an open question. In this paper, we investigate fully convolutional neural networks (FCNs) and propose a modified 3D U-Net architecture devoted to the processing of computed tomography (CT) volumetric images in automatic semantic segmentation tasks. To benchmark the architecture, we utilised the differentiable Sørensen-Dice similarity coefficient (SDSC) as a validation metric and optimised it on the training data by minimising the loss function. Our hand-crafted architecture was trained and tested on a manually compiled dataset of CT scans. The improved 3D U-Net architecture achieved an average SDSC score of 84.8 % on the testing subset across multiple abdominal organs. We also compared our architecture with recognised state-of-the-art results and demonstrated that 3D U-Net based architectures could achieve competitive performance and efficiency in the multi-organ segmentation task.
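A differentiable Sørensen-Dice coefficient of the kind described above is commonly implemented as a "soft" Dice score whose complement serves as the training loss. A minimal NumPy sketch of this idea follows; the exact formulation and smoothing constant used by the authors are assumptions.

```python
import numpy as np

def soft_dice(pred, target, eps=1e-6):
    """Differentiable soft Sørensen-Dice similarity coefficient (SDSC).

    pred and target are arrays of the same shape with values in [0, 1]:
    pred would hold network probabilities, target a binary ground-truth mask.
    eps is a small smoothing constant (an assumed value) that avoids
    division by zero on empty masks.
    """
    intersection = np.sum(pred * target)
    return (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

def dice_loss(pred, target):
    # Minimising (1 - SDSC) maximises overlap with the ground truth.
    return 1.0 - soft_dice(pred, target)

# Perfect overlap gives SDSC = 1 and loss = 0.
mask = np.array([[0, 1], [1, 1]], dtype=float)
print(round(soft_dice(mask, mask), 4))  # → 1.0
```

In practice the same expression would be written with the framework's tensor operations so that gradients flow through it during training.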
In this day and age, access to the Internet has become very easy, thereby making access to different educational resources posted on the cloud even easier. Open access to resources such as research journals, publications, articles in periodicals, etc. is restricted to retain their authenticity and integrity, as well as to track and record their usage in the form of citations. This gives the author of the resource their fair share of credibility in the community, but this may not be the case with open educational resources such as lecture notes, presentations, test papers, reports, etc. that are produced and used internally within an organisation or multiple organisations. This calls for the need to build a system that stores a permanent and immutable repository of these resources in addition to keeping a track record of who utilises them. Keeping the above-mentioned problem in mind, the present research attempts to explore how a Blockchain-based system called Block-ED can be used to help the educational community manage their resources in a way that avoids any unauthorised manipulations or alterations to the documents, as well as recognise how this system can provide an innovative method of giving credibility to the creator of the resource whenever it is utilised.
Uniform multi-dimensional designs of experiments for effective research in computer modelling are in high demand. Combinations of several one-dimensional quasi-random sequences with a uniform distribution are used to create designs with high homogeneity, but their optimal choice is a separate problem whose solution is not trivial. It is believed that the best results are currently achieved using Sobol’s LPτ-sequences, but this does not hold for all combinations of them. The authors propose the creation of effective uniform designs with guaranteed, acceptably low discrepancy using recursive Rd-sequences, which do not require additional research to find successful combinations of the vector sets distributed in a unit hypercube. The authors performed a comparative analysis of both approaches using indicators of centred and wrap-around discrepancy, as well as graphical visualisation based on Voronoi diagrams. The conclusion was drawn that the proposed approach is of practical use in cases where the design requirements allow settling for a variant that is not ideal but close to it, with low discrepancy, obtained automatically without additional research.
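The recursive Rd-sequence mentioned above is usually constructed from powers of a generalised golden ratio φ, the unique positive root of x^(d+1) = x + 1, with the i-th coordinate of the n-th point taken as the fractional part of n/φ^i. A short NumPy sketch under that assumed construction:

```python
import numpy as np

def rd_sequence(n_points, dim):
    """Quasi-random R_d sequence of n_points in the unit hypercube [0, 1)^dim.

    Assumed construction: phi is the unique positive root of
    x**(dim+1) = x + 1, and point n has coordinates frac(n / phi**i).
    """
    # The fixed-point iteration x <- (1 + x)**(1/(dim+1)) converges to phi
    # (for dim = 1 this is the golden ratio, for dim = 2 the plastic number).
    phi = 2.0
    for _ in range(64):
        phi = (1.0 + phi) ** (1.0 / (dim + 1))
    alpha = (1.0 / phi) ** np.arange(1, dim + 1)
    n = np.arange(1, n_points + 1).reshape(-1, 1)
    return (n * alpha) % 1.0

points = rd_sequence(1024, 2)  # 1024 well-spread points in the unit square
```

No search over sequence combinations is needed: the construction is fully determined by the dimension, which is the automatic character referred to in the abstract.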
The foundational features of multi-agent systems are communication and interaction with other agents. To achieve these features, agents have to transfer messages in a predefined format and semantics. The communication among these agents takes place with the help of an ACL (Agent Communication Language). ACL is a predefined language for communication among agents that has been standardised by FIPA (the Foundation for Intelligent Physical Agents). FIPA-ACL defines different performatives for communication among agents. These performatives are generic, and it becomes computationally expensive to use them for a specific domain like e-commerce; they do not define the exact meaning of communication for any such specific domain. In the present research, we introduce new performatives specifically for the e-commerce domain. Our designed performatives are based on FIPA-ACL so that they can still support communication within diverse agent platforms. The proposed performatives are helpful in modelling e-commerce negotiation protocol applications using the paradigm of multi-agent systems for efficient communication. For exact semantic interpretation of the proposed performatives, we also performed formal modelling of these performatives using BNF. The primary objective of our research was to provide the negotiation facility to agents working in the e-commerce domain in a succinct way, reducing the number of negotiation messages, time consumption, and network overhead on the platform. We used an e-commerce based bidding case study among agents to demonstrate the efficiency of our approach. The results showed a considerable reduction in the total time required for the bidding process.
Detection of local text reuse is central to a variety of applications, including plagiarism detection, origin detection, and information flow analysis. This paper evaluates and compares the effectiveness of fingerprint selection algorithms for the source retrieval stage of local text reuse detection. In total, six algorithms are compared – Every p-th, 0 mod p, Winnowing, Hailstorm, Frequency-biased Winnowing (FBW), as well as a proposed modified version of FBW (MFBW).
Most of the previously published studies in local text reuse detection are based on datasets containing either artificially generated, long-sized, or unobfuscated text reuse. In this study, to evaluate the performance of the algorithms, a new dataset has been built containing real text reuse cases from bachelor’s and master’s theses (written in English in the field of computer science), where about half of the cases involve less than 1 % of document text, while about two-thirds of the cases involve paraphrasing.
In the performed experiments, the overall best detection quality is reached by Winnowing, 0 mod p, and MFBW. The proposed MFBW algorithm is a considerable improvement over FBW and becomes one of the best-performing algorithms.
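For reference, the classic Winnowing selection works over the hashes of overlapping k-grams: from every window of w consecutive hashes it keeps the minimal hash (the rightmost one on ties), recorded once per position. A minimal Python sketch of this baseline follows; the paper's tuned parameters and the FBW/MFBW variants are not reproduced here.

```python
def kgram_hashes(text, k):
    """Hash every overlapping k-gram of the text."""
    return [hash(text[i:i + k]) for i in range(len(text) - k + 1)]

def winnow(hashes, w):
    """Winnowing fingerprint selection: the minimal hash of each window
    of w consecutive hashes, taking the rightmost minimum on ties,
    deduplicated by document position."""
    fingerprints = set()
    for i in range(len(hashes) - w + 1):
        window = hashes[i:i + w]
        m = min(window)
        j = i + w - 1 - window[::-1].index(m)  # rightmost occurrence of m
        fingerprints.add((m, j))
    return sorted(fingerprints)

# Small worked example on plain integers instead of real k-gram hashes:
print(winnow([3, 1, 4, 1, 5, 9, 2], 3))  # → [(1, 1), (1, 3), (2, 6)]
```

Because the same minimum is typically shared by adjacent windows, only a fraction of all hashes is selected, which is what makes the fingerprint index compact for source retrieval.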
Deep learning is a new branch of machine learning, which is widely used by researchers in many artificial intelligence applications, including signal processing and computer vision. The present research investigates the use of deep learning to solve the hand gesture recognition (HGR) problem and proposes two models using deep learning architectures. The first model comprises a convolutional neural network (CNN) and a recurrent neural network with long short-term memory (RNN-LSTM). The accuracy of the model reaches up to 82 % when fed with the colour channel, and 89 % when fed with the depth channel. The second model comprises two parallel convolutional neural networks, which are merged by a merge layer, and a recurrent neural network with long short-term memory fed by RGB-D data. The accuracy of the latter model reaches up to 93 %.
Predicting the stock market remains a challenging task due to the numerous influencing factors such as investor sentiment, firm performance, economic factors, and social media sentiment. However, the profitability and economic advantage associated with accurate prediction of stock prices draw the interest of academics and economic and financial analysts into researching this field. Despite the improvement in stock prediction accuracy, the literature argues that prediction accuracy can be further improved beyond its current measure by looking for newer information sources, particularly on the Internet. Using web news, financial tweets posted on Twitter, Google Trends, and forum discussions, the current study examines the association between public sentiment and the predictability of future stock price movement using an Artificial Neural Network (ANN). We evaluated the proposed predictive framework on stock data obtained from the Ghana Stock Exchange (GSE) between January 2010 and September 2019 and predicted the future stock value for time windows of 1 day, 7 days, 30 days, 60 days, and 90 days. We observed an accuracy of 49.4–52.95 % based on Google Trends, 55.5–60.05 % based on Twitter, 41.52–41.77 % based on forum posts, 50.43–55.81 % based on web news, and 70.66–77.12 % based on a combined dataset. Thus, we recorded an increase in prediction accuracy as several stock-related data sources were combined as input to our prediction model. We also established a high level of direct association between stock market behaviour and social networking sites. Therefore, based on the study outcome, we suggest that stock market investors could utilise information from web financial news, tweets, forum discussions, and Google Trends to effectively perceive future stock price movement and design effective portfolio/investment plans.
Deconvolutional neural networks are a very accurate tool for semantic image segmentation. Segmenting curvilinear meandering regions is a typical task in computer vision applied to navigational, civil engineering, and defence problems. In the study, such regions of interest are modelled as meandering transparent stripes whose width is not constant. The stripe on the white background is formed by upper and lower non-parallel black curves so that the upper and lower image parts are completely separated. An algorithm for generating datasets of such regions is developed. It is revealed that deeper networks segment the regions more accurately. However, the segmentation is harder when the regions become bigger. This is why an alternative method of region segmentation, which segments the upper and lower image parts and subsequently unifies the results, is not effective. If the region of interest becomes bigger, it must be squeezed in order to avoid segmenting the empty image. Once the squeezed region is segmented, the image is rescaled back to the original view. To control the accuracy, the mean BF score, which has the lowest value among the accuracy indicators, should be maximised first.
To distinguish individuals with dangerous abnormal behaviour from the crowd, human characteristics (e.g., speed and direction of motion, interaction with other people), crowd characteristics (such as flow and density), the space available to individuals, etc. must be considered. The paper proposes an approach that considers individual and crowd metrics to determine anomalies. An individual’s abnormal behaviour alone cannot indicate a threat toward other individuals, as this behaviour can also be triggered by positive emotions or events. To exclude individuals whose abnormal behaviour is potentially unrelated to aggression and is not dangerous to the environment, it is suggested to use the emotional state of individuals. The aim of the proposed approach is to automate video surveillance systems by enabling them to automatically detect potentially dangerous situations.