In this paper, we focus on the time-varying shortest path problem in which the transit costs are fuzzy numbers. Moreover, we consider the variant in which transit times can be shortened at a fuzzy speedup cost. Paying to speed up some arcs may therefore be part of the best decision when finding the shortest path from a source vertex to a specified destination vertex.
Recurrent neural networks (RNNs) have been successfully applied to various sequential decision-making tasks, natural language processing applications, and time-series predictions. Such networks are usually trained with back-propagation through time (BPTT), which is prohibitively expensive, especially as the length of the time dependencies and the number of hidden neurons increase. To reduce the training time, extreme learning machines (ELMs) have recently been applied to RNN training, reaching a 99% speedup on some applications. Due to its non-iterative nature, ELM training, when parallelized, has the potential to reach higher speedups than BPTT.
In this work, we present Opt-PR-ELM, an optimized parallel RNN training algorithm based on ELM that takes advantage of GPU shared memory and of parallel QR factorization algorithms to reach optimal solutions efficiently. A theoretical analysis of the proposed algorithm is presented for six RNN architectures, including LSTM and GRU, and its performance is tested empirically on ten time-series prediction applications. Opt-PR-ELM is shown to reach up to a 461-fold speedup over its sequential counterpart and to require up to 20 times less training time than parallel BPTT. Such high speedups over new-generation CPUs are crucial in real-time applications and IoT environments.
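The non-iterative ELM step described above can be illustrated with a minimal sketch: hidden recurrent states are generated with fixed random weights, and only the output weights are fit in one shot by a QR-based least-squares solve. All names, sizes, and weight scalings below are illustrative assumptions, not details of Opt-PR-ELM itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_rnn_states(x, n_hidden=32):
    """Run a simple recurrent layer with FIXED random weights over a 1-D series.

    In ELM-style training these weights are never updated; only the readout is fit.
    """
    W_in = rng.standard_normal(n_hidden) * 0.5
    # Scale the recurrent matrix so its spectral radius is roughly 0.9 (< 1),
    # keeping the state sequence stable.
    W_rec = rng.standard_normal((n_hidden, n_hidden)) * (0.9 / np.sqrt(n_hidden))
    h = np.zeros(n_hidden)
    states = []
    for x_t in x:
        h = np.tanh(W_in * x_t + W_rec @ h)
        states.append(h.copy())
    return np.array(states)

def fit_output_weights(H, y):
    """Solve min ||H @ beta - y|| via QR factorization -- the non-iterative step."""
    Q, R = np.linalg.qr(H)          # reduced QR: H = Q R
    return np.linalg.solve(R, Q.T @ y)

# One-step-ahead prediction on a sine wave.
t = np.linspace(0, 8 * np.pi, 400)
series = np.sin(t)
H = elm_rnn_states(series[:-1])      # hidden states for inputs x_1..x_{T-1}
beta = fit_output_weights(H, series[1:])
pred = H @ beta
```

Because the readout fit is a single dense least-squares problem, it parallelizes naturally on a GPU, which is the property the paper exploits.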
The aim of this paper is to investigate dense linear algebra algorithms on shared-memory multicore architectures. The design and implementation of a parallel tiled WZ factorization algorithm that can fully exploit such architectures are presented. Three parallel implementations of the algorithm are studied. The first relies only on multithreaded BLAS (basic linear algebra subprograms) operations. The second, in addition to BLAS operations, employs the OpenMP standard for loop-level parallelism. The third, in addition to BLAS operations, employs the OpenMP task directive with the depend clause. We report the computational performance and speedup of the parallel tiled WZ factorization algorithm on shared-memory multicore architectures for dense square diagonally dominant matrices. We then compare our parallel implementations with the corresponding LU factorization from a vendor-implemented LAPACK library, and we analyze the numerical accuracy. Two of our implementations achieve close to the maximal theoretical speedup implied by Amdahl's law.
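The Amdahl's-law bound invoked above is easy to state concretely: a program whose parallelizable fraction is p can be sped up on n cores by at most 1 / ((1 - p) + p / n). The fraction and core count below are illustrative numbers, not figures from the paper.

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Maximum speedup under Amdahl's law for a program whose
    parallelizable share of the work is parallel_fraction."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Example: a factorization that is 95% parallel, run on 8 cores,
# can be at most about 5.9x faster than the serial version.
bound = amdahl_speedup(0.95, 8)
```

"Near-maximal speedup" in the abstract's sense means the measured speedup approaches this bound for the machine's core count.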
With the rapid development of electronic, network, and cloud computing technology, data volumes have grown massively, and we have entered the era of big data. Based on cloud computing clusters, this paper proposes a novel method for the parallel implementation of multilayer neural networks using MapReduce. To meet the requirements of big data processing, we present an efficient mapping scheme for a fully connected multilayer neural network trained with the error back-propagation (BP) algorithm on MapReduce over cloud computing clusters (MRBP). Batch (epoch) training is realized by partitioning the samples across the cluster: the partitions are processed separately, and the weight updates are aggregated and iterated until convergence. The running times of the parallel BP algorithm on the cluster and of a serial BP algorithm on a uniprocessor are derived. Performance parameters, such as the speedup and the optimal and minimum numbers of data nodes, are evaluated for the parallel BP algorithm on the cluster. Experimental results demonstrate that the proposed parallel BP algorithm achieves better speedup, a faster convergence rate, and fewer iterations than existing algorithms.
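The partition-then-aggregate scheme described above can be sketched as a map step (per-partition gradient sums from a forward/backward pass) and a reduce step (summation), followed by one synchronized weight update. This is a minimal single-machine sketch of the idea, not the paper's MRBP implementation; the tiny one-hidden-layer network and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((3, 5)) * 0.1   # input -> hidden weights
W2 = rng.standard_normal((5, 1)) * 0.1   # hidden -> output weights

def map_gradients(X, y):
    """'Map' step: forward and backward pass over one data partition,
    returning the gradient sums for both weight matrices (squared-error loss)."""
    H = np.tanh(X @ W1)
    out = H @ W2
    err = out - y                               # dLoss/dout
    g2 = H.T @ err                              # gradient w.r.t. W2
    g1 = X.T @ ((err @ W2.T) * (1 - H ** 2))    # gradient w.r.t. W1
    return g1, g2

def reduce_gradients(partials):
    """'Reduce' step: sum the per-partition gradient contributions."""
    g1 = sum(p[0] for p in partials)
    g2 = sum(p[1] for p in partials)
    return g1, g2

# One epoch over three partitions of a 90-sample batch.
X = rng.standard_normal((90, 3))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)
parts = [(X[i:i + 30], y[i:i + 30]) for i in range(0, 90, 30)]
g1, g2 = reduce_gradients([map_gradients(Xp, yp) for Xp, yp in parts])
full_g1, full_g2 = map_gradients(X, y)   # full-batch gradients, for comparison

# Single synchronized update after the reduce, as in batch training.
lr = 0.01 / len(X)
W1 -= lr * g1
W2 -= lr * g2
```

Because gradients are additive over samples, the reduced per-partition sums equal the full-batch gradient, which is what makes batch BP a natural fit for MapReduce.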
In recent decades, experimental particle physics has developed rapidly, thanks in part to the growing capacity of computers. This has made it possible to probe the structure of matter down to the level of the quark-gluon plasma of the strong interaction, and experimental evidence has supported the theory by confirming predicted results. Since the field's inception, researchers have been interested in track reconstruction. We studied the jet browser model, which was developed for a 4π calorimeter. This method works on the measured data set, which contains the coordinates of interaction points in the detector space, and allows the trajectories of the final-state particles to be reconstructed. The total energy is kept constant, satisfying the Gauss law. Using GPUs, the evaluation of the model can be drastically accelerated: we achieved up to a 223-fold speedup compared to a CPU-based parallel implementation.
As part of an ongoing study of hydropower runner failure, a submerged, vibrating blade is investigated both experimentally and numerically. The numerical simulations performed are fully coupled acoustic-structural simulations in ANSYS Mechanical. In order to speed up the simulations, a model order reduction technique based on Krylov subspaces is implemented. This paper presents a comparison between the full ANSYS harmonic response and the reduced-order model, which shows excellent agreement. The speedup factor obtained by using the reduced-order model is between one and two orders of magnitude. The number of dimensions in the reduced subspace needed for accurate results is investigated and confirms what is found in other studies of similar model order reduction applications. In addition, experimental results are available for validation and show a good match when not too far from the resonance peak.
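The Krylov-subspace reduction used above can be sketched on a generic state-space model: an Arnoldi basis V built around an expansion frequency s0 projects a large system (A, b, c) down to a small one whose harmonic response matches the full one near s0. This is a minimal sketch of the technique, not the paper's coupled acoustic-structural model; the system matrices, sizes, and expansion point are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, s0 = 200, 10, 1.0    # full order, reduced order, expansion point

# A stable full-order model with poles spread along the negative real axis.
A = -np.diag(np.linspace(2.0, 50.0, n)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

def arnoldi(M, v0, k):
    """Arnoldi iteration: orthonormal basis of span{v0, M v0, ..., M^(k-1) v0}."""
    V = np.zeros((len(v0), k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(1, k):
        w = M @ V[:, j - 1]
        w -= V[:, :j] @ (V[:, :j].T @ w)   # Gram-Schmidt orthogonalization
        V[:, j] = w / np.linalg.norm(w)
    return V

# Moment matching at s0: build the Krylov space of (s0*I - A)^{-1}.
M = np.linalg.inv(s0 * np.eye(n) - A)
V = arnoldi(M, M @ b, k)

def response(s, A, b, c):
    """Transfer function H(s) = c^T (s I - A)^{-1} b."""
    return c @ np.linalg.solve(s * np.eye(len(b)) - A, b)

# Project to the k-dimensional reduced model and compare responses near s0.
Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c
s = 1j * 1.0
full = response(s, A, b, c)
red = response(s, Ar, br, cr)
```

Solving the k-by-k reduced system at each frequency instead of the n-by-n full one is the source of the one-to-two-order-of-magnitude speedup reported above; accuracy degrades away from the expansion point, consistent with the abstract's observation about behavior far from the resonance peak.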