Loop profiling tool for HPC code inspection as an efficient method of FPGA based acceleration
This paper presents research on FPGA-based acceleration of HPC applications. The key goal is to extract the code that can be sped up; a major obstacle is the lack of a tool that could do this automatically. HPC applications usually consist of a large amount of complex source code, which is one reason why the acceleration process should be automated as far as possible. Another reason is to make use of High Level Languages (HLLs) such as Mitrion-C (Mohl, 2006), which were invented to speed up the development of HPRC applications. Loop profiling is one of the steps needed to check whether inserting HLL code into an existing HPC source base can yield acceleration of these applications. Hence the most important step towards acceleration is to identify the most time-consuming code and its data dependencies, which make the code easier to pipeline and parallelize. Data dependency analysis also indicates how to implement the algorithms in an FPGA circuit with minimal initialization of the circuit during their execution.
Local Correlation and Entropy Maps as Tools for Detecting Defects in Industrial Images
The aim of this paper is to propose two methods of detecting defects in industrial products by an analysis of gray-level images with low contrast between the defects and their background. An additional difficulty is the high nonuniformity of the background in different parts of the same image. The first method is based on correlating subimages with a nondefective reference subimage and searching for pixels with low correlation. To speed up calculations, correlations are replaced by a map of locally computed inner products. The second approach does not require a reference subimage and is based on estimating local entropies and searching for areas with maximum entropy. A nonparametric estimator of local entropy is also proposed, together with its realization as a bank of RBF neural networks. The performance of both methods is illustrated with an industrial image.
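The two maps described in the abstract can be sketched directly in NumPy: a sliding inner product against a reference subimage, and a sliding histogram-based entropy estimate. This is a naive illustration under assumed conventions (intensities in [0, 1], a synthetic image with one noisy patch); the paper's nonparametric entropy estimator and its RBF-network realization are not reproduced here.

```python
import numpy as np

def inner_product_map(image, ref):
    """Map of inner products between each subimage and a reference subimage.

    A fast substitute for local correlation: defective areas show up as
    windows whose inner product with the nondefective reference deviates.
    """
    h, w = ref.shape
    H, W = image.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+h, j:j+w] * ref)
    return out

def local_entropy_map(image, h, w, bins=8):
    """Map of empirical (histogram-based) entropies of each h-by-w subimage.

    The reference-free alternative: defects show up as high-entropy areas.
    """
    H, W = image.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            p, _ = np.histogram(image[i:i+h, j:j+w], bins=bins, range=(0.0, 1.0))
            p = p / p.sum()
            p = p[p > 0]
            out[i, j] = -np.sum(p * np.log2(p))
    return out

# Synthetic test image: uniform background with one noisy (defective) patch.
rng = np.random.default_rng(0)
img = np.full((16, 16), 0.5)
img[6:10, 6:10] = rng.random((4, 4))

ipm = inner_product_map(img, np.full((4, 4), 0.5))
ent = local_entropy_map(img, 4, 4)
i, j = np.unravel_index(np.argmax(ent), ent.shape)
print(int(i), int(j))  # window with the highest local entropy
```

On the constant background every window has zero entropy, so the maximum-entropy window necessarily overlaps the noisy patch, which is the detection principle of the second method.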
Control Error Dynamic Modification as an Efficient Tool for Reduction of Effects Introduced by Actuator Constraints
A modification of digital controller algorithms, based on the introduction of a virtual reference value that never exceeds the active constraints on the actuator output, is presented and investigated for some algorithms used in single-loop control systems. This idea, derived from a virtual modification of the control error, can be used in digital control systems subject to both magnitude and rate constraints. The modification is introduced in the form of on-line adaptation to the control task. Hence the design of digital controller parameters that are optimal in a specified sense can be separated from the actuator constraints. The adaptation of the control algorithm to the actuator constraints is performed by a transformation of the control error and is equivalent to introducing a new, virtual reference value for the control system. An application of this approach is presented through examples of three digital control algorithms: the PID algorithm, the dead-beat controller and the state-space controller. In all cases, clear improvements in the transients are observed, which leads to some general conclusions on handling actuator constraints in control.
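One concrete realization of feeding the controller a modified (virtual) error under actuator constraints is the standard back-calculation scheme sketched below for a discrete PI(D)-type loop. The gains, limits, plant model and the back-calculation form are illustrative assumptions, not the paper's exact algorithm.

```python
class ConstrainedPID:
    """Discrete PI controller with actuator magnitude and rate constraints.

    The integrator sees a virtually modified error: a back-calculation term
    proportional to the constraint violation is added, so the controller's
    internal state stays consistent with what the constrained actuator can
    actually deliver. All parameter values below are illustrative.
    """
    def __init__(self, kp, ki, kt, u_min, u_max, du_max, dt=1.0):
        self.kp, self.ki, self.kt = kp, ki, kt
        self.u_min, self.u_max, self.du_max = u_min, u_max, du_max
        self.dt = dt
        self.i = 0.0        # integrator state
        self.u_prev = 0.0   # last applied (constrained) control

    def step(self, ref, y):
        e = ref - y
        u_raw = self.kp * e + self.i
        # Rate constraint first, then magnitude constraint.
        u = max(min(u_raw, self.u_prev + self.du_max), self.u_prev - self.du_max)
        u = max(min(u, self.u_max), self.u_min)
        # Virtual error modification: integrate the error plus the
        # back-calculated constraint violation (u - u_raw).
        self.i += self.dt * (self.ki * e + self.kt * (u - u_raw))
        self.u_prev = u
        return u

# First-order plant y[k+1] = 0.9*y[k] + 0.1*u[k], unit step reference.
pid = ConstrainedPID(kp=2.0, ki=0.5, kt=1.0, u_min=-1.0, u_max=1.0, du_max=0.5)
y = 0.0
for _ in range(50):
    u = pid.step(1.0, y)
    y = 0.9 * y + 0.1 * u
print(round(y, 2))
```

Without the back-calculation term the integrator would wind up while the actuator is saturated, producing the overshoot that the control-error modification is designed to remove.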
Ewaryst Rafajłowicz, Krystyn Styczeń and Wojciech Rafajłowicz
A modified filter SQP method as a tool for optimal control of nonlinear systems with spatio-temporal dynamics
Our aim is to adapt Fletcher's filter approach to solve optimal control problems for systems described by nonlinear Partial Differential Equations (PDEs) with state constraints. To this end, we propose a number of modifications of the filter approach, which are well suited for our purposes. Then, we discuss possible ways of cooperation between the filter method and a PDE solver, and one of them is selected and tested.
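The core of Fletcher's filter approach mentioned above is a dominance test on (objective, constraint-violation) pairs. The sketch below shows the textbook version of that test with an envelope; the constants and the simplified dominance rule are illustrative and are not the modifications proposed in the paper.

```python
def acceptable(point, filter_entries, beta=0.99, gamma=1e-3):
    """Fletcher-style filter acceptance test (simplified sketch).

    A trial point (f, h) -- objective value and constraint violation --
    is acceptable if, against every filter entry (f_i, h_i), it offers a
    sufficient decrease in either f or h; beta and gamma shape the
    envelope and are illustrative constants.
    """
    f, h = point
    return all(f <= fi - gamma * hi or h <= beta * hi
               for fi, hi in filter_entries)

def add_to_filter(point, filter_entries):
    """Add an accepted point and discard the entries it dominates."""
    f, h = point
    kept = [(fi, hi) for fi, hi in filter_entries if not (f <= fi and h <= hi)]
    kept.append(point)
    return kept

flt = [(10.0, 1.0), (5.0, 2.0)]
print(acceptable((4.0, 0.5), flt))   # improves on both entries
print(acceptable((12.0, 3.0), flt))  # worse f and worse h: rejected
```

In the PDE setting, each evaluation of (f, h) requires a call to the PDE solver, which is why the cooperation between the filter method and the solver deserves the separate discussion the abstract announces.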
Szymon Łukasik, Konrad Lalik, Piotr Sarna, Piotr A. Kowalski, Małgorzata Charytanowicz and Piotr Kulczycki
Rapid deployment of IT brings about new issues in software usability measurement. Usability is based on users' experience, is strongly subjective and has a qualitative character. Users' comfort is usually assessed by surveys in their daily work. The present article stems from an experimental study on the evaluation of the usability of tools by a rule-based system. The work suggests a robust computational model able to avoid the main problems arising in the experimental study (a large and poorly legible rule base) and to deal with the vagueness of IT user experience, different levels of skill and varying numbers of completed questionnaires in different departments. The computational model is based on three hierarchical levels of aggregation supported by fuzzy logic. Choices of the most suitable aggregation function at each level are advocated and illustrated with examples. The number of questions and the granularity of answers in this approach can be adjusted to each user group, which could reduce the response burden and errors. Finally, the paper briefly describes further possibilities of the suggested approach.
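The three-level aggregation can be sketched as follows: per-question answers are aggregated per user, user scores per department, and department scores overall. The particular aggregation functions (a weighted mean and an ordered weighted average), the score scale and the sample survey data are illustrative assumptions, not the choices advocated in the paper.

```python
def weighted_mean(values, weights):
    """Level 1: weighted mean of one user's answers, each scored in [0, 1]."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def owa(values, weights):
    """Ordered weighted average: weights apply to the *sorted* values,
    which lets the model emphasise pessimistic or optimistic answers."""
    return sum(v * w for v, w in zip(sorted(values, reverse=True), weights))

def usability_score(departments):
    """Three-level aggregation: answers -> user -> department -> overall.

    departments maps a department name to a list of per-user answer lists.
    """
    dept_scores = []
    for users in departments.values():
        user_scores = [weighted_mean(a, [1.0] * len(a)) for a in users]
        # Level 2: OWA over users; uniform weights here for simplicity.
        w = [1.0 / len(user_scores)] * len(user_scores)
        dept_scores.append(owa(user_scores, w))
    # Level 3: plain mean over departments.
    return sum(dept_scores) / len(dept_scores)

survey = {
    "dev": [[0.8, 0.9, 0.7], [0.6, 0.7, 0.8]],
    "ops": [[0.4, 0.5, 0.6]],
}
print(round(usability_score(survey), 3))
```

Because each level aggregates independently, departments with different question counts or answer granularities can still be combined into one overall score, which is the property the abstract highlights.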
Depeng Gao, Jiafeng Liu, Rui Wu, Dansong Cheng, Xiaopeng Fan and Xianglong Tang
Piotr Szymczyk, Sylwia Tomecka-Suchoń and Magdalena Szymczyk
In this article a new neural-network-based method for the automatic classification of ground penetrating radar (GPR) traces is proposed. The presented approach relies on a new representation of GPR signals by polynomial approximation. The coefficients of the polynomial (the feature vector) are the neural network inputs for the automatic classification of a special kind of geological structure: a sinkhole. The analysis and results show that the classifier can effectively distinguish sinkholes from other geological structures.
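The feature-extraction step can be sketched as a least-squares polynomial fit whose coefficients form the classifier's input vector. The polynomial degree, the synthetic traces and their shapes below are purely illustrative assumptions; real GPR traces and the paper's network architecture are not reproduced.

```python
import numpy as np

def polynomial_features(trace, degree=6):
    """Approximate a 1-D GPR trace by a polynomial over a normalized axis;
    the fitted coefficients are the feature vector (degree is illustrative)."""
    t = np.linspace(-1.0, 1.0, len(trace))
    return np.polyfit(t, trace, degree)

# Two synthetic traces standing in for real GPR signals: a sinkhole-like
# dip versus a nearly flat background (hypothetical shapes).
t = np.linspace(-1.0, 1.0, 128)
sinkhole = -np.exp(-10 * t**2)
background = 0.05 * t

f1 = polynomial_features(sinkhole)
f2 = polynomial_features(background)
print(f1.shape)  # degree + 1 coefficients per trace
```

A fixed-length coefficient vector like this is what makes traces of arbitrary sampling length usable as inputs to a neural network classifier; here the two feature vectors are clearly separated in coefficient space.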
Łukasz Hładowski, Błażej Cichy, Krzysztof Gałkowski and Eric Rogers