Abstract

A web-based browser fingerprint (or device fingerprint) is a tool used to identify and track user activity in web traffic. It is also used to identify computers that abuse online advertising and to prevent credit card fraud. A device fingerprint is created by extracting multiple parameter values from a browser API (e.g. the operating system type or the browser version). The acquired parameter values are then combined into a hash using a hash function. The disadvantage of this method is its high susceptibility to small, naturally occurring changes (e.g. a change of the browser version number or the screen resolution). Minor changes in the input values generate a completely different fingerprint hash, making it impossible to find similar fingerprints in the database. On the other hand, omitting these unstable values when creating the hash significantly limits the fingerprint's ability to distinguish between devices. This weak point is commonly exploited by fraudsters, who knowingly evade this form of protection by deliberately changing the values of device parameters. The paper presents methods that significantly limit this type of activity. New algorithms for coding and comparing fingerprints are presented, in which parameters with low stability and low entropy are explicitly taken into account. The fingerprint generation methods are based on the popular MinHash, LSH, and autoencoder methods. The coding and comparison effectiveness of each presented method was also examined against the currently used hash generation method. Authentic device and browser data from users visiting 186 different websites were collected for the research.
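
The paper's own coding schemes are not reproduced here, but since MinHash is named as one of their building blocks, a minimal MinHash sketch (in Python, with illustrative attribute names and an arbitrarily chosen 64-value signature length) shows why such signatures tolerate small attribute changes where a plain hash does not:

import hashlib


def minhash_signature(attributes, num_hashes=64):
    """Compute a MinHash signature for a set of fingerprint attributes.

    Unlike a single cryptographic hash, two signatures stay mostly
    equal when only a few attributes change, so similar fingerprints
    can still be matched in a database.
    """
    signature = []
    for seed in range(num_hashes):
        # Each seed defines an independent hash function; keep the
        # minimum hash value over all attributes.
        min_val = min(
            int.from_bytes(
                hashlib.sha1(f"{seed}:{attr}".encode()).digest()[:8], "big"
            )
            for attr in attributes
        )
        signature.append(min_val)
    return signature


def signature_similarity(sig_a, sig_b):
    """Fraction of matching positions estimates the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)


# Two fingerprints differing only in the browser version still match closely.
fp1 = {"os:Windows 10", "browser:Firefox 91", "screen:1920x1080", "tz:UTC+1"}
fp2 = {"os:Windows 10", "browser:Firefox 92", "screen:1920x1080", "tz:UTC+1"}
print(signature_similarity(minhash_signature(fp1), minhash_signature(fp2)))

Changing one attribute flips only a fraction of the signature positions, which is what makes similarity search possible where a single cryptographic hash would change entirely.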

Abstract

Air quality prediction in urban areas is of great significance for controlling air pollution and protecting public health. Prediction of air quality at monitoring stations is well studied in the existing research. However, monitoring stations are insufficient in most cities, and air quality varies dramatically from one place to another due to complex factors. A novel model is established in this paper to estimate and predict the Air Quality Index (AQI) of areas without monitoring stations in Nanjing. The proposed model predicts the AQI of a non-monitoring area in both the temporal and the spatial dimension. The temporal model is presented first; it is based on an enhanced k-Nearest Neighbor (KNN) algorithm that predicts AQI values among monitoring stations, and the acceptability of its results reaches 92% for one-hour prediction. To forecast the evolution of air quality in the spatial dimension, a Back Propagation neural network (BP) that takes geographical distance into account is employed. Furthermore, to improve the accuracy and adaptability of the spatial model, the similarity of topological structure is introduced. Finally, the combined temporal-spatial model is built and its adaptability is tested on a specific non-monitoring site, the Jiulonghu Campus of Southeast University. The results demonstrate that the acceptability reaches 73.8% on average. The paper provides strong evidence that the proposed non-parametric, data-driven approach to air quality forecasting yields promising results.
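
The enhanced KNN and the BP network themselves are not specified in the abstract; purely as an illustration of the spatial-interpolation idea, here is a minimal inverse-distance-weighted KNN sketch with hypothetical station coordinates and readings:

import numpy as np


def knn_aqi_estimate(target_xy, station_xy, station_aqi, k=3):
    """Estimate AQI at an unmonitored location from the k nearest
    stations, weighting each neighbour by inverse distance."""
    dists = np.linalg.norm(station_xy - target_xy, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-9)  # avoid division by zero
    return np.average(station_aqi[nearest], weights=weights)


# Hypothetical station coordinates (km) and hourly AQI readings.
stations = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0], [6.0, 6.0]])
aqi = np.array([82.0, 95.0, 74.0, 110.0])
print(knn_aqi_estimate(np.array([2.0, 2.0]), stations, aqi))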

Abstract

This paper presents a local modification of the Levenberg-Marquardt (LM) algorithm. First, the mathematical basics of the classic LM method are shown. The classic LM algorithm is very efficient for learning small neural networks; for bigger neural networks, however, its computational complexity grows so significantly that the method becomes practically inefficient. In order to overcome this limitation, a local modification of the LM algorithm is introduced in this paper. The main goal is to develop a more computationally efficient modification of the LM method by using local computation. The introduced modification has been tested on function approximation and classification benchmarks, and the obtained results have been compared with the performance of the classic LM method. The paper shows that the local modification significantly improves the algorithm's performance for bigger networks. Several proposals for future work are suggested.
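
The local modification is the paper's own contribution and is not reproduced here; as background for why it matters, the following is a minimal sketch of one classic LM step on a least-squares problem. The cost of forming and factoring J^T J is exactly what grows prohibitively with the number of parameters; the toy exponential model is an illustrative assumption:

import numpy as np


def lm_step(residual_fn, jacobian_fn, params, lam):
    """One classic Levenberg-Marquardt update for a least-squares problem.

    Solves (J^T J + lam * I) delta = -J^T r; forming and factoring
    J^T J dominates the per-iteration cost for large parameter counts.
    """
    r = residual_fn(params)
    J = jacobian_fn(params)
    A = J.T @ J + lam * np.eye(len(params))
    delta = np.linalg.solve(A, -J.T @ r)
    return params + delta


# Toy example: fit y = a * exp(b * x) to noisy data.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x) + 0.01 * np.random.randn(20)
residual = lambda p: p[0] * np.exp(p[1] * x) - y
jacobian = lambda p: np.stack(
    [np.exp(p[1] * x), p[0] * x * np.exp(p[1] * x)], axis=1
)

p = np.array([1.0, 1.0])
for _ in range(20):
    p = lm_step(residual, jacobian, p, lam=1e-2)
print(p)  # approaches [2.0, 1.5]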

Abstract

A training set consists of many features that influence the classifier to different degrees. Choosing the most important features and rejecting those that do not carry relevant information is of great importance to the operation of the learned model. In the case of data streams, the importance of the features may additionally change over time. Such changes affect the performance of the classifier but can also be an important indicator of an occurring concept drift. In this work, we propose a new algorithm for data stream classification, called Random Forest with Features Importance (RFFI), which uses the measure of feature importance as a drift detector. The RFFI algorithm adapts solutions inspired by the Random Forest algorithm to data stream scenarios. The proposed algorithm combines the ability of ensemble methods to handle slow changes in a data stream with a new method for detecting the occurrence of concept drift. The work contains an experimental analysis of the proposed algorithm, carried out on synthetic and real data.
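
RFFI's exact detector is not described in the abstract; the following is only a minimal sketch of the general idea, comparing the feature-importance vectors of forests trained on consecutive stream windows. The window size, threshold, and L1 distance are illustrative assumptions, not the paper's design:

import numpy as np
from sklearn.ensemble import RandomForestClassifier


def importance_drift(X_old, y_old, X_new, y_new, threshold=0.15):
    """Flag a possible concept drift when the feature-importance
    profile of a forest trained on the new window diverges from the
    profile of the previous window."""
    rf_old = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_old, y_old)
    rf_new = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_new, y_new)
    # L1 distance between the two importance vectors.
    shift = np.abs(rf_old.feature_importances_ - rf_new.feature_importances_).sum()
    return shift > threshold, shift


# Synthetic stream: the informative feature switches from column 0 to column 1.
rng = np.random.default_rng(0)
X1 = rng.normal(size=(500, 3))
y1 = (X1[:, 0] > 0).astype(int)
X2 = rng.normal(size=(500, 3))
y2 = (X2[:, 1] > 0).astype(int)
print(importance_drift(X1, y1, X2, y2))  # (True, ...)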

Abstract

In real-world approximation problems, precise input data are economically expensive. Therefore, fuzzy methods devoted to uncertain data are a focus of current research. Consequently, a method based on fuzzy-rough sets for the fuzzification of inputs in a rule-based fuzzy system is discussed in this paper. A triangular membership function is applied to describe the nature of the imprecision in the data. First, triangular fuzzy partitions are introduced to approximate common antecedent fuzzy rule sets. As a consequence of the proposed method, we obtain the structure of a general (non-interval) type-2 fuzzy logic system in which the secondary membership functions are cropped triangular. Then, the possibility of applying so-called regular triangular norms is discussed. Finally, an experimental system, constructed on precise data and then transformed and verified on uncertain data, is provided to demonstrate the method's basic properties.
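
As a small illustration of the fuzzification step described above (the reading and spread values below are arbitrary), a triangular membership function assigns each crisp value a degree of membership in the imprecise input:

def triangular_mf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)


# Fuzzified input: an imprecise reading of 5.2 described by a triangle
# centred on the measurement, with a spread reflecting its uncertainty.
reading, spread = 5.2, 0.5
print(triangular_mf(5.0, reading - spread, reading, reading + spread))  # 0.6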

Abstract

The electrohydrodynamic (EHD) flow induced by a corona discharge has an important influence on the movement and collection of fine particles in an electrostatic precipitator (ESP). In this paper, three-dimensional particle image velocimetry (3D-PIV) is used to investigate the impact of different primary flow velocities and applied voltages on the diffusion and transport of the EHD flow produced by a spiked tubular electrode corona discharge in a wide-type electrostatic precipitator. In order to measure the flow characteristics at different positions of the spiked tubular electrode, the PIV measurements are carried out in several cross-sectional planes along the ESP duct. The 2D flow streamlines show that in plane 1 (where the tip of the spike is oriented in the direction of the primary flow), the velocity of the counter-clockwise vortex caused by the EHD flow near the plate decreases as the primary flow velocity increases. In plane 3 (where the tip direction is opposite to the primary flow), however, two vortices rotate in opposite directions, and the velocity of the clockwise vortex near the plate increases as the primary flow velocity increases. An increasing flow velocity near the plate makes particles deposited on the plate easier to re-entrain. The three-dimensional analysis of the flow field shows that the three observation planes contain mainly an "ascending vortex" and a downward tip jet. There is a discrepancy (in terms of distribution region and velocity magnitude) between the three-dimensional characteristics of these vortices and tip jets in the different cross-sectional planes.

Abstract

An active-vision process based on the affine invariance of the ratio of triangle areas is presented for reconstructing 3D objects. First, a plate carrying a triangle array is designed to lie in the same plane as the planar laser. The image of the plate is rectified from the projective space to the affine space using the image of the line at infinity. The laser point and the centroids of the triangles then constitute a new triangle that bridges the affine space and the original Euclidean space. The object coordinates are solved using the invariance of the triangle area ratio under the affine transformation. Finally, the reconstruction accuracy under various measurement conditions is verified by experiments. Analyses of the influence of the number of line pairs and of the accuracy of the extracted point pixels are provided with the experimental results. The average reconstruction errors are 1.54, 1.79, 1.90, and 2.46 mm for test distances of 550, 600, 650, and 700 mm, respectively, which demonstrates the application potential of the approach in 3D measurement.
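
The key geometric fact the method relies on can be checked in a few lines: under any affine map x -> A x + t, every triangle area is scaled by the same factor |det A|, so ratios of areas are preserved. The triangles and the map below are arbitrary illustrations:

import numpy as np


def tri_area(p, q, r):
    """Signed area of a triangle from its vertices."""
    u, v = q - p, r - p
    return 0.5 * (u[0] * v[1] - u[1] * v[0])


# A random non-singular affine map x -> A x + t.
rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 2 * np.eye(2)
t = rng.normal(size=2)
affine = lambda p: A @ p + t

# Two triangles in the plane; both areas scale by |det A|, so the
# ratio of areas is unchanged by the transformation.
t1 = [np.array(v, float) for v in [(0, 0), (2, 0), (0, 1)]]
t2 = [np.array(v, float) for v in [(1, 1), (3, 2), (2, 4)]]
ratio_before = tri_area(*t1) / tri_area(*t2)
ratio_after = tri_area(*map(affine, t1)) / tri_area(*map(affine, t2))
print(np.isclose(ratio_before, ratio_after))  # True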

Abstract

A generalized maximum likelihood algorithm is introduced for detecting an abrupt change in the band center of a fast-fluctuating Gaussian random process with a uniform spectral density. This algorithm has a simpler structure than those obtained by means of common approaches, and it can be implemented effectively on both modern digital signal processors and field-programmable gate arrays. By applying the multiplicative and additive local Markov approximation of the decision statistic and its increments, analytical expressions are calculated for the false alarm and missed-detection probabilities. Statistical simulation confirms that the proposed detector is operable and that the theoretical formulas describing its quality and efficiency approximate the corresponding experimental data satisfactorily over a wide range of parameters of the observed data realization.
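
The paper's generalized maximum-likelihood statistic and its local Markov approximations go beyond the abstract; the following is only a much-simplified numerical illustration of the detection task itself: estimate the band center (spectral centroid) window by window, then pick the maximum-likelihood location of a mean shift in that sequence. All signal parameters are invented for the demo:

import numpy as np


def spectral_centroids(x, win=256):
    """Band-center estimate (spectral centroid) in consecutive windows."""
    centroids = []
    for start in range(0, len(x) - win + 1, win):
        spec = np.abs(np.fft.rfft(x[start:start + win])) ** 2
        freqs = np.fft.rfftfreq(win)
        centroids.append((freqs * spec).sum() / spec.sum())
    return np.array(centroids)


def ml_change_point(series):
    """Maximum-likelihood change point for a mean shift in a Gaussian
    sequence: the split minimising the total within-segment variance."""
    n = len(series)
    best_k, best_cost = None, np.inf
    for k in range(1, n - 1):
        cost = series[:k].var() * k + series[k:].var() * (n - k)
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k


# A noisy signal whose band center jumps halfway through the realisation.
rng = np.random.default_rng(2)
t = np.arange(16384)
x = np.sin(0.2 * np.pi * t) + 0.5 * rng.normal(size=t.size)
half = t.size // 2
x[half:] = np.sin(0.6 * np.pi * t[half:]) + 0.5 * rng.normal(size=half)
print(ml_change_point(spectral_centroids(x)))  # near the middle window index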

Abstract

This paper focuses on the problem of intraocular pressure (IOP) measurement performed by non-invasive methods. More specifically, the devices connected with the presented findings are non-contact tonometers, which use a concentrated air stream and optical sensors to determine the IOP within a human eye. The paper analyzes various factors, originating both from the patients themselves and from the non-contact tonometer devices, that influence the determination of the IOP values. The paper furthermore elaborates on the lack of independent methods for the calibration and control of these devices. In order to fill this gap, a measurement standard device capable of calibrating and testing these devices with traceability to the basic SI unit is presented. A detailed characterization and the determination of the expected uncertainty of the device are provided. By introducing an independent and traceable method for the calibration and control of non-contact tonometers into clinical practice, the reliability of the measured IOP, which is the primary indicator of glaucoma, can be improved.

Abstract

The article presents an analysis of the dynamic error occurring when a stochastic signal is processed in an inertial measurement system. The problem is illustrated with both a computational and a laboratory example. The technique of conditional averaging of signals was used in the experiment. The possibility of minimizing the root mean square value of the error, as well as the need for a time correction of measurement values in an inertial measurement system, was demonstrated.
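
The article's measurement setup is not reproduced here; the following is a minimal sketch of conditional averaging itself, triggering on a repeating condition and averaging the aligned segments so that noise uncorrelated across segments cancels. The first-order (inertial) response and the noise level are illustrative assumptions:

import numpy as np


def conditional_average(signal, trigger, window):
    """Average signal segments starting wherever the trigger condition
    becomes true; noise uncorrelated across segments averages out."""
    onsets = np.flatnonzero(trigger[1:] & ~trigger[:-1]) + 1
    segments = [signal[i:i + window] for i in onsets if i + window <= len(signal)]
    return np.mean(segments, axis=0)


# A repetitive step response buried in noise; the trigger marks each step.
rng = np.random.default_rng(3)
period, window, n = 200, 100, 20 * 200
t = np.arange(n)
clean = 1.0 - np.exp(-(t % period) / 15.0)  # first-order (inertial) response
noisy = clean + 0.5 * rng.normal(size=n)
trigger = (t % period) == 0
print(conditional_average(noisy, trigger, window)[:5])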