Search Results

1–10 of 103 items

Filter: "K-nearest neighbours"


We propose a skeletonization algorithm based on iterative point contraction. We observe that the local center obtained by minimizing the sum of distances to the k nearest neighbors is robust to noise and incomplete data. Based on this observation, we devise a skeletonization algorithm consisting of two main stages: point contraction and skeleton-node connection. Extensive experiments show that our method works on raw scans of real-world objects and is more robust than previous methods at extracting topology-preserving curve skeletons.
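The contraction stage this abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the use of Weiszfeld iterations to find the distance-minimizing local center (the geometric median of the k nearest neighbours), and all parameter values are assumptions.

```python
import math

def knn(points, p, k):
    """Return the k points nearest to p (excluding p itself)."""
    return sorted((q for q in points if q != p),
                  key=lambda q: math.dist(p, q))[:k]

def local_center(nbrs, iters=50):
    """Geometric median of nbrs via Weiszfeld iterations: the point
    minimizing the sum of Euclidean distances to the neighbours."""
    dim = len(nbrs[0])
    c = [sum(q[i] for q in nbrs) / len(nbrs) for i in range(dim)]  # centroid start
    for _ in range(iters):
        w = [1.0 / max(math.dist(c, q), 1e-12) for q in nbrs]
        s = sum(w)
        c = [sum(wi * q[i] for wi, q in zip(w, nbrs)) / s for i in range(dim)]
    return c

def contract(points, k=4, rounds=3):
    """One contraction stage: repeatedly pull every point toward the
    local center of itself and its k nearest neighbours."""
    pts = [tuple(p) for p in points]
    for _ in range(rounds):
        pts = [tuple(local_center(knn(pts, p, k) + [p])) for p in pts]
    return pts
```

Because every update is a convex combination of neighbouring points, contracted points stay inside the bounding box of the input while noise orthogonal to the underlying curve is averaged out; a skeleton-node connection step would then link the contracted points.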

… 2, 1322-1326.
[13] Wang, Y., Tang, Y., Jiang, Y., Chung, J.G., Song, S.S., Lim, M.S. (2007). Novel memory reference reduction methods for FFT implementations on DSP processors. IEEE Transactions on Magnetics, 5, 2338-2349.
[14] Wang, L., Khan, L., Thuraisingham, B. (2008). An effective evidence theory based K-nearest neighbor classification. IEEE Transactions on Magnetics, 1, 797-801.
[15] Jiang, Z., Deng, Y. (2010). On the time series k-nearest neighbor classification of abnormal brain activity. IEEE Transactions on Magnetics, 6, 1005-1016.
[16] Jiang, Z …

References
Borusiewicz B. 2009. Operat ochrony ekosystemów leśnych Parku Narodowego Gór Stołowych [Forest ecosystem protection plan of the Stołowe Mountains National Park; in Polish]. Taksus SI, Warszawa–Gorzów Wielkopolski.
Franco-Lopez H., Alan R.E., Bauer M.E. 2001. Estimation and mapping of forest stand density, volume, and cover type using the k-Nearest Neighbors method. Remote Sensing of Environment, 77, 251–274.
Halme M., Tomppo E. 2001. Improving the accuracy of multi-source forest inventory estimates by reducing plot location error – a multi-criteria approach. Remote Sensing of Environment, 78 (3), 321–327.
Katila M., Heikkinen J …

References
[1] BOWMAN, A. W.: An alternative method of cross-validation for the smoothing of density estimates, Biometrika 71 (1984), 353-360.
[2] GYÖRFI, L.: On the rate of convergence of nearest neighbor rules, IEEE Trans. Inform. Theory 24 (1978), 509-512.
[3] GYÖRFI, L.: The rate of convergence of kn-NN regression estimates and classification rules, IEEE Trans. Inform. Theory 27 (1981), 362-364.
[4] MACK, Y. P.-ROSENBLATT, M.: Multivariate k-nearest neighbour density estimates, J. Multivariate Anal. 9 (1979), 1-15.
[5] ORAVA, J.: K-nearest neighbour kernel …

6. References:
Bao Y., Ishii N., 2002, Combining Multiple K-Nearest Neighbor Classifiers for Text Classification by Reducts. In: Lange S., Satoh K., Smith C.H. (eds) Discovery Science. DS 2002. Lecture Notes in Computer Science, vol 2534. Springer, Berlin, Heidelberg.
Boschetti A., Massaron L., 2017, Python. Podstawy nauki o danych [Python Data Science Essentials; in Polish], Helion, Gliwice.
Czaja J., 2001, Metody szacowania wartości rynkowej i katastralnej nieruchomości [Methods of appraising the market and cadastral value of real property; in Polish] …


In this paper, we compare the following machine learning methods as classifiers for sentiment analysis: k-nearest neighbours (kNN), artificial neural network (ANN), support vector machine (SVM), and random forest. We used a dataset of 5,000 movie reviews, of which 2,500 were marked as positive and 2,500 as negative, and selected 5,189 words that influence sentence sentiment. The dataset was prepared using a term-document matrix (TDM) and classical multidimensional scaling (MDS). This is the first time that TDM and MDS have been used to select text features for sentiment analysis. We also examined different parameters of the individual classifiers, such as the kernel type for SVM and the neighbour count k for kNN. All calculations were performed in the R language (version 3.5.2) using RStudio. Our work can be reproduced, as all of our data sets and source code are public.
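The TDM-plus-kNN pipeline described in this abstract can be sketched as follows. This is a minimal Python illustration rather than the authors' R code: the toy vocabulary, documents, and labels are invented, cosine distance is an assumed metric, and the MDS dimensionality-reduction step is omitted for brevity.

```python
import math
from collections import Counter

def tdm(docs, vocab):
    """Term-document matrix: one term-count vector per document."""
    return [[Counter(d.split())[t] for t in vocab] for d in docs]

def cosine_dist(a, b):
    """1 - cosine similarity; 1.0 for a zero vector."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb) if na and nb else 1.0

def knn_predict(train, labels, query, k=3):
    """Majority vote among the k training vectors nearest to query."""
    nearest = sorted(range(len(train)),
                     key=lambda i: cosine_dist(train[i], query))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Toy movie-review data (invented for illustration).
vocab = ["great", "awful", "fun", "boring", "loved", "hated"]
docs = ["great fun loved", "loved great", "awful boring", "hated boring awful"]
labels = ["pos", "pos", "neg", "neg"]
```

A query such as `tdm(["fun great"], vocab)[0]` lands nearest to the positive training vectors and is voted positive. In the paper's setting, the 5,189-word TDM would first be projected to a lower-dimensional space with classical MDS before the kNN step.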

… J., Yang, Y. (2007). Application of support vector regression machines to the processing of end effects of Hilbert-Huang transform. Mechanical Systems and Signal Processing, 21 (3), 1197-1211.
[18] Chaovalitwongse, W.A., Fan, Y., Sachdeo, R.C. (2007). On the time series K-nearest neighbor classification of abnormal brain activity. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 37 (6), 1005-1016.
[19] Manoharan, S.C., Veezhinathan, M., Ramakrishnan, S. (2008). Comparison of two ANN methods for classification of spirometer data …


Water supply systems are complex engineering structures, and their most important part is the water distribution network. Designing this element requires calculations and many analyses to arrive at the best solution. The main tasks of the calculation are to determine the flow rates through pipes, the pressure losses, the height of tanks, the pressure required at the supply pumping station, and the pressure levels at individual nodes of the network. Correct execution of the calculations requires careful evaluation of the results obtained and accuracy in the solutions applied. Checking the results of such calculations is difficult to present in algorithmic form, as it is based mainly on the experience and knowledge of the designer. To evaluate the calculation results, classes of decisions describing pressure-loss problems in the pipework were established. Numerical experiments are presented in this paper to show how the k-nearest neighbour method can be used to evaluate pressure loss in water pipes.
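The evaluation step described above can be sketched as plain kNN classification over labelled calculation results. Everything below is a hypothetical illustration: the feature pair (flow velocity, unit head loss), the two decision classes, and the training samples are invented for the sketch, not taken from the paper.

```python
import math
from collections import Counter

def knn_class(samples, query, k=3):
    """samples: list of ((feature, ...), label) pairs; classify query by
    majority vote among its k nearest labelled examples (Euclidean)."""
    nearest = sorted(samples, key=lambda s: math.dist(s[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical labelled results: (flow velocity m/s, unit head loss m/km).
# Real use would normalize features to a common scale before computing
# distances, since the two features have very different magnitudes.
samples = [
    ((0.6, 1.0), "acceptable"), ((0.9, 2.0), "acceptable"),
    ((1.0, 3.0), "acceptable"), ((1.8, 9.0), "too high"),
    ((2.0, 11.0), "too high"), ((2.2, 14.0), "too high"),
]
```

A new calculation result is then evaluated by passing its feature pair to `knn_class`, so the designer's past accept/reject decisions stand in for an explicit algorithmic rule.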

… 585.
Hart, P.E. (1968). The condensed nearest neighbor rule, IEEE Transactions on Information Theory 14(3): 515-516.
Hattori, K. and Takahashi, M. (2000). A new edited k-nearest neighbor rule in the pattern classification problem, Pattern Recognition 33(3): 521-528.
Huang, G.-B., Zhu, Q.-Y. and Siew, C.-K. (2004). Extreme learning machine: A new learning scheme of feedforward neural networks, International Joint Conference on Neural Networks, Budapest, Hungary, pp. 985-990.
Huang, G.-B., Zhu, Q.-Y. and Siew, C.-K. (2006). Extreme learning machine: Theory and applications …


This paper presents an alternative approach to determining a document's semantic orientation. It includes a brief introduction describing the areas in which opinion mining is applied, together with some definitions useful in the field. The most commonly used methods are mentioned and some alternative ones are described. Experimental results are presented which show that the kNN algorithm gives results similar to those of the proportional algorithm.