## Abstract

In this article, we propose an ontology learning algorithm for ontology similarity measurement and ontology mapping based on distance function learning techniques. Using the distance computation formulation, every pair of ontology vertices is mapped to a real number expressing the distance between their corresponding vectors. The larger the distance between two vertices, the smaller the similarity between their corresponding concepts. The stability of our learning algorithm is defined, and several bounds are derived under stability assumptions. Simulation results show that the proposed ontology algorithm achieves high efficiency and accuracy in ontology similarity measurement and ontology mapping in certain engineering applications.

## 1 Introduction

Ontology originally comes from philosophy, where it is used to describe the nature of things and the inherent hidden connections among their components. In information and computer science, ontology serves as a model for knowledge storage and representation. It has been extensively applied in fields such as knowledge management, machine learning, information systems, image retrieval, information retrieval, search extension, collaboration and intelligent information integration. As an effective conceptual semantic model and analysis tool, ontology has in recent years been favored by researchers in pharmacology, biology, medicine, geographic information systems and the social sciences (for instance, see Przydzial et al. [1], Koehler et al. [2], Ivanovic and Budimac [3], Hristoskova et al. [4], and Kabir et al. [5]).

A simple graph is usually used to represent the structure of an ontology: every concept, object and element in the ontology corresponds to a vertex, and each (directed or undirected) edge on the ontology graph represents a relationship (or potential link) between two concepts (objects or elements). Let *O* be an ontology and *G* a simple graph corresponding to *O*. The essence of ontology engineering applications can be attributed to computing the similarities between ontology vertices, which represent the intrinsic links between vertices in the ontology graph. Ontology mapping, in turn, aims to measure the similarity between vertices from different ontologies; the mapping serves as a bridge connecting different ontologies, and only through mapping do we obtain the potential associations between objects or elements from different ontologies. Formally, the semi-positive score function *Sim* : *V* × *V* → ℝ^{+} ∪ {0} maps each pair of vertices to a non-negative real number.

Several effective methods exist for obtaining efficient ontology similarity measure or ontology mapping algorithms in terms of an ontology function. Wang et al. [12] considered ontology similarity computation via ranking learning technology. Huang et al. [13] raised a fast ontology algorithm to cut the time complexity of ontology applications. Gao and Liang [14] presented an ontology optimizing model in which the ontology function is determined by virtue of the NDCG measure, and applied it successfully in physics education. More ontology applications in various engineering settings can be found in Gao et al. [11].

In this article, we determine a new ontology learning method by means of distance calculating, and we give a theoretical analysis of the proposed ontology algorithm.

## 2 Algorithm Description

Let *v*_{i}, *v*_{j} ∊ ℝ^{p} be ontology vectors and *y*_{ij} = ±1 (if *v*_{i} and *v*_{j} are similar, then *y*_{ij} = 1; otherwise, *y*_{ij} = −1). We also fix *m* relevant source ontology training sets (*q* = 1,...,*m*) if the number of target ontology training samples *N* is not large, and *v*_{qi}, *v*_{qj} ∊ ℝ^{p} belong to the same ontology feature space as *v*_{i}, *v*_{j} in this setting.

We aim to learn a distance function *d*(*v*_{i}, *v*_{j}|**W**) = (*v*_{i} − *v*_{j})^{T}**W**(*v*_{i} − *v*_{j}), which is equivalent to learning a distance matrix **W**; the similarity or dissimilarity between an ontology vertex pair *v*_{i} and *v*_{j} is then obtained by comparing *d*(*v*_{i}, *v*_{j}|**W**) with a constant threshold parameter *c*. Specifically, our ontology optimization problem can be stated as

where ||**W**||_{F} is the Frobenius norm of the metric **W**, which is applied to control the model complexity, *η* is a balance parameter, and the constraint condition reveals that **W** is positive semi-definite.
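As a quick illustration (the function names here are our own, not the paper's), the learned metric defines the pairwise distance and the threshold rule described above:

```python
import numpy as np

def ontology_distance(vi, vj, W):
    """d(v_i, v_j | W) = (v_i - v_j)^T W (v_i - v_j), a Mahalanobis-style distance."""
    diff = vi - vj
    return float(diff @ W @ diff)

def is_similar(vi, vj, W, c):
    """A vertex pair is declared similar when its learned distance is below the threshold c."""
    return ontology_distance(vi, vj, W) < c

# Toy illustration: with W = I the distance reduces to the squared Euclidean distance.
W = np.eye(3)
vi = np.array([1.0, 0.0, 0.0])
vj = np.array([0.0, 1.0, 0.0])
print(ontology_distance(vi, vj, W))  # 2.0
```

Positive semi-definiteness of **W** guarantees the distance is non-negative for every pair, which is what makes the threshold comparison meaningful.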

The general version of ontology distance learning approach is formulated by

where the regularization terms, including ||*θ*||_{1}, are employed to control the complexity of the model. In what follows, *γ*_{1}, *γ*_{2} and *γ*_{3} are all positive balance parameters.

Select the hinge loss for *g*, that is to say, *g*(*z*) = max(0, *b* − *z*) with *b* set to 0. Thus, we deduce the following ontology optimization problem:

For short expressions, we use *v*_{i}, *x*_{j} and *y*_{ij} to denote the ontology vectors and labels, with *y*_{k} denoting the label of the *k*-th pair. The solution can be inferred by alternating between two ontology sub-problems (minimization over *α* = [*α*_{1},··· ,*α*_{m}]^{T} and over *θ* = [*θ*_{1},··· ,*θ*_{n}]^{T}, respectively) until convergence.

^{T}Given *α*, the ontology optimization problem with respect to *θ* then it can be stated as

where *θ*) is non-differentiable, we should smooth the ontology loss and then solve (5) in terms of the gradient trick. Let Θ = {*x* : 0 ≤ *x _{k}* ≤ 1

*,x*∊ ℝ

*} and*

^{N′}*σ*be the smooth parameter. Then, the smoothed expression of the ontology hinge loss

*g*(

*f*) = max{0

_{k}, y_{k}, θ*, −y*(1

_{k}*− θ*)} can be formulated as

^{T}f_{k}where ||*f _{k}*||

_{∞}term is used as a normalization. In view of setting the objective ontology function of (6) to 0 and projecting

*x*on Θ, we infer the following solution:

_{k}*g*can be expressed as

By the computation and deduction, the gradient of the smoothed hinge ontology loss *g _{σ}* (

*θ*) is

Let **H**^{Λ} = [ *f*_{1}*,··· , f _{N}′*] and

**Y**=

*diag*(

*y*). We get

*g*(

_{σ}*θ*).
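As an illustration of hinge-loss smoothing, the sketch below uses a standard Moreau-envelope (Huber-style) smoothing of max(0, *t*). This is a representative choice, not necessarily the paper's exact *σ*-smoothing, and the function names are ours:

```python
import numpy as np

def smoothed_hinge(t, sigma=0.1):
    """Moreau-envelope (Huber-style) smoothing of the hinge max(0, t):
    0 for t <= 0, t^2/(2*sigma) on [0, sigma], t - sigma/2 for t >= sigma."""
    t = np.asarray(t, dtype=float)
    return np.where(t <= 0, 0.0,
           np.where(t <= sigma, t * t / (2 * sigma), t - sigma / 2))

def smoothed_hinge_grad(t, sigma=0.1):
    """Derivative of the smoothed hinge: 0, t/sigma, or 1 on the three regions."""
    t = np.asarray(t, dtype=float)
    return np.where(t <= 0, 0.0, np.where(t <= sigma, t / sigma, 1.0))

def pair_loss(theta, f_k, y_k, sigma=0.1):
    """Smoothed version of the pair loss g = max{0, -y_k (1 - theta^T f_k)}."""
    return float(smoothed_hinge(-y_k * (1.0 - theta @ f_k), sigma))
```

The smoothed loss is continuously differentiable everywhere and converges to the plain hinge as *σ* → 0, which is exactly what a gradient-based solver for (5) needs.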

By setting *l*(*θ*) = ||*θ*||_{1}, we infer the approximation of *l* with the smooth parameter *σ*^{′} as

Moreover, set *F*_{σ}(*θ*) to be the resulting smoothed objective. Denote *θ*^{t}, *y*^{t} and *z*^{t} as the solutions in the *t*-th iteration round, where *L*_{σ} is the Lipschitz constant of *F*_{σ}(*θ*), and the two attached ontology optimizations are stated as

and

respectively. Setting the gradients of the two objective ontology functions in the above two attached ontology problems to zero yields the updates, and the iteration stops once |*F*_{σ}(*θ*^{t+1}) − *F*_{σ}(*θ*^{t})| < *ε*.

Given *θ*, the ontology optimization problem on the parameter *α* can be stated as

The ontology problem (9) can be expressed in the compact form

where *f* = [*f*_{1},··· ,*f*_{m}] with *f*_{q} = *γ*_{1}Tr(**W**^{T}**W**_{q}), and **H** is a symmetric positive semi-definite matrix. We select *α*_{i} and *α*_{j} to update in each iteration; in order to meet the constraint *α*_{q} ≥ 0, we further keep each update non-negative.
_{q}The whole ontology algorithm is stated as follows:

- Initialize: *α*^{(0)}, *θ*^{(0)}, *t* = 0.
- Iterate:
  - Optimize *θ* with *α* fixed;
  - Optimize *α* with *θ* fixed and update;
  - *t* ← *t* + 1.
- Until convergence.
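The alternating scheme can be sketched as the following skeleton, where the sub-problem solvers are placeholder callables standing in for the *θ*- and *α*-updates (all names here are ours):

```python
def alternating_optimization(init_alpha, init_theta, solve_theta, solve_alpha,
                             objective, eps=1e-6, max_iter=100):
    """Skeleton of the alternating scheme: fix alpha and optimize theta, then fix
    theta and optimize alpha, until the objective decreases by less than eps."""
    alpha, theta = init_alpha, init_theta
    prev = objective(alpha, theta)
    for _ in range(max_iter):
        theta = solve_theta(alpha)   # theta sub-problem with alpha held fixed
        alpha = solve_alpha(theta)   # alpha sub-problem with theta held fixed
        cur = objective(alpha, theta)
        if abs(prev - cur) < eps:
            break
        prev = cur
    return alpha, theta

# Toy check: alternate closed-form minimizers of (a-1)^2 + (t-2)^2 + (a-t)^2,
# whose joint minimum is a = 4/3, t = 5/3.
a_star, t_star = alternating_optimization(
    0.0, 0.0,
    solve_theta=lambda a: (2.0 + a) / 2.0,
    solve_alpha=lambda t: (1.0 + t) / 2.0,
    objective=lambda a, t: (a - 1.0) ** 2 + (t - 2.0) ** 2 + (a - t) ** 2)
```

Because each sub-problem is solved exactly, the objective is monotonically non-increasing, which is what guarantees the convergence check eventually fires.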

## 3 Stability Analysis

In this section, we give a theoretical analysis of our ontology algorithm via stability assumptions.

### 3.1 Uniform stability

(Leave-One-Out) An ontology algorithm has uniform stability *β*_{1} with respect to the ontology loss function *l* if the following holds

where *Z* is the ontology sample space, *f*_{s} is the ontology function determined by the ontology algorithm learning with the set of samples *s*, and *s*^{i} = {*z*_{1},··· ,*z*_{i−1}, *z*_{i+1},··· ,*z*_{m}} denotes the ontology sample set with the *i*-th element *z*_{i} deleted.

_{i}(Leave-Two-Out) An ontology algorithm has uniform stability *β*_{2} with respect to the ontology loss function *l* if the following holds

where *Z* is the ontology sample space, *f*_{s} is the ontology function determined by the ontology algorithm learning with the set of samples *s*, and *s*^{i,j} is the ontology sample set obtained from *s* by deleting the two elements *z*_{i} and *z*_{j}.

_{j}For any convex and differentiable ontology function *F* : *ℱ* → ℝ as follows (here *ℱ* denotes the Hilbert space): ∀ *f*, *g* ∈ *ℱ* ,*B _{F}*(

*f*||

*g*) =

*F*(

*f*)

*−F*(

*g*)

*−*Tr(

*< f −g*, ∇

*F*(

*g*)

*>*), we have

*∂ ℱ*(∀

*f*) = {

*g*∈

*ℱ*|∀

*f*∈

^{′}*ℱ*,

*F*(

*f*)

^{ ′}*− F*(

*f*) ≥ Tr(

*< f*−

^{′}*f′, δF*(

*f*)

*>*)}. Let

*δ F*(

*f*) be any element of

*∂ F*(

*h*). We infer 8∀

*f , f*∈

^{′}*ℱ*,

*B*(

_{F}*f*||

^{ ′}*f*) =

*F*(

*f*)

^{′}*− F*(

*f*)

*−*Tr(

*< f*(

^{ ′}− f , ∇F*f*)

*>*),

*B*(

_{F}*f*||

^{ ′}*f*) ≥ 0 and

*B*

_{P+Q}=

*B*+

_{P}*B*for any convex ontology functions

_{Q}*P*and

*Q*.

*For any two distance metrics* **W** *and* **W**^{′}*, the following inequality holds for any ontology samples z*_{i} *and z*_{j}:

Next, we describe the LOO and LTO stability of our algorithm.

*Let β*_{1} *and β*_{2} *be the LOO and LTO stability of our ontology algorithm problem* (2)*. Suppose that* ||*v*||_{2} ≤ *M for any sample v. Then, we have*

*where L is the Lipschitz constant of the function g.*

We only present the detailed proof of the first inequality; the second one can be determined in a similar way. Let *F*_{𝒩}(*θ*) = *P*_{𝒩}(*θ*) + *Q*(*θ*), where *P*_{𝒩}(*θ*) and *Q*(*θ*) are convex. Suppose *θ*_{𝒩} and *θ*_{𝒩′} are the minimizers of *F*_{𝒩}(*θ*) and *F*_{𝒩′}(*θ*), respectively, where 𝒩^{′} is the set of ontology examples obtained by deleting *z*_{i} ∈ 𝒩 from 𝒩.

Note that

Let ∆ = ||*θ*_{𝒩}||_{1} − < *θ*_{𝒩}, sgn(*θ*_{𝒩′}) > + ||*θ*_{𝒩′}||_{1} − < *θ*_{𝒩′}, sgn(*θ*_{𝒩}) > ≥ 0, where sgn(*θ*) = [sgn(*θ*_{1}),··· ,sgn(*θ*_{n})]^{T}. Hence, we have

where *δf*(*θ*) is the sub-gradient of ||*θ*||_{1}.

We have *δF*_{𝒩}(*θ*_{𝒩}) = *δF*_{𝒩′}(*θ*_{𝒩′}) = 0 since *θ*_{𝒩} and *θ*_{𝒩′} are minimizers of *F*_{𝒩}(*θ*) and *F*_{𝒩′}(*θ*). Using Lemma 1, we obtain

This implies that

By virtue of |*V*(**W**_{𝒩}, *z*_{i}, *z*_{j}) − *V*(**W**_{𝒩′}, *z*_{i}, *z*_{j})| ≤ 4*LM*^{2}||**W**_{𝒩} − **W**_{𝒩′}||_{F}, we deduce

Therefore, the expected result is obtained.

Let *𝒩* be the ontology sample set; we bound *R*(**W**) − *R*_{𝒩}(**W**) in the next theorem. For this purpose, we need the following McDiarmid inequality.

[15] *Let X*_{1},··· ,*X*_{N} *be independent random variables, each taking values in a set A. Let ϕ* : *A*^{N} → ℝ *be such that for each i* ∈ {1,··· ,*N*}, *there exists a constant c*_{i} > 0 *such that*

*Then for any ε* > 0*,*
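In its standard form, the bounded-difference condition and the conclusion of McDiarmid's inequality read:

```latex
% Bounded-difference condition: changing one coordinate moves \phi by at most c_i.
\sup_{x_1,\dots,x_N,\;x_i'}
  \bigl|\phi(x_1,\dots,x_i,\dots,x_N)-\phi(x_1,\dots,x_i',\dots,x_N)\bigr| \le c_i .

% Conclusion: concentration of \phi around its mean.
\mathbb{P}\Bigl(\phi(X_1,\dots,X_N)-\mathbb{E}\bigl[\phi(X_1,\dots,X_N)\bigr]\ge \varepsilon\Bigr)
  \le \exp\!\Bigl(-\frac{2\varepsilon^2}{\sum_{i=1}^{N} c_i^2}\Bigr).
```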

The generalization error bound via uniform stability is presented as follows.

*Let 𝒩 be a set of N randomly selected ontology samples and* **W**_{𝒩} *be the ontology distance matrix determined by* (2)*. With probability at least* 1 − *δ, we have*

*where*

The proof of Theorem 4 mainly follows [16–19]; we skip the detailed proof here.

### 3.2 Strong and weak stabilities

Naturally, uniform stability is too restrictive for most learning algorithms; only a small number of works have shown that standard ontology learning algorithms meet uniform stability directly, and for most such algorithms this remains uncertain. We are therefore inspired to consider other, "almost everywhere" notions of stability beyond uniform stability in our ontology setting. We define strong and weak stabilities for our ontology framework, which are also good measures of how robust an ontology algorithm is. We assume 0 < *δ*_{3}, *δ*_{4} < 1 in this subsection.

(Strong Stability) Let *A* be our ontology algorithm whose output on an ontology training sample *Z* is denoted by *f*_{s}, and let *l* be an ontology loss function. Let *β*_{3} : ℕ → ℝ and let *s*^{i} be the ontology sample set in which *v*_{i} is replaced. *A* has *β*_{3} loss stability with respect to the ontology loss *l* if for all *n* ∈ ℕ, *i* ∈ {1,··· ,*n*}, we have

We say that the ontology algorithm *A* has strong loss stability *β*_{3} if

(Weak Stability) Let *A* be our ontology algorithm whose output on an ontology training sample *Z* is denoted by *f*_{s}, and let *l* be an ontology loss function. Let *β*_{4} : ℕ → ℝ. We say that our ontology algorithm *A* has weak loss stability *β*_{4} if for all *n* ∈ ℕ, *i* ∈ {1,··· ,*n*}, we have

We present the following lemma, which is fundamental for proving the results on strong and weak stability.

(Kutin [22]) *Let X*_{1},··· ,*X*_{N} *be independent random variables, each taking values in a set C. There is a "bad" subset B* ⊆ *C*^{N}*, where* ℙ((*x*_{1},··· ,*x*_{N}) ∈ *B*) = *δ. Let ϕ* : *C*^{N} → ℝ *be such that for each k* ∈ {1,··· ,*N*}, *there exists b* ≥ *c*_{k} > 0 *such that*

*Then for any ε* > 0,

(Kutin [22]) *Let X*_{1},··· ,*X*_{N} *be independent random variables, each taking values in a set C, and let ϕ* : *C*^{N} → ℝ *satisfy, for each k* ∈ {1,··· ,*N*}, *the two condition inequalities of the preceding lemma with* $\frac{{\lambda}_{k}}{N}$ *substituted for c*_{k} *and e*^{−KN} *substituted for δ. If ε* ≤ min_{k} *T*(*b*, *λ*_{k}, *K*) *and N* ≥ max_{k} ∆(*b*, *λ*_{k}, *K*, *ε*)*, then*

The main result in this subsection is stated as follows.

*Let A be our ontology algorithm whose output on an ontology training sample Z is denoted by f*_{s}*. Let l be an ontology loss function such that* 0 ≤ *l*(*f*, ·) ≤ Ξ *for all f.*

*1) Let β*_{3} *be such that our ontology algorithm A has strong loss stability* (*β*_{3},*δ*_{1})*. Then for any* 0 < *δ* < 1, *with probability at least* 1 − *δ, we have*

*2) Let β*_{4} *be such that our ontology algorithm A has weak loss stability* (*β*_{4},*δ*_{2})*, and suppose that*

*and*

*Then, for any* 0 < *δ* < 1, *with probability at least* 1 − *δ*, *we have*

The proof of Theorem 7 mainly follows [20, 21]; we skip the detailed proof here.

## 4 Experiments

In this section, we design five simulation experiments concerning ontology similarity measurement and ontology mapping. In our experiments, we select the square loss as the ontology loss function. To ensure the accuracy of the comparison, we implemented our algorithm in C++ using the LAPACK and BLAS libraries for linear algebra computations. All five experiments were run on a dual-core CPU with 8 GB of memory.

### 4.1 Ontology similarity measure experiment on plant data

We use *O*_{1}, a plant "PO" ontology, in the first experiment; it was constructed at www.plantontology.org, and its structure is presented in Fig. 1. We use *P*@*N* (precision ratio; see Craswell and Hawking [23]) to measure the quality of the experiment data. First, the closest *N* concepts for every vertex on the ontology graph in the plant field were given by experts. Then we obtain the first *N* concepts for every vertex on the ontology graph by our algorithm and compute the precision ratio.
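One common reading of the *P*@*N* precision ratio can be computed as follows (a sketch with hypothetical concept ids; the helper names are ours):

```python
def precision_at_n(expert_top, algo_top, n):
    """P@N for one vertex: fraction of the algorithm's first n concepts that also
    appear in the expert-given closest-n list."""
    return len(set(expert_top[:n]) & set(algo_top[:n])) / n

def average_precision_at_n(expert_lists, algo_lists, n):
    """Average P@N over all vertices of the ontology graph."""
    scores = [precision_at_n(e, a, n) for e, a in zip(expert_lists, algo_lists)]
    return sum(scores) / len(scores)

# Toy example with two vertices and hypothetical concept ids.
expert = [["b", "c", "d"], ["x", "y", "z"]]
algo = [["b", "d", "e"], ["x", "q", "z"]]
print(average_precision_at_n(expert, algo, 3))  # (2/3 + 2/3) / 2 = 0.666...
```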

Meanwhile, we apply the ontology methods in [12], [13] and [14] to the "PO" ontology. After obtaining the average precision ratios of these three algorithms, we compare them with the results of our algorithm. Part of the data is listed in Table 1.

Tab. 1. The Experiment Results of Ontology Similarity Measure

| | P@3 average precision ratio | P@5 average precision ratio | P@10 average precision ratio |
|---|---|---|---|
| Our Algorithm | 0.5358 | 0.6517 | 0.8821 |
| Algorithm in [12] | 0.4549 | 0.5117 | 0.5859 |
| Algorithm in [13] | 0.4282 | 0.4849 | 0.5632 |
| Algorithm in [14] | 0.4831 | 0.5635 | 0.6871 |

When *N* = 3, 5 or 10, the precision ratios gained from our algorithm are consistently higher than the precision ratios determined by the algorithms proposed in [12], [13] and [14]. Furthermore, the advantage grows apparently as *N* increases. As a result, our algorithm proves to be more effective than those raised in [12], [13] and [14].

### 4.2 Ontology mapping experiment on humanoid robotics data

"Humanoid robotics" ontologies *O*_{2} and *O*_{3} are used in the second experiment; their structures are presented in Fig. 2 and Fig. 3, respectively. The ontology *O*_{2} presents the leg joint structure of a bionic walking device for a six-legged robot, while *O*_{3} presents the exoskeleton frame of a wearable, power-assisted lower-extremity robot.

"Humanoid Robotics" Ontology *O*_{2}.

Citation: Applied Mathematics and Nonlinear Sciences 1, 1; 10.21042/AMNS.2016.1.00012

“Humanoid Robotics” Ontology *O*_{3}.


The Structure of “GO” Ontology.


We set up the experiment aiming to get the ontology mapping between *O*_{2} and *O*_{3}. The *P*@*N* precision ratio is taken as a measure of the quality of the experiment. After applying the ontology algorithms in [24], [13] and [14] to the "humanoid robotics" ontologies and obtaining the average precision ratios, we compare them with the ratios of our method. Some results are listed in Table 2.

Tab. 2. The Experiment Results of Ontology Mapping

| | P@1 average precision ratio | P@3 average precision ratio | P@5 average precision ratio |
|---|---|---|---|
| Our Algorithm | 0.2778 | 0.5000 | 0.7667 |
| Algorithm in [24] | 0.2778 | 0.4815 | 0.5444 |
| Algorithm in [13] | 0.2222 | 0.4074 | 0.4889 |
| Algorithm in [14] | 0.2778 | 0.4630 | 0.5333 |

When *N* = 1, 3 or 5, the precision ratios of our new ontology algorithm are at least as high as those determined by the algorithms proposed in [24], [13] and [14], and strictly higher for *N* = 3 and 5. Furthermore, the advantage grows apparently as *N* increases. As a result, our algorithm is more effective than those raised in [24], [13] and [14].

### 4.3 Ontology similarity measure experiment on biology data

The gene "GO" ontology *O*_{4} is used in the third experiment; it was constructed at http://www.geneontology.org. We present the structure of *O*_{4} in Fig. 4. Again, *P*@*N* is chosen as a measure of the quality of the experiment data. We then apply the ontology methods in [13], [14] and [25] to the "GO" ontology. After obtaining the average precision ratios of these three algorithms, we compare them with the results of our algorithm. Part of the data is listed in Table 3.

Tab. 3. The Experiment Results of Ontology Similarity Measure

| | P@3 average precision ratio | P@5 average precision ratio | P@10 average precision ratio | P@20 average precision ratio |
|---|---|---|---|---|
| Our Algorithm | 0.4987 | 0.6364 | 0.7602 | 0.8546 |
| Algorithm in [13] | 0.4638 | 0.5348 | 0.6234 | 0.7459 |
| Algorithm in [14] | 0.4356 | 0.4938 | 0.5647 | 0.7194 |
| Algorithm in [25] | 0.4213 | 0.5183 | 0.6019 | 0.7239 |

When *N* = 3, 5, 10 or 20, the precision ratios gained from our ontology algorithm are higher than the precision ratios determined by the algorithms proposed in [13], [14] and [25]. Furthermore, the precision ratios tend to increase apparently as *N* increases. As a result, our algorithm turns out to be more effective than those raised in [13], [14] and [25].

### 4.4 Ontology mapping experiment on physics education data

“Physics education” ontologies *O*_{5} and *O*_{6} are used in the fourth experiment. We respectively present the structures of *O*_{5} and *O*_{6} in Fig. 5 and Fig. 6.

“Physics Education” Ontology *O*_{5}.


“Physics Education” Ontology *O*_{6}.


We set up the experiment aiming to give the ontology mapping between *O*_{5} and *O*_{6}. The *P*@*N* precision ratio is taken as a measure of the quality of the experiment. The ontology algorithms in [13], [14] and [26] are applied to the "physics education" ontologies, and the precision ratios obtained from the three methods are compared with ours. Some results are listed in Table 4.

Tab. 4. The Experiment Results of Ontology Mapping

| | P@1 average precision ratio | P@3 average precision ratio | P@5 average precision ratio |
|---|---|---|---|
| Our Algorithm | 0.6913 | 0.7556 | 0.9161 |
| Algorithm in [13] | 0.6129 | 0.7312 | 0.7935 |
| Algorithm in [14] | 0.6913 | 0.7556 | 0.8452 |
| Algorithm in [26] | 0.6774 | 0.7742 | 0.8968 |

When *N* = 1, 3 or 5, the precision ratios of our new ontology mapping algorithm are comparable to or higher than those determined by the algorithms proposed in [13], [14] and [26], with a clear advantage at *N* = 5. Furthermore, the precision ratios tend to increase apparently as *N* increases. As a result, our algorithm shows more effectiveness than those raised in [13], [14] and [26].

### 4.5 Ontology mapping experiment on university data

“University” ontologies *O*_{7} and *O*_{8} are applied in the last experiment. We present the structures of *O*_{7} and *O*_{8} in Fig. 7 and Fig. 8.

“University” Ontology *O*_{7}.


“University” Ontology *O*_{8}.


We set up the experiment aiming to give the ontology mapping between *O*_{7} and *O*_{8}. The *P*@*N* precision ratio is taken as a criterion to measure the quality of the experiment. The ontology algorithms in [12], [13] and [14] are applied to the "University" ontologies, and the precision ratios obtained from the three methods are compared with ours. Some results are listed in Table 5.

Tab. 5. The Experiment Results of Ontology Mapping

| | P@1 average precision ratio | P@3 average precision ratio | P@5 average precision ratio |
|---|---|---|---|
| Our Algorithm | 0.5714 | 0.6786 | 0.7714 |
| Algorithm in [12] | 0.5000 | 0.5952 | 0.6857 |
| Algorithm in [13] | 0.4286 | 0.5238 | 0.6071 |
| Algorithm in [14] | 0.5714 | 0.6429 | 0.6500 |

When *N* = 1, 3 or 5, the precision ratios of our new ontology mapping algorithm are at least as high as those determined by the algorithms proposed in [12], [13] and [14], and strictly higher for *N* = 3 and 5. Furthermore, the precision ratios tend to increase apparently as *N* increases. As a result, our algorithm turns out to be more effective than those raised in [12], [13] and [14].

## 5 Conclusions

In this paper, a new ontology learning framework and its optimization approaches are presented for ontology similarity calculation and ontology mapping. The new ontology algorithm is based on distance function learning tricks. The stability analysis and generalization bounds of the ontology learning algorithm are also presented. Finally, simulation data from five experiments show that our new ontology learning algorithm is highly efficient in these engineering applications. The distance-learning-based ontology algorithm proposed in this paper shows promising application prospects across multiple disciplines.

Communicated by F. Balibrea

We thank the reviewers for their constructive comments in improving the quality of this paper. This work was supported in part by the Key Laboratory of Educational Informatization for Nationalities, Ministry of Education, NSFC (No.11401519), and the PhD initial funding of the third author.

## References

- [1]↑
J.M. Przydzial, B. Bhhatarai, A. Koleti. (2013), GPCR ontology: development and application of a G protein-coupled receptor pharmacology knowledge framework, Bioinformatics, 29(24), 3211-3219.

- [2]↑
S. Koehler, S.C. Doelken, C.J. Mungall. (2014), The human phenotype ontology project: linking molecular biology and disease through phenotype data, Nucleic Acids Research, 42(D1), 966-974.

- [3]↑
M. Ivanovic, Z. Budimac. (2014), An overview of ontologies and data resources in medical domains, Expert Systems with Applications, 41(11), 5158-5166.

- [4]↑
A. Hristoskova, V. Sakkalis, G. Zacharioudakis. (2014), Ontology-driven monitoring of patient’s vital signs enabling personalized medical detection and alert, Sensors, 14(1), 1598-1628.

- [5]↑
M.A. Kabir, J. Han, J. Yu. (2014), User-centric social context information management: an ontology-based approach and platform, Personal and Ubiquitous Computing, 18(5), 1061-1083.

- [6]
Y.L. Ma, L. Liu, K. Lu, B.H. Jin, X.J. Liu. (2014), A graph derivation based approach for measuring and comparing structural semantics of ontologies, IEEE Transactions on Knowledge and Data Engineering, 26(5), 1039-1052.

- [7]
Z. Li, H.S. Guo, Y.S. Yuan, L.B. Sun. (2014), Ontology representation of online shopping customers knowledge in enterprise information, Applied Mechanics and Materials, 483, 603-606.

- [8]
R. Santodomingo, S. Rohjans, M. Uslar, J.A. Rodriguez-Mondejar, M.A. Sanz-Bobi. (2014), Ontology matching system for future energy smart grids, Engineering Applications of Artificial Intelligence, 32, 242-257.

- [9]
T. Pizzuti, G. Mirabelli, M.A. Sanz-Bobi, F. Gomez-Gonzalez. (2014), Food track & trace ontology for helping the food traceability control, Journal of Food Engineering, 120(1), 17-30.

- [10]
N. Lasierra, A. Alesanco, J. Garcia. (2014), Designing an architecture for monitoring patients at home: ontologies and web services for clinical and technical management integration, IEEE Journal of Biomedical and Health Informatics, 18(3), 896-906.

- [11]↑
W. Gao, L.L. Zhu, Y. Guo. (2015), Multi-dividing infinite push ontology algorithm, Engineering Letters, 23(3), 132-139.

- [12]↑
Y.Y. Wang, W. Gao, Y.G. Zhang, Y. Gao. (2010), Ontology similarity computation use ranking learning method, The 3rd International Conference on Computational Intelligence and Industrial Application, Wuhan, China, 2010: 20–22.

- [13]↑
X. Huang, T.W. Xu, W. Gao, Z.Y. Jia. (2011), Ontology similarity measure and ontology mapping via fast ranking method, International Journal of Applied Physics and Mathematics, 1(1), 54-59.

- [14]↑
W. Gao, L. Liang. (2011), Ontology similarity measure by optimizing NDCG measure and application in physics education, Future Communication, Computing, Control and Management, 142, 415-421.

- [15]↑
C. McDiarmid. (1989), On the method of bounded differences, in Surveys in Combinatorics, Cambridge University Press, 1989, pp. 148-188.

- [16]↑
W. Gao, Y. G. Zhang, L. Liang, Y. M. Xia. (2010), Stability analysis for ranking algorithms, Proceedings 2010 IEEE International Conference on Information Theory and Information Security, Publisher: IEEE, 2010, pp. 973-976.

- [17]
Z. Y. Jia, W. Gao, X. G. He. (2011), Generalization bounds for ranking algorithm via query-level stabilities analysis,

- [18]
Y. Gao, W. Gao, Y. G. Zhang. (2011), Query-level stability of IRSVM for replacement case, Procedia Engineering, 15, 2150-2154.

- [19]↑
Y. Gao, W. Gao, Y. G. Zhang. (2011), Query-level stability of ranking SVM for replacement case, Procedia Engineering, 15, 2145-2149.

- [20]↑
W. Gao, Y. G. Zhang, Y. Gao, L. Liang, Y. M. Xia. (2013), Strong and weak stability of bipartite ranking algorithms, Proceedings of SPIE - The International Society for Optical Engineering, Doi:

- [21]↑
T. W. Xu, Y. G. Zhang, W. Gao. (2011), Generalization bounds for ordinal regression algorithms via strong and weak stability, Energy Procedia, 13, 3471-3478.

- [22]↑
S. Kutin. (2002), Extensions to McDiarmid’s inequality when differences are bounded with high probability, Technical report, Department of Computer Science, The University of Chicago, 2002.

- [23]
N. Craswell, D. Hawking. (2003), Overview of the TREC 2003 web track, In proceedings of the Twelfth Text Retrieval Conference, Gaithersburg, Maryland, NIST Special Publication, 2003, pp. 78-92.

- [24]↑
W. Gao, M.H. Lan. (2011), Ontology mapping algorithm based on ranking learning method, Microelectronics and Computer, 28(9), 59-61.

- [25]↑
Y. Gao, W. Gao. (2012), Ontology similarity measure and ontology mapping via learning optimization similarity function, International Journal of Machine Learning and Computing, 2(2), 107-112.

- [26]↑
W. Gao, Y. Gao, L. Liang. (2013), Diffusion and harmonic analysis on hypergraph and application in ontology similarity measure and ontology mapping, Journal of Chemical and Pharmaceutical Research, 5(9), 592-598.