# Random inverse packet information and its acquisition

Xiuqing Yu and Fengsheng Xu

School of Mathematics and Big Data, Dezhou University, Dezhou, P. R. China

## Abstract

Packet sets and inverse packet sets are two novel mathematical tools for analyzing dynamic information systems. Building on inverse packet sets, random inverse packet information is proposed by introducing random characteristics into inverse packet sets. Random inverse packet information therefore has both dynamic and random characteristics and is an extended form of inverse packet sets. Furthermore, the random features, dynamic features, and identification relations of random inverse packet information are discussed. Finally, based on this theory, an instance illustrates an application to the intelligent acquisition of investment information.

## 1 Introduction

Because change is ever-present, research on dynamic systems often ran into problems when based on finite common set theory, which is static in character. It became necessary to construct a new kind of set model with dynamic characteristics. Hence, Refs. [1,2,3,4] proposed two types of dynamic set models, packet sets and inverse packet sets (IPSs), by replacing "static" with "dynamic" to improve the finite common set. These dynamic set models provide a better theoretical foundation for dealing with dynamic applied systems. Later, the mathematical characteristics of the new sets, such as quantitative, algebraic, geometrical, genetic, and random characteristics, together with theoretical applications, were discussed by more and more scholars [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22]. In particular, Refs. [5,6,7,8,9,10,11,12,13,14,15,16,17] developed the latter model by taking information instead of sets to obtain the inverse packet information (IPI) model and provided applications for information fusion–separation, hidden information discovery, intelligent data mining, and big decomposition–fusion acquisition. However, current research on random inverse packet information (RIPI) is rare. Hence, we consider the probabilities of information element migration in IPI and present some concepts about RIPI and its structures. Furthermore, the random features, dynamic characteristics, and identification relations of RIPI are discussed and applied to the intelligent acquisition–separation of investment information.

Convention: (x̄) = {x1, x2, ···, xs} ⊂ U is a nonempty finite ordinary information and α ⊂ V is its nonempty attribute set; F and F̄ are information transition function families, in which f ∈ F and f̄ ∈ F̄ are transition functions, whose detailed characteristics and occurrence probabilities can be found in Hao et al. [23]. The occurrence probabilities of the two events $\{x_i \mid w_i \notin (\bar{x}), f(w_i) = x_i \in (\bar{x})\}$ and $\{x \mid x \in (\bar{x}), \bar{f}(x) = w \notin (\bar{x})\}$ are briefly written as $p_F(f)$ and $p_{\bar{F}}(\bar{f})$, in order.

## 2 RIPI and its construction

The theory model of IPSs [3, 4], with the inner IPS X^F and the exterior IPS X^F̄ combined, has the following dynamic characteristics. Given a finite common element set X = {x1, x2, ..., xr} with attribute set α = {α1, α2, ..., αr′}: I. If some additional attributes are transferred by f into α to obtain α^F such that α ⊆ α^F, then some extra elements are accordingly moved into X to generate a new element set called the inner IPS X^F, X ⊆ X^F. II. If some attributes are transferred by f̄ out of α to generate α^F̄ such that α^F̄ ⊆ α, then some elements in X are accordingly deleted to generate a new element set called the exterior IPS X^F̄, X^F̄ ⊆ X. III. If, at the same time, some extra attributes are moved into α and some other attributes in α are migrated out, that is, α becomes α^F and meanwhile α becomes α^F̄, α^F̄ ⊆ α ⊆ α^F, then X becomes an IPS (X^F, X^F̄), which fulfills X^F̄ ⊆ X ⊆ X^F and has dynamic characteristics. All of the IPSs generated by set X constitute a set family called the IPS family $\{(X_i^{F}, X_j^{\bar{F}}) \mid i \in I, j \in J\}$ [3]. Especially, if the above process occurs continuously, X dynamically generates a linked IPS chain $(X_1^{F}, X_1^{\bar{F}}), (X_2^{F}, X_2^{\bar{F}}), \ldots, (X_s^{F}, X_s^{\bar{F}})$, which has the relations $X_i^{F} \subseteq X_{i+1}^{F}$ and $X_{i+1}^{\bar{F}} \subseteq X_i^{\bar{F}}$, i = 1, 2, ..., s − 1. Let us treat the sets X^F, X, X^F̄ as information, indicated in order by (x̄)^F, (x̄), and (x̄)^F̄. Then we obtain the IPI ((x̄)^F, (x̄)^F̄) with all the characteristics of IPS [18,19,20,21,22].

For the inner IPI (x̄)^F, the dynamic process consists of adding information elements under the condition that some attributes are migrated into α, as ∃wi ∉ (x̄), f(wi) = xi ∈ (x̄), where (x̄)^F indicates (x̄) ∪ {xi | wi ∉ (x̄), f(wi) = xi ∈ (x̄)}. For the exterior IPI (x̄)^F̄, the dynamic process consists of migrating some elements out of (x̄) under the condition that some attributes in α are removed, as ∃xi ∈ (x̄), f̄(xi) = wi ∉ (x̄), where (x̄)^F̄ indicates (x̄) − {xi ∈ (x̄) | f̄(xi) = wi ∉ (x̄)}. Obviously, all of the inner IPI and exterior IPI generated by (x̄) form, respectively, an inner IPI family and an exterior IPI family, expressed as $\{(\bar{x})_i^{F} \mid i \in I\}$ and $\{(\bar{x})_j^{\bar{F}} \mid j \in J\}$. Certainly, all of the IPI generated by (x̄) also form an IPI family $\Phi(x) = \{((\bar{x})_i^{F}, (\bar{x})_j^{\bar{F}}) \mid i \in I, j \in J\}$. Considering $\{x_i \mid u_i \notin (\bar{x}), f(u_i) = x_i \in (\bar{x})\}$ as an event, $(\bar{x})_i^{F}$ is obtained in the case that the event occurrence probability equals 1; namely, (x̄)^F is obtained under the assumption that $\{x_i \mid u_i \notin (\bar{x}), f(u_i) = x_i \in (\bar{x})\}$ is bound to happen. As we know, however, it is stochastic whether wi ∉ (x̄) is transferred into (x̄) by f. The same goes for the exterior IPI and the IPI [23].

Definition 1
$(\bar{x})^{F\sigma}$ is called the random inner IPI generated by (x̄) depending on information element migration probability σ, briefly written as random inner IPI, such that
$(\bar{x})^{F\sigma} = (\bar{x}) \cup (\bar{x})_{\sigma}^{+},$
where $(\bar{x})_{\sigma}^{+}$ is called the added random information with
$(\bar{x})_{\sigma}^{+} = \{x \mid w \notin (\bar{x}), p(\{f(w) = x \in (\bar{x})\}) \in [\sigma, 1]\}$
on condition that α^F fulfills
$\alpha^{F} = \alpha \cup \{\alpha' \mid \delta \in V, \delta \notin \alpha, f(\delta) = \alpha' \in \alpha^{F}\}$
where σ ∈ (0, 1), and α^F is also the attribute set of (x̄)^Fσ.
Definition 2
$(\bar{x})^{\bar{F}\sigma}$ is called the random exterior IPI generated by (x̄) depending on information element migration probability σ, briefly written as random exterior IPI, such that
$(\bar{x})^{\bar{F}\sigma} = (\bar{x}) - \{x \mid x \in (\bar{x}), p(\{\bar{f}(x) = w \notin (\bar{x})\}) \in [\sigma, 1]\},$
where $(\bar{x})_{\sigma}^{-}$ is called the deleted random information with
$(\bar{x})_{\sigma}^{-} = \{x \mid x \in (\bar{x}), p(\{\bar{f}(x) = w \notin (\bar{x})\}) \in [\sigma, 1]\}$
on condition that α^F̄ fulfills
$\alpha^{\bar{F}} = \alpha - \{\alpha_i \mid \alpha_i \in \alpha, \bar{f}(\alpha_i) = \delta_i \notin \alpha\}$
where the nonempty attribute set α^F̄ is also the attribute set of (x̄)^F̄σ ≠ ∅, and σ ∈ (0, 1).
Definition 3
The information pair formed by the random inner IPI and the random exterior IPI generated by (x̄) is called a random IPI generated by (x̄) depending on information element migration probability σ, also written as RIPI:
$((x¯)Fσ,(x¯)F¯σ)$
where (α^F, α^F̄) is also the attribute set pair of ((x̄)^Fσ, (x̄)^F̄σ).
Considering Definitions 1–3 and the convention above, it is easily noted that Formulas (1) and (4) can, respectively, be represented in other forms as follows:
$(\bar{x})^{F\sigma} = (\bar{x}) \cup (\bar{x})_{\sigma}^{+} = (\bar{x}) \cup \{x \mid w \notin (\bar{x}), p_F(f) \in [\sigma, 1]\}$
$(\bar{x})^{\bar{F}\sigma} = (\bar{x}) - (\bar{x})_{\sigma}^{-} = (\bar{x}) - \{x \mid x \in (\bar{x}), p_{\bar{F}}(\bar{f}) \in [\sigma, 1]\}$
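As a concrete illustration of Formulas (8) and (9), the following sketch thresholds migration probabilities at σ to build a random inner IPI and a random exterior IPI. All element names and probability values here are illustrative assumptions, not data from the paper.

```python
# A minimal sketch of Formulas (8)-(9): building a random inner IPI and a
# random exterior IPI from ordinary information (x) by keeping only the
# migrations whose probability falls in [sigma, 1].

def random_inner_ipi(x, inward_prob, sigma):
    """(x)^(F sigma) = (x) ∪ {w : w ∉ (x), p(f(w)) ∈ [sigma, 1]}."""
    added = {w for w, p in inward_prob.items() if w not in x and p >= sigma}
    return x | added

def random_exterior_ipi(x, outward_prob, sigma):
    """(x)^(F̄ sigma) = (x) − {e : e ∈ (x), p(f̄(e)) ∈ [sigma, 1]}."""
    deleted = {e for e in x if outward_prob.get(e, 0.0) >= sigma}
    return x - deleted

# Example: x1..x4 are current elements; w1, w2 are candidates outside (x).
x = {"x1", "x2", "x3", "x4"}
inward = {"w1": 0.9, "w2": 0.6}    # migration-in probabilities
outward = {"x3": 0.85, "x4": 0.4}  # migration-out probabilities
sigma = 0.8
inner = random_inner_ipi(x, inward, sigma)         # only w1 passes sigma
exterior = random_exterior_ipi(x, outward, sigma)  # only x3 is deleted
print(sorted(inner), sorted(exterior))
```

With σ = 0.8, only the migrations with probability at least 0.8 take effect, so the inner IPI gains w1 but not w2, and the exterior IPI loses x3 but not x4.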

Formulas (8) and (9) show that an RIPI is an information pair generated not only by the corresponding IPI but also by the ordinary information (x̄), as shown in Fig. 1.

All of the RIPI generated by information (x) constitute an RIPI family as
$RΦ(x)={((x¯)iFσ,(x¯)jF¯σ)|σ∈(0,1),i∈I,j∈J}.$
According to Definitions 1–3, Propositions 1–4 are simply derived as follows.
Proposition 1

Let $p_F(f) \equiv 0$ for ∀f ∈ F; then (x̄)^Fσ and (x̄) are not identified (cannot be distinguished), expressed as UNI((x̄)^Fσ, (x̄)).

Proposition 2

Let $p_{\bar{F}}(\bar{f}) \equiv 0$ for ∀f̄ ∈ F̄; then UNI((x̄), (x̄)^F̄σ).

Proposition 3

Let $p_{\bar{F}}(\bar{f}) = p_F(f) \equiv 0$ for ∀f̄ ∈ F̄ and ∀f ∈ F; then UNI(((x̄)^Fσ, (x̄)^F̄σ), ((x̄), (x̄))).

Proposition 4

For ∀σ ∈ (0, 1), there is UNI(Φ(x), RΦ(x)).

Propositions 1–4 state that the RIPI ((x̄)^Fσ, (x̄)^F̄σ) is an extension of the IPI ((x̄)^F, (x̄)^F̄), and the IPI is a particular case of the RIPI. Under certain conditions, an RIPI restores to the homologous IPI, or even to the information (x̄).

Theorem 1
(Relation theorem between RIPI and IPI) Assume ((x̄)^Fσ, (x̄)^F̄σ) ∈ RΦ(x) and ((x̄)^F, (x̄)^F̄) ∈ Φ(x); then for ∀σ ∈ (0, 1) we have
$(\bar{x})^{\bar{F}} \subseteq (\bar{x})^{\bar{F}\sigma} \subseteq (\bar{x}) \subseteq (\bar{x})^{F\sigma} \subseteq (\bar{x})^{F}$
where Formula (11) represents the relation of Fig. 1.
Proof

The assumption condition and Formulas (1) and (4) guarantee that (x̄) ⊆ (x̄)^Fσ ⊆ (x̄)^F and (x̄)^F̄ ⊆ (x̄)^F̄σ ⊆ (x̄) are fulfilled. According to the definition of IPI derived from common information (x̄) in Refs. [3, 4, 18], we also have (x̄)^F̄ ⊆ (x̄) ⊆ (x̄)^F. Hence, we get Formula (11) by the set hereditary property.

Theorem 2

(Generation theorem of RIPI) Assume that (α^F, α^F̄) is the attribute set pair of the RIPI ((x̄)^Fσ, (x̄)^F̄σ). Then there exists a nonempty pair (Δα, ∇α) ≠ (∅, ∅) such that α^F − (α ∪ Δα) = ∅ and α^F̄ − (α − ∇α) = ∅, where (Δα, ∇α) ≠ (∅, ∅) means Δα ≠ ∅ and ∇α ≠ ∅.

Proof

Let ((x̄)^Fσ, (x̄)^F̄σ) be different from (x̄); namely, (x̄)^Fσ = (x̄)^F̄σ = (x̄) fails under the conditions of Theorem 2. Then at least one of (x̄)^Fσ ≠ (x̄) and (x̄)^F̄σ ≠ (x̄) holds. Assume (x̄)^Fσ ≠ (x̄); the generating process of the random inner IPI (x̄)^Fσ depending on its attribute set α^F shows that α^F satisfies α ⊆ α^F. Setting Δα = α^F − α, we obtain Δα ≠ ∅ and α^F − (α ∪ Δα) = ∅ according to Definition 1. In the same way, it is proved that there exists ∇α ≠ ∅ such that α^F̄ − (α − ∇α) = ∅.

## 3 RIPI characteristics

Supplementing additional attributes into α one after another, information elements are migrated into information (x̄) with certain probabilities in succession and form a chain of random inner IPI showing the following dynamic process:
$(x¯)1Fσ⊆(x¯)2Fσ⊆...⊆(x¯)sFσ$
Deleting attributes from α continuously, some information elements in (x̄) are migrated out successively, with certain probabilities, and form a chain of random exterior IPI showing the following dynamic process:
$(x¯)sF¯σ⊆(x¯)s−1F¯σ⊆...⊆(x¯)1F¯σ$
If the above change processes take place at the same time, we obtain a chain of RIPI implying the dynamic process
$((\bar{x})_1^{F\sigma}, (\bar{x})_1^{\bar{F}\sigma}), ((\bar{x})_2^{F\sigma}, (\bar{x})_2^{\bar{F}\sigma}), \ldots, ((\bar{x})_s^{F\sigma}, (\bar{x})_s^{\bar{F}\sigma})$
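The inner chain of Formula (12) can be simulated by admitting one attribute at a time, with each attribute unlocking candidate elements whose migration probabilities pass σ. The attribute names, candidates, and probabilities below are all illustrative assumptions.

```python
# Simulating the chain (12): each newly supplemented attribute unlocks some
# candidate elements; those with migration probability >= sigma join (x).
sigma = 0.8
x = {"x1", "x2"}
# attribute -> candidate elements with their migration-in probabilities
unlocks = {
    "a7": {"w1": 0.9, "w2": 0.5},
    "a8": {"w3": 0.85},
    "a9": {"w4": 0.3},
}
chain = []
for attr in ["a7", "a8", "a9"]:  # attributes supplemented in order
    x = x | {w for w, p in unlocks[attr].items() if p >= sigma}
    chain.append(set(x))
# Each stage contains the previous one, as in Formula (12).
assert all(a <= b for a, b in zip(chain, chain[1:]))
print([sorted(s) for s in chain])
```

Here a7 admits w1 (probability 0.9) but not w2, a8 admits w3, and a9 admits nothing, so the three stages form a nested chain.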

According to Formulas (12)–(14), we get the dynamic characteristics depending on the attribute sets indicated by Theorems 3–5.

Theorem 3

(Depending attribute theorem of random inner IPI) Let $(\bar{x})_i^{F\sigma}$, $(\bar{x})_j^{F\sigma}$ be random inner IPI and $\alpha_i^{F}$, $\alpha_j^{F}$ their attribute sets, in order. Then $(\bar{x})_i^{F\sigma} \subseteq (\bar{x})_j^{F\sigma}$ iff $\alpha_i^{F} \subseteq \alpha_j^{F}$.

Theorem 4

(Depending attribute theorem of random exterior IPI) Let $(\bar{x})_i^{\bar{F}\sigma}$, $(\bar{x})_j^{\bar{F}\sigma}$ be random exterior IPI and $\alpha_i^{\bar{F}}$, $\alpha_j^{\bar{F}}$ their attribute sets, in order. Then $(\bar{x})_j^{\bar{F}\sigma} \subseteq (\bar{x})_i^{\bar{F}\sigma}$ iff $\alpha_j^{\bar{F}} \subseteq \alpha_i^{\bar{F}}$.

Theorem 5

(Depending attribute theorem of RIPI) Let $((\bar{x})_i^{F\sigma}, (\bar{x})_i^{\bar{F}\sigma})$, $((\bar{x})_j^{F\sigma}, (\bar{x})_j^{\bar{F}\sigma}) \in R\Phi(x)$ and $(\alpha_i^{F}, \alpha_i^{\bar{F}})$, $(\alpha_j^{F}, \alpha_j^{\bar{F}})$ be their attribute sets, respectively. Then $((\bar{x})_i^{F\sigma}, (\bar{x})_i^{\bar{F}\sigma}) \subseteq\supset ((\bar{x})_j^{F\sigma}, (\bar{x})_j^{\bar{F}\sigma})$ iff $(\alpha_i^{F}, \alpha_i^{\bar{F}}) \subseteq\supset (\alpha_j^{F}, \alpha_j^{\bar{F}})$.

Inference 1 Let card(V − α) = t. Then information (x̄) can generate t! dynamic chains of random inner IPI.

Inference 2 Let card(α) = m. Then information (x) can generate m! dynamic chains of random exterior IPI.

Inference 3 Let card(α) = m and card(V − α) = t. Then information (x̄) can generate t! × m! dynamic chains of RIPI.

According to the dynamic characteristics of RIPI, the measurement of dynamic change degree is proposed in Definitions 4–6.

Definition 4
Let (x̄)^Fσ be a random inner IPI derived from (x̄). Then the real number $\gamma_{(\bar{x})^{F\sigma}}$ is called the F-measure degree of (x̄)^Fσ relative to (x̄):
$\gamma_{(\bar{x})^{F\sigma}} = \| x^{(F\sigma)} - x^{(0)} \| / \| x^{(0)} \|$
where (x̄) = {x1, x2, ..., xs}, (x̄)^Fσ = {x1, x2, ..., xs, xs+1, ..., xs+t}, and the sequences of information values are expressed as
$x_i = (x_{i1}, x_{i2}, \ldots, x_{im}), i = 1, 2, \ldots, s+t, x_{ik} \in [0, 1]$, and
$x^{(0)} = (\sum_{i=1}^{s} x_{i1}, \sum_{i=1}^{s} x_{i2}, \ldots, \sum_{i=1}^{s} x_{im})$, $x^{(F\sigma)} = (\sum_{i=1}^{s+t} x_{i1}, \sum_{i=1}^{s+t} x_{i2}, \ldots, \sum_{i=1}^{s+t} x_{im})$,
$\| x^{(F\sigma)} - x^{(0)} \| = [\sum_{k=1}^{m} (\sum_{i=1}^{s+t} x_{ik} - \sum_{i=1}^{s} x_{ik})^{\rho}]^{1/\rho}$, $\| x^{(0)} \| = [\sum_{k=1}^{m} (\sum_{i=1}^{s} x_{ik})^{\rho}]^{1/\rho}$, $k = 1, 2, \ldots, m$, $\rho \in Z^{+}$.
Definition 5
Let (x̄)^F̄σ be a random exterior IPI derived from (x̄). Then the real number $\gamma_{(\bar{x})^{\bar{F}\sigma}}$ is called the F̄-measure degree of (x̄)^F̄σ relative to (x̄):
$\gamma_{(\bar{x})^{\bar{F}\sigma}} = \| x^{(0)} - x^{(\bar{F}\sigma)} \| / \| x^{(0)} \|$
where (x̄) = {x1, x2, ..., xs}, (x̄)^F̄σ = {x1, x2, ..., x_{s−p}}, 0 ≤ p < s, p ∈ Z+; x^(0) and ‖x^(0)‖ are the same as in Definition 4, and the sequence of information values is written as
$x_i = (x_{i1}, x_{i2}, \ldots, x_{im}), x_{ik} \in [0, 1], i = 1, 2, \ldots, s$, and
$x^{(\bar{F}\sigma)} = (\sum_{i=1}^{s-p} x_{i1}, \sum_{i=1}^{s-p} x_{i2}, \ldots, \sum_{i=1}^{s-p} x_{im})$,
$\| x^{(0)} - x^{(\bar{F}\sigma)} \| = [\sum_{k=1}^{m} (\sum_{i=1}^{s} x_{ik} - \sum_{i=1}^{s-p} x_{ik})^{\rho}]^{1/\rho}$.
Definition 6
Let ((x̄)^Fσ, (x̄)^F̄σ) be an RIPI generated by (x̄). Then the real number pair composed of Formulas (15) and (16) is called the (F, F̄)-measure degree of ((x̄)^Fσ, (x̄)^F̄σ) relative to (x̄):
$(γ(x¯)Fσ,γ(x¯)F¯σ)$
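The measure degrees of Definitions 4–6 can be sketched directly from their formulas as ρ-norms of column sums. The function names below are illustrative, and the sample rows are made-up data, not values from the paper.

```python
# Measure degrees of Definitions 4-6: rows are information elements
# x_i = (x_i1, ..., x_im); x^(0), x^(F sigma), x^(F̄ sigma) are column sums.

def column_sums(rows):
    return [sum(col) for col in zip(*rows)]

def rho_norm(v, rho=2):
    return sum(abs(t) ** rho for t in v) ** (1.0 / rho)

def f_measure_degree(base_rows, inner_rows, rho=2):
    """gamma = ||x^(F sigma) - x^(0)|| / ||x^(0)|| (Formula (15))."""
    x0, xf = column_sums(base_rows), column_sums(inner_rows)
    diff = [a - b for a, b in zip(xf, x0)]
    return rho_norm(diff, rho) / rho_norm(x0, rho)

def f_bar_measure_degree(base_rows, exterior_rows, rho=2):
    """gamma = ||x^(0) - x^(F̄ sigma)|| / ||x^(0)|| (Formula (16))."""
    x0, xe = column_sums(base_rows), column_sums(exterior_rows)
    diff = [a - b for a, b in zip(x0, xe)]
    return rho_norm(diff, rho) / rho_norm(x0, rho)

base = [[0.5, 0.4], [0.3, 0.6]]
inner = base + [[0.2, 0.2]]  # one element migrated in
exterior = base[:1]          # one element migrated out
pair = (f_measure_degree(base, inner), f_bar_measure_degree(base, exterior))
print(pair)  # the (F, F̄)-measure degree pair of Definition 6
```

If nothing migrates, the measure degree is 0, matching the intuition that the RIPI then restores to (x̄).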

Because ((x̄)^F, (x̄)^F̄) is a special case of ((x̄)^Fσ, (x̄)^F̄σ), $(\gamma_{(\bar{x})^{F}}, \gamma_{(\bar{x})^{\bar{F}}})$ is chosen to express the (F, F̄)-measure degree of ((x̄)^F, (x̄)^F̄) when UNI((x̄)^Fσ, (x̄)^F) and UNI((x̄)^F̄σ, (x̄)^F̄) hold in Definitions 4 and 5.

Formula (15) measures the change between (x̄)^Fσ and (x̄) caused by the attribute supplementing set Δα; the same goes for Formulas (16) and (17). Thus Propositions 5–7 can be obtained.

Proposition 5

Given the F-measure degree $\gamma_{(\bar{x})^{F\sigma}}$, $\gamma_{(\bar{x})^{F\sigma}} \neq 0$ iff IDE((x̄)^Fσ, (x̄)) or IDE(α^F, α).

Proposition 6

Given the F̄-measure degree $\gamma_{(\bar{x})^{\bar{F}\sigma}}$, $\gamma_{(\bar{x})^{\bar{F}\sigma}} \neq 0$ iff IDE((x̄)^F̄σ, (x̄)) or IDE(α^F̄, α).

Proposition 7

Given the (F, F̄)-measure degree $(\gamma_{(\bar{x})^{F\sigma}}, \gamma_{(\bar{x})^{\bar{F}\sigma}})$, $(\gamma_{(\bar{x})^{F\sigma}}, \gamma_{(\bar{x})^{\bar{F}\sigma}}) \neq (0, 0)$ iff IDE(((x̄)^Fσ, (x̄)^F̄σ), (x̄)) or IDE((α^F, α^F̄), α).

Theorem 6

Let $(\bar{x})_i^{F\sigma}$, $(\bar{x})_j^{F\sigma}$ be random inner IPI. Then $(\bar{x})_i^{F\sigma} \subseteq (\bar{x})_j^{F\sigma}$ iff $\gamma_{(\bar{x})_i^{F\sigma}} \leq \gamma_{(\bar{x})_j^{F\sigma}}$.

Proof
Considering $(\bar{x})_i^{F\sigma} = \{x_1, x_2, x_3, \ldots, x_q\}$ and $(\bar{x})_j^{F\sigma} = \{x_1, x_2, x_3, \ldots, x_q, x_{q+1}, \ldots, x_{q+t}\}$, $(\bar{x})_i^{F\sigma} \subseteq (\bar{x})_j^{F\sigma}$ implies that $(\bar{x})_i^{F\sigma}$ is a sub-information of $(\bar{x})_j^{F\sigma}$, so $∑i=1qxik≤∑i=1q+txik,k=1,2,...,m$ iff
$γi(x¯)Fσ=∥xi(Fσ)−x(0)∥/∥x(0)∥≤γj(x¯)Fσ=∥xj(Fσ)−x(0)∥/∥x(0)∥.$
Accordingly, we have the following result.
Theorem 7

Suppose that $(\bar{x})_i^{\bar{F}\sigma}$ and $(\bar{x})_j^{\bar{F}\sigma}$ are random exterior IPI. Then $(\bar{x})_i^{\bar{F}\sigma} \subseteq (\bar{x})_j^{\bar{F}\sigma}$ iff $\gamma_{(\bar{x})_i^{\bar{F}\sigma}} \geq \gamma_{(\bar{x})_j^{\bar{F}\sigma}}$.

Inference 4 Suppose $((\bar{x})_i^{F\sigma}, (\bar{x})_i^{\bar{F}\sigma})$, $((\bar{x})_j^{F\sigma}, (\bar{x})_j^{\bar{F}\sigma})$ are RIPI. Then
$((x¯)iFσ,(x¯)iF¯σ)⊆⊃((x¯)jFσ,(x¯)jF¯σ) iff (γi(x¯)Fσ,γi(x¯)F¯σ)≤>(γj(x¯)Fσ,γj(x¯)F¯σ)$
where $(γi(x¯)Fσ,γi(x¯)F¯σ)≤>(γj(x¯)Fσ,γj(x¯)F¯σ)$ implies that $\gamma_{(\bar{x})_i^{F\sigma}} \leq \gamma_{(\bar{x})_j^{F\sigma}}$ and $\gamma_{(\bar{x})_i^{\bar{F}\sigma}} \geq \gamma_{(\bar{x})_j^{\bar{F}\sigma}}$.

## 4 Applications of RIPI model in intelligent acquisition–separation of investment information

For convenience, call x^(0) the information value and x^(Fσ) the inner IPI value in Definition 4; call x^(F̄σ) the exterior IPI value. Based on these, Definition 7 is given.

Definition 7
Call
$if α⇒αF, then x(0)⇒x(Fσ)$
an inner RIPI reasoning, in which α ⇒ α^F and x^(0) ⇒ x^(Fσ) are equivalent to α ⊆ α^F and x^(0) ≤ x^(Fσ), respectively; call
$if α⇒αF¯, then x(F¯σ)⇒x(0)$
an exterior RIPI reasoning, in which α^F̄ ⇒ α and x^(F̄σ) ⇒ x^(0) are equivalent, so that α^F̄ and x^(F̄σ) are subsets of α and x^(0), respectively.
Proposition 8

If $\alpha_i^{F} \Rightarrow \alpha_{i+1}^{F}$, then $\gamma_{(\bar{x})_i^{F\sigma}} \leq \gamma_{(\bar{x})_{i+1}^{F\sigma}}$, in which the attribute sets of $(\bar{x})_i^{F\sigma}$, $(\bar{x})_{i+1}^{F\sigma}$ are $\alpha_i^{F}$, $\alpha_{i+1}^{F}$, respectively, and they satisfy $\alpha_{i+1}^{F} = \alpha_i^{F} \cup \{\alpha' \mid \delta \in V, \delta \notin \alpha, f(\delta) = \alpha' \in \alpha\}$.

Proposition 9

If $\alpha_{i+1}^{\bar{F}} \Rightarrow \alpha_i^{\bar{F}}$, then $\gamma_{(\bar{x})_{i+1}^{\bar{F}\sigma}} \leq \gamma_{(\bar{x})_i^{\bar{F}\sigma}}$, in which the attribute sets of $(\bar{x})_i^{\bar{F}\sigma}$, $(\bar{x})_{i+1}^{\bar{F}\sigma}$ are $\alpha_i^{\bar{F}}$, $\alpha_{i+1}^{\bar{F}}$, respectively, and they satisfy $\alpha_{i+1}^{\bar{F}} = \alpha_i^{\bar{F}} - \{\alpha_k \mid \alpha_k \in \alpha, \bar{f}(\alpha_k) = \delta_k \notin \alpha\}$.

Assumption
For simplicity, this section only proposes the application of random inner IPI to the intelligent separation–acquisition of investment information. Suppose that W = {W1, W2, W3, W4} is a group company that produces petroleum and chemical products, where Wi ∈ W, i = 1, 2, 3, 4 are subsidiary corporations of W. α = {α1, α2, α3, α4, α5, α6} is the attribute set of W (product market characteristics of W). The information form of W is (x̄) = {x1, x2, x3, x4}. Due to trade secrets, the group company, its subsidiary corporations, and the attributes (market characteristics) are expressed simply as W, Wi, and α1, α2, α3, α4, α5, α6, respectively. x^(0), $x_i^{(0)}$ are the profit discrete value distributions of W, Wi from January to June in 2019:
$x^{(0)} = (x_1^{(0)}, x_2^{(0)}, x_3^{(0)}, x_4^{(0)}), \quad x_i^{(0)} = (x_{i1}^{(0)}, x_{i2}^{(0)}, x_{i3}^{(0)}, \ldots, x_{i6}^{(0)}), \quad i = 1, 2, 3, 4.$
Values in x(0), $xi(0)$ are derived from dealing with the real profit value, and the results do not influence the analysis of the case. The profit discrete distributions x(0), $xi(0)$ of W,Wi are listed in Table 1.
Table 1

The profit discrete distributions x^(0), $x_i^{(0)}$ of W, Wi, i = 1, 2, 3, 4 from January to June in 2019

| k | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- |
| $x_{1k}^{(0)}$ | 0.51 | 0.30 | 0.52 | 0.22 | 0.50 | 0.44 |
| $x_{2k}^{(0)}$ | 0.21 | 0.28 | 0.45 | 0.36 | 0.66 | 0.53 |
| $x_{3k}^{(0)}$ | 0.43 | 0.29 | 0.60 | 0.40 | 0.28 | 0.24 |
| $x_{4k}^{(0)}$ | 0.19 | 0.35 | 0.48 | 0.66 | 0.67 | 0.26 |
By the profit discrete distribution in Table 1, the profit information value of W is
$x^{(0)} = (\sum_{i=1}^{4} x_{i1}^{(0)}, \sum_{i=1}^{4} x_{i2}^{(0)}, \sum_{i=1}^{4} x_{i3}^{(0)}, \sum_{i=1}^{4} x_{i4}^{(0)}, \sum_{i=1}^{4} x_{i5}^{(0)}, \sum_{i=1}^{4} x_{i6}^{(0)}) = (1.34, 1.22, 2.05, 1.64, 2.11, 1.47).$
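The column sums above can be checked mechanically; a short sketch reproducing x^(0) from the Table 1 rows:

```python
# Checking the profit information value x^(0) of W from Table 1 by summing
# the four subsidiaries' rows column by column (k = 1, ..., 6).
table1 = [
    [0.51, 0.30, 0.52, 0.22, 0.50, 0.44],  # x_1^(0)
    [0.21, 0.28, 0.45, 0.36, 0.66, 0.53],  # x_2^(0)
    [0.43, 0.29, 0.60, 0.40, 0.28, 0.24],  # x_3^(0)
    [0.19, 0.35, 0.48, 0.66, 0.67, 0.26],  # x_4^(0)
]
x0 = [round(sum(col), 2) for col in zip(*table1)]
print(x0)  # [1.34, 1.22, 2.05, 1.64, 2.11, 1.47]
```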

The global disease COVID-19 broke out in early 2020 and caused a series of economic changes: some manufacturing industry profits were reduced with different probabilities. In contrast, products relating to protective apparatus, therapeutic apparatus, their accessories, and so on have great market potential and earn better profits with high probability. This random dynamic change suits the RIPI model of this paper. For simplicity, this section only considers the latter.

Suppose that α7 = outbreak of COVID-19 and the attribute set of (x̄) is α = {α1, α2, α3, α4, α5, α6}; then α^F is derived by transferring α7 into α:
$αF={α1,α2,α3,α4,α5,α6}∪{α7}={α1,α2,α3,α4,α5,α6,α7}.$
Under this condition, the sub-companies W5, W6, W7, W8 ∉ W (their specific products are omitted) would bring much extra profit, with probabilities 1, 0.8, 0.75, and 0.67, respectively, after overall consideration. If probability σ = 0.8 is chosen, then W5 and W6 enter W to form W^{F0.8}, whose information is (x̄)^{F0.8} = {x1, x2, x3, ..., x6}, with
$x^{(F0.8)} = (x_1^{(F0.8)}, x_2^{(F0.8)}, \ldots, x_6^{(F0.8)}), \quad x_i^{(F0.8)} = (x_{i1}^{(F0.8)}, x_{i2}^{(F0.8)}, \ldots, x_{i6}^{(F0.8)}), \quad i = 1, 2, \ldots, 6.$

The detailed profit discrete distribution of WF0.8 is shown in Table 2.

Table 2

The profit discrete distributions x^{(F0.8)}, $x_i^{(F0.8)}$ of the group company W^{F0.8} and sub-companies Wi, i = 1, 2, ..., 6 from January to June in 2020

| k | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- |
| $x_{1k}^{(F0.8)}$ | 0.51 | 0.30 | 0.52 | 0.22 | 0.50 | 0.44 |
| $x_{2k}^{(F0.8)}$ | 0.21 | 0.28 | 0.45 | 0.36 | 0.66 | 0.53 |
| $x_{3k}^{(F0.8)}$ | 0.43 | 0.29 | 0.60 | 0.40 | 0.28 | 0.24 |
| $x_{4k}^{(F0.8)}$ | 0.19 | 0.35 | 0.48 | 0.66 | 0.67 | 0.26 |
| $x_{5k}^{(F0.8)}$ | 0.61 | 0.60 | 0.72 | 0.69 | 0.65 | 0.70 |
| $x_{6k}^{(F0.8)}$ | 0.71 | 0.69 | 0.82 | 0.72 | 0.66 | 0.69 |
By the profit discrete distribution in Table 2, the profit information value of W^{F0.8} is
$x^{(F0.8)} = (\sum_{i=1}^{6} x_{i1}^{(F0.8)}, \sum_{i=1}^{6} x_{i2}^{(F0.8)}, \ldots, \sum_{i=1}^{6} x_{i6}^{(F0.8)}) = (2.66, 2.51, 3.59, 3.05, 3.42, 2.86), \quad (22)$
and
$\gamma_{(\bar{x})^{F0.8}} = \| x^{(F0.8)} - x^{(0)} \| / \| x^{(0)} \| = 0.857. \quad (23)$
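The value of x^{(F0.8)} in Formula (22) can be checked the same way; rows 1–4 carry over from Table 1 and rows 5–6 are the newly admitted sub-companies W5, W6. Since the norm order ρ of Definition 4 used in Formula (23) is not stated in the example, the sketch below computes γ with ρ = 1 as an illustrative assumption; other choices of ρ give nearby values.

```python
# Verifying Formula (22) and sketching the measure degree of Formula (23).
table2 = [
    [0.51, 0.30, 0.52, 0.22, 0.50, 0.44],  # W1
    [0.21, 0.28, 0.45, 0.36, 0.66, 0.53],  # W2
    [0.43, 0.29, 0.60, 0.40, 0.28, 0.24],  # W3
    [0.19, 0.35, 0.48, 0.66, 0.67, 0.26],  # W4
    [0.61, 0.60, 0.72, 0.69, 0.65, 0.70],  # W5 (newly admitted)
    [0.71, 0.69, 0.82, 0.72, 0.66, 0.69],  # W6 (newly admitted)
]
x0 = [round(sum(col), 2) for col in zip(*table2[:4])]  # x^(0), rows 1-4
xf = [round(sum(col), 2) for col in zip(*table2)]      # x^(F0.8), all rows
print(xf)  # [2.66, 2.51, 3.59, 3.05, 3.42, 2.86], matching Formula (22)
# gamma = ||x^(F0.8) - x^(0)|| / ||x^(0)|| with rho = 1 (assumed)
gamma = sum(a - b for a, b in zip(xf, x0)) / sum(x0)
print(round(gamma, 3))
```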

### Analysis on the intelligent acquisition of RIPI (x̄)^{F0.8}

On the condition that α^F is generated by supplementing attribute α7 into α, one can obtain the random inner inverse packet information (x̄)^{F0.8} = {x1, x2, x3, ..., x6} based on information (x̄) = {x1, x2, x3, x4} by using Definition 1 and fulfilling Formula (18).

x^{(F0.8)} is intelligently separated out and acquired. If α7 had not occurred, x^{(F0.8)} would never have been gained, nor would (x̄)^{F0.8} have been known with probability 0.8. The example simply tells us the following:

1. When α and α^F satisfy α ⊆ α^F, information (x̄)^{F0.8} is intelligently and randomly discovered out of information (x̄) by using the random inner inverse packet information generation model, while W5 and W6 are found out of W due to (x̄)^{F0.8}.
2. When α7 is regarded as a chance attribute that invades the attribute set α, the random inner IPI (x̄)^{F0.8} is generated by (x̄) according to Definition 1.
3. When the chance attribute α7 invades α, the profit discrete distribution data x^(0) of the group company turns into x^{(F0.8)}, which increases the profit of W. This conclusion has been confirmed in the financial statement published by W.
4. Formula (23) means that W5 and W6 will bring 85.7% extra profit with a probability of 0.8 or greater.

## 5 Discussion

In Refs. [3, 4], a dynamic feature was introduced into the common set X and the structure of IPS was proposed. Based on IPS, IPI and its applications to practical problems with dynamic and hereditary characteristics are discussed in [8, 13, 17, 20, 21]. This paper considers the randomness of element transfer within the dynamic characteristics of IPI [23]. By integrating probability knowledge into IPI, it proposes the concepts and structures of RIPI and their applications. RIPI theory enriches IPI and enlarges its application scope; it also provides a new theoretical tool for studying information systems.

Acknowledgment

The authors acknowledge the National Statistical Science Research Project (Grant No. 2018LY14).

## References

• [1]

Kaiquan Shi. P-sets [J]. Journal of Shandong University (Natural Science), 2008, 43(11):78–84(in Chinese).

• [2]

Kaiquan Shi. Big data structure-logic characteristics and big data law[J]. Journal of Shandong University (Natural Science), 2019, 54(2):1–29 (in Chinese).

• [3]

Kaiquan Shi. Inverse P-set[J]. Journal of Shandong University: Natural Science 2012, 47(1):98–103 (in Chinese).

• [4]

Hualong Guo, Baohui Chen, Jihua Tang. Inverse P-sets and intelligent fusion mining-discovery of information [J]. Journal of Shandong University (Natural Science), 2013, 13(8):97–103 (in Chinese).

• [5]

Xiumei Hao, Xiuqing Jiang. Structure characteristics on probability rough information matrix[J]. Fuzzy Systems and Mathematics, 2017, 31(3):153–158 (in Chinese).

• [6]

Kaiquan Shi. P-information law intelligent fusion and soft information image intelligent generation[J]. Journal of Shandong University (Natural Science), 2014, 49(4):1–17 (in Chinese).

• [7]

Xiaochao Li. An algebraic model of P-set [J]. Journal of Shangqiu Normal University, 2020, 36(3):1–5 (in Chinese).

• [8]

Kaiquan Shi. P-sets, inverse P-sets and the intelligent fusion-filter identification of information [J]. Computer Science, 2012, 39(4):1–13 (in Chinese).

• [9]

Jiqin Liu, Haiyue Zhang. Information P-dependence and P-dependence mining-sieving[J]. Computer Science, 2018, 45 (7): 202–206 (in Chinese).

• [10]

Ling Zhang, Jihua Tang, Kaiquan Shi. The fusion of internal P-information and its feature of attribute conjunction[J]. Journal of Shandong University (Natural Science), 2014, 49(2): 93–97 (in Chinese).

• [11]

Xiumei Hao, Ningning Li. Quantitative characteristics and applications of P-information hidden mining [J]. 2019, 54(9): 9–14 (in Chinese).

• [12]

Fengsheng Xu, Xiuqing Yu, Lihua Zhang. Intelligent fusion of information and reasoning generation of its P-augmented matrix [J]. Journal of Shandong University (Natural Science), 2019, 54(9):9–14 (in Chinese).

• [13]

Hongkang Lin, Chengxian Fan. The dual form P-reasoning and identification of unknown attribute [J]. International Journal of Digital Content Technology and its Applications, 2012, 6(1):121–131.

• [14]

Fengsheng Xu, Xiuqing Yu. Surplus-loss value theorems of attribute cardinal numbers and inner-exterior separation of information laws[J]. Journal of Shandong University (Natural Science), 2017, 52(4):87–92 (in Chinese).

• [15]

Xiumei Hao, Tongtong Ren. F̅-ladder knowledge and F̅-hiding knowledge discovery [C]//Proceedings of the 2015 International Conference on Fuzzy Systems and Knowledge Discovery. Zhangjiajie: IEEE, 2015:1010–1016.

• [16]

Kaiquan Shi. Function P-sets [J]. International Journal of Machine Learning and Cybernetics, 2011, 2(4): 281–288.

• [17]

Weiting Huang, Baoya Wei. Iterative intelligent camouflage and reduction of innernal inverse P-information [J]. Journal of Huaihai Institute of Technology(Natural Science), 2016, 25(3):26–29 (in Chinese).

• [18]

Ling Zhang, Jihua Tang, Kaiquan Shi. The fusion of innernal P-information and its feature of attribute conjunction[J]. Journal of Shandong University (Natural Science), 2014, 49(2): 93–97.

• [19]

Limei Yan. Inverse P-reasoning and Discovery, reasoning-search for unknown information [J]. Computer Science, 2012, 33(8)6:268–272 (in Chinese).

• [20]

Ling Zhang, Xue Fang Ren. The relationship between abnormal information system and inverse P-augmented matrices[J]. Journal of Shandong University (Natural Science), 2019, 54(9):15–21 (in Chinese).

• [21]

Kaiquan Shi, Jihua Tang, Ling Zhang. Intelligent fusion of inverse packet information and recessive transmission of informations intelligent hiding[J]. Systems Engineering and Electronics, 2015, 37(3): 599–605 (in Chinese).

• [22]

Fengsheng Xu, Xiuqing Yu, Kaiquan Shi. Embedding-camouflage of inverse P-information and its separation-discovery by inverse P-reasoning[J]. Computer Science, 2013, 40(8): 200–203 (in Chinese).

• [23]

Xiumei Hao, Huifang Guo, Xiuqing Jiang. Multi-attributes risk investment decision making based on dynamic probability rough sets [C]//Proceedings of the 2017 International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery, Guilin: IEEE, 2017: 2419–242.
