# Mean square calculus and random linear fractional differential equations: Theory and applications

Open access

## Abstract

The aim of this paper is to study, in the mean square sense, a class of random fractional linear differential equations where the initial condition and the forcing term are assumed to be second-order random variables. The solution stochastic process of the associated Cauchy problem is constructed by combining a mean square chain rule for differentiating second-order stochastic processes with the random Fröbenius method. To conduct our study, the classical Caputo derivative is first extended to the random framework, in the mean square sense, and a sufficient condition guaranteeing the existence of this operator is provided. Afterwards, the solution of a random fractional initial value problem is built under mild conditions. The main statistical functions of the solution stochastic process are also computed. Finally, several examples illustrate our theoretical findings.

## 1 Introduction and motivation

It is well known that the following linear differential equation with initial conditions provides a suitable pattern for modelling a great number of physical problems

$y^{(i)}(t)-\lambda y(t)=c,\quad t>0,\qquad y^{(j)}(0)=b_j,\quad 0\le j\le i-1,\quad i\in\mathbb{N}.$ (1)

Here $\mathbb{N}$ denotes the set of positive integers and $\lambda, c, b_0\in\mathbb{R}$. In practice, these model parameters need to be fixed from measurements, so they contain measurement errors. It is therefore more realistic to treat them as random variables (RVs) rather than deterministic values. Apart from measurement errors, randomness can be attributed to the inherent complexity often encountered in physical phenomena. This justifies considering variability in the forcing term and in the initial conditions $(c, b_0, b_1)$. It must be pointed out that randomness could also be included in the diffusion coefficient; however, in this contribution, for simplicity, we will assume that λ is a constant.

On the other hand, over the last few decades fractional differential equations have been playing a significant role in modelling phenomena with microscopic complex behaviour whose macroscopic dynamics (such as viscoelasticity or fluid behaviour) cannot be properly described using classical derivatives [4]. Throughout this paper, randomness will be treated using mean square calculus, which involves the so-called mean square convergence. In the context of ordinary and fractional random differential equations, this random calculus has been successfully used to tackle a number of significant problems; some examples can be found in [3, 5, 6, 7]. The application of mean square calculus to the analysis of random fractional differential equations is adequate because it permits taking advantage of the properties of mean square convergence to compute approximations of the expectation and variance of the solution stochastic process. Our contribution is based upon a number of results of mean square calculus that have been established in the extant literature.

In this contribution we deal with the following generalization of the initial value problem (IVP) (1)

$(^{C}D_{0^+}^{\alpha}Y)(t)-\lambda Y(t)=c,\quad t>0,\quad 0<\alpha\le 2,\qquad Y^{(j)}(0)=b_j,\quad 0\le j\le -[-\alpha]-1,$ (2)

where [·] denotes the integer part function. To conduct our analysis, we will assume that $\lambda\in\mathbb{R}$ and that $b_j$ and c are independent second-order RVs (2-RVs), that is, RVs with finite variance. The symbol $(^{C}D_{0^+}^{\alpha}Y)(t)$ denotes the random Caputo fractional derivative of order α of the stochastic process (SP) Y(t). Although there exist several definitions of derivatives of non-integer order (fractional derivatives), throughout our analysis we will consider, for convenience, the Caputo derivative. For the sake of completeness, we recall the definition of this random derivative.

Definition 1

Let f(t) be a second-order SP (2-SP), i.e. f(t) is a 2-RV for every fixed t, and assume that it is n-times mean square differentiable over a finite interval [a, b], then the random Caputo mean square derivative of order α > 0 of f(t) is given by

$(^{C}D_{a^+}^{\alpha}f)(t):=(J^{n-\alpha}f^{(n)})(t).$ (3)

Here, n = −[−α], $f^{(n)}(t)$ denotes the n-th mean square derivative of the 2-SP f(t), and $J^{n-\alpha}$ stands for the random fractional integral of order n − α, i.e.

$(J^{n-\alpha}f)(t)=\frac{1}{\Gamma(n-\alpha)}\int_a^t (t-u)^{n-\alpha-1}f(u)\,\mathrm{d}u,$

where Γ(n − α) is the deterministic Gamma function.

In [8, ch. 4], one can find the definition of mean square differentiability for 2-SPs. The random fractional integral $(J^{n-\alpha}f)(t)$ is a natural extension of its deterministic counterpart (see [4]) using the properties of the mean square Riemann integral of 2-SPs (see [8, ch. 4]).
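To fix ideas, Definition 1 can be checked numerically in the degenerate (deterministic) case. The sketch below, an illustrative Python script and not part of the original analysis, approximates the Caputo derivative of f(t) = t² for α = 1/2 via the integral $(J^{1-\alpha}f')(t)$ and compares it with the known closed form Γ(3)/Γ(3 − α) t^{2−α}; the midpoint quadrature and the value of N are ad-hoc choices.

```python
from math import gamma

def caputo_t2(t: float, alpha: float, N: int = 20000) -> float:
    """Caputo derivative of f(t) = t^2 for 0 < alpha < 1 (so n = 1),
    i.e. (J^{1-alpha} f')(t) with f'(u) = 2u, via a midpoint rule.
    The midpoint rule sidesteps the integrable singularity at u = t."""
    h = t / N
    acc = 0.0
    for k in range(N):
        u = (k + 0.5) * h
        acc += (t - u) ** (-alpha) * 2.0 * u
    return acc * h / gamma(1.0 - alpha)

alpha, t = 0.5, 1.0
approx = caputo_t2(t, alpha)
exact = gamma(3.0) / gamma(3.0 - alpha) * t ** (2.0 - alpha)  # = 8/(3*sqrt(pi)) at t = 1
```

The midpoint rule is used only because it avoids evaluating the integrand at the singular endpoint u = t; any quadrature adapted to weakly singular kernels would serve equally well.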

We recall that the correlation function, Γf(u1, u2), of a 2-SP, f(t), is defined by

$\Gamma_f(u_1,u_2):=\mathbb{E}[f(u_1)f(u_2)],\quad u_1,u_2\in\mathbb{R},$

where $\mathbb{E}[\cdot]$ denotes the expectation operator. By applying the Schwarz inequality, it can be proved that $\Gamma_f(u_1, u_2)$ is always well-defined. An important property relating the correlation function of a 2-SP f(t) and that of its n-th mean square derivative $f^{(n)}(t)$ is the following

$\Gamma_{f^{(n)}}(u_1,u_2)=\frac{\partial^{2n}\Gamma_f(u_1,u_2)}{\partial u_1^{\,n}\,\partial u_2^{\,n}},\quad u_1,u_2\in\mathbb{R}.$ (4)

Using (4) and the characterization of the existence of the mean square Riemann integral of a 2-SP f(t) (see [8, Th. 4.5.1]), one can establish the following necessary and sufficient condition for the existence of the random Caputo fractional mean square derivative.

Proposition 1

Let {f(t) : t ∈ [a, b]} be a 2-SP n-times mean square differentiable with correlation function Γf(·, ·). Then, its (left-sided) random Caputo fractional mean square derivative, $(CDa+αf)(t)$, α>0, exists if, and only if, the following deterministic double Riemann integral

$\int_a^t\int_a^t (t-u_1)^{n-\alpha-1}(t-u_2)^{n-\alpha-1}\frac{\partial^{2n}\Gamma_f(u_1,u_2)}{\partial u_1^{\,n}\,\partial u_2^{\,n}}\,\mathrm{d}u_1\,\mathrm{d}u_2$

exists and is finite.

Once the Caputo derivative of a 2-SP has been defined, in Sections 2 and 3 we construct the solution of the random IVP (2) in two steps: first for the case $0<\alpha\le 1$ (Case I) and, secondly, for $1<\alpha\le 2$ (Case II). As we shall see throughout the analysis, the latter case builds upon the study carried out in the former.

## 2 Case I: Solution of the random linear fractional differential equation when $0<α≤1$

In this section we deal with the construction of the solution SP to the IVP (2) in the case 0 < α ≤ 1 by means of a random generalized power series. This is done by combining the Fröbenius method with a mean square chain rule for differentiating 2-SPs, a result recently established by some of the authors [1].

Theorem 2

Let g be a deterministic continuous function on [a1, a2] such that g′(t) exists and is finite at some point t ∈ [a1, a2]. If ${X(v):v∈V}$ is a 2-SP such that

• The interval $V$ contains the range of g, $g([a1,a2])⊂V$.

• X(v) is mean square differentiable at the point g(t).

• The mean square derivative of X(v), $dX(v)dv$, is mean square continuous on $V$.

Then, the 2-SP, X(g (t)), is mean square differentiable at t and the mean square derivative is given by

$\frac{\mathrm{d}X(g(t))}{\mathrm{d}t}=\left.\frac{\mathrm{d}X(v)}{\mathrm{d}v}\right|_{v=g(t)}g'(t).$

The solution to the random IVP (2) will be sought in the following form

$Y(t)=\sum_{m\ge 0}X_m t^{\alpha m},\quad t\ge 0,\quad 0<\alpha\le 1.$ (6)

If we define

$X(v)=\sum_{m\ge 0}X_m v^{m},$ (7)

then $Y(t)=X(t^{\alpha})$, and in this manner we can take advantage of the mean square calculus for standard random power series. The random Caputo mean square derivative (3) reads

$(^{C}D_{0^+}^{\alpha}Y)(t)=\frac{1}{\Gamma(1-\alpha)}\int_0^t (t-u)^{-\alpha}Z(u)\,\mathrm{d}u,$

where $Z(t):=(X(t^{\alpha}))'$ is the mean square derivative of the SP X(v) composed with the deterministic function $t^{\alpha}$.

To apply the Fröbenius method, we first need to obtain the Caputo derivative of the generalized power series given by (6). This is done by applying Theorem 2 under the following hypotheses:

• C1: X(v), defined in expression (7), is mean square differentiable at $v=t^{\alpha}$. Also,

$X'(t^{\alpha})=\sum_{m\ge 1}m X_m t^{\alpha(m-1)}.$

• C2: $\frac{\mathrm{d}X(v)}{\mathrm{d}v}$ is mean square continuous on the interval $v\in[0,T^{\alpha}]$, T > 0.

As 0 < α ≤ 1, one gets that $V=[0,T^{\alpha}]$ contains the range of $g(t)=t^{\alpha}$, that is, $g([a_1,a_2])=g([0,T])=[0,T^{\alpha}]=V$. By Theorem 2, X(g(t)) is mean square differentiable at t and its mean square derivative has the following expression

$Z(t):=Y'(t)=(X(t^{\alpha}))'=\alpha t^{\alpha-1}X'(t^{\alpha}).$

Therefore

$(^{C}D_{0^+}^{\alpha}Y)(t)=\frac{1}{\Gamma(1-\alpha)}\int_0^t (t-u)^{-\alpha}\sum_{m=1}^{\infty}\alpha m X_m u^{\alpha m-1}\,\mathrm{d}u.$ (11)

Furthermore, assuming that

• C3: The random power series $\sum_{m=1}^{\infty}m X_m t^{\alpha m-1}$ is mean square convergent on the interval 0 ≤ t ≤ T,

the infinite sum and the integral in (11) can be interchanged, and the resulting expression reads

$(^{C}D_{0^+}^{\alpha}Y)(t)=\sum_{m\ge 0}X_{m+1}\frac{\Gamma(\alpha(m+1)+1)}{\Gamma(\alpha m+1)}t^{\alpha m}.$ (12)

In order to calculate the coefficients $X_m$ of Y(t), we take into account that $Y(0)=X_0=b_0$. Then, using the Fröbenius method, one gets

$X_{m+1}=\frac{\lambda^{m+1}b_0+\lambda^{m}c}{\Gamma(\alpha(m+1)+1)},\quad m\ge 0.$

A candidate solution to the random IVP (2) when 0 < α ≤ 1 is given by

$Y(t)=X(t^{\alpha}),\qquad X(v)=\sum_{m\ge 0}X_{m,1}v^{m}+\sum_{m\ge 1}X_{m,2}v^{m},$ (13)

where

$X_{m,1}=\frac{\lambda^{m}b_0}{\Gamma(\alpha m+1)},\qquad X_{m,2}=\frac{\lambda^{m-1}c}{\Gamma(\alpha m+1)}.$ (14)
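As a pathwise sanity check (not part of the original derivation), the candidate series can be evaluated numerically: for α = 1 it must collapse to the classical solution $y(t)=b_0 e^{\lambda t}+(c/\lambda)(e^{\lambda t}-1)$ of $y'-\lambda y=c$, $y(0)=b_0$. The helper below, with illustrative parameter values, truncates the series (13)–(14) at order M.

```python
from math import gamma, exp

def y_trunc(t, alpha, lam, b0, c, M=60):
    """Truncation of the candidate series: sum of X_{m,1} t^{alpha m}
    plus sum of X_{m,2} t^{alpha m}, with the coefficients of (14)."""
    total = sum(lam**m * b0 / gamma(alpha*m + 1) * t**(alpha*m) for m in range(M + 1))
    total += sum(lam**(m-1) * c / gamma(alpha*m + 1) * t**(alpha*m) for m in range(1, M + 1))
    return total

# Illustrative realization: fixed values of b0 and c, alpha = 1.
lam, b0, c, t = 0.75, 0.5, 0.5, 1.0
classical = b0 * exp(lam*t) + (c/lam) * (exp(lam*t) - 1.0)
series = y_trunc(t, 1.0, lam, b0, c)
```

For 0 < α < 1 the same routine evaluates the truncated generalized power series; only the classical comparison is specific to α = 1.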

This candidate must verify conditions C1–C3 in order to legitimate the assumptions that allow us to define the random Caputo mean square derivative of the 2-SP Y(t). In order to check that C1 is satisfied, we will need the following result:

Proposition 3

[2] Let $V\subset\mathbb{R}$ be an interval, $m_0\ge 0$ a non-negative integer and $\{U_m(v):v\in V,\ m\ge m_0\}$ a sequence of 2-SPs such that

• Um(v) is mean square differentiable on $V$.

• The mean square derivative, $U_m'(v)$, is mean square continuous on $V$.

• $U(v)=∑m≥m0Um(v)$ is mean square convergent on $V$.

• $∑m≥m0Um′(v)$ is mean square uniformly convergent on $V$.

Then, the 2-SP U(v) is mean square differentiable at every $v∈V$ and

$U'(v)=\sum_{m\ge m_0}U_m'(v).$

Now, we check the hypotheses of Proposition 3 for the two series defined in (13)–(14). First, let us define

$X_{m,1}(t):=X_{m,1}t^{m}=\frac{\lambda^{m}b_0}{\Gamma(\alpha m+1)}t^{m},$

and observe that

$0\le\left\|\frac{X_{m,1}(t^{\alpha}+h)-X_{m,1}(t^{\alpha})}{h}-X_{m,1}'(t^{\alpha})\right\|_2\le\frac{|\lambda|^{m}\|b_0\|_2}{\Gamma(\alpha m+1)}\left|\frac{(t^{\alpha}+h)^{m}-t^{\alpha m}}{h}-m\,t^{\alpha(m-1)}\right|\xrightarrow[h\to 0]{}0,$

where in the limit we have used that the deterministic function $v^{m}$ is differentiable at $v=t^{\alpha}$ and that $\|b_0\|_2<+\infty$, since $b_0$ is a 2-RV. As a consequence, $X_{m,1}(v)$ is mean square differentiable at $v=t^{\alpha}$, its mean square derivative being $X_{m,1}'(t^{\alpha})=\frac{m\lambda^{m}b_0}{\Gamma(\alpha m+1)}t^{\alpha(m-1)}$.

Secondly, using a similar reasoning, it can be checked that $X_{m,1}'(v)=m\lambda^{m}b_0 v^{m-1}/\Gamma(\alpha m+1)$ is mean square continuous at $v=t^{\alpha}$.

Thirdly, let us show that the random power series $\sum_{m\ge 0}X_{m,1}(v)=\sum_{m\ge 0}X_{m,1}v^{m}$ is mean square convergent for each v: 0 < v ≤ T^α. To this end, we first majorize this series

$\|X_{m,1}\|_2\,t^{m}=\left\|\frac{\lambda^{m}b_0}{\Gamma(\alpha m+1)}\right\|_2 t^{m}\le\frac{|\lambda|^{m}\|b_0\|_2\,t^{m}}{\Gamma(\alpha m+1)}=:\delta_m(t).$

Then, we check that the series with general term $\delta_m(t)$ is convergent by applying the D'Alembert test together with the generalized Stirling formula, $\Gamma(x+1)\approx x^{x}e^{-x}\sqrt{2\pi x}$, x → +∞. This leads to

$\lim_{m\to+\infty}\frac{\delta_{m+1}(t)}{\delta_m(t)}=|\lambda|\left(\lim_{m\to+\infty}\frac{\Gamma(\alpha m+1)}{\Gamma(\alpha(m+1)+1)}\right)t=0.$

Finally, the mean square uniform convergence of $\sum_{m=0}^{\infty}X_{m,1}(v)$ can be established using arguments similar to the ones shown earlier. In this way it is justified that the random power series $\sum_{m\ge 0}X_{m,1}v^{m}$ is mean square differentiable at $v=t^{\alpha}$. Analogously, it can be proved that the second power series $\sum_{m\ge 1}X_{m,2}v^{m}$ in (13) is mean square differentiable at $v=t^{\alpha}$. As a consequence, the random power series X(v) is mean square differentiable at $v=t^{\alpha}$, and condition C1 is satisfied. Based upon similar arguments, it can be shown that X(v) also satisfies conditions C2 and C3.
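The D'Alembert quotient used above can also be inspected numerically. The snippet below (with illustrative values of λ, α and t, which are our choices) evaluates $\delta_{m+1}(t)/\delta_m(t)=|\lambda|\,t\,\Gamma(\alpha m+1)/\Gamma(\alpha(m+1)+1)$, in which the factor $\|b_0\|_2$ cancels, and confirms its decay.

```python
from math import gamma

def delta_ratio(m, lam=0.75, alpha=0.7, t=1.0):
    """Ratio delta_{m+1}(t)/delta_m(t); the ||b0||_2 factor cancels out."""
    return abs(lam) * t * gamma(alpha*m + 1) / gamma(alpha*(m+1) + 1)

# The ratio decreases toward 0 as m grows, as the Stirling estimate predicts.
ratios = [delta_ratio(m) for m in (1, 5, 20, 50)]
```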

## 3 Case II: Solution of the random linear fractional differential equation when $1<α≤2$

In this section we construct the solution of the random IVP (2) when $1<\alpha\le 2$, applying arguments similar to the ones exhibited in the previous section. Thus we seek the solution SP in the following form

$Y(t)=Y1(t)+Y2(t),$

where

$Y_1(t)=\sum_{m\ge 0}X_m t^{\alpha m},\qquad Y_2(t)=\sum_{m\ge 0}Y_m t^{\alpha m+1}.$ (18)

In order to obtain the expression of the Caputo derivative of order α of $Y_1(t)$, we define $\hat{Y}_1(v)=\sum_{m\ge 0}X_m v^{m}$, hence $Y_1(t)=\hat{Y}_1(t^{\alpha})$. According to (3), the random mean square Caputo derivative is given by

$(^{C}D_{0^+}^{\alpha}Y_1)(t)=(J^{2-\alpha}Z)(t),\quad 1<\alpha\le 2,$

where $Z(t)=(\hat{Y}_1(t^{\alpha}))''$. To compute Z(t), we apply Theorem 2 twice, assuming that the three hypotheses of Theorem 2 hold for both $\hat{Y}_1$ and $\hat{Y}_1'$. In that case

$Z(t)=\left[(\hat{Y}_1(t^{\alpha}))'\right]'=\left[\alpha t^{\alpha-1}\hat{Y}_1'(v)\big|_{v=t^{\alpha}}\right]'=\alpha(\alpha-1)t^{\alpha-2}\hat{Y}_1'(v)\big|_{v=t^{\alpha}}+\alpha^{2}t^{2\alpha-2}\hat{Y}_1''(v)\big|_{v=t^{\alpha}}=\alpha(\alpha-1)\sum_{m=0}^{\infty}(m+1)X_{m+1}t^{\alpha(m+1)-2}+\alpha^{2}\sum_{m=0}^{\infty}(m+2)(m+1)X_{m+2}t^{\alpha(m+2)-2}.$

Observe that we have applied Property (4.126) of [8, p. 96] to compute the mean square derivative of the product of a deterministic function ($\alpha t^{\alpha-1}$) and a mean square differentiable 2-SP ($\hat{Y}_1'(t^{\alpha})$). Moreover, assuming that the random power series $\sum_{m=0}^{\infty}(m+1)X_{m+1}t^{\alpha(m+1)-2}$ and $\sum_{m=0}^{\infty}(m+2)(m+1)X_{m+2}t^{\alpha(m+2)-2}$ are mean square convergent, we can obtain the random mean square Caputo derivative as follows

$(^{C}D_{0^+}^{\alpha}Y_1)(t)=(J^{2-\alpha}Z)(t)=\alpha(\alpha-1)\sum_{m=0}^{\infty}(m+1)X_{m+1}\,J^{2-\alpha}\!\left(t^{\alpha(m+1)-2}\right)+\alpha^{2}\sum_{m=0}^{\infty}(m+2)(m+1)X_{m+2}\,J^{2-\alpha}\!\left(t^{\alpha(m+2)-2}\right)=\sum_{m\ge 0}\frac{\Gamma(\alpha(m+1)+1)}{\Gamma(\alpha m+1)}X_{m+1}t^{\alpha m},$

where in the last equality we have simplified and used the reproductive property of the Gamma function, Γ(γ + 1) = γΓ(γ), γ > 0.

The next step is to compute the random mean square Caputo derivative of Y2(t). Note that by (3) one gets

$(^{C}D_{0^+}^{\alpha}Y_2)(t)=(J^{2-\alpha}Y_2'')(t)=\left(J^{2-\alpha}(Y_2')'\right)(t)=(^{C}D_{0^+}^{\alpha-1}Y_2')(t).$

As $1<\alpha\le 2$ and $Y_2'(t)=\sum_{m\ge 0}(\alpha m+1)Y_m t^{\alpha m}$, we can set $\hat{\alpha}=\alpha-1\in(0,1]$ and $\hat{Y}_m=(\alpha m+1)Y_m$, and compute the random mean square Caputo derivative of order $\hat{\alpha}$ of $\sum_{m\ge 0}\hat{Y}_m t^{\alpha m}$ by the same method that led to (12). This yields

$(^{C}D_{0^+}^{\alpha}Y_2)(t)=\sum_{m\ge 0}Y_{m+1}\frac{\Gamma(\alpha(m+1)+2)}{\Gamma(\alpha m+2)}t^{\alpha m+1}.$

Once the mean square Caputo derivatives of both series in (18) have been obtained, we compute their coefficients $X_m$ and $Y_m$ taking into account the initial conditions $Y(0)=X_0=b_0$ and $Y'(0)=Y_0=b_1$. After handling the corresponding recurrence relationships, one gets

$X_m=\frac{\lambda^{m}b_0+\lambda^{m-1}c}{\Gamma(\alpha m+1)},\qquad Y_m=\frac{\lambda^{m}b_1}{\Gamma(\alpha m+2)},\qquad m\ge 1.$

Therefore, a candidate solution SP to the random IVP (2) with $1<α≤2$ is given by

$Y(t)=\sum_{m\ge 0}X_{m,1}t^{\alpha m}+\sum_{m\ge 1}X_{m,2}t^{\alpha m}+\sum_{m\ge 0}Y_m t^{\alpha m+1},$

where

$X_{m,1}=\frac{\lambda^{m}b_0}{\Gamma(\alpha m+1)},\qquad X_{m,2}=\frac{\lambda^{m-1}c}{\Gamma(\alpha m+1)},\qquad Y_m=\frac{\lambda^{m}b_1}{\Gamma(\alpha m+2)}.$
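An analogous pathwise check (again outside the original derivation) is available for this case: at α = 2 the candidate series above must reproduce the classical problem $y''-\lambda y=c$, $y(0)=b_0$, $y'(0)=b_1$, whose solution for λ > 0 is $y(t)=b_0\cosh(\sqrt{\lambda}\,t)+(b_1/\sqrt{\lambda})\sinh(\sqrt{\lambda}\,t)+(c/\lambda)(\cosh(\sqrt{\lambda}\,t)-1)$. The illustrative sketch below truncates the three series at order M.

```python
from math import gamma, sqrt, cosh, sinh

def y_trunc2(t, alpha, lam, b0, b1, c, M=60):
    """Truncation of the candidate series for 1 < alpha <= 2."""
    y = sum(lam**m * b0 / gamma(alpha*m + 1) * t**(alpha*m) for m in range(M + 1))
    y += sum(lam**(m-1) * c / gamma(alpha*m + 1) * t**(alpha*m) for m in range(1, M + 1))
    y += sum(lam**m * b1 / gamma(alpha*m + 2) * t**(alpha*m + 1) for m in range(M + 1))
    return y

# Illustrative realization with fixed b0, b1, c and lambda > 0, alpha = 2.
lam, b0, b1, c, t = 0.75, 0.5, 0.25, 0.5, 1.0
r = sqrt(lam)
classical = b0*cosh(r*t) + (b1/r)*sinh(r*t) + (c/lam)*(cosh(r*t) - 1.0)
series = y_trunc2(t, 2.0, lam, b0, b1, c)
```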

## 4 Approximating the mean, the variance, the covariance and the cross-covariance of the solution stochastic process

An important feature of solving random differential equations is that the main goal is not only to compute the solution SP but also its statistical functions. This section addresses the computation of the mean, the variance, the covariance and the cross-covariance functions of the solution SP to the random IVP (2) when 0 < α ≤ 1. The method to compute these statistical functions in the case 1 < α ≤ 2 is analogous, and we therefore omit it in the subsequent development. To this end, we state the following property, which will play a key role later. At this point, it is important to stress that this crucial property holds for mean square convergence, while it fails for other kinds of stochastic convergence.

Proposition 4

([8, Theorem 4.2.1]). Let $\{R_M: M\ge 0\}$ be a sequence of 2-RVs such that $R_M\xrightarrow[M\to+\infty]{\mathrm{m.s.}}R$, i.e. $R_M$ is mean square convergent to R. Then, the mean and the variance of the approximations $R_M$ tend to the mean and the variance of the corresponding limit

$\mathbb{E}[R_M]\xrightarrow[M\to+\infty]{}\mathbb{E}[R]\qquad\text{and}\qquad\mathbb{V}[R_M]\xrightarrow[M\to+\infty]{}\mathbb{V}[R].$

In order to construct the approximations of the mean and the variance, it is convenient to introduce the truncation of order M of the solution SP, that is,

$Y_M(t)=\sum_{m=0}^{M}X_{m,1}t^{\alpha m}+\sum_{m=1}^{M}X_{m,2}t^{\alpha m}=\sum_{m=0}^{M}\frac{\lambda^{m}b_0}{\Gamma(\alpha m+1)}t^{\alpha m}+\sum_{m=1}^{M}\frac{\lambda^{m-1}c}{\Gamma(\alpha m+1)}t^{\alpha m}.$

By applying the expectation operator to the previous expression, one gets

$\mathbb{E}[Y_M(t)]=\mathbb{E}[b_0]\sum_{m=0}^{M}\frac{\lambda^{m}}{\Gamma(\alpha m+1)}t^{\alpha m}+\mathbb{E}[c]\sum_{m=1}^{M}\frac{\lambda^{m-1}}{\Gamma(\alpha m+1)}t^{\alpha m}.$

In order to obtain the expression of the approximation of the variance function, $V[YM(t)]$, let us first recall that

$\mathbb{V}[Y_M(t)]=\mathbb{E}\left[(Y_M(t))^{2}\right]-\left(\mathbb{E}[Y_M(t)]\right)^{2},$

so it is enough to compute $\mathbb{E}[(Y_M(t))^{2}]$. To do that, we take advantage of the statistical independence of the RVs $b_0$ and c. Hence,

$\mathbb{E}\left[(Y_M(t))^{2}\right]=\mathbb{E}\left[\left(\sum_{m=0}^{M}\frac{\lambda^{m}b_0}{\Gamma(\alpha m+1)}t^{\alpha m}+\sum_{m=1}^{M}\frac{\lambda^{m-1}c}{\Gamma(\alpha m+1)}t^{\alpha m}\right)^{2}\right]=\mathbb{E}\left[(b_0)^{2}\right]\sum_{m=0}^{M}\frac{\lambda^{2m}}{(\Gamma(\alpha m+1))^{2}}t^{2\alpha m}+2\,\mathbb{E}\left[(b_0)^{2}\right]\sum_{m=1}^{M}\sum_{n=0}^{m-1}\frac{\lambda^{m+n}}{\Gamma(\alpha m+1)\Gamma(\alpha n+1)}t^{\alpha(m+n)}+\mathbb{E}\left[c^{2}\right]\sum_{m=1}^{M}\frac{\lambda^{2(m-1)}}{(\Gamma(\alpha m+1))^{2}}t^{2\alpha m}+2\,\mathbb{E}\left[c^{2}\right]\sum_{m=2}^{M}\sum_{n=1}^{m-1}\frac{\lambda^{m+n-2}}{\Gamma(\alpha m+1)\Gamma(\alpha n+1)}t^{\alpha(m+n)}+2\,\mathbb{E}[b_0]\mathbb{E}[c]\sum_{m=0}^{M}\sum_{n=1}^{M}\frac{\lambda^{m+n-1}}{\Gamma(\alpha m+1)\Gamma(\alpha n+1)}t^{\alpha(m+n)}.$
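The expanded second moment can be cross-checked numerically: writing $Y_M(t)=b_0 A_M(t)+c\,B_M(t)$ with deterministic series $A_M$, $B_M$ (these names are ours, introduced only for this sketch), independence gives $\mathbb{E}[(Y_M(t))^{2}]=\mathbb{E}[b_0^2]A_M^2+2\,\mathbb{E}[b_0]\mathbb{E}[c]A_M B_M+\mathbb{E}[c^2]B_M^2$. The illustrative script below compares this factored form with the term-by-term double sums.

```python
from math import gamma

def AB(t, alpha, lam, M):
    """Deterministic series multiplying b0 and c in the truncation Y_M(t)."""
    A = sum(lam**m / gamma(alpha*m + 1) * t**(alpha*m) for m in range(M + 1))
    B = sum(lam**(m-1) / gamma(alpha*m + 1) * t**(alpha*m) for m in range(1, M + 1))
    return A, B

def second_moment(t, alpha, lam, Eb0, Eb0_2, Ec, Ec2, M=10):
    """E[(Y_M(t))^2] via the factored form (b0, c independent)."""
    A, B = AB(t, alpha, lam, M)
    return Eb0_2*A*A + 2.0*Eb0*Ec*A*B + Ec2*B*B

def second_moment_expanded(t, alpha, lam, Eb0, Eb0_2, Ec, Ec2, M=10):
    """E[(Y_M(t))^2] term by term, mirroring the displayed double sums."""
    g = lambda m: gamma(alpha*m + 1)
    s = sum(Eb0_2 * lam**(2*m) / g(m)**2 * t**(2*alpha*m) for m in range(M + 1))
    s += 2*Eb0_2 * sum(lam**(m+n) / (g(m)*g(n)) * t**(alpha*(m+n))
                       for m in range(1, M + 1) for n in range(m))
    s += sum(Ec2 * lam**(2*(m-1)) / g(m)**2 * t**(2*alpha*m) for m in range(1, M + 1))
    s += 2*Ec2 * sum(lam**(m+n-2) / (g(m)*g(n)) * t**(alpha*(m+n))
                     for m in range(2, M + 1) for n in range(1, m))
    s += 2*Eb0*Ec * sum(lam**(m+n-1) / (g(m)*g(n)) * t**(alpha*(m+n))
                        for m in range(M + 1) for n in range(1, M + 1))
    return s

# t, alpha, lambda, E[b0], E[b0^2], E[c], E[c^2] (illustrative values)
args = (1.0, 0.7, 0.75, 0.5, 0.5, 0.5, 0.5)
m_fact = second_moment(*args)
m_exp = second_moment_expanded(*args)
```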

In order to compute an approximation of the cross-covariance function of the solution SP, let us consider N, M ≥ 1 and t, s > 0. Using the properties of covariance, this approximation can be represented by the following expression

$\mathbb{C}_{Y_M,Y_N}(t,s)=\sum_{m=0}^{M}\sum_{n=0}^{N}\mathbb{C}\mathrm{ov}[X_{m,1},X_{n,1}]t^{\alpha m}s^{\alpha n}+\sum_{m=0}^{M}\sum_{n=1}^{N}\mathbb{C}\mathrm{ov}[X_{m,1},X_{n,2}]t^{\alpha m}s^{\alpha n}+\sum_{m=1}^{M}\sum_{n=0}^{N}\mathbb{C}\mathrm{ov}[X_{m,2},X_{n,1}]t^{\alpha m}s^{\alpha n}+\sum_{m=1}^{M}\sum_{n=1}^{N}\mathbb{C}\mathrm{ov}[X_{m,2},X_{n,2}]t^{\alpha m}s^{\alpha n},$

where

$\mathbb{C}\mathrm{ov}[X_{m,1},X_{n,1}]=\frac{\lambda^{m+n}\mathbb{E}[(b_0)^{2}]-\lambda^{m+n}(\mathbb{E}[b_0])^{2}}{\Gamma(\alpha m+1)\Gamma(\alpha n+1)},$
$\mathbb{C}\mathrm{ov}[X_{m,1},X_{n,2}]=\frac{\lambda^{m+n-1}\left(\mathbb{E}[b_0 c]-\mathbb{E}[b_0]\mathbb{E}[c]\right)}{\Gamma(\alpha m+1)\Gamma(\alpha n+1)}=0,$
$\mathbb{C}\mathrm{ov}[X_{m,2},X_{n,1}]=\frac{\lambda^{m+n-1}\left(\mathbb{E}[b_0 c]-\mathbb{E}[b_0]\mathbb{E}[c]\right)}{\Gamma(\alpha m+1)\Gamma(\alpha n+1)}=0,$
$\mathbb{C}\mathrm{ov}[X_{m,2},X_{n,2}]=\frac{\lambda^{m+n-2}\mathbb{E}[c^{2}]-\lambda^{m+n-2}(\mathbb{E}[c])^{2}}{\Gamma(\alpha m+1)\Gamma(\alpha n+1)}.$

Observe that $\mathbb{C}_{Y_M,Y_N}(t,t)$ corresponds to the covariance of the approximations of orders M and N of the solution SP at the time instant t, while $\mathbb{C}_{Y_M,Y_M}(t,s)$ gives the covariance of the approximation of order M of the solution SP at the two time instants t and s.
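Since the cross-covariances vanish, the double sums above collapse to $\mathbb{C}_{Y_M,Y_N}(t,s)=\mathbb{V}[b_0]A_M(t)A_N(s)+\mathbb{V}[c]B_M(t)B_N(s)$, where $A_M$ and $B_M$ denote the deterministic series multiplying $b_0$ and c (our notation, not the paper's). The illustrative script below verifies that at t = s and M = N this coincides with the variance obtained from the mean and second moment.

```python
from math import gamma

def AB(t, alpha, lam, M):
    """Deterministic series multiplying b0 and c in Y_M(t)."""
    A = sum(lam**m / gamma(alpha*m + 1) * t**(alpha*m) for m in range(M + 1))
    B = sum(lam**(m-1) / gamma(alpha*m + 1) * t**(alpha*m) for m in range(1, M + 1))
    return A, B

def cov_approx(t, s, alpha, lam, Vb0, Vc, M, N):
    """C_{Y_M,Y_N}(t,s) after the zero cross-covariances are dropped."""
    At, Bt = AB(t, alpha, lam, M)
    As, Bs = AB(s, alpha, lam, N)
    return Vb0*At*As + Vc*Bt*Bs

# Moments as in the numerical examples: E[b0] = E[c] = 1/2, V[b0] = V[c] = 1/4.
alpha, lam, t, M = 0.7, 0.75, 1.0, 10
A, B = AB(t, alpha, lam, M)
mean = 0.5*A + 0.5*B
second = 0.5*A*A + 2*0.25*A*B + 0.5*B*B   # E[b0^2] = E[c^2] = 1/4 + 1/4 = 1/2
var = second - mean**2
cov_tt = cov_approx(t, t, alpha, lam, 0.25, 0.25, M, M)
```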

## 5 Numerical examples

In this section we illustrate the theoretical results by means of several numerical examples. Let us consider the random fractional IVP (2) with $0<\alpha\le 1$, $\lambda=\frac{3}{4}$, and $b_0$ and c 2-RVs such that

$\mathbb{E}[b_0]=\mathbb{E}[c]=\frac{1}{2},\qquad \mathbb{V}[b_0]=\mathbb{V}[c]=\frac{1}{4}.$

Figure 1 shows the approximations of the mean and the standard deviation of the solution SP to the random IVP (2) with α = 0.7. In both plots, computations have been carried out for different truncation orders M = 6, 7, 8, 9, 10.
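The setting of this example can be reproduced with a short script. The sketch below (illustrative, using the moments given above) computes $\mathbb{E}[Y_M(1)]$ for the truncation orders used in Figure 1 and checks that successive differences shrink, in line with Proposition 4.

```python
from math import gamma

def mean_trunc(t, alpha, lam, Eb0, Ec, M):
    """E[Y_M(t)] for the data of this section."""
    m1 = sum(lam**m / gamma(alpha*m + 1) * t**(alpha*m) for m in range(M + 1))
    m2 = sum(lam**(m-1) / gamma(alpha*m + 1) * t**(alpha*m) for m in range(1, M + 1))
    return Eb0*m1 + Ec*m2

alpha, lam, Eb0, Ec, t = 0.7, 0.75, 0.5, 0.5, 1.0
means = [mean_trunc(t, alpha, lam, Eb0, Ec, M) for M in (6, 7, 8, 9, 10)]
gaps = [abs(b - a) for a, b in zip(means, means[1:])]  # successive differences
```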

To illustrate the role of the parameter λ in the speed of convergence of the approximations, let us now take λ greater than 1, for example $\lambda=\frac{5}{4}$. In that case, the approximations of the mean and standard deviation converge more slowly than in the previous case. In Figure 2, we show the approximations considering higher truncation orders than before, namely M = 10, 12, 14, 16, 18.

If we instead assume that the means of $b_0$ and c are negative, the mean of the solution SP decreases as t increases, while its standard deviation increases. In Figure 3, the mean and standard deviation have been represented taking $\mathbb{E}[b_0]=\mathbb{E}[c]=-1$.

In Figures 1–3 we observe that the approximations of the mean and the standard deviation improve as M increases, although the error grows as we depart from the origin (t = 0), where the IVP is stated. These results are consistent with Proposition 4.

Finally, it is important to illustrate the behaviour of the solution SP when the fractional order α of the derivative changes. In Figure 4, we have fixed the truncation order M = 20 and show the trajectories of the expectation and standard deviation of the solution SP for different orders of the fractional derivative. Note that the case α = 0.99 is very close to the classical derivative (α = 1).

## 6 Conclusions

In this contribution we have provided a probabilistic study of a randomized fractional linear differential equation. The study has been carried out using results of random mean square calculus previously established in the extant literature. We have constructed a mean square convergent generalized power series solution stochastic process by means of a random Fröbenius method. Furthermore, we have constructed approximations of both the mean and the variance of the solution stochastic process. Our numerical experiments are in agreement with the theoretical results. This paper may stimulate the extension of well-known results on deterministic fractional differential equations to the random framework using mean square calculus.

Communicated by Juan L.G. Guirao

## References

• [1]

J.C. Cortés, L. Villafuerte, C. Burgos, A mean square chain rule and its applications in solving the random Chebyshev differential equation. Mediterr. J. Math. 2017;14(1):14.

• [2]

J.C. Cortés, P. Sevilla-Peris, L. Jódar, Analytic-numerical approximating processes of diffusion equation with data uncertainty. Comput. Math. Appl. 2005;49(7-8):1255-66.

• [3]

A.K. Golmankhaneh, N.A. Porghoveh, D. Baleanu, Mean square solutions of second-order random differential equations by using homotopy analysis method. Rom. Rep. Phys. 2013;65:350-62.

• [4]

A.A. Kilbas, H.M. Srivastava, J.J. Trujillo, Theory and Applications of Fractional Differential Equations. The Netherlands: Elsevier Science; 2006.

• [5]

V. Lupulescu, D. O’Regan, G. Rahman, Existence results for random fractional differential equations. Opuscula Math. 2014;34(4):813-25.

• [6]

V. Lupulescu, K.N. Ntouyas, Random fractional differential equations. Int. Electron. J. Pure Appl. Math. 2012;4(2):119-36.

• [7]

K. Nouri, H. Ranjbar, Mean square convergence of the numerical solution of random differential equations. Mediterr. J. Math. 2015;12:1123-40.

• [8]

T.T. Soong, Random Differential Equations in Science and Engineering. New York: Academic Press; 1973.

# Applied Mathematics and Nonlinear Sciences
