
Mean square calculus and random linear fractional differential equations: Theory and applications



Introduction and motivation

It is well known that the following linear differential equation with initial conditions provides a suitable model for a great number of physical problems:

$$ \begin{equation}\label{IVP_deterministico} y^{(i)}(t)-\lambda y(t)=c,\quad t\gt 0,\qquad y^{(j)}(0)=b_j, \qquad j=0,\ldots,i-1, \qquad i=1,2. \end{equation} $$

Here $\lambda, c, b_0, b_1 \in \mathbb{R}$. In practice, these model parameters need to be fixed from measurements, and hence they contain measurement errors. This makes it more realistic to treat them as random variables (RVs) instead of deterministic values. Apart from measurement errors, randomness can be attributed to the inherent complexity often encountered in physical phenomena. This justifies considering variability in the forcing term and in the initial conditions (c, b0, b1). It must be pointed out that randomness may also be included in the diffusion coefficient; however, in this contribution, for simplicity, we will assume that λ is a deterministic constant.
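For later comparison, recall that in the classical case i = 1 (assuming λ ≠ 0 so that the closed form below makes sense), a direct computation shows that the solution of (1) is

$$ y(t) = \left( b_0 + \frac{c}{\lambda} \right) \mathrm{e}^{\lambda t} - \frac{c}{\lambda}, \qquad t \geq 0, $$

as can be checked by substitution: y′(t) − λ y(t) = c and y(0) = b0. The fractional solution constructed below reduces to this expression when α = 1.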

On the other hand, over the last few decades fractional differential equations have played a significant role in modelling phenomena with microscopic complex behaviour whose macroscopic dynamics (like viscoelasticity, fluid behaviour, etc.) cannot be properly described using classical derivatives [4]. Throughout this paper, randomness will be treated using mean square calculus, which involves the so-called mean square convergence. In the context of ordinary and fractional random differential equations, this random calculus has been successfully used to address a number of significant problems; some examples can be found in [3, 5, 6, 7]. Mean square calculus is adequate for the analysis of random fractional differential equations because it permits taking advantage of the properties of mean square convergence to compute approximations of the expectation and variance of the solution stochastic process. Our contribution is based upon a number of results belonging to mean square calculus that have been established in the extant literature.

In this contribution we deal with the following generalization of the initial value problem (IVP) (1)

$$ \begin{equation} \left\{\begin{array}{ccl} \left(^CD_{0^{+}}^\alpha Y\right)(t) -\lambda Y(t)& = & c, \quad t\gt 0, \quad 0 \lt \alpha \leq 2,\\ Y^{(j)}(0) &=& b_j, \quad 0 \leq j \leq n-1, \quad n=-[-\alpha], \end{array} \right. \end{equation} $$

being [·] the integer part function, so that n = −[−α] equals 1 if 0 < α ≤ 1 and 2 if 1 < α ≤ 2. To conduct our analysis, we will assume that $\lambda\in \mathbb{R}$ and that bj and c are independent second-order RVs (2-RVs), that is, bj and c have finite variance. The symbol $\left(^CD_{0^{+}}^\alpha Y\right)(t)$ denotes the random Caputo fractional derivative of order α of the stochastic process (SP) Y(t). Although there exist several definitions of derivatives of non-integer order (fractional derivatives), for convenience, throughout our analysis we will consider the Caputo derivative. For the sake of completeness, we recall the definition of this random derivative.

Definition 1

Let f(t) be a second-order SP (2-SP), i.e. f(t) is a 2-RV for every fixed t, and assume that it is n-times mean square differentiable on a finite interval [a, b]. Then the random Caputo mean square derivative of order α > 0 of f(t) is given by

$$ \begin{equation} \left(^CD_{a+}^\alpha f \right)(t):= \left( J^{n-\alpha} f^{(n)} \right)(t). \end{equation} $$

Here, $n = -[-\alpha]$, $f^{(n)}(t)$ denotes the n-th mean square derivative of the 2-SP f(t), and $J^{n-\alpha}$ stands for the random fractional integral of order n − α, i.e.

$$ \begin{equation*} \left( J^{n-\alpha}f \right)(t)=\frac{1}{\Gamma(n-\alpha)} \int_a^t (t-u)^{n-\alpha-1} f(u)\,\mathrm{d}u, \end{equation*} $$

where Γ(n − α) is the deterministic Gamma function.

In [8, ch. 4], one can find the definition of mean square differentiability for 2-SPs. The random fractional integral $\left(J^{n-\alpha}f\right)(t)$ is a natural extension of its deterministic counterpart (see [4]) using the properties of the mean square Riemann integral of 2-SPs (see [8, ch. 4]).
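As an illustration of Definition 1, and a computation used repeatedly below to differentiate generalized power series term by term, a standard calculation (see e.g. [4]) gives, for the deterministic power function f(t) = tγ with γ > n − 1 and a = 0,

$$ \left(^CD_{0^{+}}^{\alpha}\, t^{\gamma}\right) = \frac{\Gamma(\gamma+1)}{\Gamma(\gamma-\alpha+1)}\, t^{\gamma-\alpha}, \qquad n=-[-\alpha]. $$

In particular, taking γ = α(m + 1) produces the quotients of Gamma functions that appear in the coefficients of (12).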

We recall that the correlation function, Γf(u1, u2), of a 2-SP, f(t), is defined by

$$ \begin{equation*} \Gamma_f(u_1,u_2):=\mathbb{E}[f(u_1)f(u_2)],\quad u_1,u_2\in \mathbb{R}, \end{equation*} $$

where $\mathbb{E}[\cdot]$ denotes the expectation operator. It can be proved, by applying the Schwarz inequality, that Γf(u1, u2) is always well-defined. An important property relating the correlation function of a 2-SP f(t) and that of its n-th mean square derivative, f(n)(t), is the following:

$$ \begin{equation} \Gamma_{f^{(n)}}(u_1,u_2)= \dfrac{\partial^{2n} \Gamma_f(u_1,u_2)}{\partial u_1^n \, \partial u_2^n},\quad u_1,u_2\in \mathbb{R}. \end{equation} $$

Using (4) and the characterization of the existence of the mean square Riemann integral of a 2-SP f(t) (see [8, Th. 4.5.1]), one can establish the following necessary and sufficient condition for the existence of the random Caputo fractional mean square derivative.

Proposition 1

Let {f(t) : t ∈ [a, b]} be a 2-SP which is n-times mean square differentiable with correlation function Γf(·, ·). Then, its (left-sided) random Caputo fractional mean square derivative, $\left(^CD_{a+}^\alpha f\right)(t)$, α > 0, exists if, and only if, the following deterministic double Riemann integral

$$ \begin{equation} \int_a^t \int_a^t (t-u_1)^{n-\alpha-1}(t-u_2)^{n-\alpha-1} \dfrac{\partial^{2n} \Gamma_f(u_1,u_2)}{\partial u_1^n \, \partial u_2^n} \, \mathrm{d}u_1 \mathrm{d}u_2 \end{equation} $$

exists and is finite.
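To make condition (5) concrete, the following minimal numerical sketch (our own illustration, requiring SciPy; not part of the original analysis) checks it for the hypothetical 2-SP f(t) = b0 t with $\mathbb{E}[(b_0)^2]=1$, for which Γf(u1, u2) = u1 u2, n = 1 and the mixed derivative in (5) equals 1:

```python
# Numerical check of condition (5) for the hypothetical 2-SP f(t) = b0 * t
# with E[b0^2] = 1: Gamma_f(u1, u2) = u1 * u2 and n = 1, so (5) reduces to
# the double integral of (t - u1)^(-alpha) * (t - u2)^(-alpha) over [0, t]^2.
from scipy.integrate import dblquad

alpha, t = 0.7, 1.0  # any 0 < alpha < 1 and t > 0

integrand = lambda u1, u2: (t - u1) ** (-alpha) * (t - u2) ** (-alpha)
# The endpoint singularity is integrable; an IntegrationWarning may be emitted.
value, abserr = dblquad(integrand, 0.0, t, lambda u: 0.0, lambda u: t)

# The integral factorizes as (t^(1-alpha) / (1 - alpha))^2.
exact = (t ** (1 - alpha) / (1 - alpha)) ** 2
print(value, exact)  # both approximately 11.11
```

Since the integral is finite for every 0 < α < 1, this f(t) admits a random Caputo mean square derivative of any such order.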

Once the Caputo derivative of a 2-SP has been defined, in Sections 2 and 3 we will construct the solution of the random IVP (2) in two steps: first, considering the case $0 \lt \alpha \leq 1$ (Case I) and, secondly, the case $1 \lt \alpha \leq 2$ (Case II). As we shall see throughout the analysis, this latter case will be based upon the study carried out in the former.

Case I: Solution of the random linear fractional differential equation when $\mathbf{0 \lt \alpha \leq 1}$

In this section we will deal with the construction of the solution SP to the IVP (2) in the case 0 < α ≤ 1 by means of a random generalized power series. This will be done by combining the Fröbenius method with a mean square chain rule for differentiating 2-SPs, a result that has been recently established by some of the authors [1].

Theorem 2

Let g be a deterministic continuous function on [a1, a2] such that g′(t) exists and is finite at some point t ∈ [a1, a2]. If $\{X(v): v\in \mathcal{V}\}$ is a 2-SP such that

i) the interval $\mathcal{V}$ contains the range of g, that is, $g([a_1,a_2])\subset \mathcal{V}$;

ii) X(v) is mean square differentiable at the point g(t);

iii) the mean square derivative of X(v), $\frac{\mathrm{d} X(v)}{\mathrm{d} v}$, is mean square continuous on $\mathcal{V}$;

then the 2-SP X(g(t)) is mean square differentiable at t and its mean square derivative is given by

dX(g(t))dt=dX(v)dv|v=g(t)g(t). $$ \frac{\mathrm{d}X(g(t))}{\mathrm{d}t}=\frac{\mathrm{d}X(v)}{\mathrm{d}v}\Big\rvert_{v=g(t)}g'(t). $$

The solution to the random IVP (2) will be sought in the following form

$$ \begin{equation} Y(t)=\sum_{m\geq 0} X_m t^{\alpha m},\quad t\geq 0, \qquad 0\lt \alpha \leq 1. \end{equation} $$

If we define

$$ \begin{equation} X(t) = \sum_{m\geq 0} X_m t^{m}, \end{equation} $$

then Y(t) = X(tα), and in this manner we can take advantage of the mean square calculus for standard random power series. The random Caputo mean square derivative (3) then reads

$$ \begin{equation} \left(^CD_{0^{+}}^\alpha Y\right)(t) = \frac{1}{\Gamma(1-\alpha)}\int_0^t (t-u)^{-\alpha} Z(u)\, \mathrm{d}u, \end{equation} $$

where Z(t) := (X(tα))′ is the mean square derivative of the SP X(t) composed with the deterministic function tα.

To apply the Fröbenius method, we first need to obtain the Caputo derivative of the generalized power series (6). This is done by applying Theorem 2 under the following hypotheses:

C1: X(v), defined in expression (7), is mean square differentiable at v = tα, and

$$ \begin{equation} X'(t^{\alpha}) = \sum_{m\geq 1}m X_m t^{\alpha (m-1)}. \end{equation} $$

C2: $\dfrac{\mathrm{d}X(v)}{\mathrm{d}v}$ is mean square continuous on the interval v ∈ [0, Tα], T > 0.

As 0 < α ≤ 1, the interval $\mathcal{V}=[0,T^{\alpha}]$ contains the range of g(t) = tα, that is, $g([a_1,a_2])=g([0,T])=[0,T^{\alpha}]=\mathcal{V}$. By Theorem 2, X(g(t)) is mean square differentiable at t and its mean square derivative has the following expression

$$ \begin{equation} Z(t):=Y'(t)=(X(t^{\alpha}))'= \alpha t^{\alpha-1} X'(t^{\alpha}). \end{equation} $$

Therefore

$$ \begin{equation} \left(^CD_{0^{+}}^\alpha Y\right)(t) = \frac{1}{\Gamma(1-\alpha)}\int_0^t (t-u)^{-\alpha} \sum_{m=1}^\infty \alpha m X_m u^{\alpha m-1}\,\mathrm{d}u. \end{equation} $$

Furthermore, assuming that

C3: the random power series $\sum_{m=1}^\infty m X_m t^{\alpha m-1}$ is mean square convergent on the interval 0 ≤ t ≤ T,

the infinite sum and the integral in (11) can be interchanged, and the resulting expression can be shown to read

$$ \begin{equation} \left(^CD_{0^{+}}^\alpha Y \right)(t) = \sum_{m\geq 0} X_{m+1} \dfrac{\Gamma(\alpha (m+1)+1)}{\Gamma(\alpha m+1)}\, t^{\alpha m}. \end{equation} $$

In order to calculate the coefficients Xm of Y(t), we take into account that Y(0) = X0 = b0. Then, using the Fröbenius method, one gets

$$ \begin{equation*} X_{m+1} = \dfrac{\lambda^{m+1} b_0 + \lambda^{m} c}{\Gamma(\alpha (m+1) +1)}, \quad m\geq 0. \end{equation*} $$
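In detail, this closed form follows by inserting (6) and (12) into (2) and matching the coefficients of tαm on both sides, which gives the recurrence

$$ X_{m+1}\,\frac{\Gamma(\alpha(m+1)+1)}{\Gamma(\alpha m+1)} = \lambda X_m + c\,\delta_{m,0}, \qquad m \geq 0, $$

where δm,0 denotes the Kronecker delta; starting from X0 = b0 and iterating yields the expression above.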

A candidate solution to the random IVP (2) when 0 < α ≤ 1 is then given by

$$ \begin{equation} Y(t)=X(t^{\alpha}),\qquad X(v)=\sum_{m\geq 0} X_{m,1} v^m+ \sum_{m\geq 1} X_{m,2} v^m, \end{equation} $$

where

$$ \begin{equation} X_{m,1} = \dfrac{\lambda^m b_0}{\Gamma(\alpha m +1)},\qquad X_{m,2} = \dfrac{\lambda^{m-1} c}{\Gamma(\alpha m +1)}. \end{equation} $$
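It is worth observing that, in agreement with the deterministic theory [4], the two series in (13)–(14) can be written compactly in terms of the two-parameter Mittag-Leffler function $E_{\alpha,\beta}(z)=\sum_{m\geq 0} z^m/\Gamma(\alpha m+\beta)$, namely

$$ Y(t) = b_0\, E_{\alpha,1}(\lambda t^{\alpha}) + c\, t^{\alpha}\, E_{\alpha,\alpha+1}(\lambda t^{\alpha}), \qquad t \geq 0. $$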

This candidate must verify conditions C1–C3 in order to legitimate the assumptions that allow us to define the random Caputo mean square derivative of the 2-SP Y(t). In order to check that C1 is satisfied, we will need the following result:

Proposition 3

[2] Let $\mathcal{V} \subset \mathbb{R}$ be an interval, m0 ≥ 0 a non-negative integer, and $\{U_m(v): v\in \mathcal{V}, \, m\geq m_0 \}$ a sequence of 2-SPs such that

Um(v) is mean square differentiable on $\mathcal{V}$.

The mean square derivative, $U_m'(v)$, is mean square continuous on $\mathcal{V}$.

$U(v)=\sum_{m\geq m_0} U_m(v)$ is mean square convergent on $\mathcal{V}$.

$\sum_{m\geq m_0} U_m'(v)$ is mean square uniformly convergent on $\mathcal{V}$.

Then, the 2-SP U(v) is mean square differentiable at every $v \in \mathcal{V}$ and

U(v)=mm0Um(v). $$ U'(v)=\sum_{m\geq m_0} U_m'(v). $$

Now, we check the hypotheses of Proposition 3 for the two series defined in (13)–(14). First, let us define

$$ \begin{equation*} X_{m,1}(t):= X_{m,1}t^m=\dfrac{\lambda^m b_0}{\Gamma(\alpha m + 1)} t^m, \end{equation*} $$

and observe that

$$ \begin{equation} 0\lt \left\| \dfrac{X_{m,1}(t^{\alpha}+h)-X_{m,1}(t^{\alpha})}{h}-X_{m,1}'(t^{\alpha}) \right\|_2 \leq \dfrac{|\lambda|^m \left\| b_0 \right\|_2}{\Gamma(\alpha m +1)} \left| \dfrac{(t^{\alpha}+h)^{m}-t^{\alpha m}}{h} - m t^{\alpha (m-1)} \right| \xrightarrow[h \to 0]{} 0, \end{equation} $$

where in the limit we have used that the deterministic function vm is differentiable at v = tα and that $\left\| b_0\right\|_2\lt+\infty$ since b0 is a 2-RV. As a consequence, Xm,1(v) is mean square differentiable at v = tα, its mean square derivative being $X_{m,1}'(t^\alpha)=\frac{m \lambda^m b_0}{\Gamma(\alpha m +1)} t^{\alpha (m-1)}$.

Secondly, using a similar reasoning it can be checked that $X_{m,1}'(v) = m\lambda^m b_0 v^{m-1}/\Gamma(\alpha m + 1)$ is mean square continuous at v = tα.

Thirdly, let us show that the random power series $\sum_{m\geq 0} X_{m,1}(v)=\sum_{m\geq 0} X_{m,1}v^m$ is mean square convergent for each v: 0 < v ≤ Tα. To this end, we first majorize this series:

$$ \begin{equation} \left\| X_{m,1} \right\|_2 t^m = \left\| \dfrac{\lambda^m b_0}{\Gamma(\alpha m +1)} \right\|_2 t^m\leq |\lambda|^m \left\| b_0 \right\|_2 \dfrac{t^m}{\Gamma(\alpha m +1)}:=\delta_m(t). \end{equation} $$

Then, we check that the series with general term δm(t) is convergent by applying the D'Alembert test together with the generalized Stirling formula, $\Gamma(x+1) \approx x^x e^{-x} \sqrt{2 \pi x}$ as x → +∞. This leads to

$$ \begin{equation} \lim_{m\to +\infty}\dfrac{\delta_{m+1}(t)}{\delta_m(t)} = |\lambda| \left( \lim_{m\to +\infty} \dfrac{1}{(\alpha (m+1))^{\alpha}} \sqrt{\dfrac{m}{m+1}} \right) t=0. \end{equation} $$
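In more detail, the ratio in (17) is obtained as follows: by the definition of δm(t),

$$ \dfrac{\delta_{m+1}(t)}{\delta_m(t)} = |\lambda|\, t\, \dfrac{\Gamma(\alpha m+1)}{\Gamma(\alpha(m+1)+1)}, $$

and applying the generalized Stirling formula to both Gamma functions shows that $\Gamma(\alpha m+1)/\Gamma(\alpha(m+1)+1)$ behaves as $(\alpha(m+1))^{-\alpha}\sqrt{m/(m+1)}$ as m → +∞, which is the expression inside the limit.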

Finally, the mean square uniform convergence of $\sum_{m\geq 0} X_{m,1}(t)$ can be established using arguments similar to the ones shown earlier. In this way it is justified that the random power series $\sum_{m\geq 0}X_{m,1}v^m$ is mean square differentiable at v = tα. Analogously, it can be proved that the second power series $\sum_{m\geq 1}X_{m,2}v^m$ in (13) is mean square differentiable at v = tα. As a consequence, the random power series X(v) is mean square differentiable at v = tα, and condition C1 is satisfied. Based upon similar arguments, it can be shown that X(v) also satisfies conditions C2 and C3.

Case II: Solution of the random linear fractional differential equation when $\mathbf{1 \lt \alpha \leq 2}$

In this section we will construct the solution of the random IVP (2) when $1 \lt \alpha \leq 2$ by applying arguments similar to the ones exhibited in the previous section. Thus we seek the solution SP in the following form

$$ \begin{equation*} Y(t)=Y_1(t)+Y_2(t), \end{equation*} $$

where

$$ \begin{equation} Y_1(t)=\sum_{m \geq 0}X_m t^{\alpha m}, \qquad Y_2(t)= \sum_{m \geq 0}Y_m t^{\alpha m + 1}. \end{equation} $$

In order to obtain the expression of the Caputo derivative of order α of Y1(t), we define $\hat{Y}_1(t)= \sum_{m \geq 0}X_m t^{m}$, hence $Y_1(t)=\hat{Y}_1(t^\alpha)$. According to (3), the random mean square Caputo derivative is given by

$$ \begin{equation*} \left( ^C D_{0^{+}}^\alpha Y_1 \right)(t)= \left( J^{2-\alpha} Z \right)(t),\qquad 1 \lt \alpha \leq 2, \end{equation*} $$

where $Z(t)=\left(\hat{Y}_1(t^\alpha)\right)''$. To compute Z(t) we will apply Theorem 2 twice, assuming that hypotheses i)–iii) of Theorem 2 hold for both $\hat{Y}_1(t)$ and $\hat{Y}_1^{'}(t)$. In that case

$$ \begin{equation*} \begin{aligned} Z(t)&=\left[\left( \hat{Y}_1(t^\alpha) \right)' \right]' = \left[ \alpha t^{\alpha-1} \hat{Y}_1^{'}(v)\Big\lvert_{v=t^\alpha}\right]'\\ & = \alpha (\alpha-1) t^{\alpha-2} \hat{Y}_1^{'}(v)\Big\lvert_{v=t^\alpha} + \alpha t^{\alpha-1}\, \alpha t^{\alpha-1}\, \hat{Y}_1^{''}(v)\Big\lvert_{v=t^\alpha} \\ & = \alpha (\alpha-1) t^{\alpha-2} \hat{Y}_1^{'}(v)\Big\lvert_{v=t^\alpha}+ \alpha^2 t^{2 \alpha-2} \hat{Y}_1^{''}(v)\Big\lvert_{v=t^\alpha}\\ & = \alpha (\alpha-1) \sum_{m=0}^\infty (m+1) X_{m+1} t^{\alpha (m+1) -2} + \alpha^2 \sum_{m=0}^\infty (m+2) (m+1) X_{m+2} t^{\alpha (m+2)-2}. \end{aligned} \end{equation*} $$

Observe that we have applied Property (4.126) of [8, p. 96] to compute the mean square derivative of the product of a deterministic function (α tα−1) and a mean square differentiable 2-SP ($\hat{Y}_1^{'}(t^{\alpha})$). Moreover, assuming that the random power series $\sum_{m=0}^\infty (m+1) X_{m+1} t^{\alpha (m+1) -2}$ and $\sum_{m=0}^\infty (m+2)(m+1) X_{m+2} t^{\alpha (m+2)-2}$ are mean square convergent, we can obtain the random mean square Caputo derivative as follows

$$ \begin{equation*} \begin{aligned} \left( ^C D_{0^{+}}^\alpha Y_1 \right)(t)&= \left( J^{2-\alpha} Z \right)(t)\\ &= J^{2-\alpha} \left( \alpha (\alpha-1) \sum_{m=0}^\infty (m+1) X_{m+1} t^{\alpha (m+1) -2} + \alpha^2 \sum_{m=0}^\infty (m+2)(m+1) X_{m+2} t^{\alpha (m+2)-2} \right)\\ &= \alpha (\alpha-1) \sum_{m=0}^\infty (m+1) X_{m+1} \, J^{2-\alpha}\left( t^{\alpha (m+1) -2} \right) + \alpha^2 \sum_{m=0}^\infty (m+2)(m+1) X_{m+2} \, J^{2-\alpha} \left( t^{\alpha (m+2)-2} \right)\\ &= \sum_{m \geq 0} \frac{\Gamma(\alpha (m+1)+1)}{\Gamma(\alpha m +1)} X_{m+1}t^{\alpha m}, \end{aligned} \end{equation*} $$

where in the last equality we have simplified and used the reproductive property of the Gamma function, Γ(γ + 1) = γ Γ(γ), γ > 0.

The next step is to compute the random mean square Caputo derivative of Y2(t). Note that by (3) one gets

$$ \begin{equation*} \left( ^C D_{0^{+}}^\alpha Y_2 \right)(t)= \left( J^{2-\alpha} Y_2'' \right)(t) = \left( J^{2-\alpha} (Y_2')' \right)(t) = \left( ^C D_{0^{+}}^{\alpha-1} Y_2' \right)(t). \end{equation*} $$

As $1 \lt \alpha \leq 2$ and $Y_2'(t)= \sum_{m \geq 0} (\alpha m +1) Y_m t^{\alpha m}$, we can set $\hat{\alpha}=\alpha-1\in (0,1]$ and $\hat{Y}_m = (\alpha m +1) Y_m$, and compute the random mean square Caputo derivative of order $\hat{\alpha}$ of $\sum_{m \geq 0} \hat{Y}_m t^{\alpha m}$ using the same method as in (8)–(12). This leads to

$$ \begin{equation*} \left( ^C D_{0^{+}}^\alpha Y_2 \right)(t)=\sum_{m \geq 0} Y_{m+1} \frac{\Gamma(\alpha (m+1)+2)}{\Gamma(\alpha m +2)} t^{\alpha m+1}. \end{equation*} $$

Once we have obtained the mean square Caputo derivative of both series in (18), we compute their coefficients Xm and Ym taking into account the initial conditions Y(0) = X0 = b0 and Y′(0) = Y0 = b1. After handling the corresponding recurrence relationships, one gets

$$ \begin{equation*} X_m = \frac{\lambda ^m b_0+\lambda^{m-1} c}{\Gamma(\alpha m +1)}, \qquad Y_m= \frac{\lambda^m b_1}{\Gamma(\alpha m +2)}, \qquad m \geq 1. \end{equation*} $$

Therefore, a candidate solution SP to the random IVP (2) with $1 \lt \alpha \leq 2$ is given by

$$ \begin{equation*} Y(t)= \sum_{m \geq 0}X_{m,1} t^{\alpha m} + \sum_{m \geq 1}X_{m,2} t^{\alpha m} + \sum_{m \geq 0}Y_m t^{\alpha m + 1}, \end{equation*} $$

where

$$ \begin{equation*} X_{m,1}= \frac{\lambda ^m b_0}{\Gamma(\alpha m +1)}, \qquad X_{m,2}= \frac{\lambda ^{m-1} c}{\Gamma(\alpha m +1)}, \qquad Y_m = \frac{\lambda^m b_1}{\Gamma(\alpha m +2)}. \end{equation*} $$
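As in Case I, this candidate solution admits a compact representation in terms of Mittag-Leffler functions, consistent with the deterministic theory [4]:

$$ Y(t) = b_0\, E_{\alpha,1}(\lambda t^{\alpha}) + b_1\, t\, E_{\alpha,2}(\lambda t^{\alpha}) + c\, t^{\alpha}\, E_{\alpha,\alpha+1}(\lambda t^{\alpha}). $$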

Approximating the mean, the variance, the covariance and the cross-covariance of the solution stochastic process

An important feature of solving random differential equations is that the main goal is not only to compute the solution SP but also its statistical functions. This section is devoted to computing the mean, the variance, the covariance and the cross-covariance functions of the solution SP to the random IVP (2) when 0 < α ≤ 1. The method to compute these statistical functions in the case 1 < α ≤ 2 is analogous, so we omit it in the subsequent development. To this end, we state the following property, which will play a key role later. At this point, it is important to stress that this crucial property holds for mean square convergence, while it is not true when other kinds of stochastic convergence are considered.

Proposition 4

([8, Theorem 4.2.1]). Let {RM : M ≥ 0} be a sequence of 2-RVs such that $R_M \xrightarrow[M\to +\infty]{\text{m.s.}} R$, i.e. RM is mean square convergent to R. Then, the mean and the variance of the approximations RM tend to the mean and the variance of the corresponding limit:

$$ \begin{equation} \mathbb{E}[R_M] \xrightarrow[M\to +\infty]{} \mathbb{E}[R] \quad \text{and} \quad \mathbb{V}[R_M] \xrightarrow[M\to +\infty]{} \mathbb{V}[R]. \end{equation} $$

In order to construct the approximations of the mean and the variance, it is convenient to introduce the truncation of order M of the solution SP, that is,

$$ \begin{equation} Y_M(t) = \sum_{m= 0}^M X_{m,1} t^{\alpha m} + \sum_{m= 1}^M X_{m,2} t^{\alpha m} = \sum_{m= 0}^M \dfrac{\lambda^m b_0}{\Gamma(\alpha m +1)} t^{\alpha m} + \sum_{m= 1}^M \dfrac{\lambda^{m-1} c}{\Gamma(\alpha m +1)} t^{\alpha m}. \end{equation} $$

By applying the expectation operator to the previous expression, one gets

$$ \begin{equation} \mathbb{E}[Y_M(t)] = \mathbb{E}[b_0]\sum_{m= 0}^M \dfrac{\lambda^m}{\Gamma(\alpha m +1)} t^{\alpha m} + \mathbb{E}[c]\sum_{m= 1}^M \dfrac{\lambda^{m-1}}{\Gamma(\alpha m +1)} t^{\alpha m}. \end{equation} $$
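For implementation purposes, expression (21) is directly computable. The following minimal Python sketch (our own illustration, with hypothetical function and argument names) evaluates it using the standard library Gamma function:

```python
# Truncated mean E[Y_M(t)] of the solution SP, following (21).
from math import gamma

def mean_YM(t, M, alpha, lam, Eb0, Ec):
    """Hypothetical helper: truncated series approximation (21) of E[Y(t)]."""
    s1 = sum(lam**m * t**(alpha * m) / gamma(alpha * m + 1) for m in range(M + 1))
    s2 = sum(lam**(m - 1) * t**(alpha * m) / gamma(alpha * m + 1) for m in range(1, M + 1))
    return Eb0 * s1 + Ec * s2
```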

In order to obtain the expression of the approximation of the variance function, $\mathbb{V}[Y_M(t)]$, let us first recall that

$$ \begin{equation} \mathbb{V}[Y_M(t)]=\mathbb{E}[(Y_M(t))^2]-\left( \mathbb{E}[Y_M(t)]\right)^2, \end{equation} $$

so it is enough to compute $\mathbb{E}[(Y_M(t))^2]$. To do that, we will take advantage of the statistical independence of the RVs b0 and c. Hence,

$$ \begin{equation}\label{momento_orden_2_Y} \begin{split} \mathbb{E}\left[(Y_M(t))^2\right] =& \ \mathbb{E}\left[ \left( \sum_{m= 0}^M \dfrac{\lambda^m b_0}{\Gamma(\alpha m +1)} t^{\alpha m} + \sum_{m= 1}^M \dfrac{\lambda^{m-1} c}{\Gamma(\alpha m +1)} t^{\alpha m} \right)^2 \right]\\ =& \ \mathbb{E}\left[(b_0)^2\right] \sum_{m= 0}^M \dfrac{\lambda^{2m}}{(\Gamma(\alpha m +1))^2} t^{2\alpha m} + 2\,\mathbb{E}\left[(b_0)^2\right]\sum_{m= 1}^M \sum_{n= 0}^{m-1} \dfrac{\lambda^{m+n}}{\Gamma(\alpha m +1)\Gamma(\alpha n +1)} t^{\alpha (m+n)} \\ &+ \mathbb{E}\left[c^2\right] \sum_{m= 1}^M \dfrac{\lambda^{2(m-1)}}{(\Gamma(\alpha m +1))^2} t^{2\alpha m} + 2\, \mathbb{E}\left[c^2\right] \sum_{m= 2}^M \sum_{n= 1}^{m-1} \dfrac{\lambda^{m+n-2}}{\Gamma(\alpha m +1)\Gamma(\alpha n +1)} t^{\alpha (m+n)} \\ &+ 2\,\mathbb{E}[b_0]\,\mathbb{E}[c] \sum_{m= 0}^M \sum_{n= 1}^M \dfrac{\lambda^{m+n-1}}{\Gamma(\alpha m +1)\Gamma(\alpha n +1)} t^{\alpha (m+n)}. \end{split} \end{equation} $$

In order to compute an approximation of the cross-covariance function of the solution SP, let us consider N, M ≥ 1 and t, s > 0. Using the properties of covariance, this approximation can be represented by the following expression

$$ \begin{split} \mathbb{C}_{Y_M,Y_N}(t,s) =& \sum_{m=0}^M \sum_{n=0}^N \mathbb{C}\text{ov} \left[ X_{m,1},X_{n,1} \right] t^{\alpha m} s^{\alpha n} + \sum_{m=0}^M \sum_{n=1}^N \mathbb{C}\text{ov} \left[ X_{m,1},X_{n,2} \right] t^{\alpha m} s^{\alpha n} \\ & + \sum_{m=1}^M \sum_{n=0}^N \mathbb{C}\text{ov} \left[ X_{m,2},X_{n,1} \right] t^{\alpha m} s^{\alpha n} + \sum_{m=1}^M \sum_{n=1}^N \mathbb{C}\text{ov} \left[ X_{m,2},X_{n,2} \right] t^{\alpha m} s^{\alpha n}, \end{split} $$

where

$$ \begin{array}{ccl} \mathbb{C}\text{ov} \left[ X_{m,1},X_{n,1} \right] & = & \dfrac{\lambda^{m+n} \mathbb{E}\left[ (b_0)^{2} \right] - \lambda^{m+n} \left( \mathbb{E}\left[ b_0 \right] \right)^2}{\Gamma(\alpha m +1)\Gamma(\alpha n +1)}, \\ \mathbb{C}\text{ov} \left[ X_{m,1},X_{n,2} \right] &=& \dfrac{\left( \lambda^{m+n-1} - \lambda^{m+n-1} \right) \mathbb{E}\left[ b_0 \right] \mathbb{E}\left[ c \right]}{\Gamma(\alpha m +1)\Gamma(\alpha n +1)}=0, \\ \mathbb{C}\text{ov} \left[ X_{m,2},X_{n,1} \right] &=& \dfrac{\left( \lambda^{m+n-1} - \lambda^{m+n-1} \right) \mathbb{E}\left[ b_0 \right] \mathbb{E}\left[ c \right]}{\Gamma(\alpha m +1)\Gamma(\alpha n +1)}=0, \\ \mathbb{C}\text{ov} \left[ X_{m,2},X_{n,2} \right] &=& \dfrac{\lambda^{m+n-2} \mathbb{E}\left[ c^{2} \right] - \lambda^{m+n-2} \left( \mathbb{E}\left[ c \right] \right)^2}{\Gamma(\alpha m +1)\Gamma(\alpha n +1)}. \end{array} $$

Observe that $\mathbb{C}_{Y_M,Y_N}(t,t)$ corresponds to the covariance of the approximations of orders M and N of the solution SP at the time instant t, while $\mathbb{C}_{Y_M,Y_M}(t,s)$ gives the covariance of the approximation of order M of the solution SP at the two time instants t and s.
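Since the two cross terms vanish, the double sums above factorize into products of single sums, which the following sketch (again our own illustration, under the same independence assumption) exploits:

```python
# Truncated cross-covariance C_{Y_M,Y_N}(t, s): only the Cov[X_{m,1}, X_{n,1}]
# and Cov[X_{m,2}, X_{n,2}] blocks survive (b0 and c are independent), and each
# block factorizes because lambda^{m+n} = lambda^m * lambda^n.
from math import gamma

def cross_cov_YMYN(t, s, M, N, alpha, lam, Vb0, Vc):
    def S1(x, K):  # sum of lam^m x^(alpha m) / Gamma(alpha m + 1), m = 0..K
        return sum(lam**m * x**(alpha * m) / gamma(alpha * m + 1) for m in range(K + 1))
    def S2(x, K):  # sum of lam^(m-1) x^(alpha m) / Gamma(alpha m + 1), m = 1..K
        return sum(lam**(m - 1) * x**(alpha * m) / gamma(alpha * m + 1) for m in range(1, K + 1))
    return Vb0 * S1(t, M) * S1(s, N) + Vc * S2(t, M) * S2(s, N)
```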

Numerical examples

In this section we illustrate the theoretical results established above by means of several numerical examples. Let us consider the random fractional IVP (2), where $0 \lt \alpha \leq 1$, $\lambda=\frac{3}{4}$, and b0 and c are 2-RVs such that

$$ \begin{equation} \mathbb{E}[b_0]=\mathbb{E}[c]=\frac{1}{2}, \quad \mathbb{V}[b_0]=\mathbb{V}[c]=\frac{1}{4}. \end{equation} $$
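The following self-contained sketch (our own code, not the authors') evaluates the truncated mean and standard deviation for these data; writing $Y_M(t)=b_0 S_1(t)+c\, S_2(t)$, the independence of b0 and c gives $\mathbb{V}[Y_M(t)]=\mathbb{V}[b_0]S_1(t)^2+\mathbb{V}[c]S_2(t)^2$, which is (22)–(23) in compact form:

```python
# Truncated mean and standard deviation for the data in (24):
# alpha = 0.7, lambda = 3/4, E[b0] = E[c] = 1/2, V[b0] = V[c] = 1/4.
from math import gamma, sqrt

alpha, lam = 0.7, 0.75
Eb0 = Ec = 0.5
Vb0 = Vc = 0.25

def S1(t, M):  # deterministic series multiplying b0
    return sum(lam**m * t**(alpha * m) / gamma(alpha * m + 1) for m in range(M + 1))

def S2(t, M):  # deterministic series multiplying c
    return sum(lam**(m - 1) * t**(alpha * m) / gamma(alpha * m + 1) for m in range(1, M + 1))

t = 5.0
for M in (6, 7, 8, 9, 10):
    mean = Eb0 * S1(t, M) + Ec * S2(t, M)
    std = sqrt(Vb0 * S1(t, M) ** 2 + Vc * S2(t, M) ** 2)
    print(M, round(mean, 4), round(std, 4))  # values stabilize as M increases
```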

Figure 1 shows the approximations of the mean and the standard deviation of the solution SP to the random IVP (2) with α = 0.7. In both plots we have carried out computations taking different truncation orders M = 6, 7, 8, 9, 10.

Figure 1

Approximations of the mean (left) and the standard deviation (right) of the solution SP to the random IVP (2) with α = 0.7 and λ = 3/4 using different truncation orders M = 6, 7, 8, 9, 10 over the time interval [0, 5].

To illustrate the role of the parameter λ in the speed of convergence of the approximations, let us now consider a value of λ greater than 1, for example $\lambda=\frac{5}{4}$. In that case, the approximations of the mean and standard deviation converge more slowly than in the previous case. In Figure 2, we show the approximations considering higher truncation orders than before, namely M = 10, 12, 14, 16, 18.

Figure 2

Approximations of the mean (left) and the standard deviation (right) of the solution SP to the random IVP (2) with α = 0.7 and λ = 5/4 using different truncation orders M = 10, 12, 14, 16, 18 over the time interval [0, 5].

On the one hand, if we assume that the means of b0 and c are negative, the mean of the solution SP decreases as t increases, while the standard deviation of the solution SP increases. In Figure 3, the mean and standard deviation have been represented taking $\mathbb{E}[b_0]=\mathbb{E}[c]=-1$.

Figure 3

Approximations of the mean (left) and the standard deviation (right) of the solution SP to the random IVP (2) with α = 0.7, λ = 5/4, $\mathbb{E}[b_0]=\mathbb{E}[c]=-1$ and $\mathbb{V}[b_0]=\mathbb{V}[c]=1/4$ using different truncation orders M over the time interval [0, 5].

In Figures 1–3 we observe that the approximations of the mean and the standard deviation improve as M increases, although the error grows as we move away from the origin (t = 0), where the IVP is stated. These results are consistent with Proposition 4.

On the other hand, it is important to illustrate the behaviour of the solution SP when the fractional order α of the derivative changes. In Figure 4, we have fixed the truncation order M = 20, and we show the trajectories of the expectation and standard deviation of the solution SP for different orders of the fractional derivative. Note that the case α = 0.99 is very close to the classical derivative (α = 1).

Figure 4

Approximations of the mean (left) and the standard deviation (right) of the solution SP to the random IVP (2) with M = 20, λ = 5/4, $\mathbb{E}[b_0]=\mathbb{E}[c]=-1$ and $\mathbb{V}[b_0]=\mathbb{V}[c]=1/4$ using different orders of the derivative α ∈ {0.4, 0.5, 0.6, 0.7, 0.99} over the time interval [0, 5].

Conclusions

In this contribution we have provided a probabilistic study of a randomized fractional linear differential equation. The study has been carried out using results belonging to random mean square calculus that have been previously established in the extant literature. We have constructed a mean square convergent generalized power series solution stochastic process by means of a random Fröbenius method. Furthermore, we have constructed approximations of both the mean and the variance of the solution stochastic process. Our numerical experiments are in agreement with the theoretical results. This paper may stimulate the extension of well-known results for deterministic fractional differential equations to the random framework using mean square calculus.
