This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.
Introduction and motivation
It is well known that the following linear differential equation with an initial condition provides a suitable model for a large number of physical problems
Here $\mathbb{N}$ denotes the set of positive integers and $\lambda,c,b_0 \in \mathbb{R}$. In practice, these model parameters need to be fixed using measurements, so they contain measurement errors. It is therefore more realistic to treat them as random variables (RVs) instead of deterministic values. Apart from measurement errors, randomness can be attributed to the inherent complexity often encountered in physical phenomena. This justifies the consideration of variability in the forcing term and in the initial conditions ($c$, $b_0$, $b_1$). It must be pointed out that randomness may also be included in the diffusion coefficient; however, in this contribution, for simplicity, we will assume that $\lambda$ is a constant.
On the other hand, over the last few decades fractional differential equations have played a significant role in modelling phenomena with microscopic complex behaviour whose macroscopic dynamics (like viscoelasticity, fluid behaviour, etc.) cannot be properly described using classical derivatives [4]. Throughout this paper, randomness will be treated using mean square calculus, which involves the so-called mean square convergence. In the context of ordinary and fractional random differential equations, this random calculus has been successfully used to study a number of significant problems; some examples can be checked in [3, 5, 6, 7]. The application of mean square calculus to the analysis of random fractional differential equations is adequate because it allows one to take advantage of the properties of mean square convergence to compute approximations of the expectation and variance of the solution stochastic process. Our contribution is based upon a number of results of mean square calculus that have been established in the extant literature.
In this contribution we deal with the following generalization of the initial value problem (IVP) (1)
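In line with the initial conditions imposed in Sections 2 and 3 (a reconstruction consistent with the analysis below), the random fractional IVP (2) takes the form

$$
\begin{equation*}
\left(^CD_{0^{+}}^\alpha Y\right)(t)=\lambda\, Y(t)+c,\quad t>0,\ 0<\alpha\leq 2, \qquad Y^{(j)}(0)=b_j,\quad j=0,1,\ldots,-[-\alpha]-1,
\end{equation*}
$$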
where [·] denotes the integer part function. To conduct our analysis, we will assume that $\lambda\in \mathbb{R}$ and that $b_j$ and $c$ are independent second-order RVs (2-RVs), that is, that $b_j$ and $c$ have finite variance. The symbol $\left(^CD_{0^{+}}^\alpha Y\right)(t)$ denotes the random Caputo fractional derivative of order $\alpha$ of the stochastic process (SP) $Y(t)$. Although there exist several definitions of derivatives of non-integer order (fractional derivatives), for convenience, throughout our analysis we will consider the Caputo derivative. For the sake of completeness, we recall the definition of this random derivative.
Definition 1
Let f(t) be a second-order SP (2-SP), i.e. f(t) is a 2-RV for every fixed t, and assume that it is n-times mean square differentiable over a finite interval [a, b], then the random Caputo mean square derivative of order α > 0 of f(t) is given by
Here, $n = -[-\alpha]$, $f^{(n)}(t)$ denotes the $n$-th mean square derivative of the 2-SP $f(t)$ and $J^{n-\alpha}$ stands for the random fractional integral of order $n-\alpha$.
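Written out explicitly, mirroring the deterministic definitions in [4], these read

$$
\begin{equation*}
\left(^CD_{a^{+}}^\alpha f\right)(t)=\left(J^{n-\alpha} f^{(n)}\right)(t),\qquad
\left(J^{n-\alpha} f\right)(t)=\dfrac{1}{\Gamma(n-\alpha)}\int_a^t (t-u)^{n-\alpha-1}\, f(u)\,\mathrm{d}u,\quad t\in[a,b].
\end{equation*}
$$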
In [8, ch. 4], one can find the definition of mean square differentiability for 2-SPs. The random fractional integral $(J^{n-\alpha}f)(t)$ is a natural extension of its deterministic counterpart (see [4]) using the properties of the mean square Riemann integral of 2-SPs (see [8, ch. 4]).
We recall that the correlation function, $\Gamma_f(u_1, u_2)$, of a 2-SP, $f(t)$, is defined by
$$
\Gamma_f(u_1,u_2)=\mathbb{E}\left[f(u_1)f(u_2)\right],
$$
where $\mathbb{E}[\cdot]$ denotes the expectation operator.
It can be proved, by applying the Schwarz inequality, that $\Gamma_f(u_1, u_2)$ is always well defined. An important property relating the correlation function of a 2-SP, $f(t)$, and that of its $n$-th mean square derivative, $f^{(n)}(t)$, is the following:
$$
\Gamma_{f^{(n)}}(u_1,u_2)=\dfrac{\partial^{2n} \Gamma_f(u_1,u_2)}{\partial u_1^n \, \partial u_2^n}.
$$
Using (4) and the characterization of the existence of the mean square Riemann integral of a 2-SP $f(t)$ (see [8, Th. 4.5.1]), one can establish the following necessary and sufficient condition for the existence of the random Caputo fractional mean square derivative.
Proposition 1
Let {f(t) : t ∈ [a, b]} be a 2-SP n-times mean square differentiable with correlation function Γf(·, ·). Then, its (left-sided) random Caputo fractional mean square derivative, $\left(^CD_{a+}^\alpha f\right)(t)$ , α>0, exists if, and only if, the following deterministic double Riemann integral
$$
\begin{equation}
\int_a^t \int_a^t (t-u_1)^{n-\alpha-1}(t-u_2)^{n-\alpha-1} \dfrac{\partial^{2n} \Gamma_f(u_1,u_2)}
{\partial u_1^n \, \partial u_2^n} \, d u_1 d u_2
\end{equation}
$$
exists and is finite.
Once we have defined the Caputo derivative of a 2-SP, in Sections 2 and 3 we will construct the solution of the random IVP (2) in two steps: first, considering the case when $0 \lt \alpha \leq 1$ (Case I) and, secondly, when $1 \lt \alpha \leq 2$ (Case II). As we shall see throughout the analysis, the latter case builds upon the study carried out in the former.
Case I: Solution of the random linear fractional differential equation when $\mathbf{0 \lt \alpha \leq 1}$
In this section we deal with the construction of the solution SP to the IVP (2) in the case $0<\alpha\leq 1$ by means of a random generalized power series. This is done by combining the Frobenius method with a mean square chain rule for differentiating 2-SPs, a result recently established by some of the authors [1].
Theorem 2
Let $g$ be a deterministic continuous function on $[a_1, a_2]$ such that $g'(t)$ exists and is finite at some point $t \in [a_1, a_2]$. Let $\{X(v): v\in \mathcal{V}\}$ be a 2-SP such that
i) the interval $\mathcal{V}$ contains the range of $g$, $g([a_1,a_2])\subset \mathcal{V}$;
ii) $X(v)$ is mean square differentiable at the point $g(t)$;
iii) the mean square derivative of $X(v)$, $\frac{\mathrm{d} X(v)}{\mathrm{d} v}$, is mean square continuous on $\mathcal{V}$.
Then, the 2-SP $X(g(t))$ is mean square differentiable at $t$ and its mean square derivative is given by
$$
\dfrac{\mathrm{d}}{\mathrm{d}t}X(g(t))=\dfrac{\mathrm{d} X(v)}{\mathrm{d} v}\bigg|_{v=g(t)}\, g'(t).
$$
We seek the solution SP of the random IVP (2) in the form of a generalized power series, $Y(t)=\sum_{m \geq 0} X_m t^{\alpha m}$. Setting $X(v)=\sum_{m \geq 0} X_m v^m$, then $Y(t) = X(t^{\alpha})$, and in this manner we can take advantage of mean square calculus for standard random power series. The random Caputo mean square derivative (3) writes
$$
\left(^CD_{0^{+}}^\alpha Y\right)(t)=\left(J^{1-\alpha}Z\right)(t)=\dfrac{1}{\Gamma(1-\alpha)}\int_0^t (t-u)^{-\alpha}\, Z(u)\,\mathrm{d}u,
$$
where $Z(t) = (X(t^{\alpha}))'$ is the mean square derivative of the SP $X(v)$ composed with the deterministic function $t^{\alpha}$.
To apply the Frobenius method, we first need to obtain the Caputo derivative of the generalized power series given by (6). This is done by applying Theorem 2 under the following hypotheses:
C1. $X(v)$, defined in expression (7), is mean square differentiable at $v = t^{\alpha}$. Also,
$$
\begin{equation}
X'(t^{\alpha})
=
\sum_{m\geq 1}m X_m t^{\alpha (m-1)} .
\end{equation}
$$
C2. $\dfrac{\mathrm{d}X(v)}{\mathrm{d}v}$ is mean square continuous on the interval $v \in [0, T^{\alpha}]$, $T > 0$.
C3. As $0 < \alpha \leq 1$, the interval $\mathcal{V}=[0,T^{\alpha}]$ contains the range of $g(t) = t^{\alpha}$, that is, $g([a_1,a_2])=g([0,T])=[0,T^{\alpha}]=\mathcal{V}$.
By Theorem 2, $X(g(t))$ is mean square differentiable at $t$ and its mean square derivative has the expression
$$
\left(X(t^{\alpha})\right)'=\alpha\, t^{\alpha-1} X'(t^{\alpha})=\alpha \sum_{m\geq 1} m X_m t^{\alpha m -1}.
$$
Substituting the series (6) into the random IVP (2) and solving the resulting recurrence relationships for the coefficients, the candidate solution splits into two series whose coefficients are
$$
\begin{equation}
X_{m,1} =
\dfrac{\lambda^m b_0}{\Gamma(\alpha m +1)},\qquad
X_{m,2} =
\dfrac{\lambda^{m-1} c}{\Gamma(\alpha m +1)}.
\end{equation}
$$
This solution must verify conditions C1–C3 in order to legitimize the assumptions that allow us to define the random Caputo mean square derivative of the 2-SP $Y(t)$. In order to check that C1 is satisfied, we will need the following result:
Proposition 3
[2] Let $\mathcal{V} \subset \mathbb{R}$ be an interval, let $m_0 \geq 0$ be a non-negative integer and let $\{U_m(v): v\in \mathcal{V}, \, m\geq m_0 \}$ be a sequence of 2-SPs such that
$U_m(v)$ is mean square differentiable on $\mathcal{V}$.
The mean square derivative, $U_m'(v)$, is mean square continuous on $\mathcal{V}$.
$U(v)=\sum_{m\geq m_0} U_m(v)$ is mean square convergent on $\mathcal{V}$ .
$\sum_{m\geq m_0} U_m'(v)$ is mean square uniformly convergent on $\mathcal{V}$ .
Then, the 2-SP U(v) is mean square differentiable at every $v \in \mathcal{V}$ and
$$
U'(v)=\sum_{m\geq m_0} U_m'(v).
$$
Now, we check the hypotheses of Proposition 3 for the two series defined in (13)–(14). First, let us define
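This step can be written as follows (consistently with the conclusion drawn just below): define $X_{m,1}(v)=X_{m,1}\,v^m$, so that

$$
\begin{equation*}
\lim_{h\to 0}\left\|\dfrac{X_{m,1}(v+h)-X_{m,1}(v)}{h}-m X_{m,1} v^{m-1}\right\|_2
=\dfrac{|\lambda|^m \left\|b_0\right\|_2}{\Gamma(\alpha m+1)}\lim_{h\to 0}\left|\dfrac{(v+h)^m-v^m}{h}-m v^{m-1}\right|=0,\quad v=t^{\alpha},
\end{equation*}
$$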
where in the limit we have used that the deterministic function $v^m$ is differentiable at $v = t^{\alpha}$ and that $\left\| b_0\right\|_2\lt+\infty$ since $b_0$ is a 2-RV. As a consequence, $X_{m,1}(v)$ is mean square differentiable at $v = t^{\alpha}$, with mean square derivative $X_{m,1}'(t^\alpha)=\frac{m \lambda^m b_0}{\Gamma(\alpha m +1)} t^{\alpha (m-1)}$.
Secondly, using a similar reasoning, it can be checked that $X_{m,1}'(v) = m\lambda^m b_0 v^{m-1}/\Gamma(\alpha m + 1)$ is mean square continuous at $v = t^{\alpha}$.
Thirdly, let us show that the random power series $\sum_{m\geq 0} X_{m,1}(v)=\sum_{m\geq 0} X_{m,1}v^m$ is mean square convergent for each v: 0 < v ≤ Tα. To this end, we first majorize this series
Then, we check that the series with general term $\delta_m(t)$ is convergent by applying the D'Alembert test together with the generalized Stirling formula, $\Gamma(x+1) \approx x^x e^{-x} \sqrt{2 \pi x}$, $x \to + \infty$.
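Explicitly, the majorization and the ratio computation take the form

$$
\begin{equation*}
\left\|\sum_{m\geq 0} X_{m,1} v^m\right\|_2\leq \sum_{m\geq 0}\dfrac{|\lambda|^m \left\|b_0\right\|_2}{\Gamma(\alpha m+1)}\, t^{\alpha m}=:\sum_{m\geq 0}\delta_m(t),\qquad
\lim_{m\to+\infty}\dfrac{\delta_{m+1}(t)}{\delta_m(t)}=\lim_{m\to+\infty}|\lambda|\, t^{\alpha}\,\dfrac{\Gamma(\alpha m+1)}{\Gamma(\alpha m+\alpha+1)}=0,
\end{equation*}
$$

since, by the Stirling formula, $\Gamma(\alpha m+1)/\Gamma(\alpha m+\alpha+1)\approx (\alpha m)^{-\alpha}\to 0$ as $m\to+\infty$.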
Finally, the mean square uniform convergence of $\sum_{m\geq 1} X_{m,1}'(v)$ can be established using arguments similar to the ones shown earlier. In this way it is justified that the random power series $\sum_{m\geq 0}X_{m,1}v^m$ is mean square differentiable at $v = t^{\alpha}$. Analogously, it can be proved that the second power series $\sum_{m\geq 1}X_{m,2}v^m$ in (13) is mean square differentiable at $v = t^{\alpha}$. As a consequence, the random power series $X(v)$ is mean square differentiable at $v = t^{\alpha}$, and condition C1 is satisfied. Based upon similar arguments, it can be shown that $X(v)$ also satisfies conditions C2 and C3.
Case II: Solution of the random linear fractional differential equation when $\mathbf{1 \lt \alpha \leq 2}$
In this section we construct the solution of the random IVP (2) when $1 \lt \alpha \leq 2$ by applying arguments similar to the ones exhibited in the previous section. Thus, we seek the solution SP in the form
$$
Y(t)=Y_1(t)+Y_2(t)=\sum_{m \geq 0}X_m t^{\alpha m}+\sum_{m \geq 0}Y_m t^{\alpha m +1}.
$$
In order to obtain the expression of the Caputo derivative of order $\alpha$ of $Y_1(t)$, we define $\hat{Y}_1(v)= \sum_{m \geq 0}X_m v^{m}$, hence $Y_1(t)=\hat{Y}_1(t^\alpha)$. According to (3), the random mean square Caputo derivative is given by
$$
\left(^CD_{0^{+}}^\alpha Y_1\right)(t)=\left(J^{2-\alpha}Z\right)(t)=\dfrac{1}{\Gamma(2-\alpha)}\int_0^t (t-u)^{1-\alpha}\, Z(u)\,\mathrm{d}u,
$$
where $Z(t)=\left(\hat{Y}_1(t^{\alpha})\right)''$. To compute $Z(t)$ we apply Theorem 2 twice, assuming that hypotheses i)–iii) of Theorem 2 hold for both $\hat{Y}_1(t)$ and $\hat{Y}_1^{'}(t)$. In that case,
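Carrying out the two applications of the chain rule (in the mean square sense) yields

$$
\begin{equation*}
Z(t)=\left(\hat{Y}_1(t^{\alpha})\right)''=\left(\alpha t^{\alpha-1}\,\hat{Y}_1'(t^{\alpha})\right)'
=\alpha(\alpha-1)\, t^{\alpha-2}\,\hat{Y}_1'(t^{\alpha})+\alpha^2\, t^{2(\alpha-1)}\,\hat{Y}_1''(t^{\alpha})
=\alpha(\alpha-1)\sum_{m\geq 0}(m+1)X_{m+1}t^{\alpha(m+1)-2}+\alpha^2\sum_{m\geq 0}(m+2)(m+1)X_{m+2}t^{\alpha(m+2)-2}.
\end{equation*}
$$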
Observe that we have applied Property (4.126) of [8, p. 96] to compute the mean square derivative of the product of a deterministic function ($\alpha t^{\alpha-1}$) and the 2-SP $\hat{Y}_1^{'}(t^{\alpha})$. Moreover, assuming that the random power series $\sum_{m\geq 0} (m+1) X_{m+1} t^{\alpha (m+1) -2}$ and $\sum_{m\geq 0} (m+2)(m+1) X_{m+2} t^{\alpha (m+2)-2}$ are mean square convergent, we can obtain the random mean square Caputo derivative as follows.
As $1 \lt \alpha \leq 2$ and $Y_2'(t)= \sum_{m \geq 0} (\alpha m +1) Y_m t^{\alpha m}$, we can set $\hat{\alpha}=\alpha-1\in (0,1]$ and $\hat{Y}_m = (\alpha m +1) Y_m$, and compute the random mean square Caputo derivative of order $\hat{\alpha}$ of $\sum_{m \geq 0} \hat{Y}_m t^{\alpha m}$ using the same method as in the previous section. This leads to
Once we have obtained the mean square Caputo derivative of both series in (18), we need to compute their coefficients $X_m$ and $Y_m$ taking into account the initial conditions $Y(0) = X_0 = b_0$ and $Y'(0) = Y_0 = b_1$. After handling the corresponding recurrence relationships, one gets
$$
\begin{equation*}
X_m = \frac{\lambda ^m b_0+\lambda^{m-1} c}{\Gamma(\alpha m +1)}, \qquad Y_m= \frac{\lambda^m b_1}{\Gamma(\alpha m +2)}, \qquad m \geq 1.
\end{equation*}
$$
Therefore, a candidate solution SP to the random IVP (2) with $1 \lt \alpha \leq 2$ is given by
$$
Y(t)=b_0+\sum_{m \geq 1}X_{m,1}t^{\alpha m}+\sum_{m \geq 1}X_{m,2}t^{\alpha m}+b_1 t+\sum_{m \geq 1}Y_m t^{\alpha m +1},
$$
where
$$
\begin{equation*}
X_{m,1}= \frac{\lambda ^m b_0}{\Gamma(\alpha m +1)}, \qquad X_{m,2}= \frac{\lambda ^{m-1} c}{\Gamma(\alpha m +1)}, \qquad Y_m = \frac{\lambda^m b_1}{\Gamma(\alpha m +2)}.
\end{equation*}
$$
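As a consistency check of the Case II series with deterministic data, one can compare the truncated series against the closed-form solution of the classical problem $Y''=\lambda Y + c$, $Y(0)=b_0$, $Y'(0)=b_1$, which the series must reproduce when $\alpha = 2$. The following sketch uses hypothetical parameter values ($\lambda=3/4$, $b_0=1$, $b_1=1/2$, $c=1/5$) chosen only for illustration:

```python
import math

def Y_series(t, M, alpha, lam, b0, b1, c):
    """Truncated candidate solution Y(t) for 1 < alpha <= 2 (deterministic data)."""
    y = b0 + b1 * t  # m = 0 terms: X_0 = b0 and Y_0 = b1
    for m in range(1, M + 1):
        # X_m t^{alpha m} with X_m = (lam^m b0 + lam^{m-1} c) / Gamma(alpha m + 1)
        y += (lam**m * b0 + lam**(m - 1) * c) / math.gamma(alpha * m + 1) * t**(alpha * m)
        # Y_m t^{alpha m + 1} with Y_m = lam^m b1 / Gamma(alpha m + 2)
        y += lam**m * b1 / math.gamma(alpha * m + 2) * t**(alpha * m + 1)
    return y

def Y_closed(t, lam, b0, b1, c):
    """Closed-form solution of Y'' = lam*Y + c, Y(0)=b0, Y'(0)=b1, for lam > 0."""
    r = math.sqrt(lam)
    return (b0 * math.cosh(r * t)
            + (c / lam) * (math.cosh(r * t) - 1.0)
            + (b1 / r) * math.sinh(r * t))
```

For $\alpha=2$ the two evaluations agree to machine precision, which is a quick sanity check of the recurrence solution.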
Approximating the mean, the variance, the covariance and the cross-covariance of the solution stochastic process
An important feature when solving random differential equations is that the main goal is not only to compute the solution SP but also its statistical functions. This section is devoted to computing the mean, the variance, the covariance and the cross-covariance functions of the solution SP to the random IVP (2) when $0 \lt \alpha \leq 1$. The method to compute these statistical functions in the case $1 \lt \alpha \leq 2$ is analogous, and we therefore omit it in the subsequent development. To this end, we state the following property, which will play a key role later. At this point, it is important to stress that this crucial property holds for mean square convergence, but is not true for other kinds of stochastic convergence.
Proposition 4
([8, Theorem 4.2.1]). Let $\{R_M: M \geq 0\}$ be a sequence of 2-RVs such that $R_M \stackrel{\text{m.s.}}{\longrightarrow} R$ as $M\to+\infty$, i.e. $R_M$ is mean square convergent to $R$. Then, the mean and the variance of the approximations $R_M$ tend to the mean and the variance of the corresponding limit:
$$
\lim_{M\to +\infty}\mathbb{E}[R_M]=\mathbb{E}[R], \qquad \lim_{M\to +\infty}\mathbb{V}[R_M]=\mathbb{V}[R].
$$
In order to construct the approximations of the mean and the variance, it is convenient to introduce the truncation of order $M$ of the solution SP, that is,
$$
Y_M(t)=\sum_{m= 0}^M \dfrac{\lambda^m b_0}{\Gamma(\alpha m +1)} t^{\alpha m}+\sum_{m= 1}^M \dfrac{\lambda^{m-1} c}{\Gamma(\alpha m +1)} t^{\alpha m}.
$$
The mean $\mathbb{E}[Y_M(t)]$ follows directly by taking expectations in this expression. Since $\mathbb{V}[Y_M(t)]=\mathbb{E}[(Y_M(t))^2]-(\mathbb{E}[Y_M(t)])^2$, it is enough to compute $\mathbb{E}[(Y_M(t))^2]$. To do that, we take advantage of the statistical independence of the RVs $b_0$ and $c$. Hence,
$$
\begin{equation}\label{momento_orden_2_Y}
\begin{split}
\mathbb{E}\left[(Y_M(t))^2\right]
=&
\mathbb{E}\left[
\left(
\displaystyle \sum_{m= 0}^M \dfrac{\lambda^m b_0}{\Gamma(\alpha m +1)}
t^{\alpha m}
+
\displaystyle \sum_{m= 1}^M \dfrac{\lambda^{m-1} c}{\Gamma(\alpha m +1)}
t^{\alpha m}
\right)^2
\right]\\
= &
\displaystyle\mathbb{E}\left[(b_0)^2\right]
\sum_{m= 0}^M \dfrac{\lambda^{2m}
}{(\Gamma(\alpha m +1))^2}
t^{2\alpha m}\\
& +
\displaystyle 2\mathbb{E}\left[(b_0)^2\right]\sum_{m= 1}^M \sum_{n= 0}^{m-1} \dfrac{\lambda^{m+n}
}{\Gamma(\alpha m +1)\Gamma(\alpha n +1)}
t^{\alpha (m+n)}
\\
&+ \displaystyle
\mathbb{E}\left[c^2\right]
\sum_{m= 1}^M \dfrac{\lambda^{2(m-1)}
}{(\Gamma(\alpha m +1))^2}
t^{2\alpha m}\\
& +
\displaystyle 2 \mathbb{E}\left[c^2\right] \sum_{m= 2}^M \sum_{n= 1}^{m-1} \dfrac{\lambda^{m+n-2}
}{\Gamma(\alpha m +1)\Gamma(\alpha n +1)}
t^{\alpha (m+n)}
\\
& +\displaystyle2\mathbb{E}[b_0]\mathbb{E}[c]
\sum_{m= 0}^M
\sum_{n= 1}^M
\dfrac{\lambda^{m+n-1}}{\Gamma(\alpha m +1)\Gamma(\alpha n +1)}
t^{\alpha (m+n)}.
\end{split}
\end{equation}
$$
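The truncated moments above translate directly into a numerical scheme. The following Python sketch implements $\mathbb{E}[Y_M(t)]$, $\mathbb{E}[(Y_M(t))^2]$ and $\mathbb{V}[Y_M(t)]$ term by term as in the five-term expansion above; the moment values fed into it are hypothetical inputs, not the paper's example data:

```python
import math

def mean_YM(t, M, alpha, lam, Eb0, Ec):
    """E[Y_M(t)] from the truncated generalized power series (b0 and c independent)."""
    total = sum(lam**m * Eb0 * t**(alpha * m) / math.gamma(alpha * m + 1)
                for m in range(M + 1))
    total += sum(lam**(m - 1) * Ec * t**(alpha * m) / math.gamma(alpha * m + 1)
                 for m in range(1, M + 1))
    return total

def second_moment_YM(t, M, alpha, lam, Eb0, Eb0sq, Ec, Ecsq):
    """E[(Y_M(t))^2], following the five-term expansion above."""
    g = lambda m: math.gamma(alpha * m + 1)
    s = Eb0sq * sum(lam**(2 * m) * t**(2 * alpha * m) / g(m)**2 for m in range(M + 1))
    s += 2 * Eb0sq * sum(lam**(m + n) * t**(alpha * (m + n)) / (g(m) * g(n))
                         for m in range(1, M + 1) for n in range(m))
    s += Ecsq * sum(lam**(2 * (m - 1)) * t**(2 * alpha * m) / g(m)**2
                    for m in range(1, M + 1))
    s += 2 * Ecsq * sum(lam**(m + n - 2) * t**(alpha * (m + n)) / (g(m) * g(n))
                        for m in range(2, M + 1) for n in range(1, m))
    s += 2 * Eb0 * Ec * sum(lam**(m + n - 1) * t**(alpha * (m + n)) / (g(m) * g(n))
                            for m in range(M + 1) for n in range(1, M + 1))
    return s

def var_YM(t, M, alpha, lam, Eb0, Eb0sq, Ec, Ecsq):
    """V[Y_M(t)] = E[(Y_M(t))^2] - (E[Y_M(t)])^2."""
    return (second_moment_YM(t, M, alpha, lam, Eb0, Eb0sq, Ec, Ecsq)
            - mean_YM(t, M, alpha, lam, Eb0, Ec)**2)
```

A quick sanity check: for $\alpha=1$ with deterministic $b_0=1$ and $c=0$, the truncations recover the classical exponential solution $e^{\lambda t}$ with vanishing variance.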
In order to compute an approximation of the cross-covariance function of the solution SP, let us consider $N, M \geq 1$ and $t, s > 0$. Using the properties of the covariance, this approximation can be represented by the following expression
$$
\mathbb{C}_{Y_M,Y_N}(t,s)=\sum_{m=0}^{M}\sum_{n=0}^{N}\left(\mathbb{C}\text{ov}\left[X_{m,1},X_{n,1}\right]+\mathbb{C}\text{ov}\left[X_{m,1},X_{n,2}\right]+\mathbb{C}\text{ov}\left[X_{m,2},X_{n,1}\right]+\mathbb{C}\text{ov}\left[X_{m,2},X_{n,2}\right]\right)t^{\alpha m}s^{\alpha n},
$$
with the convention $X_{0,2}=0$, where
$$
\begin{array}{ccl}
\mathbb{C}\text{ov}
\left[
X_{m,1},X_{n,1}
\right]
& = &
\dfrac{\lambda^{m+n}
\mathbb{E}
\left[
(b_0)^{2}
\right]
-
\lambda^{m+n}
\left(
\mathbb{E}
\left[
b_0
\right]
\right)^2}
{\Gamma(\alpha m +1)\Gamma(\alpha n +1)},
\\
\mathbb{C}\text{ov}
\left[
X_{m,1},X_{n,2}
\right]
&=&
\dfrac{\left(
\lambda^{m+n-1}
-
\lambda^{m+n-1}
\right)
\mathbb{E}
\left[
b_0
\right]
\mathbb{E}
\left[
c
\right]}
{\Gamma(\alpha m +1)\Gamma(\alpha n +1)}=0,
\\
\mathbb{C}\text{ov}
\left[
X_{m,2},X_{n,1}
\right]
&=&
\dfrac{\left(
\lambda^{m+n-1}
-
\lambda^{m+n-1}
\right)
\mathbb{E}
\left[
b_0
\right]
\mathbb{E}
\left[
c
\right]}
{\Gamma(\alpha m +1)\Gamma(\alpha n +1)}=0,
\\
\mathbb{C}\text{ov}
\left[
X_{m,2},X_{n,2}
\right]
&=&
\dfrac{\lambda^{m+n-2}
\mathbb{E}
\left[
c^{2}
\right]
-
\lambda^{m+n-2}
\left(
\mathbb{E}
\left[
c
\right]
\right)^2}
{\Gamma(\alpha m +1)\Gamma(\alpha n +1)}.
\end{array}
$$
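The coefficient covariances above assemble into the truncated covariance function. The following sketch implements this assembly (the two cross terms vanish by independence, so only the $X_{m,1}$ and $X_{m,2}$ families contribute); the moment values used in the checks are hypothetical:

```python
import math

def cov_YMN(t, s, M, N, alpha, lam, Eb0, Eb0sq, Ec, Ecsq):
    """Truncated covariance C_{Y_M,Y_N}(t, s) for independent 2-RVs b0 and c."""
    g = lambda m: math.gamma(alpha * m + 1)
    var_b0 = Eb0sq - Eb0**2
    var_c = Ecsq - Ec**2
    total = 0.0
    for m in range(M + 1):
        for n in range(N + 1):
            cov = lam**(m + n) * var_b0 / (g(m) * g(n))          # Cov[X_{m,1}, X_{n,1}]
            if m >= 1 and n >= 1:
                cov += lam**(m + n - 2) * var_c / (g(m) * g(n))  # Cov[X_{m,2}, X_{n,2}]
            # Cov[X_{m,1}, X_{n,2}] = Cov[X_{m,2}, X_{n,1}] = 0 by independence
            total += cov * t**(alpha * m) * s**(alpha * n)
    return total
```

The function is symmetric in $(t, s)$ when $M = N$, vanishes when $b_0$ and $c$ are deterministic, and for $\alpha = 1$ reduces to the covariance of $b_0 e^{\lambda t}$.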
Observe that $\mathbb{C}_{Y_M,Y_N}(t,t)$ corresponds to the covariance of the approximations of order M and N of the solution SP at the time instant t while $\mathbb{C}_{Y_M,Y_M}(t,s)$ gives the covariance of the approximation of order M at the two time instants t and s of the solution SP.
Numerical examples
In this section we illustrate the theoretical results established by means of several numerical examples. Let us consider the random fractional IVP (2), where $0 \lt \alpha \leq 1$ , $\lambda=\frac{3}{4}$ , b0 and c are 2-RVs such that
Figure 1 shows the approximations of the mean and the standard deviation of the solution SP to the random IVP (2) with α = 0.7. In both plots we have carried out computations taking different orders of truncation, M = 6, 7, 8, 9, 10.
To illustrate the role of the parameter λ in the speed of convergence of the approximations, let us now consider λ greater than 1, for example $\lambda=\frac{5}{4}$. In that case, the approximations of the mean and standard deviation converge more slowly than in the previous case. In Figure 2, we show the approximations considering higher truncation orders than before, namely M = 10, 12, 14, 16, 18.
On the one hand, if we assume that the means of b0 and c are negative, the mean of the solution SP decreases as t increases, while the standard deviation of the solution SP increases. In Figure 3, the mean and standard deviation have been represented taking $\mathbb{E}[b_0]=\mathbb{E}[c]=-1$.
In Figures 1–3 we observe that the approximations of the mean and the standard deviation improve as M increases, although the error grows as we move away from the origin (t = 0), where the IVP is stated. These results are consistent with Proposition 4.
On the other hand, it is important to illustrate the behaviour of the solution SP when the fractional order, α, of the derivative changes. In Figure 4, we have fixed the order of truncation, M = 20, and we show the trajectories of the expectation and standard deviation of the solution SP for different orders of the fractional derivative. Note that the case α = 0.99 is very close to the classical derivative (α = 1).
Conclusions
In this contribution we have provided a probabilistic study of the randomized fractional linear differential equation. The study has been carried out using results from random mean square calculus previously established in the extant literature. We have constructed a mean square convergent generalized power series solution stochastic process by means of a random Frobenius method. Furthermore, we have constructed approximations for both the mean and the variance of the solution stochastic process. Our numerical experiments are in agreement with the theoretical results. This work may stimulate the extension of well-known results on deterministic fractional differential equations to the random framework using mean square calculus.