This work is licensed under the Creative Commons Attribution 4.0 Public License.
Introduction
It is well known that backward stochastic differential equations (BSDEs in short) driven by a Poisson random measure are a natural extension of classical BSDEs. These equations, first discussed by Tang and Li [8], can be seen as a generalization of the work of Pardoux and Peng [6], which is a key tool for solving problems in financial mathematics and for studying nonlinear partial differential equations (PDEs in short) by stochastic methods. Since then, interest in probabilistic formulas for solutions of other types of PDEs has grown considerably. Several authors studying parabolic integral-partial differential equations (PIDEs) have been interested in BSDEs with Poisson process (BSDEP in short). Among them we mention the result of Barles et al. [1], who established a probabilistic interpretation of the solution of a PIDE: by means of a comparison theorem, they generalized the probabilistic representation of solutions of quasilinear PDEs proved in [6] to PIDEs. All these results, however, are obtained under either a Lipschitz or a monotonicity condition on the drift of the stochastic equation. Several authors have investigated ways of weakening this restrictive assumption. Among others, Mao [3] successfully treated these equations under the Osgood condition, which is formulated through a specific class of functions and allows the use of the well-known Bihari lemma to obtain uniqueness. The limitation is that all these results are established in the one-dimensional case.
The study of multidimensional BSDEs with weak conditions on the generator was discussed recently by Fan et al. [2]. Using a suitable approximating sequence, they proved an existence and uniqueness result when the generator satisfies the Osgood condition. In this work we are interested in extending this result to multidimensional BSDEs driven by a Poisson random measure (MBSDEPs in short) whose generator satisfies the Osgood condition. Inspired by the method introduced by Fan et al. [2], we prove existence and uniqueness of the solution of an MBSDEP. The paper is organized as follows. In Section 2, we recall some important results on MBSDEs driven by a Poisson random measure. In Section 3, we establish our main result.
MBSDEP with Poisson Jumps
Definitions and preliminary results
Let Ω be a non-empty set, ℱ a σ–algebra of subsets of Ω and P a probability measure defined on ℱ. The triplet (Ω, ℱ, P) defines a probability space, which is assumed to be complete. We assume given two mutually independent processes:
a d–dimensional Brownian motion (Bt)t≥0,
a Poisson random measure μ on E × R+ with compensator ν(dt, de) = λ(de)dt,
where the space E = R – {0} is equipped with its Borel field 𝓔, such that $\{\tilde{\mu}([0, t] \times A) = (\mu - \nu)([0, t] \times A)\}_{t \ge 0}$ is a martingale for any A ∈ 𝓔 satisfying λ(A) < ∞. Here λ is a σ–finite measure on 𝓔.
We consider the filtration (ℱt)t≥0 given by $\mathscr{F}_{t} = \mathscr{F}^B_{t} \vee \mathscr{F}^\mu_{t}$, where for any process {ηt}t≥0, $\mathscr{F}^\eta_{s,t} = \sigma\{\eta_r - \eta_s,\ s \le r \le t\} \vee \mathscr{N}$ and $\mathscr{F}^\eta_{t} = \mathscr{F}^\eta_{0,t}$. Here 𝓝 denotes the class of P–null sets of ℱ.
For Q ∈ N∗, | . | stands for the Euclidean norm in RQ.
Given a non-random horizon time 0 < T < +∞, we consider the following sets (where E denotes the mathematical expectation with respect to the probability measure P):
Finally, let S be the set of all non-decreasing and concave functions φ(⋅) : R+ → R+ satisfying φ(0) = 0, φ(s) > 0 for s > 0 and $\int_{0^+} \frac{du}{\varphi(u)} = +\infty$.
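For concreteness, the prototypical non-Lipschitz member of S can be built from u ↦ u ln(1/u); the following worked example is our illustration, not taken from the paper:

```latex
% A standard example of a function in S (illustration only).
\varphi(u) =
\begin{cases}
  u\ln(1/u), & 0 < u \le e^{-1},\\[2pt]
  e^{-1},    & u > e^{-1},
\end{cases}
\qquad \varphi(0) = 0 .
% \varphi is non-decreasing and concave, and the Osgood integral diverges:
% with the substitution v = \ln(1/u),
\int_{\varepsilon}^{e^{-1}} \frac{du}{u\ln(1/u)}
  = \ln\ln(1/\varepsilon)
  \;\xrightarrow[\varepsilon \to 0^{+}]{}\; +\infty .
```

Note that every linear function φ(u) = Ku, K > 0, also belongs to S, so the Osgood condition strictly weakens the Lipschitz one.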
Given a jointly measurable function f : Ω × [0, T] × 𝒜 → Rk and ξ ∈ L2(ℱT, Rk), the set of all Rk–valued, square-integrable and ℱT–measurable random vectors, we are interested in the MBSDEP with parameters (ξ, f, T):
Let us first make precise the notion of a solution to eq. (2.1).
Definition 2.1
A triplet of processes (Yt, Zt, Ut)0≤t≤T is called a solution to eq. (2.1) if (Yt, Zt, Ut) ∈ ℬ2(Rk) and it satisfies eq. (2.1).
Now, let us introduce Proposition 2.2, which will play an important role in the proof of Theorem 3.4. We consider the following assumption on the generator f:
where γ, α : [0, T] → R+ satisfy $0 \lt \int_{0}^{T} [\alpha^{2}(s) + \gamma(s)]\, ds \lt \infty$, φ ∈ ℳ2(R) is non-negative and ψ is a non-decreasing concave function from R+ to itself with ψ(0) = 0.
Proposition 2.2
Assume that f satisfies (A) and let (Yt, Zt, Ut)0≤t≤T be a solution to the MBSDEP (2.1). There exists a constant c > 0 depending only on α such that, for any 0 ≤ t ≤ T,
$$
X_t^T = {\bf E}|\xi|^2 + \int_t^T \big(1+ 2\alpha^{2}(s)\big)\, {\bf E}\Big(\sup_{s\le r \le T}|Y_r|^2\Big)\, ds + {\bf E}\left[\int_t^T \Big(2\gamma(s)\, \psi(|Y_s|^2) + \varphi_s^2\Big)\, ds\right].
$$
Applying the Burkholder–Davis–Gundy inequality, we derive that the process $\big\{M_t=\int_0^t\langle Y_s, Z_s\, dB_s\rangle\big\}_{0\leq t\leq T}$ is in fact a uniformly integrable martingale, and there exists δ > 0 such that for 0 ≤ t ≤ T we have
where g(t) stands for the left hand side of (2.8).
Applying Fubini’s theorem and Jensen’s inequality, we deduce by Gronwall’s lemma (see Lemma 3.1 below) that
$$
g(t) \le \left[{\bf E}|\xi|^2 + \int_t^T \gamma(s)\, \psi\big({\bf E}|Y_s|^2\big)\, ds + {\bf E}\int_t^T \varphi_s^2\, ds\right] \times c\, \exp\left(c\int_t^T \big(1+ 2\alpha^{2}(s)\big)\, ds\right).
$$
Putting $C_{2.2}(\alpha) = c\exp\left(c\int_0^T (1+ 2\alpha^{2}(s))\, ds\right)$, the result follows.□
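For the reader’s convenience, the backward integral form of Gronwall’s lemma invoked above (our paraphrase of the standard statement) reads:

```latex
% Backward Gronwall inequality (standard statement, our wording).
% If g \ge 0 is bounded, a \ge 0 and b \in L^{1}([0,T];\mathbb{R}_{+}), then
g(t) \le a + \int_{t}^{T} b(s)\, g(s)\, ds, \quad 0 \le t \le T
\;\Longrightarrow\;
g(t) \le a\, \exp\!\left( \int_{t}^{T} b(s)\, ds \right), \quad 0 \le t \le T .
```

Taking b(s) = c(1 + 2α²(s)) and a equal to c times the bracketed term produces the displayed estimate.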
We are now in a position to establish our main result.
Existence and uniqueness of solution
Let us introduce the following assumptions on the generator f. We say that f satisfies assumptions (H1) if the following hold (where, to ease the reading, we set f(s, 0) = f(s, 0, 0, 0) for 0 ≤ s ≤ T):
(H1.1): f satisfies the weak Lipschitz condition in y, i.e., there exists ρ ∈ S such that dP × dt-a.e., for all (y, y′) ∈ (Rk)2, z ∈ Rk×d, u ∈ Rk,
(H1.2): f is Lipschitz continuous in (z, u) uniformly with respect to (ω, t, y), i.e., there exists a function β : [0, T] → R+ such that dP × dt-a.e., for all y ∈ Rk, (z, z′) ∈ (Rk×d)2 and (u, u′) ∈ (Rk)2,
$$
|f^{n}(\omega, t, y, z, u) - f^{n}(\omega, t, y, z', u')| \le k\, \beta(t)\, \big[|z-z'| + |u-u'|\big].
$$
(H1.3): The integrability condition holds: $\displaystyle {\bf E}\left[ \left(\int_{0}^{T} |f^{n}(t, 0)|\, dt \right)^{2} \right] \lt +\infty$.
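To see that (H1.1) is strictly weaker than a Lipschitz condition in y, consider the following one-dimensional generator; it is a sketch of ours, not taken from the paper:

```latex
% Hypothetical generator satisfying the weak Lipschitz condition (k = 1).
f(t, y, z, u) = h(|y|) + \beta(t)\,\big(|z| + |u|\big),
\qquad
h(x) =
\begin{cases}
  x\ln(1/x), & 0 < x \le e^{-1},\\[2pt]
  e^{-1},    & x > e^{-1},
\end{cases}
\quad h(0) = 0 .
% h is concave, non-decreasing and h(0)=0, hence subadditive, so
|f(t, y, z, u) - f(t, y', z, u)| \le h(|y - y'|),
\qquad \rho := h \in S
% (the Osgood integral \int_{0^+} dx / (x \ln(1/x)) diverges), whereas
% h'(x) = \ln(1/x) - 1 \to +\infty as x \to 0^{+},
% so no Lipschitz constant in y can exist.
```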
The following Theorem 3.4 is the main result in this section.
Theorem 3.4
Given f satisfying assumptions (H1) and ξ ∈ L2(ℱT, Rk), the MBSDEP (2.1) has a unique solution.
Proof
Uniqueness.
Let $(Y_t^i, Z_t^i, U_t^i)_{0\leq t\leq T}$, i = 1, 2, be two solutions of eq. (2.1) and define, for δ ∈ {Y, Z, U}, δ̂ = δ1 – δ2. Then the triple (Ŷt, Ẑt, Ût)0≤t≤T is a solution to the following MBSDEP with parameters (0, T, f̂):
By Remark 2 in [2], the function $H(u)=\sqrt{u}\, \rho(\sqrt{u})$ belongs to S. Then the generator f̂ of eq. (3.1) satisfies assumption (A) with
Since fn satisfies the Lipschitz condition, it follows from Proposition 2.4 in [9] (putting g ≡ 0) that the sequence Θn = (Yn, Zn, Un) is well defined. In addition, for any n ≥ 1 and m ≥ 1, define, for δ ∈ {Y, Z, U}, δ̂n,m = δn – δm. Then the triplet (Ŷn,m, Ẑn,m, Ûn,m) solves the following MBSDEP:
By Proposition 2.2, there exists a constant c > 0 depending on k, ρ, n, m and β such that for 0 ≤ t ≤ T,
$$
\begin{split}
{\bf E}\Big[\sup_{t\le s \le T}|\widehat{Y}_s^{n, m}|^2\Big] &+ {\bf E}\Big[\int_t^T|\widehat{Z}_s^{n, m}|^2\, ds\Big] + {\bf E}\Big[\int_t^T\int_E|\widehat{U}_s^{n, m}(e)|^2\, \lambda(de)\, ds\Big] \\
&\le c \int_t^T \gamma(s)\, H\Big({\bf E}\Big[\sup_{s\le r \le T}|\widehat{Y}_r^{n, m}|^2\Big]\Big)\, ds + c \int_{0}^{T} \gamma^{2}(s)\, ds.
\end{split}
$$
Then, using the same arguments as in [2], we deduce that the sequence (Θn) = (Yn, Zn, Un) is a Cauchy sequence in the space ℬ2(Rk). Letting n → ∞ in eq. (3.2), the convergence being uniform in probability, we conclude that the limit triple (Y, Z, U) is a solution to eq. (2.1). This completes the proof.□
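The Picard-type approximation underlying this existence proof can be illustrated, very loosely, on a degenerate deterministic example in which the Brownian and jump parts vanish (so Z = U = 0) and the MBSDEP reduces to the backward equation Y_t = ξ + ∫_t^T f(Y_s) ds. Everything below (the time grid, the specific Osgood-type generator, the iteration count) is our own numerical sketch, not the paper’s construction:

```python
import math

# Purely illustrative: Picard iteration for the degenerate deterministic
# "BSDE"  Y_t = xi + \int_t^T f(Y_s) ds  (no Brownian or jump part, Z = U = 0).
# The generator f below is Osgood-continuous but not Lipschitz at 0.

T, xi, N = 1.0, 0.5, 1000          # horizon, terminal condition, grid size (our choices)
dt = T / N

def f(y):
    # f(y) = y ln(1/|y|) on 0 < |y| < 1, zero elsewhere; illustration only
    a = abs(y)
    if a == 0.0 or a >= 1.0:
        return 0.0
    return y * math.log(1.0 / a)

def picard_step(Y):
    # One Picard iteration: Y^{n+1}_{t_i} = xi + sum_{j > i} f(Y^n_{t_j}) * dt
    new = [0.0] * (N + 1)
    new[N] = xi
    for i in range(N - 1, -1, -1):  # backward Riemann sum
        new[i] = new[i + 1] + f(Y[i + 1]) * dt
    return new

Y = [xi] * (N + 1)                 # Y^0: constant extension of the terminal value
gap = float("inf")
for n in range(50):
    Y_next = picard_step(Y)
    gap = max(abs(a - b) for a, b in zip(Y, Y_next))
    Y = Y_next

# The exact backward-ODE value at t = 0 is exp(-ln(2)/e) ~ 0.775;
# gap measures the distance between successive Picard iterates.
print(Y[0], gap)
```

On this region the generator is effectively Lipschitz (the iterates stay in [0.5, 1)), so the successive iterates collapse onto the fixed point and `gap` becomes numerically zero, mirroring the Cauchy property of (Θn) established above.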