Open Access

Measure of noncompactness and application to stochastic differential equations

Advances in Difference Equations 2016, 2016:28

Received: 3 August 2015

Accepted: 10 January 2016

Published: 27 January 2016


In this paper, we study the existence and uniqueness of the solution of a stochastic differential equation by means of the properties of the associated condensing nonexpansive random operator. Moreover, taking into account the results of Diaz and Metcalf, we prove the convergence of Kirk’s process to this solution for small times.


Keywords: Wiener process, Itô integral, Banach space, fixed point, existence, uniqueness, measure of noncompactness, condensing operators, Kirk’s process



1 Introduction and notations

It has long been recognized that fixed point theory is a powerful tool for the resolution of nonlinear problems (differential equations, integro-differential equations, …). The roots of this theory go back to the famous works of Brouwer (1912) and Banach (1922); the latter gave an abstract formulation of the method of successive approximations, used systematically by Liouville (1837). Banach’s theorem was established for normed spaces and extended to metric spaces by Caccioppoli (1930). Since then the theory has become a burgeoning field to which many authors have contributed through thousands of papers. Its development has been closely linked to that of functional analysis in the 1950s. The Italian mathematician Darbo published a result ensuring the existence of fixed points for the so-called condensing operators, generalizing both the Schauder fixed point theorem and the Banach contraction principle. This discovery was the subject of several applications in both linear and nonlinear analysis (integral equations with singular kernels, differential equations on unbounded domains, neutral differential equations, differential operators with non-empty essential spectra, boundary value problems in Banach spaces, and others). A condensing (or densifying) mapping is a mapping for which the image of any set is, in a certain sense, more compact than the set itself; the degree of noncompactness of a set is quantified by functions called measures of noncompactness. Among the application areas of these tools is the theory of probabilistic operators, a branch of stochastic analysis which deals with random operators and their properties and is seen as an extension of operator theory (the deterministic case).
This axis of research emerged in the 1950s thanks to the works of the East European school of probability, whose main purpose was the resolution of stochastic differential equations and stochastic partial differential equations, which model the trajectories of random phenomena and were first studied and developed by Itô in 1946. A stochastic differential equation is an ordinary differential equation perturbed by a white noise (involving the Brownian motion). The history of this direction goes back to the work of the English botanist Brown, who described in 1827 this motion as that of a fine organic particle in suspension in a gas or a fluid. In the late 19th century, scientists (Bachelier, Smoluchowski) addressed the study of this type of motion. Later, in 1905, Einstein published a paper in which he showed that the probability density of the Brownian motion satisfies the heat equation. The first rigorous mathematical treatment is due to Wiener, who proved the existence of the Brownian motion in the 1920s. For more details on these equations, we can quote for example [1–5].

In this work, we study the existence and uniqueness of the solution of the following Itô stochastic differential equation
$$ X(t) = X_{0} + \int_{0}^{t}a_{1}\bigl(\theta, X\bigl(f( \theta )\bigr)\bigr)\,d\theta+ \int_{0}^{t}a_{2}\bigl(\theta, X\bigl(f( \theta)\bigr)\bigr)\,dw (\theta), \qquad (1.1) $$
where \(a_{1}\) and \(a_{2}\) are Borel measurable functions and \(0 \leq f(\theta) \leq\theta\).

Recall that this equation models, for example, the motion of a particle subjected to infinitely many shocks up to time t. Here \(a_{1}\) is a transfer coefficient while \({a_{2}}\) is a diffusion coefficient. In the case where \(a_{1}\) and \(a_{2}\) satisfy the Lipschitz condition with respect to the second variable, the result for \(f(\theta) \equiv\theta\) was established by Gikhman and Skorohod [6], who showed that the mapping associated to (1.1) is a contraction and obtained the solution by the method of successive approximations.

Our goal here is to investigate problem (1.1) under more general conditions on the functions \(a_{1}\) and \(a_{2}\); we show that the associated mapping C is a nonexpansive and condensing mapping having a unique fixed point, which is the solution of (1.1). On the other hand, we prove that if this solution satisfies the metric property of Diaz and Metcalf, the convergence of Kirk’s process to this solution is ensured.
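Before turning to the formal framework, the structure of (1.1) can be sketched numerically. The following Euler-Maruyama discretization is an illustration only and is not part of the paper’s argument; the coefficients \(a_{1}(t,x) = -x\), \(a_{2}(t,x) = 0.5\), the delay \(f(\theta) = \theta/2\), and the step sizes are hypothetical choices satisfying \(0 \leq f(\theta) \leq\theta\) and the polynomial growth bounds imposed later.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(a1, a2, f, x0, T=1.0, n=1000):
    """Euler-Maruyama sketch for
    X(t) = X0 + int_0^t a1(s, X(f(s))) ds + int_0^t a2(s, X(f(s))) dw(s),
    with a delayed argument 0 <= f(s) <= s."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        # grid index of the delayed time f(t_k); f(t_k) <= t_k guarantees
        # that only already-computed values of X are used
        j = min(int(f(t[k]) / dt), k)
        dw = rng.normal(0.0, np.sqrt(dt))  # Wiener increment over [t_k, t_{k+1}]
        x[k + 1] = x[k] + a1(t[k], x[j]) * dt + a2(t[k], x[j]) * dw
    return t, x

# hypothetical coefficients (for illustration only) satisfying the
# polynomial growth bounds |a_i(t, u)|^2 <= M(|u|^2 + 1)
t, x = euler_maruyama(a1=lambda s, u: -u, a2=lambda s, u: 0.5,
                      f=lambda s: s / 2, x0=1.0)
```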

Definition 1.1

A probability space \((\Omega, \mathcal{F}, \mathbb{P})\) is a triplet in which Ω is a non-empty set, \(\mathcal{F}\) is a σ-algebra of subsets of Ω, and \(\mathbb{P}\) is a probability measure defined on \(\mathcal{F}\) (so that \(\mathbb {P}(\Omega) = 1\)).

Notice that some results concerning fixed point theorems involving probabilistic metric spaces can be found, for example, in [7–9].

A real random variable X is an \(\mathcal{F}\)-measurable function defined on Ω with values in \({\mathbb{R}}\). A family of random variables \(X_{t}(\omega)\) (\(t \geq0\)) (denoted also by \(X(t,\omega )\) or simply \(X_{t}\)) is called a stochastic process. For fixed \(\omega\in \Omega\), the function \(t \longrightarrow X(t,\omega)\) is called a path of the stochastic process \(X(t,\omega)\).

The mean value or expectation \(\mathbb{E}(X)\) of the random variable X is defined as the integral
$$\mathbb{E}(X) = \int_{\Omega}X(\omega)\,d\mathbb{P}(\omega), $$
if it exists.
Two random variables X and Y are said to be independent if, for any \(a, b \in{\mathbb{R}}\),
$$\mathbb{P} \bigl\{ \omega\in\Omega\mid X(\omega) < a \mbox{ and } Y(\omega) < b \bigr\} = \mathbb{P} \bigl\{ \omega\in\Omega\mid X(\omega) < a\bigr\} \times\mathbb{P} \bigl\{ \omega\in\Omega\mid Y(\omega) < b \bigr\} . $$

Definition 1.2

A Wiener process (also called Brownian motion) \(\{w_{t}\}_{t \geq0}\) is a stochastic process with the following properties:
  1. (a)

    \(w_{0} = 0\) almost surely;

  2. (b)

    for \(0< t_{1} <\cdots< t_{n}\) the random variables \(w_{t_{2}}- w_{t_{1}},w_{t_{3}}- w_{t_{2}},\ldots,w_{t_{n}}- w_{t_{n-1}} \) are independent;

  3. (c)

    the random variables \(w_{t+s}-w_{t}\) (\(s > 0\)) have a normal distribution with zero expectation and variance s.

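A minimal simulation, with arbitrarily chosen grid parameters, illustrates this definition: paths are built from independent Gaussian increments (property (b)), and the empirical distribution of \(w_{1}\) is checked against property (c). This sketch is illustrative and not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Build 20000 sample paths on [0, 1] from independent N(0, dt) increments,
# mirroring properties (a)-(c): w_0 = 0 and independent Gaussian increments.
n_paths, n_steps = 20000, 50
dt = 1.0 / n_steps
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
w = np.cumsum(increments, axis=1)

# empirical check of (c) at t = 1: w_1 should be N(0, 1)
print(w[:, -1].mean(), w[:, -1].var())  # both near their theoretical values
```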

Remark 1.1

If w is a Gaussian random variable with zero expectation and standard deviation \(\sigma= \sqrt{\mathbb{E}(w^{2})}\), then \({\mathbb{E}}|w| = \sqrt{ \frac{2}{\pi}}\sigma\). Thus, we obtain
$$\mathbb{E}|w_{t_{j + 1}} - w_{t_{j}}| = \sqrt { \frac{2}{\pi}} \sqrt{t_{j + 1} - {t_{j}}} $$
and hence the series \(\sum_{j} \mathbb {E}|w_{t^{n}_{j + 1}} - w_{t^{n}_{j}}|\) diverges as the mesh \(t_{j + 1}^{n} - t_{j}^{n}\longrightarrow0\), where \(0 < t_{1}^{n}< t_{2}^{n}<\cdots<t_{n}^{n} = T\).
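The divergence is visible in closed form: on a uniform partition of \([0, T]\) into n pieces, each term equals \(\sqrt{2/\pi}\sqrt{T/n}\), so the sum is \(\sqrt{2Tn/\pi}\longrightarrow+\infty\). A small illustrative script makes this explicit:

```python
import math

# On a uniform partition of [0, T] into n intervals, each expected absolute
# increment is sqrt(2/pi) * sqrt(T/n), so the total expected variation is
# n * sqrt(2/pi) * sqrt(T/n) = sqrt(2*T*n/pi), which grows without bound.
def expected_variation(T, n):
    return n * math.sqrt(2 / math.pi) * math.sqrt(T / n)

for n in (10, 1000, 100000):
    print(n, expected_variation(1.0, n))  # increases like sqrt(n)
```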
For a pair \((w(t), X(t))\) of a Wiener process \(w(t)\) and random process \(X(t)\), we define the Itô integral as follows:
$$I(X) = \int_{0}^{T}X(t)\,dw(t). $$
The Itô integral is not a classical (pathwise) integral; this is due to the nonsmoothness of the paths \(w(t)\) and the divergence of the series \(\sum_{j} \mathbb{E}|w_{t^{n}_{j + 1}} - w_{t^{n}_{j}}|\), which rules out a pathwise Riemann-Stieltjes construction.
With a Wiener process, we can associate a filtration \(\mathcal{F}_{t}\) (\(\mathcal{F}_{t} \subset\mathcal{F}\)), \(0 \leq t \leq T\), which is the family of σ-algebras generated by the Brownian paths up to time t, in other words,
$$\mathcal{F}_{t} = \sigma\bigl\{ w(s) : 0 \leq s \leq t \bigr\} . $$
It is easy to show that the family \(\mathcal{F}_{t}\) is nondecreasing (with respect to the inclusion).

Definition 1.3

A random variable Y is said to be \(\mathcal{F}_{t}\)-measurable if the value of Y is determined by the information available up to time t.

Definition 1.4

A sequence of real random variables \(X_{n}\) on Ω converges to the random variable X in probability, written
$$X_{n} \overset{p}{ \longrightarrow} X, $$
if for every \(\epsilon> 0\)
$$\lim_{n \longrightarrow+ \infty} \mathbb {P}\bigl\{ \omega\in\Omega\mid \bigl|X_{n}(\omega) - X(\omega)\bigr| < \epsilon\bigr\} = 1. $$
Let \(\mathcal{M}_{2} ([0, T])\) denote the set of all functions \(X(t,\omega)\) defined and jointly measurable in \(t \in[0, T]\) and \(\omega\in\Omega\) which are also measurable with respect to \(\mathcal{F}_{t}\) for all \(t \in[0, T]\) and such that
$$\mathbb{P}\biggl\{ \omega\in\Omega\Bigm| \int _{0}^{T}\bigl|X(t, \omega)\bigr|^{2}\,dt < \infty \biggr\} = 1. $$
In the sequel, without loss of generality, \(X(t, \cdot)\) will be denoted by \(X(t)\).
In the case where \(X(t) = X(t_{k})\) for \(t \in[t_{k}, t_{k + 1})\) (\(0 = t_{0} < t_{1} < t_{2} <\cdots< t_{n} = T\)), i.e. when X is a step process, \(\int_{0}^{T} X(t)\,dw(t)\) is given by the formula
$$\int_{0}^{T}X(t)\,dw(t) = \sum _{k = 0}^{ n - 1}X (t_{k}) (w_{t_{k + 1}} - w_{t_{k}}). $$
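The left-endpoint evaluation in this sum is what distinguishes the Itô integral. As an illustrative check that is not part of the paper: taking \(X = w\) on a fine grid, the left-point sums approximate \(\int_{0}^{1} w\,dw = (w_{1}^{2} - 1)/2\), a classical consequence of Itô's formula.

```python
import numpy as np

rng = np.random.default_rng(7)

# Ito sum of a step process: integrand evaluated at the LEFT endpoint,
# sum of X(t_k) * (w_{t_{k+1}} - w_{t_k}).
def ito_sum(x_left, w):
    return float(np.sum(x_left * np.diff(w)))

# illustrative check: with X = w on a fine grid of [0, 1], the left-point
# sums approximate int_0^1 w dw = (w_1^2 - 1) / 2
n = 100_000
dw = rng.normal(0.0, np.sqrt(1.0 / n), n)
w = np.concatenate([[0.0], np.cumsum(dw)])
approx = ito_sum(w[:-1], w)
exact = (w[-1] ** 2 - 1.0) / 2.0
print(abs(approx - exact))  # small for fine grids
```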
In the general case where \(X(t)\) is an arbitrary element of \(\mathcal{M}_{2} ([0, T])\), then there exists a sequence of step functions \(X_{n}(t)\) such that
$$\lim_{n \longrightarrow+ \infty} \int_{0}^{T}\bigl|X_{n}(t) - X(t)\bigr|^{2}\,dt = 0\quad (\mbox{in probability}) $$
and the sequence \(\int_{0}^{T}X_{n}(t)\,dw(t)\) converges in probability to some limit ξ, which is called the Itô stochastic integral of \(X(t)\) denoted by \(\int _{0}^{T}X(t)\,dw(t)\).
Some properties of the Itô stochastic integral are the following (see [6]):
  1. (i)

    Itô integral is linear;

  2. (ii)
    if \(\int_{0}^{T} \mathbb {E}(|f(t)|^{2})\,dt < \infty\), then
    $$ \mathbb{E}\biggl( \int_{0}^{T}f(t)\,dw(t)\biggr) = 0, \qquad (1.2) $$
    $$ \mathbb{E}\biggl( \sup_{0 \leq s \leq\mu }\biggl| \int_{0}^{s}f(t)\,dw(t)\biggr|^{2}\biggr) \leq4 \int _{0}^{\mu}\mathbb{E}\bigl(\bigl|f(t)\bigr|^{2} \bigr)\,dt \quad(0 \leq\mu\leq T). \qquad (1.3) $$
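Both properties in (ii) can be probed by Monte Carlo for the simple integrand \(f(t) = t\) on \([0, 1]\). The sketch below is illustrative only; the sample sizes are arbitrary choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo check of the two properties for f(t) = t on [0, 1]:
# E(int_0^1 f dw) = 0 and E(sup_s |int_0^s f dw|^2) <= 4 int_0^1 E|f(t)|^2 dt.
n_paths, n = 20000, 200
dt = 1.0 / n
t = np.linspace(0.0, 1.0, n + 1)[:-1]      # left endpoints t_k
dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
integral = np.cumsum(t * dw, axis=1)       # int_0^s t dw for every grid s

mean = integral[:, -1].mean()              # should be near 0
sup2 = np.max(np.abs(integral), axis=1) ** 2
bound = 4.0 * np.sum(t ** 2) * dt          # ~ 4 * int_0^1 t^2 dt = 4/3
print(mean, sup2.mean(), bound)
```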
In the sequel, we assume that in (1.1) the initial datum, the random variable \(X_{0}\), is \(\mathcal{F}_{0}\)-measurable.

Definition 1.5

The process \(X(t)\) is called a strong solution of (1.1) if the following three conditions are satisfied:
  1. (i)

    \(X(t)\) is \(\mathcal{F}_{t}\)-measurable;

  2. (ii)

    the integrals in (1.1) exist;

  3. (iii)

    \(\mathbb{P}(\Xi) = 1\) where Ξ is the set of \(\omega\in\Omega\) such that (1.1) holds for all \(t \in [0, T]\).


2 Main results

We denote by \(X_{T}\) the vector space of random functions \(\xi(t, \omega)\) measurable with respect to the σ-algebra \(\mathcal{F}_{t}\) for any \(t \in[0, T]\) and such that \(\mathbb{P}(\{\omega \in\Omega \mid t\longrightarrow\xi(t, \omega) \mbox{ is continuous}\}) = 1\). We put \(\|\xi\|_{X_{T}} = \sqrt{\mathbb{E}( \sup_{0 \leq s \leq T}|\xi(s, \omega)|^{2})}\). It is easy to show that \(\|\cdot\|_{X_{T}}\) defines a norm on \(X_{T}\).

Theorem 2.1

\((X_{T}, \|\cdot\|_{X_{T}})\) is a Banach space.


Proof
It suffices to prove that \(X_{T}\) is complete with respect to the norm \(\|\cdot\|_{X_{T}}\). Let \(\xi_{n}\) be a Cauchy sequence in \(X_{T}\). We choose an increasing sequence of integers \(\{m_{k}\} \) (\(k \geq1\)) such that
$$\mathbb{E}\Bigl( \sup_{0 \leq s \leq t} \bigl|\xi_{n}(s) - \xi_{n'}(s)\bigr|^{2}\Bigr)< 2^{-2k}\quad \mbox{for }t \in[0, T]\mbox{ and }n, n' \geq m_{k}. $$
Dividing by \(2^{-k} = (\frac{1}{2^{\frac{k}{2}}})^{2}\), we obtain
$$\frac{ \mathbb{E}( \sup_{0 \leq s \leq t} \bigl|\xi _{n}(s) - \xi_{n'}(s)\bigr|^{2})}{(\frac{1}{2^{\frac{k}{2}}})^{2}}< 2^{-k}. $$
Using Chebyshev’s inequality, it follows that
$$\mathbb{P} \biggl(\biggl\{ \omega\in\Omega\Bigm| \sup_{0 \leq s \leq t} \bigl| \xi_{n}(s) - \xi_{n'}(s)\bigr|^{2} > \frac{1}{2^{\frac{k}{2}}} \biggr\} \biggr) < 2^{-k}. $$
Since the series \(\sum_{k = 1}^{+ \infty} 2^{-k}\) converges, the Borel-Cantelli lemma gives
$$\mathbb{P} \biggl(\overline{\lim}\biggl\{ \omega\in\Omega\Bigm| \sup_{0 \leq s \leq t} \bigl|\xi_{n}(s) - \xi_{n'}(s)\bigr|^{2} > \frac{1}{2^{\frac {k}{2}}} \biggr\} \biggr) = 0. $$
Thus, for almost every \(\omega\in\Omega\), there exists \(r_{0} \geq1\) such that
$$\sup_{0 \leq s \leq t} \bigl(\bigl|\xi_{m_{r}}(s) - \xi _{m_{r'}}(s)\bigr|^{2}\bigr) \leq\frac{1}{2^{\frac{k}{2}}}\quad \mbox{if }r, r' \geq r_{0}. $$
It follows that the partial sums
$$\xi_{m_{1}}(t) + \sum_{j = 1}^{k - 1} \bigl(\xi_{m_{j + 1}}(t) - \xi_{m_{j}}(t)\bigr) = \xi_{m_{k}}(t) $$
converge uniformly on \([0, T]\) for almost every ω; let \(\xi(t)\) be its limit (in this topology). This gives
$$\sup_{0 \leq t \leq T} \bigl(\bigl|\xi_{m_{k}}(t) - \xi (t)\bigr|^{2}\bigr)\longrightarrow0 \quad(k\longrightarrow+ \infty) $$
and consequently
$$\sqrt{\mathbb{E} {\Bigl( \sup_{0 \leq t \leq T} \bigl(\bigl|\xi_{m_{k}}(t) - \xi(t)\bigr|^{2}\bigr)\Bigr)}}\longrightarrow0 \quad(k\longrightarrow+ \infty), $$
which shows that \(\xi_{m_{k}}\) converges to \(\xi(t)\) in \(X_{T}\).

Since \(\xi_{n}\) is a Cauchy sequence in \(X_{T}\) containing a subsequence \(\xi_{m_{k}}\) which converges to ξ, the whole sequence \(\xi_{n}\) converges to ξ in \(X_{T}\), and by this we achieve the proof. □

Hereafter, the principal goal is to transform equation (1.1) into a fixed point problem. To this aim, we associate with it the following mapping C given by
$$ (CX) (s) = X_{0} + \int_{0}^{s}a_{1}\bigl(\theta, X\bigl(f( \theta )\bigr)\bigr)\,d\theta+ \int_{0}^{s}a_{2}\bigl(\theta, X\bigl(f( \theta)\bigr)\bigr)\,dw (\theta). \qquad (2.1) $$
It is easy to observe that C is well defined on \(X_{T}\); in order that it take its values in \(X_{T}\), we need the following additional conditions on the functions \(a_{1}\) and \(a_{2}\), which is the purpose of the following proposition.

Proposition 2.1

Assume that the following assumptions (called polynomial growth assumptions) are satisfied:
$$ \bigl|a_{1}(s_{1}, u_{1})\bigr|^{2} \leq M \bigl(|u_{1}|^{2} + 1\bigr)\quad \textit{and}\quad \bigl|a_{2}(s_{2}, u_{2})\bigr|^{2} \leq M \bigl(|u_{2}|^{2} + 1\bigr)\quad (M > 0). \qquad (2.2) $$
Then C is a selfmapping on \(X_{\eta}\) for all \(\eta\in[0, T]\).


Proof
Without loss of generality, we assume that \(X_{0} = 0\). For \(x_{1}, x_{2} \in\mathbb{R}\), the inequality \((x_{1} + x_{2})^{2}\leq2 x_{1}^{2} + 2 x_{2}^{2}\) yields
$$ \bigl|(CX) (s)\bigr|^{2}\leq2 \biggl| \int_{0}^{s} a_{1}\bigl(\theta, X\bigl(f( \theta)\bigr)\bigr)\,d\theta\biggr|^{2} + 2 \biggl| \int_{0}^{s} a_{2}\bigl(\theta, X\bigl(f( \theta)\bigr)\bigr)\,dw (\theta) \biggr|^{2}. $$
Using the Cauchy-Schwarz inequality, it follows that
$$ \bigl|(CX) (s)\bigr|^{2}\leq2 s \int_{0}^{s} \bigl|a_{1}\bigl(\theta, X\bigl(f( \theta)\bigr)\bigr)\bigr|^{2}\,d\theta+ 2 \biggl| \int_{0}^{s} a_{2}\bigl(\theta, X\bigl(f( \theta)\bigr)\bigr)\,dw(\theta) \biggr|^{2}. $$
By passing to the sup and using the monotonicity of the expectation, we obtain
$$\begin{aligned} \mathbb{E}\Bigl( \sup_{0 \leq s \leq\eta }\bigl|(CX) (s)\bigr|^{2}\Bigr) \leq{}&2 \eta\mathbb{E} \biggl( \int_{0}^{\eta} \bigl|a_{1}\bigl(\theta, X \bigl(f(\theta)\bigr)\bigr)\bigr|^{2}\,d\theta\biggr) \\ &{}+ 2 \mathbb{E} \sup _{0 \leq s \leq\eta} \biggl| \int_{0}^{s} a_{2}\bigl(\theta, X\bigl(f( \theta)\bigr)\bigr)\,dw(\theta) \biggr|^{2}. \end{aligned}$$
Using (1.3) and interchanging the expectation with the integral, it follows that
$$\begin{aligned} \mathbb{E}\Bigl( \sup_{0 \leq s \leq\eta }\bigl|(CX) (s)\bigr|^{2}\Bigr) \leq{}&2 \eta\biggl( \int_{0}^{\eta} \mathbb {E}\bigl(\bigl|a_{1} \bigl(\theta, X\bigl(f(\theta)\bigr)\bigr)\bigr|^{2}\bigr)\,d\theta\biggr) \\ &{}+ 8 \int _{0}^{\eta} \mathbb{E}\bigl(\bigl|a_{2} \bigl(\theta, X\bigl(f(\theta)\bigr)\bigr)\bigr|^{2}\bigr)\,d\theta. \end{aligned}$$
By the assumptions given above, we get
$$\begin{aligned} \mathbb{E}\Bigl( \sup_{0 \leq s \leq\eta }\bigl|(CX)(s)\bigr|^{2}\Bigr) \leq{}&2 M \biggl[ \eta\biggl( \int_{0}^{\eta} \mathbb{E}\bigl(\bigl|X\bigl(f(\theta) \bigr)\bigr|^{2} + 1\bigr)\,d\theta\biggr) \\ &{}+ 4 \int _{0}^{\eta} \mathbb{E}\bigl(\bigl|X\bigl(f(\theta) \bigr)\bigr|^{2} + 1\bigr)\,d\theta \biggr]. \end{aligned}$$
Hence
$$ \mathbb{E}\Bigl( \sup_{0 \leq s \leq\eta }\bigl|(CX) (s)\bigr|^{2}\Bigr) \leq2 M \biggl[ \eta\biggl( \int_{0}^{\eta} \|X\| ^{2}_{X_{f(\theta)}}\,d\theta\biggr)+ \eta^{2} + 4 \int_{0}^{\eta } \|X\|^{2}_{X_{f(\theta)}}\,d\theta+ 4 \eta \biggr]. $$
The fact that \(0 \leq f(\theta) \leq\theta\) (\(\theta\in [0, T]\)) and the last inequality give
$$ \bigl\| (CX)\bigr\| ^{2}_{X_{\eta}} \leq2 M \bigl( \eta^{2} + 4 \eta\bigr) \bigl(\|X\| ^{2}_{X_{\eta}} + 1 \bigr). $$
Thus \(\|(CX)\|_{X_{\eta}}\) is finite if \(\|X\|_{X_{\eta}}\) is finite, which completes the proof. □

If \(X(t)\) is a solution of the equation (1.1) on the interval \([0, s]\), \(0 \leq s \leq T\), we denote \(\varphi(s) = \|X\|^{2}_{X_{s}}\).

Lemma 2.1

The function φ is bounded on the interval \([0, T]\).


Proof
From the inequality obtained in the proof of Proposition 2.1, replacing η by s, we infer that
$$\begin{aligned} \mathbb{E}\Bigl( \sup_{0 \leq r \leq s}\bigl|(CX) (r)\bigr|^{2}\Bigr) & \leq2 M \biggl[ s \biggl( \int_{0}^{s} \|X\|^{2}_{X_{f(\theta)}}\,d\theta\biggr)+ s^{2} + 4 \int_{0}^{s} \|X\|^{2}_{X_{f(\theta)}}\,d\theta+ 4 s \biggr] \\ &\leq2 M \biggl[ T \biggl( \int_{0}^{s} \|X\| ^{2}_{X_{f(\theta)}}\,d\theta\biggr) + T^{2} + 4 \int_{0}^{s} \| X\|^{2}_{X_{f(\theta)}}\,d\theta+ 4 T \biggr]. \end{aligned}$$
Using the fact that \(X(t)\) is a solution of the equation (1.1) on the interval \([0, s]\) and \(0 \leq f(s) \leq s\), it follows that
$$\begin{aligned} \varphi(s) \leq K + K' \int_{0}^{s}\varphi(r)\,dr, \end{aligned}$$
where \(K = 2M(T^{2} + 4 T)\) and \(K' = 2M(T + 4)\).
Now, Gronwall’s lemma implies that
$$\varphi(s) \leq K e^{K' T}, $$
which gives the result. □
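The extremal case of Gronwall’s lemma, in which the inequality is an equality, can be checked numerically. The sketch below is illustrative only, with hypothetical constants K, K′, T that are not the paper’s.

```python
import math

# Extremal case of Gronwall: if phi(s) = K + K' * int_0^s phi(r) dr on [0, T],
# then phi(s) = K * exp(K' * s), so phi is bounded by K * exp(K' * T).
K, Kp, T = 2.0, 0.5, 1.0            # illustrative constants
n = 100_000
ds = T / n
phi, integral = K, 0.0
for _ in range(n):                   # forward Euler on the integral equation
    integral += phi * ds
    phi = K + Kp * integral
print(phi, K * math.exp(Kp * T))     # the two values agree closely
```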

We introduce here the concept of the Hausdorff measure of noncompactness, which is a nonnegative real function measuring the degree of noncompactness of sets.

Let X be a complex Banach space and let \(\mathcal{P}(X)\) be the set of all subsets of X; we denote by \(B(x, r)\) and \(\overline{B}(x, r)\), respectively, the open and the closed ball of center x and radius \(r > 0\).

Definition 2.1

The Hausdorff measure of noncompactness \(\alpha(A)\) of \(A \in\mathcal{P}(X)\) is defined as the infimum of the numbers \(\epsilon> 0\) such that A has a finite ϵ-net in X. Recall that a set \(S \subseteq X\) is called an ϵ-net of A if \(A \subseteq S + \epsilon \overline{B}(0, 1) = \{s + \epsilon b: s \in S, b \in \overline{B}(0, 1) \}\).
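To make the ϵ-net notion concrete, here is an illustrative finite ϵ-net for the square \([0,1]^{2}\) under the max norm. This example is not part of the paper; `grid_net` and `is_eps_net` are hypothetical helpers.

```python
import itertools

# Finite eps-net for the square [0, 1]^2 under the max norm: grid points
# spaced 2*eps apart leave every point of the square within eps of the net.
def grid_net(eps):
    ticks = [min(1.0, eps + 2 * eps * k) for k in range(int(1 / (2 * eps)) + 1)]
    return list(itertools.product(ticks, ticks))

def is_eps_net(net, points, eps):
    # the 1e-12 slack guards against floating-point rounding at boundaries
    return all(min(max(abs(p[0] - s[0]), abs(p[1] - s[1])) for s in net)
               <= eps + 1e-12 for p in points)

sample = [(i / 20, j / 20) for i in range(21) for j in range(21)]
print(is_eps_net(grid_net(0.1), sample, 0.1))
```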

The Hausdorff measure of noncompactness α enjoys the following properties:
  1. (a)

    regularity: \(\alpha(A) = 0\) if and only if A is totally bounded;

  2. (b)

    nonsingularity: α is equal to zero on every one-element set;

  3. (c)

    monotonicity: \(A_{1} \subseteq A_{2}\) implies \(\alpha (A_{1}) \leq\alpha(A_{2})\);

  4. (d)

    semi-additivity: \(\alpha(A_{1} \cup A_{2}) = \max\{ \alpha(A_{1}), \alpha(A_{2}) \}\);

  5. (e)

    Lipschitzianity: \(| \alpha(A_{1}) - \alpha(A_{2})| \leq \rho(A_{1}, A_{2}) \); here ρ denotes the Hausdorff semimetric: \(\rho(A_{1}, A_{2}) = \inf\{ \epsilon> 0: A_{1} + \epsilon\overline{B}(0, 1) \supset A_{2}, A_{2} + \epsilon\overline{B}(0, 1) \supset A_{1} \}\);

  6. (f)

    continuity: for any \(A_{1} \in\mathcal{P}(X)\) and any ϵ, there exists \(\delta> 0\) such that \(|\alpha(A_{1}) - \alpha(A)| < \epsilon\) for all A satisfying \(\rho(A_{1}, A) < \delta\);

  7. (g)

    semi-homogeneity: \(\alpha(t A) = |t| \alpha(A)\) for any number t;

  8. (h)

    algebraic semi-additivity: \(\alpha(A_{1} + A_{2}) \leq\alpha(A_{1}) + \alpha(A_{2})\);

  9. (i)

    invariance under translations: \(\alpha(A + x_{0}) = \alpha(A)\) for any \(x_{0} \in X\).

The following theorem records a further property in view of its importance.

Theorem 2.2

([1], Theorem 1.1.5)

The Hausdorff measure of noncompactness is invariant under passage to the closure and to the convex hull: \(\alpha(A) = \alpha(\overline{A}) = \alpha (\operatorname{co} A)\).

We note that the measure of noncompactness has many applications in mathematics. On this topic, we refer to [1, 10–13].

Definition 2.2

Let X be a Banach space. A function ψ defined on \(\mathcal{P}(X)\) with values in some partially ordered set \((\Gamma, \leq)\) is called a measure of noncompactness in the general sense if \(\psi(A) = \psi (\overline{co} A)\) for all \(A \in\mathcal{P}(X)\).

Definition 2.3

Let \((X, \|\cdot\|)\) be a normed space and ϑ one of the measures of noncompactness given above. A continuous mapping \(G: X \longrightarrow X\) is said to be densifying or condensing if, for every bounded subset A of X such that \(\vartheta(A) > 0\), we have \(\vartheta(G(A)) < \vartheta(A)\).

Let \(\mathcal{M}([0, T])\) be the vector space of scalar functions defined on \([0, T]\); it is partially ordered by the usual order ≤. Let \(\gamma: \mathcal{P}(X_{T})\longrightarrow\mathcal{M}([0, T])\) be defined by
$$\left \{ \textstyle\begin{array}{@{}c@{}} \gamma: \mathcal{P}(X_{T}) \longrightarrow\mathcal{M}([0, T]); \\ \Lambda\longrightarrow\gamma(\Lambda). \end{array}\displaystyle \right . $$
$$\left \{ \textstyle\begin{array}{@{}c@{}} \gamma(\Lambda): [0, T] \longrightarrow\mathbb{R}; \\ t \longrightarrow \gamma_{t}(\Lambda_{t}), \end{array}\displaystyle \right . $$
where \(\Lambda_{t} = \{ X_{t} = X_{|[0, t]} : X \in\Lambda\} \subset X_{t}\) and \(\gamma_{t}\) is the measure of noncompactness of Hausdorff on the space \(X_{t}\).

Lemma 2.2

The function γ defines a measure of noncompactness in the general sense on \(X_{T}\) which is additively nonsingular (i.e., \(\gamma(A\cup\{X\}) = \gamma (A)\) for all \(A\subset X_{T}\) and \(X \in X_{T}\)).


Proof
Let \(A \subset X_{T}\). We have \(A \subset\overline{co}(A)\), hence \(A_{t} \subset({\overline{co}(A)})_{t}\) for all \(t \in[0, T]\). The monotonicity of the Hausdorff measure of noncompactness implies that \(\gamma_{t}(A_{t}) \leq\gamma_{t}(({\overline{co}(A)})_{t})\) for all \(t \in[0, T]\), which gives \(\gamma(A) \leq\gamma(\overline {co}(A))\). On the other hand
$$\bigl(\overline{co}(A)\bigr)_{t} = \bigl\{ X_{|[0, t]}: X \in \overline {co}(A)\bigr\} \subset\overline{co}(A_{t}). $$
Again, the monotonicity and the invariance by closure of the Hausdorff measure of noncompactness leads to
$$\gamma_{t}\bigl(\bigl(\overline{co}(A)\bigr)_{t}\bigr) \leq\gamma_{t} \bigl(\overline {co}(A_{t})\bigr) = \gamma_{t} (A_{t}). $$
Consequently,
$$\gamma\bigl(\overline{co}(A)\bigr) (t) \leq\gamma(A) (t),\quad 0 \leq t \leq T. $$
It follows that \(\gamma(\overline{co}(A))\leq\gamma(A)\), by which we achieve the proof for the first assertion. The fact that γ is additively nonsingular is trivial. □

Definition 2.4

Let \((X, \|\cdot\|)\) be a normed space and K a nonempty bounded subset of X. A selfmapping T on K is called a nonexpansive mapping if \(\|T(x) - T(y)\| \leq\| x - y \|\) for all \(x, y \in K\).

Now, let us introduce the following conditions:

(\(\mathcal{H}_{1}\)): \(|a_{i}(s, u_{1}) - a_{i}(s, u_{2})|^{2}\leq h(s) g(| u_{1}- u_{2}|^{2})\) for \(i = 1, 2\),

where \(h: [0, T]\longrightarrow[0, + \infty[\) is integrable with \(\int_{0}^{T}h(s)\,ds \leq \frac{1}{2T + 8}\) and
$$\begin{aligned} &g: [0, + \infty[ \longrightarrow[0, + \infty[ \mbox{ is a nondecreasing and concave function with }g(s) \leq s\\ &\quad{}\mbox{for all }s \in [0, + \infty[. \end{aligned}$$
(\(\mathcal{H}_{2}\)): for all \(A > 0\), the inequality
$$\widetilde{h}(s) \leq A \int_{0}^{s}h(\theta ) g\bigl(\widetilde{h}\bigl(f( \theta)\bigr)\bigr)\,d\theta,\quad 0 \leq s \leq T $$
cannot admit nontrivial solutions.

(\(\mathcal{H}_{3}\)): \(\lambda(f^{-1}(B)) \longrightarrow0\) as \(\lambda(B) \longrightarrow0\), where λ is the Lebesgue measure and \(B \subset[0, t] \) (\(0 \leq t \leq T\)).

Remark 2.1

We note that if we take \(h(t) = \alpha\) (\(\alpha> 0\)), \(g(u) = u\), \(f(x) = x\), and \(T = -2 + \sqrt{4 + \frac{1}{2\alpha}}\), then the assumptions given in (\(\mathcal{H}_{1}\)), (\(\mathcal{H}_{2}\)), and (\(\mathcal{H}_{3}\)) are satisfied.
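A quick numerical verification of this remark (illustrative only): with these choices, \(\int_{0}^{T} h(s)\,ds = \alpha T\) equals \(\frac{1}{2T + 8}\) exactly, since \(2\alpha(T^{2} + 4T) = 1\).

```python
import math

# With h = alpha, g(u) = u, f(x) = x and T = -2 + sqrt(4 + 1/(2*alpha)),
# (T + 2)^2 = 4 + 1/(2*alpha) gives 2*alpha*(T^2 + 4*T) = 1, hence
# int_0^T h(s) ds = alpha*T = 1/(2*T + 8).
for alpha in (0.1, 1.0, 5.0):
    T = -2.0 + math.sqrt(4.0 + 1.0 / (2.0 * alpha))
    assert abs(2.0 * alpha * (T * T + 4.0 * T) - 1.0) < 1e-12
    assert abs(alpha * T - 1.0 / (2.0 * T + 8.0)) < 1e-12
print("ok")
```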

Proposition 2.2

Under the assumption (\(\mathcal{H}_{1}\)), the mapping C defined by (2.1) is nonexpansive on every \(X_{t}\) (\(0 \leq t \leq T\)).


Proof
We have
$$\begin{aligned} \bigl|CX(s) - CY(s)\bigr| ={}& \biggl| \int_{0}^{s} \bigl[ a_{1}\bigl(\theta, X \bigl(f(\theta)\bigr)\bigr) - a_{1} \bigl(\theta, Y\bigl(f(\theta) \bigr)\bigr) \bigr]\,d\theta \\ &{} + \int_{0}^{s} \bigl[ a_{2}\bigl(\theta, X \bigl(f(\theta )\bigr)\bigr) - a_{2} \bigl(\theta, Y\bigl(f(\theta) \bigr)\bigr) \bigr]\,dw(\theta) \biggr|. \end{aligned}$$
By using the inequality \((x_{1} + x_{2})^{2}\leq2 x_{1}^{2} + 2 x_{2}^{2}\), we obtain
$$\begin{aligned} \bigl|CX(s) - CY(s)\bigr|^{2} \leq{}&2 \biggl| \int_{0}^{s} \bigl[ a_{1}\bigl(\theta, X \bigl(f(\theta)\bigr)\bigr) - a_{1} \bigl(\theta, Y\bigl(f(\theta)\bigr)\bigr) \bigr]\,d\theta\biggr|^{2} \\ &{}+2 \biggl| \int_{0}^{s} \bigl[ a_{2}\bigl(\theta, X \bigl(f(\theta )\bigr)\bigr) - a_{2} \bigl(\theta, Y\bigl(f(\theta)\bigr)\bigr) \bigr]\,dw(\theta)\biggr|^{2}. \end{aligned}$$
The Cauchy-Schwarz inequality enables us to write
$$\begin{aligned} \bigl|CX(s) - CY(s)\bigr|^{2} \leq{}&2 s \int_{0}^{s} \bigl| \bigl[ a_{1}\bigl(\theta, X \bigl(f(\theta)\bigr)\bigr) - a_{1} \bigl(\theta, Y\bigl(f(\theta)\bigr)\bigr) \bigr]\bigr|^{2}\,d\theta \\ &{}+2 \biggl| \int_{0}^{s} \bigl[ a_{2}\bigl(\theta, X \bigl(f(\theta )\bigr)\bigr) - a_{2} \bigl(\theta, Y\bigl(f(\theta)\bigr)\bigr) \bigr]\,dw(\theta)\biggr|^{2}. \end{aligned}$$
Passing to the sup on \([0, t]\) and using the monotonicity of the expectation, it follows that
$$\begin{aligned} &\mathbb{E} \Bigl( \sup_{0 \leq s \leq t} \bigl|CX(s) - CY(s)\bigr|^{2} \Bigr) \\ &\quad\leq 2 t \mathbb{E} \biggl( \int_{0}^{t} \bigl| \bigl[ a_{1}\bigl(\theta, X \bigl(f(\theta)\bigr)\bigr) - a_{1} \bigl(\theta, Y\bigl(f(\theta)\bigr)\bigr) \bigr]\bigr|^{2}\,d\theta\biggr) \\ &\qquad{}+2 \mathbb{E} \biggl( \sup_{0 \leq s \leq t} \biggl| \int_{0}^{s} \bigl[ a_{2}\bigl(\theta, X \bigl(f(\theta)\bigr)\bigr) - a_{2} \bigl(\theta , Y\bigl(f(\theta)\bigr)\bigr) \bigr]\,dw(\theta)\biggr|^{2}\biggr). \end{aligned}$$
The stochastic inequality (1.3) gives
$$\begin{aligned} \mathbb{E} \Bigl( \sup_{0 \leq s \leq t} \bigl|CX(s) - CY(s)\bigr|^{2} \Bigr) \leq{}&2 t \mathbb{E} \biggl( \int_{0}^{t} \bigl| \bigl[ a_{1}\bigl(\theta, X \bigl(f(\theta)\bigr)\bigr) - a_{1} \bigl(\theta, Y\bigl(f(\theta)\bigr)\bigr) \bigr]\bigr|^{2}\,d\theta\biggr) \\ &{}+8 \int_{0}^{t} \mathbb{E} \bigl(\bigl| \bigl[ a_{2}\bigl( \theta, X\bigl(f(\theta)\bigr)\bigr) - a_{2} \bigl(\theta, Y\bigl(f(\theta) \bigr)\bigr) \bigr]\bigr|^{2}\bigr)\,d\theta. \end{aligned}$$
By (\(\mathcal{H}_{1}\)) (applied to both \(a_{1}\) and \(a_{2}\)), it follows that
$$\mathbb{E} \Bigl( \sup_{0 \leq s \leq t} \bigl|CX(s) - CY(s)\bigr|^{2} \Bigr) \leq(2 t + 8) \mathbb{E} \biggl( \int_{0}^{t} h(\theta) g\bigl(\bigl|X\bigl(f(\theta) \bigr) - Y\bigl(f(\theta)\bigr)\bigr|^{2}\bigr)\,d\theta\biggr). $$
Interchanging the expectation with the integral and using Jensen's inequality (g is concave) gives
$$\mathbb{E} \Bigl( \sup_{0 \leq s \leq t} \bigl|CX(s) - CY(s)\bigr|^{2} \Bigr) \leq(2 t + 8) \biggl( \int_{0}^{t} h(\theta) g\Bigl(\mathbb{E} \Bigl(\sup _{0 \leq\theta\leq t} \bigl(\bigl|X\bigl(f(\theta)\bigr) - Y\bigl(f(\theta ) \bigr)\bigr|^{2}\bigr)\Bigr)\Bigr)\,d\theta\biggr). $$
Since \(g(s) \leq s\) for all \(s \geq0\), we get
$$\mathbb{E} \Bigl( \sup_{0 \leq s \leq t} \bigl|CX(s) - CY(s)\bigr|^{2} \Bigr) \leq(2 t + 8) \biggl( \int_{0}^{t} h(\theta) \mathbb {E} \Bigl(\sup _{0 \leq\theta\leq t} \bigl(\bigl|X\bigl(f(\theta)\bigr) - Y\bigl(f(\theta) \bigr)\bigr|^{2}\bigr)\Bigr)\,d\theta\biggr). $$
It follows that
$$\bigl\| C(X) - C(Y)\bigr\| ^{2}_{X_{t}} \leq(2 t + 8) \biggl( \int_{0}^{t} h(\theta)\,d\theta\biggr) \|X - Y \|^{2}_{X_{t}}. $$
Since \(\int_{0}^{t} h(\theta)\,d\theta\leq\int_{0}^{T} h(\theta)\,d\theta\leq\frac{1}{2T + 8}\) and \(t \leq T\), this yields
$$\bigl\| C(X) - C(Y)\bigr\| _{X_{t}} \leq \|X - Y\|_{X_{t}}. $$
This shows that C is a nonexpansive selfmapping on \(X_{t}\), which gives the result. □

Remark 2.2

We note that nonexpansive selfmappings on bounded subsets of Banach spaces do not necessarily have fixed points; we can refer, for example, to the famous work of Alspach [14], who gave an example of a weakly compact convex subset M of the space \(L_{1}([0, 1])\) and a fixed point free isometry on M.

In the sequel, we will need the following two lemmas; the first lemma is one of the classical results in measure theory.

Lemma 2.3

Let \(\epsilon> 0\) and let \(\phi: [0, T] \longrightarrow\mathbb{R}\) be a monotone function; then the set of discontinuity points of ϕ at which the jump has magnitude at least ϵ is a finite set in \([0, T]\).

Lemma 2.4

For any \(\Lambda\subset X_{T}\), the function
$$\left \{ \textstyle\begin{array}{@{}c@{}} \gamma(\Lambda): [0, T] \longrightarrow[0, + \infty[; \\ t \longrightarrow \gamma_{t}(\Lambda_{t}), \end{array}\displaystyle \right . $$
is a nondecreasing bounded function on \([0, T]\).


Proof
Let \(t_{1}, t_{2} \in[0, T]\) such that \(t_{1} \leq t_{2}\); then \(\Lambda _{t_{1}} \subset\Lambda_{t_{2}}\). The monotonicity of the Hausdorff measure of noncompactness gives \({\gamma}_{t_{1}}(\Lambda_{t_{1}}) \leq {\gamma}_{t_{2}}(\Lambda_{t_{2}})\) and implies that \(\gamma(\Lambda)(t_{1}) \leq\gamma(\Lambda)(t_{2})\), which proves the first assertion. The second assertion follows directly from the first: by monotonicity, we deduce that \(\gamma(\Lambda)(t) \leq\gamma(\Lambda )(T)\) for all \(t \in[0, T]\). □

Theorem 2.3

Under the assumptions (\(\mathcal{H}_{1}\)), (\(\mathcal{H}_{2}\)), (\(\mathcal{H}_{3}\)) together with the growth polynomial conditions given in (2.2), C is a condensing mapping with respect to the measure of noncompactness γ on \(X_{t}\) for all \(t \in[0, T]\).


Proof
We show that if there exists \(A \subset X_{t}\) such that \(\gamma(A) \leq\gamma(C(A))\), then necessarily \(\gamma(A) = 0\). Let \(t > 0\) and \(\epsilon> 0\). By using Lemmas 2.4 and 2.3, we denote by \(\{t_{j}\}_{j = 1}^{m_{0}}\) the set of points in \([0, t]\) for which \(\gamma(A)(t_{j} + 0) - \gamma(A)(t_{j} - 0) \geq\epsilon\). It is easy to deduce that there exists \(\delta_{1}\) sufficiently small such that \(\inf_{t, t' \in]t_{j} - \delta_{1}, t_{j} + \delta_{1}[,\, t < t_{j} < t'} |\gamma(A)(t) - \gamma(A)(t')| \geq\epsilon\) for all \(j = 1, \ldots, m_{0}\). Letting \(\Im= [0, t]\backslash \bigcup_{j = 1}^{m_{0}}]t_{j} - \delta_{1}, t_{j} + \delta_{1}[\), we observe that \(\Im= \bigcup_{k = 1}^{m_{0} + 1} I_{k}\), where each \(I_{k}\) is a closed bounded interval with \(I_{k} \cap I_{j} = \emptyset\) for \(k \neq j\). On \(I_{k}\), the function \(t \longrightarrow\gamma(A) (t)\) is uniformly continuous; this implies the existence of \(\delta_{2} > 0\) sufficiently small such that \(|\gamma(A) (s) - \gamma(A) (s')| < \epsilon\) for all \(s, s' \in I_{k}\) with \(|s - s'| < \delta _{2}\). On the other hand, in \(I_{k}\) we choose a finite set \(\{b_{k_{s}}\}_{s = 1}^{r_{k}}\) for which \(\delta_{2} < b_{k_{s}} - b_{k_{s} - 1}< \frac{3}{2} \delta _{2}\). Now, for all \(1 \leq s \leq r_{k} \) (\(k = 1, \ldots, m_{0} + 1\)), let \(\{ M_{i_{1}}, M_{i_{2}}, M_{i_{3}}, \ldots, M_{i_{s}} \}\) be a \((\gamma(A)(b_{k_{s}}) + \epsilon)\)-net of the set \(A_{b_{k_{s}}}\). Thus, we can construct a family of paths \(\{G_{l}: l = 1, \ldots, h\}\) such that \(\mathbb{P} \{ \omega\in\Omega\mid t \longrightarrow G_{l}(t)(\omega) \mbox{ is continuous}\ (l = 1, \ldots, h)\} = 1\) as follows: \(G_{l} \equiv M_{i_{f}}\) on the intervals \(J_{k_{s}} = [b_{k_{s} - 1} + \frac{\delta _{2}}{2}, b_{k_{s}} - \frac{\delta_{2}}{2}]\) for all \(1 \leq f \leq s \) (\(k = 1, \ldots, m_{0} + 1\)) (\(l = 1, \ldots, h\)), and linear on the complementary intervals.
On the other hand, since \(\gamma(A) \leq\gamma (C(A))\), we have \(\gamma(A)(\theta) \leq\gamma(C(A))(\theta)\) for all \(\theta \in[0, t]\). Let \(Z \in(C(A))_{\theta} = \{ Y_{|[0, \theta]} : Y \in C(A) \}\); then \(Z = Y_{|[0, \theta]}\) for some \(Y \in C(A)\), which shows the existence of \(V \in A\) such that \(Y = C(V)\). Moreover, we have \(V_{|[0, b_{k_{s}}]} \in A_{b_{k_{s}}}\) (\(1 \leq s \leq r_{k}\)) (\(k = 1, \ldots, m_{0} + 1\)); it follows that there exists \(1 \leq f_{0} \leq s\) for which
$$\|V_{|[0, b_{k_{s}}]} - M_{i_{f_{0}}}\|_{X_{k_{s}}} \leq\gamma (A) (b_{k_{s}}) + \epsilon. $$
Since \(G_{l} \equiv M_{i_{f_{0}}}\) on \(J_{k_{s}}\), it follows that for \(\theta\in J_{k_{s}}\), we have
$$\begin{aligned} \mathbb{E}\bigl(\bigl|V(\theta) - G_{l}( \theta)\bigr|^{2}\bigr)&\leq \mathbb {E}\Bigl( \sup_{J_{k_{s}}} \bigl|V(\theta) - G_{l}(\theta)\bigr|^{2}\Bigr) \\ &\leq\|V_{|[0, b_{k_{s}}]} - M_{i_{f_{0}}}\|^{2}_{X_{k_{s}}}. \end{aligned}$$
The uniform continuity of the function \(t \longrightarrow \gamma(A) (t)\) implies that for \(\theta\in J_{k_{s}}\), we have
$$\bigl| \gamma(A) (b_{k_{s}}) - \gamma(A) (\theta)\bigr| < \epsilon. $$
Hence \(\gamma(A) (b_{k_{s}}) + \epsilon< \gamma(A) (\theta) + 2 \epsilon\), and squaring both (nonnegative) sides gives
$$\bigl(\gamma(A) (b_{k_{s}}) + \epsilon\bigr)^{2} \leq \bigl( \gamma (A) (\theta) + 2 \epsilon\bigr)^{2}. $$
We denote \(\chi^{\tau}_{k_{s}} = J_{k_{s}} \cap[0, \sup_{ 0 \leq\theta\leq \tau} f(\theta)]\).
By using the same techniques as in the proof of Proposition 2.2, we get
$$\mathbb{E} \sup_{0 \leq\theta\leq \tau} \bigl(\bigl|(CV) (\theta) - (C G_{l}) (\theta)\bigr|^{2}\bigr) \leq (2\tau+ 8) \mathbb{E} \int_{0}^{\tau} h(\theta) g\bigl(\bigl|V\bigl(f(\theta) \bigr) - G_{l}\bigl(f(\theta )\bigr)\bigr|^{2}\bigr)\,d\theta. $$
Since g is concave, it follows that
$$\bigl\| (CV)- (C G_{l})\bigr\| ^{2}_{X_{\tau}} \leq (2\tau+ 8) \int_{0}^{\tau} h(\theta) g\bigl( \mathbb{E} \bigl(\bigl|V \bigl(f(\theta)\bigr) - G_{l}\bigl(f(\theta)\bigr)\bigr|^{2}\bigr) \bigr)\,d\theta. $$
We put \(\kappa= 2\tau+ 8\) and \(\Re_{t} = [0, t] \backslash \bigcup_{1 \leq s \leq r_{k}, k = 1, \ldots, m_{0} + 1}{f^{-1}(\chi^{\tau}_{k_{s}})}\).
Since \(f^{-1}(\{ \chi^{\tau}_{k_{s}}\}) \cap f^{-1}(\{ \chi ^{\tau}_{r_{m}}\}) = \emptyset\) (\(k \neq r\), \(s = 1, \ldots, r_{m}\), \(m = 1, \ldots, m_{0} + 1\)), we infer that
$$\begin{aligned} \bigl\| (CV)- (C G_{l})\bigr\| ^{2}_{X_{\tau}} \leq{}& \kappa \sum _{k_{s}, 1 \leq s \leq r_{k}, k = 1, \ldots, m_{0} + 1} \int _{f^{-1}(\{\chi^{\tau}_{k_{s}}\})} h(\theta) g\bigl( \mathbb{E} \bigl(\bigl|V\bigl(f( \theta)\bigr) - G_{l}\bigl(f(\theta)\bigr)\bigr|^{2}\bigr)\bigr)\,d\theta \\ &{}+\kappa \int_{\Re_{t}} h(\theta) g\bigl( \mathbb{E} \bigl(\bigl|V\bigl(f(\theta) \bigr) - G_{l}\bigl(f(\theta)\bigr)\bigr|^{2}\bigr)\bigr)\,d\theta. \end{aligned}$$
We set
$$I_{1} = \kappa \sum_{k_{s}, 1 \leq s \leq r_{k}, k = 1, \ldots, m_{0} + 1} \int_{f^{-1}(\{\chi^{\tau}_{k_{s}}\})} h(\theta) g\bigl( \mathbb{E} \bigl(\bigl|V\bigl(f(\theta) \bigr) - G_{l}\bigl(f(\theta)\bigr)\bigr|^{2}\bigr)\bigr)\,d\theta $$
and
$$I_{2} = \kappa \int_{\Re_{t}} h(\theta) g\bigl( \mathbb{E} \bigl(\bigl|V\bigl(f(\theta) \bigr) - G_{l}\bigl(f(\theta)\bigr)\bigr|^{2}\bigr)\bigr)\,d\theta, $$
so that \(\|(CV)- (C G_{l})\|^{2}_{X_{\tau}} \leq I_{1} + I_{2}\).
The fact that g is nondecreasing and (2.10) lead to
$$I_{1} \leq\kappa \int_{0}^{\tau} h(\theta) g\bigl(\gamma(A) \bigl(f( \theta)\bigr) + 2 \epsilon\bigr)^{2}\,d\theta. $$
The absolute continuity of the integral shows that there exists \(\eta > 0\) such that, for every measurable set \(B \subset[0, \tau]\) with \(\lambda(B) < \eta\), we have
$$\int_{B} h(\theta) g\bigl( \mathbb{E} \bigl(\bigl|V\bigl(f(\theta) \bigr) - G_{l}\bigl(f(\theta)\bigr)\bigr|^{2}\bigr)\bigr)\,d\theta< \frac {\epsilon}{\kappa}. $$
Moreover, the condition (\(\mathcal{H}_{3}\)) implies that it is always possible to choose \(\delta_{2}\) given above sufficiently small such that
$$I_{2} < \epsilon. $$
Consequently, we can write
$$ \bigl\| (CV)- (C G_{l})\bigr\| ^{2}_{X_{\tau}} \leq \epsilon+ \kappa \int_{0}^{t} h(\theta) g \bigl(\gamma(A) \bigl(f( \theta)\bigr) + 2 \epsilon \bigr)^{2}\,d\theta. $$
The definition of the measure of noncompactness of Hausdorff together with (2.11) leads to
$$\begin{aligned} \bigl(\gamma(A) (t)\bigr)^{2}&\leq\bigl(\gamma\bigl(C(A) (t)\bigr) \bigr)^{2} \leq\bigl\| (CV)- (C G_{l})\bigr\| ^{2}_{X_{\tau}} \\ &\leq\epsilon+ \kappa \int_{0}^{t} h(\theta ) g\bigl(\gamma(A) \bigl(f( \theta)\bigr) + 2 \epsilon\bigr)^{2}\,d\theta. \end{aligned}$$
Letting \(\epsilon\longrightarrow0\) and using the assumption (\(\mathcal{H}_{2}\)), we obtain \(\gamma(A) \equiv0\), which gives the result. □

Theorem 2.4

([1], p.26)

Let \(A: K \longrightarrow K\) be a mapping defined on a closed, bounded, convex subset K of a Banach space X. Assume that A is condensing with respect to the additively nonsingular measure of noncompactness in the general sense Ψ. Then A has at least one fixed point in K.

Theorem 2.5

The mapping \(C: X_{T} \longrightarrow X_{T}\) defined by (2.1) has a unique fixed point in \(X_{T}\).


Proof

The existence follows from Theorem 2.4. More precisely, for τ belonging to the interval \([0, -2 + \sqrt{4 + \frac{1}{2M} \frac{H}{H + 1}}]\), the inequality (2.8) shows that the random mapping \(C: X_{\tau}\longrightarrow X_{\tau}\) leaves \(\overline{B}_{X_{\tau}}(0, \sqrt {H})\) (the closed ball of center 0 and radius \(\sqrt{H}\)) invariant, which implies the existence of the solution \(X(t)\) in \(X_{\tau}\) (\(\tau\in[0, -2 + \sqrt{4 + \frac{1}{2M} \frac{H}{H + 1}}]\)); the result in \(X_{T}\) follows by extension to the whole interval \([0, T]\).

For the uniqueness, we proceed as follows: Assume that \(X = X(t)_{t \in [0, T]}\) and \(Y = Y(t)_{t \in[0, T]}\) are two strong solutions of equation (1.1) such that \(X(0) = Y(0) = 0\). In other words
$$ X(s) = \int_{0}^{s}a_{1}\bigl(\theta, X\bigl(f( \theta)\bigr)\bigr)\,d\theta+ \int_{0}^{s}a_{2}\bigl(\theta, X\bigl(f( \theta)\bigr)\bigr)\,dw (\theta) $$
$$ Y(s) = \int_{0}^{s}a_{1}\bigl(\theta, Y\bigl(f( \theta)\bigr)\bigr)\,d\theta+ \int_{0}^{s}a_{2}\bigl(\theta, Y\bigl(f( \theta)\bigr)\bigr)\,dw (\theta). $$
It follows that
$$\begin{aligned} X(s) - Y(s) ={}& \int_{0}^{s}\bigl(a_{1}\bigl(\theta, X \bigl(f(\theta)\bigr)\bigr) - a_{1}\bigl(\theta, Y\bigl(f(\theta)\bigr) \bigr)\bigr)\,d\theta \\ &{}+ \int _{0}^{s}\bigl(a_{2}\bigl(\theta, X \bigl(f(\theta)\bigr)\bigr) - a_{2}\bigl(\theta, Y\bigl(f(\theta)\bigr) \bigr)\bigr)\,dw (\theta). \end{aligned}$$
Using the inequality \((x_{1} + x_{2})^{2}\leq2 x_{1}^{2} + 2 x_{2}^{2}\) for \(x_{1}, x_{2} \in\mathbb{R}\), we get
$$\begin{aligned} \bigl|X(s) - Y(s)\bigr|^{2} \leq{}& 2 \biggl| \int_{0}^{s}\bigl(a_{1}\bigl(\theta, X \bigl(f(\theta)\bigr)\bigr) - a_{1}\bigl(\theta, Y\bigl(f(\theta)\bigr) \bigr)\bigr)\,d\theta\biggr|^{2} \\ &{}+ 2 \biggl| \int _{0}^{s}\bigl(a_{2}\bigl(\theta, X \bigl(f(\theta)\bigr)\bigr) - a_{2}\bigl(\theta, Y\bigl(f(\theta)\bigr) \bigr)\bigr)\,dw (\theta)\biggr|^{2}. \end{aligned}$$
The Cauchy-Schwarz inequality gives
$$\begin{aligned} \bigl|X(s) - Y(s)\bigr|^{2} \leq{}& 2 s \int_{0}^{s}\bigl|\bigl(a_{1}\bigl(\theta, X\bigl(f( \theta)\bigr)\bigr) - a_{1}\bigl(\theta, Y\bigl(f(\theta)\bigr) \bigr)\bigr)\bigr|^{2}\,d\theta \\ &{}+ 2 \biggl| \int _{0}^{s}\bigl(a_{2}\bigl(\theta, X\bigl(f( \theta)\bigr)\bigr) - a_{2}\bigl(\theta, Y\bigl(f(\theta)\bigr)\bigr)\bigr)\,dw ( \theta)\biggr|^{2}. \end{aligned}$$
Taking the supremum over \(s \in[0, u]\) and then the expectation, and using the assumption (\(\mathcal{H}_{1}\)) together with the stochastic inequality (1.3), we obtain
$$\begin{aligned} \mathbb{E}\Bigl( \sup_{0 \leq s \leq u}\bigl|X(s) - Y(s)\bigr|^{2}\Bigr) \leq{}& 2 u \mathbb{E} \biggl( \int_{0}^{u}h(\theta) g\bigl(\bigl|X\bigl(f(\theta)\bigr) - Y \bigl(f(\theta)\bigr)\bigr|^{2}\bigr)\,d\theta\biggr) \\ &{}+ 8\biggl( \int_{0}^{u} \mathbb{E} \bigl(h(\theta) g\bigl(\bigl|X \bigl(f(\theta)\bigr) - Y\bigl(f(\theta)\bigr)\bigr|^{2}\bigr)\bigr)\,d\theta\biggr). \end{aligned}$$
Since g is concave and \(u \leq T\), Jensen's inequality yields
$$ \mathbb{E}\Bigl( \sup_{0 \leq s \leq u}\bigl(\bigl|X(s) - Y(s)\bigr|^{2} \bigr)\Bigr) \leq(2 T + 8) \biggl( \int_{0}^{u} h(\theta) g \bigl(\mathbb{E}\bigl(\bigl|X \bigl(f(\theta)\bigr) - Y\bigl(f(\theta)\bigr)\bigr|^{2}\bigr)\bigr)\,d\theta\biggr). $$
By the monotonicity of the expectation and the function g, we infer that
$$\begin{aligned} &\mathbb{E}\Bigl( \sup_{0 \leq s \leq u}\bigl(\bigl|X(s) - Y(s)\bigr|^{2}\bigr)\Bigr) \\ &\quad\leq(2 T + 8) \biggl( \int_{0}^{u} h(\theta) g \bigl(\mathbb{E} \sup _{0 \leq s \leq\theta}\bigl(\bigl|X\bigl(f(s)\bigr) - Y\bigl(f(s) \bigr)\bigr|^{2}\bigr)\bigr)\,d\theta\biggr). \end{aligned}$$
By assumption (\(\mathcal{H}_{2}\)), it follows that
$$\mathbb{E}\Bigl( \sup_{0 \leq s \leq u} \bigl|X(s) - Y(s)\bigr|^{2} \Bigr) = 0 $$
for an arbitrary element \(u \in[0, T]\); consequently, \(X(t) = Y(t)\) for all \(t \in[0, T]\), which implies the uniqueness of the strong solution. □
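Although the fixed point argument above is nonconstructive, the strong solution it guarantees can be approximated numerically by an Euler-Maruyama scheme adapted to the delayed argument f in equation (1.1). The following is a minimal sketch; the coefficients \(a_{1}\), \(a_{2}\) and the delay \(f\) used in the example call are illustrative choices of ours, not taken from the paper.

```python
import math
import random

def euler_maruyama_delayed(a1, a2, f, T, n, seed=0):
    """Euler-Maruyama scheme for dX = a1(t, X(f(t))) dt + a2(t, X(f(t))) dW,
    with X(0) = 0 and a delayed time argument f(t) <= t."""
    rng = random.Random(seed)
    dt = T / n
    ts = [i * dt for i in range(n + 1)]
    xs = [0.0]                              # X(0) = 0, as in the paper
    for i in range(n):
        t = ts[i]
        # evaluate the path at the delayed time f(t), rounded to the grid
        j = min(i, int(f(t) / dt))
        x_delayed = xs[j]
        dw = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment on [t, t + dt]
        xs.append(xs[-1] + a1(t, x_delayed) * dt + a2(t, x_delayed) * dw)
    return ts, xs

# illustrative Lipschitz coefficients and delay (assumptions, not from the paper)
ts, xs = euler_maruyama_delayed(
    a1=lambda t, x: -0.5 * x,
    a2=lambda t, x: 0.1,
    f=lambda t: 0.5 * t,
    T=1.0, n=1000,
)
```

Under the Lipschitz-type hypothesis (\(\mathcal{H}_{1}\)), such schemes are the standard way to sample approximate trajectories of the unique strong solution.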

3 Application to the convergence of Kirk’s iterative process

Let us recall the following theorem due to Diaz and Metcalf [15].

Theorem 3.1

Let B be a continuous selfmapping on a metric space \((X, d)\) such that
  1. (i)

\(\operatorname{Fix}(B) \neq\emptyset\) (\(\operatorname{Fix}(B)\) is the set of fixed points of B);

  2. (ii)
    for each \(y \in X\) such that \(y \notin\operatorname {Fix}(B)\), and for each \(z \in\operatorname{Fix}(B)\) we have
    $$d\bigl(B(y), z\bigr) < d( y, z). $$
Then one, and only one, of the following properties holds:
  1. (a)

    for each \(x_{0} \in X\) the Picard sequence \(\{B^{n}(x_{0})\}\) contains no convergent subsequences;

  2. (b)

for each \(x_{0} \in X\) the sequence \(\{B^{n}(x_{0})\}\) converges to a point belonging to \(\operatorname{Fix}(B)\).
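As a concrete illustration of this dichotomy (the example is ours, not from the paper), take \(B(y) = y/(1 + |y|)\) on \(X = [0, 1]\): here \(\operatorname{Fix}(B) = \{0\}\) and \(|B(y) - 0| < |y - 0|\) for every \(y \neq 0\), so hypotheses (i)-(ii) hold; since \([0, 1]\) is compact, alternative (a) is impossible and the Picard iterates must converge to 0. A quick numerical check:

```python
def B(y):
    # continuous selfmapping of [0, 1] with Fix(B) = {0}
    # and |B(y) - 0| < |y - 0| for every y != 0
    return y / (1.0 + abs(y))

y = 1.0
for n in range(10_000):
    y = B(y)          # Picard iterates B^n(1.0)
# closed form: B^n(y0) = y0 / (1 + n * y0), hence B^n(1.0) = 1 / (n + 1) -> 0
```

The closed form follows from the recurrence \(1/y_{n+1} = 1/y_{n} + 1\), confirming alternative (b) for this map.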

Kirk's iteration ([16]): Let \((X, \|\cdot\|)\) be a normed space, K a closed, convex, and bounded subset of X, and A a selfmapping on K. For each \(x \in K\), the sequence \(\{ S^{n}(x)\}\) generated by the mapping \(S: K \longrightarrow K\) defined by
$$S = \lambda_{0} I + \lambda_{1} A + \cdots + \lambda_{n} A^{n},\quad \lambda_{i} \geq0, \lambda_{1} > 0, \sum_{i = 0}^{n} \lambda _{i} = 1, $$
is called Kirk's iterative process.
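One step of Kirk's process evaluates the convex combination \(S(x) = \sum_{i} \lambda_{i} A^{i}(x)\). A minimal sketch on the real line (the map \(A(x) = \cos x\) on \([0, 1]\), which is even a contraction and hence nonexpansive, and the weights are illustrative choices of ours, not from the paper):

```python
import math

def kirk_step(A, lambdas, x):
    """One application of S(x) = sum_i lambda_i * A^i(x), with A^0 = I."""
    s, power = 0.0, x
    for lam in lambdas:
        s += lam * power        # contributes lambda_i * A^i(x)
        power = A(power)        # advance A^i(x) -> A^{i+1}(x)
    return s

def kirk_process(A, lambdas, x0, n_steps):
    """Kirk's iterative process: returns S^{n_steps}(x0)."""
    x = x0
    for _ in range(n_steps):
        x = kirk_step(A, lambdas, x)
    return x

# weights with lambda_1 > 0 and sum 1, as required in the definition above
x_star = kirk_process(A=math.cos, lambdas=(0.2, 0.5, 0.3), x0=1.0, n_steps=200)
```

By Theorem 3.2 below, the limit of the iterates is a fixed point of A itself, i.e. a solution of \(x = \cos x\).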

Theorem 3.2


Let K be a convex subset of a Banach space and \(A: K \longrightarrow K\) be a nonexpansive mapping. Then \(S(x) = x\) if and only if \(A(x) = x\).

Let \(T > 0\), and let \(H = K e^{K'T}\) be a positive real number which is an upper bound of the function \(\varphi(t) = \|X\|_{X_{t}}^{2} \) (\(0 \leq t \leq T\)). We denote by \(\overline{B}_{X_{t}}(0, \sqrt{H})\) the closed ball of center 0 and radius \(\sqrt{H}\) in \(X_{t}\).

Theorem 3.3

If there exists \(\tau_{0} \in [0, -2 + \sqrt{4+ \frac{1}{2M} \frac{H}{H + 1}}]\) such that \(C:\overline {B}_{X_{\tau_{0}}}(0, \sqrt{H}) \longrightarrow\overline{B}_{X_{\tau _{0}}}(0, \sqrt{H})\) satisfies Diaz-Metcalf's condition, then, for each \(X \in\overline{B}_{X_{\tau_{0}}}(0, \sqrt{H})\), Kirk's process \(\{ S^{n}(X)\}\) (associated to the mapping C) converges to the unique fixed point of C.


Proof

By Theorem 2.5, the mapping C has a unique fixed point \(X^{\star}\). Moreover, S is a densifying mapping. Indeed, if K is a bounded subset of \(\overline{B}_{X_{\tau_{0}}}(0, \sqrt {H})\) such that \(\gamma(K) > 0\), then
$$S(K) \subseteq\lambda_{0} K + \lambda_{1} C(K) + \cdots+ \lambda _{n} C^{n}(K). $$
Hence, by the monotonicity, semi-additivity, and homogeneity properties, we get
$$\gamma\bigl(S(K)\bigr) \leq \lambda_{0} \gamma(K) + \lambda_{1} \gamma \bigl(C(K)\bigr) + \cdots+ \lambda_{n} \gamma\bigl(C^{n}(K)\bigr). $$
The fact that C is densifying shows that
$$\begin{aligned} &\gamma\bigl(C(K)\bigr) < \gamma(K), \\ &\qquad\vdots \\ &\gamma\bigl(C^{n}(K)\bigr) \leq\gamma \bigl(C^{n - 1}(K)\bigr)\leq\cdots\leq\gamma \bigl(C(K)\bigr)< \gamma(K), \end{aligned}$$
and therefore
$$ \gamma\bigl(S(K)\bigr) < (\lambda_{0} + \cdots+\lambda_{n}) \gamma(K) = \gamma (K). $$
Also, by use of Theorem 2.4 together with Theorem 3.2, \(\operatorname{Fix}(S) = \operatorname{Fix}(C) = \{X^{\star}\}\).
For \(X \in K\), let
$$\widetilde{K} = \bigcup_{n = 0}^{ + \infty }S^{n} \{X\}. $$
We have \(S(\widetilde{K}) = \bigcup_{n = 1}^{ + \infty}S^{n}\{X\} \subset\widetilde{K}\) and since \(\widetilde{K} = \{ X\} \cup S(\widetilde{K})\), this gives
$$\begin{aligned} \gamma(\widetilde{K}) &= \max\bigl\{ \gamma\bigl(\{X\}\bigr), \gamma \bigl(S( \widetilde{K})\bigr)\bigr\} = \max\bigl\{ 0, \gamma\bigl(S(\widetilde{K})\bigr) \bigr\} = \gamma \bigl(S(\widetilde{K})\bigr). \end{aligned}$$
Since S is densifying, it follows that \(\gamma(\widetilde {K}) = 0\); by the regularity property, \(\widetilde{K}\) is totally bounded, and consequently \(\overline{\widetilde{K}}\) is compact (since \(X_{\tau_{0}}\) is a Banach space). Therefore the sequence \(\{S^{n}(X)\}\) contains a convergent subsequence.
On the other hand, the fact that C satisfies the Diaz-Metcalf condition shows that, for all \(X \in\overline{B}_{X_{\tau _{0}}}(0, \sqrt{H})\backslash\{X^{\star}\}\), we have
$$\bigl\| C(X) - X^{\star} \bigr\| < \bigl\| X - X^{\star}\bigr\| . $$
This implies that
$$ \bigl\| C^{k}(X) - X^{\star} \bigr\| \leq\bigl\| X - X^{\star}\bigr\| . $$
Hence, for \(X \in\overline{B}_{X_{\tau_{0}}}(0, \sqrt{H})\backslash\{X^{\star}\}\),
$$\begin{aligned} \bigl\| S(X) - X^{\star} \bigr\| &= \Biggl\| \sum_{k = 0}^{n} \lambda_{k} C^{k}(X) - X^{\star} \Biggr\| \\ &= \Biggl\| \sum_{k = 0}^{n} \lambda_{k} C^{k}(X) - \sum_{k = 0}^{n} \lambda_{k} X^{\star} \Biggr\| \\ &\leq \sum_{k = 0}^{n} \lambda_{k} \bigl\| C^{k}(X) - X^{\star } \bigr\| \\ & < \sum_{k = 0}^{n} \lambda_{k} \bigl\| X - X^{\star} \bigr\| = \bigl\| X - X^{\star} \bigr\| . \end{aligned}$$
Then S also satisfies the assumptions of Theorem 3.1; since \(\{S^{n}(X)\}\) contains a convergent subsequence, alternative (b) holds, so \(\lim_{n \longrightarrow+ \infty} S^{n}(X)\) exists and is equal to \(X^{\star}\). □

Remark 3.1

Let K be a closed, bounded, and convex subset of a Banach space X and let \(A: K \longrightarrow K\) be a mapping. For each \(x \in K\), some sufficient conditions on A are known which ensure the convergence or the weak convergence of Kirk's process \(\{S^{n}(x)\}\) to the fixed point of A (see, for example, [16-18]), under the additional condition that the space X is uniformly convex or strictly convex. Unfortunately, in our case the Banach space \(X_{T}\) is neither strictly convex nor uniformly convex; indeed, it suffices to consider its subspace of functions \(\zeta(t)\) independent of ω, equipped with the sup norm, to see that \(X_{T}\) fails to have these properties.



The authors would like to thank the editor and the anonymous referees for their remarks and valuable comments, which helped to improve this paper.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

Laboratory of Informatics and Mathematics, University of Souk-Ahras, Souk-Ahras, Algeria
Department of Mathematics, University of Constantine 1, Constantine, Algeria


  1. Akhmerov, RR, Kamenskii, MI, Potapov, AS, Rodkina, AE, Sadovskii, BN: Measures of Noncompactness and Condensing Operators. Birkhäuser, Basel (1992)
  2. Yamada, T, Watanabe, S: On the uniqueness of solutions of stochastic differential equations. I. J. Math. Kyoto Univ. 11(1), 155-167 (1971)
  3. Yamada, T, Watanabe, S: On the uniqueness of solutions of stochastic differential equations. II. J. Math. Kyoto Univ. 11(3), 533-563 (1971)
  4. Rodkina, A: Solubility of stochastic differential equations with perturbed argument. Ukr. Math. J. 37(1), 98-103 (1985)
  5. Veretennikov, AY: On strong solutions of stochastic differential equations. Teor. Veroâtn. Primen. 24(2), 348-360 (1979)
  6. Gikhman, II, Skorohod, AV: Introduction to the Theory of Random Processes. Nauka, Moscow (1965) (in Russian)
  7. Pap, E, Hadzic, O, Mesiar, R: A fixed point theorem in probabilistic metric spaces and applications. J. Math. Anal. Appl. 202(2), 431-440 (1996)
  8. Sehgal, VM, Bharucha-Reid, AT: Fixed points of contraction mappings on probabilistic metric spaces. Theory Comput. Syst. 6(1), 97-102 (1972)
  9. Sen, MD, Karapınar, E: Some results on best proximity points of cyclic contractions in probabilistic metric spaces. J. Funct. Spaces 2015, Article ID 470574 (2015)
  10. Aghajani, A, Pourhadi, E: Application of measure of noncompactness to \(l_{1}\) solvability of infinite systems of second order differential equations. Bull. Belg. Math. Soc. Simon Stevin 22(1), 105-118 (2015)
  11. Ezzinbi, K, Taoudi, MA: Sadovskii-Krasnosel'skii type fixed point theorems in Banach spaces with application to evolution equations. J. Appl. Math. Comput. (2014). doi:10.1007/s12190-014-0836-8
  12. Losada, J, Nieto, JJ, Pourhadi, E: On the attractivity of solutions for a class of multi-term fractional functional differential equations. J. Comput. Appl. Math. (available online 23 July 2015)
  13. Mursaleen, M, Noman, A: Hausdorff measure of noncompactness of certain matrix operators on the sequence of generalized means. J. Math. Inequal. Appl. 417, 96-111 (2014)
  14. Alspach, DE: A fixed point free nonexpansive map. Proc. Am. Math. Soc. 3, 423-424 (1981)
  15. Diaz, JB, Metcalf, FT: On the set of subsequential limit points of successive approximations. Trans. Am. Math. Soc. 135, 459-485 (1969)
  16. Kirk, WA: On successive approximations for nonexpansive mappings in Banach spaces. Glasg. Math. J. 12(1), 6-9 (1971)
  17. Barbuti, U, Guerra, S: Un teorema costruttivo di punto fisso negli spazi di Banach. Rend. Ist. Mat. Univ. Trieste 4, 115-122 (1972)
  18. Ray, BK, Singh, SP: Fixed point theorems in Banach space. Indian J. Pure Appl. Math. 9, 216-221 (1978)


© Dehici and Redjel 2016