Regularization of the fractional Rayleigh–Stokes equation using a fractional Landweber method

In this paper, we consider a time-fractional backward problem for the fractional Rayleigh–Stokes equation in a general bounded domain. We propose a fractional Landweber regularization method for solving this problem. Error estimates between the regularized solution and the sought solution are obtained under both a-priori and a-posteriori choice rules for the regularization parameter.


Introduction
Fractional partial differential equations have numerous applications in applied science and engineering; they appear in fluid flow, heat conduction, and image processing (filtering, denoising [30], restoration, segmentation, edge enhancement/detection); see [6,26].
Owing to the nonlocal nature of fractional operators, fractional partial differential equations are well suited to simulating real super-diffusion and sub-diffusion phenomena. In this paper we discuss the Rayleigh-Stokes problem for a heated generalized second grade fluid with a fractional derivative. Equation (1) below arises in Newtonian fluids and in magnetohydrodynamic flows in porous media [9], and initial value problems for the fractional Rayleigh-Stokes equation were studied, for example, in [1-4, 20, 29].
In this paper, we consider a backward problem for the fractional Rayleigh-Stokes equation with variable coefficients in a bounded domain:

$$
\begin{cases}
\partial_t u - (1 + \gamma \partial_t^{\alpha}) L u = F(x,t), & (x,t) \in \Omega \times (0,T),\\
u(x,t) = 0, & (x,t) \in \partial\Omega \times (0,T),\\
u(x,T) = g(x), & x \in \Omega,
\end{cases}
\qquad (1)
$$

where Ω is a bounded domain in $\mathbb{R}^d$ (1 ≤ d ≤ 3) with sufficiently smooth boundary ∂Ω and T > 0 is a given final time. Here γ > 0 is a constant, g is the final data in $L^2(\Omega)$, $\partial_t = \partial/\partial t$, and $\partial_t^{\alpha}$ is the Riemann-Liouville fractional derivative of order α ∈ (0, 1) defined by [13,22]

$$
\partial_t^{\alpha} v(t) = \frac{d}{dt} \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha} v(s)\, ds.
$$

Let L be the second-order elliptic operator

$$
L u(x) = \sum_{i,j=1}^{d} \frac{\partial}{\partial x_i}\Big( a_{ij}(x) \frac{\partial u}{\partial x_j}\Big) + C(x) u(x),
$$

whose coefficients satisfy:
(1) $a_{ij} = a_{ji} \in C^1(\overline{\Omega})$ for $1 \le i, j \le d$;
(2) there exists a constant χ > 0 such that $\sum_{i,j=1}^{d} a_{ij}(x)\xi_i\xi_j \ge \chi |\xi|^2$ for all $x \in \overline{\Omega}$ and $\xi \in \mathbb{R}^d$;
(3) the function $C \in C(\overline{\Omega})$ satisfies $C(x) \le 0$, $x \in \overline{\Omega}$.
Our goal is to reconstruct the initial data h(x) = u(0, x) from the given data (g, F). In practice we observe only approximate data $(g^{\delta}, F^{\delta})$ such that

$$
\|g^{\delta} - g\| \le \delta, \qquad \|F^{\delta}(t, \cdot) - F(t, \cdot)\| \le \delta,
$$

where ‖·‖ denotes the $L^2(\Omega)$-norm and δ > 0 is the noise level. The corresponding direct problem for (1) is stated analogously, with the initial value prescribed instead of the final value. The backward problem for the time-fractional diffusion equation has been studied by many authors; see, for example, [5,18,23,24]. Such a problem is ill-posed in the sense of Hadamard: the solution (if it exists) does not depend continuously on the given data. Indeed, a small error in the observed data can produce a large error in the solution, which makes numerical computation troublesome. Hence regularization is needed. There are very few results on the backward problem for the fractional Rayleigh-Stokes equation; the first regularization result for such problems appears to be that of Tuan et al. [19], who regularized a Rayleigh-Stokes problem with random noise.
In this paper, we do not follow the method of [19]. Instead, we present another method, the fractional Landweber method, to construct a regularized solution. This method was introduced by Klann and Ramlau [15] for linear ill-posed problems. The main idea of the fractional Landweber method is an iterative sequence, similar to the classical iterative methods of [7,11,12,16,25]. Building on this idea, several authors developed a fractional Tikhonov method [10,21,27] and a fractional Landweber method [28] for other linear ill-posed models.
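As an informal illustration (not the paper's exact scheme), the fractional Landweber method of Klann and Ramlau can be viewed in the singular-value domain as the classical Landweber filter raised to a fractional power. The parameter names `a`, `theta`, and the sample spectrum below are illustrative assumptions:

```python
import numpy as np

def fractional_landweber_filter(mu, beta, a=1.0, theta=0.75):
    """Filter factor applied to the singular values mu after beta iterations.

    theta = 1 recovers the classical Landweber filter; theta in (1/2, 1)
    gives the fractional variant. Requires a * mu**2 <= 1 for stability.
    """
    return (1.0 - (1.0 - a * mu**2) ** beta) ** theta

# Toy singular values of a compact operator, in decreasing order.
mu = np.array([1.0, 0.5, 0.1, 0.01])
f = fractional_landweber_filter(mu, beta=50)
# Large singular values are kept (filter near 1); small ones are damped,
# which is exactly what stabilizes the ill-posed inversion.
```

The filter multiplies each coefficient $g_n/\mu_n$ of the naive inverse, so damping the small $\mu_n$ prevents noise amplification.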
The outline of the paper is as follows. Sect. 2 discusses mild solutions and the ill-posedness of the problem. In Sect. 3, we introduce the fractional Landweber regularization method and present a convergence estimate under an a-priori assumption on the exact solution; the a-posteriori parameter choice rule is also discussed.
This implies an expansion in the eigenbasis $\{X_n\}$, where $h_n = \langle h(\cdot), X_n \rangle$. For the convenience of the reader, we repeat the relevant material from [19].

Lemma 2.1
Assume that α ∈ (0, 1). The following estimates hold:
• There exists $D_1(T, \alpha) > 0$ such that (12) holds.
• There exists a constant $D_2(\alpha) > 0$ such that the corresponding bound holds, with C being a positive constant independent of n.
Next we present a representation of the mild solution of problem (1). Assume that problem (1) has a unique solution u; then u satisfies (8). Letting t = T, we obtain a relation for the final value, where $G_n = \langle G(\cdot), X_n \rangle$. Solving this relation for $h_n$ and substituting $h_n$ into (8) yields the representation of u. From [1, Theorem 2.2], the functions Q(t, n, α), n = 1, 2, . . . , are completely monotone for t ≥ 0.

Our main goal is to find the initial value u(0, x) = h(x) from the given data (g, F). To find h(x), we need to solve the integral equation with kernel k(x, ζ). Since k(x, ζ) = k(ζ, x), the operator K is self-adjoint. We now show that K is compact. Consider the finite rank operators $K_M$; one checks that $K_M \to K$ in the operator norm on $L^2(\Omega)$. Hence K is a compact operator and, by a result of Kirsch [14], the problem is ill-posed. We therefore introduce the fractional Landweber regularization method to recover h.
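To see numerically why compactness of K leads to ill-posedness, one can discretize a symmetric kernel and inspect the decay of its singular values: rapid decay means the reciprocals $1/\mu_n$ used in inversion blow up. The kernel below is a generic symmetric toy example, not the paper's k(x, ζ):

```python
import numpy as np

n = 200
x = np.linspace(0.0, np.pi, n)
X, Z = np.meshgrid(x, x)
# Symmetric Green's-function-like kernel k(x, z) = min(x, z) * (pi - max(x, z)).
k = np.minimum(X, Z) * (np.pi - np.maximum(X, Z))
K = k * (np.pi / n)  # simple quadrature weight for the integral operator

# Singular values of the discretized operator, in decreasing order.
mu = np.linalg.svd(K, compute_uv=False)
# mu decays like 1/m**2, so inverting K amplifies data noise by mu[0]/mu[m].
```

The observed polynomial decay of `mu` is the discrete signature of a compact operator with no bounded inverse.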
Let us denote by μ n the singular values for the linear self-adjoint compact operator K:

The ill-posedness of a backward time-fractional problem
To illustrate the ill-posedness of the backward problem, we give an example. Let (g, F) = (0, 0) and let $(\tilde g, \tilde F)$ be perturbed data depending on a parameter q. It is easy to see that $(\tilde g, \tilde F)$ is an approximation of (g, F) when q is large enough. Using $(\tilde g, \tilde F)$, we get the corresponding initial data $\tilde h$ and the function $\tilde H$. Using Lemma 2.1, we obtain a lower bound for the resulting solution error, which does not vanish as q → ∞ even though the data error does. We conclude that the backward problem is ill-posed in the Hadamard sense. Hence a regularization method is necessary.
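The mechanism behind this example can be mimicked with a toy backward-diffusion factor: replacing the paper's Q(T, n, α) by the qualitatively similar decay $e^{-n^2 T}$ shows how a fixed data error δ in the n-th mode is blown up by the reciprocal factor when solving backward. This is purely illustrative and not the paper's exact kernel:

```python
import numpy as np

# A data perturbation of size delta in mode n at time T corresponds to an
# initial-value perturbation of size delta / Q_n(T); with the stand-in
# Q_n(T) ~ exp(-n**2 * T), the amplification is delta * exp(n**2 * T).
T, delta = 1.0, 1e-3
for n_mode in (1, 3, 5):
    amplification = delta * np.exp(n_mode**2 * T)
    print(f"mode {n_mode}: data error {delta:.0e} -> solution error about {amplification:.3e}")
```

Even for moderate mode numbers the solution error dwarfs the data error, which is exactly the discontinuous dependence asserted above.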

Conditional stability
We impose the following a-priori bound condition on the initial value u(0, ·), where P and m are positive constants. We now establish a conditional stability estimate for this backward problem, where P is a positive constant and $P(T, \alpha) = 2^{\frac{m}{2m+2}}$.

Proof Using (15) and Hölder's inequality, we split the norm of interest into two terms, $I_1$ and $I_2$. From Lemma 2.2 we estimate $I_1$; to estimate $I_2$ we use Lemma 2.1. Combining (30), (32), and (33), we obtain the stated estimate, which completes the proof of the theorem.

Regularization method and error estimate under two parameter choice rules
In this section, we introduce the fractional Landweber regularization method and also analyze the convergence properties of regularization methods under two parameter choice rules.

The a-priori parameter choice
Theorem 3.1 Let $h \in L^2(\Omega)$, given by (15), be the initial value of problem (1). Suppose that the a-priori bound condition (28) and condition (23) hold. Then the following error estimate holds between the exact solution and the regularized solution with exact data.

Proof Using Parseval's equality, then Lemma 2.1, and finally Lemma 3.2, we obtain the stated bound, where [Λ] denotes the largest integer less than or equal to Λ.
Proof From the triangle inequality and Parseval's equality, together with (53), (54), and Lemma 3.1, we bound the two error terms. Combining the two inequalities (50) and (55) and choosing the regularization parameter β as $\beta = [(P/\delta)^{\frac{2}{m+1}}]$ gives the stated convergence rate.

A-posteriori parameter choice rule and convergence analysis
In this subsection, we give a convergence estimate between the regularized solution and the exact solution using an a-posteriori choice rule for the regularization parameter. Following Morozov's discrepancy principle [8], the general a-posteriori rule can be formulated as in (58), where ℘ > 1 is a constant independent of δ and the regularization parameter β > 0 is chosen as the first iteration index for which (58) holds.
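A minimal sketch of this stopping rule in a diagonal (singular-value) setting: increase the iteration count β until the residual of the regularized solution first drops below $\tau\delta$ with $\tau > 1$. The toy spectrum, the filter parameters `a = 1.0`, `theta = 0.75`, and all variable names are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def discrepancy_beta(mu, g_delta, delta, tau=1.1, a=1.0, theta=0.75, beta_max=10_000):
    """First beta whose residual ||(I - K K_beta^+) g_delta|| <= tau * delta."""
    for beta in range(1, beta_max + 1):
        filt = (1.0 - (1.0 - a * mu**2) ** beta) ** theta
        # In coefficients, the residual of the regularized solution is
        # g_delta - mu * (filt / mu) * g_delta = (1 - filt) * g_delta.
        residual = np.linalg.norm((1.0 - filt) * g_delta)
        if residual <= tau * delta:
            return beta
    return beta_max

mu = 1.0 / np.arange(1, 21)                 # toy singular values
g = mu.copy()                               # toy exact data coefficients
rng = np.random.default_rng(0)
noise = 1e-3 * rng.standard_normal(g.shape) / np.sqrt(g.size)
g_delta = g + noise
delta = np.linalg.norm(noise)
beta = discrepancy_beta(mu, g_delta, delta)
```

Stopping at the noise level rather than iterating to convergence is what prevents the iteration from fitting the noise.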
Proof From the results above, the claim follows.

Remark 3.1 In this paper, without loss of generality, we assume that the noisy data are large enough that $0 < \wp\delta \le \|H^{\delta}\|_{L^2(\Omega)}$. By Lemma 3.3, there then exists a unique minimal solution of inequality (58). Let β satisfy (58); then we have the following inequality:

Theorem 3.3 Suppose that the a-priori condition (28) holds and that β is chosen by the discrepancy rule (58). Then the following convergence estimate holds.
Proof From the triangle inequality, Lemma 3.4, and (55), we obtain a bound for the first error term. Using the triangle inequality, the a-priori bound condition, the stopping rule (58), and the fact that $0 < aQ^2(T, n, \alpha) < 1$, the second bound follows.
Next, we use the composite Simpson's rule to approximate the integrals. Suppose that the interval [a, b] is split into k subintervals, with k an even number. The composite Simpson's rule is then

$$
\int_a^b f(z)\, dz \approx \frac{h}{3}\Big[ f(z_0) + 4 \sum_{\substack{1 \le i \le k-1 \\ i \text{ odd}}} f(z_i) + 2 \sum_{\substack{2 \le i \le k-2 \\ i \text{ even}}} f(z_i) + f(z_k) \Big],
$$

where $z_i = a + ih$ for i = 0, 1, . . . , k with $h = \frac{b-a}{k}$; in particular, $z_0 = a$ and $z_k = b$. In the following simulation results, we discretize the time and spatial variables accordingly. Instead of observing the exact data (g, F), we get approximate data $(g^{\delta}, F^{\delta})$, where δ > 0 is the noise level. The couple $(g^{\delta}, F^{\delta})$ plays the role of measured data with random noise (see Fig. 1, which shows the input data and its approximations):

$$
g^{\delta}(\cdot) = g(\cdot) + \delta\,(\mathrm{rand}(\cdot) + 1), \qquad F^{\delta}(\cdot) = F(\cdot) + 2\delta\, \mathrm{rand}(\cdot).
$$
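The composite Simpson's rule just described can be implemented directly; the check against $\int_0^{\pi} \sin z \, dz = 2$ below is only a sanity test, not part of the paper's experiments:

```python
import numpy as np

def composite_simpson(f, a, b, k):
    """Composite Simpson's rule on [a, b] with k (even) subintervals."""
    if k % 2 != 0:
        raise ValueError("k must be even")
    z = np.linspace(a, b, k + 1)   # nodes z_i = a + i*h, h = (b - a) / k
    fz = f(z)
    h = (b - a) / k
    # h/3 * [f(z_0) + 4*(odd-index terms) + 2*(interior even-index terms) + f(z_k)]
    return (h / 3.0) * (fz[0] + fz[-1] + 4.0 * fz[1:-1:2].sum() + 2.0 * fz[2:-1:2].sum())

approx = composite_simpson(np.sin, 0.0, np.pi, 20)
```

Since the rule is exact for cubics, its error decays like $h^4$, which is more than sufficient for the quadratures in the simulation.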
To facilitate comparison, we present the results of this study alongside those of [19] in the following two subsections.

Case 1: the Landweber method
We choose the regularization parameter $\beta = [(P/\delta)^{\frac{2}{m+1}}]$; then we get the absolute error estimate between the exact solution and its regularized solution as follows, where $u^{\delta} = u^{\delta}_{\beta,\vartheta}$ is defined by (38).

Case 2: the filter regularization method
In this case, we present the result shown in [19]. There the authors considered a general filter regularization method and obtained a regularized solution with filter

$$
R_n(\delta) = \frac{P_n(T)}{\delta + P_n(T)},
$$

where

$$
P_n(T) = \int_0^{\infty} e^{-\xi T} M_n(\xi)\, d\xi, \qquad
M_n(\xi) = \frac{\gamma}{\pi} \cdot \frac{\lambda_n \sin(\alpha\pi)\, \xi^{\alpha}}{(-\xi + \lambda_n \gamma \xi^{\alpha} \cos(\alpha\pi) + \lambda_n)^2 + (\lambda_n \gamma \xi^{\alpha} \sin(\alpha\pi))^2}.
$$

The absolute error estimates between the exact and regularized solutions are reported in case 1 (Table 1) and case 2 (Table 2), for α = 0.4 in case 1 (Table 3) and case 2 (Table 4), for α = 0.6 in case 1 (Table 5) and case 2 (Table 6), and for α = 0.8 in case 1 (Table 7) and case 2 (Table 8), respectively. We also present 2D graphs of the exact and regularized solutions of the two cases at t = 0.1 for δ = 0.1 (Fig. 2), δ = 0.01 (Fig. 3), and δ = 0.001 (Fig. 4). In addition, 3D graphs of the solutions for α = 0.8 on the domain (t, x) ∈ [0, 1] × [0, π] are shown in Figs. 5-8. From these results it is clear that the smaller the input error, the smaller the output error: as δ tends to zero, the regularized solution approaches the exact solution, and the convergence results of case 1 are better than those of case 2. The experimental convergence orders are consistent with the theoretical analysis.