Open Access

Two dimensional determination of source terms in linear parabolic equation from the final overdetermination

Advances in Difference Equations 2015, 2015:98

https://doi.org/10.1186/s13662-015-0401-2

Received: 1 October 2014

Accepted: 3 February 2015

Published: 26 March 2015

Abstract

A case of unsteady heat flow through a plane wall, formulated as \(u_{t}(x,y,t)- \operatorname{div} (k(x,y) \nabla u(x,y,t)) = F(x,y,t)\) with the Robin boundary conditions \(-k(1,y)u_{x}(1,y,t)= \nu_{1} [u(1,y,t)-T_{0}(t)]\) and \(-k(x,1)u_{y}(x,1,t)= \nu_{2} [u(x,1,t)-T_{1}(t)]\), is investigated; the triple \(\omega:=\{F(x,y,t);T_{0}(t);T_{1}(t)\}\) is to be determined from the measured final data \(\mu_{T}(x,y)=u(x,y,T)\). It is proved that the Fréchet gradient of the cost functional \(J(\omega )=\|\mu_{T}(x,y)-u(x,y,T;\omega)\|^{2}\) can be found via the solution of the adjoint parabolic problem. Lipschitz continuity of the gradient is derived. The obtained results permit one to prove the existence of a quasi-solution of the inverse problem. A steepest descent method with line search, which produces a monotone iteration scheme based on the gradient, is formulated. Some convergence results are given.

Keywords

two dimensional determination; cost functional; steepest descent method

1 Introduction

Consider the one dimensional physical system in Figure 1, where the left of the solid is full of hot gas. Whenever a temperature gradient exists in the solid medium, heat will flow from the higher-temperature region to the lower-temperature region. According to Fourier’s law, for a homogeneous, isotropic solid, the following equation holds:
$$ q(x,t)=-ku_{x}(x,t), $$
(1.1)
where \(q(x,t)\) represents heat flow per unit time, per unit area of the isothermal surface in the direction of decreasing temperature, \(u(x,t)\) is the temperature distribution in the solid, and k is called the thermal conductivity.
Figure 1. The one dimensional physical model.

However, in practice, the thermal conductivity may depend on x, namely, \(k:=k(x)\). Besides, there may be a heat source \(g(x,t)\) in the solid. Under these conditions, the physical system can be formulated as
$$ \left \{ \begin{array}{@{}l@{\quad}l} u_{t}=(k(x)u_{x}(x,t))_{x} + g(x,t), &(x,t)\in\tilde{\Omega}_{T},\\ u(x,0)=\mu_{0}(x), &x\in(0,L_{1}), \\ u_{x}(0,t)=0,\quad -k(L_{1})u_{x}(L_{1},t)=\sigma[u(L_{1},t)-T_{0}(t)], &t \in(0,T], \end{array} \right . $$
(1.2)
where \(\tilde{\Omega}_{T}:=\{0< x<L_{1},0<t\leqslant T\}\), and \(T_{0}(t)\) is the temperature of the cold gas on the right of the solid. \(-k(L_{1})u_{x}(L_{1},t)=\sigma[u(L_{1},t)-T_{0}(t)]\) represents the convection at \(x=L_{1}\) according to Newton’s law, and the constant σ is called the convection coefficient.
It is desired to find the pair \(\omega:=\{g(x,t);T_{0}(t)\}\) from the final state observation
$$ \mu_{T}(x)=u(x,T). $$
(1.3)
The mathematical model (1.2) can also arise in hydrology [1], material sciences [2], and transport problems [3]. In [4] a tsunami model based on shallow water theory is studied. The authors treat the inverse problem of determining an unknown initial tsunami source \(q(x,y)\) by using measurements \(f_{m}(t)\) of the height of a passing tsunami wave at a finite number of given points \((x_{m},y_{m})\), \(m=1,2,\ldots,M\), of the coastal area. Such inverse problems are nonlinear, and it is well known that they are generally ill-posed, i.e. the existence, uniqueness, and stability of their solutions are not always guaranteed [5]. There are many contributions for linear parabolic equations with final overdetermination (see, for instance, [6-12]). The time-dependent heat source \(H(t)\) of separable sources of the form \(g(x,t)=F(x)H(t)\) is investigated in [13]. For \(g(x,t)=p(x)u\), the author of [14] proved the existence and uniqueness of \(p(x)\), and the local well-posedness of the inverse problem was discussed in [15]. The simultaneous reconstruction of the initial temperature and the heat radiative coefficient was investigated in [16] by using the measurement of temperature given at a fixed time and the measurement of the temperature in a subregion of the physical domain. A determination of the unknown function \(p(x)\) in the source term \(g = p(x)f (u)\), via fixed point theory, was given in [17]. Based on the optimal control framework, [18] considered the determination of a pair \((p,u)\) in the nonlinear parabolic equation
$$u_{t}-u_{xx}+p(x)f(u)=0, $$
with initial and homogeneous Dirichlet boundary conditions:
$$u(x,0)=\phi(x),\qquad u |_{x=0}=u |_{x=L_{1}}=0, $$
from the overspecified data \(u(x,T)=\mu_{T}(x)\). The local uniqueness and stability of the solution were proved. In [19], a weak solution approach for the pair \(\omega(x, t) := \{g(x, t); T_{0}(t)\}\) was given via a steepest descent method on minimizing the cost functional
$$ J(\omega)=\bigl\| \mu_{T}(x)-u(x,T;\omega) \bigr\| ^{2}_{L^{2}[0,L_{1}]}. $$
(1.4)
In this contribution, we consider the corresponding two dimensional problem (see Figure 2) as follows:
$$ \left \{ \begin{array}{@{}l@{\quad}l} u_{t}(x,y,t)- \operatorname{div} (k(x,y) \nabla u(x,y,t)) = F(x,y,t), & \mbox{in } \Omega_{T} , \\ u_{x}(0,y,t)= u_{y}(x,0,t)= 0, & x,y \in(0,1), t \in(0,T], \\ -k(1,y)u_{x}(1,y,t)= \nu_{1} [u(1,y,t)-T_{0}(t)], & x,y \in(0,1), t \in(0,T], \\ -k(x,1)u_{y}(x,1,t)= \nu_{2} [u(x,1,t)-T_{1}(t)], & x,y \in(0,1), t \in(0,T], \\ u(x,y,0)= u_{0}(x,y), & (x,y) \in\Omega, \end{array} \right . $$
(1.5)
where \(\Omega_{T}=\Omega\times(0,T)=(0,1)\times(0,1) \times(0,T)\) and \(\nu_{1},\nu_{2}>0\). The inverse problem here is to determine \(\omega:=\{F(x,y,t);T_{0}(t);T_{1}(t)\}\) from the final state observation (the overspecified data)
$$ u_{T}(x,y)=u(x,y,T). $$
(1.6)
Figure 2. The two dimensional physical model.
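
To make the direct problem concrete, the following sketch advances (1.5) on a uniform grid of \([0,1]^{2}\) by explicit Euler time stepping. It is not the authors' scheme: the ghost-node treatment of the Neumann and Robin conditions, the face-averaged conductivity, and all names such as `solve_direct` are illustrative assumptions.

```python
# Illustrative explicit finite-difference sketch for the direct problem (1.5).
# Grid: u[i, j] ~ u(i*h, j*h), h = 1/(N-1); axis 0 is x, axis 1 is y.
import numpy as np

def solve_direct(k, F, T0, T1, u0, nu1, nu2, T=1.0, nt=5000):
    """k, u0 : (N, N) arrays; F(t) -> (N, N) array; T0(t), T1(t) -> floats.
    Returns the final state u(., ., T).  Explicit Euler is only conditionally
    stable: roughly dt <= h**2 / (4 * k.max()) is required."""
    N = k.shape[0]
    h = 1.0 / (N - 1)
    dt = T / nt
    u = u0.copy()
    K = np.pad(k, 1, mode='edge')
    # conductivity averaged onto the four faces surrounding each node
    kxp = 0.5 * (K[1:-1, 1:-1] + K[2:, 1:-1])
    kxm = 0.5 * (K[1:-1, 1:-1] + K[:-2, 1:-1])
    kyp = 0.5 * (K[1:-1, 1:-1] + K[1:-1, 2:])
    kym = 0.5 * (K[1:-1, 1:-1] + K[1:-1, :-2])
    for n in range(nt):
        t = n * dt
        U = np.pad(u, 1, mode='edge')
        U[0, :] = U[2, :]            # ghost row enforcing u_x(0, y, t) = 0
        U[:, 0] = U[:, 2]            # ghost column enforcing u_y(x, 0, t) = 0
        # Robin conditions: -k u_x = nu1 (u - T0) at x = 1, -k u_y = nu2 (u - T1) at y = 1
        U[-1, 1:-1] = U[-3, 1:-1] - 2.0 * h * nu1 * (u[-1, :] - T0(t)) / k[-1, :]
        U[1:-1, -1] = U[1:-1, -3] - 2.0 * h * nu2 * (u[:, -1] - T1(t)) / k[:, -1]
        # five-point approximation of div(k grad u) with face conductivities
        div_flux = (kxp * (U[2:, 1:-1] - U[1:-1, 1:-1]) - kxm * (U[1:-1, 1:-1] - U[:-2, 1:-1])
                    + kyp * (U[1:-1, 2:] - U[1:-1, 1:-1]) - kym * (U[1:-1, 1:-1] - U[1:-1, :-2])) / h**2
        u = u + dt * (div_flux + F(t))   # forward Euler step for u_t = div(k grad u) + F
    return u
```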

In this work, based on a weak solution approach, we will show how the inverse problem can be formulated and solved for \(\omega:=\{ F(x,y,t);T_{0}(t);T_{1}(t)\}\). Moreover, we will prove that the gradient \(J'(\omega)\) of the cost (auxiliary) functional
$$J(\omega)=\bigl\| \mu_{T}(x,y)-u(x,y,T;\omega)\bigr\| ^{2} $$
can be expressed via the solution \(\varphi= \varphi(x,y, t;\omega)\) of the appropriate adjoint problem.

This paper is organized as follows. In Section 2, we give an analysis of the two dimensional problem and prove the Fréchet-differentiability of \(J(\omega)\). In Section 3, we present the framework of a steepest descent iterate with line search of the two dimensional inverse problem, where \(J'(\omega)\) can be found via an adjoint parabolic problem. The convergence of the sequence is analyzed in Section 4. Conclusions are stated in Section 5.

2 The analysis of the two dimensional problem

The direct problem (1.5) is to find the solution \(u\) for a given \(\omega\). Firstly, we define \(W:=\mathcal{F} \times\mathcal{T}_{0} \times\mathcal{T}_{1}\), the set of admissible unknown sources \(F(x,y,t)\), \(T_{0}(t)\), \(T_{1}(t)\), with
$$\begin{aligned}& \begin{aligned} &F(x,y,t) \in L^{2}(\Omega_{T}), \qquad T_{0}(t), T_{1}(t) \in L^{2}[0,T], \\ &0< T_{0*}\leq T_{0}(t), T_{1}(t) \leq T_{0}^{*} < +\infty, \end{aligned} \end{aligned}$$
(2.1)
$$\begin{aligned}& k(x,y)>0,\qquad k(x,y)\in L^{\infty}(\Omega),\qquad u_{0}(x,y), u_{T}(x,y)\in L^{2}(\Omega). \end{aligned}$$
(2.2)
It is obvious that the set W is a closed and convex subset in \(L^{2}(\Omega_{T}) \times L^{2}[0,T] \times L^{2}[0,T]\). The scalar product in W is defined as
$$\begin{aligned} (\omega_{1}, \omega_{2})_{W} := {}&\int _{\Omega_{T}} F_{1}(x,y,t)F_{2}(x,y,t)\,dx \,dy \,dt + \int_{0}^{T} T_{0}^{(1)}(t)T_{0}^{(2)}(t) \,dt \\ &{}+ \int_{0}^{T} T_{1}^{(1)}(t)T_{1}^{(2)}(t) \,dt,\quad \forall \omega_{1}, \omega_{2} \in W, \end{aligned}$$
where \(\omega_{m}:=\{F_{m}; T_{0}^{(m)}; T_{1}^{(m)}\}\), \(m=1,2\). For a given \(\omega\in W\), the direct problem (1.5) is understood in the weak sense: its solution is a function \(u \in H^{1,0}(\Omega_{T})\) satisfying the identity
$$\begin{aligned} &\int_{\Omega}u(x,y,T)v(x,y,T)\,dx \,dy - \int_{\Omega}u_{0}(x,y)v(x,y,0)\,dx\,dy \\ &\qquad{}- \int_{\Omega_{T}}(uv_{t}-ku_{x}v_{x}-ku_{y}v_{y}) \,dx\,dy\,dt \\ &\quad=-\nu_{1}\int^{T}_{0}\int ^{1}_{0} \bigl[u(1,y,t)-T_{0}(t) \bigr]v(1,y,t)\,dy\,dt \\ &\qquad{}-\nu_{2}\int^{T}_{0}\int ^{1}_{0} \bigl[u(x,1,t)-T_{1}(t) \bigr]v(x,1,t)\,dx\,dt + \int_{\Omega_{T}} F(x,y,t)v(x,y,t)\,dx\,dy\,dt, \end{aligned}$$
(2.3)
for all \(v\in H^{1,0}(\Omega_{T})\). Here \(H^{1,0}(\Omega_{T})\) is the Sobolev space with the norm
$$\| u\|_{1}:= \biggl\{ \int_{\Omega _{T}} \bigl[u^{2}+u_{x}^{2}+u_{y}^{2} \bigr]\,dx\,dy\,dt \biggr\} ^{1/2}. $$
It is also known that, under conditions (2.1) and (2.2), the weak solution \(u(x,y,t)\in H^{1,0}(\Omega_{T})\) of the direct problem (1.5) exists and is unique [20].
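
For orientation, (2.3) is obtained by multiplying the equation in (1.5) by a test function \(v\in H^{1,0}(\Omega_{T})\), integrating over \(\Omega_{T}\), and applying the divergence theorem together with the boundary conditions; a brief sketch of the two ingredients is
$$\begin{aligned} \int_{\Omega_{T}} u_{t}v \,dx\,dy\,dt ={}& \int_{\Omega}u(x,y,T)v(x,y,T)\,dx\,dy - \int_{\Omega}u_{0}(x,y)v(x,y,0)\,dx\,dy - \int_{\Omega_{T}} uv_{t}\,dx\,dy\,dt, \\ -\int_{\Omega_{T}} \operatorname{div} \bigl(k(x,y)\nabla u \bigr)v \,dx\,dy\,dt ={}& \int_{\Omega_{T}} k ( u_{x}v_{x}+u_{y}v_{y} ) \,dx\,dy\,dt +\nu_{1}\int^{T}_{0}\int^{1}_{0} \bigl[u(1,y,t)-T_{0}(t) \bigr]v(1,y,t)\,dy\,dt \\ &{} +\nu_{2}\int^{T}_{0}\int^{1}_{0} \bigl[u(x,1,t)-T_{1}(t) \bigr]v(x,1,t)\,dx\,dt, \end{aligned}$$
where the second identity uses the divergence theorem, the zero-flux conditions at \(x=0\), \(y=0\), and the Robin conditions at \(x=1\), \(y=1\). Substituting both identities into \(\int_{\Omega_{T}}(u_{t}-\operatorname{div}(k\nabla u))v\,dx\,dy\,dt = \int_{\Omega_{T}}Fv\,dx\,dy\,dt\) and rearranging gives (2.3).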

2.1 Method discussion

To solve the inverse problem (1.5)-(1.6), we introduce a cost functional
$$J(\omega)=\int_{\Omega}\bigl[u(x,y,T;\omega)-u_{T}(x,y) \bigr]^{2}\,dx\,dy. $$
We are going to give an iterative solution to this kind of problem. To begin, we study the derivative of the cost functional. Let us consider the first variation of the cost functional:
$$\begin{aligned} \triangle J (\omega) :=& J(\omega+\triangle\omega) -J(\omega) \\ = & \int_{\Omega} \bigl[u(x,y,T;\omega+\triangle \omega)-u_{T}(x,y) \bigr]^{2}\,dx\,dy - \int _{\Omega} \bigl[u(x,y,T;\omega)-u_{T}(x,y) \bigr]^{2}\,dx\,dy \\ = & \int_{\Omega} \triangle u(x,y,T;\omega) \bigl(\triangle u(x,y,T;\omega )+2 \bigl(u(x,y,T;\omega)-u_{T}(x,y) \bigr) \bigr) \,dx \,dy \\ = & 2\int_{\Omega} \bigl(u(x,y,T;\omega)-u_{T}(x,y) \bigr) \triangle u(x,y,T;\omega )\,dx\,dy \\ &{}+ \int_{\Omega} \bigl(\triangle u(x,y,T;\omega) \bigr)^{2} \,dx\,dy, \end{aligned}$$
(2.4)
where
$$\begin{aligned}& \omega+ \triangle\omega:= \{F+ \triangle F; T_{0} + \triangle T_{0};T_{1}+\triangle T_{1}\} \in W, \\& \triangle u(x,y,t;\omega):=u(x,y,t;\omega+\triangle\omega )-u(x,y,t;\omega) \in H^{1,0}(\Omega_{T}). \end{aligned}$$
Furthermore, the function \(\triangle u := \triangle u(x,y,t; \omega)\) is the solution of the following parabolic problem:
$$ \left \{ \begin{array}{@{}l@{\quad}l} \triangle u_{t} - \operatorname{div}(k(x,y)\nabla\triangle u) = \triangle F(x,y,t), & \mbox{in } \Omega_{T}, \\ \triangle u_{x}(0,y,t)= \triangle u_{y}(x,0,t)= 0, & x,y \in(0,1), \ t \in(0,T], \\ -k(1,y)\triangle u_{x}(1,y,t)= \nu_{1} (\triangle u(1,y,t)-\triangle T_{0}(t)), & x,y \in(0,1), t \in(0,T], \\ -k(x,1)\triangle u_{y}(x,1,t)= \nu_{2} (\triangle u(x,1,t)-\triangle T_{1}(t)), & x,y \in(0,1), t \in(0,T], \\ \triangle u(x,y,0)=0, & (x,y) \in\Omega. \end{array} \right . $$
(2.5)

Now, we are ready to estimate the derivative of the cost functional \(J(\omega)\). Firstly, we estimate the first term in (2.4), i.e. \(2\int_{\Omega} (u(x,y,T;\omega )-u_{T}(x,y)) \triangle u(x,y,T;\omega)\,dx\,dy\).

Lemma 2.1

Let \(\omega, \omega+ \triangle\omega\in W\) be given elements. If \(u(x,y,t; \omega)\) is the corresponding solution of the direct problem (1.5), and \(\varphi(x,y,t) \in H^{1,0}(\Omega_{T})\) is the solution of the backward parabolic problem
$$ \left \{ \begin{array}{@{}l@{\quad}l} \varphi_{t}(x,y,t)+ \operatorname{div} (k(x,y) \nabla\varphi(x,y,t)) = 0, & \textit{in } \Omega_{T} , \\ \varphi_{x}(0,y,t)= \varphi_{y}(x,0,t)= 0, & x,y \in(0,1), t \in (0,T], \\ -k(1,y)\varphi_{x}(1,y,t)= \nu_{1} \varphi(1,y,t), & x,y \in(0,1), t \in(0,T], \\ -k(x,1)\varphi_{y}(x,1,t)= \nu_{2} \varphi(x,1,t), & x,y \in(0,1), t \in(0,T], \\ \varphi(x,y,T)= u(x,y,T;\omega)-u_{T}(x,y), & (x,y) \in\Omega, \end{array} \right . $$
(2.6)
then, for all \(\omega\in W\), the following integral identity holds:
$$\begin{aligned} & \int_{\Omega} \bigl(u(x,y,T; \omega)-u_{T}(x,y) \bigr) \triangle u(x,y,T;\omega )\,dx\,dy \\ &\quad = \nu_{1} \int_{0}^{T} \triangle T_{0}(t) \int_{0}^{1} \varphi(1,y,t; \omega) \,dy\,dt +\nu_{2}\int_{0}^{T} \triangle T_{1}(t) \int_{0}^{1} \varphi(x,1,t;\omega) \,dx\,dt \\ &\qquad{} + \int_{\Omega_{T}} \triangle F(x,y,t) \varphi(x,y,t) \,dx \,dy\,dt. \end{aligned}$$
(2.7)

Proof

Taking the final condition at \(t=T\) in (2.6) and the boundary conditions in (2.6) and (2.5) into account, we can deduce that
$$\begin{aligned} & \int_{\Omega} \bigl(u(x,y,T;\omega)-u_{T}(x,y) \bigr) \triangle u(x,y,T;\omega )\,dx\,dy \\ &\quad = \int_{\Omega} \varphi(x,y,T;\omega) \triangle u(x,y,T; \omega )\,dx\,dy \\ &\quad = \int_{\Omega} \int_{0}^{T} \bigl(\varphi(x,y,t;\omega) \triangle u(x,y,t;\omega) \bigr)_{t} \,dt \,dx\,dy \\ &\quad = \int_{\Omega_{T}} \varphi_{t}(x,y,t;\omega) \triangle u(x,y,t;\omega ) + \varphi(x,y,t;\omega) \triangle u_{t}(x,y,t; \omega) \,dx\,dy\,dt \\ &\quad = \int_{\Omega_{T}} -\operatorname{div} \bigl(k(x,y) \nabla\varphi(x,y,t) \bigr) \triangle u(x,y,t;\omega) \\ &\qquad{} + \varphi(x,y,t;\omega) \operatorname{div} \bigl(k(x,y)\nabla\triangle u \bigr) \,dx\,dy\,dt + \int_{\Omega_{T}} \varphi(x,y,t) \triangle F(x,y,t)\,dx\,dy\,dt \\ &\quad = - \int_{0}^{T}\int_{0}^{1} k(1,y) \varphi_{x}(1,y,t) \triangle u(1,y,t;\omega) \,dy\,dt \\ &\qquad{} -\int_{0}^{T}\int_{0}^{1} k(x,1) \varphi_{y}(x,1,t)\triangle u(x,1,t;\omega) \,dx\,dt \\ &\qquad{} + \int_{0}^{T}\int_{0}^{1} k(1,y)\triangle u_{x}(1,y,t) \varphi(1,y,t;\omega ) \,dy\,dt \\ &\qquad{} + \int_{0}^{T}\int_{0}^{1} k(x,1)\triangle u_{y}(x,1,t) \varphi(x,1,t;\omega) \,dx\,dt + \int _{\Omega_{T}} \varphi(x,y,t) \triangle F(x,y,t)\,dx\,dy\,dt \\ &\quad = \nu_{1}\int_{0}^{T} \triangle T_{0}(t) \int_{0}^{1} \varphi(1,y,t; \omega) \,dy\,dt +\nu_{2}\int_{0}^{T} \triangle T_{1}(t) \int_{0}^{1} \varphi(x,1,t;\omega) \,dx\,dt \\ &\qquad{} + \int_{\Omega_{T}} \triangle F(x,y,t) \varphi(x,y,t) \,dx \,dy\,dt. \end{aligned}$$

This completes the proof of Lemma 2.1. □

Remark

We define the parabolic problem (2.6) as the adjoint problem corresponding to the inverse problem (1.5)-(1.6). Note that (2.6) is a backward problem in time; since the diffusion term enters with the opposite sign, the substitution \(t\mapsto T-t\) transforms it into a standard forward parabolic problem, so it is well posed.

Next, we show that the second term in (2.4), i.e. \(\int_{\Omega} (\triangle u(x,y,T;\omega))^{2} \,dx\,dy\), is of order \(O(\|\triangle\omega\|_{W}^{2})\).

Lemma 2.2

Let \(\triangle u = \triangle u(x, y, t ;\omega) \in H^{1,0}(\Omega_{T} )\) be the solution of the parabolic problem (2.5) corresponding to given \(\omega, \omega+\triangle\omega\in W\). Then the following estimate holds:
$$\begin{aligned} \int_{\Omega} \bigl(\triangle u(x,y,T) \bigr)^{2} \,dx \,dy \leq \frac{\max\{\nu_{1},\nu _{2},1\}}{\epsilon} \|\triangle\omega\|^{2}_{W}, \end{aligned}$$
where
$$\|\triangle\omega\|_{W}:= \biggl( \int_{0}^{T} \bigl(\triangle T_{0}(t) \bigr)^{2}\,dt + \int _{0}^{T} \bigl(\triangle T_{1}(t) \bigr)^{2}\,dt + \int_{\Omega_{T}} \bigl(\triangle F(x,y,t) \bigr)^{2}\,dx\,dy\,dt \biggr)^{1/2} $$
is the \(H^{0}\)-norm of the function \(\triangle\omega\in W\), and the constant ϵ is defined as follows:
$$k_{*}=\min_{x,y \in[0,1]}k(x,y)>0, \qquad\epsilon= \min \biggl\{ k_{*}; \frac{2\nu _{1}}{\nu_{1}+2} \biggr\} . $$

Proof

Multiplying both sides of (2.5) by \(\triangle u\) and integrating over \(\Omega_{T}\), we obtain
$$\begin{aligned} 0 = & \int_{\Omega_{T}} \bigl(\triangle u_{t} - \operatorname{div} \bigl(k(x,y)\nabla\triangle u \bigr)- \triangle F(x,y,t) \bigr) \triangle u \,dx\,dy\,dt \\ = & \int_{\Omega_{T}} \biggl(\frac{1}{2}(\triangle u)^{2} \biggr)_{t} \,dx\,dy\,dt - \int_{\Omega_{T}} \bigl(k(x,y) \triangle u_{x} \triangle u \bigr)_{x} \,dx\,dy\,dt \\ & - \int_{\Omega_{T}} \bigl(k(x,y) \triangle u_{y} \triangle u \bigr)_{y} \,dx\,dy\,dt + \int_{\Omega _{T}} k(x,y) \bigl(( \triangle u_{x})^{2} + (\triangle u_{y})^{2} \bigr)\,dx\,dy\,dt \\ &{}- \int_{\Omega_{T}} \triangle F(x,y,t) \triangle u(x,y,t) \,dx \,dy\,dt \\ = & \frac{1}{2}\int_{\Omega} \bigl(\triangle u(x,y,T) \bigr)^{2} \,dx\,dy - \int_{0}^{T} \int_{0}^{1} k(1,y) \triangle u_{x}(1,y,t) \triangle u(1,y,t) \,dy\,dt \\ &{} - \int_{0}^{T}\int_{0}^{1} k(x,1) \triangle u_{y}(x,1,t) \triangle u(x,1,t) \,dx\,dt -\int _{\Omega_{T}} \triangle F(x,y,t) \triangle u(x,y,t) \,dx\,dy\,dt \\ &{} + \int_{\Omega_{T}} k(x,y) \bigl( \bigl(\triangle u_{x}(x,y,t) \bigr)^{2} + \bigl(\triangle u_{y}(x,y,t) \bigr)^{2} \bigr) \,dx\,dy\,dt \\ = & \frac{1}{2}\int_{\Omega} \bigl(\triangle u(x,y,T) \bigr)^{2} \,dx\,dy -\int_{\Omega_{T}} \triangle F(x,y,t) \triangle u(x,y,t) \,dx\,dy\,dt \\ &{} + \nu_{1}\int_{0}^{T}\int _{0}^{1} \bigl(\triangle u(1,y,t) \bigr)^{2} \,dy\,dt + \nu_{2}\int_{0}^{T} \int_{0}^{1} \bigl(\triangle u(x,1,t) \bigr)^{2} \,dx\,dt \\ &{} - \nu_{1}\int_{0}^{T}\int _{0}^{1} \triangle T_{0}(t)\triangle u(1,y,t)\,dy\,dt - \nu_{2} \int_{0}^{T} \int_{0}^{1} \triangle T_{1}(t) \triangle u(x,1,t) \,dx\,dt \\ &{} + \int_{\Omega_{T}} k(x,y) \bigl( \bigl(\triangle u_{x}(x,y,t) \bigr)^{2} + \bigl(\triangle u_{y}(x,y,t) \bigr)^{2} \bigr) \,dx\,dy\,dt; \end{aligned}$$
here, the initial and boundary conditions are used.
This implies the following identity:
$$\begin{aligned} & \frac{1}{2}\int_{\Omega} \bigl(\triangle u(x,y,T) \bigr)^{2} \,dx\,dy \\ &\qquad{} + \nu_{1}\int_{0}^{T}\int _{0}^{1} \bigl(\triangle u(1,y,t) \bigr)^{2} \,dy\,dt + \nu_{2}\int_{0}^{T} \int_{0}^{1} \bigl(\triangle u(x,1,t) \bigr)^{2} \,dx\,dt \\ &\qquad{} + \int_{\Omega_{T}} k(x,y) \bigl( \bigl(\triangle u_{x}(x,y,t) \bigr)^{2} + \bigl(\triangle u_{y}(x,y,t) \bigr)^{2} \bigr) \,dx\,dy\,dt \\ &\quad = \nu_{1} \int_{0}^{T}\int _{0}^{1} \triangle T_{0}(t)\triangle u(1,y,t)\,dy\,dt + \nu_{2}\int_{0}^{T} \int_{0}^{1} \triangle T_{1}(t) \triangle u(x,1,t) \,dx\,dt \\ &\qquad{} + \int_{\Omega_{T}} \triangle F(x,y,t) \triangle u(x,y,t) \,dx\,dy\,dt. \end{aligned}$$
(2.8)
By the ϵ-Young inequality we make an estimate on the right-hand side integrals of (2.8):
$$\begin{aligned} & \nu_{1} \int_{0}^{T} \int_{0}^{1} \triangle T_{0}(t)\triangle u(1,y,t)\,dy\,dt + \nu _{2} \int_{0}^{T} \int_{0}^{1} \triangle T_{1}(t) \triangle u(x,1,t) \,dx\,dt \\ &\qquad{} + \int_{\Omega_{T}} \triangle F(x,y,t) \triangle u(x,y,t) \,dx\,dy\,dt \\ &\quad\leq \frac{\nu_{1}\epsilon}{2}\int_{0}^{T}\int _{0}^{1} \bigl(\triangle u(1,y,t) \bigr)^{2}\,dy\,dt + \frac{\nu_{1}}{2\epsilon} \int_{0}^{T} \bigl(\triangle T_{0}(t) \bigr)^{2}\,dt \\ &\qquad{} + \frac{\nu_{2}\epsilon}{2}\int_{0}^{T}\int _{0}^{1} \bigl(\triangle u(x,1,t) \bigr)^{2}\,dx\,dt + \frac{\nu_{2}}{2\epsilon} \int_{0}^{T} \bigl(\triangle T_{1}(t) \bigr)^{2}\,dt \\ & \qquad{} + \frac{\epsilon}{2} \int_{\Omega_{T}} \bigl(\triangle u(x,y,t) \bigr)^{2} \,dx\,dy\,dt + \frac{1}{2\epsilon} \int _{\Omega_{T}} \bigl(\triangle F(x,y,t) \bigr)^{2}\,dx\,dy \,dt. \end{aligned}$$
(2.9)
Besides, by the Cauchy-Schwarz inequality the term \((\triangle u(x,y,t))^{2}\) can be estimated as
$$\begin{aligned} \bigl(\triangle u(x,y,t) \bigr)^{2} = & \biggl( \triangle u(1,y,t)- \int_{x}^{1} \triangle u_{\zeta}(\zeta,y,t) \,d\zeta \biggr)^{2} \\ \leq& 2 \bigl(\triangle u(1,y,t) \bigr)^{2} + 2 \biggl( \int _{x}^{1} \triangle u_{\zeta}(\zeta,y,t)\,d \zeta \biggr)^{2} \\ \leq& 2 \bigl(\triangle u(1,y,t) \bigr)^{2} + 2 \int _{0}^{1} \triangle u_{x}(x,y,t)^{2} \,dx. \end{aligned}$$
By integrating both sides of the above inequality on \(\Omega_{T}\), we obtain
$$\begin{aligned} \int_{\Omega_{T}} \bigl(\triangle u(x,y,t) \bigr)^{2} \,dx\,dy\,dt \leq{}& 2 \int_{0}^{T} \int_{0}^{1} \bigl(\triangle u(1,y,t) \bigr)^{2}\,dy\,dt \\ &{}+ 2 \int_{\Omega_{T}} \triangle u_{x}(x,y,t)^{2} \,dx\,dy\,dt. \end{aligned}$$
(2.10)
So, it follows from (2.8)-(2.10) that
$$\begin{aligned} & \frac{1}{2}\int_{\Omega} \bigl(\triangle u(x,y,T) \bigr)^{2} \,dx\,dy + \biggl[\nu _{1} \biggl(1- \frac{\epsilon}{2} \biggr)-\epsilon \biggr] \\ & \qquad{}\times \int_{0}^{T}\int _{0}^{1} \bigl(\triangle u(1,y,t) \bigr)^{2} \,dy\,dt + \nu_{2} \biggl(1-\frac{\epsilon}{2} \biggr) \int_{0}^{T}\int_{0}^{1} \bigl(\triangle u(x,1,t) \bigr)^{2} \,dx\,dt \\ &\qquad{} + \int_{\Omega_{T}} \bigl(k(x,y)-\epsilon \bigr) \bigl( \triangle u_{x}(x,y,t) \bigr)^{2} + k(x,y) \bigl(\triangle u_{y}(x,y,t) \bigr)^{2} \,dx\,dy\,dt \\ &\quad \leq \frac{\nu_{1}}{2\epsilon} \int_{0}^{T} \bigl( \triangle T_{0}(t) \bigr)^{2}\,dt + \frac{\nu_{2}}{2\epsilon} \int _{0}^{T} \bigl(\triangle T_{1}(t) \bigr)^{2}\,dt + \frac{1}{2\epsilon} \int_{\Omega_{T}} \bigl( \triangle F(x,y,t) \bigr)^{2}\,dx\,dy\,dt. \end{aligned}$$
Therefore, if we set \(k_{*}=\min_{x,y \in[0,1]}k(x,y)>0\) and \(\epsilon= \min\{k_{*}; \frac{2\nu_{1}}{\nu_{1}+2}\}\), then we deduce that
$$\begin{aligned} & \frac{1}{2}\int_{\Omega} \bigl(\triangle u(x,y,T) \bigr)^{2} \,dx\,dy \\ &\quad \leq \frac{\nu_{1}}{2\epsilon} \int_{0}^{T} \bigl( \triangle T_{0}(t) \bigr)^{2}\,dt + \frac{\nu_{2}}{2\epsilon} \int _{0}^{T} \bigl(\triangle T_{1}(t) \bigr)^{2}\,dt + \frac{1}{2\epsilon} \int_{\Omega_{T}} \bigl( \triangle F(x,y,t) \bigr)^{2}\,dx\,dy\,dt \\ &\quad \leq \frac{\max\{\nu_{1},\nu_{2},1\}}{2\epsilon} \|\triangle\omega\|^{2}_{W}. \end{aligned}$$

This completes the proof. □

With the arguments above we are in a position to give the Fréchet derivative of \(J(\omega)\).

Theorem 2.1

Let conditions (2.1)-(2.2) hold. Then the cost functional is Fréchet-differentiable, i.e., \(J(\omega) \in C^{1}(W)\). Moreover, the Fréchet derivative of \(J(\omega)\) at \(\omega\in W\) can be expressed via the solution \(\varphi\in H^{1,0}(\Omega_{T})\) of the adjoint problem (2.6) as follows:
$$ J'(\omega)= \biggl\{ \varphi(x,y,t;\omega); \nu_{1}\int_{0}^{1} \varphi(1,y,t; \omega) \,dy; \nu_{2}\int_{0}^{1} \varphi(x,1,t; \omega)\,dx \biggr\} . $$
(2.11)
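
As an illustration of how (2.11) can be used numerically, the following minimal sketch assembles the three components of \(J'(\omega)\) from one direct solve and one adjoint solve. The solver names, the array layout, and the rectangle-rule quadrature are assumptions of this sketch, not part of the paper; `solve_adjoint` is assumed to integrate the backward problem (2.6) from \(t=T\) with the given final data and to return \(\varphi\) on the full space-time grid with axes (t, x, y).

```python
# Sketch of assembling J'(omega) according to (2.11).
import numpy as np

def gradient(omega, u_T_data, solve_direct, solve_adjoint, nu1, nu2, h):
    F, T0, T1 = omega                       # F: (nt, N, N); T0, T1: (nt,) samples
    u_final = solve_direct(F, T0, T1)       # u(x, y, T; omega), shape (N, N)
    phi_T = u_final - u_T_data              # final condition of the adjoint problem (2.6)
    phi = solve_adjoint(phi_T)              # phi(x, y, t; omega), shape (nt, N, N)
    grad_F = phi                                          # first component of (2.11)
    grad_T0 = nu1 * h * phi[:, -1, :].sum(axis=1)         # ~ nu1 * int_0^1 phi(1, y, t) dy
    grad_T1 = nu2 * h * phi[:, :, -1].sum(axis=1)         # ~ nu2 * int_0^1 phi(x, 1, t) dx
    J = ((u_final - u_T_data) ** 2).sum() * h * h         # discrete value of J(omega)
    return (grad_F, grad_T0, grad_T1), J
```

By Corollary 2.1, the iteration may be stopped once the returned gradient components are negligibly small.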

Corollary 2.1

Let \(J(\omega) \in C^{1}(W)\) and \(W_{*} \subset W\) be the set of quasi-solutions of the inverse problem (1.5)-(1.6). Then \(\omega_{*}\in W_{*}\) is a strict solution of the inverse problem (1.5)-(1.6) if and only if \(\varphi(x, y,t;\omega_{*}) \equiv0\), a.e. on \(\Omega_{T}\).

By the well-known theory of convex analysis [21], we can get the relationship between the minimization problem and the corresponding variational inequality in the following theorem.

Theorem 2.2

Let the conditions of Theorem 2.1 hold, let \(W \subset H^{0}(\Omega_{T} )\times H^{0}[0,T]\times H^{0}[0,T]\) be a closed convex set of unknown sources, and let \(\varphi= \varphi(x, y, t;\omega)\) be the solution of the adjoint problem (2.6) for a given \(\omega\in W\). Then the element \(\omega_{*} := \{F_{*}(x,y,t); T_{0*}(t); T_{1*}(t)\} \in W\) is a quasi-solution of the inverse problem (1.5)-(1.6) if and only if the following variational inequality holds:
$$\bigl(J'(\omega_{*}),\omega-\omega_{*} \bigr)_{W}\geqslant0, \quad \forall\omega\in W, $$
where
$$\begin{aligned} \bigl(J'(\omega_{*}),\omega-\omega_{*} \bigr)_{W} =& \int_{\Omega_{T}}\varphi(x,y,t;\omega_{*}) \bigl[F(x,y,t)-F_{*}(x,y,t) \bigr]\,dx\,dy\,dt \\ &{} + \nu_{1} \int_{0}^{T} \bigl[T_{0}(t)-T_{0*}(t) \bigr] \int_{0}^{1} \varphi(1,y,t;\omega_{*}) \,dy\,dt \\ &{} +\nu_{2}\int_{0}^{T} \bigl[T_{1}(t)-T_{1*}(t) \bigr] \int_{0}^{1} \varphi(x,1,t;\omega_{*}) \,dx\,dt. \end{aligned}$$

3 A steepest descent method with line search

An iteration process, known as the steepest descent method in optimization theory, can thus be implemented:
$$ \omega_{n+1}=\omega_{n}-\alpha_{n} J'(\omega_{n}), $$
(3.1)
with an appropriately chosen parameter \(\alpha_{n}\).

The details of the algorithm are written as follows.

Algorithm 3.1

A steepest descent method with line search

Step 1 Initialization
  • Choose an initial approximation \(\omega_{0}=\{F_{(0)}(x,y, t); T_{0}^{(0)}(t);T_{1}^{(0)}(t)\}\).

  • Set the stop tolerance \(\varepsilon>0\), \(n=0\).

Step 2 Stopping check
  • Solve the direct problem (1.5) with \(\omega_{n}\) to get \(u(x,y,T;\omega_{n})\), then solve the adjoint problem (2.6), and compute \(J'(\omega_{n})\) via (2.11).

  • If \(\| J'(\omega_{n})\|\leqslant\varepsilon\) or, for \(n\geqslant1\), \(\|\omega_{n}-\omega_{n-1} \|\leqslant\varepsilon\), then stop and take \(\omega_{n}\) as the solution.

Step 3 Update \(\omega_{n}\).
  • Solve
    $$ \min_{\alpha>0}J \bigl(\omega_{n}-\alpha J'(\omega_{n}) \bigr) $$
    (3.2)
    with a line search and set \(\alpha_{n}=\arg\min_{\alpha>0}J(\omega_{n}-\alpha J'(\omega_{n}))\).
  • Set \(\omega_{n+1}=\omega_{n}-\alpha_{n} J'(\omega_{n})\), \(n:=n+1\), go to Step 2.
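
A compact sketch of Algorithm 3.1 in Python follows. It is illustrative only: the representation of \(\omega\) as a triple of arrays, the helper names, and the quadrature weights are assumptions; `grad` returns \(J'(\omega_{n})\) assembled as in (2.11), `cost` evaluates \(J(\omega)\), and `line_search` realizes (3.2) by any of the rules of Sections 3.1-3.2.

```python
# Minimal sketch of the steepest descent iteration (3.1) / Algorithm 3.1.
import numpy as np

def axpy(omega, a, d):
    """omega + a*d componentwise in W = L2(Omega_T) x L2[0,T] x L2[0,T]."""
    return tuple(w + a * g for w, g in zip(omega, d))

def norm_W(d, h, dt):
    """Discrete W-norm with rectangle-rule weights (an assumption of this sketch)."""
    F, t0, t1 = d
    return np.sqrt(dt * h * h * (F ** 2).sum() + dt * (t0 ** 2).sum() + dt * (t1 ** 2).sum())

def steepest_descent(omega0, cost, grad, line_search, h, dt, eps=1e-6, max_iter=200):
    omega = omega0
    for n in range(max_iter):
        g = grad(omega)                           # Step 2: one direct + one adjoint solve
        if norm_W(g, h, dt) <= eps:               # stopping check
            break
        phi = lambda a: cost(axpy(omega, -a, g))  # phi(alpha) = J(omega_n - alpha*J'(omega_n))
        alpha = line_search(phi)                  # Step 3: approximately solve (3.2)
        omega = axpy(omega, -alpha, g)            # omega_{n+1} = omega_n - alpha_n J'(omega_n)
    return omega
```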

Different choices of the parameter \(\alpha_{n}\) correspond to various gradient methods. Here we discuss both the exact and the inexact ones.

3.1 Exact line search

One exact line search is the golden section method, which is suited to (3.2) when the objective function is unimodal. The main idea is to construct a sequence of closed intervals \(\{[a_{k},b_{k}]\}\) satisfying \(\bar{\alpha} \in [a_{k},b_{k}]\subset[a_{k-1},b_{k-1}]\), so that \([a_{k}, b_{k}]\) shrinks as k increases. If the gradient \(J'(\omega)\) is Lipschitz continuous, the parameter can be estimated as follows (see Lemma 4.1):
$$ 0< \delta_{0}\leqslant\alpha_{n} \leqslant2/(L+2\delta_{1}), $$
(3.3)
where \(\delta_{0},\delta_{1}>0\) are arbitrary parameters. We can choose the initial interval as \(a_{0}=\delta_{0}\), \(b_{0}= 2/(L+2\delta_{1})\).
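
A standard golden-section search for (3.2) can be sketched as follows (illustrative only; the bracket \([a_{0},b_{0}]=[\delta_{0},2/(L+2\delta_{1})]\) suggested by (3.3) is supplied by the caller):

```python
import math

def golden_section(phi, a, b, tol=1e-6):
    """Minimize a unimodal phi on [a, b]; returns an approximate minimizer."""
    r = (math.sqrt(5.0) - 1.0) / 2.0        # golden ratio ~ 0.618
    c, d = b - r * (b - a), a + r * (b - a)
    fc, fd = phi(c), phi(d)
    while b - a > tol:
        if fc <= fd:                        # minimizer lies in [a, d]: reuse c as new d
            b, d, fd = d, c, fc
            c = b - r * (b - a)
            fc = phi(c)
        else:                               # minimizer lies in [c, b]: reuse d as new c
            a, c, fc = c, d, fd
            d = a + r * (b - a)
            fd = phi(d)
    return 0.5 * (a + b)
```

For the line search of Algorithm 3.1 one would call it as `golden_section(phi, delta0, 2.0 / (L + 2.0 * delta1))`, with `phi` the one-dimensional function from the steepest descent sketch above.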

3.2 Inexact line search

Two well-known techniques are the Armijo line search and the Wolfe-Powell line search. If we denote \(\phi(\alpha):=J(\omega_{n}-\alpha J'(\omega_{n}))\), the Armijo line search requires finding \(\alpha_{n}>0\) such that
$$ \phi(\alpha_{n})\leqslant\phi(0) +c_{1} \alpha_{n}\phi'(0), $$
(3.4)
where \(0< c_{1}<1\) is a constant. However, the step length \(\alpha_{n}\) found by (3.4) alone may be too small or may fail to approach the minimizer of (3.2); this can be prevented by additionally imposing the curvature condition
$$ \phi'(\alpha_{n})\geqslant c_{2} \phi'(0), $$
(3.5)
where \(0< c_{1}<c_{2}<1\). Equations (3.4) and (3.5) are known collectively as the Wolfe-Powell line search.
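
A backtracking realization of the Armijo rule (3.4) can be sketched as follows (illustrative; here \(\phi'(0)=-\|J'(\omega_{n})\|^{2}_{W}\), and the halving factor is an assumption of the sketch). Checking (3.5) for the accepted step in addition would give a Wolfe-Powell step.

```python
def armijo(phi, dphi0, alpha0=1.0, c1=1e-4, shrink=0.5, max_iter=50):
    """Backtracking line search: shrink alpha until the sufficient-decrease
    condition (3.4), phi(alpha) <= phi(0) + c1*alpha*phi'(0), holds.
    dphi0 = phi'(0) must be negative (it equals -||J'(omega_n)||_W^2 here)."""
    alpha, phi0 = alpha0, phi(0.0)
    for _ in range(max_iter):
        if phi(alpha) <= phi0 + c1 * alpha * dphi0:
            return alpha
        alpha *= shrink
    return alpha
```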

4 Convergence analysis

Lemma 4.1

Let the conditions of Theorem 2.1 hold. Then the functional \(J(\omega)\) is of Hölder class \(C^{1,1}(W)\) and
$$ \bigl\| J'(\omega+\triangle\omega)-J'(\omega) \bigr\| _{W} \leqslant L \|\triangle \omega\|_{W}, \quad\forall \omega, \omega+\triangle\omega\in W, $$
(4.1)
where
$$\begin{aligned} \bigl\| J'(\omega+\triangle\omega)-J'(\omega) \bigr\| ^{2}_{W} =& \int_{\Omega_{T}} \bigl(\triangle \varphi(x,y,t;\omega) \bigr)^{2} \,dx\,dy\,dt \\ &{} + \nu_{1}^{2} \int_{0}^{T} \biggl(\int_{0}^{1}\triangle\varphi(1,y,t;\omega) \,dy \biggr)^{2}\,dt \\ &{} +\nu_{2}^{2}\int_{0}^{T} \biggl(\int_{0}^{1}\triangle\varphi(x,1,t;\omega) \,dx \biggr)^{2}\,dt, \end{aligned}$$
and L is defined as
$$L=\sqrt{\frac{\max\{\nu_{1},\nu_{2},1\}}{2\epsilon} \biggl(\frac{1}{\min\{\nu _{1},\nu_{2}\}}+\frac{1}{k_{*}}+ \nu_{1}+ \nu_{2} \biggr)}. $$

Proof

It is easy to check that the function \(\triangle\varphi(x,y,t;\omega):=\varphi(x,y,t;\omega+\triangle\omega )-\varphi(x,y,t;\omega)\) is the solution of the following backward parabolic problem:
$$ \left \{ \begin{array}{@{}l@{\quad}l} \triangle\varphi_{t}(x,y,t)+ \operatorname{div} (k(x,y) \nabla\triangle\varphi (x,y,t)) = 0, & \mbox{in } \Omega_{T} , \\ \triangle\varphi_{x}(0,y,t)= \triangle \varphi_{y}(x,0,t)= 0, & x,y \in(0,1), t \in(0,T], \\ -k(1,y)\triangle\varphi_{x}(1,y,t)= \nu_{1} \triangle\varphi(1,y,t), & x,y \in(0,1), t \in(0,T], \\ -k(x,1)\triangle\varphi_{y}(x,1,t)= \nu_{2} \triangle\varphi(x,1,t), & x,y \in(0,1), t \in(0,T], \\ \triangle\varphi(x,y,T)= \triangle u(x,y,T;\omega), & (x,y) \in \Omega. \end{array} \right . $$
(4.2)
Multiplying both sides of (4.2) by \(\triangle\varphi (x,y,t;\omega)\), integrating on \(\Omega_{T}\), and using the initial and boundary conditions, we can obtain the following energy identity:
$$\begin{aligned} &\frac{1}{2}\int_{\Omega} \bigl( \triangle\varphi(x,y,0;\omega) \bigr)^{2} \,dx\,dy + \nu _{1} \int_{0}^{T}\int_{0}^{1} \bigl(\triangle\varphi(1,y,t;\omega) \bigr)^{2}\,dy \,dt \\ &\qquad{}+\nu_{2}\int_{0}^{T}\int _{0}^{1} \bigl(\triangle\varphi(x,1,t;\omega) \bigr)^{2}\,dx \,dt \\ &\qquad{}+\int_{\Omega_{T}} k(x,y) \bigl[ \bigl(\triangle \varphi_{x}(x,y,t;\omega ) \bigr)^{2}+ \bigl(\triangle \varphi_{y}(x,y,t;\omega) \bigr)^{2} \bigr] \,dx\,dy \\ &\quad=\frac{1}{2}\int_{\Omega} \bigl(\triangle u(x,y,T; \omega) \bigr)^{2} \,dx\,dy. \end{aligned}$$
(4.3)
This identity implies the following two inequalities:
$$ \left \{ \begin{array}{@{}l} k_{*}\int_{\Omega_{T}} [(\triangle\varphi_{x}(x,y,t;\omega ))^{2}+(\triangle\varphi_{y}(x,y,t;\omega))^{2} ] \,dx\,dy\\ \quad\leqslant\frac {1}{2}\int_{\Omega} (\triangle u(x,y,T;\omega))^{2} \,dx\,dy, \\ \min\{\nu_{1},\nu_{2}\} [\int_{0}^{T}\int_{0}^{1}(\triangle\varphi(1,y,t;\omega ))^{2}\,dy \,dt +\int_{0}^{T}\int_{0}^{1}(\triangle\varphi(x,1,t;\omega))^{2}\,dx \,dt ] \\ \quad\leqslant\frac{1}{2}\int_{\Omega} (\triangle u(x,y,T;\omega))^{2} \,dx\,dy. \end{array} \right . $$
(4.4)
Estimating \((\triangle\varphi(x,y,t;\omega))^{2}\) as in (2.10) in the proof of Lemma 2.2, we have
$$\begin{aligned} &\int_{\Omega_{T}} \bigl(\triangle\varphi(x,y,t;\omega) \bigr)^{2}\,dx\,dy\,dt \\ &\quad\leqslant\int_{0}^{T}\int _{0}^{1} \bigl(\triangle\varphi(1,y,t;\omega) \bigr)^{2}\,dy \,dt +\int_{0}^{T}\int _{0}^{1} \bigl(\triangle\varphi(x,1,t;\omega) \bigr)^{2}\,dx \,dt \\ &\qquad{}+\int_{\Omega_{T}} \bigl[ \bigl(\triangle \varphi_{x}(x,y,t;\omega ) \bigr)^{2}+ \bigl(\triangle \varphi_{y}(x,y,t;\omega) \bigr)^{2} \bigr] \,dx\,dy. \end{aligned}$$
(4.5)
The above three inequalities imply
$$\int_{\Omega_{T}} \bigl(\triangle\varphi(x,y,t;\omega) \bigr)^{2}\,dx\,dy\,dt \leqslant\frac {1}{2} \biggl( \frac{1}{\min\{\nu_{1},\nu_{2}\}}+ \frac{1}{k_{*}} \biggr) \int_{\Omega} \bigl(\triangle u(x,y,T; \omega) \bigr)^{2} \,dx\,dy. $$
According to the energy identity (4.3), using the Cauchy-Schwarz inequality, we can also conclude
$$ \left \{ \begin{array}{@{}l@{\quad}l} \nu_{2}^{2}\int_{0}^{T} (\int_{0}^{1}\triangle\varphi(x,1,t;\omega)\,dx )^{2}\,dt &\leqslant\nu_{2}^{2} \int_{0}^{T}\int_{0}^{1}(\triangle\varphi (x,1,t;\omega))^{2}\,dx \,dt \\ &\leqslant\frac{1}{2}\nu_{2}\int_{\Omega} (\triangle u(x,y,T;\omega))^{2} \,dx\,dy, \\ \nu_{1}^{2} \int_{0}^{T} (\int_{0}^{1}\triangle\varphi(1,y,t;\omega)\,dy )^{2}\,dt &\leqslant\nu_{1}^{2}\int_{0}^{T}\int_{0}^{1}(\triangle\varphi(1,y,t;\omega ))^{2}\,dy \,dt\\ &\leqslant\frac{1}{2}\nu_{1}\int_{\Omega} (\triangle u(x,y,T;\omega))^{2} \,dx\,dy. \end{array} \right . $$
(4.6)
Thus, from the above estimates, we deduce that
$$\begin{aligned} & \int_{\Omega_{T}} \bigl(\triangle\varphi(x,y,t;\omega) \bigr)^{2} \,dx\,dy\,dt + \nu_{1}^{2} \int _{0}^{T} \biggl(\int_{0}^{1} \triangle\varphi(1,y,t;\omega)\,dy \biggr)^{2}\,dt \\ &\qquad{}+\nu_{2}^{2}\int_{0}^{T} \biggl(\int_{0}^{1}\triangle\varphi(x,1,t;\omega) \,dx \biggr)^{2}\,dt \\ &\quad \leqslant \frac{1}{2} \biggl(\frac{1}{\min\{\nu_{1},\nu_{2}\}}+\frac {1}{k_{*}}+ \nu_{1}+ \nu_{2} \biggr)\int_{\Omega} \bigl( \triangle u(x,y,T;\omega ) \bigr)^{2} \,dx\,dy. \end{aligned}$$
It follows from this inequality and Lemma 2.2 that we can set
$$L=\sqrt{\frac{\max\{\nu_{1},\nu_{2},1\}}{2\epsilon} \biggl(\frac{1}{\min\{\nu _{1},\nu_{2}\}}+\frac{1}{k_{*}}+ \nu_{1}+ \nu_{2} \biggr)}, $$
which completes the proof. □

Now we analyze the convergence of the sequence \(\{J(\omega_{n})\}\), where the iterations \(\omega_{n}\in W \) (\(n=0,1,2,\ldots\)) are produced by Algorithm 3.1.

Theorem 4.1

Let W be a closed convex set and \(J(\omega)\in C^{1,1}(W)\). If \(\omega_{n}\in W \) (\(n=0,1,2,\ldots\)) is generated by Algorithm 3.1, then \(J(\omega_{n})\) is a monotone decreasing convergent sequence and
$$ \lim_{n\rightarrow\infty}\bigl\| J'( \omega_{n})\bigr\| =0. $$
(4.7)

Proof

Set \(d_{n}:=-J'(\omega_{n})\). By using Lemma 4.1, for all \(\alpha>0\), we have
$$\begin{aligned} J(\omega_{n}+\alpha d_{n})-J(\omega_{n})&=\int ^{1}_{0} \bigl(J'(\omega_{n}+ \theta \alpha d_{n}),\alpha d_{n} \bigr)\,d\theta \\ &=\int^{1}_{0} \bigl(J'( \omega_{n}+\theta\alpha d_{n})-J'( \omega_{n}),\alpha d_{n} \bigr)\,d\theta+\alpha \bigl(J'(\omega_{n}),d_{n} \bigr) \\ &\leqslant\int^{1}_{0} \theta\alpha^{2} L \| d_{n} \|^{2}\,d\theta+\alpha \bigl(J'( \omega_{n}),d_{n} \bigr) \\ &\leqslant \biggl(\frac{\alpha^{2}}{2}L-\alpha \biggr)\bigl\| J'( \omega_{n})\bigr\| ^{2}. \end{aligned}$$
In particular, this inequality holds for \(\widehat{\alpha}=1/L\), i.e.
$$J(\omega_{n})-J(\omega_{n}+\widehat{\alpha} d_{n})\geqslant\frac {1}{2L}\bigl\| J'( \omega_{n})\bigr\| ^{2}. $$
If an exact line search is used in (3.2), one has \(J(\omega _{n+1})= J(\omega_{n}+\alpha_{n} d_{n})\leqslant J(\omega_{n}+\widehat{\alpha} d_{n})\). Thus, the following inequality holds:
$$ J(\omega_{n})-J(\omega_{n+1})\geqslant \frac{1}{2L}\bigl\| J'(\omega _{n}) \bigr\| ^{2}. $$
(4.8)
This implies that \(J(\omega_{n})\) is monotonically decreasing and, being bounded below by zero, convergent; summing (4.8) over n then yields (4.7).
For the case of the inexact line search, we consider the Wolfe-Powell line search. By using Lemma 4.1, (3.5) implies
$$-(1-c_{2}) \bigl(J'(\omega_{n}),d_{n} \bigr) \leqslant \bigl(J'(\omega_{n}+ \alpha_{n} d_{n})-J'(\omega_{n}),d_{n} \bigr) \leqslant\alpha_{n} L \| d_{n}\|^{2}. $$
Thus
$$\alpha_{n} \geqslant\frac{1-c_{2}}{L}. $$
Besides, it follows from (3.4) that
$$J(\omega_{n})-J(\omega_{n+1})\geqslant c_{1}\alpha_{n} \bigl\| J'(\omega _{n})\bigr\| ^{2}\geqslant \frac{c_{1}(1-c_{2})}{L} \bigl\| J'(\omega_{n}) \bigr\| ^{2}. $$
This completes the proof. □
Denote by
$$J_{*}:=J(\omega_{*})=\lim_{n\rightarrow\infty}J(\omega_{n}),\quad \omega _{*} \in W, $$
the limit of the sequence \(J(\omega_{n})\). Let us remark that if W is a closed convex set in \(L^{2}(\Omega_{T})\times L^{2}[0,T]\times L^{2}[0,T]\) and the conditions (2.1)-(2.2) hold, then, for any initial data \(\omega_{0}\in W\), the sequence of iterations \(\{\omega_{n}\}\subset W\), generated by Algorithm 3.1, converges weakly to a quasi-solution \(\omega_{*} \in W\) of the inverse problem (1.5)-(1.6).

5 Conclusions

This paper presents a theoretical study of a case of unsteady heat flow through a plane wall with two dimensional Robin boundary conditions. The inverse problem consists of determining the source terms \(\omega:=\{F(x,y,t);T_{0}(t);T_{1}(t)\}\) from observational measurements of the final state \(u_{T}(x,y)=u(x,y,T)\). The proposed approach is based on the weak solution theory for parabolic PDEs and on the adjoint problem method for the minimization of the corresponding cost functional. The adjoint problem is defined to obtain an explicit gradient formula for the cost functional \(J(\omega)=\|\mu _{T}(x,y)-u(x,y,T;\omega)\|^{2}\). A steepest descent algorithm based on this explicit gradient formula is presented and its convergence is analyzed.

Declarations

Acknowledgements

The authors acknowledge the support of the National Natural Science Foundation of China (No. 61263006).

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

Authors’ Affiliations

(1)
School of Mathematics and Statistics, Central South University, Changsha, P.R. China
(2)
School of Mathematics and Computer Science, Guangxi University for Nationalities, Nanning, P.R. China

References

  1. Bear, J: Dynamics of Fluids in Porous Media. Elsevier, New York (1972)
  2. Renardy, M, Hrusa, WJ, Nohel, JA: Mathematical Problems in Viscoelasticity. Wiley, New York (1987)
  3. Zheng, C, Bennett, GD: Applied Contaminant Transport Modelling: Theory and Practice. Van Nostrand-Reinhold, New York (1995)
  4. Kabanikhin, S, Hasanov, A, Marinin, I, Krivorotko, O, Khidasheli, D: A variational approach to reconstruction of an initial tsunami source perturbation. Appl. Numer. Math. 83, 22-37 (2014)
  5. Hadamard, J: Lectures on the Cauchy Problem in Linear Partial Differential Equations. Oxford University Press, London (1923)
  6. Fu, CL, Xiong, XT, Qian, Z: Fourier regularization for a backward heat equation. J. Math. Anal. Appl. 331, 427-480 (2007)
  7. Hasanov, A, Mueller, J: A numerical method for backward parabolic problems with non-self adjoint elliptic operators. Appl. Numer. Math. 37, 55-78 (2001)
  8. Hào, DN: Methods for Inverse Heat Conduction Problems. Peter Lang, Frankfurt am Main (1998)
  9. Liu, ZH, Li, J, Li, ZW: Regularization method with two parameters for nonlinear ill-posed problems. Sci. China Ser. A 51(1), 70-78 (2008)
  10. Li, J, Liu, ZH: Convergence rate analysis for parameter identification with semi-linear parabolic equation. J. Inverse Ill-Posed Probl. 17(4), 375-385 (2009)
  11. Liu, ZH, Wang, BY: Coefficient identification in parabolic equations. Appl. Math. Comput. 209, 379-390 (2009)
  12. Isakov, V: Inverse parabolic problems with the final overdetermination. Commun. Pure Appl. Math. 54, 185-209 (1991)
  13. Hasanov, A, Pektas, B: Identification of an unknown time-dependent heat source term from overspecified Dirichlet boundary data by conjugate gradient method. Comput. Math. Appl. 65, 42-57 (2013)
  14. Rundell, W: The determination of a parabolic equation from initial and final data. Proc. Am. Math. Soc. 99, 637-642 (1987)
  15. Choulli, M, Yamamoto, M: Generic well-posedness of an inverse parabolic problem - the Hölder space approach. Inverse Probl. 12(3), 195-205 (1996)
  16. Yamamoto, M, Zou, J: Simultaneous reconstruction of the initial temperature and heat radiative coefficient. Inverse Probl. 17, 1181-1202 (2001)
  17. Zeghal, A: Existence result for inverse problems associated with a nonlinear parabolic equation. J. Math. Anal. Appl. 272, 240-248 (2002)
  18. Deng, ZC, Yang, L, Yu, JN, Luo, GW: An inverse problem of identifying the coefficient in a nonlinear parabolic equation. Nonlinear Anal. 71, 6212-6221 (2009)
  19. Hasanov, A: Simultaneous determination of source terms in a linear parabolic problem from the final overdetermination: weak solution approach. J. Math. Anal. Appl. 330, 766-779 (2007)
  20. Ladyzhenskaya, OA: Boundary Value Problems in Mathematical Physics. Springer, New York (1985)
  21. Ekeland, I, Temam, R: Convex Analysis and Variational Problems. Stud. Math. Appl., vol. 1. North-Holland, New York (1976)

Copyright

© Miao and Liu; licensee Springer. 2015
