
Lagged diffusivity method for the solution of nonlinear diffusion convection problems with finite differences

Abstract

This article is concerned with the analysis of an iterative procedure for the solution of a nonlinear nonstationary diffusion convection equation in a two-dimensional bounded domain supplemented by Dirichlet boundary conditions. This procedure, called the Lagged Diffusivity method, computes the solution by lagging the diffusion term. A model problem is considered and a finite difference discretization for that model is described. Furthermore, properties of the finite difference operator are proved. Then, a sufficient condition for the convergence of the Lagged Diffusivity method is given. At each stage of the iterative procedure, a linear system has to be solved, and the Arithmetic Mean method is used. Numerical experiments show the efficiency, for different test functions, of the Lagged Diffusivity method combined with the Arithmetic Mean method as inner solver. Better results are obtained when the convection term increases.

1 Statement of the problem

Consider as model problem the nonlinear diffusion convection equation

$$\frac{\partial u}{\partial t} = \operatorname{div}(\sigma\,\nabla u) - \tilde v\cdot\nabla u - \alpha u + s,$$
(1)

where u = u(x, y, t) is the density function at the point (x, y) and time t of a diffusion medium R, σ = σ(x, y, u) > 0 is the diffusion coefficient or diffusivity and depends on the solution u, α = α(x, y) ≥ 0 is the absorption term, ṽ = ṽ(x, y, t, u) is the velocity vector and the source term s(x, y, t) is a real-valued, sufficiently smooth function.

Equation (1) can be supplemented by the initial condition (t = 0)

$$u(x, y, 0) = U_0(x, y),$$
(2)

in the closure R̄ of R, and by a Dirichlet boundary condition on the boundary ∂R of R of the form

$$u(x, y, t) = U_1(x, y, t).$$
(3)

In the following, we suppose R to be a rectangular domain with boundary ∂R and we assume that the functions σ, α, and s satisfy the "smoothness" conditions:

  (i) the function σ = σ(u) is continuous in u; the functions α(x, y) and s(x, y, t) are continuous in x, y and in x, y, t, respectively;

  (ii) there exist two positive constants σ_min and σ_max such that

    $$0 < \sigma_{\min} \le \sigma(u) \le \sigma_{\max},$$

uniformly in u; in addition, α(x, y) ≥ α_min ≥ 0;

  (iii) for fixed (x, y) ∈ R, the function σ(u) satisfies a Lipschitz condition in u with constant Γ > 0 (uniformly in x, y).

Here, the velocity vector ṽ = (v_1, v_2)^T is assumed to be constant.

The nonlinearity introduced by the u-dependence of the coefficient σ(u) requires that, in general, the solution of equation (1) be approximated by numerical methods.

We superimpose on R ∪ ∂R a grid of points R_h ∪ ∂R_h; the set of internal grid points R_h consists of the mesh points (x_i, y_j), for i = 1, ..., N and j = 1, ..., M, with uniform mesh size h along the x and y directions, i.e., x_{i+1} = x_i + h and y_{j+1} = y_j + h for i = 0, ..., N, j = 0, ..., M.

Thus, at the mesh points of R ∪ ∂R, (x_i, y_j), for i = 0, ..., N+1, j = 0, ..., M+1, the solution u(x_i, y_j, t) is approximated by a grid function u_ij(t) defined on R_h ∪ ∂R_h and satisfying the boundary condition (3) on ∂R_h for t > 0 and the initial condition (2) on R_h ∪ ∂R_h for t = 0.

By ordering the mesh points P_l = (x_i, y_j) in row lexicographic order (i.e., l = (j − 1)N + i with j = 1, ..., M and i = 1, ..., N), we can form the vector u(t) with components u_ij(t) and approximate the right-hand side of (1) by

$$A(u(t))\,u(t) + b(u(t)) + s(t),$$
(4)

where the matrix A(u(t)) is of order μ = M × N and has block tridiagonal form: the M diagonal blocks are tridiagonal matrices of order N, and the M − 1 sub- and super-diagonal blocks are diagonal matrices of order N.

The five nonzero elements of the row of A(u(t)) corresponding to u_{i,j-1}(t), u_{i-1,j}(t), u_{ij}(t), u_{i+1,j}(t) and u_{i,j+1}(t), respectively, are

$$\{(B_{ij}+\tilde B_{ij}),\ (L_{ij}+\tilde L_{ij}),\ -(D_{ij}+\hat D_{ij}),\ (R_{ij}+\tilde R_{ij}),\ (T_{ij}+\tilde T_{ij})\},$$

where

$$\begin{aligned}
& L_{ij} \equiv L_{ij}(u(t)) = \frac{1}{h^2}\,\sigma\!\left(\frac{u_{ij}(t)+u_{i-1,j}(t)}{2}\right), \qquad
  B_{ij} \equiv B_{ij}(u(t)) = \frac{1}{h^2}\,\sigma\!\left(\frac{u_{ij}(t)+u_{i,j-1}(t)}{2}\right), \\
& R_{ij} \equiv R_{ij}(u(t)) = \frac{1}{h^2}\,\sigma\!\left(\frac{u_{i+1,j}(t)+u_{ij}(t)}{2}\right), \qquad
  T_{ij} \equiv T_{ij}(u(t)) = \frac{1}{h^2}\,\sigma\!\left(\frac{u_{i,j+1}(t)+u_{ij}(t)}{2}\right), \\
& \tilde L_{ij} = \frac{v_1}{2h}, \qquad \tilde B_{ij} = \frac{v_2}{2h}, \qquad \tilde R_{ij} = -\frac{v_1}{2h}, \qquad \tilde T_{ij} = -\frac{v_2}{2h}, \\
& D_{ij} \equiv D_{ij}(u(t)) = B_{ij}+L_{ij}+R_{ij}+T_{ij}, \qquad \hat D_{ij} = \alpha(x_i, y_j).
\end{aligned}$$
(5)
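The following sketch (in Python, not the Fortran code mentioned later in the article) illustrates how the coefficients of formula (5) can be assembled into the matrix A(u) and the boundary vector b(u); the helpers `sigma`, `alpha` and `U1`, and the storage of the grid function with a frame of boundary nodes, are assumptions made here for illustration only.

```python
import numpy as np
import scipy.sparse as sp

def assemble_A_b(u_grid, sigma, alpha, U1, h, v1, v2):
    """Assemble A(u) (order mu = M*N, row lexicographic ordering l = (j-1)*N + i)
    and the boundary vector b(u) of formula (5). The grid function u_grid has
    shape (N+2, M+2); its frame (indices 0, N+1 and 0, M+1) carries the Dirichlet
    values. sigma(u), alpha(i, j) and U1(i, j) are illustrative callables."""
    N, M = u_grid.shape[0] - 2, u_grid.shape[1] - 2
    A = sp.lil_matrix((N * M, N * M))
    b = np.zeros(N * M)
    sig = lambda a, c: sigma(0.5 * (a + c))          # sigma at the midpoint value
    for j in range(1, M + 1):
        for i in range(1, N + 1):
            l = (j - 1) * N + (i - 1)                # 0-based row index of P_l
            uc = u_grid[i, j]
            # diffusion parts L, R, B, T of (5); the convection parts are the tilde terms
            L = sig(uc, u_grid[i - 1, j]) / h**2
            R = sig(u_grid[i + 1, j], uc) / h**2
            B = sig(uc, u_grid[i, j - 1]) / h**2
            T = sig(u_grid[i, j + 1], uc) / h**2
            A[l, l] = -(L + R + B + T + alpha(i, j))
            coeffs = {(i - 1, j): L + v1 / (2 * h), (i + 1, j): R - v1 / (2 * h),
                      (i, j - 1): B + v2 / (2 * h), (i, j + 1): T - v2 / (2 * h)}
            for (ii, jj), c in coeffs.items():
                if 1 <= ii <= N and 1 <= jj <= M:    # interior neighbour -> matrix entry
                    A[l, (jj - 1) * N + (ii - 1)] = c
                else:                                 # boundary neighbour -> contribution to b
                    b[l] += c * U1(ii, jj)
    return A.tocsr(), b
```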

The matrix A(u(t)) is an irreducible matrix [[1], p. 18].

Provided that the mesh spacing h is sufficiently small, i.e.,

$$h < \min\left(\frac{2\sigma_{\min}}{|v_1|},\ \frac{2\sigma_{\min}}{|v_2|}\right),$$
(6)

the matrix A(u(t)) is strictly (α(x, y) > 0) or irreducibly (α(x, y) = 0) diagonally dominant ([[1], p. 23]), has negative diagonal elements, a_{ll}(u(t)) < 0 (l = 1, ..., μ), and nonnegative off-diagonal elements, a_{lp}(u(t)) ≥ 0, l ≠ p, with l, p = 1, ..., μ; therefore, -A(u(t)) is an M-matrix [[1], p. 91].
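As a quick numerical sanity check (assuming the assembly sketch above), one can verify the sign pattern and the row diagonal dominance that make −A(u) an M-matrix; this is only an illustration, not part of the method.

```python
import numpy as np
import scipy.sparse as sp

def check_M_matrix_pattern(A):
    """Check of the sign pattern discussed above: negative diagonal,
    nonnegative off-diagonal entries and weak row diagonal dominance of A(u)."""
    d = A.diagonal()
    off = (A - sp.diags(d)).toarray()                 # dense copy, fine for moderate N*M
    return (bool((d < 0).all()),
            bool((off >= 0).all()),
            bool((np.abs(d) >= np.abs(off).sum(axis=1)).all()))
```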

In the case of the diffusion equation (ṽ = 0), the matrix A(u(t)) is also symmetric; then -A(u(t)) is a Stieltjes matrix and is symmetric positive definite [[1], p. 91].

The vector b(u(t)) in (4) is obtained by imposing the Dirichlet boundary conditions (3); it depends on the function U_1(x_i, y_j, t) at the points (x_i, y_j) of ∂R, and its l-th component depends on the l-th component of the solution u(t) when the mesh point P_l of R is a neighbour of points of ∂R.

The vector s(t) in (4) has components s l (t) = s(x i , y j , t) for i = 1,..., N, j = 1,..., M and l = (j - 1) N + i.

We can apply a step-by-step method to the initial value problem of μ equations

$$\frac{d u(t)}{d t} = A(u(t))\,u(t) + b(u(t)) + s(t),$$

with the initial condition (2), u(x_i, y_j, 0) = U_0(x_i, y_j), and compute the approximation u^{n+1} to u(t_{n+1}) using the approximate solution u^n at the time level t_n, with t_n = n Δt and Δt the time step.

Writing s^n = s(t_n), for n = 0, 1, ..., the well-known θ-method (see, e.g., [2]) reads

$$\frac{u^{n+1}-u^n}{\Delta t} = \theta\big(A(u^{n+1})\,u^{n+1} + b(u^{n+1}) + s^{n+1}\big) + (1-\theta)\big(A(u^n)\,u^n + b(u^n) + s^n\big),$$

where θ is a real parameter such that 0 ≤ θ ≤ 1; for any θ ≠ 0, the method is implicit. In the following, I denotes the μ × μ identity matrix.

Thus, at each time level n = 0, 1, ..., the vector u^{n+1} ∈ ℝ^μ is the solution of the nonlinear system

$$F(u) \equiv (I - \Delta t\,\theta\, A(u))\,u - \Delta t\,\theta\, b(u) - w = 0.$$
(7)

The vector w ∈ ℝ^μ is given by

$$w \equiv w^n = \big(I + \Delta t(1-\theta)A(u^n)\big)u^n + \Delta t(1-\theta)\,b(u^n) + \Delta t\big(\theta s^{n+1} + (1-\theta)s^n\big).$$

We set τ = θΔt.
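A minimal sketch of the θ-method quantities, assuming A(u^n) and b(u^n) have been assembled as above; the function names are illustrative only.

```python
import numpy as np

def theta_method_w(A_n, b_n, u_n, s_n, s_np1, dt, theta):
    """Right-hand side vector w = w^n of the theta-method, with A_n = A(u^n)
    and b_n = b(u^n) assembled as in the sketch of formula (5)."""
    return (u_n + dt * (1.0 - theta) * (A_n @ u_n + b_n)
            + dt * (theta * s_np1 + (1.0 - theta) * s_n))

def F_residual(u, A_u, b_u, w, tau):
    """Nonlinear residual F(u) = (I - tau*A(u)) u - tau*b(u) - w of system (7),
    with tau = theta*dt and A_u = A(u), b_u = b(u)."""
    return u - tau * (A_u @ u) - tau * b_u - w
```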

We can introduce an iterative method of Lagged Diffusivity, computing the new iterate u^{(k+1)} by keeping the diffusivity term at the previous iteration k. That is, since the matrix I − τA(u) is nonsingular for all u ∈ ℝ^μ, the iterate u^{(k+1)} is the solution of the linear system

$$(I - \tau A(u^{(k)}))\,u = w + \tau\, b(u^{(k)}),$$

such that the residual

$$r^{(k+1)} = (I - \tau A(u^{(k)}))\,u^{(k+1)} - w - \tau\, b(u^{(k)}),$$

satisfies the stopping condition

$$\|r^{(k+1)}\| \le \varepsilon_{k+1},$$

where ε_k is a given tolerance such that ε_k → 0 as k → ∞ and ‖·‖ indicates the Euclidean norm. The initial iterate u^{(0)} of this Lagged Diffusivity procedure can be set equal to u^n.
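The outer iteration can then be sketched as follows; `assemble` is a hypothetical callable returning (A(u), b(u)), and the inner solve uses a direct sparse solver as a stand-in for the Arithmetic Mean method discussed in Section 3.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def lagged_diffusivity(assemble, w, u0, tau, eps_seq):
    """Sketch of the Lagged Diffusivity iteration for one time level.
    assemble(u) returns (A(u), b(u)); eps_seq is a decreasing sequence of
    tolerances eps_1, eps_2, ... for the inner residual."""
    u = u0.copy()
    I = sp.identity(u.size, format="csr")
    for eps_k1 in eps_seq:
        A, b = assemble(u)                      # diffusivity lagged at u^(k)
        M_k = (I - tau * A).tocsr()
        rhs = w + tau * b
        u = spla.spsolve(M_k, rhs)              # u^(k+1): residual well below eps_{k+1}
    return u
```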

2 Uniform monotonicity and a convergence result

In this section, we consider the nonlinear system F(u) = 0 in (7) and we prove that F(u) is continuous and uniformly monotone, so that F(u) = 0 has a unique solution. Moreover, we prove that the sequence {u^{(k)}} generated by the Lagged Diffusivity procedure converges to this solution. Before this, we have to prove three lemmas on some properties of finite difference operators.

In the following, we may consider the matrix A(u) as

$$A(u) = \hat A(u) + \tilde A + \hat D,$$

where Â(u) and Ã are the block tridiagonal matrices whose row elements are {B_ij, L_ij, −D_ij, R_ij, T_ij} and {B̃_ij, L̃_ij, R̃_ij, T̃_ij}, respectively, while the matrix D̂ is a diagonal matrix whose diagonal entries are {−D̂_ij}. Furthermore, we denote

$$\hat A(u) = \hat A_x(u) + \hat A_y(u),$$

where Â_x(u) is the block diagonal matrix whose row elements are {L_ij, −D^x_ij, R_ij} with D^x_ij = L_ij + R_ij, and Â_y(u) is the block tridiagonal matrix whose row elements are {B_ij, −D^y_ij, T_ij} with D^y_ij = B_ij + T_ij. Analogously, we can define b(u) = b_x(u) + b_y(u), where b_x(u) contains the contributions of U_1(x_0, y_j, t) and U_1(x_{N+1}, y_j, t) for j = 1, ..., M, and b_y(u) contains the contributions of U_1(x_i, y_0, t) and U_1(x_i, y_{M+1}, t) for i = 1, ..., N.

For the sake of clarity, we set u_ij ≡ u_ij(t) and v_ij ≡ v_ij(t), (i = 0, ..., N+1, j = 0, ..., M+1), the grid functions defined on R_h ∪ ∂R_h and satisfying the Dirichlet boundary condition on ∂R_h for t > 0.

For grid functions {u_ij} and {v_ij} of this type, the discrete l2(R_h) inner product and norm are defined by the formulas

$$\langle u, v\rangle = h^2\sum_{i=1}^{N}\sum_{j=1}^{M} u_{ij}\,v_{ij}, \qquad \|u\|_h = \Big(h^2\sum_{i=1}^{N}\sum_{j=1}^{M} u_{ij}^2\Big)^{1/2} = \langle u, u\rangle^{1/2},$$

respectively.

We consider the set of all grid functions defined on R_h ∪ ∂R_h, satisfying the Dirichlet boundary condition on ∂R_h for t > 0, for which there exist two positive constants ρ and β, both independent of h, such that

$$\|u\|_h \le \rho,$$
(8)
$$|\nabla_x u_{ij}| \le \beta, \qquad |\nabla_y u_{ij}| \le \beta.$$
(9)

Here, ∇_x u_ij and ∇_y u_ij denote the backward difference quotients

$$\nabla_x u_{ij} = \frac{u_{ij}-u_{i-1,j}}{h}, \qquad \nabla_y u_{ij} = \frac{u_{ij}-u_{i,j-1}}{h}.$$
(10)
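For reference, a minimal implementation of the discrete l2(R_h) norm and of the backward difference quotients (10), assuming the grid function is stored as an (N+2) × (M+2) array whose frame carries the boundary values.

```python
import numpy as np

def norm_h(u, h):
    """Discrete l2(R_h) norm ||u||_h = (h^2 * sum_ij u_ij^2)^(1/2) over the
    interior nodes of a grid function stored as an (N+2) x (M+2) array."""
    return h * np.sqrt((u[1:-1, 1:-1] ** 2).sum())

def backward_diff(u, h):
    """Backward difference quotients (10) at the interior nodes:
    grad_x u_ij = (u_ij - u_{i-1,j})/h,  grad_y u_ij = (u_ij - u_{i,j-1})/h."""
    gx = (u[1:-1, 1:-1] - u[:-2, 1:-1]) / h
    gy = (u[1:-1, 1:-1] - u[1:-1, :-2]) / h
    return gx, gy
```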

Before proving the main result, we summarize in three lemmas some properties of the finite difference operators.

Here, for a grid function {u ij }, we denote

$$u_{i\pm 1/2,\,j} = \frac{u_{i\pm 1,\,j}+u_{ij}}{2}, \qquad u_{i,\,j\pm 1/2} = \frac{u_{i,\,j\pm 1}+u_{ij}}{2}.$$

Lemma 1. Let {u_ij}, {v_ij}, {z_ij} be three grid functions defined at the mesh points (x_i, y_j) of a grid R_h ∪ ∂R_h, i = 0, ..., N+1, j = 0, ..., M+1, which are equal to the prescribed function U_1(x_i, y_j, t) at the points (x_i, y_j) of ∂R_h for t > 0; then

$$\langle -\hat A(z)u - b(z),\, v\rangle = \langle -\hat A_x(z)u - b_x(z),\, v\rangle + \langle -\hat A_y(z)u - b_y(z),\, v\rangle,$$

where

$$\langle -\hat A_x(z)u - b_x(z),\, v\rangle = \sum_{j=1}^{M}\Big[\sigma(z_{1/2,j})(u_{1j}-u_{0j})\,v_{1j} + \sum_{i=2}^{N}\sigma(z_{i-1/2,j})(u_{ij}-u_{i-1,j})(v_{ij}-v_{i-1,j}) + \sigma(z_{N+1/2,j})(u_{Nj}-u_{N+1,j})\,v_{Nj}\Big],$$
(11)
$$\langle -\hat A_y(z)u - b_y(z),\, v\rangle = \sum_{i=1}^{N}\Big[\sigma(z_{i,1/2})(u_{i1}-u_{i0})\,v_{i1} + \sum_{j=2}^{M}\sigma(z_{i,j-1/2})(u_{ij}-u_{i,j-1})(v_{ij}-v_{i,j-1}) + \sigma(z_{i,M+1/2})(u_{iM}-u_{i,M+1})\,v_{iM}\Big].$$
(12)

Proof. Formulae (11) and (12) follow immediately from the definition of the coefficients in (5).

Lemma 2. Let {u_ij} and {v_ij} be two grid functions defined at the mesh points (x_i, y_j) of the grid R_h ∪ ∂R_h, i = 0, ..., N+1, j = 0, ..., M+1, such that, at the points (x_i, y_j) of ∂R_h and for t > 0, the grid function {u_ij} is equal to the prescribed function U_1(x_i, y_j, t) and the grid function {v_ij} is equal to the null function.

Then, we have the following expression for the discrete l2(R h ) inner product of the vectors -Â ( u ) v and v

$$\langle -\hat A(u)v,\, v\rangle = h^2\sum_{j=1}^{M}\sum_{i=1}^{N}\sigma(u_{i-1/2,j})\,(\nabla_x v_{ij})^2 + h^2\sum_{i=1}^{N}\sum_{j=1}^{M}\sigma(u_{i,j-1/2})\,(\nabla_y v_{ij})^2 + \sum_{j=1}^{M}\sigma(u_{N+1/2,j})\,v_{Nj}^2 + \sum_{i=1}^{N}\sigma(u_{i,M+1/2})\,v_{iM}^2.$$
(13)

Proof. From (5) the expression of <- A ^ ( u ) v,v> is

$$\langle -\hat A(u)v,\, v\rangle = \sum_{j=1}^{M}\Big[\sigma(u_{1/2,j})\,v_{1j}v_{1j} + \sum_{i=2}^{N}\sigma(u_{i-1/2,j})(v_{ij}-v_{i-1,j})(v_{ij}-v_{i-1,j}) + \sigma(u_{N+1/2,j})\,v_{Nj}v_{Nj}\Big] + \sum_{i=1}^{N}\Big[\sigma(u_{i,1/2})\,v_{i1}v_{i1} + \sum_{j=2}^{M}\sigma(u_{i,j-1/2})(v_{ij}-v_{i,j-1})(v_{ij}-v_{i,j-1}) + \sigma(u_{i,M+1/2})\,v_{iM}v_{iM}\Big].$$

Since the grid function {v ij } is equal to zero for all the points of ∂R h , we can write

$$\langle -\hat A(u)v,\, v\rangle = \sum_{j=1}^{M}\Big[\sigma(u_{1/2,j})(v_{1j}-v_{0j})^2 + \sum_{i=2}^{N}\sigma(u_{i-1/2,j})(v_{ij}-v_{i-1,j})^2 + \sigma(u_{N+1/2,j})\,v_{Nj}^2\Big] + \sum_{i=1}^{N}\Big[\sigma(u_{i,1/2})(v_{i1}-v_{i0})^2 + \sum_{j=2}^{M}\sigma(u_{i,j-1/2})(v_{ij}-v_{i,j-1})^2 + \sigma(u_{i,M+1/2})\,v_{iM}^2\Big].$$

From the definition of backward difference quotients (10), we have formula (13).

Lemma 3. Let {u_ij}, {v_ij} and {ṽ_ij} be three grid functions of this set such that, at the points (x_i, y_j) of the boundary ∂R_h, they are equal to the prescribed function U_1(x_i, y_j, t) (t > 0).

Then, we have the following expression for the discrete l2(R h ) inner product

$$\big|\langle(-\hat A(u)+\hat A(v))\tilde v - b(u) + b(v),\, u - \tilde v\rangle\big| \le \frac{\beta\Gamma}{\phi}\,\|u-v\|_h^2 + \frac{\beta\Gamma\phi}{2h^2}\,\|u-\tilde v\|_h^2 + \frac{\beta h^2\Gamma\phi}{2}\sum_{j=1}^{M}\sum_{i=1}^{N}\Big[\big|\nabla_x(u_{ij}-\tilde v_{ij})\big|^2 + \big|\nabla_y(u_{ij}-\tilde v_{ij})\big|^2\Big],$$
(14)

where Γ > 0 is the Lipschitz constant of condition (iii), β > 0 is a constant for which |∇_x ṽ_ij| ≤ β and |∇_y ṽ_ij| ≤ β, and ϕ is an arbitrary positive number.

Proof. Using formulae (11) and (12), and since u_ij − ṽ_ij is equal to zero at all the points of ∂R_h, we have

$$\begin{aligned}
\langle(-\hat A(u)&+\hat A(v))\tilde v - b(u)+b(v),\, u-\tilde v\rangle \\
={}& \sum_{j=1}^{M}\Big[\big(\sigma(u_{1/2,j})-\sigma(v_{1/2,j})\big)(\tilde v_{1j}-\tilde v_{0j})\big((u_{1j}-\tilde v_{1j})-(u_{0j}-\tilde v_{0j})\big) \\
&\quad{}+ \big(\sigma(u_{N+1/2,j})-\sigma(v_{N+1/2,j})\big)(\tilde v_{Nj}-\tilde v_{N+1,j})\big((u_{Nj}-\tilde v_{Nj})-(u_{N+1,j}-\tilde v_{N+1,j})\big) \\
&\quad{}+ \sum_{i=2}^{N}\big(\sigma(u_{i-1/2,j})-\sigma(v_{i-1/2,j})\big)(\tilde v_{ij}-\tilde v_{i-1,j})\big((u_{ij}-\tilde v_{ij})-(u_{i-1,j}-\tilde v_{i-1,j})\big)\Big] \\
&+ \sum_{i=1}^{N}\Big[\big(\sigma(u_{i,1/2})-\sigma(v_{i,1/2})\big)(\tilde v_{i1}-\tilde v_{i0})\big((u_{i1}-\tilde v_{i1})-(u_{i0}-\tilde v_{i0})\big) \\
&\quad{}+ \big(\sigma(u_{i,M+1/2})-\sigma(v_{i,M+1/2})\big)(\tilde v_{iM}-\tilde v_{i,M+1})\big((u_{iM}-\tilde v_{iM})-(u_{i,M+1}-\tilde v_{i,M+1})\big) \\
&\quad{}+ \sum_{j=2}^{M}\big(\sigma(u_{i,j-1/2})-\sigma(v_{i,j-1/2})\big)(\tilde v_{ij}-\tilde v_{i,j-1})\big((u_{ij}-\tilde v_{ij})-(u_{i,j-1}-\tilde v_{i,j-1})\big)\Big].
\end{aligned}$$

Writing the backward difference quotients (10) for the functions ṽ_ij and u_ij − ṽ_ij, and using (9) for ∇_x ṽ_ij and ∇_y ṽ_ij, we obtain

$$\begin{aligned}
\big|\langle(-\hat A(u)+\hat A(v))\tilde v - b(u)+b(v),\, u-\tilde v\rangle\big| \le{}& \beta h^2\sum_{j=1}^{M}\sum_{i=1}^{N+1}\Big|\sigma\Big(\frac{u_{ij}+u_{i-1,j}}{2}\Big)-\sigma\Big(\frac{v_{ij}+v_{i-1,j}}{2}\Big)\Big|\,\big|\nabla_x(u_{ij}-\tilde v_{ij})\big| \\
&+ \beta h^2\sum_{i=1}^{N}\sum_{j=1}^{M+1}\Big|\sigma\Big(\frac{u_{ij}+u_{i,j-1}}{2}\Big)-\sigma\Big(\frac{v_{ij}+v_{i,j-1}}{2}\Big)\Big|\,\big|\nabla_y(u_{ij}-\tilde v_{ij})\big|.
\end{aligned}$$

By the Lipschitz continuity of the function σ with Lipschitz constant Γ (property (iii)), we have

$$\begin{aligned}
\big|\langle(-\hat A(u)+\hat A(v))\tilde v - b(u)+b(v),\, u-\tilde v\rangle\big| \le{}& \frac{\beta h^2\Gamma}{2}\sum_{j=1}^{M}\sum_{i=1}^{N+1}\big|u_{ij}-v_{ij}+u_{i-1,j}-v_{i-1,j}\big|\,\big|\nabla_x(u_{ij}-\tilde v_{ij})\big| \\
&+ \frac{\beta h^2\Gamma}{2}\sum_{i=1}^{N}\sum_{j=1}^{M+1}\big|u_{ij}-v_{ij}+u_{i,j-1}-v_{i,j-1}\big|\,\big|\nabla_y(u_{ij}-\tilde v_{ij})\big|.
\end{aligned}$$

Taking into account that u_ij − ṽ_ij and u_ij − v_ij are equal to zero at the points of ∂R_h, for an arbitrary positive number ϕ we have

$$\begin{aligned}
\big|\langle(-\hat A(u)+\hat A(v))\tilde v - b(u)+b(v),\, u-\tilde v\rangle\big| \le{}& \frac{\beta h^2\Gamma}{2}\sum_{j=1}^{M}\sum_{i=1}^{N}\frac{|u_{ij}-v_{ij}|}{\sqrt\phi}\,\sqrt\phi\,\big|\nabla_x(u_{ij}-\tilde v_{ij})\big| + \frac{\beta h^2\Gamma}{2}\sum_{j=1}^{M}\sum_{i=1}^{N}\frac{|u_{ij}-v_{ij}|}{\sqrt\phi}\,\sqrt\phi\,\big|\nabla_y(u_{ij}-\tilde v_{ij})\big| \\
&+ \frac{\beta h^2\Gamma}{2}\sum_{j=1}^{M}\sum_{i=1}^{N}\frac{|u_{ij}-v_{ij}|}{\sqrt\phi}\,\sqrt\phi\,\big|\nabla_x(u_{i+1,j}-\tilde v_{i+1,j})\big| + \frac{\beta h^2\Gamma}{2}\sum_{j=1}^{M}\sum_{i=1}^{N}\frac{|u_{ij}-v_{ij}|}{\sqrt\phi}\,\sqrt\phi\,\big|\nabla_y(u_{i,j+1}-\tilde v_{i,j+1})\big|.
\end{aligned}$$

By the well-known technical trick (for a and b positive numbers, ab ≤ a²/2 + b²/2), we can write

$$\begin{aligned}
\big|\langle(-\hat A(u)+\hat A(v))\tilde v - b(u)+b(v),\, u-\tilde v\rangle\big| \le{}& \frac{\beta h^2\Gamma}{2}\sum_{j=1}^{M}\sum_{i=1}^{N}\Big(\frac{|u_{ij}-v_{ij}|^2}{2\phi} + \frac{\phi}{2}\big|\nabla_x(u_{ij}-\tilde v_{ij})\big|^2\Big) + \frac{\beta h^2\Gamma}{2}\sum_{j=1}^{M}\sum_{i=1}^{N}\Big(\frac{|u_{ij}-v_{ij}|^2}{2\phi} + \frac{\phi}{2}\big|\nabla_y(u_{ij}-\tilde v_{ij})\big|^2\Big) \\
&+ \frac{\beta h^2\Gamma}{2}\sum_{j=1}^{M}\sum_{i=1}^{N}\Big(\frac{|u_{ij}-v_{ij}|^2}{2\phi} + \frac{\phi}{2}\big|\nabla_x(u_{i+1,j}-\tilde v_{i+1,j})\big|^2\Big) + \frac{\beta h^2\Gamma}{2}\sum_{j=1}^{M}\sum_{i=1}^{N}\Big(\frac{|u_{ij}-v_{ij}|^2}{2\phi} + \frac{\phi}{2}\big|\nabla_y(u_{i,j+1}-\tilde v_{i,j+1})\big|^2\Big) \\
\le{}& \frac{\beta\Gamma}{\phi}\,\|u-v\|_h^2 + \frac{\beta h^2\Gamma\phi}{4}\sum_{j=1}^{M}\sum_{i=1}^{N}\Big(\big|\nabla_x(u_{ij}-\tilde v_{ij})\big|^2 + \big|\nabla_y(u_{ij}-\tilde v_{ij})\big|^2\Big) \\
&+ \frac{\beta h^2\Gamma\phi}{4}\sum_{j=1}^{M}\sum_{i=1}^{N}\Big(\big|\nabla_x(u_{i+1,j}-\tilde v_{i+1,j})\big|^2 + \big|\nabla_y(u_{i,j+1}-\tilde v_{i,j+1})\big|^2\Big).
\end{aligned}$$

Now, we analyze the term $\sum_{j=1}^{M}\sum_{i=1}^{N}\big[|\nabla_x(u_{i+1,j}-\tilde v_{i+1,j})|^2 + |\nabla_y(u_{i,j+1}-\tilde v_{i,j+1})|^2\big]$. We have

$$\begin{aligned}
\sum_{j=1}^{M}\sum_{i=1}^{N}\Big[\big|\nabla_x(u_{i+1,j}-\tilde v_{i+1,j})\big|^2 + \big|\nabla_y(u_{i,j+1}-\tilde v_{i,j+1})\big|^2\Big]
&\le \sum_{j=1}^{M}\sum_{i=1}^{N+1}\big|\nabla_x(u_{ij}-\tilde v_{ij})\big|^2 + \sum_{i=1}^{N}\sum_{j=1}^{M+1}\big|\nabla_y(u_{ij}-\tilde v_{ij})\big|^2 \\
&\le \sum_{j=1}^{M}\sum_{i=1}^{N}\Big[\big|\nabla_x(u_{ij}-\tilde v_{ij})\big|^2 + \big|\nabla_y(u_{ij}-\tilde v_{ij})\big|^2\Big] + \sum_{j=1}^{M}\frac{|u_{Nj}-\tilde v_{Nj}|^2}{h^2} + \sum_{i=1}^{N}\frac{|u_{iM}-\tilde v_{iM}|^2}{h^2} \\
&\le \sum_{j=1}^{M}\sum_{i=1}^{N}\Big[\big|\nabla_x(u_{ij}-\tilde v_{ij})\big|^2 + \big|\nabla_y(u_{ij}-\tilde v_{ij})\big|^2\Big] + \frac{2}{h^4}\,\|u-\tilde v\|_h^2.
\end{aligned}$$

Then, we have the result (14).

As a consequence of Lemmas 1, 2, and 3, we prove the uniform monotonicity of the mapping F(u); thus the nonlinear system (7) has a unique solution in this set.

Theorem 3. If

$$\alpha_{\min} + \frac{1}{\tau} > \frac{\sigma_{\min}}{h^2} + \frac{\beta^2\Gamma^2}{2\sigma_{\min}},$$
(15)

then the nonlinear system F(u) = 0, with F(u) as in (7), has a unique solution in this set.

Proof. We show that the mapping F(u) is continuous and uniformly monotone, i.e., there exists a positive scalar γ such that

$$\langle F(u)-F(v),\, u-v\rangle \ge \gamma\,\langle u-v,\, u-v\rangle,$$
(16)

for all u and v in this set. Then, the nonlinear system in (7) has a unique solution [[3], pp. 143, 167].

From

$$\frac{1}{\tau}\big(F(u)-F(v)\big) = \Big(-A(u)+\frac{1}{\tau}I\Big)(u-v) + \big(-A(u)+A(v)\big)v - b(u)+b(v),$$

we have

$$\frac{1}{\tau}\langle F(u)-F(v),\, u-v\rangle = \langle -\hat A(u)(u-v),\, u-v\rangle + \langle -\tilde A(u-v),\, u-v\rangle + \Big\langle\Big(-\hat D+\frac{1}{\tau}I\Big)(u-v),\, u-v\Big\rangle + \big\langle(-\hat A(u)+\hat A(v))v - b(u)+b(v),\, u-v\big\rangle.$$

Since à is the skew-symmetric part of A(u), it follows that <-à ( u - v ) ,u-v>=0.

We separately examine the terms ⟨−Â(u)(u−v), u−v⟩, ⟨(−D̂ + (1/τ)I)(u−v), u−v⟩ and ⟨(−Â(u)+Â(v))v − b(u)+b(v), u−v⟩.

Lemma 2 with u − v in place of v, together with assumption (ii) on the uniform lower boundedness of σ with respect to the variable u, permits us to write

$$\langle -\hat A(u)(u-v),\, u-v\rangle \ge h^2\sigma_{\min}\sum_{j=1}^{M}\sum_{i=1}^{N}\Big[\big(\nabla_x(u_{ij}-v_{ij})\big)^2 + \big(\nabla_y(u_{ij}-v_{ij})\big)^2\Big] + \sigma_{\min}\sum_{j=1}^{M}|u_{Nj}-v_{Nj}|^2 + \sigma_{\min}\sum_{i=1}^{N}|u_{iM}-v_{iM}|^2.$$

Since α(x, y) ≥ αmin ≥ 0 we have

$$\Big\langle\Big(-\hat D+\frac{1}{\tau}I\Big)(u-v),\, u-v\Big\rangle \ge \Big(\alpha_{\min}+\frac{1}{\tau}\Big)\|u-v\|_h^2.$$

By using Lemma 3 with v in place of ṽ, we can write

$$\langle(-\hat A(u)+\hat A(v))v - b(u)+b(v),\, u-v\rangle \ge -\beta\Gamma\Big(\frac{1}{\phi}+\frac{\phi}{2h^2}\Big)\|u-v\|_h^2 - \frac{\beta h^2\Gamma\phi}{2}\sum_{j=1}^{M}\sum_{i=1}^{N}\Big[\big(\nabla_x(u_{ij}-v_{ij})\big)^2 + \big(\nabla_y(u_{ij}-v_{ij})\big)^2\Big].$$

Then, collecting the last three inequalities we obtain

$$\frac{1}{\tau}\langle F(u)-F(v),\, u-v\rangle \ge \Big[\alpha_{\min}+\frac{1}{\tau}-\beta\Gamma\Big(\frac{1}{\phi}+\frac{\phi}{2h^2}\Big)\Big]\|u-v\|_h^2 + h^2\Big(\sigma_{\min}-\frac{\beta\Gamma\phi}{2}\Big)\sum_{j=1}^{M}\sum_{i=1}^{N}\Big[\big(\nabla_x(u_{ij}-v_{ij})\big)^2 + \big(\nabla_y(u_{ij}-v_{ij})\big)^2\Big].$$

If we set

$$\phi = \frac{2\sigma_{\min}}{\beta\Gamma},$$

we obtain

$$\frac{1}{\tau}\langle F(u)-F(v),\, u-v\rangle \ge \Big(\alpha_{\min}+\frac{1}{\tau}-\frac{\beta^2\Gamma^2}{2\sigma_{\min}}-\frac{\sigma_{\min}}{h^2}\Big)\|u-v\|_h^2.$$

When condition (15) holds, the mapping F(u) is uniformly monotone on this set, and the constant γ in (16) is

$$\gamma = \tau\Big(\alpha_{\min}-\frac{\sigma_{\min}}{h^2}-\frac{\beta^2\Gamma^2}{2\sigma_{\min}}\Big) + 1.$$

Now, we can state a result for the convergence of the Lagged Diffusivity method, where the vector u^{(k+1)} is the approximate solution of the linear system

$$(I - \tau A(u^{(k)}))\,u = w + \tau\, b(u^{(k)}),$$
(17)

such that the residual

$$r^{(k+1)} = (I - \tau A(u^{(k)}))\,u^{(k+1)} - w - \tau\, b(u^{(k)}),$$

satisfies the stopping condition

$$\|r^{(k+1)}\| \le \varepsilon_{k+1},$$
(18)

where ε k is a given tolerance such that ε k → 0 when k → ∞.

Thus, the iterate u^{(k+1)} is the solution of the system (7) in which the diffusivity σ appearing in A(u) and b(u) is evaluated at the previous iterate u^{(k)}, while the remaining terms depend on u^{(k+1)}.

We suppose that the grid functions {u^{(k)}_{ij}}, k = 0, 1, ..., belong to this set. In particular, the backward difference quotients of each grid function {u^{(k)}_{ij}} are bounded. Since this bound depends on the inhomogeneous term, there exist two constants β > 0 and β_0 > 0 such that

$$\big|\nabla_x u^{(k)}_{ij}\big| \le \tilde\beta_k \quad\text{and}\quad \big|\nabla_y u^{(k)}_{ij}\big| \le \tilde\beta_k,$$
(19)

with β̃_k = β + ε_k β_0. (Formula (19) replaces formula (9).)

Theorem 4. Let u* be the solution of the nonlinear system F(u) = 0 in (7). Let u^{(k+1)} be the solution of the linear system in (17) with condition (18). If condition (15) is satisfied and, in particular,

$$\alpha_{\min} + \frac{1}{\tau} > \frac{\sigma_{\min}}{h^2},$$
(20)

then, the sequence {u(k)} converges to u*.

Proof. The solution u * of (7) satisfies the equation

$$u^* - \tau A(u^*)\,u^* - w - \tau\, b(u^*) = 0,$$
(21)

and the iterate u^{(k+1)} satisfies the equation

$$u^{(k+1)} - \tau A(u^{(k)})\,u^{(k+1)} - w - \tau\, b(u^{(k)}) = r^{(k+1)}.$$
(22)

Subtracting (22) from (21), we obtain

$$-A(u^*)\,u^* + A(u^{(k)})\,u^{(k+1)} + \frac{1}{\tau}u^* - \frac{1}{\tau}u^{(k+1)} - b(u^*) + b(u^{(k)}) = -\frac{1}{\tau}r^{(k+1)}.$$

Taking into account the identity

$$-A(u)\,u + A(w)\,v = -A(u)(u-v) + \big(-A(u)+A(w)\big)v,$$

for all u, v and w belonging to this set, we can write

$$\Big(-A(u^*)+\frac{1}{\tau}I\Big)(u^*-u^{(k+1)}) + \big(-A(u^*)+A(u^{(k)})\big)u^{(k+1)} - b(u^*)+b(u^{(k)}) = -\frac{1}{\tau}r^{(k+1)}.$$

Thus, we have

$$\Big\langle\Big(-A(u^*)+\frac{1}{\tau}I\Big)(u^*-u^{(k+1)}),\, u^*-u^{(k+1)}\Big\rangle + \big\langle\big(-A(u^*)+A(u^{(k)})\big)u^{(k+1)} - b(u^*)+b(u^{(k)}),\, u^*-u^{(k+1)}\big\rangle = \Big\langle-\frac{1}{\tau}r^{(k+1)},\, u^*-u^{(k+1)}\Big\rangle.$$

Since à is the skew-symmetric part of A(u), for any vector z μwe have

$$\langle(\hat A(u)+\tilde A+\hat D)\,z,\, z\rangle = \langle(\hat A(u)+\hat D)\,z,\, z\rangle,$$

and then, we can write

$$\Big\langle-\frac{1}{\tau}r^{(k+1)},\, u^*-u^{(k+1)}\Big\rangle \ge \big\langle-\hat A(u^*)(u^*-u^{(k+1)}),\, u^*-u^{(k+1)}\big\rangle + \Big\langle\Big(-\hat D+\frac{1}{\tau}I\Big)(u^*-u^{(k+1)}),\, u^*-u^{(k+1)}\Big\rangle - \big|\big\langle\big(-\hat A(u^*)+\hat A(u^{(k)})\big)u^{(k+1)} - b(u^*)+b(u^{(k)}),\, u^*-u^{(k+1)}\big\rangle\big|.$$

From Lemmas 2 and 3 we obtain

$$\begin{aligned}
\Big\langle-\frac{1}{\tau}r^{(k+1)},\, u^*-u^{(k+1)}\Big\rangle \ge{}& h^2\sigma_{\min}\sum_{i=1}^{N}\sum_{j=1}^{M}\Big(\big|\nabla_x(u^*_{ij}-u^{(k+1)}_{ij})\big|^2 + \big|\nabla_y(u^*_{ij}-u^{(k+1)}_{ij})\big|^2\Big) \\
&+ \sigma_{\min}\sum_{j=1}^{M}\big|u^*_{Nj}-u^{(k+1)}_{Nj}\big|^2 + \sigma_{\min}\sum_{i=1}^{N}\big|u^*_{iM}-u^{(k+1)}_{iM}\big|^2 + \Big(\alpha_{\min}+\frac{1}{\tau}\Big)\|u^*-u^{(k+1)}\|_h^2 \\
&- \frac{\tilde\beta_{k+1}\Gamma}{\phi}\,\|u^*-u^{(k)}\|_h^2 - \frac{\tilde\beta_{k+1}\Gamma\phi}{2h^2}\,\|u^*-u^{(k+1)}\|_h^2 - \frac{\tilde\beta_{k+1}h^2\Gamma\phi}{2}\sum_{j=1}^{M}\sum_{i=1}^{N}\Big(\big|\nabla_x(u^*_{ij}-u^{(k+1)}_{ij})\big|^2 + \big|\nabla_y(u^*_{ij}-u^{(k+1)}_{ij})\big|^2\Big) \\
\ge{}& h^2\Big(\sigma_{\min}-\frac{\tilde\beta_{k+1}\Gamma\phi}{2}\Big)\sum_{i=1}^{N}\sum_{j=1}^{M}\Big(\big|\nabla_x(u^*_{ij}-u^{(k+1)}_{ij})\big|^2 + \big|\nabla_y(u^*_{ij}-u^{(k+1)}_{ij})\big|^2\Big) \\
&+ \Big(\alpha_{\min}+\frac{1}{\tau}\Big)\|u^*-u^{(k+1)}\|_h^2 - \frac{\tilde\beta_{k+1}\Gamma}{\phi}\,\|u^*-u^{(k)}\|_h^2 - \frac{\tilde\beta_{k+1}\Gamma\phi}{2h^2}\,\|u^*-u^{(k+1)}\|_h^2.
\end{aligned}$$
(23)

Taking into account the stopping condition (18), we can write

$$\frac{1}{\tau}\,\varepsilon_{k+1}\,\|u^*-u^{(k+1)}\|_h \ge \frac{1}{\tau}\,\|r^{(k+1)}\|\,\|u^*-u^{(k+1)}\|_h \ge \Big\langle-\frac{1}{\tau}r^{(k+1)},\, u^*-u^{(k+1)}\Big\rangle.$$
(24)

Since ϕ is an arbitrary positive number, we can choose ϕ such that in (23)

$$\sigma_{\min} - \frac{\tilde\beta_{k+1}\Gamma\phi}{2} = 0,$$

that is

$$\phi = \frac{2\sigma_{\min}}{\tilde\beta_{k+1}\Gamma}.$$

Thus, from (23) and (24) we have

$$\frac{1}{\tau}\,\varepsilon_{k+1}\,\|u^*-u^{(k+1)}\|_h \ge \Big(\alpha_{\min}+\frac{1}{\tau}-\frac{\sigma_{\min}}{h^2}\Big)\|u^*-u^{(k+1)}\|_h^2 - \frac{\tilde\beta_{k+1}^2\Gamma^2}{2\sigma_{\min}}\,\|u^*-u^{(k)}\|_h^2.$$

Since the grid function {u^{(k+1)}_{ij}} belongs to this set, it satisfies the inequality (8), and so we have

$$\|u^*-u^{(k+1)}\|_h \le 2\rho,$$

then

$$\frac{2\rho}{\tau}\,\varepsilon_{k+1} \ge \Big(\alpha_{\min}+\frac{1}{\tau}-\frac{\sigma_{\min}}{h^2}\Big)\|u^*-u^{(k+1)}\|_h^2 - \frac{\tilde\beta_{k+1}^2\Gamma^2}{2\sigma_{\min}}\,\|u^*-u^{(k)}\|_h^2.$$

We assume that condition (15) holds. Then, taking into account the expression of β̃_k, we have the inequality

$$\|u^*-u^{(k+1)}\|_h^2 \le \hat\gamma\,\|u^*-u^{(k)}\|_h^2 + \hat a\,\varepsilon_{k+1},$$
(25)

where

$$a = \alpha_{\min}+\frac{1}{\tau}-\frac{\sigma_{\min}}{h^2} > 0, \qquad \hat\gamma = \frac{(\beta+\varepsilon_{k+1}\beta_0)^2\,\Gamma^2}{2\sigma_{\min}\,a}, \qquad \hat a = \frac{2\rho}{\tau\,a}.$$

Now, since there exists an integer k_0 such that γ̂ < 1 for all k ≥ k_0, we can write formula (25) as

$$\|u^*-u^{(k_0+r)}\|_h^2 \le \hat\gamma^{\,r}\,\|u^*-u^{(k_0)}\|_h^2 + \hat a\sum_{j=1}^{r}\hat\gamma^{\,r-j}\,\varepsilon_{k_0+j},$$

r = 1, 2,..., and since ε k → 0 for k → ∞, it follows from the general Toeplitz Lemma (e.g., see [[3], p. 399]) that

$$\lim_{k\to\infty}\|u^*-u^{(k)}\|_h^2 = 0.$$

Therefore, the sequence {u(k)} of approximate solutions of the Lagged Diffusivity method converges to the solution u* of the system (7).

3 Numerical experiments

In this section, we present numerical experiments with the Lagged Diffusivity method for the solution of the nonlinear system generated by the θ-method applied to the model problem (1) in a rectangular domain with Dirichlet boundary conditions. Indeed, we solve with the Lagged Diffusivity procedure the nonlinear system

$$(I - \tau A(u))\,u - w - b(u) = 0.$$

The Lagged Diffusivity procedure is an efficient and robust method, even if only linearly convergent, for solving the digital image restoration problem using diffusive filters.

One of the most widely used diffusive filters is defined (in equation (1) with ṽ = 0) by a diffusivity σ chosen as a rapidly decreasing function of the gradient magnitude |∇u|². Specifically, σ(s): [0, ∞) → (0, ∞) is a decreasing function satisfying σ(0) = 1 and lim_{s→∞} σ(s) = 0, with s = |∇u|² = (∂u/∂x)² + (∂u/∂y)².

Due to the presence of this term σ(∇u), the operator div(σ∇u) is highly nonlinear and, when linearized by lagging the diffusivity σ, it has highly varying coefficients [4–8].

It often happens in real images that |∇u| = 0. This makes necessary the use of a numerical regularization, consisting in replacing the term |∇u| by (|∇u|² + β)^{1/2} for a "small enough" positive artificial parameter β. Due to the presence of the highly nonlinear operator div(σ∇u), Newton's method for solving the nonlinear system (7) does not work satisfactorily, in the sense that its domain of convergence is very small.

These diffusion problems for image restoration are also solved using operator splitting methods (e.g., [9–11]). Operator splitting is a powerful concept used in Computational Mathematics for the design of effective numerical methods. These splitting methods are essentially based on special relaxation processes which allow one to reduce a complicated problem to a sequence of simpler problems that can be solved effectively on a computer.

At present, there is considerable interest in the application of operator splitting methods to problems of Financial Mathematics and, in particular, to diffusion convection equations with mixed derivative terms (e.g., [12–14]).

Operator splitting is also a very common tool for solving nonlinear evolution equations, including hyperbolic conservation laws and degenerate diffusion convection equations with nonsmooth solutions. In these evolution equations, convection dominates diffusion (e.g., [15–19]).

The alternative case is considered in our computational experiments: we consider diffusion convection problems of a diffusion-dominated nature, such as those concerning the groundwater transport of a contaminant in an aquifer or the control of heating processes in industrial kilns.

The convergence of the "outer" iteration {u^{(k)}} to u* in the Lagged Diffusivity procedure involves the solution of the linear system (17) for the unknown vector u. This linear system may be solved by an operator splitting method. The effect of this "inner" iteration on the convergence of the "outer" iteration to u* must be analyzed in order to define a good strategy for the convergence of the Lagged Diffusivity procedure. Indeed, a significant reduction in the total effort can often be achieved by a proper coordination of the inner and outer iterations.

In the experiments, the solution vector u* is prescribed and consists of the values of given functions u(x, y, t) defined on [a, b] × [a, b] × [0, ∞). In all the experiments, we have a = 0 and b = 1. We choose different solution functions, where the time value t is set equal to 1:

$$\begin{aligned}
u_1:&\ u(x,y,t) = x\,y\,t, \qquad & u_2:&\ u(x,y,t) = \sum_{i=1}^{3} 15\,e^{-8\left((x-\xi_i)^2+(y-\eta_i)^2\right)}\,t,\\
u_3:&\ u(x,y,t) = (x+y)\,t, \qquad & u_4:&\ u(x,y,t) = (1+x-y)^3\,t.
\end{aligned}$$

For the function u 2, we have ξ1 = η1 = 0.2, ξ2 = 0.2, η2 = 0.8 and ξ3 = η3 = 0.8.

The chosen functions σ(u) are:

$$\sigma_1:\ \sigma(u) = 0.01 + 0.5\,u, \qquad \sigma_2:\ \sigma(u) = 0.01 + 0.5\,u^2, \qquad \sigma_3:\ \sigma(u) = \frac{100}{1+500\,u}.$$
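For completeness, a sketch of the test solutions and diffusivities listed above (the Gaussian centres are those given for u_2; the form of σ_3 is read here as 100/(1 + 500u) and should be checked against the published article).

```python
import numpy as np

# Gaussian centres (xi_i, eta_i) of u2 as given above
XI, ETA = np.array([0.2, 0.2, 0.8]), np.array([0.2, 0.8, 0.8])

def u1(x, y, t): return x * y * t
def u2(x, y, t): return t * sum(15.0 * np.exp(-8.0 * ((x - a) ** 2 + (y - c) ** 2))
                                for a, c in zip(XI, ETA))
def u3(x, y, t): return (x + y) * t
def u4(x, y, t): return (1.0 + x - y) ** 3 * t

def sigma1(u): return 0.01 + 0.5 * u
def sigma2(u): return 0.01 + 0.5 * u ** 2
def sigma3(u): return 100.0 / (1.0 + 500.0 * u)
```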

The vector w is computed as

$$w \equiv w^* = (I - \tau A(u^*))\,u^* - b(u^*),$$

where the matrix A(u*) and the vector b(u*) have elements as in formula (5) with N = M and α(x, y) = 0. We set θ = 1 and Δt = 10^{-3} or Δt = 10^{-4}; τ = θΔt. Here, condition (20) holds.
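With a prescribed u*, the "source" vector w* can be formed as below (a sketch reusing the assembly function assumed in Section 1); by construction, u* is then the exact solution of the lagged linear systems.

```python
def manufactured_w(assemble, u_star, tau):
    """Construction of w* = (I - tau*A(u*)) u* - b(u*) from a prescribed u*;
    assemble(u) returns (A(u), b(u)) as in the assembly sketch of Section 1."""
    A_star, b_star = assemble(u_star)
    return u_star - tau * (A_star @ u_star) - b_star
```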

At each iteration k of the Lagged Diffusivity procedure, we have to solve the linear system of order μ = N × N:

$$(I - \tau A(u^{(k)}))\,u = w^* + b(u^{(k)}).$$

We consider the case in which the coefficient matrix is an M-matrix (ṽ ≠ 0).

In these experiments, the iterative Arithmetic Mean method is used as the linear solver, in the form introduced in [20]. This method is convergent when the coefficient matrix is a nonsingular M-matrix. This form of the Arithmetic Mean method is a variant of the Alternating Group Explicit (AGE) decomposition introduced by Evans (e.g., see [21–23]). The effectiveness of the Arithmetic Mean method, even on parallel architectures, is highlighted in [20, 24–27]. Some recent papers on the Arithmetic Mean method applied to linear systems are [28–31].
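The Arithmetic Mean method itself is defined in [20] and is not reproduced here. Purely as an illustration of the idea, the sketch below shows a generic two-splitting iteration of "arithmetic mean" type, in which two accelerated systems are solved independently and the new iterate is the average of the two partial solutions; the actual splitting, the acceleration parameter rho and the convergence theory must be taken from [20], not from this stand-in.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def arithmetic_mean_like(M1, M2, rhs, u0, rho, tol, max_it=10000):
    """Illustrative two-splitting iteration of 'arithmetic mean' type for the
    system M u = rhs with M = M1 + M2 (the actual splitting is that of [20]):
        (rho*I + M1) z1 = (rho*I - M2) u + rhs
        (rho*I + M2) z2 = (rho*I - M1) u + rhs
        u_new = (z1 + z2) / 2
    The two solves are independent of each other, which is what makes this
    class of methods attractive on parallel architectures."""
    n = rhs.size
    I = sp.identity(n, format="csc")
    M = (M1 + M2).tocsr()
    solve1 = spla.factorized((rho * I + M1).tocsc())
    solve2 = spla.factorized((rho * I + M2).tocsc())
    u = u0.copy()
    for _ in range(max_it):
        z1 = solve1((rho * I - M2) @ u + rhs)
        z2 = solve2((rho * I - M1) @ u + rhs)
        u = 0.5 * (z1 + z2)
        if np.linalg.norm(M @ u - rhs) <= tol:
            break
    return u
```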

We call u^{(k+1)} the solution of the linear system above computed with j_k iterations of the Arithmetic Mean solver, where the inner residual

$$r^{(k+1)} = (I - \tau A(u^{(k)}))\,u^{(k+1)} - w^* - b(u^{(k)}),$$

satisfies the condition

$$\|r^{(k+1)}\| \le \varepsilon_{k+1},$$

with ε_1 = 0.1‖F(u^{(0)})‖ and

$$\varepsilon_{k+1} = 0.5\,\varepsilon_k.$$
(26)

The vector F(u^{(0)}) = (I − τA(u^{(0)}))u^{(0)} − w* − b(u^{(0)}) is the initial outer residual and its Euclidean norm is called res_0.

The initial vector u(0) is taken as the null vector (u(0) = 0) or as the vector e which is the vector with all the components equal to 1 (u(0) = e).

The Lagged Diffusivity procedure has been implemented in a Fortran code with machine precision 2.2 × 10^{-16} and stops when

$$\varepsilon_{k+1} \le \varepsilon,$$
(27)

with ε = 10^{-4}.

One experiment evaluates the effectiveness of the Lagged Diffusivity method for different values of ε; another experiment shows the behavior of the method for different choices of ε_{k+1} with respect to the one in (26), with ε = 10^{-4}.

We call k* the iteration of the Lagged Diffusivity procedure for which condition (27) is satisfied.

In the tables, we report the number of iterations k*, the total number of iterations of the Arithmetic Mean method j_T, the discrete l2(R_h) norm of the error, err = ‖u* − u^{(k*)}‖_h, the Euclidean norm of the outer residual

$$\mathrm{res} = \|F(u^{(k^*)})\| = \|(I - \tau A(u^{(k^*)}))\,u^{(k^*)} - w^* - b(u^{(k^*)})\|,$$

and res0.

The symbol * close to the value of res indicates that the behavior of the norm of the outer residual ‖F(u^{(k)})‖ is not monotonically decreasing. In addition, σ̄* is an approximation of σ_max.

Furthermore, in the tables, the notation 5.26(−7) means 5.26 × 10^{-7}.

From the numerical experiments, we can draw the following conclusions (see Tables 1, 2 and 3).

Table 1 Results with N = 256, v_1 = v_2 = 1, u^{(0)} = 0
Table 2 Results with N = 256, v_1 = v_2 = 1, u^{(0)} = e
Table 3 Results for different ε and ε_{k+1} (u^{(0)} = 0)
  • We observe that, since ε_{k+1} decreases for increasing k according to (26), and the Lagged Diffusivity method stops at the iteration k* when the criterion for ε_{k*+1} in (27) is satisfied, we have

    $$\varepsilon_{k^*+1} = \tfrac{1}{2}\,\varepsilon_{k^*} = \tfrac{1}{2^2}\,\varepsilon_{k^*-1} = \cdots = \tfrac{1}{2^{k^*}}\,\varepsilon_1 \le \varepsilon,$$

where we set ε_1 = 0.1‖F(u^{(0)})‖. Then

$$k^* > \log_2\frac{\varepsilon_1}{\varepsilon}.$$

In the experiments, we obtain

$$k^* = \Big\lceil \log_2\frac{\varepsilon_1}{\varepsilon} \Big\rceil.$$
  • We observe that in all the experiments with the rule (26), the outer residual ‖F(u^{(k*)})‖ has the same order as ε, with an error in the discrete l2(R_h) norm of order hε.

  • From the experiments, the choice of ε_{k+1} as in (26) gives satisfactory results in terms of iterations of the inner solver and in relation to the outer residual and the error.

  • About the initial vectors, we can say that, generally, the null vector is a good choice, in terms of the total number of inner iterations, for σ(u) = σ_1 and σ(u) = σ_2, while for the functions u(x, y, t) with σ(u) = σ_3 we obtain better results when u^{(0)} = e.

An indication that the null vector is not a good initial vector for the problems with σ(u) = σ_3 is the number of inner iterations at the first outer iteration. Indeed, for the functions u_2, u_3, and u_4, most of the inner iterations occur at the first outer iteration, k = 1; for instance, in Table 1, when σ(u) = σ_3 and τ = 10^{-4}, at k = 1 we have, for u_2, 1,484 inner iterations out of a total of 1,534; for u_3, 1,421 out of 1,515; for u_4, 1,407 out of 1,986.

  • When the behavior of the norm of the outer residual ‖F(u^{(k)})‖ is not monotonically decreasing (i.e., σ(u) = σ_2, u(x, y, t) = u_2, u_4, τ = 10^{-3}), we can have a large total number of inner iterations at a single outer iteration. We suggest changing the initial vector so as to obtain a monotonically decreasing norm of the outer residual, which implies a reduction of the total number of inner iterations. Changing the initial vector u^{(0)}, the rows marked with "✔" in Table 1 become the results of Table 4.

Table 4 Results for the cases in Table 1 marked with "✔" with different initial vectors
  • From Table 1, we observe that the Arithmetic Mean method gives better performance when the ratio σ̄*/v_1 (v_1 = v_2) is small, that is, when the coefficient matrix of the inner linear system is strongly asymmetric (see [24]).

  • A very general conclusion is that, at each time step t_n, the Lagged Diffusivity method (17)-(18)-(26)-(27) allows us to obtain an approximate solution of the nonlinear system (7) with sufficiently high accuracy and low computational complexity, provided that the initial vector u^{(0)} is chosen properly.

Remark. Given the exact solution u(x, y, t) of equations (1), (3) at time t = t_{n+1} (for example, t_{n+1} = 1), in the above numerical experiments we have considered in system (7) the "source" vector w as w ≡ w* = (I − τA(u*))u* − b(u*) with u* = {u(x_i, y_j, t_{n+1})}.

With this definition of w, it has been possible to obtain, for our test problems, the behavior of the error associated with the numerical computation of the solution of (7), i.e., an indication of the accuracy on the solution of the nonlinear equations (7). We have denoted this error by err.

In order to analyze the behavior of the effective one-step error (in the discrete l2(R_h) norm) of the θ-method, denoted by E_step, we must consider in system (7) the "source" vector w as

$$w \equiv w^n = \big(I+\Delta t(1-\theta)A(u^n)\big)u^n + \Delta t(1-\theta)\,b(u^n) + \Delta t\big(\theta s^{n+1}+(1-\theta)s^n\big)$$

with u^n = {u(x_i, y_j, t_n)} and t_n = t_{n+1} − Δt.

From Table 5, we observe that the values of Estep are comparable with those of err, with the exception of the test problems σ(u) = σ 1, σ 2, u(x, y, t) = u 2 for τ = 10-3, 10-4, where the discrepancy between err and Estep depends on the fact that the matrix A(u) is ill conditioned because the Lipschitz constant is large.

Table 5 Results with N = 256, v_1 = v_2 = 1, u^{(0)} = u^n, w = w^n

Furthermore, the global error for solving the problem (1)-(2)-(3) with the θ-method combined with the Lagged Diffusivity procedure is computed for the cases

$$\begin{aligned}
c_1:&\ \theta = 1,\ \ u(x,y,t) = u_2,\ \ \sigma(u) = \sigma_1, \qquad & c_2:&\ \theta = 0.5,\ \ u(x,y,t) = u_2,\ \ \sigma(u) = \sigma_1,\\
c_3:&\ \theta = 1,\ \ u(x,y,t) = u_4,\ \ \sigma(u) = \sigma_1, \qquad & c_4:&\ \theta = 0.5,\ \ u(x,y,t) = u_4,\ \ \sigma(u) = \sigma_1,
\end{aligned}$$

and it is denoted with E(c 1), E(c 2), E(c 3), and E(c 4).

The behavior of this global error, step by step, for 1 ≤ t ≤ 1.2 with Δt = 10^{-3} is shown in Figure 1; the numerical results are seen to be largely in keeping with the theory.

Figure 1. Global error for cases c_1–c_4.

References

  1. Varga R: Matrix Iterative Analysis. 2nd edition. Berlin: Springer; 2000.

  2. Isaacson E, Keller HB: Analysis of Numerical Methods. New York: John Wiley & Sons; 1966.

  3. Ortega JM, Rheinboldt WC: Iterative Solution of Nonlinear Equations in Several Variables. New York: Academic Press; 1970.

  4. Vogel CR, Oman ME: Iterative methods for total variation denoising. SIAM Journal on Scientific Computing 1996, 17: 227–238. 10.1137/0917016

  5. Dobson DO, Vogel CR: Convergence of an iterative method for total variation denoising. SIAM Journal on Numerical Analysis 1997, 34: 1779–1791. 10.1137/S003614299528701X

  6. Chan TF, Mulet P: On the convergence of the lagged diffusivity fixed point method in total variation image restoration. SIAM Journal on Numerical Analysis 1999, 36: 354–367. 10.1137/S0036142997327075

  7. Chang Q, Chern IL: Acceleration methods for total variation-based image denoising. SIAM Journal on Scientific Computing 2003, 25: 982–994. 10.1137/S106482750241534X

  8. Shi Y, Chang Q, Xu J: Convergence of fixed point iteration for deblurring and denoising problem. Applied Mathematics and Computation 2007, 189: 1178–1185. 10.1016/j.amc.2006.12.004

  9. Weickert J: Anisotropic Diffusion in Image Processing. Stuttgart: B. G. Teubner; 1998.

  10. Weickert J, ter Haar Romeny BM, Viergever MA: Efficient and reliable schemes for nonlinear diffusion filtering. IEEE Transaction on Image Processing 1998, 7: 398–410. 10.1109/83.661190

  11. Barash D, Schlick T, Israeli M, Kimmel R: Multiplicative operator splitting in nonlinear diffusion: from spatial splitting to multiple timesteps. Journal of Mathematical Image and Vision 2003, 19: 33–48. 10.1023/A:1024484920022

  12. in 't Hout KJ, Welfert BD: Stability of ADI schemes applied to convection-diffusion equations with mixed derivative terms. Applied Numerical Mathematics 2007, 57: 19–35. 10.1016/j.apnum.2005.11.011

  13. in 't Hout KJ, Welfert BD: Unconditional stability of second-order ADI schemes applied to multi-dimensional diffusion equations with mixed derivative terms. Applied Numerical Mathematics 2009, 59: 677–692. 10.1016/j.apnum.2008.03.016

  14. in 't Hout KJ, Foulon S: ADI finite difference schemes for option pricing in the Heston model with correlation. International Journal of Numerical Analysis and Modeling 2010, 7: 303–320.

  15. Karlsen KH, Risebro NH: An operator splitting method for nonlinear convection-diffusion equations. Numerische Mathematik 1997, 77: 365–382. 10.1007/s002110050291

  16. Karlsen KH, Lie KA, Natvig JR, Nordhaug HF, Dahle HK: Operator splitting methods for systems of convection-diffusion equations: nonlinear error mechanism and correction strategies. Journal of Computational Physics 2001, 173: 636–663. 10.1006/jcph.2001.6901

  17. Holden H, Karlsen KH, Lie KA, Risebro NH: Splitting Methods for Partial Differential Equations with Rough Solutions. Zurich: European Mathematical Society Publishing House; 2010. [Series of Lectures in Mathematics]

  18. Geiser J: Modified Jacobian Newton iterative method: theory and applications. Mathematical Problems in Engineering 2009, 24. Article ID 307298

  19. Geiser J: Consistency of iterative operator-splitting methods. theory and applications. Numerical Methods for Partial Differential Equations 2010, 26: 135–158. 10.1002/num.20422

  20. Ruggiero V, Galligani E: A parallel algorithm for solving block tridiagonal linear systems. Computers Mathematics with Applications 1992, 24: 15–21. 10.1016/0898-1221(92)90003-Z

  21. Evans DJ: Group explicit iterative methods for solving large linear systems. International Journal of Computer Mathematics 1985, 17: 81–108. 10.1080/00207168508803452

  22. Evans DJ, Yousif WS: The solution of two-point boundary value problems by the alternating group explicit (AGE) method. SIAM Journal on Scientific and Statistic Computing 1988, 9: 474–484. 10.1137/0909031

  23. Tavakoli R, Davami P: 2D parallel and stable group explicit finite difference method for solution of diffusion equation. Applied Mathematics and Computation 2007, 188: 1184–1192. 10.1016/j.amc.2006.10.057

  24. Ruggiero V, Galligani E: An iterative method for large sparse linear systems on a vector computer. Computers Mathematics with Applications 1990, 20: 25–28.

  25. Galligani E, Ruggiero V: Analysis of splitting parallel methods for solving block tridiagonal linear systems. In Proceedings of 2nd International Conference on Software for Multiprocessors and Supercomputers, Theory, Practice, Experience - SMS TPE'94: 19–23 September 1994; Moscow (Russia). Edited by: Ivannikov VP, Serebriakov VA. Moscow: Russian Academy of Sciences; 1994:406–416.

  26. Galligani E, Ruggiero V: Implementation of splitting methods for solving block tridiagonal linear systems on transputers. In Proceedings of 3rd EuroMicro Workshop on Parallel and Distributed Processing - Euromicro PDP'95: 25–27 January 1995; Sanremo (Italy). Edited by: Valero M, Gonzalez A, Los Alamitos CA. IEEE Computer Society Press; 1995:409–415.

  27. Galligani E, Ruggiero V: The two-stage arithmetic mean method. Applied Mathematics and Computation 1997, 85: 245–264. 10.1016/S0096-3003(96)00139-7

  28. Sulaiman J, Othman M, Hasan MK: A new half-sweep arithmetic mean (HSAM) algorithm for two-point boundary value problems. Proceedings of the International Conference on Statistics and Mathematics and Its Application in the Development of Science and Technology: 4–6 October 2004; Bandung (Indonesia) 2004, 169–173.

  29. Sulaiman J, Othman M, Hasan MK: A new quarter sweep arithmetic mean (QSAM) method to solve diffusion equations. Chamchuri Journal of Mathematics 2009, 1: 93–103.

  30. Sulaiman J, Hasan MK, Othman M, Yaacob Z: Quarter-sweep arithmetic mean iterative algorithm to solve fourth-order parabolic equations. In Proceedings of the 2009 International Conference on Electrical Engineering and Informatics - ICEEI 2009, vol. 1: 5–7 August 2009; Bangi Selangor (Malaysia). Institute of Electrical and Electronics Engineers (IEEE); 2009:194–198.

  31. Hasan MK, Sulaiman J, Karim SAA, Othman M: Development of some numerical methods applying complexity reduction approach for solving scientific problems. Journal of Applied Sciences 2011, 11: 1255–1260.

Acknowledgements

The author is very grateful to the anonymous referees for their valuable comments and suggestions, which have improved the article.

Author information

Corresponding author

Correspondence to Emanuele Galligani.

Additional information

Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Galligani, E. Lagged diffusivity method for the solution of nonlinear diffusion convection problems with finite differences. Adv Differ Equ 2012, 30 (2012). https://doi.org/10.1186/1687-1847-2012-30
