Open Access

Fast multipole method for singular integral equations of second kind

Advances in Difference Equations 2015, 2015:191

https://doi.org/10.1186/s13662-015-0515-6

Received: 5 December 2014

Accepted: 21 May 2015

Published: 20 June 2015

Abstract

This paper explores numerical methods for a singular integral equation (SIE), which arises in the study of various problems of mathematical physics and engineering. The idea behind the boundary element method (BEM) is used to discretize the SIE. The fast multipole method (FMM), a very efficient and popular algorithm for the rapid solution of boundary value problems, is used to accelerate the BEM solution of the SIE. The effectiveness and accuracy of the proposed method are tested by numerical examples.

Keywords

singular integral equation; boundary element method; fast multipole method; Cauchy principal value; finite-part integral

1 Introduction

Various problems of mathematical physics and engineering can be described by differential equations, which can often be reformulated as equivalent integral equations. For example, integral equations with singular kernels arise in practical applications such as potential problems, the Dirichlet problem, and radiative equilibrium [1], and in general can be described as
$$ au(x)+\frac{b}{\pi}\int_{-1}^{1} \frac{u(y)}{y-x}\,dy+\lambda\int_{-1}^{1}k(y,x)u(y) \,dy=f(x), \quad x\in(-1,1), $$
(1)
where a, b are two real constants, \(f(x)\) and \(k(y,x)\) are given functions, the first integral in (1) is defined in the sense of the Cauchy principal value, and λ is not an eigenvalue. Many numerical methods have been developed to solve SIE (1); collocation methods, Galerkin methods, spectral methods, etc. [2–11] have been widely used for these kinds of problems for many years. Recently, Xiang and Brunner [12, 13] introduced collocation and discontinuous Galerkin methods for Volterra integral equations with highly oscillatory Bessel kernels, and they concluded that the collocation methods are much more easily implemented and achieve higher accuracy than discontinuous Galerkin methods on the same piecewise polynomial space. Volterra integral equations with oscillatory trigonometric kernels are considered in [14, 15], where it is shown that the convergence order of collocation on graded meshes is not necessarily better than that on uniform meshes when the kernels are oscillatory trigonometric. Numerical solutions of Fredholm integral equations are discussed in [16, 17].

In general, approximating the solution of an SIE by numerical methods gives rise to a standard linear system of equations. In particular, collocation methods lead to systems of equations with dense, non-symmetric coefficient matrices, for which \(O(N^{2})\) elements need to be stored, with N being the number of degrees of freedom. Solving such systems directly requires \(O(N^{3})\) arithmetic operations. Fortunately, Rokhlin and Greengard introduced the fast multipole method (FMM), which has been widely used for solving large scale engineering problems such as potential, elastostatic, Stokes flow, and acoustic wave problems. In one dimension, as in SIE (1), the interval \([-1,1]\) is not a closed curve; nevertheless, we can still use the BEM to solve the SIE and accelerate the BEM by the FMM when the kernel \(k(y,x)\) has a multipole expansion or \(k(y,x)= 0\); for details, see [18–20].

In this paper, we are concerned with the evaluation of SIE
$$ u(x)+\lambda\int_{-1}^{1} \frac{u(y)}{(y-x)^{m}}\,dy=f(x),\quad x\in(-1,1), $$
(2)
where \(m\in\mathbb{Z}\), \(m\geq1\), \(u(x)\) is an unknown function and \(f(x)\) is a given function. The integral in (2) is defined in the sense of Hadamard finite-part integral for \(m>1\). For simplicity, we denote SIE (2) as
$$(I+\lambda K)u=f. $$

We approximate the solution of the SIE by the collocation method and utilize the FMM to improve the efficiency of the algorithm. The paper is organized as follows. In Section 2 we give a brief description of the FMM, introducing the multipole expansion theory together with the moment-to-moment (M2M), moment-to-local (M2L), and local-to-local (L2L) translations. In Section 3 we give the convergence analysis of the proposed method. In Section 4 we give preliminary numerical examples to illustrate the effectiveness and accuracy of the proposed method.

2 Fast multipole boundary element method for the solution of (2)

In this section, we recall some basic formulations for the fast multipole boundary element method. In order to solve an SIE numerically for the unknown function, we first discretize it. We divide the interval \((-1,1)\) into segments \((x_{j-1},x_{j})\), \(j=1,\ldots, N\), and use the piecewise constant collocation method [16, 21]; then SIE (2) becomes
$$ u(\tilde{x}_{i})+\lambda\sum _{j=1}^{N}\int_{x_{j-1}}^{x_{j}} \frac {u(y)}{(y-\tilde{x}_{i})^{m}}\,dy=f(\tilde{x}_{i}),\quad \tilde{x}_{i} \in(-1,1). $$
(3)
Denote \(A=(a_{ij})\) with \(a_{ij}=\lambda\int_{x_{j-1}}^{x_{j}}\frac {1}{(y-\tilde{x}_{i})^{m}}\,dy+\delta_{ij}\), \(\mathbf{b}=[f(\tilde {x}_{1}),\ldots,f(\tilde{x}_{N})]^{T}\), and \(\mathbf{u}=[u(\tilde{x}_{1}),\ldots, u(\tilde{x}_{N})]^{T}\). Then we obtain a standard linear system of equations
$$A\mathbf{u}=\mathbf{b}. $$
Solving this system by iterative solvers such as the generalized minimal residual (GMRES) method requires \(O(N^{2})\) operations per matrix-vector multiplication, and direct solvers such as Gauss elimination are even more expensive. Based on the multipole expansion of the kernel, the FMM can be used to accelerate the matrix-vector multiplication.
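To make the discretization concrete, the following minimal sketch (Python, assuming NumPy; the mesh size and the value of λ are illustrative choices taken from Example 4.1 in Section 4) assembles the collocation matrix for \(m=1\) on a uniform mesh with midpoint collocation nodes. The off-diagonal entries are logarithms of endpoint distances, and the Cauchy principal value over a segment's own midpoint vanishes. It is checked against the constant solution \(u\equiv1\), for which the right-hand side can be formed in closed form.

```python
import numpy as np

def assemble(N, lam):
    """Assemble the piecewise constant collocation system for m = 1 on a
    uniform mesh of (-1, 1), with the midpoints as collocation nodes."""
    x = np.linspace(-1.0, 1.0, N + 1)      # mesh points x_0, ..., x_N
    xt = 0.5 * (x[:-1] + x[1:])            # collocation nodes (midpoints)
    A = np.eye(N)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue  # Cauchy PV over the own segment vanishes at the midpoint
            A[i, j] += lam * np.log(abs(x[j + 1] - xt[i]) / abs(x[j] - xt[i]))
    return xt, A

# Sanity check with the constant solution u = 1 (cf. Example 4.1, lam = -1/(2*pi)):
# then f(x) = 1 + lam * ln((1 - x)/(1 + x)) from the principal value over (-1, 1).
N, lam = 200, -1.0 / (2.0 * np.pi)
xt, A = assemble(N, lam)
b = 1.0 + lam * np.log((1.0 - xt) / (1.0 + xt))
u = np.linalg.solve(A, b)   # direct solve is O(N^3); GMRES needs only matvecs
print(np.max(np.abs(u - 1.0)))
```

Replacing `np.linalg.solve` by an iterative solver such as GMRES reduces the work to repeated matrix-vector products, which is exactly the operation the FMM accelerates.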

2.1 Multipole expansion of the kernel

To derive the multipole expansion of kernel \(G(y,x):=\frac {1}{(y-x)^{m}}\), we need the following formulation.

Lemma 1

([22])

Let a be any real or complex constant and \(\vert z\vert <1\), then we have
$$ (1+z)^{a}=1+\frac{a}{1!}z+\frac{a(a-1)}{2!}z^{2}+ \frac {a(a-1)(a-2)}{3!}z^{3}+\cdots. $$
(4)
Applying Lemma 1 with \(z=\frac{y-y_{c}}{y_{c}-x}\), the kernel \(G(y,x)\) can be expressed as the following multipole expansion, provided \(\vert y-y_{c}\vert <\vert y_{c}-x\vert \):
$$\begin{aligned} G(y,x) =&(y_{c}-x)^{-m}\biggl(1+\frac{y-y_{c}}{y_{c}-x} \biggr)^{-m} \\ =& (y_{c}-x)^{-m} \biggl(1+\frac{-m}{1!}z+ \frac{-m(-m-1)}{2!}z^{2}+\frac {-m(-m-1)(-m-2)}{3!}z^{3}+\cdots\biggr) \\ =&\sum_{k=0}^{\infty} O_{k}(y_{c}-x)I_{k}(y-y_{c}), \end{aligned}$$
(5)
where
$$\begin{aligned}& I_{k}(x)=\frac{x^{k}}{k!} , \quad k\geq0, \end{aligned}$$
(6)
$$\begin{aligned}& O_{k}(x)= \textstyle\begin{cases} x^{-m},&k=0,\\ x^{-m-k}(-m)(-m-1)\cdots(-m-k+1),&k\geq1. \end{cases}\displaystyle \end{aligned}$$
(7)
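The expansion (5) with the definitions (6)-(7) can be checked numerically. The sketch below (Python; the sample points are arbitrary values satisfying \(\vert y-y_{c}\vert <\vert y_{c}-x\vert \)) compares a truncated expansion against the kernel for \(m=2\).

```python
import math

def I(k, x):
    # I_k(x) = x^k / k!, cf. (6)
    return x**k / math.factorial(k)

def O(k, x, m):
    # O_k(x), cf. (7)
    c = 1.0
    for j in range(k):
        c *= -(m + j)          # (-m)(-m-1)...(-m-k+1)
    return c * x**(-m - k)

m, y, x, yc = 2, 0.35, -0.6, 0.3   # |y - yc| = 0.05 < |yc - x| = 0.9
exact = 1.0 / (y - x)**m
approx = sum(O(k, yc - x, m) * I(k, y - yc) for k in range(21))
print(exact, approx)
```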
In addition, we have the following two results:
$$\begin{aligned} I_{k}(x_{1}+x_{2}) =&\sum _{l=0}^{k} I_{l}(x_{1})I_{k-l}{(x_{2})}= \sum_{l=0}^{k}I_{l}{(x_{2})}I_{k-l}(x_{1}), \end{aligned}$$
(8)
$$\begin{aligned} O_{k}(x_{1}+x_{2}) =&\sum _{l=0}^{\infty}O_{k+l}(x_{1})I_{l}(x_{2}), \quad \vert x_{1}\vert >\vert x_{2}\vert . \end{aligned}$$
(9)
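Both identities can be verified numerically; in the short sketch below the test values are arbitrary, with \(\vert x_{1}\vert >\vert x_{2}\vert \) for the shift identity (9), whose infinite sum is truncated. The binomial identity (8) is checked in the form \(I_{k}(x_{1}+x_{2})=\sum_{l=0}^{k} I_{l}(x_{1})I_{k-l}(x_{2})\).

```python
import math

def I(k, x):
    return x**k / math.factorial(k)

def O(k, x, m=1):
    c = 1.0
    for j in range(k):
        c *= -(m + j)
    return c * x**(-m - k)

k = 5
x1, x2 = 0.7, 0.3
lhs8 = I(k, x1 + x2)
rhs8 = sum(I(l, x1) * I(k - l, x2) for l in range(k + 1))   # binomial form of (8)

x1, x2 = 2.0, 0.5            # (9) requires |x1| > |x2|
lhs9 = O(k, x1 + x2)
rhs9 = sum(O(k + l, x1) * I(l, x2) for l in range(60))      # truncated infinite sum
print(lhs8, rhs8, lhs9, rhs9)
```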
The integral in (3) is now evaluated as follows:
$$\begin{aligned} \int_{x_{j-1}}^{x_{j}} G(y, \tilde{x}_{i})u(y)\,dy = & \sum_{k=0}^{\infty }O_{k}(y_{c}- \tilde{x}_{i})M_{k}(y_{c}), \end{aligned}$$
(10)
where \(M_{k}(y_{c})=\int_{x_{j-1}}^{x_{j}}I_{k}(y-y_{c})u(y)\,dy\) are called moments about \(y_{c}\), which are independent of the collocation point \(\tilde{x}_{i}\).
If the expansion point \(y_{c}\) is moved to a new location \(y_{c'}\), we obtain this translation by (6) and
$$\begin{aligned} M_{k}(y_{c'})=\int_{x_{j-1}}^{x_{j}}I_{k}(y-y_{c'})u(y) \,dy, \end{aligned}$$
which leads to
$$\begin{aligned} M_{k} (y_{c'})=\sum _{l=0}^{k} I_{k-l}(y_{c}-y_{c'})M_{l} (y_{c}). \end{aligned}$$
(11)
We call this formula moment-to-moment translation (M2M).
On the other hand, if \(\vert y_{c}-x_{l}\vert >\vert x_{l}-\tilde{x}_{i}\vert \), then (10) can be rewritten as
$$\begin{aligned} \int_{x_{j-1}}^{x_{j}} G(y, \tilde{x}_{i})u(y)\,dy = & \sum_{k=0}^{\infty }O_{k}(y_{c}- \tilde{x}_{i})M_{k}(y_{c}) \\ =&\sum_{k=0}^{\infty}O_{k}(y_{c}-x_{l}+x_{l}- \tilde{x}_{i})M_{k}(y_{c}) \\ =&\sum_{k=0}^{\infty}L_{k}(x_{l})I_{k}(x_{l}- \tilde{x}_{i}) \end{aligned}$$
(12)
with
$$\begin{aligned} L_{k}(x_{l})=\sum _{n=0}^{\infty}O_{k+n}(y_{c}-x_{l})M_{n}(y_{c}), \end{aligned}$$
(13)
where \(L_{k}(x_{l})\) denotes the local expansion coefficients about \(x_{l}\). We call the formula (13) moment-to-local translation (M2L).
If the center of the local expansion is moved from \(x_{l}\) to \(x_{l'}\), using a local expansion with p terms we obtain this translation by
$$\begin{aligned} \int_{x_{j-1}}^{x_{j}} G(y,\tilde{x}_{i})u(y) \,dy =&\sum_{k=0}^{p}L_{k}(x_{l})I_{k}(x_{l}- \tilde{x}_{i}) \\ =&\sum_{k=0}^{p} L_{k}(x_{l})I_{k}(x_{l}-x_{l'}+x_{l'}- \tilde{x}_{i}) \\ =&\sum_{k=0}^{p} L_{k}(x_{l}) \sum_{j=0}^{k} I_{j}(x_{l}-x_{l'})I_{k-j}(x_{l'}- \tilde{x}_{i}), \end{aligned}$$
which leads to
$$\begin{aligned} L_{k} (x_{l'})=\sum _{j=k}^{p} L_{j} (x_{l})I_{j-k}(x_{l}-x_{l'}). \end{aligned}$$
(14)
We call this formula local-to-local translation (L2L).
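The whole far-field pipeline (moments, M2M, M2L, L2L, local evaluation) can be exercised on a single segment. The sketch below (Python; segment endpoints, centers, and target are arbitrary well-separated values) carries a piecewise constant density \(u\equiv1\) through all four stages for \(m=1\) and compares the result with the directly evaluated integral.

```python
import math

def I(k, x):
    # I_k(x) = x^k / k!, cf. (6)
    return x**k / math.factorial(k)

def O(k, x, m=1):
    # O_k(x), cf. (7); here m = 1
    c = 1.0
    for j in range(k):
        c *= -(m + j)
    return c * x**(-m - k)

p = 25
a, b = 0.1, 0.3          # source segment, with u = 1 on it
xt = -0.8                # collocation (target) point, far from the segment
yc, yc2 = 0.2, 0.25      # multipole centers (original and shifted)
xl, xl2 = -0.7, -0.75    # local expansion centers (original and shifted)

# moments about yc, from (15) with u_j = 1
M = [I(k + 1, b - yc) - I(k + 1, a - yc) for k in range(p + 1)]

# M2M: shift the multipole center yc -> yc2
M2 = [sum(I(k - l, yc - yc2) * M[l] for l in range(k + 1)) for k in range(p + 1)]

# M2L: convert moments about yc2 into local coefficients about xl
L = [sum(O(k + n, yc2 - xl) * M2[n] for n in range(p + 1)) for k in range(p + 1)]

# L2L: shift the local center xl -> xl2 (triangular re-expansion of I_k)
L2 = [sum(L[j] * I(j - k, xl - xl2) for j in range(k, p + 1)) for k in range(p + 1)]

fmm = sum(L2[k] * I(k, xl2 - xt) for k in range(p + 1))
direct = math.log((b - xt) / (a - xt))   # exact integral of 1/(y - xt) over [a, b]
print(fmm, direct)
```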

2.2 Evaluation of the integrals

The multipole expansion is used to evaluate the integrals that are far away from the collocation point, whereas the direct approach is applied to evaluate the integrals on the remaining segments that are close to the collocation point. In the following, we are concerned with the piecewise constant collocation method for (2), where we can evaluate the moments analytically by
$$\begin{aligned} M_{k}(y_{c}) =&\int_{x_{j-1}}^{x_{j}}I_{k}(y-y_{c})u(y) \,dy \\ =&u_{j}\int_{x_{j-1}}^{x_{j}} \frac{(y-y_{c})^{k}}{k!}\,dy \\ =&u_{j}\bigl(I_{k+1}(x_{j}-y_{c})-I_{k+1}(x_{j-1}-y_{c}) \bigr). \end{aligned}$$
(15)
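The closed form (15) can be cross-checked against simple quadrature; in the sketch below (Python) the segment, center, constant value \(u_{j}\), and order k are arbitrary illustrative choices.

```python
import math

def I(k, x):
    # I_k(x) = x^k / k!, cf. (6)
    return x**k / math.factorial(k)

a, b, yc, uj, k = 0.1, 0.3, 0.2, 2.5, 4

# moment via the antiderivative formula (15)
analytic = uj * (I(k + 1, b - yc) - I(k + 1, a - yc))

# composite midpoint-rule quadrature as an independent check
n = 2000
h = (b - a) / n
numeric = sum(uj * I(k, a + (i + 0.5) * h - yc) * h for i in range(n))
print(analytic, numeric)
```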
When the integration interval \((x_{j-1},x_{j})\) is close to the collocation point \(\tilde{x}_{i}\) but does not coincide with the interval \((x_{i-1},x_{i})\), the integral is not singular and can be evaluated analytically (for \(m>1\); for \(m=1\) the antiderivative is \(\ln \vert y-\tilde{x}_{i}\vert \)) by
$$\begin{aligned} \int_{x_{j-1}}^{x_{j}}\frac{u_{j}}{(y-\tilde{x}_{i})^{m}}\,dy= \frac {u_{j}}{(-m+1)}(y-\tilde{x}_{i})^{-m+1}\mid_{x_{j-1}}^{x_{j}}. \end{aligned}$$
(16)
For the integrating interval \((x_{i-1},x_{i})\), where \(\tilde{x}_{i}\in (x_{i-1}, x_{i})\), we can evaluate the integral analytically by
$$\begin{aligned} \int_{x_{i-1}}^{x_{i}}\frac{u_{i}}{(y-c)^{m}}\,dy= \textstyle\begin{cases} \ln(\frac{x_{i}-c}{c-x_{i-1}})u_{i},& m=1,\\ u_{i}\frac{1}{(m-1)!}\frac{d^{(m-1)}}{dc^{(m-1)}}\int_{x_{i-1}}^{x_{i}}\frac{1}{y-c}\,dy,&m> 1, \end{cases}\displaystyle \end{aligned}$$
(17)
where \(x_{i-1}< c< x_{i}\).
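For \(m>1\), formula (17) differentiates the \(m=1\) principal value with respect to c. The sketch below (Python; the segment and the location of c are arbitrary) implements the \(m=1\) and \(m=2\) cases, the latter differentiated by hand, and checks the finite-part value against a central finite difference of the \(m=1\) formula.

```python
import math

def pv_m1(a, b, c):
    # Cauchy principal value of the integral of 1/(y - c) over (a, b), a < c < b
    return math.log((b - c) / (c - a))

def fp_m2(a, b, c):
    # Hadamard finite part for m = 2: d/dc of the principal value, cf. (17)
    return -1.0 / (b - c) - 1.0 / (c - a)

a, b, c = -0.2, 0.0, -0.13
eps = 1e-6
fd = (pv_m1(a, b, c + eps) - pv_m1(a, b, c - eps)) / (2 * eps)
print(fp_m2(a, b, c), fd)
```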

3 Convergence analysis

In this section, we derive the error bound for (10) when \(m=1\).

Lemma 2

([23])

The Cauchy integral operator \(K: C^{0,\alpha}(-1,1)\rightarrow C^{0,\alpha}(-1,1)\) defined by
$$ (Ku) (z):=\frac{1}{\pi i}\int_{-1}^{1} \frac{u(y)}{y-z}\,dy,\quad z\in(-1,1) $$
(18)
is bounded, where \(C^{0,\alpha}(-1,1)\) denotes the space of Hölder continuous functions on \((-1,1)\).

Lemma 3

([24])

Suppose that \(f(z)\) is analytic in \(\vert z\vert < R\). Then the Taylor series of \(f(z)\) converges absolutely in \(\vert z\vert < R\), and we have
$$\begin{aligned} \vert R_{n}\vert \leq& \frac{1}{2\pi}\oint_{c} \frac{\vert f(t)\vert \vert \,dt\vert }{\vert t\vert (1-\vert \frac {z}{t}\vert )}\biggl\vert \frac{z}{t}\biggr\vert ^{n} \\ \leq& M_{1}\biggl(\frac{r}{R}\biggr)^{n}, \end{aligned}$$
where \(r=\vert z\vert \), c is a circle centered at the origin with radius R, and \(R_{n}\) is the remainder of the Taylor series.

Theorem 1

If we apply a multipole expansion with p terms and suppose that \(\vert y_{c}-\tilde{x}_{i}\vert /h\geq2\), we have the following error bound:
$$\begin{aligned} E_{M}^{p} =&\Biggl\vert \int_{x_{j-1}}^{x_{j}} G(y,\tilde{x}_{i})u(y)\,dy- \sum_{k=0}^{p}O_{k}(y_{c}- \tilde{x}_{i})M_{k}(y_{c})\Biggr\vert \\ \leq&C \frac{1}{h}\biggl(\frac{1}{2}\biggr)^{p+1}, \end{aligned}$$
(19)
where \(h=\max_{y\in(x_{j-1},x_{j})}\vert y-y_{c}\vert \), \(C=\int_{x_{j-1}}^{x_{j}}\vert u(y)\vert \,dy\).

Proof

$$\begin{aligned} E_{M}^{p} =&\Biggl\vert \int_{x_{j-1}}^{x_{j}} G(y,\tilde{x}_{i})u(y)\,dy- \sum_{k=0}^{p}O_{k}(y_{c}- \tilde{x}_{i})M_{k}(y_{c})\Biggr\vert \\ =& \Biggl\vert \sum_{k=p+1}^{\infty}O_{k}(y_{c}- \tilde{x}_{i})M_{k}(y_{c})\Biggr\vert \\ \leq&C\sum_{k=p+1}^{\infty} \bigl\vert O_{k}(y_{c}-\tilde{x}_{i})\bigr\vert \frac{h^{k}}{k!} \\ \leq&C\sum_{k=p+1}^{\infty}{C_{m+k-1}^{k}} \frac{h^{k}}{\vert y_{c}-\tilde{x}_{i}\vert ^{m+k}}, \end{aligned}$$
(20)
where \(\vert y-y_{c}\vert \leq h\) and \(C_{m+k-1}^{k}\) is a binomial coefficient.
For \(m=1\), we have \(C_{m+k-1}^{k}=1\), then
$$\begin{aligned} E_{M}^{p}\leq C \frac{h^{p+1}}{\vert y_{c}-\tilde{x}_{i}\vert ^{p+2}}\frac {1}{1-h/\vert y_{c}-\tilde{x}_{i}\vert }. \end{aligned}$$
(21)
Let \(\eta=\vert y_{c}-\tilde{x}_{i}\vert /h\), then the preceding error bound can be written as
$$\begin{aligned} E_{M}^{p}\leq C \frac{1}{h(\eta-1)}\biggl(\frac{1}{\eta} \biggr)^{p+1}, \end{aligned}$$
(22)
which establishes the desired error bound. □
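The geometric decay predicted by Theorem 1 can be observed numerically. In the sketch below (Python; segment, center, and target are illustrative values chosen so that \(\eta=\vert y_{c}-\tilde{x}_{i}\vert /h=5\)), the truncation error of (10) for \(m=1\) with \(u\equiv1\) is measured for increasing p.

```python
import math

def I(k, x):
    return x**k / math.factorial(k)

def O(k, x, m=1):
    c = 1.0
    for j in range(k):
        c *= -(m + j)
    return c * x**(-m - k)

a, b, yc, xt = -0.1, 0.1, 0.0, 0.5     # h = 0.1, eta = |yc - xt| / h = 5
direct = math.log((b - xt) / (a - xt)) # exact segment integral with u = 1
errs = []
for p in (2, 4, 8):
    # moments via (15) with u_j = 1, then the truncated far-field sum (10)
    M = [I(k + 1, b - yc) - I(k + 1, a - yc) for k in range(p + 1)]
    approx = sum(O(k, yc - xt) * M[k] for k in range(p + 1))
    errs.append(abs(approx - direct))
print(errs)
```

The successive errors shrink rapidly with p, in line with the \((1/\eta)^{p+1}\) factor in the bound.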

Theorem 2

If we apply the multipole expansion and the local expansion with p terms and suppose that \(\vert y_{c}-\tilde{x}_{i}\vert /h\geq2\), we have the following error bound:
$$\begin{aligned} E_{L}^{p} =&\Biggl\vert \int_{x_{j-1}}^{x_{j}} G(y,\tilde{x}_{i})u(y)\,dy- \sum_{k=0}^{p}L_{k}(x_{l})I_{k}(x_{l}- \tilde{x}_{i})\Biggr\vert \\ \leq& C M\biggl(\frac{1}{2}\biggr)^{p+1}, \end{aligned}$$
(23)
where C, M are some constants.

Proof

From (12) and (13), we have
$$\begin{aligned} E_{L}^{p} =&\Biggl\vert \int _{x_{j-1}}^{x_{j}} G(y,\tilde{x}_{i})u(y)\,dy- \sum_{k=0}^{p}L_{k}(x_{l})I_{k}(x_{l}- \tilde{x}_{i})\Biggr\vert \\ \leq&\Biggl\vert \int_{x_{j-1}}^{x_{j}} G(y, \tilde{x}_{i})u(y)\,dy-\sum_{k=0}^{p}O_{k}(y_{c}- \tilde{x}_{i})M_{k}(y_{c})\Biggr\vert \\ &{}+\Biggl\vert \sum_{k=0}^{p} \bigl(O_{k}(y_{c}-\tilde{x}_{i})- \tilde{O}_{k}(y_{c}-\tilde {x}_{i}) \bigr) M_{k}(y_{c})\Biggr\vert , \end{aligned}$$
(24)
where \(\tilde{O}_{k}(y_{c}-\tilde{x}_{i})=\sum_{n=0}^{p}O_{k+n}(y_{c}-x_{l})I_{n}(x_{l}-\tilde{x}_{i})\).
Due to
$$\begin{aligned} O_{k}(y_{c}-\tilde{x}_{i})= \frac{k!}{(y_{c}-\tilde{x}_{i})^{k+1}}=\frac {k!}{(y_{c}-x_{l})^{k+1}}\biggl(1+\frac{x_{l}-\tilde{x}_{i}}{y_{c}-x_{l}} \biggr)^{-k-1}, \end{aligned}$$
(25)
it follows that
$$\begin{aligned} \vert R_{p+1}\vert =&\bigl\vert O_{k}(y_{c}- \tilde{x}_{i})-\tilde{O}_{k}(y_{c}-\tilde {x}_{i})\bigr\vert =\Biggl\vert \sum_{n=p+1}^{\infty}O_{k+n}(y_{c}-x_{l})I_{n}(x_{l}- \tilde{x}_{i}) \Biggr\vert , \end{aligned}$$
(26)
which is the remainder of the Taylor series expansion of (25).
Define \(g(z)=(1+z)^{-k-1}\); then \(g(z)\) is analytic in \(\vert z\vert <1\). Since \(x_{l}\) and \(y_{c}\) are well-separated points, we may assume \(\vert \frac{x_{l}-\tilde{x}_{i}}{y_{c}-x_{l}}\vert <1/2\). By Lemma 3 we can estimate the remainder \(R_{p+1}\) of the Taylor series as follows:
$$\begin{aligned} \vert R_{p+1}\vert \leq M_{1}\biggl\vert \frac{k!}{|(y_{c}-x_{l})|^{k+1}}\biggr\vert \biggl(\frac{1}{2}\biggr)^{p+1}, \end{aligned}$$
(27)
where \(M_{1}\) is a constant determined by the function \(g(z)\).
By Theorem 1, (27) and (24), we have
$$\begin{aligned} E_{L}^{p}\leq C \frac{1}{h}\biggl(\frac{1}{2} \biggr)^{p+1}+(p+1)CM_{1}C_{1}\biggl( \frac {1}{2}\biggr)^{p+1}, \end{aligned}$$
(28)
where \(C_{1}=\max\{ \frac{h^{k}}{\vert (y_{c}-x_{l})\vert ^{k+1}}, k=0,\ldots,p\}\).

The desired error bound is established by setting \(M= \frac{1}{h}+(p+1)M_{1}C_{1}\). □

Remark 1

In the FMM, \(x_{l}\) and \(y_{c}\) are well-separated points, so \(\vert y_{c}-x_{l}\vert > 2h\), which implies that \(C_{1}\) is bounded.

When we use the FMM to solve SIE (3), the integral operator K is approximated by its multipole expansion. If we denote by \(K_{p}\) the resulting approximate operator used in the collocation method, then Theorem 1 and Theorem 2 indicate that
$$\begin{aligned} \lim_{p\rightarrow\infty} \Vert K_{p}-K\Vert =0. \end{aligned}$$
(29)
It follows from Lemma 2 and (29) that the operator \(I+\lambda K_{p}\) is bounded.

4 Numerical examples

We illustrate the efficiency and accuracy of the methods described in this paper by numerical examples. Here \(\hat{u}_{N}\) denotes the piecewise constant collocation solution, where N is the number of collocation points. We choose the uniform mesh, and \(\tilde{x}_{i}\) is the midpoint of \((x_{i-1},x_{i})\). Denote by \(I_{h}^{N}=\{\tilde{x}_{i},i=1,\ldots,N\}\) the set of collocation points.

Example 4.1

We consider
$$u(x)-\frac{1}{2\pi}\int_{-1}^{1} \frac{u(y)}{(y-x)^{m}}\,dy=f(x),\quad \vert x\vert < 1, $$
where
$$f(x)=1-\frac{1}{2\pi}\int_{-1}^{1} \frac{1}{(y-x)^{m}}\,dy, $$
and for \(m=1\) or \(m=2\),
$$u(x)=1 $$
is the exact solution of the equation.
Tables 1-4 illustrate the efficiency and accuracy of the methods.
Table 1

Approximations at \(\pmb{x=-0.9,-0.5,-0.1}\) for \(\pmb{u(x)-\frac{1}{2\pi}\int_{-1}^{1}\frac{u(y)}{(y-x)}\,dy=f(x)}\)

| x | −0.9 | −0.5 | −0.1 |
| --- | --- | --- | --- |
| \(\hat{u}_{10}\) | 1.000000000008664 | 0.999999999966656 | 1.000000002837690 |
| \(\hat{u}_{100}\) | 0.999999988989405 | 0.999999996983473 | 0.999999998141215 |
| \(\hat{u}_{1000}\) | 1.000000006379357 | 0.999999997488601 | 0.999999998811889 |
| u | 1 | 1 | 1 |

Table 2

Approximations at \(\pmb{x=0.1,0.5,0.9}\) for \(\pmb{u(x)-\frac{1}{2\pi}\int_{-1}^{1}\frac{u(y)}{(y-x)}\,dy=f(x)}\)

| x | 0.1 | 0.5 | 0.9 |
| --- | --- | --- | --- |
| \(\hat{u}_{10}\) | 0.999999996078344 | 1.000000000199640 | 1.000000000023095 |
| \(\hat{u}_{100}\) | 1.000000000689915 | 1.000000003352249 | 1.000000033188974 |
| \(\hat{u}_{1000}\) | 1.000000000150622 | 1.000000002673433 | 0.999999995294152 |
| u | 1 | 1 | 1 |

Table 3

Approximations at \(\pmb{x=-0.9,-0.5,-0.1}\) for \(\pmb{u(x)-\frac{1}{2\pi}\int_{-1}^{1}\frac{u(y)}{(y-x)^{2}}\,dy=f(x)}\)

| x | −0.9 | −0.5 | −0.1 |
| --- | --- | --- | --- |
| \(\hat{u}_{10}\) | 0.999999993535333 | 0.999999981428145 | 0.999999917999549 |
| \(\hat{u}_{100}\) | 0.999999993845970 | 0.999999988715871 | 0.999999990115998 |
| \(\hat{u}_{1000}\) | 0.999999995690078 | 0.999999991935489 | 0.999999993153335 |
| u | 1 | 1 | 1 |

Table 4

Approximations at \(\pmb{x=0.1,0.5,0.9}\) for \(\pmb{u(x)-\frac{1}{2\pi}\int_{-1}^{1}\frac{u(y)}{(y-x)^{2}}\,dy=f(x)}\)

| x | 0.1 | 0.5 | 0.9 |
| --- | --- | --- | --- |
| \(\hat{u}_{10}\) | 0.999999917999549 | 0.999999981428145 | 0.999999993535332 |
| \(\hat{u}_{100}\) | 0.999999990036579 | 0.999999988582430 | 0.999999992194477 |
| \(\hat{u}_{1000}\) | 0.999999993375294 | 0.999999992061794 | 0.999999995726361 |
| u | 1 | 1 | 1 |

From Tables 1-4, it is easy to see that the proposed method is effective. Note also that the evaluation points \(\{-0.9,-0.5,-0.1,0.1, 0.5,0.9 \}\) form a subset of \(I_{h}^{10}\), but not of \(I_{h}^{100}\) or \(I_{h}^{1000}\).
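For the finite-part case \(m=2\) of Example 4.1, the diagonal entries use the Hadamard finite part while the off-diagonal entries are ordinary integrals of \((y-c)^{-2}\). The following sketch (Python, assuming NumPy; N is an illustrative choice) reproduces the behavior seen in Tables 3-4, recovering the constant solution \(u\equiv1\).

```python
import numpy as np

def assemble_m2(N, lam):
    """Collocation matrix for m = 2 on a uniform mesh of (-1, 1),
    with the Hadamard finite part on the diagonal."""
    x = np.linspace(-1.0, 1.0, N + 1)
    xt = 0.5 * (x[:-1] + x[1:])
    A = np.eye(N)
    for i in range(N):
        c = xt[i]
        for j in range(N):
            if i == j:   # finite part over the own segment
                A[i, j] += lam * (-1.0 / (x[i + 1] - c) - 1.0 / (c - x[i]))
            else:        # ordinary integral of (y - c)^(-2)
                A[i, j] += lam * (1.0 / (x[j] - c) - 1.0 / (x[j + 1] - c))
    return xt, A

N, lam = 100, -1.0 / (2.0 * np.pi)
xt, A = assemble_m2(N, lam)
# right-hand side for the exact solution u = 1: the finite part of the
# integral over (-1, 1) equals -(1/(1 - x) + 1/(1 + x))
b = 1.0 + (1.0 / (2.0 * np.pi)) * (1.0 / (1.0 - xt) + 1.0 / (1.0 + xt))
u = np.linalg.solve(A, b)
print(np.max(np.abs(u - 1.0)))
```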

Example 4.2

Let us consider
$$u(x)-\int_{-1}^{1}\frac{u(y)}{(y-x)}\,dy =f(x), \quad \vert x\vert < 1, $$
where
$$f(x)=x-\bigl(2-x\ln(x+1)+x\ln(1-x)\bigr), $$
then
$$u(x)=x, $$
is the exact solution of the equation.
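A short convergence study for this example can be sketched as follows (Python, assuming NumPy; the sequence of mesh sizes is an illustrative choice).

```python
import numpy as np

def max_error(N):
    # piecewise constant collocation for u(x) - PV int u(y)/(y - x) dy = f(x),
    # whose exact solution is u(x) = x
    x = np.linspace(-1.0, 1.0, N + 1)
    xt = 0.5 * (x[:-1] + x[1:])
    A = np.eye(N)
    for i in range(N):
        for j in range(N):
            if i != j:   # own-segment principal value vanishes at the midpoint
                A[i, j] -= np.log(abs(x[j + 1] - xt[i]) / abs(x[j] - xt[i]))
    f = xt - (2.0 - xt * np.log(xt + 1.0) + xt * np.log(1.0 - xt))
    return np.max(np.abs(np.linalg.solve(A, f) - xt))

errs = [max_error(N) for N in (10, 20, 40, 80)]
print(errs)
```

The maximum nodal errors should shrink roughly in proportion to 1/N as the mesh is refined, consistent with the convergence order of the piecewise constant collocation method.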
Tables 5-6 also show that the proposed method is effective and the convergence order is \(O(1/N)\), which coincides with the classical theory of collocation methods.
Table 5

Approximations at \(\pmb{x=-0.9,-0.5,-0.1}\) for \(\pmb{u(x)-\int_{-1}^{1}\frac{u(y)}{(y-x)}\,dy=f(x)}\)

| x | −0.9 | −0.5 | −0.1 |
| --- | --- | --- | --- |
| \(\hat{u}_{10}\) | −1.194519628297830 | −0.601189820467772 | −0.156767445549832 |
| \(\hat{u}_{100}\) | −0.902665658196492 | −0.496591018161909 | −0.094652542268498 |
| \(\hat{u}_{1000}\) | −0.900369341742219 | −0.499656299326138 | −0.099458627117756 |
| u | −0.9 | −0.5 | −0.1 |

Table 6

Approximations at \(\pmb{x=0.1,0.5,0.9}\) for \(\pmb{u(x)-\int_{-1}^{1}\frac{u(y)}{(y-x)}\,dy=f(x)}\)

| x | 0.1 | 0.5 | 0.9 |
| --- | --- | --- | --- |
| \(\hat{u}_{10}\) | 0.057786709933708 | 0.465859007671879 | 0.872182894009698 |
| \(\hat{u}_{100}\) | 0.106011730395414 | 0.507142753891225 | 0.908324232818361 |
| \(\hat{u}_{1000}\) | 0.100609029282889 | 0.500725569118200 | 0.900861403735125 |
| u | 0.1 | 0.5 | 0.9 |

Example 4.3

For the more general case, we consider
$$u(x) \bigl(1+x^{2}\bigr)-\bigl(1+x^{2}\bigr)\int _{-1}^{1}\frac{u(y)}{(y-x)}\,dy=f(x),\quad \vert x\vert < 1, $$
where
$$f(x)=x^{3}- \biggl(\frac{4 - \pi}{2} + 2x^{2} + x^{3} \ln\frac{1-x}{1+x} \biggr) $$
and the exact solution is
$$u(x)=\frac{x^{3}}{1+x^{2}}. $$
Tables 7-8 also illustrate the efficiency and accuracy of the methods.
Table 7

Approximations at \(\pmb{x=-0.9,-0.5,-0.1}\) for \(\pmb{u(x)(1+x^{2})-(1+x^{2})\int_{-1}^{1}\frac{u(y)}{(y-x)}\,dy=f(x)}\)

| x | −0.9 | −0.5 | −0.1 |
| --- | --- | --- | --- |
| \(\hat{u}_{10}\) | −0.608896542845208 | −0.143953205178288 | −0.019345651255813 |
| \(\hat{u}_{100}\) | −0.400907820963934 | −0.096826636160551 | −0.002238964950304 |
| \(\hat{u}_{1000}\) | −0.402641823825626 | −0.099675717661578 | −0.001102497056280 |
| u | −0.402762430939227 | −0.100000000000000 | −0.000990099009901 |

Table 8

Approximations at \(\pmb{x=0.1,0.5,0.9}\) for \(\pmb{u(x)(1+x^{2})-(1+x^{2})\int_{-1}^{1}\frac{u(y)}{(y-x)}\,dy=f(x)}\)

| x | 0.1 | 0.5 | 0.9 |
| --- | --- | --- | --- |
| \(\hat{u}_{10}\) | −0.025111748216482 | 0.052110055810136 | 0.363240467300408 |
| \(\hat{u}_{100}\) | −0.001202605523378 | 0.101502077326766 | 0.409787561038363 |
| \(\hat{u}_{1000}\) | 0.000783503911385 | 0.100161733460852 | 0.403492522553747 |
| u | 0.000990099009901 | 0.1 | 0.402762430939227 |

5 Conclusion

In this paper, we have explored collocation methods for a singular Fredholm integral equation of the second kind and utilized the FMM to improve the efficiency of the algorithm. Based on the multipole expansion of the kernel, we demonstrated that the approximate operator used in the collocation equation converges to the original operator. Numerical examples demonstrate the performance of the proposed algorithm.

Declarations

Acknowledgements

The authors are grateful to the anonymous referees for their constructive comments and helpful suggestions, which greatly improved this paper. This work is supported by the Scientific Research Foundation of the Education Bureau of Hunan Province under grant 14C0495, the Natural Science Foundation of Hunan Province of China under grant 14JJ3134, the NSF of China under grant 11371376, and the Mathematics and Interdisciplinary Sciences Project of Central South University.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Mathematics and Computational Science, Hunan University of Science and Engineering, Yongzhou, P.R. China
(2)
School of Mathematics and Statistics, Central South University, Changsha, P.R. China

References

  1. Kythe, PK, Puri, P: Computational Methods for Linear Integral Equations. Birkhäuser, Boston (2002)
  2. Junghanns, P, Kaiser, R: Collocation for Cauchy singular integral equations. Linear Algebra Appl. 439, 729-770 (2013)
  3. Venturino, E: The Galerkin method for singular integral equations revisited. J. Comput. Appl. Math. 40, 91-103 (1992)
  4. Gong, Y: Galerkin solution of a singular integral equation with constant coefficients. J. Comput. Appl. Math. 230, 393-399 (2009)
  5. Bonis, MCD, Laurita, C: Numerical solution of systems of Cauchy singular integral equations with constant coefficients. Appl. Math. Comput. 219, 1391-1410 (2012)
  6. Setia, A: Numerical solution of various cases of Cauchy type singular integral equation. Appl. Math. Comput. 230, 200-207 (2014)
  7. Akel, MS, Hussein, HS: Numerical treatment of solving singular integral equations by using sinc approximations. Appl. Math. Comput. 218, 3565-3573 (2011)
  8. Mikhlin, SG, Prössdorf, S: Singular Integral Operators. Akademie, Berlin (1986)
  9. Prössdorf, S, Silbermann, B: Numerical Analysis of Integral and Related Operator Equations. Akademie, Berlin (1991)
  10. Graham, IG: Galerkin methods for second kind integral equations with singularities. Math. Comput. 39, 519-533 (1982)
  11. te Riele, HJJ: Collocation methods for weakly singular second-kind Volterra integral equations with non-smooth solution. IMA J. Numer. Anal. 2, 437-449 (1982)
  12. Xiang, S, He, K: On the implementation of discontinuous Galerkin methods for Volterra integral equations with highly oscillatory Bessel kernels. Appl. Math. Comput. 219, 4884-4891 (2013)
  13. Xiang, S, Brunner, H: Efficient methods for Volterra integral equations with highly oscillatory Bessel kernels. J. Comput. Phys. 53, 241-263 (2013)
  14. Xiang, S, Wu, Q: Numerical solutions to Volterra integral equations of the second kind with oscillatory trigonometric kernels. Appl. Math. Comput. 223, 34-44 (2013)
  15. Wu, Q: On graded meshes for weakly singular Volterra integral equations with oscillatory trigonometric kernels. J. Comput. Appl. Math. 263, 370-376 (2014)
  16. Atkinson, KE: The Numerical Solution of Integral Equations of the Second Kind. Cambridge University Press, Cambridge (1997)
  17. Golberg, MA: Numerical Solution of Integral Equations. Plenum Press, New York (1990)
  18. Rokhlin, V: Rapid solution of integral equations of classical potential theory. J. Comput. Phys. 60, 187-207 (1985)
  19. Greengard, L, Rokhlin, V: A fast algorithm for particle simulations. J. Comput. Phys. 73, 325-348 (1987)
  20. Liu, YJ: Fast Multipole Boundary Element Method: Theory and Applications in Engineering. Cambridge University Press, Cambridge (2009)
  21. Brunner, H: Collocation Methods for Volterra Integral and Related Functional Equations. Cambridge University Press, Cambridge (2004)
  22. Abramowitz, M, Stegun, IA: Handbook of Mathematical Functions. National Bureau of Standards, Washington (1964)
  23. Kress, R: Linear Integral Equations. Springer, Berlin (1999)
  24. Polya, G, Latta, G: Complex Variables. Wiley, New York (1974)

Copyright

© Wu and Xiang 2015