
  • Research
  • Open Access

A new approach for one-dimensional sine-Gordon equation

Advances in Difference Equations 2016, 2016:8

https://doi.org/10.1186/s13662-015-0734-x

  • Received: 15 September 2015
  • Accepted: 21 December 2015
  • Published:

Abstract

In this work, we use a reproducing kernel method to investigate the sine-Gordon equation with initial and boundary conditions. Numerical experiments are presented to show the efficiency of the technique. The obtained results are compared with the exact solutions and with results obtained by other methods. These results indicate that the reproducing kernel method is very effective.

Keywords

  • reproducing kernel method
  • sine-Gordon equation
  • bounded linear operator
  • homogenizing

1 Introduction

The nonlinear one-dimensional sine-Gordon (SG) equation first arose in differential geometry and has attracted a lot of attention because of the collisional behavior of the solitons that arise from it. Numerical solutions of the SG equation have been widely investigated in recent years [1–5]. Compact finite difference and diagonally implicit Runge-Kutta-Nyström (DIRKN) methods were used in [3]. The authors of [6] introduced a numerical method for solving the SG equation using collocation and radial basis functions. The boundary integral equation technique is presented in [7]. Bratsos [8, 9] has researched a numerical technique for solving the one-dimensional SG equation and a third-order numerical technique for the two-dimensional SG equation. A numerical technique using radial basis functions for the solution of the two-dimensional SG equation is given in [10]. Some authors proposed spectral techniques and a Fourier pseudospectral technique for solving nonlinear wave equations using discrete Fourier series and Chebyshev orthogonal polynomials [11–13]. Ma and Wu [14] used a meshless technique based on multiquadric (MQ) quasi-interpolation. In this paper, we investigate the one-dimensional nonlinear sine-Gordon equation
$$ \frac{\partial^{2}u}{\partial\tau^{2}}(\eta,\tau)=\frac{\partial ^{2}u}{\partial\eta^{2}}(\eta,\tau)-\sin \bigl(u(\eta,\tau) \bigr),\quad 0\leq \eta\leq1, \tau\geq0, $$
(1)
with initial conditions
$$ \begin{aligned} &u(\eta,0)=f(\eta),\quad 0\leq\eta\leq1, \\ &\frac{\partial u}{\partial\tau}(\eta,0)=g(\eta),\quad 0\leq \eta\leq1, \end{aligned} $$
(2)
and boundary conditions
$$ u(0,\tau)=h_{1}(\tau),\qquad u(1,\tau)=h_{2}( \tau),\quad \tau \geq0, $$
(3)
by using the reproducing kernel method (RKM). Numerical results can be obtained in a very short time, and by this method nonlinear problems can be solved as easily as linear ones. Reproducing kernel functions are central to the accuracy of the numerical results: by changing the inner product on the underlying spaces we obtain different reproducing kernel functions and, with them, better results. These are the advantages of this method. Homogenizing the initial and boundary conditions is essential for this method, and we give a general transformation for this purpose in Section 3.
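Before turning to the kernel machinery, equation (1) itself can be sanity-checked numerically. The classical kink \(u=4\arctan ( e^{\gamma(\eta-v\tau)} ) \) with \(\gamma=1/\sqrt{1-v^{2}}\) is a well-known solution of the sine-Gordon equation on the whole real line (it does not satisfy the boundary conditions (3) on \([0,1]\)). The sketch below, with an arbitrarily chosen sample point and velocity, checks the PDE residual by finite differences; it is an illustration only, not part of the RKM scheme:

```python
import math

def kink(x, t, v=0.5):
    """Classical sine-Gordon kink on the real line: 4*arctan(exp(gamma*(x - v*t)))."""
    g = 1.0 / math.sqrt(1.0 - v * v)   # Lorentz-type contraction factor
    return 4.0 * math.atan(math.exp(g * (x - v * t)))

def sg_residual(x, t, h=1e-4):
    """Residual of u_tt - u_xx + sin(u), which should vanish for a true solution."""
    u_tt = (kink(x, t + h) - 2.0 * kink(x, t) + kink(x, t - h)) / h**2
    u_xx = (kink(x + h, t) - 2.0 * kink(x, t) + kink(x - h, t)) / h**2
    return u_tt - u_xx + math.sin(kink(x, t))

residual = sg_residual(0.3, 0.2)   # sample point chosen arbitrarily
```

The residual is nonzero only through finite-difference truncation and rounding error, so it should be tiny.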

The theory of reproducing kernels [15] was used for the first time at the beginning of the 20th century by Zaremba. Reproducing kernel theory has important applications in numerical analysis, differential equations, and probability and statistics [16–25], and the method has been used by many authors in a variety of scientific applications. Reproducing kernel functions can be represented by piecewise polynomials, and the higher the order of the derivatives involved, the simpler the reproducing kernel function expressions. Such expressions are the simplest from the computational viewpoint, and they can significantly improve the speed and accuracy in scientific and engineering applications. Experimental results indicate that such reproducing kernel functions perform very encouragingly [26].

This work is organized as follows. Section 2 introduces several useful reproducing kernel functions. A representation of the solution in \(W_{2}^{(3,3)} ( \Omega ) \) is given in Section 3. Section 4 presents the essential results: the exact and approximate solutions of (1)-(3), implementation of the method in the reproducing kernel space, and convergence of the approximate solution. Some numerical examples are discussed in Section 5. Conclusions are given in the final section.

2 Reproducing kernel functions

We obtain some useful reproducing kernel functions in this section.

Definition 1

[16]

Let E be a nonempty set. A function \(K:E\times E\longrightarrow C\) is called a reproducing kernel function of the Hilbert space H if
  1. (a)

    \(\forall\tau\in E\), \(K ( \cdot,\tau ) \in H\),

     
  2. (b)

    \(\forall\tau\in E\), \(\forall\varphi\in H\), \(\langle\varphi ( \cdot ) ,K ( \cdot,\tau ) \rangle=\varphi ( \tau ) \).

     

Definition 2

[16]

A Hilbert space H defined on a nonempty set E is called a reproducing kernel Hilbert space if there exists a reproducing kernel function \(K(\eta,\tau)\).

Definition 3

[16]

We define the space \(W_{2}^{3}[0,1]\) by
$$ W_{2}^{3}[0,1]= \left \{ \textstyle\begin{array}{@{}c@{}} u\mid u,u^{\prime},u^{\prime\prime}\mbox{ are absolutely continuous real-valued functions in }[0,1], \\ u^{ ( 3 ) }\in L^{2}[0,1], \eta\in [0,1], u(0)=0, u(1)=0 \end{array}\displaystyle \right \}. $$
The inner product and the norm in \(W_{2}^{3}[0,1]\) are defined respectively by
$$ \langle u,v \rangle_{{ W}_{{ 2}}^{{ 3}}}=u(0)v(0)+u^{\prime}(0)v^{\prime}(0)+u^{\prime}(1)v^{\prime }(1)+ \int_{0}^{1}u^{(3)}(\eta)v^{(3)}( \eta)\,d\eta, \quad u,v\in W_{2}^{3}[0,1], $$
and
$$ \Vert u\Vert _{{ W}_{{ 2}}^{{ 3}}}=\sqrt{ \langle u,u \rangle_{W_{2}^{3}}},\quad u\in W_{2}^{3}[0,1]. $$

Definition 4

[16]

We define the space \(F_{2}^{3}[0,T]\) by
$$ F_{2}^{3}[0,T]= \left \{ \textstyle\begin{array}{@{}c@{}} u\mid u,u^{\prime},u^{\prime\prime}\mbox{ are absolutely continuous in }[0,T], \\ u^{(3)}\in L^{2}[0,T], \tau\in{}[0,T], u(0)=0, u^{\prime}(0)=0\end{array}\displaystyle \right \} $$
with the inner product and norm
$$ \langle u,v \rangle_{{{F}_{{ 2}}^{{ 3}}}}=\sum_{i=0}^{2}u^{(i)}(0)v^{(i)}(0)+ \int_{0}^{T}u^{(3)}(\tau)v^{(3)}(\tau)\,d\tau,\quad u,v\in F_{2}^{3}[0,T], $$
and
$$ \Vert u\Vert _{F_{2}^{3}}=\sqrt{ \langle u,u \rangle_{{{F}_{{ 2}}^{{ 3}}}}}, \quad u \in F_{2}^{3}[0,T]. $$
The space \(F_{2}^{3}[0,T]\) is a reproducing kernel space, and its reproducing kernel function \({r}_{s}\) is given by
$$ r_{s} ( \tau ) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \frac{1}{4}s^{2}\tau^{2}+\frac{1}{12}s^{2}\tau^{3}-\frac{1}{24}s\tau ^{4}+\frac{1}{120}\tau^{5}, &\tau\leq s, \\ \frac{1}{4}s^{2}\tau^{2}+\frac{1}{12}s^{3}\tau^{2}-\frac{1}{24}\tau s^{4}+\frac{1}{120}s^{5},& \tau>s.\end{array}\displaystyle \right . $$
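The displayed kernel can be checked directly: it vanishes at \(\tau=0\), it is symmetric in \((s,\tau)\), and its reproducing property can be tested on \(u(\tau)=\tau^{3}\), for which the inner product of Definition 4 reduces to \(\int_{0}^{T}u^{(3)}(\tau)r_{s}^{(3)}(\tau)\,d\tau\) and should return \(u(s)=s^{3}\). A short sketch (the closed form of \(r_{s}^{(3)}\) below is worked out by hand from the displayed branches):

```python
def r(s, tau):
    """Kernel r_s(tau) of F_2^3[0,T], exactly as displayed above."""
    if tau <= s:
        return s**2 * tau**2 / 4 + s**2 * tau**3 / 12 - s * tau**4 / 24 + tau**5 / 120
    return s**2 * tau**2 / 4 + s**3 * tau**2 / 12 - tau * s**4 / 24 + s**5 / 120

def r_ppp(s, tau):
    # Third tau-derivative of the branch tau <= s; differentiating by hand it
    # collapses to (s - tau)^2 / 2, and the branch tau > s differentiates to 0.
    return 0.5 * (s - tau)**2 if tau < s else 0.0

def inner_with_cubic(s, T=1.0, n=20000):
    """<u, r_s> for u(tau) = tau^3: u, u', u'' vanish at 0 and u''' = 6, so the
    inner product reduces to the integral of 6 * r_s'''(tau) (trapezoid rule)."""
    h = T / n
    vals = [6.0 * r_ppp(s, k * h) for k in range(n + 1)]
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

s0 = 0.6
sym_gap = abs(r(s0, 0.25) - r(0.25, s0))      # kernel symmetry: r_s(t) = r_t(s)
repr_gap = abs(inner_with_cubic(s0) - s0**3)  # should reproduce u(s) = s^3
```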

Definition 5

[16]

We define the space \(H_{2}^{1}[0,T]\) by
$$ H_{2}^{1}[0,T]=\left \{ \textstyle\begin{array}{@{}c@{}} u\mid u\mbox{ is absolutely continuous in }[0,T], \\ u^{\prime}\in L^{2}[0,T], \tau\in [0,T]\end{array}\displaystyle \right \} $$
with the inner product and norm
$$ \langle u,v \rangle_{{ H}_{{ 2}}^{{ 1}}}=u(0)v(0)+ \int_{0}^{T}u^{\prime}(\tau)v^{\prime}( \tau)\,d\tau ,\quad u,v\in H_{2}^{1}[0,T], $$
and
$$ \Vert u\Vert _{{ H}_{{ 2}}^{{ 1}}}=\sqrt{ \langle u,u \rangle_{{{ H}_{{ 2}}^{{ 1}}}}},\quad u\in H_{2}^{1}[0,T]. $$
Its reproducing kernel function \({ q}_{s}\) is
$$ q_{s}(\tau)=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} 1+\tau,& \tau\leq s, \\ 1+s,& \tau>s.\end{array}\displaystyle \right . $$

Definition 6

[16]

We define the space \(G_{2}^{1}[0,1]\) by
$$ G_{2}^{1}[0,1]=\left \{ \textstyle\begin{array}{@{}c@{}} u\mid u\mbox{ is absolutely continuous in }[0,1], \\ u^{\prime}\in L^{2}[0,1], \eta\in[0,1]\end{array}\displaystyle \right \} $$
with the inner product and norm
$$ \langle u,v \rangle_{{{ G}_{{ 2}}^{{ 1}}}}=u(0)v(0)+ \int_{0}^{1}u^{\prime}(\eta)v^{\prime}( \eta)\,d\eta,\quad u,v\in G_{2}^{1}[0,1], $$
and
$$ \Vert u\Vert _{{G}_{{ 2}}^{{ 1}}}=\sqrt{ \langle u,u \rangle_{{ G}_{{ 2}}^{{ 1}}}},\quad u\in G_{2}^{1}[0,1]. $$
The space \(G_{2}^{1}[0,1]\) is a reproducing kernel space, and its reproducing kernel function \(Q_{y}\) is given by
$$ { Q}_{y}{ (\eta)}=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} 1+\eta,& \eta\leq y, \\ 1+y, &\eta>y.\end{array}\displaystyle \right . $$
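Since \(Q_{y}(\eta)=1+\min(\eta,y)\), the reproducing property \(\langle u,Q_{y}\rangle_{G_{2}^{1}}=u(y)\) follows because \(Q_{y}^{\prime}\) equals 1 on \([0,y)\) and 0 afterwards, so the integral telescopes to \(u(y)-u(0)\). A quick numerical confirmation with the (arbitrarily chosen) test function \(u=e^{\eta}\):

```python
import math

def Q(y, eta):
    """Reproducing kernel Q_y(eta) of G_2^1[0,1]: 1 + min(eta, y)."""
    return 1.0 + min(eta, y)

def inner_with_exp(y, n=20000):
    """<u, Q_y> for u = exp: u(0) Q_y(0) plus the trapezoid-rule integral of
    u'(eta) * Q_y'(eta), where Q_y' is 1 before y and 0 after."""
    h = 1.0 / n
    vals = [math.exp(k * h) if k * h < y else 0.0 for k in range(n + 1)]
    integral = h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
    return math.exp(0.0) * Q(y, 0.0) + integral

y0 = 0.6
gap = abs(inner_with_exp(y0) - math.exp(y0))   # should reproduce u(y0) = e^{y0}
```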

Theorem 2.1

The reproducing kernel function \(R_{y}\) of \(W_{2}^{3} [ 0,1 ] \) is
$$ R_{y} ( \eta ) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \sum_{i=1}^{6}c_{i} ( y ) \eta^{i-1},& \eta\leq y,\\ \sum_{i=1}^{6}d_{i} ( y ) \eta^{i-1},& \eta>y,\end{array}\displaystyle \right . $$
(4)
where
$$\begin{aligned}& c_{1}(y)=0, \qquad{ c}_{4}{ (y)=0}, \qquad d_{1}(y)=\frac{1}{120}y^{5},\qquad { d}_{4}{ (y)=-}\frac {1}{12}y^{2}, \\& { c}_{2}{ (y)=-}\frac{1}{122}y^{5}+ \frac {5}{244}y^{4}-\frac{127}{244}y^{2}+ \frac{31}{61}y, \\& { c}_{3}{ (y)=-}\frac{1}{2{,}928}y^{5}+ \frac {127}{5{,}856}y^{4}-\frac{1}{12}y^{3}+ \frac{1{,}137}{1{,}952}y^{2}-\frac{127}{244}y, \\& { c}_{5}{ (y)=}\frac{1}{2{,}928}y^{5}- \frac {5}{5{,}856}y^{4}+\frac{127}{5{,}856}y^{2}- \frac{31}{1{,}464}y, \\& { c}_{6}{ (y)=-}\frac{1}{7{,}320}y^{5}+\frac{1}{2{,}928} y^{4}-\frac{1}{2{,}928}y^{2}-\frac{1}{122}y+ \frac{1}{120}, \\& { d}_{2}{ (y)=-}\frac{1}{122}y^{5}-\frac{31}{1{,}464} y^{4}-\frac{127}{244}y^{2}+\frac{31}{61}y, \\& { d}_{3}{ (y)=-}\frac{1}{2{,}928}y^{5}+ \frac {127}{5{,}856}y^{4}+\frac{1{,}137}{1{,}952}y^{2}- \frac{127}{244}y, \\& { d}_{5}{ (y)=}\frac{1}{2{,}928}y^{5}- \frac {5}{5{,}856}y^{4}+\frac{127}{5{,}856}y^{2}+ \frac{5}{244}y, \\& { d}_{6}{ (y)=-}\frac{1}{7{,}320}y^{5}+\frac{1}{2{,}928} y^{4}-\frac{1}{2{,}928}y^{2}-\frac{1}{122}y. \end{aligned}$$

Proof

Let \(u\in W_{2}^{3}[0,1]\) and \(0\leq y\leq1\). Define \(R_{y}\) by (4). Note that
$$\begin{aligned}& R_{y}^{\prime} ( \eta ) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \sum_{i=1}^{5}ic_{i+1} ( y ) \eta^{i-1}, &\eta< y, \\ \sum_{i=1}^{5}id_{i+1} ( y ) \eta^{i-1}, &\eta>y,\end{array}\displaystyle \right . \\& R_{y}^{\prime\prime} ( \eta ) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \sum_{i=1}^{4}i(i+1)c_{i+2} ( y ) \eta^{i-1},& \eta< y, \\ \sum_{i=1}^{4}i(i+1)d_{i+2} ( y ) \eta^{i-1},& \eta>y,\end{array}\displaystyle \right . \\& R_{y}^{(3)} ( \eta ) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \sum_{i=1}^{3}i(i+1)(i+2)c_{i+3} ( y ) \eta^{i-1},& \eta< y,\\ \sum_{i=1}^{3}i(i+1)(i+2)d_{i+3} ( y ) \eta^{i-1},& \eta >y,\end{array}\displaystyle \right . \\& R_{y}^{(4)} ( \eta ) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \sum_{i=1}^{2}i(i+1)(i+2)(i+3)c_{i+4} ( y ) \eta^{i-1},& \eta < y, \\ \sum_{i=1}^{2}i(i+1)(i+2)(i+3)d_{i+4} ( y ) \eta^{i-1}, & \eta>y,\end{array}\displaystyle \right . \end{aligned}$$
and
$$ R_{y}^{(5)} ( \eta ) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} 120c_{6}(y),& \eta< y, \\ 120d_{6}(y),& \eta>y.\end{array}\displaystyle \right . $$
By Definition 3 and integration by parts we have
$$\begin{aligned} \bigl\langle u(\eta),{ R}_{y}{ (\eta)} \bigr\rangle _{{{ W}_{2}^{3}}}={}&u(0)R_{y}(0)+u^{\prime}(0)R_{y}^{\prime }(0)+u^{\prime}(1)R_{y}^{\prime}(1)+{ u}^{\prime\prime }(1){ R}_{y}^{(3)}{ (1)} \\ &{}-{ u}^{\prime\prime }(0){ R}_{y}^{(3)}{ (0)}-{ u}^{\prime}(1){ R}_{y}^{(4)}{ (1)}+{ u}^{\prime}(0){ R}_{y}^{(4)}{ (0)} + \int_{0}^{1}u^{\prime}{ (\eta)R}_{y}^{(5)}{ (\eta)\,d\eta} \\ ={}&u^{\prime}(0) \bigl(R_{y}^{\prime}(0)+{ R}_{y}^{(4)}{ (0)}\bigr)+u^{\prime}(1) \bigl(R_{y}^{\prime }(1)-{ R}_{y}^{(4)}{ (1)} \bigr) \\ &{}+{ u}^{\prime\prime}(1){ R}_{y}^{(3)}{ (1)}-{ u}^{\prime\prime}(0){ R}_{y}^{(3)}{ (0)} \\ &{}+ \int_{0}^{y}{ R}_{y}^{(5)}{ (\eta)}u^{\prime}{ (\eta)\,d\eta+} \int_{y}^{1} { R}_{y}^{(5)}{ (\eta)}u^{\prime}{ (\eta )\,d\eta} \\ ={}&u^{\prime}(0) \bigl(c_{2}(y)+{ 24c}_{5}(y)\bigr)-{ u}^{\prime\prime}(0) \bigl(6c_{4}(y)\bigr) \\ &{}+u^{\prime }(1) \bigl(d_{2}(y)+2d_{3}(y)+3d_{4}(y)-20d_{5}(y)-115d_{6}(y) \bigr) \\ &{}+{ u}^{\prime\prime}(1) \bigl(6d_{4}(y)+24d_{5}(y)+60d_{6}(y) \bigr) \\ &{}+ \int_{0}^{y}120c_{6}(y)u{ ( \eta)\,d\eta+} \int_{y}^{1}120d_{6}(y)u{ (\eta)\,d\eta} \\ ={}&120u(y) \biggl(\frac{1}{120}\biggr)=u(y). \end{aligned}$$
This completes the proof. □
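The coefficients of Theorem 2.1 can also be verified mechanically with exact rational arithmetic: \(R_{y}\) must vanish at both endpoints, be four times continuously differentiable across \(\eta=y\), and reproduce point values. The sketch below does this at \(y=\frac{1}{2}\) with the test function \(u=\eta(1-\eta)\) (for which \(u^{(3)}=0\), so only the boundary terms of the inner product survive); note that the \(y^{5}\) term of \(c_{5}\) is taken with denominator 2,928, which the symmetry of the kernel requires (it must match the \(\eta^{4}\) coefficient of \(d_{6}\)):

```python
from fractions import Fraction as F

def c_coeffs(y):
    """c_1..c_6 of the eta <= y branch (Theorem 2.1; c_5's y^5 denominator 2928)."""
    return [
        F(0),
        -F(1, 122)*y**5 + F(5, 244)*y**4 - F(127, 244)*y**2 + F(31, 61)*y,
        -F(1, 2928)*y**5 + F(127, 5856)*y**4 - F(1, 12)*y**3
            + F(1137, 1952)*y**2 - F(127, 244)*y,
        F(0),
        F(1, 2928)*y**5 - F(5, 5856)*y**4 + F(127, 5856)*y**2 - F(31, 1464)*y,
        -F(1, 7320)*y**5 + F(1, 2928)*y**4 - F(1, 2928)*y**2
            - F(1, 122)*y + F(1, 120),
    ]

def d_coeffs(y):
    """d_1..d_6 of the eta > y branch (Theorem 2.1)."""
    return [
        F(1, 120)*y**5,
        -F(1, 122)*y**5 - F(31, 1464)*y**4 - F(127, 244)*y**2 + F(31, 61)*y,
        -F(1, 2928)*y**5 + F(127, 5856)*y**4 + F(1137, 1952)*y**2 - F(127, 244)*y,
        -F(1, 12)*y**2,
        F(1, 2928)*y**5 - F(5, 5856)*y**4 + F(127, 5856)*y**2 + F(5, 244)*y,
        -F(1, 7320)*y**5 + F(1, 2928)*y**4 - F(1, 2928)*y**2 - F(1, 122)*y,
    ]

def peval(coeffs, x):
    return sum(a * x**i for i, a in enumerate(coeffs))

def pderiv(coeffs):
    return [i * a for i, a in enumerate(coeffs)][1:]

y = F(1, 2)
c, d = c_coeffs(y), d_coeffs(y)

# R_y and its derivatives up to order 4 must be continuous across eta = y
gaps, cc, dd = [], c[:], d[:]
for _ in range(5):
    gaps.append(peval(cc, y) - peval(dd, y))
    cc, dd = pderiv(cc), pderiv(dd)

R0, R1 = peval(c, F(0)), peval(d, F(1))   # boundary conditions of W_2^3[0,1]

# Reproducing check with u = eta(1 - eta): u''' = 0 and u(0) = u(1) = 0, so
# <u, R_y> = u'(0) R_y'(0) + u'(1) R_y'(1) with u'(0) = 1 and u'(1) = -1.
value = peval(pderiv(c), F(0)) - peval(pderiv(d), F(1))
```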

Definition 7

[16]

We define the space \({ W}_{2}^{(3,3)} ( \Omega ) \) by
$$ { W}_{2}^{(3,3)} ( \Omega ) =\left \{ \textstyle\begin{array}{@{}c@{}} u\mid\frac{{ \partial}^{4}{ u}}{{ \partial\eta }^{2}{ \partial\tau}^{2}}\mbox{ is absolutely continuous in }\Omega =[0,1]\times{}[0,T], \\ \frac{{ \partial}^{6}{ u}}{{ \partial\eta }^{3}{ \partial\tau}^{3}}\in L^{2} ( \Omega ) , u(\eta ,0)=0, \frac{{ \partial u(\eta,0)}}{{ \partial\tau }}=0, u(0,\tau)=0, u(1,\tau)=0\end{array}\displaystyle \right \} $$
with the inner product and norm
$$\begin{aligned} \langle u,v \rangle_{{{ W}_{2}^{(3,3)}}} =&\sum_{i=0}^{2} \int_{0}^{T} \biggl[ \frac{\partial^{3}}{\partial \tau ^{3}} \frac{\partial^{i}}{\partial\eta^{i}}u(0,\tau)\frac{\partial ^{3}}{\partial\tau^{3}}\frac{\partial^{i}}{\partial\eta^{i}}v(0,\tau ) \biggr]\,d\tau \\ &{}+\sum_{j=0}^{2} \biggl\langle \frac{\partial^{j}}{\partial\tau ^{j}}u(\cdot,0),\frac{\partial^{j}}{\partial\tau^{j}}v(\cdot,0) \biggr\rangle _{{ W}_{{ 2}}^{{ 3}}} \\ &{}+ \int_{0}^{1} \int_{0}^{T} \biggl[ \frac{\partial^{3}}{\partial \eta^{3}} \frac{\partial^{3}}{\partial\tau^{3}}u(\eta,\tau)\frac{\partial ^{3}}{\partial\eta^{3}}\frac{\partial^{3}}{\partial\tau^{3}}v(\eta,\tau ) \biggr]\,d\tau \,d\eta,\quad u,v\in{ W}_{2}^{(3,3)} ( \Omega) \end{aligned}$$
and
$$ \Vert u\Vert _{W}=\sqrt{ \langle u,u \rangle _{W}}, \quad u \in{ W}_{2}^{(3,3)} ( \Omega ) . $$

Theorem 2.2

Let \(K_{ ( y,s ) } ( \eta,\tau ) \) be the reproducing kernel function of \({ W}_{2}^{(3,3)} ( \Omega ) \). We have
$$ K_{ ( y,s ) } ( \eta,\tau ) =R_{y}(\eta)r_{s}(\tau), $$
and for any \(u\in{ W}_{2}^{(3,3)} ( \Omega ) \),
$$ u(y,s)= \bigl\langle u(\eta,\tau),K_{ ( y,s ) } ( \eta ,\tau ) \bigr\rangle _{{ W}_{2}^{(3,3)}} $$
and
$$ K_{ ( y,s ) } ( \eta,\tau ) =K_{ ( \eta,\tau ) } ( y,s ) . $$

Definition 8

[16]

We define the space \(\widehat{W}_{2}^{(1,1)} ( \Omega ) \) by
$$ \widehat{W}_{2}^{(1,1)} ( \Omega ) =\left \{ \textstyle\begin{array}{@{}c@{}} u\mid u\mbox{ is absolutely continuous in }\Omega=[0,1]\times{}[ 0,T], \\ \frac{{ \partial}^{{ 2}}{u}}{{\partial\eta \partial\tau}}\in L^{2} ( \Omega ) \end{array}\displaystyle \right \} $$
with the inner product and norm
$$\begin{aligned} \langle u,v \rangle_{\widehat{W}_{2}^{(1,1)}} =& \int _{0}^{T} \biggl[ \frac{\partial}{\partial\tau}u(0,\tau)\frac{\partial }{\partial \tau}v(0,\tau) \biggr]\,d\tau+ \bigl\langle u(\cdot,0),v(\cdot,0) \bigr\rangle _{{ G}_{{ 2}}^{{ 1}}} \\ &{}+ \int_{0}^{1} \int_{0}^{T} \biggl[ \frac{\partial}{\partial\eta } \frac{\partial}{\partial\tau}u(\eta,\tau)\frac{\partial}{\partial\eta} \frac{\partial}{\partial\tau}v( \eta,\tau) \biggr]\,d\tau \,d\eta,\quad u,v\in\widehat{W}_{2}^{(1,1)} \end{aligned}$$
and
$$ \Vert u\Vert _{\widehat{W}_{2}^{(1,1)}}= \sqrt{ \langle u,u \rangle_{\widehat{W}_{2}^{(1,1)}}}, \quad u \in\widehat {W}_{2}^{(1,1)}. $$
\(\widehat{W}_{2}^{(1,1)} ( \Omega ) \) is a reproducing kernel space, and its reproducing kernel function \(G_{ ( y,s ) } ( \eta ,\tau ) \) is given as
$$ G_{ ( y,s ) } ( \eta,\tau ) =Q_{y}(\eta)q_{s}(\tau). $$

3 Solutions in \({ W}_{2}^{(3,3)}(\Omega)\)

In this section, we give the solution of (1)-(3) in the reproducing kernel space \({ W}_{2}^{(3,3)}(\Omega)\). We define the linear operator \(L:{ W}_{2}^{(3,3)}(\Omega)\rightarrow\widehat{W}_{2}^{(1,1)}(\Omega)\) as
$$ Lv=\frac{\partial^{2}v}{\partial\tau^{2}}(\eta,\tau)-\frac{\partial ^{2}v}{\partial\eta^{2}}(\eta,\tau),\quad v\in W_{2}^{(3,3)}(\Omega). $$
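As a concrete illustration of the operator, \(Lv\) can be approximated at a point with central second differences; for the polynomial \(v=\eta^{2}\tau^{2}\) (an arbitrary smooth test function, not from the paper) one has \(Lv=2\eta^{2}-2\tau^{2}\) exactly:

```python
def L_op(v, eta, tau, h=1e-3):
    """Apply Lv = v_tautau - v_etaeta at a point via central second differences."""
    v_tt = (v(eta, tau + h) - 2 * v(eta, tau) + v(eta, tau - h)) / h**2
    v_ee = (v(eta + h, tau) - 2 * v(eta, tau) + v(eta - h, tau)) / h**2
    return v_tt - v_ee

v = lambda eta, tau: eta**2 * tau**2   # arbitrary test function
got = L_op(v, 0.4, 0.7)
expected = 2 * 0.4**2 - 2 * 0.7**2     # Lv = 2*eta^2 - 2*tau^2 for this v
```

Central differences are exact (up to rounding) here because \(v\) is quadratic in each variable.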
If we homogenize the conditions of the model problem (1)-(3), then it changes to the following problem:
$$ \left \{ \textstyle\begin{array}{@{}l} Lv=M(\eta,\tau),\quad ( \eta,\tau ) \in\Omega =[0,1]\times{}[0,\tau], \\ v(\eta,0)=\frac{{ \partial v}}{{ \partial\tau}}(\eta ,0)=v(0,\tau)=v(1,\tau)=0,\end{array}\displaystyle \right . $$
(5)
where
$$\begin{aligned} M{ (\eta,\tau)} =&\frac{ ( \eta-1 ) f(\eta)h_{1}^{\prime\prime}(\tau)}{h_{1}(0)}-\eta h_{2}^{\prime \prime }(\tau)- \frac{2f^{\prime}(\eta)h_{1}(\tau)}{h_{1}(0)}-\frac{ ( \eta -1 ) f^{\prime\prime}(\eta)h_{1}(\tau)}{h_{1}(0)} \\ &{}+2f^{\prime}(\eta)+\eta f^{\prime\prime}(\eta)+\tau \biggl( g^{\prime \prime}(\eta)+\frac{2g(0)f^{\prime}(\eta)}{h_{1}(0)}+\frac{ ( \eta -1 ) f^{\prime\prime}(\eta)g(0)}{h_{1}(0)} \biggr) \\ &{}-\sin \left [ \textstyle\begin{array}{@{}c@{}} v(\eta,\tau)-\frac{ ( \eta-1 ) f(\eta)h_{1}(\tau )}{h_{1}(0)}+\eta h_{2}(\tau)+\eta f(\eta)-\eta h_{2}(0) \\ +\tau ( g(\eta)+\frac{ ( \eta-1 ) f(\eta )g(0)}{h_{1}(0)}-\eta g(1) )\end{array}\displaystyle \right ] \end{aligned}$$
and
$$\begin{aligned} v(\eta,\tau) =&u(\eta,\tau)+\frac{ ( \eta-1 ) f(\eta )h_{1}(\tau)}{h_{1}(0)}-\eta h_{2}(\tau)-\eta f(\eta)+\eta h_{2}(0) \\ &{}-\tau \biggl( g(\eta)+\frac{ ( \eta-1 ) f(\eta )g(0)}{h_{1}(0)}-\eta g(1) \biggr) \end{aligned}$$
with \(h_{1}(0)\neq0\).
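The transformation above can be verified on manufactured data: choose any smooth \(u\) with \(h_{1}(0)\neq0\), read \(f\), \(g\), \(h_{1}\), \(h_{2}\) off conditions (2)-(3), and check that the resulting \(v\) satisfies the four homogeneous conditions of (5). A sketch with the arbitrary choice \(u(\eta,\tau)=e^{\tau}(1+\eta^{2})\) (not an example from the paper):

```python
import math

# Manufactured data: u(eta, tau) = e^tau (1 + eta^2); then h1(0) = 1 != 0.
u  = lambda eta, tau: math.exp(tau) * (1 + eta**2)
f  = lambda eta: 1 + eta**2           # u(eta, 0)
g  = lambda eta: 1 + eta**2           # u_tau(eta, 0)
h1 = lambda tau: math.exp(tau)        # u(0, tau)
h2 = lambda tau: 2 * math.exp(tau)    # u(1, tau)

def v(eta, tau):
    """The transformed unknown, from the displayed formula for v(eta, tau)."""
    return (u(eta, tau)
            + (eta - 1) * f(eta) * h1(tau) / h1(0)
            - eta * h2(tau) - eta * f(eta) + eta * h2(0)
            - tau * (g(eta) + (eta - 1) * f(eta) * g(0) / h1(0) - eta * g(1)))

eps = 1e-6
checks = [
    abs(v(0.3, 0.0)),                               # v(eta, 0) = 0
    abs(v(0.0, 0.8)),                               # v(0, tau) = 0
    abs(v(1.0, 0.8)),                               # v(1, tau) = 0
    abs((v(0.3, eps) - v(0.3, -eps)) / (2 * eps)),  # v_tau(eta, 0) = 0
]
```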

Lemma 3.1

The operator L is a bounded linear operator.

Proof

By Definition 8 we have
$$\begin{aligned} \Vert Lu\Vert _{\widehat{W}_{2}^{(1,1)}}^{2} =& \int _{0}^{T} \biggl[ \frac{\partial}{\partial\tau}Lu(0,\tau) \biggr] ^{2}\,d\tau + \bigl\langle Lu( \cdot,0),Lu(\cdot,0) \bigr\rangle _{{G}_{{ 2}}^{{ 1}}} \\ &{}+ \int_{0}^{1} \int_{0}^{T} \biggl[ \frac{\partial}{\partial\eta } \frac{\partial}{\partial\tau}Lu(\eta,\tau) \biggr] ^{2}\,d\tau \,d\eta \\ =& \int_{0}^{T} \biggl[ \frac{\partial}{\partial\tau}Lu(0,\tau ) \biggr] ^{2}\,d\tau+ \bigl[ Lu(0,0) \bigr] ^{2} \\ &{}+ \int_{0}^{1} \biggl[ \frac{\partial}{\partial\eta}Lu(\eta,0) \biggr] ^{2}\,d\eta+ \int_{0}^{1} \int_{0}^{T} \biggl[ \frac{\partial}{\partial \eta } \frac{\partial}{\partial\tau}Lu(\eta,\tau) \biggr] ^{2}\,d\tau \,d\eta. \end{aligned}$$
Since
$$\begin{aligned}& u(\eta,\tau) = \bigl\langle u(\xi,\gamma),K_{ ( { \eta ,\tau} ) } ( \xi,\gamma ) \bigr\rangle _{{ W}_{2}^{(3,3)}}, \\& Lu(\eta,\tau) = \bigl\langle u(\xi,\gamma),LK_{ ( \eta,\tau ) } ( \xi,\gamma ) \bigr\rangle _{{ W}_{2}^{(3,3)}}, \end{aligned}$$
from the continuity of \(K_{ ( { \eta,\tau} ) } ( \xi ,\gamma ) \) we have
$$ \bigl\vert Lu(\eta,\tau) \bigr\vert \leq \Vert u\Vert _{{W}_{2}^{(3,3)}} \bigl\Vert LK_{ ( \eta,\tau ) } ( \xi,\gamma ) \bigr\Vert _{{ W}_{2}^{(3,3)}}=a_{0} \Vert u\Vert _{{W}_{2}^{(3,3)}}. $$
Accordingly, for \(i=0,1\),
$$\begin{aligned}& \frac{\partial^{i}}{\partial\eta^{i}}Lu(\eta,\tau)= \biggl\langle u(\xi ,\gamma), \frac{\partial^{i}}{\partial\eta^{i}}{ LK}_{ ( { \eta,\tau} ) } ( \xi,\gamma ) \biggr\rangle _{ { W}_{2}^{(3,3)}}, \\& \frac{\partial}{\partial\tau}\frac{\partial^{i}}{\partial\eta^{i}} Lu(\eta,\tau)= \biggl\langle u( \xi,\gamma),\frac{\partial}{\partial \tau}\frac{\partial^{i}}{\partial\eta^{i}}LK_{ ( { \eta,\tau} ) } ( \xi,\gamma ) \biggr\rangle _{{ W}_{2}^{(3,3)}}, \end{aligned}$$
and then
$$\begin{aligned}& \biggl\vert \frac{\partial^{i}}{\partial\eta^{i}}Lu(\eta,\tau ) \biggr\vert \leq e_{i}\Vert u\Vert _{{ W}_{2}^{(3,3)}}, \qquad \biggl\vert \frac{\partial}{\partial\tau}\frac{\partial^{i}}{\partial \eta^{i}}Lu(\eta,\tau) \biggr\vert \leq f_{i}\Vert u\Vert _{{ W}_{2}^{(3,3)}}. \end{aligned}$$
Therefore,
$$ \bigl\Vert Lu(\eta,\tau) \bigr\Vert _{\widehat{W}_{2}^{(1,1)}}^{2}{ \leq}\sum_{i=0}^{1} \bigl( e_{i}^{2}+T f_{i}^{2} \bigr) \Vert u\Vert _{{ W}_{2}^{(3,3)}}^{2}={ A}\Vert u\Vert _{{ W}_{2}^{(3,3)}}^{2}, $$
where \(A=\sum_{i=0}^{1} ( e_{i}^{2}+T f_{i}^{2} ) \). □
Now, choose a countable dense subset \(\{ ( \eta_{1},\tau _{1} ) , ( \eta_{2},\tau_{2} ) ,\dots \} \) in Ω and define
$$ \varphi_{i}(\eta,\tau)=G_{ ( \eta_{i},\tau_{i} ) } ( \eta ,\tau ) ,\qquad \Psi_{i}(\eta,\tau)=L^{\ast}\varphi _{i}(\eta , \tau), $$
where \(L^{\ast}\) is the adjoint operator of L. The orthonormal system \(\{ \widehat{\Psi}_{i}(\eta,\tau) \} _{i=1}^{\infty}\) of \(W_{2}^{(3,3)} ( \Omega ) \) can be obtained by the Gram-Schmidt orthogonalization of \(\{ \Psi_{i}(\eta,\tau) \} _{i=1}^{\infty}\) as
$$ \widehat{\Psi}_{i}(\eta,\tau)=\sum_{k=1}^{i} \beta_{ik}\Psi_{k}(\eta ,\tau). $$

Theorem 3.2

Assume that \(\{ (\eta_{i},\tau_{i}) \} _{i=1}^{\infty}\) is dense in Ω. Then \(\{ \Psi_{i}(\eta ,\tau ) \} _{i=1}^{\infty}\) is a complete system in \({ W}_{2}^{(3,3)}(\Omega)\), and
$$ \Psi_{i}(\eta,\tau)= L_{ ( { y,s} ) } K_{({y,s} )}(\eta,\tau )| _{({y,s}) = (\eta_{i},\tau_{i})} . $$

Proof

We have
$$\begin{aligned} \Psi_{i}(\eta,\tau) =& \bigl( L^{\ast}\varphi_{i} \bigr) ( \eta ,\tau ) = \bigl\langle \bigl( L^{\ast}\varphi_{i} \bigr) ( y,s ) ,K_{ ( { \eta,} { \tau} ) } ( y,s ) \bigr\rangle _{{ W}_{2}^{(3,3)}} \\ =& \bigl\langle \varphi_{i} ( y,s ) ,L_{ ( { y,s} ) }K_{ ( { \eta,\tau} ) } ( y,s ) \bigr\rangle _{\widehat{W}_{2}^{(1,1)}} \\ =& L_{ ( { y,s} ) }K_{ ( { \eta,} { \tau} ) } ( y,s ) |_{ ( { y,s} ) = ( { \eta}_{{ i}}, { \tau }_{{ i}} ) } \\ =& L_{ ( { y,s} ) }K_{ ( { y,s} ) } ( \eta,\tau ) |_{ ( { y,} { s} ) = ( { \eta}_{{ i}}, { \tau}_{{ i}} ) }. \end{aligned}$$
Clearly, \(\Psi_{i}(\eta,\tau)\in{ W}_{2}^{(3,3)} ( \Omega ) \). For each fixed \(u(\eta,\tau)\in{ W}_{2}^{(3,3)}(\Omega)\), if
$$ \bigl\langle u(\eta,\tau),\Psi_{i}(\eta,\tau) \bigr\rangle _{W_{2}^{(3,3)}}=0, \quad i=1,2,\dots, $$
then
$$\begin{aligned} \bigl\langle u(\eta,\tau), \bigl( L^{\ast}\varphi_{i} \bigr) ( \eta ,\tau ) \bigr\rangle _{{ W}_{2}^{(3,3)}} =& \bigl\langle Lu(\eta,\tau), \varphi_{i} ( \eta,\tau ) \bigr\rangle _{{ \widehat{W}_{2}^{(1,1)}}} = ( { Lu} ) ( { \eta}_{{ i}}, { \tau}_{{ i}} ) =0, \quad i=1,2,\dots . \end{aligned}$$
Since \(\{ (\eta_{i},\tau_{i}) \} _{i=1}^{\infty}\) is dense in Ω, \(( Lu ) ( \eta,\tau ) =0\). Therefore, \(u=0\) by the existence of \(L^{-1}\). □

Theorem 3.3

If \(\{ (\eta_{i},\tau_{i}) \} _{i=1}^{\infty}\) is dense in Ω, then the solution of (5) is
$$ u=\sum_{i=1}^{\infty}\sum _{k=1}^{i}\beta_{ik}M( \eta_{k},\tau_{k})\widehat{ \Psi}_{i}(\eta,\tau). $$
(6)

Proof

The system \(\{ \Psi_{i}(\eta,\tau) \} _{i=1}^{\infty}\) is complete in \({ W}_{2}^{(3,3)}(\Omega)\). Therefore, we get
$$\begin{aligned} u =&\sum_{i=1}^{\infty} \langle u,\widehat{ \Psi}_{i} \rangle_{{ W}_{2}^{(3,3)}}\widehat{\Psi}_{i}=\sum _{i=1}^{\infty }\sum_{k=1}^{i} \beta_{ik} \langle u,\Psi_{k} \rangle_{{ W}_{2}^{(3,3)}} \widehat{\Psi}_{i} =\sum_{i=1}^{\infty}\sum _{k=1}^{i}\beta_{ik} \bigl\langle u,L^{\ast }\varphi_{k} \bigr\rangle _{{ W}_{2}^{(3,3)}}\widehat{ \Psi} _{i}\\ =&\sum_{i=1}^{\infty} \sum_{k=1}^{i}\beta_{ik} \langle Lu,\varphi _{k} \rangle_{{\widehat{W}_{2}^{(1,1)}}}\widehat{ \Psi}_{i} =\sum_{i=1}^{\infty}\sum _{k=1}^{i}\beta_{ik} \langle Lu,G_{ ( { \eta}_{{ k}}, { \tau}_{{ k}} ) } \rangle_{{\widehat{W}_{2}^{(1,1)}}}\widehat{\Psi}_{i}\\ =& \sum_{i=1}^{\infty}\sum _{k=1}^{i}\beta_{ik}Lu ( \eta _{k},\tau _{k} ) \widehat{\Psi}_{i} =\sum_{i=1}^{\infty}\sum _{k=1}^{i}\beta_{ik}M( \eta_{k},\tau_{k})\widehat{ \Psi}_{i}. \end{aligned}$$

This completes the proof. □

Now an approximate solution \(u_{n}\) can be obtained as the n-term truncation of the exact solution u:
$$ { u}_{n}=\sum_{i=1}^{n}\sum _{k=1}^{i}\beta _{ik}M( \eta_{k},\tau_{k})\widehat{\Psi}_{i}(\eta, \tau). $$
Obviously,
$$ \bigl\Vert u_{n}(\eta,\tau)-u(\eta,\tau) \bigr\Vert \rightarrow0\ \quad ( n\rightarrow\infty ) . $$

4 The method implementation

If M is linear, then the analytical solution of (5) can be obtained directly by (6). If M is nonlinear, then the solution of (5) can be obtained either by (6) or by an iterative method as follows. We construct an iterative sequence \(u_{n}\) by putting
$$ \left \{ \textstyle\begin{array}{@{}l} \mbox{any fixed }u_{0}\in{W}_{2}^{(3,3)}, \\ u_{n}=\sum_{i=1}^{n}A_{i}\widehat{\Psi}_{i},\end{array}\displaystyle \right . $$
(7)
where
$$ \left \{ \textstyle\begin{array}{@{}l} A_{1}=\beta_{11}M(\eta_{k},\tau_{k}), \\ A_{2}=\sum_{k=1}^{2}\beta_{2k}M(\eta_{k},\tau_{k}), \\ \cdots\\ A_{n}=\sum_{k=1}^{n}\beta_{nk}M(\eta_{k},\tau_{k}).\end{array}\displaystyle \right . $$
(8)
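In code, (8) is simply a lower-triangular matrix-vector product: each \(A_{i}\) combines the orthogonalization coefficients \(\beta_{ik}\) with the values of M at the first i collocation points. A minimal sketch with made-up numbers (the β values and M samples below are illustrative only):

```python
def coefficients(beta, M_vals):
    """A_i = sum_{k <= i} beta_ik * M(eta_k, tau_k), as in (8).
    beta is stored as a ragged lower-triangular list of rows."""
    return [sum(row[k] * M_vals[k] for k in range(len(row))) for row in beta]

beta = [[2.0], [1.0, 3.0], [0.0, 1.0, 4.0]]   # made-up lower-triangular values
M_vals = [1.0, 2.0, 3.0]                      # made-up M(eta_k, tau_k) samples
A = coefficients(beta, M_vals)
```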

Lemma 4.1

If
$$ u_{n}\stackrel{\Vert \cdot \Vert }{\rightarrow}\widehat{u},\quad \Vert u_{n}\Vert \textit{ is bounded}, ( \eta_{n},\tau _{n})\rightarrow(y,s),\textit{ and }M(\eta, \tau) \textit{ is continuous}, $$
then
$$ M(\eta_{n},\tau_{n})\rightarrow M(y,s). $$

Proof

By the reproducing property and Cauchy-Schwarz inequality we have
$$\begin{aligned}[b] \bigl\vert u(\eta,\tau) \bigr\vert &= \bigl\vert \bigl\langle u(y,s),K_{ ( \eta,\tau ) }(y,s) \bigr\rangle _{{ W}_{2}^{(3,3)}} \bigr\vert \\ &\leq \bigl\Vert u(y,s) \bigr\Vert _{{ W}_{2}^{(3,3)}} \bigl\Vert K_{ ( \eta,\tau ) }(y,s) \bigr\Vert _{{ W}_{2}^{(3,3)}}=N_{1} \bigl\Vert u(y,s) \bigr\Vert _{{ W}_{2}^{(3,3)}}. \end{aligned} $$
Thus, we obtain
$$\begin{aligned} \biggl\vert \frac{\partial u(\eta,\tau)}{\partial\eta} \biggr\vert =& \biggl\vert \biggl\langle u(y,s),\frac{\partial K_{ ( \eta,\tau ) }(y,s)}{\partial\eta} \biggr\rangle _{{ W}_{2}^{(3,3)}} \biggr\vert \leq \bigl\Vert u(y,s) \bigr\Vert _{{ W}_{2}^{(3,3)}} \biggl\Vert \frac{\partial K_{ ( \eta,\tau ) }(y,s)}{\partial\eta} \biggr\Vert _{{ W}_{2}^{(3,3)}}\\ =&N_{2} \bigl\Vert u(y,s) \bigr\Vert _{{ W}_{2}^{(3,3)}} \end{aligned}$$
and
$$\begin{aligned} \biggl\vert \frac{\partial u(\eta,\tau)}{\partial\tau} \biggr\vert =& \biggl\vert \biggl\langle u(y,s),\frac{\partial K_{ ( \eta,\tau ) }(y,s)}{\partial\tau} \biggr\rangle _{{ W}_{2}^{(3,3)}} \biggr\vert \leq \bigl\Vert u(y,s) \bigr\Vert _{W} \biggl\Vert \frac{\partial K_{ ( \eta,\tau ) }(y,s)}{\partial\tau} \biggr\Vert _{{ W}_{2}^{(3,3)}}\\ =&N_{3} \bigl\Vert u(y,s) \bigr\Vert _{{ W}_{2}^{(3,3)}}. \end{aligned}$$
On the one hand, we have
$$\begin{aligned} \bigl\vert u_{n-1}(y,s)-\widehat{u}(y,s) \bigr\vert =&\bigl\vert \bigl\langle u_{n-1}(\eta,\tau)-\widehat{u}(\eta, \tau),K_{ ( y,s ) }( \eta,\tau) \bigr\rangle _{{W}_{2}^{(3,3)}}\bigr\vert \\ \leq& \bigl\Vert u_{n-1}(\eta,\tau)-\widehat{u}(\eta,\tau) \bigr\Vert _{{ W}_{2}^{(3,3)}} \bigl\Vert K_{ ( \eta,\tau ) }(y,s) \bigr\Vert _{{ W}_{2}^{(3,3)}} \\ =&N_{4} \bigl\Vert u_{n-1}(\eta,\tau)-\widehat{u}(\eta, \tau) \bigr\Vert _{{ W}_{2}^{(3,3)}}; \end{aligned}$$
on the other hand, we get
$$\begin{aligned} \bigl\vert u_{n-1}(\eta_{n},\tau_{n})- \widehat{u}(y,s) \bigr\vert =& \bigl\vert u_{n-1}( \eta_{n},\tau_{n})-u_{n-1}(y,s)+u_{n-1}(y,s)-\widehat{u}(y,s) \bigr\vert \\ \leq& \bigl\vert \nabla u_{n-1}(\xi,\eta) \bigr\vert \bigl\vert ( \eta _{n},\tau_{n})-(y,s) \bigr\vert + \bigl\vert u_{n-1}(y,s)-\widehat{u}(y,s) \bigr\vert . \end{aligned}$$
Using these inequalities with \(u_{n}\stackrel{\Vert \cdot \Vert }{\rightarrow}\widehat{u}\), we find
$$ \bigl\vert u_{n-1}(y,s)-\widehat{u}(y,s) \bigr\vert \rightarrow 0, \qquad \bigl\vert \nabla u_{n-1}(\xi,\eta) \bigr\vert { \leq}\sqrt{{ c}_{1}^{2}{ +c}_{2}^{2}} \Vert u\Vert _{{ W}_{2}^{(3,3)}}. $$
Therefore, as \(n\rightarrow\infty\), using the boundedness of \(\Vert u_{n}\Vert \) gives
$$ \bigl\vert u_{n-1}(\eta_{n},\tau_{n})- \widehat{u}(y,s) \bigr\vert \rightarrow0. $$
As \(n\rightarrow\infty\), with the continuity of \(M(\eta,\tau)\) we get
$$ M (\eta_{n},\tau_{n})\rightarrow M(y,s). $$
This completes the proof. □

Theorem 4.2

Assume that \(\Vert u_{n}\Vert \) is bounded in (7) and that (5) has a unique solution. If \(\{ (\eta _{i},\tau_{i}) \} _{i=1}^{\infty}\) is dense in Ω, then the n-term approximate solutions \(u_{n}(\eta,\tau)\) converge to the analytical solution \(u(\eta,\tau)\) of (5), and
$$ u(\eta,\tau)=\sum_{i=1}^{\infty}A_{i} \widehat{\Psi}_{i}(\eta,\tau), $$
where \(A_{i}\) is given by (8).

Proof

First, we prove the convergence of \(u_{n}(\eta,\tau)\). From (7) we infer that
$$ {u}_{n+1}{ (\eta,\tau)=u}_{n}{ (\eta ,\tau)+A}_{n+1} \widehat{\Psi}_{n+1}{ (\eta,\tau).} $$
The orthonormality of \(\{ \widehat{\Psi}_{i} \} _{i=1}^{\infty } \) yields that
$$ \Vert u_{n+1}\Vert ^{2}=\Vert u_{n}\Vert ^{2}{ +A}_{n+1}^{2}=\sum _{i=1}^{n+1}{ A}_{i}^{2}. $$
(9)
In terms of (9), we have that \(\Vert u_{n+1}\Vert \geq\Vert u_{n}\Vert \). Since \(\Vert u_{n}\Vert \) is bounded, \(\Vert u_{n}\Vert \) is convergent, and there exists a constant c such that
$$ \sum_{i=1}^{\infty}A_{i}^{2}=c. $$
This implies that
$$ \{ A_{i} \} _{i=1}^{\infty}\in l^{2}. $$
If \(m>n\), then
$$\begin{aligned} \Vert u_{m}-u_{n}\Vert ^{2} =&\Vert u_{m}-u_{m-1}+u_{m-1}-u_{m-2}+ \cdots+u_{n+1}-u_{n}\Vert ^{2} \\ \leq&\Vert u_{m}-u_{m-1}\Vert ^{2}+\Vert u_{m-1}-u_{m-2}\Vert ^{2}+\cdots+\Vert u_{n+1}-u_{n}\Vert ^{2}. \end{aligned}$$
Since
$$ \Vert u_{m}-u_{m-1}\Vert ^{2}=A_{m}^{2}, $$
we have
$$ \Vert u_{m}-u_{n}\Vert ^{2}=\sum _{l=n+1}^{m}A_{l}^{2} \rightarrow0\quad \mbox{as }n\rightarrow\infty. $$
The completeness of \({ W}_{2}^{(3,3)}(\Omega)\) shows that \(u_{n}\rightarrow\widehat{u}\) as \(n\rightarrow\infty\). We have
$$ \widehat{u}(\eta,\tau)=\sum_{i=1}^{\infty}A_{i} \widehat{\Psi }_{i}(\eta ,\tau). $$
Note that
$$ (L\widehat{u}) (\eta,\tau)=\sum_{i=1}^{\infty}A_{i}L \widehat{\Psi}_{i}(\eta,\tau) $$
and
$$\begin{aligned}[b] (L\widehat{u}) (\eta_{l},\tau_{l}) &=\sum _{i=1}^{\infty }A_{i}L\widehat{\Psi}_{i}(\eta_{l},\tau_{l}) =\sum_{i=1}^{\infty}A_{i} \bigl\langle L\widehat{\Psi}_{i}(\eta ,\tau), \varphi_{l}(\eta,\tau) \bigr\rangle _{{\widehat{W}_{2}^{(1,1)}}} \\ &=\sum_{i=1}^{\infty}A_{i} \bigl\langle \widehat{\Psi}_{i}(\eta,\tau ),L^{\ast} \varphi_{l}(\eta,\tau) \bigr\rangle _{{ W}_{2}^{(3,3)}} =\sum_{i=1}^{\infty}A_{i} \bigl\langle \widehat{\Psi}_{i}(\eta,\tau ),\Psi_{l}(\eta,\tau) \bigr\rangle _{{ W}_{2}^{(3,3)}}. \end{aligned} $$
Therefore,
$$\begin{aligned} \sum_{l=1}^{i}\beta_{il}(L \widehat{u}) (\eta_{l},\tau_{l}) =&\sum _{j=1}^{\infty}A_{j} \Biggl\langle \widehat{ \Psi}_{j}(\eta,\tau ),\sum_{l=1}^{i} \beta_{il}\Psi_{l}(\eta,\tau) \Biggr\rangle _{{ W}_{2}^{(3,3)}} \\ =&\sum_{j=1}^{\infty}A_{j} \bigl\langle \widehat{\Psi}_{j}(\eta,\tau ), \widehat{ \Psi}_{i}(\eta,\tau) \bigr\rangle _{{ W}_{2}^{(3,3)}}=A_{i}. \end{aligned}$$
In view of (8), we have
$$ L\widehat{u} (\eta_{l},\tau_{l}) =M( \eta_{l},\tau_{l}). $$
Since \(\{ (\eta_{i},\tau_{i}) \} _{i=1}^{\infty}\) is dense in Ω, for each \((y,s)\in\Omega\), there exists a subsequence \(\{ ( \eta_{n_{j}},\tau_{n_{j}} ) \} _{j=1}^{\infty}\) such that
$$ ( \eta_{n_{j}},\tau_{n_{j}} ) \rightarrow(y,s) \quad (j \rightarrow \infty). $$
We know that
$$ L\widehat{u} ( \eta_{n_{j}},\tau_{n_{j}} ) =M(\eta _{n_{j}},\tau _{n_{j}}). $$
Let \(j\rightarrow\infty\); by the continuity of M we have
$$ (L\widehat{u}) (y,s)=M(y,s), $$
which proves that \(\widehat{u}(\eta,\tau)\) satisfies (5). □
We obtain the n-term approximate solution \(u_{n}(\eta,\tau)\) as
$$ u_{n}(\eta,\tau)=\sum_{i=1}^{n} \sum_{k=1}^{i}\beta_{ik}M \bigl(\eta_{k},\tau_{k} \bigr)\widehat {\Psi}_{i}(\eta,\tau). $$
(10)

Remark

Let us consider a countable dense set
$$ \bigl\{ ( \eta_{1},\tau_{1} ) , ( \eta_{2}, \tau_{2} ) ,\dots \bigr\} \in\Omega $$
and define
$$ \varphi_{i}=G_{ ( \eta_{i},\tau_{i} ) },\qquad \Psi_{i}=L^{\ast} \varphi_{i},\qquad \widehat{\Psi}_{i}{=} \sum _{k=1}^{i}{\beta}_{ik}{ \Psi}_{k}. $$
Then the coefficients \(\beta_{ik}\) can be found by
$$\begin{aligned}& { \beta}_{11} =\frac{1}{\Vert \Psi_{1}\Vert },\qquad { \beta}_{ii}= \frac{1}{\sqrt{\Vert \Psi_{i}\Vert ^{2}-\sum_{k=1}^{i-1}c_{ik}^{2}}},\qquad { \beta}_{ij} =\frac{-\sum_{k=j}^{i-1} c_{ik}\beta_{kj}}{\sqrt{\Vert \Psi_{i}\Vert ^{2} -\sum_{k=1}^{i-1} c_{ik}^{2}}},\qquad { c}_{ik} = \langle \Psi_{i},\widehat{\Psi}_{k} \rangle. \end{aligned}$$
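These formulas are an inverse-Cholesky-style recursion on the Gram matrix \(G_{ij}=\langle\Psi_{i},\Psi_{j}\rangle\): each \(c_{ik}\) expands \(\langle\Psi_{i},\widehat{\Psi}_{k}\rangle\) through the rows of β computed so far. A sketch on a small made-up Gram matrix (not from the paper), checking that \(BGB^{T}\) is the identity, i.e. that the \(\widehat{\Psi}_{i}\) are orthonormal:

```python
import math

def beta_coefficients(G):
    """Lower-triangular beta from the Gram matrix G[i][j] = <Psi_i, Psi_j>,
    following the recursion above (indices shifted to start at 0)."""
    n = len(G)
    beta = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # c[k] = <Psi_i, hatPsi_k>, expanding hatPsi_k = sum_j beta_kj Psi_j
        c = [sum(beta[k][j] * G[i][j] for j in range(k + 1)) for k in range(i)]
        r = math.sqrt(G[i][i] - sum(ck * ck for ck in c))  # residual norm
        beta[i][i] = 1.0 / r
        for j in range(i):
            beta[i][j] = -sum(c[k] * beta[k][j] for k in range(j, i)) / r
    return beta

# Gram matrix of the made-up vectors (1,0,0), (1,1,0), (1,1,1)
G = [[1.0, 1.0, 1.0], [1.0, 2.0, 2.0], [1.0, 2.0, 3.0]]
B = beta_coefficients(G)

# If the recursion is right, B G B^T is the identity matrix.
BGBt = [[sum(B[i][a] * G[a][b] * B[j][b] for a in range(3) for b in range(3))
         for j in range(3)] for i in range(3)]
```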

5 Numerical experiments

In this section, two examples are solved with the RKM. We present our results in tables and figures. The numerical results are compared with exact solutions and with existing numerical approximations to illustrate the efficiency and high accuracy of the method. The method yields the solutions as convergent series with easily computable components and improves the convergence of the series solution. The method is used in a direct way, without restrictive assumptions. Throughout this work, all computations are carried out with the Maple 16 software package.

Example 5.1

Let us consider the problem with the following initial conditions:
$$ u(\eta,0)=\sin(\pi\eta), \qquad\frac{\partial u}{\partial\tau }(\eta,0)=0. $$
The exact solution is [28]
$$ u ( \eta,\tau ) =\frac{1}{2}\bigl(\sin \pi(\eta+\tau)+\sin\pi (\eta- \tau) \bigr). $$
After homogenizing the initial conditions and using our method, we obtain the results presented in Tables 1-3 and Figures 1-4.
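As a quick sanity check on the exact solution above: by the product-to-sum identity it collapses to \(\sin(\pi\eta)\cos(\pi\tau)\), which makes the initial conditions immediate and lets the ES column of Table 1 be reproduced directly. The following short script (ours, not part of the paper) verifies this:

```python
import math

def exact_u(eta, tau):
    # Exact solution of Example 5.1 in d'Alembert form.
    return 0.5 * (math.sin(math.pi * (eta + tau))
                  + math.sin(math.pi * (eta - tau)))

# Product-to-sum: u = sin(pi*eta)*cos(pi*tau), hence u(eta, 0) = sin(pi*eta)
# and the tau-derivative of u vanishes at tau = 0, as required.
for eta in (0.1, 0.4, 0.7):
    for tau in (0.0, 0.25, 0.5):
        assert math.isclose(exact_u(eta, tau),
                            math.sin(math.pi * eta) * math.cos(math.pi * tau),
                            abs_tol=1e-12)

# First ES entry of Table 1 (eta = tau = 0.1), to the printed precision.
assert math.isclose(exact_u(0.1, 0.1), 0.2938926262, abs_tol=1e-9)
```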
Figure 1: Plots of RKM solution for Example 5.1.

Figure 2: Plots of absolute error for Example 5.1.

Figure 3: Plots of absolute error for Example 5.1.

Figure 4: Plots of absolute error for Example 5.1.

Table 1: Numerical results for Example 5.1

| η | τ | ES | AS | AE | CPU time (s) |
|---|---|----|----|----|--------------|
| 0.1 | 0.1 | 0.2938926262 | 0.2938930965 | 4.703 × 10⁻⁷ | 3.860 |
| 0.2 | 0.2 | 0.4755282582 | 0.4755313577 | 3.0995 × 10⁻⁶ | 3.016 |
| 0.3 | 0.3 | 0.4755282582 | 0.4755183355 | 9.9227 × 10⁻⁶ | 2.984 |
| 0.4 | 0.4 | 0.2938926261 | 0.2939007109 | 8.0848 × 10⁻⁶ | 3.000 |
| 0.5 | 0.5 | 0.0 | 0.0000282140 | 2.82140 × 10⁻⁵ | 3.094 |
| 0.6 | 0.6 | −0.2938926264 | −0.2939063137 | 1.36873 × 10⁻⁵ | 3.031 |
| 0.7 | 0.7 | −0.4755282583 | −0.4755305759 | 2.3176 × 10⁻⁶ | 3.031 |
| 0.8 | 0.8 | −0.4755282581 | −0.4755277748 | 4.833 × 10⁻⁷ | 2.953 |
| 0.9 | 0.9 | −0.2938926260 | −0.2938966580 | 4.0320 × 10⁻⁶ | 3.204 |
| 1.0 | 1.0 | 0.0 | −3.690702068 × 10⁻⁷ | 3.690702068 × 10⁻⁷ | 3.578 |

Table 2: Numerical results for Example 5.1 with τ = 1

| η | ES | AS | AE | RE |
|---|----|----|----|----|
| −0.80 | 0.5877852522 | 0.5877854278 | 1.756 × 10⁻⁷ | 2.987485639 × 10⁻⁷ |
| −0.40 | 0.9510565165 | 0.9510565935 | 7.70 × 10⁻⁸ | 8.096259125 × 10⁻⁸ |
| 0.40 | −0.9510565165 | −0.9510565935 | 7.70 × 10⁻⁸ | 8.096259125 × 10⁻⁸ |
| 0.80 | −0.5877852522 | −0.5877854278 | 1.756 × 10⁻⁷ | 2.987485639 × 10⁻⁷ |

Table 3: Comparison of AE and RE for Example 5.1

| η | AE [27] | AE [RKM] | RE [27] | RE [RKM] |
|---|---------|----------|---------|----------|
| −0.80 | 1.94 × 10⁻⁵ | 1.756 × 10⁻⁷ | 3.29 × 10⁻⁵ | 2.987485639 × 10⁻⁷ |
| −0.40 | 2.84 × 10⁻⁷ | 7.700 × 10⁻⁸ | 2.98 × 10⁻⁷ | 8.096259125 × 10⁻⁸ |
| 0.40 | 2.84 × 10⁻⁷ | 7.700 × 10⁻⁸ | 2.98 × 10⁻⁷ | 8.096259125 × 10⁻⁸ |
| 0.80 | 1.94 × 10⁻⁵ | 1.756 × 10⁻⁷ | 3.29 × 10⁻⁵ | 2.987485639 × 10⁻⁷ |

Example 5.2

We solve the SG equation (1) in the region Ω with the following initial conditions:
$$ u(\eta,0)=0, \qquad\frac{\partial u}{\partial\tau}(\eta,0)=4\operatorname{sech}(\eta). $$
The exact solution is [27]
$$ u ( \eta,\tau ) =4\arctan \bigl(\operatorname{sech}(\eta)\,\tau \bigr). $$
After homogenizing the initial conditions and applying RKM, we get the results presented in Tables 4-16 and Figures 5-8.
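The exact solution above can be checked directly against its stated initial data and the ES columns of the tables below. A short script (ours, not part of the paper), with the τ-derivative at τ = 0 approximated by a central difference:

```python
import math

def exact_u(eta, tau):
    # Exact solution of Example 5.2: u = 4*arctan(sech(eta) * tau).
    return 4.0 * math.atan(tau / math.cosh(eta))

# Initial data: u(eta, 0) = 0 and (d/dtau) u at tau = 0 equals 4*sech(eta).
h = 1e-7
for eta in (0.0, 0.5, 1.0):
    assert exact_u(eta, 0.0) == 0.0
    du = (exact_u(eta, h) - exact_u(eta, -h)) / (2.0 * h)  # central difference
    assert math.isclose(du, 4.0 / math.cosh(eta), rel_tol=1e-6)

# Spot checks: Table 4 (eta = tau = 0.1) and Table 5 (eta = 0, tau = 1,
# where u = 4*arctan(1) = pi).
assert math.isclose(exact_u(0.1, 0.1), 0.396702532289366215, abs_tol=1e-12)
assert math.isclose(exact_u(0.0, 1.0), math.pi, rel_tol=1e-15)
```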
Figure 5: Plots of absolute error for Example 5.2.

Figure 6: Plots of absolute error for Example 5.2.

Figure 7: Plots of absolute error for Example 5.2.

Figure 8: Plots of absolute error for Example 5.2.

Table 4: Numerical results for Example 5.2

| η | τ | ES | AS | AE | RE | CPU time (s) |
|---|---|----|----|----|----|--------------|
| 0.1 | 0.1 | 0.396702532289366215 | 0.39670253228936612 | 9.43 × 10⁻¹⁷ | 2.37 × 10⁻¹⁶ | 0.874 |
| 0.2 | 0.2 | 0.77443854423966038 | 0.774438544239660056 | 3.24 × 10⁻¹⁶ | 4.19 × 10⁻¹⁶ | 0.843 |
| 0.3 | 0.3 | 1.11790876976715877 | 1.117908769767159201 | 4.29 × 10⁻¹⁶ | 3.83 × 10⁻¹⁶ | 0.874 |
| 0.4 | 0.4 | 1.41753016381685945 | 1.417530163816859722 | 2.65 × 10⁻¹⁶ | 1.87 × 10⁻¹⁶ | 0.952 |
| 0.5 | 0.5 | 1.66943886908587781 | 1.669438869085877479 | 3.33 × 10⁻¹⁶ | 1.99 × 10⁻¹⁶ | 0.874 |
| 0.6 | 0.6 | 1.87415961287513759 | 1.874159612875137133 | 4.57 × 10⁻¹⁶ | 2.44 × 10⁻¹⁶ | 0.904 |
| 0.7 | 0.7 | 2.03492391748435838 | 2.034923917484357432 | 9.47 × 10⁻¹⁶ | 4.65 × 10⁻¹⁶ | 0.921 |
| 0.8 | 0.8 | 2.15626165034295019 | 2.156261650342951155 | 9.63 × 10⁻¹⁶ | 4.46 × 10⁻¹⁶ | 0.967 |
| 0.9 | 0.9 | 2.24305837947587106 | 2.243058379475871257 | 1.88 × 10⁻¹⁶ | 8.39 × 10⁻¹⁷ | 0.999 |
| 1.0 | 1.0 | 2.30002473031364741 | 2.300024730313647325 | 8.66 × 10⁻¹⁷ | 3.76 × 10⁻¹⁷ | 0.967 |

Table 5: Numerical results for Example 5.2 with τ = 1

| η | ES | AS | AE | RE | CPU time (s) |
|---|----|----|----|----|--------------|
| −0.80 | 2.5681097221289163512 | 2.5681097220865804301 | 4.2 × 10⁻¹¹ | 1.6 × 10⁻¹¹ | 0.842 |
| −0.40 | 2.9858433445292456583 | 2.9858433445285564413 | 6.8 × 10⁻¹³ | 2.3 × 10⁻¹³ | 0.873 |
| 0.00 | 3.1415926535897932385 | 3.1415926535905335235 | 7.4 × 10⁻¹³ | 2.3 × 10⁻¹³ | 0.936 |
| 0.40 | 2.9858433445292456583 | 2.9858433445285564413 | 6.8 × 10⁻¹³ | 2.3 × 10⁻¹³ | 0.686 |
| 0.80 | 2.5681097221289163512 | 2.5681097220865804301 | 4.2 × 10⁻¹¹ | 1.6 × 10⁻¹¹ | 0.733 |

Table 6: Comparison of absolute and relative errors for Example 5.2

| η | AE [27] | AE [RKM] | RE [27] | RE [RKM] |
|---|---------|----------|---------|----------|
| −0.80 | 1.53 × 10⁻⁸ | 4.23359211 × 10⁻¹¹ | 5.96 × 10⁻⁹ | 1.64852462241777932 × 10⁻¹¹ |
| −0.40 | 3.54 × 10⁻¹⁰ | 6.89217 × 10⁻¹³ | 1.18 × 10⁻¹⁰ | 2.30828252012217808 × 10⁻¹³ |
| 0.00 | 1.62 × 10⁻¹⁰ | 7.40285 × 10⁻¹³ | 5.15 × 10⁻¹¹ | 2.35640034093567477 × 10⁻¹³ |
| 0.40 | 3.54 × 10⁻¹⁰ | 6.89217 × 10⁻¹³ | 1.18 × 10⁻¹⁰ | 2.30828252012217808 × 10⁻¹³ |
| 0.80 | 1.53 × 10⁻⁸ | 4.23359211 × 10⁻¹¹ | 5.96 × 10⁻⁹ | 1.64852462241777932 × 10⁻¹¹ |

Table 7: Numerical results for Example 5.2 with η = 2.5

| τ | ES | AS | AE | RE | CPU time (s) |
|---|----|----|----|----|--------------|
| 0.02 | 0.013045652299470337726 | 0.013045652299470429228 | 9.15 × 10⁻¹⁷ | 7.01 × 10⁻¹⁵ | 1.014 |
| 0.04 | 0.026091027076458045383 | 0.026091027076458055762 | 1.03 × 10⁻¹⁷ | 3.97 × 10⁻¹⁶ | 0.905 |
| 0.06 | 0.039135846843901571207 | 0.039135846843901591760 | 2.0 × 10⁻¹⁷ | 5.25 × 10⁻¹⁶ | 0.873 |
| 0.08 | 0.05217983418557021854 | 0.052179834185570326022 | 1.07 × 10⁻¹⁶ | 2.05 × 10⁻¹⁵ | 0.842 |
| 0.1 | 0.065222711791451326376 | 0.06522271179145110938 | 2.16 × 10⁻¹⁶ | 3.32 × 10⁻¹⁵ | 0.858 |
| 0.3 | 0.19552959072837645953 | 0.19552959072837616718 | 2.9 × 10⁻¹⁶ | 1.49 × 10⁻¹⁵ | 0.749 |

Table 8: Numerical results for Example 5.2 with η = 5.0

| τ | ES | AS | AE | RE | CPU time (s) |
|---|----|----|----|----|--------------|
| 0.02 | 0.0010780225516042560299 | 0.0010780225516046950169 | 4.3 × 10⁻¹⁶ | 4.07 × 10⁻¹³ | 1.233 |
| 0.04 | 0.0021560449466078880312 | 0.0021560449466103154491 | 2.4 × 10⁻¹⁵ | 1.12 × 10⁻¹² | 0.655 |
| 0.06 | 0.003234067028410408468 | 0.0032340670284064571408 | 3.9 × 10⁻¹⁵ | 1.22 × 10⁻¹² | 0.811 |
| 0.08 | 0.0043120886404116027904 | 0.0043120886404125186434 | 9.1 × 10⁻¹⁶ | 2.12 × 10⁻¹³ | 0.936 |
| 0.1 | 0.0053901096260116659256 | 0.0053901096260170153463 | 5.3 × 10⁻¹⁵ | 9.92 × 10⁻¹³ | 1.092 |
| 0.3 | 0.016170250578558993341 | 0.016170250578577492240 | 1.8 × 10⁻¹⁴ | 1.14 × 10⁻¹² | 1.139 |

Table 9: Numerical results for Example 5.2 with η = 7.5

| τ | ES | AS | AE | RE | CPU time (s) |
|---|----|----|----|----|--------------|
| 0.02 | 0.0000884934721388573766 | 0.0000884934721392853003 | 4.2 × 10⁻¹⁶ | 4.8 × 10⁻¹² | 0.827 |
| 0.04 | 0.0001769869441910896592 | 0.0001769869441950110056 | 3.9 × 10⁻¹⁵ | 2.2 × 10⁻¹¹ | 0.749 |
| 0.06 | 0.0002654804160700717544 | 0.0002654804160796786727 | 9.6 × 10⁻¹⁵ | 3.6 × 10⁻¹¹ | 0.889 |
| 0.08 | 0.0003539738876891785695 | 0.0003539738876799533295 | 9.2 × 10⁻¹⁵ | 2.6 × 10⁻¹¹ | 0.967 |
| 0.1 | 0.0004424673589617850136 | 0.0004424673589703796322 | 8.5 × 10⁻¹⁵ | 1.9 × 10⁻¹¹ | 0.812 |
| 0.3 | 0.0013274020335728111501 | 0.0013274020333220852196 | 2.5 × 10⁻¹³ | 1.8 × 10⁻¹⁰ | 1.326 |

Table 10: Numerical results for Example 5.2 with η = 10.0

| τ | ES | AS | AE | RE | CPU time (s) |
|---|----|----|----|----|--------------|
| 0.02 | 0.00000726398874701739435 | 0.0000072639887470122372 | 5.1 × 10⁻¹⁸ | 7.0 × 10⁻¹³ | 0.874 |
| 0.04 | 0.00001452797749398687768 | 0.0000145279774939233104 | 6.3 × 10⁻¹⁷ | 4.3 × 10⁻¹² | 0.936 |
| 0.06 | 0.00002179196624086053894 | 0.0000217919662408353413 | 2.5 × 10⁻¹⁷ | 1.1 × 10⁻¹² | 0.998 |
| 0.08 | 0.00002905595498759046712 | 0.0000290559549867947262 | 7.9 × 10⁻¹⁶ | 2.7 × 10⁻¹¹ | 1.170 |
| 0.1 | 0.00003631994373412875118 | 0.000036319943734399877 | 2.7 × 10⁻¹⁶ | 7.4 × 10⁻¹² | 0.858 |
| 0.3 | 0.00010895983117843073892 | 0.0001089598311794268356 | 9.9 × 10⁻¹⁶ | 9.1 × 10⁻¹² | 0.889 |

Table 11: Comparison of absolute errors for Example 5.2 with η = 2.5 and η = 5.0

| τ | AE RKM (η = 2.5) | AE MHPM (η = 2.5) [29] | AE ADM (η = 2.5) [29] | AE RKM (η = 5.0) | AE MHPM (η = 5.0) [29] | AE ADM (η = 5.0) [29] |
|---|------------------|------------------------|-----------------------|------------------|------------------------|-----------------------|
| 0.02 | 9.15 × 10⁻¹⁷ | 9.25104 × 10⁻⁸ | 9.25104 × 10⁻⁸ | 4.3 × 10⁻¹⁶ | 5.22002 × 10⁻¹¹ | 5.22002 × 10⁻¹¹ |
| 0.04 | 1.03 × 10⁻¹⁷ | 7.40084 × 10⁻⁷ | 7.40084 × 10⁻⁷ | 2.4 × 10⁻¹⁵ | 4.17602 × 10⁻¹⁰ | 4.17602 × 10⁻¹⁰ |
| 0.06 | 2.0 × 10⁻¹⁷ | 2.49778 × 10⁻⁶ | 2.49778 × 10⁻⁶ | 3.9 × 10⁻¹⁵ | 1.40941 × 10⁻⁹ | 1.40941 × 10⁻⁹ |
| 0.08 | 1.07 × 10⁻¹⁶ | 5.92068 × 10⁻⁶ | 5.92068 × 10⁻⁶ | 9.1 × 10⁻¹⁶ | 3.34082 × 10⁻⁹ | 3.34082 × 10⁻⁹ |
| 0.1 | 2.16 × 10⁻¹⁶ | 1.15638 × 10⁻⁵ | 1.15638 × 10⁻⁵ | 5.3 × 10⁻¹⁵ | 6.52506 × 10⁻⁹ | 6.52506 × 10⁻⁹ |
| 0.3 | 2.9 × 10⁻¹⁶ | 3.12304 × 10⁻⁴ | 3.12304 × 10⁻⁴ | 1.8 × 10⁻¹⁴ | 1.76230 × 10⁻⁷ | 1.76230 × 10⁻⁷ |

Table 12: Comparison of absolute errors for Example 5.2 with η = 7.5 and η = 10.0

| τ | AE RKM (η = 7.5) | AE MHPM (η = 7.5) [29] | AE ADM (η = 7.5) [29] | AE RKM (η = 10.0) | AE MHPM (η = 10.0) [29] | AE ADM (η = 10.0) [29] |
|---|------------------|------------------------|-----------------------|-------------------|-------------------------|------------------------|
| 0.02 | 4.2 × 10⁻¹⁶ | 2.88750 × 10⁻¹⁴ | 2.88750 × 10⁻¹⁴ | 5.1 × 10⁻¹⁸ | 1.59700 × 10⁻¹⁷ | 1.59700 × 10⁻¹⁷ |
| 0.04 | 3.9 × 10⁻¹⁵ | 2.31000 × 10⁻¹³ | 2.31000 × 10⁻¹³ | 6.3 × 10⁻¹⁷ | 1.27763 × 10⁻¹⁶ | 1.27763 × 10⁻¹⁶ |
| 0.06 | 9.6 × 10⁻¹⁵ | 7.79626 × 10⁻¹³ | 7.79626 × 10⁻¹³ | 2.5 × 10⁻¹⁷ | 4.31201 × 10⁻¹⁶ | 4.31201 × 10⁻¹⁶ |
| 0.08 | 9.2 × 10⁻¹⁵ | 1.84800 × 10⁻¹² | 1.84800 × 10⁻¹² | 7.9 × 10⁻¹⁶ | 1.02210 × 10⁻¹⁵ | 1.02210 × 10⁻¹⁵ |
| 0.1 | 8.5 × 10⁻¹⁵ | 3.60939 × 10⁻¹² | 3.60939 × 10⁻¹² | 2.7 × 10⁻¹⁶ | 1.99629 × 10⁻¹⁵ | 1.99629 × 10⁻¹⁵ |
| 0.3 | 2.5 × 10⁻¹³ | 9.74833 × 10⁻¹¹ | 9.74833 × 10⁻¹¹ | 9.9 × 10⁻¹⁶ | 5.39165 × 10⁻¹⁴ | 5.39165 × 10⁻¹⁴ |

Table 13: Numerical results for Example 5.2 with η = 0.06

| τ | ES | AS | AE | RE | CPU time (s) |
|---|----|----|----|----|--------------|
| 0.02 | 0.07984560896434381352 | 0.079845608964343853426 | 3.99 × 10⁻¹⁷ | 4.99 × 10⁻¹⁶ | 0.889 |
| 0.04 | 0.15962763841261813303 | 0.15962763841261815794 | 2.49 × 10⁻¹⁷ | 1.56 × 10⁻¹⁶ | 1.232 |
| 0.06 | 0.23928281206851416623 | 0.23928281206851423765 | 7.14 × 10⁻¹⁷ | 2.98 × 10⁻¹⁶ | 0.920 |
| 0.08 | 0.31874845652859878735 | 0.31874845652859888367 | 9.63 × 10⁻¹⁷ | 3.02 × 10⁻¹⁶ | 0.904 |
| 0.1 | 0.39796279376194770105 | 0.39796279376194771082 | 9.77 × 10⁻¹⁸ | 2.45 × 10⁻¹⁷ | 0.858 |

Table 14: Numerical results for Example 5.2 with η = 0.08

| τ | ES | AS | AE | RE | CPU time (s) |
|---|----|----|----|----|--------------|
| 0.02 | 0.079734118548588251664 | 0.079734118548588199339 | 5.23 × 10⁻¹⁷ | 6.56 × 10⁻¹⁶ | 0.811 |
| 0.04 | 0.15940492340173257962 | 0.15940492340173286314 | 2.83 × 10⁻¹⁶ | 1.77 × 10⁻¹⁵ | 0.796 |
| 0.06 | 0.23894940199533129661 | 0.23894940199533120491 | 9.17 × 10⁻¹⁷ | 3.83 × 10⁻¹⁶ | 0.889 |
| 0.08 | 0.31830514045634341908 | 0.31830514045634405770 | 6.38 × 10⁻¹⁶ | 2.0 × 10⁻¹⁵ | 0.982 |
| 0.1 | 0.39741061409807554739 | 0.39741061409807569743 | 1.5 × 10⁻¹⁶ | 3.77 × 10⁻¹⁶ | 0.811 |

Table 15: Numerical results for Example 5.2 with η = 0.1

| τ | ES | AS | AE | RE | CPU time (s) |
|---|----|----|----|----|--------------|
| 0.02 | 0.079591154289758679228 | 0.079591154289759560107 | 8.8 × 10⁻¹⁶ | 1.1 × 10⁻¹⁴ | 0.874 |
| 0.04 | 0.15911933466140392832 | 0.15911933466140305331 | 8.75 × 10⁻¹⁶ | 5.49 × 10⁻¹⁵ | 0.874 |
| 0.06 | 0.23852186564169922344 | 0.23852186564169936963 | 1.46 × 10⁻¹⁶ | 6.12 × 10⁻¹⁶ | 0.920 |
| 0.08 | 0.31773666512002042948 | 0.31773666512002071579 | 2.86 × 10⁻¹⁶ | 9.01 × 10⁻¹⁶ | 1.295 |
| 0.1 | 0.396702532289366215 | 0.39670253228936612 | 9.43 × 10⁻¹⁷ | 2.37 × 10⁻¹⁶ | 0.874 |

Table 16: Comparison of absolute errors for Example 5.2

| τ | AE RKM (η = 0.06) | AE MDM (η = 0.06) [30] | AE RKM (η = 0.08) | AE MDM (η = 0.08) [30] | AE RKM (η = 0.1) | AE MDM (η = 0.1) [30] |
|---|-------------------|------------------------|-------------------|------------------------|------------------|-----------------------|
| 0.02 | 3.99 × 10⁻¹⁷ | 2.22045 × 10⁻¹⁶ | 5.23 × 10⁻¹⁷ | 4.49640 × 10⁻¹⁵ | 8.8 × 10⁻¹⁶ | 4.47420 × 10⁻¹⁴ |
| 0.04 | 2.49 × 10⁻¹⁷ | 2.22045 × 10⁻¹⁶ | 2.83 × 10⁻¹⁶ | 4.44089 × 10⁻¹⁵ | 8.75 × 10⁻¹⁶ | 4.44644 × 10⁻¹⁴ |
| 0.06 | 7.14 × 10⁻¹⁷ | 1.94289 × 10⁻¹⁶ | 9.17 × 10⁻¹⁷ | 4.38538 × 10⁻¹⁵ | 1.46 × 10⁻¹⁶ | 4.41314 × 10⁻¹⁴ |
| 0.08 | 9.63 × 10⁻¹⁷ | 1.94289 × 10⁻¹⁶ | 6.38 × 10⁻¹⁶ | 4.38538 × 10⁻¹⁵ | 2.86 × 10⁻¹⁶ | 4.36318 × 10⁻¹⁴ |
| 0.1 | 9.77 × 10⁻¹⁸ | 1.94289 × 10⁻¹⁶ | 1.5 × 10⁻¹⁶ | 4.32987 × 10⁻¹⁵ | 9.43 × 10⁻¹⁷ | 4.29656 × 10⁻¹⁴ |

Remark

In Tables 1-16, we abbreviate the exact solution and the approximate solution by ES and AS, respectively. AE stands for the absolute error, that is, the absolute value of the difference between the exact and approximate solutions, whereas RE denotes the relative error, that is, the absolute error divided by the absolute value of the exact solution.
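These conventions are one-liners; as a concrete illustration (the function name is ours), the first row of Table 2 reproduces the tabulated AE and RE:

```python
import math

def errors(es, asol):
    """AE = |ES - AS|; RE = AE / |ES| (the conventions used in the tables)."""
    ae = abs(es - asol)
    return ae, ae / abs(es)

# Row of Table 2 at eta = -0.80, tau = 1 (absolute values of ES and AS):
ae, re = errors(0.5877852522, 0.5877854278)
assert math.isclose(ae, 1.756e-7, rel_tol=1e-6)
assert math.isclose(re, 2.987485639e-7, rel_tol=1e-6)
```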

6 Conclusion

Linear and nonlinear SG equations were investigated by RKM in this work. Homogenizing the initial and boundary conditions is crucial for this method, and we gave a general transformation for this purpose that will be useful for researchers who study RKM. We obtained very accurate numerical results and presented them in tables and figures. The computational results confirm the efficiency, reliability, and accuracy of the method, which is easy to apply. RKM produces a rapidly convergent series with easily computable components using symbolic computation software, and it achieves these results with little computational work and time.

Declarations

Acknowledgements

This paper is a part of the PhD thesis of the first author. The authors would like to thank the referees for their very useful comments and remarks.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Mathematics, Art and Science Faculty, Siirt University, Siirt, 56100, Turkey
(2)
Department of Mathematics, Science Faculty, Fırat University, Elazığ, 23119, Turkey
(3)
Department of Mathematics and Institute for Mathematical Research, University Putra Malaysia, Selangor, 43400, Malaysia
(4)
Department of Mathematics and Computer Sciences, Faculty of Arts and Sciences, Çankaya University, Ankara, 0630, Turkey
(5)
Department of Mathematics, Institute of Space Sciences, Magurele, Bucharest, Romania

References

1. Polyanin, AD, Zaitsev, VF: Handbook of Nonlinear Partial Differential Equations. Chapman & Hall/CRC, Boca Raton (2004). ISBN 1-58488-355-3
2. Rajaraman, R: Solitons and Instantons: An Introduction to Solitons and Instantons in Quantum Field Theory. North-Holland, Amsterdam (1982). ISBN 0-444-86229-3
3. Mohebbi, A, Dehghan, M: High-order solution of one-dimensional sine-Gordon equation using compact finite difference and DIRKN methods. Math. Comput. Model. 51(5-6), 537-549 (2010)
4. Scott, A: Nonlinear Science: Emergence and Dynamics of Coherent Structures, 2nd edn. Oxford Texts in Applied and Engineering Mathematics, vol. 8. Oxford University Press, Oxford (2003). ISBN 0-19-852852-3
5. Dauxois, T, Peyrard, M: Physics of Solitons. Cambridge University Press, Cambridge (2006)
6. Dehghan, M, Shokri, A: A numerical method for one-dimensional nonlinear sine-Gordon equation using collocation and radial basis functions. Numer. Methods Partial Differ. Equ. 24(2), 687-698 (2008)
7. Dehghan, M, Mirzaei, D: The boundary integral equation approach for numerical solution of the one-dimensional sine-Gordon equation. Numer. Methods Partial Differ. Equ. 24(6), 1405-1415 (2008)
8. Bratsos, AG: A third order numerical scheme for the two-dimensional sine-Gordon equation. Math. Comput. Simul. 76(4), 271-282 (2007)
9. Bratsos, AG: A numerical method for the one-dimensional sine-Gordon equation. Numer. Methods Partial Differ. Equ. 24(3), 833-844 (2008)
10. Dehghan, M, Shokri, A: A numerical method for solution of the two-dimensional sine-Gordon equation using the radial basis functions. Math. Comput. Simul. 79(3), 700-715 (2008)
11. Lakestani, M, Dehghan, M: Collocation and finite difference-collocation methods for the solution of nonlinear Klein-Gordon equation. Comput. Phys. Commun. 181(8), 1392-1401 (2010)
12. Dehghan, M, Fakhar-Izadi, F: The spectral collocation method with three different bases for solving a nonlinear partial differential equation arising in modeling of nonlinear waves. Math. Comput. Model. 53(9-10), 1865-1877 (2011)
13. Ahmed, H, Ahmed, A: Chebyshev collocation spectral method for solving the RLW equation. Int. J. Nonlinear Sci. 7(2), 131-142 (2009)
14. Ma, ML, Wu, ZM: A numerical method for one-dimensional nonlinear sine-Gordon equation using multiquadric quasi-interpolation. Chin. Phys. B 18, 3099-3103 (2009)
15. Aronszajn, N: Theory of reproducing kernels. Trans. Am. Math. Soc. 68, 337-404 (1950)
16. Cui, M, Lin, Y: Nonlinear Numerical Analysis in the Reproducing Kernel Space. Nova Science Publishers, New York (2009). ISBN 978-1-60456-468-6
17. Geng, F, Cui, M, Zhang, B: Method for solving nonlinear initial value problems by combining homotopy perturbation and reproducing kernel Hilbert space methods. Nonlinear Anal., Real World Appl. 11, 637-644 (2010)
18. Mohammadi, M, Mokhtari, R: Solving the generalized regularized long wave equation on the basis of a reproducing kernel space. J. Comput. Appl. Math. 235, 4003-4014 (2011)
19. Jiang, W, Lin, Y: Representation of exact solution for the time-fractional telegraph equation in the reproducing kernel space. Commun. Nonlinear Sci. Numer. Simul. 16, 3639-3645 (2011)
20. Wang, Y, Su, L, Cao, X, Li, X: Using reproducing kernel for solving a class of singularly perturbed problems. Comput. Math. Appl. 61, 421-430 (2011)
21. Wu, B, Li, XY: A new algorithm for a class of linear nonlocal boundary value problems based on the reproducing kernel method. Appl. Math. Lett. 24, 156-159 (2011)
22. Yao, H, Lin, Y: New algorithm for solving a nonlinear hyperbolic telegraph equation with an integral condition. Int. J. Numer. Methods Biomed. Eng. 27, 1558-1568 (2011)
23. Inc, M, Akgül, A: The reproducing kernel Hilbert space method for solving Troesch's problem. J. Assoc. Arab Univ. Basic Appl. Sci. 14, 19-27 (2013)
24. Inc, M, Akgül, A, Geng, F: Reproducing kernel Hilbert space method for solving Bratu's problem. Bull. Malays. Math. Sci. Soc. 38(1), 271-287 (2015)
25. Inc, M, Akgül, A, Kilicman, A: Explicit solution of telegraph equation based on reproducing kernel method. J. Funct. Spaces Appl. 2012, Article ID 984682 (2012)
26. Zhang, S, Lei, L, LuHong, D: Reproducing kernel functions represented by form of polynomials. In: Proceedings of the Second Symposium International Computer Science and Computational Technology (ISCSCT '09), Huangshan, P.R. China, 26-28 December, pp. 353-358 (2009)
27. Sari, M, Gürarslan, G: A sixth-order compact finite difference method for the one-dimensional sine-Gordon equation. Int. J. Numer. Methods Biomed. Eng. 27(7), 1126-1138 (2011)
28. Jiang, ZW, Wang, RH: Numerical solution of one-dimensional sine-Gordon equation using high accuracy multiquadric quasi-interpolation. Appl. Math. Comput. 218(15), 7711-7716 (2012)
29. Lu, J: An analytic approach to the sine-Gordon equation using the modified homotopy perturbation method. Comput. Math. Appl. 58, 2313-2319 (2009)
30. Kaya, D: A numerical solution of the sine-Gordon equation using the modified decomposition method. Appl. Math. Comput. 143, 309-317 (2003)
