A new approach for one-dimensional sine-Gordon equation

Abstract

In this work, we use a reproducing kernel method for investigating the sine-Gordon equation with initial and boundary conditions. Numerical experiments are presented to show the efficiency of the technique. The obtained results are compared with the exact solutions and with results obtained by other methods. These results indicate that the reproducing kernel method is very effective.

1 Introduction

The nonlinear one-dimensional sine-Gordon (SG) equation first appeared in differential geometry and has attracted much attention because of the collisional behavior of the solitons that arise from it. Numerical solutions of the SG equation have been widely investigated in recent years [1-5]. Compact finite difference and diagonally implicit Runge-Kutta-Nyström (DIRKN) methods were used in [3]. The authors of [6] introduced a numerical method for solving the SG equation by using collocation and radial basis functions. The boundary integral equation technique is presented in [7]. Bratsos [8, 9] developed a numerical technique for solving the one-dimensional SG equation and a third-order numerical technique for the two-dimensional SG equation. A numerical technique using radial basis functions for the solution of the two-dimensional SG equation is presented in [10]. Spectral and Fourier pseudospectral techniques based on discrete Fourier series and Chebyshev orthogonal polynomials have been proposed for solving nonlinear wave equations [11-13]. Ma and Wu [14] used a meshless technique based on multiquadric (MQ) quasi-interpolation. In this paper, we investigate the one-dimensional nonlinear sine-Gordon equation

$$ \frac{\partial^{2}u}{\partial\tau^{2}}(\eta,\tau)=\frac{\partial ^{2}u}{\partial\eta^{2}}(\eta,\tau)-\sin \bigl(u(\eta,\tau) \bigr),\quad 0\leq \eta\leq1, \tau\geq0, $$
(1)

with initial conditions

$$ \begin{aligned} &u(\eta,0)=f(\eta),\quad 0\leq\eta\leq1, \\ &\frac{\partial u}{\partial\tau}(\eta,0)=g(\eta),\quad 0\leq \eta\leq1, \end{aligned} $$
(2)

and boundary conditions

$$ u(0,\tau)=h_{1}(\tau),\qquad u(1,\tau)=h_{2}( \tau),\quad \tau \geq0, $$
(3)

by using the reproducing kernel method (RKM). The method yields numerical results in a very short time, and nonlinear problems can be treated as easily as linear ones. Reproducing kernel functions are central to the accuracy of the numerical results: by changing the inner products of the spaces, different reproducing kernel functions can be obtained to improve the results. These are advantages of this method. Homogenizing the initial and boundary conditions is essential for this method, and we give a general transformation to homogenize the initial and boundary conditions in Section 3.

The theory of reproducing kernels [15] was used for the first time at the beginning of the 20th century by Zaremba. Reproducing kernel theory has important applications in numerical analysis, differential equations, and probability and statistics [16-25], and many authors have used the method in a variety of scientific applications. Reproducing kernel functions can be represented by piecewise polynomials, and the higher the order of the derivatives involved, the simpler the expressions of the reproducing kernel functions. Such expressions are the simplest from the computational viewpoint, and the speed and accuracy can be significantly improved in scientific and engineering applications. Experimental results indicate that such reproducing kernel functions are very promising [26].

This work is arranged as follows. Section 2 introduces several useful reproducing kernel functions. A representation of the solution in \(W_{2}^{(3,3)} ( \Omega ) \) is given in Section 3. Section 4 presents the main results: the exact and approximate solutions of (1)-(3), the implementation of the method in the reproducing kernel space, and the convergence of the approximate solution. Some numerical examples are discussed in Section 5. Conclusions are given in the final section.

2 Reproducing kernel functions

We obtain some useful reproducing kernel functions in this section.

Definition 1

[16]

Let E be a nonempty set. A function \(K:E\times E\longrightarrow C\) is called a reproducing kernel function of the Hilbert space H if

  (a)

    \(\forall\tau\in E\), \(K ( \cdot,\tau ) \in H\),

  (b)

    \(\forall\tau\in E\), \(\forall\varphi\in H\), \(\langle\varphi ( \cdot ) ,K ( \cdot,\tau ) \rangle=\varphi ( \tau ) \).

Definition 2

[16]

A Hilbert space H defined on a nonempty set E is called a reproducing kernel Hilbert space if there exists a reproducing kernel function \(K(\eta,\tau)\).

Definition 3

[16]

We define the space \(W_{2}^{3}[0,1]\) by

$$ W_{2}^{3}[0,1]= \left \{ \textstyle\begin{array}{@{}c@{}} u\mid u,u^{\prime},u^{\prime\prime}\mbox{ are absolutely continuous real-valued functions in }[0,1], \\ u^{ ( 3 ) }\in L^{2}[0,1], \eta\in [0,1], u(0)=0, u(1)=0 \end{array}\displaystyle \right \}. $$

The inner product and the norm in \(W_{2}^{3}[0,1]\) are defined respectively by

$$ \langle u,v \rangle_{{ W}_{{ 2}}^{{ 3}}}=u(0)v(0)+u^{\prime}(0)v^{\prime}(0)+u^{\prime}(1)v^{\prime }(1)+ \int_{0}^{1}u^{(3)}(\eta)v^{(3)}( \eta)\,d\eta, \quad u,v\in W_{2}^{3}[0,1], $$

and

$$ \Vert u\Vert _{{ W}_{{ 2}}^{{ 3}}}=\sqrt{ \langle u,u \rangle_{W_{2}^{3}}},\quad u\in W_{2}^{3}[0,1]. $$

Definition 4

[16]

We define the space \(F_{2}^{3}[0,T]\) by

$$ F_{2}^{3}[0,T]= \left \{ \textstyle\begin{array}{@{}c@{}} u\mid u,u^{\prime},u^{\prime\prime}\mbox{ are absolutely continuous in }[0,T], \\ u^{(3)}\in L^{2}[0,T], \tau\in{}[0,T], u(0)=0, u^{\prime}(0)=0\end{array}\displaystyle \right \} $$

with the inner product and norm

$$ \langle u,v \rangle_{{{F}_{{ 2}}^{{ 3}}}}=\sum_{i=0}^{2}u^{(i)}(0)v^{(i)}(0)+ \int_{0}^{T}u^{(3)}(\tau)v^{(3)}(\tau)\,d\tau,\quad u,v\in F_{2}^{3}[0,T], $$

and

$$ \Vert u\Vert _{F_{2}^{3}}=\sqrt{ \langle u,u \rangle_{{{F}_{{ 2}}^{{ 3}}}}}, \quad u \in F_{2}^{3}[0,T]. $$

The space \(F_{2}^{3}[0,T]\) is a reproducing kernel space, and its reproducing kernel function \({r}_{s}\) is given by

$$ r_{s} ( \tau ) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \frac{1}{4}s^{2}\tau^{2}+\frac{1}{12}s^{2}\tau^{3}-\frac{1}{24}s\tau ^{4}+\frac{1}{120}\tau^{5}, &\tau\leq s, \\ \frac{1}{4}s^{2}\tau^{2}+\frac{1}{12}s^{3}\tau^{2}-\frac{1}{24}\tau s^{4}+\frac{1}{120}s^{5},& \tau>s.\end{array}\displaystyle \right . $$
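
As an illustrative check, the reproducing property \(\langle u,r_{s}\rangle_{F_{2}^{3}}=u(s)\) can be verified symbolically. A minimal Python/SymPy sketch, with the illustrative choices \(T=1\), \(s=2/5\), and the test function \(u(\tau)=\tau^{2}+\tau^{3}\) (which satisfies \(u(0)=u^{\prime}(0)=0\)), is the following; the particular values and names are ours, chosen only for the check.

import sympy as sp

tau = sp.symbols('tau', nonnegative=True)
T, s = 1, sp.Rational(2, 5)            # illustrative choices with 0 <= s <= T

# the two branches of the reproducing kernel r_s of F_2^3[0,T]
r_low  = (sp.Rational(1, 4)*s**2*tau**2 + sp.Rational(1, 12)*s**2*tau**3
          - sp.Rational(1, 24)*s*tau**4 + sp.Rational(1, 120)*tau**5)
r_high = (sp.Rational(1, 4)*s**2*tau**2 + sp.Rational(1, 12)*s**3*tau**2
          - sp.Rational(1, 24)*tau*s**4 + sp.Rational(1, 120)*s**5)
r = sp.Piecewise((r_low, tau <= s), (r_high, True))

u = tau**2 + tau**3                    # test function with u(0) = u'(0) = 0

# <u, r_s>_{F_2^3} = sum_i u^(i)(0) r_s^(i)(0) + int_0^T u'''(tau) r_s'''(tau) dtau
inner = sum(sp.diff(u, tau, i).subs(tau, 0)*sp.diff(r, tau, i).subs(tau, 0)
            for i in range(3))
inner += sp.integrate(sp.diff(u, tau, 3)*sp.diff(r, tau, 3), (tau, 0, T))

print(sp.simplify(inner - u.subs(tau, s)))   # expected output: 0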

Definition 5

[16]

We define the space \(H_{2}^{1}[0,T]\) by

$$ H_{2}^{1}[0,T]=\left \{ \textstyle\begin{array}{@{}c@{}} u\mid u\mbox{ is absolutely continuous in }[0,T], \\ u^{\prime}\in L^{2}[0,T], \tau\in [0,T]\end{array}\displaystyle \right \} $$

with the inner product and norm

$$ \langle u,v \rangle_{{ H}_{{ 2}}^{{ 1}}}=u(0)v(0)+ \int_{0}^{T}u^{\prime}(\tau)v^{\prime}(\tau)\,d\tau ,\quad u,v\in H_{2}^{1}[0,T], $$

and

$$ \Vert u\Vert _{{ H}_{{ 2}}^{{ 1}}}=\sqrt{ \langle u,u \rangle_{{{ H}_{{ 2}}^{{ 1}}}}},\quad u\in H_{2}^{1}[0,T]. $$

Its reproducing kernel function \({ q}_{s}\) is

$$ q_{s}(\tau)=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} 1+\tau,& \tau\leq s, \\ 1+s,& \tau>s.\end{array}\displaystyle \right . $$
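
Indeed, for any \(u\in H_{2}^{1}[0,T]\) and \(0\leq s\leq T\), we have \(q_{s}^{\prime}(\tau)=1\) for \(\tau< s\) and \(q_{s}^{\prime}(\tau)=0\) for \(\tau>s\), so that

$$ \langle u,q_{s} \rangle_{H_{2}^{1}}=u(0)q_{s}(0)+ \int_{0}^{T}u^{\prime}(\tau)q_{s}^{\prime}(\tau)\,d\tau =u(0)+ \int_{0}^{s}u^{\prime}(\tau)\,d\tau=u(s), $$

which is exactly the reproducing property.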

Definition 6

[16]

We define the space \(G_{2}^{1}[0,1]\) by

$$ G_{2}^{1}[0,1]=\left \{ \textstyle\begin{array}{@{}c@{}} u\mid u\mbox{ is absolutely continuous in }[0,1], \\ u^{\prime}\in L^{2}[0,1], \eta\in[0,1]\end{array}\displaystyle \right \} $$

with the inner product and norm

$$ \langle u,v \rangle_{{{ G}_{{ 2}}^{{ 1}}}}=u(0)v(0)+ \int_{0}^{1}u^{\prime}(\eta)v^{\prime}( \eta)\,d\eta,\quad u,v\in G_{2}^{1}[0,1], $$

and

$$ \Vert u\Vert _{{G}_{{ 2}}^{{ 1}}}=\sqrt{ \langle u,u \rangle_{{ G}_{{ 2}}^{{ 1}}}},\quad u\in G_{2}^{1}[0,1]. $$

The space \(G_{2}^{1}[0,1]\) is a reproducing kernel space, and its reproducing kernel function \(Q_{y}\) is given by

$$ { Q}_{y}{ (\eta)}=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} 1+\eta,& \eta\leq y, \\ 1+y, &\eta>y.\end{array}\displaystyle \right . $$

Theorem 2.1

The reproducing kernel function \(R_{y}\) of \(W_{2}^{3} [ 0,1 ] \) is

$$ R_{y} ( \eta ) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \sum_{i=1}^{6}c_{i} ( y ) \eta^{i-1},& \eta\leq y,\\ \sum_{i=1}^{6}d_{i} ( y ) \eta^{i-1},& \eta>y,\end{array}\displaystyle \right . $$
(4)

where

$$\begin{aligned}& c_{1}(y)=0, \qquad{ c}_{4}{ (y)=0}, \qquad d_{1}(y)=\frac{1}{120}y^{5},\qquad { d}_{4}{ (y)=-}\frac {1}{12}y^{2}, \\& { c}_{2}{ (y)=-}\frac{1}{122}y^{5}+ \frac {5}{244}y^{4}-\frac{127}{244}y^{2}+ \frac{31}{61}y, \\& { c}_{3}{ (y)=-}\frac{1}{2{,}928}y^{5}+ \frac {127}{5{,}856}y^{4}-\frac{1}{12}y^{3}+ \frac{1{,}137}{1{,}952}y^{2}-\frac{127}{244}y, \\& { c}_{5}{ (y)=}\frac{1}{2{,}928}y^{5}- \frac {5}{5{,}856}y^{4}+\frac{127}{5{,}856}y^{2}- \frac{31}{1{,}464}y, \\& { c}_{6}{ (y)=-}\frac{1}{7{,}320}y^{5}+\frac{1}{2{,}928} y^{4}-\frac{1}{2{,}928}y^{2}-\frac{1}{122}y+ \frac{1}{120}, \\& { d}_{2}{ (y)=-}\frac{1}{122}y^{5}-\frac{31}{1{,}464} y^{4}-\frac{127}{244}y^{2}+\frac{31}{61}y, \\& { d}_{3}{ (y)=-}\frac{1}{2{,}928}y^{5}+ \frac {127}{5{,}856}y^{4}+\frac{1{,}137}{1{,}952}y^{2}- \frac{127}{244}y, \\& { d}_{5}{ (y)=}\frac{1}{2{,}928}y^{5}- \frac {5}{5{,}856}y^{4}+\frac{127}{5{,}856}y^{2}+ \frac{5}{244}y, \\& { d}_{6}{ (y)=-}\frac{1}{7{,}320}y^{5}+\frac{1}{2{,}928} y^{4}-\frac{1}{2{,}928}y^{2}-\frac{1}{122}y. \end{aligned}$$

Proof

Let \(u\in W_{2}^{3}[0,1]\) and \(0\leq y\leq1\). Define \(R_{y}\) by (4). Note that

$$\begin{aligned}& R_{y}^{\prime} ( \eta ) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \sum_{i=1}^{5}ic_{i+1} ( y ) \eta^{i-1}, &\eta< y, \\ \sum_{i=1}^{5}id_{i+1} ( y ) \eta^{i-1}, &\eta>y,\end{array}\displaystyle \right . \\& R_{y}^{\prime\prime} ( \eta ) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \sum_{i=1}^{4}i(i+1)c_{i+2} ( y ) \eta^{i-1},& \eta< y, \\ \sum_{i=1}^{4}i(i+1)d_{i+2} ( y ) \eta^{i-1},& \eta>y,\end{array}\displaystyle \right . \\& R_{y}^{(3)} ( \eta ) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \sum_{i=1}^{3}i(i+1)(i+2)c_{i+3} ( y ) \eta^{i-1},& \eta< y,\\ \sum_{i=1}^{3}i(i+1)(i+2)d_{i+3} ( y ) \eta^{i-1},& \eta >y,\end{array}\displaystyle \right . \\& R_{y}^{(4)} ( \eta ) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \sum_{i=1}^{2}i(i+1)(i+2)(i+3)c_{i+4} ( y ) \eta^{i-1},& \eta < y, \\ \sum_{i=1}^{2}i(i+1)(i+2)(i+3)d_{i+4} ( y ) \eta^{i-1}, & \eta>y,\end{array}\displaystyle \right . \end{aligned}$$

and

$$ R_{y}^{(5)} ( \eta ) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} 120c_{6}(y),& \eta< y, \\ 120d_{6}(y),& \eta>y.\end{array}\displaystyle \right . $$

By Definition 3 and integration by parts we have

$$\begin{aligned} \bigl\langle u(\eta),R_{y}(\eta) \bigr\rangle _{{ W}_{2}^{3}}={}&u(0)R_{y}(0)+u^{\prime}(0)R_{y}^{\prime}(0)+u^{\prime}(1)R_{y}^{\prime}(1)+u^{\prime\prime}(1)R_{y}^{(3)}(1) \\ &{}-u^{\prime\prime}(0)R_{y}^{(3)}(0)-u^{\prime}(1)R_{y}^{(4)}(1)+u^{\prime}(0)R_{y}^{(4)}(0)+ \int_{0}^{1}u^{\prime}(\eta)R_{y}^{(5)}(\eta)\,d\eta \\ ={}&u^{\prime}(0) \bigl(R_{y}^{\prime}(0)+R_{y}^{(4)}(0) \bigr)+u^{\prime}(1) \bigl(R_{y}^{\prime}(1)-R_{y}^{(4)}(1) \bigr) \\ &{}+u^{\prime\prime}(1)R_{y}^{(3)}(1)-u^{\prime\prime}(0)R_{y}^{(3)}(0) \\ &{}+ \int_{0}^{y}R_{y}^{(5)}(\eta)u^{\prime}(\eta)\,d\eta+ \int_{y}^{1}R_{y}^{(5)}(\eta)u^{\prime}(\eta)\,d\eta \\ ={}&u^{\prime}(0) \bigl(c_{2}(y)+24c_{5}(y) \bigr)-u^{\prime\prime}(0) \bigl(6c_{4}(y) \bigr) \\ &{}+u^{\prime}(1) \bigl(d_{2}(y)+2d_{3}(y)+3d_{4}(y)-20d_{5}(y)-115d_{6}(y) \bigr) \\ &{}+u^{\prime\prime}(1) \bigl(6d_{4}(y)+24d_{5}(y)+60d_{6}(y) \bigr) \\ &{}+ \int_{0}^{y}120c_{6}(y)u^{\prime}(\eta)\,d\eta+ \int_{y}^{1}120d_{6}(y)u^{\prime}(\eta)\,d\eta \\ ={}&120c_{6}(y) \bigl(u(y)-u(0) \bigr)+120d_{6}(y) \bigl(u(1)-u(y) \bigr) \\ ={}&120 \bigl(c_{6}(y)-d_{6}(y) \bigr)u(y)=120u(y) \biggl(\frac{1}{120} \biggr)=u(y). \end{aligned}$$

Here we used that \(u(0)=u(1)=0\), that \(c_{6}(y)-d_{6}(y)=\frac{1}{120}\), and that, for the coefficients \(c_{i}(y)\) and \(d_{i}(y)\) given above, the combinations multiplying \(u^{\prime}(0)\), \(u^{\prime\prime}(0)\), \(u^{\prime}(1)\), and \(u^{\prime\prime}(1)\) vanish.

This completes the proof. □

Definition 7

[16]

We define the space \({ W}_{2}^{(3,3)} ( \Omega ) \) by

$$ { W}_{2}^{(3,3)} ( \Omega ) =\left \{ \textstyle\begin{array}{@{}c@{}} u\mid\frac{{ \partial}^{4}{ u}}{{ \partial\eta }^{2}{ \partial\tau}^{2}}\mbox{ is absolutely continuous in }\Omega =[0,1]\times{}[0,T], \\ \frac{{ \partial}^{6}{ u}}{{ \partial\eta }^{3}{ \partial\tau}^{3}}\in L^{2} ( \Omega ) , u(\eta ,0)=0, \frac{{ \partial u(\eta,0)}}{{ \partial\tau }}=0, u(0,\tau)=0, u(1,\tau)=0\end{array}\displaystyle \right \} $$

with the inner product and norm

$$\begin{aligned} \langle u,v \rangle_{{{ W}_{2}^{(3,3)}}} =&\sum_{i=0}^{2} \int_{0}^{T} \biggl[ \frac{\partial^{3}}{\partial \tau ^{3}} \frac{\partial^{i}}{\partial\eta^{i}}u(0,\tau)\frac{\partial ^{3}}{\partial\tau^{3}}\frac{\partial^{i}}{\partial\eta^{i}}v(0,\tau ) \biggr]\,d\tau \\ &{}+\sum_{j=0}^{2} \biggl\langle \frac{\partial^{j}}{\partial\tau ^{j}}u(\cdot,0),\frac{\partial^{j}}{\partial\tau^{j}}v(\cdot,0) \biggr\rangle _{{ W}_{{ 2}}^{{ 3}}} \\ &{}+ \int_{0}^{1} \int_{0}^{T} \biggl[ \frac{\partial^{3}}{\partial \eta^{3}} \frac{\partial^{3}}{\partial\tau^{3}}u(\eta,\tau)\frac{\partial ^{3}}{\partial\eta^{3}}\frac{\partial^{3}}{\partial\tau^{3}}v(\eta,\tau ) \biggr]\,d\tau \,d\eta,\quad u,v\in{ W}_{2}^{(3,3)} ( \Omega) \end{aligned}$$

and

$$ \Vert u\Vert _{W}=\sqrt{ \langle u,u \rangle _{W}}, \quad u \in{ W}_{2}^{(3,3)} ( \Omega ) . $$

Theorem 2.2

Let \(K_{ ( y,s ) } ( \eta,\tau ) \) be the reproducing kernel function of \({ W}_{2}^{(3,3)} ( \Omega ) \). We have

$$ K_{ ( y,s ) } ( \eta,\tau ) =R_{y}(\eta)r_{s}(\tau), $$

and for any \(u\in{ W}_{2}^{(3,3)} ( \Omega ) \),

$$ u(y,s)= \bigl\langle u(\eta,\tau),K_{ ( y,s ) } ( \eta ,\tau ) \bigr\rangle _{{ W}_{2}^{(3,3)}} $$

and

$$ K_{ ( y,s ) } ( \eta,\tau ) =K_{ ( \eta,\tau ) } ( y,s ) . $$

Definition 8

[16]

We define the space \(\widehat{W}_{2}^{(1,1)} ( \Omega ) \) by

$$ \widehat{W}_{2}^{(1,1)} ( \Omega ) =\left \{ \textstyle\begin{array}{@{}c@{}} u\mid u\mbox{ is absolutely continuous in }\Omega=[0,1]\times{}[ 0,T], \\ \frac{{ \partial}^{{ 2}}{u}}{{\partial\eta \partial\tau}}\in L^{2} ( \Omega ) \end{array}\displaystyle \right \} $$

with the inner product and norm

$$\begin{aligned} \langle u,v \rangle_{\widehat{W}_{2}^{(1,1)}} =& \int _{0}^{T} \biggl[ \frac{\partial}{\partial\tau}u(0,\tau)\frac{\partial }{\partial \tau}v(0,\tau) \biggr]\,d\tau+ \bigl\langle u(\cdot,0),v(\cdot,0) \bigr\rangle _{{ G}_{{ 2}}^{{ 1}}} \\ &{}+ \int_{0}^{1} \int_{0}^{T} \biggl[ \frac{\partial}{\partial\eta } \frac{\partial}{\partial\tau}u(\eta,\tau)\frac{\partial}{\partial\eta} \frac{\partial}{\partial\tau}v( \eta,\tau) \biggr]\,d\tau \,d\eta,\quad u,v\in\widehat{W}_{2}^{(1,1)} ( \Omega ) \end{aligned}$$

and

$$ \Vert u\Vert _{\widehat{W}_{2}^{(1,1)}}= \sqrt{ \langle u,u \rangle_{\widehat{W}_{2}^{(1,1)}}}, \quad u \in\widehat {W}_{2}^{(1,1)}. $$

\(\widehat{W}_{2}^{(1,1)} ( \Omega ) \) is a reproducing kernel space, and its reproducing kernel function \(G_{ ( y,s ) } ( \eta ,\tau ) \) is given as

$$ G_{ ( y,s ) } ( \eta,\tau ) =Q_{y}(\eta)q_{s}(\tau). $$

3 Solutions in \({ W}_{2}^{(3,3)}(\Omega)\)

In this section, we give the solution of (1)-(3) in the reproducing kernel space \({ W}_{2}^{(3,3)}(\Omega)\). We define the linear operator \(L:{ W}_{2}^{(3,3)}(\Omega)\rightarrow\widehat{W}_{2}^{(1,1)}(\Omega)\) as

$$ Lv=\frac{\partial^{2}v}{\partial\tau^{2}}(\eta,\tau)-\frac{\partial ^{2}v}{\partial\eta^{2}}(\eta,\tau),\quad v\in W_{2}^{(3,3)}(\Omega). $$

If we homogenize the conditions of the model problem (1)-(3), then it changes to the following problem:

$$ \left \{ \textstyle\begin{array}{@{}l} Lv=M(\eta,\tau),\quad ( \eta,\tau ) \in\Omega =[0,1]\times{}[0,T], \\ v(\eta,0)=\frac{{ \partial v}}{{ \partial\tau}}(\eta ,0)=v(0,\tau)=v(1,\tau)=0,\end{array}\displaystyle \right . $$
(5)

where

$$\begin{aligned} M{ (\eta,\tau)} =&\frac{ ( \eta-1 ) f(\eta)h_{1}^{\prime\prime}(\tau)}{h_{1}(0)}-\eta h_{2}^{\prime \prime }(\tau)- \frac{2f^{\prime}(\eta)h_{1}(\tau)}{h_{1}(0)}-\frac{ ( \eta -1 ) f^{\prime\prime}(\eta)h_{1}(\tau)}{h_{1}(0)} \\ &{}+2f^{\prime}(\eta)+\eta f^{\prime\prime}(\eta)+\tau \biggl( g^{\prime \prime}(\eta)+\frac{2g(0)f^{\prime}(\eta)}{h_{1}(0)}+\frac{ ( \eta -1 ) f^{\prime\prime}(\eta)g(0)}{h_{1}(0)} \biggr) \\ &{}-\sin \left [ \textstyle\begin{array}{@{}c@{}} v(\eta,\tau)-\frac{ ( \eta-1 ) f(\eta)h_{1}(\tau )}{h_{1}(0)}+\eta h_{2}(\tau)+\eta f(\eta)-\eta h_{2}(0) \\ +\tau ( g(\eta)+\frac{ ( \eta-1 ) f(\eta )g(0)}{h_{1}(0)}-\eta g(1) )\end{array}\displaystyle \right ] \end{aligned}$$

and

$$\begin{aligned} v(\eta,\tau) =&u(\eta,\tau)+\frac{ ( \eta-1 ) f(\eta )h_{1}(\tau)}{h_{1}(0)}-\eta h_{2}(\tau)-\eta f(\eta)+\eta h_{2}(0) \\ &{}-\tau \biggl( g(\eta)+\frac{ ( \eta-1 ) f(\eta )g(0)}{h_{1}(0)}-\eta g(1) \biggr) \end{aligned}$$

with \(h_{1}(0)\neq0\).
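
This transformation can be checked directly: provided the data satisfy the compatibility conditions \(f(0)=h_{1}(0)\), \(f(1)=h_{2}(0)\), \(g(0)=h_{1}^{\prime}(0)\), and \(g(1)=h_{2}^{\prime}(0)\) implied by (2)-(3), the function v defined above satisfies the homogeneous conditions in (5). The following is a minimal Python/SymPy sketch of this check; the particular u below is an arbitrary smooth function with \(u(0,0)\neq0\) that stands in for the unknown solution and generates compatible data.

import sympy as sp

eta, tau = sp.symbols('eta tau')

# arbitrary smooth "solution" used only to generate compatible data f, g, h1, h2
u  = sp.exp(tau)*sp.cos(eta) + eta*tau**2

f  = u.subs(tau, 0)                    # f(eta)  = u(eta, 0)
g  = sp.diff(u, tau).subs(tau, 0)      # g(eta)  = u_tau(eta, 0)
h1 = u.subs(eta, 0)                    # h1(tau) = u(0, tau)
h2 = u.subs(eta, 1)                    # h2(tau) = u(1, tau)

v = (u + (eta - 1)*f*h1/h1.subs(tau, 0) - eta*h2 - eta*f + eta*h2.subs(tau, 0)
     - tau*(g + (eta - 1)*f*g.subs(eta, 0)/h1.subs(tau, 0) - eta*g.subs(eta, 1)))

conditions = [v.subs(tau, 0),                    # v(eta, 0)
              sp.diff(v, tau).subs(tau, 0),      # v_tau(eta, 0)
              v.subs(eta, 0),                    # v(0, tau)
              v.subs(eta, 1)]                    # v(1, tau)
print([sp.simplify(c) for c in conditions])      # expected output: [0, 0, 0, 0]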

Lemma 3.1

The operator L is a bounded linear operator.

Proof

By Definition 8 we have

$$\begin{aligned} \Vert Lu\Vert _{\widehat{W}_{2}^{(1,1)}}^{2} =& \int _{0}^{T} \biggl[ \frac{\partial}{\partial\tau}Lu(0,\tau) \biggr] ^{2}\,d\tau + \bigl\langle Lu( \cdot,0),Lu(\cdot,0) \bigr\rangle _{{G}_{{ 2}}^{{ 1}}} \\ &{}+ \int_{0}^{1} \int_{0}^{T} \biggl[ \frac{\partial}{\partial\eta } \frac{\partial}{\partial\tau}Lu(\eta,\tau) \biggr] ^{2}\,d\tau \,d\eta \\ =& \int_{0}^{T} \biggl[ \frac{\partial}{\partial\tau}Lu(0,\tau ) \biggr] ^{2}\,d\tau+ \bigl[ Lu(0,0) \bigr] ^{2} \\ &{}+ \int_{0}^{1} \biggl[ \frac{\partial}{\partial\eta}Lu(\eta,0) \biggr] ^{2}\,d\eta+ \int_{0}^{1} \int_{0}^{T} \biggl[ \frac{\partial}{\partial \eta } \frac{\partial}{\partial\tau}Lu(\eta,\tau) \biggr] ^{2}\,d\tau \,d\eta. \end{aligned}$$

Since

$$\begin{aligned}& u(\eta,\tau) = \bigl\langle u(\xi,\gamma),K_{ ( { \eta ,\tau} ) } ( \xi,\gamma ) \bigr\rangle _{{ W}_{2}^{(3,3)}}, \\& Lu(\eta,\tau) = \bigl\langle u(\xi,\gamma),LK_{ ( \eta,\tau ) } ( \xi,\gamma ) \bigr\rangle _{{ W}_{2}^{(3,3)}}, \end{aligned}$$

from the continuity of \(K_{ ( { \eta,\tau} ) } ( \xi ,\gamma ) \) we have

$$ \bigl\vert Lu(\eta,\tau) \bigr\vert \leq \Vert u\Vert _{{W}_{2}^{(3,3)}} \bigl\Vert LK_{ ( \eta,\tau ) } ( \xi,\gamma ) \bigr\Vert _{{ W}_{2}^{(3,3)}}=a_{0} \Vert u\Vert _{{W}_{2}^{(3,3)}}. $$

Accordingly, for \(i=0,1\),

$$\begin{aligned}& \frac{\partial^{i}}{\partial\eta^{i}}Lu(\eta,\tau)= \biggl\langle u(\xi ,\gamma), \frac{\partial^{i}}{\partial\eta^{i}}{ LK}_{ ( { \eta,\tau} ) } ( \xi,\gamma ) \biggr\rangle _{ { W}_{2}^{(3,3)}}, \\& \frac{\partial}{\partial\tau}\frac{\partial^{i}}{\partial\eta^{i}} Lu(\eta,\tau)= \biggl\langle u( \xi,\gamma),\frac{\partial}{\partial \tau}\frac{\partial^{i}}{\partial\eta^{i}}LK_{ ( { \eta,\tau} ) } ( \xi,\gamma ) \biggr\rangle _{{ W}_{2}^{(3,3)}}, \end{aligned}$$

and then

$$\begin{aligned}& \biggl\vert \frac{\partial^{i}}{\partial\eta^{i}}Lu(\eta,\tau ) \biggr\vert \leq e_{i}\Vert u\Vert _{{ W}_{2}^{(3,3)}}, \qquad \biggl\vert \frac{\partial}{\partial\tau}\frac{\partial^{i}}{\partial \eta^{i}}Lu(\eta,\tau) \biggr\vert \leq f_{i}\Vert u\Vert _{{ W}_{2}^{(3,3)}}. \end{aligned}$$

Therefore,

$$ \bigl\Vert Lu(\eta,\tau) \bigr\Vert _{\widehat{W}_{2}^{(1,1)}}^{2}{ \leq}\sum_{i=0}^{1} \bigl( e_{i}^{2}+T f_{i}^{2} \bigr) \Vert u\Vert _{{ W}_{2}^{(3,3)}}^{2}={ A}\Vert u\Vert _{{ W}_{2}^{(3,3)}}^{2}, $$

where \(A=\sum_{i=0}^{1} ( e_{i}^{2}+T f_{i}^{2} ) \). □

Now, choose a countable dense subset \(\{ ( \eta_{1},\tau _{1} ) , ( \eta_{2},\tau_{2} ) ,\dots \} \) in Ω and define

$$ \varphi_{i}(\eta,\tau)=G_{ ( \eta_{i},\tau_{i} ) } ( \eta ,\tau ) ,\qquad \Psi_{i}(\eta,\tau)=L^{\ast}\varphi _{i}(\eta , \tau), $$

where \(L^{\ast}\) is the adjoint operator of L. The orthonormal system \(\{ \widehat{\Psi}_{i}(\eta,\tau) \} _{i=1}^{\infty}\) of \(W_{2}^{(3,3)} ( \Omega ) \) can be obtained by the Gram-Schmidt orthogonalization of \(\{ \Psi_{i}(\eta,\tau) \} _{i=1}^{\infty}\) as

$$ \widehat{\Psi}_{i}(\eta,\tau)=\sum_{k=1}^{i} \beta_{ik}\Psi_{k}(\eta ,\tau). $$

Theorem 3.2

Assume that \(\{ (\eta_{i},\tau_{i}) \} _{i=1}^{\infty}\) is dense in Ω. Then \(\{ \Psi_{i}(\eta ,\tau ) \} _{i=1}^{\infty}\) is a complete system in \({ W}_{2}^{(3,3)}(\Omega)\), and

$$ \Psi_{i}(\eta,\tau)= L_{ ( { y,s} ) } K_{({y,s} )}(\eta,\tau )| _{({y,s}) = (\eta_{i},\tau_{i})} . $$

Proof

We have

$$\begin{aligned} \Psi_{i}(\eta,\tau) =& \bigl( L^{\ast}\varphi_{i} \bigr) ( \eta ,\tau ) = \bigl\langle \bigl( L^{\ast}\varphi_{i} \bigr) ( y,s ) ,K_{ ( { \eta,} { \tau} ) } ( y,s ) \bigr\rangle _{{ W}_{2}^{(3,3)}} \\ =& \bigl\langle \varphi_{i} ( y,s ) ,L_{ ( { y,s} ) }K_{ ( { \eta,\tau} ) } ( y,s ) \bigr\rangle _{\widehat{W}_{2}^{(1,1)}} \\ =& L_{ ( { y,s} ) }K_{ ( { \eta,} { \tau} ) } ( y,s ) |_{ ( { y,s} ) = ( { \eta}_{{ i}}, { \tau }_{{ i}} ) } \\ =& L_{ ( { y,s} ) }K_{ ( { y,s} ) } ( \eta,\tau ) |_{ ( { y,} { s} ) = ( { \eta}_{{ i}}, { \tau}_{{ i}} ) }. \end{aligned}$$

Clearly, \(\Psi_{i}(\eta,\tau)\in{ W}_{2}^{(3,3)} ( \Omega ) \). For each fixed \(u(\eta,\tau)\in{ W}_{2}^{(3,3)}(\Omega)\), if

$$ \bigl\langle u(\eta,\tau),\Psi_{i}(\eta,\tau) \bigr\rangle _{W_{2}^{(3,3)}}=0, \quad i=1,2,\dots, $$

then

$$\begin{aligned} \bigl\langle u(\eta,\tau), \bigl( L^{\ast}\varphi_{i} \bigr) ( \eta ,\tau ) \bigr\rangle _{{ W}_{2}^{(3,3)}} =& \bigl\langle Lu(\eta,\tau), \varphi_{i} ( \eta,\tau ) \bigr\rangle _{{ \widehat{W}_{2}^{(1,1)}}} = ( { Lu} ) ( { \eta}_{{ i}}, { \tau}_{{ i}} ) =0, \quad i=1,2,\dots . \end{aligned}$$

Since \(\{ (\eta_{i},\tau_{i}) \} _{i=1}^{\infty}\) is dense in Ω, \(( Lu ) ( \eta,\tau ) =0\). Therefore, \(u=0\) by the existence of \(L^{-1}\). □

Theorem 3.3

If \(\{ (\eta_{i},\tau_{i}) \} _{i=1}^{\infty}\) is dense in Ω, then the solution of (5) is

$$ u=\sum_{i=1}^{\infty}\sum _{k=1}^{i}\beta_{ik}M( \eta_{k},\tau_{k})\widehat{ \Psi}_{i}(\eta,\tau). $$
(6)

Proof

The system \(\{ \Psi_{i}(\eta,\tau) \} _{i=1}^{\infty}\) is complete in \({ W}_{2}^{(3,3)}(\Omega)\). Therefore, we get

$$\begin{aligned} u =&\sum_{i=1}^{\infty} \langle u,\widehat{ \Psi}_{i} \rangle_{{ W}_{2}^{(3,3)}}\widehat{\Psi}_{i}=\sum _{i=1}^{\infty }\sum_{k=1}^{i} \beta_{ik} \langle u,\Psi_{k} \rangle_{{ W}_{2}^{(3,3)}} \widehat{\Psi}_{i} =\sum_{i=1}^{\infty}\sum _{k=1}^{i}\beta_{ik} \bigl\langle u,L^{\ast }\varphi_{k} \bigr\rangle _{{ W}_{2}^{(3,3)}}\widehat{ \Psi} _{i}\\ =&\sum_{i=1}^{\infty} \sum_{k=1}^{i}\beta_{ik} \langle Lu,\varphi _{k} \rangle_{{\widehat{W}_{2}^{(1,1)}}}\widehat{ \Psi}_{i} =\sum_{i=1}^{\infty}\sum _{k=1}^{i}\beta_{ik} \langle Lu,G_{ ( { \eta}_{{ k}}, { \tau}_{{ k}} ) } \rangle_{{\widehat{W}_{2}^{(1,1)}}}\widehat{\Psi}_{i}\\ =& \sum_{i=1}^{\infty}\sum _{k=1}^{i}\beta_{ik}Lu ( \eta _{k},\tau _{k} ) \widehat{\Psi}_{i} =\sum_{i=1}^{\infty}\sum _{k=1}^{i}\beta_{ik}M( \eta_{k},\tau_{k})\widehat{ \Psi}_{i}. \end{aligned}$$

This completes the proof. □

Now an approximate solution \(u_{n}\) can be obtained as the n-term truncation of the exact solution u:

$$ { u}_{n}=\sum_{i=1}^{n}\sum _{k=1}^{i}\beta _{ik}M( \eta_{k},\tau_{k})\widehat{\Psi}_{i}(\eta, \tau). $$

Obviously,

$$ \bigl\Vert u_{n}(\eta,\tau)-u(\eta,\tau) \bigr\Vert \rightarrow0\ \quad ( n\rightarrow\infty ) . $$

4 The method implementation

If M is linear, then the analytical solution of (5) can be obtained directly by (6). If M is nonlinear, then the solution of (5) can be obtained either by (6) or by an iterative method as follows. We construct an iterative sequence \(u_{n}\) by putting

$$ \left \{ \textstyle\begin{array}{@{}l} \mbox{any fixed }u_{0}\in{W}_{2}^{(3,3)}, \\ u_{n}=\sum_{i=1}^{n}A_{i}\widehat{\Psi}_{i},\end{array}\displaystyle \right . $$
(7)

where

$$ \left \{ \textstyle\begin{array}{@{}l} A_{1}=\beta_{11}M(\eta_{1},\tau_{1}), \\ A_{2}=\sum_{k=1}^{2}\beta_{2k}M(\eta_{k},\tau_{k}), \\ \cdots\\ A_{n}=\sum_{k=1}^{n}\beta_{nk}M(\eta_{k},\tau_{k}).\end{array}\displaystyle \right . $$
(8)
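
In matrix form, (8) is simply a lower triangular matrix-vector product. The following minimal Python sketch assembles the iterate in (7); it assumes that the orthonormalization coefficients \(\beta_{ik}\), the nodal values \(M(\eta_{k},\tau_{k})\), and the functions \(\widehat{\Psi}_{i}\) have already been computed.

import numpy as np

def rkm_approximation(beta, M_nodes, psi_hat):
    """Assemble the n-term RKM approximation of (7)-(8).

    beta    : (n, n) lower triangular orthonormalization coefficients beta_ik
    M_nodes : (n,) values M(eta_k, tau_k) at the collocation points
    psi_hat : list of n callables, psi_hat[i](eta, tau) = hatPsi_i(eta, tau)
    """
    A = beta @ M_nodes              # A_i = sum_{k<=i} beta_ik * M(eta_k, tau_k)
    return lambda eta, tau: sum(a*psi(eta, tau) for a, psi in zip(A, psi_hat))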

Lemma 4.1

If

$$ u_{n}\stackrel{\Vert \cdot \Vert }{\rightarrow}\widehat{u},\quad \Vert u_{n}\Vert \textit{ is bounded}, ( \eta_{n},\tau _{n})\rightarrow(y,s),\textit{ and }M(\eta, \tau) \textit{ is continuous}, $$

then

$$ M(\eta_{n},\tau_{n})\rightarrow M(y,s). $$

Proof

By the reproducing property and Cauchy-Schwarz inequality we have

$$\begin{aligned}[b] \bigl\vert u(\eta,\tau) \bigr\vert &= \bigl\vert \bigl\langle u(y,s),K_{ ( \eta,\tau ) }(y,s) \bigr\rangle _{{ W}_{2}^{(3,3)}} \bigr\vert \\ &\leq \bigl\Vert u(y,s) \bigr\Vert _{{ W}_{2}^{(3,3)}} \bigl\Vert K_{ ( \eta,\tau ) }(y,s) \bigr\Vert _{{ W}_{2}^{(3,3)}}=N_{1} \bigl\Vert u(y,s) \bigr\Vert _{{ W}_{2}^{(3,3)}}. \end{aligned} $$

Thus, we obtain

$$\begin{aligned} \biggl\vert \frac{\partial u(\eta,\tau)}{\partial\eta} \biggr\vert =& \biggl\vert \biggl\langle u(y,s),\frac{\partial K_{ ( \eta,\tau ) }(y,s)}{\partial\eta} \biggr\rangle _{{ W}_{2}^{(3,3)}} \biggr\vert \leq \bigl\Vert u(y,s) \bigr\Vert _{{ W}_{2}^{(3,3)}} \biggl\Vert \frac{\partial K_{ ( \eta,\tau ) }(y,s)}{\partial\eta} \biggr\Vert _{{ W}_{2}^{(3,3)}}\\ =&N_{2} \bigl\Vert u(y,s) \bigr\Vert _{{ W}_{2}^{(3,3)}} \end{aligned}$$

and

$$\begin{aligned} \biggl\vert \frac{\partial u(\eta,\tau)}{\partial\tau} \biggr\vert =& \biggl\vert \biggl\langle u(y,s),\frac{\partial K_{ ( \eta,\tau ) }(y,s)}{\partial\tau} \biggr\rangle _{{ W}_{2}^{(3,3)}} \biggr\vert \leq \bigl\Vert u(y,s) \bigr\Vert _{W} \biggl\Vert \frac{\partial K_{ ( \eta,\tau ) }(y,s)}{\partial\tau} \biggr\Vert _{{ W}_{2}^{(3,3)}}\\ =&N_{3} \bigl\Vert u(y,s) \bigr\Vert _{{ W}_{2}^{(3,3)}}. \end{aligned}$$

On the one hand, we have

$$\begin{aligned} \bigl\vert u_{n-1}(y,s)-\widehat{u}(y,s) \bigr\vert =&\bigl\vert \bigl\langle u_{n-1}(\eta,\tau)-\widehat{u}(\eta, \tau),K_{ ( y,s ) }( \eta,\tau) \bigr\rangle _{{W}_{2}^{(3,3)}}\bigr\vert \\ \leq& \bigl\Vert u_{n-1}(\eta,\tau)-\widehat{u}(\eta,\tau) \bigr\Vert _{{ W}_{2}^{(3,3)}} \bigl\Vert K_{ ( \eta,\tau ) }(y,s) \bigr\Vert _{{ W}_{2}^{(3,3)}} \\ =&N_{4} \bigl\Vert u_{n-1}(\eta,\tau)-\widehat{u}(\eta, \tau) \bigr\Vert _{{ W}_{2}^{(3,3)}}; \end{aligned}$$

on the other hand, we get

$$\begin{aligned} \bigl\vert u_{n-1}(\eta_{n},\tau_{n})- \widehat{u}(y,s) \bigr\vert =& \bigl\vert u_{n-1}( \eta_{n},\tau_{n})-u_{n-1}(y,s)+u_{n-1}(y,s)-\widehat{u}(y,s) \bigr\vert \\ \leq& \bigl\vert \nabla u_{n-1}(\xi,\eta) \bigr\vert \bigl\vert ( \eta _{n},\tau_{n})-(y,s) \bigr\vert + \bigl\vert u_{n-1}(y,s)-\widehat{u}(y,s) \bigr\vert . \end{aligned}$$

Using these inequalities with \(u_{n}\stackrel{\Vert \cdot \Vert }{\rightarrow}\widehat{u}\), we find

$$ \bigl\vert u_{n-1}(y,s)-\widehat{u}(y,s) \bigr\vert \rightarrow 0, \qquad \bigl\vert \nabla u_{n-1}(\xi,\eta) \bigr\vert { \leq}\sqrt{{ c}_{1}^{2}{ +c}_{2}^{2}} \Vert u\Vert _{{ W}_{2}^{(3,3)}}. $$

Therefore, as \(n\rightarrow\infty\), using the boundedness of \(\Vert u_{n}\Vert \) gives

$$ \bigl\vert u_{n-1}(\eta_{n},\tau_{n})- \widehat{u}(y,s) \bigr\vert \rightarrow0. $$

As \(n\rightarrow\infty\), with the continuity of \(M(\eta,\tau)\) we get

$$ M (\eta_{n},\tau_{n})\rightarrow M(y,s). $$

This completes the proof. □

Theorem 4.2

Assume that \(\Vert u_{n}\Vert \) is bounded in (7) and that (5) has a unique solution. If \(\{ (\eta _{i},\tau_{i}) \} _{i=1}^{\infty}\) is dense in Ω, then the n-term approximate solutions \(u_{n}(\eta,\tau)\) converge to the analytical solution \(u(\eta,\tau)\) of (5), and

$$ u(\eta,\tau)=\sum_{i=1}^{\infty}A_{i} \widehat{\Psi}_{i}(\eta,\tau), $$

where \(A_{i}\) is given by (8).

Proof

First, we prove the convergence of \(u_{n}(\eta,\tau)\). From (7) we infer that

$$ u_{n+1}(\eta,\tau)=u_{n}(\eta,\tau)+A_{n+1}\widehat{\Psi}_{n+1}(\eta,\tau). $$

The orthonormality of \(\{ \widehat{\Psi}_{i} \} _{i=1}^{\infty } \) yields that

$$ \Vert u_{n+1}\Vert ^{2}=\Vert u_{n}\Vert ^{2}{ +A}_{n+1}^{2}=\sum _{i=1}^{n+1}{ A}_{i}^{2}. $$
(9)

In view of (9), \(\Vert u_{n+1}\Vert \geq\Vert u_{n}\Vert \). Since \(\Vert u_{n}\Vert \) is bounded, the sequence \(\Vert u_{n}\Vert \) is convergent, and there exists a constant c such that

$$ \sum_{i=1}^{\infty}A_{i}^{2}=c. $$

This implies that

$$ \{ A_{i} \} _{i=1}^{\infty}\in l^{2}. $$

If \(m>n\), then, since the increments \(u_{m}-u_{m-1},u_{m-1}-u_{m-2},\dots,u_{n+1}-u_{n}\) are mutually orthogonal,

$$\begin{aligned} \Vert u_{m}-u_{n}\Vert ^{2} =&\Vert u_{m}-u_{m-1}+u_{m-1}-u_{m-2}+ \cdots+u_{n+1}-u_{n}\Vert ^{2} \\ =&\Vert u_{m}-u_{m-1}\Vert ^{2}+\Vert u_{m-1}-u_{m-2}\Vert ^{2}+\cdots+\Vert u_{n+1}-u_{n}\Vert ^{2}. \end{aligned}$$

Since

$$ \Vert u_{m}-u_{m-1}\Vert ^{2}=A_{m}^{2}, $$

we have

$$ \Vert u_{m}-u_{n}\Vert ^{2}=\sum _{l=n+1}^{m}A_{l}^{2} \rightarrow0\quad \mbox{as }n\rightarrow\infty. $$

The completeness of \({ W}_{2}^{(3,3)}(\Omega)\) shows that \(u_{n}\rightarrow\widehat{u}\) as \(n\rightarrow\infty\). We have

$$ \widehat{u}(\eta,\tau)=\sum_{i=1}^{\infty}A_{i} \widehat{\Psi }_{i}(\eta ,\tau). $$

Note that

$$ (L\widehat{u}) (\eta,\tau)=\sum_{i=1}^{\infty}A_{i}L \widehat{\Psi}_{i}(\eta,\tau) $$

and

$$\begin{aligned}[b] (L\widehat{u}) (\eta_{l},\tau_{l}) &=\sum _{i=1}^{\infty }A_{i}L\widehat{\Psi}_{i}(\eta_{l},\tau_{l}) =\sum_{i=1}^{\infty}A_{i} \bigl\langle L\widehat{\Psi}_{i}(\eta ,\tau), \varphi_{l}(\eta,\tau) \bigr\rangle _{{\widehat{W}_{2}^{(1,1)}}} \\ &=\sum_{i=1}^{\infty}A_{i} \bigl\langle \widehat{\Psi}_{i}(\eta,\tau ),L^{\ast} \varphi_{l}(\eta,\tau) \bigr\rangle _{{ W}_{2}^{(3,3)}} =\sum_{i=1}^{\infty}A_{i} \bigl\langle \widehat{\Psi}_{i}(\eta,\tau ),\Psi_{l}(\eta,\tau) \bigr\rangle _{{ W}_{2}^{(3,3)}}. \end{aligned} $$

Therefore,

$$\begin{aligned} \sum_{l=1}^{j}\beta_{jl}(L \widehat{u}) (\eta_{l},\tau_{l}) =&\sum_{i=1}^{\infty}A_{i} \Biggl\langle \widehat{ \Psi}_{i}(\eta,\tau ),\sum_{l=1}^{j} \beta_{jl}\Psi_{l}(\eta,\tau) \Biggr\rangle _{{ W}_{2}^{(3,3)}} \\ =&\sum_{i=1}^{\infty}A_{i} \bigl\langle \widehat{\Psi}_{i}(\eta,\tau ), \widehat{ \Psi}_{j}(\eta,\tau) \bigr\rangle _{{ W}_{2}^{(3,3)}}=A_{j}. \end{aligned}$$

In view of (8), we have

$$ L\widehat{u} (\eta_{l},\tau_{l}) =M( \eta_{l},\tau_{l}). $$

Since \(\{ (\eta_{i},\tau_{i}) \} _{i=1}^{\infty}\) is dense in Ω, for each \((y,s)\in\Omega\), there exists a subsequence \(\{ ( \eta_{n_{j}},\tau_{n_{j}} ) \} _{j=1}^{\infty}\) such that

$$ ( \eta_{n_{j}},\tau_{n_{j}} ) \rightarrow(y,s) \quad (j \rightarrow \infty). $$

We know that

$$ L\widehat{u} ( \eta_{n_{j}},\tau_{n_{j}} ) =M(\eta _{n_{j}},\tau _{n_{j}}). $$

Let \(j\rightarrow\infty\); by the continuity of M we have

$$ (L\widehat{u}) (y,s)=M(y,s), $$

which proves that \(\widehat{u}(\eta,\tau)\) satisfies (5). □

Hence we obtain the n-term approximate solution as

$$ u_{n}(\eta,\tau)=\sum_{i=1}^{n} \sum_{k=1}^{i}\beta_{ik}M( \eta_{k},\tau_{k})\widehat{\Psi}_{i}(\eta,\tau). $$
(10)

Remark

Let us consider a countable dense set

$$ \bigl\{ ( \eta_{1},\tau_{1} ) , ( \eta_{2}, \tau_{2} ) ,\dots \bigr\} \subset\Omega $$

and define

$$ \varphi_{i}=G_{ ( \eta_{i},\tau_{i} ) },\qquad \Psi_{i}=L^{\ast} \varphi_{i},\qquad \widehat{\Psi}_{i}{=} \sum _{k=1}^{i}{\beta}_{ik}{ \Psi}_{k}. $$

Then the coefficients \(\beta_{ik}\) can be found by

$$\begin{aligned}& { \beta}_{11} =\frac{1}{\Vert \Psi_{1}\Vert }, \quad{ \beta}_{ii}= \frac{1}{\sqrt{\Vert \Psi_{i}\Vert ^{2}-\sum_{k=1}^{i-1}c_{ik}^{2}}},\quad { \beta}_{ij} =\frac{-\sum_{k=j}^{i-1} c_{ik}\beta_{kj}}{\sqrt{\Vert \Psi_{i}\Vert ^{2} -\sum_{k=1}^{i-1} c_{ik}^{2}}} \quad (j< i),\quad { c}_{ik} = \langle \Psi_{i},\widehat{\Psi}_{k} \rangle. \end{aligned}$$
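
These recurrences are ordinary Gram-Schmidt orthonormalization expressed through the Gram matrix \(\langle\Psi_{i},\Psi_{j}\rangle\). A minimal numerical sketch is given below; an arbitrary symmetric positive definite matrix stands in for the Gram matrix of the \(\Psi_{i}\), which in practice would be computed from the kernels.

import numpy as np

def orthonormalization_coefficients(gram):
    """Return lower triangular beta with hatPsi_i = sum_k beta[i, k] * Psi_k,
    given the Gram matrix gram[i, j] = <Psi_i, Psi_j>."""
    n = gram.shape[0]
    beta = np.zeros((n, n))
    for i in range(n):
        # c_ik = <Psi_i, hatPsi_k> for the already orthonormalized hatPsi_k, k < i
        c = np.array([beta[k, :k + 1] @ gram[i, :k + 1] for k in range(i)])
        norm = np.sqrt(gram[i, i] - np.sum(c**2))
        beta[i, i] = 1.0/norm
        for j in range(i):
            beta[i, j] = -np.dot(c[j:i], beta[j:i, j])/norm
    return beta

# sanity check: with G[i, j] = <Psi_i, Psi_j>, orthonormality means B G B^T = I
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
G = X @ X.T + 4*np.eye(4)                    # symmetric positive definite stand-in
B = orthonormalization_coefficients(G)
print(np.allclose(B @ G @ B.T, np.eye(4)))   # expected output: True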

5 Numerical experiments

In this section, two examples are solved by RKM. We present the results in tables and figures. The numerical results are compared with exact solutions and with existing numerical approximations to illustrate the efficiency and high accuracy of the method. The method gives the solutions in terms of convergent series with easily computable components and improves the convergence of the series solution. The method is applied in a direct way without restrictive assumptions. Throughout this work, all computations were carried out with the Maple 16 software package.

Example 5.1

Let us consider the problem with the following initial conditions:

$$ u(\eta,0)=\sin(\pi\eta), \qquad\frac{\partial u}{\partial\tau }(\eta,0)=0. $$

The exact solution is [28]

$$ u ( \eta,\tau ) =\frac{1}{2}\bigl(\sin \pi(\eta+\tau)+\sin\pi (\eta- \tau) \bigr). $$
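
By the product-to-sum identity, this exact solution can be written compactly as \(u(\eta,\tau)=\sin(\pi\eta)\cos(\pi\tau)\), which is convenient when tabulating the errors.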

After homogenizing the initial conditions and using our method, we obtain the results presented in Tables 1-3 and Figures 1-4.

Figure 1

Plots of the RKM solution for Example 5.1.

Figure 2

Plots of the absolute error for Example 5.1.

Figure 3

Plots of the absolute error for Example 5.1.

Figure 4

Plots of the absolute error for Example 5.1.

Table 1 Numerical results for Example 5.1
Table 2 Numerical results for Example 5.1 with \(\pmb{\tau= 1}\)
Table 3 Comparison of AE and RE for Example 5.1

Example 5.2

We solve the SG equation (1) in the region Ω with the following initial conditions:

$$ u(\eta,0)=0, \qquad\frac{\partial u}{\partial\tau}(\eta ,0)=4\operatorname{sech}(\eta). $$

The exact solution is [27]

$$ u ( \eta,\tau ) =4\arctan \bigl(\operatorname{sech}(\eta)\tau \bigr). $$
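
As an illustrative check, one can verify numerically that this function satisfies equation (1) together with the stated initial data; a minimal Python/SymPy spot check at a few arbitrarily chosen sample points is the following.

import sympy as sp

eta, tau = sp.symbols('eta tau', real=True)
u = 4*sp.atan(tau/sp.cosh(eta))        # candidate exact solution, sech = 1/cosh

residual = sp.diff(u, tau, 2) - sp.diff(u, eta, 2) + sp.sin(u)
pts = [(0.3, 0.7), (1.5, 2.0), (4.0, 1.0)]          # arbitrary sample points
print([abs(residual.subs({eta: a, tau: b}).evalf()) < 1e-12 for a, b in pts])
# expected output: [True, True, True]; note also u(eta, 0) = 0 and u_tau(eta, 0) = 4*sech(eta)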

After homogenizing the initial conditions and applying RKM, we obtain the results presented in Tables 4-16 and Figures 5-8.

Figure 5

Plots of the absolute error for Example 5.2.

Figure 6

Plots of the absolute error for Example 5.2.

Figure 7

Plots of the absolute error for Example 5.2.

Figure 8

Plots of the absolute error for Example 5.2.

Table 4 Numerical results for Example 5.2
Table 5 Numerical results for Example 5.2 with \(\pmb{\tau=1}\)
Table 6 Comparison of absolute and relative errors for Example 5.2
Table 7 Numerical results for Example 5.2 with \(\pmb{\eta=2.5}\)
Table 8 Numerical solutions for Example 5.2 with \(\pmb{\eta =5.0}\)
Table 9 Numerical results for Example 5.2 with \(\pmb{\eta =7.5}\)
Table 10 Numerical results for Example 5.2 with \(\pmb{\eta =10.0}\)
Table 11 Comparison of absolute errors for Example 5.2 with \(\pmb{\eta=2.5}\) and \(\pmb{\eta=5.0}\)
Table 12 Comparison of absolute errors for Example 5.2 with \(\pmb{\eta=7.5}\) and \(\pmb{\eta=10.0}\)
Table 13 Numerical results for Example 5.2 with \(\pmb{\eta =0.06}\)
Table 14 Numerical results for Example 5.2 with \(\pmb{\eta =0.06}\)
Table 15 Numerical results for Example 5.2 with \(\pmb{\eta =0.1}\)
Table 16 Comparison of absolute errors for Example 5.2

Remark

In Tables 1-16 we abbreviate the exact solution and the approximate solution by ES and AS, respectively. AE stands for the absolute error, that is, the absolute value of the difference between the exact and approximate solutions, whereas RE denotes the relative error, that is, the absolute error divided by the absolute value of the exact solution.
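
In symbols, if u denotes the exact solution and \(u_{n}\) the RKM approximation, then

$$ \mathrm{AE}=\vert u-u_{n}\vert ,\qquad \mathrm{RE}=\frac{\vert u-u_{n}\vert }{\vert u\vert }. $$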

6 Conclusion

Linear and nonlinear SG equations were investigated by RKM in this work. Homogenizing the initial and boundary conditions is crucial for this method, and we gave a general transformation for this purpose; it will be useful for researchers who study RKM. We obtained very accurate numerical results and presented them in tables and figures. The computational results confirm the efficiency, reliability, and accuracy of the method, which is easy to apply. RKM produces a rapidly convergent series with easily computable components using symbolic computation software, and it requires little computational work and time.

References

  1. Polyanin, AD, Zaitsev, VF: Handbook of Nonlinear Partial Differential Equations. Chapman & Hall/CRC, Boca Raton (2004) xx+814 pp. ISBN:1-58488-355-3

  2. Rajaraman, R: Solitons and Instantons: An Introduction to Solitons and Instantons in Quantum Field Theory. North-Holland, Amsterdam (1982) viii+409 pp. ISBN:0-444-86229-3

  3. Mohebbi, A, Dehghan, M: High-order solution of one-dimensional sine-Gordon equation using compact finite difference and DIRKN methods. Math. Comput. Model. 51(5-6), 537-549 (2010)

  4. Scott, A: Nonlinear Science. Emergence and Dynamics of Coherent Structures, 2nd edn. Oxford Texts in Applied and Engineering Mathematics, vol. 8. Oxford University Press, Oxford (2003) xxiv+480 pp. ISBN:0-19-852852-3

  5. Dauxois, T, Peyrard, M: Physics of Solitons. Cambridge University Press, Cambridge (2006)

  6. Dehghan, M, Shokri, A: A numerical method for one-dimensional nonlinear sine-Gordon equation using collocation and radial basis functions. Numer. Methods Partial Differ. Equ. 24(2), 687-698 (2008)

  7. Dehghan, M, Mirzaei, D: The boundary integral equation approach for numerical solution of the one-dimensional sine-Gordon equation. Numer. Methods Partial Differ. Equ. 24(6), 1405-1415 (2008)

  8. Bratsos, AG: A third order numerical scheme for the two-dimensional sine-Gordon equation. Math. Comput. Simul. 76(4), 271-282 (2007)

  9. Bratsos, AG: A numerical method for the one-dimensional sine-Gordon equation. Numer. Methods Partial Differ. Equ. 24(3), 833-844 (2008)

  10. Dehghan, M, Shokri, A: A numerical method for solution of the two-dimensional sine-Gordon equation using the radial basis functions. Math. Comput. Simul. 79(3), 700-715 (2008)

  11. Lakestani, M, Dehghan, M: Collocation and finite difference-collocation methods for the solution of nonlinear Klein-Gordon equation. Comput. Phys. Commun. 181(8), 1392-1401 (2010)

  12. Dehghan, M, Fakhar-Izadi, F: The spectral collocation method with three different bases for solving a nonlinear partial differential equation arising in modeling of nonlinear waves. Math. Comput. Model. 53(9-10), 1865-1877 (2011)

  13. Ahmed, H, Ahmed, A: Chebyshev collocation spectral method for solving the RLW equation. Int. J. Nonlinear Sci. 7(2), 131-142 (2009)

  14. Ma, ML, Wu, ZM: A numerical method for one-dimensional nonlinear sine-Gordon equation using multiquadric quasi-interpolation. Chin. Phys. B 18, 3099-3103 (2009)

  15. Aronszajn, N: Theory of reproducing kernels. Trans. Am. Math. Soc. 68, 337-404 (1950)

  16. Cui, M, Lin, Y: Nonlinear Numerical Analysis in the Reproducing Kernel Space. Nova Science Publishers, New York (2009) xiv+226 pp. ISBN:978-1-60456-468-6; 1-60456-468-7

  17. Geng, F, Cui, M, Zhang, B: Method for solving nonlinear initial value problems by combining homotopy perturbation and reproducing kernel Hilbert spaces methods. Nonlinear Anal., Real World Appl. 11, 637-644 (2010)

  18. Mohammadi, M, Mokhtari, R: Solving the generalized regularized long wave equation on the basis of a reproducing kernel space. J. Comput. Appl. Math. 235, 4003-4014 (2011)

  19. Jiang, W, Lin, Y: Representation of exact solution for the time-fractional telegraph equation in the reproducing kernel space. Commun. Nonlinear Sci. Numer. Simul. 16, 3639-3645 (2011)

  20. Wang, Y, Su, L, Cao, X, Li, X: Using reproducing kernel for solving a class of singularly perturbed problems. Comput. Math. Appl. 61, 421-430 (2011)

  21. Wu, B, Xiu, YL: A new algorithm for a class of linear nonlocal boundary value problems based on the reproducing kernel method. Appl. Math. Lett. 24, 156-159 (2011)

  22. Yao, H, Lin, Y: New algorithm for solving a nonlinear hyperbolic telegraph equation with an integral condition. Int. J. Numer. Methods Biomed. Eng. 27, 1558-1568 (2011)

  23. Inc, M, Akgül, A: The reproducing kernel Hilbert space method for solving Troesch’s problem. J. Assoc. Arab Univ. Basic Appl. Sci. 14, 19-27 (2013)

  24. Inc, M, Akgül, A, Geng, F: Reproducing kernel Hilbert space method for solving Bratu’s problem. Bull. Malays. Math. Soc. 38(1), 271-287 (2015)

  25. Inc, M, Akgül, A, Kilicman, A: Explicit solution of telegraph equation based on reproducing kernel method. J. Funct. Spaces Appl. 2012, Article ID 984682 (2012)

  26. Zhang, S, Lei, L, LuHong, D: Reproducing kernel functions represented by form of polynomials. In: Proceedings of the Second Symposium International Computer Science and Computational Technology (ISCSCT ’09), Huangshan, P.R. China, 26-28 December, pp. 353-358 (2009)

  27. Sari, M, Gürarslan, G: A sixth-order compact finite difference method for the one-dimensional sine-Gordon equation. Int. J. Numer. Methods Biomed. Eng. 27(7), 1126-1138 (2011)

  28. Jiang, ZW, Wang, RH: Numerical solution of one dimensional sine-Gordon equation using high accuracy multiquadratic quasi-interpolation. Appl. Math. Comput. 218(15), 7711-7716 (2012)

  29. Lu, J: An analytic approach to the sine-Gordon equation using the modified homotopy perturbation method. Comput. Math. Appl. 58, 2313-2319 (2009)

  30. Kaya, D: A numerical solution of the sine-Gordon equation using the modified decomposition method. Appl. Math. Comput. 143, 309-317 (2003)


Acknowledgements

This paper is a part of the PhD thesis of the first author. The authors would like to thank the referees for their very useful comments and remarks.

Author information

Corresponding author

Correspondence to Ali Akgül.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors have read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Akgül, A., Inc, M., Kilicman, A. et al. A new approach for one-dimensional sine-Gordon equation. Adv Differ Equ 2016, 8 (2016). https://doi.org/10.1186/s13662-015-0734-x
