
  • Research
  • Open Access

A new generalized Jacobi Galerkin operational matrix of derivatives: two algorithms for solving fourth-order boundary value problems

Advances in Difference Equations (2016) 2016:22

https://doi.org/10.1186/s13662-016-0753-2

  • Received: 22 May 2015
  • Accepted: 11 January 2016
  • Published:

Abstract

This paper reports a novel Galerkin operational matrix of derivatives of certain generalized Jacobi polynomials. This matrix is utilized for solving fourth-order linear and nonlinear boundary value problems. Two algorithms, based on the Galerkin and collocation spectral methods, are developed for obtaining new approximate solutions of linear and nonlinear fourth-order two-point boundary value problems. The key idea of both algorithms is to convert the differential equations, together with their boundary conditions, into systems of linear or nonlinear algebraic equations that can be efficiently solved by suitable numerical solvers. The convergence analysis of the suggested generalized Jacobi expansion is carefully discussed. Some illustrative examples demonstrate the high accuracy and effectiveness of the two proposed algorithms. The resulting approximate solutions are very close to the analytical solutions, and they are more accurate than those obtained by other existing techniques in the literature.

Keywords

  • generalized Jacobi polynomials
  • Legendre polynomials
  • operational matrix
  • fourth-order boundary value problems
  • Galerkin and collocation methods

MSC

  • 65M70
  • 65N35
  • 35C10
  • 42C10

1 Introduction

Spectral methods are global methods. The main idea behind them is to approximate solutions of differential equations by truncated series of orthogonal polynomials. Spectral methods play prominent roles in various applications such as fluid dynamics. The three most widely used versions are the tau, collocation, and Galerkin methods (see, for example, [18]). The choice of the spectral method best suited to a given equation depends on the type of the differential equation and on the type of its boundary conditions.

In the collocation approach, the test functions are Dirac delta functions centered at special collocation points. This approach requires the differential equation to be satisfied exactly at the collocation points. In the tau method, the residual function is expanded as a series of orthogonal polynomials and the boundary conditions are then applied as constraints; this approach has the advantage that it can handle problems with complicated boundary conditions. In the Galerkin method, the test functions are chosen in such a way that each of them satisfies the boundary conditions of the given differential equation.

There is extensive work in the literature on the numerical solution of high-order boundary value problems (BVPs). The great interest in such problems is due to their importance in various fields of applied science; for example, a large number of problems in physics and fluid dynamics are described by problems of this kind. In this respect, a huge number of articles handle both high odd- and high even-order BVPs. For example, in the sequence of papers [5, 9–11], the authors obtained numerical solutions for even-order BVPs by applying the Galerkin method. The main idea behind these solutions is to construct suitable basis functions satisfying the boundary conditions of the given differential equation, and then to apply the Galerkin method to convert each equation into a system of algebraic equations. The algorithms suggested in these articles are suitable for handling one- and two-dimensional linear even-order BVPs. The Galerkin and Petrov-Galerkin methods have the advantage that their application to linear problems enables one to investigate the resulting systems carefully, especially their complexities and condition numbers.

There are many algorithms in the literature for handling fourth-order boundary value problems. For example, Bernardi et al. in [12] suggested some spectral approximations for handling two-dimensional fourth-order problems. In the two leading articles of Shen [13, 14], the author developed direct solutions of fourth-order two-point boundary value problems. The algorithms suggested in these articles are based on constructing compact combinations of Legendre and Chebyshev polynomials together with the application of the Galerkin method. Many other techniques have been used for solving fourth-order BVPs; for example, the variational iteration method is applied in [15], a non-polynomial sextic spline method in [16], a quintic non-polynomial spline method in [17], and the Galerkin method in [18, 19]. Theorems listing the conditions for the existence and uniqueness of solutions of such problems are thoroughly discussed in the important book of Agarwal [20].

The approach of employing operational matrices of differentiation and integration is considered an important technique for solving various kinds of differential and integral equations. The main advantage of this approach is its simplicity in application and its capability of handling linear as well as nonlinear differential equations. There are a large number of articles in the literature in this direction. For example, the authors in [6] employed the tau operational matrices of derivatives of Chebyshev polynomials of the second kind for handling the singular Lane-Emden type equations. Some other studies [21, 22] employ tau operational matrices of derivatives for solving the same type of equations. The operational matrices of shifted Chebyshev, shifted Jacobi, generalized Laguerre, and other kinds of polynomials have been employed for solving some fractional problems (see, for example, [23–27]). In addition, in the two recent papers [28, 29], Abd-Elhameed introduced and used two Galerkin operational matrices for solving, respectively, the sixth-order two-point BVPs and Lane-Emden equations.

In this paper, our main aim is fourfold:
  • Establishing a novel Galerkin operational matrix of derivatives of some generalized Jacobi polynomials.

  • Investigating the convergence analysis of the suggested generalized Jacobi expansion.

  • Employing the introduced operational matrix of derivatives to numerically solve linear fourth-order BVPs based on the application of the Galerkin method.

  • Employing the introduced operational matrix of derivatives to solve nonlinear fourth-order BVPs based on the application of the collocation method.

The contents of the paper are organized as follows. Section 2 is devoted to presenting an overview of classical Jacobi and generalized Jacobi polynomials. Section 3 is concerned with deriving the Galerkin operational matrix of derivatives of some generalized Jacobi polynomials. In Section 4, we implement and present two numerical algorithms for handling linear and nonlinear fourth-order BVPs, based on the application of the generalized Jacobi Galerkin operational matrix method (GJGOMM) for linear problems and the generalized Jacobi collocation operational matrix method (GJCOMM) for nonlinear problems. The convergence analysis of the generalized Jacobi expansion is discussed in detail in Section 5. Numerical examples, including some discussions and comparisons, are given in Section 6 to test the efficiency, accuracy, and applicability of the suggested algorithms. Finally, conclusions are reported in Section 7.

2 An overview on classical Jacobi and generalized Jacobi polynomials

The classical Jacobi polynomials \(P_{n}^{(\alpha,\beta)}(x)\) associated with the real parameters (\(\alpha>-1\), \(\beta>-1\)) (see [30] and [31]) are a sequence of polynomials defined on \([-1,1]\). Define the normalized orthogonal polynomials \(R_{n}^{(\alpha,\beta)}(x)\) (see [32])
$$R_{n}^{(\alpha,\beta)}(x)=\frac{P_{n}^{(\alpha,\beta )}(x)}{P_{n}^{(\alpha,\beta)}(1)}, $$
and define the shifted normalized Jacobi polynomials on \([a,b]\) as
$$ \tilde{R}^{(\alpha ,\beta )}_{n}(x)=R^{(\alpha ,\beta )}_{n} \biggl(\frac {2 x-a-b}{b-a} \biggr). $$
(1)
The polynomials \(\tilde{R}^{(\alpha ,\beta )}_{n}(x)\) are orthogonal on \([a,b]\) with respect to the weight function \((b-x)^{\alpha } (x-a)^{\beta }\), in the sense that
$$ \int_{a}^{b}(b-x)^{\alpha}(x-a)^{\beta} \tilde{R}_{m}^{(\alpha,\beta)}(x) \tilde{R}_{n}^{(\alpha,\beta )}(x) \,dx = \textstyle\begin{cases} 0, & m \neq n,\\ \tilde{h}^{\alpha,\beta}_{n}, & m=n, \end{cases} $$
(2)
where
$$ \tilde{h}^{\alpha,\beta}_{n}=\frac{(b-a)^{\lambda} n! \Gamma (n+\beta+1) [\Gamma(\alpha+1) ]^{2}}{ (2n+\lambda) \Gamma(n+\lambda) \Gamma (n+\alpha+1)},\quad \lambda=\alpha +\beta +1. $$
(3)
It should be noted here that the Legendre polynomials are a particular case of the Jacobi polynomials. In fact, \(R_{n}^{(0,0)}(x)=L_{n}(x)\), where \(L_{n}(x)\) is the standard Legendre polynomial of degree n.
Let \(w^{\alpha,\beta}(x)=(b-x)^{\alpha}(x-a)^{\beta}\). We denote by \(L^{2}_{w^{\alpha,\beta}}(a,b)\) the weighted \(L^{2}\) space with inner product:
$$(u,v)_{w^{\alpha,\beta}}(x):= \int_{a}^{b}u(x) v(x) w^{\alpha,\beta}(x)\,dx, $$
and the associated norm \(\|u\|_{w^{\alpha,\beta}}=(u,u)^{\frac{1}{2}}_{w^{\alpha,\beta}}\). The definition of the shifted Jacobi polynomials will now be extended to include the cases in which α and/or \(\beta\le-1\). Assume that \(\ell,m\in\mathbb{Z}\), and define
$$ \tilde{J}_{i}^{(\ell,m)}(x)= \textstyle\begin{cases} (b-x)^{-\ell} (x-a)^{-m} \tilde{R}^{(-\ell,-m)}_{i+\ell +m}(x), &\ell,m\le-1, \\ (b-x)^{-\ell} \tilde{R}^{(-\ell,m)}_{i+\ell}(x), & \ell\le -1, m>-1, \\ (x-a)^{-m} \tilde{R}^{(\ell,-m)}_{i+m}(x), & \ell> -1, m\leq-1, \\ \tilde{R}^{(\ell,m)}_{i}(x), & \ell,m>-1. \end{cases} $$
(4)
It is worth noting here that in the case \([a,b]=[-1,1]\), the polynomials defined in (4) are the so-called generalized Jacobi polynomials \((J_{i}^{(\ell,m)}(x))\) defined by Guo et al. in [33]. The symmetric generalized Jacobi polynomials \(J^{(-n,-n)}_{i}(x)\) can be expressed explicitly in terms of the Legendre polynomials, while the symmetric shifted generalized Jacobi polynomials \(\tilde{J}^{(-n,-n)}_{i}(x)\) can be expressed in terms of the shifted Legendre polynomials. These results are given in the following two lemmas.

Lemma 1

For every nonnegative integer n, and for all \(i\ge2n\), one has
$$J_{i}^{(-n,-n)}(x)=n! \sum _{j=0}^{n}\frac{(-1)^{j} \binom{n}{j}(i-2n+2j+\frac{1}{2}) \Gamma(i-2n+j+\frac{1}{2})}{\Gamma(i-n+j+\frac{3}{2})} L_{i+2j-2n}(x), $$
(5)
and in particular,
$$J^{(-2,-2)}_{i}(x)=\frac{8}{(2i-5) (2i-3)} \biggl[ L_{i-4}(x)-\frac{2(2i-3)}{2i-1} L_{i-2}(x)+ \frac{2i-5}{2i-1} L_{i}(x) \biggr]. $$
(6)

Proof

For the proof of Lemma 1, see [9]. □
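Identity (6) can be verified numerically. The following sketch (our own check, using SciPy's `eval_jacobi` and `eval_legendre`) compares the left-hand side, computed from definition (4) on \([-1,1]\) with the normalization \(P_{n}^{(2,2)}(1)=\binom{n+2}{n}\), against the Legendre combination on the right:

```python
import numpy as np
from scipy.special import eval_jacobi, eval_legendre

def gen_jacobi_m2(i, x):
    # J_i^{(-2,-2)}(x) = (1-x)^2 (1+x)^2 R_{i-4}^{(2,2)}(x), from definition (4) on [-1,1]
    n = i - 4
    norm = (n + 1) * (n + 2) / 2          # P_n^{(2,2)}(1) = binom(n+2, n)
    return (1 - x)**2 * (1 + x)**2 * eval_jacobi(n, 2, 2, x) / norm

def legendre_combo(i, x):
    # right-hand side of identity (6)
    c = 8 / ((2*i - 5) * (2*i - 3))
    return c * (eval_legendre(i - 4, x)
                - 2*(2*i - 3)/(2*i - 1) * eval_legendre(i - 2, x)
                + (2*i - 5)/(2*i - 1) * eval_legendre(i, x))

x = np.linspace(-1, 1, 9)
for i in range(4, 12):
    assert np.allclose(gen_jacobi_m2(i, x), legendre_combo(i, x))
```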

Now, Lemma 2 is a direct consequence of Lemma 1.

Lemma 2

For every nonnegative integer n, for all \(i\ge2n\), one has
$$ \tilde{J}_{i}^{(-n,-n)}(x)= \biggl( \frac{b-a}{2} \biggr)^{2n} n! \sum_{j=0}^{n} \frac{(-1)^{j} \binom{n}{j}(i-2n+2j+\frac{1}{2}) \Gamma(i-2n+j+\frac{1}{2})}{\Gamma(i-n+j+\frac{3}{2})} L^{*}_{i+2j-2n}(x), $$
(7)
and in particular,
$$ \tilde{J}^{(-2,-2)}_{i}(x)=\frac{(b-a)^{4}}{2 (2i-5) (2i-3)} \biggl[ L^{*}_{i-4}(x)-\frac{2(2i-3)}{2i-1} L^{*}_{i-2}(x)+ \frac{2i-5}{2i-1} L^{*}_{i}(x) \biggr]. $$
(8)

The following lemma is also of interest in the sequel.

Lemma 3

The following integral formula holds:
$$ \int J^{(-2,-2)}_{j+4}(x)\,dx=\frac{-L_{j-1}(x)}{(j+\frac{1}{2})_{3}}+ \frac{3 L_{j+1}(x)}{ (j+\frac{1}{2})(j+\frac{5}{2})_{2}}-\frac{3 L_{j+3}(x)}{ (j+\frac{9}{2})(j+\frac{3}{2})_{2}}+\frac{L_{j+5}(x)}{(j+\frac{5}{2})_{3}}. $$
(9)

Proof

Lemma 3 follows if we integrate equation (8) (for the case \([a,b]=[-1,1]\)) with the aid of the following integral formula:
$$ \int L_{j}(x)\,dx=\frac{L_{j+1}(x)-L_{j-1}(x)}{2j+1}. $$
(10)
 □
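Formula (10), the key ingredient of this proof, can be confirmed symbolically with NumPy's Legendre-series arithmetic (a small sketch of our own; `Legendre.basis(j)` constructs \(L_{j}\)):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# check d/dx [ (L_{j+1}(x) - L_{j-1}(x)) / (2j+1) ] = L_j(x), i.e. formula (10)
for j in range(1, 12):
    antiderivative = (Legendre.basis(j + 1) - Legendre.basis(j - 1)) / (2*j + 1)
    difference = antiderivative.deriv() - Legendre.basis(j)
    assert np.allclose(difference.coef, 0.0)
```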

3 Generalized Jacobi Galerkin operational matrix of derivatives

In this section, a novel operational matrix of derivatives will be developed. For this purpose, we choose the following set of basis functions:
$$ \phi_{i}(x)=\tilde{J}^{(-2,-2)}_{i+4}(x)=(x-a)^{2} (b-x)^{2} \tilde{R}^{(2,2)}_{i}(x),\quad i=0,1,2, \ldots. $$
(11)
It is easy to see that the set of polynomials \(\{\phi_{i}(x): i=0,1,2,\ldots\}\) is linearly independent. Moreover, these polynomials are orthogonal on \([a,b]\) with respect to the weight function \(w(x)=\frac{1}{(x-a)^{2} (b-x)^{2}}\), in the sense that
$$\int_{a}^{b}\frac{\phi_{i}(x) \phi_{j}(x)\,dx}{(x-a)^{2} (b-x)^{2}} = \textstyle\begin{cases} 0, &i\neq j,\\ \frac{4 (b-a)^{5}}{(2 i+5)(i+1)_{4}}, &i=j. \end{cases} $$
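This orthogonality relation, including the normalization constant \(\frac{4 (b-a)^{5}}{(2 i+5)(i+1)_{4}}\), can be confirmed by quadrature. The sketch below is our own check with SciPy; the interval \([a,b]=[0,2]\) is an arbitrary test choice:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_jacobi

a, b = 0.0, 2.0

def phi(i, x):
    # phi_i(x) = (x-a)^2 (b-x)^2 R~_i^{(2,2)}(x), equation (11)
    norm = (i + 1) * (i + 2) / 2                  # P_i^{(2,2)}(1)
    t = (2*x - a - b) / (b - a)
    return (x - a)**2 * (b - x)**2 * eval_jacobi(i, 2, 2, t) / norm

def weighted_inner(i, j):
    f = lambda x: phi(i, x) * phi(j, x) / ((x - a)**2 * (b - x)**2)
    return quad(f, a, b)[0]

for i in range(5):
    poch = (i + 1) * (i + 2) * (i + 3) * (i + 4)  # Pochhammer (i+1)_4
    assert abs(weighted_inner(i, i) - 4*(b - a)**5 / ((2*i + 5)*poch)) < 1e-8
    for j in range(i):
        assert abs(weighted_inner(i, j)) < 1e-8
```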
Let \(H_{w}^{r}(I)\) (\(r=0,1,2,\ldots\)) denote the weighted Sobolev spaces, whose inner products and norms are denoted by \((\cdot,\cdot)_{r,w}\) and \(\|\cdot\|_{r,w}\), respectively (see [4]). To account for homogeneous boundary conditions, we define
$$H_{0,w}^{2}(I)= \bigl\{ v\in H_{w}^{2}(I): v(a)=v(b)=v'(a)=v'(b)=0 \bigr\} , $$
where \(I=(a,b)\).
Define the following subspace of \(H_{0,w}^{2}(I)\):
$$V_{N}=\operatorname{span}\bigl\{ \phi_{0}(x), \phi_{1}(x),\ldots,\phi_{N}(x)\bigr\} . $$
Any function \(f(x)\in H_{0,w}^{2}(I)\) can be expanded as
$$ f(x)=\sum_{i=0}^{\infty}c_{i} \phi_{i}(x), $$
(12)
where
$$ c_{i}=\frac{(2i+5)(i+1)_{4}}{4 (b-a)^{5}} \int_{a}^{b}\frac{f(x) \phi_{i}(x)}{(x-a)^{2}(b-x)^{2}}\,dx. $$
(13)
Assume that \(f(x)\) in equation (12) can be approximated as
$$ f(x)\simeq f_{N}(x)=\sum_{i=0}^{N}c_{i} \phi_{i}(x)={\boldsymbol {C}}^{T} {\boldsymbol {\Phi}}(x), $$
(14)
where
$$ {\boldsymbol {C}}^{T}=[c_{0}, c_{1},\ldots,c_{N}], \qquad {\boldsymbol {\Phi}}(x)=\bigl[\phi_{0}(x), \phi_{1}(x), \ldots, \phi_{N}(x)\bigr]^{T}. $$
(15)
Now, we are going to state and prove the main theorem, from which a novel Galerkin operational matrix of derivatives will be introduced.

Theorem 1

If the polynomials \(\phi_{i}(x)\) are selected as in (11), then for all \(i\ge1\), one has
$$ \begin{aligned} D \phi_{i}(x)= \frac{2}{b-a} \mathop{\sum_{j=0}}_{(i+j)\ \mathrm{odd}}^{i-1}(2 j+5) \phi_{j}(x)+\eta_{i}(x), \end{aligned} $$
(16)
where \(\eta_{i}(x)\) is given by
$$ \eta_{i}(x)=2 (x-a) \textstyle\begin{cases} (b-x) (a+b-2 x),& i \textit{ even},\\ (a-b) (b-x),& i \textit{ odd}. \end{cases} $$
(17)

Proof

The key idea is to prove Theorem 1 on \([-1,1]\); the proof can then easily be transferred to the general interval \([a,b]\). We intend to prove the relation
$$ \begin{aligned} D \psi_{i}(x)= \mathop{\sum _{j=0}}_{(i+j)\ \mathrm{odd}}^{i-1}(2j+5) \psi_{j}(x) +\delta_{i}(x), \end{aligned} $$
(18)
where
$$\psi_{i}(x)=J^{(-2,-2)}_{i+4}(x), $$
and
$$\delta_{i}(x)=4 \bigl(x^{2}-1\bigr) \textstyle\begin{cases} x,& i \mbox{ even},\\ 1,& i \mbox{ odd}. \end{cases} $$
To prove (18), it is sufficient to prove that the following identity holds, up to a constant:
$$ \int Q_{i}(x)\,dx=\psi_{i}(x), $$
(19)
where
$$Q_{i}(x)=\mathop{\sum_{j=0}}_{(i+j)\ \mathrm{odd}}^{i-1}(2j+5) \psi_{j}(x) +\delta_{i}(x). $$
Indeed
$$ \int Q_{i}(x)\,dx=\mathop{\sum_{j=0}}_{(i+j)\ \mathrm{odd}}^{i-1}(2j+5) \int J^{(-2,-2)}_{j+4}(x)\,dx + \int\delta_{i}(x)\,dx. $$
(20)
If we make use of Lemma 3, then, after performing some manipulations, the latter equation turns into the relation
$$\begin{aligned} \int Q_{i}(x)\,dx={}&8\mathop{\sum _{j=1}}_{(i+j)\ \mathrm{odd}}^{i-1} \biggl[ \frac{-L_{j-1}(x)}{(2j+1)(2j+3)}+\frac{3 L_{j+1}(x)}{ (2j+1)(2j+7)}-\frac{3 L_{j+3}(x)}{(2j+3)(2j+9)} \\ &{}+ \frac{L_{j+5}(x)}{(2j+7)(2j+9)} \biggr] +5 \mu_{i} \int\psi_{0}(x)\,dx+ \int\delta_{i}(x)\,dx, \end{aligned}$$
(21)
where
$$\mu_{i}=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} 1, & i \mbox{ odd}, \\ 0, & i \mbox{ even}. \end{array}\displaystyle \right . $$
After performing some rather lengthy manipulations on the right hand side of (21), equation (19) is obtained.
Now, if x in (18) is replaced by \(\frac{2x-a-b}{b-a}\), then after performing some manipulations, we get
$$ \begin{aligned} D \phi_{i}(x)=\frac{2}{b-a} \mathop{\sum_{j=0}}_{(i+j)\ \mathrm{odd}}^{i-1}(2 j+5) \phi_{j}(x)+\eta_{i}(x), \end{aligned} $$
(22)
where \(\eta_{i}(x)\) is given by
$$\eta_{i}(x)=2 (x-a) \textstyle\begin{cases}(b-x) (a+b-2 x),& i\mbox{ even},\\ (a-b) (b-x),& i \mbox{ odd}, \end{cases} $$
and this completes the proof of Theorem 1. □
Now, with the aid of Theorem 1, the first derivative of the vector \(\boldsymbol {\Phi}(x)\) defined in (15) can be expressed in matrix form:
$$ \frac{d{\boldsymbol {\Phi}}(x)}{dx}={H} {\boldsymbol {\Phi}}(x) +{\boldsymbol {\eta}}(x), $$
(23)
where \(\boldsymbol {\eta}(x)= (\eta_{0}(x),\eta_{1}(x),\dots,\eta _{N}(x) )^{T}\), and \(H= (h_{ij} )_{0\leqslant i,j\leqslant N}\) is an \((N+1)\times(N+1)\) matrix whose nonzero elements can be given explicitly from equation (16) by
$$h_{ij}= \textstyle\begin{cases} \frac{2}{b-a}(2j+5),& i>j, (i+j) \mbox{ odd},\\ 0,& \mbox{otherwise}. \end{cases} $$
For example, for \(N=5\), the operational matrix H is the following \((6\times6)\) matrix:
$$H=\frac{2}{b-a} \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 5 & 0 & 0 & 0 & 0 & 0 \\ 0 & 7 & 0 & 0 & 0 & 0 \\ 5 & 0 & 9 & 0 & 0 & 0 \\ 0 & 7 & 0 & 11 & 0 & 0 \\ 5 & 0 & 9 & 0 & 13 & 0 \end{pmatrix}. $$
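The entries of H and relation (23) can be checked numerically. The sketch below (our own test on the arbitrary interval \([a,b]=[0,1]\) with \(N=5\)) builds H from the formula above and compares \(H\boldsymbol{\Phi}(x)+\boldsymbol{\eta}(x)\) with a central-difference approximation of \(\boldsymbol{\Phi}'(x)\):

```python
import numpy as np
from scipy.special import eval_jacobi

a, b, N = 0.0, 1.0, 5

def phi_vec(x):
    # Phi(x): vector of phi_i(x) = (x-a)^2 (b-x)^2 R~_i^{(2,2)}(x)
    t = (2*x - a - b) / (b - a)
    return np.array([(x - a)**2 * (b - x)**2 * eval_jacobi(i, 2, 2, t)
                     / ((i + 1)*(i + 2)/2) for i in range(N + 1)])

def eta_vec(x):
    # eta_i(x) from equation (17), by parity of i
    ev = 2*(x - a)*(b - x)*(a + b - 2*x)   # i even
    od = 2*(x - a)*(a - b)*(b - x)         # i odd
    return np.array([ev if i % 2 == 0 else od for i in range(N + 1)])

H = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    for j in range(i):
        if (i + j) % 2 == 1:
            H[i, j] = 2*(2*j + 5)/(b - a)

# spot-check (23): Phi'(x) = H Phi(x) + eta(x), via central differences
h = 1e-6
for x in (0.2, 0.5, 0.8):
    dPhi = (phi_vec(x + h) - phi_vec(x - h)) / (2*h)
    assert np.allclose(dPhi, H @ phi_vec(x) + eta_vec(x), atol=1e-6)
```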

Corollary 1

The second-, third- and fourth-order derivatives of the vector \(\boldsymbol {\Phi}(x)\) are given, respectively, by
$$\begin{aligned} &\frac{d^{2} {\boldsymbol {\Phi}}(x)}{dx^{2}}={H}^{2} \boldsymbol {\Phi(x)}+{H} {\boldsymbol { \eta}}(x)+{\boldsymbol {\eta}}^{(1)}(x), \end{aligned}$$
(24)
$$\begin{aligned} &\frac{d^{3} {\boldsymbol {\Phi}}(x)}{dx^{3}}={H}^{3} \boldsymbol {\Phi(x)}+{H^{2}} {\boldsymbol {\eta}}(x)+H {\boldsymbol {\eta}}^{(1)}(x)+{\boldsymbol {\eta}}^{(2)}(x), \end{aligned}$$
(25)
$$\begin{aligned} &\frac{d^{4} {\boldsymbol {\Phi}}(x)}{dx^{4}}={H}^{4} \boldsymbol {\Phi(x)}+{H}^{3} {\boldsymbol {\eta}}(x)+{H^{2}} {\boldsymbol {\eta}}^{(1)}(x)+H {\boldsymbol { \eta}}^{(2)}(x)+{\boldsymbol {\eta}}^{(3)}(x). \end{aligned}$$
(26)

4 Two algorithms for fourth-order two point BVPs

In this section, we are interested in developing two numerical algorithms for solving both linear and nonlinear fourth-order two-point BVPs. The Galerkin operational matrix of derivatives introduced in Section 3 is employed for this purpose. The linear equations are handled by applying the Galerkin method, while the nonlinear equations are handled by applying the typical collocation method.

4.1 Linear fourth-order BVPs

Consider the linear fourth-order boundary value problem
$$ u^{(4)}(x)+f_{3}(x) u^{(3)}(x)+f_{2}(x) u^{(2)}(x) +f_{1}(x) u^{(1)}(x)+f_{0}(x) u(x)=g(x),\quad x\in(a, b), $$
(27)
subject to the homogeneous boundary conditions
$$ u(a)=u(b)=u'(a)=u'(b)=0. $$
(28)
If \(u(x)\) is approximated as
$$ u(x)\simeq u_{N}(x)=\sum_{k=0}^{N}c_{k} \phi_{k}(x)={\boldsymbol {C}}^{T} {\boldsymbol {\Phi}}(x), $$
(29)
then, making use of equations (23)-(26), the following approximations for \(u^{(\ell)}(x)\), \(1\le\ell\le4\), are obtained:
$$\begin{aligned} &u^{(1)}(x)\simeq {\boldsymbol {C}}^{T} \bigl(H {\boldsymbol { \Phi}}(x)+{\boldsymbol {\eta}} \bigr),\qquad\qquad u^{(2)}(x)\simeq{ \boldsymbol {C}}^{T} \bigl({H}^{2} \boldsymbol {\Phi(x)}+{\boldsymbol {\theta _{2}}}(x) \bigr), \end{aligned}$$
(30)
$$\begin{aligned} &u^{(3)}(x)\simeq{\boldsymbol {C}}^{T} \bigl({H}^{3} \boldsymbol {\Phi(x)}+{\boldsymbol {\theta _{3}}}(x) \bigr), \qquad u^{(4)}(x)\simeq{\boldsymbol {C}}^{T} \bigl({H}^{4} \boldsymbol {\Phi(x)}+{\boldsymbol {\theta _{4}}}(x) \bigr), \end{aligned}$$
(31)
where
$$\begin{aligned} &{\boldsymbol {\theta_{2}}}(x)={H} {\boldsymbol {\eta}}(x)+{\boldsymbol {\eta}}^{(1)}(x), \\ &{\boldsymbol {\theta_{3}}}(x)={H^{2}} {\boldsymbol {\eta}}(x)+H {\boldsymbol { \eta}}^{(1)}(x)+{\boldsymbol {\eta}}^{(2)}(x), \\ &{\boldsymbol {\theta_{4}}}(x)={H}^{3} {\boldsymbol {\eta}}(x)+{H^{2}} {\boldsymbol {\eta}}^{(1)}(x)+H {\boldsymbol {\eta}}^{(2)}(x)+{\boldsymbol { \eta}}^{(3)}(x). \end{aligned}$$
If we substitute equations (29)-(31) into equation (27), then the residual, \(r(x)\), of this equation can be written as
$$ \begin{aligned}[b] r(x)={}&{\boldsymbol {C}}^{T} \bigl({H}^{4} \boldsymbol {\Phi(x)} +{\boldsymbol { \theta_{4}}}(x) \bigr) +f_{3}(x) {\boldsymbol {C}}^{T} \bigl({H}^{3} \boldsymbol {\Phi(x)}+{\boldsymbol {\theta_{3}}}(x) \bigr)\\ &{}+f_{2}(x) {\boldsymbol {C}}^{T} \bigl({H}^{2} \boldsymbol {\Phi(x)}+{\boldsymbol { \theta_{2}}}(x) \bigr)+f_{1}(x) {\boldsymbol {C}}^{T} \bigl(H {\boldsymbol {\Phi}}(x)+{\boldsymbol {\eta}}(x) \bigr)+ f_{0}(x) { \boldsymbol {C}}^{T} {\boldsymbol {\Phi}}(x)-g(x). \end{aligned} $$
(32)
The application of the Galerkin method (see [4]) yields the following \((N+1)\) linear equations in the unknown expansion coefficients, \(c_{i}\), namely
$$ \int_{a}^{b}w(x) r(x) \phi_{i}(x)\,dx= \int_{a}^{b} r(x) \tilde{R}^{(2,2)}_{i}(x) \,dx=0,\quad i=0,1,\ldots,N. $$
(33)
Thus equation (33) generates a set of \((N+1)\) linear equations which can be solved for the unknown components of the vector C, and hence the approximate spectral solution \(u_{N}(x)\) given in (29) can be obtained.
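To illustrate how equations (29)-(33) translate into a linear system, the following sketch solves the test problem \(u^{(4)}=24\) on \([0,1]\) with the homogeneous conditions (28), whose exact solution \(u=x^{2}(1-x)^{2}\) coincides with \(\phi_{0}\); the computed coefficient vector should therefore be \((1,0,\ldots,0)^{T}\). This test problem and the choice \(N=4\) are our own; the method itself is the GJGOMM described above:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_jacobi

a, b, N = 0.0, 1.0, 4

def R(i, x):                                 # R~_i^{(2,2)} on [0,1]
    return eval_jacobi(i, 2, 2, 2*x - 1) / ((i + 1)*(i + 2)/2)

def Phi(x):                                  # basis vector, equation (11)
    return np.array([x**2*(1 - x)**2 * R(i, x) for i in range(N + 1)])

# eta_i and its derivatives on [0,1]: even i gives 2x(1-x)(1-2x), odd i gives -2x(1-x)
EVEN = [lambda x: 2*x - 6*x**2 + 4*x**3, lambda x: 2 - 12*x + 12*x**2,
        lambda x: -12 + 24*x, lambda x: 24.0]
ODD  = [lambda x: -2*x + 2*x**2, lambda x: -2 + 4*x,
        lambda x: 4.0, lambda x: 0.0]
def eta(d, x):
    return np.array([EVEN[d](x) if i % 2 == 0 else ODD[d](x) for i in range(N + 1)])

H = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    for j in range(i):
        if (i + j) % 2 == 1:
            H[i, j] = 2*(2*j + 5)            # b - a = 1
H2, H3, H4 = H @ H, H @ H @ H, H @ H @ H @ H

def d4_basis(x):                             # H^4 Phi + theta_4, as in (31)
    return H4 @ Phi(x) + H3 @ eta(0, x) + H2 @ eta(1, x) + H @ eta(2, x) + eta(3, x)

g = lambda x: 24.0                           # u'''' = 24, exact u = x^2 (1-x)^2
A = np.array([[quad(lambda x: d4_basis(x)[k]*R(i, x), a, b)[0]
               for k in range(N + 1)] for i in range(N + 1)])
rhs = np.array([quad(lambda x: g(x)*R(i, x), a, b)[0] for i in range(N + 1)])
c = np.linalg.solve(A, rhs)
assert np.allclose(c, np.eye(N + 1)[0], atol=1e-6)   # recovers u = phi_0
```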

Remark 1

It should be noted that the problem (27), governed by the nonhomogeneous boundary conditions
$$ u(a)=\gamma_{1}, \qquad u(b)=\gamma_{2}, \qquad u'(a)=\zeta_{1},\qquad u'(b)= \zeta_{2}, $$
(34)
can easily be transformed to a problem similar to (27)-(28) (see [10]).
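One standard way to carry out such a transformation (a sketch of our own, not necessarily the construction used in [10]) is to subtract from \(u\) the cubic Hermite interpolant \(p\) matching the four boundary values in (34); since \(p^{(4)}\equiv0\), the function \(v=u-p\) satisfies the homogeneous conditions (28) and a problem of the form (27) with right-hand side \(g-f_{3}p'''-f_{2}p''-f_{1}p'-f_{0}p\):

```python
import numpy as np

def hermite_cubic(a, b, g1, g2, z1, z2):
    # cubic p with p(a)=g1, p(b)=g2, p'(a)=z1, p'(b)=z2
    A = np.array([[1, a, a**2, a**3],
                  [1, b, b**2, b**3],
                  [0, 1, 2*a, 3*a**2],
                  [0, 1, 2*b, 3*b**2]], float)
    return np.polynomial.Polynomial(np.linalg.solve(A, [g1, g2, z1, z2]))

# boundary data of Example 1 below (a=0, b=2)
p = hermite_cubic(0.0, 2.0, 2.0, 0.0, 1.0, -np.e**2)
assert np.isclose(p(0), 2.0) and np.isclose(p(2), 0.0)
assert np.isclose(p.deriv()(0), 1.0) and np.isclose(p.deriv()(2), -np.e**2)
```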

4.2 Solution of nonlinear fourth-order two point BVPs

Consider the following nonlinear fourth-order boundary value problem:
$$ u^{(4)}(x)=F \bigl(x,u(x),u^{(1)}(x),u^{(2)}(x),u^{(3)}(x) \bigr), $$
(35)
governed by the homogeneous boundary conditions
$$ u(a)=u(b)=u'(a)=u'(b)=0. $$
(36)
If \(u^{(\ell)}(x)\), \(0\le\ell\le4\), are approximated as in (29)-(31), then the following nonlinear equations in the unknown vector C can be obtained:
$$\begin{aligned} {{\boldsymbol {C}}^{T}} \bigl(H^{4}{\boldsymbol {\Phi}}(x)+ \boldsymbol {\theta _{4}}(x) \bigr) \approx{}& F \bigl( x,{{\boldsymbol {C}}^{T}} {\boldsymbol {\Phi}}(x),{{ \boldsymbol {C}}^{T}} \bigl(H {\boldsymbol {\Phi}}(x) +\boldsymbol {\eta}(x)\bigr), \\ &{}{{ \boldsymbol {C}}^{T}} \bigl(H^{2} {\boldsymbol {\Phi}}(x)+\boldsymbol { \theta}_{2}(x)\bigr), {{\boldsymbol {C}}^{T}} \bigl(H^{3} { \boldsymbol {\Phi}}(x)+\boldsymbol {\theta}_{3}(x)\bigr) \bigr). \end{aligned}$$
(37)
An approximate solution \(u_{N}(x)\) can be obtained by employing the typical collocation method. For this purpose, equation (37) is collocated at \((N+1)\) points. These points may be taken to be the zeros of the polynomial \(\tilde{R}^{(2,2)}_{N+1}(x)\), or chosen in any other suitable way. Hence, a set of \((N+1)\) nonlinear equations in the expansion coefficients, \(c_{i}\), is generated. This nonlinear system can be solved with the aid of a suitable solver, such as the well-known Newton iterative method, and the corresponding approximate solution \(u_{N}(x)\) is then obtained.
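The collocation procedure can be sketched as follows for a manufactured test problem of our own, \(u^{(4)}=u^{2}+g\) on \([0,1]\) with exact solution \(u=x^{2}(1-x)^{2}\); SciPy's `fsolve` stands in for the Newton solver, and the collocation points are the zeros of \(\tilde{R}^{(2,2)}_{N+1}(x)\):

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import eval_jacobi, roots_jacobi

a, b, N = 0.0, 1.0, 4

def Phi(x):                                  # basis (11) on [0,1]
    return np.array([x**2*(1 - x)**2*eval_jacobi(i, 2, 2, 2*x - 1)
                     / ((i + 1)*(i + 2)/2) for i in range(N + 1)])

# eta_i and its derivatives on [0,1]: even i gives 2x(1-x)(1-2x), odd i gives -2x(1-x)
EVEN = [lambda x: 2*x - 6*x**2 + 4*x**3, lambda x: 2 - 12*x + 12*x**2,
        lambda x: -12 + 24*x, lambda x: 24.0]
ODD  = [lambda x: -2*x + 2*x**2, lambda x: -2 + 4*x, lambda x: 4.0, lambda x: 0.0]
def eta(d, x):
    return np.array([EVEN[d](x) if i % 2 == 0 else ODD[d](x) for i in range(N + 1)])

H = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    for j in range(i):
        if (i + j) % 2 == 1:
            H[i, j] = 2*(2*j + 5)            # b - a = 1
H2, H3, H4 = H @ H, H @ H @ H, H @ H @ H @ H

def d4(c, x):                                # u_N''''(x), as in (31)
    return c @ (H4 @ Phi(x) + H3 @ eta(0, x) + H2 @ eta(1, x)
                + H @ eta(2, x) + eta(3, x))

g = lambda x: 24.0 - (x**2*(1 - x)**2)**2    # so that exact u = x^2 (1-x)^2
nodes = (roots_jacobi(N + 1, 2, 2)[0] + 1)/2 # zeros of R~^{(2,2)}_{N+1} on [0,1]

def F(c):
    return [d4(c, x) - (c @ Phi(x))**2 - g(x) for x in nodes]

c = fsolve(F, np.zeros(N + 1))
assert np.allclose(c, np.eye(N + 1)[0], atol=1e-6)   # recovers u = phi_0
```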

5 Convergence analysis of the approximate expansion

In this section, the convergence analysis of the suggested generalized Jacobi approximate solution is investigated. We state and prove a theorem showing that the expansion (12) of a function \(f(x)=(x-a)^{2} (b-x)^{2} G(x)\in H_{0,w}^{2}(I)\), where \(G(x)\) has a bounded fourth derivative, converges uniformly to \(f(x)\).

Theorem 2

A function \(f(x)=(x-a)^{2} (b-x)^{2} G(x)\in H_{0,w}^{2}(I)\), \(w(x) =\frac{1}{(x-a)^{2} (b-x)^{2}}\) with \(|G^{(4)}(x)|\leqslant M\), can be expanded as an infinite sum of the basis given in (12). This series converges uniformly to \(f(x)\), and the coefficients in (12) satisfy the inequality
$$ |c_{i}|< \frac{M (b-a)^{4} \sqrt{2 \pi}}{8 i \sqrt{i-4}}, \quad\forall i\ge6. $$
(38)

Proof

From equation (13), one has
$$c_{i}=\frac{(2i+5)(i+1)_{4}}{4 (b-a)^{5}} \int_{a}^{b}\frac{f(x) \phi_{i}(x)}{(x-a)^{2}(b-x)^{2}}\,dx, $$
and with the aid of equation (11), the coefficients \(c_{i}\) may be written alternatively in the form
$$ c_{i}=\frac{(2i+5)(i+1)_{4}}{4 (b-a)^{5}} \int_{a}^{b} \tilde {J}^{(-2,-2)}_{i+4}(x) G(x)\,dx. $$
(39)
Making use of Lemma 2, the polynomials \(\tilde{J}^{(-2,-2)}_{i}(x)\) can be expanded in terms of the shifted Legendre polynomials, and so the coefficients \(c_{i}\) take the form
$$c_{i}=\frac{(i+1)_{4}}{8(b-a) (2i+3)} \int_{a}^{b} \biggl[ L^{*}_{i}(x)- \frac{2(2i+5)}{2i+7} L^{*}_{i+2}(x)+ \frac{2i+3}{2i+7} L^{*}_{i+4}(x) \biggr] G(x)\,dx. $$
If the last relation is integrated by parts four times, then the repeated application of equation (10) yields
$$ c_{i}=\frac{(b-a)^{3} (i+1)_{4}}{8 (2i+3)} \int _{a}^{b}I^{(4)}(x) G^{(4)}(x)\,dx,\quad i\ge4, $$
(40)
where \(I^{(4)}(x)\) is given by
$$\begin{aligned} I^{(4)}(x)={}&\frac{L^{*}_{i-4}(x)}{ 16 (2 i-5) (2 i-3) (2 i-1) (2 i+1)}-\frac{3 L^{*}_{i-2}(x)}{8 (2 i-5) (2 i-1) (2 i+1) (2 i+7)} \\ &{}+\frac{15 L^{*}_{i}(x)}{16 (2 i-3) (2 i-1) (2 i+7) (2 i+9)}-\frac{5 (2 i+5) L^{*}_{i+2} (x)}{4 (2 i-1) (2 i+1) (2 i+7) (2 i+9) (2 i+11)} \\ &{}+\frac{15 L^{*}_{i+4}(x)}{16 (2 i+1) (2 i+7) (2 i+11) (2 i+13)}-\frac{3 L^{*}_{i+6} (x)}{8 (2 i+7) (2 i+9) (2 i+11) (2 i+15)} \\ &{}+\frac{(2 i+3) L^{*}_{i+8} (x)}{16 (2 i+7) (2 i+9) (2 i+11) (2 i+13) (2 i+15)}, \end{aligned}$$
(41)
which can be written as
$$I^{(4)}(x)=\sum_{m=0}^{6}B_{m} L^{*}_{i+2m-4}(x), \mbox{ say}, $$
and then the coefficients \(c_{i}\) take the form
$$ c_{i}=\frac{(b-a)^{3} (i+1)_{4}}{8 (2i+3)} \int_{a}^{b} \Biggl\{ \sum _{m=0}^{6}B_{m} L^{*}_{i+2m-4}(x) \Biggr\} G^{(4)}(x)\,dx. $$
(42)
Now, making use of the substitution \(\frac{2x-a-b}{b-a}=\cos \theta\) enables one to put the coefficients \(c_{i}\) in the form
$$\begin{aligned} c_{i}={}&\frac{(b-a)^{4} (i+1)_{4}}{16 (2i+3)} \int_{0}^{\pi} \Biggl\{ \sum _{m=0}^{6}B_{m} L_{i+2m-4}(\cos \theta) \Biggr\} \\ &{}\times G^{(4)} \biggl(\frac{1}{2} \bigl(a+b+(b-a) \cos \theta \bigr) \biggr) \sin\theta \,d\theta,\quad i\ge4. \end{aligned}$$
(43)
Taking into account the assumption \(\vert G^{(4)}(x)\vert \le M\), we have
$$ |c_{i}|\le\frac{M (b-a)^{4} (i+1)_{4}}{16(2i+3)} \sum _{m=0}^{6} \int_{0}^{\pi} \vert B_{m}\vert \bigl\vert L_{i+2m-4}(\cos\theta)\bigr\vert \sqrt{\sin\theta}\,d\theta. $$
(44)
From a Bernstein type inequality (see [34]), it is easy to see that
$$\bigl\vert L_{i+2m-4}(\cos\theta)\bigr\vert \sqrt{\sin\theta} < \sqrt{ \frac{2}{(i-4)\pi}},\quad \forall 0\le m\le6, $$
and hence (44) together with the last inequality leads to the estimation
$$\begin{aligned} |c_{i}|&< \frac{M (b-a)^{4} (i+1)_{4}}{16(2i+3)}\frac{\sqrt {2 \pi}}{\sqrt{i-4}} \sum _{m=0}^{6}\vert B_{m}\vert \\ &= \frac{M (b-a)^{4} (i+1)_{4}}{16(2i+3)}\frac{\sqrt{2 \pi}}{\sqrt{i-4}} \frac{4 (2i+5)}{(2i-5)(2i-1)(2i+7)(2i+11)(2i+15)}. \end{aligned}$$
(45)
Finally, it is easy to show that, for all \(i\ge6\),
$$|c_{i}|< \frac{M (b-a)^{4} \sqrt{2 \pi}}{8 i \sqrt{i-4}}. $$
This completes the proof of the theorem. □
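The bound (38) can be probed numerically. For the test function \(f(x)=x^{2}(1-x)^{2}\sin x\) on \([0,1]\) (our own choice) we have \(G=\sin\) and may take \(M=1\); the coefficients computed from (13), where the weight cancels one factor \(x^{2}(1-x)^{2}\), should stay below the bound:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_jacobi

a, b = 0.0, 1.0   # f(x) = x^2 (1-x)^2 sin(x), so G = sin and |G''''| <= M = 1

def R(i, x):
    return eval_jacobi(i, 2, 2, 2*x - 1) / ((i + 1)*(i + 2)/2)

def coeff(i):
    # c_i from (13); f * phi_i * w = x^2 (1-x)^2 sin(x) R~_i(x)
    poch = (i + 1)*(i + 2)*(i + 3)*(i + 4)
    val = quad(lambda x: x**2*(1 - x)**2*np.sin(x)*R(i, x), a, b)[0]
    return (2*i + 5)*poch/4 * val

for i in range(6, 12):
    bound = np.sqrt(2*np.pi)/(8*i*np.sqrt(i - 4))   # inequality (38) with M = b-a = 1
    assert abs(coeff(i)) < bound
```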

6 Numerical results and discussions

In this section, the two proposed algorithms in Section 4 are applied to solve linear and nonlinear fourth-order two point boundary value problems. The numerical results ensure that the two algorithms are very efficient and accurate.

Example 1

Consider the fourth-order linear boundary value problem (see [35]):
$$ \begin{aligned} &y^{(4)}(x)-y^{(2)}(x)-y(x)=(x-4) e^{x},\quad 0\le x\le 2, \\ &y(0)=2, \qquad y'(0)=1,\qquad y(2)=0,\qquad y'(2)=-e^{2}. \end{aligned} $$
(46)
The exact solution of (46) is
$$y(x)=(2-x) e^{x}. $$
Table 1 lists the maximum absolute errors E resulting from the application of GJGOMM for various values of N, while Table 2 displays a comparison between the relative errors obtained by the two methods developed in [35], namely the first-order method (1OM) and the second-order methods (2OMs), and the relative errors resulting from the application of GJGOMM.
Table 1

Maximum absolute error of \(|y-y_{N}|\) for Example 1

N  | E
4  | 1.283929 × 10−5
6  | 2.28788 × 10−8
8  | 3.45852 × 10−11
10 | 4.17094 × 10−14
12 | 2.03407 × 10−15
14 | 1.96477 × 10−15

Table 2

Comparison between the relative errors for Example 1

x    | 1OM [35]        | 2OMs [35]       | GJGOMM (N = 8)  | GJGOMM (N = 12)
0.25 | 5.39791 × 10−3  | 8.11851 × 10−3  | 1.09077 × 10−10 | 2.75173 × 10−15
0.50 | 1.57835 × 10−2  | 2.09205 × 10−2  | 1.14685 × 10−10 | 1.33747 × 10−15
0.75 | 2.54797 × 10−2  | 2.93281 × 10−2  | 8.24739 × 10−11 | 4.50799 × 10−16
1.00 | 3.14713 × 10−2  | 3.09264 × 10−2  | 3.3689 × 10−12  | 5.37406 × 10−16
1.25 | 3.22814 × 10−2  | 2.65317 × 10−2  | 7.75226 × 10−11 | 2.27451 × 10−16
1.50 | 2.73173 × 10−2  | 1.82938 × 10−2  | 9.08649 × 10−11 | 2.54606 × 10−15
1.75 | 1.64910 × 10−2  | 8.68533 × 10−3  | 7.53861 × 10−11 | 9.75198 × 10−16

Example 2

Consider the following fourth-order nonlinear boundary value problem (see [36, 37]):
$$ \begin{aligned} &y^{(4)}(x)=-6 e^{-4 y(x)},\quad 0< x< 4-e, \\ &y(0)=1, \qquad y'(0)=\frac{1}{e},\qquad y(4-e)=\ln(4), \qquad y'(4-e)= \frac{1}{4}. \end{aligned} $$
(47)
The exact solution of the above problem is
$$y(x)=\ln(e+x). $$
In Table 3, we list the maximum absolute errors using GJCOMM for various values of N. Let \(E_{1},E_{2},E_{3}\), and \(E_{4}\) denote the maximum absolute errors if the selected collocation points are, respectively, the zeros of the shifted Legendre polynomial \(L^{*}_{N+1}(x)\), the shifted Chebyshev polynomials of the first and second kinds \(T^{*}_{N+1}(x)\) and \(U^{*}_{N+1}(x)\), and the shifted symmetric Jacobi polynomial \(\tilde{R}^{(2,2)}_{N+1}(x)\), while Figures 1 and 2 display comparisons between the maximum absolute errors resulting from the application of GJCOMM for \(N=4\) and 6, respectively. Table 3 and Figures 1 and 2 show that the best choice among these options is obtained if the selected collocation points are the zeros of the polynomial \(\tilde{R}^{(2,2)}_{N+1}(x)\). Table 4 displays a comparison between the errors obtained by the application of GJCOMM for \(N=4\) and the errors resulting from the application of the three methods developed in [36, 37]. This comparison ascertains that our results are more accurate than those obtained in [36, 37].
Figure 1: Comparison between the errors obtained by the application of GJCOMM for \(N=4\), for different choices of the selected collocation points.

Figure 2: Comparison between the errors obtained by the application of GJCOMM for \(N=6\), for different choices of the selected collocation points.

Table 3

Maximum absolute error of \(|y-y_{N}|\) for Example 2 for \(N=2,4,6,8,10,12\)

N  | \(E_{1}\)       | \(E_{2}\)       | \(E_{3}\)       | \(E_{4}\)
2  | 1.36037 × 10−6  | 2.11456 × 10−6  | 8.70645 × 10−7  | 8.46647 × 10−8
4  | 1.23719 × 10−8  | 2.69013 × 10−8  | 5.15533 × 10−9  | 5.83994 × 10−10
6  | 4.34874 × 10−11 | 1.19637 × 10−10 | 1.93656 × 10−11 | 4.17821 × 10−12
8  | 2.65565 × 10−13 | 4.8217 × 10−13  | 1.57874 × 10−13 | 3.44169 × 10−14
10 | 9.10383 × 10−15 | 9.76996 × 10−15 | 1.19904 × 10−14 | 8.88178 × 10−15
12 | 8.65974 × 10−15 | 8.65974 × 10−15 | 1.06581 × 10−14 | 8.43769 × 10−15

Table 4

Comparison between the absolute errors for Example 2

x   | Method in [36] | Method in [37] (\(|\phi_{2}-y|\)) | Method in [37] (\(|\psi_{2}-y|\)) | GJCOMM (N = 4)
0.0 | 0.0            | 0.0             | 0.0            | 2.55351 × 10−15
0.2 | 1.79 × 10−4    | 4.1883 × 10−4   | 4.1012 × 10−6  | 6.94449 × 10−8
0.4 | 3.25 × 10−4    | 1.2786 × 10−3   | 1.2574 × 10−5  | 6.30457 × 10−8
0.6 | 4.08 × 10−4    | 1.9971 × 10−3   | 1.9762 × 10−5  | 4.48952 × 10−8
0.8 | 4.02 × 10−4    | 2.0753 × 10−3   | 2.0714 × 10−5  | 6.49632 × 10−8
1.0 | 2.94 × 10−4    | 1.3038 × 10−3   | 1.3163 × 10−5  | 1.47025 × 10−8
1.2 | -              | 1.8581 × 10−4   | 1.9026 × 10−6  | 2.10697 × 10−8
4-e | 2 × 10−9       | 2.4400 × 10−15  | 0              | 7.77156 × 10−16

Example 3

Consider the following fourth-order nonlinear boundary value problem (see [37]):
$$ \begin{aligned} &y^{(4)}(x)=y^{2}(x)+g(x),\quad 0< x< 1, \\ &y(0)=0,\qquad y'(0)=0, \qquad y(1)=1,\qquad y'(1)=1, \end{aligned} $$
(48)
where \(g(x)=-x^{10}+4 x^{9}-4 x^{8}-4 x^{7}+8 x^{6}-4 x^{4}+120 x-48\). The exact solution of the above problem is
$$y(x)=x^{5}-2 x^{4}+2 x^{2}. $$
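The stated source term and exact solution can be checked symbolically: substituting \(y(x)=x^{5}-2x^{4}+2x^{2}\) into \(y^{(4)}-y^{2}\) must reproduce \(g(x)\), and the four boundary conditions of (48) must hold. A short verification, assuming SymPy:

```python
import sympy as sp

x = sp.symbols('x')
y = x**5 - 2*x**4 + 2*x**2                       # stated exact solution
g = -x**10 + 4*x**9 - 4*x**8 - 4*x**7 + 8*x**6 - 4*x**4 + 120*x - 48

# residual of y'''' = y^2 + g should vanish identically
residual = sp.expand(sp.diff(y, x, 4) - y**2 - g)
print(residual)                                  # 0

# boundary conditions: y(0) = y'(0) = 0, y(1) = y'(1) = 1
yp = sp.diff(y, x)
bcs = [y.subs(x, 0), yp.subs(x, 0), y.subs(x, 1) - 1, yp.subs(x, 1) - 1]
print(bcs)                                       # [0, 0, 0, 0]
```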
In Table 5, we list the maximum absolute errors obtained using GJCOMM for various values of N, where E denotes the maximum pointwise error when the selected collocation points are the zeros of the polynomial \(\tilde{R}^{(2,2)}_{N+1}(x)\). Moreover, Table 6 displays a comparison between the errors obtained by the application of GJCOMM and those of the method developed in [37] for the case \(N=2\). The comparison confirms that our results are more accurate than those obtained in [37].
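The collocation idea for Example 3 can be sketched in a few lines. The fragment below is a simplified illustration, not the authors' method: it uses a plain monomial trial polynomial (degree `deg = 6`, an illustrative choice) instead of the generalized Jacobi basis, enforces the four boundary conditions as explicit equations, and collocates the nonlinear ODE at the shifted zeros of \(\tilde{R}^{(2,2)}\). NumPy and SciPy are assumed:

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import roots_jacobi

deg = 6                              # degree of the monomial trial polynomial (illustrative)
n_coll = deg + 1 - 4                 # collocation equations left after 4 boundary conditions
pts = (roots_jacobi(n_coll, 2, 2)[0] + 1.0) / 2.0  # shifted zeros of R~^(2,2) on (0, 1)

def g(x):
    return -x**10 + 4*x**9 - 4*x**8 - 4*x**7 + 8*x**6 - 4*x**4 + 120*x - 48

def residuals(c):
    p = np.polynomial.Polynomial(c)
    d1, d4 = p.deriv(1), p.deriv(4)
    # boundary conditions: y(0) = y'(0) = 0, y(1) = y'(1) = 1
    bc = np.array([p(0.0), d1(0.0), p(1.0) - 1.0, d1(1.0) - 1.0])
    # collocate y'''' = y^2 + g at the interior Jacobi zeros
    ode = d4(pts) - p(pts)**2 - g(pts)
    return np.concatenate([bc, ode])

c = fsolve(residuals, np.zeros(deg + 1))
exact = lambda t: t**5 - 2*t**4 + 2*t**2
xs = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(np.polynomial.Polynomial(c)(xs) - exact(xs)))
```

Because the exact solution is a polynomial of degree 5, which lies in the trial space, `err` should be tiny once the nonlinear solver converges, mirroring the near round-off entries of Table 5.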
Table 5: Maximum absolute error \(|y-y_{N}|\) for Example 3 for \(N=2,4,6\)

| N | E |
|---|---|
| 2 | 5.489 × 10⁻¹⁷ |
| 4 | 2.265 × 10⁻¹⁷ |
| 6 | 2.10662 × 10⁻¹⁷ |

Table 6: Comparison between the absolute errors for Example 3

| x | Method in [37] (\(|\psi_{2}-y|\)) | Method in [37] (\(|\phi_{2}-y|\)) | GJCOMM (N = 2) |
|---|---|---|---|
| 0.0 | 0.0 | 0.0 | 0.0 |
| 0.2 | 8.1093 × 10⁻¹⁰ | 3.5906 × 10⁻⁵ | 2.03045 × 10⁻¹⁸ |
| 0.4 | 2.0542 × 10⁻⁹ | 1.0188 × 10⁻⁴ | 9.04773 × 10⁻¹⁸ |
| 0.6 | 2.2272 × 10⁻⁹ | 1.3579 × 10⁻⁴ | 2.10718 × 10⁻¹⁷ |
| 0.8 | 1.0115 × 10⁻⁹ | 8.5908 × 10⁻⁵ | 3.74696 × 10⁻¹⁷ |
| 1.0 | 0.0 | 5.5799 × 10⁻¹³ | 5.489 × 10⁻¹⁷ |

Example 4

Consider the following nonlinear fourth-order boundary value problem (see [38]):
$$ \begin{aligned} &y^{(4)}(x)-e^{x} y''(x)+y(x)+ \sin\bigl(y(x)\bigr)=1- \bigl(-2+e^{x}\bigr) \sinh(x)+\sin\bigl(\sinh(x)+1\bigr), \\ &\quad 0\leq x \leq1,\\ &y(0)=y'(0)=1, \qquad y(1)=1+\sinh(1),\qquad y'(1)= \cosh(1), \end{aligned} $$
(49)
with the exact solution \(y(x) =\sinh(x)+1\).
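As with Example 3, the exact solution can be checked against the data of problem (49): substituting \(y(x)=\sinh(x)+1\) into the left-hand side must reproduce the right-hand side, and the four boundary conditions must hold. A short SymPy verification:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.sinh(x) + 1                               # stated exact solution

lhs = sp.diff(y, x, 4) - sp.exp(x)*sp.diff(y, x, 2) + y + sp.sin(y)
rhs = 1 - (-2 + sp.exp(x))*sp.sinh(x) + sp.sin(sp.sinh(x) + 1)
print(sp.simplify(lhs - rhs))                    # 0

# boundary conditions: y(0) = y'(0) = 1, y(1) = 1 + sinh(1), y'(1) = cosh(1)
yp = sp.diff(y, x)
bcs = [y.subs(x, 0) - 1, yp.subs(x, 0) - 1,
       y.subs(x, 1) - (1 + sp.sinh(1)), yp.subs(x, 1) - sp.cosh(1)]
print([sp.simplify(b) for b in bcs])             # [0, 0, 0, 0]
```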
In Table 7, the absolute errors are listed for various values of N. To compare the absolute errors obtained by applying GJCOMM with those obtained by applying the RKHSM in [38], the absolute errors resulting from the application of the RKHSM are listed in the last column of this table. The table shows that the approximate solution of problem (49) obtained by GJCOMM is highly efficient and more accurate than the approximate solution obtained by the RKHSM [38].
Table 7: Comparison between the absolute errors for Example 4

| x | GJCOMM (N = 4) | GJCOMM (N = 6) | GJCOMM (N = 8) | RKHSM (\(u^{101}_{1}\)) [38] |
|---|---|---|---|---|
| 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1 | 1.74774 × 10⁻¹¹ | 2.42005 × 10⁻¹⁴ | 9.34812 × 10⁻¹⁷ | 2.78 × 10⁻⁸ |
| 0.2 | 8.7269 × 10⁻¹¹ | 1.5889 × 10⁻¹⁴ | 8.29415 × 10⁻¹⁸ | 8.09 × 10⁻⁸ |
| 0.3 | 2.27345 × 10⁻¹¹ | 3.59562 × 10⁻¹⁴ | 2.17857 × 10⁻¹⁸ | 1.20 × 10⁻⁷ |
| 0.4 | 1.213 × 10⁻¹⁰ | 6.36858 × 10⁻¹⁴ | 6.03901 × 10⁻¹⁷ | 1.25 × 10⁻⁷ |
| 0.5 | 6.32272 × 10⁻¹² | 1.87133 × 10⁻¹⁵ | 2.06595 × 10⁻¹⁶ | 9.56 × 10⁻⁸ |
| 0.6 | 1.27768 × 10⁻¹⁰ | 6.44606 × 10⁻¹⁴ | 1.21431 × 10⁻¹⁷ | 4.82 × 10⁻⁸ |
| 0.7 | 1.44017 × 10⁻¹¹ | 4.09586 × 10⁻¹⁴ | 1.10589 × 10⁻¹⁶ | 7.38 × 10⁻⁹ |
| 0.8 | 9.92086 × 10⁻¹¹ | 1.56333 × 10⁻¹⁴ | 8.74301 × 10⁻¹⁶ | 1.07 × 10⁻⁸ |
| 0.9 | 2.37061 × 10⁻¹¹ | 2.57225 × 10⁻¹⁴ | 3.38271 × 10⁻¹⁷ | 7.08 × 10⁻⁹ |
| 1.0 | 4.44089 × 10⁻¹⁶ | 2.70617 × 10⁻¹⁶ | 2.27249 × 10⁻¹⁶ | 0.0 |

7 Concluding remarks

In this article, a novel operational matrix of derivatives of certain generalized Jacobi polynomials is derived and used for introducing spectral solutions of linear and nonlinear fourth-order two-point boundary value problems. Two spectral methods, namely the Galerkin and collocation methods, are employed for this purpose. The main advantages of the introduced algorithms are their simplicity of application and their high accuracy, since highly accurate approximate solutions can be achieved using a small number of terms of the suggested expansion. The numerical results are convincing, and the resulting approximate solutions are very close to the exact ones.

Declarations

Acknowledgements

The authors are grateful to the referees for their valuable comments and suggestions which have improved the manuscript in its present form.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Mathematics, Faculty of Science, University of Jeddah, Jeddah, Saudi Arabia
(2)
Department of Mathematics, Faculty of Science, Cairo University, Giza, Egypt
(3)
Department of Mathematics, Faculty of Industrial Education, Helwan University, Cairo, Egypt
(4)
Department of Mathematics, Faculty of Science, Shaqra University, Shaqra, Saudi Arabia

References

1. Rashidinia, J, Ghasemi, M: B-spline collocation for solution of two-point boundary value problems. J. Comput. Appl. Math. 235(8), 2325-2342 (2011)
2. Elbarbary, EME: Efficient Chebyshev-Petrov-Galerkin method for solving second-order equations. J. Sci. Comput. 34(2), 113-126 (2008)
3. Julien, K, Watson, M: Efficient multi-dimensional solution of PDEs using Chebyshev spectral methods. J. Comput. Phys. 228, 1480-1503 (2009)
4. Canuto, C, Hussaini, MY, Quarteroni, A, Zang, TA: Spectral Methods in Fluid Dynamics. Springer, Berlin (1988)
5. Doha, EH, Abd-Elhameed, WM: Efficient spectral-Galerkin algorithms for direct solution of second-order equations using ultraspherical polynomials. SIAM J. Sci. Comput. 24, 548-571 (2002)
6. Doha, EH, Abd-Elhameed, WM, Youssri, YH: Second kind Chebyshev operational matrix algorithm for solving differential equations of Lane-Emden type. New Astron. 23/24, 113-117 (2013)
7. Bhrawy, AH, Hafez, RM, Alzaidy, JF: A new exponential Jacobi pseudospectral method for solving high-order ordinary differential equations. Adv. Differ. Equ. 2015, 152 (2015)
8. Doha, EH, Bhrawy, AH, Abd-Elhameed, WM: Jacobi spectral Galerkin method for elliptic Neumann problems. Numer. Algorithms 50(1), 67-91 (2009)
9. Doha, EH, Abd-Elhameed, WM, Bhrawy, AH: New spectral-Galerkin algorithms for direct solution of high even-order differential equations using symmetric generalized Jacobi polynomials. Collect. Math. 64(3), 373-394 (2013)
10. Doha, EH, Abd-Elhameed, WM, Bassuony, MA: New algorithms for solving high even-order differential equations using third and fourth Chebyshev-Galerkin methods. J. Comput. Phys. 236, 563-579 (2013)
11. Doha, EH, Abd-Elhameed, WM, Bhrawy, AH: Efficient spectral ultraspherical-Galerkin algorithms for the direct solution of 2nth-order linear differential equations. Appl. Math. Model. 33, 1982-1996 (2009)
12. Bernardi, C, Coppoletta, G, Maday, Y: Some spectral approximations of two-dimensional fourth-order problems. Math. Comput. 59(199), 63-76 (1992)
13. Shen, J: Efficient spectral-Galerkin method I. Direct solvers of second- and fourth-order equations using Legendre polynomials. SIAM J. Sci. Comput. 15(6), 1489-1505 (1994)
14. Shen, J: Efficient spectral-Galerkin method II. Direct solvers of second- and fourth-order equations using Chebyshev polynomials. SIAM J. Sci. Comput. 16(1), 74-87 (1995)
15. Noor, MA, Mohyud-Din, ST: An efficient method for fourth-order boundary value problems. Comput. Math. Appl. 54(7), 1101-1111 (2007)
16. Khan, A, Khandelwal, P: Non-polynomial sextic spline approach for the solution of fourth-order boundary value problems. Appl. Math. Comput. 218(7), 3320-3329 (2011)
17. Lashien, IF, Ramadan, MA, Zahra, WK: Quintic nonpolynomial spline solutions for fourth order two-point boundary value problem. Commun. Nonlinear Sci. Numer. Simul. 14(4), 1105-1114 (2009)
18. Doha, EH, Bhrawy, AH: Efficient spectral-Galerkin algorithms for direct solution of fourth-order differential equations using Jacobi polynomials. Appl. Numer. Math. 58(8), 1224-1244 (2008)
19. Doha, EH, Bhrawy, AH: A Jacobi spectral Galerkin method for the integrated forms of fourth-order elliptic differential equations. Numer. Methods Partial Differ. Equ. 25(3), 712-739 (2009)
20. Agarwal, RP: Boundary Value Problems for Higher-Order Differential Equations. World Scientific, Singapore (1986)
21. Öztürk, Y, Gülsu, M: An operational matrix method for solving Lane-Emden equations arising in astrophysics. Math. Methods Appl. Sci. 37(15), 2227-2235 (2014)
22. Bhardwaj, A, Pandey, RK, Kumar, N, Dutta, G: Solution of Lane-Emden type equations using Legendre operational matrix of differentiation. Appl. Math. Comput. 218(14), 7629-7637 (2012)
23. Bhrawy, AH, Zaky, MA: Numerical simulation for two-dimensional variable-order fractional nonlinear cable equation. Nonlinear Dyn. 80(1-2), 101-116 (2015)
24. Bhrawy, AH, Taha, TM, Alzahrani, EO, Baleanu, D, Alzahrani, AA: New operational matrices for solving fractional differential equations on the half-line. PLoS ONE 10(5), e0126620 (2015). doi:10.1371/journal.pone.0126620
25. Saadatmandi, A, Dehghan, M: A new operational matrix for solving fractional-order differential equations. Comput. Math. Appl. 59(3), 1326-1336 (2010)
26. Maleknejad, K, Basirat, B, Hashemizadeh, E: A Bernstein operational matrix approach for solving a system of high order linear Volterra-Fredholm integro-differential equations. Math. Comput. Model. 55(3), 1363-1372 (2012)
27. Zhu, L, Fan, Q: Solving fractional nonlinear Fredholm integro-differential equations by the second kind Chebyshev wavelet. Commun. Nonlinear Sci. Numer. Simul. 17(6), 2333-2341 (2012)
28. Abd-Elhameed, WM: On solving linear and nonlinear sixth-order two point boundary value problems via an elegant harmonic numbers operational matrix of derivatives. Comput. Model. Eng. Sci. 101(3), 159-185 (2014)
29. Abd-Elhameed, WM: New Galerkin operational matrix of derivatives for solving Lane-Emden singular-type equations. Eur. Phys. J. Plus 130, 52 (2015)
30. Abramowitz, M, Stegun, IA: Handbook of Mathematical Functions: With Formulas, Graphs, and Mathematical Tables. Dover, New York (2012)
31. Andrews, GE, Askey, R, Roy, R: Special Functions. Cambridge University Press, Cambridge (1999)
32. Doha, EH, Abd-Elhameed, WM, Ahmed, HM: The coefficients of differentiated expansions of double and triple Jacobi polynomials. Bull. Iran. Math. Soc. 38(3), 739-766 (2012)
33. Guo, B-Y, Shen, J, Wang, L-L: Optimal spectral-Galerkin methods using generalized Jacobi polynomials. J. Sci. Comput. 27(1-3), 305-322 (2006)
34. Chow, Y, Gatteschi, L, Wong, R: A Bernstein-type inequality for the Jacobi polynomial. Proc. Am. Math. Soc. 121(3), 703-709 (1994)
35. Xu, L: The variational iteration method for fourth order boundary value problems. Chaos Solitons Fractals 39(3), 1386-1394 (2009)
36. Wazwaz, AM: The numerical solution of special fourth-order boundary value problems by the modified decomposition method. Int. J. Comput. Math. 79(3), 345-356 (2002)
37. Singh, R, Kumar, J, Nelakanti, G: Approximate series solution of fourth-order boundary value problems using decomposition method with Green's function. J. Math. Chem. 52(4), 1099-1118 (2014)
38. Geng, F: A new reproducing kernel Hilbert space method for solving nonlinear fourth-order boundary value problems. Appl. Math. Comput. 213, 163-169 (2009)
