# An effective computational approach based on Gegenbauer wavelets for solving the time-fractional KdV-Burgers-Kuramoto equation

## Abstract

In this paper, we present a wavelet Galerkin method for solving the time-fractional KdV-Burgers-Kuramoto (KBK) equation, which describes nonlinear physical phenomena and involves instability, dissipation, and dispersion parameters. The computational method is based on Gegenbauer wavelets, which possess several useful properties. Gegenbauer wavelets and the operational matrix of integration, together with the Galerkin method, are used to transform the time-fractional KBK equation into a nonlinear system of algebraic equations, which can be solved numerically with Newton’s method. Our aim is to show that the Gegenbauer wavelet-based method is an efficient and powerful tool for solving the KBK equation with a time-fractional derivative. To compare the numerical results of the wavelet Galerkin method with exact solutions, two test problems were chosen. The obtained results demonstrate the performance and efficiency of the presented method.

## Introduction

Wavelet analysis is the decomposition of a function onto shifted and scaled versions of a basic wavelet. Wavelets possess many useful features, such as compact support, orthogonality, exact representation of polynomials up to a certain degree, and the ability to represent functions at various levels of resolution, so they have been applied in many fields of engineering and science. Wavelets, introduced by Daubechies, have been used to obtain approximate solutions of physical and mathematical problems in various branches of engineering and the applied sciences. Since the beginning of the 1990s, wavelet-based methods have been used to solve partial differential equations; generally, these algorithms are based on the Galerkin or collocation methods. The Daubechies wavelet family, however, has a drawback: because Daubechies wavelets have no explicit expression, their analytical differentiation or integration is impossible. Thus simpler wavelets based on orthogonal polynomials, such as the Haar, Gegenbauer, Legendre, Hermite, and Chebyshev polynomials, are commonly used in wavelet-based numerical methods.

Many important nonlinear phenomena in the mathematical, physical, chemical and engineering sciences can be well modeled by partial differential equations and fractional partial differential equations. It is well known that most physical and engineering problems are nonlinear, and it may be very difficult to find exact solutions of fractional partial differential equations for some cases. Due to this fact, numerical solutions of nonlinear fractional partial differential equations are very important. For that we need a reliable and efficient technique for the numerical solution of nonlinear partial differential equations and fractional partial differential equations.

In this paper, we introduce a numerical solution, by means of the Gegenbauer wavelet Galerkin method, for the following time-fractional KdV-Burgers-Kuramoto (KBK) equation :

\begin{aligned} &\frac{\partial ^{\alpha } u(x,t)}{\partial t^{\alpha }} + u(x,t)\frac{ \partial u(x,t)}{\partial x} -\alpha _{1} \frac{\partial ^{2}u(x,t)}{ \partial x^{2}} + \alpha _{2}\frac{\partial ^{3}u(x,t)}{\partial x^{3}} + \alpha _{3}\frac{\partial ^{4}u(x,t)}{\partial x^{4}} = f(x,t), \\ &\quad t > 0, x > 0 \end{aligned}
(1)

with initial and boundary conditions

$$u(x,0) = 0$$
(2)

and

$$\textstyle\begin{cases} u(0,t) = h_{1}(t),\qquad u(1,t) = h_{2}(t), \\ u_{x}(0,t) = h_{3}(t), \\ u_{xx}(0,t) = h_{4}(t), \end{cases}$$
(3)

in which α is the order of the fractional time derivative, $$f(x,t)$$ is the forcing term, and the parameters $$\alpha _{1},\alpha _{2},\alpha _{3}$$ characterize instability, dispersion, and dissipation, respectively . The study of nonlinear physical phenomena has been an active subject in various fields of science, such as applied mathematics, physics, and engineering. The classical KBK equation, one of the best-known nonlinear partial differential equations, describes physical processes in turbulent motion and other unstable systems. It can also be used to describe long waves on a viscous fluid flowing down an inclined plane , unstable drift waves in plasma , and a turbulent cascade model in a barotropic atmosphere . The classical KBK equation with initial and boundary conditions can be solved analytically in spite of its nonlinearity. As computational techniques have developed, many authors have applied different numerical and semi-analytical methods to the classical KBK equation and compared the numerical results with exact solutions, obtaining high accuracy and good performance. In , a trigonometric function expansion method was applied to find exact solutions of the KBK equation. In , the KBK equation was converted into an equivalent 3D system, which was then solved with the Lie symmetry reduction method and the Prelle–Singer procedure. In , the KBK equation was solved using a combination method. He Shuqi and Chen Lie used the $$(G'/G)$$-expansion method to solve the KBK equation . Sayed and Elhamahmy proposed a scheme to solve the KBK equation using a sech-tanh method and Wu’s elimination method . However, studies of the nonlinear time-fractional KBK equation are scarce in the literature. In , a Legendre wavelet method was presented to solve the fractional KBK equation. Song and Zhang used the homotopy analysis method to solve a fractional KBK equation . In , the fractional KBK equation was solved using He’s variational iteration method and Adomian’s decomposition method.

For many years, the Galerkin method has been used to find numerical solutions of differential equations. The most important advantage of the presented method is that it transforms Eq. (1) into a nonlinear system of algebraic equations which can be easily solved with Newton’s method. In recent years, the Galerkin method has been used to obtain numerical solutions of fractional-order boundary value problems , the fractional Benney equation , hyperbolic partial differential equations , the stochastic heat equation , fractional sub-diffusion and time-fractional diffusion-wave equations , nonlinear stochastic integral equations , ordinary differential equations with non-analytic solutions , the one-dimensional advection-diffusion equation , and second-order parabolic partial differential equations .

The structure of this article is as follows. Some definitions and mathematical preliminaries of fractional calculus are given in Sect. 2. Gegenbauer polynomials and Gegenbauer wavelets are introduced in Sect. 3. The approximation of a function by Gegenbauer wavelets is given in Sect. 4, followed by block pulse functions and the approximation of nonlinear terms by Gegenbauer wavelets in Sect. 5. We define the operational matrix of fractional integration in Sect. 6. In Sect. 7, the Gegenbauer wavelet Galerkin method (GWGM) is applied to obtain the numerical solution of the time-fractional KBK equation. Sect. 8 presents the test examples, and a conclusion is given in Sect. 9.

## Preliminaries and notations

We give some fundamental definitions and properties of the fractional calculus theory which are required for establishing our results.

### Definition

(Riemann–Liouville fractional integral operator of order α)

The Riemann–Liouville fractional integral operator $$I^{\alpha } ( \alpha > 0 )$$ of a function $$u ( t )$$, is defined as [23, 24]

$$I^{\alpha } u ( t ) = \textstyle\begin{cases} \frac{1}{\varGamma ( \alpha )}\int _{0}^{t} ( t -\rho )^{\alpha -1}u ( \rho )\,d\rho , & \alpha > 0, \alpha \in R^{ +}. \\ u(t), & \alpha = 0. \end{cases}$$

The operator $$I^{\alpha }$$ has the following properties:

\begin{aligned} &I^{\alpha } I^{\mu } = I^{\alpha + \mu },\quad ( \alpha > 0,\mu > 0 ), \\ &I^{\alpha } I^{\mu } = I^{\mu } I^{\alpha }, \\ & \bigl( I^{\alpha } I^{\mu } u \bigr) ( t ) = \bigl( I ^{\mu } I^{\alpha } u \bigr) ( t ), \\ &I^{\alpha } ( t -a )^{\gamma } = \frac{\varGamma ( \gamma + 1 )}{\varGamma ( \alpha + \gamma + 1 )} ( t -a )^{\alpha + \gamma },\quad ( \gamma > -1 ). \end{aligned}
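The last property is easy to verify numerically straight from the definition. The following is a minimal sketch (the function name `rl_integral` and the midpoint-rule discretization are our own choices, not part of the method of this paper) checking $$I^{\alpha } t^{\gamma } = \frac{\varGamma ( \gamma + 1 )}{\varGamma ( \alpha + \gamma + 1 )}t^{\alpha + \gamma }$$ for $$\alpha = 0.5$$, $$\gamma = 2$$, $$a = 0$$:

```python
import math

def rl_integral(u, t, alpha, n=20000):
    """Riemann-Liouville fractional integral I^alpha u(t), approximated by
    the midpoint rule on the weakly singular kernel (t - rho)**(alpha - 1)."""
    h = t / n
    total = 0.0
    for k in range(n):
        rho = (k + 0.5) * h          # midpoint avoids the rho = t singularity
        total += (t - rho) ** (alpha - 1) * u(rho)
    return total * h / math.gamma(alpha)

alpha, gam, t = 0.5, 2.0, 1.0
numeric = rl_integral(lambda s: s ** gam, t, alpha)
exact = math.gamma(gam + 1) / math.gamma(alpha + gam + 1) * t ** (alpha + gam)
print(numeric, exact)  # agree up to the discretization error
```

The two values agree to roughly three decimal places; the error comes only from the quadrature near the integrable singularity at ρ = t.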

### Definition

(Caputo fractional derivative operator of order α)

The fractional derivative operator of order $$\alpha > 0$$ in the Caputo sense is given as [23, 24]

$$D_{t}^{\alpha } u ( t ) = I^{m -\alpha } D^{m}u(t) = \textstyle\begin{cases} \frac{1}{\varGamma (m -\alpha )}\int _{0}^{t} \frac{1}{ ( t -\rho )^{ ( \alpha -m + 1 )}}\frac{d^{m}u(\rho )}{d \rho ^{m}}\,d\rho , & m -1 < \alpha \le m, m \in N, \\ \frac{d^{m}u(t)}{dt ^{m}}, & \alpha = m, m \in N. \end{cases}$$

Some of the properties of Caputo fractional derivative are:

\begin{aligned} &\mathrm{(i)}\quad I^{\alpha } D^{\alpha } u(t) = u ( t ) -\sum _{k = 0}^{m -1} u^{ ( k )} \bigl( 0^{ +} \bigr)\frac{t^{k}}{k!},\quad m -1 < \alpha \le m, m \in N \\ &\mathrm{(ii)}\quad D^{\alpha } I^{\alpha } u ( t ) = u ( t ) \\ &\mathrm{(iii)}\quad D^{\alpha } t^{\mu } = \textstyle\begin{cases} \frac{\varGamma ( \mu + 1 )}{\varGamma ( \mu -\alpha + 1 )}t^{\mu -\alpha }, & \mu > \alpha -1, \\ 0, & \mu \le \alpha -1. \end{cases}\displaystyle \end{aligned}
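Property (iii) can likewise be checked against the Caputo definition. A minimal sketch (the helper name `caputo_deriv` is ours; it takes the m-th integer derivative of u as input, per the definition above) for $$\alpha = 0.5$$, $$\mu = 2$$:

```python
import math

def caputo_deriv(du_m, t, alpha, m, n=20000):
    """Caputo derivative of order alpha (m-1 < alpha <= m), computed from the
    definition by the midpoint rule, given the m-th derivative du_m of u."""
    h = t / n
    total = 0.0
    for k in range(n):
        rho = (k + 0.5) * h
        total += (t - rho) ** (m - alpha - 1) * du_m(rho)
    return total * h / math.gamma(m - alpha)

# Property (iii): D^alpha t^mu = Gamma(mu+1)/Gamma(mu-alpha+1) t^(mu-alpha)
alpha, mu, t = 0.5, 2.0, 1.0
numeric = caputo_deriv(lambda s: 2 * s, t, alpha, m=1)   # d/ds s^2 = 2s
exact = math.gamma(mu + 1) / math.gamma(mu - alpha + 1) * t ** (mu - alpha)
print(numeric, exact)
```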

## Gegenbauer polynomials and Gegenbauer wavelets

The Gegenbauer polynomials (a special type of Jacobi polynomials)  $$G_{n}^{\beta } (x)$$, with $$\beta > -\frac{1}{2}$$ and $$n \in Z ^{ +}$$, are defined on $$[ -1,1 ]$$ by the recurrence formulas

\begin{aligned} &G_{0}^{\beta } (x) = 1,\qquad G_{1}^{\beta } (x) = 2\beta x, \\ &G_{n + 1}^{\beta } (x) = \frac{1}{n + 1} \bigl( 2 ( n + \beta )xG _{n}^{\beta } (x) -(n + 2\beta -1)G_{n -1}^{\beta } (x) \bigr),\quad n = 1,2,3,\ldots. \end{aligned}
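The recurrence can be evaluated in a few lines; the helper name `gegenbauer` is ours, and this is a sketch of the recurrence above rather than a production evaluator. For n = 2 the recurrence yields the closed form $$G_{2}^{\beta } (x) = 2\beta ( \beta + 1 )x^{2} -\beta$$, which we use as a check:

```python
def gegenbauer(n, beta, x):
    """Evaluate G_n^beta(x) by the three-term recurrence above."""
    if n == 0:
        return 1.0
    g_prev, g = 1.0, 2.0 * beta * x          # G_0 and G_1
    for k in range(1, n):
        # G_{k+1} = (2(k+beta) x G_k - (k+2beta-1) G_{k-1}) / (k+1)
        g_prev, g = g, (2.0 * (k + beta) * x * g
                        - (k + 2.0 * beta - 1.0) * g_prev) / (k + 1.0)
    return g

beta, x = 1.5, 0.3
closed = 2 * beta * (beta + 1) * x ** 2 - beta   # G_2^beta in closed form
print(gegenbauer(2, beta, x), closed)
```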

These polynomials are defined by the generating function

$$\frac{1}{ ( 1 -2xt + t^{2} )^{\beta }} = \sum_{n = 0} ^{\infty } G_{n}^{\beta } ( x )t^{n}.$$

Some of the properties of Gegenbauer polynomials are

\begin{aligned} &\frac{d}{dx} \bigl( G_{n}^{\beta } ( x ) \bigr) = 2 \beta G_{n -1}^{\beta + 1} ( x ),\qquad \frac{d^{k}}{dx^{k}} \bigl( G_{n}^{\beta } ( x ) \bigr) = 2^{k}\beta ^{k}G _{n -k}^{\beta + k} ( x ),\quad n \ge 1, \\ &( n + \beta )G_{n}^{\beta } ( x ) = \beta \bigl( G_{n}^{\beta + 1} ( x ) -G_{n -2}^{\beta + 1} ( x ) \bigr),\quad n \ge 2, \\ &\frac{d}{dx} \bigl( G_{n + 1}^{\beta } ( x ) -G_{n -1} ^{\beta } ( x ) \bigr) = 2\beta \bigl( G_{n}^{\beta + 1} ( x ) -G_{n -2}^{\beta + 1} ( x ) \bigr) = 2 ( n + \beta )G_{n}^{\beta } ( x ). \end{aligned}

The equation given as

$$\int \bigl( 1 -x^{2} \bigr)^{\beta -1/2}G_{n}^{\beta } ( x )\,dx = -\frac{2\beta ( 1 -x^{2} )^{\beta + 1/2}}{n ( n + 2\beta )} G_{n -1}^{\beta + 1} ( x ), \quad n \ge 1$$

is obtained from the Rodrigues formula .

Gegenbauer polynomials are orthogonal with respect to the weight function $$\omega (x) = ( 1 -x^{2} )^{\beta -\frac{1}{2}}$$; that is,

$$\int _{ -1}^{1} \bigl( 1 -x^{2} \bigr)^{\beta -\frac{1}{2}}G_{m} ^{\beta } ( x ) G_{n}^{\beta } ( x )\,dx = K _{n}^{\beta } \delta _{nm},\quad \beta > -\frac{1}{2}$$

in which $$K_{n}^{\beta } = \frac{\pi 2^{1 -2\beta } \varGamma (n + 2 \beta )}{n!(n + \beta ) ( \varGamma ( \beta ) ) ^{2}}$$ is called the normalizing factor, and $$\delta _{nm}$$ is the Kronecker symbol.

Legendre polynomials and Chebyshev polynomials are special cases of Gegenbauer polynomials: in the limit $$\beta \to 0$$ (after suitable rescaling), we get the first-kind Chebyshev polynomials; for $$\beta = 1/2$$, we get the Legendre polynomials; and for $$\beta = 1$$, we get the second-kind Chebyshev polynomials.
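These special cases can be confirmed numerically from the recurrence (a self-contained sketch; `gegenbauer` is our own helper implementing the three-term recurrence of this section). For β = 1 we compare with $$U_{n}(\cos \theta ) = \sin ( (n+1)\theta )/\sin \theta$$, and for β = 1/2 with $$P_{2}(x) = (3x^{2} -1)/2$$:

```python
import math

def gegenbauer(n, beta, x):
    """G_n^beta(x) via the three-term recurrence of this section."""
    if n == 0:
        return 1.0
    g_prev, g = 1.0, 2.0 * beta * x
    for k in range(1, n):
        g_prev, g = g, (2.0 * (k + beta) * x * g
                        - (k + 2.0 * beta - 1.0) * g_prev) / (k + 1.0)
    return g

x = 0.4
theta = math.acos(x)
# beta = 1: Chebyshev polynomials of the second kind
for n in range(5):
    u_n = math.sin((n + 1) * theta) / math.sin(theta)
    assert abs(gegenbauer(n, 1.0, x) - u_n) < 1e-12
# beta = 1/2: Legendre polynomials
assert abs(gegenbauer(2, 0.5, x) - (3 * x ** 2 - 1) / 2) < 1e-12
print("special cases verified")
```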

The basic wavelet (mother wavelet) is given on the basis of scaling and translation parameters as

$$\psi _{a,b} ( x ) = \frac{1}{\sqrt{ \vert a \vert }} \psi \biggl( \frac{x -b}{a} \biggr),\quad a,b \in R,a \ne 0,$$

in which a and b are the scaling and translation parameters, respectively. By restricting $$a,b$$ to discrete values as $$a = a_{0} ^{ -k},b = nb_{0}a_{0}^{ -k}$$, where $$a_{0} > 1,b_{0} > 0$$ and $$k,n \in N$$, we acquire the following discrete wavelets:

$$\psi _{k,n}(x) = ( a_{0} )^{\frac{k}{2}}\psi \bigl( a _{0}^{k}x -nb_{0} \bigr)$$

which form an orthogonal basis of $$L_{2} ( R )$$. If $$a_{0} = 2$$ and $$b_{0} = 1$$, then the $$\psi _{k,n}$$ form an orthonormal basis.

In this case, the discrete wavelets take the form

$$\psi _{k,n} ( x ) = ( 2 )^{\frac{k}{2}}\psi \bigl( 2^{k}x -n \bigr).$$

Gegenbauer wavelets are defined on the interval $$[ 0,1 ]$$ by

$$\psi _{n,m}^{\beta } ( x ) = \textstyle\begin{cases} \frac{1}{\sqrt{K_{m}^{\beta }}} 2^{\frac{k}{2}}G_{m}^{\beta } ( 2^{k}x -\hat{n} ), & \frac{\hat{n} -1}{2^{k}} \le x \le \frac{ \hat{n} + 1}{2^{k}}, \\ 0, & \text{elsewhere}, \end{cases}$$

in which $$k = 1,2,3,\ldots$$ is the level of resolution; $$n = 1,2,3,\ldots,2^{k -1}$$, with $$\hat{n} = 2n -1$$, is the translation parameter; and $$m = 0,1,2,\ldots,M -1$$, $$M > 0$$, is the order of the Gegenbauer polynomial. Corresponding to each $$\beta > -\frac{1}{2}$$, a different wavelet family is obtained; for example, when $$\beta = \frac{1}{2}$$, Gegenbauer wavelets are identical to Legendre wavelets. For $$\beta = 0$$ and $$\beta = 1$$, we obtain the Chebyshev wavelets of the first kind and of the second kind, respectively. In this study, we use the Gegenbauer wavelets with $$\beta = \frac{1}{2}$$ and $$\beta = \frac{3}{2}$$.

For Gegenbauer wavelets the weight function is given as follows:

$$\omega _{n} ( x ) = \textstyle\begin{cases} \omega ( 2^{k}x -2n + 1 ) = ( 1 - ( 2^{k}x -2n + 1 )^{2} )^{\beta -\frac{1}{2}}, & x \in [ \frac{n -1}{2^{k -1}},\frac{n}{2^{k -1}} ], \\ 0, & \text{otherwise}. \end{cases}$$
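As a concrete illustration, the wavelets above are orthonormal on $$[0,1]$$ with respect to $$\omega _{n}$$. The sketch below checks this for the simplest case β = 1/2 (the Legendre case, where the weight is identically 1, so plain quadrature suffices); the helper names `legendre` and `psi` are ours, and we use $$K_{m}^{1/2} = \frac{2}{2m+1}$$, i.e. $$1/\sqrt{K_{m}^{1/2}} = \sqrt{m + 1/2}$$:

```python
import math

def legendre(m, x):
    """P_m(x): for beta = 1/2, G_m^{1/2}(x) is the Legendre polynomial."""
    if m == 0:
        return 1.0
    p_prev, p = 1.0, x
    for j in range(1, m):
        p_prev, p = p, ((2 * j + 1) * x * p - j * p_prev) / (j + 1)
    return p

def psi(n, m, k, x):
    """Gegenbauer wavelet psi_{n,m}^{1/2}(x) (beta = 1/2, weight = 1)."""
    nh = 2 * n - 1
    if not ((nh - 1) / 2 ** k <= x <= (nh + 1) / 2 ** k):
        return 0.0
    return math.sqrt(m + 0.5) * 2 ** (k / 2) * legendre(m, 2 ** k * x - nh)

# Orthonormality on [0,1] with k = 1, n = 1 (midpoint rule)
N = 20000
h = 1.0 / N
xs = [(j + 0.5) * h for j in range(N)]
ip_01 = sum(psi(1, 0, 1, x) * psi(1, 1, 1, x) for x in xs) * h
ip_11 = sum(psi(1, 1, 1, x) ** 2 for x in xs) * h
print(round(ip_01, 8), round(ip_11, 8))  # ~0 and ~1
```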

## Function approximation

A function $$u(x) \in L^{2} [ 0,1 ]$$ can be expanded in terms of Gegenbauer wavelets as

$$u ( x ) = \sum_{n = 0}^{\infty } \sum _{m = 0}^{\infty } c _{n,m}\psi _{n,m}(x)$$
(4)

in which the wavelet coefficients $$c_{n,m}$$ can be determined by

$$c_{n,m} = \bigl\langle u ( x ),\psi _{n,m}(x) \bigr\rangle _{\omega _{n}}.$$

We approximate the infinite series expansion in Eq. (4) by a truncated series as

$$u ( x ) \simeq \sum_{n = 1}^{2^{k -1}} \sum _{m = 0}^{M -1} c _{n,m}\psi _{n,m} ( x ) = C^{T}\varPsi ( x ),$$
(5)

where T denotes transposition, and $$\varPsi ( x )$$ and C are $$2^{k -1}M \times 1$$ column vectors.

For simplicity, Eq. (5) can be written as

$$u(x) = \sum_{i = 1}^{\tilde{m}} c_{i}\psi _{i} ( x )$$
(6)

in which $$\tilde{m} = ( 2^{k -1}M )$$, $$C \stackrel{ {\Delta }}{=} [ c_{1},c_{2},\ldots,c_{\tilde{m}} ]^{T}$$,

$$\varPsi ( x ) \stackrel{{\Delta }}{=} \bigl[ \psi _{1} ( x ),\ldots, \psi _{\tilde{m}} ( x ) \bigr] ^{T}$$
(7)

and the index i can be obtained from the relation $$i = M ( n -1 ) + m + 1$$.

Similarly, $$u(x,t) \in L^{2} ( [ 0,1 ] \times [ 0,1 ] )$$ can be approximated in terms of Gegenbauer wavelets as

$$u ( x,t ) = \sum_{i = 1}^{\tilde{m}} \sum _{j = 1}^{ \tilde{m}} u_{i,j}\psi _{i}(x)\psi _{j} ( t ) = \varPsi ^{T} ( x )U \varPsi ( t )$$
(8)

in which the wavelet coefficients $$u_{i,j}$$ can be calculated by

$$u_{i,j} = \bigl\langle \psi _{i} ( x ), \bigl\langle u(x,t), \psi _{j} ( t ) \bigr\rangle _{\omega _{n}} \bigr\rangle _{\omega _{n}}.$$

By substituting the collocation points $$x_{i} = \frac{2i -1}{2 \tilde{m}},i = 1,2,\ldots,\tilde{m}$$ into Eq. (7), the Gegenbauer wavelet matrix $$\varPhi _{\tilde{m} \times \tilde{m}}$$ is defined as

$$\varPhi _{\tilde{m} \times \tilde{m}} = \biggl[ \varPsi \biggl( \frac{1}{2 \tilde{m}} \biggr), \varPsi \biggl( \frac{3}{2\tilde{m}} \biggr),\ldots, \varPsi \biggl( \frac{2\tilde{m} -1}{2\tilde{m}} \biggr) \biggr].$$
(9)
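Assembling Ψ(x) with the index map $$i = M ( n -1 ) + m + 1$$ and evaluating it at the collocation points gives the matrix Φ. A minimal sketch for the Legendre case β = 1/2 (the helper names `legendre` and `psi_vector` are ours; `Phi_cols[i-1]` holds the i-th column $$\varPsi (x_{i})$$ of Φ):

```python
import math

def legendre(m, x):
    """P_m(x) = G_m^{1/2}(x) via the Legendre recurrence."""
    if m == 0:
        return 1.0
    p_prev, p = 1.0, x
    for j in range(1, m):
        p_prev, p = p, ((2 * j + 1) * x * p - j * p_prev) / (j + 1)
    return p

def psi_vector(x, k, M):
    """Psi(x) for beta = 1/2, with entries ordered by i = M*(n-1) + m + 1."""
    vec = []
    for n in range(1, 2 ** (k - 1) + 1):
        nh = 2 * n - 1
        inside = (nh - 1) / 2 ** k <= x <= (nh + 1) / 2 ** k
        for m in range(M):
            if inside:
                vec.append(math.sqrt(m + 0.5) * 2 ** (k / 2)
                           * legendre(m, 2 ** k * x - nh))
            else:
                vec.append(0.0)      # outside the support of psi_{n,m}
    return vec

k, M = 2, 3
mt = 2 ** (k - 1) * M                # m~ = 2^(k-1) M = 6
Phi_cols = [psi_vector((2 * i - 1) / (2 * mt), k, M) for i in range(1, mt + 1)]
print(len(Phi_cols), len(Phi_cols[0]))  # a 6 x 6 wavelet matrix
```

Note the block structure: at a collocation point in the first half-interval, all entries belonging to n = 2 vanish, since the wavelet supports are disjoint.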

We need the following theorem for the convergence analysis for the Gegenbauer wavelet expansion.

### Theorem 4.1

(Bernstein-type inequality )

For Gegenbauer polynomials,

$$( \sin \theta )^{\beta } \bigl\vert G_{m}^{\beta } ( \cos \theta ) \bigr\vert < \frac{2^{1 -\beta } \varGamma ( m + 3\beta /2 )}{\varGamma ( \beta )\varGamma ( m + 1 + \beta /2 )}, \quad 0 \le \theta \le \pi , 0 < \beta < 1.$$

### Theorem 4.2

(Convergence theorem)

A function $$u ( x,t ) \in L^{2} ( R \times R )$$ defined on $$[ 0,1 ] \times [ 0,1 ]$$ can be expanded as an infinite series of Gegenbauer wavelets, which converges uniformly to $$u(x,t)$$, provided $$u(x,t)$$ has a bounded mixed fourth partial derivative $$\vert \frac{ \partial ^{4}u ( x,t )}{\partial x^{2}\,\partial t^{2}} \vert \le M$$.

### Proof

Let $$u(x,t)$$ be a function defined on $$[ 0,1 ] \times [ 0,1 ]$$ with $$\vert \frac{\partial ^{4}u ( x,t )}{\partial x^{2}\,\partial t^{2}} \vert \le M$$. The Gegenbauer wavelet coefficients of the continuous function $$u ( x,t )$$ are defined as

\begin{aligned} u_{ij} ={}& \int _{0}^{1} \int _{0}^{1} u ( x,t )\psi _{i} ( x )\psi _{j} ( t )\omega ( x ) \omega ( t ) \,dx\,dt \\ ={}& \frac{1}{\sqrt{K_{m_{1}}^{\beta }}} \frac{1}{\sqrt{K_{m_{2}} ^{\beta }}} 2^{\frac{k_{1} + k_{2}}{2}} \int _{\frac{n_{2} -1}{2^{k_{2} -1}}}^{\frac{n_{2}}{2^{k_{2} -1}}} \int _{\frac{n_{1} -1}{2^{k_{1} -1}}}^{\frac{n_{1}}{2^{k_{1} -1}}} u ( x,t )G_{m_{1}}^{\beta } \bigl( 2^{k_{1}}x -2n_{1} + 1 \bigr) \\ &{}\times\omega \bigl( 2^{k_{1}}x -2n_{1} + 1 \bigr)G_{m_{2}}^{\beta } \bigl( 2^{k_{2}}t -2n_{2} + 1 \bigr)\omega \bigl( 2^{k_{2}}t -2n_{2} + 1 \bigr)\,dx\,dt. \end{aligned}
(10)

Using the change of variable $$2^{k_{1}}x -2n_{1} + 1 = x_{1}$$, we get

\begin{aligned} u_{ij} ={}& \frac{1}{\sqrt{K_{m_{1}}^{\beta }}} \frac{1}{\sqrt{K_{m _{2}}^{\beta }}} \frac{2^{\frac{k_{1} + k_{2}}{2}}}{2^{k_{1}}} \int _{\frac{n_{2} -1}{2^{k_{2} -1}}}^{\frac{n_{2}}{2^{k_{2} -1}}} \biggl( \int _{ -1}^{1} u \biggl( \frac{x_{1} + 2n_{1} -1}{2^{k_{1}}},t \biggr)G _{m_{1}}^{\beta } ( x_{1} )\omega ( x_{1} )\,dx _{1} \biggr) \\ &{}\times G_{m_{2}}^{\beta } \bigl( 2^{k_{2}}t -2n_{2} + 1 \bigr)\omega \bigl( 2^{k_{2}}t -2n_{2} + 1 \bigr)\,dt. \end{aligned}
(11)

Now we calculate the inner integral using integration by parts, obtaining

\begin{aligned} &\int _{ -1}^{1} u \biggl( \frac{x_{1} + 2n_{1} -1}{2^{k_{1}}},t \biggr)G _{m_{1}}^{\beta } ( x_{1} )\omega ( x_{1} )\,dx _{1} \\ &\quad = \frac{1}{2^{k_{1}}} \frac{2\beta }{m_{1} ( m_{1} + 2\beta )} \int _{ -1}^{1} \frac{\partial u ( \frac{x_{1} + 2n_{1} -1}{2^{k _{1}}},t )}{\partial x_{1}}G_{m_{1} -1}^{\beta + 1} ( x _{1} ) \bigl( 1 -x_{1}^{2} \bigr)^{\beta + 1/2}\,dx_{1}. \end{aligned}
(12)

Integrating (12) by parts again, we obtain

\begin{aligned} &\int _{ -1}^{1} u \biggl( \frac{x_{1} + 2n_{1} -1}{2^{k_{1}}},t \biggr)G _{m_{1}}^{\beta } ( x_{1} )\omega ( x_{1} )\,dx _{1} \\ &\quad = \frac{2^{2}\beta ( \beta + 1 )}{2^{2k_{1}}m_{1} ( m_{1} + 2\beta ) ( m_{1} -1 ) ( m _{1} + 1 + 2\beta )} \\ &\qquad{}\times \int _{ -1}^{1} \frac{\partial ^{2}u}{\partial x_{1}^{2}}G_{m _{1} -2}^{\beta + 2} ( x_{1} ) \bigl( 1 -x_{1}^{2} \bigr)^{\beta + 3/2}\,dx_{1}. \end{aligned}
(13)

Let $$x_{1} = \cos \theta _{1}$$, then

\begin{aligned} &\int _{ -1}^{1} u \biggl( \frac{x_{1} + 2n_{1} -1}{2^{k_{1}}},t \biggr)G _{m_{1}}^{\beta } ( x_{1} )\omega ( x_{1} )\,dx _{1} \\ &\quad = \frac{2^{2}\beta ( \beta + 1 )}{2^{2k_{1}}m_{1} ( m_{1} + 2\beta ) ( m_{1} -1 ) ( m _{1} + 1 + 2\beta )} \\ &\qquad{}\times \int _{0}^{\pi } \frac{\partial ^{2}u}{\partial \theta _{1}^{2}}G _{m_{1} -2}^{\beta + 2} ( \cos \theta _{1} ) ( \sin \theta _{1} )^{2\beta + 4}\,d\theta _{1}. \end{aligned}
(14)

By substituting Eq. (14) in Eq. (11),

\begin{aligned} u_{ij} = {}&\frac{1}{\sqrt{K_{m_{1}}^{\beta }}} \frac{1}{\sqrt{K_{m _{2}}^{\beta }}} \frac{2^{\frac{k_{1} + k_{2}}{2}}}{2^{3k_{1}}}\frac{2^{2} \beta ( \beta + 1 )}{m_{1} ( m_{1} + 2\beta ) ( m_{1} -1 ) ( m_{1} + 1 + 2\beta )} \\ &{}\times \int _{0}^{\pi } \biggl( \int _{\frac{n_{2} -1}{2^{k_{2} -1}}} ^{\frac{n_{2}}{2^{k_{2} -1}}} \frac{\partial ^{2}u}{\partial \theta _{1}^{2}}G_{m_{2}}^{\beta } \bigl( 2^{k_{2}}t -2n_{2} + 1 \bigr)\omega \bigl( 2^{k_{2}}t -2n_{2} + 1 \bigr)\,dt \biggr) \\ &{}\times G_{m_{1} -2}^{ \beta + 2} ( \cos \theta _{1} ) ( \sin \theta _{1} ) ^{2\beta + 4}\,d\theta _{1}. \end{aligned}
(15)

Similarly, we can calculate the following integral using integration by parts:

\begin{aligned} &\int _{\frac{n_{2} -1}{2^{k_{2} -1}}}^{\frac{n_{2}}{2^{k_{2} -1}}} \frac{ \partial ^{2}u}{\partial \theta _{1}^{2}}G_{m_{2}}^{\beta } \bigl( 2^{k _{2}}t -2n_{2} + 1 \bigr)\omega \bigl( 2^{k_{2}}t -2n_{2} + 1 \bigr)\,dt \\ &\quad = \frac{1}{2^{k_{2}}} \int _{ -1}^{1} \frac{\partial ^{2}u ( \frac{\cos \theta _{1} + 2n_{1} -1}{2^{k_{1}}},\frac{t _{1} + 2n_{2} -1}{2^{k_{2}}} )}{\partial \theta _{1}^{2}}G_{m_{2}}^{\beta } ( t_{1} )\omega ( t_{1} )\,dt _{1}, \end{aligned}
(16)

where $$2^{k_{2}}t -2n_{2} + 1 = t_{1}$$. If we integrate (16) twice by parts and use the substitution $$t_{1} = \cos \theta _{2}$$, then

\begin{aligned} &\frac{1}{2^{k_{2}}} \int _{ -1}^{1} \frac{\partial ^{2}u ( \frac{ \cos \theta _{1} + 2n_{1} -1}{2^{k_{1}}},\frac{\cos \theta _{2} + 2n _{2} -1}{2^{k_{2}}} )}{\partial \theta _{1}^{2}}G_{m_{2}}^{\beta } ( t_{1} )\omega ( t_{1} )\,dt_{1} \\ &\quad = \frac{2^{2} \beta ( \beta + 1 )}{2^{3k_{2}}m_{2} ( m_{2} + 2 \beta ) ( m_{2} -1 ) ( m_{2} + 1 + 2\beta )} \\ &\qquad{}\times \int _{0}^{\pi } \frac{\partial ^{4}u}{\partial \theta _{1}^{2} \partial \theta _{2}^{2}}G_{m_{2} -2}^{\beta + 2} ( \cos \theta _{2} ) ( \sin \theta _{2} )^{2\beta + 4}\,d\theta _{2} . \end{aligned}
(17)

By substituting Eq. (17) in Eq. (15), we obtain

\begin{aligned} u_{ij} = {}&\frac{1}{\sqrt{K_{m_{1}}^{\beta }}} \frac{1}{\sqrt{K_{m _{2}}^{\beta }}} \frac{2^{\frac{k_{1} + k_{2}}{2}}}{2^{3k_{1}}2^{3k _{2}}} \\ &{}\times \frac{2^{4}\beta ^{2} ( \beta + 1 )^{2}}{m_{1} ( m_{1} + 2\beta ) ( m_{1} -1 ) ( m_{1} + 1 + 2 \beta )m_{2} ( m_{2} + 2\beta ) ( m_{2} -1 ) ( m_{2} + 1 + 2\beta )} \\ &{}\times \int _{0}^{\pi } \biggl( \int _{0}^{\pi } \frac{\partial ^{4}u}{ \partial \theta _{1}^{2}\partial \theta _{2}^{2}}G_{m_{2} -2}^{\beta + 2} ( \cos \theta _{2} ) ( \sin \theta _{2} )^{2 \beta + 4}\,d\theta _{2} \biggr) G_{m_{1} -2}^{\beta + 2} ( \cos \theta _{1} ) ( \sin \theta _{1} )^{2\beta + 4}\,d \theta _{1}, \\ u_{ij} ={}& \frac{1}{\sqrt{K_{m_{1}}^{\beta }}} \frac{1}{\sqrt{K_{m _{2}}^{\beta }}} \frac{1}{2^{\frac{5 ( k_{1} + k_{2} ) -8}{2}}}\frac{ \beta ^{2} ( \beta + 1 )^{2}}{ ( m_{1} -1 ) _{2} ( m_{1} -1 + 2\beta )_{2} ( m_{2} -1 ) _{2} ( m_{2} -1 + 2\beta )_{2}} \\ &{}\times \int _{0}^{\pi } \int _{0}^{\pi } \frac{\partial ^{4}u}{ \partial \theta _{1}^{2}\partial \theta _{2}^{2}}G_{m_{1} -2}^{\beta + 2} ( \cos \theta _{1} ) ( \sin \theta _{1} ) ^{2\beta + 4} G_{m_{2} -2}^{\beta + 2} ( \cos \theta _{2} ) ( \sin \theta _{2} )^{2\beta + 4}\,d\theta _{1}\,d\theta _{2}. \end{aligned}
(18)

From $$\vert \frac{\partial ^{4}u ( x,t )}{\partial x^{2} \partial t^{2}} \vert \le M$$ and Theorem 4.1,

\begin{aligned} \vert u_{ij} \vert \le{}& \frac{1}{\sqrt{K_{m_{1}}^{\beta }}} \frac{1}{\sqrt{K _{m_{2}}^{\beta }}} \frac{1}{2^{\frac{5 ( k_{1} + k_{2} ) -8}{2}}}\frac{\beta ^{2} ( \beta + 1 )^{2}}{ ( m_{1} -1 )_{2} ( m_{1} -1 + 2\beta )_{2} ( m_{2} -1 )_{2} ( m_{2} -1 + 2\beta )_{2}} \\ &{}\times \int _{0}^{\pi } \int _{0}^{\pi } \biggl\vert \frac{\partial ^{4}u}{ \partial \theta _{1}^{2}\partial \theta _{2}^{2}} \biggr\vert \bigl\vert G_{m _{1} -2}^{\beta + 2} ( \cos \theta _{1} ) \bigr\vert ( \sin \theta _{1} )^{2\beta + 4} \bigl\vert G_{m_{2} -2}^{\beta + 2} ( \cos \theta _{2} ) \bigr\vert ( \sin \theta _{2} )^{2 \beta + 4}\,d\theta _{1}\,d\theta _{2} \\ \le{}& \lambda M \int _{0}^{\pi } \int _{0}^{\pi } \bigl\vert G_{m_{1} -2} ^{\beta + 2} ( \cos \theta _{1} ) \bigr\vert ( \sin \theta _{1} )^{2\beta + 4} \bigl\vert G_{m_{2} -2}^{\beta + 2} ( \cos \theta _{2} ) \bigr\vert ( \sin \theta _{2} )^{2 \beta + 4}\,d\theta _{1}\,d\theta _{2} \\ < {}& \lambda M\pi ^{2}\frac{1}{2^{2\beta + 2}}\frac{\varGamma ( m_{1} + 1 + \frac{3\beta }{2} )\varGamma ( m_{2} + 1 + \frac{3 \beta }{2} )}{\varGamma ( m_{1} + \frac{\beta }{2} ) \varGamma ( m_{2} + \frac{\beta }{2} )\varGamma ( \beta + 2 )^{2}}, \end{aligned}

where

$$\lambda = \frac{1}{\sqrt{K_{m_{1}}^{\beta }}} \frac{1}{\sqrt{K _{m_{2}}^{\beta }}} \frac{1}{2^{\frac{5 ( k_{1} + k_{2} ) -8}{2}}} \frac{\beta ^{2} ( \beta + 1 )^{2}}{ ( m_{1} -1 )_{2} ( m_{1} -1 + 2\beta )_{2} ( m_{2} -1 )_{2} ( m_{2} -1 + 2\beta )_{2}}.$$

Accordingly, the series $$\sum_{i = 0}^{\infty } \sum_{j = 0}^{\infty } u_{ij}$$ is absolutely convergent. □

## Block Pulse Functions (BPFs)

Block pulse functions (BPFs) constitute a complete set of orthogonal functions , which are given on the interval $$[ 0,b )$$ by

$$b_{i} ( x ) = \textstyle\begin{cases} 1, & \frac{i -1}{\hat{m}}b \le x < \frac{i}{\hat{m}}b, \\ 0, & \text{otherwise}, \end{cases}\displaystyle \quad i = 1,2,\ldots, \hat{m}.$$

An arbitrary function $$u(x)$$ on the interval $$[ 0,b )$$ can be represented by BPFs as

$$u ( x ) \simeq \zeta ^{T}B_{\hat{m}} ( x ),$$

where

\begin{aligned} &\zeta ^{T} = [ u_{1},u_{2}, \ldots,u_{\hat{m}} ], \\ &B_{\hat{m}} = \bigl[ b_{1}(x),b_{2} ( x ), \ldots,b_{ \hat{m}}(x) \bigr] \end{aligned}

in which the $$u_{i}$$ variables are the coefficients of the block pulse function,

$$u_{i} = \frac{\hat{m}}{b} \int _{0}^{b} u ( x )b_{i} ( x ) \,dx = \frac{\hat{m}}{b} \int _{ ( ( i -1 )/\hat{m} )b}^{ ( i/\hat{m} )b} u ( x ) \,dx.$$
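In other words, each coefficient is the average of u over the i-th cell, and the BPF expansion is piecewise constant. A minimal sketch (the helper names `bpf_coeffs` and `bpf_eval` are ours) for $$u(x) = x^{2}$$ on $$[0,1)$$:

```python
def bpf_coeffs(u, m_hat, b=1.0):
    """Block pulse coefficients u_i = (m^ / b) * integral of u over cell i."""
    coeffs = []
    N = 200                                  # subsamples per cell (midpoint rule)
    for i in range(1, m_hat + 1):
        lo = (i - 1) / m_hat * b
        h = (b / m_hat) / N
        s = sum(u(lo + (j + 0.5) * h) for j in range(N)) * h
        coeffs.append(m_hat / b * s)
    return coeffs

def bpf_eval(coeffs, x, m_hat, b=1.0):
    """Evaluate the piecewise-constant BPF expansion at x."""
    i = min(int(x / b * m_hat), m_hat - 1)   # cell index of x
    return coeffs[i]

u = lambda x: x * x
c = bpf_coeffs(u, 32)
err = max(abs(bpf_eval(c, x, 32) - u(x)) for x in [0.1, 0.33, 0.7, 0.95])
print(err)  # O(1/m^) piecewise-constant approximation error
```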

### Lemma 1

Assume that $$f(x)$$ and $$g(x)$$ are two absolutely integrable functions, and these functions can be expanded in block pulse functions as

\begin{aligned} &f ( x ) = F^{T}B(x), \\ &g(x) = G^{T}B(x). \end{aligned}

Then

$$f(x)g(x) = F^{T}B(x)B^{T}(x)G = HB(x)$$

in which $$H = F^{T} \otimes G^{T}$$ .
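The lemma rests on the disjoint supports of the BPFs: $$b_{i}(x)b_{j}(x) = 0$$ for $$i \ne j$$ and $$b_{i}^{2} = b_{i}$$, so the product's coefficient vector is the elementwise product of the two coefficient vectors. A small sketch under our own choice of test functions (f(x) = x, g(x) = x², coefficients approximated by cell midpoints):

```python
# Elementwise product of BPF coefficient vectors approximates f(x) g(x).
m_hat = 64
mid = [(i + 0.5) / m_hat for i in range(m_hat)]  # cell midpoints
F = mid                        # coefficients of f(x) = x
G = [x * x for x in mid]       # coefficients of g(x) = x^2
H = [fi * gi for fi, gi in zip(F, G)]            # H_i = F_i * G_i

x = 0.37
i = int(x * m_hat)             # cell containing x
err = abs(H[i] - x ** 3)       # f(x) g(x) = x^3
print(err)
```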

### Lemma 2

Assume that $$f(x,t)$$ and $$g(x,t)$$ are two absolutely integrable functions which can be expanded in block pulse functions as

\begin{aligned} &f ( x,t ) = B^{T}(x)FB(t), \\ &g(x,t) = B^{T}(x)GB(t). \end{aligned}

Then

$$f(x,t)g(x,t) = B^{T}(x)HB(t)$$

in which $$H = F \otimes G$$ .

### Nonlinear term approximation by Gegenbauer wavelets

Gegenbauer wavelets may be represented  with an $$\tilde{m}$$-set of block pulse functions as

$$\varPsi ( t ) = \varPhi _{\tilde{m} \times \tilde{m}}B_{ \tilde{m}} ( t ).$$
(19)

The operational matrix of the product of Gegenbauer wavelets can be calculated using the properties of BPFs. The absolutely integrable functions $$f_{1}(x,t)$$ and $$f_{2}(x,t)$$ can be represented by Gegenbauer wavelets as

$$f_{1} ( x,t ) = \varPsi ^{T} ( x )F_{1}\varPsi ( t )$$
(20)

and

$$f_{2} ( x,t ) = \varPsi ^{T} ( x )F_{2}\varPsi ( t ).$$
(21)

From Eq. (19), Eqs. (20)–(21) can be written as

\begin{aligned} &f_{1} ( x,t ) = \varPsi ^{T} ( x )F_{1}\varPsi ( t ) = B^{T} ( x ) \varPhi _{\tilde{m} \times \tilde{m}}^{T}F _{1}\varPhi _{\tilde{m} \times \tilde{m}}B(t) = B^{T} ( x )F_{a}B(t), \\ &f_{2} ( x,t ) = \varPsi ^{T} ( x )F_{2}\varPsi ( t ) = B^{T} ( x )\varPhi _{\tilde{m} \times \tilde{m}}^{T}F _{2}\varPhi _{\tilde{m} \times \tilde{m}}B(t) = B^{T} ( x )F_{b}B(t), \end{aligned}
(22)

where $$F_{a} = \varPhi _{\tilde{m} \times \tilde{m}}^{T}F_{1} \varPhi _{\tilde{m} \times \tilde{m}}$$ and $$F_{b} = \varPhi _{\tilde{m} \times \tilde{m}} ^{T}F_{2}\varPhi _{\tilde{m} \times \tilde{m}}$$. Let $$F_{3} = F_{a} \otimes F _{b}$$, then

\begin{aligned} f_{1}(x,t)f_{2}(x,t) ={}& B^{T} ( x )F_{3}B ( t ) \\ ={}& B^{T} ( x )\varPhi _{\tilde{m} \times \tilde{m}}^{T} \bigl( \varPhi _{\tilde{m} \times \tilde{m}}^{T} \bigr)^{ -1}F_{3}\varPhi _{\tilde{m} \times \tilde{m}}^{ -1}\varPhi _{\tilde{m} \times \tilde{m}}B ( t ) \\ ={}& \varPsi ^{T} ( x )F_{4}\varPsi ( t ) \end{aligned}

in which $$F_{4} = ( \varPhi _{\tilde{m} \times \tilde{m}}^{T} )^{ -1}F_{3}\varPhi _{\tilde{m} \times \tilde{m}}^{ -1}$$.

## Operational matrix of integration

The fractional integration of the vector $$\varPsi (x)$$, which is defined in (7), can be approximated as

$$\bigl( I^{\alpha } \varPsi \bigr) ( x ) \simeq P^{\alpha } \varPsi ( x ),$$

where $$P^{\alpha }$$ is called the Gegenbauer wavelet operational matrix of fractional integration. As given in , the matrix $$P^{\alpha }$$ is defined as

$$P^{\alpha } \simeq \varPhi _{\tilde{m} \times \tilde{m}}\tilde{P}^{\alpha } \varPhi _{\tilde{m} \times \tilde{m}}^{ -1},$$

where the $$\tilde{m} \times \tilde{m}$$ matrix $$\tilde{P}^{\alpha }$$ is the BPF operational matrix of fractional integration, given in [29, 30] as

$$\tilde{P}^{\alpha } = \frac{1}{\tilde{m}^{\alpha }}\frac{1}{\varGamma ( \alpha + 2 )} \begin{bmatrix} 1 & \chi _{1} & \chi _{2} & \ldots & \chi _{\tilde{m} -1} \\ 0 & 1 & \chi _{1} & \ldots & \chi _{\tilde{m} -2} \\ 0 & 0 & 1 & \ldots & \chi _{\tilde{m} -3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix},$$

where $$\chi _{j} = ( j + 1 )^{\alpha + 1} -2j^{\alpha + 1} + ( j -1 )^{\alpha + 1}$$.
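The upper-triangular Toeplitz structure makes this matrix cheap to assemble. A minimal sketch (the function name `bpf_frac_int_matrix` is ours) that builds $$\tilde{P}^{\alpha }$$ and checks it against the exact result $$I^{\alpha }1 = t^{\alpha }/\varGamma ( \alpha + 1 )$$ for the constant function u(t) = 1:

```python
import math

def bpf_frac_int_matrix(alpha, m):
    """BPF operational matrix of fractional integration (upper triangular)."""
    chi = [0.0] * m
    chi[0] = 1.0                              # diagonal entry
    for j in range(1, m):
        chi[j] = ((j + 1) ** (alpha + 1) - 2 * j ** (alpha + 1)
                  + (j - 1) ** (alpha + 1))
    c = 1.0 / (m ** alpha * math.gamma(alpha + 2))
    P = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i, m):
            P[i][j] = c * chi[j - i]          # Toeplitz band
    return P

alpha, m = 0.5, 64
P = bpf_frac_int_matrix(alpha, m)
zeta = [1.0] * m                              # BPF coefficients of u(t) = 1
approx = [sum(zeta[i] * P[i][j] for i in range(m)) for j in range(m)]
t = (40 + 0.5) / m                            # midpoint of the 41st cell
exact = t ** alpha / math.gamma(alpha + 1)    # I^alpha 1 = t^alpha/Gamma(alpha+1)
print(approx[40], exact)
```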

## Description of the presented method

In this section, the Gegenbauer wavelet expansion, combined with the operational matrix of fractional integration, is applied to obtain the numerical solution of the nonlinear time-fractional KBK equation, defined by

\begin{aligned} &\frac{\partial ^{\alpha } u(x,t)}{\partial t^{\alpha }} + u(x,t)\frac{ \partial u(x,t)}{\partial x} -\alpha _{1} \frac{\partial ^{2}u(x,t)}{ \partial x^{2}} + \alpha _{2}\frac{\partial ^{3}u(x,t)}{\partial x^{3}} + \alpha _{3}\frac{\partial ^{4}u(x,t)}{\partial x^{4}} = f(x,t), \\ &\quad t > 0, x > 0 \end{aligned}
(23)

subject to the initial and boundary conditions

$$u(x,0) = 0$$
(24)

and

$$\textstyle\begin{cases} u(0,t) = h_{1}(t),\qquad u(1,t) = h_{2}(t), \\ u_{x}(0,t) = h_{3}(t), \\ u_{xx}(0,t) = h_{4}(t), \end{cases}$$
(25)

in which the parameters $$\alpha _{1},\alpha _{2},\alpha _{3} \ge 0$$ are constants, and $$u(x,t)$$ is the function to be determined.

To solve Eq. (23), we apply fractional integration of order α with respect to t to Eq. (23) and use the initial condition (24), which yields the following equation:

\begin{aligned} u(x,t) ={}& {-} \biggl( I_{t}^{\alpha } u \frac{\partial u}{\partial x} \biggr) ( x,t ) + \alpha _{1} \biggl( I_{t}^{\alpha } \frac{ \partial ^{2}u}{\partial x^{2}} \biggr) ( x,t ) -\alpha _{2} \biggl( I_{t}^{\alpha } \frac{\partial ^{3}u}{\partial x^{3}} \biggr) ( x,t ) \\ &{} -\alpha _{3} \biggl( I_{t}^{\alpha } \frac{ \partial ^{4}u}{\partial x^{4}} \biggr) ( x,t ) + \bigl( I_{t}^{\alpha } f \bigr) ( x,t ). \end{aligned}
(26)

Now we approximate $$\frac{\partial ^{4}u(x,t)}{\partial x^{4}}$$ by the Gegenbauer wavelets as follows:

$$\frac{\partial ^{4}u(x,t)}{\partial x^{4}} \simeq \sum_{i = 1}^{ \tilde{m}} \sum_{j = 1}^{\tilde{m}} u_{ij}\psi _{i}(x)\psi _{j}(t) = \varPsi ^{T}(x)U\varPsi (t)$$
(27)

in which $$U = [u_{ij}]_{\tilde{m} \times \tilde{m}}$$ is an unknown matrix to be determined, and $$\varPsi (\cdot)$$ is the Gegenbauer wavelet vector defined in (7). Integrating Eq. (27) four times with respect to x and using the boundary conditions in (25), we acquire the following relations:

\begin{aligned} &\frac{\partial ^{3}u(x,t)}{\partial x^{3}} = \frac{\partial ^{3}u(x,t)}{ \partial x^{3}}\Big|_{x = 0} + \varPsi ^{T}(x)P^{T}U\varPsi (t), \end{aligned}
(28)
\begin{aligned} &\frac{\partial ^{2}u(x,t)}{\partial x^{2}} = \frac{\partial ^{2}u(x,t)}{ \partial x^{2}}\Big|_{x = 0} + x \biggl( \frac{\partial ^{3}u(x,t)}{\partial x^{3}}\Big|_{x = 0} \biggr) + \varPsi ^{T}(x) \bigl( P^{2} \bigr)^{T}U \varPsi (t), \end{aligned}
(29)
\begin{aligned} &\frac{\partial u(x,t)}{\partial x} = \frac{\partial u(x,t)}{\partial x}\Big|_{x = 0} + x \biggl( \frac{\partial ^{2}u(x,t)}{\partial x^{2}}\Big|_{x = 0} \biggr) + \frac{x^{2}}{2} \biggl( \frac{\partial ^{3}u(x,t)}{\partial x^{3}}\Big|_{x = 0} \biggr) \\ &\phantom{\frac{\partial u(x,t)}{\partial x} =}{} + \varPsi ^{T}(x) \bigl( P^{3} \bigr)^{T}U \varPsi (t), \end{aligned}
(30)
\begin{aligned} &u(x,t) = u(0,t) + x \biggl( \frac{\partial u(x,t)}{\partial x}\Big|_{x = 0} \biggr) + \frac{x^{2}}{2} \biggl( \frac{\partial ^{2}u(x,t)}{\partial x^{2}}\Big|_{x = 0} \biggr) \\ &\phantom{u(x,t) = }{} + \frac{x^{3}}{6} \biggl( \frac{\partial ^{3}u(x,t)}{ \partial x^{3}}\Big|_{x = 0} \biggr) + \varPsi ^{T}(x) \bigl( P^{4} \bigr) ^{T}U\varPsi (t). \end{aligned}
(31)
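The chain (28)-(31) is simply fourfold integration of Eq. (27) in x, with the Taylor terms at x = 0 supplied by the boundary data. A minimal stdlib check of the underlying identity $$u(x) = u(0) + xu'(0) + \frac{x^{2}}{2}u''(0) + \frac{x^{3}}{6}u'''(0) + I_{x}^{4}u''''$$, using our own test function u(x) = sin x (so that u'''' = sin x) and a cumulative trapezoidal rule:

```python
import math

N = 2000
h = 1.0 / N
xs = [j * h for j in range(N + 1)]
vals = [math.sin(x) for x in xs]          # u'''' = sin x when u = sin x

def cumint(f):
    """Cumulative trapezoidal integral from 0 on the grid xs."""
    out = [0.0]
    for j in range(1, len(f)):
        out.append(out[-1] + 0.5 * h * (f[j - 1] + f[j]))
    return out

quad = cumint(cumint(cumint(cumint(vals))))   # fourfold integration of u''''
x = xs[-1]
# Taylor part: u(0) = 0, u'(0) = 1, u''(0) = 0, u'''(0) = -1
taylor = x - x ** 3 / 6
print(abs(math.sin(x) - (taylor + quad[-1])))  # ~0 up to quadrature error
```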

By taking $$x = 1$$ into Eq. (31), we acquire

\begin{aligned} \frac{\partial ^{3}u(x,t)}{\partial x^{3}}\Big|_{x = 0} = 6h_{2}(t) -6h _{1}(t) -6h_{3}(t) -3h_{4}(t) -6\varPsi ^{T}(1) \bigl(P^{4}\bigr)^{T}U\varPsi (t). \end{aligned}
(32)

The functions $$h_{1}(t),h_{2}(t),h_{3}(t)$$, and $$h_{4}(t)$$ can be expressed in terms of the Gegenbauer wavelets as follows:

\begin{aligned} &h_{1}(t) = H_{1}^{T}\varPsi (t), \\ &h_{2}(t) = H_{2}^{T}\varPsi (t), \\ &h_{3}(t)=H_{3}^{T}\varPsi (t), \\ &h_{4}(t)=H_{4}^{T}\varPsi (t), \end{aligned}
(33)

in which $$H_{1},H_{2},H_{3}$$, and $$H_{4}$$ are the Gegenbauer wavelet coefficient vectors. Substituting (33) into (32), we get

$$\frac{\partial ^{3}u(x,t)}{\partial x^{3}}\Big|_{x = 0} = \bigl( 6H_{2} ^{T} -6H_{1}^{T} -6H_{3}^{T} -3H_{4}^{T} -6\varPsi ^{T}(1) \bigl(P^{4}\bigr)^{T}U \bigr)\varPsi (t) \stackrel{{\Delta }}{=} \tilde{U}^{T}\varPsi (t).$$
(34)

By substituting (34) into Eqs. (28)–(31), we obtain

\begin{aligned} &\frac{\partial ^{3}u(x,t)}{\partial x^{3}} = \varPsi ^{T}(x)A_{1}\varPsi (t), \end{aligned}
(35)
\begin{aligned} &\frac{\partial ^{2}u(x,t)}{\partial x^{2}} = \varPsi ^{T}(x)A_{2}\varPsi (t), \end{aligned}
(36)
\begin{aligned} &\frac{\partial u(x,t)}{\partial x} = \varPsi ^{T}(x)A_{3}\varPsi (t), \end{aligned}
(37)
\begin{aligned} &u(x,t) = \varPsi ^{T}(x)A_{4}\varPsi (t), \end{aligned}
(38)

where

\begin{aligned} &A_{1} = E\tilde{U}^{T} + P^{T}U, \\ &A_{2} = EH_{4}^{T} + X\tilde{U}^{T} + \bigl( P^{2} \bigr)^{T}U, \\ &A_{3} = EH_{3}^{T} + XH_{4}^{T} + H_{5}\tilde{U}^{T} + \bigl( P^{3} \bigr) ^{T}U, \\ &A_{4} = EH_{1}^{T} + XH_{3}^{T} + H_{5}H_{4}^{T} + H_{6} \tilde{U}^{T} + \bigl( P^{4} \bigr)^{T}U \end{aligned}

and $$X,H_{5},H_{6}$$, and E are the Gegenbauer wavelet coefficient vectors for $$x,\frac{x^{2}}{2},\frac{x^{3}}{6}$$, and the unit step function, respectively. We can also expand $$f(x,t)$$ by the Gegenbauer wavelets as follows:

$$f(x,t) = \varPsi ^{T}(x)F\varPsi (t)$$
(39)

in which F is the Gegenbauer wavelet coefficient vector.
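A coefficient matrix such as F can be obtained in several ways. The sketch below uses a simple collocation least-squares fit rather than the paper's weighted inner products; the function names and the `basis` argument (standing in for the Gegenbauer wavelet vector) are illustrative:

```python
import numpy as np

def expand_2d(f, basis, M=3, npts=20):
    """Least-squares sketch of the expansion f(x, t) ~ Psi(x)^T F Psi(t).
    `basis(s)` returns the length-M wavelet vector at s; we fit
    B F B^T ~ f on a collocation grid via the pseudoinverse."""
    s = np.linspace(0.0, 1.0, npts)
    B = np.array([basis(v) for v in s])                  # npts x M basis values
    Fmat = np.array([[f(x, t) for t in s] for x in s])   # samples of f
    Bp = np.linalg.pinv(B)                               # least-squares inverse
    return Bp @ Fmat @ Bp.T
```

When f is exactly representable in the chosen basis, the fit reproduces it to machine precision.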

Substituting Eqs. (27) and (35)–(39) into Eq. (26) and using the operational matrix of fractional integration, we can write the residual function $$R(x,t)$$ for Eq. (23) as follows:

$$R(x,t) = \varPsi ^{T}(x) \bigl[ A_{4} + \varLambda P^{\alpha } -\alpha _{1}A _{2}P^{\alpha } + \alpha _{2}A_{1}P^{\alpha } + \alpha _{3}UP^{\alpha } -FP^{\alpha } \bigr]\varPsi (t)$$
(40)

in which $$[ \varPsi ^{T}(x)A_{3}\varPsi (t) ] [ \varPsi ^{T}(x)A _{4}\varPsi (t) ] = \varPsi ^{T}(x)\varLambda \varPsi (t)$$.

As in a typical Galerkin method, we obtain a system of nonlinear algebraic equations whose numerical solution by Newton's method gives the Gegenbauer wavelet coefficients $$u_{ij},i,j = 1,2,\ldots,\tilde{m}$$:

$$\int _{0}^{1} \int _{0}^{1} R(x,t)\psi _{i}(x) \psi _{j}(t)w_{n}(x)w_{n}(t)\,dx\,dt = 0,\quad i,j = 1,2,\ldots, \tilde{m}.$$
(41)

When this system is solved for the unknown matrix, we acquire an approximate solution for this problem.
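The projected system (41) is, schematically, a nonlinear algebraic system in the entries of U. A minimal sketch of this solution step, using scipy's Newton-type solver `fsolve` on a toy quadratic matrix residual that stands in for the actual Galerkin projections:

```python
import numpy as np
from scipy.optimize import fsolve

def solve_matrix_system(residual, m):
    """Solve residual(U) = 0 for an m x m unknown matrix U with a
    Newton-type iteration, flattening U to a vector for fsolve."""
    fun = lambda u: residual(u.reshape(m, m)).ravel()
    sol = fsolve(fun, np.zeros(m * m))
    return sol.reshape(m, m)

# toy stand-in for the projected system (41): find U with U + U @ U = C
C = np.array([[2.0, 0.0], [0.0, 6.0]])
U = solve_matrix_system(lambda U: U + U @ U - C, 2)
```

In the actual method the residual would evaluate the integrals in (41) for the current guess of U; the flatten-and-solve pattern is the same.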

## Test problems

In this section, we present two test problems to check the accuracy of the presented method. In order to measure the difference between the analytic and numerical solutions, we use the following error functions:

\begin{aligned} &E(x_{i},t_{i}) = \bigl\vert u_{\mathrm{exactsol}}(x_{i},t_{i}) -u(x_{i},t_{i}) \bigr\vert , \\ &L_{\infty } = \max_{1 \le i \le \tilde{m}} \bigl\vert u_{\mathrm{exactsol}}(x_{i},t _{i}) -u(x_{i},t_{i}) \bigr\vert . \end{aligned}
(42)
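The two error measures in (42) can be computed directly from tabulated values; a short sketch (function names are illustrative):

```python
import numpy as np

def error_metrics(u_exact, u_num, xs, ts):
    """Pointwise absolute errors E(x_i, t_j) and the max (L-infinity)
    error of Eq. (42) over the given grid points."""
    E = np.array([[abs(u_exact(x, t) - u_num(x, t)) for t in ts] for x in xs])
    return E, E.max()
```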

Here, both test problems are solved by the presented method for $$k = 1,M = 3$$.

### Example 1

Let us consider the following time-fractional KBK equation:

$$\frac{\partial ^{\alpha } u(x,t)}{\partial t^{\alpha }} + u(x,t)\frac{ \partial u(x,t)}{\partial x} -\alpha _{1} \frac{\partial ^{2}u(x,t)}{ \partial x^{2}} + \alpha _{2}\frac{\partial ^{3}u(x,t)}{\partial x^{3}} + \alpha _{3}\frac{\partial ^{4}u(x,t)}{\partial x^{4}} = f(x,t)$$

in which $$f(x,t) = \frac{t^{\alpha } \cos (x)}{\varGamma ( 1 + \alpha )} -\frac{t^{4\alpha } \cos (x)\sin (x)}{ ( \varGamma ( 1 + 2\alpha ) )^{2}} + \alpha _{1}\frac{t ^{2\alpha } \cos (x)}{\varGamma ( 1 + 2\alpha )} + \alpha _{2}\frac{t^{2\alpha } \sin (x)}{\varGamma ( 1 + 2\alpha )} + \alpha _{3} \frac{t^{2\alpha } \cos (x)}{\varGamma ( 1 + 2\alpha )}$$ and $$\alpha _{1} = \alpha _{2} = \alpha _{3} = 1$$. Initial and boundary conditions are given as

$$u ( x,0 ) = 0$$

and

$$\textstyle\begin{cases} u(0,t) = \frac{t^{2\alpha }}{\varGamma ( 1 + 2\alpha )},\qquad u(1,t) = \frac{t^{2\alpha } \cos (1)}{\varGamma ( 1 + 2\alpha )}; \\ u_{x}(0,t) = 0, \\ u_{xx}(0,t) = -\frac{t^{2\alpha }}{\varGamma ( 1 + 2\alpha )}. \end{cases}$$

The exact solution for this problem is $$u(x,t) = \frac{t^{2\alpha } \cos (x)}{\varGamma ( 1 + 2\alpha )}$$.
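The exact solution is easy to tabulate for comparison with the numerical results; for $$\alpha = 1$$ it reduces to $$u(x,t) = \frac{t^{2}\cos (x)}{2}$$ since $$\varGamma (3) = 2$$. A minimal sketch:

```python
from math import cos, gamma

def u_exact(x, t, alpha=1.0):
    """Exact solution of Example 1: u(x, t) = t^(2*alpha) cos(x) / Gamma(1 + 2*alpha)."""
    return t**(2 * alpha) * cos(x) / gamma(1 + 2 * alpha)

# For alpha = 1: Gamma(3) = 2, so u(x, 0) = 0 recovers the initial condition
# and u(0, t) = t**2 / 2, u(1, t) = t**2 * cos(1) / 2 recover the boundary data.
```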

Table 1 and Table 2 display the absolute errors between the exact solution of Example 1 and the approximate solutions acquired by the GWGM for $$\beta = 1/2,\beta = 3/2$$, and $$\alpha = 1$$. Table 3 compares the $$L_{\infty }$$ errors of the GWGM for $$\beta = 1/2, \alpha = 1$$ with those of the Legendre wavelet method at different points of x and t. In addition, plots of the exact and approximate solutions for $$\beta=1/2$$ and $$\alpha=1$$ are given in Fig. 1. Table 1 and Table 2 show that the numerical solutions acquired by the GWGM for $$\beta = 1/2,\beta = 3/2$$, and $$\alpha = 1$$ agree with the exact solution more closely than the numerical solution of the Legendre wavelet method. Likewise, Table 4 shows that the numerical solutions acquired by the GWGM for $$\beta = 1/2$$ and $$\alpha = 0.75, 0.90$$ agree with the exact solution more closely than the Legendre wavelet solution for $$\alpha = 0.75$$. Tables 3 and 4 thus indicate that our approach is more efficient.

### Example 2

Let us consider the following time-fractional KBK equation:

$$\frac{\partial ^{\alpha } u(x,t)}{\partial t^{\alpha }} + u(x,t)\frac{ \partial u(x,t)}{\partial x} -\alpha _{1} \frac{\partial ^{2}u(x,t)}{ \partial x^{2}} + \alpha _{2}\frac{\partial ^{3}u(x,t)}{\partial x^{3}} + \alpha _{3}\frac{\partial ^{4}u(x,t)}{\partial x^{4}} = f(x,t)$$

in which $$f(x,t) = \frac{t^{\alpha } \sin (x)}{\varGamma ( 1 + \alpha )} + \frac{t^{4\alpha } \cos (x)\sin (x)}{ ( \varGamma ( 1 + 2\alpha ) )^{2}} + \alpha _{1}\frac{t ^{2\alpha } \sin (x)}{\varGamma ( 1 + 2\alpha )} -\alpha _{2}\frac{t^{2\alpha } \cos (x)}{\varGamma ( 1 + 2\alpha )} + \alpha _{3} \frac{t^{2\alpha } \sin (x)}{\varGamma ( 1 + 2\alpha )}$$ and $$\alpha _{1} = \alpha _{2} = \alpha _{3} = 1$$. Initial and boundary conditions are given as

$$u(x,0) = 0$$

and

$$\textstyle\begin{cases} u(0,t) = 0,\qquad u(1,t) = \frac{t^{2\alpha } \sin (1)}{\varGamma ( 1 + 2 \alpha )}, \\ u_{x}(0,t) = \frac{t^{2\alpha }}{\varGamma ( 1 + 2\alpha )}, \\ u_{xx}(0,t) = 0. \end{cases}$$

The exact solution for this problem is $$u(x,t) = \frac{t^{2\alpha } \sin (x)}{\varGamma ( 1 + 2\alpha )}$$.
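As a quick sanity check, this exact solution can be verified against the boundary data at $$x = 0$$ by central differences (the step size h and function names are illustrative):

```python
from math import sin, gamma

def u_exact(x, t, alpha=1.0):
    """Exact solution of Example 2: u(x, t) = t^(2*alpha) sin(x) / Gamma(1 + 2*alpha)."""
    return t**(2 * alpha) * sin(x) / gamma(1 + 2 * alpha)

def check_bcs(t, alpha=1.0, h=1e-5):
    """Approximate u, u_x, u_xx at x = 0 by central differences;
    they should match u(0,t) = 0, u_x(0,t) = t^(2a)/Gamma(1+2a), u_xx(0,t) = 0."""
    ux0 = (u_exact(h, t, alpha) - u_exact(-h, t, alpha)) / (2 * h)
    uxx0 = (u_exact(h, t, alpha) - 2 * u_exact(0.0, t, alpha)
            + u_exact(-h, t, alpha)) / h**2
    return u_exact(0.0, t, alpha), ux0, uxx0
```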

Table 5 and Table 6 display the absolute errors between the exact solution and the approximate solutions acquired by the GWGM for $$\beta = 1/2$$, $$\beta = 3/2$$, and $$\alpha = 1$$. Table 7 compares the $$L_{\infty }$$ errors of the GWGM for $$\beta = 1/2$$, $$\beta = 3/2, \alpha = 1$$ with those of the Legendre wavelet method at different points of x and t. In addition, plots of the exact and approximate solutions for different values of β and $$\alpha=1$$ are given in Fig. 2 and Fig. 3. Tables 5 and 6 show that the numerical solutions acquired by the GWGM for $$\beta = 1/2$$ and $$\beta = 3/2$$ are in good agreement with the exact solution. Table 8 shows that the numerical solution acquired by the GWGM for $$\beta = 1/2, \alpha = 0.99$$ is closer to the exact solution than those for $$\beta = 1/2, \alpha = 0.75$$ and $$\alpha = 0.90$$. Tables 7 and 8 again indicate that our approach is efficient.

## Discussion

In this study, we applied the Gegenbauer wavelet Galerkin method to solve the time-fractional KdV-Burgers-Kuramoto equation. The presented scheme was tested on two problems to demonstrate its accuracy and efficiency, and the obtained numerical results were compared with the exact solutions. These comparisons reveal that the presented method is efficient and well suited to finding approximate solutions of the time-fractional KdV-Burgers-Kuramoto equation, and it therefore offers an alternative way to obtain its numerical solutions. Moreover, the computer implementation of the presented method is simple and straightforward; the method is computationally fast and provides accurate results. All of the above numerical computations were carried out using Maple software. The presented method is particularly well suited to boundary value problems because the boundary conditions are incorporated automatically during the solution procedure. As a next step, the presented scheme can be used to find approximate solutions of partial differential equations with other nonlinearities, systems of partial differential equations, and further fractional partial differential equations.

## References

1. Wei, L., He, Y., Yildirim, A., Kumar, S.: Numerical algorithm based on an implicit fully discrete local discontinuous Galerkin method for the time-fractional KdV-Burgers-Kuramoto equation. Z. Angew. Math. Mech. 93(1), 14–28 (2013)
2. Kawahara, T.: Formation of saturated solitons in a nonlinear dispersive system with instability and dissipation. Phys. Rev. Lett. 51(5), 381 (1983)
3. Topper, J., Kawahara, T.: Approximate equations for long nonlinear waves on a viscous fluid. J. Phys. Soc. Jpn. 44(2), 663–666 (1978)
4. Cohen, B.I., Krommes, J.A., Tang, W.M., Rosenbluth, M.N.: Non-linear saturation of the dissipative trapped-ion mode by mode coupling. Nucl. Fusion 16(6), 971 (1976)
5. Huang, F., Liu, S.: Physical mechanism and model of turbulent cascades in a barotropic atmosphere. Adv. Atmos. Sci. 21(1), 34–40 (2004)
6. Fu, Z., Liu, S., Liu, S.: New exact solutions to the KdV-Burgers-Kuramoto equation. Chaos Solitons Fractals 23(2), 609–616 (2005)
7. Zhaosheng, F.: Symmetry Analysis to the KdV-Burgers-Kuramoto Equation. University of Texas Rio Grande Valley (2018)
8. Xie, Y., Zhu, S., Su, K.: Solving the KdV-Burgers-Kuramoto equation by a combination method. Int. J. Mod. Phys. 23(08), 2101–2106 (2009)
9. Ebadian, A., Khajehnasiri, A.A.: Block pulse functions and their applications to solving systems of higher-order nonlinear Volterra integro-differential equations. Electron. J. Differ. Equ. 2014, 54 (2014)
10. Sayed, S.M., Elhamahmy, O.O., Gharib, G.M.: Travelling wave solutions for the KdV-Burgers-Kuramoto and nonlinear Schrödinger equations which describe pseudospherical surfaces. J. Appl. Math. 2008, Article ID 576783 (2008)
11. Gupta, A.K., Ray, S.S.: Traveling wave solution of fractional KdV-Burger-Kuramoto equation describing nonlinear physical phenomena. AIP Adv. 4(9), 097120 (2014)
12. Song, L., Zhang, H.: Application of homotopy analysis method to fractional KdV-Burgers-Kuramoto equation. Phys. Lett. A 367, 88–94 (2007)
13. Safari, M., Ganji, D.D., Moslemi, M.: Application of He's variational iteration method and Adomian's decomposition method to the fractional KdV-Burgers-Kuramoto equation. Comput. Math. Appl. 58, 2091–2097 (2009)
14. Secer, A., Alkan, S., Akinlar, M.A., Bayram, M.: Sinc–Galerkin method for approximate solutions of fractional order boundary value problems. Bound. Value Probl. 2013(1), 281 (2013)
15. Akinlar, M.A., Secer, A., Bayram, M.: Numerical solution of fractional Benney equation. Appl. Math. Inf. Sci. 8(4), 1633 (2014)
16. Secer, A.: Sinc–Galerkin method for solving hyperbolic partial differential equations. Int. J. Optim. Control Theor. Appl. 8(2), 250–258 (2018)
17. Hooshmandasl, M.R., Heydari, M.H., Cattani, C.: Wavelets Galerkin method for solving stochastic heat equation. Int. J. Comput. Math. 93(9), 1579–1596 (2016)
18. Hooshmandasl, M.R., Heydari, M.H., Cattani, C.: Numerical solution of fractional sub-diffusion and time-fractional diffusion-wave equations via fractional-order Legendre functions. Eur. Phys. J. Plus 131(8), 268 (2016)
19. Heydari, M.H., Hooshmandasl, M.R., Shakiba, A., Cattani, C.: Legendre wavelets Galerkin method for solving nonlinear stochastic integral equations. Nonlinear Dyn. 85(2), 1185–1202 (2016)
20. Mohammadi, F., Hosseini, M.M., Mohyud-Din, S.T.: Legendre wavelet Galerkin method for solving ordinary differential equations with non-analytic solution. Int. J. Syst. Sci. 42(4), 579–585 (2011)
21. Zheng, X., Wei, Z.: Discontinuous Legendre wavelet Galerkin method for one-dimensional advection-diffusion equation. Springer Proc. Math. Stat. 6(09), 1581 (2015)
22. Secer, A.: Numerical solution and simulation of second-order parabolic PDEs with Sinc-Galerkin method using Maple. In: Abstract and Applied Analysis, vol. 2013. Hindawi, United Kingdom (2013)
23. Podlubny, I.: Fractional Differential Equations. Academic, NY (1999)
24. Samko, S., Kilbas, A.A., Marichev, O.I.: Fractional Integrals and Derivatives: Theory and Applications. Taylor and Francis, London (1993)
25. Elgindy, K.T., Smith-Miles, K.A.: Solving boundary value problems, integral, and integro-differential equations using Gegenbauer integration matrices. J. Comput. Appl. Math. 237(1), 307–325 (2013)
26. Giordano, C., Laforgia, A.: On the Bernstein-type inequalities for ultraspherical polynomials. J. Comput. Appl. Math. 153(1–2), 243–248 (2013)
27. Chi-Hsu, W.: On the generalization of block pulse operational matrices for fractional and operational calculus. J. Franklin Inst. 315(2), 91–102 (1983)
28. Yin, F., Song, J., Cao, X., Lu, F.: Couple of the variational iteration method and Legendre wavelets for nonlinear partial differential equations. J. Appl. Math. 2013, Article ID 157956 (2013)
29. Maleknejad, K., Khodabin, M., Rostami, M.: Numerical solution of stochastic Volterra integral equations by a stochastic operational matrix based on block pulse functions. Math. Comput. Model. 55(3–4), 791–800 (2012)
30. Maleknejad, K., Khodabin, M., Rostami, M.: A numerical method for solving m-dimensional stochastic Itô–Volterra integral equations by stochastic operational matrix. Comput. Math. Appl. 63(1), 133–143 (2012)
31. Canuto, C., Hussaini, M.Y., Quarteroni, A., Zang, T.A. Jr.: Spectral Methods in Fluid Dynamics. Springer, Berlin (2012)

### Acknowledgements

The authors are very grateful to the Journal editors and the anonymous reviewers for very useful suggestions and comments.


## Author information


### Contributions

The authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Aydin Secer.

## Ethics declarations

Not applicable.

### Competing interests

The authors declare that they have no competing interests.

### Consent for publication

Not applicable. 