Bivariate Chebyshev polynomials of the fifth kind for variable-order time-fractional partial integro-differential equations with weakly singular kernel

Abstract

The shifted Chebyshev polynomials of the fifth kind (SCPFK) and the collocation method are employed to obtain approximate solutions of a class of functional equations, namely variable-order time-fractional weakly singular partial integro-differential equations (VTFWSPIDEs). A pseudo-operational matrix (POM) approach is developed for the numerical solution of the problem under study. The suggested method reduces the VTFWSPIDE to a system of linear algebraic equations. Error bounds for the approximate solutions are obtained, and the proposed scheme is applied to five test problems. The results confirm the applicability and high accuracy of the method for the numerical solution of fractional singular partial integro-differential equations.

Introduction

Partial integro-differential equations (PIDEs) with weakly singular kernels emerge in various physical and chemical phenomena, such as the radiation of heat from semi-infinite solids, stereology, hydrodynamics, heat conduction, and the theory of elasticity [1, 2]. Because of their singular kernels, this class of functional equations is difficult to treat, and obtaining exact solutions of singular PIDEs directly is usually extremely hard. Therefore, the study of numerical solutions of these equations is highly significant. Numerous numerical schemes have been established to solve singular PIDEs numerically, such as Sinc methods [3–6], the optimal q-homotopy analysis method [7], finite difference methods [8–10], wavelet collocation schemes [11], and many other methods (readers can refer to [12–15]). Fractional derivative and integral operators can better describe physical phenomena of the real world, and hence researchers have addressed the solution of diverse fractional functional equations. For example, the exact solutions of the time-fractional extended \((2+1)\)-dimensional Zakharov–Kuznetsov equation and a high-dimensional time-fractional KdV-type equation were obtained by means of the symmetry analysis method in [16, 17]. Singh et al. suggested the q-Elzaki transform method for solving multi-dimensional diffusion equations [18]. In [19], the Sumudu transform method was employed successfully for a fractional blood alcohol model. The authors of [20] presented a method based on Chebyshev polynomials to solve the fractional Bratu equation. Alderremy et al. used the finite difference method for the fractional two-cell cubic autocatalysis reaction model [21]. Fractional functional equations with variable order (where the orders are functions of time, space, or both) were introduced in 1993 [22].
Owing to their memory properties, equations of this class are a powerful tool for describing the mechanics of an oscillating mass subjected to a variable viscoelasticity damper and a linear spring, the motion of a spherical particle sedimenting in a quiescent viscous liquid, the analysis of elastoplastic indentation problems, and the interpolation of the behavior of systems with multiple fractional terms [22–25]. Since limited work has been done on VTFWSPIDEs, in the present work a numerical approach based on the bivariate Chebyshev polynomials of the fifth kind is presented to obtain approximate solutions of PIDEs of the following form:

$$ \begin{aligned} &{}_{0} ^{C} \mathcal{D}_{t} ^{\sigma (x, t)} \mathcal{V}(x, t) + \sum _{i=0}^{2} \sum_{j=0}^{2} \nu _{i j} \frac{\partial ^{i+j} \mathcal{V}(x, t)}{\partial x^{i} \partial t^{j}}+ \rho \int _{0}^{x} \int _{0}^{\mathfrak{h}(t)} \frac{\mathcal{F}(\mathcal{V}(\omega , \varpi ))}{(x-\omega )^{\theta } (t-\varpi )^{\vartheta }} \,d \varpi \,d \omega \\ &\quad =\mathfrak{f}(x, t), \quad m-1 < \sigma (x, t) \leqslant m, (x, t) \in \overline{\mathbf{J}}, \end{aligned} $$
(1)

with the initial and boundary conditions

$$ \mathcal{V}(x, 0)=\phi _{0}(x), \quad\quad \frac{\partial \mathcal{V}(x, 0)}{\partial t}=\phi _{1}(x), \quad\quad \mathcal{V}(0, t)=\psi _{0}(t), \quad\quad \mathcal{V}(1, t)=\psi _{1}(t), $$
(2)

where \(\nu _{i j}\), ρ are real constants, \(\theta , \vartheta \in [0, 1)\), \(m \in \mathbb{Z}^{+}\), and \(\overline{\mathbf{J}}=[0, 1]\times [0, 1]\). The functions \(\mathfrak{h}(t)\), \(\phi _{0}(x)\), \(\phi _{1}(x)\), \(\psi _{0}(t)\), \(\psi _{1}(t)\) are known continuous ones. The functions \(\mathcal{V}(x, t)\) and \(\mathfrak{f}(x, t)\) are assumed to be sufficiently smooth, which guarantees the existence and uniqueness of the solution of Eq. (1). \(\mathcal{F}\) is an identity or a differential operator and \({}_{0} ^{C} \mathcal{D}_{t} ^{\sigma (x, t)} \) designates the variable-order time-fractional derivative operator of the Caputo type. Equations of the form (1) arise in problems dealing with heat conduction in materials with memory, population dynamics, viscoelasticity, and theory of nuclear reactors (see [26, 27]).

The Chebyshev polynomials of the fifth kind have been utilized for fractional ordinary differential equations with constant orders [28]. Equation (1) models diverse phenomena in different fields of science [26, 27]. The integral part of Eq. (1) possesses a singular kernel, and one of its limits is a function, which makes it hard to find the exact solution of problem (1)–(2). The Chebyshev polynomials of the first to fourth kinds have been widely used to solve diverse functional equations, whereas the orthogonal Chebyshev polynomials of the fifth kind have received less attention. These reasons and the importance of the equation under study motivate the authors of the current paper to present a new approach for solving Eq. (1). By generalizing the fifth-kind Chebyshev polynomials to the two-dimensional case, their efficiency as basis functions is demonstrated. Accordingly, pseudo-operational matrices of integration and a pseudo-operational matrix for the approximation of the integral part with the singular kernel are derived. Besides, the relation between the one-variable basis vector \(\mathbf{X}(t)\) and its shifted form, \(\mathbf{X}(\mathfrak{h}(t))\), is expressed as a matrix. Using the collocation method along with the resulting matrices converts the solution of the main problem into the solution of a system of algebraic equations, and solving this algebraic system leads to an approximate solution.

The authors of [31] utilized two different basis functions (Legendre and Laguerre polynomials) to construct the two-variable basis, so all computations must be carried out for both bases, whereas the method proposed in the current paper constructs the two-variable basis using only the fifth-kind Chebyshev polynomials. The results are therefore expected to be obtained at a lower computational cost and with higher accuracy.

The rest of the paper is structured as follows: definitions of the fractional derivative and integral operators and of the one- and two-variable Chebyshev polynomials of the fifth kind are introduced in Sect. 2. Pseudo-operational matrices of integration with integer and fractional orders are derived in Sect. 3, and with the aid of them, operational matrices for the two-dimensional basis are constructed. The operational matrix of the product is constructed, the integral part of (1) is approximated, and the shifted basis vector \(\mathbf{X}(\mathfrak{h}(t))\) is approximated in terms of the basis vector \(\mathbf{X}(t)\). Section 4 is dedicated to describing the suggested method, and error bounds for the approximate solutions are derived in Sect. 5. The applicability and efficiency of the proposed scheme are illustrated through several numerical examples in Sect. 6. Concluding remarks are given in Sect. 7.

Fractional operators and SCPFK

Fractional operators

Definition 2.1

The variable-order time-fractional derivative operator of the Caputo type is defined as follows [22]:

$$ _{0} ^{C} \mathcal{D}_{t} ^{\sigma (x, t)} \mathcal{V}(x, t) = \frac{1}{\Gamma (m-\sigma (x, t))} \int _{0} ^{t} (t-\varpi )^{m- \sigma (x, t)-1} \frac{\partial ^{m} \mathcal{V}(x, \varpi )}{\partial \varpi ^{m}} \,d \varpi , \quad (x, t) \in \overline{\mathbf{J}}, $$

where \(m-1<\sigma (x, t) \leqslant m\), \(m \in \mathbb{Z}^{+}\) and \(\mathcal{V}(x, t) \in C^{m}(\overline{\mathbf{J}})\).

Definition 2.2

The variable-order fractional integral operator in the Riemann–Liouville sense \({_{0}^{\mathit{RL}}}\mathcal{I} _{t} ^{\sigma (x, t)}\) is defined as follows [22]:

$$ {_{0}^{\mathit{RL}}} \mathcal{I} _{t} ^{\sigma (x, t)} \mathcal{V}(x, t) = \frac{1}{\Gamma (\sigma (x, t))} \int _{0}^{t} (t-\varpi )^{\sigma (x, t) -1} \mathcal{V}(x, \varpi ) \,d\varpi , $$

where \(m-1<\sigma (x, t) \leqslant m\), \(m \in \mathbb{Z}^{+}\) and \(\mathcal{V}(x, t) \in C(\overline{\mathbf{J}})\).

Some properties of these operators are as follows:

$$\begin{aligned}& \begin{aligned} 1\quad {_{0}^{\mathit{RL}}}\mathcal{I} _{t} ^{\sigma _{1}(x, t)} \bigl({_{0}^{\mathit{RL}}} \mathcal{I} _{t} ^{\sigma _{2}(x, t)}g(x, t) \bigr)&={_{0}^{\mathit{RL}}} \mathcal{I} _{t} ^{\sigma _{2}(x, t)} \bigl({_{0}^{\mathit{RL}}} \mathcal{I} _{t} ^{\sigma _{1}(x, t)}g(x, t) \bigr) \\ &={_{0}^{\mathit{RL}}}\mathcal{I} _{t} ^{\sigma _{1}(x, t)+\sigma _{2}(x, t)}g(x, t), \end{aligned} \\& 2\quad {_{0}^{\mathit{RL}}}\mathcal{I} _{t} ^{\sigma (x, t)} \bigl(_{0} ^{C} \mathcal{D}_{t} ^{\sigma (x, t)}g(x, t) \bigr)=g(x, t)-g(x, 0), \quad 0< \sigma (x, t) \leqslant 1, \\& 3\quad {}_{0} ^{C} \mathcal{D}_{t} ^{\sigma (x, t)}t^{\gamma }= \textstyle\begin{cases} 0, & \lfloor \sigma (x, t) \rfloor >\gamma , \\ \frac{\Gamma (\gamma +1)}{\Gamma (\gamma -\sigma (x, t)+1)}t^{ \gamma -\sigma (x, t)}, & \text{otherwise}, \end{cases}\displaystyle \\& 4\quad {_{0}^{\mathit{RL}}}\mathcal{I} _{t} ^{\sigma (x, t)} t^{v} = \frac{\Gamma (v+1)}{\Gamma (v+\sigma (x, t)+1)} t^{v+\sigma (x, t)}, \quad v>-1 . \end{aligned}$$
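Property 4 can be illustrated numerically. The following is a minimal Python sketch (with a constant order σ in place of σ(x, t), an assumption made here only for simplicity): the weak singularity at η = t is removed by the substitution \(w=(t-\eta )^{\sigma }\), after which an ordinary midpoint rule applies.

```python
import math

def rl_integral_of_power(v, sigma, t, M=100000):
    """(I^sigma t^v)(t) = (1/Gamma(sigma)) * int_0^t (t-eta)^(sigma-1) eta^v d eta.
    With w = (t-eta)^sigma the integrand loses its weak singularity:
    the integral equals (1/sigma) * int_0^{t^sigma} (t - w^(1/sigma))^v dw."""
    W = t**sigma
    h = W / M
    total = sum((t - ((k + 0.5) * h)**(1.0 / sigma))**v for k in range(M))  # midpoint rule
    return h * total / (sigma * math.gamma(sigma))

t, v, sigma = 0.7, 2.0, 0.4
numeric = rl_integral_of_power(v, sigma, t)
exact = math.gamma(v + 1) / math.gamma(v + sigma + 1) * t**(v + sigma)
print(abs(numeric - exact))  # very small
```

The same substitution idea is useful later for the integrals with weakly singular kernels.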

SCPFK

The bivariate Chebyshev polynomials of the fifth kind (BCPFK) are obtained from the one-variable SCPFK. The SCPFKs are defined over the interval \(J=[0, 1]\) as follows [28]:

$$ \mathcal{X}_{j}(t)=\sum_{r=0}^{j} \rho _{r, j} t^{r}, $$
(3)

where

$$ \rho _{r, j}=\frac{2^{2 r-j}}{(2 r)!} \textstyle\begin{cases} 2 \sum_{l=\lfloor \frac{r+1}{2} \rfloor }^{\frac{j}{2}} \frac{(-1)^{\frac{j}{2}+l-r} l \delta _{l} (2l+r-1)!}{(2 l-r)!}, &j\text{ even}, \\ \frac{1}{j} \sum_{l=\lfloor \frac{r}{2} \rfloor }^{\frac{j-1}{2}} \frac{(-1)^{\frac{j+1}{2}+l-r}(2 l+1)^{2} (2l+r)!}{(2l-r+1)!}, & j\text{ odd}, \end{cases} $$
(4)

and

$$ \delta _{l}= \textstyle\begin{cases} \frac{1}{2}, &l=0, \\ 1, &l>0, \end{cases} $$

and \(\rho _{0,2 j}=1/4^{j}\) for \(j=0, 1, 2,\ldots \) . These polynomials are orthogonal with respect to the weight function \(w(t)=(2t-1)^{2}/ \sqrt{t-t^{2}}\), that is,

$$ \int _{0}^{1} \mathcal{X}_{i}(t) \mathcal{X}_{j}(t) w(t) \,dt ={\hbar }_{i} \delta _{i j}, $$

where \(\delta _{i j}\) is the Kronecker delta function and

$$ {\hbar }_{i}= \textstyle\begin{cases} \frac{\pi }{2^{2 i+1}}, &i\text{ even}, \\ \frac{\pi (i+2)}{i 2^{2 i+1}}, &i\text{ odd}. \end{cases} $$
(5)

The SCPFKs satisfy the following recurrence relation:

$$ \begin{aligned} &\mathcal{X}_{j+1}(t)= (2t-1) \mathcal{X}_{j}(t) -\epsilon _{j+1} \mathcal{X}_{j-1}(t), \quad j\geqslant 1, t \in J, \\ & \mathcal{X}_{0}(t)=1, \quad\quad \mathcal{X}_{1}(t)=2t-1, \end{aligned} $$
(6)

where

$$\begin{aligned} \epsilon _{j+1} =\frac{j^{2}+j+1+(-1)^{j+1} (2j+1)}{4j(j+1)}. \end{aligned}$$
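The recurrence (6) gives a convenient way to generate the SCPFKs numerically. The following Python sketch (the helper names `hbar` and `scpfk` are ours) builds the coefficient arrays of \(\mathcal{X}_{0},\ldots, \mathcal{X}_{N}\) and checks orthogonality by quadrature. Instead of a closed form, it takes \(\epsilon _{j+1}=\hbar _{j}/\hbar _{j-1}\), which is valid under the assumption that the \(\mathcal{X}_{j}\) are monic in \(2t-1\) (consistent with the norms (5)); the substitution \(t=(1+\cos u)/2\) turns the weighted inner product into \(\int _{0}^{\pi } \mathcal{X}_{i} \mathcal{X}_{j} \cos ^{2} u \,du\).

```python
import numpy as np

def hbar(i):
    # squared norms from (5)
    return np.pi / 2**(2*i + 1) if i % 2 == 0 else np.pi * (i + 2) / (i * 2**(2*i + 1))

def scpfk(N):
    """Ascending-power coefficient arrays of X_0..X_N on [0, 1] via recurrence (6);
    eps_{j+1} is taken as hbar(j)/hbar(j-1) (monic-in-(2t-1) assumption)."""
    polys = [np.array([1.0]), np.array([-1.0, 2.0])]          # X_0 = 1, X_1 = 2t - 1
    for j in range(1, N):
        lead = np.convolve([-1.0, 2.0], polys[j])             # (2t - 1) * X_j(t)
        prev = np.pad(polys[j - 1], (0, lead.size - polys[j - 1].size))
        polys.append(lead - (hbar(j) / hbar(j - 1)) * prev)
    return polys

# orthogonality check: t = (1 + cos u)/2 maps the weighted integral on [0, 1]
# to int_0^pi X_i X_j cos^2(u) du, evaluated here with the midpoint rule
N, M = 5, 4001
u = (np.arange(M) + 0.5) * np.pi / M
t = (1 + np.cos(u)) / 2
vals = [np.polyval(p[::-1], t) for p in scpfk(N)]
for i in range(N + 1):
    for j in range(N + 1):
        ip = np.sum(vals[i] * vals[j] * np.cos(u) ** 2) * np.pi / M
        assert abs(ip - (hbar(i) if i == j else 0.0)) < 1e-8, (i, j, ip)
print("orthogonality of X_0..X_5 verified")
```

The same two helpers are reused in the later sketches.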

A function \(\mathcal{Y} \in L_{w}^{2}(J)\) can be expanded in terms of the SCPFKs as follows:

$$ \mathcal{Y}(t)=\sum_{j=0}^{\infty } Y_{j} \mathcal{X}_{j}(t), \quad t \in J, $$
(7)

where coefficients \(Y_{j}\), \(j\geqslant 0\), are computed as

$$ Y_{j}=\frac{1}{\hbar _{j}} \int _{0}^{1} \mathcal{Y}(t) \mathcal{X}_{j}(t) w(t) \,dt. $$

In practice, only the first few terms of (7) are retained to approximate the function \(\mathcal{Y}\), i.e.,

$$ \mathcal{Y}(t) \approx \mathcal{Y}_{N}(t)= \sum _{j=0}^{N} Y_{j} \mathcal{X}_{j}(t)=\mathbf{X}^{T}(t) \mathbf{Y}= \mathbf{Y}^{T} \mathbf{X}(t), $$
(8)

where Y and \(\mathbf{X}(t)\) are vectors as

$$ \mathbf{Y}= \begin{bmatrix}Y_{0} &Y_{1}& \ldots &Y_{N}\end{bmatrix}^{T}, \quad\quad \mathbf{X}(t)= \begin{bmatrix} \mathcal{X}_{0}(t)&\mathcal{X}_{1}(t)& \ldots &\mathcal{X}_{N}(t) \end{bmatrix} ^{T}. $$
(9)
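A minimal sketch of the truncated expansion (8) follows (helpers `hbar`/`scpfk` are ours, with \(\epsilon _{j+1}=\hbar _{j}/\hbar _{j-1}\) as a working assumption in the recurrence): the coefficients \(Y_{j}\) are computed by quadrature after the substitution \(t=(1+\cos u)/2\), and for a smooth test function (here \(e^{t}\)) the truncation error decays rapidly with N.

```python
import numpy as np

def hbar(i):
    return np.pi / 2**(2*i + 1) if i % 2 == 0 else np.pi * (i + 2) / (i * 2**(2*i + 1))

def scpfk(N):
    polys = [np.array([1.0]), np.array([-1.0, 2.0])]
    for j in range(1, N):
        lead = np.convolve([-1.0, 2.0], polys[j])
        prev = np.pad(polys[j - 1], (0, lead.size - polys[j - 1].size))
        polys.append(lead - (hbar(j) / hbar(j - 1)) * prev)
    return polys

def expand(f, N, M=20001):
    """Coefficients Y_j of (8): Y_j = (1/hbar_j) int_0^1 f X_j w dt, computed
    with t = (1 + cos u)/2, under which w(t) dt becomes cos^2(u) du."""
    u = (np.arange(M) + 0.5) * np.pi / M
    t = (1 + np.cos(u)) / 2
    polys = scpfk(N)
    ft = f(t) * np.cos(u) ** 2
    Y = np.array([np.sum(ft * np.polyval(p[::-1], t)) * np.pi / M / hbar(j)
                  for j, p in enumerate(polys)])
    return Y, polys

Y, polys = expand(np.exp, N=8)
tt = np.linspace(0, 1, 101)
approx = sum(c * np.polyval(p[::-1], tt) for c, p in zip(Y, polys))
print("max error of the degree-8 approximation of e^t:", np.max(np.abs(approx - np.exp(tt))))
```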

Now, BCPFKs are constructed using the one-variable SCPFKs on the domain \(\overline{\mathbf{J}}=[0, 1]\times [0, 1]\) as follows:

$$ \mathcal{Z}_{ij}(x, t)=\mathcal{X}_{i}(x) \mathcal{X}_{j}(t), \quad i, j=0, 1,\ldots, (x, t) \in \overline{ \mathbf{J}}. $$
(10)

These two-variable polynomials are orthogonal with respect to the weight function \(W (x, t)=w(x) w(t)\) on \(\overline{\mathbf{J}}\), that is,

$$ \int _{0}^{1} \int _{0}^{1} \mathcal{Z}_{ij}(x, t) \mathcal{Z}_{k l}(x, t) W (x, t) \,dx \,dt=\hbar _{i} \hbar _{j} \delta _{ik} \delta _{jl}, $$

where \(\hbar _{i}\), \(\hbar _{j}\) are defined in (5). The two-variable function \(\overline{Y} \in L^{2}_{W}(\overline{\mathbf{J}})\) can be approximated as follows:

$$ \overline{Y}(x, t)\approx \overline{Y}_{N}(x, t) = \sum_{i=0}^{N} \sum _{j=0}^{M} \overline{Y}_{ij} \mathcal{Z}_{ij}(x, t)= \overline{\mathbf{Y}}^{T} \boldsymbol{\Phi}(x, t), $$
(11)

where

$$\begin{aligned} \begin{aligned}& \begin{aligned} \overline{\mathbf{Y}}={}& \left[\textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} \overline{Y}_{00} &\overline{Y}_{01}& \ldots & \overline{Y}_{0M}&\overline{Y}_{10}&\overline{Y}_{11}& \ldots &\overline{Y}_{1M}& \ldots \end{array}\displaystyle \right. \\ &{} \left.\textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} \overline{Y}_{N0}&\overline{Y}_{N1}& \ldots &\overline{Y}_{\mathit{NM}}\end{array}\displaystyle \right] ^{T}, \end{aligned} \\ & \begin{aligned} \boldsymbol{\Phi}(x, t)= {}&\left[\textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} \mathcal{Z}_{00}(x, t)&\mathcal{Z}_{01}(x, t)& \ldots &\mathcal{Z}_{0M}(x, t)&\mathcal{Z}_{10}& \mathcal{Z}_{11}(x, t)& \ldots &\mathcal{Z}_{1M}(x, t) \end{array}\displaystyle \right. \\ &{} \left. \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} \ldots& \mathcal{Z}_{N0}(x, t)&\mathcal{Z}_{N1}(x, t)& \ldots &\mathcal{Z}_{\mathit{NM}}(x, t) \end{array}\displaystyle \right] ^{T}. \end{aligned} \end{aligned} \end{aligned}$$
(12)

It must be mentioned that the vector \(\boldsymbol{\Phi}(x, t)\) can be written as \(\boldsymbol{\Phi}(x, t)=\mathbf{X}(x) \otimes \mathbf{X}(t)\), where ⊗ denotes the Kronecker product.
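For instance, with NumPy's `np.kron` (basis helpers as in the earlier sketches, built under the \(\epsilon _{j+1}=\hbar _{j}/\hbar _{j-1}\) assumption), entry \(i(M+1)+j\) of \(\boldsymbol{\Phi}(x, t)\) is exactly \(\mathcal{Z}_{ij}(x, t)=\mathcal{X}_{i}(x) \mathcal{X}_{j}(t)\):

```python
import numpy as np

def hbar(i):
    return np.pi / 2**(2*i + 1) if i % 2 == 0 else np.pi * (i + 2) / (i * 2**(2*i + 1))

def scpfk(N):
    polys = [np.array([1.0]), np.array([-1.0, 2.0])]
    for j in range(1, N):
        lead = np.convolve([-1.0, 2.0], polys[j])
        prev = np.pad(polys[j - 1], (0, lead.size - polys[j - 1].size))
        polys.append(lead - (hbar(j) / hbar(j - 1)) * prev)
    return polys

def X_vec(polys, s):
    """Basis vector X(s) evaluated at a point s."""
    return np.array([np.polyval(p[::-1], s) for p in polys])

N, M = 3, 2
px, pt = scpfk(N), scpfk(M)
x, t = 0.3, 0.8
Phi = np.kron(X_vec(px, x), X_vec(pt, t))        # Phi(x, t) = X(x) (kron) X(t)
i, j = 1, 2
assert np.isclose(Phi[i * (M + 1) + j],
                  np.polyval(px[i][::-1], x) * np.polyval(pt[j][::-1], t))
print("Phi has length", Phi.size)  # (N+1)(M+1) = 12
```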

Pseudo-operational matrices

In this section, the pseudo-operational matrices of integration (with integer and fractional orders) and the operational matrix of the product are derived. The shifted vector \(\mathbf{X}(\mathfrak{h}(t))\) is approximated in terms of the vector \(\mathbf{X}(t)\), and then a pseudo-operational matrix is computed to approximate the integral part of (1).

The integral pseudo-operational matrices

Lemma 3.1

For \(m \in \mathbb{R}^{+}\), one has

$$ \int _{0}^{1} t^{m} \mathcal{X}_{j}(t) w(t) \,dt= \sum_{r=0}^{j} \rho _{r, j} \sqrt{\pi } \biggl\{ \frac{4 \Gamma (m+r+\frac{5}{2})}{\Gamma (m+r+3)} - \frac{4 \Gamma (m+r+\frac{3}{2})}{\Gamma (m+r+2)} + \frac{\Gamma (m+r+\frac{1}{2})}{\Gamma (m+r+1)} \biggr\} . $$

Proof

Using series (3) and the weight function \(w(t)\), one has

$$\begin{aligned} \int _{0}^{1} t^{m} \mathcal{X}_{j}(t) w(t) \,dt &= \sum_{r=0}^{j} \rho _{r, j} \int _{0}^{1} \bigl( 4 t^{m+r+ \frac{3}{2}} -4 t^{m+r+\frac{1}{2}}+ t^{m+r-\frac{1}{2}} \bigr) (1-t)^{- \frac{1}{2}} \,dt \\ &= \sum_{r=0}^{j} \rho _{r, j} \biggl(4 B \biggl(m+r+ \frac{5}{2}, \frac{1}{2} \biggr)-4 B \biggl(m+r+ \frac{3}{2}, \frac{1}{2} \biggr) \\ &\quad {} +B \biggl(m+r+ \frac{1}{2}, \frac{1}{2} \biggr) \biggr), \end{aligned}$$

where \(B(r, s)\) is the well-known beta function. □

Theorem 3.2

If \(\mathbf{X}(t)\) is the basis vector in (9) and \(\varepsilon \in \mathbb{Z}^{+}\), then one gets

$$ \int _{0}^{t} \eta ^{\varepsilon } \mathbf{X}(\eta ) \,d\eta \approx t^{ \varepsilon +1} \mathbf{P}_{\varepsilon } \mathbf{X}(t), \quad t \in J, $$
(13)

where \(\mathbf{P}_{\varepsilon }\) is the \((N+1)\)-order pseudo-operational matrix of the integration of integer order, and its entries are computed as follows:

$$\begin{aligned} & \mathbf{P}_{\varepsilon } [j, m]=\frac{\sqrt{\pi }}{\hbar _{m}} \sum _{r=0}^{j} \frac{\rho _{r, j} }{r+\varepsilon +1 } \sum _{l=0}^{m} \rho _{l, m} \biggl( \frac{4 \Gamma (r+l+\frac{5}{2})}{\Gamma (r+l+3)} - \frac{4 \Gamma (r+l+\frac{3}{2})}{\Gamma (r+l+2)} + \frac{\Gamma (r+l+\frac{1}{2})}{\Gamma (r+l+1)} \biggr), \\ & \quad j, m=0, 1,\ldots, N. \end{aligned}$$

Proof

According to the definition of \(\mathbf{X}(t)\), the left-hand side of relation (13) is calculated as follows:

$$\begin{aligned} \begin{aligned} \int _{0}^{t} \eta ^{\varepsilon } \mathbf{X}(\eta ) \,d\eta &= \Biggl[ \sum_{r=0}^{0} \rho _{r, 0} \int _{0} ^{t} \eta ^{r+ \varepsilon } \,d\eta , \sum _{r=0}^{1} \rho _{r, 1} \int _{0} ^{t} \eta ^{r+\varepsilon } \,d\eta ,\ldots, \sum_{r=0}^{N} \rho _{r, N} \int _{0} ^{t} \eta ^{r+ \varepsilon } \,d\eta \Biggr]^{T} \\ &= \Biggl[ \sum_{r=0}^{0} \rho _{r, 0} \frac{t^{r+\varepsilon +1}}{r+\varepsilon +1}, \sum_{r=0}^{1} \rho _{r, 1} \frac{t^{r+\varepsilon +1}}{r+\varepsilon +1},\ldots, \sum _{r=0}^{N} \rho _{r, N} \frac{t^{r+\varepsilon +1}}{r+\varepsilon +1} \Biggr]^{T} \\ &=t^{\varepsilon +1} \Biggl[ \sum_{r=0}^{0} \rho _{r, 0} \frac{t^{r}}{r+\varepsilon +1}, \sum_{r=0}^{1} \rho _{r, 1} \frac{t^{r}}{r+\varepsilon +1},\ldots, \sum _{r=0}^{N} \rho _{r, N} \frac{t^{r}}{r+\varepsilon +1} \Biggr]^{T}. \end{aligned} \end{aligned}$$
(14)

Now, \(t^{r}\) is approximated in terms of the polynomials \(\mathcal{X}_{m}(t), m=0, 1,\ldots, N\),

$$ t^{r}\approx \sum_{m=0}^{N} a _{m}^{r} \mathcal{X}_{m}(t), \quad r=0, 1,\ldots, N, $$

such that

$$\begin{aligned} a _{m}^{r} &= \frac{1}{\hbar _{m}} \int _{0}^{1} t^{r} \mathcal{X}_{m}(t) w(t) \,dt \\ &=\frac{\sqrt{\pi }}{\hbar _{m}} \sum_{l=0}^{m} \rho _{l, m} \biggl( \frac{4 \Gamma (r+l+\frac{5}{2})}{\Gamma (r+l+3)}- \frac{4 \Gamma (r+l+\frac{3}{2})}{\Gamma (r+l+2)}+ \frac{\Gamma (r+l+\frac{1}{2})}{\Gamma (r+l+1)} \biggr). \end{aligned}$$

Substituting the last equality in the last row of (14) leads to the desired result. □
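The entries of \(\mathbf{P}_{\varepsilon }\) can be assembled directly from the theorem, with \(\rho _{r, j}\) read off the coefficient arrays of the basis (helpers as in the earlier sketches; the \(\epsilon _{j+1}=\hbar _{j}/\hbar _{j-1}\) form of the recurrence is our working assumption). Since the bracketed polynomial in (14) has degree at most N, relation (13) should then hold to machine accuracy:

```python
import numpy as np
from math import gamma, sqrt, pi

def hbar(i):
    return pi / 2**(2*i + 1) if i % 2 == 0 else pi * (i + 2) / (i * 2**(2*i + 1))

def scpfk(N):
    polys = [np.array([1.0]), np.array([-1.0, 2.0])]
    for j in range(1, N):
        lead = np.convolve([-1.0, 2.0], polys[j])
        prev = np.pad(polys[j - 1], (0, lead.size - polys[j - 1].size))
        polys.append(lead - (hbar(j) / hbar(j - 1)) * prev)
    return polys

def P_matrix(polys, eps):
    """P_eps[j, m] from Theorem 3.2, with rho_{r, j} = polys[j][r]."""
    N = len(polys) - 1
    P = np.zeros((N + 1, N + 1))
    for j in range(N + 1):
        for m in range(N + 1):
            s = 0.0
            for r, rho_rj in enumerate(polys[j]):
                inner = sum(rho_lm * (4 * gamma(r + l + 2.5) / gamma(r + l + 3)
                                      - 4 * gamma(r + l + 1.5) / gamma(r + l + 2)
                                      + gamma(r + l + 0.5) / gamma(r + l + 1))
                            for l, rho_lm in enumerate(polys[m]))
                s += rho_rj / (r + eps + 1) * inner
            P[j, m] = sqrt(pi) / hbar(m) * s
    return P

N, eps, t = 4, 2, 0.6
polys = scpfk(N)
P = P_matrix(polys, eps)
X_t = np.array([np.polyval(p[::-1], t) for p in polys])
# exact left-hand side of (13): int_0^t eta^eps X_j(eta) d eta, term by term
lhs = np.array([sum(c * t**(r + eps + 1) / (r + eps + 1) for r, c in enumerate(p))
                for p in polys])
assert np.allclose(lhs, t**(eps + 1) * (P @ X_t))
print("relation (13) verified for N = 4, eps = 2")
```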

Remark 3.3

If \(\boldsymbol{\Phi} (x, t)\) is the bivariate vector in (12), then the pseudo-operational matrices of integer order regarding the variables t and x are as follows:

$$\begin{aligned} \begin{aligned} \int _{0}^{x} \xi ^{\varepsilon } \boldsymbol{\Phi}(\xi , t) \,d\xi &= \biggl( \int _{0}^{x} \xi ^{\varepsilon } \mathbf{X}(\xi ) \,d \xi \biggr) \otimes \mathbf{X}(t) \\ &\approx \bigl( x ^{\varepsilon +1} \mathbf{P}_{\varepsilon } \mathbf{X}(x) \bigr) \otimes \mathbf{X}(t) \\ &=x ^{\varepsilon +1}(\mathbf{P}_{\varepsilon } \otimes I) \bigl( \mathbf{X}(x) \otimes \mathbf{X}(t) \bigr) \\ &=x ^{\varepsilon +1} \mathbb{P}^{\varepsilon }_{(x)} \boldsymbol{\Phi} (x, t), \end{aligned} \\ \begin{aligned} \int _{0}^{t} \eta ^{\varepsilon } \boldsymbol{\Phi}(x, \eta ) \,d\eta &= \mathbf{X}(x) \otimes \biggl( \int _{0}^{t} \eta ^{\varepsilon } \mathbf{X}(\eta ) \,d\eta \biggr) \\ &\approx \mathbf{X}(x) \otimes \bigl( t ^{\varepsilon +1} \mathbf{P}_{ \varepsilon } \mathbf{X}(t) \bigr) \\ &=t ^{\varepsilon +1}(I \otimes \mathbf{P}_{\varepsilon }) \bigl( \mathbf{X}(x) \otimes \mathbf{X}(t) \bigr) \\ &=t ^{\varepsilon +1} \mathbb{P}^{\varepsilon }_{(t)} \boldsymbol{\Phi} (x, t), \end{aligned} \end{aligned}$$

where \(\mathbb{P}^{\varepsilon }_{(x)}\) and \(\mathbb{P}^{\varepsilon }_{(t)}\) are the pseudo-operational matrices in the two-dimensional case regarding x and t, respectively.

Theorem 3.4

Assume that \(\mathbf{X}(t)\) is the basis vector in (9) and \({_{0}^{\mathit{RL}}}\mathcal{I} _{t} ^{\sigma (t)}\) is the variable-order fractional integral operator in the Riemann–Liouville sense as follows:

$$ {_{0}^{\mathit{RL}}} \mathcal{I} _{t} ^{\sigma (t)} g(t) = \frac{1}{\Gamma (\sigma (t))} \int _{0}^{t} (t-\eta )^{\sigma (t) -1} g(\eta ) \,d \eta , $$

where \(\sigma (t)\), \(g(t)\) are continuous functions. Then one has

$$ {_{0}^{\mathit{RL}}} \mathcal{I} _{t} ^{\sigma (t)} \bigl( t^{\varepsilon } \mathbf{X} (t) \bigr) \approx t^{\sigma (t)+\varepsilon } \mathbf{P}_{ \varepsilon }^{(\sigma )} \mathbf{X}(t), $$

where \(\mathbf{P}_{\varepsilon }^{(\sigma )}\) is the \((N+1)\)-order pseudo-operational matrix of the integration of variable order, and its entries are computed as follows:

$$\begin{aligned} \mathbf{P}_{\varepsilon }^{(\sigma )} [j, i]= {}&\frac{\sqrt{\pi }}{\hbar _{i}} \sum _{r=0}^{j} \frac{\rho _{r, j} \Gamma (r+\varepsilon +1)}{\Gamma (r+\sigma +\varepsilon +1) } \sum _{l=0}^{i} \rho _{l, i} \biggl( \frac{4 \Gamma (r+l+\frac{5}{2})}{\Gamma (r+l+3)} - \frac{4 \Gamma (r+l+\frac{3}{2})}{\Gamma (r+l+2)} \\ & {}+\frac{\Gamma (r+l+\frac{1}{2})}{\Gamma (r+l+1)} \biggr), \quad j, i=0, 1,\ldots, N. \end{aligned}$$

Proof

The proof is similar to what was done for Theorem 3.2. □

Remark 3.5

If \(\boldsymbol{\Phi}(x, t)\) is the bivariate basis function and \({_{0}^{\mathit{RL}}} \mathcal{I} _{t} ^{\sigma (x, t)}\) is the variable-order Riemann–Liouville integral operator and \(\varepsilon \in \mathbb{Z}^{+}\), then a pseudo-operational matrix of the integration with the variable order \(\sigma (x, t)\) is found as follows:

$$\begin{aligned} {_{0}^{\mathit{RL}}} \mathcal{I} _{t} ^{\sigma (x, t)} \bigl( t^{\varepsilon } \boldsymbol{\Phi}(x, t) \bigr)&=\mathbf{X}(x) \otimes {_{0}^{\mathit{RL}}} \mathcal{I} _{t} ^{\sigma (x, t)} \bigl( t^{\varepsilon } \mathbf{X}(t) \bigr) \\ &\approx \mathbf{X}(x) \otimes \bigl( t^{\varepsilon +\sigma (x, t)} \mathbf{P}_{\varepsilon }^{(\sigma )} \mathbf{X}(t) \bigr) \\ &=t^{\varepsilon +\sigma (x, t)} \bigl(I \otimes \mathbf{P}_{\varepsilon }^{( \sigma )} \bigr) \bigl(\mathbf{X}(x) \otimes \mathbf{X}(t) \bigr) \\ &=t^{\varepsilon +\sigma (x, t)} \mathbb{P}_{t}^{(\sigma , \varepsilon )} \boldsymbol{\Phi}(x, t), \end{aligned}$$

where \(\mathbb{P}_{t}^{(\sigma , \varepsilon )}\) is the integral pseudo-operational matrix of variable order for the two-variable basis regarding the time variable.

The operational matrix of product

Lemma 3.6

The integration of three polynomials \(\mathcal{X}_{i}(t)\), \(\mathcal{X}_{j}(t)\), and \(\mathcal{X}_{k}(t)\) is calculated as

$$\begin{aligned}& \begin{aligned} q_{ijk}&= \int _{0}^{1} \mathcal{X}_{i}(t) \mathcal{X}_{j}(t) \mathcal{X}_{k}(t) w(t) \,dt \\ &=\sum_{r=0}^{j+k} \sum _{l=0}^{i} \gamma _{r}^{(j, k)} \rho _{l, i} \sqrt{\pi } \biggl( \frac{4 \Gamma (r+l+\frac{5}{2})}{\Gamma (r+l+3)} - \frac{4 \Gamma (r+l+\frac{3}{2})}{\Gamma (r+l+2)} + \frac{\Gamma (r+l+\frac{1}{2})}{\Gamma (r+l+1)} \biggr), \\ &i, j, k=0, 1,\ldots, N. \end{aligned} \end{aligned}$$

Proof

First, the product of \(\mathcal{X}_{j}(t)\) and \(\mathcal{X}_{k}(t)\) is written as follows:

$$ \mathcal{Q}_{j+k}(t)=\mathcal{X}_{j}(t) \mathcal{X}_{k}(t) = \Biggl( \sum_{m=0}^{j} \rho _{m, j} t^{m} \Biggr) \Biggl( \sum _{n=0}^{k} \rho _{n, k} t^{n} \Biggr)= \Biggl( \sum_{r=0}^{j+k} \gamma _{r}^{(j, k)} t^{r} \Biggr), $$

where the coefficients \(\gamma _{r}^{(j, k)}\), \(j, k=0, 1,\ldots, N\), \(r=0, 1,\ldots, j+k\), can be calculated from Algorithms 1 and 2.

Algorithm 1

The computation of the coefficient \(\gamma _{r}^{(j, k)}\) if \(j \geqslant k\)

Algorithm 2

The computation of the coefficient \(\gamma _{r}^{(j, k)}\) if \(j < k\)

Now, the quantity \(q_{ijk}\) is calculated as follows:

$$\begin{aligned} q_{ijk} &= \int _{0}^{1} \mathcal{X}_{i}(t) \mathcal{X}_{j}(t) \mathcal{X}_{k}(t) w(t)\,dt=\sum _{r=0}^{j+k} \gamma _{r}^{(j, k)} \int _{0}^{1} t^{r} \mathcal{X}_{i}(t) w(t) \,dt \\ &=\sum_{r=0}^{j+k} \sum _{l=0}^{i} \gamma _{r}^{(j, k)} \rho _{l, i} \int _{0}^{1} \bigl( 4t^{r+l+\frac{3}{2}}-4 t^{r+l+\frac{1}{2}}+t^{r+l- \frac{1}{2}} \bigr) (1-t)^{-\frac{1}{2}} \,dt. \end{aligned}$$

By definition of the beta function, the desired result is achieved. □
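The coefficients \(\gamma _{r}^{(j, k)}\) produced by Algorithms 1 and 2 are, in effect, the discrete convolution of the two coefficient vectors of \(\mathcal{X}_{j}\) and \(\mathcal{X}_{k}\), so they can also be obtained in one line with `np.convolve` (a sketch with our recurrence-based helpers and the \(\hbar \)-ratio assumption for \(\epsilon _{j+1}\)):

```python
import numpy as np

def hbar(i):
    return np.pi / 2**(2*i + 1) if i % 2 == 0 else np.pi * (i + 2) / (i * 2**(2*i + 1))

def scpfk(N):
    polys = [np.array([1.0]), np.array([-1.0, 2.0])]
    for j in range(1, N):
        lead = np.convolve([-1.0, 2.0], polys[j])
        prev = np.pad(polys[j - 1], (0, lead.size - polys[j - 1].size))
        polys.append(lead - (hbar(j) / hbar(j - 1)) * prev)
    return polys

polys = scpfk(5)
j, k = 3, 2
gamma_jk = np.convolve(polys[j], polys[k])   # gamma_r^{(j,k)}, r = 0, ..., j + k
# pointwise sanity check: X_j(t) X_k(t) = sum_r gamma_r^{(j,k)} t^r
t = 0.37
assert np.isclose(np.polyval(polys[j][::-1], t) * np.polyval(polys[k][::-1], t),
                  np.polyval(gamma_jk[::-1], t))
print("gamma^{(3,2)} =", np.round(gamma_jk, 6))
```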

Theorem 3.7

Suppose that \(\mathbf{X}(t)\) is the basis vector in (9) and U is an \((N+1)\)-order arbitrary vector. Then one has

$$ \mathbf{X}(t) \mathbf{X}^{T}(t) U \approx \widetilde{U} \mathbf{X}(t), $$
(15)

where Ũ is the \((N+1)\)-order product matrix, and its entries are computed as follows:

$$ \widetilde{U}[i, j]=\sum_{k=0}^{N} \frac{U_{i} q_{ijk}}{\hbar _{k}}, \quad i, j=0, 1,\ldots, N, $$

and \(q_{ijk}\) is given by Lemma 3.6.

Proof

See Theorem 4 in [29]. □

The approximation of the integral part

Theorem 3.8

If \(\mathbf{X}(t)\) is the one-variable basis vector, then one has

$$ \int _{0}^{t} \frac{\eta ^{\varepsilon } \mathbf{X}(\eta )}{(t-\eta )^{\beta }} \,d \eta \approx t^{\varepsilon -\beta +1} \mathbf{B}^{(\beta )} \mathbf{X}(t), \quad \beta \in (0, 1), \varepsilon \in \mathbb{Z}^{+}, $$

where \(\mathbf{B}^{(\beta )}\) is the \((N+1)\)-order pseudo-operational matrix for the integral with the weakly singular kernel, and its entries are as follows:

$$\begin{aligned} \mathbf{B}^{(\beta )}[l, j]={}&\sum_{r=0}^{l} \frac{\rho _{r, l} \sqrt{\pi } \Gamma (r+\varepsilon +1) \Gamma (1-\beta )}{\hbar _{j} \Gamma (r+\varepsilon -\beta +2)} \sum_{m=0}^{j} \rho _{m, j} \biggl( \frac{4 \Gamma (r+m+\frac{5}{2})}{\Gamma (r+m+3)} \\ & {} - \frac{4 \Gamma (r+m+\frac{3}{2})}{\Gamma (r+m+2)} + \frac{\Gamma (r+m+\frac{1}{2})}{\Gamma (r+m+1)} \biggr), \quad l, j=0, 1,\ldots, N. \end{aligned}$$

Proof

According to the definition of \(\mathbf{X}(t)\) and noting that

$$ \int _{0}^{t} \frac{\eta ^{r}}{(t-\eta )^{\beta }}\,d\eta = \frac{\Gamma (r+1) \Gamma (1-\beta )}{\Gamma (r-\beta +2)} t^{r- \beta +1}, \quad r=0, 1,\ldots, $$

one gets

$$ \begin{aligned} \int _{0}^{t} \frac{\eta ^{\varepsilon } \mathbf{X}(\eta )}{(t-\eta )^{\beta }} \,d \eta &= \begin{bmatrix} \sum_{r=0}^{0} \rho _{r, 0} \frac{\Gamma (r+\varepsilon +1) \Gamma (1-\beta )}{\Gamma (r+\varepsilon -\beta +2)} t^{r+\varepsilon -\beta +1} \\ \sum_{r=0}^{1} \rho _{r, 1} \frac{\Gamma (r+\varepsilon +1) \Gamma (1-\beta )}{\Gamma (r+\varepsilon -\beta +2)} t^{r+\varepsilon -\beta +1} \\ \vdots \\ \sum_{r=0}^{N} \rho _{r, N} \frac{\Gamma (r+\varepsilon +1) \Gamma (1-\beta )}{\Gamma (r+\varepsilon -\beta +2)} t^{r+\varepsilon -\beta +1} \end{bmatrix} \\ &=t^{\varepsilon -\beta +1} \begin{bmatrix} \sum_{r=0}^{0} \rho _{r, 0} \frac{\Gamma (r+\varepsilon +1) \Gamma (1-\beta )}{\Gamma (r+\varepsilon -\beta +2)} t^{r} \\ \sum_{r=0}^{1} \rho _{r, 1} \frac{\Gamma (r+\varepsilon +1) \Gamma (1-\beta )}{\Gamma (r+\varepsilon -\beta +2)} t^{r} \\ \vdots \\ \sum_{r=0}^{N} \rho _{r, N} \frac{\Gamma (r+\varepsilon +1) \Gamma (1-\beta )}{\Gamma (r+\varepsilon -\beta +2)} t^{r} \end{bmatrix}. \end{aligned} $$
(16)

Using Lemma 3.1, \(t^{r}\), \(0\leqslant r \leqslant N\), is written as

$$\begin{aligned} t^{r}&= \sum_{j=0}^{N} a _{j}^{(r)} \mathcal{X}_{j}(t)\\&= \sum _{j=0}^{N} \Biggl\{ \frac{1}{\hbar _{j}} \sum _{m=0}^{j} \rho _{m, j} \sqrt{\pi } \biggl( \frac{4 \Gamma (r+m+\frac{5}{2})}{\Gamma (r+m+3)} - \frac{4 \Gamma (r+m+\frac{3}{2})}{\Gamma (r+m+2)} \\ &\quad {}+\frac{\Gamma (r+m+\frac{1}{2})}{\Gamma (r+m+1)} \biggr) \Biggr\} \mathcal{X}_{j}(t). \end{aligned}$$

So, each component of the vector in (16) is approximated as follows:

$$\begin{aligned} &\sum_{r=0}^{l} \rho _{r, l} \frac{\Gamma (r+\varepsilon +1) \Gamma (1-\beta )}{\Gamma (r+\varepsilon -\beta +2)} t^{r+\varepsilon -\beta +1} \\ &\quad =t^{\varepsilon -\beta +1} \\ &\quad \quad{} \times\sum_{j=0}^{N} \Biggl( \sum _{r=0}^{l} \frac{\rho _{r, l} \sqrt{\pi } \Gamma (r+\varepsilon +1) \Gamma (1-\beta )}{\hbar _{j} \Gamma (r+\varepsilon -\beta +2)} \sum _{m=0}^{j} \rho _{m, j} \biggl( \frac{4 \Gamma (r+m+\frac{5}{2})}{\Gamma (r+m+3)} \\ &\quad\quad{} - \frac{4 \Gamma (r+m+\frac{3}{2})}{\Gamma (r+m+2)} + \frac{\Gamma (r+m+\frac{1}{2})}{\Gamma (r+m+1)} \biggr) \Biggr) \mathcal{X}_{j}(t) \\ &\quad =t^{\varepsilon -\beta +1} \sum_{j=0}^{N} b(l, j) \mathcal{X}_{j}(t), \quad l=0, 1,\ldots, N. \end{aligned}$$

Thus, one gets the following matrix representation:

$$\begin{aligned} \int _{0}^{t} \frac{\eta ^{\varepsilon } \mathbf{X}(\eta )}{(t-\eta )^{\beta }} \,d \eta &=t^{\varepsilon -\beta +1} \begin{bmatrix} b(0, 0) & b(0, 1) & \cdots & b(0, N) \\ b(1, 0) & b(1, 1) & \cdots & b(1, N) \\ \vdots &\vdots & \ddots & \vdots \\ b(N, 0) & b(N, 1) & \cdots & b(N, N) \end{bmatrix} \begin{bmatrix} \mathcal{X}_{0}(t) \\ \mathcal{X}_{1}(t) \\ \vdots \\ \mathcal{X}_{N}(t) \end{bmatrix} \\ &= t^{\varepsilon -\beta +1} \mathbf{B}^{(\beta )} \mathbf{X}(t). \end{aligned}$$

 □
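The singular-kernel matrix of Theorem 3.8 can be checked against a direct numerical evaluation of the left-hand side; the substitution \(w=(t-\eta )^{1-\beta }\) removes the weak singularity before quadrature (helpers and the \(\hbar \)-ratio form of \(\epsilon _{j+1}\) as in the earlier sketches):

```python
import numpy as np
from math import gamma, sqrt, pi

def hbar(i):
    return pi / 2**(2*i + 1) if i % 2 == 0 else pi * (i + 2) / (i * 2**(2*i + 1))

def scpfk(N):
    polys = [np.array([1.0]), np.array([-1.0, 2.0])]
    for j in range(1, N):
        lead = np.convolve([-1.0, 2.0], polys[j])
        prev = np.pad(polys[j - 1], (0, lead.size - polys[j - 1].size))
        polys.append(lead - (hbar(j) / hbar(j - 1)) * prev)
    return polys

def B_matrix(polys, eps, beta):
    """B^(beta)[l, j] from Theorem 3.8, with rho_{r, l} = polys[l][r]."""
    N = len(polys) - 1
    B = np.zeros((N + 1, N + 1))
    for l in range(N + 1):
        for j in range(N + 1):
            s = 0.0
            for r, rho_rl in enumerate(polys[l]):
                inner = sum(rho_mj * (4 * gamma(r + m + 2.5) / gamma(r + m + 3)
                                      - 4 * gamma(r + m + 1.5) / gamma(r + m + 2)
                                      + gamma(r + m + 0.5) / gamma(r + m + 1))
                            for m, rho_mj in enumerate(polys[j]))
                s += rho_rl * gamma(r + eps + 1) * gamma(1 - beta) / gamma(r + eps - beta + 2) * inner
            B[l, j] = sqrt(pi) / hbar(j) * s
    return B

def singular_integral(poly, eps, beta, t, M=200000):
    """int_0^t eta^eps X(eta) (t - eta)^(-beta) d eta via w = (t - eta)^(1 - beta)."""
    W = t**(1 - beta)
    w = (np.arange(M) + 0.5) * W / M
    eta = t - w**(1.0 / (1 - beta))
    return np.sum(eta**eps * np.polyval(poly[::-1], eta)) * W / M / (1 - beta)

N, eps, beta, t = 3, 1, 0.5, 0.8
polys = scpfk(N)
B = B_matrix(polys, eps, beta)
X_t = np.array([np.polyval(p[::-1], t) for p in polys])
approx = t**(eps - beta + 1) * (B @ X_t)
for l in range(N + 1):
    assert abs(singular_integral(polys[l], eps, beta, t) - approx[l]) < 1e-4
print("Theorem 3.8 verified numerically for N = 3")
```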

The approximation of the shifted basis \(\mathbf{X}(\mathfrak{h}(t))\)

Theorem 3.9

If \(\mathfrak{h}(t) \in C(J)\), then the shifted basis vector \(\mathbf{X}(\mathfrak{h}(t))\) is approximated in \(\mathbf{X}(t)\) as

$$ \mathbf{X} \bigl(\mathfrak{h}(t) \bigr) \approx \boldsymbol{\Lambda} _{\mathfrak{h}} \mathbf{X}(t), $$

where \(\boldsymbol{\Lambda} _{\mathfrak{h}} \) is an \((N+1)\)-order matrix, constructed in the proof below.

Proof

Noting the definition of \(\mathcal{X}_{j}(t)\) in (3), each \(\mathcal{X}_{j}(\mathfrak{h}(t))\) is written as

$$ \mathcal{X}_{j} \bigl(\mathfrak{h}(t) \bigr)=\sum _{k=0}^{j} \rho _{k, j} \mathfrak{h}^{k}(t), \quad j=0, 1,\ldots, N, $$
(17)

where the coefficients \(\rho _{k, j}\), \(j, k=0, 1,\ldots, N\), are given by (4). Thus, the vector \(\mathbf{X}(\mathfrak{h}(t))\) is written as

$$\begin{aligned} \mathbf{X} \bigl(\mathfrak{h}(t) \bigr)&= \bigl[\mathcal{X}_{0} \bigl( \mathfrak{h}(t) \bigr), \mathcal{X}_{1} \bigl(\mathfrak{h}(t) \bigr),\ldots, \mathcal{X}_{N} \bigl(\mathfrak{h}(t) \bigr) \bigr]^{T} \\ &= \begin{bmatrix} \rho _{0, 0} & 0 & 0 & \cdots & 0 & 0 \\ \rho _{0, 1} & \rho _{1, 1} & 0 & \cdots & 0 & 0 \\ \rho _{0, 2} & \rho _{1, 2} & \rho _{2, 2} & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ \rho _{0, N-1}& \rho _{1, N-1} & \rho _{2, N-1} & \cdots & \rho _{N-1, N-1} & 0 \\ \rho _{0, N} & \rho _{1, N} & \rho _{2, N} & \cdots & \rho _{N-1, N} & \rho _{N, N} \end{bmatrix} \begin{bmatrix} 1 \\ \mathfrak{h}(t) \\ \mathfrak{h}^{2}(t) \\ \vdots \\ \mathfrak{h}^{N-1}(t) \\ \mathfrak{h}^{N}(t) \end{bmatrix} \\ &= \Delta H(t). \end{aligned}$$

Now, the functions \(\mathfrak{h}^{k}\), \(k=1, 2,\ldots, N\), must be approximated. First, \(\mathfrak{h}(t)\) is approximated as follows:

$$ \mathfrak{h}(t) \approx \sum_{j=0}^{N} \varpi _{j} \mathcal{X}_{j}(t) =\Pi ^{T} \mathbf{X}(t), $$
(18)

where \(\varpi _{j}=\frac{1}{\hbar _{j}} \int _{0}^{1} \mathfrak{h}(t) \mathcal{X}_{j}(t) w(t) \,dt\). With the aid of (18), one achieves

$$ \mathfrak{h}^{k}(t)\approx {\overline{\Pi }_{k}}^{T} \mathbf{X}(t), \quad\quad {\overline{\Pi }_{k}}= \bigl(\widetilde{\Pi }^{k-1} \bigr)^{T} \Pi , \quad k=1, 2,\ldots, N, $$

where Π̃ is the \((N+1)\)-order operational matrix of the product, and its entries are calculated in terms of the components of the vector Π. Hence, the vector \(H(t)\) is approximated as follows:

$$ H(t)\approx [ e_{1}, \overline{\Pi }_{1}, \overline{\Pi }_{2},\ldots, \overline{\Pi }_{N}]^{T} \mathbf{X}(t) =\overline{\overline{\Pi }} \mathbf{X}(t), $$

where \(e_{1}=[1, 0,\ldots, 0]^{T}\). So, one attains

$$ \mathbf{X} \bigl(\mathfrak{h}(t) \bigr)\approx \Delta H(t) \approx \Delta \overline{\overline{\Pi }} \mathbf{X}(t)=\boldsymbol{\Lambda} _{ \mathfrak{h}} \mathbf{X}(t), \quad \text{s.t.} \quad \boldsymbol{\Lambda} _{\mathfrak{h}}=\Delta \overline{\overline{\Pi }}. $$

 □
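Rather than forming Δ and the power approximations explicitly, \(\boldsymbol{\Lambda} _{\mathfrak{h}}\) can also be assembled row by row as the weighted projection of \(\mathcal{X}_{j}(\mathfrak{h}(t))\) onto the basis, an equivalent route under the same truncation. The sketch below takes the illustrative choice \(\mathfrak{h}(t)=3t/4\), for which each \(\mathcal{X}_{j}(\mathfrak{h}(t))\) has degree at most N and the relation is exact up to quadrature error; helpers and the \(\hbar \)-ratio form of \(\epsilon _{j+1}\) as before:

```python
import numpy as np

def hbar(i):
    return np.pi / 2**(2*i + 1) if i % 2 == 0 else np.pi * (i + 2) / (i * 2**(2*i + 1))

def scpfk(N):
    polys = [np.array([1.0]), np.array([-1.0, 2.0])]
    for j in range(1, N):
        lead = np.convolve([-1.0, 2.0], polys[j])
        prev = np.pad(polys[j - 1], (0, lead.size - polys[j - 1].size))
        polys.append(lead - (hbar(j) / hbar(j - 1)) * prev)
    return polys

def Lambda_h(polys, h, M=4001):
    """Row j holds the projection coefficients of X_j(h(t)) onto X_0..X_N,
    so that X(h(t)) ~ Lambda_h X(t); quadrature via t = (1 + cos u)/2."""
    N = len(polys) - 1
    u = (np.arange(M) + 0.5) * np.pi / M
    t = (1 + np.cos(u)) / 2
    Xt = np.array([np.polyval(p[::-1], t) for p in polys])      # X_i(t), rows i
    Xh = np.array([np.polyval(p[::-1], h(t)) for p in polys])   # X_j(h(t)), rows j
    G = (Xh * np.cos(u) ** 2) @ Xt.T * np.pi / M                # <X_j o h, X_i>_w
    return G / np.array([hbar(i) for i in range(N + 1)])        # divide column i by hbar_i

h = lambda s: 0.75 * s        # illustrative h; linear h keeps X_j(h(t)) in the span
polys = scpfk(5)
L = Lambda_h(polys, h)
for s in (0.1, 0.5, 0.9):
    Xs = np.array([np.polyval(p[::-1], s) for p in polys])
    Xhs = np.array([np.polyval(p[::-1], h(s)) for p in polys])
    assert np.allclose(Xhs, L @ Xs, atol=1e-6)
print("X(h(t)) = Lambda_h X(t) verified for h(t) = 3t/4")
```

For nonlinear \(\mathfrak{h}\) the same construction yields the best weighted-\(L^{2}\) truncation of each \(\mathcal{X}_{j}(\mathfrak{h}(t))\).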

Corollary 3.10

The following approximations are achieved by Theorems 3.2, 3.8, and 3.9:

$$\begin{aligned} & \int _{0}^{\mathfrak{h}(t)} \eta ^{\varepsilon } \mathbf{X}(\eta ) \,d \eta \approx \bigl(\mathfrak{h}(t) \bigr)^{\varepsilon +1} \mathbf{P}_{ \varepsilon } \mathbf{X} \bigl(\mathfrak{h}(t) \bigr) \approx \bigl(\mathfrak{h}(t) \bigr)^{ \varepsilon +1} \mathbf{P}_{\varepsilon } \boldsymbol{\Lambda} _{ \mathfrak{h}} \mathbf{X}(t), \\ & \int _{0}^{\mathfrak{h}(t)} \frac{\eta ^{\varepsilon } \mathbf{X}(\eta )}{(t-\eta )^{\beta }} \,d \eta \approx \bigl(\mathfrak{h}(t) \bigr)^{\varepsilon -\beta +1} \mathbf{B}^{( \beta )} \mathbf{X} \bigl(\mathfrak{h}(t) \bigr) \approx \bigl(\mathfrak{h}(t) \bigr)^{ \varepsilon -\beta +1} \mathbf{B}^{(\beta )} \boldsymbol{\Lambda} _{ \mathfrak{h}} \mathbf{X}(t). \end{aligned}$$

Methodology

To find an approximate solution for problem (1)–(2), first consider the following approximation:

$$ \frac{\partial ^{4} \mathcal{V}(x, t)}{\partial x^{2} \partial t^{2}} \approx U^{T} \boldsymbol{\Phi}(x, t). $$
(19)

Integrating (19) twice with respect to x yields the following approximations:

$$\begin{aligned}& \frac{\partial ^{3} \mathcal{V}(x, t)}{\partial x \partial t^{2}} \approx x U^{T} \mathbb{P}_{(x)}^{0} \boldsymbol{\Phi}(x, t)+ \frac{\partial ^{3} \mathcal{V}(0, t)}{\partial x \partial t^{2}}, \end{aligned}$$
(20)
$$\begin{aligned}& \frac{\partial ^{2} \mathcal{V}(x, t)}{\partial t^{2}} \approx x^{2} U^{T} \mathbb{P}_{(x)}^{0} \mathbb{P}_{(x)}^{1} \boldsymbol{\Phi}(x, t)+ \frac{\partial ^{3} \mathcal{V}(0, t)}{\partial x \partial t^{2}} x+ \psi ^{\prime \prime } _{0}(t). \end{aligned}$$
(21)

Now, setting \(x=1\) in (21) determines the approximate value of \(\partial ^{3} \mathcal{V}(0, t)/\partial x \partial t^{2}\):

$$ \frac{\partial ^{3} \mathcal{V}(0, t)}{\partial x \partial t^{2}} \approx \psi ^{\prime \prime } _{1}(t)-\psi ^{\prime \prime } _{0}(t)-U^{T} \mathbb{P}_{(x)}^{0} \mathbb{P}_{(x)}^{1} \boldsymbol{\Phi}(1, t)=G_{1}(t). $$

Integrating (21) twice with respect to t leads to the following approximations:

$$\begin{aligned}& \frac{\partial \mathcal{V}(x, t)}{\partial t} \approx x^{2} t U^{T} \mathbb{P}_{(x)}^{0} \mathbb{P}_{(x)}^{1} \mathbb{P}_{(t)}^{0} \boldsymbol{\Phi}(x, t)+x \int _{0} ^{t} G_{1}(\eta ) \,d\eta +\psi ^{ \prime } _{0}(t)-\psi ^{\prime } _{0}(0) +\phi _{1}(x), \end{aligned}$$
(22)
$$\begin{aligned}& \begin{aligned} \mathcal{V}(x, t) &\approx x^{2} t^{2} U^{T} \mathbb{P}_{(x)}^{0} \mathbb{P}_{(x)}^{1} \mathbb{P}_{(t)}^{0} \mathbb{P}_{(t)}^{1} \boldsymbol{\Phi}(x, t)+x \int _{0} ^{t} \int _{0}^{t^{\prime }}G_{1}( \eta ) \,d\eta \,dt^{\prime }+\psi _{0}(t) \\ & \quad{} -\psi _{0}(0)-\psi ^{\prime } _{0}(0) t+t \phi _{1}(x)+\phi _{0}(x). \end{aligned} \end{aligned}$$
(23)
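The chain (19)–(23) is the familiar "integrate the highest derivative" device: in any polynomial basis, integration acts linearly on the coefficient vector, and the initial and boundary data supply the integration constants. A minimal one-dimensional sketch in the monomial basis (an illustrative assumption; the paper performs the same steps in the SCPFK basis through its pseudo-operational matrices):

```python
def integrate_coeffs(c):
    """Antiderivative of sum(c[k] * x**k) with zero constant term,
    expressed again as monomial coefficients: the monomial-basis
    analogue of an integration (pseudo-)operational matrix."""
    return [0.0] + [ck / (k + 1) for k, ck in enumerate(c)]

def poly_eval(c, x):
    return sum(ck * x ** k for k, ck in enumerate(c))

# Expand the highest derivative, here u''(x) = 6x, then integrate twice:
# u(x) = x**3 + u'(0) * x + u(0), with the constants fixed by initial data.
upp = [0.0, 6.0]
u = integrate_coeffs(integrate_coeffs(upp))
u0, up0 = 2.0, -1.0          # assumed initial data u(0), u'(0)
val = poly_eval(u, 0.5) + up0 * 0.5 + u0
```

In the paper the same idea runs in two variables, so the constants become the functions \(G_{1}(t)\), \(\psi _{0}\), \(\psi _{1}\), \(\phi _{0}\), \(\phi _{1}\) appearing in (20)–(23).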

Similarly, integrating (19) twice with respect to t yields the following approximations:

$$\begin{aligned}& \frac{\partial ^{3} \mathcal{V}(x, t)}{\partial x^{2} \partial t} \approx t U^{T} \mathbb{P}_{(t)}^{0} \boldsymbol{\Phi}(x, t)+\phi ^{ \prime \prime } _{1}(x), \end{aligned}$$
(24)
$$\begin{aligned}& \frac{\partial ^{2} \mathcal{V}(x, t)}{\partial x^{2}} \approx t^{2} U^{T} \mathbb{P}_{(t)}^{0} \mathbb{P}_{(t)}^{1} \boldsymbol{\Phi}(x, t)+t \phi ^{\prime \prime } _{1}(x)+\phi ^{\prime \prime } _{0}(x). \end{aligned}$$
(25)

Integrating (25) with respect to x gives the following approximation to \(\partial \mathcal{V}(x, t)/\partial x\):

$$ \begin{aligned} \frac{\partial \mathcal{V}(x, t)}{\partial x} \approx {}&x t^{2} U^{T} \mathbb{P}_{(t)}^{0} \mathbb{P}_{(t)}^{1} \mathbb{P}_{(x)}^{0} \boldsymbol{\Phi}(x, t)+t \bigl(\phi ^{\prime } _{1}(x)-\phi ^{\prime } _{1}(0) \bigr) \\ &{} + \phi ^{\prime } _{0}(x)-\phi ^{\prime } _{0}(0)+ \frac{\partial \mathcal{V}(0, t)}{\partial x}. \end{aligned} $$
(26)

Differentiating (23) with respect to x and setting \(x=0\) lead to an approximation for \(\partial \mathcal{V}(0, t)/\partial x\):

$$ \frac{\partial \mathcal{V}(0, t)}{\partial x} \approx \int _{0}^{t} \int _{0} ^{t^{\prime }} G_{1}(\eta ) \,d\eta \,dt^{\prime } +t \phi ^{ \prime } _{1}(0)+\phi ^{\prime } _{0}(0)=G_{2}(t). $$

Integrating (20) with respect to t leads to the following approximation:

$$ \frac{\partial ^{2} \mathcal{V}(x, t)}{\partial x \partial t} \approx x t U^{T} \mathbb{P}_{(x)}^{0} \mathbb{P}_{(t)}^{0} \boldsymbol{\Phi}(x, t)+ \int _{0} ^{t} G_{1}(\eta ) \,d\eta +\phi ^{\prime } _{1}(x). $$
(27)

To approximate \({}_{0} ^{C} \mathcal{D}_{t} ^{\sigma (x, t)} \mathcal{V}(x, t) \), consider the definition of the Riemann–Liouville integral operator and the approximation in (21). Using Remark 3.5, one can write

$$ \begin{aligned} _{0} ^{C} \mathcal{D}_{t} ^{\sigma (x, t)} \mathcal{V}(x, t) &= {}_{0} ^{\mathit{RL}} \mathcal{I}_{t} ^{2-\sigma (x, t)} \bigl( {D}_{t} ^{2} \mathcal{V}(x, t) \bigr) \\ &\approx{} _{0} ^{\mathit{RL}} \mathcal{I}_{t} ^{2-\sigma (x, t)} \bigl(x^{2} U^{T} \mathbb{P}_{(x)}^{0} \mathbb{P}_{(x)}^{1} \boldsymbol{\Phi}(x, t)+ x G_{1}(t)+ \psi ^{\prime \prime } _{0}(t) \bigr) \\ &\approx x^{2} t^{2-\sigma (x, t)} U^{T} \mathbb{P}_{(x)}^{0} \mathbb{P}_{(x)}^{1} \mathbb{P}_{(t)}^{(\sigma , 0)} \boldsymbol{\Phi}(x, t)+ x {_{0} ^{\mathit{RL}} }\mathcal{I}_{t} ^{2-\sigma (x, t)}G_{1}(t) \\ & \quad {} + {}_{0} ^{C} \mathcal{D}_{t} ^{\sigma (x, t)}\psi _{0}(t), \end{aligned} $$
(28)

where \({D}_{t} ^{2}=\partial ^{2} /\partial t^{2}\). Suppose that \(\mathcal{F}(\mathcal{V}(x, t))\) in the integral part in Eq. (1) is approximated as follows:

$$ \mathcal{F} \bigl(\mathcal{V}(x, t) \bigr) \approx x^{\varepsilon _{1}} t^{ \varepsilon _{2}} V^{T} \boldsymbol{\Phi}(x, t). $$

Now, the double integral in Eq. (1) is approximated as

$$ \begin{aligned} & \int _{0}^{x} \int _{0}^{\mathfrak{h}(t)} \frac{\mathcal{F}(\mathcal{V}(\omega , \varpi ))}{(x-\omega )^{\theta } (t-\varpi )^{\vartheta }} \,d \varpi \,d \omega \\&\quad \approx \int _{0}^{x} \int _{0}^{\mathfrak{h}(t)} \frac{\omega ^{\varepsilon _{1}} \varpi ^{\varepsilon _{2}} V^{T} \boldsymbol{\Phi}(\omega , \varpi )}{(x-\omega )^{\theta } (t-\varpi )^{\vartheta }}\,d \varpi \,d \omega \\ &\quad = V^{T} \int _{0}^{x} \frac{\omega ^{\varepsilon _{1}} \mathbf{X}(\omega )}{(x-\omega )^{\theta }} \,d\omega \otimes \int _{0}^{\mathfrak{h}(t)} \frac{\varpi ^{\varepsilon _{2}} \mathbf{X}(\varpi )}{(t-\varpi )^{\vartheta }} \,d\varpi \\ &\quad \approx x^{\varepsilon _{1}-\theta +1} \bigl(\mathfrak{h}(t) \bigr)^{ \varepsilon _{2} -\vartheta +1} V^{T} \bigl(\mathbf{B}^{(\theta )} \mathbf{X}(x) \bigr) \otimes \bigl(\mathbf{B}^{(\vartheta )} \mathbf{X} \bigl( \mathfrak{h}(t) \bigr) \bigr) \\ &\quad \approx x^{\varepsilon _{1}-\theta +1} \bigl(\mathfrak{h}(t) \bigr)^{ \varepsilon _{2} -\vartheta +1} V^{T} \bigl(\mathbf{B}^{(\theta )} \otimes \bigl( \mathbf{B}^{(\vartheta )} \boldsymbol{\Lambda}_{\mathfrak{h}} \bigr) \bigr) \bigl( \mathbf{X}(x) \otimes \mathbf{X}(t) \bigr) \\ &\quad =x^{\varepsilon _{1}-\theta +1} \bigl(\mathfrak{h}(t) \bigr)^{\varepsilon _{2} - \vartheta +1} V^{T} \mathbb{B}^{(\theta , \vartheta )} \boldsymbol{\Phi}(x, t), \end{aligned} $$
(29)

where \(\mathbb{B}^{(\theta , \vartheta )}=\mathbf{B}^{(\theta )} \otimes ( \mathbf{B}^{(\vartheta )} \boldsymbol{\Lambda}_{\mathfrak{h}})\) is an \((N+1)^{2}\)-order matrix. Substituting approximations (19)–(29) into Eq. (1) gives a residual function; collocating it at the tensor points \(\{ (x_{i}, t_{j}) \}_{i, j=0}^{N}\), where \(x_{i}\) and \(t_{j}\) are the roots of \(\mathcal{X}_{N+1}(x)\) and \(\mathcal{X}_{N+1}(t)\), respectively, leads to a linear algebraic system of \((N+1)^{2}\) equations. Solving this system determines the unknown coefficient vector U, and the approximate solution is then obtained from (23).
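The separation of the double integral in (29) rests on the Kronecker mixed-product identity \((A\mathbf{x})\otimes (B\mathbf{y})=(A\otimes B)(\mathbf{x}\otimes \mathbf{y})\), which is what allows \(\mathbf{B}^{(\theta )}\) and \(\mathbf{B}^{(\vartheta )} \boldsymbol{\Lambda}_{\mathfrak{h}}\) to be merged into the single matrix \(\mathbb{B}^{(\theta , \vartheta )}\). A quick self-contained check with small stand-in matrices (illustrative names only):

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 2], [3, 4]]      # plays the role of B^(theta)
B = [[0, 1], [1, 1]]      # plays the role of B^(vartheta) Lambda_h
x, y = [1, -1], [2, 5]    # stand-ins for X(x) and X(t)

lhs = [p * q for p in matvec(A, x) for q in matvec(B, y)]   # (Ax) kron (By)
rhs = matvec(kron(A, B), [p * q for p in x for q in y])     # (A kron B)(x kron y)
```

The identity is exact, so assembling \(\mathbb{B}^{(\theta , \vartheta )}\) once and applying it to \(\boldsymbol{\Phi}(x, t)=\mathbf{X}(x)\otimes \mathbf{X}(t)\) introduces no additional approximation error.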

Error bounds

In this section, error bounds for the obtained approximations are computed, which yield an error bound for the proposed method.

Set \(\mathbb{P}_{N}=\operatorname{Span}\{ \mathcal{Z}_{i j}(t), i, j=0, 1,\ldots, N \}\) and suppose that \(\mathfrak{u}_{N}(x, t) \in \mathbb{P}_{N} \) is the best approximation to \(\mathfrak{u}(x, t) \in L^{2}_{W}(\overline{\mathbf{J}})\), in other words,

$$ \Vert \mathfrak{u}-\mathfrak{u}_{N} \Vert = \inf _{\mathfrak{z}\in \mathbb{P}_{N}} \Vert \mathfrak{u}- \mathfrak{z} \Vert , $$

where \(\mathfrak{u}_{N}(x, t)=\sum_{i=0}^{N} \sum_{j=0}^{N} u_{i j} \mathcal{Z}_{i, j}(x, t)\).

Theorem 5.1

Suppose that \(\mathcal{T}_{\mathcal{V}}^{N}(x, t)\) and \(\mathcal{V}_{N}(x, t) \in \mathbb{P}_{N}\) are the Taylor expansion of and the best approximation to \(\mathcal{V}(x, t) \in L^{2}_{W}(\overline{\mathbf{J}})\), respectively. The approximation error satisfies the following bound:

$$ \Vert \mathcal{V} -\mathcal{V}_{N} \Vert _{L^{2}_{W}( \overline{\mathbf{J}})} \leqslant C_{0} \frac{\sqrt{\pi } \Theta _{0}}{\sqrt{2 N} \Gamma ^{2}(N+2)}, $$

where \(C_{0}\) is a positive constant and \(\Theta _{0}=\max_{(x, t) \in \overline{\mathbf{J}}} \vert \frac{\partial ^{2N+2} \mathcal{V}(x, t)}{\partial x^{N+1} \partial t^{N+1}} \vert \).

Proof

The error of the Taylor expansion for function \(\mathcal{V}\) is

$$ \mathcal{V}(x, t)-\mathcal{T}_{\mathcal{V}}^{N}(x, t)= \frac{x^{N+1} t^{N+1}}{\Gamma ^{2}(N+2)} \frac{\partial ^{2N+2} \mathcal{V}(\xi _{x}, \eta _{t})}{\partial x^{N+1} \partial t^{N+1}}, \quad (\xi _{x}, \eta _{t}) \in \overline{\mathbf{J}}. $$
(30)

Taking the \(L^{2}\)-norm of Eq. (30) and using the definition of the beta function lead to the following inequality:

$$ \begin{aligned} \Vert \mathcal{V}- \mathcal{V}_{N} \Vert _{L^{2}_{W}( \overline{\mathbf{J}})} ^{2} &\leqslant \bigl\Vert \mathcal{V}- \mathcal{T}^{N} _{\mathcal{V}} \bigr\Vert _{L^{2}_{W}(\overline{\mathbf{J}})} ^{2} \\ &\leqslant \int _{0}^{1} \int _{0}^{1} \frac{\Theta _{0} ^{2} x^{2N+2} t^{2N+2}}{\Gamma ^{4}(N+2)}W(x, t) \,dt \,dx \\ &=\frac{\Theta _{0} ^{2} \pi }{\Gamma ^{4}(N+2)} \biggl( \frac{4 \Gamma (2N+\frac{9}{2})}{\Gamma (2N+5)}- \frac{4 \Gamma (2N+\frac{7}{2})}{\Gamma (2N+4)}+ \frac{\Gamma (2N+\frac{5}{2})}{\Gamma (2N+3)} \biggr)^{2}. \end{aligned} $$
(31)

Utilizing the Stirling formula in [30] for sufficiently large N leads to the following inequalities:

$$ \frac{\Gamma (2N+\frac{9}{2})}{\Gamma (2N+5)} \leqslant c_{1} (2N)^{- \frac{1}{2}}, \quad\quad \frac{ \Gamma (2N+\frac{7}{2})}{\Gamma (2N+4)} \leqslant c_{2} (2N)^{-\frac{1}{2}}, \quad\quad \frac{\Gamma (2N+\frac{5}{2})}{\Gamma (2N+3)} \leqslant c_{3} (2N)^{- \frac{1}{2}}, $$

where \(c_{i}\), \(i=1, 2, 3\), are positive constants. Applying these inequalities to (31) yields the desired result. □
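The Stirling-type estimate \(\Gamma (2N+\frac{9}{2})/\Gamma (2N+5)\leqslant c_{1} (2N)^{-\frac{1}{2}}\) can be checked numerically via the log-gamma function (a verification sketch, not part of the proof):

```python
import math

def gamma_ratio(a, b):
    # Gamma(a) / Gamma(b), evaluated stably through log-gamma
    # to avoid overflow for large arguments.
    return math.exp(math.lgamma(a) - math.lgamma(b))

# Gamma(2N + 9/2) / Gamma(2N + 5) scaled by sqrt(2N) stays bounded
# (in fact below 1), confirming the O((2N)^(-1/2)) decay rate.
scaled = [gamma_ratio(2 * N + 4.5, 2 * N + 5) * math.sqrt(2 * N)
          for N in (10, 100, 1000, 10000)]
```

The scaled ratios increase monotonically toward 1, consistent with the asymptotic expansion \(\Gamma (z+a)/\Gamma (z+b)\sim z^{a-b}\) as \(z\rightarrow \infty \).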

Theorem 5.2

Suppose that \(\mathcal{V}_{N}(x, t)\) and \(\mathcal{T}_{\mathcal{V}}^{N}(x, t)\) are the best approximation to and the Taylor expansion of \(\mathcal{V}(x, t) \in L^{2}_{W}(\overline{\mathbf{J}})\), respectively. Error bounds for the derivatives of \(\mathcal{V}(x, t)\) can be estimated as follows:

$$ \biggl\Vert \frac{\partial ^{i+j} \mathcal{V}}{\partial x^{i} \partial t^{j}} - \frac{\partial ^{i+j} \mathcal{V}_{N}}{\partial x^{i} \partial t^{j}} \biggr\Vert _{L^{2} _{W}(\overline{\mathbf{J}})} \leqslant C_{1} \frac{\Theta _{i, j} \sqrt{\pi }}{\Gamma (N-i+2) \Gamma (N-j+2)} \bigl( (2N-2i) (2N-2j) \bigr)^{-\frac{1}{4}}, $$

where \(C_{1}\) is a positive constant and

$$ \Theta _{i, j}=\max_{(x, t) \in \overline{\mathbf{J}}} \biggl\vert \frac{\partial ^{2N-i-j+2} \mathcal{V}(x, t)}{ \partial x^{N-i+1} \partial t^{N-j+1}} \biggr\vert , \quad i, j=0, 1, 2. $$

Proof

According to the Taylor expansion of the function \(\mathcal{V}\), one has

$$ \begin{aligned} &\frac{\partial ^{i+j} \mathcal{V}(x, t)}{\partial x^{i} \partial t^{j}}- \frac{\partial ^{i+j} \mathcal{T}_{\mathcal{V}}^{N}(x, t)}{\partial x^{i} \partial t^{j}} =\frac{x^{N-i+1} t^{N-j+1}}{\Gamma (N-i+2) \Gamma (N-j+2)} \frac{\partial ^{2N-i-j+2} \mathcal{V}(\xi _{x_{i}}, \eta _{t_{j}})}{\partial x^{N-i+1} \partial t^{N-j+1}}, \\ &\quad (\xi _{x_{i}}, \eta _{t_{j}}) \in \overline{\mathbf{J}}, i, j=0, 1, 2. \end{aligned} $$
(32)

By taking the \(L^{2}\)-norm of (32), one gets

$$ \begin{aligned} &\biggl\Vert \frac{\partial ^{i+j} \mathcal{V}}{\partial x^{i} \partial t^{j}}- \frac{\partial ^{i+j} \mathcal{T}_{\mathcal{V}}^{N}}{\partial x^{i} \partial t^{j}} \biggr\Vert _{L^{2}_{W}(\overline{\mathbf{J}})}^{2} \\ &\quad \leqslant \int _{0}^{1} \int _{0}^{1} \Theta _{i, j}^{2} \frac{x^{2N-2i+2} t^{2N-2j+2}}{\Gamma ^{2}(N-i+2) \Gamma ^{2}(N-j+2)} W(x, t) \,dt \,dx \\ &\quad = \frac{\Theta _{i, j}^{2} \pi }{\Gamma ^{2}(N-i+2) \Gamma ^{2}(N-j+2)} \\ &\quad\quad{}\times \biggl( \frac{4\Gamma (2N-2i+\frac{9}{2})}{\Gamma (2N-2i+5)}- \frac{4\Gamma (2N-2i+\frac{7}{2})}{\Gamma (2N-2i+4)}+ \frac{\Gamma (2N-2i+\frac{5}{2})}{\Gamma (2N-2i+3)} \biggr) \\ & \quad \quad {}\times \biggl( \frac{4\Gamma (2N-2j+\frac{9}{2})}{\Gamma (2N-2j+5)}- \frac{4\Gamma (2N-2j+\frac{7}{2})}{\Gamma (2N-2j+4)}+ \frac{\Gamma (2N-2j+\frac{5}{2})}{\Gamma (2N-2j+3)} \biggr). \end{aligned} $$
(33)

By the Stirling formula, for sufficiently large N one has

$$\begin{aligned} & \frac{\Gamma (2N-2r+\frac{9}{2})}{\Gamma (2N-2r+5)} \leqslant c_{1}^{r} (2N-2r)^{-\frac{1}{2}}, \quad\quad \frac{\Gamma (2N-2r+\frac{7}{2})}{\Gamma (2N-2r+4)} \leqslant c_{2}^{r} (2N-2r)^{-\frac{1}{2}}, \\ & \frac{\Gamma (2N-2r+\frac{5}{2})}{\Gamma (2N-2r+3)} \leqslant c_{3}^{r} (2N-2r)^{-\frac{1}{2}}, \quad r\in \{ i, j \} , i, j=0, 1, 2, \end{aligned}$$

where \(c_{m}^{r}\), \(m=1, 2, 3\), are positive constants. Combining the last inequalities with (33) leads to the desired result. □

Theorem 5.3

Assume that \(\sigma (x, t)\) is a known continuous function and \(\mathcal{V}(x, t)\in L^{2}_{W}(\overline{\mathbf{J}})\). An error bound for the variable-order fractional derivative of the approximation of \(\mathcal{V}(x, t)\) is as follows:

$$ \bigl\Vert {_{0} ^{C}} \mathcal{D}_{t} ^{\sigma (x, t)} \mathcal{V} - {_{0} ^{C} } \mathcal{D}_{t} ^{\sigma (x, t)} \mathcal{V}_{N} \bigr\Vert _{L^{2}_{W}(\overline{\mathbf{J}})} \leqslant C_{2} \frac{\Theta _{0} \sqrt{\pi }}{ \Gamma (N+2) \Gamma (N-\sigma ^{*}+2)} \bigl(2N \bigl(2N-2 \sigma ^{*} \bigr) \bigr)^{-\frac{1}{4}}, $$

where \(C_{2}\) is a positive constant, \(\Theta _{0}\) is the same in Theorem 5.1, and \(\sigma ^{*}=\max_{(x, t) \in \overline{\mathbf{J}}} \{ \sigma (x, t) \}\).

Proof

By applying the operator \({_{0} ^{C}} \mathcal{D}_{t} ^{\sigma (x, t)}\) to (30), one gains

$$ \begin{aligned}& {_{0} ^{C}} \mathcal{D}_{t} ^{\sigma (x, t)} \mathcal{V}(x, t) - {_{0} ^{C} } \mathcal{D}_{t} ^{\sigma (x, t)} \mathcal{T}_{\mathcal{V}}^{N}(x, t)= \frac{x^{N+1} t^{N-\sigma (x, t)+1}}{\Gamma (N+2) \Gamma (N-\sigma (x, t)+2)} \frac{\partial ^{2N+2} \mathcal{V}(\xi _{x}, \eta _{t})}{\partial x^{N+1} \partial t^{N+1}}, \\ & \quad (\xi _{x}, \eta _{t}) \in \overline{\mathbf{J}}. \end{aligned} $$
(34)

Taking the \(L^{2}\)-norm of (34) leads to the following inequality:

$$\begin{aligned} &\bigl\Vert {_{0} ^{C}} \mathcal{D}_{t} ^{\sigma (x, t)} \mathcal{V} - {_{0} ^{C} }\mathcal{D}_{t} ^{\sigma (x, t)} \mathcal{V}_{N} \bigr\Vert _{L^{2}_{W}(\overline{\mathbf{J}})} ^{2} \\ &\quad \leqslant \int _{0}^{1} \int _{0}^{1} \frac{\Theta _{0} ^{2} x^{2N+2} t^{2N-2\sigma ^{*} +2}}{\Gamma ^{2}(N+2) \Gamma ^{2}(N-\sigma ^{*}+2)} W(x, t) \,dt \,dx \\ &\quad = \frac{\Theta _{0}^{2} \pi }{\Gamma ^{2}(N+2) \Gamma ^{2}(N-\sigma ^{*} +2)} \biggl( \frac{4\Gamma (2N+\frac{9}{2})}{\Gamma (2N+5)}- \frac{4\Gamma (2N+\frac{7}{2})}{\Gamma (2N+4)}+ \frac{\Gamma (2N+\frac{5}{2})}{\Gamma (2N+3)} \biggr) \\ &\quad \quad {}\times \biggl( \frac{4\Gamma (2N-2\sigma ^{*}+\frac{9}{2})}{\Gamma (2N-2\sigma ^{*}+5)}- \frac{4\Gamma (2N-2\sigma ^{*}+\frac{7}{2})}{\Gamma (2N-2\sigma ^{*}+4)}+ \frac{\Gamma (2N-2\sigma ^{*}+\frac{5}{2})}{\Gamma (2N-2\sigma ^{*}+3)} \biggr). \end{aligned}$$
(35)

Using the Stirling formula for large values of N results in the following inequalities:

$$\begin{aligned} & \frac{\Gamma (2N+\frac{9}{2})}{\Gamma (2N+5)} \leqslant c_{1} (2N)^{- \frac{1}{2}}, \quad\quad \frac{\Gamma (2N+\frac{7}{2})}{\Gamma (2N+4)} \leqslant c_{2} (2N)^{-\frac{1}{2}}, \quad\quad \frac{\Gamma (2N+\frac{5}{2})}{\Gamma (2N+3)} \leqslant c_{3} (2N)^{- \frac{1}{2}}, \\ & \frac{\Gamma (2N-2\sigma ^{*}+\frac{9}{2})}{\Gamma (2N-2\sigma ^{*}+5)} \leqslant d_{1} \bigl(2N-2\sigma ^{*} \bigr)^{-\frac{1}{2}}, \quad\quad \frac{\Gamma (2N-2\sigma ^{*}+\frac{7}{2})}{\Gamma (2N-2\sigma ^{*}+4)} \leqslant d_{2} \bigl(2N-2\sigma ^{*} \bigr)^{-\frac{1}{2}}, \\ & \frac{\Gamma (2N-2\sigma ^{*}+\frac{5}{2})}{\Gamma (2N-2\sigma ^{*}+3)} \leqslant d_{3} \bigl(2N-2\sigma ^{*} \bigr)^{-\frac{1}{2}}, \end{aligned}$$

where \(c_{i}\), \(d_{i}\), \(i=1, 2, 3\), are positive constants. Combining the resultant inequalities with (35) leads to the desired results. □

Theorem 5.4

Suppose that \(\mathcal{V}(x, t) \in L^{2}_{W}(\overline{\mathbf{J}})\) and \(\mathcal{V}_{N}(x, t)\) is its approximation obtained from the proposed method, \(\mathcal{F}: \mathbb{R}\times \mathbb{R} \rightarrow \mathbb{R}\) is a continuous differential operator, \(\mathfrak{h}(t) \in C(I)\), \(\mathfrak{h}_{0}=\max_{t\in I} \{\mathfrak{h}(t) \}\), and there exists a real number \(\varrho >0\) such that

$$ \bigl\Vert \mathcal{F} \bigl(\mathcal{V}(x, t) \bigr)-\mathcal{F} \bigl( \mathcal{V}_{N}(x, t) \bigr) \bigr\Vert _{L^{2}_{W}(\overline{\mathbf{J}})} \leqslant \varrho \bigl\Vert \mathcal{V}(x, t) -\mathcal{V}_{N}(x, t) \bigr\Vert _{L^{2}_{W}( \overline{\mathbf{J}})}. $$

Then the approximation error of the integral part in Eq. (1) can be bounded as follows:

$$\begin{aligned} & \biggl\Vert \int _{0}^{x} \int _{0} ^{\mathfrak{h}(t)} \frac{\mathcal{F}(\mathcal{V}(\omega , \varpi ))}{(x-\omega )^{\theta } (t-\varpi )^{\vartheta }} \,d\varpi \,d\omega - \int _{0}^{x} \int _{0} ^{\mathfrak{h}(t)} \frac{\mathcal{F}(\mathcal{V}_{N}(\omega , \varpi ))}{(x-\omega )^{\theta } (t-\varpi )^{\vartheta }} \,d\varpi \,d\omega \biggr\Vert _{L^{2}_{W}(\overline{\mathbf{J}})} \\ &\quad \leqslant \frac{C_{0} \varrho \Theta _{0} \mathcal{A}_{0} \sqrt{\pi }}{\sqrt{2N} \Gamma ^{2}(N+2)}, \end{aligned}$$

where \(C_{0}\) is a positive constant, \(\Theta _{0}\) is introduced by Theorem 5.1, \(\theta , \vartheta \in (0, 1)\) and

$$ \mathcal{A}_{0}=\sqrt{\mathfrak{h}_{0} \pi } \biggl( \frac{\Gamma (\frac{1}{2}-2\theta ) \Gamma (\frac{1}{2}-2\vartheta ) (4\theta ^{2}-2\theta +1) (4\vartheta ^{2}-2\vartheta +1)}{\Gamma (3-2\theta ) \Gamma (3-2\vartheta )} \biggr)^{\frac{1}{2}}. $$

Proof

According to the hypotheses of the theorem and using Theorem 5.1, one has

$$\begin{aligned} \biggl\Vert & \int _{0}^{x} \int _{0} ^{\mathfrak{h}(t)} \frac{\mathcal{F}(\mathcal{V}(\omega , \varpi ))}{(x-\omega )^{\theta } (t-\varpi )^{\vartheta }} \,d\varpi \,d\omega - \int _{0}^{x} \int _{0} ^{\mathfrak{h}(t)} \frac{\mathcal{F}(\mathcal{V}_{N}(\omega , \varpi ))}{(x-\omega )^{\theta } (t-\varpi )^{\vartheta }} \,d\varpi \,d\omega \biggr\Vert _{L^{2}_{W}(\overline{\mathbf{J}})} ^{2} \\ &\leqslant \varrho ^{2} \int _{0}^{x} \int _{0} ^{\mathfrak{h}(t)} \bigl\Vert (x-\omega )^{-\theta } (t-\varpi )^{-\vartheta } \bigr\Vert _{L^{2}_{W}( \overline{\mathbf{J}})} ^{2} \Vert \mathcal{V} -\mathcal{V}_{N} \Vert _{L^{2}_{W}(\overline{\mathbf{J}})} ^{2} \,d\varpi \,d\omega \\ &\leqslant \frac{\varrho ^{2} C_{0}^{2} \Theta _{0}^{2} \pi }{2N \Gamma ^{4}(N+2)} \int _{0}^{x} \int _{0}^{\mathfrak{h}(t)} \bigl\Vert (x-\omega )^{-\theta } (t- \varpi )^{-\vartheta } \bigr\Vert _{L^{2}_{W}(\overline{\mathbf{J}})} ^{2} \,d \varpi \,d\omega \\ &\leqslant \frac{\varrho ^{2} C_{0}^{2} \Theta _{0}^{2} \pi \mathfrak{h}_{0}}{2N \Gamma ^{4}(N+2)} \biggl(\pi \frac{\Gamma (\frac{1}{2}-2\theta ) \Gamma (\frac{1}{2}-2\vartheta ) (4\theta ^{2}-2\theta +1) (4\vartheta ^{2}-2\vartheta +1)}{\Gamma (3-2\theta ) \Gamma (3-2\vartheta )} \biggr). \end{aligned}$$

 □

Theorem 5.5

Suppose that \(\mathcal{V}(x, t)\) and \(\mathcal{V}_{N}(x, t)\) are the exact and approximate solutions of Eq. (1) and \(\mathcal{R}(x, t)\) is the residual function/perturbation term. Then \(\mathcal{R}(x, t) \rightarrow 0\) when \(N \rightarrow \infty \).

Proof

Since \(\mathcal{V}_{N}(x, t)\) is the approximate solution of Eq. (1), it satisfies the following equation:

$$ \begin{aligned}&{} _{0} ^{C} \mathcal{D}_{t} ^{\sigma (x, t)} \mathcal{V}_{N}(x, t) + \sum _{i=0}^{2} \sum _{j=0}^{2} \nu _{i j} \frac{\partial ^{i+j} \mathcal{V}_{N}(x, t)}{\partial x^{i} \partial t^{j}}+ \rho \int _{0}^{x} \int _{0}^{\mathfrak{h}(t)} \frac{\mathcal{F}(\mathcal{V}_{N}(\omega , \varpi ))}{(x-\omega )^{\theta } (t-\varpi )^{\vartheta }} \,d \varpi \,d \omega \\ &\quad{} -\mathfrak{f}(x, t)+\mathcal{R}(x, t)=0, \end{aligned} $$
(36)

where \(\mathcal{R}(x, t)\) is the residual function/perturbation term. Subtracting Eq. (36) from Eq. (1) leads to the following equation:

$$ \begin{aligned} \mathcal{R}(x, t)&= {_{0} ^{C}} \mathcal{D}_{t} ^{\sigma (x, t)} \bigl( \mathcal{V}(x, t)-\mathcal{V}_{N}(x, t) \bigr)+ \sum_{i=0}^{2} \sum_{j=0}^{2} \nu _{i j} \frac{\partial ^{i+j} (\mathcal{V}(x, t)-\mathcal{V}_{N}(x, t))}{\partial x^{i} \partial t^{j}} \\ & \quad {} + \rho \int _{0}^{x} \int _{0}^{\mathfrak{h}(t)} \frac{\mathcal{F}(\mathcal{V}(\omega , \varpi ))-\mathcal{F}(\mathcal{V}_{N}(\omega , \varpi ))}{(x-\omega )^{\theta } (t-\varpi )^{\vartheta }} \,d \varpi \,d \omega. \end{aligned} $$
(37)

By taking the \(L^{2}\)-norm of Eq. (37) and using Theorems 5.15.4, a bound for \(\mathcal{R}(x, t)\) can be obtained as follows:

$$ \begin{aligned} \Vert \mathcal{R} \Vert _{L^{2}_{W}(\overline{\mathbf{J}})}&\leqslant C_{2} \frac{\Theta _{0} \sqrt{\pi }}{\Gamma (N+2)\Gamma (N-\sigma ^{*}+2)} \bigl(2N \bigl(2N-2\sigma ^{*} \bigr) \bigr)^{-\frac{1}{4}} \\ & \quad {} +\sum_{i=0}^{2} \sum _{j=0}^{2} \vert \nu _{i j} \vert C_{1} \frac{\Theta _{i, j} \sqrt{\pi }}{\Gamma (N-i+2) \Gamma (N-j+2)} \bigl( (2N-2i) (2N-2j) \bigr)^{-\frac{1}{4}} \\ & \quad{} + \vert \rho \vert C_{0} \frac{\varrho \Theta _{0} \sqrt{\pi } \mathcal{A}_{0}}{\sqrt{2N} \Gamma ^{2}(N+2)}. \end{aligned} $$
(38)

As seen, the right-hand side of (38) approaches zero as \(N \rightarrow \infty \). □

Numerical examples

To illustrate the efficiency and applicability of the proposed scheme, five cases of Eq. (1) are considered. Maximum absolute errors and CPU times are computed, and the numerical results are compared with those reported in [31]. All computations and simulations are performed with Maple 16.

Example 6.1

Consider the following variable-order time-fractional singular partial integro-differential equation:

$$ _{0} ^{C} \mathcal{D}_{t} ^{\sigma (x, t)} \mathcal{V}(x, t) - \mathcal{V}(x, t)- \int _{0}^{x} \int _{0}^{\mathfrak{h}(t)} \frac{\frac{\partial ^{2} \mathcal{V}(\omega , \varpi )}{\partial \varpi ^{2}}}{(x-\omega )^{\frac{1}{2}} } \,d \varpi \,d \omega =\mathfrak{f}(x, t), \quad 0 < \sigma (x, t) \leqslant 1, $$
(39)

where \((x, t) \in \overline{\mathbf{J}}\). The initial conditions and the exact solution are as follows:

$$ \mathcal{V}(x, 0) =\frac{\partial \mathcal{V}(x, 0) }{\partial t}=0, \quad\quad \mathcal{V}(x, t) =xt^{2}, $$

and

$$ \mathfrak{f}(x, t)=\frac{\Gamma (3)}{\Gamma (3-\sigma (x, t))}xt^{2- \sigma (x, t)} -xt^{2}- \frac{8}{3} x^{\frac{3}{2}}\mathfrak{h}(t). $$
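The last term of \(\mathfrak{f}\) traces back to the weakly singular integral identity \(\int _{0}^{x} \omega (x-\omega )^{-1/2} \,d\omega =\frac{4}{3} x^{3/2}\). The substitution \(\omega = x(1-s^{2})\) removes the endpoint singularity, so even a plain midpoint rule verifies the closed form (an illustrative numerical check, not part of the scheme):

```python
import math

def singular_integral(x, n=2000):
    # Integral of w * (x - w)^(-1/2) over w in [0, x].
    # Substituting w = x * (1 - s**2) removes the endpoint singularity:
    # the integrand becomes 2 * x**1.5 * (1 - s**2) on s in [0, 1].
    step = 1.0 / n
    return sum(2 * x ** 1.5 * (1 - s * s) * step
               for s in [(k + 0.5) * step for k in range(n)])

x = 0.7
approx = singular_integral(x)
exact = 4.0 / 3.0 * x ** 1.5
```

This same substitution-based desingularization idea is what the pseudo-operational matrices \(\mathbf{B}^{(\beta )}\) encode analytically for the whole basis at once.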

Based on what was stated in Sect. 4, the following approximations are needed to calculate the residual function \(\mathcal{R}(x, t)\):

$$\begin{aligned} &\frac{\partial ^{2} \mathcal{V}(x, t)}{\partial t^{2}}\approx U^{T} \Phi (x, t), \quad\quad \frac{\partial \mathcal{V}(x, t)}{\partial t} \approx t U^{T} \mathbb{P}_{(t)}^{0} \Phi (x, t), \\ &\mathcal{V}(x, t)\approx t^{2} U^{T} \mathbb{P}_{(t)}^{0} \mathbb{P}_{(t)}^{1} \Phi (x, t), \\ & _{0} ^{C} \mathcal{D}_{t} ^{\sigma (x, t)} \mathcal{V}(x, t) ={}_{0} ^{\mathit{RL}} \mathcal{I}_{t} ^{1-\sigma (x, t)} \frac{\partial \mathcal{V}(x, t)}{\partial t} \approx t^{2-\sigma (x, t)} U^{T} \mathbb{P}_{(t)}^{0} \mathbb{P}_{(t)}^{1} \mathbb{P}_{(t)}^{(1- \sigma , 1)} \Phi (x, t), \\ & \int _{0}^{x} \int _{0}^{\mathfrak{h}(t)} \frac{\frac{\partial ^{2} \mathcal{V}(\omega , \varpi )}{\partial \varpi ^{2}}}{(x-\omega )^{\frac{1}{2}} } \,d \varpi \,d \omega \approx x^{\frac{1}{2}} \mathfrak{h}(t) U^{T} \mathbb{B} \Phi (x, t), \quad\quad \mathbb{B}=\mathbf{B}^{(\frac{1}{2})} \otimes (\mathbf{P}_{0} \boldsymbol{\Lambda}_{\mathfrak{h}}). \end{aligned}$$

Substituting the above approximations into Eq. (39) leads to the following residual function:

$$ \begin{aligned} \mathcal{R}(x, t)={}& t^{2-\sigma (x, t)} U^{T} \mathbb{P}_{(t)}^{0} \mathbb{P}_{(t)}^{1} \mathbb{P}_{(t)}^{(1-\sigma , 1)} \Phi (x, t)-t^{2} U^{T} \mathbb{P}_{(t)}^{0} \mathbb{P}_{(t)}^{1} \Phi (x, t) \\ & {} -x^{\frac{1}{2}} \mathfrak{h}(t) U^{T} \mathbb{B} \Phi (x, t)- \mathfrak{f}(x, t). \end{aligned} $$
(40)

The \(L^{2}\)-norm of (40) is \(1.2672 \times 10^{-19}\) and the CPU time is 5.04 s for \(\mathfrak{h}(t)= \cos (t)\), \(\sigma (x, t)=1 \), and \(N=4\), while the \(L^{2}\)-norm of the residual function reported in [31] is \(2.5549 \times 10^{-16}\). The maximum absolute errors (MAE) and the CPU times are computed for \(N=4\) and various choices of the functions \(\mathfrak{h}(t)\) and \(\sigma (x, t)\). The CPU times reported in Table 1 show that the proposed method has a reasonable computational cost, and comparison with the results of [31] confirms its higher accuracy. The 3D plots of the exact and approximate solutions and of the absolute error function are depicted in Fig. 1 for \(N=4\), \(\sigma (x, t)=1-0.5 e^{-xt}\), and \(\mathfrak{h}(t)=\cos (t)\). Absolute errors of the approximate solutions at equally spaced points are listed in Table 2, calculated for \(N=4\), \(\sigma _{i}(x, t)=1, 0.875, 0.8+0.005 \cos (xt)\), \(i=1, 2, 3\), and \(\mathfrak{h}(t)=t, e^{t^{2}}\). The data in this table show that the numerical results are in agreement with the exact ones.

Figure 1
figure 1

(a) 3D plot of exact solution, (b) 3D plot of approximate solution, (c) 3D plot of absolute error function for Example 6.1 for \(N=4\), \(\sigma (x, t)=1-0.5 e^{-x t}\), and \(\mathfrak{h}(t)=\cos (t)\)

Table 1 Maximum absolute errors and CPU time for \(N=4\) for Example 6.1
Table 2 Absolute errors for \(N=4\) at equally spaced points \((x_{j}, t_{j})\) for Example 6.1

Example 6.2

Consider the following variable-order time-fractional singular partial integro-differential equation:

$$ _{0} ^{C} \mathcal{D}_{t} ^{\sigma (x, t)} \mathcal{V}(x, t) - \mathcal{V}(x, t)-\frac{\partial \mathcal{V}(x, t)}{\partial x}- \int _{0}^{x} \int _{0}^{\mathfrak{h}(t)} \frac{\frac{\partial ^{3} \mathcal{V}(\omega , \varpi )}{\partial \omega \partial \varpi ^{2}}}{(x-\omega )^{\frac{1}{4}} } \,d \varpi \,d \omega =\mathfrak{f}(x, t), $$
(41)

where \(1 < \sigma (x, t) \leqslant 2\), \((x, t) \in \overline{\mathbf{J}}\). The initial and boundary conditions are as follows:

$$ \mathcal{V}(x, 0) =-x, \quad\quad \frac{\partial \mathcal{V}(x, 0) }{\partial t}=\mathcal{V}(0, t) =0, \quad\quad \mathcal{V}(1, t)=t^{3}-1. $$

The exact solution is \(\mathcal{V}(x, t)=x(t^{3}-1)\) and

$$ \mathfrak{f}(x, t)= \frac{6xt^{3-\sigma (x, t)}}{\Gamma (4-\sigma (x, t))}-\frac{2}{3} x^{ \frac{3}{4}} \mathfrak{h}^{2}(t)-x \bigl(t^{3}-1 \bigr)-t^{3}+1. $$

Substituting appropriate approximations into Eq. (41) leads to the following residual function:

$$ \begin{aligned} \mathcal{R}(x, t)={}&x t^{2-\sigma (x, t)} U^{T} \mathbb{P}_{(x)}^{0} \mathbb{P}_{(t)}^{(2-\sigma , 0)} \Phi (x, t)- \bigl(x t^{2} U^{T} \mathbb{P}_{(x)}^{0} \mathbb{P}_{(t)}^{0} \mathbb{P}_{(t)}^{1} \Phi (x, t)-x \bigr) \\ &{}- \bigl(t^{2} U^{T} \mathbb{P}_{(t)}^{0} \mathbb{P}_{(t)}^{1} \Phi (x, t)-1 \bigr) -x^{\frac{3}{4}} \mathfrak{h}(t) U^{T} \mathbb{B} \Phi (x, t)- \mathfrak{f}(x, t), \end{aligned} $$
(42)

where \(\mathbb{B}=\mathbf{B}^{(\frac{1}{4})}\otimes (\mathbf{P}_{0} \boldsymbol{\Lambda}_{\mathfrak{h}})\). Maximum absolute errors of the obtained approximate solutions are reported in Table 3 for \(N=5\) and various choices of the functions \(\sigma (x, t)\) and \(\mathfrak{h}(t)\). The results are more accurate for \(\mathfrak{h}(t)=t, 1+t^{2}\), while for \(\mathfrak{h}(t)=\sin (t)\) the errors decrease as \(\sigma (x, t) \rightarrow 2\). The 3D plots of the absolute error functions are depicted in Fig. 2 for \(N=5\), \(\sigma (x, t)=2-0.2e^{-xt}\), and diverse cases of \(\mathfrak{h}(t)\).

Figure 2
figure 2

3D plots of absolute error functions for (a\(\mathfrak{h}(t)=t\), (b\(\mathfrak{h}(t)=t^{2}+1\), (c\(\mathfrak{h}(t)=\cos (t)\), (d\(\mathfrak{h}(t)=\sin (t)\), \(\sigma (x,t)=2-0.2e^{-xt}\), and \(N=5\) for Example 6.2

Table 3 Maximum absolute errors for \(N=5\) and different choices of \(\mathfrak{h}(t)\) and \(\sigma (x, t)\) for Example 6.2

Example 6.3

Consider the following VTFSPIDE with the exact solution \(\mathcal{V}(x, t)=10(t+1)x^{2}(1-x)^{2}\):

$$ _{0} ^{C} \mathcal{D}_{t} ^{\sigma (x, t)} \mathcal{V}(x, t) + \frac{\partial \mathcal{V}(x, t)}{\partial t}+ \frac{\partial \mathcal{V}(x, t)}{\partial x}- \frac{\partial ^{2} \mathcal{V}(x, t)}{\partial x^{2}}- \int _{0}^{x} \int _{0}^{\mathfrak{h}(t)} \frac{\mathcal{V}(\omega , \varpi )}{(x-\omega )^{\frac{1}{2}} } \,d \varpi \,d \omega =\mathfrak{f}(x, t), $$

where \(0 < \sigma (x, t) \leqslant 1\), \((x, t) \in \overline{\mathbf{J}}\). The initial and boundary conditions are

$$ \mathcal{V}(x, 0) =10x^{2}(1-x)^{2}, \quad\quad \mathcal{V}(0, t) = \mathcal{V}(1, t)=0, $$

and

$$\begin{aligned} \mathfrak{f}(x, t)={}&10 x^{2}(1-x)^{2} \biggl(1+ \frac{t^{2-\sigma (x, t)}}{\Gamma (2-\sigma (x, t))} \biggr)- \frac{16}{63} \mathfrak{h}(t) x^{\frac{5}{2}}(t+2) \bigl(16x^{2}-36x+21 \bigr) \\ &{}+10 \bigl(4x^{3}-6x^{2}+2x-12x^{2}+12x-2 \bigr) (t+1). \end{aligned}$$
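The polynomial factor \(16x^{2}-36x+21\) in \(\mathfrak{f}\) arises from the weakly singular \(\omega \)-integration of the spatial part \(x^{2}(1-x)^{2}\) of the exact solution; using the Beta function, \(\int _{0}^{x} \omega ^{2}(1-\omega )^{2} (x-\omega )^{-1/2} \,d\omega =\frac{16}{315} x^{5/2} (16x^{2}-36x+21)\). A numerical sketch confirming this identity (the substitution \(\omega = x(1-s^{2})\) again removes the singularity):

```python
import math

def abel_integral(g, x, n=4000):
    # Integral of g(w) * (x - w)^(-1/2) over [0, x], via w = x * (1 - s*s);
    # the transformed integrand 2 * sqrt(x) * g(x * (1 - s*s)) is smooth.
    step = 1.0 / n
    return 2 * math.sqrt(x) * sum(g(x * (1 - s * s)) * step
                                  for s in [(k + 0.5) * step for k in range(n)])

g = lambda w: w ** 2 * (1 - w) ** 2
x = 0.6
approx = abel_integral(g, x)
exact = (16.0 / 315.0) * x ** 2.5 * (16 * x ** 2 - 36 * x + 21)
```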

The absolute errors of the approximate solutions are computed at the points \(x_{j}=t_{j}=0.1 j\), \(j=0, 1,\ldots, 10\), for \(N=3\) and \(\sigma (x, t)=1-0.5 e^{-xt}, 1-\cos (x) e^{-t}, 0.25, 0.50, 1\); the results are presented in Tables 4 and 5 for \(\mathfrak{h}(t)=t\) and \(\mathfrak{h}(t)=\cos (t)\), respectively. The data in these tables demonstrate the agreement of the numerical results with the exact ones; the method retains high accuracy even for non-polynomial continuous choices of \(\mathfrak{h}(t)\). The 3D plots of the exact and approximate solutions and of the absolute error function are shown in Fig. 3 for \(N=3\), \(\sigma (x, t)=1-0.3 (1+x^{3}) e^{-t}\), and \(\mathfrak{h}(t)=t^{2}\). The computational times of the approximate solutions are listed in Table 6 for \(N=3\), \(\mathfrak{h}(t)=t\), and various cases of \(\sigma (x, t)\); as the values of \(\sigma (x, t)\) approach 1, the computational time decreases. The exact and approximate solutions are compared in Fig. 4 for \(N=3\), \(\sigma (x, t)=1-\cos (x) e^{-t}\), and \(\mathfrak{h}(t)=\cos (t)\) at \(t=0.25, 0.50, 0.75, 1\); as seen, the approximate solutions are in good agreement with the exact ones.

Figure 3
figure 3

(a) 3D plot of exact solution, (b) 3D plot of approximate solution, (c) 3D plot of absolute error function for \(N=3\), \(\mathfrak{h}(t)=t^{2}\), \(\sigma (x, t)=1-0.3 (1+x^{3})e^{-t}\) for Example 6.3

Figure 4
figure 4

Exact and approximate solutions for \(N=3\), \(\sigma (x, t)=1-\cos (x) e^{-t}\), \(\mathfrak{h}(t)=\cos (t)\) at times \(t=0.25, 0.50, 0.75, 1\) for Example 6.3

Table 4 Absolute errors for \(N=3\) and \(\mathfrak{h}(t)=t\) at equally spaced points \((x_{j}, t_{j})\) for Example 6.3
Table 5 Absolute errors for \(N=3\) and \(\mathfrak{h}(t)=\cos (t)\) at equally spaced points \((x_{j}, t_{j})\) for Example 6.3
Table 6 CPU time for \(N=3\) and \(\mathfrak{h}(t)=t\) for Example 6.3

Example 6.4

Consider the variable-order fractional partial integro-differential equation with the weakly singular kernel

$$ _{0} ^{C} \mathcal{D}_{t} ^{\sigma (x, t)} \mathcal{V}(x, t) + \frac{\partial \mathcal{V}(x, t)}{\partial x}- \int _{0}^{x} \int _{0}^{\mathfrak{h}(t)} \frac{\mathcal{V}(\omega , \varpi )}{(t-\varpi )^{\frac{1}{2}} } \,d \varpi \,d \omega =\mathfrak{f}(x, t), $$

where \(0 < \sigma (x, t) \leqslant 1\), \((x, t) \in \overline{\mathbf{J}}\). The initial and boundary conditions and the exact solution are, respectively,

$$ \mathcal{V}(x, 0) = \mathcal{V}(0, t) =0, \quad\quad \mathcal{V}(x, t) =t \sin (x), $$

and

$$ \mathfrak{f}(x, t)= \frac{t^{1-\sigma (x, t)}}{\Gamma (2-\sigma (x, t))} \sin (x)+t \cos (x)+ \biggl( \frac{4}{3} t^{\frac{3}{2}}-\frac{2}{3} \bigl(t- \mathfrak{h}(t) \bigr)^{ \frac{1}{2}} \bigl(2t+ \mathfrak{h}(t) \bigr) \biggr) \bigl(\cos (x)-1 \bigr). $$
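The bracketed factor in \(\mathfrak{f}\) is the closed form of the inner singular integral, \(\int _{0}^{\mathfrak{h}(t)} \varpi (t-\varpi )^{-1/2} \,d\varpi =\frac{4}{3} t^{3/2}-\frac{2}{3} (t-\mathfrak{h}(t))^{1/2} (2t+\mathfrak{h}(t))\), while the \(\omega \)-integration of \(\sin \omega \) supplies the factor \(\cos (x)-1\). For \(\mathfrak{h}(t)<t\) the integrand is nonsingular, so a direct midpoint rule checks the formula (illustrative only):

```python
import math

def inner_integral(t, h, n=20000):
    # Integral of w * (t - w)^(-1/2) over w in [0, h], with h < t,
    # so the integrand stays bounded and a plain midpoint rule suffices.
    step = h / n
    return sum(((k + 0.5) * step) / math.sqrt(t - (k + 0.5) * step) * step
               for k in range(n))

t, h = 0.8, 0.5          # sample point with h(t) < t (illustrative values)
approx = inner_integral(t, h)
exact = 4.0 / 3.0 * t ** 1.5 - 2.0 / 3.0 * math.sqrt(t - h) * (2 * t + h)
```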

The absolute errors of the approximate solutions are computed at the points \(x_{j}=t_{j}=0.1 j\), \(j=0, 1,\ldots, 10\), for \(N=3, 4, 5\) and \(\mathfrak{h}(t)=t\); the results are presented in Tables 7 and 8 for \(\sigma (x, t)=0.5\) and \(\sigma (x, t)= 0.8+0.005\cos (xt) \sin (x)\), respectively. As seen, increasing N decreases the values of the absolute errors. The results are also compared with those of [31] for \(M=4\), \(N=1\); the results obtained from the proposed method enjoy higher accuracy than those reported in [31]. The absolute error functions are depicted in Fig. 5 for \(N=5\), \(x=1\), \(\mathfrak{h}(t)=t\), and \(\sigma (x, t)=1-0.1 e^{-xt}, 1-0.3 e^{-xt}, 1-0.5 e^{-xt}, 1-0.7 e^{-xt}\).

Figure 5

Absolute error functions for \(N=5\), \(\mathfrak{h}(t)=t\), \(x=1\), and diverse cases of \(\sigma (x, t)\) for Example 6.4

Table 7 Absolute errors for \(\mathfrak{h}(t)=t\) and \(\sigma (x,t)=0.5\) at equally spaced points \((x_{j}, t_{j})\) for Example 6.4
Table 8 Absolute errors for \(\mathfrak{h}(t)=t\) and \(\sigma (x, t)=0.8+0.005\cos (xt) \sin (x)\) at equally spaced points \((x_{j}, t_{j})\) for Example 6.4

Example 6.5

Consider the variable-order time-fractional partial integro-differential equation with the weakly singular kernel

$$ _{0} ^{C} \mathcal{D}_{t} ^{\sigma (x, t)} \mathcal{V}(x, t) + \mathcal{V}(x, t)- \int _{0}^{x} \int _{0}^{t} \frac{\frac{\partial \mathcal{V}(\omega , \varpi )}{\partial \varpi }}{(x-\omega )^{\frac{1}{3}} } \,d \varpi \,d \omega =\mathfrak{f}(x, t), \quad 0 < \sigma (x, t) \leqslant 1, (x, t) \in \overline{ \mathbf{J}}. $$

The initial and boundary conditions are

$$ \mathcal{V}(x, 0) =\cos (x), \quad\quad \mathcal{V}(0, t) =1+\sin (t), $$

the exact solution is \(\mathcal{V}(x, t) =\cos (x)+ \sin (t)\) if \(\sigma (x, t)=1\), and

$$ \mathfrak{f}(x, t)=\cos (x)+\sin (t)+\cos (t)+\frac{3}{2} x^{ \frac{2}{3}}\sin (t). $$

Maximum absolute errors and CPU times are reported in Table 9 for \(\mathfrak{h}(t)=t\), \(\sigma (x, t)=1\), and various values of N. The maximum absolute error decreases as N increases, and the CPU times show that the proposed method has a reasonable computational cost. The approximate solutions are plotted in Fig. 6 for \(N=5\), \(\mathfrak{h}(t)=t\), \(\sigma (x, t)=0.8, 0.85, 0.9, 0.95, 1\), and \(x=1\). As \(\sigma (x, t)\) approaches 1, the approximate solutions tend to the exact solution of the case \(\sigma (x, t)=1\).

Figure 6

Approximate solutions of Example 6.5 for \(N=5\), \(\mathfrak{h}(t)=t\), \(x=1\), and different values of \(\sigma (x, t)\)

Table 9 Maximum absolute errors and CPU time of Example 6.5 for \(\mathfrak{h}(t)=t\) and \(\sigma (x,t)=1\)

Conclusion

The fifth-kind Chebyshev polynomials were employed for the numerical solution of a class of partial integro-differential equations with weakly singular kernels. To this end, bivariate Chebyshev polynomials were constructed from the one-variable ones, and their pseudo-operational matrices were derived from the matrices of the one-dimensional case. The resulting matrices and approximations were combined with the collocation method to solve the problem under study. Error bounds were derived, and they show that the residual function tends to zero when the number of terms in the solution series is chosen sufficiently large. The proposed pseudo-operational approach also reduces the volume of computation. The numerical results demonstrated the efficiency and applicability of the method: they were in good agreement with the exact solutions and more accurate than those reported in [31]. Therefore, the Chebyshev polynomials of the fifth kind are recommended as basis functions for various spectral and pseudo-spectral methods. In view of these facts, the suggested method can readily be extended to delay integro-partial differential equations and some nonlinear partial differential equations.

Availability of data and materials

Not applicable.

References

  1. Keller, J.B., Olmstead, W.E.: Temperature of a nonlinearly radiating semi-infinite solid. Q. Appl. Math. 29(4), 559–566 (1972)


  2. Linz, P.: Analytical and Numerical Methods for Volterra Equations. Society for Industrial and Applied Mathematics, Philadelphia (1985). https://doi.org/10.1137/1.9781611970852


  3. Cuesta, E., Palencia, C.: A fractional trapezoidal rule for integro-differential equations of fractional order in Banach spaces. Appl. Numer. Math. 45, 139–159 (2003)


  4. Du, Q., Ju, L., Tian, L.: Analysis of a mixed finite-volume discretization of fourth-order equations on general surfaces. IMA J. Numer. Anal. 29, 376–403 (2008)


  5. El-Gamel, M., Cannon, J., Zayed, A.: Sinc-Galerkin method for solving linear sixth-order boundary-value problems. Math. Comput. 73, 1325–1343 (2004)


  6. Yang, X., Xu, D., Zhang, H.: Crank–Nicolson/quasi-wavelets method for solving fourth order partial integro-differential equation with a weakly singular kernel. J. Comput. Phys. 234, 317–329 (2013)


  7. Baleanu, D., Darzi, R., Agheli, B.: New study of weakly singular kernel fractional fourth-order partial integro-differential equations based on the optimum q-homotopic analysis method. J. Comput. Appl. Math. 320, 193–201 (2017)


  8. Tang, T.: A finite difference scheme for partial integro-differential equations with a weakly singular kernel. Appl. Numer. Math. 11(4), 309–319 (1993)


  9. Dong, B., Shu, C.W.: Analysis of a local discontinuous Galerkin method for linear time-dependent fourth-order problems. SIAM J. Numer. Anal. 47, 3240–3268 (2009)


  10. Li, X., Xu, C.: A space-time spectral method for the time fractional diffusion equation. SIAM J. Numer. Anal. 47, 2108–2131 (2009)


  11. Patel, V.K., Singh, S., Singh, V.K., Tohidi, E.: Two dimensional wavelets collocation scheme for linear and nonlinear Volterra weakly singular partial integro-differential equations. Int. J. Appl. Comput. Math. 4(5) (2018). https://doi.org/10.1007/s40819-018-0560-4

  12. Behzadi, Sh.S.: The use of iterative methods to solve two-dimensional nonlinear Volterra–Fredholm integro-differential equations. Commun. Numer. Anal. (2012). https://doi.org/10.5899/2012/cna-00108


  13. Dehghan, M.: Solution of a partial integro-differential equation arising from viscoelasticity. Int. J. Comput. Math. 83(1), 123–129 (2006)


  14. Nemati, S., Lima, P.M., Ordokhani, Y.: Numerical solution of a class of two-dimensional nonlinear Volterra integral equations using Legendre polynomials. J. Comput. Appl. Math. 242, 53–69 (2013)


  15. Wazwaz, A.M.: A reliable treatment for mixed Volterra–Fredholm integral equations. Appl. Math. Comput. 127, 405–414 (2002)


  16. Liu, J.G., Yang, X.J., Feng, Y.Y., Cui, P.: On group analysis of the time fractional extended \((2+1)\)-dimensional Zakharov–Kuznetsov equation in quantum magneto-plasmas. Math. Comput. Simul. 178, 407–421 (2020)


  17. Liu, J.G., Yang, X.J., Feng, Y.Y., Cui, P., Geng, L.L.: On integrability of the higher dimensional time fractional KdV-type equation. J. Geom. Phys. 160, 104000 (2021). https://doi.org/10.1016/j.geomphys.2020.104000


  18. Sing, J., Kumar, D., Purohit, S.D., Mishra, A.M., Bohra, L.: An efficient numerical approach for fractional multidimensional diffusion equations with exponential memory. Numer. Methods Partial Differ. Equ. (2020). https://doi.org/10.1002/num.22601


  19. Sing, J.: Analysis of fractional blood alcohol model with composite fractional derivative. Chaos Solitons Fractals 140, 110127 (2020). https://doi.org/10.1016/j.chaos.2020.110127


  20. Singh, H., Singh, A.K., Pandey, R.K., Kumar, D., Singh, J.: An efficient computational approach for fractional Bratu’s equation arising in electrospinning process. Math. Methods Appl. Sci. 44, 10225–10238 (2021). https://doi.org/10.1002/mma.7401


  21. Alderremy, A.A., Saad, K.M., Gomez-Aguilar, J.F., Aly, S., Kumar, D., Sing, J.: New models of fractional blood ethanol and two-cell cubic autocatalator reaction equations. Math. Methods Appl. Sci. (2021). https://doi.org/10.1002/mma.7188


  22. Coimbra, C.F.M.: Mechanics with variable-order differential operators. Ann. Phys. 12, 692–703 (2003)


  23. Ramirez, L.E.S., Coimbra, C.F.M.: On the variable order dynamics of the nonlinear wake caused by a sedimenting particle. Physica D 240(13), 1111–1118 (2011)


  24. Ingman, D., Suzdalnitsky, J.: Control of damping oscillations by fractional differential operator with time-dependent order. Comput. Methods Appl. Mech. Eng. 193(52), 5585–5595 (2004)


  25. Soon, C.M., Coimbra, C.F.M., Kobayashi, M.H.: The variable viscoelasticity oscillator. Ann. Phys. 14(6), 378–389 (2005)


  26. Tang, J., Xu, D.: The global behavior of finite difference-spatial spectral collocation methods for a partial integro-differential equation with a weakly singular kernel. Numer. Math., Theory Methods Appl. 6(3), 556–570 (2013)


  27. Tang, T.: A finite difference scheme for partial integro-differential equations with a weakly singular kernel. Appl. Numer. Math. 11, 309–319 (1993)


  28. Abd-Elhameed, W.M., Youssri, Y.H.: Fifth-kind orthonormal Chebyshev polynomial solutions for fractional differential equations. Comput. Appl. Math. 37, 2897–2921 (2018)


  29. Biazar, J., Sadri, K.: Solution of weakly singular fractional integro-differential equations by using a new operational approach. Comput. Appl. Math. 352, 453–477 (2019)


  30. Guo, B.Y., Wang, L.L.: Jacobi approximations in non-uniformly Jacobi-weighted Sobolev spaces. J. Approx. Theory 128, 1–41 (2004)


  31. Dehestani, H., Ordokhani, Y., Razzaghi, M.: Numerical solution of variable-order time fractional weakly singular partial integro-differential equations with error estimation. Math. Model. Anal. 25(4), 680–701 (2020)



Acknowledgements

The authors would like to thank the reviewers for their thoughtful comments and efforts toward improving the manuscript.

Funding

The author(s) received no financial support for this article.

Author information


Contributions

The authors declare that the study was carried out in collaboration, with responsibilities shared among them. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Khadijeh Sadri.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Sadri, K., Hosseini, K., Baleanu, D. et al. Bivariate Chebyshev polynomials of the fifth kind for variable-order time-fractional partial integro-differential equations with weakly singular kernel. Adv Differ Equ 2021, 348 (2021). https://doi.org/10.1186/s13662-021-03507-5


MSC

  • 35Q80
  • 45D05
  • 45E10
  • 45K05

Keywords

  • Variable-order time-fractional weakly singular partial integro-differential equations
  • Pseudo-operational matrix
  • Fifth-kind Chebyshev polynomials
  • Caputo derivative
  • Riemann–Liouville integral