# Chebyshev wavelets operational matrices for solving nonlinear variable-order fractional integral equations

## Abstract

In this study, a wavelet method is developed to solve a system of nonlinear variable-order (V-O) fractional integral equations using the Chebyshev wavelets (CWs) and the Galerkin method. For this purpose, we derive a V-O fractional integration operational matrix (OM) for CWs and use it in our method. In the established scheme, we approximate the unknown functions by CWs with unknown coefficients and reduce the problem to an algebraic system. In this way, we simplify the computation of nonlinear terms by obtaining some new results for CWs. Finally, we demonstrate the applicability of the presented algorithm by solving a few numerical examples.

## Introduction

Fractional calculus is a useful extension of classical calculus that allows derivatives and integrals of arbitrary order. It arose from a famous scientific discussion between Leibniz and L'Hôpital in 1695 and was developed by scientists such as Laplace, Abel, Euler, Riemann, and Liouville. In recent years, fractional calculus has become a popular topic among researchers in mathematics, physics, and engineering because fractional differential (integral) equations describe the behavior of many physical systems with greater precision. We recall that the main advantage of fractional differential (integral) equations in modeling applied problems is their nonlocal property: in a fractional dynamical system, the next state depends on all previous states.

Another interesting extension of fractional-order calculus is to let the fractional order be a known time-dependent function $$\alpha (t)$$. This generalization is called variable-order (V-O) fractional calculus. It finds numerous applications in science and engineering because the nonlocal property of fractional calculus becomes even more evident. V-O fractional functional equations are usually difficult to solve analytically, and finding exact solutions is impossible in most cases. It is therefore important to propose approximation/numerical procedures for these problems; several recent studies present numerical methods for V-O fractional functional equations.

Over the last few decades, orthogonal functions have been applied extensively to various classes of problems, chiefly because they turn the main problem into a simple algebraic system. We recall that the Chebyshev polynomials can effectively approximate any sufficiently differentiable function, with an approximation error that rapidly converges to zero. This property is usually called "spectral accuracy".

Compared with other basis functions, wavelets offer many advantages which allow the investigation of problems that conventional numerical methods cannot handle. In recent years, different categories of fractional problems have been solved by deriving the OM of classical fractional integration for well-known orthogonal wavelets. CWs are a particular type of orthonormal wavelets which combine the orthogonality and spectral accuracy of the Chebyshev polynomials with the properties of wavelets. These useful features have led to the widespread use of CWs in solving fractional differential equations (FDEs). CWs have been applied to a class of fractional systems of singular integro-differential equations, to multi-order FDEs, to nonlinear fractional integro-differential equations on large intervals (Heydari et al.), to other classes of nonlinear FDEs, to the generalized Burgers–Huxley equation (via a collocation scheme), and to FDEs with nonsingular kernel.

Many applied problems with memory can be successfully modeled via a fractional system of differential and/or integral equations, for example, semiconductor devices, population dynamics, and the identification of memory kernels in heat conduction. In recent years, several numerical techniques have been applied to fractional systems of differential and integral equations, for example, the fractional-power Jacobi spectral method, the Bernoulli, Haar, and Müntz–Legendre wavelet methods, the block-pulse function method, the finite difference method, the spline collocation method, and spectral methods.

The major aim of the current study is to treat the following nonlinear V-O fractional system by proposing the CWs Galerkin method:

$$\textstyle\begin{cases} u_{1}(t)={F}_{1} (t,u_{1}(t),u_{2}(t),\ldots ,u_{d}(t) )\\ \hphantom{u_{1}(t)=}{}+\int _{0}^{t} (t-s )^{\alpha _{1}(t)-1}K_{1} (t,s ){G}_{1} (s,u_{1}(s),u_{2}(s),\ldots ,u_{d}(s) )\,ds, \\ u_{2}(t)={F}_{2} (t,u_{1}(t),u_{2}(t),\ldots ,u_{d}(t) )\\ \hphantom{u_{2}(t)=}{}+\int _{0}^{t} (t-s )^{\alpha _{2}(t)-1}K_{2} (t,s ){G}_{2} (s,u_{1}(s),u_{2}(s),\ldots ,u_{d}(s) )\,ds, \\ \vdots \\ u_{d}(t)={F}_{d} (t,u_{1}(t),u_{2}(t),\ldots ,u_{d}(t) )\\ \hphantom{u_{d}(t)=}{}+\int _{0}^{t} (t-s )^{\alpha _{d}(t)-1}K_{d} (t,s ){G}_{d} (s,u_{1}(s),u_{2}(s),\ldots ,u_{d}(s) )\,ds, \end{cases}$$
(1.1)

where $$u_{i}:[0,1] \rightarrow \mathbb{R}$$, $$i=1,2,\ldots ,d$$, are unknown functions, $$\alpha _{i}(t)$$ with $$q_{i}-1<\alpha _{i}(t)\leq q_{i}$$, $$i=1,2,\ldots ,d$$, are given functions, $$q_{i}$$, $$i=1,2,\ldots ,d$$, are natural numbers, $${F}_{i}$$ and $${G}_{i}: [0,1]\times \mathbb{R}^{d}\rightarrow \mathbb{R}$$, $$i=1,2,\ldots ,d$$, are continuous maps which satisfy appropriate Lipschitz conditions, and $$K_{i}: [0,1]\times [0,1]\rightarrow \mathbb{R}$$, $$i=1,2,\ldots ,d$$, are continuous functions. Note that fractional system (1.1) is a generalization of the classical fractional system studied in earlier work, so the method of this paper can also be applied to that system.

To implement the proposed method, a new OM for CWs is elicited as follows:

$$\bigl(I^{\alpha (t)}\Psi \bigr) (t)\simeq \mathbf{P}^{(\alpha )} \Psi (t),$$
(1.2)

where

$$\Psi (t)=\bigl[\psi _{1}(t),\psi _{2}(t),\ldots ,\psi _{\hat{m}}(t)\bigr]^{T},$$
(1.3)

and $$\mathbf{P}^{(\alpha )}$$ is the V-O fractional integration OM of CWs and $$\psi _{i}(t)$$, $$i=1,2,\ldots ,\hat{m}$$, are the CWs basis functions. We construct this new OM using hat functions (HFs) and their properties. The proposed technique expands the unknown functions in CWs and transforms the main system into a system of algebraic equations using the mentioned OM and the Galerkin technique. In this way, a new method for computing the nonlinear terms in such systems is introduced.

This article is structured as follows: In Sect. 2, we express some required preliminaries of HFs and the V-O fractional calculus. In Sect. 3, we introduce CWs and their required properties. In Sect. 4, we study the existence of a unique solution for fractional system (1.1). The proposed method for solving system (1.1) is described in Sect. 5. We examine the convergence of CWs in Sect. 6. Some illustrative examples are solved in Sect. 7. Finally, a conclusion is drawn in the last section.

## Mathematical preliminaries

Here, some preliminaries which are necessary for this study are reviewed.

### V-O fractional integral

Many definitions have appeared in the development of the V-O fractional calculus theory. In this section, we give the most popular definition of the V-O fractional integral.

### Definition 2.1


Let $$\alpha (t)$$ be a continuous function and $$f(t)$$ a given function. The Riemann–Liouville fractional integral operator of order $$\alpha (t)\geq 0$$ is given by

$$\bigl(I^{\alpha (t)}f \bigr) (t)= \textstyle\begin{cases} \frac{1}{\Gamma (\alpha (t))}\int _{0}^{t}(t-s)^{ \alpha (t)-1}f(s)\,ds, & \alpha (t)>0, \\ f(t), & \alpha (t)=0. \end{cases}$$
(2.1)

### Remark 1

The following useful property is implied by the definition:

$$I^{\alpha (t)}t^{\beta }= \frac{\Gamma {(\beta +1)}}{\Gamma (\alpha (t)+\beta +1)} t^{\alpha (t)+ \beta }.$$
(2.2)
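For intuition, property (2.2) is easy to verify numerically. The following sketch (an illustration, not part of the original scheme; the sample order $$\alpha (t)=1+t/2$$, the test exponent $$\beta =2$$, and the midpoint-rule resolution are arbitrary choices, and the helper names are hypothetical) compares a direct quadrature of Definition 2.1 with the closed form:

```python
from math import gamma

def rl_integral_numeric(f, t, alpha, n=20000):
    """Riemann-Liouville V-O integral (I^{alpha(t)} f)(t) of Definition 2.1,
    evaluated by a composite midpoint rule (hypothetical helper)."""
    a = alpha(t)
    h = t / n
    total = sum((t - (k + 0.5) * h) ** (a - 1.0) * f((k + 0.5) * h) for k in range(n))
    return total * h / gamma(a)

def rl_integral_exact_power(t, alpha, beta):
    """Closed form (2.2) for f(t) = t^beta."""
    a = alpha(t)
    return gamma(beta + 1.0) / gamma(a + beta + 1.0) * t ** (a + beta)

alpha = lambda t: 1.0 + 0.5 * t    # a sample variable order with alpha(t) >= 1
beta = 2.0
for t in (0.3, 0.7, 1.0):
    num = rl_integral_numeric(lambda s: s ** beta, t, alpha)
    ref = rl_integral_exact_power(t, alpha, beta)
    assert abs(num - ref) < 1e-5, (t, num, ref)
```

Note that the order is frozen at the outer time $$t$$ inside the integral, exactly as in Definition 2.1.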

### HFs and their properties

A set of $$\hat{m}$$ HFs is introduced on $$[0,1]$$ as follows [38, 39]:

\begin{aligned}& \varphi _{0}(t)= \textstyle\begin{cases} \frac{h-t}{h}, & 0\leq t< h, \\ 0, & \text{otherwise}, \end{cases}\displaystyle \\ \end{aligned}
(2.3)
\begin{aligned}& \varphi _{i}(t)= \textstyle\begin{cases} \frac{t-(i-1)h}{h}, & (i-1)h\leq t< ih, \\ \frac{(i+1)h-t}{h}, & ih\leq t< (i+1)h, \\ 0, & \text{otherwise}, \end{cases}\displaystyle \quad 1 \leq i\leq \hat{m}-2, \end{aligned}
(2.4)

and

$$\varphi _{\hat{m}-1}(t)= \textstyle\begin{cases} \frac{t-(1-h)}{h}, & 1-h\leq t\leq 1, \\ 0, & \text{otherwise}, \end{cases}$$
(2.5)

where $$h=\frac{1}{(\hat{m}-1)}$$. The HFs can be utilized to express any function $$f(t)$$ on $$[0,1]$$ as

$$f(t)\simeq \sum_{i=0}^{\hat{m}-1}f(ih) \varphi _{i}(t)=\bar{F}^{T} \Phi (t)=\Phi (t)^{T} \bar{F},$$
(2.6)

where

\begin{aligned} &\bar{F}\triangleq \bigl[f(0),f(h),f(2h),\ldots ,f(1)\bigr]^{T}, \end{aligned}
(2.7)
\begin{aligned} &\Phi (t)\triangleq \bigl[\varphi _{0}(t),\varphi _{1}(t), \ldots ,\varphi _{\hat{m}-1}(t)\bigr]^{T}. \end{aligned}
(2.8)

Similarly, they can be utilized to express any function $$f(x,t)$$ on $$[0,1]\times [0,1]$$ as

$$f(x,t)\simeq \Phi (x)^{T}\Lambda \Phi (t),$$
(2.9)

in which Λ is the coefficients matrix and

$$\Lambda _{ij}=f \bigl((i-1)h,(j-1)h \bigr),\quad 1\leq i,j \leq \hat{m}.$$
(2.10)
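As a concrete illustration, the HF basis (2.3)-(2.5) and the sampling rule (2.6)-(2.7) can be sketched in a few lines (the function names and the test function $$\sin (\pi t)$$ are our own choices, not part of the original text):

```python
import numpy as np

def hat_basis(t, m_hat):
    """Evaluate the m_hat hat functions phi_0 .. phi_{m_hat-1} of (2.3)-(2.5)
    at the points t; returns an array of shape (len(t), m_hat)."""
    h = 1.0 / (m_hat - 1)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    nodes = h * np.arange(m_hat)
    # each phi_i is the piecewise-linear "tent" of height 1 centred at node i*h
    return np.clip(1.0 - np.abs(t[:, None] - nodes[None, :]) / h, 0.0, 1.0)

def hf_coeffs(f, m_hat):
    """Coefficient vector F_bar of (2.7): samples of f at the grid points."""
    h = 1.0 / (m_hat - 1)
    return np.array([f(i * h) for i in range(m_hat)])

# approximate f(t) = sin(pi t) by its HF expansion (2.6)
m_hat = 64
f = lambda t: np.sin(np.pi * t)
F_bar = hf_coeffs(f, m_hat)
t = np.linspace(0.0, 1.0, 500)
approx = hat_basis(t, m_hat) @ F_bar
assert np.max(np.abs(approx - f(t))) < 5e-3
```

Since (2.6) is simply piecewise-linear interpolation at the grid points, the error behaves like $$O(h^{2})$$ for twice-differentiable $$f$$.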

### Lemma 2.2


If $$\Phi (t)$$ is the vector expressed in (2.8), then

$$\Phi (t)\Phi (t)^{T}\simeq diag \bigl(\varphi _{0}(t), \varphi _{1}(t), \ldots ,\varphi _{\hat{m}-1}(t) \bigr) \triangleq diag \bigl(\Phi (t) \bigr),$$
(2.11)

in which $$diag (\Phi (t) )$$ is an $$\hat{m}$$-order diagonal matrix.

### Lemma 2.3


If X is an arbitrary $$\hat{m}$$-column vector and $$\Phi (t)$$ is the HFs vector in (2.8), then

$$\Phi (t)\Phi (t)^{T}X\simeq \hat{X}\Phi (t),$$
(2.12)

where $$\hat{X}=diag (x_{1},x_{2},\ldots ,x_{\hat{m}} )$$ is an $$\hat{m}$$-order matrix, namely the product OM for HFs.

### Lemma 2.4

If A is an $$\hat{m}$$-order square matrix and $$\Phi (t)$$ is the vector in (2.8), then

$$\Phi (t)^{T}A\Phi (t)\simeq \hat{A}^{T} \Phi (t),$$
(2.13)

where $$\hat{A}=diag(A)$$ is an $$\hat{m}$$-column vector.

### Proof

The proof follows easily from Lemma 2.2. □

### Theorem 2.5


Suppose that $$\alpha (t): [0,1]\longrightarrow \mathbb{R}^{+}$$ is continuous and $$\Phi (t)$$ is the vector in (2.8). Then we have

$$\bigl(I^{\alpha (t)}\Phi \bigr) (t)\simeq \hat{ \mathbf{P}}^{(\alpha )}\Phi (t),$$
(2.14)

where $$\hat{\mathbf{P}}^{(\alpha )}$$ is known as the V-O fractional integral OM for HFs. Moreover, we have

$$\hat{\mathbf{P}}^{(\alpha )}= \begin{pmatrix} 0 & \zeta _{1} & \zeta _{2} & \ldots &\zeta _{\hat{m}-2} & \zeta _{ \hat{m}-1} \\ 0 & \xi _{1\,1} & \xi _{1\,2} & \ldots & \xi _{1\,\hat{m}-2} & \xi _{1 \,\hat{m}-1} \\ 0 & 0 & \xi _{2\,2} & \ldots & \xi _{2\,\hat{m}-2}& \xi _{2\, \hat{m}-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \xi _{\hat{m}-2\,\hat{m}-2} & \xi _{\hat{m}-2\, \hat{m}-1} \\ 0 & 0 & 0 & 0 & 0 & \xi _{\hat{m}-1\,\hat{m}-1} \end{pmatrix}_{\hat{m}\times \hat{m}},$$

where

$$\zeta _{j}=\frac{h^{\alpha (jh)}}{\Gamma (\alpha (jh)+2)} \bigl( (j-1)^{ \alpha (jh)+1}+j^{\alpha (jh)} \bigl(\alpha (jh)-j+1 \bigr) \bigr) , \quad 1\leq j\leq \hat{m}-1,$$

and

$$\xi _{ij}= \textstyle\begin{cases} 0, & j< i, \\ \frac{h^{\alpha (jh)}}{\Gamma (\alpha (jh)+2)}, & j=i, \\ \frac{h^{\alpha (jh)}}{\Gamma (\alpha (jh)+2)} ( (j-i+1 )^{\alpha (jh)+1}-2(j-i)^{\alpha (jh)+1}+(j-i-1)^{ \alpha (jh)+1} ), & j>i. \end{cases}$$

### Definition 2.6

Let $$V^{T}=[v_{1},v_{2},\ldots ,v_{\hat{m}}]$$ and $$U^{T}_{i}=[u_{1}^{i},u_{2}^{i},\ldots ,u_{\hat{m}}^{i}]$$, $$i=1,2,\ldots ,d$$, be arbitrary constant vectors, and $${F}:\mathbb{R}^{d+1}\rightarrow \mathbb{R}$$ be any continuous function. Then we define $${F} (V^{T},U_{1}^{T},U_{2}^{T},\ldots ,U_{d}^{T} )$$ as

\begin{aligned} &{F} \bigl(V^{T},U_{1}^{T},U_{2}^{T}, \ldots ,U_{d}^{T} \bigr)\\ &\quad = \bigl[{F} \bigl(v_{1},u_{1}^{1},u_{1}^{2}, \ldots ,u_{1}^{d} \bigr),{F} \bigl(v_{2},u_{2}^{1},u_{2}^{2}, \ldots ,u_{2}^{d} \bigr),\ldots ,{F} \bigl(v_{\hat{m}},u_{\hat{m}}^{1},u_{ \hat{m}}^{2}, \ldots ,u_{\hat{m}}^{d} \bigr) \bigr]. \end{aligned}

### Lemma 2.7

If $$V^{T}\Phi (t)$$ and $$U_{i}^{T}\Phi (t)$$, $$i=1,2,\ldots ,d$$, are the approximations of $$v(t)=t$$ and $$u_{i}(t)$$ by HFs, then

$${F} \bigl(v(t),u_{1}(t),u_{2}(t),\ldots ,u_{d}(t) \bigr)\simeq {F} \bigl(V^{T},U_{1}^{T},U_{2}^{T}, \ldots ,U_{d}^{T} \bigr)\Phi (t)$$
(2.15)

for any continuous function $${F}:\mathbb{R}^{d+1}\rightarrow \mathbb{R}$$.

### Proof

Equations (2.6) and (2.7) together with Definition 2.6 complete the proof. □

## Chebyshev wavelets

Herein, CWs and some of their properties, which will be used in the sequel, are reviewed.

### CWs and function expansion

CWs are defined over $$[0,1]$$ as follows:

$$\psi _{nm}(t)= \textstyle\begin{cases} \beta _{m} 2^{\frac{k}{2}}T_{m} (2^{k+1}t-2n-1 ), & t\in [\frac{n}{2^{k}}, \frac{n+1}{2^{k}} ], \\ 0, & \text{otherwise}, \end{cases}$$
(3.1)

where

$$\beta _{m}= \textstyle\begin{cases} \sqrt{\frac{2}{\pi }}, & m=0, \\ \frac{2}{\sqrt{\pi }}, & m\geq 1, \end{cases}$$

$$m=0,1,\ldots ,M-1$$, $$M\in \mathbb{N}$$, and $$n=0,1,\ldots ,2^{k}-1$$ for $$k\in \mathbb{Z}^{+}\cup \{0\}$$. Here, $$T_{m}(t)$$ denotes the Chebyshev polynomials, which are recursively defined over $$[-1,1]$$ as follows:

$$T_{0}(t)=1,\qquad T_{1}(t)=t,\qquad T_{m+1}(t)=2t T_{m}(t)-T_{m-1}(t), \quad m\in \mathbb{N}.$$
(3.2)

Let $$w_{n}(t)$$ be a weight function defined as

$$w_{n}(t)= \textstyle\begin{cases} \frac{1}{\sqrt{1- (2^{k+1}t-2n-1 )^{2}}}, & t\in [\frac{n}{2^{k}},\frac{n+1}{2^{k}} ], \\ 0, & \text{otherwise}. \end{cases}$$
(3.3)

The CWs can be utilized to expand any function $$u(t)$$ on $$[0,1]$$ as

$$u(t)=\sum_{n=0}^{\infty }\sum _{m=0}^{\infty }{c_{nm}\psi _{nm}(t)},$$
(3.4)

where $$c_{nm}= \langle u(t),\psi _{nm}(t) \rangle _{w_{n}(t)}$$. This function can be approximated as

$$u(t)\simeq \sum_{n=0}^{2^{k}-1} \sum_{m=0}^{M-1}{c_{nm}\psi _{nm}(t)=C^{T} \Psi (t)},$$
(3.5)

where the symbol T denotes transposition and $$\Psi (t)$$ and C are $$\hat{m}=2^{k}M$$ column vectors. Relation (3.5) can be simplified as follows:

$$u(t)\simeq \sum_{i=1}^{\hat{m}}{c_{i} \psi _{i}(t)=C^{T}\Psi (t)},$$
(3.6)

where $$\psi _{i}(t)=\psi _{nm}(t)$$, $$c_{i}=c_{nm}$$, and $$i=Mn+m+1$$. This results in

\begin{aligned} &C\triangleq [c_{1},c_{2},\ldots ,c_{\hat{m}} ]^{T}, \\ &\Psi (t)\triangleq \bigl[\psi _{1}(t),\psi _{2}(t),\ldots ,\psi _{ \hat{m}}(t) \bigr]^{T}. \end{aligned}
(3.7)

Likewise, the CWs can be used to expand any two-variable function $$u(x, t)\in L^{2}_{w_{n,n^{\prime }}} ((0, 1)\times (0, 1) )$$ as

$$u(x,t)\simeq \sum_{i=1}^{\hat{m}}\sum _{j=1}^{\hat{m}}u_{ij}\psi _{i}(x) \psi _{j}(t)=\Psi (x)^{T}U\Psi (t),$$
(3.8)

where $$U_{ij}= \langle \psi _{i}(x), \langle u(x,t),\psi _{j}(t) \rangle _{w_{n^{ \prime }}(t)} \rangle _{w_{n}(x)}$$, $$i,j=1,2,\ldots ,\hat{m}$$.
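The definition (3.1) together with the single-index convention $$i=Mn+m+1$$ of (3.6) can be sketched as follows (an illustrative implementation with hypothetical names, not the authors' code):

```python
import numpy as np

def cheb_wavelet_basis(t, k, M):
    """Evaluate the m_hat = 2^k * M Chebyshev wavelets (3.1) at the points t,
    stacked with the single index i = M*n + m + 1 of (3.6)."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    Psi = np.zeros((len(t), 2 ** k * M))
    for n in range(2 ** k):
        on = (t >= n / 2 ** k) & (t <= (n + 1) / 2 ** k)  # support of translate n
        x = 2 ** (k + 1) * t[on] - 2 * n - 1              # map support onto [-1, 1]
        T_prev, T_curr = np.ones_like(x), x               # recursion (3.2): T_0, T_1
        for m in range(M):
            if m == 0:
                T_val = T_prev
            elif m == 1:
                T_val = T_curr
            else:
                T_val = 2.0 * x * T_curr - T_prev
                T_prev, T_curr = T_curr, T_val
            beta = np.sqrt(2.0 / np.pi) if m == 0 else 2.0 / np.sqrt(np.pi)
            Psi[on, M * n + m] = beta * 2.0 ** (k / 2.0) * T_val
    return Psi

# e.g. k = 1, M = 3 gives m_hat = 6 basis functions; at t = 0.1 only the n = 0
# translate is active, with T_2(4*0.1 - 1) = 2*(-0.6)**2 - 1 = -0.28
Psi = cheb_wavelet_basis([0.1], 1, 3)
assert np.allclose(Psi[0, 3:], 0.0)
assert abs(Psi[0, 2] - 2.0 / np.sqrt(np.pi) * np.sqrt(2.0) * (-0.28)) < 1e-12
```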

### Novel results for CWs

Here, some technical results for CWs are derived which will be used in the sequel.

### Lemma 3.1

Assume $$\Psi (t)$$ and $$\Phi (t)$$ are respectively the CWs and the HFs vectors in (3.7) and (2.8). Then we have

$$\Psi (t)\simeq Q\Phi (t),$$
(3.9)

where Q is the CWs matrix of size $$\hat{m}\times \hat{m}$$ with

$$Q_{ij}=\psi _{i}\bigl((j-1)h\bigr), \quad 1 \leq i,j\leq \hat{m}.$$
(3.10)

### Proof

The function $$\psi _{i}(t)$$ (ith component of $$\Psi (t)$$) can be approximated via HFs as follows:

$$\psi _{i}(t)\simeq \sum_{j=0}^{\hat{m}-1} \psi _{i}(jh)\varphi _{j}(t)= \sum _{j=1}^{\hat{m}}\psi _{i}\bigl((j-1)h\bigr) \varphi _{j-1}(t)=Q_{i}^{T} \Phi (t), \quad 1\leq i \leq \hat{m},$$
(3.11)

in which $$Q_{i}$$ is the ith row of the matrix Q. Then the proof is concluded. □

### Corollary 3.2

If X is an $$\hat{m}$$-column vector and $$\Psi (t)$$ is the CWs vector in (3.7), then

$$\Psi (t)\Psi (t)^{T}X\simeq \widehat{X}\Psi (t),$$
(3.12)

where $$\widehat{X}=Qdiag (Q^{T}X )Q^{-1}$$ is the product OM for the CWs of size $$\hat{m}\times \hat{m}$$.

### Proof

The proof is easy by applying Lemmas 2.3 and 3.1. □

### Corollary 3.3

If $$\Psi (t)$$ is the CWs vector in (3.7) and A is an $$\hat{m}$$-order matrix, then

$$\Psi (t)^{T}A\Psi (t)\simeq \widehat{A}^{T}\Psi (t),$$
(3.13)

in which $$\widehat{A}^{T}=B^{T}Q^{-1}$$ and $$B=diag (Q^{T}AQ )$$ is an $$\hat{m}$$-column vector.

### Proof

The proof is immediate from Lemmas 2.4 and 3.1. □

### Corollary 3.4

Assume that $$V^{T}\Psi (t)$$ and $$U_{i}^{T}\Psi (t)$$ are the approximations of $$v(t)=t$$ and $$u_{i}(t)$$ via CWs, respectively, for $$i=1,2,\ldots , d$$. Then we have

\begin{aligned}[b] {F} \bigl(v(t),u_{1}(t),u_{2}(t),\ldots ,u_{d}(t) \bigr)&\simeq {F} \bigl(\widetilde{V}^{T}, \widetilde{U}_{1}^{T},\widetilde{U}_{2}^{T}, \ldots ,\widetilde{U}_{d}^{T} \bigr)Q^{-1}\Psi (t)\\ &= \Psi (t)^{T}\bigl(Q^{-1}\bigr)^{T}{F} \bigl( \widetilde{V}^{T},\widetilde{U}_{1}^{T}, \widetilde{U}_{2}^{T}, \ldots ,\widetilde{U}_{d}^{T} \bigr)^{T} \end{aligned}
(3.14)

for any continuous function $${F}:\mathbb{R}^{d+1}\rightarrow \mathbb{R}$$, where $$\widetilde{V}^{T}=V^{T}Q$$ and $$\widetilde{U}_{i}^{T}=U_{i}^{T}Q$$ for $$i=1,2,\ldots ,d$$.

### Proof

The proof is clear by Lemmas 2.7 and 3.1. □

### Theorem 3.5

Assume that $$\Psi (t)$$ is the CWs vector in (3.7) and $$\alpha (t): [0,1]\longrightarrow \mathbb{R}^{+}$$ is continuous. Then

$$\bigl(I^{\alpha (t)}\Psi \bigr) (t) \simeq \bigl(Q \hat{ \mathbf{P}}^{(\alpha )}Q^{-1} \bigr)\Psi (t)\triangleq \mathbf{P}^{(\alpha )}\Psi (t),$$
(3.15)

where Q and $$\hat{\mathbf{P}}^{(\alpha )}$$ are defined in (3.9) and (2.14), respectively, and $$\mathbf{P}^{(\alpha )}$$ is the V-O fractional integration OM of CWs.

### Proof

By Theorem 2.5 and Lemma 3.1, we get

$$\bigl(I^{\alpha (t)}\Psi \bigr) (t)\simeq \bigl(I^{\alpha (t)}Q \Phi \bigr) (t)=Q \bigl( I^{\alpha (t)}\Phi \bigr) (t)\simeq Q \hat{ \mathbf{P}}^{(\alpha )}\Phi (t).$$
(3.16)

Hence, by applying Lemma 3.1, the desired result is obtained. □
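Lemma 3.1 and Theorem 3.5 suggest a simple computational recipe: sample the wavelets at the HF grid to form Q, build $$\hat{\mathbf{P}}^{(\alpha )}$$ as in Theorem 2.5, and conjugate. A minimal sketch (our own illustration; function names are hypothetical, and the check uses $$k=0$$, $$M=6$$, $$\alpha (t)\equiv 1$$, where $$\psi _{1}\equiv \sqrt{2/\pi }$$ so $$(I^{1}\psi _{1})(jh)=\sqrt{2/\pi }\,jh$$):

```python
import numpy as np
from math import gamma
from numpy.polynomial.chebyshev import chebvander

def cw_matrix_Q(k, M):
    """Q of Lemma 3.1: Q[i, j] = psi_{i+1}(j*h), h = 1/(m_hat - 1), m_hat = 2^k M."""
    m_hat = 2 ** k * M
    h = 1.0 / (m_hat - 1)
    t = h * np.arange(m_hat)
    beta = np.full(M, 2.0 / np.sqrt(np.pi))
    beta[0] = np.sqrt(2.0 / np.pi)
    Q = np.zeros((m_hat, m_hat))
    for n in range(2 ** k):
        on = (t >= n / 2 ** k) & (t <= (n + 1) / 2 ** k)
        x = 2 ** (k + 1) * t[on] - 2 * n - 1
        T = chebvander(x, M - 1)                     # columns T_0 .. T_{M-1}
        Q[M * n:M * (n + 1), on] = 2.0 ** (k / 2.0) * beta[:, None] * T.T
    return Q

def hf_vo_om(alpha, m_hat):
    """HF operational matrix P_hat of Theorem 2.5 (upper triangular)."""
    h = 1.0 / (m_hat - 1)
    P = np.zeros((m_hat, m_hat))
    for j in range(1, m_hat):
        a = alpha(j * h)
        c = h ** a / gamma(a + 2.0)
        P[0, j] = c * ((j - 1) ** (a + 1.0) + j ** a * (a - j + 1.0))
        P[j, j] = c
        for i in range(1, j):
            P[i, j] = c * ((j - i + 1) ** (a + 1.0) - 2.0 * (j - i) ** (a + 1.0)
                           + (j - i - 1) ** (a + 1.0))
    return P

# P^(alpha) = Q P_hat Q^{-1}; nodal check of (I^1 psi_1)(jh) = sqrt(2/pi) * j*h,
# using Psi(jh) = Q e_j, so (P^(alpha) Psi)(jh) is column j of P^(alpha) Q
k, M = 0, 6
Q = cw_matrix_Q(k, M)
P_alpha = Q @ hf_vo_om(lambda t: 1.0, 2 ** k * M) @ np.linalg.inv(Q)
h = 1.0 / 5
assert np.allclose((P_alpha @ Q)[0], np.sqrt(2.0 / np.pi) * h * np.arange(6))
```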

## Existence and uniqueness

The existence and uniqueness of a solution for fractional system (1.1) are studied in this section. Since norms in $$\mathbb{R}^{d}$$ are equivalent, we use the sup-norm $$\|\cdot\|$$ which, for any $$\mathbf{v}= [v_{1},v_{2},\ldots ,v_{d} ]^{T}=[v_{i}]_{i=1,2, \ldots ,d}^{T}\in \mathbb{R}^{d}$$, is given as follows:

$$\Vert \mathbf{v} \Vert =\max_{1\leq i\leq d} \vert v_{i} \vert .$$
(4.1)

Similarly, for $$\Omega =[0,1]$$ or $$\Omega =[0,1]\times [0,1]$$ and any $$\mathbf{V}\in C(\Omega ,\mathbb{R}^{d})$$, we consider the following norm:

$$\Vert \mathbf{V} \Vert =\max_{\omega \in \Omega } \bigl\Vert \mathbf{V}(\omega ) \bigr\Vert ,$$
(4.2)

where $$\|\mathbf{V}(\omega )\|$$ is the sup-norm of $$\mathbf{V}(\omega )\in \mathbb{R}^{d}$$. It is obvious that the space $$C(\Omega ,\mathbb{R}^{d})$$ endowed with the above norm constitutes a Banach space. It is also worth mentioning that we have

$$\biggl\Vert \int _{0}^{t}\mathbf{v}(s)\,ds \biggr\Vert \leq \biggl\Vert \biggl[ \int _{0}^{t} \bigl\vert v_{i}(s) \bigr\vert \,ds \biggr]^{T}_{i=1,2,\ldots ,d} \biggr\Vert$$
(4.3)

for $$\mathbf{v}=[v_{1},v_{2},\ldots ,v_{d}]^{T}\in C([0,1],\mathbb{R}^{d})$$ and $$t\in [0,1]$$.

### Definition 4.1

(Lipschitz continuous function)

Let $$\mathbf{F}= [{F}_{1},{F}_{2},\ldots ,{F}_{d} ]^{T}$$ and $$\mathbf{G}= [{G}_{1},{G}_{2},\ldots ,{G}_{d} ]^{T} \in C([0,1] \times \mathbb{R}^{d},\mathbb{R}^{d})$$. These functions are Lipschitz continuous if there exist $$\rho , \sigma \in \mathbb{R}^{+}$$ such that

\begin{aligned} &\bigl\Vert \mathbf{F}(t,V)-\mathbf{F}(t,W) \bigr\Vert \leq \rho \Vert V-W \Vert , \\ &\bigl\Vert \mathbf{G}(t,V)-\mathbf{G}(t,W) \bigr\Vert \leq \sigma \Vert V-W \Vert \quad \forall V, W \in \mathbb{R}^{d}. \end{aligned}
(4.4)

### Remark 2

Fractional system (1.1) can be rewritten as $$\mathcal{A}U=U$$, where the operator $$\mathcal{A}: C([0,1],\mathbb{R}^{d})\rightarrow C([0,1],\mathbb{R}^{d})$$ for each $$U\in C([0,1],\mathbb{R}^{d})$$ and $$t\in [0,1]$$ is defined as follows:

$$\mathcal{A}U(t)=\mathbf{F}\bigl(t,U(t)\bigr)+ \int _{0}^{t}(t-s)^{\alpha (t)-1} \mathbf{K} (t,s ) \mathbf{G} \bigl(s,U(s) \bigr)\,ds.$$
(4.5)

Then solving problem (1.1) is equivalent to obtaining a fixed point $$U\in C([0,1],\mathbb{R}^{d})$$ of the operator $$\mathcal{A}$$.

### Theorem 4.2

Suppose that F and $$\mathbf{G}\in C([0,1]\times \mathbb{R}^{d},\mathbb{R}^{d})$$ satisfy Lipschitz conditions (4.4). If $$\rho +\sigma \|\mathbf{K}\| \Vert \frac{1}{\alpha (t)} \Vert < \frac{1}{2}$$, then system (1.1) has a unique solution.

### Proof

Let $$M_{1}=\|\mathbf{F}(t,0)\|$$ and $$M_{2}=\|\mathbf{G}(s,0)\|$$, and choose $$r \geq 2 (M_{1}+M_{2}\|\mathbf{K}\| \Vert \frac{1}{\alpha (t)} \Vert )$$. We first show that $$\mathcal{A}B_{r}\subset B_{r}$$, where $$B_{r}\equiv \{ V\in C([0,1],\mathbb{R}^{d}): \|V\|\leq r \}$$. So, let $$V\in B_{r}$$. Then we have

\begin{aligned} \bigl\Vert \mathcal{A}V(t) \bigr\Vert &\leq \bigl\Vert \mathbf{F} \bigl(t,V(t)\bigr) \bigr\Vert + \biggl\Vert \biggl[ \int _{0}^{t}(t-s)^{ \alpha _{i}(t)-1} \bigl\vert K_{i} (t,s ) \bigr\vert \bigl\vert {G}_{i} \bigl(s,V(s) \bigr) \bigr\vert \,ds \biggr]_{i=1,2,\ldots ,d}^{T} \biggr\Vert \\ &\leq \bigl\Vert \mathbf{F} \bigl(t,V(t) \bigr)-\mathbf{F} (t,0 ) \bigr\Vert + \bigl\Vert \mathbf{F} (t,0 ) \bigr\Vert \\ &\quad{}+ \biggl\Vert \biggl[ \int _{0}^{t}(t-s)^{\alpha _{i}(t)-1} \bigl\vert K_{i}(t,s) \bigr\vert \bigl( \bigl\vert {G}_{i} \bigl(s,V(s) \bigr)-{G}_{i} (s,0 ) \bigr\vert + \bigl\vert {G}_{i} (s,0 ) \bigr\vert \bigr)\,ds \biggr]_{i=1,2, \ldots ,d}^{T} \biggr\Vert \\ &\leq \rho r+M_{1}+ (\sigma r+M_{2} ) \Vert \mathbf{K} \Vert \biggl\Vert \biggl[ \int _{0}^{t}(t-s)^{\alpha _{i}(t)-1}\,ds \biggr]_{i=1,2, \ldots ,d}^{T} \biggr\Vert \\ &\leq \rho r+M_{1}+ (\sigma r+M_{2} ) \Vert \mathbf{K} \Vert \biggl\Vert \frac{1}{\alpha (t)} \biggr\Vert \\ &= r \biggl(\rho +\sigma \Vert \mathbf{K} \Vert \biggl\Vert \frac{1}{\alpha (t)} \biggr\Vert \biggr) +M_{1}+M_{2} \Vert \mathbf{K} \Vert \biggl\Vert \frac{1}{\alpha (t)} \biggr\Vert < r. \end{aligned}

Now, suppose that $$V, W\in C ([0,1],\mathbb{R}^{d} )$$ and $$t\in [0,1]$$. Then we have

\begin{aligned} &\bigl\Vert \mathcal{A}V(t)-\mathcal{A}W(t) \bigr\Vert \\ &\quad \leq \bigl\Vert \mathbf{F}\bigl(t,V(t)\bigr)-\mathbf{F}\bigl(t,W(t)\bigr) \bigr\Vert \\ &\qquad{}+ \biggl\Vert \biggl[ \int _{0}^{t}(t-s)^{\alpha _{i}(t)-1} \bigl\vert K_{i} (t,s ) \bigr\vert \bigl\vert {G}_{i} \bigl(s,V(s) \bigr)-{G}_{i}\bigl(s,W(s)\bigr) \bigr\vert \,ds \biggr]_{i=1,2,\ldots ,d}^{T} \biggr\Vert \\ &\quad \leq \rho \bigl\Vert V(t)-W(t) \bigr\Vert +\sigma \Vert \mathbf{K} \Vert \bigl\Vert V(t)-W(t) \bigr\Vert \biggl\Vert \biggl[ \int _{0}^{t}(t-s)^{\alpha _{i}(t)-1}\,ds \biggr]_{i=1,2,\ldots ,d}^{T} \biggr\Vert \\ &\quad \leq \biggl(\rho +\sigma \Vert \mathbf{K} \Vert \biggl\Vert \frac{1}{\alpha (t)} \biggr\Vert \biggr) \bigl\Vert V(t)-W(t) \bigr\Vert . \end{aligned}

Since $$\rho +\sigma \|\mathbf{K}\| \Vert \frac{1}{\alpha (t)} \Vert <1$$, using the contraction mapping principle, the desired result is obtained. □

### Remark 3

The Lipschitz hypotheses are automatically satisfied when $$\mathbf{F}$$ and $$\mathbf{K}\mathbf{G}$$ take the following linear forms:

$$\mathbf{F}(t,V)= \begin{pmatrix} {F}_{11}(t)& {F}_{12}(t)&\ldots & {F}_{1(d-1)}(t) &{F}_{1d}(t) \\ {F}_{21}(t)& {F}_{22}(t)&\ldots & {F}_{2(d-1)}(t) &{F}_{2d}(t) \\ \vdots &\vdots &\vdots &\vdots &\vdots \\ {F}_{(d-1)1}(t)& {F}_{(d-1)2}(t)&\ldots & {F}_{(d-1)(d-1)}(t) &{F}_{(d-1)d}(t) \\ {F}_{d1}(t)& {F}_{d2}(t)&\ldots & {F}_{d(d-1)}(t)& {F}_{dd}(t) \end{pmatrix} \begin{pmatrix} v_{1} \\ v_{2} \\ \vdots \\ v_{d-1} \\ v_{d} \end{pmatrix},$$

and

\begin{aligned} &\mathbf{K}(t,s) \mathbf{G}(s,V)\\ &\quad = \begin{pmatrix} {K}_{1}(t,s) {G}_{11}(s)& {K}_{1}(t,s){G}_{12}(s)&\ldots & {K}_{1}(t,s){G}_{1(d-1)}(s) &{K}_{1}(t,s){G}_{1d}(s) \\ {K}_{2}(t,s){G}_{21}(s)& {K}_{2}(t,s){G}_{22}(s)&\ldots & {K}_{2}(t,s){G}_{2(d-1)}(s) &{K}_{2}(t,s){G}_{2d}(s) \\ \vdots &\vdots &\vdots &\vdots &\vdots \\ {K}_{d}(t,s){G}_{d1}(s)& {K}_{d}(t,s){G}_{d2}(s)&\ldots & {K}_{d}(t,s){G}_{d(d-1)}(s)& {K}_{d}(t,s){G}_{dd}(s) \end{pmatrix} \begin{pmatrix} v_{1} \\ v_{2} \\ \vdots \\ v_{d} \end{pmatrix} \end{aligned}

for all $$s,t \in [0,1]$$ and any $$V=[v_{1},v_{2},\ldots ,v_{d}]^{T}\in \mathbb{R}^{d}$$. Here, $${F}_{ij}$$ and $${G}_{ij}: [0,1]\rightarrow \mathbb{R}$$ are continuous maps. Hence, the system of integral equations (1.1) reduces to the linear form

$$u_{i}(t)=\sum_{j=1}^{d} \biggl( {F}_{ij}(t)u_{j}(t)+ \int _{0}^{t}(t-s)^{ \alpha _{i}(t)-1} {K}_{i}(t,s){G}_{ij}(s)u_{j}(s)\,ds \biggr), \quad i=1,2,\ldots ,d,$$

and, by Theorem 4.2, it admits a unique solution.

## The established wavelet method

Herein, a new computational method using CWs is established for solving the V-O fractional system (1.1). We utilize the results of the previous sections to transform fractional system (1.1) into an algebraic system by expressing the functions $$u_{i}(t)$$, $${\tilde{K}}_{i}(t,s)$$ for $$i=1,2,\ldots ,d$$ and $$v(t)=t$$ via CWs as follows:

$$\begin{gathered} u_{i}(t)\simeq U_{i}^{T}\Psi (t),\quad i=1,2,\ldots ,d, \\ {\tilde{K}}_{i}(t,s)= \Gamma { \bigl(\alpha _{i}(t) \bigr)} {K}_{i}(t,s)\simeq \Psi (t)^{T}K_{i} \Psi (s),\quad i=1,2,\ldots ,d, \end{gathered}$$
(5.1)

and

$$v(t)\simeq V^{T}\Psi (t),$$
(5.2)

where $$U_{i}$$ are unknown vectors, $$K_{i}$$ are coefficient matrices for kernels $${K}_{i}$$ for $$i=1,2,\ldots ,d$$, and V is the coefficient vector for $$v(t)$$. By substituting (5.1) and (5.2) into (1.1), we have

\begin{aligned} U_{i}^{T}\Psi (t)&\simeq {F}_{i} \bigl(V^{T}\Psi (t),U_{1}^{T}\Psi (t),U_{2}^{T} \Psi (t),\ldots ,U_{d}^{T} \Psi (t) \bigr) \\ &\quad{}+\frac{1}{\Gamma {(\alpha _{i}(t))}} \int _{0}^{t} (t-s )^{ \alpha _{i}(t)-1} \Psi (t)^{T}K_{i} \Psi (s)\\ &\quad {}\times{G}_{i} \bigl(V^{T}\Psi (s),U_{1}^{T} \Psi (s),U_{2}^{T}\Psi (s),\ldots ,U_{d}^{T} \Psi (s) \bigr)\,ds. \end{aligned}
(5.3)

From Corollary 3.4, we can write Eq. (5.3) for $$i=1,2,\ldots ,d$$ in the form

\begin{aligned} U_{i}^{T}\Psi (t)&\simeq {F}_{i} \bigl(\widetilde{V}^{T}, \widetilde{U}_{1}^{T}, \widetilde{U}_{2}^{T},\ldots ,\widetilde{U}_{d}^{T} \bigr)Q^{-1}\Psi (t) \\ &\quad{}+\Psi (t)^{T}K_{i} \frac{1}{\Gamma {(\alpha _{i}(t))}} \int _{0}^{t} (t-s )^{\alpha _{i}(t)-1}\Psi (s)\Psi (s)^{T}\bigl(Q^{-1}\bigr)^{T}\\ &\quad {}\times{G}_{i} \bigl(\widetilde{V}^{T},\widetilde{U}_{1}^{T}, \widetilde{U}_{2}^{T}, \ldots ,\widetilde{U}_{d}^{T} \bigr)^{T}\,ds. \end{aligned}
(5.4)

Now, by Corollary 3.2, we have

$$\Psi (s)\Psi (s)^{T}\bigl(Q^{-1} \bigr)^{T}{G}_{i} \bigl(\widetilde{V}^{T}, \widetilde{U}_{1}^{T},\widetilde{U}_{2}^{T}, \ldots ,\widetilde{U}_{d}^{T} \bigr)^{T}\simeq \widehat{{G}}_{i}\Psi (s),$$
(5.5)

where $$\widehat{{G}}_{i}$$ are $$\hat{m}\times \hat{m}$$ matrices given as

$$\widehat{{G}}_{i}=Q diag \bigl({G}_{i} \bigl(\widetilde{V}^{T}, \widetilde{U}_{1}^{T}, \widetilde{U}_{2}^{T},\ldots ,\widetilde{U}_{d}^{T} \bigr) \bigr)Q^{-1},$$
(5.6)

for $$i=1,2,\ldots ,d$$. By substituting (5.5) into (5.4) and employing the fractional integration matrix for CWs, we get

\begin{aligned}[b] U_{i}^{T}\Psi (t)&\simeq {F}_{i} \bigl( \widetilde{V}^{T},\widetilde{U}_{1}^{T}, \widetilde{U}_{2}^{T},\ldots ,\widetilde{U}_{d}^{T} \bigr)Q^{-1} \Psi (t)\\ &\quad {}+\Psi (t)^{T}K_{i} \widehat{{G}}_{i}\mathbf{P}^{(\alpha _{i})} \Psi (t),\quad i=1,2,\ldots ,d. \end{aligned}
(5.7)

By Corollary 3.3, we have

$$U_{i}^{T}\Psi (t)\simeq {F}_{i} \bigl(\widetilde{V}^{T},\widetilde{U}_{1}^{T}, \widetilde{U}_{2}^{T},\ldots ,\widetilde{U}_{d}^{T} \bigr)Q^{-1} \Psi (t)+\widehat{K}_{i}^{T}\Psi (t), \quad i=1,2,\ldots ,d,$$
(5.8)

where $$\widehat{K}_{i}^{T}=B_{i}^{T}Q^{-1}$$ and $$B_{i}$$ are $$\hat{m}$$-column vectors given as

$$B_{i}=diag \bigl(Q^{T}K_{i} \widehat{{G}}_{i}\mathbf{P}^{(\alpha _{i})}Q \bigr)$$
(5.9)

for $$i=1,2,\ldots ,d$$. So, the residual functions $$\mathbf{R}_{i}(t)$$ for fractional system (1.1) can be written as follows:

$$\mathbf{R}_{i}(t)= \bigl(U_{i}^{T}-{F}_{i} \bigl(\widetilde{V}^{T}, \widetilde{U}_{1}^{T}, \widetilde{U}_{2}^{T},\ldots ,\widetilde{U}_{d}^{T} \bigr)Q^{-1}-\widehat{K}_{i}^{T} \bigr)\Psi (t), \quad 1\leq i \leq d.$$
(5.10)

Similar to the typical Galerkin method, m̂d nonlinear algebraic equations are generated as follows:

$$\bigl(\mathbf{R}_{i}(t),\psi _{j}(t) \bigr)= \int _{0}^{1}\mathbf{R}_{i}(t) \psi _{j}(t)w_{n}(t)\,dt=0,\quad 1\leq i\leq d, 1\leq j \leq \hat{m},$$
(5.11)

where the index j is computed as $$j=Mn+m+1$$ and $$\psi _{j}(t)=\psi _{nm}(t)$$.

### Remark 4

Note that a set of m̂d nonlinear algebraic equations is generated by Eq. (5.11) as follows:

$$U_{i}^{T}-{F}_{i} \bigl(\widetilde{V}^{T}, \widetilde{U}_{1}^{T}, \widetilde{U}_{2}^{T}, \ldots ,\widetilde{U}_{d}^{T} \bigr)Q^{-1}- \widehat{K}_{i}^{T}=0,\quad 1\leq i\leq d.$$
(5.12)

Finally, we get a wavelet solution for the original system (1.1) using (5.1) by solving the above system for the vectors $$U_{i}$$, $$i=1,2,\ldots ,d$$.

### Remark 5

After simplification we can compute the vectors $$B_{i}$$ in (5.9) as follows:

$$B_{i}=diag \bigl(Q^{T}K_{i}Q diag \bigl({G}_{i} \bigl(\widetilde{V}^{T}, \widetilde{U}_{1}^{T}, \widetilde{U}_{2}^{T},\ldots ,\widetilde{U}_{d}^{T} \bigr) \bigr)\hat{\mathbf{P}}^{(\alpha _{i})} \bigr),$$
(5.13)

where $$\hat{\mathbf{P}}^{(\alpha _{i})}$$, $$i=1,2,\ldots ,d$$, are defined in (2.14). Moreover, by Lemma 3.1 and Eqs. (2.9) and (2.10), we have

$$Q^{T}K_{l}Q\simeq \Lambda _{l}, \quad l=1,2,\ldots ,d,$$
(5.14)

where

$$(\Lambda _{l})_{ij}={K}_{l}\bigl((i-1)h,(j-1)h\bigr),\quad 1 \leq i,j\leq \hat{m}.$$
(5.15)

Hence, with Eqs. (5.13) and (5.14), we can compute the vectors $$B_{i}$$, $$i=1,2,\ldots ,d$$, as follows:

$$B_{i}=diag \bigl(\Lambda _{i} diag \bigl({G}_{i} \bigl(\widetilde{V}^{T}, \widetilde{U}_{1}^{T}, \widetilde{U}_{2}^{T},\ldots ,\widetilde{U}_{d}^{T} \bigr) \bigr)\hat{\mathbf{P}}^{(\alpha _{i})} \bigr).$$
(5.16)
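The passage from (5.9) to (5.13) is pure matrix algebra: substituting $$\widehat{{G}}_{i}=Q\,diag (\cdot )Q^{-1}$$ from (5.6) and $$\mathbf{P}^{(\alpha _{i})}=Q\hat{\mathbf{P}}^{(\alpha _{i})}Q^{-1}$$ from Theorem 3.5 into (5.9) cancels the inner $$Q^{-1}Q$$ factors. A quick sketch with random placeholder matrices (all names and data are illustrative stand-ins, not the method's actual matrices) confirms the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 8
Q = rng.normal(size=(m, m)) + 4.0 * np.eye(m)   # stands in for the invertible CW matrix Q
Qinv = np.linalg.inv(Q)
K = rng.normal(size=(m, m))                     # stands in for a kernel coefficient matrix K_i
g = rng.normal(size=m)                          # stands in for the row G_i(V~, U~_1, ..., U~_d)
P_hat = np.triu(rng.normal(size=(m, m)))        # stands in for the HF matrix P_hat^(alpha_i)

G_hat = Q @ np.diag(g) @ Qinv                   # (5.6)
P = Q @ P_hat @ Qinv                            # Theorem 3.5
B_direct = np.diag(Q.T @ K @ G_hat @ P @ Q)     # (5.9)
B_short = np.diag(Q.T @ K @ Q @ np.diag(g) @ P_hat)  # (5.13)
assert np.allclose(B_direct, B_short)
```

The shortcut (5.16) then replaces $$Q^{T}K_{i}Q$$ by the nodal samples $$\Lambda _{i}$$ of the kernel, avoiding one pair of matrix products.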

## Analysis of convergence

The convergence of CWs is analyzed in this section.

### Definition 6.1


Let $$\bar{m}\geq 0$$ be an integer, w a weight function, and $$(a, b)$$ a bounded interval. The Sobolev space $$H_{w}^{\bar{m}}(a,b)$$ is defined by

$$H_{w}^{\bar{m}}(a,b)= \bigl\{ u\in L_{w}^{2}(a,b): u^{(j)}(x)\in L^{2}_{w}(a, b), 0\leq j\leq \bar{m} \bigr\} .$$
(6.1)

### Remark 6

The weighted inner product in the Sobolev space $$H_{w}^{\bar{m}}(a,b)$$ is given as

$$\langle u,v\rangle_{\bar{m},w}=\sum_{j=0}^{\bar{m}} \int _{a}^{b}u^{(j)}(x)v^{(j)}(x)w(x) \,dx,$$
(6.2)

for which $$H_{w}^{\bar{m}}(a,b)$$ is a Hilbert space with the norm

$$\Vert u \Vert _{H^{\bar{m}}_{w}(a,b)}= \Biggl(\sum _{j=0}^{\bar{m}} \bigl\Vert u^{(j)} \bigr\Vert ^{2}_{L^{2}_{w}(a,b)} \Biggr)^{1/2}.$$
(6.3)

### Remark 7

For convenience, we introduce the semi-norm

$$\vert u \vert _{H^{\bar{m};N}_{w}(a,b)}= \Biggl(\sum _{j=\min (\bar{m},N+1)}^{ \bar{m}} \bigl\Vert u^{(j)} \bigr\Vert ^{2}_{L^{2}_{w}(a,b)} \Biggr)^{1/2},$$
(6.4)

since only some of the $$L^{2}_{w}$$-norms appearing in (6.3) come into play when we bound the approximation error.

### Remark 8

Note that $$|u|_{H^{\bar{m};N}_{w}(a,b)}\leq \|u\|_{H^{\bar{m}}_{w}(a,b)}$$, and that

$$\vert u \vert _{H^{\bar{m};N}_{w}(a,b)}= \bigl\Vert u^{(\bar{m})} \bigr\Vert _{L^{2}_{w}(a,b)}= \vert u \vert _{H^{ \bar{m}}_{w}(a,b)}$$
(6.5)

for $$\bar{m}\leq N+1$$.

### Remark 9

The Sobolev spaces $$H_{w}^{\bar{m}}(a, b)$$ form a hierarchy of Hilbert spaces, since $$\cdots \subset H_{w}^{\bar{m}+1}(a, b)\subset H_{w}^{\bar{m}}(a, b) \subset \cdots \subset H_{w}^{0}(a, b)\equiv L_{w}^{2}(a, b)$$.

### Definition 6.2

The truncated Chebyshev series for a function $$u\in L^{2}_{w}(-1,1)$$ equals

$$P_{N}u=\sum_{j=0}^{N} \hat{u}_{j}T_{j}(x),$$

where

$$\begin{gathered} \hat{u}_{j}= \langle u,T_{j} \rangle _{w}/ \langle T_{j},T_{j} \rangle _{w}= \frac{2}{\pi \gamma _{j}} \int _{-1}^{1}u(x)T_{j}(x)w(x)\,dx, \\ w(x)=1/\sqrt{1-x^{2}},\qquad \gamma _{j}= \textstyle\begin{cases} 2,&j=0, \\ 1,&j\geq 1. \end{cases}\displaystyle \end{gathered}$$
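
As a quick numerical illustration of Definition 6.2 (our own sketch, not part of the authors' Maple implementation), the coefficients $$\hat{u}_{j}$$ can be approximated by Gauss–Chebyshev quadrature, whose quadrature weight $$\pi /n$$ absorbs $$w(x)=1/\sqrt{1-x^{2}}$$; the helper names and the test function $$u(x)=e^{x}$$ are illustrative choices:

```python
import numpy as np

def chebyshev_coeffs(u, N, n_quad=200):
    """Coefficients u_hat_j of the truncated series P_N u, via
    Gauss-Chebyshev quadrature for the weighted inner products."""
    i = np.arange(n_quad)
    x = np.cos((2 * i + 1) * np.pi / (2 * n_quad))   # Gauss-Chebyshev nodes
    j = np.arange(N + 1)
    T = np.cos(np.outer(j, np.arccos(x)))            # T_j(x) = cos(j arccos x)
    gamma = np.where(j == 0, 2.0, 1.0)
    # (2/(pi*gamma_j)) * integral u T_j w dx  ~  (2/(gamma_j*n)) * sum u(x_i) T_j(x_i)
    return (2.0 / (gamma * n_quad)) * (T @ u(x))

def eval_series(coeffs, x):
    """Evaluate P_N u at points x in [-1,1]."""
    j = np.arange(len(coeffs))
    return np.cos(np.outer(np.arccos(x), j)) @ coeffs

coeffs = chebyshev_coeffs(np.exp, N=10)
xs = np.linspace(-0.99, 0.99, 101)
err = np.max(np.abs(eval_series(coeffs, xs) - np.exp(xs)))
print(err)   # spectral accuracy for the analytic function e^x
```

For an analytic function such as $$e^{x}$$, the truncation error is already far below single precision at $$N=10$$.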

### Lemma 6.3

Let $$u\in H_{w}^{\bar{m}}(-1,1)$$, $$w(x)=1/\sqrt{1-x^{2}}$$, and $$P_{N}u=\sum_{j=0}^{N}\hat{u}_{j}T_{j}(x)$$. Then we have

$$\Vert u-P_{N}u \Vert _{L^{2}_{w}(-1,1)}\leq \bar{C}N^{-\bar{m}} \vert u \vert _{H^{\bar{m};N}_{w}(-1,1)},$$
(6.6)

where $$\bar{C}$$ is a constant which depends on $$\bar{m}$$ and is independent of N. In addition, it yields

$$\Vert u-P_{N}u \Vert _{L^{\infty }(-1,1)}\leq \hat{C} \bigl(1+\log (N)\bigr)N^{-\bar{m}} \sum_{j=\min (\bar{m},N+1)}^{\bar{m}} \bigl\Vert u^{(j)} \bigr\Vert _{L^{ \infty }(-1,1)}$$
(6.7)

in the infinity norm, in which $$\|u\|_{L^{\infty }(-1,1)}=\sup_{-1\leq x\leq 1}|u(x)|$$ and Ĉ is a constant which depends on $$\bar{m}$$ and is independent of N.
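
The algebraic rate in (6.6) can be observed numerically. The sketch below (our own illustration; helper names are not from the paper) measures $$\|u-P_{N}u\|_{L^{2}_{w}(-1,1)}$$ for $$u(x)=|x|^{3}$$, whose third derivative has a jump at 0, so only a finite algebraic rate of decay is expected:

```python
import numpy as np

def proj_error(u, N, n=400):
    """Weighted L2 error ||u - P_N u||_{L^2_w(-1,1)}, with both the
    Chebyshev coefficients and the error integral computed by
    Gauss-Chebyshev quadrature."""
    i = np.arange(n)
    x = np.cos((2 * i + 1) * np.pi / (2 * n))
    theta = np.arccos(x)
    j = np.arange(N + 1)
    T = np.cos(np.outer(j, theta))               # T_j at the nodes
    gamma = np.where(j == 0, 2.0, 1.0)
    coeffs = (2.0 / (gamma * n)) * (T @ u(x))    # coefficients of P_N u
    resid = u(x) - coeffs @ T                    # (u - P_N u)(x_i)
    return np.sqrt((np.pi / n) * np.sum(resid**2))

u = lambda x: np.abs(x)**3                       # u''' jumps at x = 0
errs = [proj_error(u, N) for N in (8, 16, 32)]
rates = [np.log2(errs[k] / errs[k + 1]) for k in range(2)]
print(errs, rates)   # errors decay algebraically, at a finite observed order
```

Doubling N shrinks the error by a roughly constant factor, consistent with an $$N^{-\bar{m}}$$-type bound for a function of limited weighted Sobolev smoothness.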

### Theorem 6.4

Suppose $$u\in H_{w^{\ast }}^{\bar{m}}(a,b)$$, $$w^{\ast }(t)=w (\frac{2}{b-a} t-\frac{b+a}{b-a} )$$ and $$P_{N}^{\ast }u=\sum_{j=0}^{N}\hat{u}^{\ast }_{j} T^{\ast }_{j}(t)$$, where $$T^{\ast }_{j}(t)=T_{j} (\frac{2}{b-a} t-\frac{b+a}{b-a} )$$ and

$$\hat{u}^{\ast }_{j}= \bigl\langle u,T^{\ast }_{j} \bigr\rangle _{w^{\ast }}/ \bigl\langle T^{ \ast }_{j},T^{\ast }_{j} \bigr\rangle _{w^{\ast }}= \frac{4}{(b-a)\pi \gamma _{j}} \int _{a}^{b}u(t)T^{\ast }_{j}(t)w^{\ast }(t) \,dt.$$

Then we have

$$\bigl\Vert u-P_{N}^{\ast }u \bigr\Vert _{L^{2}_{w^{\ast }}(a,b)} \leq \bar{C}N^{-\bar{m}}|\!|\!|u|\!|\!|_{H^{\bar{m};N}_{w^{\ast }}(a,b)},$$
(6.8)

where

$$|\!|\!|u|\!|\!|_{H^{\bar{m};N}_{w^{\ast }}(a,b)}= \Biggl(\sum_{j=\min (\bar{m},N+1)}^{ \bar{m}} \biggl(\frac{b-a}{2} \biggr)^{2j} \bigl\Vert u^{(j)} \bigr\Vert ^{2}_{L^{2}_{w^{ \ast }}(a,b)} \Biggr)^{1/2}.$$

Moreover, we have

$$\bigl\Vert u-P_{N}^{\ast }u \bigr\Vert _{L^{\infty }(a,b)} \leq \hat{C} \bigl(1+\log (N)\bigr)N^{- \bar{m}}\sum _{j=\min (\bar{m},N+1)}^{\bar{m}} \biggl(\frac{b-a}{2} \biggr)^{j} \bigl\Vert u^{(j)} \bigr\Vert _{L^{\infty }(a,b)},$$
(6.9)

where $$\|u\|_{L^{\infty }(a,b)}=\sup_{a\leq t\leq b}|u(t)|$$.

### Proof

It is the case that

$$\bigl\Vert u-P_{N}^{\ast }u \bigr\Vert _{L^{2}_{w^{\ast }}(a,b)}= \Biggl( \int _{a}^{b} \Biggl( u(t)-\sum _{j=0}^{N}\hat{u}^{\ast }_{j} T^{\ast }_{j}(t) \Biggr)^{2}w^{\ast }(t)\,dt \Biggr)^{1/2}.$$

By considering $$t=\frac{b-a}{2} x+\frac{b+a}{2}$$ and $$dt=\frac{b-a}{2} \,dx$$ as change of variables, we get

\begin{aligned}[b] &\bigl\Vert u-P_{N}^{\ast }u \bigr\Vert _{L^{2}_{w^{\ast }}(a,b)}\\ &\quad =\sqrt{\frac{b-a}{2}} \Biggl( \int _{-1}^{1} \Biggl( u \biggl(\frac{b-a}{2} x+\frac{b+a}{2} \biggr)-\sum_{j=0}^{N} \hat{\mathbf{u}}_{j}T_{j}(x) \Biggr)^{2}w(x)\,dx \Biggr)^{1/2}, \end{aligned}
(6.10)

where $$\hat{\mathbf{u}}_{j}= \langle u (\frac{b-a}{2} x+\frac{b+a}{2} ),T_{j}(x) \rangle _{w}/ \langle T_{j}(x),T_{j}(x) \rangle _{w}$$. Letting $$v(x)=u (\frac{b-a}{2} x+\frac{b+a}{2} )$$ yields

$$\Biggl( \int _{-1}^{1} \Biggl( u \biggl(\frac{b-a}{2} x+\frac{b+a}{2} \biggr)-\sum_{j=0}^{N} \hat{\mathbf{u}}_{j}T_{j}(x) \Biggr)^{2}w(x)\,dx \Biggr)^{1/2}= \Vert v-P_{N}v \Vert _{L^{2}_{w}(-1,1)}.$$
(6.11)

By (6.6), we obtain

$$\Vert v-P_{N}v \Vert _{L^{2}_{w}(-1,1)} \leq \bar{C}N^{-\bar{m}} \vert v \vert _{H^{ \bar{m};N}_{w}(-1,1)},$$
(6.12)

where

$$\vert v \vert _{H^{\bar{m};N}_{w}(-1,1)}= \Biggl(\sum _{j=\min (\bar{m},N+1)}^{ \bar{m}} \bigl\Vert v^{(j)} \bigr\Vert ^{2}_{L^{2}_{w}(-1,1)} \Biggr)^{1/2}.$$
(6.13)

Meanwhile, we have

$$\bigl\Vert v^{(j)} \bigr\Vert ^{2}_{L^{2}_{w}(-1,1)}= \int _{-1}^{1} \bigl(v^{(j)}(x) \bigr)^{2}w(x)\,dx= \int _{-1}^{1} \biggl( \biggl(\frac{b-a}{2} \biggr)^{j}u^{(j)} \biggl(\frac{b-a}{2} x+ \frac{b+a}{2} \biggr) \biggr)^{2}w(x)\,dx.$$

By considering $$t=\frac{b-a}{2} x+\frac{b+a}{2}$$ and $$\frac{b-a}{2} \,dx=dt$$ as change of variables, we obtain

\begin{aligned}[b] \bigl\Vert v^{(j)} \bigr\Vert ^{2}_{L^{2}_{w}(-1,1)}&= \frac{2}{b-a} \biggl( \frac{b-a}{2} \biggr)^{2j} \int _{a}^{b} \bigl(u^{(j)}(t) \bigr)^{2}w^{ \ast }(t)\,dt\\ &=\frac{2}{b-a} \biggl( \frac{b-a}{2} \biggr)^{2j} \bigl\Vert u^{(j)} \bigr\Vert ^{2}_{L^{2}_{w^{\ast }}(a,b)}. \end{aligned}
(6.14)

Hence, (6.4) and (6.14) yield

$$\vert v \vert _{H^{\bar{m};N}_{w}(-1,1)}=\sqrt{\frac{2}{b-a}} \Biggl(\sum_{j= \min (\bar{m},N+1 )}^{\bar{m}} \biggl( \frac{b-a}{2} \biggr)^{2j} \bigl\Vert u^{(j)} \bigr\Vert ^{2}_{L^{2}_{w^{\ast }}(a,b)} \Biggr)^{1/2},$$
(6.15)

and (6.10)–(6.15) yield

$$\bigl\Vert u-P_{N}^{\ast }u \bigr\Vert _{L^{2}_{w^{\ast }}(a,b)} \leq \bar{C}N^{-\bar{m}} \Biggl(\sum_{j=\min (\bar{m},N+1 )}^{\bar{m}} \biggl( \frac{b-a}{2} \biggr)^{2j} \bigl\Vert u^{(j)} \bigr\Vert ^{2}_{L^{2}_{w^{ \ast }}(a,b)} \Biggr)^{1/2}.$$

The following equality holds for the infinity norm:

$$\bigl\Vert u-P_{N}^{\ast }u \bigr\Vert _{L^{\infty }(a,b)}=\sup_{a\leq t\leq b} \bigl\vert u(t)-P_{N}^{ \ast }u(t) \bigr\vert =\sup_{a\leq t\leq b} \Biggl\vert u(t)-\sum _{j=0}^{N} \hat{u}^{\ast }_{j} T^{\ast }_{j}(t) \Biggr\vert .$$
(6.16)

By considering $$t=\frac{b-a}{2} x+\frac{b+a}{2}$$ as change of variables, we obtain

$$\sup_{a\leq t\leq b} \Biggl\vert u(t)-\sum _{j=0}^{N}\hat{u}^{\ast }_{j} T^{ \ast }_{j}(t) \Biggr\vert = \sup_{-1\leq x\leq 1} \Biggl\vert u \biggl( \frac{b-a}{2} x+\frac{b+a}{2} \biggr)-\sum _{j=0}^{N} \hat{\mathbf{u}}_{j} T_{j}(x) \Biggr\vert .$$
(6.17)

Letting $$v(x)=u (\frac{b-a}{2} x+\frac{b+a}{2} )$$, one has

\begin{aligned}[b] \sup_{-1\leq x\leq 1} \Biggl\vert u \biggl(\frac{b-a}{2} x+ \frac{b+a}{2} \biggr)-\sum_{j=0}^{N} \hat{\mathbf{u}}_{j} T_{j}(x) \Biggr\vert &=\sup _{-1 \leq x\leq 1} \Biggl\vert v(x)-\sum_{j=0}^{N} \hat{v}_{j}T_{j}(x) \Biggr\vert \\ & = \Vert v-P_{N}v \Vert _{L^{\infty }(-1,1)}. \end{aligned}
(6.18)

Considering (6.7) we have

$$\Vert v-P_{N}v \Vert _{L^{\infty }(-1,1)}\leq \hat{C} \bigl(1+\log (N)\bigr)N^{-\bar{m}} \sum_{j=\min (\bar{m},N+1)}^{\bar{m}} \bigl\Vert v^{(j)} \bigr\Vert _{L^{ \infty }(-1,1)}.$$
(6.19)

Furthermore, we have

$$\bigl\Vert v^{(j)} \bigr\Vert _{L^{\infty }(-1,1)}= \biggl( \frac{b-a}{2} \biggr)^{j} \bigl\Vert u^{(j)} \bigr\Vert _{L^{\infty }(a,b)}.$$
(6.20)

Thus

$$\Vert v-P_{N}v \Vert _{L^{\infty }(-1,1)}\leq \hat{C} \bigl(1+\log (N)\bigr)N^{-\bar{m}} \sum_{j=\min (\bar{m},N+1)}^{\bar{m}} \biggl(\frac{b-a}{2} \biggr)^{j} \bigl\Vert u^{(j)} \bigr\Vert _{L^{\infty }(a,b)}.$$
(6.21)

Finally, (6.16)–(6.21) result in

$$\bigl\Vert u-P_{N}^{\ast }u \bigr\Vert _{L^{\infty }(a,b)} \leq \hat{C} \bigl(1+\log (N)\bigr)N^{- \bar{m}}\sum _{j=\min (\bar{m},N+1)}^{\bar{m}} \biggl(\frac{b-a}{2} \biggr)^{j} \bigl\Vert u^{(j)} \bigr\Vert _{L^{\infty }(a,b)},$$

which completes the proof. □

### Corollary 6.5

Given the satisfaction of assumptions in Theorem 6.4, the following rates of convergence hold:

\begin{aligned} &\bigl\Vert u-P_{N}^{\ast }u \bigr\Vert _{L^{2}_{w^{\ast }}(a,b)} \longrightarrow 0 \quad \textit{as } N\longrightarrow \infty , \textit{ with rate } \mathcal{O} \bigl(N^{- \bar{m}} \bigr), \\ &\bigl\Vert u-P_{N}^{\ast }u \bigr\Vert _{L^{\infty }(a,b)}\longrightarrow 0 \quad \textit{as } N\longrightarrow \infty , \textit{ with rate } \mathcal{O} \bigl(\bigl(1+\log (N)\bigr)N^{-\bar{m}} \bigr). \end{aligned}
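
The change of variables at the heart of Theorem 6.4 can be exercised directly: project $$v(x)=u (\frac{b-a}{2} x+\frac{b+a}{2} )$$ on $$[-1,1]$$ and map back. The sketch below (our own illustration, with a smooth test function $$u=\sin$$ on $$[0,1]$$) shows the fast convergence predicted for smooth functions:

```python
import numpy as np

def shifted_cheb_series(u, N, a, b, n=200):
    """Coefficients of P_N^* u on [a,b]: project v(x) = u((b-a)/2 x + (b+a)/2)
    onto T_0..T_N on [-1,1], i.e. the change of variables used in the proof."""
    i = np.arange(n)
    x = np.cos((2 * i + 1) * np.pi / (2 * n))    # Gauss-Chebyshev nodes
    t = 0.5 * (b - a) * x + 0.5 * (b + a)        # mapped back to [a,b]
    j = np.arange(N + 1)
    T = np.cos(np.outer(j, np.arccos(x)))
    gamma = np.where(j == 0, 2.0, 1.0)
    return (2.0 / (gamma * n)) * (T @ u(t))

def eval_shifted(coeffs, t, a, b):
    """Evaluate the shifted series at points t in [a,b]."""
    x = np.clip((2 * t - (b + a)) / (b - a), -1.0, 1.0)
    j = np.arange(len(coeffs))
    return np.cos(np.outer(np.arccos(x), j)) @ coeffs

a, b = 0.0, 1.0
c = shifted_cheb_series(np.sin, 8, a, b)
ts = np.linspace(a, b, 201)
sup_err = np.max(np.abs(eval_shifted(c, ts, a, b) - np.sin(ts)))
print(sup_err)   # smooth u: error decays faster than any fixed power of N
```

The $$((b-a)/2)^{j}$$ factors in the theorem appear here implicitly: shrinking the interval scales the derivatives of v and hence tightens the bound.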

### Theorem 6.6

Let $$u\in H_{\widetilde{w}}^{\bar{m}}(0,1)$$, $$\widetilde{w}(t)=w (2t-1 )$$ and $$P_{M,k}u=\sum_{n=0}^{2^{k}-1}\sum_{m=0}^{M-1}c_{nm} \psi _{nm}(t)$$. Then we have

$$\Vert u-P_{M,k}u \Vert _{L^{2}_{\widetilde{w}}(0,1)}\leq \bar{C}(M-1)^{- \bar{m}} \Biggl(\sum_{n=0}^{2^{k}-1}\sum _{j=\min (\bar{m},M)}^{ \bar{m}} \biggl(\frac{1}{2^{k+1}} \biggr)^{2j} \bigl\Vert u^{(j)} \bigr\Vert ^{2}_{L^{2}_{w_{n}} (I_{nk} )} \Biggr)^{1/2},$$
(6.22)

where $$I_{nk}= [\frac{n}{2^{k}},\frac{n+1}{2^{k}} ]$$ and $$w_{n}(t)$$ is defined in (3.3). Moreover, we have

\begin{aligned}[b] &\Vert u-P_{M,k}u \Vert _{L^{\infty }(0,1)}\\ &\quad \leq \hat{C} \bigl(1+\log (M-1)\bigr) (M-1)^{- \bar{m}}\sum_{j=\min (\bar{m},M)}^{\bar{m}} \biggl(\frac{1}{2^{k+1}} \biggr)^{j} \bigl\Vert u^{(j)} \bigr\Vert _{L^{\infty } (0,1 )}. \end{aligned}
(6.23)

### Proof

We have

\begin{aligned}[b] \Vert u-P_{M,k}u \Vert _{L^{2}_{\widetilde{w}}(0,1)}&= \biggl( \int _{0}^{1} \bigl\vert u(t)-P_{M,k}u(t) \bigr\vert ^{2} \widetilde{w}(t)\,dt \biggr)^{1/2}\\ &= \Biggl( \sum_{n=0}^{2^{k}-1} \int _{I_{nk}} \bigl\vert u(t)-P_{M-1}^{ \ast }u(t) \bigr\vert ^{2}w_{n}(t)\,dt \Biggr)^{1/2}, \end{aligned}
(6.24)

where $$P_{M-1}^{\ast }u=\sum_{j=0}^{M-1}\hat{u}^{\ast }_{j} T^{\ast }_{j}(t)$$, $$T^{\ast }_{j}(t)=T_{j} (2^{k+1}t-2n-1 )$$ and

$$\hat{u}^{\ast }_{j}= \bigl\langle u,T^{\ast }_{j} \bigr\rangle _{w_{n}}/ \bigl\langle T^{ \ast }_{j},T^{\ast }_{j} \bigr\rangle _{w_{n}}= \frac{2^{k+2}}{\pi \gamma _{j}} \int _{I_{nk}}u(t)T^{\ast }_{j}(t)w_{n}(t) \,dt.$$

From (6.24) and Theorem 6.4 we get

\begin{aligned} \Vert u-P_{M,k}u \Vert _{L^{2}_{\widetilde{w}}(0,1)}&= \Biggl(\sum _{n=0}^{2^{k}-1} \bigl\Vert u-P_{M-1}^{\ast }u \bigr\Vert _{L^{2}_{w_{n}} (I_{nk} )}^{2} \Biggr)^{1/2} \\ &\leq \bar{C}(M-1)^{-\bar{m}} \Biggl(\sum_{n=0}^{2^{k}-1} \sum_{j= \min (\bar{m},M)}^{\bar{m}} \biggl(\frac{1}{2^{k+1}} \biggr)^{2j} \bigl\Vert u^{(j)} \bigr\Vert ^{2}_{L^{2}_{w_{n}} (I_{nk} )} \Biggr)^{1/2}. \end{aligned}

Moreover, we have

$$\Vert u-P_{M,k}u \Vert _{L^{\infty }(0,1)}=\max_{n=0,1,\ldots ,2^{k}-1} \bigl\Vert u-P^{ \ast }_{M-1}u \bigr\Vert _{L^{\infty } (I_{nk} )}$$

for the maximum norm. Considering Theorem 6.4, we obtain

$$\bigl\Vert u-P_{M-1}^{\ast }u \bigr\Vert _{L^{\infty } (I_{nk} )} \leq \hat{C} \bigl(1+ \log (M-1)\bigr) (M-1)^{-\bar{m}}\sum _{j=\min (\bar{m},M)}^{\bar{m}} \biggl( \frac{1}{2^{k+1}} \biggr)^{j} \bigl\Vert u^{(j)} \bigr\Vert _{L^{\infty } (I_{nk} )},$$

and since $$\Vert u^{(j)} \Vert _{L^{\infty } (0,1 )}= \max_{n=0,1,\ldots ,2^{k}-1} \Vert u^{(j)} \Vert _{L^{\infty } (I_{nk} )}$$, the proof is completed. □
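
A minimal sketch of the piecewise approximation $$P_{M,k}u$$ of Theorem 6.6 (our own illustration, not the paper's operational-matrix solver): on each subinterval $$I_{nk}$$ we build the degree-$$(M-1)$$ shifted Chebyshev projection, and refining k shrinks the error through the $$(1/2^{k+1})^{j}$$ factors. The test function $$t^{3/2}$$ mirrors the nonsmooth solutions of Example 4:

```python
import numpy as np

def cw_project(u, M, k, n=100):
    """Piecewise projection P_{M,k}u on [0,1]: a degree-(M-1) shifted
    Chebyshev series on each I_nk = [n/2^k, (n+1)/2^k]."""
    pieces = []
    for piece_idx in range(2**k):
        a, b = piece_idx / 2**k, (piece_idx + 1) / 2**k
        i = np.arange(n)
        x = np.cos((2 * i + 1) * np.pi / (2 * n))
        t = 0.5 * (b - a) * x + 0.5 * (b + a)
        j = np.arange(M)
        T = np.cos(np.outer(j, np.arccos(x)))
        gamma = np.where(j == 0, 2.0, 1.0)
        pieces.append(((a, b), (2.0 / (gamma * n)) * (T @ u(t))))
    return pieces

def cw_sup_error(u, pieces, samples=50):
    """Sampled sup-norm error of the piecewise approximation."""
    err = 0.0
    for (a, b), c in pieces:
        t = np.linspace(a, b, samples)
        x = np.clip((2 * t - (b + a)) / (b - a), -1.0, 1.0)
        j = np.arange(len(c))
        err = max(err, np.max(np.abs(np.cos(np.outer(np.arccos(x), j)) @ c - u(t))))
    return err

u = lambda t: t**1.5                  # limited smoothness at t = 0
errs = [cw_sup_error(u, cw_project(u, M=6, k=k)) for k in (0, 1, 2)]
print(errs)   # each refinement of k shrinks the error markedly
```

This matches the qualitative message of (6.22)–(6.23): accuracy can be raised either by increasing M or by refining the dilation level k.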

## Illustrative test problems

In this section, a few examples of V-O fractional integral equations are given to show the reliability of the presented algorithm. Note that the maximum absolute errors are reported as

$$e(u_{i})=\max_{t\in [0,1]} \bigl\vert u_{i}(t)-U_{i}^{T}\Psi (t) \bigr\vert , \quad i=1,2,\ldots ,d.$$
(7.1)

All the simulations are carried out via Maple 18 with 20 decimal digits.

### Example 1

Consider fractional system (1.1) where

\begin{aligned} & {F}_{1} \bigl(t,u_{1}(t),u_{2}(t) \bigr)=\cos ( t ) -\sin ( t ) \bigl( -3-{t}^{2}\cos ( t ) +2 \cos ( t ) +2 t\sin ( t ) \bigr)-u_{2}(t), \\ & {F}_{2} \bigl(t,u_{1}(t),u_{2}(t) \bigr)=\cos ( t ) -\sin ( t ) - \frac{1}{2} t\sin ( t ) +u_{1}(t), \\ & {K}_{1} (t,s )= \frac{s^{2}\sin (t)}{\Gamma { (\alpha _{1}(t) )}}, \\ & {K}_{2} (t,s )= \frac{\sin (t-s)}{\Gamma { (\alpha _{2}(t) )}}, \\ & {G}_{1} \bigl(s,u_{1}(s),u_{2}(s) \bigr)=u_{1}(s), \\ & {G}_{2} \bigl(s,u_{1}(s),u_{2}(s) \bigr)=u_{2}(s). \end{aligned}

The exact solution of this linear system when $$\alpha _{1}(t)=\alpha _{2}(t)=1$$ is $$\langle u_{1}(t),u_{2}(t) \rangle = \langle \sin (t), \cos (t) \rangle$$. We have used the proposed method to solve this problem with $$\hat{m}=40$$ ($$k=2$$ and $$M=10$$) for the following choices of $$\alpha _{1}(t)$$ and $$\alpha _{2}(t)$$:

$$\begin{gathered} (\boldsymbol{A})\quad \textstyle\begin{cases} \alpha _{1}(t)=1, \\ \alpha _{2}(t)=1, \end{cases}\displaystyle \\ (\boldsymbol{B}) \quad \textstyle\begin{cases} \alpha _{1}(t)=0.65+0.2\sin (10 t), \\ \alpha _{2}(t)=0.65+0.2\sin (50 t), \end{cases}\displaystyle \\ ( \boldsymbol{C}) \quad \textstyle\begin{cases} \alpha _{1}(t)=2-0.3 \vert 2t-1 \vert \cos (t), \\ \alpha _{2}(t)=2-0.3 \vert 2t-1 \vert \cos (20t), \end{cases}\displaystyle \\ (\boldsymbol{D}) \quad \textstyle\begin{cases} \alpha _{1}(t)=1.0+0.4 \vert 2t-1 \vert \sin (100t), \\ \alpha _{2}(t)=1.0+0.6 \vert 2t-1 \vert \sin (150t). \end{cases}\displaystyle \end{gathered}$$
(7.2)

Figure 1 demonstrates the behavior of the numerical solutions for these cases. It should be noted that the analytic solution is available only for $$\alpha _{1}(t)=\alpha _{2}(t)=1$$. For this case, the maximum absolute errors are $$e (u_{1} )=2.3490 \times 10^{-5}$$ and $$e (u_{2} )=2.4951 \times 10^{-5}$$. Moreover, the established technique is simple to implement and yields satisfactory numerical results for this problem. Note that only a short run time is needed to obtain high accuracy.

### Example 2

Consider fractional system (1.1) where

\begin{aligned} & {F}_{1} \bigl(t,u_{1}(t),u_{2}(t) \bigr)=\tan (t)- \arctan (t)-\frac{2!}{\Gamma { (3+\alpha _{1}(t) )}} \sin (t) t^{2+\alpha _{1}(t)}+u_{2}(t), \\ & {F}_{2} \bigl(t,u_{1}(t),u_{2}(t) \bigr)=\tan (t)+\arctan (t)- \frac{4!}{\Gamma { (5+\alpha _{2}(t) )}} t^{7+\alpha _{2}(t)}-u_{1}(t), \\ & {K}_{1} (t,s )= \frac{s\sin (t)}{\Gamma { (\alpha _{1}(t) )}}, \\ & {K}_{2} (t,s )= \frac{t^{3}s^{3}}{\Gamma { (\alpha _{2}(t) )}}, \\ & {G}_{1} \bigl(s,u_{1}(s),u_{2}(s) \bigr)= \arctan \bigl(u_{1}(s)\bigr), \\ & {G}_{2} \bigl(s,u_{1}(s),u_{2}(s) \bigr)=\tan \bigl(u_{2}(s)\bigr). \end{aligned}

The analytic solution for this system is $$\langle u_{1}(t),u_{2}(t) \rangle = \langle \tan (t),\arctan (t) \rangle$$. This system is solved via the presented technique for $$\hat{m}=24$$ ($$k=1$$ and $$M=12$$) and $$\hat{m}=48$$ ($$k=2$$ and $$M=12$$). The maximum absolute errors of the obtained solutions and the run time for different choices of $$\alpha _{1}(t)$$ and $$\alpha _{2}(t)$$, namely

\begin{aligned}& (\boldsymbol{A})\quad \textstyle\begin{cases} \alpha _{1}(t)=0.75+0.25\cos (t), \\ \alpha _{2}(t)=0.65+0.35\sin (t), \end{cases}\displaystyle \\& (\boldsymbol{B})\quad \textstyle\begin{cases} \alpha _{1}(t)=1.75+0.25\cos (t), \\ \alpha _{2}(t)=1.65+0.35\sin (t), \end{cases}\displaystyle \\ & ( \boldsymbol{C}) \quad \textstyle\begin{cases} \alpha _{1}(t)=3-0.4 \vert 2t-1 \vert \cos (t), \\ \alpha _{2}(t)=3-0.4 \vert 2t-1 \vert \sin (t), \end{cases}\displaystyle \\& (\boldsymbol{D})\quad \textstyle\begin{cases} \alpha _{1}(t)=4-0.3 \vert 2t-1 \vert \cos (10t), \\ \alpha _{2}(t)=4-0.3 \vert 2t-1 \vert \cos (20t), \end{cases}\displaystyle \end{aligned}
(7.3)

are reported in Table 1. As Table 1 shows, the proposed method provides numerical solutions with high accuracy. Moreover, the numerical results improve as the number of CWs increases. It is worth noting that for $$0<\alpha _{1}(t)$$, $$\alpha _{2}(t)<1$$, system (1.1) becomes a singular system of integral equations, so the presented technique works well even in such cases and computes numerical solutions with high accuracy. Another significant observation is that, as $$\alpha _{i}(t)$$, $$i=1,2$$, increase, both the absolute errors and the run time decrease.

### Example 3

Consider fractional system (1.1) where

\begin{aligned} & {F}_{1} \bigl(t,u_{1}(t),u_{2}(t),u_{3}(t) \bigr)=t^{4}+t^{3}-{ \frac{5!}{ \Gamma ( 6+{\alpha _{1}(t)} ) }} {t}^{6+{\alpha _{1}} ( t ) }-t u_{2}(t)+u_{3}(t) , \\ & {F}_{2} \bigl(t,u_{1}(t),u_{2}(t),u_{3}(t) \bigr)={t}^{5}-{t}^{4}+{t}^{3}-{ \frac{8!}{\Gamma ( 9+{\alpha _{2}} ( t ) ) } } {t}^{10+{\alpha _{2}} ( t )} +u_{1}(t)u_{2}(t) , \\ & \begin{aligned} {F}_{3} \bigl(t,u_{1}(t),u_{2}(t),u_{3}(t) \bigr)&={t}^{5}-{t}^{4}-{t}^{3}+{t}^{2}- \biggl( { \frac{4! {t}^{4+{\alpha _{3}} ( t ) }}{\Gamma ( 5+{\alpha _{3}} ( t ) ) }} -{ \frac{5! {t}^{5+{\alpha _{3}} ( t ) }}{\Gamma ( 6 +{\alpha _{3}} ( t ) ) }} \biggr)\tan (t) \\ &\quad {}-u_{1}(t) \bigl(u_{2}(t)-u_{1}(t) \bigr), \end{aligned} \\ & {K}_{1} (t,s )= \frac{ts}{\Gamma { (\alpha _{1}(t) )}}, \\ & {K}_{2} (t,s )= \frac{t^{2}s^{2}}{\Gamma { (\alpha _{2}(t) )}}, \\ & {K}_{3} (t,s )= \frac{s^{2}\tan (t)}{\Gamma { (\alpha _{3}(t) )}}, \\ & {G}_{1} \bigl(s,u_{1}(s),u_{2}(s),u_{3}(s) \bigr)=u_{1}(s)^{2}, \\ & {G}_{2} \bigl(s,u_{1}(s),u_{2}(s),u_{3}(s) \bigr)=u_{2}(s)^{2}, \\ & {G}_{3} \bigl(s,u_{1}(s),u_{2}(s),u_{3}(s) \bigr)=u_{3}(s). \end{aligned}

The analytic solution for this system is $$\langle u_{1}(t),u_{2}(t),u_{3}(t) \rangle = \langle t^{2},t^{3},t^{2}-t^{3} \rangle$$. We have used the presented technique to solve this system for $$\hat{m}=16$$ ($$k=1$$ and $$M=8$$) and $$\hat{m}=32$$ ($$k=2$$ and $$M=8$$). Table 2 contains the maximum absolute errors of obtained solutions and the run time for some different functions $$\alpha _{1}(t)$$, $$\alpha _{2}(t)$$, and $$\alpha _{3}(t)$$ as follows:

$$\begin{gathered} (\boldsymbol{A})\quad \textstyle\begin{cases} \alpha _{1}(t)=0.75+0.2t^{2}, \\ \alpha _{2}(t)=0.75+0.2t^{3}, \\ \alpha _{3}(t)=0.75+0.2t^{4}, \end{cases}\displaystyle \\ (\boldsymbol{B})\quad \textstyle\begin{cases} \alpha _{1}(t)=1.75+0.2t^{2}, \\ \alpha _{2}(t)=1.75+0.2t^{3}, \\ \alpha _{3}(t)=1.75+0.2t^{4}, \end{cases}\displaystyle \\ ( \boldsymbol{C}) \quad \textstyle\begin{cases} \alpha _{1}(t)=3-0.4 \vert 2t-1 \vert \cos (t), \\ \alpha _{2}(t)=3-0.4 \vert 2t-1 \vert \sin (t), \\ \alpha _{3}(t)=3-0.4 \vert 2t-1 \vert \sin (t), \end{cases}\displaystyle \\ (\boldsymbol{D}) \quad \textstyle\begin{cases} \alpha _{1}(t)=4-0.4 \vert 2t-1 \vert \cos (t), \\ \alpha _{2}(t)=4-0.4 \vert 2t-1 \vert \sin (t), \\ \alpha _{3}(t)=4-0.4 \vert 2t-1 \vert \sin (t). \end{cases}\displaystyle \end{gathered}$$
(7.4)

It can be observed in Table 2 that the obtained results are in good agreement with the exact solution. Also, the established technique requires very little time to compute the numerical solutions. Moreover, as in the previous example, for $$0<\alpha _{1}(t)$$, $$\alpha _{2}(t), \alpha _{3}(t)<1$$ the proposed method works very well and provides numerical solutions with high accuracy. Furthermore, Table 2 shows that the obtained solutions improve as the number of CWs increases.

### Example 4

Consider fractional system (1.1) where

\begin{aligned}& {F}_{1} \bigl(t,u_{1}(t),u_{2}(t),u_{3}(t),u_{4}(t) \bigr)=t^{\frac{9}{2}}+t^{\frac{3}{2}}-{ \frac{5! \sin (t) {t}^{5+{\alpha _{1}} ( t ) }}{ \Gamma ( 6+{\alpha _{1}(t)} ) }}-u_{4}(t) , \\& {F}_{2} \bigl(t,u_{1}(t),u_{2}(t),u_{3}(t),u_{4}(t) \bigr)=t^{\frac{5}{2}}+t^{\frac{3}{2}}-{ \frac{6! {t}^{7+{\alpha _{2}} ( t ) }}{ 2 \Gamma ( 7+{\alpha _{2}(t)} ) }}-u_{1}(t) , \\& {F}_{3} \bigl(t,u_{1}(t),u_{2}(t),u_{3}(t),u_{4}(t) \bigr)=t^{\frac{7}{2}}+t^{\frac{5}{2}}+t^{\frac{3}{2}}-{ \frac{\Gamma { (\frac{11}{2} )} {t}^{\frac{11}{2}+{\alpha _{3}} ( t ) }}{ \Gamma ( \frac{11}{2}+{\alpha _{3}(t)} ) }}-u_{1}(t)-u_{2}(t), \\& \begin{aligned} {F}_{4} \bigl(t,u_{1}(t),u_{2}(t),u_{3}(t),u_{4}(t) \bigr)&=t^{\frac{9}{2}}+t^{\frac{5}{2}}-t^{\frac{3}{2}}-t^{7}-{ \frac{\Gamma { (\frac{11}{2} )} \sin (t) {t}^{\frac{9}{2}+{\alpha _{4}} ( t ) }}{ \Gamma ( \frac{11}{2}+{\alpha _{4}(t)} ) }}\\ &\quad {}+u_{1}(t)-u_{2}(t)+u_{3}(t)^{2}, \end{aligned} \\& {K}_{1} (t,s )= \frac{s^{2}\sin (t)}{\Gamma { (\alpha _{1}(t) )}}, \\& {K}_{2} (t,s )= \frac{t s}{2 \Gamma { (\alpha _{2}(t) )}}, \\& {K}_{3} (t,s )= \frac{ts}{\Gamma { (\alpha _{3}(t) )}}, \\& {K}_{4} (t,s )= \frac{\sin (t)}{\Gamma { (\alpha _{4}(t) )}}, \\& {G}_{1} \bigl(s,u_{1}(s),u_{2}(s),u_{3}(s),u_{4}(s) \bigr)=u_{1}(s)^{2}, \\& {G}_{2} \bigl(s,u_{1}(s),u_{2}(s),u_{3}(s),u_{4}(s) \bigr)=u_{2}(s)^{2}, \\& {G}_{3} \bigl(s,u_{1}(s),u_{2}(s),u_{3}(s),u_{4}(s) \bigr)=u_{3}(s), \\& {G}_{4} \bigl(s,u_{1}(s),u_{2}(s),u_{3}(s),u_{4}(s) \bigr)=u_{4}(s). \end{aligned}

The analytic solution of this system is $$\langle u_{1}(t),u_{2}(t),u_{3}(t),u_{4}(t) \rangle = \langle t^{\frac{3}{2}},t^{\frac{5}{2}},t^{\frac{7}{2}}, t^{ \frac{9}{2}} \rangle$$. We have used the presented algorithm to solve this problem for $$\hat{m}=12$$ ($$k=1$$ and $$M=6$$) and $$\hat{m}=24$$ ($$k=2$$ and $$M=6$$). Table 3 contains the maximum absolute errors of the obtained solutions and the run time for some different functions $$\alpha _{1}(t)$$, $$\alpha _{2}(t)$$, $$\alpha _{3}(t)$$, and $$\alpha _{4}(t)$$ as follows:

$$\begin{gathered} (\boldsymbol{A})\quad \textstyle\begin{cases} \alpha _{1}(t)=0.75+0.2\cos (t), \\ \alpha _{2}(t)=0.85+0.1\cos (t), \\ \alpha _{3}(t)=1-0.1 t^{3}, \\ \alpha _{4}(t)=1-0.2 t^{3}, \end{cases}\displaystyle \\ (\boldsymbol{B})\quad \textstyle\begin{cases} \alpha _{1}(t)=1.75+0.2\cos (t), \\ \alpha _{2}(t)=1.85+0.1\cos (t), \\ \alpha _{3}(t)=2-0.1 t^{3}, \\ \alpha _{4}(t)=2-0.2 t^{3}, \end{cases}\displaystyle \\ ( \boldsymbol{C}) \quad \textstyle\begin{cases} \alpha _{1}(t)=3-0.3 \vert 2t-1 \vert \cos (10t), \\ \alpha _{2}(t)=3-0.3 \vert 2t-1 \vert \cos (20t), \\ \alpha _{3}(t)=3-0.3 \vert 2t-1 \vert \cos (30t), \\ \alpha _{4}(t)=3-0.3 \vert 2t-1 \vert \cos (40t), \end{cases}\displaystyle \\ (\boldsymbol{D})\quad \textstyle\begin{cases} \alpha _{1}(t)=4-0.3 \vert 2t-1 \vert \cos (10t), \\ \alpha _{2}(t)=4-0.3 \vert 2t-1 \vert \cos (20t), \\ \alpha _{3}(t)=4-0.3 \vert 2t-1 \vert \cos (30t), \\ \alpha _{4}(t)=4-0.3 \vert 2t-1 \vert \cos (40t). \end{cases}\displaystyle \end{gathered}$$
(7.5)

As can be observed in Table 3, the proposed method yields numerical solutions with high accuracy. In addition, the presented technique requires very little time to compute the numerical solutions.

## Conclusion

In this paper, we established a Chebyshev wavelets (CWs) Galerkin technique for a class of systems of variable-order (V-O) fractional integral equations. First, we derived a V-O fractional integration operational matrix (OM) for CWs, which was then employed to find approximate solutions of the problem. The hat functions were reviewed and used to derive a method for constructing this novel matrix. We expanded the unknown functions in terms of CWs with undetermined coefficients, and, by employing some properties of CWs together with their fractional integration OM, reduced the fractional system to an algebraic system. The existence of a unique solution for the system under consideration was also proved. Finally, the reliability of the presented scheme was demonstrated on several numerical examples, which confirm the accuracy of the established technique.

## References

1. Podlubny, I.: Fractional Differential Equations. Academic Press, San Diego (1999)
2. Oldham, K.B., Spanier, J.: The Fractional Calculus. Academic Press, New York (1974)
3. Hesameddini, E., Rahimi, A., Asadollahifard, E.: On the convergence of a new reliable algorithm for solving multi-order fractional differential equations. Commun. Nonlinear Sci. Numer. Simul. 34, 154–164 (2016)
4. Almeida, R., Torres, D.F.M.: An introduction to the fractional calculus and fractional differential equations. Sci. World J. 2013, Article ID 915437 (2013)
5. Zaky, M.A., Doha, E.H., Taha, T.M., Baleanu, D.: New recursive approximations for variable-order fractional operators with applications. Math. Model. Anal. 23, 227–239 (2018)
6. Zaky, M.A., Baleanu, D., Alzaidy, J.F., Hashemizadeh, E.: Operational matrix approach for solving the variable-order nonlinear Galilei invariant advection-diffusion equation. Adv. Differ. Equ. 2018, 102 (2018)
7. Heydari, M.H., Avazzadeh, Z.: Numerical study of non-singular variable-order time fractional coupled Burgers’ equations by using the Hahn polynomials. Eng. Comput. (2020). https://doi.org/10.1007/s00366-020-01036-5
8. Heydari, M.H., Avazzadeh, Z.: Chebyshev–Gauss–Lobatto collocation method for variable-order time fractional generalized Hirota–Satsuma coupled KdV system. Eng. Comput. (2020). https://doi.org/10.1007/s00366-020-01125-5
9. Heydari, M.H., Hosseininia, M.: A new variable-order fractional derivative with non-singular Mittag-Leffler kernel: application to variable-order fractional version of the 2D Richard equation. Eng. Comput. (2020). https://doi.org/10.1007/s00366-020-01121-9
10. Hassani, H., Machado, J.A.T., Avazzadeh, Z., Naraghirad, E.: Generalized shifted Chebyshev polynomials: solving a general class of nonlinear variable order fractional PDE. Commun. Nonlinear Sci. Numer. Simul. 85, 105229 (2020)
11. Yang, Y., Huang, Y., Zhou, Y.: Numerical solutions for solving time fractional Fokker–Planck equations based on spectral collocation methods. J. Comput. Appl. Math. 33, 389–404 (2018)
12. Yang, Y., Chen, Y., Huang, Y., Wei, H.: Spectral collocation method for the time-fractional diffusion-wave equation and convergence analysis. Comput. Math. Appl. 37, 1218–1232 (2017)
13. Ezz-Eldien, S.S., Wang, Y., Abdelkawy, M.A., Zaky, M.A.: Chebyshev spectral methods for multi-order fractional neutral pantograph equations. Nonlinear Dyn. 100, 3785–3797 (2020)
14. Heydari, M.H., Hooshmandasl, M.R., Maalek Ghaini, F.M., Fereidouni, F.: Two-dimensional Legendre wavelets for solving fractional Poisson equation with Dirichlet boundary conditions. Eng. Anal. Bound. Elem. 37(11), 1331–1338 (2013)
15. Zhu, L., Fan, Q.: Solving fractional nonlinear Fredholm integro-differential equations by the second kind Chebyshev wavelet. Commun. Nonlinear Sci. Numer. Simul. 17(6), 2333–2341 (2012)
16. Li, Y.L., Zhao, W.W.: Haar wavelet operational matrix of fractional order integration and its applications in solving the fractional order differential equations. Appl. Math. Comput. 216, 2276–2285 (2010)
17. Saeedi, H.: A CAS wavelet method for solving nonlinear Fredholm integro-differential equations of fractional order. Commun. Nonlinear Sci. Numer. Simul. 16, 1154–1163 (2011)
18. Heydari, M.H., Hooshmandasl, M.R., Mohammadi, F., Cattani, C.: Wavelets method for solving systems of nonlinear singular fractional Volterra integro-differential equations. Commun. Nonlinear Sci. Numer. Simul. 19, 37–48 (2014)
19. Heydari, M.H., Hooshmandasl, M.R., Maalek Ghaini, F.M., Mohammadi, F.: Wavelet collocation method for solving multi order fractional differential equations. J. Appl. Math. 2012, Article ID 542401 (2012). https://doi.org/10.1155/2012/542401
20. Heydari, M.H., Hooshmandasl, M.R., Maalek Ghaini, F.M., Li, M.: Chebyshev wavelets method for solution of nonlinear fractional integrodifferential equations in a large interval. Adv. Math. Phys. 2013, Article ID 482083 (2013). https://doi.org/10.1155/2013/482083
21. Li, Y.L.: Solving a nonlinear fractional differential equation using Chebyshev wavelets. Commun. Nonlinear Sci. Numer. Simul. 15, 2284–2292 (2010)
22. Celik, I.: Chebyshev wavelet collocation method for solving generalized Burgers–Huxley equation. Math. Methods Appl. Sci. 39(3), 366–377 (2016)
23. Baleanu, D., Shiri, B., Srivastava, H.M., Al Qurashi, M.: A Chebyshev spectral method based on operational matrix for fractional differential equations involving non-singular Mittag-Leffler kernel. Adv. Differ. Equ. 2018, 353 (2018)
24. Unterreiter, A.: Volterra integral equation models for semiconductor devices. Math. Methods Appl. Sci. 19(6), 425–450 (1996)
25. Brauer, F., Castillo-Chavez, C.: Mathematical Models in Population Biology and Epidemiology. Springer, Berlin (2001)
26. Brunner, H.: Collocation Methods for Volterra Integral and Related Functional Differential Equations. Cambridge University Press, Cambridge (2004)
27. Bhrawy, A., Zaky, M.A.: Shifted fractional-order Jacobi orthogonal functions: application to a system of fractional differential equations. Appl. Math. Model. 40(2), 832–845 (2016)
28. Wang, J., Xu, T.Z., Wei, Y.Q., Xie, J.Q.: Numerical solutions for systems of fractional order differential equations with Bernoulli wavelets. Int. J. Comput. Math. 96(2), 317–336 (2019)
29. Abdeljawad, T., Amin, R., Shah, K., Al-Mdallal, Q., Jarad, F.: Efficient sustainable algorithm for numerical solutions of systems of fractional order differential equations by Haar wavelet collocation method. Alex. Eng. J. (2020). https://doi.org/10.1016/j.aej.2020.02.035
30. Saemi, F., Ebrahimi, H., Shafiee, M.: An effective scheme for solving system of fractional Volterra–Fredholm integro-differential equations based on the Müntz–Legendre wavelets. J. Comput. Appl. Math. 374, 112773 (2020)
31. Saemi, F., Ebrahimi, H., Shafiee, M.: Numerical research of nonlinear system of fractional Volterra–Fredholm integral-differential equations via Block–Pulse functions and error analysis. J. Comput. Appl. Math. 345, 159–167 (2019)
32. Jhinga, A., Daftardar-Gejji, V.: A new finite-difference predictor-corrector method for fractional differential equations. Appl. Math. Comput. 336, 418–432 (2018)
33. Alijani, Z., Baleanu, D., Shiri, B., Wu, G.C.: Spline collocation methods for systems of fuzzy fractional differential equations. Chaos Solitons Fractals 131, 109510 (2020)
34. Dadkhah Khiabani, E., Ghaffarzadeh, H., Shiri, B., Katebi, J.: Spline collocation methods for seismic analysis of multiple degree of freedom systems with visco-elastic dampers using fractional models. J. Vib. Control (2020). https://doi.org/10.1177/1077546319898570
35. Dadkhah Khiabani, E., Shiri, B., Ghaffarzadeh, H., Baleanu, D.: Visco-elastic dampers in structural buildings and numerical solution with spline collocation methods. J. Appl. Math. Comput. 63, 29–57 (2020)
36. Jhinga, A., Daftardar-Gejji, V.: An accurate spectral collocation method for nonlinear systems of fractional differential equations and related integral equations with nonsmooth solutions. Appl. Numer. Math. 154, 205–222 (2020)
37. Chen, Y., Liu, L., Li, B., Sun, Y.: Numerical solution for the variable order linear cable equation with Bernstein polynomials. Appl. Math. Comput. 238, 329–341 (2014)
38. Heydari, M.H., Hooshmandasl, M.R., Maalek Ghaini, F.M., Cattani, C.: A computational method for solving stochastic Itô–Volterra integral equations based on stochastic operational matrix for generalized hat basis functions. J. Comput. Phys. 270, 402–415 (2014)
39. Heydari, M.H., Hooshmandasl, M.R., Cattani, C., Maalek Ghaini, F.M.: An efficient computational method for solving nonlinear stochastic Itô integral equations: application for stochastic problems in physics. J. Comput. Phys. 283, 148–168 (2015)
40. Heydari, M.H., Avazzadeh, Z.: A new wavelet method for variable-order fractional optimal control problems. Asian J. Control 20(5), 1804–1817 (2018)
41. Canuto, C., Hussaini, M.Y., Quarteroni, A., Zang, T.A.: Spectral Methods: Fundamentals in Single Domains. Springer, Berlin (2010)
42. Hunter, J.K., Nachtergaele, B.: Applied Analysis. World Scientific, Singapore (2001)

## Acknowledgements

We thank the reviewers for their valuable comments and suggestions.

## Funding

This work was supported by the National Natural Science Foundation of China Project (12071402, 11931003), the Project of Scientific Research Fund of the Hunan Provincial Science and Technology Department (2020JJ2027, 2020ZYT003, 2018WK4006).

## Author information

### Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests.
