

A numerical method using Laplace-like transform and variational theory for solving time-fractional nonlinear partial differential equations with proportional delay

Abstract

Time-fractional nonlinear partial differential equations (TFNPDEs) with proportional delay are commonly used for modeling real-world phenomena such as earthquakes, volcanic eruptions, and brain tumor dynamics. These problems are quite challenging, and the transcendental nature of the delay makes them even more difficult. Hence, the development of efficient numerical methods remains an open area of research. In this paper, we use the concepts of the Laplace-like transform and variational theory to develop a new numerical method for solving TFNPDEs with proportional delay. The stability and convergence of the method are analyzed in the Banach sense. The efficiency of the proposed method is demonstrated by solving some test problems. The numerical results show that the proposed method performs much better than some recently developed methods and enables us to obtain more accurate solutions.

1 Introduction

Fractional calculus is a valuable branch of applied mathematics dedicated to the study of integrals and derivatives of arbitrary positive order [1, 2]. Fractional differential equations (FDEs), which are a generalization of classical differential equations, enable us to describe functional values at continuous points, not just at discrete points. For this reason, they are a popular choice for modeling complicated phenomena marked by memory and hereditary behaviors. Hence, fractional derivatives and integrals facilitate the analysis of real-world situations. It is also worth mentioning that nonlinear FDEs are preferable when it comes to dealing with various real-world events such as earthquake propagation, volcanic eruptions, population growth, and the like [3, 4].

When one is interested in incorporating the present state as well as the history of a physical situation in its mathematical formulation, one needs to consider delay differential equations. These kinds of differential equations (DEs), which are also referred to as time-delay differential equations, are commonly used for modeling various phenomena in science and engineering. Combining the concepts of fractional derivatives and time-delay differential equations gives rise to fractional delay differential equations, which are appropriate for capturing and modeling real-world events that involve memory and history. In this paper, we are interested in time-fractional nonlinear partial differential equations (TFNPDEs) with proportional delay. TFNPDEs are popular in different fields such as environmental sciences, economics, chemistry, physics, and the life sciences [5–7].

In this paper, we consider the following type of TFNPDEs with proportional delay:

$$ \textstyle\begin{cases} {}^{C}_{0}{D^{\alpha }_{t}}v (x,t ) =f (x,t,v (p_{0}x,q_{0}t ),\frac{\partial }{\partial x}v (p_{1}x,q_{1}t ), \dots ,\frac{{\partial }^{n}}{{\partial x}^{n}}v (p_{n}x,q_{n}t ) ), \\ v^{ (k )} (x,0 ) = g_{k}(x), \end{cases} $$
(1.1)

where \(g_{k}(x)\) is a specified initial function for \(k=0, 1, 2, \dots , m-1\), \(m-1<\alpha \leq m\), and \(m\in \mathbb{N}\), \(p_{i},q_{i}\in (0,1 )\) for \(i = 0, 1, 2, \ldots, n\), α is the order of the time-fractional derivative, and f is the partial differential operator.

TFNPDE equation (1.1) can be used to represent models of real-world phenomena by selecting f wisely. For example, the time-fractional differential equation given by

$$ {}^{C}_{0}D^{\alpha }_{t}v (x,t )=bv \frac{\partial }{\partial x}v (p_{0}x,q_{0}t )+ \frac{{\partial }^{3}}{{\partial x}^{3}}v (p_{1}x,q_{1}t ),\quad 0< \alpha \le 1, b \in \mathbb{R} $$

is known as the time-fractional Korteweg–de Vries equation with proportional delay, which arises in various studies related to shallow water waves.

The time-fractional Klein–Gordon equation with proportional delay is given by

$$ {}^{C}_{0}D^{\alpha }_{t}v (x,t )= \frac{{\partial }^{2}}{{\partial x}^{2}}v (p_{0}x,q_{0}t )-bv(p_{1}x,q_{1}t)-F \bigl(v (p_{2}x,q_{2}t ) \bigr)+h (x,t ), \quad 1< \alpha \leq 2, $$

where \(h(x,t)\) is a known analytic function, and F is a nonlinear function of \(v(x,t)\). This equation is commonly used to describe nonlinear wave interactions arising in quantum field theory.

Since TFNPDEs, and in general nonlinear differential equations, are difficult to solve and it is very hard to obtain exact solutions, many attempts have been made to develop numerical methods that provide good approximate solutions. For this reason, several techniques have been proposed to solve these kinds of DEs. Abazari and Ganji [8] proposed a two-dimensional differential transform method (DTM) and its reduced form to solve nonlinear partial differential equations with proportional delay. The variational iteration method (VIM), initially proposed by He [9], has proved to be a powerful technique for solving nonlinear DEs [10]. However, the success of this method depends mainly on the accurate identification of Lagrange multipliers by means of variational theory and integration by parts [9, 11]. Moreover, applying VIM to obtain series solutions of nonlinear problems requires repeated calculations and the computation of a large number of unnecessary terms, which reduces the efficiency of the method [12]. To avoid some of these shortcomings, Abassy et al. [12, 13] proposed the modified variational iteration method (MVIM) and used it to find approximate power series solutions of nonlinear problems. It is reported that MVIM reduces the computational work significantly, thereby leading to fast convergence. Sakar and Saldir [14] also proposed an improved version of VIM based on incorporating an auxiliary parameter. The authors obtained the optimal value of this parameter by using a residual error function and applied their method to solve various TFNPDEs. Saldir et al. [15] successfully applied the reproducing kernel method (RKM) for solving the time-fractional Kawahara equation with variable coefficients. The authors managed to construct orthonormal basis functions on specific Hilbert spaces that facilitate the solution process. Sakar et al. [16] developed an iterative method based on RKM to solve the time-fractional Cahn–Allen differential equation with Caputo derivative. Sakar et al. [17] designed a homotopy perturbation method (HPM) to solve various TFNPDEs with proportional delay in which the fractional derivative was taken in the Caputo sense. The authors compared their approach with traditional methods and reported that HPM is cost efficient. Singh and Kumar [18] combined HPM with the Laplace transform and proposed a hybrid method called the homotopy perturbation transform method (HPTM) to solve TFNPDEs with proportional delay. They reported that HPTM provides solutions in series form and is a powerful technique for treating various physical phenomena in science and engineering. Singh and Kumar [19] developed an analytical method called the alternative variational iteration method (AVIM) which is capable of providing approximate power series solutions to TFNPDE problems. The authors reported that AVIM competes favorably with DTM, HPM, and HPTM. Wang et al. [20] proposed two analytical methods, the residual power series method (RPSM) and the homotopy analysis transform method (HATM), to solve TFNPDEs with proportional delay. It is reported that these methods provide approximate solutions which are in good agreement with those of HPM, HPTM, and the exact solutions.

In this paper, we propose a new numerical method which benefits from the nice properties of Laplace-like transform and variational theory. We have observed that the new method facilitates computing generalized Lagrange multipliers which are crucial for solving nonlinear DEs.

This paper is organized as follows: In Sect. 2, we discuss some basic concepts from fractional calculus and provide brief descriptions of a new Laplace-like transform and variational theory. Section 3 is dedicated to developing the new numerical method. In Sects. 3.1 and 3.2, we analyze the stability and convergence of the proposed method in the Banach sense. In Sect. 4, we present and analyze various numerical results. Moreover, we compare the efficiency of some methods which are used to solve TFNPDEs with proportional delay. Finally, in Sect. 5, we provide the concluding remarks.

2 Preliminaries

In this section we present some basic concepts from fractional calculus which are crucial for understanding the remaining part of the paper. We also briefly describe Laplace-like transform and variational theory which will be used to design the new method.

Definition 2.1

([1, 2])

A real function \(v(t)\), \(t>0\), is in the space \(C_{\mu }\), \(\mu \in \mathbb{R}\), if there exists a real number p (\(p > \mu \)) such that \(v(t) = t^{p} v_{1} (t)\), where \(v_{1} (t) \in C[0,\infty )\), and it is said to be in the space \(C_{\mu }^{m}\) if and only if \(v^{(m)}\in C_{\mu }\), \(m\in \mathbb{N}\).

Definition 2.2

([1, 2])

The Riemann–Liouville fractional integral operator of order \(\alpha \geq 0\) of a function \(v \in C_{\mu }\), \(\mu \geq -1\), is defined as follows:

$$\begin{aligned}& D^{-\alpha }_{a,t} v(t) = \frac{1}{\Gamma (\alpha )} \int _{a}^{t} (t- \tau )^{\alpha -1}v(\tau )\,d\tau , \quad \alpha > 0, t > a, \\& D^{0}_{a,t} v(t) = v(t). \end{aligned}$$

Definition 2.3

([1, 2])

The Caputo time-fractional derivative of order α, \(m-1 < \alpha \leq m\), \(m\in \mathbb{N}\) of a function \(v(t)\), \(t>0\), is defined as follows:

$$ {}^{C}_{a}D_{t}^{\alpha }v(t) = D_{a,t}^{-(m-\alpha )}\bigl[v^{(m)}(t) \bigr] = \frac{1}{\Gamma (m-\alpha )} \int _{a}^{t}(t-\tau )^{m-(\alpha + 1)}v^{(m)}( \tau )\,d\tau . $$
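For a rough numerical illustration (our own addition, not part of the original text), the Caputo derivative can be evaluated directly from Definition 2.3 and compared with the known closed form \({}^{C}_{0}D_{t}^{\alpha }t^{2} = \frac{\Gamma (3)}{\Gamma (3-\alpha )}t^{2-\alpha }\) for \(0<\alpha \leq 1\). The helper name caputo_derivative below is ours; the singular kernel is handled with SciPy's algebraic-weight quadrature.

```python
import math
from scipy.integrate import quad

def caputo_derivative(v_m, alpha, t, a=0.0):
    """Caputo derivative of order alpha (m-1 < alpha <= m) at time t > a,
    evaluated from Definition 2.3; v_m is the m-th ordinary derivative of v.
    The weight option supplies the kernel (t - tau)**(m - alpha - 1), which
    has an integrable singularity at tau = t."""
    m = math.ceil(alpha)
    integral, _ = quad(v_m, a, t, weight='alg', wvar=(0.0, m - alpha - 1))
    return integral / math.gamma(m - alpha)

# Check against the closed form for v(t) = t**2 with 0 < alpha <= 1 (so m = 1):
alpha, t = 0.5, 1.3
numeric = caputo_derivative(lambda tau: 2.0 * tau, alpha, t)   # v'(tau) = 2*tau
exact = math.gamma(3) / math.gamma(3 - alpha) * t**(2 - alpha)
print(numeric, exact)   # the two values agree to quadrature accuracy
```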

2.1 Laplace-like transform

Laplace-like transform (LlT), also known as “Shehu transform”, is a new integral transform derived to facilitate solving DEs [21]. LlT is defined over the set

$$ A = \bigl\{ v (t ):\exists M,k_{1},k_{2}>0, \bigl\vert v(t) \bigr\vert < Me^{\frac{ \vert t \vert }{k_{j}}}, \text{if } t\in {(-1)}^{j} \times [0,\infty ), j = 1,2 \bigr\} , $$

where the constant M is finite and \(k_{1}\), \(k_{2}\) are either finite or infinite. Then LlT of the function \(v(t)\), denoted by \(S[v(t)]\), is defined as follows:

$$ S \bigl[v(t) \bigr] = G (s,u )= \int ^{\infty }_{0}{e^{ \frac{-st}{u}}v (t )}\,dt,\quad s>0, u > 0, $$

provided the integral on the right-hand side exists [21]. In particular,

$$\begin{aligned}& S[1] = \frac{u}{s}, \\& S \biggl[ \frac{t^{\alpha }}{\Gamma (\alpha +1)} \biggr] = \biggl( \frac{u}{s} \biggr)^{\alpha +1}, \end{aligned}$$

where Γ is the gamma function. Moreover, LlT of the fractional order derivative \({}^{C}_{0}{D^{\alpha }_{t}v (x,t )}\) for \(\alpha \geq 0\) is given by [22]

$$ S \bigl[{}^{C}_{0}{D^{\alpha }_{t}v (x,t )} \bigr]={ \biggl(\frac{s}{u} \biggr)}^{\alpha }G (x,s,u )- \sum^{m-1}_{k=0}{{ \biggl( \frac{s}{u} \biggr)}^{\alpha - (k+1 )}v^{ (k )} (x,0 )}, $$
(2.1)

where \(G (x,s,u )=S [v (x,t ) ]\) for \(m-1<\alpha \le m\), \(m\in \mathbb{N}\).
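As a quick sanity check (our own illustration, not from the original text), the transform pairs listed above can be verified symbolically from the defining integral; the helper name shehu is ours.

```python
import sympy as sp

t, s, u, a = sp.symbols('t s u alpha', positive=True)

def shehu(expr):
    """Shehu (Laplace-like) transform computed from the defining integral."""
    return sp.integrate(sp.exp(-s * t / u) * expr, (t, 0, sp.oo))

print(shehu(sp.Integer(1)))            # u/s
print(shehu(t**a / sp.gamma(a + 1)))   # an expression equivalent to (u/s)**(alpha + 1)
```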

2.2 Variational theory

Modified variational iteration method (MVIM) is an efficient technique which has been implemented successfully to solve a large class of nonlinear problems [12, 13, 23]. Efficient implementation of MVIM rests on establishing the correction functional and identifying optimal Lagrange multipliers.

To illustrate the basic idea of MVIM, we consider the following TFNPDE problem:

$$ {}^{C}_{0}{D^{\alpha }_{t}}v (x,t ) + Rv (x,t ) + Nv (x,t ) - g (x,t ) = 0,\quad 0< \alpha < 1, $$
(2.2)

where \({}^{C}_{0}{D^{\alpha }_{t}}\) is the Caputo fractional derivative operator, R is a linear operator, \(Nv (x,t )\) is the nonlinear term, and \(g (x,t )\) is a given continuous function [11, 23, 24].

One of the most important steps of MVIM is to construct the correction functional for (2.2) which is defined as follows [12]:

$$ v_{n+1} (x,t ) = v_{n} (x,t )+ \int ^{t}_{0}{ \lambda \bigl[{}^{C}_{0}{D^{\alpha }_{t}}v_{n} (x,s ) + R{ \tilde{v}}_{n} (x,s ) + N{\tilde{v}}_{n} (x,s )-g (x,s ) \bigr]}\,ds, $$
(2.3)

where λ is a general Lagrange multiplier, \(v_{n}\) is the approximate solution at the nth iteration, and \({\tilde{v}}_{n}\) denotes a restricted variation, i.e., \(\delta {\tilde{v}}_{n} = 0\) with variational operator δ [25].

Making the correction functional in equation (2.3) stationary, i.e., \(\delta v_{n+1} (x,t ) = 0\), we obtain

$$ \delta v_{n} (x,t ) + \delta \int ^{t}_{0}\lambda\, {}^{C}_{0}D^{ \alpha }_{t}v_{n} (x,s )\,ds = 0. $$
(2.4)

The other crucial step in MVIM is computing the Lagrange multiplier λ by solving the system of stationary conditions which can be derived from (2.4). In ordinary calculus, the optimal value of λ can be obtained by using variational theory and integration by parts. However, in fractional calculus, it is not possible to perform integration by parts on the second term of the above equation [24].

2.3 Banach’s fixed point theorem

Definition 2.4

([26])

Let \(T: X \rightarrow X\) be a mapping of a set X into itself. A fixed point of T is a point \(x\in X\) which is mapped onto itself, that is,

$$ Tx = x, $$

i.e., the image Tx coincides with x.

Definition 2.5

(Contraction [26])

Let \(X = (X,d)\) be a metric space. A mapping \(T: X \rightarrow X\) is called a contraction on X if there is a nonnegative real number \(\gamma < 1\) such that, for all \(x, y \in X\),

$$ d(Tx, Ty) \leq \gamma d(x, y),\quad 0 \leq \gamma < 1. $$
(2.5)

Theorem 2.1

(Banach’s fixed point theorem [26])

Consider a metric space \(X = (X, d)\), where \(X \neq \emptyset \). Suppose that X is complete, and let \(T: X \rightarrow X\) be a contraction on X. Then T has a unique fixed point.

Corollary 2.1

(Iteration, error bounds [26])

Under the conditions of Theorem 2.1, the Picard iterative sequence \(\{x_{n}\}_{n=1}^{\infty }\) defined by the procedure \(x_{n+1} = Tx_{n}\) with arbitrary \(x_{0}\in X\) converges to the unique fixed point x of T. The error estimates for the iterative procedure are the prior estimate

$$ d(x_{n},x) \leq \frac{\gamma ^{n}}{1-\gamma } d(x_{0},x_{1}) $$

and the posterior estimate

$$ d(x_{n},x) \leq \frac{\gamma }{1-\gamma } d(x_{n-1},x_{n}). $$
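For concreteness, here is a small numerical illustration of the Picard iteration and the prior error bound (a toy example of our own, not from the paper): the map \(T(x)=\cos x\) is a contraction on \([0,1]\) with contraction constant \(\gamma = \sin 1 < 1\), and its unique fixed point is the solution of \(\cos x = x\).

```python
import math

T = math.cos                    # contraction on [0, 1]: |T'(x)| = |sin x| <= sin(1) < 1
gamma = math.sin(1.0)           # contraction constant on [0, 1]
x0 = 1.0
x1 = T(x0)
x_star = 0.7390851332151607     # unique fixed point of cos on [0, 1] (cos x = x)

x = x0
for n in range(1, 11):
    x = T(x)                                         # Picard iteration x_{n+1} = T(x_n)
    prior = gamma**n / (1 - gamma) * abs(x1 - x0)    # prior error bound of Corollary 2.1
    print(n, abs(x - x_star) <= prior)               # prints True at every step
```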

Theorem 2.2

([27, 28])

Let \((X, d)\) be a Banach space and \(T: X \rightarrow X\) be a self-map of X satisfying

$$ d(Tx,Ty) \leq \beta d(x,Tx) + \gamma d(x,y) $$

for all \(x, y \in X\), \(\beta \geq 0\), \(0\leq \gamma < 1\). Then T is Picard T-stable.

3 A new numerical method for solving TFNPDEs with proportional delay

Even though there are methods which approximate the solution of TFNPDE problems, for instance HPM [17], HPTM [18], RPSM, and HATM [20], the challenging nature of these DEs inspires the search for new and efficient methods. In the sequel, we derive a new method which makes use of the Laplace-like (Shehu) transform and variational theory. We name this new method the Laplace transform variational iteration method (LTVIM).

Consider the general TFNPDE problem given in equation (1.1). Applying LlT as given in equation (2.1) to transform the fractional derivative in (1.1) gives

$$ \begin{aligned} & { \biggl(\frac{s}{u} \biggr)}^{\alpha }G (x,s,u )-\sum^{m-1}_{k=0}{{ \biggl(\frac{s}{u} \biggr)}^{\alpha - (k+1 )}v^{ (k )} (x,0 ) } \\ &\quad = S \biggl[f \biggl(x,t,v (p_{0}{x,q}_{0}t ), \frac{\partial }{\partial x}v (p_{1}x,q_{1}t ),\dots , \frac{{\partial }^{n}}{{\partial x}^{n}}v (p_{n}x,q_{n}t ) \biggr) \biggr], \\ & G (x,s,u ) = S \bigl[v (x,t ) \bigr]. \end{aligned} $$
(3.1)

Applying MVIM as given in equation (2.3) to construct a correction functional of the transformed equation (3.1), we obtain

$$ \begin{aligned} G_{n+1}(x,s,u) & = G_{n}(x,s,u) +\lambda \Biggl({ \biggl( \frac{s}{u} \biggr)}^{\alpha }G_{n}(x,s,u)-\sum ^{m-1}_{k=0}{{ \biggl( \frac{s}{u} \biggr)}^{\alpha - (k+1 )}{\tilde{v}_{n}}^{ (k )} (x,0 )} \Biggr) \\ &\quad {} - \lambda S \biggl[\tilde{f} \biggl(x,t,v_{n} (p_{0}{x,q}_{0}t ),\frac{\partial }{\partial x}v_{n} (p_{1}x,q_{1}t ), \dots ,\frac{{\partial }^{n}}{{\partial x}^{n}}v_{n} (p_{n}x,q_{n}t ) \biggr) \biggr]. \end{aligned} $$
(3.2)

Since \(\tilde{v}_{n}\) and \(\tilde{f}\) are restricted variations, i.e., \(\delta \tilde{v}_{n} = 0\) and \(\delta \tilde{f} = 0\), applying the variational operator δ to (3.2), we obtain

$$ {\delta G}_{n+1}(x,s,u) = {\delta G}_{n}(x,s,u)+ \lambda { \biggl( \frac{s}{u} \biggr)}^{\alpha }\delta G_{n}(x,s,u). $$
(3.3)

Making equation (3.3) stationary, i.e., \(\delta G_{n+1}(x,s,u) = 0\), gives

$$ {\delta G}_{n}(x,s,u) \biggl[1+\lambda { \biggl( \frac{s}{u} \biggr)}^{ \alpha } \biggr]=0. $$

Hence, the value of the Lagrange multiplier which satisfies the stationary condition \(1+\lambda { (\frac{s}{u} )}^{\alpha } = 0\) is

$$ \lambda =-{ \biggl(\frac{s}{u} \biggr)}^{-\alpha }. $$

Inserting the Lagrange multiplier found above into equation (3.2) gives

$$ \begin{aligned} &G_{n+1}(x,s,u) \\ &\quad = \biggl(\frac{s}{u} \biggr)^{-\alpha } \Biggl(\sum ^{m-1}_{k=0}{{ \biggl(\frac{s}{u} \biggr)}^{\alpha - (k+1 )}v_{n}^{ (k )} (x,0 )} \Biggr) \\ &\qquad {} + \biggl(\frac{s}{u} \biggr)^{-\alpha } S \biggl[f \biggl(x,t,v_{n} (p_{0}{x,q}_{0}t ), \frac{\partial }{\partial x}v_{n} (p_{1}x,q_{1}t ), \dots , \frac{{\partial }^{n}}{{\partial x}^{n}}v_{n} (p_{n}x,q_{n}t ) \biggr) \biggr]. \end{aligned} $$
(3.4)

Taking the inverse LlT of both sides of (3.4) gives

$$\begin{aligned} &v_{n+1}(x,t) \\ &\quad = S^{-1} \Biggl[ \biggl(\frac{s}{u} \biggr)^{-\alpha } \Biggl(\sum^{m-1}_{k=0}{{ \biggl( \frac{s}{u} \biggr)}^{\alpha - (k+1 )}{{v}_{n}}^{ (k )} (x,0 )} \Biggr) \Biggr] \\ &\qquad {} + S^{-1} \biggl[{ \biggl(\frac{s}{u} \biggr)}^{-\alpha }S \biggl[f \biggl( x,t,v_{n} (p_{0}{x,q}_{0}t ), \frac{\partial }{\partial x}v_{n} (p_{1}x,q_{1}t ),\dots , \frac{{\partial }^{n}}{{\partial x}^{n}}v_{n} (p_{n}x,q_{n}t ) \biggr) \biggr] \biggr]. \end{aligned}$$
(3.5)

In this paper, we are interested in \(\alpha \in (0, 1]\). Therefore, \(m=1\) and \(k = 0\) in (3.5) and the first term on the right-hand side of this equation becomes

$$\begin{aligned} S^{-1} \Biggl[ \biggl(\frac{s}{u} \biggr)^{-\alpha } \Biggl(\sum^{m-1}_{k=0}{{ \biggl( \frac{s}{u} \biggr)}^{\alpha - (k+1 )}{{v}_{n}}^{ (k )} (x,0 )} \Biggr) \Biggr] & = S^{-1} \biggl[ \biggl( \frac{s}{u} \biggr)^{-\alpha } \biggl( \biggl( \frac{s}{u} \biggr)^{ \alpha -1}v_{n}(x,0) \biggr) \biggr] \\ & = S^{-1} \biggl[ \biggl(\frac{u}{s} \biggr)v_{n}(x,0) \biggr] \\ & = v_{n}(x,0). \end{aligned}$$

Hence, the iteration formula of LTVIM is given by

$$\begin{aligned} \begin{aligned} &v_{n+1}(x,t) \\ &\quad = v_{n}(x,0) \\ &\qquad {}+ S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{ \alpha }S \biggl[f \biggl( x,t,v_{n} (p_{0}{x,q}_{0}t ), \frac{\partial }{\partial x}v_{n} (p_{1}x,q_{1}t ),\dots , \frac{{\partial }^{n}}{{\partial x}^{n}}v_{n} (p_{n}x,q_{n}t ) \biggr) \biggr] \biggr], \end{aligned} \end{aligned}$$
(3.6)

which is the \((n+1)\)th approximate solution of the TFNPDE problem in (1.1) for \(0< \alpha \leq 1\).
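The composite operator \(S^{-1} [ (\frac{u}{s} )^{\alpha }S [\,\cdot\, ] ]\) appearing in (3.6) acts on a power of t as \(t^{\beta }\mapsto \frac{\Gamma (\beta +1)}{\Gamma (\beta +\alpha +1)}t^{\beta +\alpha }\); this follows from the transform pair of Sect. 2.1 and coincides with the Riemann–Liouville fractional integral of order α. Consequently, one LTVIM step can be carried out symbolically by term-wise fractional integration whenever the bracketed expression expands into finitely many powers of t. The following sketch is our own illustration of this idea (the helper names frac_integral_t and ltvim_step are not from the paper); the worked examples of Sect. 4 can be reproduced by supplying the corresponding right-hand side f.

```python
import sympy as sp

x, t, alpha = sp.symbols('x t alpha', positive=True)

def frac_integral_t(expr, a):
    """Apply S^{-1}[(u/s)**a S[.]] term by term in t, i.e. the rule
    t**beta -> Gamma(beta+1)/Gamma(a+beta+1) * t**(beta+a).
    Assumes expr expands into a finite sum of terms c(x)*t**beta."""
    out = sp.Integer(0)
    for term in sp.Add.make_args(sp.expand(expr)):
        c, beta = term.as_coeff_exponent(t)
        out += c * sp.gamma(beta + 1) / sp.gamma(a + beta + 1) * t**(beta + a)
    return out

def ltvim_step(v, f, a):
    """One LTVIM iteration of Eq. (3.6): v_{n+1} = v_n(x, 0) + J^a[ f(v_n) ]."""
    return sp.expand(v.subs(t, 0) + frac_integral_t(f(v), a))

# Quick check: for D^alpha v = v with v(0) = 1, one step from v_0 = 1 gives
# 1 + t**alpha/Gamma(alpha + 1).
print(ltvim_step(sp.Integer(1), lambda v: v, alpha))
```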

3.1 Stability analysis

In the sequel, we state and prove an important result regarding the stability of LTVIM. To show Picard stability, it suffices to show that the mapping associated with LTVIM satisfies the conditions of Theorem 2.2.

Theorem 3.1

Let \((X,\| \cdot \|)\) be a Banach space and \(T: X\rightarrow X\) be a self-map of X. Then the LTVIM iteration procedure defined by

$$\begin{aligned} &v_{n+1} (x,t ) \\ &\quad = Tv_{n}(x,t) \\ &\quad = {v_{n} (x,0 )} \\ &\qquad {}+S^{-1} \biggl[{ \biggl( \frac{u}{s} \biggr)}^{ \alpha }S \biggl[f \biggl( x,t,v_{n} (p_{0}x,q_{0}t ), \frac{\partial }{\partial x}v_{n} (p_{1}x,q_{1}t ), \dots , \frac{{\partial }^{n}}{{\partial x}^{n}}v_{n}(p_{n}x,q_{n}t) \biggr) \biggr] \biggr] \end{aligned}$$
(3.7)

is Picard T-stable provided that

  i) \(\|v_{n}(x,0)-v_{m}(x,0)\|\leq \delta _{0} \|v_{n}(x,t) - v_{m}(x,t) \| \) for some \(\delta _{0} \in \mathbb{R}_{+}\) and for any t in the domain;

  ii) \(\Vert f (v_{n} (p_{0}x,q_{0}t )-v_{m} (p_{0}x,q_{0}t ),\frac{\partial }{\partial x}v_{n} (p_{1}x,q_{1}t )- \frac{\partial }{\partial x}v_{m} (p_{1}x,q_{1}t ),\dots ) \Vert \leq \delta _{1}\|v_{n}-v_{m}\| \) for some \(\delta _{1} \in \mathbb{R}_{+}\);

  iii) \(\gamma = \delta _{0} + \delta _{1} \Vert \frac{t^{\alpha }}{\Gamma (\alpha +1)} \Vert < 1\).

Proof

For the sake of convenience, we drop \((x,t)\) from \(v_{n}(x,t)\) and \(v_{m}(x,t)\). Thus, for \(n, m\in \mathbb{N}\), we have

$$ \begin{aligned} &Tv_{n}-Tv_{m} \\ &\quad = v_{n} (x,0 )-v_{m} (x,0 ) \\ &\qquad {} +S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha }S \biggl[f \biggl( x,t,v_{n} (p_{0}x,q_{0}t ), \frac{\partial }{\partial x}v_{n} (p_{1}x,q_{1}t ),\dots , \frac{{\partial }^{n}}{{\partial x}^{n}}v_{n}(p_{n}x,q_{n}t) \biggr) \biggr] \biggr] \\ &\qquad {} -S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha }S \biggl[f \biggl( x,t,v_{m} (p_{0}x,q_{0}t ), \frac{\partial }{\partial x}v_{m} (p_{1}x,q_{1}t ),\dots , \frac{{\partial }^{n}}{{\partial x}^{n}}v_{m}(p_{n}x,q_{n}t) \biggr) \biggr] \biggr]. \end{aligned} $$
(3.8)

Taking the norm of both sides of equation (3.8) and applying the triangle inequality, we have

$$ \begin{aligned} &\Vert Tv_{n}- Tv_{m} \Vert \\ &\quad \le \bigl\Vert v_{n} (x,0 )- v_{m} (x,0 ) \bigr\Vert \\ &\qquad {} + \biggl\Vert S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha }S \biggl[f \biggl( x,t,v_{n} (p_{0}x,q_{0}t ), \frac{\partial }{\partial x}v_{n} (p_{1}x,q_{1}t ),\dots , \frac{{\partial }^{n}}{{\partial x}^{n}}v_{n}(p_{n}x,q_{n}t) \biggr) \biggr] \biggr] \\ &\qquad {} - S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{ \alpha }S \biggl[f \biggl( x,t,v_{m} (p_{0}x,q_{0}t ), \frac{\partial }{\partial x}v_{m} (p_{1}x,q_{1}t ),\dots , \frac{{\partial }^{n}}{{\partial x}^{n}}v_{m}(p_{n}x,q_{n}t) \biggr) \biggr] \biggr\Vert . \end{aligned} $$

Using the linearity of S, \(S^{-1}\) and the partial differential operator f, we obtain

$$ \begin{aligned} &\Vert Tv_{n}-Tv_{m} \Vert \\ &\quad \le \bigl\Vert v_{n} (x,0 )-v_{m} (x,0 ) \bigr\Vert \\ &\qquad {} +S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha }S \biggl[ \biggl\Vert f \biggl(v_{n} (p_{0}x,q_{0}t )-v_{m} (p_{0}x,q_{0}t ), \\ &\qquad \frac{\partial }{\partial x} \bigl(v_{n} (p_{1}x,q_{1}t )-v_{m} (p_{1}x,q_{1}t ) \bigr),\dots \biggr) \biggr\Vert \biggr] \biggr]. \end{aligned} $$
(3.9)

Using conditions i) and ii) of the theorem, equation (3.9) becomes

$$\begin{aligned} \Vert Tv_{n}-Tv_{m} \Vert & \leq \delta _{0} \Vert v_{n}-v_{m} \Vert + \biggl\Vert S^{-1} \biggl[ \biggl(\frac{u}{s} \biggr)^{\alpha }S\big[\delta _{1}\|v_{n}-v_{m}\|\big] \biggr] \biggr\Vert \\ & = \delta _{0} \Vert v_{n}-v_{m} \Vert + \delta _{1} \Vert v_{n}-v_{m} \Vert \biggl\Vert S^{-1} \biggl[ \biggl(\frac{u}{s} \biggr)^{\alpha }S[1] \biggr] \biggr\Vert . \end{aligned}$$

Observe that

$$ S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha }S[1] \biggr]=S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha }\frac{u}{s} \biggr]= S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha + 1} \biggr] = \frac{t^{\alpha }}{{\Gamma } (\alpha +1 )}. $$

Therefore, we have

$$ \begin{aligned} \Vert Tv_{n}-Tv_{m} \Vert & \le \biggl({\delta }_{0}+ {\delta }_{1} \biggl\Vert \frac{t^{\alpha }}{{\Gamma } (\alpha +1 )} \biggr\Vert \biggr) \Vert v_{n}-v_{m} \Vert \\ & = \gamma \Vert v_{n}-v_{m} \Vert \\ & \leq \beta \Vert v_{n}-Tv_{n} \Vert + \gamma \Vert v_{n}-v_{m} \Vert ,\quad \beta \geq 0, \end{aligned} $$
(3.10)

where \(\gamma = {\delta }_{0}+{\delta }_{1} \Vert \frac{t^{\alpha }}{{\Gamma } (\alpha +1 )} \Vert \). Hence, by Theorem 2.2, LTVIM is Picard T-stable if \(\gamma < 1\). □

3.2 Convergence analysis and error estimate

To analyze the convergence of LTVIM and to estimate its error, we state and prove two important results.

Theorem 3.2

Let \((X,\| \cdot \|)\) be a Banach space and \(T:X \to X\), a mapping associated with LTVIM, be defined by (3.7). Then T has a unique fixed point and the sequence \(\{ v_{n} \} _{n=1}^{\infty }\) generated by LTVIM with an initial value \(v_{0}\in X\) converges to this fixed point.

Proof

Combination of Definition 2.5, Theorem 2.1, and equation (3.10) implies that T has a unique fixed point. Next choose \(v_{0}\in X\) and consider the sequence \(\{v_{n}\}_{n=1}^{\infty }\) generated by LTVIM with \(v_{n+1} = Tv_{n}\) as in equation (3.7). Observe that

$$ v_{1} = Tv_{0},\qquad v_{2} = Tv_{1} = T^{2}v_{0},\qquad \ldots ,\qquad v_{n} = T^{n}v_{0}, \qquad \ldots . $$
(3.11)

Equation (3.11) displays the sequence of images of \(v_{0}\) under repeated application of T. To show the convergence of LTVIM, it suffices to show that the generated sequence is Cauchy. By applying (3.10) and (3.11) one obtains

$$ \begin{aligned} \Vert v_{n+1} - v_{n} \Vert & = \Vert Tv_{n} - Tv_{n-1} \Vert \\ & \leq \gamma \Vert v_{n} - v_{n-1} \Vert \\ & = \gamma \Vert Tv_{n-1} - Tv_{n-2} \Vert \\ & \leq \gamma ^{2} \Vert v_{n-1} - v_{n-2} \Vert \\ &\ \ \vdots \\ & \leq \gamma ^{n} \Vert v_{1}-v_{0} \Vert . \end{aligned} $$
(3.12)

By the triangle inequality and equation (3.12), for any \(m,n\in \mathbb{N}\) such that \(n > m\), we have

$$\begin{aligned} \Vert v_{m}-v_{n} \Vert & = \Vert v_{m} - v_{m+1} + v_{m+1} - v_{m+2} + v_{m+2} - v_{m+3} + \cdots + v_{n-1} - v_{n} \Vert \\ & \leq \Vert v_{m} - v_{m+1} \Vert + \Vert v_{m+1}-v_{m+2} \Vert + \Vert v_{m+2}-v_{m+3} \Vert + \cdots + \Vert v_{n-1}-v_{n} \Vert \\ & = \Vert Tv_{m-1} - Tv_{m} \Vert + \Vert Tv_{m}-Tv_{m+1} \Vert + \cdots + \Vert Tv_{n-2}-Tv_{n-1} \Vert \\ & \leq \gamma ^{m} \Vert v_{1}-v_{0} \Vert + \gamma ^{m+1} \Vert v_{1}-v_{0} \Vert + \cdots + \gamma ^{n-1} \Vert v_{1}-v_{0} \Vert \\ & = \bigl(\gamma ^{m} + \gamma ^{m+1} + \cdots + \gamma ^{n-1} \bigr) \Vert v_{1}-v_{0} \Vert \\ & = \gamma ^{m} \bigl(1 + \gamma + \gamma ^{2} + \cdots + \gamma ^{n-m-1} \bigr) \Vert v_{1}-v_{0} \Vert . \end{aligned}$$

Since \(0\leq \gamma < 1\), the sum \(1 + \gamma + \gamma ^{2} + \cdots + \gamma ^{n-m-1}\) represents a finite geometric progression whose total sum is \(\frac{1-\gamma ^{n-m}}{1-\gamma }\). Therefore,

$$ \Vert v_{m} - v_{n} \Vert \leq \gamma ^{m} \biggl( \frac{1-\gamma ^{n-m}}{1-\gamma } \biggr) \Vert v_{1}-v_{0} \Vert \leq \frac{ \gamma ^{m}}{1-\gamma } \Vert v_{1}-v_{0} \Vert ,\quad \text{as } 1-\gamma ^{n-m} \leq 1. $$
(3.13)

Since \(0\leq \gamma < 1\) and \(\|v_{1}-v_{0}\|\) is fixed, the right-hand side of equation (3.13) can be made as small as desired by taking m sufficiently large. Hence, the sequence \(\{v_{n}\}\) is Cauchy and therefore convergent.

Let \(\{v_{n}\}\) converge to \(v\in X\). To complete the proof, we need to show that v is the (unique) fixed point of T. From the triangle inequality and equation (3.10), we have

$$\begin{aligned} \Vert v - Tv \Vert & \leq \Vert v - v_{n} \Vert + \Vert v_{n} - Tv \Vert \\ & = \Vert v - v_{n} \Vert + \Vert Tv_{n-1} - Tv \Vert \\ & \leq \Vert v - v_{n} \Vert + \gamma \Vert v_{n-1} - v \Vert \\ & \rightarrow 0\quad \text{as } n\rightarrow \infty . \end{aligned}$$

This implies that \(\|v - Tv\| = 0\). Since \(\|\cdot \|\) is a norm, we have \(Tv=v\), i.e., v is the fixed point of T which is unique. □

Corollary 3.1

The maximum absolute error of the approximate solution generated by LTVIM is estimated to be

$$ \Vert v_{n}-v \Vert \le \frac{{\gamma }^{n}}{1-\gamma } \Vert v_{1}-v_{0} \Vert ,\quad 0 \leq \gamma < 1. $$

Proof

Taking \(n\rightarrow \infty \) in equation (3.13) gives

$$ \Vert v_{m}-v \Vert \leq \frac{\gamma ^{m}}{1-\gamma } \Vert v_{1}-v_{0} \Vert . $$

The rest follows from Corollary 2.1. □

4 Numerical results and discussion

In this section, we show the efficiency of LTVIM by solving some TFNPDE problems with proportional delay. We also compare the performance of LTVIM with recently proposed methods, namely HPM [17], HPTM [18], RPSM, and HATM [20].

Example 4.1

([17, 18, 20])

Consider the following time-fractional generalized Burgers equation with proportional delay:

$$ {{}^{C}_{0}D}^{\alpha }_{t}v (x,t )= \frac{{\partial }^{2}}{{\partial x}^{2}}v (x,t )+ \frac{\partial }{\partial x}v \biggl(x, \frac{t}{2} \biggr)v \biggl( \frac{x}{2},\frac{t}{2} \biggr)+\frac{1}{2}v (x,t ), $$
(4.1)

\(x, t \in [0,1]\) and \(0<\alpha \le 1\), with initial condition \(v (x,0 )=x\).

For the special case \(\alpha = 1\), the exact solution of (4.1) is given by \(v(x,t) = xe^{t}\) [17, 18, 20].

The \((n+1)\)th approximate solution of (4.1) is obtained by applying the LTVIM iteration formula given in equation (3.6):

$$ v_{n+1} (x,t )=v_{n} (x,0 )+S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha }S \biggl[ \frac{{\partial }^{2}}{{\partial x}^{2}}v_{n} (x,t )+ \frac{\partial }{\partial x}v_{n} \biggl(x,\frac{t}{2} \biggr)v_{n} \biggl( \frac{x}{2},\frac{t}{2} \biggr)+\frac{1}{2}v_{n} (x,t ) \biggr] \biggr]. $$

First iteration

Since \(v_{0}(x,0) = x\), the first iteration solution is given by

$$ \begin{aligned} v_{1}(x,t) & = v_{0} (x,0 )+S^{-1} \biggl[{ \biggl( \frac{u}{s} \biggr)}^{\alpha }S \biggl[ \frac{{\partial }^{2}}{{\partial x}^{2}}v_{0} (x,t )+ \frac{\partial }{\partial x}v_{0} \biggl(x,\frac{t}{2} \biggr)v_{0} \biggl( \frac{x}{2},\frac{t}{2} \biggr)+\frac{1}{2}v_{0} (x,t ) \biggr] \biggr] \\ & = x+S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha }S \biggl[ \frac{x}{2}+\frac{1}{2}x \biggr] \biggr]=x+S^{-1} \biggl[{ \biggl( \frac{u}{s} \biggr)}^{\alpha }S [x ] \biggr] \\ & = {x+S}^{-1} \biggl[x{ \biggl(\frac{u}{s} \biggr)}^{\alpha +1} \biggr]= x+xS^{-1} \biggl[{ \biggl( \frac{u}{s} \biggr)}^{\alpha +1} \biggr] \\ & = x \biggl(1 + \frac{t^{\alpha }}{\Gamma (\alpha +1)} \biggr). \end{aligned} $$

Second iteration

The second iteration solution, for \(v_{1}(x,0) = x\), is

$$ \begin{aligned} &v_{2}(x,t) \\ &\quad = v_{1} (x,0 )+S^{-1} \biggl[{ \biggl( \frac{u}{s} \biggr)}^{\alpha }S \biggl[ \frac{{\partial }^{2}}{{\partial x}^{2}}v_{1} (x,t )+ \frac{\partial }{\partial x}v_{1} \biggl(x,\frac{t}{2} \biggr)v_{1} \biggl( \frac{x}{2},\frac{t}{2} \biggr)+\frac{1}{2}v_{1} (x,t ) \biggr] \biggr] \\ &\quad = {x+S}^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha }S \biggl[ \biggl(1+ \frac{t^{\alpha }}{2^{\alpha }\Gamma (\alpha +1 )} \biggr) \biggl( \frac{x}{2}+ \frac{xt^{\alpha }}{2^{\alpha +1}\Gamma (\alpha +1 )} \biggr)+ \frac{x}{2}+ \frac{xt^{\alpha }}{2\Gamma (\alpha +1 )} \biggr] \biggr] \\ &\quad = x+S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha }S \biggl[ \frac{x}{2}+ \frac{xt^{\alpha }}{2^{\alpha +1}\Gamma (\alpha +1 )}+ \frac{xt^{\alpha }}{2^{\alpha +1}\Gamma (\alpha +1 )}+ \frac{xt^{2\alpha }}{2^{2\alpha +1}{(\Gamma (\alpha +1 ))}^{2}} \biggr] \biggr] \\ &\qquad {} + S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha }S \biggl[ \frac{x}{2} + \frac{xt^{\alpha }}{2\Gamma (\alpha +1 )} \biggr] \biggr] \\ &\quad = {x+S}^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha }S \biggl[ x + a_{1}\frac{xt^{\alpha }}{\Gamma (\alpha +1 )}+a_{2} \frac{xt^{2\alpha }}{\Gamma (2\alpha +1)} \biggr] \biggr], \end{aligned} $$

where \(a_{1}=\frac{1}{2^{\alpha }}+\frac{1}{2}\) and \(a_{2}= \frac{\Gamma (2\alpha +1)}{2^{2\alpha +1}({\Gamma (\alpha +1 ))}^{2}}\). Then, by the linearity of S and \(S^{-1}\), we have

$$ \begin{aligned} v_{2}(x,t) & = {x+S}^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha } \biggl( \frac{u}{s} x+a_{1}x{ \biggl(\frac{u}{s} \biggr)}^{\alpha +1}+ a_{2}x{ \biggl(\frac{u}{s} \biggr)}^{2\alpha +1} \biggr) \biggr] \\ & = {x+S}^{-1} \biggl[x{ \biggl(\frac{u}{s} \biggr)}^{\alpha +1}+a_{1}x{ \biggl(\frac{u}{s} \biggr)}^{2\alpha +1}+ a_{2}x{ \biggl( \frac{u}{s} \biggr)}^{3\alpha +1} \biggr] \\ & = x+\frac{xt^{\alpha }}{\Gamma (\alpha +1 )}+ \frac{a_{1}xt^{2\alpha }}{\Gamma (2\alpha +1 )}+ \frac{a_{2}xt^{3\alpha }}{\Gamma (3\alpha +1 )} \\ & = x \biggl(1 + \frac{t^{\alpha }}{\Gamma (\alpha +1 )} + a_{1}\frac{t^{2\alpha }}{\Gamma (2\alpha +1 )} + a_{2} \frac{t^{3\alpha }}{\Gamma (3\alpha +1 )} \biggr). \end{aligned} $$

Third iteration

The third iteration solution, for \(v_{2}(x,0) = x\), is given by

$$ \begin{aligned} v_{3}(x,t) & = v_{2} (x,0 )+S^{-1} \biggl[{ \biggl( \frac{u}{s} \biggr)}^{\alpha }S \biggl[ \frac{{\partial }^{2}}{{\partial x}^{2}}v_{2} (x,t )+ \frac{\partial }{\partial x}v_{2} \biggl(x,\frac{t}{2} \biggr)v_{2} \biggl( \frac{x}{2},\frac{t}{2} \biggr)+\frac{1}{2}v_{2} (x,t ) \biggr] \biggr] \\ & = x+S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha }S \biggl[x + a_{1} \frac{xt^{\alpha }}{\Gamma (\alpha + 1)} + b_{1} \frac{xt^{2\alpha }}{\Gamma (2\alpha + 1)} + b_{2} \frac{xt^{3\alpha }}{\Gamma (3\alpha + 1)} \biggr] \biggr] \\ &\quad {} + S^{-1} \biggl[ \biggl(\frac{u}{s} \biggr)^{\alpha }S \biggl[ b_{3} \frac{xt^{4\alpha }}{\Gamma (4\alpha +1)} + b_{4} \frac{xt^{5\alpha }}{\Gamma (5\alpha +1)} + b_{5} \frac{xt^{6\alpha }}{\Gamma (6\alpha +1)} \biggr] \biggr] \\ & = x \biggl(1 + \frac{t^{\alpha }}{\Gamma (\alpha + 1 )} + \frac{a_{1}t^{2\alpha }}{\Gamma (2\alpha +1)} + \frac{b_{1}t^{3\alpha }}{\Gamma (3\alpha +1)} + \frac{b_{2}t^{4\alpha }}{\Gamma (4\alpha +1)} + \frac{b_{3}t^{5\alpha }}{\Gamma (5\alpha +1)} \biggr) \\ &\quad {} + x \biggl(\frac{b_{4}t^{6\alpha }}{\Gamma (6\alpha +1)} + \frac{b_{5}t^{7\alpha }}{\Gamma (7\alpha +1)} \biggr), \end{aligned} $$

where

$$\begin{aligned}& b_{1} = a_{2} + a_{1} \biggl( \frac{1}{2^{2\alpha }} + \frac{1}{2} \biggr),\qquad b_{2} = \frac{a_{1}\Gamma (3\alpha + 1)}{2^{3\alpha }\Gamma (\alpha +1 ) \Gamma (2\alpha +1)} + \frac{a_{2}}{2^{3\alpha }}+\frac{a_{2}}{2}, \\& b_{3} =\frac{\Gamma (4\alpha +1)}{2^{4\alpha }} \biggl( \frac{a_{1}^{2}}{{2(\Gamma (2\alpha + 1))^{2}}} + \frac{a_{2}}{{\Gamma (\alpha + 1)}{\Gamma (3\alpha + 1)}} \biggr), \\& b_{4} = \frac{a_{1}a_{2}{\Gamma (5\alpha +1)}}{2^{5\alpha }{\Gamma (2\alpha + 1)} {\Gamma (3\alpha + 1)}}, \qquad b_{5} = \frac{a^{2}_{2}{\Gamma (6\alpha +1)}}{2^{6\alpha +1}{(\Gamma (3\alpha + 1))^{2}}}. \end{aligned}$$

Fourth iteration

Since \(v_{3} (x,0 )=x\), the fourth iteration solution is given by

$$ \begin{aligned} v_{4} (x,t ) &= {v_{3} (x,0 )+S}^{-1} \biggl[{ \biggl( \frac{u}{s} \biggr)}^{\alpha }S \biggl[ \frac{{\partial }^{2}}{{\partial x}^{2}}v_{3} (x,t )+ \frac{\partial }{\partial x}v_{3} \biggl(x,\frac{t}{2} \biggr)v_{3} \biggl(\frac{x}{2},\frac{t}{2} \biggr)+\frac{1}{2}v_{3} (x,t ) \biggr] \biggr] \\ & = x \biggl(1+ \frac{t^{\alpha }}{\Gamma (\alpha + 1 )} + \frac{a_{1}t^{2\alpha }}{\Gamma (2\alpha +1)} + \frac{b_{1}t^{3\alpha }}{\Gamma (3\alpha +1)} \\ &\quad {}+ \frac{c_{1}t^{4\alpha }}{\Gamma (4\alpha +1)} + \frac{c_{2}t^{5\alpha }}{\Gamma (5\alpha +1)} + \cdots + \frac{c_{12}t^{15\alpha }}{\Gamma (15\alpha +1)} \biggr), \end{aligned} $$
(4.2)

where

$$\begin{aligned}& c_{1} = b_{1} \biggl( \frac{1}{2^{3\alpha }} + \frac{1}{2} \biggr) + \frac{a_{1}\Gamma (3\alpha +1)}{2^{3\alpha }\Gamma (\alpha +1)\Gamma (2\alpha +1)}, \\& c_{2} = b_{2} \biggl(\frac{1}{2^{4\alpha }} + \frac{1}{2} \biggr) + \frac{\Gamma (4\alpha +1)}{2^{4\alpha }} \biggl( \frac{b_{1}}{\Gamma (\alpha +1)\Gamma (3\alpha +1)} + \frac{a^{2}_{1}}{2(\Gamma (2\alpha +1))^{2}} \biggr), \\& c_{3} = b_{3} \biggl(\frac{1}{2^{5\alpha }} + \frac{1}{2} \biggr) + \frac{\Gamma (5\alpha +1)}{2^{5\alpha }} \biggl( \frac{b_{2}}{\Gamma (\alpha +1)\Gamma (4\alpha +1)} + \frac{a_{1}b_{1}}{\Gamma (2\alpha +1)\Gamma (3\alpha +1)} \biggr), \\& c_{4} = b_{4} \biggl(\frac{1}{2^{6\alpha }} + \frac{1}{2} \biggr) + \frac{\Gamma (6\alpha +1)}{2^{6\alpha }} \biggl( \frac{b_{3}}{\Gamma (\alpha +1)\Gamma (5\alpha +1)} + \frac{a_{1}b_{2}}{\Gamma (2\alpha +1)\Gamma (4\alpha +1)} \biggr) \\& \hphantom{c_{4} ={}}{} + \frac{b_{1}^{2}\Gamma (6\alpha + 1)}{2^{6\alpha +1}(\Gamma (3\alpha +1))^{2}}, \\& c_{5} = b_{5} \biggl(\frac{1}{2^{7\alpha }} + \frac{1}{2} \biggr)+ \frac{\Gamma (7\alpha +1)}{2^{7\alpha }} \biggl( \frac{b_{4}}{\Gamma (\alpha +1)\Gamma (6\alpha +1)} + \frac{a_{1}b_{3}}{\Gamma (2\alpha +1)\Gamma (5\alpha +1)} \biggr) \\& \hphantom{c_{5} = {}}{} + \frac{b_{1}b_{2}\Gamma (7\alpha + 1)}{2^{7\alpha }\Gamma (3\alpha +1)\Gamma (4\alpha +1)}, \\& c_{6} = \frac{\Gamma (8\alpha +1)}{2^{8\alpha }} \biggl( \frac{b_{5}}{\Gamma (\alpha +1)\Gamma (7\alpha +1)} + \frac{a_{1}b_{4}}{\Gamma (2\alpha +1)\Gamma (6\alpha +1)} + \frac{b_{1}b_{3}}{\Gamma (3\alpha +1)\Gamma (5\alpha +1)} \biggr) \\& \hphantom{c_{6} ={}}{} + \frac{b_{2}^{2}\Gamma (8\alpha + 1)}{2^{8\alpha +1}(\Gamma (4\alpha +1))^{2}}, \\& c_{7} = \frac{\Gamma (9\alpha +1)}{2^{9\alpha }} \biggl( \frac{a_{1}b_{5}}{\Gamma (2\alpha +1)\Gamma (7\alpha +1)} + \frac{b_{1}b_{4}}{\Gamma (3\alpha +1)\Gamma (6\alpha +1)} + \frac{b_{2}b_{3}}{\Gamma (4\alpha +1)\Gamma (5\alpha +1)} \biggr), \\& c_{8} = \frac{\Gamma (10\alpha +1)}{2^{10\alpha }} \biggl( \frac{b_{1}b_{5}}{\Gamma (3\alpha +1)\Gamma (7\alpha +1)} + \frac{b_{2}b_{4}}{\Gamma (4\alpha +1)\Gamma (6\alpha +1)} + \frac{b_{3}^{2}}{2(\Gamma (5\alpha +1))^{2}} \biggr), \\& c_{9} = \frac{\Gamma (11\alpha +1)}{2^{11\alpha }} \biggl( \frac{b_{2}b_{5}}{\Gamma (4\alpha +1)\Gamma (7\alpha +1)} + \frac{b_{3}b_{4}}{\Gamma (5\alpha +1)\Gamma (6\alpha +1)} \biggr), \\& c_{10} = \frac{\Gamma (12\alpha +1)}{2^{12\alpha }} \biggl( \frac{b_{3}b_{5}}{\Gamma (5\alpha +1)\Gamma (7\alpha +1)} + \frac{b_{4}^{2}}{2(\Gamma (6\alpha +1))^{2}} \biggr), \\& c_{11} = \frac{b_{4}b_{5}\Gamma (13\alpha +1)}{2^{13\alpha }\Gamma (6\alpha +1)\Gamma (7\alpha +1)},\qquad c_{12} = \frac{b_{5}^{2}\Gamma (14\alpha +1)}{2^{14\alpha +1}(\Gamma (7\alpha +1))^{2}}. \end{aligned}$$
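The iterates derived above can also be generated programmatically. The following sympy sketch (our own illustration, continuing the one given after (3.6); the helper names J and rhs are ours) applies the term-wise fractional-integration rule to the right-hand side of (4.1): two steps reproduce \(v_{1}\) and \(v_{2}\), and at \(\alpha = 1\) the leading terms agree with the exact solution \(xe^{t}\).

```python
import sympy as sp

x, t, alpha = sp.symbols('x t alpha', positive=True)

def J(expr, a):
    """Term-wise fractional integral in t, as in the Sect. 3 sketch:
    t**beta -> Gamma(beta+1)/Gamma(a+beta+1) * t**(beta+a)."""
    out = sp.Integer(0)
    for term in sp.Add.make_args(sp.expand(expr)):
        c, b = term.as_coeff_exponent(t)
        out += c * sp.gamma(b + 1) / sp.gamma(a + b + 1) * t**(b + a)
    return out

def rhs(v):
    """Right-hand side of Eq. (4.1) evaluated at the current iterate v(x, t)."""
    return (sp.diff(v, x, 2)
            + sp.diff(v, x).subs(t, t / 2) * v.subs([(x, x / 2), (t, t / 2)])
            + v / 2)

v = x                                  # v_0(x, t) = v(x, 0) = x
for _ in range(2):                     # two LTVIM steps of Eq. (3.6)
    v = sp.expand(v.subs(t, 0) + J(rhs(v), alpha))
print(v)                               # expanded form of v_2(x, t) derived above

# At alpha = 1 the leading terms agree with the exact solution x*exp(t):
print(sp.series(v.subs([(alpha, 1), (x, 1)]), t, 0, 3))   # 1 + t + t**2/2 + O(t**3)
```

Running more iterations of the same loop extends the agreement with the exact solution to higher orders in t, at the cost of rapidly growing symbolic expressions.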

In order to investigate the evolution of the solution approximated by the fourth iteration \(v_{4}(x,t)\) of LTVIM given in (4.2), we vary the values of the fractional order (\(\alpha = 0.7, 0.8, 0.9, 1\)) and depict the results in Fig. 1. The surface behavior of \(v_{4}(x,t)\) for \(\alpha = 0.7,0.8,0.9,1\) and \(0 \leq x\), \(t \leq 1\) is depicted in Fig. 2.

Figure 1

Evolution of LTVIM’s fourth iteration approximate solution of Example 4.1 for \(x = 1\), \(t\in [0,1 ]\) and different values of α

Figure 2

Evolution of surface behavior of LTVIM’s fourth iteration approximate solution of Example 4.1 for \(x, t\in [0,1 ]\) and different values of α

It can be seen from Fig. 1 that the approximate solution by LTVIM gets closer to the exact solution as \(\alpha \rightarrow 1\). We have also compared some numerical methods on the basis of absolute error and recorded the results in Table 1. The intention is to compare LTVIM with HPM [17], HPTM [18], RPSM, and HATM [20]. The results in this table show that the solutions of the proposed method are in better agreement with the exact solution than those of the other methods. The maximum error obtained with LTVIM is \(2.25387\times 10^{-3}\), whereas the maximum error obtained with the others is \(7.46137 \times 10^{-3}\), which is more than three times larger than that of LTVIM. In general, the numerical results recorded in this table show that LTVIM performs much better than its counterparts. Therefore, the results obtained with only four iterations of LTVIM are very promising, signaling that the method can be successfully applied for solving nonlinear fractional-order DEs.

Table 1 Comparison of LTVIM, HPM [17], HPTM [18], RPSM, and HATM [20] on Example 4.1. Here \(\alpha = 1\) is used for all methods

Example 4.2

([17, 18, 20])

Consider the following fractional partial differential equation with proportional delay:

$$ {}^{C}_{0}D^{\alpha }_{t}v(x,t)= \frac{{\partial }^{2}}{{\partial x}^{2}}v \biggl(x,\frac{t}{2} \biggr)v \biggl(x, \frac{t}{2} \biggr)-v (x,t ), $$
(4.3)

\(x, t \in [0,1]\) and \(0<\alpha \le 1\), with initial condition \(v(x,0) = x^{2}\).

For \(\alpha = 1\), the exact solution of (4.3) is given by \(v(x,t)=x^{2}e^{t}\) [17, 18, 20].

The \((n+1)\)th approximate solution of (4.3) is obtained by applying the LTVIM iteration formula given in equation (3.6):

$$ v_{n+1} (x,t )=v_{n} (x,0 )+S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha }S \biggl[ \frac{{\partial }^{2}}{{\partial x}^{2}}v_{n} \biggl(x,\frac{t}{2} \biggr)v_{n} \biggl(x,\frac{t}{2} \biggr)-v_{n} (x,t ) \biggr] \biggr]. $$

First iteration

Since \(v_{0}(x,0) = x^{2}\), the first iteration solution is given by

$$ \begin{aligned} v_{1}(x,t) & = v_{0} (x,0 )+S^{-1} \biggl[{ \biggl( \frac{u}{s} \biggr)}^{\alpha }S \biggl[ \frac{{\partial }^{2}}{{\partial x}^{2}}v_{0} \biggl(x,\frac{t}{2} \biggr)v_{0} \biggl(x,\frac{t}{2} \biggr)-v_{0} (x,t ) \biggr] \biggr] \\ & = x^{2} + S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha }S \bigl[x^{2} \bigr] \biggr] = x^{2} + S^{-1} \biggl[x^{2}{ \biggl( \frac{u}{s} \biggr)}^{\alpha +1} \biggr] = x^{2} + x^{2}S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha +1} \biggr] \\ & = x^{2} \biggl(1 + \frac{t^{\alpha }}{\Gamma (\alpha +1)} \biggr). \end{aligned} $$

Second iteration

Since \(v_{1}(x,0) = x^{2}\), the second iteration solution is given by

$$ \begin{aligned} v_{2}(x,t) & = v_{1} (x,0 )+S^{-1} \biggl[{ \biggl( \frac{u}{s} \biggr)}^{\alpha }S \biggl[ \frac{{\partial }^{2}}{{\partial x}^{2}}v_{1} \biggl(x,\frac{t}{2} \biggr)v_{1} \biggl(x,\frac{t}{2} \biggr)-v_{1} (x,t ) \biggr] \biggr] \\ & = x^{2} + S^{-1} \biggl[{ \biggl(\frac{u}{s} \biggr)}^{\alpha }S \biggl[x^{2} \biggl(1+a_{1} \frac{t^{\alpha }}{\Gamma (\alpha +1)} + a_{2} \frac{t^{2\alpha }}{\Gamma (2\alpha +1)} \biggr) \biggr] \biggr] \\ & = x^{2} + S^{-1} \biggl[x^{2} \biggl({ \biggl(\frac{u}{s} \biggr)}^{ \alpha + 1} + a_{1} \biggl(\frac{u}{s} \biggr)^{2\alpha +1} + a_{2} \biggl( \frac{u}{s} \biggr)^{3\alpha +1} \biggr) \biggr] \\ & = x^{2} \biggl(1 + \frac{t^{\alpha }}{\Gamma (\alpha +1)} + a_{1} \frac{t^{2\alpha }}{\Gamma (2\alpha +1)} + a_{2} \frac{t^{3\alpha }}{\Gamma (3\alpha +1)} \biggr), \end{aligned} $$

where \(a_{1} = \frac{1}{2^{\alpha -2}} -1\) and \(a_{2} = \frac{\Gamma (2\alpha +1)}{2^{2\alpha -1}(\Gamma (\alpha +1)^{2})}\).

Third iteration

Since \(v_{2}(x,0) = x^{2}\), the third iteration solution is given by

$$ \begin{aligned} v_{3}(x,t) & = v_{2} (x,0 )+S^{-1} \biggl[{ \biggl( \frac{u}{s} \biggr)}^{\alpha }S \biggl[ \frac{{\partial }^{2}}{{\partial x}^{2}}v_{2} \biggl(x,\frac{t}{2} \biggr)v_{2} \biggl(x,\frac{t}{2} \biggr)-v_{2} (x,t ) \biggr] \biggr] \\ & =x^{2} \biggl(1 + \frac{t^{\alpha }}{\Gamma (\alpha + 1 )} + \frac{a_{1}t^{2\alpha }}{\Gamma (2\alpha +1)} + \frac{b_{1}t^{3\alpha }}{\Gamma (3\alpha +1)} + \frac{b_{2}t^{4\alpha }}{\Gamma (4\alpha +1)} + \frac{b_{3}t^{5\alpha }}{\Gamma (5\alpha +1)} \biggr) \\ &\quad {} + x^{2} \biggl(\frac{b_{4}t^{6\alpha }}{\Gamma (6\alpha +1)} + \frac{b_{5}t^{7\alpha }}{\Gamma (7\alpha +1)} \biggr), \end{aligned} $$

where

$$ \begin{aligned} b_{1} & = a_{1} \biggl( \frac{1}{2^{2\alpha -2}}-1 \biggr) + a_{2},\qquad b_{2} = a_{2} \biggl(\frac{1}{2^{3\alpha -2}}-1 \biggr) + \frac{a_{1}\Gamma (3\alpha +1)}{2^{3\alpha -2}\Gamma (\alpha +1)\Gamma (2\alpha +1)}, \\ b_{3} & = \frac{\Gamma (4\alpha +1)}{2^{4\alpha -2}} \biggl( \frac{a_{2}}{\Gamma (\alpha +1)\Gamma (3\alpha +1)} + \frac{a_{1}^{2}}{2(\Gamma (2\alpha +1))^{2}} \biggr), \\ b_{4} &= \frac{a_{1}a_{2}{\Gamma (5\alpha +1)}}{2^{5\alpha -2}{\Gamma (2\alpha + 1)} {\Gamma (3\alpha + 1)}},\qquad b_{5} = \frac{a^{2}_{2}{\Gamma (6\alpha +1)}}{2^{6\alpha -1}{(\Gamma (3\alpha + 1))^{2}}}. \end{aligned} $$

Fourth iteration

Since \(v_{3} (x,0 ) = x^{2}\), the fourth iteration solution is given by

$$ \begin{aligned} v_{4}(x,t) &= v_{3} (x,0 )+S^{-1} \biggl[{ \biggl( \frac{u}{s} \biggr)}^{\alpha }S \biggl[ \frac{{\partial }^{2}}{{\partial x}^{2}}v_{3} \biggl(x,\frac{t}{2} \biggr)v_{3} \biggl(x, \frac{t}{2} \biggr)-v_{3} (x,t ) \biggr] \biggr] \\ & = x^{2} \biggl(1+ \frac{t^{\alpha }}{\Gamma (\alpha + 1 )} + \frac{a_{1}t^{2\alpha }}{\Gamma (2\alpha +1)} + \frac{b_{1}t^{3\alpha }}{\Gamma (3\alpha +1)} \\ &\quad {}+ \frac{c_{1}t^{4\alpha }}{\Gamma (4\alpha +1)} + \frac{c_{2}t^{5\alpha }}{\Gamma (5\alpha +1)} + \cdots + \frac{c_{12}t^{15\alpha }}{\Gamma (15\alpha +1)} \biggr), \end{aligned} $$
(4.4)

where

$$\begin{aligned}& c_{1} = \frac{a_{1}\Gamma (3\alpha +1)}{2^{3\alpha -2}\Gamma (\alpha +1)\Gamma (2\alpha +1)} + b_{1} \biggl(\frac{1}{2^{3\alpha -2}}-1 \biggr), \\& c_{2} = \Gamma (4\alpha +1) \biggl( \frac{b_{1}}{2^{4\alpha -2}\Gamma (\alpha +1)\Gamma (3\alpha +1)} + \frac{a_{1}^{2}}{2^{4\alpha -1}(\Gamma (2\alpha +1))^{2}} \biggr) + b_{2} \biggl(\frac{1}{2^{4\alpha -2}}-1 \biggr), \\& c_{3} = \frac{\Gamma (5\alpha +1)}{2^{5\alpha -2}} \biggl( \frac{b_{2}}{\Gamma (\alpha +1)\Gamma (4\alpha +1)} + \frac{a_{1}b_{1}}{\Gamma (2\alpha +1)\Gamma (3\alpha +1)} \biggr) + b_{3} \biggl(\frac{1}{2^{5\alpha -2}} - 1 \biggr), \\& c_{4} = \frac{\Gamma (6\alpha +1)}{2^{6\alpha -2}} \biggl( \frac{b_{3}}{\Gamma (\alpha +1)\Gamma (5\alpha +1)} + \frac{a_{1}b_{2}}{\Gamma (2\alpha +1)\Gamma (4\alpha +1)} \biggr) + \frac{b_{1}^{2}\Gamma (6\alpha +1)}{2^{6\alpha -1}(\Gamma (3\alpha +1))^{2}} \\& \hphantom{c_{4} ={}}{} + b_{4} \biggl(\frac{1}{2^{6\alpha -2}} - 1 \biggr), \\& c_{5} = \frac{\Gamma (7\alpha +1)}{2^{7\alpha -2}} \biggl( \frac{b_{4}}{\Gamma (\alpha +1)\Gamma (6\alpha +1)} + \frac{a_{1}b_{3}}{\Gamma (2\alpha +1)\Gamma (5\alpha +1)} + \frac{b_{1}b_{2}}{\Gamma (3\alpha +1)\Gamma (4\alpha +1)} \biggr) \\& \hphantom{c_{5} ={}}{} + b_{5} \biggl(\frac{1}{2^{7\alpha -2}} - 1 \biggr), \\& c_{6} = \frac{\Gamma (8\alpha +1)}{2^{8\alpha -2}} \biggl( \frac{b_{5}}{\Gamma (\alpha +1)\Gamma (7\alpha +1)} + \frac{a_{1}b_{4}}{\Gamma (2\alpha +1)\Gamma (6\alpha +1)} + \frac{b_{1}b_{3}}{\Gamma (3\alpha +1)\Gamma (5\alpha +1)} \biggr) \\& \hphantom{c_{6} ={}}{} + \frac{b_{2}^{2}\Gamma (8\alpha +1)}{2^{8\alpha -1}(\Gamma (4\alpha +1))^{2}}, \\& c_{7} = \frac{\Gamma (9\alpha +1)}{2^{9\alpha -2}} \biggl( \frac{a_{1}b_{5}}{\Gamma (2\alpha +1)\Gamma (7\alpha +1)} + \frac{b_{1}b_{4}}{\Gamma (3\alpha +1)\Gamma (6\alpha +1)} + \frac{b_{2}b_{3}}{\Gamma (4\alpha +1)\Gamma (5\alpha +1)} \biggr), \\& c_{8} = \frac{\Gamma (10\alpha +1)}{2^{10\alpha -2}} \biggl( \frac{b_{1}b_{5}}{\Gamma (3\alpha +1)\Gamma (7\alpha +1)} + \frac{b_{2}b_{4}}{\Gamma (4\alpha +1)\Gamma (6\alpha +1)} \biggr)+ \frac{b_{3}^{2}\Gamma (10\alpha +1)}{2^{10\alpha -1}(\Gamma (5\alpha +1))^{2}}, \\& c_{9} = \frac{\Gamma (11\alpha +1)}{2^{11\alpha -2}} \biggl( \frac{b_{2}b_{5}}{\Gamma (4\alpha +1)\Gamma (7\alpha +1)} + \frac{b_{3}b_{4}}{\Gamma (5\alpha +1)\Gamma (6\alpha +1)} \biggr), \\& c_{10} = \frac{\Gamma (12\alpha +1)}{2^{12\alpha -2}} \biggl( \frac{b_{3}b_{5}}{\Gamma (5\alpha +1)\Gamma (7\alpha +1)} + \frac{b_{4}^{2}}{2(\Gamma (6\alpha +1))^{2}} \biggr), \\& c_{11} = \frac{b_{4}b_{5}\Gamma (13\alpha +1)}{2^{13\alpha -2}\Gamma (6\alpha +1) \Gamma (7\alpha +1)},\qquad c_{12} = \frac{b_{5}^{2}\Gamma (14\alpha +1)}{2^{14\alpha -1}(\Gamma (7\alpha +1))^{2}}. \end{aligned}$$
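As with Example 4.1, these iterates can be reproduced with the same kind of symbolic driver by swapping in the right-hand side of (4.3); the sketch below (our own illustration, with helper names J and rhs) regenerates \(v_{1}\) and \(v_{2}\) obtained above.

```python
import sympy as sp

x, t, alpha = sp.symbols('x t alpha', positive=True)

def J(expr, a):
    """Term-wise fractional integral in t (same rule as in the earlier sketches)."""
    out = sp.Integer(0)
    for term in sp.Add.make_args(sp.expand(expr)):
        c, b = term.as_coeff_exponent(t)
        out += c * sp.gamma(b + 1) / sp.gamma(a + b + 1) * t**(b + a)
    return out

def rhs(v):
    """Right-hand side of Eq. (4.3): v_xx(x, t/2) * v(x, t/2) - v(x, t)."""
    return sp.diff(v, x, 2).subs(t, t / 2) * v.subs(t, t / 2) - v

v = x**2                               # v_0(x, t) = v(x, 0) = x**2
for _ in range(2):                     # two LTVIM steps of Eq. (3.6)
    v = sp.expand(v.subs(t, 0) + J(rhs(v), alpha))
print(v)                               # expanded form of v_2(x, t) derived above
print(sp.series(v.subs([(alpha, 1), (x, 1)]), t, 0, 3))   # 1 + t + t**2/2 + O(t**3)
```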

Here also we have investigated the evolution of the solution approximated by the fourth iteration \(v_{4}(x,t)\) of LTVIM given in (4.4) by varying the values of the fractional order (\(\alpha = 0.7, 0.8, 0.9, 1\)) and presented the results in Fig. 3. Moreover, the surface behavior of \(v_{4}(x,t)\) for \(\alpha = 0.7,0.8,0.9,1\) and \(0 \leq x\), \(t \leq 1\) is depicted in Fig. 4.

Figure 3

Evolution of LTVIM’s fourth iteration approximate solution of Example 4.2 for \(x = 1\), \(t\in [0,1 ]\) and different values of α

Figure 4

Evolution of surface behavior of LTVIM’s fourth iteration solution of Example 4.2 for \(x, t\in [0,1 ]\) and different values of α

The results in Fig. 3 indicate that the approximate solution by LTVIM gets closer to the exact solution as the value of α approaches 1. In addition, it is evident from the numerical results recorded in Table 2 that LTVIM provides more accurate solutions, which are very close to the exact solution, than HPM [17], HPTM [18], RPSM, and HATM [20]. One can also observe that the maximum error obtained with LTVIM is \(4.91832\times 10^{-4}\), whereas the maximum error obtained with the others is \(5.59603 \times 10^{-3}\). This means that LTVIM is more than ten times more accurate than the other methods considered here. Once again, the experimental results confirm that LTVIM outperforms its counterparts and is well suited for solving nonlinear DEs of fractional order.

Table 2 Comparison of LTVIM, HPM [17], HPTM [18], RPSM and HATM [20] on Example 4.2. Here \(\alpha = 1\) is used for all methods

5 Conclusion

In this paper, we have studied TFNPDEs with proportional delay. A new numerical method called LTVIM is designed using the concepts of the Laplace-like transform and variational theory. Sufficient conditions for the stability and convergence of LTVIM are established and analyzed in the Banach space setting. Moreover, the efficiency of the new method is illustrated by solving some test problems. The numerical results show that LTVIM is very efficient and provides more accurate solutions than some recently developed methods. These promising experimental results signal that LTVIM could be applied successfully to other similar nonlinear problems.

Availability of data and materials

Not applicable.

Abbreviations

AVIM:

Alternative Variational Iteration Method

DEs:

Differential Equations

FDEs:

Fractional Differential Equations

HATM:

Homotopy Analysis Transform Method

HPM:

Homotopy Perturbation Method

HPTM:

Homotopy Perturbation Transform Method

LlT:

Laplace-like Transform

LTVIM:

Laplace Transform Variational Iteration Method

MVIM:

Modified Variational Iteration Method

RKM:

Reproducing Kernel Method

RPSM:

Residual Power Series Method

TFNPDEs:

Time-fractional Nonlinear Partial Differential Equations

VIM:

Variational Iteration Method

References

  1. Miller, K.S., Ross, B.: An Introduction to the Fractional Calculus and Fractional Differential Equations. Wiley, New York (1993)


  2. Podlubny, I.: Fractional Differential Equations. Academic Press, New York (1999)


  3. Ajmal, A., Mohd, A.: On numerical solution of fractional order delay differential equation using Chebyshev collocation method. Sch. Math Sci. Malays. 6(1), 8–17 (2018)


  4. Gupta, S.: Numerical simulation of time-fractional Black–Scholes equation using fractional variational iteration method. J. Comput. Math. Sci. 9(9), 1101–1110 (2019)


  5. Kuang, Y.: Delay Differential Equations with Applications in Population Dynamics. Academic Press, Boston (1993)


  6. Cooke, L., Driessche, D., Zou, X.: Interaction of maturation delay and nonlinear birth in population and epidemic models. J. Math. Biol. 39, 332–352 (1999)


  7. Song, L., Xu, S., Yang, J.: Dynamical models of happiness with fractional order. Commun. Nonlinear Sci. Numer. Simul. 15(3), 616–628 (2010)


  8. Abazari, R., Ganji, M.: Extended two-dimensional differential transform method and its application on nonlinear partial differential equations with proportional delay. Int. J. Comput. Math. 88(8), 1749–1762 (2011)


  9. He, J.H.: Variational iteration method—a kind of non-linear analytical technique: some examples. Int. J. Non-Linear Mech. 34(4), 699–708 (1999)


  10. He, J.H., Wu, X.H.: Construction of solitary solution and compacton-like solution by variational iteration method. Chaos Solitons Fractals 29(1), 108–113 (2006)


  11. Jafari, H., Alipoor, A.: A new method for calculating general Lagrange multiplier in the variational iteration method. Numer. Methods Partial Differ. Equ. 27, 996–1001 (2011)


  12. Abassy, T.A., El-Tawil, M.A., El-Zoheiry, H.: Toward a modified variational iteration method. J. Comput. Appl. Math. 207(1), 137–147 (2007)


  13. Abassy, T.A., El-Tawil, M.A., El-Zoheiry, H.: Modified variational iteration method for Boussinesq equation. Comput. Math. Appl. 54(7–8), 955–965 (2007)


  14. Sakar, G.M., Saldir, O.: Improving variational iteration method with auxiliary parameter for nonlinear time-fractional partial differential equations. J. Optim. Theory Appl. 174, 530–549 (2017)


  15. Saldir, O., Sakar, G.M., Erdogan, F.: Numerical solution of time-fractional Kawahara equation using reproducing kernel with error estimate. Comput. Appl. Math. 38, 198 (2019)


  16. Sakar, G.M., Saldir, O., Erdogan, F.: An iterative approximation of time-fractional Cahn–Allen equation with reproducing kernel method. Comput. Appl. Math. 37, 5951–5964 (2018)


  17. Sakar, G.M., Uludag, F., Erdogan, F.: Numerical solution of time-fractional nonlinear partial differential equations with proportional delays by homotopy perturbation method. Appl. Math. Model. 40, 6639–6649 (2016)


  18. Singh, K., Kumar, P.: Homotopy perturbation transform method for solving fractional partial differential equations with proportional delay. SeMA 75, 111–125 (2018)


  19. Singh, K., Kumar, P.: Fractional variational iteration method for solving fractional partial differential equations with proportional delay. Int. J. Differ. Equ. 2017, Article ID 5206380 (2017)


  20. Linjun, W., Yan, W., Yixin, R., Xumei, C.: Two analytical methods for fractional partial differential equations with proportional delay. IAENG Int. J. Appl. Math. 49 (2019)

  21. Maitama, S., Zhao, W.: New integral transform: Shehu transform a generalization of Sumudu and Laplace transform for solving differential equations. Int. J. Anal. Appl. 17(2), 167–190 (2019)


  22. Sania, Q., Prem, K.: Using Shehu integral transform to solve fractional order Caputo type initial value problems. J. Appl. Math. Comput. Mech. 18(2), 75–83 (2019)


  23. Abassy, T.A.: Modified variational iteration method (nonlinear homogeneous initial value problem). Comput. Math. Appl. 59, 912–918 (2010)


  24. Goswami, P., Alqahtani, R.: Solutions of fractional differential equations by Sumudu transform and variational iteration method. J. Nonlinear Sci. Appl. 9, 1944–1951 (2016)


  25. Finlayson, B.A.: The Method of Weighted Residuals and Variational Principles. Academic Press, New York (1972)


  26. Kreyszig, E.: Introductory Functional Analysis with Applications. Wiley, New York (1978)


  27. Qing, Y., Rhoades, B.E.: T-Stability of Picard iteration in metric spaces. Fixed Point Theory Appl. 2008, Article ID 418971 (2008)


  28. Khan, H., Khan, A., Chen, W., Shah, K.: Stability analysis and a numerical scheme for fractional Klein–Gordon equations. Math. Methods Appl. Sci. 42, 723–732 (2019)



Acknowledgements

The authors are grateful to the editor and anonymous reviewers for their useful comments and suggestions.

Funding

Not applicable.

Author information


Contributions

All authors contributed to the production and writing of this research. Moreover, all of them read and approved the final manuscript.

Corresponding author

Correspondence to Melisew Tefera Belachew.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Bekela, A.S., Belachew, M.T. & Wole, G.A. A numerical method using Laplace-like transform and variational theory for solving time-fractional nonlinear partial differential equations with proportional delay. Adv Differ Equ 2020, 586 (2020). https://doi.org/10.1186/s13662-020-03048-3
