# Dynamical analysis in explicit continuous iteration algorithm and its applications

## Abstract

This article is devoted to the dynamical analysis of an explicit continuous iteration algorithm, describing its construction, its relationship with the explicit trapezoid method, and its error analysis. A theorem establishing the equivalence of these two methods is also proved. The accuracy of the theoretical results and the universality of the explicit continuous iteration algorithm are demonstrated by numerical experiments.

## Introduction

With developments in society and the economy, scientific computing has recently become increasingly popular worldwide. It is essential to derive high-order, efficient numerical methods for solving differential equations, which are widely used in physical problems. In particular, it is very important to construct fast algorithms for solving practical problems.

It is well known that many numerical methods are applied to mathematical models to investigate the solution space. Some of these methods are explicit, such as the Euler, Adams, and Runge–Kutta schemes; we refer the reader to [9, 10] and the references therein. Others are implicit. However, implicit approaches have many shortcomings, such as being overly complex, relatively slow, and requiring excessive internal memory. Therefore, explicit methods have become more widely used.

The many uncertainties and practical difficulties involved in models for solving differential equations mean that there are relatively few reports on this topic in the literature. In [1], Butcher stated that the classic finite-stage Runge–Kutta methods could be extended to infinite-stage Runge–Kutta methods, and suggested that the finite summation should be replaced by definite integration over finite intervals. However, he did not make any further progress in this field. In 2010, Hairer built on this important concept and provided an expression for a continuous-stage Runge–Kutta method [2]. We extend this idea to ordinary differential equations (ODEs), which describe many natural phenomena in meteorology, biology, and so on [7, 8]. To the best of our knowledge, there are no previous reports of explicit continuous iterative methods in the literature.

The main motivations for this work are twofold. On the one hand, the classical results on explicit numerical methods are the basis for this research. A variety of numerical methods have been applied to different aspects of differential equations, and many important results have revealed the mechanisms of dynamical behavior. On the other hand, our earlier work [10, 11] on stability analysis and numerical simulations of stochastic differential equations has inspired further study in this direction. For example, there has been some research on the numerical analysis [5, 6, 11] and numerical simulations of stochastic differential equations. These studies established the foundation of our numerical analysis.

In this study, we first construct a class of explicit continuous iterative (ECI) algorithms, and then compare them with existing classes of numerical methods through equivalence and error analysis. Numerical examples are presented to illustrate the feasibility of the ECI algorithm and its ability to provide accurate solutions within a reasonable time. These results show that, under appropriate conditions, the ECI algorithm can solve some ODEs more accurately than some existing numerical approximations.

The remainder of this paper is organized as follows. Section 2 describes the construction of the ECI algorithm and introduces some relevant concepts and norms which will be utilized later. Section 3 is devoted to the theoretical analysis of the ECI algorithm, i.e., the error analysis of the solution and its equivalence properties. Section 4 presents numerical experiments, including illustrative numerical results for the main theorem. Section 5 provides the conclusions of this study.

## Construction of explicit continuous iterative algorithm

We consider the following test equation:

$$\textstyle\begin{cases} \frac{dY}{dt}=aY, \\ Y(0)=\mathit{II}, \end{cases}$$

where $$a\in \mathbb{R}$$, $$Y\in \mathbb{R}^{d}$$, $$\mathit{II}=(1,1,\dots ,1) \in \mathbb{R}^{d}$$, and $$d \in \mathbb{Z}^{+}$$. The norm of a variable $$Y=({y_{1}},{ y_{2}},\dots ,{ y_{d}}) \in \mathbb{R}^{d}$$ is defined as follows:

$$\Vert Y \Vert _{2}= \bigl[ \vert y_{1} \vert ^{2}+ \vert y_{2} \vert ^{2}+\cdots + \vert y_{d} \vert ^{2} \bigr]^{ \frac{1}{2}}< \infty .$$

For simplicity of notation, the norm $$\|\cdot \|_{2}$$ is usually written as $$\|\cdot \|$$ unless otherwise stated in the sequel.

Motivated by Hairer’s work [2] on the continuous-stage Runge–Kutta method, we subdivide the time axis $$\mathbb{R}^{+}$$ into the union of subintervals $$[nh,(n+1)h]$$ with step size $$h>0$$, i.e.,

$$\mathbb{R}^{+}=\bigcup_{n=0}^{+\infty } \bigl[nh,(n+1)h\bigr],$$

and utilize the step function and Hairer’s construction method to form the following ECI algorithm:

$$U(t)=\frac{1+0.5a(t-nh)}{1-0.5a(t-nh)}Y_{n}, \quad nh\leq t\leq (n+1)h, n=0,1,2,\dots,$$
(1)

where $$Y_{n}=U(nh)$$ denotes the numerical solution at $$t=nh$$ and $$Y_{0}=\mathit{II}$$.
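At the grid points, scheme (1) amounts to multiplying by the constant ratio $$(1+0.5ah)/(1-0.5ah)$$ at each step. The following minimal Python sketch (ours, not from the paper; parameter values chosen to match the experiments in Sect. 4) iterates this ratio for the test equation and compares the result with the exact solution $$Y(t)=e^{at}$$:

```python
import math

def eci_steps(a, h, y0, n_steps):
    """Scheme (1) at the grid points: Y_{n+1} = r * Y_n with r = (1+0.5ah)/(1-0.5ah)."""
    r = (1 + 0.5 * a * h) / (1 - 0.5 * a * h)
    ys = [y0]
    for _ in range(n_steps):
        ys.append(ys[-1] * r)
    return ys

# Test equation Y' = aY with a = -4, Y(0) = 1, integrated up to t = 1.
ys = eci_steps(a=-4.0, h=0.01, y0=1.0, n_steps=100)
exact = math.exp(-4.0)          # exact solution Y(1) = e^{at} at t = 1
print(abs(ys[-1] - exact))      # small error, consistent with second-order accuracy
```

Note that each step is a single multiplication: no implicit equation is solved, which is the point of the ECI construction.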

### Theorem 2.1

The values $$Y_{n}$$ and $$Y_{n+1}$$ obtained by scheme (1) are the same as those obtained by the implicit trapezoid formula.

### Proof

On the one hand, it follows from scheme (1) that, when $$n=0$$, we have

$$\bigl(U(t)\bigr)_{n=0}=\frac{1+0.5at}{1-0.5at}Y_{0}, \quad 0\leq t \leq h.$$
(2)

And when $$n=1$$, we have

$$\bigl(U(t)\bigr)_{n=1}=\frac{1+0.5a(t-h)}{1-0.5a(t-h)}Y_{1}, \quad h\leq t \leq 2h.$$
(3)

By the continuity of (2) and (3), we can get

$$\bigl(U(h)\bigr)_{n=0}=\bigl(U(h)\bigr)_{n=1}.$$

Therefore, we have

$$Y_{1}=\frac{1+0.5ah}{1-0.5ah}Y_{0}.$$

Similarly, when $$n=2$$, we have

$$\bigl(U(t)\bigr)_{n=2}=\frac{1+0.5a(t-2h)}{1-0.5a(t-2h)}Y_{2}, \quad 2h\leq t\leq 3h.$$
(4)

By the continuity of (3) and (4), we have

$$\bigl(U(2h)\bigr)_{n=1}=\bigl(U(2h)\bigr)_{n=2}.$$

Therefore, we obtain

$$Y_{2}=\biggl(\frac{1+0.5ah}{1-0.5ah}\biggr)^{2}Y_{0}.$$

It follows by induction that

$$Y_{n+1}=\frac{1+0.5ah}{1-0.5ah}Y_{n}=\biggl(\frac{1+0.5ah}{1-0.5ah} \biggr)^{n+1}Y_{0}.$$

Therefore, we obtain

$$Y_{n}=\biggl(\frac{1+0.5ah}{1-0.5ah}\biggr)^{n}Y_{0}.$$
(5)

On the other hand, by the trapezoid formula, we have

$$Y_{n+1}=Y_{n}+\frac{1}{2}h(aY_{n}+aY_{n+1}),$$

i.e.,

$$Y_{n+1}=\frac{1+0.5ah}{1-0.5ah}Y_{n}.$$
(6)

Combining (5) and (6), we complete our proof. □
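The equivalence in Theorem 2.1 can also be checked numerically: solving the implicit trapezoid update by fixed-point iteration (one common way of carrying out the implicit solve) reproduces the closed-form ECI step exactly. This is an illustrative sketch of ours, not code from the paper:

```python
def trapezoid_step_fixed_point(a, h, y, iters=60):
    """Solve the implicit update y1 = y + 0.5*h*a*(y + y1) by fixed-point iteration.
    Converges since |0.5*h*a| < 1 for the parameters used here."""
    y1 = y
    for _ in range(iters):
        y1 = y + 0.5 * h * a * (y + y1)
    return y1

def eci_step(a, h, y):
    """One explicit ECI step: the closed-form ratio of scheme (1) at t = (n+1)h."""
    return y * (1 + 0.5 * a * h) / (1 - 0.5 * a * h)

a, h, y = -4.0, 0.01, 1.0
gap = abs(eci_step(a, h, y) - trapezoid_step_fixed_point(a, h, y))
print(gap)  # ~0: both updates coincide, as Theorem 2.1 states
```

The practical advantage is that the ECI step skips the inner iteration entirely while producing the same grid values.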

### Remark 1

As the above results show, the essence of the ECI algorithm is that an explicit iteration is used to quickly obtain approximate solutions that are continuous and close to the true solutions.

## Error analysis

### Lemma 3.1

The function $$U(t)$$ satisfies the following vector ordinary differential equation:

$$\frac{dU(t)}{dt}=aU(t)+\frac{0.25a^{3}(t-nh)^{2}}{[1-0.5a(t-nh)]^{2}}Y _{n}, \quad t\in \bigl[nh,(n+1)h\bigr], n=0,1,2,\dots ,$$

and initial conditions $$U(nh)=Y_{n}$$. Furthermore,

$$U\bigl((n+1)h\bigr)=e^{ah}\biggl[1+\frac{1}{4}a^{3} \int _{0}^{h}\frac{e^{-\tau a}\tau ^{2}}{(1-0.5a\tau )^{2}}\,d\tau \biggr]Y_{n}.$$

### Proof

Firstly, on the interval $$[nh,(n+1)h]$$, scheme (1) gives

$$U(t)=\frac{1+0.5a(t-nh)}{1-0.5a(t-nh)}Y_{n}, \quad nh\leq t \leq (n+1)h.$$

The derivative of function $$U(t)$$ is given by

\begin{aligned} \frac{dU(t)}{dt}&=\frac{a}{[1-0.5a(t-nh)]^{2}}Y_{n} \\ &= \frac{a[1-0.25a^{2}(t-nh)^{2}]+0.25a^{3}(t-nh)^{2}}{[1-0.5a(t-nh)]^{2}}Y _{n} \\ &=a\frac{1+0.5a(t-nh)}{1-0.5a(t-nh)}Y_{n}+ \frac{0.25a^{3}(t-nh)^{2}}{[1-0.5a(t-nh)]^{2}}Y_{n} \\ &=aU(t)+\frac{0.25a^{3}(t-nh)^{2}}{[1-0.5a(t-nh)]^{2}}Y_{n}. \end{aligned}

Secondly, we have $$U(nh)=Y_{n}$$.

Lastly, we solve this linear differential equation by the variation-of-constants formula (multiplying by the integrating factor $$e^{-at}$$ and integrating over $$[nh,(n+1)h]$$), which gives

$$U\bigl((n+1)h\bigr)=e^{ah}U(nh)+ \int _{nh}^{(n+1)h}e^{a((n+1)h-t)}\frac{0.25a^{3}(t-nh)^{2}}{[1-0.5a(t-nh)]^{2}}Y_{n}\,dt.$$

We make the transform $$t-nh=\tau$$ and have

$$U\bigl((n+1)h\bigr)=e^{ah}Y_{n}+\frac{1}{4}a^{3}e^{ah} \int _{0}^{h}\frac{e^{-\tau a}\tau ^{2}}{(1-0.5a\tau )^{2}}\,d\tau \,Y_{n}.$$

Therefore, we obtain the claim of Lemma 3.1:

$$U\bigl((n+1)h\bigr)=e^{ah}\biggl[1+\frac{1}{4}a^{3} \int _{0}^{h}\frac{e^{-\tau a}\tau ^{2}}{(1-0.5a \tau )^{2}}\,d\tau \biggr]Y_{n}.$$

This completes our proof. □
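Since $$U((n+1)h)=Y_{n+1}$$ is also one ECI step, Lemma 3.1 implies the identity $$e^{ah}[1+\frac{1}{4}a^{3}\int_{0}^{h}e^{-\tau a}\tau^{2}/(1-0.5a\tau)^{2}\,d\tau]=(1+0.5ah)/(1-0.5ah)$$. A numerical spot-check of this identity (our own sketch, using a composite midpoint-rule quadrature; parameters match Sect. 4):

```python
import math

def lemma31_factor(a, h, m=2000):
    """e^{ah} * [1 + (a^3/4) * I] with I, the integral from Lemma 3.1,
    approximated by the composite midpoint rule on m subintervals."""
    dt = h / m
    integral = dt * sum(
        math.exp(-a * t) * t * t / (1.0 - 0.5 * a * t) ** 2
        for t in (dt * (k + 0.5) for k in range(m))
    )
    return math.exp(a * h) * (1.0 + 0.25 * a ** 3 * integral)

a, h = -4.0, 0.01
eci_ratio = (1 + 0.5 * a * h) / (1 - 0.5 * a * h)   # one step of scheme (1)
print(abs(lemma31_factor(a, h) - eci_ratio))         # ~0, up to quadrature error
```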

### Lemma 3.2

Let the local error be $$E_{n}=Y_{n}-Y(nh)$$. Then it satisfies the following equality:

$$E_{n+1}=e^{ah}\biggl[1+\frac{1}{4}a^{3} \int _{0}^{h}\frac{e^{-\tau a}\tau ^{2}}{(1-0.5a\tau )^{2}}\,d\tau \biggr]E_{n}+\frac{1}{4}a^{3}e^{(n+1)ha} \int _{0}^{h}\frac{e^{-\tau a}\tau ^{2}}{(1-0.5a\tau )^{2}}\,d\tau \mathit{II}.$$
(7)

### Proof

If t satisfies the condition $$(n+1)h\leq t\leq (n+2)h$$, we have

$$U(t)=\frac{1+0.5a(t-(n+1)h)}{1-0.5a(t-(n+1)h)}Y_{n+1}.$$

Then we can obtain $$U((n+1)h)=Y_{n+1}$$. It follows from Lemma 3.1 that

$$Y_{n+1}=e^{ah}\biggl[1+\frac{1}{4}a^{3} \int _{0}^{h}\frac{e^{-\tau a}\tau ^{2}}{(1-0.5a\tau )^{2}}\,d\tau \biggr]Y_{n}.$$

The vector differential equation $$Y'=aY$$, $$Y(0)=\mathit{II}$$, has the exact solution $$Y(t)=e^{at}\mathit{II}$$. By the definition of the local error $$E_{n}$$, we have

\begin{aligned} E_{n+1}&=Y_{n+1}-Y\bigl((n+1)h\bigr) \\ &=e^{ah}\biggl[1+\frac{1}{4}a^{3} \int _{0}^{h}\frac{e^{-\tau a}\tau ^{2}}{(1-0.5a \tau )^{2}}\,d\tau \biggr]Y_{n}-e^{(n+1)ah}\mathit{II} \\ &=e^{ah}\biggl[1+\frac{1}{4}a^{3} \int _{0}^{h}\frac{e^{-\tau a}\tau ^{2}}{(1-0.5a \tau )^{2}}\,d\tau \biggr] \bigl(E_{n}+Y(nh)\bigr)-e^{(n+1)ah}\mathit{II} \\ &=e^{ah}\biggl[1+\frac{1}{4}a^{3} \int _{0}^{h}\frac{e^{-\tau a}\tau ^{2}}{(1-0.5a \tau )^{2}}\,d\tau \biggr]E_{n}\\ &\quad {}+e^{ah}\biggl[1+\frac{1}{4}a^{3} \int _{0}^{h}\frac{e ^{-\tau a}\tau ^{2}}{(1-0.5a\tau )^{2}}\,d\tau \biggr]e^{nha}\mathit{II}-e^{(n+1)ah} \mathit{II} \\ &=e^{ah}\biggl[1+\frac{1}{4}a^{3} \int _{0}^{h}\frac{e^{-\tau a}\tau ^{2}}{(1-0.5a \tau )^{2}}\,d\tau \biggr]E_{n}+\frac{1}{4}a^{3}e^{(n+1)ha} \int _{0}^{h}\frac{e ^{-\tau a}\tau ^{2}}{(1-0.5a\tau )^{2}}\,d\tau \mathit{II}. \end{aligned}

This completes our proof. □

By the conclusions of Lemmas 3.1 and 3.2, we obtain the following error control theorem.

### Theorem 3.1

If $$a<0$$, the ECI algorithm (1) satisfies the following error propagation inequality:

$$\Vert E_{n+1} \Vert \leq e^{ah}\biggl[1+\frac{1}{4} \vert a \vert ^{3} \int _{0}^{h} e^{-\tau a} \tau ^{2} \,d\tau \biggr] \Vert E_{n} \Vert +\frac{1}{4} \vert a \vert ^{3}e^{(n+1)ha} \int _{0}^{h}e ^{-\tau a}\tau ^{2}\,d \tau .$$

### Proof

When $$a<0$$, the condition $$0<\tau <h$$ implies $$1-0.5a\tau >1$$. By the monotonicity of the integral, we obtain

$$\int _{0}^{h}\frac{e^{-\tau a}\tau ^{2}}{(1-0.5a\tau )^{2}}\,d\tau \leq \int _{0}^{h}e^{-\tau a}\tau ^{2}\,d \tau .$$

Therefore, by Lemma 3.2 and the triangle inequality of the norm, we obtain

\begin{aligned} \Vert E_{n+1} \Vert &\leq e^{ah}\biggl[1+ \frac{1}{4} \vert a \vert ^{3} \int _{0}^{h}\frac{e^{- \tau a}\tau ^{2}}{(1-0.5a\tau )^{2}}\,d\tau \biggr] \Vert E_{n} \Vert +\frac{1}{4} \vert a \vert ^{3}e ^{(n+1)ha} \int _{0}^{h}\frac{e^{-\tau a}\tau ^{2}}{(1-0.5a\tau )^{2}}\,d \tau \\ &\leq e^{ah}\biggl[1+\frac{1}{4} \vert a \vert ^{3} \int _{0}^{h} e^{-\tau a}\tau ^{2} \,d \tau \biggr] \Vert E_{n} \Vert +\frac{1}{4} \vert a \vert ^{3}e^{(n+1)ha} \int _{0}^{h}e^{-\tau a} \tau ^{2}\,d \tau . \end{aligned}

Therefore, the conclusion of Theorem 3.1 follows from Lemma 3.2. □
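For the scalar test problem, Theorem 3.1 can be spot-checked directly: the exact error is $$E_{n}=((1+0.5ah)/(1-0.5ah))^{n}-e^{anh}$$, and the bound can be evaluated with a simple quadrature for $$\int_{0}^{h}e^{-\tau a}\tau^{2}\,d\tau$$. A minimal sketch of ours (parameters match Sect. 4):

```python
import math

a, h, n_max, m = -4.0, 0.01, 200, 2000
dt = h / m
# J = integral_0^h e^{-a*tau} * tau^2 d(tau), composite midpoint rule
J = dt * sum(math.exp(-a * t) * t * t for t in (dt * (k + 0.5) for k in range(m)))

r = (1 + 0.5 * a * h) / (1 - 0.5 * a * h)
growth = math.exp(a * h) * (1 + 0.25 * abs(a) ** 3 * J)   # coefficient of ||E_n||

ok = True
for n in range(n_max):
    e_n = abs(r ** n - math.exp(a * n * h))                   # ||E_n||
    e_next = abs(r ** (n + 1) - math.exp(a * (n + 1) * h))    # ||E_{n+1}||
    bound = growth * e_n + 0.25 * abs(a) ** 3 * math.exp((n + 1) * h * a) * J
    ok = ok and (e_next <= bound + 1e-15)
print(ok)  # True: the inequality of Theorem 3.1 holds at every step checked
```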

### Remark 2

The advantages of this method lie not only in its convergence, that is, the iteration error remaining within a small interval, but also in its ability to simulate the solutions of ODEs continuously and explicitly, which helps approximate the true solutions more accurately.

## Numerical experiments

### Comparison with classic methods

For the test equation in Sect. 2, we consider the special case $$Y\in \mathbb{R}$$, $$a=-4.0$$, and $$Y(0)=1.0$$. We compare the numerical solutions obtained by the ECI algorithm with those yielded by some classic methods, such as the Euler method and the implicit trapezoid method. We choose the step size $$h=0.01$$; the results are shown below.

From the data in Table 1, we see that the results obtained by the ECI algorithm and the trapezoid method are almost identical, and, as the number of iterations increases, the solutions approach zero.

It follows from Table 2 and Figs. 1–2 that the accuracy of the numerical solutions obtained by the ECI algorithm is much higher than that of the Euler method, and the error approaches zero much faster. Meanwhile, Fig. 3 shows that the algorithm is stable for different initial values. All these facts verify the theoretical results.
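The gap between the two explicit methods is easy to reproduce. The following sketch (ours, mirroring the setting $$a=-4$$, $$h=0.01$$) advances both schemes to $$t=1$$ and compares the errors against the exact solution $$Y(t)=e^{at}$$:

```python
import math

a, h, y0, N = -4.0, 0.01, 1.0, 100      # integrate Y' = aY up to t = N*h = 1
r = (1 + 0.5 * a * h) / (1 - 0.5 * a * h)

y_eci, y_euler = y0, y0
for _ in range(N):
    y_eci *= r                 # ECI / trapezoid step (Theorem 2.1)
    y_euler *= 1 + a * h       # explicit Euler step

exact = math.exp(a * N * h)
err_eci, err_euler = abs(y_eci - exact), abs(y_euler - exact)
print(err_eci, err_euler)      # second-order ECI error vs first-order Euler error
```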

### Applications in numerical simulations

We consider the following nonhomogeneous linear ordinary differential equation with an initial value:

$$\textstyle\begin{cases} \frac{dY}{dt}=-2Y-2t, \\ Y(0)=1.0, Y\in \mathbb{R}. \end{cases}$$

Firstly, we make a transformation as follows. Let $$Z=Y+t$$, then we have

$$\frac{dZ}{dt}=\frac{dY}{dt}+1.$$

So $$\frac{dZ}{dt}=-2Z+1$$. And if we let $$X=Z-\frac{1}{2}$$, then $$\frac{dX}{dt}=-2X$$. Therefore, the analytic solution is

$$Y=\frac{1}{2}e^{-2t}-t+\frac{1}{2}.$$

Secondly, the ECI algorithm is applied to the transformed variable $$X=Y+t-\frac{1}{2}$$, which satisfies $$\frac{dX}{dt}=aX$$ with $$a=-2$$, and the numerical solution is obtained as follows:

$$U(t)=\frac{1+0.5a(t-nh)}{1-0.5a(t-nh)}\biggl(Y_{n}+t_{n}-\frac{1}{2} \biggr), \quad nh\leq t\leq (n+1)h, a=-2, n=0,1,\dots,$$

where $$t_{n}=nh$$, and then $$Y_{n+1}=U((n+1)h)-t_{n+1}+\frac{1}{2}$$.

We choose the step size $$h=0.01$$ and the number of iterations $$N=600$$. The numerical results are shown in Figs. 4 and 5.
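For reproducibility, the scheme above can be sketched in a few lines of Python (our own code with a hypothetical function name, not the authors' implementation), comparing against the analytic solution $$Y=\frac{1}{2}e^{-2t}-t+\frac{1}{2}$$:

```python
import math

def eci_example(h=0.01, n_steps=600, a=-2.0):
    """ECI for Y' = -2Y - 2t via the transform X = Y + t - 1/2 (so X' = aX)."""
    r = (1 + 0.5 * a * h) / (1 - 0.5 * a * h)
    y, max_err = 1.0, 0.0
    for n in range(n_steps):
        t1 = (n + 1) * h
        x1 = (y + n * h - 0.5) * r      # ECI step on X_n = Y_n + t_n - 1/2
        y = x1 - t1 + 0.5               # recover Y_{n+1} = X_{n+1} - t_{n+1} + 1/2
        exact = 0.5 * math.exp(-2 * t1) - t1 + 0.5
        max_err = max(max_err, abs(y - exact))
    return y, max_err

y_final, max_err = eci_example()
print(max_err)   # maximum error over all 600 steps, on the order of 1e-6
```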

We also record the computational time required for the error to reach the tolerance $$\|E\|=4.91\mbox{e}{-}06$$. The results are as follows.

Figures 4–5 and Table 3 demonstrate that the accuracy of the ECI algorithm is much higher than that of the trapezoid method, and the number of iterations decreases noticeably, so the computational efficiency of the ECI algorithm is better than that of the Euler and trapezoid methods. Altogether, the ECI algorithm is an excellent and appropriate method for such ODEs.

### Remark 3

As this numerical experiment shows, although the ECI algorithm is constructed for simple test equations, it can be extended to more general ODEs, which can generate dynamical systems, by suitable parameter transformations. However, the conditions that such ODEs should satisfy are still to be investigated, and the algorithm will be revised if needed. These questions will be tackled in our future work.

## Conclusion

The main result of this paper is the dynamical analysis of the ECI algorithm and its application to simulating the solutions of ODEs. The results show that the algorithm is effective and that the numerical results match the theoretical analysis. Although some progress has been made, more practical models and methods, needed to solve systems of ODEs or stochastic differential equations, will be presented in our future work.

## References

1. Butcher, J.C.: The Numerical Analysis of Ordinary Differential Equations: Runge–Kutta and General Linear Methods. Wiley, New York (1987)

2. Hairer, E.: Energy-preserving variant of collocation methods. J. Numer. Anal. Ind. Appl. Math. 5, 73–84 (2010)

3. Khasminskii, R.: Stochastic Stability of Differential Equations, 2nd edn. Springer, Berlin (2011)

4. Milstein, G.: Numerical Integration of Stochastic Differential Equations. Kluwer Academic, Dordrecht (1995)

5. Wang, P.: A-stable Runge–Kutta methods for stiff stochastic differential equations with multiplicative noise. Comput. Appl. Math. 34(2), 773–792 (2015)

6. Wang, T.: Optimal point-wise error estimate of a compact difference scheme for the coupled Gross–Pitaevskii equations in one dimension. J. Sci. Comput. 59, 158–186 (2014)

7. Xie, X., Chen, F.: Uniqueness of limit cycle and quality of infinite critical point of a class of cubic system. Ann. Differ. Equ. 21, 3 (2005)

8. Xie, X., Zhan, Q.: Uniqueness of limit cycles for a class of cubic system with an invariant straight line. Nonlinear Anal. TMA 70(12), 4217–4225 (2009)

9. Yang, Q.: Numerical Analysis, 2nd edn. Tsinghua University Press (2008)

10. Zhan, Q.: Mean-square numerical approximations to random periodic solutions of stochastic differential equations. Adv. Differ. Equ. 2015, 292, 1–17 (2015)

11. Zhan, Q.: Shadowing orbits of stochastic differential equations. J. Nonlinear Sci. Appl. 9, 2006–2018 (2016)

12. Zhan, Q., Xie, X., Zhang, Z.: Stability results of a class of differential equations and application in medicine. Abstr. Appl. Anal. 2009, Article ID 187021 (2009)

### Acknowledgements

The authors would like to express their gratitude to the referees for giving strong and very useful suggestions for improving the article.


## Funding

This work is supported by the Science Research Projection of the Education Department of Fujian Province, No. JT180122, the Natural Science Foundation of Fujian Province, No. 2015J01019, and NSFC(Nos. 11021101, 11290142, 91130003, 11771149 and 11701086).

## Author information


### Contributions

All authors participated in drafting and checking the manuscript, read and approved the final manuscript.

### Corresponding authors

Correspondence to Qingyi Zhan or Zhifang Zhang.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests. 