Computation of solutions to linear difference and differential equations with a prescribed asymptotic behavior

Linear differential equations usually arise from the mathematical modeling of physical experiments and real-world problems. In most applications these equations are linked to initial or boundary conditions. Sometimes, however, the solution under consideration is characterized by its asymptotic behavior, which leads to the question of how to infer the initial values of a solution from its asymptotic growth. In this paper we show that under some mild conditions the initial values of the desired solution can be computed by means of a continuous-time analogue of a modified matrix continued fraction. For numerical applications we develop forward and backward algorithms which behave well in most situations. The topic is closely related to the theory of special functions and its extension to higher-dimensional problems. Our investigations result in a powerful tool for solving some classical mathematical problems. To demonstrate the efficiency of our method we apply it to Poincaré-type equations and Kneser's differential equation.


Introduction
A huge variety of real-world problems is modelled by means of linear differential equations. Although questions of existence and uniqueness of solutions are easy to answer for linear differential equations, explicit solutions are rare, in particular for equations of order greater than 2, so that numerical procedures are needed. Most of these procedures, however, are restricted to initial and boundary value problems.
However, in some applications we do not have initial or boundary conditions. In probability theory for example, densities of probability distribution functions, stationary measures of stochastic processes or certain functionals defined on diffusion processes are characterized by ordinary differential equations and are often uniquely determined by a single regularity or integrability condition (see [1]).
Another area of possible applications concerns linear differential equations with almost constant coefficients. Following Levinson's seminal paper [2], a relatively rich theory concerning the asymptotic behavior of their solutions has been developed. Here, a natural question is which initial values correspond to which asymptotic behavior, and how to compute these solutions numerically. Since the mathematical theory of differential equations is primarily qualitatively oriented, there is little literature addressing this problem (see [3]). An exception is the class of second-order linear differential equations, which has come into focus due to Miller's famous algorithm (see [4–8]). This is because it turned out that many special functions can be represented as so-called minimal solutions of second-order linear differential equations and their discrete analogues. But when it was recognized that this approach could not directly be transferred to higher-order linear difference and differential equations (see [8]), the interest in this subject waned. For discrete systems and scalar differential equations, some of these problems have been investigated by Schäfke [9,10] and Hanschke [11,12].
In the sequel it is shown that the initial values corresponding to a specific growth property can be determined by means of a continuous-time analogue of a modified matrix continued fraction. Matrix continued fractions were originally developed for computing subdominant solutions of linear systems of difference equations (see P. Levrie and A. Bultheel [13]) and stationary measures of discrete-state Markov chains with block-band transition matrix (see [14,15]).
Note that standard numerical methods for solving differential equations are dedicated to solving boundary value problems or initial value problems. In particular, if a numerical method is applied to a differential equation for some function x : [0, ∞) → R, consistency/convergence statements guarantee that the approximation to x(t) converges to the exact value x(t) as the maximum step size h tends to 0 (note that we only consider linear problems, and hence consistency implies convergence). Furthermore, the order of consistency/convergence gives some estimates on the speed of convergence. However, for any method, the error will increase with t, in most cases exponentially. This fact entails that the solution of the discretized system can have different asymptotic properties from the exact solution. Thus, finding a solution with prescribed asymptotic behavior requires a thorough consideration not only of the original equation but also of its corresponding discretization scheme. The alignment of these two systems may be crucial for the success of the approach.
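This error growth is easy to observe. As an illustration (not taken from the paper; λ = 1 and h = 0.01 are arbitrary choices), one can compare the explicit Euler iterates for x'(t) = x(t) with the exact solution e^t:

```python
# Illustrative sketch (lambda = 1, h = 0.01 assumed): even a
# convergent one-step method has a global error growing with t,
# here the explicit Euler method applied to x'(t) = x(t).
import math

h = 0.01
x = 1.0
errors = []
for n in range(1, 1001):
    x *= 1.0 + h                       # one Euler step for x' = x
    errors.append(abs(x - math.exp(n * h)))

# the error near t = 10 far exceeds the error near t = 1
assert errors[-1] > 100 * errors[99]
```

Refining h shrinks the error at any fixed t, but for fixed h the error still grows without bound as t increases.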
The rest of this paper is organized as follows: After some remarks on 2 × 2-block matrices and their inverses we present our main results. In order to be able to distinguish the asymptotically differently growing solutions of the underlying differential equation we introduce the concept of σ-subdominant solutions, which is a generalization of the concept introduced in [12]. It turns out that under certain regularity conditions each subspace of σ-subdominant solutions can be characterized by means of a continuous-time analogue of a modified matrix continued fraction. Since matrix continued fractions represent generalizations of the Jacobi-Perron algorithm (see [16]), we refer to this correspondence as the Jacobi-Perron characterization. To demonstrate the efficiency of our method we apply it to Poincaré-type equations and Kneser's differential equation. Numerical examples complete our work.

Preparation: some notes on block matrices
In this paper, we will frequently use a block partition

$$A = \begin{pmatrix} B & C \\ D & E \end{pmatrix}$$

of some r × r-matrix A, where B is a square p × p-matrix for some p < r. In order to avoid an overload of notation, we introduce

$$F_1 = \begin{pmatrix} I_p \\ 0 \end{pmatrix} \in \mathbb{C}^{r \times p}, \qquad F_2 = \begin{pmatrix} 0 \\ I_{r-p} \end{pmatrix} \in \mathbb{C}^{r \times (r-p)}.$$

Hence, in the above partition, we have B = F_1^T A F_1, C = F_1^T A F_2, D = F_2^T A F_1, E = F_2^T A F_2. Sometimes, we have to derive the inverse of a block-partitioned matrix. Fundamental identities are

$$A^{-1} = \begin{pmatrix} B^{-1} + B^{-1}CSDB^{-1} & -B^{-1}CS \\ -SDB^{-1} & S \end{pmatrix} \tag{1}$$

and

$$A^{-1} = \begin{pmatrix} V & -VCE^{-1} \\ -E^{-1}DV & E^{-1} + E^{-1}DVCE^{-1} \end{pmatrix}, \tag{2}$$

where S = (E − DB^{-1}C)^{-1} and V = (B − CE^{-1}D)^{-1}, provided that A is non-singular and that B or E, respectively, are non-singular square submatrices; see [17, pp. 37–39].
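These identities are easy to check numerically. The following sketch (illustrative only; the random matrix and the sizes r = 5, p = 2 are arbitrary choices, not from the paper) verifies the Schur-complement formula with NumPy:

```python
# Sketch: verify the 2x2 block inversion identity numerically.
# The matrix A and the sizes r, p are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
r, p = 5, 2
A = rng.standard_normal((r, r))

# Selection matrices F1, F2 as in the text.
F1 = np.vstack([np.eye(p), np.zeros((r - p, p))])
F2 = np.vstack([np.zeros((p, r - p)), np.eye(r - p)])

B = F1.T @ A @ F1   # p x p
C = F1.T @ A @ F2   # p x (r-p)
D = F2.T @ A @ F1   # (r-p) x p
E = F2.T @ A @ F2   # (r-p) x (r-p)

Binv = np.linalg.inv(B)
S = np.linalg.inv(E - D @ Binv @ C)            # Schur complement of B
V = np.linalg.inv(B - C @ np.linalg.inv(E) @ D)  # Schur complement of E

# Block form of A^{-1} built from S, as in identity (1)
Ainv_block = np.block([
    [Binv + Binv @ C @ S @ D @ Binv, -Binv @ C @ S],
    [-S @ D @ Binv,                   S],
])
assert np.allclose(Ainv_block @ A, np.eye(r))
assert np.allclose(Ainv_block[:p, :p], V)   # consistent with identity (2)
```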

Setting, notations, and definitions
We consider solutions to

$$x(t+1) = A(t)\,x(t), \quad t \in I = \{t_0, t_0+1, t_0+2, \ldots\}, \tag{3}$$

and

$$x'(t) = A(t)\,x(t), \quad t \in I = [t_0, \infty), \tag{4}$$

where A(t) ∈ C^{r×r} is assumed to be invertible for all t ∈ I. In the case of the differential equation (4), we look for solutions which are differentiable on I \ {t_0} and right-continuous in t_0. Note that a scalar rth-order linear differential equation can always be transformed into a system of the form (4). Therefore, we do not consider scalar rth-order linear differential equations separately.
For the solutions of both (3) and (4), we can state some well-known properties:
• The solution space of (3) and (4), respectively, is a linear vector space of dimension r.
• A fundamental system of solutions consists of r linearly independent solutions, say (x^{(j)}(t))_{t∈I} for j = 1, …, r. By setting z_{ij}(t) = x_i^{(j)}(t), we obtain matrices Z(t) = (z_{ij}(t))_{i,j=1,…,r} which are invertible for all t ∈ I (due to the linear independence of the columns), and the family (Z(t))_{t∈I} is a matrix-valued solution of (3) or (4).
• We can directly look for matrix-valued solutions to (3) or (4). If Z = (Z(t))_{t∈I} is a solution with Z(t) ∈ C^{r×p} for some p ∈ N, then we obtain another solution Z̃ = (Z̃(t))_{t∈I} with Z̃(t) ∈ C^{r×s} by setting Z̃(t) = Z(t)B for all t ∈ I with some constant matrix B ∈ C^{p×s} for some s ∈ N. Furthermore, if Z(t) has full rank (which is p for p ≤ r) for some t ∈ I, then Z(t) has full rank for all t ∈ I.
• In particular, we can look for solutions Z = (Z(t))_{t∈I} with square matrices Z(t) ∈ C^{r×r}.
Then Z(t) might be invertible, and the above rank statement can be written as follows: if Z^{-1}(t) exists for some t ∈ I, then Z^{-1}(t) exists for all t ∈ I. Furthermore, if Z is a solution with existing inverse Z^{-1}(t_0), and Z̃ = (Z̃(t)) is another solution, we have Z̃(t) = Z(t)Z^{-1}(t_0)Z̃(t_0). For the latter statement, Z̃(t) is not necessarily a square matrix.

As pointed out in the introduction, often we know that there are vector-valued solutions with a certain asymptotic behavior. These solutions form a p-dimensional subspace (p < r) of the space of all vector-valued solutions. This p-dimensional vector space is uniquely characterized by a matrix-valued solution P = (P(t))_{t∈I} with matrices P(t) ∈ C^{r×p} of full rank p, since then we obtain vector-valued solutions x = (x(t))_{t∈I} by setting x(t) = P(t)α with α ∈ C^{p×1}. The characterization of the asymptotic behavior can have different forms:
• In some situations, we know that there is a solution (Z(t)) of square and invertible matrices with an asymptotic representation of the form Z(t) = Y(t) + E(t), where (Y(t)) is a known family of invertible matrices and Y^{-1}(t)E(t) → 0. We will briefly express this assumption as

$$\lim_{t\to\infty} Y^{-1}(t)Z(t) = I_r. \tag{5}$$

If P(t) contains the first p columns of Z(t), that is, P(t) = Z(t)F_1, then P(t) has rank p, and hence, the vector space of vector-valued solutions x = (x(t))_{t∈I} with x(t) = P(t)α for some α is p-dimensional. In some sense, the columns of P(t) have an asymptotic expansion given by the first p columns of Y(t). (We have to be careful with this interpretation since the characterization of E(t) as the 'error term' is given in the form Y^{-1}(t)E(t) → 0.)
• Let P(t) = Z(t)F_1 contain the first p columns of Z(t), and let R(t) = Z(t)F_2 contain the last r − p columns of Z(t). Set T(t) = F_2^T Y^{-1}(t) ∈ C^{(r-p)×r}. Under the assumption (5), it becomes obvious that T(t)R(t) converges to I_{r-p} while T(t)P(t) ∈ C^{(r-p)×p} converges to a null matrix of appropriate size.
Hence, with σ(t) = T^T(t) ∈ C^{r×(r-p)}, we have

$$\lim_{t\to\infty} \bigl(\sigma^T(t)R(t)\bigr)^{-1}\,\sigma^T(t)P(t) = 0. \tag{6}$$

This may be interpreted as follows: with respect to multiplication with σ^T(t), the solution (P(t))_{t∈I} is subdominant to the solution (R(t))_{t∈I}, or concisely, (P(t)) is σ-subdominant. In some sense, P(t) can be interpreted as 'asymptotically normal' to σ(t).
• Note that under the assumption (5), both denominator and numerator in (6) converge for the choice σ^T(t) = T(t) = F_2^T Y^{-1}(t). On the other hand, there are situations where (6) holds for a specific choice of the family (σ(t))_{t∈I} of matrices with full rank r − p, but neither denominator nor numerator converge for t → ∞. Hence, for Z(t) = (P(t) R(t)), the characterization of P(t) as the first p columns of a solution (Z(t)) of invertible matrices satisfying (5) is a special case of the characterization (6) with a specific choice of (σ(t)), whereas the opposite is not true. Hence, the characterization of P(t) as being subdominant with respect to some family (σ(t)) of matrices is more general.
• For p = r − 1, σ^T(t) = T(t) and R(t) are vectors. Hence (6) may be written as

$$\lim_{t\to\infty} \frac{\sigma^T(t)\,P_{\bullet,j}(t)}{\sigma^T(t)\,R(t)} = 0, \quad j = 1, \ldots, p,$$

where P_{•,j}(t) denotes the jth column of P(t). This is equivalent to

$$\lim_{t\to\infty} \frac{\sigma^T(t)\,x(t)}{\sigma^T(t)\,\tilde{x}(t)} = 0$$

for any solutions x, x̃ of (3) or (4), respectively, with the following assumptions: there is some vector α ∈ C^{p×1} with x(t) = P(t)α for all t ∈ I, and there is no such vector for x̃. This is the concept of σ-subdominance which was introduced in [3,12]. Since σ-subdominance is the most general concept for characterizing the asymptotic behavior, we introduce a special notation for the space of all vector-valued solutions which can be derived from the matrix-valued σ-subdominant solution (P(t)): let σ = (σ(t))_{t∈I} be a family of r × (r−p)-dimensional matrices with rank r − p, and let (Z(t))_{t∈I} be a solution to (3) or (4), respectively, with the property (6) for P(t) = Z(t)F_1 and R(t) = Z(t)F_2. Then we define

$$S_\sigma = \{\,(Z(t)F_1\alpha)_{t\in I} : \alpha \in \mathbb{C}^{p\times 1}\,\}.$$

The following is about the determination of S_σ.
• If (8) and (9) hold, we have (10), which characterizes the rth entry of x(τ) in terms of the other entries. By re-inserting (10) into (3), it is seen that the first r − 1 entries of x(τ) satisfy a system of the form (3) of order r − 1.
• In [12], it was demonstrated how this method can be used to reduce the order of the underlying system step by step. In addition, it was shown that the algorithm may be interpreted as a generalization of the so-called Jacobi-Perron algorithm; see [16].
In this paper, we deal with the determination of S_σ for arbitrary p < r in only one step. For this purpose, let (σ(t))_{t∈I} be given, define (Φ(τ, t)) as a solution to (3) or (4) in t with Φ(τ, τ) = I_r, let F_1, F_2 be defined as above, and, in the case of existence, set

$$\xi_1(\tau, \sigma) = \lim_{t\to\infty} \sigma^T(t)\,\Phi(\tau, t)\,F_1, \qquad \xi_2(\tau, \sigma) = \lim_{t\to\infty} \sigma^T(t)\,\Phi(\tau, t)\,F_2,$$

as well as

$$\eta(\tau, \sigma) = \lim_{t\to\infty} \bigl(\sigma^T(t)\,\Phi(\tau, t)\,F_2\bigr)^{-1}\bigl(\sigma^T(t)\,\Phi(\tau, t)\,F_1\bigr).$$

Obviously, if ξ_1(τ, σ) and ξ_2(τ, σ) do exist (with ξ_2(τ, σ) invertible), so does η(τ, σ) = ξ_2^{-1}(τ, σ)ξ_1(τ, σ). On the other hand, the existence of η(τ, σ) does not imply the existence of ξ_i(τ, σ) for i = 1, 2. The next result is a straightforward extension of the results in [12].

Theorem 4.1 Let (σ(t))_{t∈I} be a family of matrices σ(t) ∈ C^{r×(r−p)} with rank r − p, and let F_1, F_2, Φ(τ, t), η(τ, σ) be defined as above. The limit η(τ, σ) exists if and only if there is a solution (Z(t))_{t∈I} to (3) or (4), respectively, with det(Z(t_0)) ≠ 0 satisfying (11) and (12). If these conditions hold, we have x ∈ S_σ if and only if

$$F_2^T x(\tau) = -\eta(\tau, \sigma)\,F_1^T x(\tau). \tag{13}$$

Proof In [12], the result was proven with respect to the recurrence relation (3) and p = r − 1. Since our result is more general, we state a detailed proof here. However, the basic principles of the proof remain the same.
Assume first that the limit η(τ, σ) exists, and define Z(t) accordingly. Then condition (12) follows, and condition (11) follows as well. Conversely, suppose there is a solution (Z(t))_{t∈I} of invertible matrices satisfying (11) and (12). Then (12) guarantees that B is invertible, so that we can apply inversion formula (1). Note that the definition of S_σ ensures that E is invertible. Hence, η(τ, σ) exists with η(τ, σ)B + D = 0, that is, η(τ, σ) = −DB^{-1}. This concludes the proof since we have x ∈ S_σ if and only if x(t) = Z(t)F_1α for some α ∈ C^{p×1}.

We refer to Theorem 4.1 as the Jacobi-Perron characterization of σ-subdominant solutions, since the result gives an exact characterization of the p-dimensional σ-subdominant subspace of solutions.
Note that Theorem 4.1 allows one to reduce the order of the system if η(τ, σ) can be computed explicitly: from (3) and (4), respectively, we obtain

$$F_1^T x(t+1) = F_1^T A(t)F_1\,F_1^T x(t) + F_1^T A(t)F_2\,F_2^T x(t) \quad\text{resp.}\quad (F_1^T x)'(t) = F_1^T A(t)F_1\,F_1^T x(t) + F_1^T A(t)F_2\,F_2^T x(t). \tag{14}$$

By inserting (13) into (14), we get a reduced system

$$F_1^T x(t+1) = \bigl(F_1^T A(t)F_1 - F_1^T A(t)F_2\,\eta(t, \sigma)\bigr)F_1^T x(t) \quad\text{resp.}\quad (F_1^T x)'(t) = \bigl(F_1^T A(t)F_1 - F_1^T A(t)F_2\,\eta(t, \sigma)\bigr)F_1^T x(t) \tag{15}$$

for the first p entries of x(t) ∈ S_σ. The other entries of x(t) can be calculated by applying (13) a second time.
Remark 4.2 In Theorem 5.2, we will show that the approximate values of η(τ, t) satisfy a continued-fraction-type recursion scheme. For this reason, we refer to η(τ, t) as some kind of matrix-driven continued fraction. Note that a similar construction can be found in [13], although the authors only use the ordinary notion of subdominance. The results based on this can be derived from the more general concept by an appropriate restriction of the family (σ(t)). For an overview of extensions of ordinary and generalized continued fractions with a special focus on matrix-driven models, consult [18]. With regard to this overview, Theorem 4.1 may be interpreted as a matrix version of Pincherle-type convergence criteria for ordinary and generalized continued fractions [19–21].
Next, we show that an asymptotic expansion of the form (5) allows us to find an even simpler variant of the Jacobi-Perron characterization.

Theorem 4.2 Let (Z(t))_{t∈I} be a fundamental system for (3) or (4), respectively, let (Y(t))_{t∈I} be a family of matrices satisfying (5), and set T(t) = F_2^T Y^{-1}(t) for all t ∈ I. Then ξ_1(τ, σ) ∈ C^{(r−p)×p} and ξ_2(τ, σ) ∈ C^{(r−p)×(r−p)} as defined above exist, and x ∈ S_σ if and only if

$$\xi_1(\tau, \sigma)\,F_1^T x(\tau) + \xi_2(\tau, \sigma)\,F_2^T x(\tau) = 0. \tag{16}$$

Proof By definition, we have ξ_i(τ, σ) = lim_{t→∞} T(t)Φ(τ, t)F_i for i = 1, 2. Again, this concludes the proof since we have x ∈ S_σ if and only if x(n) = Z(n)F_1α for some α ∈ C^{p×1}.
From (16), we obviously return to (13) by multiplying both sides by ξ_2^{-1}(τ, σ). Again, it becomes clear that the method (13) still works in situations where the limits ξ_i(τ, σ) do not exist and therefore (16) cannot be applied.

Equivalent backward computation
For the discrete problem (3), the Jacobi-Perron characterization can be exploited in a direct way (we only have to replace the limits by very large choices of t), whereas in the continuous-time case, the numerical calculations must be preceded by a suitable discretization.
Either way, algorithms should be efficient, and it seems natural to compute the values of η(τ, σ) (or the ξ_i(τ, σ)) for all values τ ∈ [t_0, t_1] ∩ I for an 'interval of interest' [t_0, t_1]. For this purpose, we demonstrate that the approximants of η(τ, σ) or ξ_i(τ, σ) can be computed by different kinds of 'backward procedures'.

A direct backward computation
In case of the strong asymptotic condition (5), the natural approach is to define W^{(t)} as a solution to (3) or (4), respectively, with terminal condition W^{(t)}(t) = Y(t)F_1, and to conjecture that W^{(t)}(τ) approximates P(τ) = Z(τ)F_1 for large t. Note that W^{(t)}(τ) and Z(τ) are solutions of (3) or (4), respectively, as functions of τ, so that Z^{-1}(τ)W^{(t)}(τ) is constant with respect to τ; since (5) guarantees that Z^{-1}(t)W^{(t)}(t) = Z^{-1}(t)Y(t)F_1 converges to I_r F_1 = F_1, the conjecture is true. In particular, in the discrete setting of (3), W^{(t)}(τ) can easily be computed (numerically) for τ ∈ [t_0, t_1], and for t ≫ t_1, we get good approximations to P(τ) = Z(τ)F_1. Next we demonstrate that a similar kind of backward procedure can be used to compute the approximants of η(τ, σ) even in the case that only the less restrictive condition (6) holds.

Theorem 5.1 Let (σ(t))_{t∈I} be a family of matrices σ(t) ∈ C^{r×(r−p)} with rank r − p, and let Y(t) be an invertible r × r-matrix for all t ∈ I in such a way that (17) holds for any solution Z with invertible Z(t). In particular, if we choose Z(t) = Φ(τ, t), we obtain the corresponding representation in which V(τ, t) is chosen according to (1), which proves our assertion.
Theorem 5.1 suggests the following numerical methods.
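For the discrete system (3), the direct backward computation above can be sketched in a few lines. This is a toy sketch, not the paper's implementation: the constant 2 × 2 matrix, its eigenvalue pair 0.5/2 and the initialization time N = 40 are assumptions chosen for illustration.

```python
# Sketch of the direct backward computation for a constant-coefficient
# difference system x(n+1) = A x(n); all concrete numbers are assumed.
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 2.5]])           # eigenvalues 0.5 and 2
lam, Bmat = np.linalg.eig(A)
order = np.argsort(np.abs(lam))       # sort: subdominant mode first
lam, Bmat = lam[order], Bmat[:, order]

N = 40                                # initialization time t
# Y(n) = B diag(lam_1^n, lam_2^n); take its first column (p = 1):
W = Bmat[:, [0]] * lam[0] ** N        # W^{(N)}(N) = Y(N) F_1

Ainv = np.linalg.inv(A)
for n in range(N, 0, -1):             # backward: W(n-1) = A^{-1} W(n)
    W = Ainv @ W

# W^{(N)}(0) approximates the subdominant solution at time 0,
# i.e. the eigenvector to the eigenvalue of smallest modulus.
assert np.allclose(W, Bmat[:, [0]])
```

Backward iteration is stable here: contributions of the dominant mode (eigenvalue 2) decay under multiplication with A^{-1}.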

Continued-fraction type scheme for η
According to Theorem 4.1, the limit η(τ, σ) exists if and only if there is a fundamental system (Z(t))_{t∈I} satisfying (11) and (12). Obviously, condition (11) does not depend on τ. Hence, there will be many situations in which η(τ, σ) exists for all τ ∈ I.
In this section, we derive a recursion scheme for computing approximations of η(τ, σ) in the discrete-time setting. The advantage of this scheme is that approximations of η(τ, σ) for different values of τ can be computed simultaneously. The special form of the recursive scheme justifies referring to the values η(τ, σ) as some kind of generalized matrix continued fraction.
In order to derive the recursion scheme for η(τ, t), we define the approximation

$$\eta(\tau, t) = \bigl(\sigma^T(t)\,\Phi(\tau, t)\,F_2\bigr)^{-1}\bigl(\sigma^T(t)\,\Phi(\tau, t)\,F_1\bigr).$$

For t → ∞, we have convergence to η(τ, σ) by definition.

Theorem 5.2 With the notation introduced above, the approximations η(τ, t) satisfy a continued-fraction-type recursion scheme for all τ, t ∈ I with τ < t, provided that the inverses involved exist.
Proof For τ = t, we have Φ(t, t) = I_r, which yields the initialization of the scheme. Using this representation of Ṽ, it is easy to show that (I_r + D̃ṼC̃Ẽ^{-1})D̃ = D̃ṼB̃, and hence the recursion follows, which completes the proof.

On the dichotomy of differential equations and their discretization schemes
For the differential equation (4), it is natural to use Algorithms 5.1 and 5.2, where the numerical calculation of W^{(t)}(τ) for τ < t is performed by means of some discretization method. However, this discretization will affect the asymptotic behavior. Therefore,
• in some situations, we will have to replace σ(t) in Algorithm 5.1 by a sequence adapted to the discretization method,
• in almost all situations, we will have to replace Y(t) in Algorithm 5.2 by a sequence adapted to the discretization method.
Example 6.1 We consider the case r = 1, although our results have little meaning in this situation since the space of solutions is one-dimensional and explicitly known. Nevertheless, it becomes clear that the asymptotic behavior of the solutions of x'(t) = A(t)x(t), t ∈ I, and of the corresponding discretized system differ. Consider the differential equation

$$x'(t) = (\lambda + \varphi(t))\,x(t), \quad t \geq 0,$$

with some constant λ ≠ 0 and a function φ ∈ L^1[0, ∞). Then the exact solution is given by

$$x(t) = x(0)\exp\Bigl(\lambda t + \int_0^t \varphi(s)\,ds\Bigr). \tag{19}$$

Now apply Euler's explicit method for computing approximations x^{[h]}(n) to x(nh). Then

$$x^{[h]}(n+1) = \bigl(1 + h(\lambda + \varphi(nh))\bigr)\,x^{[h]}(n),$$

implying

$$x^{[h]}(n) = x^{[h]}(0)\prod_{k=0}^{n-1}\bigl(1 + h(\lambda + \varphi(kh))\bigr).$$

From the general theory of discretization methods we know that

$$\lim_{h\to 0,\; nh\to t} x^{[h]}(n) = x(t), \tag{22}$$

since we apply a consistent method to a linear and therefore Lipschitz-continuous problem. Although the limit (22) is a key topic in numerical mathematics, we are interested in a somewhat different property: we want to compare the asymptotic behavior of the exact solution with that of the discretized system. Set y(t) = e^{λt} for t ≥ 0. From (19), we see that

$$y^{-1}(t)\,x(t) = x(0)\exp\Bigl(\int_0^t \varphi(s)\,ds\Bigr).$$

Since φ ∈ L^1, we observe that there is a solution x(t) with lim_{t→∞} y^{-1}(t)x(t) = 1, that is, (5) is satisfied for x'(t) = (λ + φ(t))x(t), t ≥ 0, with y(t) = e^{λt}, t ≥ 0.
In the sequel, we consider the discretized system. Since x^{[h]}(n) is an approximation to x(nh), we conclude that

$$y^{-1}(nh)\,x^{[h]}(n) = e^{-\lambda nh}(1+\lambda h)^n\; x^{[h]}(0)\prod_{k=0}^{n-1}\Bigl(1 + \frac{h\varphi(kh)}{1+\lambda h}\Bigr).$$

Under appropriate conditions (e.g. monotonicity), φ ∈ L^1 ensures that ∑_{k=0}^∞ φ(kh) is absolutely convergent, so that the product on the right-hand side converges to a finite value for n → ∞. Observe that ln(1 + λh) ≠ λh, implying that y^{-1}(nh)x^{[h]}(n) either tends to 0 or ∞. Hence, (5) is not satisfied for the discretized system with the sequence (y(nh)). Therefore, setting W^{(N)}(N) = y(Nh) for some large N and computing W^{(N)}(n) backwards by use of (20) will not give good approximations to x(nh) for the solution x(t) ∼ e^{λt}.
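The drift of y^{-1}(nh)x^{[h]}(n) is easy to observe numerically; the following minimal sketch uses the assumed parameters λ = 1, φ(t) = 1/(1+t)² and h = 0.1 (illustrative choices, not from the paper):

```python
# Sketch of Example 6.1 (illustrative parameters): the explicit Euler
# discretization of x'(t) = (lam + phi(t)) x(t) does NOT share the
# asymptotics of the exact solution x(t) ~ e^{lam t}.
import math

lam = 1.0
h = 0.1
phi = lambda t: 1.0 / (1.0 + t) ** 2      # phi in L^1

x = 1.0
ratios = []
for n in range(2000):
    t = n * h
    ratios.append(x / math.exp(lam * t))  # y^{-1}(nh) x^{[h]}(n)
    x *= 1.0 + h * (lam + phi(t))         # explicit Euler step

# ln(1 + lam*h) < lam*h, so the ratio decays geometrically instead of
# converging to a nonzero limit as it does for the exact solution.
assert ratios[-1] < 1e-3 * ratios[100]
```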
Note that usually, there is no need to solve x'(t) = (λ + φ(t))x(t), t ∈ I, numerically, since we have the explicit solution given in (19). However, this behavior is typical also in situations where we have no explicit solution.
Of course, there are better discretization methods (in terms of consistency order, speed of convergence, …) than the explicit Euler method. However, for any classical discretization method, the discretization error will increase (often exponentially) as t → ∞. Therefore, we can never expect the discretized system to have the same asymptotic behavior as the original differential equation.
For the sake of simplicity, we restrict the discussion in this paper to:
• Single-step methods. The reason is that discretizing (4) with A(t) ∈ C^{r×r} by means of a single-step method will result in a system of the form (3).

Poincaré-type/Levinson-type systems of linear difference and differential equations
In this section, we consider the case where A(∞) := lim_{t→∞} A(t) exists. Basic results on the asymptotic behavior of the solutions of linear difference and differential equations with almost constant coefficients are due to Poincaré [22] and Perron [23–25]. A milestone in the analysis of such differential equations is due to Levinson [2]. Therefore, the literature refers to these systems as Poincaré-type equations, Poincaré-Perron-type equations, or Levinson-type equations (in particular, in the continuous-time setting). Levinson-type results refer to statements on the asymptotic behavior of the solutions of systems (3) or (4) for which we can write A(t) = A(∞) + V(t) + R(t), where R(t) is summable/integrable and V(t) → 0 with some additional constraints (e.g. V'(t) should be summable/integrable, with some conditions on the eigenvalues of A(∞) + V(t); the latter conditions are referred to as dichotomy conditions). We refer to the literature (e.g. [2,26–28]) for results in this setting or similar settings. An exact translation of Levinson's ideas to the discrete-time setting is due to Benzaid and Lutz [29], and therefore, such results are often referred to as Benzaid-Lutz-type results (or sometimes, Levinson-Benzaid-Lutz-type results). A more recent publication in this direction is [30], where further remarks on the literature can be found. Note that our focus is not on deriving such results but on using them and finding numerical methods for computing solutions with prescribed asymptotic behavior. Therefore, referring to all literature in this direction is far beyond the scope of this paper, and furthermore, we will only consider the 'simple' situation where V(t) = 0.
Remark 7.1 By setting Y(t) = B diag(λ_1^t, …, λ_r^t), we find that Z(t)Y^{-1}(t) → I. Unfortunately, this is not equivalent to our strong condition (5), which requires Y^{-1}(t)Z(t) → I. There are some exceptions, e.g. if all eigenvalues of A(∞) have absolute value 1, since in that situation both Y(t) and Y^{-1}(t) are bounded and we can argue that

$$Y^{-1}(t)Z(t) = Y^{-1}(t)\bigl(Z(t)Y^{-1}(t)\bigr)Y(t) \to I.$$
Let us consider a special case: let u(t) satisfy the scalar rth-order linear differential equation

$$u^{(r)}(t) + p_{r-1}(t)u^{(r-1)}(t) + \cdots + p_1(t)u'(t) + p_0(t)u(t) = 0$$

with p_0(t) ≠ 0 for all t ∈ I, and assume that the limits p_k = lim_{t→∞} p_k(t) exist. Then the vector x(t) = (u(t), u^{(1)}(t), …, u^{(r-1)}(t))^T satisfies a system of the form (4) with a companion matrix A(t), and the eigenvector corresponding to the eigenvalue λ_j of A(∞) has the form (1, λ_j, …, λ_j^{r-1})^T. Hence, the solution associated with this eigenvector satisfies

$$x^{(j)}(t) = e^{\lambda_j t}\bigl((1, \lambda_j, \ldots, \lambda_j^{r-1})^T + o(1)\bigr), \quad t \to \infty. \tag{23}$$

Remark 7.2 Again, setting Y(t) = B diag(e^{λ_1 t}, …, e^{λ_r t}) does not imply that our strong condition (5) is met. Again, there are some exceptions, e.g. for Re(λ_j) = 0 for j = 1, …, r, both Y(t) and Y^{-1}(t) are bounded and we can argue as in the discrete-time setting.
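The companion-matrix structure and its eigenvectors can be checked directly. The sketch below uses the constant limit coefficients with eigenvalues −2, −1, 1 that also appear in the numerical example of Sect. 8 (the values p_2 = 2, p_1 = −1, p_0 = −2 are assumptions consistent with these eigenvalues, not quoted from the paper):

```python
# Sketch: companion matrix of u''' + 2u'' - u' - 2u = 0, whose
# characteristic roots are -2, -1 and 1 (assumed example).
import numpy as np

coeffs = [2.0, -1.0, -2.0]        # p_2, p_1, p_0 (assumed values)
r = 3
A = np.zeros((r, r))
A[:-1, 1:] = np.eye(r - 1)        # shift structure of the companion form
A[-1, :] = [-coeffs[2], -coeffs[1], -coeffs[0]]  # last row: -p_0, -p_1, -p_2

# each eigenvalue lam has eigenvector (1, lam, lam^2)^T
for lam in (-2.0, -1.0, 1.0):
    v = np.array([lam ** k for k in range(r)])
    assert np.allclose(A @ v, lam * v)
```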

Discretization
We give some instructions concerning the discretization of Poincaré-type differential equations. As above, we focus on single-step methods with constant step size h. Since these methods rely on the idea of replacing the integral on the right-hand side of

$$x(t+h) = x(t) + \int_t^{t+h} A(s)x(s)\,ds$$

by an appropriate summation formula, it becomes apparent that, however we calculate, we finally arrive at a discretized system with almost constant coefficients, provided that R(nh) satisfies the summability condition. For example, if R(t) is (componentwise) monotone, the integrability of R(t) implies summability of R(nh). More precisely, for all discretization methods discussed above, the limit A^{[h]}(∞) := lim_{n→∞} A^{[h]}(nh) exists.

A^{[h]}(∞) may be decomposed into I_r and A(∞), and hence the eigenvectors of A(∞) coincide with the eigenvectors of A^{[h]}(∞) (but with different eigenvalues), that is, B^{[h]} = B.
Hence, if we want to compute solutions generated by the first p columns of the fundamental system Z(t) of the differential equation, a good approximation can be obtained by computing solutions generated by the first p columns of the fundamental system Z^{[h]}(n) of the discretized system. For this purpose, we need to compute the corresponding eigenvalues μ_j of A^{[h]}(∞):
• For the explicit Euler method, we have A^{[h]}(nh) = I_r + hA(nh), and we obtain μ_j = 1 + hλ_j for j = 1, …, r.
• For Runge-Kutta-4, we have μ_j = 1 + hλ_j + (hλ_j)²/2 + (hλ_j)³/6 + (hλ_j)⁴/24. Consider, for example, the situation where A(∞) admits the eigenvalues −1 and −100 and h = 0.1: for the explicit Euler method, we obtain μ_1 = 0.9 and μ_2 = −9, so that the solution corresponding to the eigenvalue −100 becomes dominant and will cause massive numerical deviations. This is a well-known effect which belongs to the phenomenon of stiffness of differential equations.
In order to avoid the effects of stiffness, it is often recommended to use implicit discretization methods. Indeed, when applying the implicit Euler method in the situation sketched above, we would obtain the new eigenvalues 1/(1 − h·(−1)) = 10/11 and 1/(1 − h·(−100)) = 1/11, that is, the dominance-subdominance relationship of the two solutions is preserved.
In some ways, however, this is not true for all applications of implicit discretization schemes. Think of the situation where A(∞) admits the eigenvalues +1 and +100. Then the corresponding eigenvalues of the implicit Euler method with step size h = 0.1 are 1/(1 − h·1) = 10/9 and 1/(1 − h·100) = −1/9. Hence, the apparently dominant solution (eigenvalue 100) corresponds to the subdominant solution of the discretized system (eigenvalue −1/9). Of course, in this situation, the step size h = 0.1 is too large for reasons of accuracy.
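The eigenvalue mappings discussed above fit into a few lines of code (the functions below encode the standard stability functions of the two Euler methods; the eigenvalue pairs −1/−100 and +1/+100 and h = 0.1 are those of the discussion):

```python
# Sketch: how one Euler step maps an eigenvalue lam of A(inf),
# for step size h = 0.1 as in the discussion above.
h = 0.1

def mu_explicit(lam):   # explicit Euler: mu = 1 + h*lam
    return 1.0 + h * lam

def mu_implicit(lam):   # implicit Euler: mu = 1/(1 - h*lam)
    return 1.0 / (1.0 - h * lam)

# Stable eigenvalues -1 and -100: explicit Euler flips the dominance.
assert abs(mu_explicit(-1.0)) < abs(mu_explicit(-100.0))   # 0.9 < 9
# Implicit Euler preserves it: 10/11 > 1/11.
assert abs(mu_implicit(-1.0)) > abs(mu_implicit(-100.0))
# For eigenvalues +1 and +100, however, implicit Euler flips it:
assert abs(mu_implicit(100.0)) < abs(mu_implicit(1.0))     # 1/9 < 10/9
```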
Anyway, discussing all aspects of stiffness and stability definitions from a numerical point of view is far beyond the scope of this paper, since we cannot claim to answer all questions related to them. This remark simply intends to draw attention to the fact that, when dealing numerically with solutions of differential equations with prescribed asymptotic behavior, one should carefully study the properties of the resulting discrete system, since it is not guaranteed that the dominance-subdominance relationship is preserved during the discretization process.

Numerical examples

A Poincaré-type difference equation
Let D = diag(−2i, 2i, −2, 2). The general remarks concerning linear differential equations with almost constant coefficients imply the corresponding asymptotic representation. Then Ỹ(t) and (Ỹ(t))^{-1} are bounded. Hence, we obtain (Ỹ(t))^{-1} · I · Ỹ(t) = I in the limit, t = 0, 1, 2, …, that is, our strong assumption (5) is satisfied. For this reason, we may apply all algorithms developed in this paper. We start with testing our algorithms by setting R(t) = 0. Then Z(t) = Y(t)B^{-1}Z(0) for all solutions Z and all t ∈ N_0. This means that each linear combination of the columns of Y(t) provides a solution of the underlying difference equation. We slightly modify Y(t):
• Let us change the order of the columns such that Y(t)F_1 represents the latter two columns of the original matrices Y(t). With initialization time t = 100, we apply Algorithm 5.2, that is, we set W^{(100)}(100) = Y(100)F_1 and compute W^{(100)}(n) for n = 99, 98, …, 1, 0. We recognize that all numerically computed values coincide with the true entries of Y(n)F_1.
• With the same definition of Y(t)F_1 as before, we want to apply Algorithm 5.3, that is, we set η(t) = 0 for some large t (here, t = 100), and compute η(n) by the continued-fraction-type scheme for n = t − 1, t − 2, …, 0. Due to R(t) = 0, the exact proportions η(n) do not depend on n. Indeed, Algorithm 5.3 reproduces the exact (and constant) matrices η(n).
• Next, we are interested in the subspace spanned by the two solutions corresponding to the eigenvalues −2i, 2i. In order to obtain a real-valued solution, we choose Y(t)F_1 accordingly for all t ∈ N_0 which are divisible by four. We apply the backward procedure 5.2 with t = 100. Again, the algorithm reproduces the exact solution.
• Finally, we use Y (t)F 1 as defined in the last point, and we set

Now let us consider
Obviously, R(t) converges. Suppose we are interested in the solutions corresponding to the real eigenvalues 2, −2. As in the case R(t) = 0, we set W^{(100)}(100) = Y(100)F_1 and apply Algorithm 5.2, or we set η(100) to a suitable initial matrix and apply Algorithm 5.3. In Table 1, we have listed some numerical results. Although we are not in a position to find the exact solution to (3) in the given situation, we remark that the results of both algorithms 'fit together' in the sense that we have η^{(100)}(0) = −(F_2^T W^{(100)}(0))(F_1^T W^{(100)}(0))^{-1}. Since both algorithms lead to the same results, it becomes clear that using the continued-fraction-type scheme in Algorithm 5.3 has some advantages: whereas the growth/decay of the values W^{(100)}(n) is clearly influenced by the absolute values of the eigenvalues, this is not true for the values of η^{(t)}(n). Hence, computing η^{(t)}(n) does not require any 'scaling technique', and Algorithm 5.3 can be applied with much larger t. E.g. for t = 1000, we obtain

$$\eta^{(1000)}(0) = \begin{pmatrix} -1.5504 & 0.6340 \\ 1.0773 & -1.3982 \end{pmatrix}.$$

Ordinary subdominance for Poincaré-type differential equations
The computation of subdominant solutions in the ordinary sense is a special case of our approach. By 'ordinary sense' we mean that we consider a scalar rth-order linear differential equation, and that we call a subspace S subdominant if u(t)/v(t) → 0 for every u ∈ S and every solution v ∉ S.
Consider the scalar differential equation with some differentiable function φ(t) converging to 0 monotonically. (The assumption of monotonicity is chosen for the sake of simplicity; it could be replaced by a weaker one.) With the above construction, we have a decomposition of the form considered in Sect. 7, where the monotonicity ensures that V(t) ∈ L^1. A(∞) has the eigenvalues λ_1 = −2, λ_2 = −1 and λ_3 = 1. According to (23), there is a fundamental system of solutions x^{(1)}, x^{(2)}, x^{(3)} such that the asymptotic growth of x^{(i)} is expressed by λ_i.
For a thorough numerical computation, we have to use some discretization method. The monotonicity of φ(t) ensures that (V(nh)) converges to 0 in a (componentwise) monotonic sense and therefore, (25) holds. We can then easily prove that (11) is satisfied for the discretized system; in the above calculation for the continuous case, we only have to replace e^{−λ*t} by (μ*)^{−n} with some μ* ∈ (1 + hλ_1, 1 + hλ_2).
For our numerical experiment, we set φ(t) = 1/(1 + t²), h = 0.001, and start our backward procedure (for the discretized system) at t = 20 = 2·10⁴·h. For the discretization, we consider the explicit Euler method (EE) and the trapezoidal rule (TR). The results are listed in Table 2. We have included the difference quotients (u_1(t) − u_1(t−h))/(h·u_1(t)) as approximations to u'_1(t)/u_1(t). The results clearly indicate that we calculate the solution u_1 for which u'_1(t)/u_1(t) approaches −2 as t → ∞. Next, assume that we want to compute solutions which are linear combinations of x^{(1)} and x^{(2)}. Hence, we set p = 2 and T(t) = F_2^T B^{-1}. Again, (11) is satisfied not only for the differential equation but also for its discretized version. In addition to the above parameters for p = 1, we compute a solution u_1(t) with u_1(0) = 1 and u'_1(0) = 2. The results are documented in Table 3, and they clearly indicate that here, u'_1(t)/u_1(t) approaches −1 as t → ∞. This is completely in accordance with our goals.

Kneser's differential equation
For computing subdominant solutions in the 'ordinary sense', we were allowed to choose T(t) = (σ^{[h]}(t))^T = F_2^T B^{-1} independently of the discretization scheme. Next, we demonstrate that there are situations in which the choice of σ^{[h]}(t) depends on the discretization scheme.
Consider Kneser's differential equation, in which φ is continuous and real-valued with φ(t) → 0 'sufficiently fast'. Kneser [32,33] already investigated conditions under which there is a fundamental set of solutions {u^{(1)}, u^{(2)}} of equation (26) satisfying the corresponding asymptotic relations. Various speed-of-convergence conditions on φ(t) were discussed by Wintner [34]. The algebraic characterization of these solutions in terms of their initial values, and their computation for small values of t, remained an open problem.

Conclusion and further research
As outlined in this paper, the concept of σ-subdominant solutions of linear systems of differential and difference equations, respectively, has a strong impact on a lot of classical mathematical problems. Our aim was to raise awareness of this problem not only from a theoretical but also from a practical point of view. It is therefore to be expected that even more problems can be subjected to this context. As an example we mention the mathematical representation of regular solutions of complex-valued singular linear systems of differential equations. For the scalar case this problem has been addressed by Pincherle [19] and Perron [35], who linked it to the computation of certain ordinary and generalized continued fractions. This paper is restricted to linear differential equations. But there is also a connection with nonlinear equations. For example, if x(t) = (z(t), z'(t))^T satisfies a system of the form (4) with

$$A(t) = \begin{pmatrix} 0 & 1 \\ -f(t)h(t) & g(t) + f'(t)/f(t) \end{pmatrix},$$

that is, z''(t) = (g(t) + f'(t)/f(t))z'(t) − f(t)h(t)z(t) for t ≥ t_0, then the function y(t) = −z'(t)/(f(t)z(t)) solves the Riccati differential equation y'(t) = f(t)y²(t) + g(t)y(t) + h(t), t ≥ t_0, which means that our methods can also be applied to special cases of Riccati differential (or difference) equations. But this is not surprising since the continued-fraction-type scheme in Theorem 5.2 reminds one of Riccati difference equations.
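The Riccati substitution can be checked numerically. In the following sketch, the functions f, g, h and the initial values are arbitrary illustrative choices (h is renamed hh to avoid a clash with the step size); we integrate the linear second-order equation with a hand-rolled RK4 step and compare a difference quotient of y against the Riccati right-hand side:

```python
# Sketch (illustrative f, g, h): verify numerically that
# y = -z'/(f z) solves y' = f y^2 + g y + h whenever z solves
# z'' = (g + f'/f) z' - f h z.
import math

f  = lambda t: 2.0 + math.sin(t)
fp = lambda t: math.cos(t)            # f'
g  = lambda t: 0.3 * math.cos(t)
hh = lambda t: -0.5 - 0.1 * math.sin(t)

def rhs(t, state):                    # first-order system for (z, z')
    z, zp = state
    return (zp, (g(t) + fp(t) / f(t)) * zp - f(t) * hh(t) * z)

dt = 1e-3                             # classical RK4 with small steps
t, state = 0.0, (1.0, 0.5)            # arbitrary initial values
ys, ts = [], []
for _ in range(2001):
    z, zp = state
    ys.append(-zp / (f(t) * z))       # y = -z'/(f z)
    ts.append(t)
    k1 = rhs(t, state)
    k2 = rhs(t + dt/2, (z + dt/2*k1[0], zp + dt/2*k1[1]))
    k3 = rhs(t + dt/2, (z + dt/2*k2[0], zp + dt/2*k2[1]))
    k4 = rhs(t + dt,   (z + dt*k3[0],  zp + dt*k3[1]))
    state = (z + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             zp + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    t += dt

# central-difference y' vs the Riccati right-hand side
for i in range(1, 2000, 100):
    dy = (ys[i+1] - ys[i-1]) / (2 * dt)
    riccati = f(ts[i]) * ys[i]**2 + g(ts[i]) * ys[i] + hh(ts[i])
    assert abs(dy - riccati) < 1e-4
```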
Without further investigations, we cannot exclude in principle that our algorithm is subject to inherent numerical instability. For example, in the situation of Sect. 7, it is not trivial to find Y(t) or T(t) such that the strong criterion (5) or the weaker condition (6) is met. A trivial exception is given for R(t) = 0, where we know the exact solutions. Numerical computations in this situation reveal that our algorithms might still be susceptible to numerical instabilities. However, a profound study of these effects cannot be performed before it is clarified how to choose T(t) in this situation. Therefore, these questions need further research. In order to solve these problems, it might make sense to consider the adjoint systems of (3) and (4) (some considerations in the scalar case can be found in [3,11,35]).