# Generalized continued fractions: a unified definition and a Pringsheim-type convergence criterion


## Abstract

In the literature, many generalizations of continued fractions have been introduced, and for each of them, convergence results have been proved. In this paper, we suggest a definition of generalized continued fractions which covers a great variety of former generalizations as special cases. As a starting point for a convergence theory, we prove a Pringsheim-type convergence criterion which includes criteria for the aforementioned special cases. Furthermore, we address several fields in which our definition may be applied.

## Introduction and definitions

Infinite continued fractions have the form

$$b_{0}+\frac{a_{1}}{b_{1}+\frac{a_{2}}{b_{2}+\ddots }}=\lim_{N\to \infty }b_{0}+ \frac{a_{1}}{b_{1}+\frac{a_{2}}{b_{2}+\frac{ \ddots }{b_{N-1}+\frac{a_{N}}{b_{N}}}}},$$

where various domains for the coefficients $$a_{n}$$, $$b_{n}$$ can be considered depending on the mathematical field the continued fraction is used in. In early approaches (sometimes referred to as simple continued fractions), $$a_{n}=1$$ and $$b_{n}\in \mathbb{N}$$ were required, yielding unique representations for irrational numbers, with the approximants being the best rational approximations. Later on, continued fractions with complex coefficients $$a_{n}$$, $$b_{n}$$ were introduced and used for characterizing subdominant solutions of second-order difference equations or for finding representations of special analytic functions. Most of the theory is based on the fact that the approximants can be rewritten as

$$b_{0}+\frac{a_{1}}{b_{1}+\frac{a_{2}}{b_{2}+\frac{\ddots }{b_{N-1}+\frac{a _{N}}{b_{N}}}}}=\frac{A_{N}}{B_{N}},$$

where both $$(A_{N})$$ and $$(B_{N})$$ meet the same recurrence relation $$X_{N}=X_{N-1}b_{N}+X_{N-2}a_{N}$$, subject to some initial conditions. We refer to [31, 32] for details on these (and more) basic facts.
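For illustration (a standard textbook example, not specific to this paper), the three-term recurrence with the usual initial conditions $$A_{-1}=I$$, $$A_{0}=b_{0}$$, $$B_{-1}=0$$, $$B_{0}=I$$ evaluates the periodic continued fraction $$\sqrt{2}=1+\frac{1}{2+\frac{1}{2+\ddots }}$$, i.e., $$b_{0}=1$$, $$b_{n}=2$$, $$a_{n}=1$$:

```python
# Three-term recurrence X_N = X_{N-1} b_N + X_{N-2} a_N for numerators A_N
# and denominators B_N, with b_0 = 1, b_n = 2, a_n = 1 (sqrt(2) expansion).
A_prev, A = 1.0, 1.0   # A_{-1} = 1, A_0 = b_0 = 1
B_prev, B = 0.0, 1.0   # B_{-1} = 0, B_0 = 1
for n in range(1, 30):
    A_prev, A = A, A * 2.0 + A_prev * 1.0
    B_prev, B = B, B * 2.0 + B_prev * 1.0
print(A / B)   # approximants A_N / B_N converge to sqrt(2)
```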

Motivated by applications in different areas of mathematics (such as number theory, ergodic theory, linear difference equations, Padé approximants), researchers introduced many generalizations of continued fractions in the literature. Discussing all applications is far beyond the scope of this paper. Instead, we focus on the characterization of special solutions of linear difference equations by means of continued fractions. Starting from the second-order difference equation

$$x_{n}=b_{n}x_{n+1}+a_{n+1}x_{n+2}, \quad n=0,1,2,\ldots$$

with a two-dimensional solution space (we assume $$a_{n+1}\neq 0$$ for all $$n\in \mathbb{N}_{0}$$), it seems natural to write

$$\frac{x_{n}}{x_{n+1}}=b_{n}+\frac{a_{n+1}}{\frac{x_{n+1}}{x_{n+2}}}=b _{n}+ \frac{a_{n+1}}{b_{n+1}+\frac{a_{n+2}}{b_{n+2}+\ddots }},$$

that is, the ratio of two successive elements of a solution $$x=(x_{n})$$ is given by a continued fraction. Obviously, this can only be true for a one-dimensional subspace of solutions. It turns out that if a subdominant solution exists, it is characterized by continued fractions, see [32, Sect. 20]; this fact was later used for backward computation methods, see, e.g., [15, 24]. A natural motivation for generalizations was to consider difference equations

$$x_{n}=b_{n}x_{n+1}+\sum _{m=n+1}^{n+r-1} a_{nm}x_{m+1}$$

of order r or infinite difference equations (‘sum equations’ as literal translations of the German term ‘Summengleichungen’)

$$x_{n}=b_{n}x_{n+1}+\sum _{m=n+1}^{\infty }a_{nm}x_{m+1},$$

and to look for characterizations of a certain subspace of solutions. Another natural generalization concerns second-order difference equations

$$c_{n-1}x_{n}=b_{n}x_{n+1}+a_{n+1}x_{n+2}, \quad n=0,1,2,\ldots ,$$
(1)

where $$c_{n-1}$$, $$b_{n}$$, $$a_{n+1}$$, $$x_{n}$$ are not complex numbers anymore, but elements of some more general structure, e.g., $$c_{n-1}$$, $$b_{n}$$, $$a_{n+1}$$ are matrices and the $$x_{n}$$ are vectors. In a quite general setting, we might discuss such equations in some Banach algebra $$\mathcal{R}$$ with unity I. Therefore, we find various generalizations of continued fractions in the literature:

• The recurrence relation for both numerators $$A_{n}$$ and denominators $$B_{n}$$ is generalized. This includes the intuitive replacement of the second-order difference equation by schemes of higher order [9, 27, 29], of infinite order, or by other recurrence schemes. These generalizations correspond to solutions of higher-order difference equations, sum equations, etc., in $$\mathbb{C}$$.

• Instead of $$a_{n},b_{n}\in \mathbb{C}$$, the coefficients in this recurrence scheme are elements of some more general structure $$\mathcal{R}$$. In general, for defining continued fractions or their generalizations, it is appropriate to require that $$\mathcal{R}$$ is a Banach algebra with unity I; in the literature (e.g., [23, 33]), matrix algebras were considered. Such continued fractions correspond to special solutions of difference equations of the form (1) with $$c_{n-1}=I$$.

• The coefficients $$a_{n},b_{n}\in \mathbb{C}$$ in the original representation are replaced by coefficients $$a_{n},b_{n}\in \mathcal{R}$$ directly. This replacement might lead to $$b_{0}+a_{1} (b_{1}+\cdots )^{-1}$$ or $$b_{0}+ (b_{1}+\cdots )^{-1}a_{1}$$, but due to the absence of commutativity, the general form will be

$$b_{0}+a_{1} (b_{1}+\cdots )^{-1}c_{1}$$

with two sequences $$(a_{n})$$ and $$(c_{n})$$ of partial numerators (see, e.g., [11, 39]). These continued fractions can characterize solutions of the general difference equation (1) for arbitrary $$c_{n-1}$$. Unfortunately, for this construction there are no recurrence schemes for sequences $$(A_{N})$$, $$(B_{N})$$, $$(C_{N})$$ for which the Nth approximant can be written as $$A_{N}B_{N}^{-1}$$, $$B_{N}^{-1}C_{N}$$, or $$A_{N}B_{N}^{-1}C_{N}$$.

Therefore, the last type of generalization is not a special case of the first one (or the first two ones) and vice versa. This is true for many generalizations of continued fractions found in the literature. Hence, for each single generalization, convergence criteria were published. The main motivation of this paper is the question

$$\textstyle\begin{array}{l} \mbox{Is there a unified definition which contains all the} \\ \mbox{generalizations cited above as special cases?} \end{array}$$

The answer will be ‘Yes’. The next question is

$$\textstyle\begin{array}{l} \mbox{Is it possible to find unified proofs for convergence} \\ \mbox{criteria for generalized continued fractions?} \end{array}$$

Due to the large number of completely different convergence criteria, we cannot answer this question for all classical convergence criteria for continued fractions in a single paper. We concentrate on the famous Pringsheim-type criteria. For this class of convergence criteria, the answer is again ‘Yes’.

Our unified definition is motivated by a relationship between converging continued fractions and irreducible Markov chains: For dealing algorithmically with quasi-birth-death processes, that is, Markov chains with a block-tridiagonal transition structure, some methods use matrix-valued continued fractions. The convergence of these continued fractions can be guaranteed by a probabilistic interpretation of the continued fraction and its approximants as a series of some kind of taboo probabilities. In Sect. 2, we demonstrate how to obtain (matrix-valued) continued fractions in the context of Markov chains, and how to interpret the continued fraction and its approximants probabilistically. In Sect. 3, we preserve this probabilistic interpretation, but omit the condition of tridiagonality. Then we obtain a new recursion scheme which we use for defining generalized continued fractions.

At first glance, this construction aims to treat Markov chains with a general block-transition structure algorithmically by means of generalized continued fractions. Although such algorithms can be deduced, we once again emphasize that the key feature of the definition in Sect. 3 is that it:

• Includes a great variety of generalizations of continued fractions found in the literature. This fact is discussed in Sect. 4.

• Allows direct proofs of some convergence criteria, for instance, Pringsheim-type criteria. This is done in Sect. 5. Due to our definition covering many other generalizations, this convergence criterion includes convergence criteria for a large class of generalizations of continued fractions.

• Follows traditional motivations for studying continued fractions: Historically, continued fractions with complex elements were introduced since they can be used for deriving regular representations of some special functions (hypergeometric functions, Bessel functions, …). The coefficients of these expansions can be obtained from the second-order difference equations these special functions satisfy, whereas continued fractions (in the non-generalized sense) are strongly related to infinite systems of linear equations which include second-order linear difference equations, higher-order difference equations, and sum equations as special cases. Therefore, in Sects. 6 and 7, we

• point out how to use generalized continued fractions (in our sense) for obtaining (minimal) roots of analytic functions,

• and briefly discuss that gcfs might provide useful representations for analytic functions.

## Markov chains and continued fractions

In this section, we demonstrate that in the context of Markov chains with a certain transition structure, continued fractions arise in a natural way. Originally, this relationship between Markov chains and continued fractions was exploited algorithmically.

Consider a (time-homogeneous) discrete-time Markov chain (basics of Markov chains can be found in many textbooks) $$(X_{\ell })_{\ell \in \mathbb{N}_{0}}$$ with two-dimensional state space $$E=\mathbb{N}_{0}\times \{1,\ldots ,d\}$$ (we use the notation $$\mathbb{N}_{0}=\{0,1,2,\ldots \}$$) for which the one-step transition probabilities from state $$(i,u)$$ to state $$(j,v)$$ are 0 if $$\vert j-i \vert \geq 2$$. Then the one-step transition probability matrix has the form

$$P= \begin{pmatrix} p_{00}&p_{01}&&& \\ p_{10}&p_{11}&p_{12}&& \\ &p_{21}&p_{22}&p_{23}& \\ &&\ddots &\ddots &\ddots \end{pmatrix},$$

where $$p_{ij}= (p_{(i,u),(j,v)} )_{u,v=1}^{d}\in \mathbb{R}^{d\times d}$$, and the entries $$p_{(i,u),(j,v)}$$ are the one-step transition probabilities from state $$(i,u)$$ to $$(j,v)$$. In many applications of Markov chains, invariant measures have to be computed, that is, a non-trivial, non-negative vector π with $$\pi P=\pi$$. With $$\pi =(\pi _{n})_{n\in \mathbb{N}_{0}}$$ and $$\pi _{n}\in \mathbb{R}^{d}$$, a vector-matrix difference equation has to be solved, and for $$d>1$$, there is no explicit solution. Up to notation and slight variations (and the consideration of continuous-time Markov chains instead of discrete-time Markov chains), the following method has been suggested and discussed in [4, 5, 8, 18, 34]:

• Choose N large, and set $$K_{N}^{(N)}=I-p_{NN}$$, where $$I\in \mathbb{R}^{d\times d}$$ is the identity matrix.

• For $$n=N-1,N-2,\ldots ,0$$, compute

$$K_{n}^{(N)}=I-p_{nn}-p_{n,n+1} \bigl(K_{n+1}^{(N)} \bigr)^{-1}p_{n+1,n}.$$
(2)
• Determine $$\pi _{0}^{(N)}$$ as an approximate solution of $$xK_{0}^{(N)}=0$$.

• Compute $$\pi _{n}^{(N)}=\pi _{n-1}^{(N)}p_{n-1,n} (K_{n}^{(N)} ) ^{-1}$$ for $$n=1,\ldots ,N$$.
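For the scalar case $$d=1$$, these steps can be sketched in a few lines of Python (an illustration of ours, not taken from the cited papers; the birth-death probabilities $$p_{n,n+1}=p=0.3$$, $$p_{n,n-1}=q=0.5$$ and self-loops carrying the remaining mass are hypothetical; $$p<q$$ makes the chain positive recurrent):

```python
p, q = 0.3, 0.5    # up/down jump probabilities; p < q: positive recurrent
N = 200            # truncation level

# Backward recursion (2): K_N^{(N)} = 1 - p_NN, then for n = N-1, ..., 0:
# K_n^{(N)} = 1 - p_nn - p_{n,n+1} (K_{n+1}^{(N)})^{-1} p_{n+1,n}.
K = [0.0] * (N + 1)
K[N] = p + q                     # 1 - p_NN, since p_NN = 1 - p - q
for n in range(N - 1, 0, -1):
    K[n] = (p + q) - p * q / K[n + 1]
K[0] = p - p * q / K[1]          # state 0: p_00 = 1 - p, p_01 = p

# Invariant measure: pi_0 = 1 (approximate null vector of K_0^{(N)} ~ 0),
# then pi_n = pi_{n-1} p_{n-1,n} (K_n^{(N)})^{-1}.
pi = [1.0]
for n in range(1, N + 1):
    pi.append(pi[-1] * p / K[n])

print(K[0])             # close to 0: state 0 is recurrent
print(pi[1] / pi[0])    # close to p/q, the detailed-balance ratio
```

Here $$K_{n}^{(N)}$$ converges (backward in n) to the larger root q of $$x^{2}-(p+q)x+pq=0$$, so $$K_{0}^{(N)}\approx p-pq/q=0$$, matching the recurrence of state 0.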

Using appropriate probabilistic interpretations and assuming irreducibility and recurrence of the Markov chain, it is possible to prove that

• $$(K_{n}^{(N)} )^{-1}$$ exists for all $$n\in \mathbb{N}$$ if the Markov chain is irreducible,

• $$\lim_{N\to \infty }K_{n}^{(N)}$$ exists for all $$n\in \mathbb{N}_{0}$$,

• $$K=\lim_{N\to \infty }K_{0}^{(N)}$$ has the eigenvalue 0 with non-negative eigenvector,

• $$\pi _{n}=\lim_{N\to \infty }\pi _{n}^{(N)}$$ exists for an appropriate method of finding an ‘approximate’ solution $$\pi _{0}^{(N)}$$ of $$xK_{0}^{(N)}=0$$,

• $$\pi =(\pi _{n})_{n\in \mathbb{N}_{0}}$$ satisfies $$\pi P=\pi$$.

For further algorithmic details, comparisons with other methods, and an extensive discussion of the probabilistic interpretation of $$K_{n}^{(N)}$$, $$(K_{n}^{(N)} )^{-1}$$, and $$\pi _{n}^{(N)}$$, we refer to the literature cited above. For $$d=1$$, the interpretation simplifies as follows:

• Let $$\tau _{i}=\inf \{\ell >0: X_{\ell }=i\}$$ be the first hitting time on state $$i\in E=\mathbb{N}_{0}$$, and let $$T_{C}=\inf \{\ell >0: X _{\ell }\notin C\}$$ be the first time of leaving set $$C\subset E$$.

• Then $$1-K_{n}^{(N)}=\mathbb{P} (T_{\{n+1,\ldots ,N\}}=\tau _{n}|X _{0}=n )$$ is the probability that, conditioned on starting in state n, the Markov chain returns to n before reaching one of the states $$n-1$$ or $$N+1$$.

• Furthermore, $$(K_{n}^{(N)} )^{-1}=\mathbb{E} [\sum_{m=0}^{T_{\{n,\ldots ,N\}}}{\mathbf{1}}_{\{n\}}(X_{m})|X _{0}=n ]$$ is the expected number of visits in state n before reaching one of the states $$n-1$$ or $$N+1$$. These expectations are finite if the Markov chain is irreducible.

• For $$N\to \infty$$, we see that $$1-K_{0}^{(N)}$$ converges to the probability that the Markov chain will eventually return to state 0 if it starts in state 0. Hence, $$K=0$$ if and only if state 0 is recurrent.

The recursion (2) can be interpreted as a ‘prototype’ of a matrix-valued continued fraction. The probabilistic interpretation ensures that $$K_{n}^{(N)}$$ is invertible for $$n\geq 1$$ and that $$K=\lim_{N\to \infty }K_{0}^{(N)}$$ exists whenever the Markov chain is irreducible. Based on this relationship, with some further steps, a Śleszyński-Pringsheim-type convergence criterion has been proved. In the next section, we use this relationship for finding a powerful definition of generalized continued fractions, and in Sect. 5, we extend the proof of the Śleszyński-Pringsheim-type convergence criterion to generalized continued fractions. We conclude this introductory discussion with some remarks:

• We can replace the state space by $$E=\mathbb{N}_{0}\times D$$ with some Polish space D. Then the matrices $$p_{ij}\in \mathbb{R}^{d\times d}$$ have to be replaced by kernels $$p_{ij}:D\times \mathcal{B}(D)\to [0,1]$$, where $$\mathcal{B}(D)$$ is the Borel-σ-field on D. With an appropriate definition of multiplication of kernels, all considerations of this section still hold. In this setting, there is no direct algorithmic use, and therefore, there is little literature on this topic. Nevertheless, this fact may be used for theoretical considerations, and it gives an additional motivation for not only considering (generalized) continued fractions with coefficients in some matrix algebra, but an arbitrary Banach algebra.

• The literature cited above deals with so-called matrix-analytic methods for quasi-birth-death processes. Often, these methods are interpreted as variants or generalizations of the matrix-geometric methods introduced by Neuts. Alternatively, the algorithm described above can be interpreted as solving $$\pi ^{(N)}P^{(N)}=\pi ^{(N)}$$ approximately by a block-Gauss elimination, where $$P^{(N)}=(p_{ij})_{i,j=0}^{N}$$ is the north-west corner truncation of P. In the literature, this approach is discussed and a relationship to a method of repetitively censoring the Markov chain is established.

• In Sect. 5, we come back to the probabilistic interpretation since it can be used for proving convergence criteria.

## Defining generalized continued fractions

### The general case

The main idea for finding a unified definition of generalized continued fractions is omitting the condition of tridiagonality while preserving the probabilistic interpretation of $$K_{n}^{(N)}$$ and $$(K_{n} ^{(N)} )^{-1}$$. For this purpose let $$Q=(q_{mn})_{m,n=0}^{ \infty }$$ be an arbitrary infinite $$\mathcal{R}$$-valued matrix where $$\mathcal{R}$$ is a Banach algebra with unity I and introduce the notation

$$S(Q,i,j,A)= \sum_{\substack{\ell \in \mathbb{N} \\ i_{0},\ldots ,i_{\ell }\in \mathbb{N}_{0} \\ i_{0}=i, i_{\ell }=j \\ i_{1},\ldots ,i_{\ell -1}\in A}}\prod _{r=1}^{\ell }q_{i_{r-1},i_{r}}$$

for a matrix $$Q=(q_{mn})_{m,n=0}^{\infty }$$, indices $$i,j\in \mathbb{N}_{0}$$ and sets $$A\subset \mathbb{N}_{0}$$. Since we do not specify the order of summation, this notation only makes sense in case of unconditional convergence. One way to interpret this series is as follows: Consider an infinite graph with nodes $$0,1,2,\ldots$$ and edges with weights $$q_{mn}$$. Define the value of a path $$i_{0},i_{1},\ldots ,i_{\ell }$$ by multiplying all weights $$q_{i_{0},i_{1}},\ldots ,q_{i _{\ell -1},i_{\ell }}$$. Then $$S(Q,i,j,A)$$ is the sum of all values of paths from i to j where only nodes within set A are visited along the way. Typically, such series occur in the context of Markov chains: If $$P=(p_{mn})_{m,n=0}^{\infty }$$ is the transition probability matrix of a discrete-time Markov chain with state space $$\mathbb{N}_{0}$$, we have

$$S(P,i,j,A)=\sum_{\ell =1}^{\infty }\mathbb{P} (X_{\ell }=j, X _{\ell -1},\ldots ,X_{1}\in A|X_{0}=i ).$$

Due to this interpretation, for stochastic matrices P, we can use standard arguments from the basic theory of Markov chains for proving convergence of $$S(P,i,j,A)$$ for some specific choices of i, j, A. We will benefit from this consideration when proving a Pringsheim-type convergence criterion in Sect. 5 (see Lemma 2 below). Here, we focus on relationships that hold between the $$S(Q,i,j,A)$$ if these series converge unconditionally. We will see that these relationships generalize the recursion schemes defining continued fractions.
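As an illustration (our own sketch, with a hypothetical $$3\times 3$$ matrix Q whose entries are small enough for the path sums to converge geometrically), $$S(Q,i,j,A)$$ can be accumulated by path length; the computation also lets one check numerically the inversion identity $$(I-S(Q,n,n,\{n+1,\ldots ,N\}))^{-1}=I+S(Q,n,n,\{n,\ldots ,N\})$$ established in Lemma 1 below:

```python
Q = [[0.10, 0.20, 0.10],
     [0.15, 0.05, 0.20],
     [0.10, 0.10, 0.10]]   # hypothetical weights; row sums < 1 => convergence

def S(Q, i, j, A, max_len=400):
    """Sum of the values of all paths i -> j whose intermediate nodes lie
    in A, truncated at length max_len (the tail is negligible here)."""
    total = 0.0
    # f[m]: sum of values of current-length paths i -> m, intermediates in A
    f = {m: Q[i][m] for m in range(len(Q))}
    for _ in range(max_len):
        total += f[j]
        # extend each path by one edge; the old endpoint becomes an
        # intermediate node, so it must belong to A
        f = {m: sum(f[x] * Q[x][m] for x in A) for m in range(len(Q))}
    return total

lhs = 1.0 / (1.0 - S(Q, 0, 0, [1, 2]))   # (I - S(Q,0,0,{1,2}))^{-1}
rhs = 1.0 + S(Q, 0, 0, [0, 1, 2])        # I + S(Q,0,0,{0,1,2})
print(lhs, rhs)                           # the two agree
```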

### Lemma 1

If the series on the right-hand sides converge unconditionally, we have

$$\bigl(I-S\bigl(Q,n,n,\{n+1,\ldots ,N\}\bigr)\bigr)^{-1}=I+S \bigl(Q,n,n,\{n,\ldots ,N\}\bigr)$$
(3)

for $$n\leq N$$ and

\begin{aligned}& S\bigl(Q,n,n,\{n+1,\ldots ,N\}\bigr) = q_{nn}+\sum _{m=n+1}^{N} q_{nm}S\bigl(Q,m,n,\{n+1, \ldots ,N\}\bigr), \end{aligned}
(4)
\begin{aligned}& \begin{aligned}[b] S\bigl(Q,n,k,\{n,\ldots ,N\}\bigr) &= \bigl(I+S\bigl(Q,n,n,\{n,\ldots ,N\}\bigr) \bigr) \\ &\quad {}\cdot \Biggl(q_{nk}+\sum_{m=n+1}^{N} q_{nm}S\bigl(Q,m,k,\{n+1,\ldots ,N \}\bigr) \Biggr), \end{aligned} \end{aligned}
(5)
\begin{aligned}& \begin{aligned}[b] S\bigl(Q,m,k,\{n,\ldots ,N\}\bigr) &= S \bigl(Q,m,k,\{n+1,\ldots ,N\}\bigr) \\ &\quad {}+ S\bigl(Q,m,n,\{n+1,\ldots ,N\}\bigr)S\bigl(Q,n,k,\{n,\ldots ,N\} \bigr) \end{aligned} \end{aligned}
(6)

for $$k< n< m\leq N$$.

### Proof

For proving (3), we consider

\begin{aligned}& S\bigl(Q,n,n,\{n,\ldots ,N\}\bigr)-S\bigl(Q,n,n,\{n+1,\ldots ,N\}\bigr) \\ & \quad = \sum_{\substack{\ell \geq 2,i_{0},\ldots ,i_{\ell }\in \mathbb{N}_{0} \\ i_{0}=n, i_{\ell }=n \\ i_{1},\ldots ,i_{\ell -1}\in \{n,\ldots ,N\} \\ i_{k}=n\text{ for some }k\in \{1,\ldots ,\ell -1\}}}\prod _{r=1}^{ \ell }q_{i_{r-1},i_{r}} =\sum _{\ell \in \mathbb{N}}\sum_{k=1}^{\ell -1} \sum_{\substack{\ell \geq 2,i_{0},\ldots ,i_{\ell }\in \mathbb{N}_{0} \\ i_{0}=n, i_{k}=n, i_{\ell }=n \\ i_{1},\ldots ,i_{k-1}\in \{n,\ldots ,N\} \\ i_{k+1},\ldots ,i_{\ell -1}\in \{n,\ldots ,N\}}}\prod_{r=1}^{\ell }q _{i_{r-1},i_{r}} \\ & \quad =\sum_{k=1}^{\infty }\sum _{\ell =k+1}^{\infty } \sum_{\substack{i_{0},\ldots ,i_{k}\in \mathbb{N}_{0} \\ i_{0}=n, i_{k}=n \\ i_{1},\ldots ,i_{k-1}\in \{n,\ldots ,N\}}} \prod_{r=1}^{k} q_{i_{r-1},i _{r}} \sum _{\substack{i_{k},\ldots ,i_{\ell }\in \mathbb{N}_{0} \\ i_{k}=n, i_{\ell }=n \\ i_{k+1},\ldots ,i_{\ell -1}\in \{n+1,\ldots ,N\}}}\prod_{r=k+1}^{ \ell }q_{i_{r-1},i_{r}} \\ & \quad = \sum_{\substack{k\in \mathbb{N} \\ i_{0},\ldots ,i_{k}\in \mathbb{N}_{0} \\ i_{0}=n, i_{k}=n \\ i_{1},\ldots ,i_{k-1}\in \{n,\ldots ,N\}}}\prod _{r=1}^{k} q_{i_{r-1},i _{r}}\cdot \sum _{\substack{\ell \in \mathbb{N} \\ i_{0},\ldots ,i_{\ell }\in \mathbb{N}_{0} \\ i_{0}=n, i_{\ell }=n \\ i_{1},\ldots ,i_{\ell -1}\in \{n+1,\ldots ,N\}}}\prod_{r=1}^{\ell }q _{i_{r-1},i_{r}} \\ & \quad =S\bigl(Q,n,n,\{n,\ldots ,N\}\bigr)S\bigl(Q,n,n,\{n+1,\ldots ,N\}\bigr), \end{aligned}

yielding

$$\bigl(I+S\bigl(Q,n,n,\{n,\ldots ,N\}\bigr) \bigr) \bigl(I-S\bigl(Q,n,n,\{n+1, \ldots ,N\}\bigr) \bigr)=I,$$

and hence, (3). Equation (4) is seen from

\begin{aligned} S\bigl(Q,n,n,\{n+1,\ldots ,N\}\bigr) =& \sum_{\substack{\ell \in \mathbb{N},i_{0},\ldots ,i_{\ell }\in \mathbb{N}_{0} \\ i_{0}=n, i_{\ell }=n \\ i_{1},\ldots ,i_{\ell -1}\in \{n+1,\ldots ,N\}}} \prod_{r=1}^{\ell }q _{i_{r-1},i_{r}} \\ =&q_{nn}+\sum_{m=n+1}^{N} \sum _{\substack{\ell \geq 2,i_{0},\ldots ,i_{\ell }\in \mathbb{N}_{0} \\ i_{0}=n, i_{1}=m, i_{\ell }=n \\ i_{2},\ldots ,i_{\ell -1}\in \{n+1,\ldots ,N\}}}\prod_{r=1}^{\ell }q _{i_{r-1},i_{r}} \\ =&q_{nn}+\sum_{m=n+1}^{N} q_{nm} \sum_{\substack{\ell \geq 2,i_{1},\ldots ,i_{\ell }\in \mathbb{N}_{0} \\ i_{1}=m, i_{\ell }=n \\ i_{2},\ldots ,i_{\ell -1}\in \{n+1,\ldots ,N\}}}\prod _{r=2}^{\ell }q _{i_{r-1},i_{r}} \\ =&q_{nn}+\sum_{m=n+1}^{N} q_{nm} \sum_{\substack{\ell \in \mathbb{N},i_{0},\ldots ,i_{\ell }\in \mathbb{N}_{0} \\ i_{0}=m, i_{\ell }=n \\ i_{1},\ldots ,i_{\ell -1}\in \{n+1,\ldots ,N\}}}\prod _{r=1}^{\ell }q _{i_{r-1},i_{r}} \\ =&q_{nn}+\sum_{m=n+1}^{N} q_{nm}S\bigl(Q,m,n,\{n+1,\ldots ,N\}\bigr), \end{aligned}

and (5) and (6) are derived in a similar manner. □

Let us suppose that all series in Lemma 1 converge unconditionally, let us set $$K_{n}^{(N)}=I-S(Q,n,n,\{n+1,\ldots ,N\})$$, and let us set $$L_{m,n,k}^{(N)}=S(Q,m,k,\{n,\ldots ,N\})$$ for $$n\leq N$$ and $$k< n\leq m\leq N$$, respectively. Then (3)–(6) imply

\begin{aligned}& K_{n}^{(N)} = I-q_{nn}-\sum _{m=n+1}^{N} q_{nm}L_{m,n+1,n}^{(N)}, \end{aligned}
(7)
\begin{aligned}& L_{n,n,k}^{(N)} = \bigl(K_{n}^{(N)} \bigr)^{-1} \Biggl(q_{nk}+ \sum_{m=n+1}^{N} q_{nm}L_{m,n+1,k}^{(N)} \Biggr), \quad 0\leq k< n, \end{aligned}
(8)
\begin{aligned}& L_{m,n,k}^{(N)} = L_{m,n+1,k}^{(N)}+L_{m,n+1,n}^{(N)}L_{n,n,k}^{(N)}, \quad 0\leq k< n, n< m\leq N. \end{aligned}
(9)

Note that for tridiagonal Q, these recursions simplify to the scheme

$$K_{n}^{(N)}=I-q_{nn}-q_{n,n+1} \bigl(K_{n+1}^{(N)} \bigr)^{-1}q_{n+1,n}$$

for continued fractions in Banach algebras. Traditionally, continued fractions are built up by denominators $$b_{n}$$ and numerators $$a_{n}$$, and for (two-sided) continued fractions in Banach algebras, the coefficients are usually named such that

$$K_{n}^{(N)}=b_{n}+a_{n+1} \bigl(K_{n+1}^{(N)} \bigr)^{-1}c_{n+1},$$

see [2, 11]. In order to obtain similar notation, we rename $$b_{n}=I-q_{nn}$$, $$a_{nm}=-q_{nm}$$ for $$m>n$$, and $$c_{nk}=q_{nk}$$ for $$k< n$$. Then we obtain

\begin{aligned}& K_{n}^{(N)} = b_{n}+\sum _{m=n+1}^{N} a_{nm}L_{m,n+1,n}^{(N)}, \end{aligned}
(10)
\begin{aligned}& L_{n,n,k}^{(N)} = \bigl(K_{n}^{(N)} \bigr)^{-1} \Biggl(c_{nk}- \sum_{m=n+1}^{N} a_{nm}L_{m,n+1,k}^{(N)} \Biggr),\quad 0\leq k< n, \end{aligned}
(11)
\begin{aligned}& L_{m,n,k}^{(N)} = L_{m,n+1,k}^{(N)}+L_{m,n+1,n}^{(N)}L_{n,n,k}^{(N)}, \quad 0\leq k< n, n< m\leq N. \end{aligned}
(12)

### Definition 1

Let $$\mathcal{R}$$ be a Banach algebra with unity I, let $$b_{n},a _{nm},c_{nk}\in \mathcal{R}$$ for all $$n,m,k\in \mathbb{N}_{0}$$ with $$k< n< m$$, and let $$K_{n}^{(N)}$$ and $$L_{m,n,k}^{(N)}$$ be defined by (10), (11), (12). If $$K_{0}^{(N)}$$ is well-defined (that is, $$(K_{n}^{(N)} )^{-1}$$ exists for $$n=1,\ldots ,N$$) for almost all $$N\in \mathbb{N}$$ and if

$$K=\lim_{N\to \infty }K_{0}^{(N)}$$

exists, K is said to be a convergent gcf (abbreviating generalized continued fraction), and $$K_{0}^{(N)}$$ is referred to as the Nth approximant for the gcf K.
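A direct sketch of this definition for scalar coefficients (pure Python, our own illustration; all coefficient values below are hypothetical): the recursions (10)-(12) are evaluated from $$n=N$$ downward, and in the tridiagonal case the result must coincide with the classical backward evaluation of a continued fraction.

```python
def gcf_approximant(b, a, c, N):
    """Nth approximant K_0^{(N)} via recursions (10)-(12), scalar case.
    b: list of b_n; a[n]: dict {m: a_nm} for m > n; c[n]: dict {k: c_nk}."""
    L = {}   # after processing level n+1, L[(m, k)] holds L_{m,n+1,k}^{(N)}
    for n in range(N, -1, -1):
        # (10): K_n = b_n + sum_m a_nm L_{m,n+1,n}
        K = b[n] + sum(a[n].get(m, 0.0) * L[(m, n)]
                       for m in range(n + 1, N + 1))
        if n == 0:
            return K
        newL = {}
        for k in range(n):
            # (11): L_{n,n,k} = K_n^{-1} (c_nk - sum_m a_nm L_{m,n+1,k})
            newL[(n, k)] = (c[n].get(k, 0.0)
                            - sum(a[n].get(m, 0.0) * L[(m, k)]
                                  for m in range(n + 1, N + 1))) / K
            # (12): L_{m,n,k} = L_{m,n+1,k} + L_{m,n+1,n} L_{n,n,k}
            for m in range(n + 1, N + 1):
                newL[(m, k)] = L[(m, k)] + L[(m, n)] * newL[(n, k)]
        L = newL

# Tridiagonal check with b_n = 2, a_{n,n+1} = 1, c_{n,n-1} = -0.5:
N = 50
b = [2.0] * (N + 1)
a = [{n + 1: 1.0} for n in range(N + 1)]
c = [{} if n == 0 else {n - 1: -0.5} for n in range(N + 1)]
K0 = gcf_approximant(b, a, c, N)

ref = 2.0                        # classical backward evaluation
for _ in range(N):
    ref = 2.0 + 1.0 * (-0.5) / ref
print(K0, ref)                   # the two agree
```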

The construction of the series $$S(Q,i,j,A)$$ is inspired by the Markov-chain interpretation, and therefore, it may be conjectured that this definition is only useful for the algorithmic treatment of Markov chains with a general (block-)transition structure. However, a large variety of generalizations of continued fractions that were discussed in the literature turn out to be special cases of Definition 1. This is one of the main points of this paper, and in Sect. 4, we provide detailed comparisons. In particular, note that the recursion scheme for the $$K_{n}^{(N)}$$ and $$L_{m,n,k}^{(N)}$$ does not reflect the probabilistic interpretation directly, and $$(K_{n}^{(N)} )^{-1}$$ and $$\lim_{N\to \infty }K_{0}^{(N)}$$ may exist in situations where the corresponding series $$S(\cdots )$$ do not converge unconditionally.

Nevertheless, if the series $$S(\cdots )$$ converge, the interpretation $$K_{n}^{(N)}=I-S(Q,n,n,\{n+1, \ldots ,N\})$$ provides a convergence criterion.

### Theorem 1

Define $$Q=(q_{mn})_{m,n=0}^{\infty }$$ by

$$q_{mn}=\textstyle\begin{cases} -a_{mn},&n>m, \\ I-b_{n},&n=m, \\ c_{mn},&n< m, \end{cases}$$

and let

• $$S(Q,n,n,\{n,\ldots ,N\})$$ converge unconditionally for all $$n,N\in \mathbb{N}$$ with $$n\leq N$$,

• $$S(Q,m,k,\{n,\ldots ,N\})$$ converge unconditionally for all $$m,n,k,N\in \mathbb{N}_{0}$$ with $$k< n\leq m\leq N$$ and

• $$S(Q,0,0,\mathbb{N})$$ converge unconditionally.

Then $$K_{n}^{(N)}$$ as defined in Definition 1 is invertible for all $$N\geq n\geq 1$$. In particular, the gcf K defined in Definition 1 is well-defined, and furthermore, it converges with

$$K=I-S(Q,0,0,\mathbb{N}),$$

and for the approximants, we have

$$K_{0}^{(N)}-K=S(Q,0,0,\mathbb{N})-S\bigl(Q,0,0,\{1,\ldots ,N \}\bigr).$$

### Proof

The existence of $$(K_{n}^{(N)} )^{-1}$$ and the representation

$$K_{0}^{(N)}=I-S\bigl(Q,0,0,\{1,\ldots ,N\}\bigr)$$

are immediate consequences of Lemma 1 and our construction of gcfs. The latter term converges to $$I-S(Q,0,0,\mathbb{N})$$ provided unconditional convergence of this series. □

Later on, we will use this criterion as a first step for proving a Pringsheim-type convergence criterion. First, however, we want to demonstrate that our definition covers a wide range of generalizations of continued fractions found in the literature. In fact, these generalizations can be interpreted as special cases of a subclass of gcfs which we discuss below.

### Remark 1

The graphical interpretation of $$S(Q,i,j,A)$$ for tridiagonal Q from the beginning of this section is very similar or even identical to that in Flajolet’s outstanding paper, where important connections between combinatorial identities, formal power series, and formal continued fractions were introduced. However, the scope of the present paper differs from Flajolet’s work in that we

• consider convergence of the series $$S(Q,i,j,A)$$ in a Banach algebra, whereas Flajolet dealt with these terms as formal series (and convergence of a sequence of formal series to another series is interpreted in terms of the monoid algebra of all formal series on some non-commutative alphabet, see Flajolet’s original article for more details);

• consider non-tridiagonal matrices Q.

### Remark 2

In the probabilistic context of computing invariant measures, systems of linear equations have to be solved, and this can be done by truncating the system and using (block-)Gaussian elimination. This provides another way of interpreting formulas (10) to (12): Consider the infinite system of equations:

$$\begin{pmatrix} b_{0}&a_{01}&a_{02}&a_{03}&\cdots \\ -c_{10}&b_{1}&a_{12}&a_{13}&\cdots \\ -c_{20}&-c_{21}&b_{2}&a_{23}&\cdots \\ -c_{30}&-c_{31}&-c_{32}&b_{3}&\ddots \\ \vdots &\vdots &\vdots &\ddots &\ddots \end{pmatrix} \begin{pmatrix} x_{0} \\ x_{1} \\ x_{2} \\ x_{3} \\ \vdots \end{pmatrix}=0$$

for some $$\mathcal{R}$$-valued sequence $$(x_{n})_{n\in \mathbb{N}_{0}}$$, that is,

\begin{aligned}& 0= b_{0}x_{0}+\sum_{n=1}^{\infty }a_{0n}x_{n}, \end{aligned}
(13)
\begin{aligned}& \sum_{n=0}^{m-1}c_{mn}x_{n} = b_{m}x_{m}+\sum_{n=m+1}^{\infty }a_{mn}x _{n},\quad m=1,2,3,\ldots . \end{aligned}
(14)

If $$a_{mn}=0$$ for $$n\geq m+2$$ and $$c_{mn}=0$$ for $$n\leq m-2$$, (14) simplifies to a second-order difference equation and (13) is some kind of initial condition. The algorithmic method in the probabilistic context referred to above relies on considering the truncated system

$$\sum_{n=0}^{m-1}c_{mn}x_{n}=b_{m}x_{m}+ \sum_{n=m+1}^{N} a_{mn}x_{n}, \quad m=1,2,3,\ldots ,N.$$
(15)

A proof by induction with respect to N yields that a solution for (15) is given by $$x_{0}=I$$ and $$x_{n}=L_{n,1,0} ^{(N)}$$ for $$n=1,\ldots ,N$$ if the values $$L_{1,1,0}^{(N)},\ldots ,L _{N,1,0}^{(N)}$$ in Definition 1 are well-defined. More or less, the induction step performs a reduction step of Gaussian elimination without pivoting. In the probabilistic context, the probabilistic interpretation guarantees that $$L_{n,1,0}^{(N)}$$ converges to some $$L_{n,1,0}$$, and a solution to (14) is given by $$x_{n}=L_{n,1,0}x_{0}$$ for $$n\geq 1$$. Equation (10) and the ‘initial condition’ (13) then yield $$0=Kx_{0}$$ with the gcf K itself. This agrees with the algorithmic procedure for Markov chains with a block-tridiagonal transition structure, but the construction of $$L_{n,1,0}$$ and K is neither restricted to tridiagonal matrices nor to the probabilistic context. In fact, many special functions provide solutions to special cases of system (14). Hence, $$L_{n,1,0}$$ or K might yield new representations for such special functions, see Sect. 7 below for an example.
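In the tridiagonal scalar case, this remark can be sketched numerically (our own illustration with hypothetical coefficients $$b_{m}=2$$, $$a_{m,m+1}=1$$, $$c_{m,m-1}=-0.5$$): eliminating the truncated system (15) from the bottom row upward gives $$x_{m}=t_{m}x_{m-1}$$ with $$t_{N}=c_{N,N-1}b_{N}^{-1}$$ and $$t_{m}=c_{m,m-1}(b_{m}+a_{m,m+1}t_{m+1})^{-1}$$, and the resulting $$x_{1}$$ coincides with $$L_{1,1,0}^{(N)}=(K_{1}^{(N)})^{-1}c_{10}$$:

```python
b, a, c, N = 2.0, 1.0, -0.5, 100   # hypothetical scalar coefficients

# Backward substitution on the truncated system (15), with x_0 = 1:
# x_m = t_m x_{m-1}, t_N = c/b, t_m = c/(b + a * t_{m+1}).
t = c / b
for m in range(N - 1, 0, -1):
    t = c / (b + a * t)
x1 = t                             # x_1 from Gaussian elimination

# Continued-fraction route: K_N^{(N)} = b, K_n^{(N)} = b + a*c/K_{n+1}^{(N)}.
K = b
for n in range(N - 1, 0, -1):
    K = b + a * c / K
x1_cf = c / K                      # L_{1,1,0}^{(N)} = (K_1^{(N)})^{-1} c_{10}
print(x1, x1_cf)                   # the two agree
```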

### Remark 3

Later on, we will use the term ‘gcf’ for the values $$L_{n,1,0}$$ as well. This is simply due to the fact that $$b_{0}$$ and $$a_{0m}$$ have no impact on $$L_{n,1,0}$$, and by setting $$b_{0}=0$$ and $$a_{0m}=\delta _{mn}I$$, we obtain $$K=L_{n,1,0}$$.

### An important subclass of generalized continued fractions

As pointed out in the introduction, traditional analysis of continued fractions relies on the fact that the approximants can be written as $$\frac{A_{N}}{B_{N}}$$ where both the sequence $$(A_{N})$$ of numerators and the sequence $$(B_{N})$$ of denominators are solutions of the difference equation $$X_{N}=X_{N-1}b_{N}+X_{N-2}a_{N}$$. Hence, many generalizations of continued fractions are based on generalizations of this recurrence relation. Without additional assumptions, generalized continued fractions as defined in Definition 1 cannot be represented in such a way, and this makes it difficult to compare our definition with many of those found in the literature.

Therefore, we consider the special case where $$c_{nk}=0$$ for $$k< n-1$$ (in the Markov-chain context, this assumption corresponds to upper block Hessenberg matrices P) and $$c_{n,n-1}^{-1}$$ exists for all $$n\in \mathbb{N}$$. We will see that in this case, we are able to provide a recursion scheme for the sequences $$(A_{N})$$ and $$(B_{N})$$ with $$K_{0}^{(N)}=A_{N}B_{N}^{-1}$$. First, we observe that $$c_{nk}=0$$ for $$k< n-1$$ implies $$L_{m,n,k}^{(N)}=0$$ for $$k< n-1$$, entailing $$L_{n,n,n-1}^{(N)}= (K_{n}^{(N)} )^{-1}c_{n,n-1}$$ and $$L_{m,n,n-1}^{(N)}= \prod_{n}^{r=m}L_{r,r,r-1}^{(N)}=\prod_{n}^{r=m} (K _{r}^{(N)} )^{-1}c_{r,r-1}$$, where $$\prod_{n}^{r=m}$$ denotes the ‘top-to-bottom product’, that is,

$$\prod_{n}^{r=m}L_{r,r,r-1}^{(N)}=L_{m,m,m-1}^{(N)} \cdots L_{n,n,n-1} ^{(N)}.$$

Hence in total, the recursion for $$K_{n}^{(N)}$$ simplifies to

$$K_{n}^{(N)}=b_{n}+\sum _{m=n+1}^{N} a_{nm}\prod _{n+1}^{r=m} \bigl(K _{r}^{(N)} \bigr)^{-1}c_{r,r-1}, \quad 0\leq n< N.$$
(16)

### Theorem 2

Let $$c_{n,n-1}^{-1}$$ exist for all $$n\in \mathbb{N}$$, let $$K_{0}^{(N)}$$ be defined by $$K_{N}^{(N)}=b_{N}$$ and (16), and let the sequences $$(A_{n})_{n\geq -1}$$ and $$(B_{n})_{n\geq 0}$$ be defined by

\begin{aligned}& A_{-1}=I, \qquad A_{n}c_{n+1,n} = A_{n-1}b_{n}+\sum_{m=0}^{n-1}A_{m-1}a_{mn}, \quad n\in \mathbb{N}_{0}, \end{aligned}
(17)
\begin{aligned}& B_{0}c_{10}=I, \qquad B_{n}c_{n+1,n} = B_{n-1}b_{n}+\sum_{m=1}^{n-1}B_{m-1}a_{mn}, \quad n\in \mathbb{N}. \end{aligned}
(18)

If $$K_{0}^{(N)}$$ is well-defined for some $$N\in \mathbb{N}_{0}$$, $$B_{N}^{-1}$$ exists and $$K_{0}^{(N)}=A_{N}B_{N}^{-1}$$.
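Before turning to the proof, the statement can be sanity-checked numerically in the scalar case (a sketch of ours with hypothetical coefficients $$b_{n}=2$$, $$c_{n,n-1}=1$$, $$a_{nm}=0.3^{m-n}$$):

```python
# Check of Theorem 2 (scalar case): K_0^{(N)} from (16) equals A_N / B_N
# from the recursions (17) and (18).
N = 6
b = lambda n: 2.0
c = lambda n: 1.0                  # c_{n,n-1}
a = lambda n, m: 0.3 ** (m - n)    # a_{nm} for m > n

# K_0^{(N)} via (16), computed backward from K_N^{(N)} = b_N.
K = {N: b(N)}
for n in range(N - 1, -1, -1):
    total, prod = b(n), 1.0
    for m in range(n + 1, N + 1):
        prod *= c(m) / K[m]        # top-to-bottom product (K_r)^{-1} c_{r,r-1}
        total += a(n, m) * prod
    K[n] = total

# Numerators (17) and denominators (18).
A = {-1: 1.0}
B = {}
for n in range(N + 1):
    A[n] = (A[n - 1] * b(n)
            + sum(A[m - 1] * a(m, n) for m in range(n))) / c(n + 1)
    first = 1.0 if n == 0 else B[n - 1] * b(n)   # B_0 c_10 = I
    B[n] = (first + sum(B[m - 1] * a(m, n) for m in range(1, n))) / c(n + 1)

print(K[0], A[N] / B[N])   # Theorem 2: K_0^{(N)} = A_N B_N^{-1}
```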

### Proof

The proof is similar to that of the classical result for non-generalized continued fractions with coefficients in $$\mathbb{C}$$. Nevertheless, it is important, and therefore we give a full proof by induction with respect to N.

For $$N=0$$, we have $$K_{0}^{(0)}=b_{0}$$, $$A_{0}=b_{0}c_{10}^{-1}$$ and $$B_{0}=c_{10}^{-1}$$, and the statement is obviously true. Similarly, for $$N=1$$, we have $$K_{0}^{(1)}=b_{0}+a_{01}b_{1}^{-1}c_{10}$$, $$A_{1}=b _{0}c_{10}^{-1}b_{1}c_{21}^{-1}+a_{01}c_{21}^{-1}$$ and $$B_{1}=c_{10} ^{-1}b_{1}c_{21}^{-1}$$, and again, the statement is true (note that invertibility of $$K_{1}^{(1)}$$ implies invertibility of $$b_{1}$$).

In the induction step, we assume that the statement is true for the $$(N-1)$$st approximants of all gcfs. Precisely, we assume that for all coefficients $$\tilde{b}_{n}$$, $$\tilde{a}_{nm}$$, $$\tilde{c}_{n,n-1}$$ with existing inverses $${\tilde{c}_{n,n-1}}^{-1}$$, the existence of $$({\tilde{K}}_{n}^{(N-1)} )^{-1}$$ for $$n=1,\ldots ,N-1$$ implies

$$\tilde{K}_{0}^{(N-1)}=\tilde{A}_{N-1} \tilde{B}_{N-1}^{-1},$$

where $$\tilde{K}_{n}^{(N-1)}$$, $$\tilde{A}_{N-1}$$, $$\tilde{B}_{N-1}$$ are constructed using the coefficients $$\tilde{b}_{n}$$, $$\tilde{a}_{mn}$$, and $$\tilde{c}_{n,n-1}$$.

For the gcf built up by the coefficients $$b_{n}$$, $$a_{mn}$$, and $$c_{n,n-1}$$, we assume that $$(K_{n}^{(N)} )^{-1}$$ exists for $$n=1,\ldots ,N$$. We have to prove that $$K_{0}^{(N)}=A_{N}B_{N}^{-1}$$. For this purpose, we set $$\tilde{b}_{n}=b_{n}$$ for $$n< N-1$$, $$\tilde{a}_{nm}=a_{nm}$$ for $$n< m< N-1$$, $$\tilde{c}_{n,n-1}=c_{n,n-1}$$ for $$n< N$$, and

$$\tilde{b}_{N-1}=b_{N-1}+a_{N-1,N}b_{N}^{-1}c_{N,N-1}, \qquad \tilde{a} _{n,N-1}=a_{n,N-1}+a_{nN}b_{N}^{-1}c_{N,N-1}$$

for $$n< N-1$$. By the choice of $$\tilde{b}_{N-1}$$, we have $$\tilde{K} _{N-1}^{(N-1)}=K_{N-1}^{(N)}$$, and iteratively, the choice of $$\tilde{a}_{n,N-1}$$ yields

\begin{aligned} \tilde{K}_{n}^{(N-1)} =&\tilde{b}_{n}+\sum _{m=n+1}^{N-1}\tilde{a}_{nm} \prod _{n+1}^{\ell =m} \bigl( \bigl( \tilde{K}_{\ell }^{(N-1)} \bigr) ^{-1}\tilde{c}_{\ell ,\ell -1} \bigr) \\ =&b_{n}+\sum_{m=n+1}^{N-1}a_{nm} \prod_{n+1}^{\ell =m} \bigl( \bigl(K _{\ell }^{(N)} \bigr)^{-1}c_{\ell ,\ell -1} \bigr) \\ &{} +a_{n,N}b_{N}^{-1}c_{N,N-1}\prod _{n+1}^{\ell =N-1} \bigl( \bigl(K_{ \ell }^{(N)} \bigr)^{-1}c_{\ell ,\ell -1} \bigr) \\ =&b_{n}+\sum_{m=n+1}^{N}a_{nm} \prod_{n+1}^{\ell =m} \bigl( \bigl(K _{\ell }^{(N)} \bigr)^{-1}c_{\ell ,\ell -1} \bigr)=K_{n}^{(N)} \end{aligned}

for all $$n\leq N-2$$. Therefore, if $$(K_{n}^{(N)} )^{-1}$$ exists for $$n=1,\ldots ,N$$, so does $$({\tilde{K}}_{n}^{(N-1)} ) ^{-1}$$ for $$n=1,\ldots ,N-1$$, and we can apply the induction hypothesis, that is, we obtain

\begin{aligned} K_{0}^{(N)} =&\tilde{K}_{0}^{(N-1)}= \tilde{A}_{N-1}\tilde{B}_{N-1} ^{-1}= \tilde{A}_{N-1}\tilde{c}_{N,N-1} (\tilde{B}_{N-1} \tilde{c} _{N,N-1} )^{-1} \\ =& \Biggl(\tilde{A}_{N-2}\tilde{b}_{N-1}+\sum _{m=0}^{N-2}\tilde{A} _{m-1} \tilde{a}_{m,N-1} \Biggr) \Biggl(\tilde{B}_{N-2} \tilde{b}_{N-1}+ \sum_{m=1}^{N-2} \tilde{B}_{m-1}\tilde{a}_{m,N-1} \Biggr)^{-1}. \end{aligned}

Obviously, we have $$\tilde{A}_{m}=A_{m}$$ for all $$m\leq N-2$$, and therefore, the numerator can be written as

\begin{aligned}& \tilde{A}_{N-2}\tilde{b}_{N-1}+\sum _{m=0}^{N-2}\tilde{A}_{m-1} \tilde{a}_{m,N-1} \\& \quad = A_{N-2}b_{N-1}+\sum_{m=0}^{N-2}A_{m-1}a_{m,N-1}+A_{N-2}a_{N-1,N}b _{N}^{-1}c_{N,N-1} \\& \qquad {}+\sum_{m=0}^{N-2}A_{m-1}a_{m,N}b_{N}^{-1}c_{N,N-1} \\& \quad = A_{N-1}c_{N,N-1}+A_{N-2}a_{N-1,N}b_{N}^{-1}c_{N,N-1}+ \sum_{m=0} ^{N-2}A_{m-1}a_{m,N}b_{N}^{-1}c_{N,N-1} \\& \quad = \Biggl(A_{N-1}b_{N}+\sum _{m=0}^{N-1}A_{m-1}a_{mN} \Biggr)b_{N}^{-1}c _{N,N-1} \\& \quad = A_{N}c_{N+1,N}b_{N}^{-1}c_{N,N-1}. \end{aligned}

Note that again, the invertibility of $$K_{N}^{(N)}$$ implies the invertibility of $$b_{N}$$. Of course, we can deal with the denominator in the same way, and finally, we obtain

$$K_{0}^{(N)}=A_{N}c_{N+1,N}b_{N}^{-1}c_{N,N-1} \bigl(B_{N}c_{N+1,N}b_{N} ^{-1}c_{N,N-1} \bigr) ^{-1}=A_{N}B_{N}^{-1}.$$

□

Theorem 2 allows an alternative definition.

### Definition 2

Let $$b_{n},c_{n,n-1},a_{nm}\in \mathcal{R}$$ for some Banach algebra $$\mathcal{R}$$ with unity I, let $$c_{n,n-1}^{-1}$$ exist for all $$n\in \mathbb{N}$$, and let the sequences $$(A_{n})_{n\geq -1}$$, $$(B_{n})_{n\geq 0}$$ be defined by (17) and (18), respectively. If $$B_{N}^{-1}$$ exists for almost all $$N\in \mathbb{N}$$ and if $$K=\lim_{N\to \infty }A_{N}B_{N}^{-1}$$ converges, we refer to K as a convergent ugcf.

We use the letter u to put emphasis on the relationship with upper Hessenberg matrices. Obviously, Definitions 1 and 2 are not equivalent. In some way, Definition 1 is much more general. Even if $$c_{nk}=0$$ for all $$k< n-1$$, it is not clear how to use (17) and (18) if $$c_{n,n-1}^{-1}$$ does not exist. As soon as the existence of $$(K_{n}^{(N)} )^{-1}$$ is guaranteed for sufficiently large N, gcfs defined in Definition 1 include ugcfs defined in Definition 2. However, there are some examples where ugcfs are well-defined, whereas the corresponding gcfs are not well-defined. This effect is well known for non-generalized continued fractions in $$\mathbb{C}$$, see . A simple example is as follows: Let $$a_{nm}=0$$ for $$m>n+1$$, $$c_{nk}=0$$ for $$k< n-1$$, and let $$b_{0}=b_{1}=b_{2}=b_{3}=1$$, $$a_{01}=a_{12}=1$$, $$a_{23}=-1$$, and $$c_{10}=c_{21}=c_{32}=c_{43}=1$$. Then $$K_{0}^{(3)}$$ is the non-generalized finite continued fraction

$$1+\frac{1}{1+\frac{1}{1+\frac{-1}{1}}}$$

in $$\mathbb{C}$$, and since $$K_{2}^{(3)}=0$$, $$K_{0}^{(3)}$$ is not well-defined according to Definition 1. On the other hand, $$A_{-1}=A_{0}=1$$, $$A_{1}=2$$, $$A_{2}=3$$, $$A_{3}=1$$, and $$B_{0}=B_{1}=1$$, $$B_{2}=2$$, $$B_{3}=1\neq 0$$, that is, $$K_{0}^{(3)}=\frac{A _{3}}{B_{3}}=1$$ is well-defined according to Definition 2.
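The counterexample can be checked numerically. The following sketch (function name ours) implements scalar versions of the recursions (17) and (18) and reproduces the values stated above:

```python
def ugcf_numerators_denominators(b, a, c, N):
    """Scalar versions of recursions (17) and (18).

    b[n] = b_n, a[(m, n)] = a_{mn} (missing keys = 0), c[n] = c_{n,n-1}.
    Returns (A, B) with A[n], B[n] for n = 0..N (and A[-1] = 1 internally).
    """
    A = {-1: 1.0}
    B = {}
    for n in range(N + 1):
        rhs = A[n - 1] * b[n] + sum(A[m - 1] * a.get((m, n), 0.0) for m in range(n))
        A[n] = rhs / c[n + 1]
    B[0] = 1.0 / c[1]
    for n in range(1, N + 1):
        rhs = B[n - 1] * b[n] + sum(B[m - 1] * a.get((m, n), 0.0) for m in range(1, n))
        B[n] = rhs / c[n + 1]
    return A, B

# coefficients of the counterexample from the text
b = {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0}
a = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): -1.0}
c = {1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0}
A, B = ugcf_numerators_denominators(b, a, c, 3)
print(A[3], B[3], A[3] / B[3])  # 1.0 1.0 1.0 -- the ugcf value exists
```

The intermediate values $$A_{2}=3$$, $$B_{2}=2$$ also agree with the text, while $$K_{2}^{(3)}=0$$ makes the corresponding gcf undefined.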

### Remark 4

Under the assumptions of this subsection, the approximants to $$L_{\ell ,1,0}$$ can be represented in a similar manner: Define $$B_{n}^{(\ell )}$$ by $$B_{\ell }^{(\ell )}c_{\ell +1,\ell }=I$$ and

$$B_{n}^{(\ell )}c_{n+1,n}=B_{n-1}^{(\ell )}b_{n}+ \sum_{m=\ell +1}^{n-1}B _{m-1}^{(\ell )}a_{mn}, \quad n>\ell .$$

If $$L_{\ell ,1,0}^{(N)}$$ is well-defined, we have $$L_{\ell ,1,0}^{(N)}=B _{N}^{(\ell )}B_{N}^{-1}$$; the proof is almost identical to that of Theorem 2.

### Remark 5

For $$c_{mk}=0$$ for $$k\leq m-2$$, (14) simplifies to

$$c_{m,m-1}x_{m-1}=b_{m}x_{m}+\sum _{n=m+1}^{\infty }a_{mn}x_{n},$$
(19)

which was referred to as sum equation in the introduction. According to Remark 2, a solution might be given by $$x_{m}=L_{m,1,0}x_{0}$$ for $$m\geq 1$$.

## Generalized continued fractions in the literature

A major goal of this paper is demonstrating that many generalizations found in the literature are covered by our definitions. That way, it becomes clear that convergence criteria for our kind of gcfs/ugcfs (as will be proved in Sect. 5) include convergence criteria for other generalizations of continued fractions.

### Non-generalized continued fractions in Banach algebras

As pointed out above, the recursion scheme for $$K_{n}^{(N)}$$ and $$L_{m,n,k}^{(N)}$$ simplifies if Q is tridiagonal. Equivalently, we can assume that $$a_{nm}=0$$ for $$m>n+1$$ and $$c_{nk}=0$$ for $$k< n-1$$. From (16), we easily obtain

$$K_{n}^{(N)}=b_{n}+a_{n,n+1} \bigl(K_{n+1}^{(N)} \bigr)^{-1}c_{n+1,n}, \quad 0 \leq n< N.$$

Such continued fractions have been studied in [2, 11, 39]. If $$c_{n+1,n}^{-1}$$ exists for all $$n\in \mathbb{N}_{0}$$, we obtain a special case of Definition 2, where $$B_{n}$$ and $$A_{n}$$ both meet the recurrence relation $$X_{n}c_{n+1,n}=X_{n-1}b_{n}+X_{n-2}a_{n-1,n}$$. Replacing $$b_{n}$$ by $$b_{n}c_{n+1,n}^{-1}$$ and $$a_{n-1,n}$$ by $$a_{n}:=a_{n-1,n}c _{n+1,n}^{-1}$$ yields the recursion $$X_{n}=X_{n-1}b_{n}+X_{n-2}a_{n}$$. Such continued fractions in non-commutative structures have been introduced in [12, 13, 33, 45, 46], further convergence results can be found in [1, 19, 25, 37, 39, 41, 47].
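In the matrix case, the recursion $$X_{n}=X_{n-1}b_{n}+X_{n-2}a_{n}$$ is straightforward to implement. The following sketch (function name and random data ours, not from the cited works) computes approximants $$A_{N}B_{N}^{-1}$$ and checks that they stabilize for Pringsheim-flavoured coefficients with $$b_{n}=3I$$ and small $$\Vert a_{n}\Vert$$:

```python
import numpy as np

def cf_matrix_approximant(bs, as_):
    """A_N B_N^{-1} for the three-term recurrence X_n = X_{n-1} b_n + X_{n-2} a_n.

    bs = [b_0, ..., b_N], as_ = [a_1, ..., a_N] (matrix-valued sample data).
    """
    d = bs[0].shape[0]
    A_prev, A = np.eye(d), bs[0]             # A_{-1} = I, A_0 = b_0
    B_prev, B = np.zeros((d, d)), np.eye(d)  # B_{-1} = 0, B_0 = I
    for b, a in zip(bs[1:], as_):
        A, A_prev = A @ b + A_prev @ a, A
        B, B_prev = B @ b + B_prev @ a, B
    return A @ np.linalg.inv(B)

rng = np.random.default_rng(1)
d, N = 2, 60
# b_n = 3 I dominates the small random a_n, so the approximants settle quickly
bs = [3.0 * np.eye(d) for _ in range(N + 1)]
as_ = [rng.uniform(-0.5, 0.5, (d, d)) for _ in range(N)]
K1 = cf_matrix_approximant(bs[:41], as_[:40])
K2 = cf_matrix_approximant(bs, as_)
print(np.max(np.abs(K1 - K2)))  # tiny: the approximants have stabilized
```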

### Perron’s finite and infinite Jacobi chains

For non-generalized continued fractions in $$\mathbb{C}$$, using the recursions for $$A_{n}$$ and $$B_{n}$$ dates back at least to the eighteenth century. An early generalization of the recurrence scheme for $$A_{n}$$ and $$B_{n}$$ is due to Jacobi. In , he established the fundamentals for the Jacobi–Perron algorithm, which was extended by the work of Perron [27, 29] (a discussion on all publications on this topic is beyond the scope of this paper, we refer to  for an early extensive study on the Jacobi–Perron algorithm). With our notation, Perron defined a Jacobi chain of order n by

\begin{aligned}& B_{N}^{(\ell )} = \delta _{N\ell }, \quad N,\ell =0,\ldots ,n, \\& B_{N}^{(\ell )} = B_{N-1}^{(\ell )}b_{N}+ \sum_{m=N-n}^{N-1}B_{m-1} ^{(\ell )}a_{mN}, \quad N>n,\ell =0,\ldots ,n, \\& K = \biggl(\lim_{N\to \infty }\frac{a_{0\ell }B_{N}^{(\ell )}}{B_{N} ^{(0)}} \biggr)_{\ell =1}^{n}, \end{aligned}

if the limits exist. For the coefficients $$b_{N}$$, $$a_{mN}$$, he allowed arbitrary complex numbers up to the condition $$a_{N-n,N}\neq 0$$ for all $$N>n$$. In contrast to our definition of gcfs or ugcfs, K is n-dimensional in this situation. Nevertheless, we can identify the $$\ell $$th entry $$\lim_{N\to \infty }\frac{a_{0\ell }B_{N}^{( \ell )}}{B_{N}^{(0)}}$$ as a ugcf in the sense of Definition 2 by setting $$c_{N,N-1}=1$$, $$a_{mN}=0$$ for $$N-m>n$$ and $$a_{0N}=0$$ for $$N\in \{1,\ldots ,n\}\setminus \{\ell \}$$. Hence, Perron’s Jacobi chains can be interpreted as n-dimensional vectors of ugcfs in $$\mathbb{C}$$.
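A small numerical sketch may clarify the construction (the function name and the periodic sample coefficients $$b_{N}=4$$, $$a_{mN}=1$$ are ours). It computes the $$B_{N}^{(\ell )}$$ recursively and returns the vector of Nth approximants; for these positive, diagonally dominant data, approximants at different depths agree closely:

```python
def jacobi_chain(b, a, n, N_max):
    """Perron's Jacobi chain of order n, scalar coefficients.

    b[N] = b_N for N > n; a[(m, N)] = a_{mN}; additionally a[(0, l)] = a_{0l}
    for l = 1..n.  Returns (a_{0l} B_N^{(l)} / B_N^{(0)})_{l=1..n} at N = N_max.
    """
    B = {(N, l): float(N == l) for N in range(n + 1) for l in range(n + 1)}
    for N in range(n + 1, N_max + 1):
        for l in range(n + 1):
            B[(N, l)] = B[(N - 1, l)] * b[N] + sum(
                B[(m - 1, l)] * a[(m, N)] for m in range(N - n, N))
    return [a[(0, l)] * B[(N_max, l)] / B[(N_max, 0)] for l in range(1, n + 1)]

# periodic sample data of order n = 2: b_N = 4, a_{mN} = 1
n, depth = 2, 60
b = {N: 4.0 for N in range(n + 1, depth + 1)}
a = {(m, N): 1.0 for N in range(n + 1, depth + 1) for m in range(N - n, N)}
a[(0, 1)] = a[(0, 2)] = 1.0
K40 = jacobi_chain(b, a, n, 40)
K60 = jacobi_chain(b, a, n, depth)
print(max(abs(x - y) for x, y in zip(K40, K60)))  # consecutive approximants agree
```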

Continued fractions are capable of characterizing subdominant solutions of second-order difference equations (see [32, Sect. 20]). Amongst others, a reason for considering Jacobi chains of order n is that they are strongly related to an n-dimensional subspace S of the solutions of a difference equation of order $$n+1$$ where all solutions in S are dominated by all other solutions. In , Perron intended to generalize this concept to ‘sum equations’ (literal translation of the German term ‘Summengleichung’), and in this context, he introduced a kind of infinite Jacobi chain, that is, he considered the recursion scheme

$$X_{N}=X_{N-1}b_{N}+\sum _{m=0}^{N-1}X_{m-1}a_{mN}$$

with coefficients $$b_{N},a_{mN}\in \mathbb{C}$$ (up to notation, this is equation (14) in ). Although he did not explicitly construct quotients of numerators and denominators, he pointed out that this recursion scheme is a straightforward generalization of finite Jacobi chains. Obviously, the recursion coincides with (17) and (18) for $$\mathcal{R}=\mathbb{C}$$ and $$c_{N+1,N}=1$$.

### n-Fractions as introduced by de Bruin

A concept similar to Perron’s (finite) Jacobi chains is due to de Bruin [9, 10]. Up to notation, his approach is as follows: Let the (complex-valued) sequences $$(A_{N}^{(-n)} ) _{N\geq -n}$$, …, $$(A_{N}^{(-1)} )_{N\geq -n}$$, $$(B_{N})_{N\geq -n}$$ satisfy the recurrence relation

$$X_{N}=X_{N-1}b_{N}+\sum _{m=N-n}^{N-1}X_{m-1}a_{mN}, \quad N=1,2,3,\ldots ,$$

subject to the initial conditions

$$\begin{pmatrix} A_{-n}^{(-n)}&A_{-n+1}^{(-n)}&\ldots &A_{-1}^{(-n)}&A_{0}^{(-n)} \\ A_{-n}^{(-n+1)}&A_{-n+1}^{(-n+1)}&\ldots &A_{-1}^{(-n+1)}&A_{0}^{(-n+1)} \\ \vdots &\vdots &\ddots &\vdots &\vdots \\ A_{-n}^{(-1)}&A_{-n+1}^{(-1)}&\ldots &A_{-1}^{(-1)}&A_{0}^{(-1)} \\ B_{-n}&B_{-n+1}&\ldots &B_{-1}&B_{0} \end{pmatrix}= \begin{pmatrix} 1&0&\ldots &0&a_{-n+1,0} \\ 0&1&\ldots &0&a_{-n+2,0} \\ \vdots &\vdots &\ddots &\vdots &\vdots \\ 0&0&\ldots &1&b_{0} \\ 0&0&\ldots &0&1 \end{pmatrix}.$$

Then define the n-fraction

$$K=\lim_{N\to \infty } \biggl(\frac{A_{N}^{(-n)}}{B_{N}},\ldots , \frac{A _{N}^{(-1)}}{B_{N}} \biggr).$$

Again, we obtain an n-dimensional construction. However, by setting $$\mathcal{R}=\mathbb{C}$$ and $$c_{N+1,N}=1$$ for all N, we see that the last entry of K, that is, $$\lim_{N\to \infty }\frac{A_{N} ^{(-1)}}{B_{N}}$$, coincides with our definition of ugcfs.

For $$\ell >1$$, we can define $$\tilde{A}_{N}$$ by $$\tilde{A}_{-1}=I$$ and $$\tilde{A}_{N}=\tilde{A}_{N-1}\tilde{b}_{N}+\sum_{m=0}^{N-1} \tilde{A}_{m-1}\tilde{a}_{mN}$$, where $$\tilde{b}_{0}=a_{-\ell +1,0}$$, $$\tilde{a}_{0N}=a_{-\ell +1,N}$$ and $$\tilde{b}_{N}=b_{N}$$, $$\tilde{a} _{mN}=a_{mN}$$ for $$m\geq 1$$. Then, due to $$A_{N}^{(-\ell )}=0$$ for $$N=-\ell +1,\ldots ,-1$$, we have $$\tilde{A}_{N}=A_{N}^{(-\ell )}$$ for all $$N\geq 0$$, and thus, we have a representation

$$A_{N}^{(-\ell )}B_{N}^{-1}= \tilde{A}_{N}B_{N}^{-1},$$

where $$\lim_{N\to \infty }\tilde{A}_{N}B_{N}^{-1}$$ is a ugcf in the sense of Definition 2. Therefore, we can interpret de Bruin’s n-fractions as an n-dimensional vector of ugcfs, too.

### Matrix continued fractions as defined by Levrie and Bultheel

Another kind of generalized continued fractions in Banach algebras is due to Levrie and Bultheel . Up to notation, their idea is as follows: The recurrence relation $$X_{n}=X_{n-1}b_{n}+X _{n-2}a_{n}$$ for numerators $$A_{n}$$ and denominators $$B_{n}$$ of non-generalized continued fractions can be written as

$$\begin{pmatrix} A_{n}&A_{n-1} \\ B_{n}&B_{n-1} \end{pmatrix}= \begin{pmatrix} A_{n-1}&A_{n-2} \\ B_{n-1}&B_{n-2} \end{pmatrix}\cdot \begin{pmatrix} b_{n}&I \\ a_{n}&0 \end{pmatrix},$$

subject to

$$\begin{pmatrix} A_{0}&A_{-1} \\ B_{0}&B_{-1} \end{pmatrix}= \begin{pmatrix} b_{0}&I \\ I&0 \end{pmatrix}.$$

Therefore, a kind of generalization of continued fractions can be obtained from defining

$$\begin{pmatrix} A_{n}&C_{n} \\ B_{n}&D_{n} \end{pmatrix}=\prod_{k=0}^{n} \begin{pmatrix} b_{k}&c_{k} \\ a_{k}&d_{k} \end{pmatrix}=\prod_{k=0}^{n} \theta _{k}.$$

Levrie and Bultheel referred to $$\lim_{n\to \infty }A_{n} B _{n}^{-1}$$ as ‘matrix continued fraction’. They assumed $$b_{k}$$, $$c_{k}$$, $$a _{k}$$, $$d_{k}$$ to be matrices with dimensions independent of k such that $$\theta _{k}$$ is a square matrix. Here, we assume that all coefficients are elements of some Banach algebra $$\mathcal{R}$$. In order to guard against misunderstandings (in the literature, the term ‘matrix continued fractions’ is also used for continued fractions with matrix-valued coefficients, see [33, 37, 41, 47]), we prefer to refer to Levrie and Bultheel’s construction as LB-fractions.

Actually, LB-fractions are strongly related to ugcfs since

\begin{aligned}& \begin{aligned} A_{n} & = A_{n-1}b_{n}+C_{n-1}a_{n} \\ & = A_{n-1}b_{n}+ (A_{n-2}c_{n-1}+C_{n-2}d_{n-1} )a_{n}= \cdots \\ & = A_{n-1}b_{n}+A_{n-2}c_{n-1}a_{n}+A_{n-3}c_{n-2}d_{n-1}a_{n}+A_{n-4}c _{n-3}d_{n-2}d_{n-1}a_{n} \\ &\quad {} +\cdots +A_{0}c_{1}d_{2}\cdots d_{n-1}a_{n}+C_{0}d_{1}\cdots d_{n-1}a _{n} \quad \mbox{and} \end{aligned} \\& \begin{aligned} B_{n} & = B_{n-1}b_{n}+B_{n-2}c_{n-1}a_{n}+B_{n-3}c_{n-2}d_{n-1}a_{n}+B _{n-4}c_{n-3}d_{n-2}d_{n-1}a_{n} \\ &\quad {} +\cdots +B_{0}c_{1}d_{2}\cdots d_{n-1}a_{n}+D_{0}d_{1}\cdots d_{n-1}a _{n}. \end{aligned} \end{aligned}

Hence, with

$$a_{mn}=c_{m}\prod_{k=m+1}^{n-1}d_{k} a_{n},$$

$$(A_{n})$$ and $$(B_{n})$$ satisfy (17) and (18), respectively, at least for $$C_{0}=A_{-1}$$ and $$D_{0}=0$$. Therefore, if $$C_{0}=c_{0}=I$$, $$B_{0}=a_{0}=I$$, and $$D_{0}=d_{0}=0$$, we obtain that the LB-fraction $$\lim_{n\to \infty }A_{n}B_{n}^{-1}$$ is a ugcf in the sense of Definition 2.
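This identification is easy to test numerically in the scalar case. The sketch below (function names and random data ours) builds $$A_{n}$$, $$B_{n}$$ once via the 2×2 products and once via the recursions (17)/(18) with $$c_{n+1,n}=1$$ and $$a_{mn}=c_{m}d_{m+1}\cdots d_{n-1}a_{n}$$, assuming $$c_{0}=a_{0}=1$$ and $$d_{0}=0$$:

```python
import numpy as np

def lb_products(bs, cs, as_, ds):
    """(A_n, B_n) from left-to-right products of theta_k = [[b_k, c_k], [a_k, d_k]]."""
    M = np.eye(2)
    out = []
    for b, c, a, d in zip(bs, cs, as_, ds):
        M = M @ np.array([[b, c], [a, d]])
        out.append((M[0, 0], M[1, 0]))
    return out

def lb_via_ugcf(bs, cs, as_, ds):
    """Same (A_n, B_n) via (17)/(18) with c_{n+1,n} = 1 and
    a_{mn} = c_m * d_{m+1} * ... * d_{n-1} * a_n (requires c_0 = a_0 = 1, d_0 = 0)."""
    def a_mn(m, n):
        p = cs[m]
        for k in range(m + 1, n):
            p *= ds[k]
        return p * as_[n]
    A, B = {-1: 1.0, 0: bs[0]}, {0: 1.0}
    for n in range(1, len(bs)):
        A[n] = A[n - 1] * bs[n] + sum(A[m - 1] * a_mn(m, n) for m in range(n))
        B[n] = B[n - 1] * bs[n] + sum(B[m - 1] * a_mn(m, n) for m in range(1, n))
    return [(A[n], B[n]) for n in range(len(bs))]

rng = np.random.default_rng(4)
m = 8
bs = [0.7] + list(rng.uniform(-1, 1, m))
cs = [1.0] + list(rng.uniform(-1, 1, m))   # c_0 = 1
as_ = [1.0] + list(rng.uniform(-1, 1, m))  # a_0 = 1
ds = [0.0] + list(rng.uniform(-1, 1, m))   # d_0 = 0
P, U = lb_products(bs, cs, as_, ds), lb_via_ugcf(bs, cs, as_, ds)
print(max(abs(p[0] - u[0]) + abs(p[1] - u[1]) for p, u in zip(P, U)))  # ~ 0
```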

### Special cases of system (14)

As pointed out in the introduction, continued fractions and their generalizations are capable of characterizing certain subspaces of subdominant solutions for certain systems of equations:

• It is well known that continued fractions

$$b_{0}+\frac{a_{1}}{b_{1}+\frac{a_{2}}{b_{2}+\ddots }}$$

are capable of characterizing subdominant solutions of second-order difference equations, see [32, Sect. 20].

• In , two-sided matrix-valued continued fractions were used for solving a second-order matrix-vector difference equation.

• Perron’s Jacobi chains of order n can characterize the n-dimensional subspace of non-dominant solutions of a difference equation of order $$n+1$$, see .

• Up to notation, in , Perron used (17) and (18) in the context of characterizing (non-dominant) solutions of sum equations.

• In [17, 43], a relationship between de Bruin’s n-fractions and subdominant solutions of difference equations of order $$n+1$$ was established.

• In , the relationship between n-fractions and subdominant solutions of difference equations was used for proving that invariant measures of recurrent Markov chains with a certain transition structure are subdominant solutions of a difference equation, and therefore, naive computation schemes are subject to numerical instability. A stable alternative computation scheme based on n-fractions was suggested.

Difference equations of order 2 or of arbitrary order n are special cases of (14); simply set $$c_{mk}=0$$ for $$k\leq m-2$$ and $$a_{mk}=0$$ for $$k\geq m+n$$. Since gcfs might provide solutions to (14), Definition 1 follows one of the traditional motivations of dealing with continued fractions.

## Convergence theory

In Sects. 6 and 7, we outline some (mathematical) applications of gcfs. These applications require a thorough discussion of convergence criteria. As a starting point for a convergence theory, we demonstrate that classic Pringsheim-type (sometimes referred to as Śleszyński–Pringsheim-type) criteria can be extended to gcfs.

### Pringsheim-type criteria in the literature

For complex-valued non-generalized continued fractions, Pringsheim’s convergence criterion is one of the most famous criteria. In its basic form, it guarantees convergence of

$$b_{0}+\frac{a_{1}}{b_{1}+\frac{a_{2}}{b_{2}+\ddots }}$$

if $$\vert b_{n} \vert \geq \vert a_{n} \vert +1$$ for all $$n=1,2,3,\ldots$$ (see [32, 35, 36]). By applying appropriate equivalence transformations (see ), it is seen that $$\vert \frac{a_{n}}{b_{n-1}b_{n}} \vert \leq \frac{1}{4}$$ for $$n\geq 2$$ is sufficient for convergence. This criterion is referred to as Worpitzky-type criterion, and in fact, it is older than Śleszyński–Pringsheim-type criteria. In the literature,

• for non-generalized continued fractions in Banach algebras defined by $$X_{n}=X_{n-1}b_{n}+X_{n-2}a_{n}$$ (that is, $$c_{n+1,n}=I$$), Śleszyński–Pringsheim-type and/or Worpitzky-type criteria can be found in [1, 13, 19, 25, 37, 39, 47],

• for non-generalized two-sided continued fractions in Banach algebras, such criteria were proven in [2, 11, 39],

• for (finite) complex-valued Jacobi chains, a Śleszyński–Pringsheim-type criterion can be found in ,

• for complex-valued n-fractions as defined by de Bruin, Śleszyński–Pringsheim-type criteria are proved in [21, 22].

Proving a Pringsheim-type criterion for gcfs includes all of these results. In fact, Theorem 5 below will even improve some former results. The ideas are very similar to those used in  where it was proven that

$$\bigl\Vert a_{n-1,n}b_{n}^{-1} \bigr\Vert + \Vert c_{n+1,n} \Vert \leq 1, \quad n\geq 1,$$

is sufficient for guaranteeing convergence of non-generalized continued fractions in Banach algebras. In principle, we replace this condition by

$$\sum_{m=0}^{n-1} \bigl\Vert a_{mn}b_{n}^{-1} \bigr\Vert + \sum _{m=n+1}^{\infty } \bigl\Vert c_{mn}b_{n}^{-1} \bigr\Vert \leq 1, \quad n\geq 1.$$

### A Pringsheim-type criterion for gcfs with $$b_{n}=I$$

We begin by considering gcfs with $$b_{n}=I$$ for $$n\geq 1$$. The proof of a Pringsheim-type convergence criterion for this case is based on Theorem 1. In fact, the unconditional convergence of the series $$S(\cdots )$$ is less restrictive than any result which we prove in this section. However, classical Pringsheim-type conditions are much easier to check. As a first step, we replace the entries of the matrix Q by their norms.

### Theorem 3

Let Q be defined as in Theorem 1, set $$B=( \Vert q _{mn} \Vert )_{m,n=0}^{\infty }$$, and suppose that

• $$S(B,n,n,\{n,\ldots ,N\})$$ converges for all $$n,N\in \mathbb{N}$$ with $$n\leq N$$,

• $$S(B,m,k,\{n,\ldots ,N\})$$ converges for all $$m,n,k,N\in \mathbb{N} _{0}$$ with $$k< n\leq m\leq N$$ and

• $$S(B,0,0,\mathbb{N})$$ converges.

Then the gcf defined in Definition 1 is well-defined ($$K _{n}^{(N)}$$ is invertible for all $$N\geq n\geq 1$$) and converges with

$$\bigl\Vert K_{0}^{(N)}-K \bigr\Vert \leq S(B,0,0, \mathbb{N})-S\bigl(B,0,0, \{1,\ldots ,N\}\bigr).$$

### Proof

By submultiplicativity of $$\Vert \cdot \Vert$$, convergence of

$$S(B,i,j,A)= \sum_{\substack{\ell \in \mathbb{N} \\ i_{0},\ldots ,i_{\ell }\in \mathbb{N}_{0} \\ i_{0}=i, i_{\ell }=j \\ i_{1},\ldots ,i_{\ell -1}\in A}}\prod _{r=1}^{\ell } \Vert q _{i_{r-1},i_{r}} \Vert$$

implies absolute convergence of

$$S(Q,i,j,A)= \sum_{\substack{\ell \in \mathbb{N} \\ i_{0},\ldots ,i_{\ell }\in \mathbb{N}_{0} \\ i_{0}=i, i_{\ell }=j \\ i_{1},\ldots ,i_{\ell -1}\in A}}\prod _{r=1}^{\ell }q_{i_{r-1},i_{r}}.$$

Since absolute convergence implies unconditional convergence, all requirements of Theorem 1 are met, and hence, we have convergence of the gcf. Regarding the bound for $$\Vert K_{0} ^{(N)}-K \Vert$$, we observe that every summand of $$S(Q,0,0,\{1,\ldots ,N\})$$ occurs in $$S(Q,0,0,\mathbb{N})$$, too, and hence

$$K_{0}^{(N)}-K=S(Q,0,0,\mathbb{N})-S\bigl(Q,0,0,\{1,\ldots ,N \}\bigr)$$

is still a sum of products of the $$q_{mn}$$. Applying the submultiplicativity of $$\Vert \cdot \Vert$$ again yields the desired bound. □

Note that since the matrix B in Theorem 3 has entries in $$\mathbb{R}_{\geq 0}$$, the notions of ‘convergence’, ‘absolute convergence’, and ‘unconditional convergence’ coincide for the series $$S(B,\ldots )$$. Furthermore, the multiplication in $$\mathbb{R}$$ is commutative, and therefore, $$S(B,j,i,A)=S(B^{T},i,j,A)$$ for all i, j, A.

If $$b_{n}=I$$ for all $$n\geq 1$$ and if the above Pringsheim-type condition holds, $$B^{T}$$ is ‘near to stochastic’. In the proof of Theorem 4 below, we will be more precise. As a preparation, we show the following.

### Lemma 2

Let $$P=(p_{mn})_{m,n=0}^{\infty }$$ be a stochastic matrix, and for all $$m\geq 1$$, let there be some $$n>m$$ with $$p_{mn}>0$$. Then

• $$S(P,n,n,\{n,\ldots ,N\})$$ converges for all $$n,N\in \mathbb{N}$$ with $$n\leq N$$,

• $$S(P,k,m,\{n,\ldots ,N\})$$ converges for all $$m,n,k,N\in \mathbb{N} _{0}$$ with $$k< n\leq m\leq N$$, and

• $$S(P,0,0,\mathbb{N})$$ converges with $$S(P,0,0,\mathbb{N})\leq 1$$.

### Proof

Let $$(X_{\ell })_{\ell =0}^{\infty }$$ be a Markov chain in discrete time with states $$\mathbb{N}_{0}$$ and transition probability matrix P. Then

$$S(P,i,j,A)=\sum_{\ell =1}^{\infty } \mathbb{P}(X_{\ell }=j, X_{\ell -1}, \ldots ,X_{1}\in A|X_{0}=i).$$

In particular, $$S(P,0,0,\mathbb{N})$$ is a sum of probabilities of disjoint events, and therefore, it converges to some value $$\in [0,1]$$ (the return probability to state 0).

Now, fix $$k< n\leq m\leq N$$. Since the values of $$p_{ij}$$ for $$i>N$$ have no impact on the series $$S(P,n,n,\{n,\ldots ,N\})$$ or $$S(P,k,m,\{n, \ldots ,N\})$$, we change the entries of P in such a way that $$p_{ij}=0$$ for $$i>N$$ and $$i>j$$. By $$p_{ij}^{(\ell )}$$, we denote the entries of $$P^{\ell }$$. For Markov chains, state j is said to be accessible from state i if $$p_{ij}^{(\ell )}>0$$ for some $$\ell \in \mathbb{N}_{0}$$. Due to the assumptions on the entries of P, for all $$n\geq 1$$, some state $$i>N$$ is accessible, but conversely, for states $$i>N$$, no state $$n\leq N$$ is accessible. As a consequence, all states $$n\in \{1,\ldots ,N\}$$ are transient (or even inessential in some terminology, see, e.g., ). A standard result for Markov chains (e.g., ) guarantees that

$$\sum_{\ell =0}^{\infty }p_{mn}^{(\ell )} \leq \sum_{ \ell =0}^{\infty }p_{nn}^{(\ell )}< \infty .$$

In particular, we have

$$S\bigl(P,n,n,\{n,\ldots ,N\}\bigr)\leq \sum_{\ell =1}^{\infty } \mathbb{P}(X_{ \ell }=n|X_{0}=n)=\sum _{\ell =1}^{\infty }p_{nn}^{(\ell )}< \infty .$$

Furthermore, the transience of m guarantees

\begin{aligned} S\bigl(P,k,m,\{n,\ldots ,N\}\bigr) =&p_{km}+\sum _{j=n}^{N} p_{kj}S\bigl(P,j,m,\{n, \ldots ,N\}\bigr) \\ \leq &p_{km}+\sum_{j=n}^{N} p_{kj}\sum_{\ell =1}^{\infty }p_{jm}^{( \ell )}< \infty . \end{aligned}

□
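The probabilistic identities in the proof can be illustrated on a finite state space (the random matrix below is our own toy example, not from the text). For a finite irreducible stochastic matrix, the taboo decomposition evaluates $$S(P,0,0,\{1,2,\ldots \})$$ as the return probability to state 0, which here equals 1:

```python
import numpy as np

rng = np.random.default_rng(3)

# random stochastic matrix on states 0, ..., n-1 (finite toy version of P)
n = 6
P = rng.uniform(size=(n, n))
P /= P.sum(axis=1, keepdims=True)

# S(P,0,0,{1,...}) = p_00 + sum_{i,j>=1} p_0i (sum_l Psub^l)_{ij} p_j0,
# where Psub = P restricted to the taboo set {1, ..., n-1}
Psub = P[1:, 1:]
G = np.linalg.inv(np.eye(n - 1) - Psub)  # Green matrix sum_{l>=0} Psub^l
S = P[0, 0] + P[0, 1:] @ G @ P[1:, 0]
print(S)  # return probability to state 0; equals 1 for a finite irreducible chain
```

In the setting of Lemma 2, escape to ever larger states is possible, so the corresponding return probability is merely bounded by 1 rather than equal to it.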

### Theorem 4

Let $$b_{n}=I$$ for $$n\geq 1$$, and for all $$m\geq 1$$, let there be some $$n>m$$ with $$c_{nm}\neq 0$$. Furthermore, let

$$\sum_{m=1}^{\infty } \Vert c_{m0} \Vert \leq 1,$$
(20)

and let

$$\sum_{m=0}^{n-1} \Vert a_{mn} \Vert +\sum_{m=n+1}^{\infty } \Vert c_{mn} \Vert \leq 1, \quad n\in \mathbb{N}.$$
(21)

Then the gcf K defined in Definition 1 is well-defined ($$K_{n}^{(N)}$$ is invertible for all $$N\geq n\geq 1$$) and converges with

$$\Vert K-b_{0} \Vert \leq 1.$$

### Proof

Construct Q as in Theorem 1 and B as in Theorem 3, and set $$P=B^{T}$$. Then (20) and (21) guarantee that P is stochastic. Finally, $$c_{nm}\neq 0$$ implies $$p_{mn}>0$$, and hence, the conditions of Lemma 2 are met. Due to $$S(P,i,j,A)=S(B^{T},j,i,A)$$, the conditions of Theorem 3 are satisfied, guaranteeing convergence of K with

$$\bigl\Vert K-K_{0}^{(0)} \bigr\Vert \leq S(B,0,0, \mathbb{N})-S(B,0,0, \emptyset )\leq 1-0=1.$$

□

### Equivalence transformations

In order to obtain a more general formulation of a Pringsheim-type criterion, we introduce equivalence transformations. Note that the use of equivalence transformations is not restricted to the proof of the Pringsheim-type criterion below. For the special cases discussed in Sect. 4, equivalence transformations have been introduced in the corresponding literature, and here, we use straightforward generalizations. Let us set

$$\tilde{b}_{n}=\lambda _{n} b_{n}\rho _{n}^{-1},\qquad \tilde{a}_{nm}=\lambda _{n} a_{nm} \rho _{m}^{-1}, \qquad \tilde{c}_{mn}=\lambda _{m} c_{mn}\rho _{n} ^{-1}$$

for $$m,n\in \mathbb{N}_{0}$$ with $$n>m$$, where $$(\lambda _{n})_{n=0} ^{\infty }$$ and $$(\rho _{n})_{n=0}^{\infty }$$ are two sequences of invertible elements, and define $${\tilde{K}}_{n}^{(N)}$$ by using $$\tilde{b}_{n},\ldots$$ instead of $$b_{n},\ldots$$ . Using the corresponding notation for $$\tilde{L}_{m,n,k}^{(N)}$$, a simple induction shows that

$$\tilde{K}_{n}^{(N)}=\lambda _{n} K_{n}^{(N)}\rho _{n}^{-1} \quad \mbox{and} \quad \tilde{L}_{m,n,k}^{(N)}=\rho _{m} L_{m,n,k}^{(N)}\rho _{k}^{-1}.$$

As a direct consequence, we obtain

### Lemma 3

$$({\tilde{K}}_{n}^{(N)} )^{-1}$$ exists if and only if $$(K_{n}^{(N)} )^{-1}$$ exists. Furthermore, $$\tilde{K}= \lim_{N\to \infty }{\tilde{K}}_{0}^{(N)}$$ converges if and only if $$K=\lim_{N\to \infty }K_{0}^{(N)}$$ converges. In case of convergence, we have $$\tilde{K}=\lambda _{0} K\rho _{0}^{-1}$$.

### A general Pringsheim criterion

Finally, we have all preparations for proving our desired Pringsheim-type criterion.

### Theorem 5

Let $$b_{n}^{-1}$$ exist for $$n\geq 1$$, and for all $$m\geq 1$$, let there be some $$n>m$$ with $$c_{nm}\neq 0$$. Furthermore, let

$$\sum_{m=1}^{\infty } \Vert c_{m0} \Vert < \infty ,$$
(22)

and let

$$\sum_{m=0}^{n-1} \bigl\Vert a_{mn}b_{n}^{-1} \bigr\Vert + \sum _{m=n+1}^{\infty } \bigl\Vert c_{mn}b_{n}^{-1} \bigr\Vert \leq 1, \quad n\in \mathbb{N}.$$
(23)

Then the gcf K defined in Definition 1 is well-defined ($$K_{n}^{(N)}$$ is invertible for all $$N\geq n\geq 1$$) and converges with

$$\Vert K-b_{0} \Vert \leq \sum_{m=1}^{\infty } \Vert c_{m0} \Vert .$$

### Proof

The value of $$b_{0}$$ has no impact on any statement of the theorem. Therefore, we may assume without loss of generality that $$b_{0}= (\sum_{m=1}^{\infty } \Vert c_{m0} \Vert ) \cdot I$$. Now apply an equivalence transformation with $$\lambda _{n}=I$$ and $$\rho _{n}=b_{n}$$, resulting in a new gcf with coefficients $$\tilde{b}_{n}=I$$, $${\tilde{a}}_{mn}=a_{mn}b_{n}^{-1}$$, and $${\tilde{c}}_{mn}=c_{mn}b_{n}^{-1}$$, where

$$\sum_{m=0}^{n-1} \Vert { \tilde{a}}_{mn} \Vert + \sum_{m=n+1}^{\infty } \Vert {\tilde{c}}_{mn} \Vert = \sum_{m=0}^{n-1} \bigl\Vert a_{mn}b_{n}^{-1} \bigr\Vert + \sum _{m=n+1}^{\infty } \bigl\Vert c_{mn}b_{n}^{-1} \bigr\Vert \leq 1, \quad n\geq 1,$$

and

$$\sum_{m=1}^{\infty } \Vert { \tilde{c}}_{m0} \Vert = \sum_{m=1}^{\infty } \bigl\Vert c_{m0}b_{0}^{-1} \bigr\Vert \leq 1.$$

Theorem 4 guarantees that $$\tilde{K}$$ converges with $$\Vert \tilde{K}-I \Vert = \Vert \tilde{K}- {\tilde{b}}_{0} \Vert \leq 1$$. From the general results concerning equivalence transformations, we finally obtain

$$\Vert K-b_{0} \Vert = \bigl\Vert (\tilde{K}-I )b _{0} \bigr\Vert \leq \sum_{m=1}^{\infty } \Vert c _{m0} \Vert .$$

□

### Remarks on the Pringsheim-type convergence criterion

• For non-generalized continued fractions, we have $$c_{m0}=0$$ for $$m\geq 2$$, and hence, the condition $$\sum \Vert c_{m0} \Vert < \infty$$ is trivially fulfilled. Thus, our result is still a straightforward generalization of the results in  for non-generalized continued fractions in Banach algebras.

• The basic strategy for proving the Pringsheim-type criterion coincides with the strategy in  for the non-generalized case.

• For non-generalized continued fractions, a speed-of-convergence statement was proved in . For gcfs, a direct analogue requires further research. In principle, such statements should make use of the bound on $$\Vert K_{0}^{(N)}-K \Vert$$ given in Theorem 3.

• For ugcfs defined in Definition 2, $$c_{m+1,m}^{-1}$$ exists for all $$m\in \mathbb{N}_{0}$$, and in particular, $$c_{m+1,m}\neq 0$$. Furthermore, $$c_{m0}=0$$ for $$m\geq 2$$. Hence, the conditions of Theorem 5 simplify to

$$\sum_{m=0}^{n-1} \bigl\Vert a_{mn}b_{n}^{-1} \bigr\Vert + \bigl\Vert c_{n+1,n}b_{n}^{-1} \bigr\Vert \leq 1, \quad n\in \mathbb{N}.$$
(24)

Note that Theorem 5 guarantees invertibility of all $$K_{n}^{(N)}$$, and hence, the ugcf is a special case of a gcf. Therefore, (24) guarantees convergence of ugcfs.

• As demonstrated in Sect. 4, our definition of gcfs covers a wide variety of generalizations of continued fractions found in the literature. Therefore, Theorem 5 (or condition (24) for ugcfs) provides a Pringsheim-type criterion for all these constructions (and, by means of equivalence transformations, a Worpitzky-type criterion for non-generalized continued fractions), that is, all results mentioned in Sect. 5.1 are included in Theorem 5 or (24). As some of these criteria have more restrictive conditions (for example, instead of $$\cdots \leq 1$$, in  and , $$\cdots \leq 1-\epsilon$$ with some $$\epsilon >0$$ was required), our approach not only provides a unified proof, but also improves on the former statements.

• Another equivalence transformation can be applied to the Pringsheim-type conditions, transforming conditions (22) and (23) into $$\sum_{m=1}^{\infty } \Vert \lambda _{m}c_{m0} \Vert <\infty$$ and

$$\sum_{m=0}^{n-1} \bigl\Vert \lambda _{m}a_{mn}b_{n}^{-1} \lambda _{n}^{-1} \bigr\Vert +\sum_{m=n+1}^{\infty } \bigl\Vert \lambda _{m}c_{mn}b_{n}^{-1} \lambda _{n}^{-1} \bigr\Vert \leq 1, \quad n=1,2,3,\ldots ,$$
(25)

respectively, where all $$\lambda _{n}^{-1}$$ are supposed to exist. The estimate on K is transformed into

$$\bigl\Vert \lambda _{0}(K-b_{0}) \bigr\Vert \leq \sum_{m=1}^{\infty } \Vert \lambda _{m}c_{m0} \Vert .$$
(26)

### Further convergence criteria

Obviously, the literature for continued fractions provides more convergence criteria than Pringsheim-type conditions. In further research, some of these may be extended to our definition of gcfs. In particular, Pincherle-type criteria are interesting. For non-generalized continued fractions in $$\mathbb{C}$$, that is, $$K=\lim \frac{A_{n}}{B_{n}}$$ with $$(A_{n})$$ and $$(B_{n})$$ satisfying $$X_{n}=X_{n-1}b_{n}+X_{n-2}a_{n}$$, it states convergence if and only if this recurrence scheme has solutions $$(Y_{n})$$ and $$(Z_{n})$$ with $$\lim_{n\to \infty }\frac{Y_{n}}{Z_{n}}=0$$. Such criteria should be extended at least to ugcfs as defined in Definition 2, and maybe even to gcfs as defined in Definition 1. Such a result would include the corresponding results for n-fractions [43, 44] and LB-fractions .

## Periodic generalized continued fractions

Provided convergence to some value ≠0, the periodic non-generalized $$\mathbb{C}$$-valued continued fraction

$$K=b+\frac{a}{b+\frac{a}{b+\ddots }}$$

will meet $$K=b+\frac{a}{K}$$, that is, it is a solution of the quadratic equation $$x^{2}-xb-a=0$$. A popular result states that K converges if and only if the two solutions $$x_{1}$$, $$x_{2}$$ of $$x^{2}-xb-a$$ coincide or have different absolute value, and if K converges, K is the root with larger absolute value, see . Alternatively, we can consider the value $$\frac{1}{K}$$ which solves $$ay^{2}+by-1=0$$. Again, K converges if and only if the solutions $$y_{1}$$, $$y_{2}$$ coincide or have different absolute values, and in case of convergence, $$\frac{1}{K}$$ is the root with smaller absolute value, that is, the minimal root.
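Numerically, this is easy to observe (the sample values $$a=2$$, $$b=1$$ are our own choice): iterating $$K\mapsto b+\frac{a}{K}$$ converges to the root of $$x^{2}-xb-a=0$$ with larger absolute value:

```python
a, b = 2.0, 1.0

# backward evaluation of the periodic continued fraction b + a/(b + a/(...))
K = b
for _ in range(200):
    K = b + a / K

# roots of x^2 - x*b - a = 0: here x1 = 2 (dominant), x2 = -1
disc = (b * b + 4 * a) ** 0.5
x1, x2 = (b + disc) / 2, (b - disc) / 2
print(K, x1)  # K coincides with the root of larger modulus, x1 = 2.0
```

The error contracts by the factor $$\vert x_{2}/x_{1}\vert $$ per iteration, which is why the two values agree to machine precision here.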

Now consider our setting, and let $$c_{nk}=0$$ for $$k< n-1$$, let $$c_{n,n-1}=-\alpha _{0}$$, $$b_{n}=\alpha _{1}$$ for $$n\geq 1$$, and $$a_{mn}=\alpha _{n-m+1}$$ for $$n>m\geq 1$$. Due to the periodicity, $$L_{r,r,r-1}^{(N)}$$ only depends on $$N-r$$, and $$c_{nk}=0$$ for $$k< n-1$$ guarantees that

$$L_{n,1,0}^{(N)}=\prod_{1}^{r=n}L_{r,r,r-1}^{(N)}= \prod_{1}^{r=n}L_{1,1,0}^{(N+1-r)}.$$

Hence, if $$L_{1,1,0}$$ converges, so does $$L_{n,1,0}=L_{1,1,0}^{n}=:L ^{n}$$. The periodic structure of the coefficients means that system (14) simplifies to $$0=\sum_{n=m-1}^{\infty } \alpha _{n-m+1}x_{n}$$, $$m\geq 1$$. Hence, the considerations in Remark 2 let us hope that

$$0=\sum_{n=m-1}^{\infty }\alpha _{n-m+1}L_{n,1,0}=\sum_{n=m-1}^{\infty } \alpha _{n-m+1}L^{n-m+1}L^{m-1}, \quad m\geq 1,$$

which is obviously equivalent to

$$\sum_{n=0}^{\infty }\alpha _{n}L^{n}=0.$$

Provided convergence of L, this is in fact true.

### Theorem 6

Let $$(\alpha _{n})_{n=0}^{\infty }$$ be an $$\mathcal{R}$$-valued sequence, define $$c_{nk}$$, $$b_{n}$$, $$a_{mn}$$ as above, let all approximants $$L_{1,1,0}^{(N)}$$ of L be well-defined, let L converge, and let

$$\sum_{n=0}^{\infty } \Vert \alpha _{n} \Vert x ^{n}$$

converge in a neighborhood of $$\Vert L \Vert$$. Then $$\sum_{n=0}^{\infty }\alpha _{n}L^{n}=0$$.

### Proof

We use that a solution for the truncated system (15) is given by $$x_{0}=I$$ and $$x_{m}=L_{m,1,0} ^{(N)}$$ for $$m\geq 1$$. Here, we have $$L_{m,1,0}^{(N)}=\prod_{1}^{r=m}L_{1,1,0}^{(N+1-r)}$$ and (15) for $$n=1$$ guarantees

$$0=\sum_{m=0}^{N}\alpha _{m}\prod_{1}^{r=m}L_{1,1,0}^{(N+1-r)}= \sum_{m=0} ^{\infty }\alpha _{m} \Biggl( \prod_{1}^{r=m}L_{1,1,0}^{(N+1-r)} \Biggr) \mathbf{1}_{\{0,1,\ldots ,N\}}(m),$$
(27)

where

$$\mathbf{1}_{\{0,1,\ldots ,N\}}(m)= \textstyle\begin{cases} 1,&m=0,1,\ldots ,N, \\ 0,&\mbox{otherwise}. \end{cases}$$

Define $$M=\max \{ 1,\sup_{n\in \mathbb{N}}\frac{ \Vert L _{1,1,0}^{(n)} \Vert }{ \Vert L \Vert } \}$$, choose $$\epsilon >0$$ and $$n_{0}\in \mathbb{N}$$ such that $$\sum_{m=0}^{\infty } \Vert \alpha _{m} \Vert ((1+\epsilon ) \Vert L \Vert )^{m}$$ converges and $$\frac{ \Vert L_{1,1,0}^{(n)} \Vert }{ \Vert L \Vert }\leq 1+\epsilon$$ for $$n\geq n_{0}$$. Then we have

$$\Biggl\Vert \alpha _{m} \Biggl(\prod_{1}^{r=m}L_{1,1,0}^{(N+1-r)} \Biggr) \mathbf{1}_{\{0,1,\ldots ,N\}}(m) \Biggr\Vert \leq \Vert \alpha _{m} \Vert \cdot \Vert L \Vert ^{m}(1+ \epsilon )^{m}\cdot M^{n_{0}}=:\beta _{m},$$

where $$\beta _{m}$$ does not depend on N and $$\sum_{m=0}^{ \infty }\beta _{m}$$ converges. Hence, we can use dominated convergence in (27), and obtain

\begin{aligned} 0 =&\lim_{N\to \infty }\sum_{m=0}^{\infty } \alpha _{m} \Biggl(\prod_{1} ^{r=m}L_{1,1,0}^{(N+1-r)} \Biggr)\mathbf{1}_{\{0,1,\ldots ,N\}}(m) \\ =&\sum_{m=0}^{\infty }\lim_{N\to \infty } \alpha _{m} \Biggl(\prod_{1} ^{r=m}L_{1,1,0}^{(N+1-r)} \Biggr)\mathbf{1}_{\{0,1,\ldots ,N\}}(m) \\ =&\sum_{m=0}^{\infty }\alpha _{m} L^{m}. \end{aligned}

□

### Remark 6

If L converges, and if $$\sum_{m=0}^{\infty }\alpha _{m}L^{m}=0$$ holds, we have

$$0=\sum_{m=n-1}^{\infty }\alpha _{m-n+1}L_{m,1,0}.$$

Due to the choice of $$c_{nk}$$, $$b_{n}$$, $$a_{mn}$$, this entails that the $$L_{n,1,0}$$ satisfy the infinite system (14). Hence, the examples below also provide examples of gcfs characterizing a certain solution of system (14).

Note that $$c_{nk}=0$$ for $$n\geq k+2$$ and the periodicity allow us to relate L to a gcf $$K=\lim_{N\to \infty }K_{0}^{(N)}$$ in two different ways:

• Set $$b_{0}=0$$, $$a_{01}=I$$, and $$a_{0m}=0$$ for $$m\geq 2$$. Then $$K=L$$ (see Remark 3).

• Define $$b_{0}=\alpha _{1}$$, $$a_{0m}=\alpha _{m+1}$$ for $$m\geq 1$$, and let $$\alpha _{0}^{-1}$$ exist. Then $$K_{0}^{(N)}=K_{1}^{(N)}$$ and $$L_{1,1,0}^{(N)}= (K_{1}^{(N)} )^{-1}c_{10}$$ (due to $$c_{nk}=0$$ for $$k< n-1$$) yields

$$L^{-1}=-\alpha _{0}^{-1}K.$$

The latter representation allows us to find a special case of the Pringsheim-type criterion.

### Theorem 7

Let $$\alpha _{n}$$, $$c_{nk}$$, $$b_{n}$$, $$a_{mn}$$ be as in Theorem 6. Additionally, let $$\alpha _{0}^{-1}$$ and $$\alpha _{1}^{-1}$$ exist, let $$\alpha _{n}\neq 0$$ for some $$n\geq 2$$, let $$\lambda >0$$, and let

$$\bigl\Vert \alpha _{0}\alpha _{1}^{-1} \bigr\Vert +\sum_{m=2}^{\infty }\lambda ^{m} \bigl\Vert \alpha _{m}\alpha _{1}^{-1} \bigr\Vert \leq \lambda .$$
(28)

Then L converges with $$\Vert (\alpha _{0}L^{-1}+\alpha _{1}) \Vert \leq \frac{1}{\lambda } \Vert \alpha _{0} \Vert$$.

### Proof

Additionally, set $$b_{0}=\alpha _{1}$$, $$a_{0m}=\alpha _{m+1}$$ for $$m\geq 1$$. Then $$L^{-1}=-\alpha _{0}^{-1}K$$ (in case of convergence) as pointed out above. Set $$\lambda _{n}=\frac{1}{\lambda ^{n}}I$$. Then the Pringsheim-type condition (25) simplifies to

$$\sum_{m=0}^{n-1}\lambda ^{n-m} \bigl\Vert \alpha _{n+1-m}\alpha _{1} ^{-1} \bigr\Vert +\frac{1}{\lambda } \bigl\Vert \alpha _{0}\alpha _{1}^{-1} \bigr\Vert \leq 1, \quad n\geq 1,$$

or equivalently

$$\sum_{m=2}^{n+1}\lambda ^{m} \bigl\Vert \alpha _{m}\alpha _{1}^{-1} \bigr\Vert + \bigl\Vert \alpha _{0}\alpha _{1}^{-1} \bigr\Vert \leq \lambda , \quad n\geq 1.$$

This is true if and only if (28) holds. From (26), we obtain

$$\Vert K-b_{0} \Vert \leq \frac{1}{\lambda } \Vert c _{10} \Vert ,$$

which completes the proof since $$b_{0}=\alpha _{1}$$, $$c_{10}=-\alpha _{0}$$, and $$L^{-1}=-\alpha _{0}^{-1}K$$. □

### Example: a scalar periodic gcf

Consider $$f:\mathbb{C}\to \mathbb{C}$$ with $$f(z)=\sum_{n=0} ^{\infty }\frac{(-z)^{n}}{(2n)!}$$. Obviously, f is an entire function, and we have $$f(z)=\cos (\sqrt{z} )$$ for non-negative real numbers z and $$f(z)=\cosh (\sqrt{-z} )$$ for non-positive real numbers z. $$f(z)=0$$ is true if and only if $$z=\frac{\ell ^{2}\pi ^{2}}{4}$$ for some odd positive integer $$\ell$$.

Set $$\alpha _{n}=\frac{(-1)^{n}}{(2n)!}$$. Then (28) is equivalent to

\begin{aligned} 0 \geq & \vert \alpha _{0} \vert -\lambda \vert \alpha _{1} \vert + \sum_{m=2}^{\infty } \lambda ^{m} \vert \alpha _{m} \vert =1- \frac{ \lambda }{2}+\sum_{m=2}^{\infty } \frac{\lambda ^{m}}{(2m)!} \\ =&-\lambda +\sum_{m=0}^{\infty } \frac{\lambda ^{m}}{(2m)!}=-\lambda + \cosh (\sqrt{\lambda } ). \end{aligned}

This is true for, e.g., $$\lambda =4$$. By Theorem 7, L converges with $$\vert L^{-1}- \frac{1}{2} \vert \leq \frac{1}{4}$$. Since all coefficients are real numbers, $$L^{-1}\in [\frac{1}{4},\frac{3}{4} ]$$, that is, $$L\in [\frac{4}{3},4 ]$$. According to Theorem 6, $$f(L)=0$$, and therefore, $$L=\frac{\pi ^{2}}{4}$$. So, L converges to the minimal root of the function f in this example.
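This example can be reproduced numerically. The following sketch (function and variable names are ours) specializes recursion (29) to scalar coefficients and computes the approximants $$L_{1,1,0}^{(N)}$$ for $$\alpha _{n}=\frac{(-1)^{n}}{(2n)!}$$.

```python
from math import cos, factorial, sqrt

def gcf_approximants(alpha, N):
    """Scalar specialization of recursion (29): approximants L^{(1)},...,L^{(N)}
    of the periodic gcf built from the coefficients alpha(0), alpha(1), ..."""
    L = []
    for n in range(1, N + 1):
        acc, prod = alpha(1), 1.0
        for m in range(2, n + 1):
            prod *= L[n - m]          # multiply in L^{(n-1)}, L^{(n-2)}, ...
            acc += alpha(m) * prod
        L.append(-alpha(0) / acc)
    return L

alpha = lambda n: (-1) ** n / factorial(2 * n)
L_val = gcf_approximants(alpha, 25)[-1]
# L converges to the minimal root pi^2/4 of f(z) = cos(sqrt(z)),
# and 1/L lies in [1/4, 3/4] as predicted by Theorem 7
residual = cos(sqrt(L_val))
```

The first approximants are $$L^{(1)}=2$$, $$L^{(2)}=\frac{12}{5}$$, and the sequence settles quickly near $$\frac{\pi ^{2}}{4}\approx 2.4674$$.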

### Example: another scalar periodic gcf

Consider $$f:\mathbb{C}\to \mathbb{C}\setminus \{1\}$$ with

$$f(z)=\frac{(z-2)(z-3)}{1-z}=6+z+\sum_{n=2}^{\infty }2z^{n}.$$

f is meromorphic on $$\mathbb{C}$$ with a pole at 1. Set $$\alpha _{0}=6$$, $$\alpha _{1}=1$$, and $$\alpha _{n}=2$$ for $$n\geq 2$$. The Pringsheim-type condition (28) is not satisfied for any λ, but here we can find an explicit representation for the approximants $$L_{1,1,0}^{(N)}$$ of L: From (16), we obtain that

$$L_{n,n,n-1}^{(N)}= \Biggl(b_{n}+\sum _{m=n+1}^{N} a_{nm}\prod _{n+1}^{r=m}L _{r,r,r-1}^{(N)} \Biggr)^{-1}c_{n,n-1},$$

and for $$n=1$$ we obtain with the periodicity of the coefficients

$$L_{1,1,0}^{(N)}=- \Biggl(\alpha _{1}+ \sum_{m=2}^{N}\alpha _{m}\prod _{2} ^{r=m}L_{1,1,0}^{(N+1-r)} \Biggr)^{-1}\alpha _{0}.$$
(29)

Hence, $$L_{1,1,0}^{(1)}=-6$$, $$L_{1,1,0}^{(2)}=\frac{6}{11}$$, and an easy induction yields

$$L_{1,1,0}^{(N)}=\frac{3 (\frac{1}{2} )^{N-1}-4 (\frac{1}{3} ) ^{N-1}}{3 (\frac{1}{2} )^{N}-4 (\frac{1}{3} ) ^{N}}, \quad N\geq 1.$$

Therefore, $$L=\lim_{N\to \infty }L_{1,1,0}^{(N)}=2$$. So, in total, no Pringsheim-type criterion is satisfied, and the minimal root 2 does not lie within the disk of convergence of the power series $$\sum \alpha _{m}z^{m}$$. Nevertheless, the periodic gcf built from the coefficients $$\alpha _{n}$$ converges to this minimal root.
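The induction claim and the limit can be cross-checked numerically. The sketch below (naming is ours) evaluates recursion (29) in exact rational arithmetic, which avoids the cancellation that would corrupt a floating-point evaluation of the large intermediate products, and compares it with the closed form.

```python
from fractions import Fraction

def u(N):
    # building block of the closed form: u(N) = 3*(1/2)^N - 4*(1/3)^N
    return 3 * Fraction(1, 2) ** N - 4 * Fraction(1, 3) ** N

# approximants via recursion (29) with alpha_0 = 6, alpha_1 = 1, alpha_n = 2
L = []
for n in range(1, 41):
    acc, prod = Fraction(1), Fraction(1)      # acc starts at alpha_1 = 1
    for m in range(2, n + 1):
        prod *= L[n - m]                      # L^{(n-1)}, L^{(n-2)}, ...
        acc += 2 * prod
    L.append(Fraction(-6) / acc)

# the closed form L^{(N)} = u(N-1)/u(N) holds exactly, and L^{(N)} -> 2
closed_form_ok = all(L[N - 1] == u(N - 1) / u(N) for N in range(1, 41))
```

In particular, $$L^{(1)}=-6$$ and $$L^{(2)}=\frac{6}{11}$$ are reproduced exactly.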

### Example: a matrix-valued gcf

Now set

$$\alpha _{0}= \begin{pmatrix} 6&1 \\ 0&6 \end{pmatrix}, \qquad \alpha _{1}= \begin{pmatrix} 2&2 \\ 1&2 \end{pmatrix}, \qquad \alpha _{n}= \begin{pmatrix} 2&2 \\ 2&2 \end{pmatrix}, \quad n\geq 2.$$

Then we obtain

\begin{aligned} L_{1,1,0}^{(N)} =& \begin{pmatrix} 3 (\frac{1}{2} )^{2N-2}-4 (\frac{1}{3} )^{2N-2}&3 (\frac{1}{2} )^{2N-1}-4 (\frac{1}{3} )^{2N-1} \\ 3 (\frac{1}{2} )^{2N-3}-4 (\frac{1}{3} )^{2N-3}&3 (\frac{1}{2} )^{2N-2}-4 (\frac{1}{3} )^{2N-2} \end{pmatrix} \\ &{}\cdot \begin{pmatrix} 3 (\frac{1}{2} )^{2N}-4 (\frac{1}{3} )^{2N}&3 (\frac{1}{2} )^{2N+1}-4 (\frac{1}{3} )^{2N+1} \\ 3 (\frac{1}{2} )^{2N-1}-4 (\frac{1}{3} )^{2N-1}&3 (\frac{1}{2} )^{2N}-4 (\frac{1}{3} )^{2N} \end{pmatrix}^{-1} \end{aligned}

by means of a (lengthy, but not difficult) induction from (29). With some effort, we find

$$L=\lim_{N\to \infty }L_{1,1,0}^{(N)}= \begin{pmatrix} 0&1 \\ -6&5 \end{pmatrix}= \begin{pmatrix} 1&1 \\ 2&3 \end{pmatrix} \begin{pmatrix} 2&0 \\ 0&3 \end{pmatrix} \begin{pmatrix} 1&1 \\ 2&3 \end{pmatrix}^{-1}.$$

Hence, the eigenvalues of the limit L are the two (minimal) roots of $$f(z)=6+z+\sum_{n=2}^{\infty }2z^{n}$$.
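The eigenvalue statement can be verified directly from the diagonalized form $$P\operatorname{diag}(2,3)P^{-1}$$ appearing in the last display; a small plain-Python sketch (helper names are ours) performs the Cayley–Hamilton check.

```python
def matmul(A, B):
    # product of two 2x2 matrices given as nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[1, 1], [2, 3]]
P_inv = [[3, -1], [-2, 1]]           # inverse of P, since det P = 1
D = [[2, 0], [0, 3]]
Lmat = matmul(matmul(P, D), P_inv)   # the diagonalized form P diag(2,3) P^{-1}

# Cayley-Hamilton check: (Lmat - 2I)(Lmat - 3I) = 0, so the eigenvalues of
# Lmat are exactly 2 and 3, the two (minimal) roots of the numerator of f
A2 = [[Lmat[i][j] - 2 * (i == j) for j in range(2)] for i in range(2)]
A3 = [[Lmat[i][j] - 3 * (i == j) for j in range(2)] for i in range(2)]
Z = matmul(A2, A3)
```

Since $$(z-2)(z-3)$$ annihilates the matrix, its spectrum is contained in $$\{2,3\}$$, illustrating how a matrix-valued periodic gcf can encode several minimal roots at once.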

## Special functions as solutions of (14): an example

We consider the Riemann zeta function ζ, where we will interpret $$s\mapsto (s-1)\zeta (s)$$ as an entire function with value 1 at $$s=1$$.

The Riemann zeta function does not satisfy any easy difference equation of finite order, but a variety of infinite recurrence schemes, from which we have chosen three for consideration here.

\begin{aligned}& 1= \sum_{n=0}^{\infty }\frac{(s-1)s\ldots (s+n-1)}{(n+1)!} \bigl( \zeta (s+n)-1 \bigr),\quad s\in \mathbb{C}, \end{aligned}
(30)
\begin{aligned}& \frac{2^{s}-2}{s-1}\cdot \frac{(s-1)\zeta (s)}{2^{s}} = \sum_{n=1} ^{\infty }\binom{s+n-2}{n}\frac{(s+n-1)\zeta (s+n)}{2^{s+n}},\quad s\in \mathbb{C}, \end{aligned}
(31)
\begin{aligned}& 2 \bigl(2^{s}-2 \bigr)\frac{\zeta (s)}{2^{s}} = 1+2\sum _{n=1}^{ \infty }\binom{s+2n-1}{2n}\frac{\zeta (s+2n)}{2^{s+2n}},\quad s\in \mathbb{C}, \end{aligned}
(32)

(30) is the most prominent of these relationships and can be found in many textbooks, for instance, in  where also (31) can be found. Alternatively, (31) and (32) can be found in .
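As an illustration, (30) can be tested numerically at $$s=2$$: there every coefficient $$\frac{(s-1)s\ldots (s+n-1)}{(n+1)!}$$ equals $$\frac{(n+1)!}{(n+1)!}=1$$, so the identity reduces to the classical $$\sum_{n\geq 2} (\zeta (n)-1 )=1$$. A crude sketch (function name is ours):

```python
def zeta_minus_one(m, K=20000):
    # crude direct summation of zeta(m) - 1 = sum_{k>=2} k^(-m)
    return sum(k ** (-m) for k in range(2, K))

# partial sum of (30) at s = 2; the tail terms decay like 2^(-n),
# so 40 terms suffice for the accuracy of the truncated zeta values
total = sum(zeta_minus_one(n + 2) for n in range(40))
```

The dominant error comes from truncating the $$\zeta (2)$$ summation, of order $$10^{-5}$$ here.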

### A gcf associated with (30)

Set $$c_{n0}=1$$ and $$b_{n}=1$$ for all $$n=1,2,3,\ldots$$ , let $$c_{nk}=0$$ for $$1\leq k< n$$, and let $$a_{nm}=\frac{(s+n-2)\cdot (s+n-1) \cdot \ldots\cdot (s+m-3)}{(m-n+1)!}$$ for $$m>n$$ with some fixed $$s\in \mathbb{C}$$. With $$x_{0}=1$$, (14) reads as

\begin{aligned} 1 =&\sum_{m=n}^{\infty } \frac{(s+n-2)(s+n-1)\cdots (s+m-3)}{(m-n+1)!}x_{m} \\ =&\sum_{m=0}^{\infty }\frac{(s+n-2)(s+n-1)\cdots (s+n+m-3)}{(m+1)!}x _{m+n}, \quad n=1,2,3,\ldots , \end{aligned}

where the empty product (for $$m=n$$ in the first sum) is 1. Hence, (30) (with s being replaced by $$s+n-1$$) guarantees that one solution of (14) is given by $$x_{0}=1$$ and $$x_{k}=(s+k-2) (\zeta (s+k-1)-1 )$$ for $$k\geq 1$$.

Due to $$c_{nk}=0$$ for $$k\geq 1$$, (10), (11), and (12) simplify to $$K_{n}^{(N)}=b_{n}$$ for $$1\leq n\leq N$$, $$L_{n,1,0}^{(N)}=L_{n,n,0}^{(N)}$$ for $$1\leq n\leq N$$ and

$$L_{n,1,0}^{(N)}=L_{n,n,0}^{(N)}=1- \sum_{m=n+1}^{N}a_{nm}L_{m,m,0}^{(N)}.$$
(33)

Let

$$q(s,K)=\sum_{k=0}^{K} \frac{B_{k}^{-}}{k!}(s-1)s\ldots (s+k-2),$$

where $$B_{0}^{-}=1$$, $$B_{1}^{-}=-\frac{1}{2}$$, $$B_{2}^{-}=\frac{1}{6}$$, $$B_{3}^{-}=0$$, $$B_{4}^{-}=-\frac{1}{30}$$, …are the Bernoulli numbers. Then

$$L_{n,n,0}^{(N)}=q(s+n-1,N-n)$$

for $$1\leq n\leq N$$, as can be proved by induction with respect to $$N-n$$ by means of (33) and the recursion formula $$B_{n}^{-}=-\frac{1}{n+1}\sum_{k=0}^{n-1} \binom{n+1}{k}B_{k}^{-}$$ which is valid for $$n\geq 1$$.

In particular, we obtain

$$L_{1,1,0}^{(N+1)}=q(s,N)=1-\frac{(s-1)}{2}+\sum _{n=2}^{N}\frac{B_{n} ^{-}}{n!}(s-1)s(s+1)\cdots (s+n-2).$$

For $$s\in \{1,0,-1,-2,\ldots \}$$, $$L_{1,1,0}$$ is a ‘finite’ (inverse) generalized continued fraction because the rising products $$(s-1)s\cdots (s+n-2)$$ eventually vanish; $$L_{1,1,0}^{(N)}$$ obviously converges, and its value is $$L_{1,1,0}=(s-1)(\zeta (s)-1)$$. For any other $$s\in \mathbb{C}$$, $$L_{1,1,0}^{(N)}$$ diverges as $$N\to \infty$$.

Note that there is the representation

$$(s-1) \bigl(\zeta (s)-1 \bigr) =q(s,N)- \frac{(s-1)s\cdots (s+N-1)}{N!} \int _{1}^{\infty }B_{N} \bigl(x-\lfloor x \rfloor \bigr)x^{-s-N} \,dx$$

for the Riemann zeta function ($$B_{N}(x)$$ is the Nth Bernoulli polynomial), but this does not imply $$q(s,N)\to (s-1) (\zeta (s)-1 )$$ as $$N\to \infty$$.

Summarizing, we have seen an infinite system of equations

• which has a solution,

• for which $$L_{n,1,0}^{(N)}$$ is well-defined for all $$1\leq n\leq N$$ and solves the truncated system (15),

• for which $$L_{1,1,0}^{(N)}$$ does not converge for $$N\to \infty$$.
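These observations can be reproduced from the definition of $$q(s,K)$$. The following sketch (helper names are ours) computes the Bernoulli numbers exactly via the recursion quoted above and evaluates $$q$$ in rational arithmetic: for $$s=0$$ the value is the finite constant $$\frac{3}{2}=(s-1)(\zeta (s)-1)$$, while for $$s=2$$ one obtains $$q(2,N)=\sum_{k=0}^{N}B_{k}^{-}$$, which grows without bound.

```python
from fractions import Fraction
from math import comb

def bernoulli(n_max):
    # B_n^- via the recursion B_n = -1/(n+1) * sum_{k=0}^{n-1} C(n+1,k) B_k
    B = [Fraction(1)]
    for n in range(1, n_max + 1):
        B.append(-Fraction(1, n + 1)
                 * sum(comb(n + 1, k) * B[k] for k in range(n)))
    return B

def q(s, N, B):
    # q(s,N) = sum_{k=0}^N B_k/k! * (s-1)s...(s+k-2)
    total, prod, fact = Fraction(0), Fraction(1), 1
    for k in range(N + 1):
        total += B[k] * prod / fact
        prod *= s - 1 + k        # extend the rising product by one factor
        fact *= k + 1
    return total

B = bernoulli(50)
q_conv = q(0, 10, B)   # s = 0: the rising product vanishes from k = 2 on
q_div = q(2, 50, B)    # s = 2: q(2,N) = B_0 + ... + B_N, divergent in N
```

The exploding Bernoulli numbers $$B_{2n}^{-}$$ make the divergence for generic s plainly visible.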

### A gcf associated with (32)

In a similar manner, (32) can be used to construct a system with $$c_{nk}=0$$ for $$1\leq k< n$$ which is solved by $$x_{0}=1$$ and $$x_{k}=\zeta (s+2(k-1))$$ for $$k\geq 1$$. For this system, numerical experiments give hope that $$L_{1,1,0}^{(N)}$$ converges to $$\zeta (s)$$ for real $$s>1$$, and at least for some $$s\in \mathbb{C}$$ with $$\operatorname {Im}(s)\neq 0$$ and/or $$\operatorname {Re}(s)<1$$, although the convergence seems to be quite slow.

### A gcf associated with (31)

While (30) and (32) are inhomogeneous equations for $$\zeta (s+\cdots )$$, equation (31) is homogeneous.

This means that $$x_{n}=\frac{(s+n-1)\zeta (s+n)}{2^{s+n}}$$ provides a solution to the sum equation

$$c_{m,m-1}x_{m-1}=b_{m}x_{m}+\sum _{n=m+1}^{\infty }a_{mn}x_{n}, \quad m \geq 1,$$

where $$c_{m,m-1}=\frac{2^{s+m-1}-2}{s+m-2}$$, $$a_{mn}= \binom{s+n-2}{n-m+1}$$, and $$b_{m}=a_{mm}=s+m-2$$. In case of convergence, we might have $$x_{1}=L_{1,1,0}x_{0}$$. Hence, hope arises that $$L_{1,1,0}$$ converges to

$$\frac{s\zeta (s+1)}{2(s-1)\zeta (s)}.$$

Again, numerical experiments support this hope including values s with $$\operatorname {Re}(s)<1$$ and/or $$\operatorname {Im}(s)\neq 0$$. Again, this convergence seems to be relatively slow.

### Comment on the recurrence relations and the zeta function

There are many more recurrence relations (see, e.g., [38, 42]) for the ζ function. A thorough analysis of which infinite recurrence formula might be suitable for obtaining a generalized continued fraction which represents the Riemann zeta function and which allows one to

• efficiently compute values $$\zeta (s)$$ or

• better understand properties of ζ

is far beyond the scope of this paper. Note that the gcfs in this section do not satisfy a Pringsheim-type condition. Hence, such an analysis would require a further development of the convergence theory.

## Conclusion and further research

In this paper, we gave a new definition for generalized continued fractions (gcfs), and demonstrated that our definition includes and extends many generalizations of continued fractions which can be found in the literature. In some way, it combines various approaches of generalizations (coefficients in Banach algebras, more general recursion schemes). As a direct benefit of our definition, we were able to prove a Pringsheim-type convergence criterion, including many former results as special cases.

Providing more convergence criteria and speed-of-convergence results is a goal for future research. With a thorough convergence theory as a background, gcfs can be applied in various fields of mathematics. We have already mentioned that the definition is inspired by a graphical interpretation (relationship to combinatorics) and by numerical algorithms for Markov chains. Furthermore, we have seen that scalar periodic gcfs might characterize the minimal roots of meromorphic functions and that matrix-valued periodic gcfs might characterize several minimal roots at once. In combination with an advanced convergence theory, gcfs might turn out to provide useful representations of special functions. In these mathematical applications, the relationship between (minimal) solutions of second-order difference equations and continued fractions is generalized.

## References

1. Baumann, H.: A Pringsheim-type convergence criterion for continued fractions in Banach algebras. J. Approx. Theory 166, 154–162 (2013)

2. Baumann, H.: Two-sided continued fractions in Banach algebras—a Śleszyński–Pringsheim-type convergence criterion and applications. J. Approx. Theory 199(C), 13–28 (2015)

3. Baumann, H., Hanschke, T.: Inherent numerical instability in computing invariant measures of Markov chains. Appl. Math. 8, 1367–1385 (2017)

4. Baumann, H., Sandmann, W.: Numerical solution of level dependent quasi-birth-and-death processes. Proc. Comput. Sci. 1(1), 1561–1569 (2010)

5. Baumann, H., Sandmann, W.: Computing stationary expectations in level-dependent QBD processes. J. Appl. Probab. 50(1), 151–165 (2013)

6. Bernstein, L.: The Jacobi–Perron Algorithm: Its Theory and Application. Lecture Notes Math., vol. 207. Springer, Berlin (1971)

7. Bobryk, R.V.: Closure method and asymptotic expansions for linear stochastic systems. J. Math. Anal. Appl. 329, 703–711 (2007)

8. Bright, L., Taylor, P.G.: Calculating the equilibrium distribution in level dependent quasi-birth-and-death processes. Commun. Stat., Stoch. Models 11(3), 497–525 (1995)

9. de Bruin, M.G.: Generalized C-fractions and a multi-dimensional Padé table. Doctoral thesis, University of Amsterdam (1974)

10. de Bruin, M.G.: Convergence of generalized C-fractions. J. Approx. Theory 24, 177–207 (1978)

11. Denk, H., Riederle, M.: A generalization of a theorem of Pringsheim. J. Approx. Theory 35, 355–363 (1982)

12. Fair, W.: Noncommutative continued fractions. SIAM J. Math. Anal. 2, 226–232 (1971)

13. Fair, W.: A convergence theorem for noncommutative continued fractions. J. Approx. Theory 5, 74–76 (1972)

14. Flajolet, P.: Combinatorial aspects of continued fractions. Discrete Math. 32(2), 125–161 (1980)

15. Gautschi, W.: Computational aspects of three-term recurrence relations. SIAM Rev. 9, 24–82 (1967)

16. Grassmann, W.K., Heyman, D.P.: Equilibrium distribution of block-structured Markov chains with repeating rows. J. Appl. Probab. 27(3), 557–576 (1990)

17. Hanschke, T.: Ein verallgemeinerter Jacobi–Perron-Algorithmus zur Reduktion linearer Differenzengleichungssysteme. Monatshefte Math. 126, 287–311 (1998)

18. Hanschke, T.: A matrix continued fraction algorithm for the multiserver repeated order queue. Math. Comput. Model. 30, 159–170 (1999)

19. Hayden, T.L.: Continued fractions in Banach spaces. Rocky Mt. J. Math. 4, 367–370 (1974)

20. Jacobi, C.G.J.: Allgemeine Theorie der kettenbruchähnlichen Algorithmen, in welchen jede Zahl aus drei vorhergehenden gebildet wird. J. Reine Angew. Math. 69, 29–64 (1868)

21. Levrie, P.: Pringsheim’s theorem for generalized continued fractions. J. Comput. Appl. Math. 14, 439–445 (1986)

22. Levrie, P.: Pringsheim’s theorem revisited. J. Comput. Appl. Math. 25, 93–104 (1989)

23. Levrie, P., Bultheel, A.: Matrix continued fractions related to first-order linear recurrence systems. Electron. Trans. Numer. Anal. 4, 46–63 (1996)

24. Miller, K.S.: Linear Difference Equations. Benjamin, New York (1968)

25. Negoescu, N.: Convergence theorems on non-commutative continued fractions. Math. Rev. Anal. Numer. Theor. Approx. 5, 165–180 (1976)

26. Neuts, M.F.: Matrix-Geometric Solutions in Stochastic Models. Johns Hopkins University Press, Baltimore (1981)

27. Perron, O.: Grundlagen für eine Theorie des Jacobischen Kettenbruchalgorithmus. Math. Ann. 64(1), 1–76 (1907)

28. Perron, O.: Über die Konvergenz der Jacobi-Kettenalgorithmen mit komplexen Elementen. Sitzungsber. Akad. Münch. Math.-Phys. 37, 401–482 (1907)

29. Perron, O.: Über lineare Differenzen- und Differentialgleichungen. Math. Ann. 66, 446–487 (1909)

30. Perron, O.: Zur Theorie der Summengleichungen. Math. Z. 8(1–2), 159–170 (1920)

31. Perron, O.: Die Lehre von den Kettenbrüchen. Teubner, Stuttgart (1954)

32. Perron, O.: Die Lehre von den Kettenbrüchen II. Teubner, Stuttgart (1957)

33. Pfluger, P.: Matrizenkettenbrüche. Dissertation ETH Zürich. Juris Druck + Verlag, Zürich (1966)

34. Phung-Duc, T., Masuyama, H., Kasahara, S., Takahashi, Y.: A simple algorithm for the rate matrices of level-dependent QBD processes. In: Proceedings of the 5th International Conference on Queueing Theory and Network Applications, pp. 46–52 (2010)

35. Pringsheim, A.: Über die Konvergenz unendlicher Kettenbrüche. Sb. Münch. 28, 295–324 (1899)

36. Pringsheim, A.: Über einige Konvergenzkriterien für Kettenbrüche mit komplexen Gliedern. Sb. Münch. 35, 359–380 (1905)

37. Raissouli, M., Kacha, A.: Convergence of matrix continued fractions. Linear Algebra Appl. 320, 115–129 (2000)

38. Ramaswami, V.: Notes on Riemann’s zeta-function. J. Lond. Math. Soc. s1–9(3), 165–169 (1934)

39. Schelling, A.: Convergence theorems for continued fractions in Banach spaces. J. Approx. Theory 86, 72–80 (1996)

40. Seneta, E.: Non-negative Matrices and Markov Chains. Springer, Berlin (1981)

41. Sorokin, V.N., Van Iseghem, J.: Matrix continued fractions. J. Approx. Theory 96, 237–257 (1999)

42. Srivastava, H.M., Choi, J.: Series Associated with the Zeta and Related Functions. Springer, Berlin (2001)

43. van der Cruyssen, P.: Linear difference equations and generalized continued fractions. Computing 22, 269–278 (1979)

44. van der Cruyssen, P.: Discussion of algorithms for the computation of generalized continued fractions. J. Comput. Appl. Math. 8, 179–186 (1982)

45. Wynn, P.: Continued fractions whose coefficients obey a noncommutative law of multiplication. Arch. Ration. Mech. Anal. 12, 273–312 (1963)

46. Wynn, P.: On some recent developments in the theory and application of continued fractions. J. Soc. Ind. Appl. Math., Ser. B Numer. Anal. 1, 177–197 (1964)

47. Zhao, H., Zhu, G.: Matrix-valued continued fractions. J. Approx. Theory 120, 136–152 (2003)

### Acknowledgements

The author acknowledges the support by Open Access Publishing Fund of Clausthal University of Technology.


## Author information

The single author is responsible for the complete manuscript. The author read and approved the final manuscript.

Correspondence to Hendrik Baumann.

## Ethics declarations

### Competing interests

The author declares that he has no competing interests. 