On characteristic polynomial of higher order generalized Jacobsthal numbers


Abstract

In this paper, we study a higher order generalization of the Jacobsthal sequence, namely, the \((k,c)\)-Jacobsthal sequence \((J^{(k,c)}_{n})_{n}\), for an integer \(k\geq 2\) and a real number \(c>0\). In particular, we obtain information about the roots of its characteristic polynomial. For that purpose, we combine some powerful tools, such as Marden’s method, the Perron–Frobenius theorem, and the Eneström–Kakeya theorem.

Introduction

A sequence \((u_{n})_{n}\) is a homogeneous linear recurrence sequence with coefficients \(c_{0}, c_{1},\ldots, c_{s-1}\), \(c_{0}\neq 0\), if

$$ u_{n+s}=c_{s-1}u_{n+s-1}+\cdots + c_{1}u_{n+1}+c_{0}u_{n}, $$
(1)

for all non-negative integers n. A recurrence sequence is therefore completely determined by the initial values \(u_{0},u_{1},\ldots,u_{s-1}\), and by the coefficients \(c_{0}, c_{1},\ldots, c _{s-1}\). The integer s is called the order of the linear recurrence. The characteristic polynomial of the sequence \((u_{n})_{n\geq 0}\) is given by

$$ \psi (x)=x^{s}-c_{s-1}x^{s-1}-\cdots - c_{1}x-c_{0}=(x-\alpha _{1})^{m _{1}}\cdots (x-\alpha _{\ell })^{m_{\ell }}, $$

where the \(\alpha _{j}\)’s (which are distinct) are called the roots of the recurrence. Also, the recurrence \((u_{n})_{n}\) has a dominant root if one of its roots has strictly largest absolute value. A fundamental result in the theory of recurrence sequences asserts that there exist uniquely determined non-zero polynomials \(g_{1},\ldots,g_{\ell }\in \mathbb{Q}(\{\alpha _{j}\}_{j=1}^{\ell })[x]\), with \(\deg g_{j}\leq m_{j}-1\) (where \(m_{j}\) is the multiplicity of \(\alpha _{j}\) as a zero of \(\psi (x)\)), for \(j=1,2,\ldots,\ell \), such that

$$ u_{n}=g_{1}(n)\alpha _{1}^{n}+g_{2}(n) \alpha _{2}^{n}+\cdots +g_{\ell }(n) \alpha _{\ell }^{n}, \quad \text{for all } n. $$
(2)

For more details, see [11, Theorem C.1].

Let P and Q be non-zero integers with \(P^{2} -4Q \neq 0\). The sequence \((U_{n}(P,Q))_{n\geq 0}\) given, for \(n\geq 0\), by

$$ U_{n+2}(P,Q)=P\cdot U_{n+1}(P,Q) - Q\cdot U_{n}(P,Q), $$

where \(U_{0}(P,Q)=0\), \(U_{1}(P,Q)=1\), is called the first Lucas sequence. For instance, if \(P=1\) and \(Q=-1\), then \((U_{n}(1,-1))_{n \geq 0}=(F_{n})_{n\geq 0}\) is the well-known Fibonacci sequence. The Fibonacci numbers are known for their amazing properties (see [9] for the history, properties, and applications of the Fibonacci sequence and some of its generalizations). When \(P=1\) and \(Q=-2\), we find that \((U_{n}(1,-2))_{n \geq 0}=(J_{n})_{n\geq 0}\) is the Jacobsthal sequence, which has many interesting properties (see [5]). An explicit formula for \(J_{n}\) is

$$ J_{n}=\frac{2^{n}-(-1)^{n}}{3}. $$
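This closed form can be checked against the defining recurrence \(J_{n+2}=J_{n+1}+2J_{n}\), \(J_{0}=0\), \(J_{1}=1\); the short Python sketch below (illustrative only, not from the paper) does so for the first thirty terms:

```python
def jacobsthal(n):
    """Jacobsthal numbers: J_0 = 0, J_1 = 1, J_{n+2} = J_{n+1} + 2*J_n."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, b + 2 * a
    return a

# Compare with the closed form J_n = (2^n - (-1)^n) / 3
for n in range(30):
    assert jacobsthal(n) == (2 ** n - (-1) ** n) // 3

print([jacobsthal(n) for n in range(8)])  # [0, 1, 1, 3, 5, 11, 21, 43]
```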

There are several generalizations of the Fibonacci numbers. For example, let \(k\geq 2\) and denote by \(F^{(k)}:=(F_{n}^{(k)})_{n\geq -(k-2)}\) the k-generalized Fibonacci sequence, whose terms satisfy the recurrence relation

$$ F_{n}^{(k)}=F_{n-1}^{(k)}+F_{n-2}^{(k)}+ \cdots + F_{n-k}^{(k)}, \quad \text{for } n\geq 2, $$
(3)

with the initial conditions \(F_{-(k-2)}^{(k)}= F_{-(k-3)}^{(k)}= \cdots =F_{0}^{(k)}=0\) and \(F_{1}^{(k)}=1\).

The study of the behavior of the roots of the characteristic polynomial of a recurrence (which gives information about the asymptotic behavior of the sequence) has a very long history, and it became more popular after the seminal work of Baker on effective lower bounds for linear forms in logarithms. For example, as a consequence of Baker’s theory (see [1]), we have the following: let \((u_{n})\) be a recurrence sequence of integers of the form

$$ u_{n}=a\alpha ^{n}+O \bigl( \vert \alpha \vert ^{\theta n} \bigr), \quad \text{with } \theta \in (0,1), $$

where a and α are non-zero algebraic numbers, with \(\vert \alpha \vert >1\) and such that \(u_{n}-a\alpha ^{n}\neq 0\) for all n. Then the equation

$$ u_{n}=y^{p}, \quad u_{n}\notin \{0, \pm 1 \}, $$

implies \(p< C\), where \(C>0\) is an effective constant, which depends only on the parameters of the recurrence \((u_{n})\). This result can be applied to k-generalized Fibonacci numbers. In fact, since it is known that the characteristic polynomial of \((F_{n}^{(k)})_{n}\), namely,

$$ \psi _{k}(x):=x^{k}-x^{k-1}-\cdots -x-1, $$

has just one zero outside the unit circle and all of its zeros are simple (as can be found in [6]), we have \(F_{n}^{(k)}=a\alpha ^{n}+O(1)\). Actually, we remark that the case \(k=2\) was solved completely in 2006 by Bugeaud et al. [3, Theorem 1].

Paper [8] studied some generalized versions of (3), and its authors proved similar properties of the roots of the characteristic polynomials.

Here, we are interested in the following generalization of the Jacobsthal sequence.

Definition 1

Let \(k\geq 2\) be an integer and let \(c>0\) be a real number. The \((k,c)\)-Jacobsthal sequence \((J^{(k,c)}_{n})_{n\geq -(k-2)}\) is defined by the recurrence

$$ J^{(k,c)}_{n}=J^{(k,c)}_{n-1}+\cdots + J^{(k,c)}_{n-k+1}+c\cdot J^{(k,c)} _{n-k}, \quad \text{for } n\geq 2, $$

with the initial values \(J^{(k,c)}_{-(k-2)}= J^{(k,c)}_{-(k-3)}= \cdots =J^{(k,c)}_{0}=0\) and \(J_{1}^{(k,c)}=1\).
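To make the definition concrete, the following Python sketch (illustrative; the helper name `kc_jacobsthal` is ours, not from the paper) generates the terms starting from \(J^{(k,c)}_{-(k-2)}\):

```python
def kc_jacobsthal(k, c, n_terms):
    """First n_terms of the (k,c)-Jacobsthal sequence, starting at index -(k-2).

    Initial values J_{-(k-2)} = ... = J_0 = 0 and J_1 = 1; afterwards
    J_n = J_{n-1} + ... + J_{n-k+1} + c * J_{n-k}.
    """
    seq = [0] * (k - 1) + [1]          # J_{-(k-2)}, ..., J_0, J_1
    while len(seq) < n_terms:
        seq.append(sum(seq[-(k - 1):]) + c * seq[-k])
    return seq

# k = 2, c = 2 recovers the classical Jacobsthal numbers
print(kc_jacobsthal(2, 2, 8))   # [0, 1, 1, 3, 5, 11, 21, 43]
```

With \(c=1\) the same routine produces the \(k\)-generalized Fibonacci numbers, in agreement with the observation made later in the proof of Theorem 1.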

Some special cases of \((k,c)\)-Jacobsthal sequences are listed in Table 1. Let \(f_{k,c}(x)\) be the characteristic polynomial of the \((k,c)\)-Jacobsthal sequence. Our first results concern the zeros of \(f_{k,c}(x)\). More precisely, we prove the following theorems.

Table 1 Examples of \((k,c)\)-Jacobsthal sequences for some integer values of k and c

Theorem 1

Let \(k\geq 2\) be an integer and let \(c>0\) be a real number. Set \(f_{k,c}(x)=x^{k}-x^{k-1}-\cdots -x-c\). Then \(f_{k,c}(x)\) has a simple dominant zero and, moreover:

  1. (i)

    \(f_{k,2}(x)\) has 2 as the dominant zero and all the other zeros lie on the boundary of the unit circle.

  2. (ii)

    If \(c>2\), then \(f_{k,c}(x)\) has a dominant zero \(\alpha >2\) and all the other zeros lie outside the closed unit circle.

  3. (iii)

    If \(c\in (0,2)\), then \(f_{k,c}(x)\) has a dominant zero \(\alpha \in (1, 2)\) and all the other zeros lie inside the unit circle.
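Items (ii) and (iii) can be spot-checked numerically before the proofs. The sketch below is an illustrative pure-Python experiment (the root solver `all_roots` is a rough Durand–Kerner iteration that we add for the check; it is not part of the paper’s argument):

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial (coefficients leading -> constant) by Horner."""
    v = 0
    for c in coeffs:
        v = v * x + c
    return v

def all_roots(coeffs, iters=500):
    """All complex roots of a monic polynomial via Durand-Kerner iteration."""
    n = len(coeffs) - 1
    zs = [(0.4 + 0.9j) ** (i + 1) for i in range(n)]   # standard distinct seeds
    for _ in range(iters):
        for i in range(n):
            q = 1.0
            for j in range(n):
                if j != i:
                    q *= zs[i] - zs[j]
            zs[i] -= poly_eval(coeffs, zs[i]) / q
    return zs

def f_kc(k, c):
    # f_{k,c}(x) = x^k - x^{k-1} - ... - x - c
    return [1.0] + [-1.0] * (k - 1) + [-float(c)]

# Item (iii): c in (0,2) -> dominant zero in (1,2), others strictly inside |x| = 1
mods = sorted(abs(z) for z in all_roots(f_kc(5, 1.5)))
assert 1 < mods[-1] < 2 and all(m < 1 for m in mods[:-1])

# Item (ii): c > 2 -> dominant zero > 2, others strictly outside |x| = 1
mods = sorted(abs(z) for z in all_roots(f_kc(5, 3)))
assert mods[-1] > 2 and all(m > 1 for m in mods[:-1])
```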

Theorem 2

For \(k\geq 2\), all the zeros of \(f_{k,c}(x)\) are simple if at least one of the following conditions holds:

  1. (i)

    \(c\in (1,2]\);

  2. (ii)

    \(c>2\) and \(k\geq \sqrt{\frac{8c(c-1)}{(c-2)^{2}}} + \frac{3c-2}{c-2}\).

Moreover, there are at most two zeros \(\alpha _{+}\) and \(\alpha _{-}\) with multiplicity greater than one and they must have the form

$$ \alpha _{\pm }=\frac{3ck+2-c-2k\pm \sqrt{(c+2k-3ck-2)^{2}-8c(c-1)k ^{2}}}{2(c-1)k}. $$

As a consequence of the previous theorem, we find that the following result holds for integer higher order Jacobsthal recurrences.

Corollary 1

If c and k are positive integers, then all the zeros of \(f_{k,c}(x)\) are simple.

Auxiliary results

In this section, we shall present some results which will be essential ingredients in the proof of our results.

Our first tool is related to a method of Marden [10, Chapter X] for calculating the number of zeros of a polynomial within the unit circle. Here we shall state only a particular case which is convenient to us. For that, first, consider a polynomial \(f(x)=a_{0}+a_{1}x+\cdots + a_{n}x^{n}\) with real coefficients and denote by \(f^{*}(x)\) its reciprocal polynomial, i.e., \(f^{*}(x)=x^{n}f(1/x)\). We define the Schur transform of \(f(x)\), denoted by \(\operatorname{Tf}(x)\), by

$$ \operatorname{Tf}(x)=a_{0}f(x)-a_{n}f^{*}(x). $$

A particular case of Marden’s result [10, Lemma 42.1] is the following.

Theorem 3

Let \(f(x)=a_{0}+a_{1}x+\cdots + a_{n}x^{n}\) be a polynomial with real coefficients. If \(\delta (f):=a_{0}^{2}-a_{n}^{2}\neq 0\), then \(f(x)\) and \(\operatorname{Tf}(x)\) have the same number of zeros on the boundary of the unit circle.

Another useful and very important result is due to Eneström and Kakeya [4, 7].

Theorem 4

Let \(f(x)=a_{0}+a_{1}x+\cdots +a_{n}x^{n}\) be an n-degree polynomial with real coefficients. If \(0\leq a_{0}\leq a_{1}\leq \cdots \leq a _{n}\), then all zeros of \(f(x)\) lie in \(\vert x \vert \leq 1\).

Further, we shall use the Perron–Frobenius theorem from linear algebra.

Theorem 5

Let A be a square matrix with non-negative real entries. If \(A^{k}\) is a positive matrix (i.e., a matrix having all positive entries), then A has a positive eigenvalue of multiplicity 1 which is strictly greater in absolute value than all the other eigenvalues.

We shall also use two well-known trigonometric formulas: for \(\alpha \neq 2k\pi \), where k is any integer, we have

$$ \sin (\phi )+\sin (\phi +\alpha )+\cdots +\sin (\phi +n\alpha )= \frac{ \sin \frac{(n+1)\alpha }{2} \sin (\phi +\frac{n\alpha }{2} )}{ \sin \frac{\alpha }{2}} $$
(4)

and

$$ \cos (\phi )+\cos (\phi +\alpha )+\cdots +\cos (\phi +n\alpha )= \frac{ \sin \frac{(n+1)\alpha }{2}\cos (\phi +\frac{n\alpha }{2} )}{ \sin \frac{\alpha }{2}}. $$
(5)
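Both identities are classical; here is a quick numerical spot-check in Python (illustrative only, with arbitrary sample values of ϕ, α, and n chosen by us):

```python
import math

phi, alpha, n = 0.37, 1.1, 9   # arbitrary test values with alpha != 2*pi*k

# Left-hand sides of (4) and (5)
sin_sum = sum(math.sin(phi + j * alpha) for j in range(n + 1))
cos_sum = sum(math.cos(phi + j * alpha) for j in range(n + 1))

# Common factor sin((n+1)a/2) / sin(a/2) on the right-hand sides
factor = math.sin((n + 1) * alpha / 2) / math.sin(alpha / 2)
assert abs(sin_sum - factor * math.sin(phi + n * alpha / 2)) < 1e-12
assert abs(cos_sum - factor * math.cos(phi + n * alpha / 2)) < 1e-12
```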

With these tools at hand we are ready to deal with the proof of our results.

Proof of Theorem 1

To prove that \(f_{k,c}(x)\) has a dominant zero for all values of \(c>0\), we shall use the connection between recurrence sequences and linear algebra (eigenvalues, characteristic polynomials, etc.). First, we note that \(f_{k,c}(x)\) is the characteristic polynomial of its \(k\times k\) companion matrix

$$ A_{c}= \begin{bmatrix} 0&0&\cdots &0&c \\ 1&0&\cdots &0&1 \\ 0&1&\cdots &0&1 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0&0&\cdots &1&1 \end{bmatrix}. $$

That is, \(f_{k,c}(x)=\det (xI-A_{c})\), and so the roots of the recurrence of the \((k,c)\)-Jacobsthal sequence are precisely the eigenvalues of the matrix \(A_{c}\). Thus, in order to prove the existence of a dominant zero of \(f_{k,c}(x)\), it suffices to prove the existence of an eigenvalue whose absolute value is strictly greater than the absolute values of all the other eigenvalues (this largest absolute value is called the spectral radius). For that, we shall use Theorem 5, so we must prove that \(A_{c}^{n}\) is a positive matrix for some \(n\geq 1\). In fact, we claim that \(A_{c}^{k}\) is a positive matrix. To prove this assertion, we use a fact from graph theory: for \(A_{c}=(a_{i,j})\), consider the directed graph on k vertices whose arcs correspond to the positive entries of \(A_{c}\) (including a loop \(k\to k\)). Then the entry \((i,j)\) of \(A_{c}^{k}\) is positive if and only if one can get from i to j in exactly k steps (loops are allowed). Since \(a_{i,k}>0\) for all \(1\leq i\leq k\), and \(a_{i+1,i}>0\) for all \(1\leq i\leq k-1\), our graph looks like Fig. 1 (we refer the reader to [2, p. 78] for these results and other facts about combinatorial matrix theory). So, in the first step we go from i to k (since \(a_{i,k}>0\)), then we stay in the loop \(k\to k\) for \(j-1\) steps, and finally we go from k to j in \(k-j\) steps. Hence we reach j from i in \(1+(j-1)+(k-j)=k\) steps. Since the pair \((i,j)\) is arbitrary, the matrix \(A_{c}^{k}\) has only positive entries and, by the Perron–Frobenius theorem, \(f_{k,c}(x)\) has a simple dominant zero, as desired.
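The combinatorial argument can be verified directly for small k. The following Python sketch (illustrative only; the helper names `companion` and `matmul` are ours) builds \(A_{c}\), checks that \(A_{c}^{k-1}\) still has a zero entry, and that \(A_{c}^{k}\) is positive:

```python
def companion(k, c):
    """Companion matrix A_c of f_{k,c}: c in the upper-right corner,
    a subdiagonal of ones, and ones in the last column of rows 2, ..., k."""
    A = [[0] * k for _ in range(k)]
    A[0][k - 1] = c
    for i in range(1, k):
        A[i][i - 1] = 1
        A[i][k - 1] = 1
    return A

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

k, c = 6, 3
A = companion(k, c)
P = A
for _ in range(k - 2):
    P = matmul(P, A)          # P = A^(k-1)
assert P[0][0] == 0           # no walk of length k-1 from vertex 1 back to itself
P = matmul(P, A)              # P = A^k
assert all(P[i][j] > 0 for i in range(k) for j in range(k))
```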

Figure 1: The digraph corresponding to the companion matrix \(A_{c}\)

In order to prove the items, we observe that, for \(c=1\), the \((k,1)\)-Jacobsthal sequence is exactly the k-generalized Fibonacci sequence (and the result is already known for this last sequence). So, we may suppose, in all that follows, that \(c\neq 1\).

Proof of (i)

The proof follows directly from the fact that

$$ f_{k,2}(x)=(x-2) \frac{x^{k}-1}{x-1}. $$

 □
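Indeed, \((x-2)(x^{k-1}+x^{k-2}+\cdots +x+1)=x^{k}-x^{k-1}-\cdots -x-2=f_{k,2}(x)\), which a one-loop Python check confirms (illustrative; `polymul` is our helper):

```python
def polymul(a, b):
    """Multiply two polynomials given as coefficient lists (leading first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

for k in range(2, 12):
    # (x - 2)(x^{k-1} + ... + x + 1)  ==  x^k - x^{k-1} - ... - x - 2
    assert polymul([1, -2], [1] * k) == [1] + [-1] * (k - 1) + [-2]
```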

Proof of (ii)

Note that, by Descartes’ rule of signs, \(f_{k,c}(x)\) has only one positive zero, say α. Also, \(f_{k,c}(2)=2-c<0\), and then \(\alpha >2\) by the Intermediate Value Theorem, together with the fact that \(f_{k,c}(x)\) tends to infinity as \(x\to \infty \).

Define \(g_{c}(x)=-f_{k,c}^{*}(x)=cx^{k}+x^{k-1}+\cdots + x-1\). Thus \(\beta :=1/\alpha \in (0,1/2)\) is a zero of \(g_{c}(x)\). Note that

$$ g_{c}(x)=(x-\beta ) \bigl(cx^{k-1}+(c\beta +1)x^{k-2}+\cdots + \bigl(c\beta ^{k-1}+ \beta ^{k-2}+ \cdots + \beta +1 \bigr) \bigr). $$

Write \(\psi _{c}(x):=g_{c}(x)/(x-\beta )\). Now we shall prove that the coefficients of \(\psi _{c}(x)\) are in decreasing order. In fact, since \(c(1-\beta )>1\) (because \(\beta <1/2\) and \(c>2\)), we have \(c>c\beta +1\). For the same reason, we have

$$ c\beta ^{j-1}+\sum_{i=0}^{j-2}\beta ^{i}>c\beta ^{j}+\sum_{i=0}^{j-1} \beta ^{i}, $$

for all \(j\in [2,k]\) (since the above inequality is equivalent to \(c>c\beta +1\)). Therefore, by the Eneström–Kakeya theorem, all the zeros of \(\psi _{c}(x)\) satisfy \(\vert x \vert \leq 1\) and so all the zeros of \(f_{k,c}(x)\) satisfy \(\vert x \vert \geq 1\). Now, it suffices to prove that these zeros do not lie on the unit circle. For that, we shall use Marden’s method. Since \(\delta (f_{k,c})=(-c)^{2}-1^{2}\neq 0\) (recall that \(c\neq 1\)), by Marden’s theorem the polynomials \(f_{k,c}(x)\) and \(\operatorname{Tf}_{k,c}(x)=(-c)f_{k,c}(x)- x^{k} f_{k,c}(1/x)\) have the same number of zeros on the boundary of the unit circle. After some calculations (and dividing by the non-zero factor \(c+1\)), we obtain

$$ \operatorname{Tf}_{k,c}(x)=x^{k-1}+x^{k-2}+\cdots + x+ c -1. $$
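This computation can be cross-checked numerically: applying the Schur transform \(\operatorname{Tf}=a_{0}f-a_{n}f^{*}\) to \(f_{k,c}\) gives, up to a non-zero constant factor \(c+1\), the polynomial displayed above (an illustrative Python sketch; the function name is ours):

```python
def schur_transform(f):
    """Tf = a_0*f - a_n*f*, for a coefficient list f (leading -> constant);
    the reciprocal polynomial f* is just the reversed list."""
    a0, an = f[-1], f[0]
    return [a0 * u - an * v for u, v in zip(f, f[::-1])]

k, c = 6, 3
f = [1] + [-1] * (k - 1) + [-c]              # f_{k,c}(x)
Tf = schur_transform(f)
# The leading coefficient vanishes; the rest is (c+1)*(x^{k-1}+...+x+(c-1))
assert Tf[0] == 0
assert Tf[1:] == [c + 1] * (k - 1) + [(c + 1) * (c - 1)]
```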

Let us prove that this polynomial does not have a zero with absolute value equal to 1. Indeed, suppose that \(x=\exp (i\theta )=\cos ( \theta )+i\sin (\theta )\) (for some \(\theta \in (0,\pi )\), since −1 and 1 are not zeros of \(\operatorname{Tf}_{k,c}(x)\)) satisfies \(\operatorname{Tf}_{k,c}(x)=0\). Thus, by De Moivre’s formula, combining the real and imaginary parts, we obtain

$$ \cos (\theta )+\cos (2\theta )+\cdots +\cos \bigl((k-1)\theta \bigr)=1-c $$

and

$$ \sin (\theta )+\sin (2\theta )+\cdots +\sin \bigl((k-1)\theta \bigr)=0. $$

By using Eqs. (4) and (5), we arrive at

$$ \frac{\sin ((k-1)\theta /2)\cos (k\theta /2)}{\sin (\theta /2)}=1-c $$
(6)

and

$$ \frac{\sin ((k-1)\theta /2)\sin (k\theta /2)}{\sin (\theta /2)}=0. $$
(7)

Since \(c\neq 1\), from (7) one has \(\sin (k\theta /2)=0\), and so \(k\theta /2=\ell \pi \), for some integer \(\ell \). Thus, \(\cos (k \theta /2)=(-1)^{\ell }\), and the sine addition formula gives \(\sin ((k-1)\theta /2)=(-1)^{\ell +1}\sin (\theta /2)\). Combining this fact with (6), we arrive at \(-1=1-c\), i.e., \(c=2\), which is absurd (we are in the case \(c>2\)). So, all the zeros of \(f_{k,c}(x)\) satisfy \(\vert x \vert >1\). □

Proof of (iii)

Again, by Descartes’ rule of signs, \(f_{k,c}(x)\) has only one positive zero, say α. Also, \(f_{k,c}(2)=2-c>0\) and \(f_{k,c}(1)=2-k-c<0\), and then \(\alpha \in (1, 2)\) by the Intermediate Value Theorem.

Define \(g_{c}(x)=-f_{k,c}^{*}(x)=cx^{k}+x^{k-1}+\cdots + x-1\). Thus \(\beta :=1/\alpha \in (1/2,1)\) is a zero of \(g_{c}(x)\). Note that

$$ g_{c}(x)=(x-\beta ) \bigl(cx^{k-1}+(c\beta +1)x^{k-2}+\cdots + \bigl(c\beta ^{k-1}+ \beta ^{k-2}+ \cdots + \beta +1 \bigr) \bigr). $$

Write \(\psi _{c}(x):=g_{c}(x)/(x-\beta )\). Then

$$ \psi _{c}^{*}(x)= \bigl(c\beta ^{k-1}+\beta ^{k-2}+\cdots + \beta +1 \bigr)x^{k-1}+ \cdots + (c\beta +1)x+c. $$

Now the coefficients of the previous polynomial are in increasing order. In fact, as in the proof of item (ii), it is enough to prove that \(c\beta +1\geq c\). This holds because \(c(1-\beta )<2\cdot (1-1/2)=1\) (since \(c<2\) and \(\beta >1/2\)). Thus, by the Eneström–Kakeya theorem, all the zeros of \(\psi ^{*}_{c}(x)\) satisfy \(\vert x \vert \leq 1\), hence all the zeros of \(\psi _{c}(x)\) satisfy \(\vert x \vert \geq 1\), and finally all the zeros of \(f_{k,c}(x)\) different from α satisfy \(\vert x \vert \leq 1\). Now, the proof that there is no zero of \(f_{k,c}(x)\) on the boundary of the unit circle is the same as in the previous item (since in that proof the absurdity was that \(c=2\), which is excluded here as well). This completes the proof. □

Proofs of Theorem 2 and Corollary 1

Proof of Theorem 2

Let \(g_{c}(x)=(x-1)f_{k,c}(x)=x^{k+1}-2x^{k}-(c-1)x+c\). By Descartes’ rule of signs, this polynomial has two positive real zeros, counting multiplicity. One of them is \(x=1\) and the other, which is a zero of \(f_{k,c}(x)\), must be simple (note that \(f_{k,c}(1)=2-k-c<0\)). For the same reason (Descartes’ rule of signs applied to \(g_{c}(-x)\)), we find that \(g_{c}(x)\) has exactly one negative zero when k is even, and either two negative zeros or none when k is odd. So, in the even case, the real zeros must be simple.

In conclusion, a possible zero α of \(g_{c}(x)\) with multiplicity greater than one must be a non-real number. Thus, since \(g_{c}(\alpha )=0\) and \(g_{c}'(\alpha )=0\) have to hold, we can combine these equalities to obtain

$$ 0=h_{c}(\alpha ):=\alpha g_{c}'(\alpha )-(k+1)g_{c}(\alpha )=2\alpha ^{k}+(c-1)k\alpha -c(k+1). $$

Also,

$$ 0=(\alpha -2)h_{c}(\alpha )-2g_{c}(\alpha )= (c-1)k \alpha ^{2}+(c+2k-3ck-2) \alpha +2ck. $$

This implies that we have two possibilities for α, namely,

$$ \alpha _{\pm }=\frac{3ck+2-c-2k\pm \sqrt{(c+2k-3ck-2)^{2}-8c(c-1)k ^{2}}}{2(c-1)k}. $$
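As a sanity check, \(\alpha _{\pm }\) are exactly the roots of the quadratic \((c-1)k\alpha ^{2}+(c+2k-3ck-2)\alpha +2ck=0\) derived above; a short illustrative Python verification (the sample values of k and c are ours):

```python
import cmath

def alpha_pm(k, c):
    """The two candidate multiple zeros, i.e., the roots of the quadratic
    (c-1)k*a^2 + (c+2k-3ck-2)*a + 2ck = 0."""
    a = (c - 1) * k
    b = c + 2 * k - 3 * c * k - 2
    d = cmath.sqrt(b * b - 8 * c * (c - 1) * k ** 2)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

k, c = 7, 2.5
for al in alpha_pm(k, c):
    q = (c - 1) * k * al ** 2 + (c + 2 * k - 3 * c * k - 2) * al + 2 * c * k
    assert abs(q) < 1e-9
```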

However, for any c and k as in items (i) or (ii), we obtain

$$ (c+2k-3ck-2)^{2} \geq 8c(c-1)k^{2} , $$
(8)

showing that α would be real, a contradiction; hence all the zeros are simple in these cases. It remains to verify (8). Inequality (8) can be rewritten as

$$ (c-2)^{2} k^{2} - 2(c-2) (3c-2)k + (c-2)^{2} \geq 0 $$
(9)

or

$$ (c-2)^{2} \bigl(k^{2}+1 \bigr) \geq - 2(2-c) (3c-2)k . $$
(10)

For \(c\in (1,2]\), we see from (10) that (8) surely holds for any non-negative k. Now let us consider \(c\in (0,1] \cup (2, \infty )\). The discriminant D of the quadratic polynomial (in the variable k) on the left-hand side of inequality (9) equals \(32c(c-1)(c-2)^{2}\). Clearly, \(D\leq 0\) for \(c\in (0,1]\), hence (8) holds for any k. On the other hand, \(D>0\) for \(c>1\), so it remains to treat the case \(c>2\). The zeros of the quadratic polynomial on the left-hand side of inequality (9) are

$$ k_{\pm }= \frac{3 c-2}{c-2} \pm 2 \sqrt{2} \sqrt{ \frac{c(c-1) }{(c-2)^{2}}} . $$
(11)

These zeros depend on the parameter c, hence we define two functions \(k_{+}\) and \(k_{-}\), \(k_{\pm }: (2, \infty ) \to \mathbb{R}\), given by (11). We easily get \(\lim_{c\to 2^{+}} k_{-}(c) = 0\), \(\lim_{c\to 2^{+}} k_{+}(c) = +\infty \), and \(\lim_{c\to \infty } k_{\pm }(c) = 3\pm 2\sqrt{2}\). The derivatives of these functions are

$$ k_{\pm }'(c) = \frac{-4 \sqrt{c(c-1)} \pm \sqrt{2} (2-3c)}{(c-2)^{2} \sqrt{c(c-1)}} . $$

We easily see that the function \(k_{+}(c)\) is decreasing and the function \(k_{-}(c)\) is increasing in \((2,\infty )\). Thus, for \(c>2\), the function \(k_{-}\) is bounded, concretely \(0< k_{-}(c) < 3-2\sqrt{2} < 0.2\), while the function \(k_{+}\) is bounded from below by \(3+2\sqrt{2} > 2\). Hence, for \(c>2\), we have the following condition for k:

$$ k \geq \frac{3 c-2}{c-2} + \sqrt{\frac{8c(c-1) }{(c-2)^{2}}} . $$

This completes the proof. □

Proof of Corollary 1

First, let us suppose that \(k>5\). We again use the function \(k_{+}\) defined above and its properties. If c is an integer, then the maximum of \(k_{+}(c)\) occurs at \(c=3\), and this maximum equals \(k_{+}(3)=7+4\sqrt{3}<14\). So, if \(k\geq 14\), then, by Theorem 2, \(f_{k,c}(x)\) has only simple zeros, and this fact does not depend on c. However, as c increases, the function \(k_{+}(c)\) decreases, so we can obtain better lower bounds for k, which lead, by computational methods, to our desired result. To carry out this task, we define some commands in Wolfram Mathematica. First, the function \(r(c)\):

      r[c_] := IntegerPart[(-2 + 3 c)/(-2 + c) +
              2 Sqrt[2] Sqrt[(-c + c^2)/(-2 + c)^2]]

The function \(f_{k,c}(x)\) (here F[x, k, c]):

   F[x_, k_, c_] := x^k - Sum[x^j, {j, 1, k - 1}] - c

Its derivative (with respect to x):

   G[x_, k_] := k*x^(k - 1) - Sum[j*x^(j - 1), {j, 1, k - 1}]

We find that possible multiple zeros of \(f_{k,c}(x)\) can occur only for \(5< k\leq r(c)\). However, \(r(c)=5\) for all \(c\geq 51\), so there are no zeros of this kind in those cases. Thus we may consider only the case \(3\leq c\leq 50\). We know that α is a multiple zero of \(f_{k,c}(x)\) if \(f_{k,c}(\alpha )=f_{k,c}'(\alpha )=0\). The next function calculates, for a fixed c, the maximum over k in the range \(6\leq k\leq r(c)\) of the number of such solutions:

   s[c_] := Max[
    Table[ Length[NSolve[F[x, k, c] == 0 && G[x, k] == 0, x ]],
      {k, 6, r[c]} ]]

Finally, we make a table for c from 3 to 50 by the following input:

        Table[ s[c], {c, 3, 50} ]

The output is

 {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0}

This means that there are no multiple zeros if \(k>5\).
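The Mathematica search can also be reproduced independently. The following Python sketch (our illustrative re-implementation, using exact rational arithmetic instead of NSolve) checks that \(\gcd (f_{k,c},f_{k,c}')\) is constant, i.e., that \(f_{k,c}\) has only simple zeros, for \(3\leq c\leq 50\) and \(6\leq k\leq r(c)\):

```python
import math
from fractions import Fraction

def r_bound(c):
    # integer part of k_+(c) = (3c - 2)/(c - 2) + 2*sqrt(2)*sqrt(c(c-1))/(c - 2)
    return int((3 * c - 2) / (c - 2)
               + 2 * math.sqrt(2) * math.sqrt(c * (c - 1)) / (c - 2))

def polyrem(a, b):
    """Remainder of a divided by b (coefficient lists, leading first)."""
    a = a[:]
    while a and a[0] == 0:
        a.pop(0)
    while len(a) >= len(b):
        q = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= q * b[i]
        a.pop(0)
        while a and a[0] == 0:
            a.pop(0)
    return a

def squarefree(k, c):
    """True iff gcd(f_{k,c}, f_{k,c}') is a non-zero constant."""
    f = [Fraction(1)] + [Fraction(-1)] * (k - 1) + [Fraction(-c)]
    df = [Fraction(k - i) * f[i] for i in range(k)]     # derivative f'
    a, b = f, df
    while b:                                            # Euclidean algorithm
        a, b = b, polyrem(a, b)
    return len(a) == 1                                  # gcd is a constant

assert all(squarefree(k, c)
           for c in range(3, 51)
           for k in range(6, r_bound(c) + 1))
```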

For the cases \(k = 2, 3, 4\), or 5, we can use Mathematica to return the exact zeros of \(f_{k,c}'(x)\) (for \(k\leq 5\) we have a closed formula for these zeros, which does not depend on c). After that, we consider the linear equation \(f_{k,c}(y)=0\) (in the variable c), where y runs over the roots of \(f_{k,c}'(x)=0\). We then use Mathematica again to check that none of the returned values of c is a positive integer. This concludes the proof. □

References

  1. Baker, A.: Transcendental Number Theory. Cambridge Mathematical Library. Cambridge University Press, Cambridge (1990)

  2. Brualdi, R.A., Ryser, H.J.: Combinatorial Matrix Theory. Cambridge University Press, Cambridge (1991)

  3. Bugeaud, Y., Mignotte, M., Siksek, S.: Classical and modular approaches to exponential Diophantine equations I. Fibonacci and Lucas powers. Ann. Math. 163, 969–1018 (2006)

  4. Eneström, G.: Remarque sur un théorème relatif aux racines de l’équation \(a_{n}x^{n} + a_{n-1}x^{n-1} + \cdots +a_{1}x + a_{0} = 0\) où tous les coefficients a sont réels et positifs. Tohoku Math. J. 18, 34–36 (1920)

  5. Horadam, A.F.: Jacobsthal representation numbers. Fibonacci Q. 34, 40–54 (1996)

  6. Hua, L.-K., Wang, Y.: Application of Number Theory to Numerical Analysis. Springer, Berlin (1981)

  7. Kakeya, S.: On the limits of the roots of an algebraic equation with positive coefficients. Tohoku Math. J. 2, 140–142 (1912)

  8. Kilic, E., Arikan, T.: More on the infinite sum of reciprocal Fibonacci, Pell and higher order recurrences. Appl. Math. Comput. 219, 7783–7788 (2013)

  9. Koshy, T.: Fibonacci and Lucas Numbers with Applications. Wiley, New York (2001)

  10. Marden, M.: The Geometry of the Zeros of a Polynomial in a Complex Variable. Am. Math. Soc., New York (1989)

  11. Shorey, T.N., Tijdeman, R.: Exponential Diophantine Equations. Cambridge Tracts in Mathematics, vol. 87. Cambridge University Press, Cambridge (1986)


Acknowledgements

The authors thank the anonymous referees for their careful corrections and very helpful and detailed comments, which have significantly improved the presentation of this paper.

Availability of data and materials

The data used to support the findings of this paper are included within the article. Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

Funding

This research was funded by the Project of Excelence PřF UHK 01/2018, University of Hradec Králové, Czech Republic.

Author information

The authors worked together to produce the results presented here. All authors read and approved the final manuscript.

Correspondence to Pavel Trojovský.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


MSC

  • 11Bxx
  • 11B83

Keywords

  • Linear recurrence sequence
  • Jacobsthal sequence
  • Generalized Jacobsthal sequence
  • Computation
  • Polynomial
  • Matrix theory
  • Digraph