  • Research
  • Open Access

Spectral and oscillation theory for general second order Sturm-Liouville difference equations

Advances in Difference Equations20122012:82

https://doi.org/10.1186/1687-1847-2012-82

  • Received: 10 February 2012
  • Accepted: 15 June 2012

Abstract

In this article we establish an oscillation theorem for second order Sturm-Liouville difference equations with general nonlinear dependence on the spectral parameter λ. This nonlinear dependence on λ is allowed both in the leading coefficient and in the potential. We extend the traditional notions of eigenvalues and eigenfunctions to this more general setting. Our main result generalizes the recently obtained oscillation theorem for second order Sturm-Liouville difference equations, in which the leading coefficient is constant in λ. Problems with Dirichlet boundary conditions as well as with variable endpoints are considered.

Mathematics Subject Classification 2010: 39A21; 39A12.

Keywords

  • Sturm-Liouville difference equation
  • discrete symplectic system
  • oscillation theorem
  • finite eigenvalue
  • finite eigenfunction
  • generalized zero
  • quadratic functional

1 Introduction

In this article we consider the second order Sturm-Liouville difference equation
Δ(r_k(λ) Δx_k) + q_k(λ) x_{k+1} = 0,   k ∈ [0, N − 1]_ℤ,   (SL_λ)
where r_k : ℝ → ℝ for k ∈ [0, N]_ℤ and q_k : ℝ → ℝ for k ∈ [0, N − 1]_ℤ are given differentiable functions of the spectral parameter λ such that
r_k(λ) ≠ 0 and ṙ_k(λ) ≤ 0 for all k ∈ [0, N]_ℤ,   q̇_k(λ) ≥ 0 for all k ∈ [0, N − 1]_ℤ.
(1.1)
Here N ∈ ℕ is a fixed number with N ≥ 2, [a, b]_ℤ := [a, b] ∩ ℤ, and the dot denotes differentiation with respect to λ. With equation (SL_λ) we consider the Dirichlet boundary conditions, that is, we study the eigenvalue problem
(SL_λ),   λ ∈ ℝ,   x_0 = 0 = x_{N+1}.   (E_0)
We recall first the classical setting of Sturm-Liouville difference equations, see e.g. [1–4], in which the function r_k(·) is constant (nonzero) in λ and the function q_k(·) is linear and increasing in λ. That is, the traditional assumptions for the oscillation and spectral theory of (SL_λ) are the following:
r_k(λ) ≡ r_k ≠ 0 for all k ∈ [0, N]_ℤ,   q_k(λ) = q_k + λ w_k with w_k > 0 for all k ∈ [0, N − 1]_ℤ.
(1.2)

In some publications, such as in [2, 4], the authors also impose the sign condition r_k > 0 for all k ∈ [0, N]_ℤ, but it is well known nowadays that r_k ≠ 0 is sufficient to develop the oscillation and spectral theory of these equations, see e.g. [5, p. 5] or [6]. The explanation of this phenomenon also follows from the analysis of the general equation (SL_λ) discussed below.

Assume for a moment that (1.2) holds. Following [4, Chapter 7] or [2, Chapter 4], a number λ_0 is an eigenvalue of (E_0) if there exists a nontrivial solution x = x(λ_0) of equation (SL_{λ_0}) satisfying the Dirichlet endpoints x_0(λ_0) = 0 = x_{N+1}(λ_0). By the uniqueness of solutions of equation (SL_{λ_0}), it follows that the eigenvalues of (E_0) are characterized by the condition x̂_{N+1}(λ_0) = 0, where x̂(λ) is the principal solution of equation (SL_λ), i.e., the solution starting with the initial values x̂_0(λ) = 0 and x̂_1(λ) = 1/r_0. If x(λ) is a solution of (SL_λ) with (1.2), then the functions x_k(λ) are polynomials in λ for every k ∈ [0, N + 1]_ℤ. Therefore, the zeros of x_k(λ) are isolated, showing that the eigenvalues of (E_0) are simple (with multiplicity equal to one) and isolated. Furthermore, by a standard argument from linear algebra it follows that the eigenvalues of (E_0), considered with λ ∈ ℂ, are indeed real and that the eigenfunctions corresponding to different eigenvalues are orthogonal with respect to the inner product ⟨x, y⟩_w := Σ_{k=0}^{N−1} w_k x_{k+1} y_{k+1}. The oscillation theorem for (E_0) then says that the j-th eigenfunction has exactly j generalized zeros in the interval (0, N + 1]. The generalized zeros are defined as follows, see [7, 8]. A sequence x = {x_k}_{k=0}^{N+1} has a generalized zero in (k, k + 1], if
x_k ≠ 0   and   r_k x_k x_{k+1} ≤ 0.
(1.3)

If the sequence x has a generalized zero in (k, k + 1], then this generalized zero is said to be at k + 1 when x_{k+1} = 0, while it is said to be in (k, k + 1) when r_k x_k x_{k+1} < 0. This terminology corresponds, roughly speaking, to the idea of hitting the axis in the first case or crossing the axis in the latter case. Finally, the Rayleigh principle for (E_0) says that the (j + 1)-th eigenvalue can be computed by minimizing the associated quadratic form over nontrivial sequences η = {η_k}_{k=0}^{N+1} which satisfy the endpoint conditions η_0 = 0 = η_{N+1} and which are orthogonal to the first j eigenfunctions.
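The classical construction above can be checked numerically. The sketch below, with illustrative coefficients r_k, q_k, w_k of our own choosing (not data from the article), assembles the Dirichlet problem (E_0) under (1.2) as a symmetric tridiagonal eigenvalue problem, recomputes each eigenfunction as the principal solution, and counts its generalized zeros according to (1.3):

```python
import numpy as np

# Numerical sketch of the classical setting (1.2): r_k constant in lambda and
# q_k(lambda) = q_k + lambda*w_k with w_k > 0.  All coefficient values below
# are illustrative choices, not data from the article.
N = 5
r = np.array([1.0, 2.0, 1.0, 1.5, 1.0, 2.0])   # r_0, ..., r_N (nonzero)
q0 = np.array([0.3, -0.2, 0.1, 0.0, 0.4])      # q_0, ..., q_{N-1}
w = np.array([1.0, 0.5, 1.0, 2.0, 1.0])        # w_0, ..., w_{N-1} (positive)

# Writing the N equations of (E_0) in the unknowns x_1, ..., x_N gives the
# symmetric generalized eigenvalue problem (-A) x = lambda W x with A
# tridiagonal; the eigenvalues are real and simple.
A = np.zeros((N, N))
for i in range(1, N + 1):
    A[i-1, i-1] = q0[i-1] - r[i] - r[i-1]
    if i < N:
        A[i-1, i] = A[i, i-1] = r[i]
W12 = np.diag(1.0 / np.sqrt(w))
lam = np.linalg.eigvalsh(W12 @ (-A) @ W12)     # eigenvalues in ascending order

def principal_solution(l):
    """Principal solution (2.4): x_0 = 0, x_1 = 1/r_0, then the recursion (SL_lambda)."""
    x = np.zeros(N + 2)
    x[1] = 1.0 / r[0]
    for k in range(N):
        x[k+2] = x[k+1] + (r[k]*(x[k+1] - x[k]) - (q0[k] + l*w[k])*x[k+1]) / r[k+1]
    return x

def generalized_zeros(x, tol=1e-9):
    """Count generalized zeros (1.3) of x in (0, N+1], with a small tolerance."""
    return sum(1 for k in range(len(x) - 1)
               if abs(x[k]) > tol and (abs(x[k+1]) < tol or r[k]*x[k]*x[k+1] < 0))

# Oscillation theorem: the j-th eigenfunction has exactly j generalized zeros.
counts = [generalized_zeros(principal_solution(l)) for l in lam]
assert counts == list(range(1, N + 1))
```

The final assertion is exactly the classical statement recalled above: the j-th eigenfunction has j generalized zeros in (0, N + 1].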

In this article we show that some of the above properties can be extended to the eigenvalue problem (E_0) in which the coefficients depend on the spectral parameter λ in general nonlinearly and they satisfy the monotonicity assumption (1.1). In particular, we discuss the notions of finite eigenvalues and finite eigenfunctions for such problems, which are appropriate generalizations of the corresponding notions for the case of (1.2). Then we prove as our main result the corresponding oscillation theorem. Note that such an oscillation theorem for problem (E_0) was recently proven in [9, Section 6.1 and Example 7.6] under the assumption (1.1) with the first condition in (1.2), that is, under r_k(λ) ≡ r_k constant in λ. That result follows by writing equation (SL_λ) as a special discrete symplectic system, see the next section. The oscillation theorem in the present article does not impose this restriction, so that it directly generalizes the oscillation theorem in [9, Example 7.6] to the case of variable r_k(λ). As an application of our new oscillation theorem we prove that the j-th finite eigenfunction has exactly j generalized zeros in the interval (0, N + 1], which is a discrete analogue of a traditional statement in the continuous time theory. In addition, we further consider the eigenvalue problems with more general boundary conditions, which include the Neumann-Dirichlet or Neumann-Neumann boundary conditions as a special case. This additional result is obtained by a known transformation to problem (E_0), see [10]. Our new oscillation theorem and the new notions of finite eigenvalues and finite eigenfunctions for problem (E_0) can also be regarded as the discrete analogues of the corresponding continuous time theory in [11].

2 Main results

Equation (SL_λ) is a special scalar discrete symplectic system
x_{k+1} = a_k(λ) x_k + b_k(λ) u_k,   u_{k+1} = c_k(λ) x_k + d_k(λ) u_k,   k ∈ [0, N]_ℤ,
(2.1)
in which the 2 × 2 transition matrix is symplectic, i.e., with
S_k(λ) = ( a_k(λ)  b_k(λ) ; c_k(λ)  d_k(λ) ) := ( 1  1/r_k(λ) ; −q_k(λ)  1 − q_k(λ)/r_k(λ) ),   J := ( 0  1 ; −1  0 ),
(2.2)

we have S_k^T(λ) J S_k(λ) = J for all k ∈ [0, N]_ℤ and λ ∈ ℝ. Note that S_k(λ) being symplectic is equivalent to the fact that the determinant of S_k(λ) is equal to one, that is, to the condition a_k(λ) d_k(λ) − b_k(λ) c_k(λ) = 1, see [12]. Oscillation theorems for discrete symplectic systems (2.1) in which a_k(λ) ≡ a_k and b_k(λ) ≡ b_k are constant in λ, and c_k(λ) and d_k(λ) are linear in λ, were derived in [10, 13–15].

Remark 2.1 The component u_k in (2.1) is defined through x_k as u_k := r_k(λ) Δx_k for k ∈ [0, N]_ℤ. This yields that the first equation in (2.1) is satisfied for all k ∈ [0, N]_ℤ and the second equation in (2.1) for all k ∈ [0, N − 1]_ℤ. If we want to have both equations satisfied for k ∈ [0, N]_ℤ, we need to define the coefficient q_N(λ) in such a way that the matrix S_N(λ) is symplectic and q̇_N(λ) ≥ 0. This can be done e.g. by taking q_N(λ) := 0 for all λ ∈ ℝ and u_{N+1} := u_N.
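As a quick sanity check, the snippet below verifies for a few sample values of r_k(λ) and q_k(λ) (arbitrary nonzero choices, not data from the article) that the matrix S_k(λ) in (2.2) is symplectic and that one step of (2.1) reproduces the scalar relations u_k = r_k(λ) Δx_k and Δu_k = −q_k(λ) x_{k+1}:

```python
import numpy as np

# Sketch: the transition matrix S_k(lambda) of (2.2) is symplectic
# (S^T J S = J, equivalently det S = 1) for any r != 0 and any q.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def S(r, q):
    """Transition matrix (2.2) for sample values r = r_k(lambda), q = q_k(lambda)."""
    return np.array([[1.0, 1.0 / r],
                     [-q, 1.0 - q / r]])

for r, q in [(2.0, 0.5), (-1.5, 3.0), (0.25, -4.0)]:
    Sk = S(r, q)
    assert np.allclose(Sk.T @ J @ Sk, J)          # symplectic
    assert np.isclose(np.linalg.det(Sk), 1.0)     # det = 1, see [12]
    # one step of (2.1) reproduces the scalar recursion
    x, u = 1.0, 0.7
    x1, u1 = Sk @ np.array([x, u])
    assert np.isclose(u, r * (x1 - x))            # u_k = r_k * Delta x_k
    assert np.isclose(u1 - u, -q * x1)            # Delta u_k = -q_k * x_{k+1}
```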

First we show how certain solutions of (SL_λ) behave with respect to λ. Assumption (1.1) implies that the solutions of (SL_λ) are differentiable, hence continuous, in λ on ℝ. We will consider the solutions whose initial values
x_0(λ), r_0(λ) Δx_0(λ)   do not depend on λ.
(2.3)
This condition is satisfied for example by the principal solution x̂(λ), for which
x̂_0(λ) = 0,   x̂_1(λ) = 1/r_0(λ)   for all λ ∈ ℝ.
(2.4)

The following result shows that under the monotonicity assumption (1.1) solutions of the above type cannot oscillate in λ near any finite value of λ, compare with [9, Theorem 4.3].

Lemma 2.2 Assume that (1.1) holds and let x(λ) = {x_k(λ)}_{k=0}^{N+1} be a nontrivial solution of (SL_λ) satisfying (2.3). Then for each k ∈ [0, N + 1]_ℤ and λ_0 ∈ ℝ there exists δ > 0 such that x_k(λ) is either identically zero or never zero on (λ_0, λ_0 + δ), resp. on (λ_0 − δ, λ_0).

Proof. Let λ_0 ∈ ℝ and k ∈ [0, N + 1]_ℤ be fixed. If k = 0, then the result follows trivially. Also, if x_k(λ_0) ≠ 0, then the statement is a consequence of the continuity of x_k(λ) in λ. Therefore, further on we assume that k ∈ [1, N + 1]_ℤ and x_k(λ_0) = 0. First we construct another solution y(λ) = {y_j(λ)}_{j=0}^{N+1} whose initial conditions do not depend on λ as in (2.3) such that y_k(λ_0) ≠ 0 and such that the Casorati determinant
C[y(λ), x(λ)]_j := r_j(λ) [ y_j(λ) Δx_j(λ) − x_j(λ) Δy_j(λ) ] = 1
for all j ∈ [0, N + 1]_ℤ and λ ∈ ℝ. This means that the solutions y(λ) and x(λ) form a normalized pair of solutions of (SL_λ). The solution y(λ) can be constructed from the initial conditions
y_0(λ) = r_0(λ) Δx_0(λ)/ω_0,   r_0(λ) Δy_0(λ) = −x_0(λ)/ω_0,
where ω_0 := x_0²(λ) + r_0²(λ) [Δx_0(λ)]² is independent of λ. The fact that y_k(λ_0) ≠ 0 then follows from [16, Proposition 4.1.1]. By the continuity of y_k(λ) in λ, there exists ε > 0 such that y_k(λ) ≠ 0 on (λ_0 − ε, λ_0 + ε). For these values of λ, a direct calculation shows the formula
d/dλ [x_k(λ)/y_k(λ)] = (1/y_k²(λ)) Σ_{j=0}^{k−1} { q̇_j(λ) [x_{j+1}(λ) − x_k(λ) y_{j+1}(λ)/y_k(λ)]² − ṙ_j(λ) [Δx_j(λ) − x_k(λ) Δy_j(λ)/y_k(λ)]² },
compare with [9, Lemma 4.1]. Therefore, under the assumption (1.1) the function z_k(λ) := x_k(λ)/y_k(λ) is nondecreasing in λ on (λ_0 − ε, λ_0 + ε). This means that once z_k(λ_0) = 0, then z_k(λ) is either identically zero on (λ_0, λ_0 + δ) for some δ ∈ (0, ε), or z_k(λ) is positive on (λ_0, λ_0 + ε). A similar argument applies on the left side of λ_0. And since the zeros of z_k(λ) in (λ_0 − ε, λ_0 + ε) are exactly those of x_k(λ), the result follows.

Remark 2.3 The statement of Lemma 2.2 says that for a nontrivial solution x(λ) = {x_k(λ)}_{k=0}^{N+1} of (SL_λ) satisfying (2.3) the quantity
h_k(λ) := rank x_k(λ)
(2.5)

is piecewise constant in λ on ℝ for every given k ∈ [0, N + 1]_ℤ.

Remark 2.4 If a solution x(λ) of (SL_λ) satisfies x_k(λ_0) ≠ 0 at some k ∈ [0, N + 1]_ℤ and λ_0 ∈ ℝ, then there exists δ > 0 such that x_k(λ) ≠ 0 on (λ_0 − δ, λ_0 + δ). Moreover, as in the proof of Lemma 2.2 we can derive for all λ ∈ (λ_0 − δ, λ_0 + δ) the formula
ṗ_k(λ) = −ṙ_k(λ) x_k²(λ) / (r_k²(λ) x_{k+1}²(λ)) + (1/(r_k²(λ) x_{k+1}²(λ))) Σ_{j=0}^{k−1} { q̇_j(λ) x_{j+1}²(λ) − ṙ_j(λ) [Δx_j(λ)]² },
(2.6)
compare with [9, Remarks 6.13 and 6.15], where
p_k(λ) := x_k(λ) / (r_k(λ) x_{k+1}(λ)).
(2.7)

Identity (2.6) shows that the function p_k(λ) is nondecreasing in λ whenever it is defined, i.e., whenever x_{k+1}(λ) ≠ 0. This monotonicity of p_k(λ) in λ is essential for deriving the oscillation theorem below. Note also that according to (1.3) we have p_k(λ) < 0 if and only if the solution x(λ) has a generalized zero in (k, k + 1).
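The monotonicity in Remark 2.4 can be observed numerically. In the sketch below we pick the hypothetical coefficients r_k(λ) = 1 + e^{−λ} and q_k(λ) = 0.1k + λ (our own choices, which satisfy (1.1) since ṙ_k(λ) = −e^{−λ} ≤ 0 and q̇_k(λ) = 1 ≥ 0) and track p_2(λ) along the principal solution on an interval where x_3(λ) ≠ 0:

```python
import numpy as np

# Numerical sketch of Remark 2.4 with genuinely lambda-dependent r_k:
# p_k(lam) = x_k / (r_k x_{k+1}) should be nondecreasing in lambda
# wherever x_{k+1}(lam) != 0, by formula (2.6).
N = 4

def r(k, lam): return 1.0 + np.exp(-lam)      # r > 0, r-dot <= 0
def q(k, lam): return 0.1 * k + lam           # q-dot = 1 >= 0

def principal_solution(lam):
    """Principal solution (2.4) of (SL_lambda) for these coefficients."""
    x = np.zeros(N + 2)
    x[1] = 1.0 / r(0, lam)
    for k in range(N):
        x[k+2] = x[k+1] + (r(k, lam)*(x[k+1] - x[k]) - q(k, lam)*x[k+1]) / r(k+1, lam)
    return x

k = 2                                         # watch p_2(lam)
grid = np.linspace(-3.0, -1.0, 201)           # here q_k(lam) < 0, so x stays positive
p = []
for lam in grid:
    x = principal_solution(lam)
    assert x[k+1] != 0                        # p_k(lam) is defined
    p.append(x[k] / (r(k, lam) * x[k+1]))

# monotonicity predicted by (2.6)
assert all(p[i] <= p[i+1] + 1e-12 for i in range(len(p) - 1))
```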

Remark 2.5 The uniqueness of solutions of (SL_λ) implies that a nontrivial solution x(λ) of (SL_λ) cannot vanish at any two consecutive points k and k + 1. Therefore, if x_k(λ) = 0, then x_{k+1}(λ) ≠ 0, while if x_{k+1}(λ) = 0, then x_k(λ) ≠ 0.

Let x(λ) = {x_k(λ)}_{k=0}^{N+1} be a nontrivial solution of (SL_λ) and denote by m_k(λ) the number of its generalized zeros in (k, k + 1]. Then m_k(λ) ∈ {0, 1}. Our aim is to prove the following local oscillation theorem.

Theorem 2.6 (Local oscillation theorem I) Assume (1.1). Let x(λ) = {x_k(λ)}_{k=0}^{N+1} be a nontrivial solution of (SL_λ) such that (2.3) holds. Fix an index k ∈ [0, N]_ℤ and denote by m_k(λ) the number of generalized zeros of x(λ) in (k, k + 1]. Then m_k(λ^−) and m_k(λ^+) exist and for all λ ∈ ℝ
m_k(λ^+) = m_k(λ) ≤ 1,
(2.8)
m_k(λ^+) − m_k(λ^−) = h_k(λ) − h_k(λ^−) + h_{k+1}(λ^−) − h_{k+1}(λ),
(2.9)

where h_k(λ) and h_{k+1}(λ) are given in (2.5).

In the above formula the value of the function h_j(λ) is 1 if x_j(λ) ≠ 0 and it is 0 if x_j(λ) = 0, for j ∈ {k, k + 1}. Moreover, the notation h_j(λ^−) means the left-hand limit of the function h_j(λ) at the given point λ. Similarly, the notation m_k(λ^−) and m_k(λ^+) stands respectively for the left-hand and right-hand limits of the function m_k(λ) at the point λ.

Proof of Theorem 2.6 Let k ∈ [0, N]_ℤ and λ_0 ∈ ℝ be given. By Remark 2.3, the limits h_k(λ_0^−) and h_{k+1}(λ_0^−) exist. We will show that the left-hand and right-hand limits of the function m_k(λ) at λ_0 also exist and Equations (2.8) and (2.9) are satisfied. We split the proof into two parts depending on the rank of x_{k+1}(λ_0).

Part I. Assume first that x_{k+1}(λ_0) ≠ 0. Then there exists ε > 0 such that x_{k+1}(λ) ≠ 0 for all λ ∈ (λ_0 − ε, λ_0 + ε). This means that for these values of λ the point k + 1 is not a generalized zero of the solution x(λ). According to Remark 2.4, the function p_k(λ) in (2.7) is nondecreasing on (λ_0 − ε, λ_0 + ε), and we have on this interval either m_k(λ) = 1 if p_k(λ) < 0, or m_k(λ) = 0 if p_k(λ) ≥ 0. We further distinguish the following three subcases:

(I-a) p_k(λ_0) < 0,

(I-b) p_k(λ_0) > 0, and

(I-c) p_k(λ_0) = 0.

In subcase (I-a), in which p_k(λ_0) < 0, we have p_k(λ) < 0 and x_k(λ) ≠ 0 for all λ ∈ (λ_0 − δ, λ_0 + δ) for some δ ∈ (0, ε), so that in this case m_k(λ_0) = m_k(λ_0^−) = m_k(λ_0^+) = 1, h_k(λ_0) = h_k(λ_0^−) = 1, and h_{k+1}(λ_0) = h_{k+1}(λ_0^−) = 1. Therefore, the equations in (2.8) and (2.9) hold as the identities 1 = 1 and 0 = 0, respectively. Similarly in subcase (I-b), in which p_k(λ_0) > 0, there is δ ∈ (0, ε) such that p_k(λ) > 0 and x_k(λ) ≠ 0 for all λ ∈ (λ_0 − δ, λ_0 + δ), so that in this case m_k(λ_0) = m_k(λ_0^−) = m_k(λ_0^+) = 0, h_k(λ_0) = h_k(λ_0^−) = 1, and h_{k+1}(λ_0) = h_{k+1}(λ_0^−) = 1. Therefore, both Equations (2.8) and (2.9) now hold as the identity 0 = 0. In subcase (I-c), in which p_k(λ_0) = 0, we have x_k(λ_0) = 0. By Lemma 2.2, there is δ ∈ (0, ε) such that one of the following four subcases applies for the behavior of x_k(λ) near the point λ_0:

(I-c-i) x_k(λ) ≠ 0 on (λ_0 − δ, λ_0) and on (λ_0, λ_0 + δ),

(I-c-ii) x_k(λ) ≠ 0 on (λ_0 − δ, λ_0) and x_k(λ) ≡ 0 on (λ_0, λ_0 + δ),

(I-c-iii) x_k(λ) ≡ 0 on (λ_0 − δ, λ_0) and x_k(λ) ≠ 0 on (λ_0, λ_0 + δ), and

(I-c-iv) x_k(λ) ≡ 0 both on (λ_0 − δ, λ_0) and on (λ_0, λ_0 + δ).

In subcase (I-c-i), the function p_k(λ) must be nondecreasing on (λ_0 − δ, λ_0 + δ), which implies that p_k(λ) < 0 on (λ_0 − δ, λ_0) and p_k(λ) > 0 on (λ_0, λ_0 + δ). Therefore, in this case m_k(λ_0^−) = 1, m_k(λ_0^+) = m_k(λ_0) = 0, h_k(λ_0^−) = 1, h_k(λ_0) = 0, and h_{k+1}(λ_0^−) = h_{k+1}(λ_0) = 1. This means that the equations in (2.8) and (2.9) now hold as the identities 0 = 0 and −1 = −1, respectively. In subcase (I-c-ii), the function p_k(λ) is nondecreasing on (λ_0 − δ, λ_0], which implies that p_k(λ) < 0 on (λ_0 − δ, λ_0) and p_k(λ) ≡ 0 on (λ_0, λ_0 + δ). Thus, as in subcase (I-c-i) we now have m_k(λ_0^−) = 1, m_k(λ_0^+) = m_k(λ_0) = 0, h_k(λ_0^−) = 1, h_k(λ_0) = 0, and h_{k+1}(λ_0^−) = h_{k+1}(λ_0) = 1, so that the equations in (2.8) and (2.9) hold as the identities 0 = 0 and −1 = −1, respectively. In subcase (I-c-iii), the situation is similar with the result that p_k(λ) is nondecreasing on [λ_0, λ_0 + δ), so that p_k(λ) ≡ 0 on (λ_0 − δ, λ_0] and p_k(λ) > 0 on (λ_0, λ_0 + δ). Thus, in this case m_k(λ_0^−) = m_k(λ_0^+) = m_k(λ_0) = 0, h_k(λ_0^−) = h_k(λ_0) = 0, and h_{k+1}(λ_0^−) = h_{k+1}(λ_0) = 1, so that both Equations (2.8) and (2.9) hold as the identity 0 = 0. In the last subcase (I-c-iv), we have p_k(λ) ≡ 0 on (λ_0 − δ, λ_0 + δ), and in this case m_k(λ_0^−) = m_k(λ_0^+) = m_k(λ_0) = 0, h_k(λ_0^−) = h_k(λ_0) = 0, and h_{k+1}(λ_0^−) = h_{k+1}(λ_0) = 1, so that (2.8) and (2.9) hold as the identity 0 = 0.

Part II. Assume that x_{k+1}(λ_0) = 0. Then by Remark 2.5 we have x_k(λ_0) ≠ 0, and there exists ε > 0 such that x_k(λ) ≠ 0 for all λ ∈ (λ_0 − ε, λ_0 + ε). By Lemma 2.2, there is δ ∈ (0, ε) such that one of the following four subcases applies for the behavior of x_{k+1}(λ) near the point λ_0:

(II-a) x_{k+1}(λ) ≠ 0 on (λ_0 − δ, λ_0) and on (λ_0, λ_0 + δ),

(II-b) x_{k+1}(λ) ≠ 0 on (λ_0 − δ, λ_0) and x_{k+1}(λ) ≡ 0 on (λ_0, λ_0 + δ),

(II-c) x_{k+1}(λ) ≡ 0 on (λ_0 − δ, λ_0) and x_{k+1}(λ) ≠ 0 on (λ_0, λ_0 + δ), and

(II-d) x_{k+1}(λ) ≡ 0 both on (λ_0 − δ, λ_0) and on (λ_0, λ_0 + δ).

In subcase (II-a), the function p_k(λ) is well defined on (λ_0 − δ, λ_0) and (λ_0, λ_0 + δ), so that it is nondecreasing on each of these two intervals, by Remark 2.4. Since x_k(λ_0) ≠ 0, it follows that p_k(λ_0^−) = +∞ and p_k(λ_0^+) = −∞, which shows that m_k(λ_0^−) = 0 and m_k(λ_0^+) = 1. Since in this case we also have m_k(λ_0) = 1 (by the definition of a generalized zero at k + 1) and h_k(λ_0^−) = h_k(λ_0) = 1, h_{k+1}(λ_0^−) = 1, and h_{k+1}(λ_0) = 0, it follows that the equations in (2.8) and (2.9) hold as the identity 1 = 1. In subcase (II-b), the function p_k(λ) is well defined and nondecreasing on (λ_0 − δ, λ_0), so that p_k(λ_0^−) = +∞, and hence m_k(λ_0^−) = 0. Moreover, h_k(λ_0^−) = h_k(λ_0) = 1, h_{k+1}(λ_0^−) = 1, h_{k+1}(λ_0) = 0, and m_k(λ_0^+) = m_k(λ_0) = 1, by the definition of a generalized zero at k + 1. This shows that in this case (2.8) and (2.9) hold again as the identity 1 = 1. In subcase (II-c), we have m_k(λ_0^−) = m_k(λ_0) = 1 (by the definition of a generalized zero at k + 1), h_k(λ_0^−) = h_k(λ_0) = 1, and h_{k+1}(λ_0^−) = h_{k+1}(λ_0) = 0. Moreover, the function p_k(λ) is well defined and nondecreasing on (λ_0, λ_0 + δ), so that p_k(λ_0^+) = −∞, and hence m_k(λ_0^+) = 1. In this case (2.8) and (2.9) hold as the identities 1 = 1 and 0 = 0, respectively. Finally, in subcase (II-d), we have m_k(λ_0^−) = m_k(λ_0) = m_k(λ_0^+) = 1 (by the definition of a generalized zero at k + 1), while h_k(λ_0^−) = h_k(λ_0) = 1 and h_{k+1}(λ_0^−) = h_{k+1}(λ_0) = 0. Thus, (2.8) and (2.9) now hold as the identities 1 = 1 and 0 = 0, respectively. This completes the proof.

The above result (Theorem 2.6) now leads to further oscillation theorems for the problem (E_0). Denote by
n_1(λ) := the number of generalized zeros of x(λ) in (0, N + 1].
(2.10)
Theorem 2.7 (Local oscillation theorem II) Assume (1.1). Let x(λ) = {x_k(λ)}_{k=0}^{N+1} be a nontrivial solution of (SL_λ) such that (2.3) holds. Then n_1(λ^−) and n_1(λ^+) exist and for all λ ∈ ℝ
n_1(λ^+) = n_1(λ) ≤ N + 1,
(2.11)
n_1(λ^+) − n_1(λ^−) = h_{N+1}(λ^−) − h_{N+1}(λ) ∈ {0, 1}.
(2.12)
Hence, the function n_1(λ) is nondecreasing in λ on ℝ, the limit
m := lim_{λ→−∞} n_1(λ)
(2.13)
exists with m ∈ [0, N + 1]_ℤ, so that for a suitable λ_0 < 0 we have
n_1(λ) ≡ m and h_{N+1}(λ^−) − h_{N+1}(λ) ≡ 0 for all λ ≤ λ_0.
(2.14)
Proof. The number of generalized zeros of x(λ) in (0, N + 1] is by definition
n_1(λ) = Σ_{k=0}^{N} m_k(λ),   λ ∈ ℝ,
where, as in Theorem 2.6, m_k(λ) is the number of generalized zeros of x(λ) in (k, k + 1]. The statement in (2.11) follows directly from (2.8). The expression in (2.12) is calculated by the telescoping sum of the expression in (2.9). This yields that
n_1(λ^+) − n_1(λ^−) = h_{N+1}(λ^−) − h_{N+1}(λ) − h_0(λ^−) + h_0(λ),   λ ∈ ℝ.

But since by (2.3) the initial conditions of x(λ) do not depend on λ, we have h_0(λ^−) = h_0(λ) for all λ ∈ ℝ, which shows (2.12). From the two conditions (2.11) and (2.12) we then have that the function n_1(λ) is nondecreasing in λ on ℝ. Since the values of n_1(λ) are nonnegative integers, the limit in (2.13) exists with m ∈ ℕ ∪ {0}. Consequently, n_1(λ) ≡ m for λ sufficiently negative, say for all λ ≤ λ_0 for some λ_0 < 0. Hence, n_1(λ^+) − n_1(λ^−) ≡ 0 for λ ≤ λ_0. Applying (2.12) once more then yields the second equation in (2.14). This completes the proof.

Now we relate the above oscillation results with the eigenvalue problem (E_0). We say that λ_0 is a finite eigenvalue of (E_0), provided there exists a nontrivial solution x(λ) = {x_k(λ)}_{k=0}^{N+1} of (E_0) such that x_{N+1}(λ_0) = 0 and
x_{N+1}(λ) ≠ 0   for λ in some left neighborhood of λ_0.
(2.15)

Note that such a requirement is justified by Lemma 2.2. We observe that every finite eigenvalue of (E_0) is also a traditional eigenvalue, for which the "nondegeneracy condition" (2.15) is dropped. From the uniqueness of solutions of equation (SL_λ) it then follows that λ_0 is a finite eigenvalue of (E_0) if and only if the principal solution x̂(λ), see (2.4), satisfies x̂_{N+1}(λ_0) = 0 and x̂_{N+1}(λ) ≠ 0 for λ in some left neighborhood of λ_0. Or equivalently, the principal solution x̂(λ) has h_{N+1}(λ_0^−) = 1 and h_{N+1}(λ_0) = 0. This shows that the difference h_{N+1}(λ_0^−) − h_{N+1}(λ_0), whenever it is positive, indicates a finite eigenvalue of problem (E_0).
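Numerically, finite eigenvalues can thus be located from the principal solution alone. The sketch below uses illustrative data in the classical setting (1.2) with w_k = 1 (where finite and traditional eigenvalues coincide) and brackets the zeros of x̂_{N+1}(λ) by a sign scan followed by bisection; the grid bounds and step are ad hoc choices for this data:

```python
import numpy as np

# Sketch: locating finite eigenvalues of (E_0) from x-hat_{N+1}(lambda).
# Illustrative data in the classical setting (1.2) with w_k = 1.
N = 4
r = np.array([1.0, 1.0, 2.0, 1.0, 1.0])        # r_0, ..., r_N
q0 = np.array([0.2, -0.1, 0.0, 0.1])           # q_k(lam) = q0_k + lam

def xhat_end(lam):
    """x-hat_{N+1}(lambda) of the principal solution (2.4)."""
    x = np.zeros(N + 2)
    x[1] = 1.0 / r[0]
    for k in range(N):
        x[k+2] = x[k+1] + (r[k]*(x[k+1] - x[k]) - (q0[k] + lam)*x[k+1]) / r[k+1]
    return x[N+1]

# x-hat_{N+1} is a polynomial of degree N in lambda with simple real zeros
# here, so a sign scan catches them all; refine each bracket by bisection.
grid = np.linspace(-2.0, 10.0, 2401)
vals = [xhat_end(l) for l in grid]
eigs = []
for a, b, va, vb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
    if va * vb < 0:
        lo, hi = a, b
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if xhat_end(lo) * xhat_end(mid) <= 0:
                hi = mid
            else:
                lo = mid
        eigs.append(0.5 * (lo + hi))

assert len(eigs) == N and all(x < y for x, y in zip(eigs, eigs[1:]))
```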

From Lemma 2.2 we obtain that under the assumption (1.1) the finite eigenvalues of (E0) are isolated. This property was also proven for the classical eigenvalues of (SLλ) in [17] under the strict monotonicity of r k (λ) and q k (λ). Such a strict monotonicity assumption is not required in this article.

Thus, we finally arrive at the following global oscillation theorem. Set
n_2(λ) := the number of finite eigenvalues of (E_0) in (−∞, λ].
(2.16)
Then from this definition we have
n_2(λ^+) = n_2(λ),   n_2(λ) − n_2(λ^−) = h_{N+1}(λ^−) − h_{N+1}(λ),   for all λ ∈ ℝ,
(2.17)

i.e., positivity of the difference n_2(λ) − n_2(λ^−) indicates a finite eigenvalue at λ.

Theorem 2.8 (Global oscillation theorem) Assume (1.1). Then for all λ ∈ ℝ
n_2(λ^+) = n_2(λ) ≤ N + 1,
(2.18)
n_2(λ^+) − n_2(λ^−) = n_1(λ^+) − n_1(λ^−) ∈ {0, 1},
(2.19)
and there exists m ∈ [0, N + 1]_ℤ such that
n_1(λ) = n_2(λ) + m for all λ ∈ ℝ.
(2.20)
Moreover, for a suitable λ_0 < 0 we have
n_2(λ) ≡ 0 and n_1(λ) ≡ m for all λ ≤ λ_0.
(2.21)

Proof. The result follows directly from Theorem 2.7.

Corollary 2.9 Under the assumption (1.1), the finite eigenvalues of (E0) are isolated and bounded from below.

Proof. From Lemma 2.2 we know that the finite eigenvalues of (E0) are isolated. The second statement follows from condition (2.21) of Theorem 2.8, since n2(λ) ≡ 0 for all λ ≤ λ0 means that there are no finite eigenvalues of (E0) in the interval (-∞, λ0].

It remains to connect the above global oscillation theorem with the traditional statement saying that the j-th eigenfunction has exactly j generalized zeros in the interval (0, N + 1]. We will see that under some additional assumption the statement of this result remains exactly the same when we replace the eigenfunctions of (E0) by its finite eigenfunctions. This additional assumption is formulated in terms of the associated discrete quadratic functional
F_0(η, λ) := Σ_{k=0}^{N} { r_k(λ) (Δη_k)² − q_k(λ) η_{k+1}² },

where η = {η_k}_{k=0}^{N+1} is a sequence such that η_0 = 0 = η_{N+1}. The functional F_0(·, λ) is positive, we write F_0(·, λ) > 0, if F_0(η, λ) > 0 for every sequence η with η_0 = 0 = η_{N+1} and η ≢ 0. The following auxiliary result is taken from [18, Theorem 5.1], compare also with [4, Theorem 8.10].

Proposition 2.10 Let λ_0 ∈ ℝ be fixed. The functional F_0(·, λ_0) is positive if and only if the principal solution x̂(λ_0) of (SL_{λ_0}) has no generalized zeros in (0, N + 1], i.e., n_1(λ_0) = 0.
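Proposition 2.10 can be tested numerically: for the variables η_1, ..., η_N, the functional F_0(·, λ) is the quadratic form of a tridiagonal matrix, whose positive definiteness can be compared with the generalized-zero count of the principal solution. The data below are illustrative choices in the classical setting (1.2) with w_k = 1:

```python
import numpy as np

# Sketch of Proposition 2.10 on sample data: positivity of F_0(., lam)
# (positive definiteness of its matrix) should hold exactly when the
# principal solution has no generalized zeros in (0, N+1].
N = 4
r = np.array([1.0, 1.0, 2.0, 1.0, 1.0])
q0 = np.array([0.2, -0.1, 0.0, 0.1])           # q_k(lam) = q0_k + lam

def form_matrix(lam):
    """Matrix G of F_0(eta, lam) in eta_1, ..., eta_N (eta_0 = eta_{N+1} = 0)."""
    G = np.zeros((N, N))
    for i in range(1, N + 1):
        G[i-1, i-1] = r[i-1] + r[i] - (q0[i-1] + lam)
        if i < N:
            G[i-1, i] = G[i, i-1] = -r[i]
    return G

def n1(lam):
    """Generalized zeros of the principal solution in (0, N+1], cf. (2.10)."""
    x = np.zeros(N + 2)
    x[1] = 1.0 / r[0]
    for k in range(N):
        x[k+2] = x[k+1] + (r[k]*(x[k+1] - x[k]) - (q0[k] + lam)*x[k+1]) / r[k+1]
    tol = 1e-12
    return sum(1 for k in range(N + 1)
               if abs(x[k]) > tol and (abs(x[k+1]) < tol or r[k]*x[k]*x[k+1] < 0))

for lam in [-1.0, 0.0, 2.0, 4.0]:              # none of these is an eigenvalue
    positive = bool(np.all(np.linalg.eigvalsh(form_matrix(lam)) > 0))
    assert positive == (n1(lam) == 0)
```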

Theorem 2.11 (Oscillation theorem) Assume (1.1). Then
n_1(λ) = n_2(λ) for all λ ∈ ℝ
(2.22)

if and only if there exists λ_0 < 0 such that F_0(·, λ_0) > 0. In this case, if λ_1 < λ_2 < ⋯ < λ_r (where r ≤ N + 1) are the finite eigenvalues of (E_0) with the corresponding finite eigenfunctions x^(1), x^(2), ..., x^(r), then for each j ∈ {1, ..., r} the finite eigenfunction x^(j) has exactly j generalized zeros in (0, N + 1].

Note that since the finite eigenfunction x^(j) has x^(j)_{N+1} = 0, it satisfies x^(j)_N ≠ 0, by Remark 2.5. Therefore, the point N + 1 is one of the generalized zeros of x^(j), and consequently the remaining j − 1 generalized zeros of x^(j) are in the open interval (0, N + 1). This complies with the traditional continuous time statement.

Proof of Theorem 2.11 If n_1(λ) = n_2(λ) for all λ ∈ ℝ, then the number m in Equation (2.20) of Theorem 2.8 is zero. This implies through condition (2.21) that n_1(λ) ≡ 0 for all λ ≤ λ_0 with some λ_0 < 0. By Proposition 2.10, the latter condition is equivalent to the positivity of the functional F_0(·, λ) for every λ ≤ λ_0, in particular for λ = λ_0. Conversely, assume that F_0(·, λ_0) > 0 for some λ_0 < 0. Then n_1(λ_0) = 0, by Proposition 2.10, and since the function n_1(·) is nondecreasing in λ on ℝ (see Theorem 2.7), it follows that n_1(λ) ≡ 0 for all λ ≤ λ_0. From this we see that m = 0 in (2.21), and hence also in (2.20). Equality (2.22) is therefore established. Finally, assume that (2.22) holds and let λ_j (where j ∈ {1, ..., r}) be the j-th finite eigenvalue of (E_0) with the corresponding finite eigenfunction x^(j). Then n_2(λ_j) = j and from (2.22) we get n_1(λ_j) = j, i.e., x^(j) has exactly j generalized zeros in (0, N + 1]. The proof is complete.

In the last part of this section we present certain results on the existence of finite eigenvalues of (E_0). These results are proven in [9, Section 7] under the restriction that r_k(λ) ≡ r_k is constant in λ for every k ∈ [0, N]_ℤ, since the corresponding oscillation theorem in [9, Theorem 7.2] required that assumption. In the present article we allow r_k(λ) in Theorem 2.11 to depend on λ, so that the results in [9, Theorems 7.3–7.5] can be directly transferred to the equation (SL_λ). The proofs of the three results below are identical to the proofs of [9, Theorems 7.3–7.5] and they are therefore omitted.

Theorem 2.12 (Existence of finite eigenvalues: necessary condition) Assume (1.1). If (E_0) has a finite eigenvalue, then there exist λ_0, λ_1 ∈ ℝ with λ_0 < λ_1 and m ∈ ℕ ∪ {0} such that n_1(λ) ≡ m for all λ ≤ λ_0 and F_0(·, λ_1) ≯ 0.

Theorem 2.13 (Existence of finite eigenvalues: sufficient condition) Assume (1.1). If there exist λ_0, λ_1 ∈ ℝ with λ_0 < λ_1 such that F_0(·, λ_0) > 0 and F_0(·, λ_1) ≯ 0, then (E_0) has at least one finite eigenvalue.

Theorem 2.14 (Characterization of the smallest finite eigenvalue) Assume (1.1). Let there exist λ_0, λ_1 ∈ ℝ with λ_0 < λ_1 such that F_0(·, λ_0) > 0 and F_0(·, λ_1) ≯ 0. Then the eigenvalue problem (E_0) possesses a smallest finite eigenvalue λ_min, which is characterized by any of the conditions:
λ_min = sup { λ ∈ ℝ : F_0(·, λ) > 0 },   λ_min = min { λ ∈ ℝ : F_0(·, λ) ≯ 0 }.

Remark 2.15 The differentiability assumption on the coefficients r_k(λ) and q_k(λ) can be weakened without changing the statements in Theorems 2.6, 2.7 and 2.8 as follows. The functions r_k(λ) and q_k(λ) are continuous in λ on ℝ and differentiable in λ except possibly at isolated values, at which the left-hand and right-hand derivatives of r_k(λ) and q_k(λ) exist and are finite (i.e., there may be a corner at such points). In this case we replace the quantities ṙ_k(λ) and q̇_k(λ) by the corresponding one-sided limits ṙ_k(λ^−), ṙ_k(λ^+) and q̇_k(λ^−), q̇_k(λ^+).

Remark 2.16 The methods of this article allow us to study the eigenvalue problem (E_0) when the spectral parameter λ is restricted to some compact interval [a, b] only. In this case, using Remark 2.15, we may extend the coefficient r_k(λ) by the constant r_k(a) for λ ∈ (−∞, a) and by the constant r_k(b) for λ ∈ (b, ∞), and similarly the coefficient q_k(λ). After such an extension of r_k(λ) and q_k(λ), the finite eigenvalues of (E_0) belong to the interval (a, b] (if there is a finite eigenvalue at all).

Remark 2.17 One of the referees pointed out the article [19], in which the authors study spectral properties of the Jacobi matrices associated with the eigenvalue problem (E_0) for two Jacobi equations of the form
s_{k+1} x_{k+2} − t_{k+1} x_{k+1} + s_k x_k = λ x_{k+1}
with s_k ≠ 0 in terms of the generalized zeros of a weighted Wronskian of their specific solutions. As the results in [19] are proven under the linear dependence on λ as in (1.2), it is an interesting topic to extend such results to general nonlinear dependence on λ, i.e., to Jacobi equations
s_{k+1}(λ) x_{k+2} − t_{k+1}(λ) x_{k+1} + s_k(λ) x_k = 0
with s_k(λ) ≠ 0, compare also with [6] and [9, Example 7.8].
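For orientation, note that (SL_λ) is itself of the displayed Jacobi form: expanding the forward difference gives, in our rewriting, s_k(λ) = r_k(λ) and t_{k+1}(λ) = r_k(λ) + r_{k+1}(λ) − q_k(λ). The snippet below checks this algebraic identity on random data:

```python
import numpy as np

# Check of the identity
#   Delta(r_k Dx_k) + q_k x_{k+1}
#     = r_{k+1} x_{k+2} - (r_k + r_{k+1} - q_k) x_{k+1} + r_k x_k,
# i.e., (SL_lambda) is a Jacobi equation with s_k = r_k and
# t_{k+1} = r_k + r_{k+1} - q_k (our reformulation).
rng = np.random.default_rng(0)
for _ in range(100):
    rk, rk1, qk, xk, xk1, xk2 = rng.uniform(-2.0, 2.0, size=6)
    sl = rk1 * (xk2 - xk1) - rk * (xk1 - xk) + qk * xk1
    jacobi = rk1 * xk2 - (rk + rk1 - qk) * xk1 + rk * xk
    assert np.isclose(sl, jacobi)
```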

3 Eigenvalue problem with variable endpoints

In this section we consider more general boundary conditions than the Dirichlet endpoints in problem (E_0). Our aim is to establish the oscillation theorems for the variable initial endpoint, i.e., for the boundary conditions
α x_0 + β r_0(λ) Δx_0 = 0,   x_{N+1} = 0,
(3.1)
where α, β ∈ ℝ are, without loss of generality, such that α² + β² = 1 and α ≥ 0. Boundary conditions (3.1) contain as a special case the Dirichlet endpoints x_0 = 0 = x_{N+1} as in the previous section (upon taking α = 1 and β = 0), or the Neumann-Dirichlet boundary conditions Δx_0 = 0 = x_{N+1} (upon taking α = 0 and β = 1), or other combinations. One possibility to study the eigenvalue problem
(SL_λ),   λ ∈ ℝ,   (3.1)
(3.2)
is to transform (3.2) into a problem with the Dirichlet endpoints and apply to this transformed problem the results from Section 2. This technique has been successfully used in the literature in this context, see e.g. [10, 20, 21], and it will be utilized also in the present article. Define the natural solution x̄(λ) = {x̄_k(λ)}_{k=0}^{N+1} of equation (SL_λ) associated with boundary conditions (3.1) as the solution starting with the initial values
x̄_0(λ) ≡ −β,   r_0(λ) Δx̄_0(λ) ≡ α,   for all λ ∈ ℝ.
(3.3)
Since α and β cannot be simultaneously zero, it follows that x̄(λ) is a nontrivial solution of (SL_λ) and satisfies the initial boundary condition in (3.1). And similarly to the Dirichlet endpoints case in Section 2, in which the natural solution x̄(λ) reduces to the principal solution x̂(λ), the finite eigenvalues of (3.2) will be determined by the behavior of x̄_{N+1}(λ) in λ. We say that λ_0 is a finite eigenvalue of (3.2), if x̄_{N+1}(λ_0) = 0 and x̄_{N+1}(λ) ≠ 0 in some left neighborhood of λ_0. As before, denote by
n_1(λ) := the number of generalized zeros of x̄(λ) in (0, N + 1],
(3.4)
n_2(λ) := the number of finite eigenvalues of (3.2) in (−∞, λ].
(3.5)

The following result is an extension of Theorem 2.8 to problem (3.2).

Theorem 3.1 (Global oscillation theorem) Assume that (1.1) is satisfied. Then with n_1(λ) and n_2(λ) defined in (3.4)-(3.5), conditions (2.18) and (2.19) hold for all λ ∈ ℝ, and there exist m ∈ [0, N + 1]_ℤ and λ_0 < 0 such that (2.20) and (2.21) are satisfied.

Proof. If β = 0, then α = 1 and the result is contained in Theorem 2.8. Therefore, we assume further on that β ≠ 0. We extend the interval [0, N + 1]_ℤ by the point k = −1 to get an equivalent eigenvalue problem on the interval [−1, N + 1]_ℤ with the Dirichlet endpoints x_{−1} = 0 = x_{N+1}. This is done as follows. We put
r_{−1}(λ) := −1/β and q_{−1}(λ) := (α − 1)/β for all λ ∈ ℝ,
and consider the extended Sturm-Liouville difference equation
Δ(r_k(λ) Δx_k) + q_k(λ) x_{k+1} = 0,   k ∈ [−1, N − 1]_ℤ.   (SL_λ^ext)
Since r_{−1}(λ) ≠ 0 and ṙ_{−1}(λ) = q̇_{−1}(λ) = 0 for all λ ∈ ℝ, and since (1.1) is assumed, it follows that the coefficients r_k(λ) for k ∈ [−1, N]_ℤ and q_k(λ) for k ∈ [−1, N − 1]_ℤ of (SL_λ^ext) satisfy the main assumptions in Section 1. Consequently, the eigenvalue problem (3.2) is equivalent to the extended eigenvalue problem
(SL_λ^ext),   λ ∈ ℝ,   x_{−1} = 0 = x_{N+1}.
(3.6)

Consider the principal solution x̂^ext(λ) of the extended equation (SL_λ^ext), which starts with the initial values x̂^ext_{−1}(λ) = 0 and r_{−1}(λ) Δx̂^ext_{−1}(λ) = 1, i.e., x̂^ext_0(λ) = −β. From equation (SL_λ^ext) at k = −1 we then get r_0(λ) Δx̂^ext_0(λ) = α. This shows that the principal solution x̂^ext(λ) of (SL_λ^ext) coincides on [0, N + 1]_ℤ with the natural solution x̄(λ) of (SL_λ). Moreover, since the principal solution x̂^ext(λ) does not have a generalized zero in (−1, 0], it follows that the number of generalized zeros of x̂^ext(λ) in (−1, N + 1] is the same as the number of generalized zeros of x̄(λ) in (0, N + 1]. This shows that the definitions of n_1(λ) in (2.10) and (3.4) coincide, and the definitions of n_2(λ) in (2.16) and (3.5) coincide as well. The result then follows from Theorem 2.8 applied to eigenvalue problem (3.6).

Remark 3.2 Note that the transformation in the proof of Theorem 3.1 is different from the transformation in [10]. The transition matrices S-1(λ) in the above proof and in [10] have, respectively, the form (written row by row)

S-1(λ) = [1, -β; (1 - α)/β, α], S-1(λ) = [α, -β; β, α].

We can see that the second one does not in general correspond to a Sturm-Liouville difference equation, while the first one always does, compare with (2.2).
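This distinction can be checked numerically: both matrices have determinant 1 (for 2×2 matrices this is symplecticity), but only the first has the Sturm-Liouville shape of (2.2), whose (1,1)-entry equals 1. A minimal sketch, with an illustrative choice of α, β satisfying α² + β² = 1:

```python
# The two transition matrices from Remark 3.2; rows listed top to bottom.
alpha, beta = 0.6, 0.8

S_proof = [[1.0, -beta],
           [(1.0 - alpha) / beta, alpha]]   # transformation of this article
S_ref10 = [[alpha, -beta],
           [beta, alpha]]                   # transformation of [10]

def det2(M):
    # a 2x2 matrix is symplectic exactly when its determinant equals 1
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def has_sl_shape(M):
    # an SL transition matrix as in (2.2) reads
    # [[1, 1/r_k], [-q_k, 1 - q_k/r_k]]  (using u_k = r_k * Delta x_k),
    # so in particular its (1,1)-entry must equal 1
    return abs(M[0][0] - 1.0) < 1e-12
```

Both determinants equal 1 (for S_ref10 because α² + β² = 1), while has_sl_shape holds only for S_proof whenever α ≠ 1.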

The methods of this article, in combination with the result in [9, Theorem 6.1], allow us to study eigenvalue problems with the more general separated boundary conditions
αx0 + βr0(λ)Δx0 = 0, γxN+1 + δrN(λ)ΔxN = 0, (3.7)
where α, β, γ, δ satisfy, without loss of generality,
α² + β² = 1, γ² + δ² = 1, α ≥ 0, γ ≥ 0. (3.8)
Boundary conditions (3.7) contain as a special case, for example, the Neumann-Neumann boundary conditions Δx0 = 0 = ΔxN (upon taking α = γ = 0 and β = δ = 1). The proof is based on the transformation of the final endpoint condition in (3.7) to the Dirichlet endpoint xN+2 = 0, which was suggested in [10]. This transformation, however, leads to a symplectic eigenvalue problem as in [9, Section 6], and not to a Sturm-Liouville eigenvalue problem as in Sections 1 and 2 of this article. Therefore, the statement of Theorem 3.1 extends, indirectly via [9, Theorem 6.1], to boundary conditions (3.7) as follows. Consider the eigenvalue problem
(SLλ), λ ∈ ℝ, (3.7), (3.9)
and define the quantity
Λ(λ) := γx̄N+1(λ) + δrN(λ)Δx̄N(λ), λ ∈ ℝ,
where x̄(λ) is the natural solution of (SLλ), i.e., (3.3) holds. We say that λ0 is a finite eigenvalue of the eigenvalue problem (3.9) if Λ(λ0) = 0 and Λ(λ) ≠ 0 in some left neighborhood of λ0. Note that for boundary conditions (3.1) we have Λ(λ) = x̄N+1(λ), and for Dirichlet endpoints we have Λ(λ) = x̂N+1(λ), so that the above definition of finite eigenvalues of (3.9) agrees with the corresponding definitions for problems (3.2) and (E0). Note that, similarly to (2.17), the difference
n2(λ) - n2(λ⁻) = h(λ⁻) - h(λ), where h(λ) := rank Λ(λ), (3.10)
indicates a finite eigenvalue of problem (3.9). Denote by
n2(λ) := the number of finite eigenvalues of (3.9) in (-∞, λ], (3.11)

s(λ) := 1 when δ ≠ 0, x̄N+1(λ) ≠ 0, and δx̄N+1(λ)Λ(λ) ≤ 0, and s(λ) := 0 otherwise. (3.12)
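The quantity Λ(λ) can be evaluated by running the forward recurrence for the natural solution x̄(λ), which starts from x̄0 = -β, ū0 = α. A minimal sketch, assuming only that r and q are supplied as functions of (k, λ) satisfying (1.1); the coefficients in the usage note below (r ≡ 1, qk(λ) = λ) are an illustrative test case, not from the article.

```python
def natural_Lambda(lam, r, q, N, alpha, beta, gamma, delta):
    """Compute Lambda(lam) = gamma*x_{N+1} + delta*r_N*Delta x_N for the
    natural solution of (SL_lambda), using u_k = r_k * Delta x_k."""
    x, u = -beta, alpha                  # natural solution: x_0 = -beta, u_0 = alpha
    for k in range(N + 1):               # k = 0, 1, ..., N
        x_next = x + u / r(k, lam)       # x_{k+1} = x_k + u_k / r_k
        if k < N:                        # (SL_lambda) holds for k in [0, N-1]
            u = u - q(k, lam) * x_next   # u_{k+1} = u_k - q_k * x_{k+1}
        x = x_next
    return gamma * x + delta * u         # u now equals u_N = r_N * Delta x_N
```

For the Dirichlet case α = γ = 1, β = δ = 0 this returns the endpoint value x̄N+1(λ) of the natural solution, e.g. natural_Lambda(lam, lambda k, l: 1.0, lambda k, l: l, 3, 1.0, 0.0, 1.0, 0.0).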

The following result is an extension of Theorem 3.1 to problem (3.9).

Theorem 3.3 (Global oscillation theorem) Assume (1.1) and (3.8) are satisfied. Then with n1(λ), n2(λ), s(λ) defined in (3.4), (3.11), (3.12), conditions (2.18) and

n2(λ⁺) - n2(λ⁻) = n1(λ⁺) - n1(λ⁻) + s(λ⁺) - s(λ⁻) ∈ {0, 1} (3.13)

hold for all λ ∈ ℝ, and there exist m ∈ [0, N + 2]ℤ and λ0 < 0 such that

n1(λ) + s(λ) = n2(λ) + m for all λ ∈ ℝ, (3.14)

n2(λ) ≡ 0 and n1(λ) + s(λ) ≡ m for all λ ≤ λ0. (3.15)
Proof. We define for all λ ∈ ℝ

SN+1(λ) := [aN+1(λ), bN+1(λ); cN+1(λ), dN+1(λ)] := [γ, δ; -δ, γ]. (3.16)
Then the matrix SN+1(λ) is symplectic and independent of λ. We extend the natural solution x̄(λ) to k = N + 2 by setting

x̄N+2(λ) := Λ(λ), ūN+2(λ) := -δx̄N+1(λ) + γūN+1(λ), (3.17)
where ūN+1(λ) = ūN(λ) = rN(λ)Δx̄N(λ), see Remark 2.1. Then (3.17) is the unique extension of x̄(λ) to k = N + 2, given the coefficient matrix SN+1(λ) in (3.16). It follows that λ0 is a finite eigenvalue of problem (3.9) if and only if it is a finite eigenvalue of the extended symplectic eigenvalue problem

(Sλ), λ ∈ ℝ, αx0 + βu0 = 0, xN+2 = 0,
where (Sλ) is the discrete symplectic system zk+1 = Sk(λ)zk for k ∈ [0, N + 1]ℤ corresponding to (SLλ) and (3.17), i.e., the matrix Sk(λ) is defined by (2.2) for all k ∈ [0, N]ℤ and SN+1(λ) is given by (3.16). Let mk(λ) for k ∈ [0, N + 1]ℤ denote the number of focal points of (x̄(λ), ū(λ)) in (k, k + 1] according to [15, Definition 1], see also [9, Equation (3.9)]. Then, by (2.2) and (1.3), the number mk(λ) indicates a generalized zero in (k, k + 1] when k ∈ [0, N]ℤ. Hence, by Theorem 2.6, Equations (2.8) and (2.9) hold for every k ∈ [0, N]ℤ. Moreover, since the coefficient bN+1(λ) ≡ δ is constant in λ, it follows by [9, Theorem 6.1] that Equations (2.8) and (2.9) hold also for k = N + 2, where
hN+2(λ) := rank x̄N+2(λ) = rank Λ(λ) = h(λ).
Upon analyzing [15, Definition 1], it is not difficult to see that mN+1(λ) ∈ {0, 1} and that it is nonzero only when the three conditions δ ≠ 0, x̄N+1(λ) ≠ 0, and δx̄N+1(λ)Λ(λ) ≤ 0 are simultaneously satisfied. That is, we have mN+1(λ) = s(λ). By a telescoping summation over k ∈ [0, N + 1]ℤ, we then obtain, as in the proof of Theorem 2.7, the formulas
n1(λ⁺) + s(λ⁺) = n1(λ) + s(λ), (3.18)

n1(λ⁺) + s(λ⁺) - n1(λ⁻) - s(λ⁻) = h(λ⁻) - h(λ). (3.19)

Combining (3.19) with (3.10) now yields the statement in (3.13). Since by (3.18) and (3.13) the function n1(λ) + s(λ) is right continuous and nondecreasing in λ on ℝ with values in [0, N + 2]ℤ, its limit m at -∞ exists with m ∈ [0, N + 2]ℤ, and n1(λ) + s(λ) ≡ m for all λ ≤ λ0 for a suitable λ0 < 0. Moreover, from (3.13) we know that the jumps in n2(λ) and n1(λ) + s(λ) are always the same, which yields that identity (3.14) holds. This in turn implies that n2(λ) ≡ 0 for all λ ≤ λ0. The proof is complete.

4 Examples

In this section we analyze one illustrative problem and determine its finite eigenvalues and finite eigenfunctions for problem (E0) under several choices of a parameter. We shall utilize Remark 2.15 in the examples below.

We consider the interval [0, 4]ℤ, i.e., we take N = 3. With the parameter a ∈ (0, 2), which will be specified later, we define the coefficients rk(λ) and qk(λ) as follows:

qk(λ) := a for λ ∈ (-∞, a), qk(λ) := λ for λ ∈ [a, 2], qk(λ) := 2 for λ ∈ (2, ∞), and rk(λ) := 1/qk(λ).
Then, with respect to Remark 2.15, the main assumptions in (1.1) are satisfied, since ṙk(λ) = q̇k(λ) = 0 for λ ∈ (-∞, a) ∪ (2, ∞), while ṙk(λ) = -1/λ² and q̇k(λ) = 1 for λ ∈ (a, 2). Equation (SLλ) therefore has the form
Δ²xk + a²xk+1 = 0, Δ²xk + λ²xk+1 = 0, Δ²xk + 4xk+1 = 0 (4.1)

for k ∈ [0, 2]ℤ, depending on whether λ < a, λ ∈ [a, 2], or λ > 2. Let us find the principal solution x̂(λ) = {x̂k(λ)}k=0..4 of the middle equation in (4.1). The initial conditions x̂0(λ) = 0 and r0(λ)Δx̂0(λ) = 1 imply that for λ ∈ [a, 2]
x̂0(λ) = 0, x̂1(λ) = λ, x̂2(λ) = λ(2 - λ²), x̂3(λ) = λ[(2 - λ²)² - 1], x̂4(λ) = w(λ),
where the function w(λ) is defined by
w(λ) := λ(2 - λ²)[(2 - λ²)² - 2], λ ∈ ℝ. (4.2)
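For λ ∈ [a, 2] the middle equation in (4.1) is the three-term recurrence x_{k+2} = (2 - λ²)x_{k+1} - x_k, so the values x̂k(λ) above and the identity x̂4(λ) = w(λ) can be reproduced in a few lines. A sketch, not from the article:

```python
def principal_solution(lam):
    # x_0 = 0 and r_0 * Delta x_0 = 1 with r_0 = 1/lam give x_1 = lam;
    # then iterate x_{k+2} = (2 - lam**2) * x_{k+1} - x_k for k = 0, 1, 2.
    x = [0.0, lam]
    for _ in range(3):
        x.append((2.0 - lam**2) * x[-1] - x[-2])
    return x                     # [x_0, x_1, x_2, x_3, x_4]

def w(lam):
    # the polynomial (4.2)
    return lam * (2.0 - lam**2) * ((2.0 - lam**2)**2 - 2.0)
```

For instance, principal_solution(1.0) returns [0.0, 1.0, 1.0, 0.0, -1.0], whose last entry equals w(1.0) = -1.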
Now we can easily calculate the critical value x̂4(λ) of the principal solution x̂(λ) of (SLλ) for every λ ∈ ℝ as
x̂4(λ) = w(a) for λ ∈ (-∞, a), x̂4(λ) = w(λ) for λ ∈ [a, 2], x̂4(λ) = w(2) for λ ∈ (2, ∞). (4.3)
The candidates for the finite eigenvalues are the zeros of the function x̂4(λ) in (4.3). First of all, the zeros of w(λ) are 0, ±√2, ±√(2 - √2), and ±√(2 + √2), see the graph of w(λ) in Figure 1 below.
Figure 1. The graph of w(λ) from (4.2).

Since a > 0, the nonpositive zeros of w(λ) are disregarded, which yields the three candidates

z1 := √(2 - √2) ≈ 0.77, z2 := √2 ≈ 1.41, z3 := √(2 + √2) ≈ 1.85
for the finite eigenvalues of (E0). Which of these candidates are indeed finite eigenvalues now depends on the particular value of the parameter a ∈ (0, 2). We specify several choices of a in the examples below. For the finite eigenfunctions x(λ) = {xk(λ)}k=0..4 we shall use the notation

x(λ) = {x0(λ), x1(λ), x2(λ), x3(λ), x4(λ)}.
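The three candidate values, their decimal approximations, and the fact that each is a zero of w(λ) can be confirmed directly (a sketch using only the standard library):

```python
import math

z1 = math.sqrt(2.0 - math.sqrt(2.0))   # smallest positive zero of w
z2 = math.sqrt(2.0)
z3 = math.sqrt(2.0 + math.sqrt(2.0))   # largest zero of w

def w(lam):
    # the polynomial (4.2)
    return lam * (2.0 - lam**2) * ((2.0 - lam**2)**2 - 2.0)

# each candidate is a zero of w up to floating-point rounding
residuals = [abs(w(z)) for z in (z1, z2, z3)]
```

Rounding z1, z2, z3 to two decimals reproduces the approximations 0.77, 1.41, 1.85 quoted above.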
Example 4.1 Let a ∈ (0, z1). The critical value x̂4(λ) of the principal solution x̂(λ) of (SLλ) is of the form displayed in Figure 2 (shown for a = 0.15). In this case, there are three finite eigenvalues λ1 = z1, λ2 = z2, and λ3 = z3 with the finite eigenfunctions

x̂(1) = {0, z1, z1z2, z1, 0}, x̂(2) = {0, z2, 0, -z2, 0}, x̂(3) = {0, z3, -z2z3, z3, 0}.

Figure 2. The graph of x̂4(λ) for a = 0.15.

The finite eigenfunction x̂(1) has one generalized zero in (0, 4], namely in (3, 4]; x̂(2) has two generalized zeros in (0, 4], namely in (1, 2] and (3, 4]; and x̂(3) has three generalized zeros in (0, 4], namely in (1, 2], (2, 3], and (3, 4]. This shows that equality (2.20) in Theorem 2.8 holds with the number m = 0.
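The generalized-zero counts just listed can be checked mechanically. Since rk(λ) > 0 in this example, a solution has a generalized zero in (k, k + 1] exactly when x_k ≠ 0 and either x_{k+1} = 0 or x_k x_{k+1} < 0. A sketch (not the article's code):

```python
import math

def generalized_zeros(x, tol=1e-12):
    # count intervals (k, k+1] with x_k != 0 and either x_{k+1} = 0 or a
    # sign change; this criterion is valid here because r_k > 0
    count = 0
    for k in range(len(x) - 1):
        if abs(x[k]) > tol and (abs(x[k + 1]) <= tol or x[k] * x[k + 1] < 0):
            count += 1
    return count

z1 = math.sqrt(2 - math.sqrt(2))
z2 = math.sqrt(2)
z3 = math.sqrt(2 + math.sqrt(2))

counts = (generalized_zeros([0, z1, z1 * z2, z1, 0]),   # eigenfunction for z1
          generalized_zeros([0, z2, 0, -z2, 0]),        # eigenfunction for z2
          generalized_zeros([0, z3, -z2 * z3, z3, 0]))  # eigenfunction for z3
```

The resulting counts are 1, 2, and 3, in agreement with Example 4.1.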

Example 4.2 Let a = z1 = √(2 - √2). Then x̂4(λ) has the form

x̂4(λ) = 0 for λ ∈ (-∞, z1), x̂4(λ) = w(λ) for λ ∈ [z1, 2], x̂4(λ) = -8 for λ ∈ (2, ∞), (4.4)
see Figure 3. In this case there are only two finite eigenvalues λ1 = z2 and λ2 = z3 with the finite eigenfunctions

x̂(1) = {0, z2, 0, -z2, 0}, x̂(2) = {0, z3, -z2z3, z3, 0}.

Figure 3. The graph of x̂4(λ) from (4.4).

Note that λ = z1 = √(2 - √2) is no longer a finite eigenvalue of (E0), since the function x̂4(λ) in Figure 3 is identically zero in a left neighborhood of z1. The finite eigenfunction x̂(1) has two generalized zeros in (0, 4] and x̂(2) has three generalized zeros in (0, 4]. Therefore, equality (2.20) in Theorem 2.8 is satisfied with the number m = 1.

Example 4.3 Let a ∈ (z1, z2). This situation is similar to Example 4.2. In this case, x̂4(λ) has the form shown in Figure 4, there are two finite eigenvalues λ1 = z2 and λ2 = z3, and equality (2.20) in Theorem 2.8 holds with m = 1.

Figure 4. The graph of x̂4(λ) for a = 1.

Example 4.4 Let a = z2 = √2. Then x̂4(λ) has the form (4.4) in which z1 is replaced by z2, see Figure 5. In this case, there is only one finite eigenvalue λ1 = z3 with the finite eigenfunction

x̂(1) = {0, z3, -z2z3, z3, 0}.

Figure 5. The graph of x̂4(λ) for a = z2 = √2.

Again, λ = z2 = √2 is not a finite eigenvalue of (E0), since the function x̂4(λ) in Figure 5 is identically zero in a left neighborhood of z2. The finite eigenfunction x̂(1) has three generalized zeros in (0, 4] and equality (2.20) in Theorem 2.8 holds with m = 2.

Example 4.5 A similar analysis as in Examples 4.1-4.4 shows that for a ∈ (z2, z3) equality (2.20) is satisfied with m = 2, as in Example 4.4. For a ∈ [z3, 2), equality (2.20) holds with m = 3, as in this case there are no finite eigenvalues of (E0) at all.

Declarations

Acknowledgements

This research was supported by the Czech Science Foundation under grant P201/10/1032. The author is grateful to Professor Werner Kratz for pointing out the fact that the transformation from [10] and the result of [9, Theorem 6.1] can be combined to obtain the result in Theorem 3.3 with varying right endpoint. The author also thanks anonymous referees for their suggestions which improved the presentation of the results.

Authors’ Affiliations

(1)
Department of Mathematics and Statistics, Faculty of Science, Masaryk University, Kotlářská 2, CZ-61137 Brno, Czech Republic

References

  1. Ahlbrandt CD, Peterson AC: Discrete Hamiltonian Systems: Difference Equations, Continued Fractions, and Riccati Equations. Kluwer Academic Publishers, Boston; 1996.
  2. Atkinson FV: Discrete and Continuous Boundary Problems. Academic Press, New York/London; 1964.
  3. Hinton DB, Lewis RT: Spectral analysis of second order difference equations. J Math Anal Appl 1978, 63: 421–438. doi:10.1016/0022-247X(78)90088-4
  4. Kelley WG, Peterson AC: Difference Equations: An Introduction with Applications. Academic Press, San Diego; 1991.
  5. Teschl G: Jacobi Operators and Completely Integrable Nonlinear Lattices. Mathematical Surveys and Monographs, Volume 72. American Mathematical Society, Providence, RI; 2000.
  6. Šimon Hilscher R, Zeidan V: Symmetric three-term recurrence equations and their symplectic structure. Adv Difference Equ 2010, 2010: 17 (Article ID 626942).
  7. Hartman P: Difference equations: disconjugacy, principal solutions, Green's function, complete monotonicity. Trans Amer Math Soc 1978, 246: 1–30.
  8. Teschl G: Oscillation theory and renormalized oscillation theory for Jacobi operators. J Differential Equations 1996, 129(2): 532–558. doi:10.1006/jdeq.1996.0126
  9. Šimon Hilscher R: Oscillation theorems for discrete symplectic systems with nonlinear dependence in spectral parameter. Linear Algebra Appl 2012, to appear. doi:10.1016/j.laa.2012.06.033
  10. Došlý O, Kratz W: Oscillation and spectral theory for symplectic difference systems with separated boundary conditions. J Difference Equ Appl 2010, 16(7): 831–846. doi:10.1080/10236190802558910
  11. Šimon Hilscher R: Oscillation and spectral theory of Sturm-Liouville differential equations with nonlinear dependence in spectral parameter. Submitted, 2011.
  12. Lax PD: Linear Algebra. Pure and Applied Mathematics (New York). Wiley-Interscience, New York; 1997.
  13. Bohner M, Došlý O, Kratz W: An oscillation theorem for discrete eigenvalue problems. Rocky Mountain J Math 2003, 33(4): 1233–1260. doi:10.1216/rmjm/1181075460
  14. Došlý O, Kratz W: Oscillation theorems for symplectic difference systems. J Difference Equ Appl 2007, 13(7): 585–605. doi:10.1080/10236190701264776
  15. Kratz W: Discrete oscillation. J Difference Equ Appl 2003, 9(1): 135–147.
  16. Kratz W: Quadratic Functionals in Variational Analysis and Control Theory. Akademie Verlag, Berlin; 1995.
  17. Bohner M: Discrete linear Hamiltonian eigenvalue problems. Comput Math Appl 1998, 36(10–12): 179–192. doi:10.1016/S0898-1221(98)80019-9
  18. Ahlbrandt CD, Hooker JW: A variational view of nonoscillation theory for linear difference equations. In: Henderson JL (ed), Differential and Integral Equations, Proceedings of the Twelfth and Thirteenth Midwest Conferences (Iowa City, IA, 1983; Argonne, IL, 1984). Institute of Applied Mathematics, University of Missouri-Rolla, Rolla, MO; 1985: 1–21.
  19. Ammann K, Teschl G: Relative oscillation theory for Jacobi matrices. In: Bohner M, Došlá Z, Ladas G, Ünal M, Zafer A (eds). Uğur-Bahçeşehir University Publishing Company, Istanbul; 2009: 105–115.
  20. Hilscher R, Růžičková V: Implicit Riccati equations and quadratic functionals for discrete symplectic systems. Int J Difference Equ 2006, 1(1): 135–154.
  21. Šimon Hilscher R, Zeidan V: Oscillation theorems and Rayleigh principle for linear Hamiltonian and symplectic systems with general boundary conditions. Appl Math Comput 2012, 218(17): 8309–8328. doi:10.1016/j.amc.2012.01.056

Copyright

© Hilscher; licensee Springer. 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
