
J-self-adjoint extensions for second-order linear difference equations with complex coefficients

Abstract

This paper is concerned with second-order linear difference equations with complex coefficients which are formally J-symmetric. Both J-self-adjoint subspace extensions and J-self-adjoint operator extensions of the corresponding minimal subspace are completely characterized in terms of boundary conditions.

MSC: 39A70; 47A06.

1 Introduction

In this paper, we consider the following second-order linear difference equation with complex coefficients:

τ(x)(t) := −∇(p(t)Δx(t)) + q(t)x(t) = λw(t)x(t), t ∈ I,
(1.1)

where I is the integer set {t}_{t=a}^{b}, a is a finite integer or −∞, and b is a finite integer or +∞ with b − a ≥ 3; Δ and ∇ are the forward and backward difference operators, respectively, i.e., Δx(t) = x(t+1) − x(t) and ∇x(t) = x(t) − x(t−1); p(t) and q(t) are complex with p(t) ≠ 0 for t ∈ I, p(a−1) ≠ 0 if a is finite, and p(b+1) ≠ 0 if b is finite; w(t) > 0 for t ∈ I; and λ is a spectral parameter.
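Computationally, (1.1) is a three-term recurrence: expanding the forward and backward differences at each t gives p(t)x(t+1) = (p(t) + p(t−1) + q(t) − λw(t))x(t) − p(t−1)x(t−1), so x(t+1) is determined by x(t−1) and x(t) whenever p(t) ≠ 0. The following Python sketch (the function name and interface are ours, purely illustrative) propagates a solution from two initial values:

```python
def solve_ivp(p, q, w, lam, t_range, x0, x1):
    """Propagate a solution of -∇(p(t)Δx(t)) + q(t)x(t) = λ w(t) x(t).

    Expanding the differences turns the equation at each t into
        p(t) x(t+1) = (p(t) + p(t-1) + q(t) - λ w(t)) x(t) - p(t-1) x(t-1),
    which determines x(t+1) from x(t-1), x(t) since p(t) ≠ 0.

    p, q, w: callables on the integers; x0 = x(a-1), x1 = x(a).
    Returns [x(a-1), x(a), ..., x(b+1)] for t_range = (a, b).
    """
    a, b = t_range
    xs = [complex(x0), complex(x1)]
    for t in range(a, b + 1):
        xm1, xt = xs[-2], xs[-1]
        xs.append(((p(t) + p(t - 1) + q(t) - lam * w(t)) * xt
                   - p(t - 1) * xm1) / p(t))
    return xs
```

In particular, for each λ the solution space of (1.1) is two-dimensional: a solution is fixed by its values at two consecutive points.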

Equation (1.1) is formally symmetric if and only if both p(t) and q(t) are real-valued. Therefore, if p(t) or q(t) takes non-real values, then Eq. (1.1) is formally nonsymmetric. To study nonsymmetric operators, Glazman introduced the concept of J-symmetric operators in [1], where J is a conjugation operator (see Definition 2.2). The minimal operators generated by Sturm-Liouville and some higher-order differential and difference expressions with complex coefficients are J-symmetric operators in the related Hilbert spaces (e.g., [2–4]). Here, we remark that a bounded J-symmetric operator is also called a complex symmetric operator (cf. [5, 6]). The operators generated by singular differential and difference expressions are not bounded in general.

It is well known that the study of the spectra of symmetric (J-symmetric) differential expressions reduces to the study of the spectra of the self-adjoint (J-self-adjoint) operators they generate. In general, under a certain definiteness condition, a formal differential expression generates a minimal operator in a related Hilbert space whose adjoint is the corresponding maximal operator (see, e.g., [7, 8]). The self-adjoint (J-self-adjoint) operators are then obtained by extending the minimal operator. In addition, different self-adjoint (J-self-adjoint) extensions of the minimal operator have different eigenvalues, although their essential spectra coincide. Therefore, the characterization of self-adjoint (J-self-adjoint) extensions of a differential expression is a primary task in the study of its spectral problems; the classical von Neumann self-adjoint extension theory and the Glazman-Krein-Naimark (GKN) theory for symmetric operators were established in [9, 10]. The related J-self-adjoint extension theory was also established (cf. [3, 11]). By using them, characterizations of self-adjoint (J-self-adjoint) extensions for differential expressions in terms of boundary conditions have been given (cf. [4, 7, 12, 13]). For other results on formally symmetric (J-symmetric) differential expressions, the reader is referred to [14–22] and the references therein.

It has been found that the minimal operators generated by some differential expressions may be non-densely defined and the maximal operators may be multi-valued (e.g., see [[20], Example 2.2]). In particular, the maximal operator corresponding to Eq. (1.1) is multi-valued, and the minimal operator is non-densely defined in the related Hilbert space (cf. [23]). Therefore, the self-adjoint extension theory for symmetric operators is not applicable in these cases. Coddington [24] extended the von Neumann self-adjoint extension theory for symmetric operators to Hermitian subspaces in 1973. Recently, Shi [25] extended the GKN theory for symmetric operators to Hermitian subspaces. Using the GKN theory given in [25], Shi [23] first studied the self-adjoint extensions of (1.1) with real coefficients in the framework of subspaces in a product space. In the J-symmetric case, in order to study the J-self-adjoint extensions of J-symmetric differential and difference expressions for which the minimal operators are non-densely defined or the maximal operators are multi-valued, the theory for a J-Hermitian subspace was given in [26], which includes the GKN theorem for a J-Hermitian subspace. For results on difference expressions, the reader is referred to [27–33].

The limit types of (1.1) which are directly related to how many boundary conditions should be added to get a J-self-adjoint extension have been investigated in [31, 32]. In the present paper, the J-self-adjoint subspace extensions and J-self-adjoint operator extensions of the minimal subspace corresponding to Eq. (1.1) with complex coefficients are studied. A complete characterization of them in terms of boundary conditions is given. These characterizations are basic in the study of spectral theory for Eq. (1.1).

The rest of this paper is organized as follows. In Section 2, some basic concepts and fundamental results about subspaces and Eq. (1.1) are introduced. In Section 3, the maximal, pre-minimal, and minimal subspaces on the whole interval and on the left-hand and right-hand half-intervals are introduced and their properties are studied. The relationship among the defect indices of the minimal subspaces on the whole interval and the two half-intervals is studied in Section 4. In Section 5, we turn our attention to J-self-adjoint subspace extensions of the minimal subspace on the whole interval. A complete characterization of J-self-adjoint operator extensions of the minimal operator on the whole interval is given in Section 6. Finally, three examples are given in Section 7.

2 Preliminaries

In this section, we introduce some basic concepts and give some fundamental results about subspaces in a product space and present two results about Eq. (1.1).

Denote by ℂ the set of complex numbers and by z̄ the complex conjugate of z ∈ ℂ. Let X be a complex Hilbert space with inner product ⟨·,·⟩. The norm is defined by ‖f‖ = ⟨f,f⟩^{1/2} for f ∈ X. Let X² be the product space X×X with the following induced inner product, denoted by ⟨·,·⟩ without any confusion:

⟨(x,f),(y,g)⟩ = ⟨x,y⟩ + ⟨f,g⟩ for all (x,f),(y,g) ∈ X².

Let T be a linear subspace in X². For brevity, a linear subspace is simply called a subspace. For a subspace T in X², denote

D(T) = {x ∈ X : (x,f) ∈ T for some f ∈ X},
T(x) = {f ∈ X : (x,f) ∈ T},
T − λ = {(x, f − λx) : (x,f) ∈ T}.

Clearly, T(0) = {0} if and only if T determines a unique linear operator from D(T) into X whose graph is T. Therefore, T is said to be an operator if T(0) = {0}.

Definition 2.1 [24]

Let T be a subspace in X².

  1. (1)

    Its adjoint, T*, is defined by

    T* = {(y,g) ∈ X² : ⟨f,y⟩ = ⟨x,g⟩ for all (x,f) ∈ T}.
  2. (2)

    T is said to be a Hermitian subspace if T ⊂ T*.

  3. (3)

    T is said to be a self-adjoint subspace if T = T*.

Lemma 2.1 [24]

Let T be a subspace in X². Then T* is a closed subspace in X², T* = (T̄)*, and T** = T̄, where T̄ is the closure of T.

Definition 2.2 (see [[19], p.114] or [3])

An operator J defined on X is said to be a conjugation operator if for all x,y ∈ X,

⟨Jx,Jy⟩ = ⟨y,x⟩ and J²x = x.
(2.1)

It can be verified that J is a conjugate-linear, norm-preserving bijection on X and that (see [[19], p.114])

⟨Jx,y⟩ = ⟨Jy,x⟩ for all x,y ∈ X.
(2.2)

The complex conjugation x ↦ x̄ in any l² space is a conjugation operator on l².
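Since the paper works throughout with J equal to complex conjugation, the defining identities (2.1) and their consequence (2.2) can be checked numerically. A minimal sketch, using the paper's convention that the inner product ⟨x,y⟩ = Σ ȳ(t)x(t) is conjugate-linear in the second argument (helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def inner(x, y):
    # the paper's convention: <x, y> = sum over t of conj(y(t)) * x(t)
    return np.sum(np.conj(y) * x)

J = np.conj  # the conjugation operator x -> x̄ on finite complex sequences

x = rng.normal(size=6) + 1j * rng.normal(size=6)
y = rng.normal(size=6) + 1j * rng.normal(size=6)

# (2.1): <Jx, Jy> = <y, x> and J²x = x
assert np.isclose(inner(J(x), J(y)), inner(y, x))
assert np.allclose(J(J(x)), x)

# (2.2): <Jx, y> = <Jy, x>
assert np.isclose(inner(J(x), y), inner(J(y), x))
```

The same check works with a weight w(t) > 0 inserted in the sum, which is the situation of the spaces l_w² used later.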

Definition 2.3 [26]

Let T be a subspace in X² and let J be a conjugation operator.

  1. (1)

    The J-adjoint of T, denoted T^J, is defined by

    T^J = {(y,g) ∈ X² : ⟨f,Jy⟩ = ⟨x,Jg⟩ for all (x,f) ∈ T}.
  2. (2)

    T is said to be a J-Hermitian subspace if T ⊂ T^J.

  3. (3)

    T is said to be a J-self-adjoint subspace if T = T^J.

  4. (4)

    Let T be a J-Hermitian subspace. Then S is a J-self-adjoint subspace extension (briefly, J-SSE) of T if T ⊂ S and S is a J-self-adjoint subspace.

Remark 2.1

  1. (i)

    It can be easily verified that T^J is a closed subspace. Consequently, a J-self-adjoint subspace T is closed since T = T^J. In addition, S^J ⊂ T^J if T ⊂ S.

  2. (ii)

    From the definition, we have that ⟨f,Jy⟩ = ⟨x,Jg⟩ holds for all (x,f) ∈ T and (y,g) ∈ T^J, and that T is a J-Hermitian subspace if and only if

    ⟨f,Jy⟩ = ⟨x,Jg⟩ for all (x,f),(y,g) ∈ T.
  3. (iii)

    Assume that T is not only J-symmetric for some conjugation operator J but also symmetric, and that S is a J-SSE of T. Then S is a self-adjoint subspace extension of T if and only if S^J = S*.

Lemma 2.2 [26]

Let T be a subspace in X². Then

  1. (1)

    T* = {(Jy,Jg) : (y,g) ∈ T^J};

  2. (2)

    T^J = {(Jy,Jg) : (y,g) ∈ T*}.

Lemma 2.3 [26]

Let T be a J-Hermitian subspace. Then (y,g) ∈ T̄ if and only if (y,g) ∈ T^J and ⟨f,Jy⟩ = ⟨x,Jg⟩ for all (x,f) ∈ T^J.

Definition 2.4 [26]

Let T be a J-Hermitian subspace. Then d(T) = (1/2) dim(T^J/T̄) is called the defect index of T.

Remark 2.2 By [[26], Remark 3.5], d(T) is a nonnegative integer or infinite. Further, d(T) = d(T̄). Hence T and T̄ have the same J-SSEs since every J-SSE is closed.

Define the form [·:·] by

[(x,f) : (y,g)] = ⟨f,Jy⟩ − ⟨x,Jg⟩, (x,f),(y,g) ∈ T^J.

Then, for all Y_j = (x_j,f_j) ∈ T^J (j = 1,2,3) and μ ∈ ℂ, it holds that

[Y₃ : Y₁ + Y₂] = [Y₃ : Y₁] + [Y₃ : Y₂], [Y₁ + Y₂ : Y₃] = [Y₁ : Y₃] + [Y₂ : Y₃],
[μY₁ : Y₂] = [Y₁ : μY₂] = μ[Y₁ : Y₂], [Y₁ : Y₂] = −[Y₂ : Y₁].
(2.3)
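The rules (2.3), in particular the antisymmetry [Y₁ : Y₂] = −[Y₂ : Y₁], are easy to test numerically in the concrete case J = complex conjugation on finite complex sequences. A sketch (all helper names are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)

def inner(x, y):
    # <x, y> = sum of conj(y(t)) * x(t), conjugate-linear in the second slot
    return np.sum(np.conj(y) * x)

def form(Y1, Y2):
    # [ (x,f) : (y,g) ] = <f, Jy> - <x, Jg>, with J the complex conjugation
    (x, f), (y, g) = Y1, Y2
    return inner(f, np.conj(y)) - inner(x, np.conj(g))

def rand_pair(n=5):
    return tuple(rng.normal(size=n) + 1j * rng.normal(size=n) for _ in range(2))

Y1, Y2, Y3 = rand_pair(), rand_pair(), rand_pair()
mu = 2.0 - 3.0j
add = lambda A, B: (A[0] + B[0], A[1] + B[1])
mul = lambda c, A: (c * A[0], c * A[1])

# additivity in each slot
assert np.isclose(form(Y3, add(Y1, Y2)), form(Y3, Y1) + form(Y3, Y2))
assert np.isclose(form(add(Y1, Y2), Y3), form(Y1, Y3) + form(Y2, Y3))
# [μY1 : Y2] = [Y1 : μY2] = μ [Y1 : Y2]  (the form is bilinear, not sesquilinear)
assert np.isclose(form(mul(mu, Y1), Y2), mu * form(Y1, Y2))
assert np.isclose(form(Y1, mul(mu, Y2)), mu * form(Y1, Y2))
# anti-symmetry
assert np.isclose(form(Y1, Y2), -form(Y2, Y1))
```

Note the contrast with the Hermitian case: here the form is homogeneous of degree one in μ in both slots, with no complex conjugate.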

The following result, which can be regarded as the GKN theorem for a J-Hermitian subspace, was established in [26].

Theorem 2.1 Let T be a closed J-Hermitian subspace. Assume that d(T) =: d < +∞. Then a subspace S is a J-SSE of T if and only if T ⊂ S ⊂ T^J and there exists {(x_j,f_j)}_{j=1}^{d} ⊂ T^J such that

  1. (i)

    (x₁,f₁), (x₂,f₂), …, (x_d,f_d) are linearly independent (modulo T);

  2. (ii)

    [(x_s,f_s) : (x_j,f_j)] = 0 for s,j = 1,2,…,d;

  3. (iii)

    S = {(y,g) ∈ T^J : [(y,g) : (x_j,f_j)] = 0, j = 1,2,…,d}.

Finally, we present two results for τ and Eq. (1.1). For brevity, introduce the conventions: for any given integer k, a + k = −∞ when a = −∞, and b + k = +∞ when b = +∞. Further, denote

(x,y)(t) = p(t)[(Δy(t))x(t) − y(t)Δx(t)], t ∈ {t}_{t=a−1}^{b}.

In the case of a = −∞, if lim_{t→−∞} (x,y)(t) exists and is finite, then denote the limit by (x,y)(−∞); in the case of b = +∞, if lim_{t→+∞} (x,y)(t) exists and is finite, then denote the limit by (x,y)(+∞).

We remark that the notation (x,y)(t) is also used in [23], where it is given by (x,y)(t) = p(t)[(Δȳ(t))x(t) − ȳ(t)Δx(t)]. So, the expression of (x,y)(t) in the present paper is different from that in [23].

It can be easily verified that the following result holds.

Lemma 2.4 For any x = {x(t)}_{t=a−1}^{b+1} ⊂ ℂ, y = {y(t)}_{t=a−1}^{b+1} ⊂ ℂ, and for any m,n ∈ I with m ≤ n,

Σ_{t=m}^{n} [y(t)τ(x)(t) − τ(y)(t)x(t)] = (x,y)(t)|_{t=m−1}^{n}.

The following result is a direct consequence of Lemma 2.4.

Lemma 2.5 For each λ ∈ ℂ, let y and z be any solutions of (1.1). Then, for any given a−1 ≤ t₀ ≤ b,

(y,z)(t) = (y,z)(t₀), t ∈ I.
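Both lemmas lend themselves to a quick numeric sanity check. The sketch below (all names are illustrative, not from the paper) builds random complex coefficients, verifies the Lagrange identity of Lemma 2.4 for two arbitrary sequences, and verifies the constancy of (y,z)(t) in Lemma 2.5 by propagating two solutions of (1.1) via the underlying three-term recurrence:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, lam = 0, 8, 1.5 - 0.5j

# random complex coefficients, p(t) kept away from 0 on {a-1, ..., b}, w(t) > 0 on I
p = {t: 3 + 0.5 * rng.normal() + 0.5j * rng.normal() for t in range(a - 1, b + 1)}
q = {t: rng.normal() + 1j * rng.normal() for t in range(a, b + 1)}
w = {t: rng.uniform(1, 2) for t in range(a, b + 1)}

def tau(x, t):
    # τ(x)(t) = -∇(p(t)Δx(t)) + q(t)x(t)
    return -(p[t] * (x[t + 1] - x[t]) - p[t - 1] * (x[t] - x[t - 1])) + q[t] * x[t]

def bracket(x, y, t):
    # (x, y)(t) = p(t)[(Δy(t)) x(t) - y(t) Δx(t)]
    return p[t] * ((y[t + 1] - y[t]) * x[t] - y[t] * (x[t + 1] - x[t]))

def solve(x0, x1):
    # propagate a solution of τ(x)(t) = λ w(t) x(t) from x(a-1) = x0, x(a) = x1
    x = {a - 1: x0, a: x1}
    for t in range(a, b + 1):
        x[t + 1] = ((p[t] + p[t - 1] + q[t] - lam * w[t]) * x[t]
                    - p[t - 1] * x[t - 1]) / p[t]
    return x
```

Summing the pointwise identity y(t)τ(x)(t) − τ(y)(t)x(t) = (x,y)(t) − (x,y)(t−1) telescopes to Lemma 2.4, and Lemma 2.5 is the special case in which both sequences solve (1.1) with the same λ, so the left-hand side vanishes.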

3 Maximal and minimal subspaces

In this section, we introduce the maximal, pre-minimal, and minimal subspaces corresponding to τ on the whole interval and on the left-hand and right-hand half-intervals, and we study their properties.

First, introduce the following space:

l_w²(I) := {x = {x(t)}_{t=a−1}^{b+1} ⊂ ℂ : Σ_{t=a}^{b} w(t)|x(t)|² < +∞}.

Then l_w²(I) is a Hilbert space with the inner product

⟨x,y⟩ := Σ_{t=a}^{b} ȳ(t)w(t)x(t).

Clearly, x = y in l_w²(I) if and only if x(t) = y(t), t ∈ I, i.e., ‖x − y‖ = 0, where ‖x‖ = ⟨x,x⟩^{1/2}.

The formal adjoint of τ is

τ⁺(x)(t) := −∇(p̄(t)Δx(t)) + q̄(t)x(t), t ∈ I.

Now, introduce the maximal subspace H(τ) and the pre-minimal subspace H 00 (τ) in l w 2 (I)× l w 2 (I) corresponding to τ as follows.

H(τ) = {(x,f) ∈ l_w²(I) × l_w²(I) : τ(x)(t) = w(t)f(t), t ∈ I},
H₀₀(τ) = {(x,f) ∈ H(τ) : there exist two integers t̃₀, t₀ ∈ I with t̃₀ < t₀ such that x(t) = 0 for t ≤ t̃₀ and t ≥ t₀}.
(3.1)

The subspace H₀(τ) := H̄₀₀(τ), the closure of H₀₀(τ), is called the minimal subspace corresponding to τ.

The endpoints a and b may be finite or infinite. In order to characterize the J-SSEs of H₀(τ) in a unified form, we introduce the left and right maximal and minimal subspaces. Fix any integer c₀ with a+1 < c₀ < b. Denote

I₁ := {t}_{t=a}^{c₀−1}, I₂ := {t}_{t=c₀}^{b},

and denote by ⟨·,·⟩, ⟨·,·⟩_a, ⟨·,·⟩_b and ‖·‖, ‖·‖_a, ‖·‖_b the inner products and the norms of l_w²(I), l_w²(I₁), and l_w²(I₂), respectively. For brevity, we still denote the inner products and norms of the product spaces l_w²(I)×l_w²(I), l_w²(I₁)×l_w²(I₁), and l_w²(I₂)×l_w²(I₂) by the same notations as those for l_w²(I), l_w²(I₁), and l_w²(I₂), respectively.

Let H_a(τ) and H_{a,00}(τ) be the left maximal and pre-minimal subspaces defined as in (3.1) with I replaced by I₁, and let H_b(τ) and H_{b,00}(τ) be the right maximal and pre-minimal subspaces defined as in (3.1) with I replaced by I₂. The subspaces H_{a,0}(τ) := H̄_{a,00}(τ) and H_{b,0}(τ) := H̄_{b,00}(τ) are called the left and right minimal subspaces corresponding to τ, respectively. By Lemma 2.1, one has

H₀(τ)* = H₀₀(τ)*, H_{a,0}(τ)* = H_{a,00}(τ)*, H_{b,0}(τ)* = H_{b,00}(τ)*.
(3.2)

In the rest of the present paper, let J be the complex conjugation x ↦ x̄, i.e., Jx = x̄. Then J is a conjugation operator on l_w²(I) (or l_w²(I₁) or l_w²(I₂)). By Lemma 2.2 and (3.2), one has that

(H₀(τ))^J = (H₀₀(τ))^J, (H_{a,0}(τ))^J = (H_{a,00}(τ))^J, (H_{b,0}(τ))^J = (H_{b,00}(τ))^J.
(3.3)

The rest of this section is divided into three parts.

3.1 Properties of minimal subspaces and their adjoint and J-adjoint subspaces

In this subsection, we study the properties of the minimal subspaces H₀(τ), H_{a,0}(τ), H_{b,0}(τ) and of their adjoint and J-adjoint subspaces.

First, we have the following result.

Lemma 3.1 (see [[23], Lemma 3.1])

For each a+1 ≤ t₀ ≤ b−1 (or a+1 ≤ t₀ ≤ c₀−2 or c₀+1 ≤ t₀ ≤ b−1) and for each ξ ∈ ℂ, there exists x ∈ D(H₀₀(τ)) (or D(H_{a,00}(τ)) or D(H_{b,00}(τ))) such that x(t₀) = ξ and x(t) = 0 for all t ≠ t₀.

Theorem 3.1 H(τ⁺) ⊂ H₀₀(τ)*, H_a(τ⁺) ⊂ H_{a,00}(τ)*, H_b(τ⁺) ⊂ H_{b,00}(τ)*, and

H₀₀(τ)* = {(x,f) ∈ l_w²(I) × l_w²(I) : τ⁺(x)(t) = w(t)f(t), a+1 ≤ t ≤ b−1},
H_{a,00}(τ)* = {(x,f) ∈ l_w²(I₁) × l_w²(I₁) : τ⁺(x)(t) = w(t)f(t), a+1 ≤ t ≤ c₀−2},
H_{b,00}(τ)* = {(x,f) ∈ l_w²(I₂) × l_w²(I₂) : τ⁺(x)(t) = w(t)f(t), c₀+1 ≤ t ≤ b−1}.
(3.4)

Proof Since H_{a,00}(τ) and H_{b,00}(τ) are two special cases of H₀₀(τ), we only prove the results corresponding to H₀₀(τ).

For any given (x,f) ∈ H₀₀(τ)*, we have

⟨f,y⟩ = ⟨x,g⟩, (y,g) ∈ H₀₀(τ),
(3.5)

which implies that

Σ_{t=a}^{b} [ȳ(t)w(t)f(t) − τ⁺(ȳ)(t)x(t)] = 0.
(3.6)

On the other hand, by using y ∈ D(H₀₀(τ)), it can be verified that

Σ_{t=a}^{b} [ȳ(t)τ⁺(x)(t) − τ⁺(ȳ)(t)x(t)] = 0,

which, together with (3.6) and y(a) = y(b) = 0 when a and b are finite, implies that

Σ_{t=a+1}^{b−1} ȳ(t)[w(t)f(t) − τ⁺(x)(t)] = 0, y ∈ D(H₀₀(τ)).

So, by Lemma 3.1 we get

τ⁺(x)(t) = w(t)f(t), a+1 ≤ t ≤ b−1.
(3.7)

Conversely, suppose that (x,f) ∈ l_w²(I) × l_w²(I) satisfies (3.7). Then (3.5) holds for all (y,g) ∈ H₀₀(τ). Consequently, (x,f) ∈ H₀₀(τ)*. So, the first relation of (3.4) holds. In addition, the first relation of (3.4) directly yields that H(τ⁺) ⊂ H₀₀(τ)*. This completes the proof. □

Theorem 3.2 The subspaces H₀₀(τ), H_{a,00}(τ), and H_{b,00}(τ) are J-Hermitian subspaces in l_w²(I)×l_w²(I), l_w²(I₁)×l_w²(I₁), and l_w²(I₂)×l_w²(I₂), respectively. Further, H(τ) ⊂ (H₀₀(τ))^J, H_a(τ) ⊂ (H_{a,00}(τ))^J, and H_b(τ) ⊂ (H_{b,00}(τ))^J, and

(H₀₀(τ))^J = {(x,f) ∈ l_w²(I) × l_w²(I) : τ(x)(t) = w(t)f(t), a+1 ≤ t ≤ b−1},
(H_{a,00}(τ))^J = {(x,f) ∈ l_w²(I₁) × l_w²(I₁) : τ(x)(t) = w(t)f(t), a+1 ≤ t ≤ c₀−2},
(H_{b,00}(τ))^J = {(x,f) ∈ l_w²(I₂) × l_w²(I₂) : τ(x)(t) = w(t)f(t), c₀+1 ≤ t ≤ b−1}.
(3.8)

Proof It can be easily verified that H 00 (τ), H a , 00 (τ), and H b , 00 (τ) are J-Hermitian subspaces in the corresponding Hilbert spaces by (ii) of Remark 2.1 and Lemma 2.4. Further, (3.8) can be concluded from Theorem 3.1 and Lemma 2.2. This completes the proof. □

Using Theorem 3.2 and a similar argument to [[23], Corollary 3.1], we can get the following results.

Corollary 3.1 H(τ) = (H₀₀(τ))^J = (H₀(τ))^J, H_a(τ) = (H_{a,00}(τ))^J = (H_{a,0}(τ))^J, and H_b(τ) = (H_{b,00}(τ))^J = (H_{b,0}(τ))^J in the sense of the norms ‖·‖, ‖·‖_a, and ‖·‖_b, respectively. Consequently, H(τ), H_a(τ), and H_b(τ) are closed subspaces in l_w²(I)×l_w²(I), l_w²(I₁)×l_w²(I₁), and l_w²(I₂)×l_w²(I₂), respectively.

Remark 3.1 H(τ) = (H₀₀(τ))^J = (H₀(τ))^J follows from (3.3) and the first relation of (3.8) in the special case that a = −∞ and b = +∞.

Now, we introduce the boundary forms on l w 2 (I)× l w 2 (I), l w 2 ( I 1 )× l w 2 ( I 1 ), and l w 2 ( I 2 )× l w 2 ( I 2 ) as follows.

[·:·] : (l_w²(I) × l_w²(I)) × (l_w²(I) × l_w²(I)) → ℂ, ((x,f),(y,g)) ↦ ⟨f,Jy⟩ − ⟨x,Jg⟩;
[·:·]_a : (l_w²(I₁) × l_w²(I₁)) × (l_w²(I₁) × l_w²(I₁)) → ℂ, ((x,f),(y,g)) ↦ ⟨f,Jy⟩_a − ⟨x,Jg⟩_a;
[·:·]_b : (l_w²(I₂) × l_w²(I₂)) × (l_w²(I₂) × l_w²(I₂)) → ℂ, ((x,f),(y,g)) ↦ ⟨f,Jy⟩_b − ⟨x,Jg⟩_b.

It can be easily shown that (2.3) holds for [·:·], [·:·]_a, and [·:·]_b, respectively.

Note that H₀(τ), H_{a,0}(τ), and H_{b,0}(τ) are closed. Then, by Lemma 2.3 and (3.3), H₀(τ), H_{a,0}(τ), and H_{b,0}(τ) can be expressed in terms of the boundary forms as follows:

H₀(τ) = {(x,f) ∈ (H₀₀(τ))^J : [(x,f) : (H₀₀(τ))^J] = 0},
H_{a,0}(τ) = {(x,f) ∈ (H_{a,00}(τ))^J : [(x,f) : (H_{a,00}(τ))^J]_a = 0},
H_{b,0}(τ) = {(x,f) ∈ (H_{b,00}(τ))^J : [(x,f) : (H_{b,00}(τ))^J]_b = 0}.
(3.9)

Theorem 3.3 The subspaces H 0 (τ), H a , 0 (τ), and H b , 0 (τ) are closed J-Hermitian operators in l w 2 (I), l w 2 ( I 1 ), and l w 2 ( I 2 ), respectively.

Proof We only prove the result for H₀(τ) since H_{a,0}(τ) and H_{b,0}(τ) can be regarded as two special cases of H₀(τ).

Since H₀(τ) is a J-Hermitian subspace by Theorem 3.2 and H₀(τ) = H̄₀₀(τ), H₀(τ) is a closed J-Hermitian subspace. So, it suffices to show that H₀(τ)(0) = {0}. Suppose that (0,f) ∈ H₀(τ). Then, for all (y,g) ∈ H(τ) ⊂ (H₀₀(τ))^J, [(0,f):(y,g)] = ⟨f,Jy⟩ = 0, that is,

Σ_{t=a}^{b} y(t)w(t)f(t) = 0.
(3.10)

In order to show f=0, the discussion is divided into three cases.

Case 1. The endpoints a and b are finite. Since (0,f) ∈ (H₀₀(τ))^J, we get by Theorem 3.2 that f(t) = 0 for a+1 ≤ t ≤ b−1, and then (3.10) yields

y(a)w(a)f(a) + y(b)w(b)f(b) = 0, (y,g) ∈ H(τ).
(3.11)

It can be easily shown that there exists (y,g) ∈ H(τ) such that y(a) = f(a) and y(t) = 0 for all t ≠ a. Inserting it into (3.11) yields w(a)f(a)² = 0, and hence f(a) = 0. Similarly, f(b) = 0. Hence, f = 0.

Case 2. Exactly one of a and b is finite, say a. With a similar argument to that for Case 1, one can show f(a) = 0. Hence, f = 0.

Case 3. a = −∞ and b = +∞. By Remark 3.1, H(τ) = (H₀₀(τ))^J. So, by the first relation of (3.8), x(t) = 0 for t ∈ I implies that f(t) = 0 for t ∈ I. This completes the proof. □

Lemma 3.2 For every (x,f) ∈ H₀(τ), x(a) = 0 in the case that a is finite and x(b) = 0 in the case that b is finite.

Proof Fix any (x,f) ∈ H₀(τ). Then we have

0 = [(x,f):(y,g)] = Σ_{t=a}^{b} y(t)w(t)f(t) − Σ_{t=a}^{b} g(t)w(t)x(t), (y,g) ∈ H(τ).
(3.12)

If a is finite, then there exists (y₀,g₀) ∈ H(τ) such that y₀(a−1) ≠ 0 and y₀(t) = 0 for all t ∈ I. Inserting (y₀,g₀) into (3.12), we have that p(a−1)y₀(a−1)x(a) = 0. So, x(a) = 0. One can get that x(b) = 0 when b is finite similarly. This completes the proof. □

Theorem 3.4 The subspace H₀(τ) is a densely defined J-Hermitian operator in l_w²(I) in the case that a = −∞ and b = +∞, and a non-densely defined J-Hermitian operator in l_w²(I) in the case that at least one of a and b is finite. Consequently, H_{a,0}(τ) and H_{b,0}(τ) are non-densely defined J-Hermitian operators in l_w²(I₁) and l_w²(I₂), respectively.

Proof By Theorem 3.3, Lemma 3.2, and a similar method to [[23], Theorem 3.3], this theorem can be proved. □

3.2 Characterizations of the three subspaces Ĥ₀(τ), Ĥ_{a,0}(τ), and Ĥ_{b,0}(τ)

In this subsection, we introduce three subspaces Ĥ₀(τ), Ĥ_{a,0}(τ), and Ĥ_{b,0}(τ) and discuss their characterizations, which will play an important role in the study of J-SSEs of H₀(τ).

First, define Ĥ₀(τ), Ĥ_{a,0}(τ), and Ĥ_{b,0}(τ) in l_w²(I)×l_w²(I), l_w²(I₁)×l_w²(I₁), and l_w²(I₂)×l_w²(I₂) as follows:

Ĥ₀(τ) := {(x,f) ∈ H(τ) : [(x,f) : H(τ)] = 0},
Ĥ_{a,0}(τ) := {(x,f) ∈ H_a(τ) : [(x,f) : H_a(τ)]_a = 0},
Ĥ_{b,0}(τ) := {(x,f) ∈ H_b(τ) : [(x,f) : H_b(τ)]_b = 0}.

Since [·:·], [·:·]_a, and [·:·]_b are defined in terms of the norms ‖·‖, ‖·‖_a, and ‖·‖_b, respectively, by Corollary 3.1 we get that Ĥ₀(τ) = H₀(τ), Ĥ_{a,0}(τ) = H_{a,0}(τ), and Ĥ_{b,0}(τ) = H_{b,0}(τ) in the sense of the norms ‖·‖, ‖·‖_a, and ‖·‖_b, respectively. So, Ĥ₀(τ), Ĥ_{a,0}(τ), and Ĥ_{b,0}(τ) are closed J-Hermitian operators in the corresponding spaces by Theorem 3.3.

In [23], the patching lemma [[23], Lemma 3.3] was used in the study of the self-adjoint subspace extensions for (1.1) with real coefficients. It also holds for (1.1) with complex coefficients here.

Lemma 3.3 [[23], Lemma 3.3]

For any given α_j, β_j ∈ ℂ, j = 1,2, and any given a₁, b₁ ∈ I with b₁ ≥ a₁ + 1, there exists f = {f(t)}_{t=a₁}^{b₁} ⊂ ℂ such that the boundary value problem

τ(x)(t) = w(t)f(t), a₁ ≤ t ≤ b₁,
x(a₁−1) = α₁, x(a₁) = α₂, x(b₁) = β₁, x(b₁+1) = β₂

has a solution x = {x(t)}_{t=a₁−1}^{b₁+1}. Further, for any given (x₁,f₁), (x₂,f₂) ∈ H(τ), there exists (y,g) ∈ H(τ) such that

y(t) = x₁(t) for a−1 ≤ t ≤ a₁, y(t) = x₂(t) for b₁ ≤ t ≤ b+1;
g(t) = f₁(t) for a ≤ t ≤ a₁−1, g(t) = f₂(t) for b₁+1 ≤ t ≤ b.
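The first assertion of Lemma 3.3 is constructive: choose any x attaining the four prescribed boundary values (say, zero at the interior points) and then define f by f(t) = τ(x)(t)/w(t), which is possible because w(t) > 0. A Python sketch of this idea (the function and its interface are ours, purely illustrative):

```python
def patch(p, q, w, a1, b1, alpha1, alpha2, beta1, beta2):
    """Construct (x, f) with τ(x)(t) = w(t) f(t) on a1 ≤ t ≤ b1 and
    x(a1-1) = α1, x(a1) = α2, x(b1) = β1, x(b1+1) = β2.

    x is chosen freely (zero at interior points here); f is then *defined*
    by f(t) = τ(x)(t) / w(t), which exists since w(t) > 0.
    """
    x = {t: 0.0 + 0.0j for t in range(a1 - 1, b1 + 2)}
    x[a1 - 1], x[a1], x[b1], x[b1 + 1] = alpha1, alpha2, beta1, beta2

    def tau(t):
        # τ(x)(t) = -∇(p(t)Δx(t)) + q(t)x(t)
        return -(p(t) * (x[t + 1] - x[t]) - p(t - 1) * (x[t] - x[t - 1])) + q(t) * x[t]

    f = {t: tau(t) / w(t) for t in range(a1, b1 + 1)}
    return x, f
```

The second assertion then follows by applying this construction on a middle segment to join the boundary data of (x₁,f₁) and (x₂,f₂).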

Remark 3.2 (see [[23], Remark 3.2])

Any two elements of H_a(τ) (or H_b(τ)) can be patched together by some element of H_a(τ) (or H_b(τ)) in a similar way as in Lemma 3.3. Further, any element of H_a(τ) and any element of H_b(τ) can be patched together by some element of H(τ) in a similar way as in Lemma 3.3.

The following result can be easily verified by Lemma 2.4, Theorem 3.2, and (3.3).

Lemma 3.4 For all x,y ∈ D((H₀(τ))^J) or D((H_{a,0}(τ))^J), lim_{t→−∞} (x,y)(t) exists and is finite in the case of a = −∞; and for all x,y ∈ D((H₀(τ))^J) or D((H_{b,0}(τ))^J), lim_{t→+∞} (x,y)(t) exists and is finite in the case of b = +∞. Moreover,

[(x,f):(y,g)] = (x,y)(b) − (x,y)(a−1), (x,f),(y,g) ∈ H(τ),
[(x,f):(y,g)]_a = (x,y)(c₀−1) − (x,y)(a−1), (x,f),(y,g) ∈ H_a(τ),
[(x,f):(y,g)]_b = (x,y)(b) − (x,y)(c₀−1), (x,f),(y,g) ∈ H_b(τ).

Using Lemma 3.3 and a similar argument to [[23], Theorem 3.4], we obtain the following further characterizations of the three subspaces Ĥ₀(τ), Ĥ_{a,0}(τ), and Ĥ_{b,0}(τ).

Theorem 3.5

Ĥ₀(τ) = {(x,f) ∈ H(τ) : (x,y)(a−1) = (x,y)(b) = 0 for all y ∈ D(H(τ))},
Ĥ_{a,0}(τ) = {(x,f) ∈ H_a(τ) : x(c₀−1) = x(c₀) = 0 and (x,y)(a−1) = 0 for all y ∈ D(H_a(τ))},
Ĥ_{b,0}(τ) = {(x,f) ∈ H_b(τ) : x(c₀−1) = x(c₀) = 0 and (x,y)(b) = 0 for all y ∈ D(H_b(τ))}.

3.3 Characterizations of the left and right maximal subspaces

In this subsection, we characterize H_a(τ) and H_b(τ).

First, let d, d_a, and d_b be the defect indices of H₀(τ), H_{a,0}(τ), and H_{b,0}(τ), respectively. Then we have the following result.

Lemma 3.5 d = (1/2) dim D, d_b = (1/2) dim D_b, and d_a = (1/2) dim D_a, where

D := {(y,g) ∈ H(τ) : τ⁺((1/w)τ(y))(t) = −w(t)y(t), a+1 ≤ t ≤ b−1},
D_b := {(y,g) ∈ H_b(τ) : τ⁺((1/w)τ(y))(t) = −w(t)y(t), c₀+1 ≤ t ≤ b−1},
D_a := {(y,g) ∈ H_a(τ) : τ⁺((1/w)τ(y))(t) = −w(t)y(t), a+1 ≤ t ≤ c₀−2}.

Proof Since the proofs are similar, we only prove d = (1/2) dim D.

First, it can be verified that

dim((H₀(τ))^J / H₀(τ)) = dim(H(τ)/Ĥ₀(τ)).
(3.13)

Next, we prove that

H(τ) = Ĥ₀(τ) ⊕ D (orthogonal sum).
(3.14)

Let (y,g) ∈ H(τ) ⊖ Ĥ₀(τ), where ⊖ denotes the orthogonal complement of Ĥ₀(τ) in H(τ). Then

0 = ⟨(y,g),(x,f)⟩ = ⟨y,x⟩ + ⟨g,f⟩, (x,f) ∈ Ĥ₀(τ),
(3.15)

which yields that (g,−y) ∈ Ĥ₀(τ)*. It can be easily verified that Ĥ₀(τ)* = H₀(τ)*. So, (g,−y) ∈ H₀(τ)*, and by Theorem 3.1, one has that

τ⁺(g)(t) = −w(t)y(t), a+1 ≤ t ≤ b−1.
(3.16)

Since (y,g) ∈ H(τ) and w > 0, we get g = (1/w)τ(y) on I. Inserting this into (3.16), we have

τ⁺((1/w)τ(y))(t) = −w(t)y(t), a+1 ≤ t ≤ b−1.
(3.17)

So, (y,g) ∈ D. Conversely, suppose that (y,g) ∈ D. Then (y,g) ∈ H(τ) and (3.17) holds, and hence (3.16) holds. Then (g,−y) ∈ H₀(τ)* and hence (g,−y) ∈ Ĥ₀(τ)*. So, (3.15) holds and hence (y,g) ∈ H(τ) ⊖ Ĥ₀(τ). So, (3.14) holds, which together with (3.13) implies that d = (1/2) dim D. This completes the proof. □

Lemma 3.6 d b =1 or 2 and d a =1 or 2.

Proof By Lemma 3.5, dim D_b is equal to the number of linearly independent solutions of

τ⁺((1/w)τ(y))(t) = −w(t)y(t), c₀+1 ≤ t ≤ b−1,
(3.18)

for which both y and (1/w)τ(y) are in l_w²(I₂). Then d_b = (1/2) dim D_b ≤ 2 since (3.18) has at most four linearly independent solutions. In addition, there exist (z_j,h_j) ∈ H_b(τ), j = 1,2, such that

z₁(c₀−1) = 1, z₁(c₀) = 0, z₁(t) = 0, c₀+1 ≤ t ≤ b+1,
z₂(c₀−1) = 0, z₂(c₀) = 1, z₂(t) = 0, c₀+1 ≤ t ≤ b+1.
(3.19)

Note that (z₁,h₁) and (z₂,h₂) are linearly independent (modulo Ĥ_{b,0}(τ)) and

d_b = (1/2) dim(H_b(τ)/Ĥ_{b,0}(τ)).

Then d_b ≥ 1 and hence 1 ≤ d_b ≤ 2. Since d_b is an integer, d_b = 1 or 2.

The assertion d a =1 or 2 can be proved similarly. This completes the proof. □

Lemma 3.7

  1. (1)

    If all the solutions of (1.1) restricted to I₂ are in l_w²(I₂) for some λ₀ ∈ ℂ, then the same is true for all λ ∈ ℂ.

  2. (2)

    If all the solutions of the equation

    τ⁺((1/w)τ(y))(t) = λw(t)y(t), c₀+1 ≤ t ≤ b−1,
    (3.20)

are in l_w²(I₂) for some λ₀ ∈ ℂ, then the same is true for all λ ∈ ℂ.

Proof The first result is [[31], Lemma 2.2]. Now, we prove assertion (2). Clearly, the result holds if b is finite. So, we consider the case where b = +∞. By setting

x₁(t) = y(t), x₂(t) = (1/w(t))τ(y)(t),
x₃(t) = p̄(t)Δ((1/w(t))τ(y)(t)), x₄(t) = p(t)Δy(t),
(3.21)

Eq. (3.20) can be rewritten as the following discrete Hamiltonian system:

J̃ΔY(t) = (P(t) + λW(t))R(Y)(t), c₀ ≤ t < +∞,
(3.22)

where

Y(t) = (x₁(t), x₂(t), x₃(t), x₄(t))^T, R(Y)(t) = (x₁(t+1), x₂(t+1), x₃(t), x₄(t))^T,
P(t) = [C(t), 0; 0, B(t)], C(t) = [0, q̄(t+1); q(t+1), w(t+1)], B(t) = [0, 1/p(t); 1/p̄(t), 0],
J̃ = [0, −I_{2×2}; I_{2×2}, 0],

I_{2×2} is the 2×2 unit matrix, and W(t) = diag(w(t+1), 0, 0, 0). It is evident that the assumptions (A₁) and (A₂) of [[27], Section 1] hold for (3.22). Let

l_W² := {Y = {Y(t)}_{t=c₀}^{∞} ⊂ ℂ⁴ : Σ_{t=c₀}^{∞} R(Y)*(t)W(t)R(Y)(t) < +∞}

with the inner product ⟨Y,Z⟩_W = Σ_{t=c₀}^{∞} R(Z)*(t)W(t)R(Y)(t), where Y*(t) denotes the complex conjugate transpose of Y(t). We have from [[27], Theorem 5.5] that if there exists λ₀ ∈ ℂ such that all the solutions of (3.22) are in l_W², then the same is true for all λ ∈ ℂ. Hence, assertion (2) of this lemma follows. This completes the proof. □

Theorem 3.6 Let (z_j,h_j) ∈ H_b(τ) (j = 1,2) be defined by (3.19). Then the following results hold:

  1. (1)

    In the case of d_b = 1, for any given (x,f) ∈ H_b(τ), there exist unique (y₀,f₀) ∈ Ĥ_{b,0}(τ) and c₁, c₂ ∈ ℂ such that

    x(t) = y₀(t) + c₁z₁(t) + c₂z₂(t), c₀−1 ≤ t ≤ b+1.
    (3.23)
  2. (2)

    In the case of d_b = 2, let ϕ₁ and ϕ₂ be two linearly independent solutions of (1.1) restricted to I₂. Then ϕ₁ and ϕ₂ are in l_w²(I₂), and for any given (x,f) ∈ H_b(τ), there exist unique (y₀,f₀) ∈ Ĥ_{b,0}(τ) and c_j, d_j ∈ ℂ (j = 1,2) such that

    x(t) = y₀(t) + c₁z₁(t) + c₂z₂(t) + d₁ϕ₁(t) + d₂ϕ₂(t), c₀−1 ≤ t ≤ b+1.
    (3.24)

Proof Since dim(H_b(τ)/Ĥ_{b,0}(τ)) = 2 in the case of d_b = 1, one has that (z₁,h₁) and (z₂,h₂) defined by (3.19) form a basis of H_b(τ)/Ĥ_{b,0}(τ). So, the first result holds.

In the case of d_b = 2, one has that dim(H_b(τ)/Ĥ_{b,0}(τ)) = 4. By Lemmas 3.5 and 3.7, all the solutions of (3.20) with λ = 0 are in l_w²(I₂), and hence all the solutions of τ(y)(t) = 0 restricted to I₂ are in l_w²(I₂). So, all the solutions of (1.1) restricted to I₂ are in l_w²(I₂) by Lemma 3.7. Let ϕ₁ and ϕ₂ be two linearly independent solutions of (1.1). Then (ϕ₁,λϕ₁), (ϕ₂,λϕ₂) ∈ H_b(τ). Set

Φ := ((ϕ_j,ϕ_k)(c₀−1))_{2×2}.
(3.25)

Then it can be concluded that rank Φ = 2. On the other hand, (z₁,h₁), (z₂,h₂), (ϕ₁,λϕ₁), and (ϕ₂,λϕ₂) are linearly independent (modulo Ĥ_{b,0}(τ)). In fact, if

(Σ_{j=1}^{2} c_j z_j + Σ_{j=1}^{2} c_{j+2} ϕ_j, Σ_{j=1}^{2} c_j h_j + λ Σ_{j=1}^{2} c_{j+2} ϕ_j) ∈ Ĥ_{b,0}(τ),

then by Theorem 3.5 and ϕ₁, ϕ₂ ∈ D(H_b(τ)),

Σ_{j=1}^{2} c_j z_j(c₀−1) + Σ_{j=1}^{2} c_{j+2} ϕ_j(c₀−1) = 0,
Σ_{j=1}^{2} c_j z_j(c₀) + Σ_{j=1}^{2} c_{j+2} ϕ_j(c₀) = 0,
c₃(ϕ₁,ϕ₁)(b) + c₄(ϕ₂,ϕ₁)(b) = 0,
c₃(ϕ₁,ϕ₂)(b) + c₄(ϕ₂,ϕ₂)(b) = 0.

This, together with Lemma 2.5 and rank Φ = 2, implies that c_j = 0 (1 ≤ j ≤ 4). Then (z₁,h₁), (z₂,h₂), (ϕ₁,λϕ₁), and (ϕ₂,λϕ₂) form a basis of H_b(τ)/Ĥ_{b,0}(τ). So, (3.24) holds. This completes the proof. □

Using a similar argument to Theorem 3.6, we can get the following result.

Theorem 3.7 Let (z̃_j, h̃_j) ∈ H_a(τ) (j = 1,2) be defined by

z̃₁(c₀−1) = 1, z̃₁(c₀) = 0, z̃₁(t) = 0, a−1 ≤ t ≤ c₀−2,
z̃₂(c₀−1) = 0, z̃₂(c₀) = 1, z̃₂(t) = 0, a−1 ≤ t ≤ c₀−2.
(3.26)

Then the following results hold:

  1. (1)

    In the case of d_a = 1, for any given (x,f) ∈ H_a(τ), there exist unique (ỹ₀, f̃₀) ∈ Ĥ_{a,0}(τ) and c̃₁, c̃₂ ∈ ℂ such that

    x(t) = ỹ₀(t) + c̃₁z̃₁(t) + c̃₂z̃₂(t), a−1 ≤ t ≤ c₀.
    (3.27)
  2. (2)

    In the case of d_a = 2, let ϕ̃₁ and ϕ̃₂ be two linearly independent solutions of Eq. (1.1) restricted to I₁. Then ϕ̃₁ and ϕ̃₂ are in l_w²(I₁), and for any given (x,f) ∈ H_a(τ), there exist unique (ỹ₀, f̃₀) ∈ Ĥ_{a,0}(τ) and c̃_j, d̃_j ∈ ℂ (j = 1,2) such that

    x(t) = ỹ₀(t) + c̃₁z̃₁(t) + c̃₂z̃₂(t) + d̃₁ϕ̃₁(t) + d̃₂ϕ̃₂(t), a−1 ≤ t ≤ c₀.
    (3.28)

4 Defect indices of H₀(τ)

The following is the main result of this section.

Theorem 4.1 Let d, d_a, and d_b be the defect indices of H₀(τ), H_{a,0}(τ), and H_{b,0}(τ), respectively. Then d = d_a + d_b − 2.

It is evident that Theorem 4.1 holds in the case that at least one of a and b is finite. So, it suffices to consider the case that a = −∞ and b = +∞. Before proving Theorem 4.1, we prove three lemmas in this case.

Lemma 4.1 d = (1/2) dim D̃, d_b = (1/2) dim D̃_b, and d_a = (1/2) dim D̃_a, where

D̃ := {(y,g) ∈ (H₀(τ))^J : (g,−y) ∈ H₀(τ)*},
D̃_b := {(y,g) ∈ (H_{b,0}(τ))^J : (g,−y) ∈ H_{b,0}(τ)*},
D̃_a := {(y,g) ∈ (H_{a,0}(τ))^J : (g,−y) ∈ H_{a,0}(τ)*}.

Proof It can be easily verified that (H₀(τ))^J = H₀(τ) ⊕ D̃ (orthogonal sum). This gives that d = (1/2) dim D̃. The other two relations are proved similarly. This completes the proof. □

For any given (x,f) ∈ l w 2 (I) × l w 2 (I), denote

x − := { x ( t ) : t ≤ c 0 } , x + := { x ( t ) : t ≥ c 0 − 1 } , f − := { f ( t ) : t ≤ c 0 } , f + := { f ( t ) : t ≥ c 0 − 1 } .

Then we have the following result.

Lemma 4.2 Let H ˜ 0 (τ) be the restriction of H ˆ 0 (τ) defined by

H ˜ 0 (τ)= { ( x , f ) ∈ H ˆ 0 ( τ ) : x ( c 0 − 1 ) = x ( c 0 ) = 0 } .

Then

( H ˜ 0 ( τ ) ) J = { ( y , g ) ∈ l w 2 ( I ) × l w 2 ( I ) : ( y − , g − ) ∈ ( H a , 0 ( τ ) ) J  and ( y + , g + ) ∈ ( H b , 0 ( τ ) ) J } .
(4.1)

Proof It can be easily verified by Theorem 3.5 that

H ˆ a , 0 ( τ ) = { ( x − , f − ) : ( x , f ) ∈ H ˜ 0 ( τ )  with  x + ( t ) ≡ 0 } , H ˆ b , 0 ( τ ) = { ( x + , f + ) : ( x , f ) ∈ H ˜ 0 ( τ )  with  x − ( t ) ≡ 0 } .
(4.2)

Then it can be verified that

H ˜ 0 ( τ ) = { ( y , g ) ∈ l w 2 ( I ) × l w 2 ( I ) : ( y − , g − ) ∈ H a , 0 ( τ )  and ( y + , g + ) ∈ H b , 0 ( τ ) } .
(4.3)

Relation (4.1) follows from (4.3) and Lemma 2.2. This completes the proof. □

Lemma 4.3 Let d ˜ be the defect index of H ˜ 0 (τ). Then d ˜ = d a + d b .

Proof It can be easily verified that H ˜ 0 (τ) is a closed J-Hermitian operator in l w 2 (I) by the fact that H ˆ 0 (τ) is a closed J-Hermitian operator in l w 2 (I). Set

D a , b = { ( y , g ) ∈ l w 2 ( I ) × l w 2 ( I ) : ( y − , g − ) ∈ D ˜ a  and  ( y + , g + ) ∈ D ˜ b } ,

in which D ˜ a and D ˜ b are given in Lemma 4.1. Now, we prove that D a , b = ( H ˜ 0 ( τ ) ) J ⊖ H ˜ 0 (τ). Let (y,g) ∈ ( H ˜ 0 ( τ ) ) J ⊖ H ˜ 0 (τ). Then, for all (x,f) ∈ H ˜ 0 (τ), (3.15) holds, which together with (4.2) implies that

( g − , − y − ) ∈ H ˆ a , 0 ( τ ) ∗ , ( g + , − y + ) ∈ H ˆ b , 0 ( τ ) ∗ .

Since H a , 0 (τ) is the closure of H ˆ a , 0 (τ) and H b , 0 (τ) is the closure of H ˆ b , 0 (τ), one has (y,g) ∈ D a , b . Conversely, suppose that (y,g) ∈ D a , b . It can be verified by (4.2) that (y,g) ∈ ( H ˜ 0 ( τ ) ) J ⊖ H ˜ 0 (τ). Hence, D a , b = ( H ˜ 0 ( τ ) ) J ⊖ H ˜ 0 (τ). Therefore, d ˜ = (1/2) dim D a , b . It can be easily verified that dim D a , b = dim D ˜ a + dim D ˜ b . So, d ˜ = d a + d b by Lemma 4.1. This completes the proof. □

Proof of Theorem 4.1 Set

H ˜ (τ)= { ( x , f ) ∈ H ( τ ) : x ( c 0 − 1 ) = x ( c 0 ) = 0 } .

There exist ( y 1 , g 1 ), ( y 2 , g 2 ) ∈ H(τ) such that

y 1 ( c 0 − 1 ) = 1 , y 1 ( t ) = 0 , t ≠ c 0 − 1 ; y 2 ( c 0 ) = 1 , y 2 ( t ) = 0 , t ≠ c 0 .

Then ( y j , g j ) ∈ H ˆ 0 (τ) by Theorem 3.5, ( y j , g j ) ∉ H ˜ 0 (τ), and ( y j , g j ) ∉ H ˜ (τ), j=1,2. We claim that

H ˆ 0 (τ)= H ˜ 0 (τ) + ˙ span { ( y 1 , g 1 ) , ( y 2 , g 2 ) } ,
(4.4)
H(τ)= H ˜ (τ) + ˙ span { ( y 1 , g 1 ) , ( y 2 , g 2 ) } .
(4.5)

In fact, for each given (x,f) ∈ H ˆ 0 (τ), the algebraic system

c 1 y 1 ( c 0 − 1 ) + c 2 y 2 ( c 0 − 1 ) = x ( c 0 − 1 ) , c 1 y 1 ( c 0 ) + c 2 y 2 ( c 0 ) = x ( c 0 )

has a unique solution ( c ˜ 1 , c ˜ 2 ) T . Let x ˜ = x − ( c ˜ 1 y 1 + c ˜ 2 y 2 ) and f ˜ = f − ( c ˜ 1 g 1 + c ˜ 2 g 2 ). Then ( x ˜ , f ˜ ) ∈ H ˜ 0 (τ). So, every (x,f) ∈ H ˆ 0 (τ) can be uniquely expressed as the sum of an element of H ˜ 0 (τ) and a linear combination of ( y 1 , g 1 ) and ( y 2 , g 2 ). Therefore, (4.4) holds. Similarly, (4.5) can be proved.

Furthermore, there exist ( x j , f j ) ∈ ( H ˜ 0 ( τ ) ) J , 1 ≤ j ≤ 4, such that

x j ( t ) = 1 for t = c 0 − 1 and x j ( t ) = 0 for t ≠ c 0 − 1 , j = 1 , 2 ; x k ( t ) = 1 for t = c 0 and x k ( t ) = 0 for t ≠ c 0 , k = 3 , 4 ; f 1 ( t ) = − p ( c 0 − 2 ) / w ( c 0 − 2 ) for t = c 0 − 2 , r ( c 0 − 1 ) / w ( c 0 − 1 ) for t = c 0 − 1 , − p ( c 0 − 1 ) / w ( c 0 ) for t = c 0 , and 0 for t ≠ c 0 − 2 , c 0 − 1 , c 0 ; f 2 ( t ) = − p ( c 0 − 2 ) / w ( c 0 − 2 ) for t = c 0 − 2 , r ( c 0 − 1 ) / w ( c 0 − 1 ) for t = c 0 − 1 , and 0 for t ≠ c 0 − 2 , c 0 − 1 ; f 3 ( t ) = − p ( c 0 ) / w ( c 0 + 1 ) for t = c 0 + 1 and 0 for t ≠ c 0 + 1 ; f 4 ( t ) = − p ( c 0 − 1 ) / w ( c 0 − 1 ) for t = c 0 − 1 , − p ( c 0 ) / w ( c 0 + 1 ) for t = c 0 + 1 , and 0 for t ≠ c 0 − 1 , c 0 + 1 ,

where r(t) := p(t) + p(t−1) + q(t). Suppose that there exist c j ∈ C such that ∑ j = 1 4 c j ( x j , f j ) ∈ H ˜ (τ). Then we get from w(t) > 0 for t ∈ I that

∑ j = 1 4 c j x j ( c 0 − 1 ) = ∑ j = 1 4 c j x j ( c 0 ) = 0 ,
(4.6)

which implies that ∑ j = 1 4 c j x j (t) = 0, a − 1 ≤ t ≤ b + 1. Therefore,

∑ j = 1 4 c j f j ( c 0 − 1 ) = ∑ j = 1 4 c j f j ( c 0 ) = 0 .
(4.7)

It can be obtained from (4.6) and (4.7) that c j = 0, 1 ≤ j ≤ 4. So, ( x 1 , f 1 ), …, ( x 4 , f 4 ) are linearly independent (modulo H ˜ (τ)). Further, we claim that

( H ˜ 0 ( τ ) ) J = H ˜ (τ) + ˙ U,
(4.8)

where U = span { ( x 1 , f 1 ) , ( x 2 , f 2 ) , ( x 3 , f 3 ) , ( x 4 , f 4 ) } . In fact, it is evident that H ˜ (τ) + ˙ U ⊂ ( H ˜ 0 ( τ ) ) J . Now, we show ( H ˜ 0 ( τ ) ) J ⊂ H ˜ (τ) + ˙ U. For each given (x,f) ∈ ( H ˜ 0 ( τ ) ) J , the algebraic system

∑ j = 1 4 c j x j ( c 0 − 1 ) = x ( c 0 − 1 ) , ∑ j = 1 4 c j x j ( c 0 ) = x ( c 0 ) , τ ( ∑ j = 1 4 c j x j ) ( c 0 − 1 ) / w ( c 0 − 1 ) − ∑ j = 1 4 c j f j ( c 0 − 1 ) = τ ( x ) ( c 0 − 1 ) / w ( c 0 − 1 ) − f ( c 0 − 1 ) , τ ( ∑ j = 1 4 c j x j ) ( c 0 ) / w ( c 0 ) − ∑ j = 1 4 c j f j ( c 0 ) = τ ( x ) ( c 0 ) / w ( c 0 ) − f ( c 0 )

has a unique solution ( c ˜ 1 , c ˜ 2 , c ˜ 3 , c ˜ 4 ) T . Let x ˜ = x − ∑ j = 1 4 c ˜ j x j and f ˜ = f − ∑ j = 1 4 c ˜ j f j . Then ( x ˜ , f ˜ ) ∈ H ˜ (τ). So, every (x,f) ∈ ( H ˜ 0 ( τ ) ) J can be uniquely expressed as the sum of an element of H ˜ (τ) and a linear combination of ( x 1 , f 1 ), …, ( x 4 , f 4 ). Therefore, ( H ˜ 0 ( τ ) ) J ⊂ H ˜ (τ) + ˙ U and hence (4.8) holds.

Since ( y 1 , g 1 ) and ( y 2 , g 2 ) are linearly independent (modulo H ˜ (τ)), it follows from (4.5) that dimH(τ)/ H ˜ (τ)=2. Further, from (4.8),

dim ( H ˜ 0 ( τ ) ) J / H ˜ (τ)=4.

Then H ˜ (τ) ⊂ H(τ) ⊂ ( H ˜ 0 ( τ ) ) J implies

dim ( H ˜ 0 ( τ ) ) J /H(τ)=2.
(4.9)

Since

H ˜ 0 (τ) ⊂ H ˆ 0 (τ) ⊂ H(τ) ⊂ ( H ˜ 0 ( τ ) ) J ,

we get from (3.13), (4.4), and (4.9) that

d ˜ = (1/2) dim ( H ˜ 0 ( τ ) ) J / H ˜ 0 ( τ ) = (1/2) { dim ( H ˜ 0 ( τ ) ) J / H ( τ ) + dim H ( τ ) / H ˆ 0 ( τ ) + dim H ˆ 0 ( τ ) / H ˜ 0 ( τ ) } = 2 + d ,

which together with Lemma 4.3 implies that d = d a + d b − 2. So, Theorem 4.1 holds. This completes the proof. □

5 J-self-adjoint subspace extensions of H 0 (τ)

By [[26], Theorem 4.3], H 0 (τ) must have J-SSEs since it is J-Hermitian. In this section, we give a complete characterization of all the J-SSEs of H 0 (τ) in terms of boundary conditions. This section consists of two subsections.

5.1 The general case

The discussion is divided into three cases: d=0, d=1, and d=2, which by Theorem 4.1 are equivalent to d a = d b = 1; d a = 1, d b = 2 or d a = 2, d b = 1; and d a = d b = 2, respectively.

The following result can be directly derived from Theorem 2.1 and Theorem 3.3.

Theorem 5.1 In the case of d=0, i.e., d a = d b =1, H 0 (τ) is a J-self-adjoint operator.

Theorem 5.2 In the case of d=1 with d a =2 and d b =1, let ϕ 1 and ϕ 2 be any two linearly independent solutions of (1.1). Then H 1 is a J-SSE of H 0 (τ) (i.e., H 00 (τ)) if and only if there exists a matrix M ∈ C 1 × 2 with M ≠ 0 such that

H 1 = { ( x , f ) ∈ H ( τ ) : M ( ( x , ϕ 1 ) ( a − 1 ) , ( x , ϕ 2 ) ( a − 1 ) ) T = 0 } .
(5.1)

Proof Note that ϕ 1 , ϕ 2 ∈ l w 2 ( I 1 ) by Theorem 3.7 and b = + ∞ in this case.

First, consider the sufficiency. Suppose that M = ( m 1 , m 2 ) ≠ 0. Let u = m 1 ϕ 1 + m 2 ϕ 2 . It is evident that u ∈ D( H a (τ)). Fix any integers a 1 and b 1 with a < a 1 + 1 < c 0 < b 1 − 1. By Remark 3.2, there exists β=(y,g) ∈ H(τ) such that

y(t) = u ( t ) for a − 1 ≤ t ≤ a 1 and y(t) = 0 for t ≥ b 1 .

We claim that β ∉ H 0 (τ). Suppose on the contrary that β ∈ H 0 (τ). Then β ∈ H ˆ 0 (τ). Again by Remark 3.2, there exist ( y j , g j ) ∈ H(τ), j=1,2, such that

y j (t) = ϕ j ( t ) for a − 1 ≤ t ≤ a 1 and y j (t) = 0 for t ≥ b 1 .

So, we get from Lemma 3.4 and β ∈ H ˆ 0 (τ) that

0 = ( [ β : ( y 1 , g 1 ) ] , [ β : ( y 2 , g 2 ) ] ) = M ( ( ϕ j , ϕ k ) ( a − 1 ) ) 2 × 2 ,
(5.2)

which implies that M = 0 since rank ( ( ϕ j , ϕ k ) ( a − 1 ) ) 2 × 2 = 2 by Lemma 2.5 and the proof of Theorem 3.6. This contradicts M ≠ 0. Hence, β ∉ H 0 (τ). Note that [β:β]=0 and d=1. Then, by Theorem 2.1 and Corollary 3.1, the set

H 2 = { F ∈ H ( τ ) : [ F : β ] = 0 }
(5.3)

is a J-SSE of H 0 (τ). On the other hand, for any F=(x,f) ∈ H(τ), by Lemma 3.4 one has

[ ( x , f ) : β ] = − ( x , y ) ( a − 1 ) = − M ( ( x , ϕ 1 ) ( a − 1 ) , ( x , ϕ 2 ) ( a − 1 ) ) T ,

which implies that H 1 = H 2 . The sufficiency is shown.

Next, consider the necessity. Suppose that H 2 is a J-SSE of H 0 (τ). By Theorem 2.1, Corollary 3.1, and d=1, there exists some element β=(y,g) ∈ H(τ) such that β ∉ H 0 (τ), [β:β]=0, and (5.3) holds. By (1) in Theorem 3.6 and (2) in Theorem 3.7, there exist a unique y b , 0 ∈ D( H ˆ b , 0 (τ)) and a unique y a , 0 ∈ D( H ˆ a , 0 (τ)) such that

y ( t ) = y b , 0 ( t ) + c 1 z 1 ( t ) + c 2 z 2 ( t ) , c 0 − 1 ≤ t < + ∞ ; y ( t ) = y a , 0 ( t ) + ∑ j = 1 2 c ˜ j z ˜ j ( t ) + ∑ j = 1 2 d ˜ j ϕ j ( t ) , a − 1 ≤ t ≤ c 0 ,
(5.4)

where c ˜ k , c k , d ˜ k ∈ C and z k , z ˜ k , k=1,2, are defined by (3.19) and (3.26). If d ˜ 1 = d ˜ 2 = 0, then it can be obtained from (5.4), (3.19), (3.26), Corollary 3.1, Lemma 3.4, and Theorem 3.5 that for all (x,f) ∈ ( H 0 ( τ ) ) J there exists ( x ˆ , f ˆ ) ∈ H(τ) such that

[ ( x , f ) : β ] = [ ( x ˆ , f ˆ ) : β ] = ( x ˆ , y ) ( + ∞ ) − ( x ˆ , y ) ( a − 1 ) = 0 .

So, β ∈ H 0 (τ), which contradicts β ∉ H 0 (τ). Therefore, | d ˜ 1 | + | d ˜ 2 | > 0. Set

M:=( d ˜ 1 , d ˜ 2 ).
(5.5)

Then M ≠ 0. Furthermore, for any (x,f) ∈ H(τ), by Lemma 3.4 one has

[ ( x , f ) : β ] = ( x , y ) ( + ∞ ) − ( x , y ) ( a − 1 ) .

It follows from (5.4), (3.19), (3.26), and Theorem 3.5 that

( x , y ) ( + ∞ ) = 0 , ( x , y ) ( a − 1 ) = M ( ( x , ϕ 1 ) ( a − 1 ) , ( x , ϕ 2 ) ( a − 1 ) ) T .

So, H 2 determined by (5.3) can be expressed as (5.1). The necessity is proved. The entire proof is complete. □

By an argument similar to that for Theorem 5.2, one can show the following result.

Theorem 5.3 In the case of d=1 with d a =1 and d b =2, let ϕ 1 and ϕ 2 be any two linearly independent solutions of (1.1). Then H 1 is a J-SSE of H 0 (τ) (i.e., H 00 (τ)) if and only if there exists a matrix N ∈ C 1 × 2 with N ≠ 0 such that

H 1 = { ( x , f ) ∈ H ( τ ) : N ( ( x , ϕ 1 ) ( b ) , ( x , ϕ 2 ) ( b ) ) T = 0 } .

Theorem 5.4 In the case of d=2, let ϕ 1 and ϕ 2 be any two linearly independent solutions of (1.1). Then H 1 is a J-SSE of H 0 (τ) (i.e., H 00 (τ)) if and only if there exist two matrices M, N ∈ C 2 × 2 such that

rank(M,N)=2,MΦ M T =NΦ N T ,
(5.6)
H 1 = { ( x , f ) ∈ H ( τ ) : M ( ( x , ϕ 1 ) ( a − 1 ) , ( x , ϕ 2 ) ( a − 1 ) ) T − N ( ( x , ϕ 1 ) ( b ) , ( x , ϕ 2 ) ( b ) ) T = 0 } ,
(5.7)

where Φ is defined by (3.25).

Proof Because d=2 is equivalent to d a = d b = 2, it follows that ϕ 1 − and ϕ 2 − are in l w 2 ( I 1 ) and ϕ 1 + and ϕ 2 + are in l w 2 ( I 2 ), and hence ϕ 1 and ϕ 2 are in l w 2 (I).

Step 1. Consider the sufficiency. Let M=( m j k ), N=( n j k ), and

u ˜ j = ∑ k = 1 2 m j k ϕ k , u j = ∑ k = 1 2 n j k ϕ k , j = 1 , 2 .

It is evident that u ˜ j , u j ∈ H(τ), j=1,2. Choose any integers a 1 and b 1 with a < a 1 + 1 < c 0 < b 1 − 1 < b. By Lemma 3.3 there exist β j = ( y j , g j ) ∈ H(τ) (j=1,2) such that

y j (t) = u ˜ j ( t ) for a − 1 ≤ t ≤ a 1 and y j (t) = u j ( t ) for b 1 ≤ t ≤ b + 1 .
(5.8)

By Theorem 3.5, rankΦ=2, and rank(M,N)=2, it can be verified that β 1 and β 2 are linearly independent (modulo H 0 (τ)). Furthermore, by Lemmas 2.5 and 3.4, (5.6), and (5.8), we have

( [ β j : β k ] ) 1 ≤ j , k ≤ 2 = NΦ N T − MΦ M T = 0 .

Therefore, by Theorem 2.1 and Corollary 3.1, it can be concluded that

H 2 = { F ∈ H ( τ ) : [ F : β j ] = 0 , j = 1 , 2 }
(5.9)

is a J-SSE of H 0 (τ). For any F = (x,f) ∈ H(τ),

( ( x , y 1 ) ( a − 1 ) , ( x , y 2 ) ( a − 1 ) ) T = M ( ( x , ϕ 1 ) ( a − 1 ) , ( x , ϕ 2 ) ( a − 1 ) ) T , ( ( x , y 1 ) ( b ) , ( x , y 2 ) ( b ) ) T = N ( ( x , ϕ 1 ) ( b ) , ( x , ϕ 2 ) ( b ) ) T .
(5.10)

Lemma 3.4 and (5.10) yield that H 1 = H 2 . The sufficiency is proved.

Step 2. Consider the necessity. Suppose that H 2 is a J-SSE of H 0 (τ). By Theorem 2.1 and Corollary 3.1, there exist two linearly independent (modulo H 0 (τ)) elements β 1 and β 2 in H(τ) such that [ β j : β k ] = 0, j,k=1,2, and (5.9) holds. Note that β j = ( y j , g j ) ∈ H(τ) and hence ( y j − , g j − ) ∈ H a (τ) and ( y j + , g j + ) ∈ H b (τ). By Theorems 3.7 and 3.6, there exist unique y ˜ j 0 ∈ D( H ˆ a , 0 (τ)), y j 0 ∈ D( H ˆ b , 0 (τ)), and c ˜ j k , n ˜ j k , c j k , n j k ∈ C (j,k=1,2) such that

y j ( t ) = y ˜ j 0 ( t ) + ∑ k = 1 2 c ˜ j k z ˜ k ( t ) + ∑ k = 1 2 n ˜ j k ϕ k ( t ) , a − 1 ≤ t ≤ c 0 ; y j ( t ) = y j 0 ( t ) + ∑ k = 1 2 c j k z k ( t ) + ∑ k = 1 2 n j k ϕ k ( t ) , c 0 − 1 ≤ t ≤ b + 1 ,
(5.11)

where z ˜ k and z k are defined by (3.26) and (3.19), respectively. Set

M=( n ˜ j k ),N=( n j k ).
(5.12)

We will show that rank(M,N) = 2. Otherwise, rank(M,N) < 2, and there exist c 1 , c 2 ∈ C with | c 1 | + | c 2 | > 0 such that ( c 1 , c 2 )(M,N) = 0, i.e.,

( c 1 , c 2 )M=( c 1 , c 2 )N=0.
(5.13)

Set β = ( y , g ) = c 1 β 1 + c 2 β 2 . Then β ∈ H(τ), and from (5.13) and Theorem 3.5,

( ( y , ϕ 1 ) ( a − 1 ) , ( y , ϕ 2 ) ( a − 1 ) ) = ( c 1 , c 2 ) M ( ( ϕ j , ϕ k ) ( a − 1 ) ) 2 × 2 = 0 , ( ( y , ϕ 1 ) ( b ) , ( y , ϕ 2 ) ( b ) ) = ( c 1 , c 2 ) N ( ( ϕ j , ϕ k ) ( b ) ) 2 × 2 = 0 .
(5.14)

By Theorems 3.6 and 3.7, for x ∈ D(H(τ)), x + can be uniquely expressed as in (3.24) and x − can be uniquely expressed as in (3.28). So, it follows from (5.14) and Theorem 3.5 that ( y , x )(a−1) = ( y , x )(b) = 0 for all x ∈ D(H(τ)). This, together with Corollary 3.1 and Lemma 3.4, implies that [β : ( H 00 ( τ ) ) J ] = [β : H(τ)] = 0. Hence, β ∈ H 0 (τ). Consequently, β 1 and β 2 are linearly dependent (modulo H 0 (τ)). This is a contradiction. So, rank(M,N) = 2. Further, from [ β j : β k ] = 0, Lemmas 2.5 and 3.4, (5.11), and Theorem 3.5, we get that

So, M and N satisfy the second relation of (5.6).

Finally, for any (x,f)H(τ), it follows from (5.11) and Theorem 3.5 that (5.10) holds with M and N defined by (5.12). So, by Lemma 3.4, H 2 determined by (5.9) can be expressed as (5.7). The necessity is proved. The entire proof is complete. □

5.2 The special cases

In this subsection, we characterize the J-SSEs of H 0 (τ) in the special cases in which one of the two endpoints a and b is finite and in which both a and b are finite.

First, consider the case that a is finite and b = + ∞. By Lemma 3.5, d a = 2 in this case. Let ϕ 1 and ϕ 2 be two linearly independent solutions of (1.1) satisfying

ϕ 1 ( a − 1 ) = 0 , p ( a − 1 ) Δ ϕ 1 ( a − 1 ) = 1 , ϕ 2 ( a − 1 ) = 1 , p ( a − 1 ) Δ ϕ 2 ( a − 1 ) = 0 .
(5.15)

Then ( ( ϕ j , ϕ k ) ( a − 1 ) ) 2 × 2 = J ˆ and hence, by Lemma 2.5, Φ = J ˆ , where Φ is defined by (3.25) and J ˆ = ( 0 , − 1 ; 1 , 0 ) . It can be obtained from (5.15) that

( ( x , ϕ 1 ) ( a − 1 ) , ( x , ϕ 2 ) ( a − 1 ) ) T = ( x ( a − 1 ) , p ( a − 1 ) Δ x ( a − 1 ) ) T .
(5.16)

Then the following result can be directly derived from Theorem 5.2.

Theorem 5.5 In the case that a is finite, b = + ∞, and d=1, H 1 is a J-SSE of H 0 (τ) (i.e., H 00 (τ)) if and only if there exists a matrix M = ( m 1 , m 2 ) ∈ C 1 × 2 with M ≠ 0 such that

H 1 = { ( x , f ) ∈ H ( τ ) : m 1 x ( a − 1 ) + m 2 p ( a − 1 ) Δ x ( a − 1 ) = 0 } .

Furthermore, the following result is a direct consequence of (5.16), Φ= J ˆ , and Theorem 5.4.

Theorem 5.6 In the case that a is finite, b = + ∞, and d=2, let ϕ 1 and ϕ 2 be the solutions of (1.1) satisfying (5.15). Then H 1 is a J-SSE of H 0 (τ) (i.e., H 00 (τ)) if and only if there exist matrices M, N ∈ C 2 × 2 such that

rank ( M , N ) = 2 , M J ˆ M T = N J ˆ N T , H 1 = { ( x , f ) ∈ H ( τ ) : M ( x ( a − 1 ) , p ( a − 1 ) Δ x ( a − 1 ) ) T + N ( ( x , ϕ 1 ) ( + ∞ ) , ( x , ϕ 2 ) ( + ∞ ) ) T = 0 } .

Theorem 5.7 In the case that a and b are finite, H 1 is a J-SSE of H 0 (τ) (i.e., H 00 (τ)) if and only if there exist matrices M 1 , N 1 ∈ C 2 × 2 such that

rank( M 1 , N 1 )=2, M 1 J ˆ M 1 T = N 1 J ˆ N 1 T ,
(5.17)
H 1 = { ( x , f ) ∈ H ( τ ) : M 1 ( x ( a − 1 ) , p ( a − 1 ) Δ x ( a − 1 ) ) T − N 1 ( x ( b ) , p ( b ) Δ x ( b ) ) T = 0 } .
(5.18)

Proof In this case, d a = d b = 2. Let ϕ 1 and ϕ 2 be the solutions of (1.1) satisfying (5.15). Then Φ = J ˆ . By Theorem 5.4, H 1 is a J-SSE of H 0 (τ) if and only if there exist matrices M, N ∈ C 2 × 2 such that (5.6) and (5.7) hold. Set

M 1 = M , N 1 = N P T J ˆ , P = ( ϕ 1 ( b ) , ϕ 2 ( b ) ; p ( b ) Δ ϕ 1 ( b ) , p ( b ) Δ ϕ 2 ( b ) ) .

Then P is invertible and hence rank( M 1 , N 1 ) = rank(M,N). It can be verified that

N 1 J ˆ N 1 T = N J ˆ N T , N 1 ( ( x , ϕ 1 ) ( b ) , ( x , ϕ 2 ) ( b ) ) T = N ( x ( b ) , p ( b ) Δ x ( b ) ) T .

So, (5.6) and (5.7) hold if and only if (5.17) and (5.18) hold by (5.16). This completes the proof. □
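The two algebraic conditions in (5.17) lend themselves to a direct numerical check. The sketch below (Python with NumPy) is an illustration only: the sample matrices are hypothetical, Ĵ is taken as ( 0 , − 1 ; 1 , 0 ), and the statement about coupled conditions rests on the 2 × 2 identity K Ĵ Kᵀ = det(K) Ĵ rather than on anything specific to (1.1).

```python
import numpy as np

J_hat = np.array([[0, -1], [1, 0]], dtype=complex)

def is_jsse_pair(M, N, tol=1e-12):
    """Check the two algebraic conditions of (5.17) for a pair (M1, N1)."""
    full_rank = np.linalg.matrix_rank(np.hstack([M, N])) == 2
    balanced = np.allclose(M @ J_hat @ M.T, N @ J_hat @ N.T, atol=tol)
    return full_rank and balanced

# separated conditions (one at each endpoint): both sides of
# M J M^T = N J N^T vanish, since v J v^T = 0 for any row vector v
M_sep = np.array([[1, 2j], [0, 0]], dtype=complex)
N_sep = np.array([[0, 0], [3, 1 - 1j]], dtype=complex)
print(is_jsse_pair(M_sep, N_sep))                      # True

# coupled conditions (M1 = I, N1 = K): admissible iff det K = 1
K = np.array([[2, 1], [1, 1]], dtype=complex)          # det K = 1
print(is_jsse_pair(np.eye(2, dtype=complex), K))       # True
print(is_jsse_pair(np.eye(2, dtype=complex), 2 * K))   # False: det(2K) = 4
```

The "det K = 1" criterion for the coupled case follows from K Ĵ Kᵀ = det(K) Ĵ; note that, unlike the self-adjoint setting, det K here may be any complex number equal to 1, with no conjugation involved.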

Remark 5.1 Let p and q be real-valued. Then H 0 (τ) is not only J-symmetric but also symmetric. However, the set of all the J-SSEs is in general not equal to the set of all the SSEs (SSE is an abbreviation for self-adjoint subspace extension), except in the case that d=0. For example, let a be finite, b = + ∞, and d=1, and set

H 1 = { ( x , f ) ∈ H ( τ ) : ( 1 + i ) x ( a − 1 ) + p ( a − 1 ) Δ x ( a − 1 ) = 0 } .

Then H 1 is a J-SSE of H 0 (τ) by Theorem 5.5. However, by Lemma 2.2, it can be verified that H 1 is not an SSE of H 0 (τ).
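The difference in Remark 5.1 is visible already at the level of the boundary matrix: the J-symmetric condition constrains M Ĵ Mᵀ (plain transpose), whereas the symmetric one constrains M Ĵ M* (conjugate transpose). A minimal numerical sketch, assuming the normalization p(a−1) = 1 (so M = (1+i, 1)) and assuming the conjugate-transpose test M Ĵ M* = 0 as the self-adjointness criterion for a single separated condition:

```python
import numpy as np

J_hat = np.array([[0, -1], [1, 0]], dtype=complex)

# boundary matrix from Remark 5.1 with p(a-1) = 1 (assumed normalization)
M = np.array([[1 + 1j, 1]], dtype=complex)

j_sym = M @ J_hat @ M.T          # J-symmetric test: plain transpose
sym = M @ J_hat @ M.conj().T     # symmetric test: conjugate transpose

print(abs(j_sym.item()))   # 0.0: every nonzero 1x2 M passes, so H_1 is a J-SSE
print(abs(sym.item()))     # 2.0: M fails the self-adjointness test, so H_1 is not an SSE
```

For a 1 × 2 row M = (m₁, m₂) one always has M Ĵ Mᵀ = m₂m₁ − m₁m₂ = 0, which is why the d = 1 J-SSE characterization requires only M ≠ 0, while M Ĵ M* = 2i Im(m₂ m̄₁) vanishes only for proportionally real coefficients.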

6 J-self-adjoint operator extensions of H 0 (τ)

In this section, we discuss the characterization of all the J-self-adjoint operator extensions of H 0 (τ) (i.e., H 00 (τ)).

It is evident that each J-self-adjoint operator extension (briefly, J-SOE) of H 0 (τ) must be its J-SSE. So, the J-SSEs of H 0 (τ) characterized in Section 5 contain all the J-SOEs of H 0 (τ). With arguments similar to [[23], Section 6], we can get the results for the three cases that a = − ∞ and b = + ∞, that a is finite and b = + ∞, and that both a and b are finite.

Theorem 6.1 In the case that a = − ∞ and b = + ∞, each J-SSE of H 0 (τ) (i.e., H 00 (τ)) in Theorems 5.1-5.4 is its J-SOE.

Theorem 6.2 In the case that a is finite and b = + ∞, a J-SSE of H 0 (τ) (i.e., H 00 (τ)) in Theorems 5.5 and 5.6 is its J-SOE if and only if the matrix M in Theorems 5.5 and 5.6 satisfies

M ( 1 , p ( a − 1 ) ) T ≠ 0 .

Theorem 6.3 In the case that both a and b are finite, a J-SSE of H 0 (τ) (i.e., H 00 (τ)) in Theorem 5.7 is its J-SOE if and only if the matrices M 1 and N 1 in Theorem 5.7 satisfy

rank ( M 1 ( 1 , p ( a − 1 ) ) T , N 1 ( 0 , p ( b ) ) T ) = 2 .
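The rank condition of Theorem 6.3 can be evaluated mechanically. In the sketch below the column vectors (1, p(a−1))ᵀ and (0, p(b))ᵀ are taken as printed in the theorem, the values p(a−1) = p(b) = 1 are hypothetical, and the two separated boundary-condition pairs are illustrative choices rather than examples taken from the paper:

```python
import numpy as np

def is_jsoe(M, N, pa, pb):
    # rank condition of Theorem 6.3 on the columns M (1, pa)^T and N (0, pb)^T
    cols = np.column_stack([M @ np.array([1, pa]), N @ np.array([0, pb])])
    return np.linalg.matrix_rank(cols) == 2

pa = pb = 1.0   # sample values of p(a-1) and p(b)

M_dd = np.array([[1, 0], [0, 0]], dtype=complex)   # x(a-1) = 0
N_dd = np.array([[0, 0], [1, 0]], dtype=complex)   # x(b) = 0
N_dn = np.array([[0, 0], [0, 1]], dtype=complex)   # p(b) Delta x(b) = 0

print(is_jsoe(M_dd, N_dd, pa, pb))   # False: the rank condition fails
print(is_jsoe(M_dd, N_dn, pa, pb))   # True
```

Both sample pairs also satisfy the J-SSE conditions of Theorem 5.7, so by the theorem the first yields a J-SSE that is not an operator while the second yields a J-SOE.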

7 Examples for J-self-adjoint subspace extensions

In this section, we give three examples for J-self-adjoint subspace extensions.

Let T be a subspace in X 2 . The set

Γ ( T ) := { λ ∈ C :  there exists  c ( λ ) > 0  such that ‖ f − λ x ‖ ≥ c ( λ ) ‖ x ‖  for all  ( x , f ) ∈ T }

is called the regularity field of T. First, we give a result on the regularity field Γ( H 0 (τ)).

Lemma 7.1 Assume that a is finite. If, for some λ 0 ∈ C, (1.1) has two linearly independent solutions in l w 2 (I), then λ 0 ∈ Γ( H 0 (τ)) and consequently Γ( H 0 (τ)) = C.

Proof By Lemma 2.5, let y 1 and y 2 be two linearly independent solutions of (1.1) such that ( y 1 , y 2 )(t) = 1. For z ∈ l w 2 (I), set

R λ (z)(t) := ∑ j = a t − 1 ( y 1 ( t ) y 2 ( j ) − y 1 ( j ) y 2 ( t ) ) w(j) z(j) , t ∈ { t } t = a − 1 b ,

where ∑ j = a a − 2 = ∑ j = a a − 1 := 0. Then it holds that

( τ − λ w ( t ) ) R λ (z)(t) = w(t) z(t) , t ∈ I .
(7.1)

Further, it can be concluded that

‖ R λ ( z ) ‖ ≤ 2 ‖ y 1 ‖ ‖ y 2 ‖ ‖ z ‖ .
(7.2)

So, R λ is a bounded operator from l w 2 (I) into D(H(τ)). In addition, ( H 00 ( τ ) − λ ) − 1 is an operator in l w 2 (I). Let x ∈ D( H 00 (τ)) and take z = (1/w) ( τ − λ w ) x. Then ( τ − λ w )( x − R λ (z) ) = 0, i.e., x − R λ (z) is a solution of ( τ − λ w ) y = 0. Since

x(a−1) − R λ (z)(a−1) = x(a) − R λ (z)(a) = 0,

one has that x ≡ R λ (z) on I. This yields that the operator ( H 00 ( τ ) − λ ) − 1 is a restriction of R λ . Then ( H 00 ( τ ) − λ ) − 1 is a bounded operator and hence λ ∈ Γ( H 00 (τ)). So, λ ∈ Γ( H 0 (τ)) by Γ( H 0 (τ)) = Γ( H 00 (τ)), and hence Γ( H 0 (τ)) = C by Lemma 3.7. This completes the proof. □
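The variation-of-parameters operator R_λ in the proof can be tested numerically on a truncated interval. The sketch below is an illustration under stated assumptions: the truncation T, the constant complex potential q ≡ i/2, and λ = 1 are arbitrary choices, and the bracket ( y 1 , y 2 )(t) is assumed to be the Wronskian p(t)( y 1 (t) y 2 (t+1) − y 1 (t+1) y 2 (t) ).

```python
import numpy as np

T = 15          # truncation of the interval (arbitrary)
lam = 1.0       # spectral parameter (arbitrary)
p = np.ones(T + 2)
w = np.ones(T + 2)
q = np.full(T + 2, 0.5j)   # complex q: tau is J-symmetric but not symmetric

def tau(x, t):
    # tau(x)(t) = -nabla(p(t) Delta x(t)) + q(t) x(t)
    return -(p[t] * (x[t + 1] - x[t]) - p[t - 1] * (x[t] - x[t - 1])) + q[t] * x[t]

# two solutions of (tau - lam w) y = 0 with Wronskian normalized to 1
y1 = np.zeros(T + 2, dtype=complex); y1[0], y1[1] = 1, 0
y2 = np.zeros(T + 2, dtype=complex); y2[0], y2[1] = 0, 1
for y in (y1, y2):
    for t in range(1, T + 1):
        y[t + 1] = ((p[t] + p[t - 1] + q[t] - lam * w[t]) * y[t]
                    - p[t - 1] * y[t - 1]) / p[t]

rng = np.random.default_rng(0)
z = rng.standard_normal(T + 2) + 1j * rng.standard_normal(T + 2)

# R_lam(z)(t) = sum_{j < t} (y1(t) y2(j) - y1(j) y2(t)) w(j) z(j)
u = np.array([sum((y1[t] * y2[j] - y1[j] * y2[t]) * w[j] * z[j]
                  for j in range(t)) for t in range(T + 2)])

# check (tau - lam w) R_lam(z) = w z at the interior points, as in (7.1)
res = max(abs(tau(u, t) - lam * w[t] * u[t] - w[t] * z[t]) for t in range(1, T + 1))
print(res < 1e-8)   # True
```

The residual vanishes because the kernel solves the homogeneous recurrence in t, leaving only the boundary term p(t)( y 1 (t) y 2 (t+1) − y 1 (t+1) y 2 (t) ) w(t) z(t) = w(t) z(t), exactly as in the proof above.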

If a is finite, then d=1 or 2 by Lemma 3.6. Further, by Lemma 7.1 the following result can be proved.

Theorem 7.1 Assume that a is finite. Then d=2 if and only if (1.1) has two linearly independent solutions in l w 2 (I); consequently, d=1 if and only if (1.1) has at most one linearly independent solution in l w 2 (I).

Proof Let d=2. It can be verified by Lemmas 3.5 and 3.7 that (1.1) has two linearly independent solutions in l w 2 (I). Conversely, suppose that there are two linearly independent solutions of (1.1) in l w 2 (I). Then Γ( H 0 (τ)) ≠ ∅ by Lemma 7.1, and then, by [[26], Theorem 3.8], d=2. This completes the proof. □

Remark 7.1 In [17], Brown et al. developed a spectral theory for second-order differential operators with complex coefficients and one regular endpoint. They classified the corresponding formally second-order differential expressions into three limit cases at the singular endpoint: Cases I, II, and III. In [32], (1.1) was analogously classified into three limit cases at b: Cases I, II, and III. By Lemma 7.1, d b =1 if and only if (1.1) is in the limit Case I at b. Hence, (1.1) is not in the limit Case I at b if and only if d b =2. Further, d= d b by Theorem 4.1 if a is finite.

Finally, we give three examples.

Example 7.1 Consider (1.1) on I = { t } t = 0 + ∞ with p(t) = w(t) = 1 and q(t) = t 2 . Note that (1.1) is both J-symmetric and symmetric in this case. By [[32], Corollary 3.2], equation (1.1) is in the limit Case I at t = + ∞. So, d=1 by Theorem 7.1. By Theorem 5.5, it can be concluded that (1.1) with the boundary condition

m 1 x(−1) + m 2 Δx(−1) = 0 , ( m 1 , m 2 ) ≠ 0 ,
(7.3)

determines all the J-SSEs of H 0 (τ). In addition, (1.1) with the boundary condition

cos α x(−1) + sin α Δx(−1) = 0 , α ∈ (0,π] ,

determines all the J-SSEs of H 0 (τ) which are also SSEs of H 0 (τ). In particular, (7.3) contains the Dirichlet condition x(−1) = 0 and the Neumann condition Δx(−1) = 0.
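The limit Case I behaviour in Example 7.1 can also be seen by iterating the recurrence directly: with p = w = 1 and q(t) = t², equation (1.1) reads x(t+1) = (2 + t² − λ) x(t) − x(t−1). A minimal sketch (the truncation T = 30 and λ = 0 are arbitrary choices):

```python
import numpy as np

lam = 0.0   # arbitrary fixed spectral parameter
T = 30

def solve(x0, x1):
    # iterate x(t+1) = (2 + t^2 - lam) x(t) - x(t-1), the recurrence form of
    # (1.1) for p = w = 1 and q(t) = t^2
    x = [x0, x1]
    for t in range(1, T):
        x.append((2 + t ** 2 - lam) * x[-1] - x[-2])
    return np.array(x)

xA = solve(1.0, 0.0)   # two linearly independent solutions
xB = solve(0.0, 1.0)

# generic solutions blow up far too fast to be square-summable, so at most one
# solution of (1.1) can lie in l_w^2: limit Case I at +infinity, hence d = 1
print(abs(xA[-1]) > 1e20 and abs(xB[-1]) > 1e20)   # True
```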

Example 7.2 Consider (1.1) on I = { t } t = 0 + ∞ with p(t) = w(t) = 1 and q(t) = t 2 + i q 2 (t), where q 2 is real-valued. By [[32], Corollary 3.2], equation (1.1) is in the limit Case I at t = + ∞. So, d=1 by Theorem 7.1. By Theorem 5.5, it can be concluded that (1.1) with the boundary condition (7.3) determines all the J-SSEs of H 0 (τ). As in Example 7.1, the condition x(−1) = 0 and the condition Δx(−1) = 0 are called the Dirichlet and Neumann boundary conditions, respectively.

Example 7.3 Consider (1.1) on I = { t } t = 0 + ∞ with p(t) = ( t + 1 ) 4 and q(t) = μ, where μ is a constant in the open upper half-plane, and w(t) = ( t + 1 ) 2 . By [[32], Example 3.2], equation (1.1) is not in the limit Case I at t = + ∞. So, d=2 by Theorem 7.1. Let ϕ 1 and ϕ 2 be solutions of (1.1) satisfying (5.15). By Theorem 5.6, (1.1) with the boundary conditions

i x(−1) + (x, ϕ 2 )(+∞) = 0 , i p(−1) Δx(−1) + (x, ϕ 1 )(+∞) = 0

determines a J-SSE of H 0 (τ). In addition, (1.1) with the boundary conditions

a x ( − 1 ) + b p ( − 1 ) Δ x ( − 1 ) = 0 , c ( x , ϕ 1 ) ( + ∞ ) + d ( x , ϕ 2 ) ( + ∞ ) = 0 , ( a , b ) ≠ 0 , ( c , d ) ≠ 0 ,

determines the J-SSEs of H 0 (τ) with separated boundary conditions.
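As a sanity check, the matrices read off from the first (coupled) pair of boundary conditions in Example 7.3 can be tested against the algebraic conditions of Theorem 5.6. The sketch assumes Ĵ = ( 0 , − 1 ; 1 , 0 ) and writes the conditions in the form M (x(−1), p(−1)Δx(−1))ᵀ + N ((x,ϕ₁)(+∞), (x,ϕ₂)(+∞))ᵀ = 0:

```python
import numpy as np

J_hat = np.array([[0, -1], [1, 0]], dtype=complex)

# i x(-1) + (x, phi_2)(+inf) = 0  and  i p(-1) Dx(-1) + (x, phi_1)(+inf) = 0
M = 1j * np.eye(2, dtype=complex)
N = np.array([[0, 1], [1, 0]], dtype=complex)

print(np.linalg.matrix_rank(np.hstack([M, N])) == 2)   # True
print(np.allclose(M @ J_hat @ M.T, N @ J_hat @ N.T))   # True: both sides equal -J_hat
```

Here M Ĵ Mᵀ = i² Ĵ = −Ĵ and N Ĵ Nᵀ = −Ĵ, so the pair passes; note that an Hermitian (conjugate-transpose) version of the test would fail, which is consistent with (1.1) being J-symmetric but not symmetric for non-real μ.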

Remark 7.2 By Theorem 6.2, all the J-SSEs determined in terms of the Dirichlet or Neumann boundary conditions in Examples 7.1 and 7.2 are J-SOEs of H 0 (τ).

References

  1. Glazman IM: An analogue of the extension theory of Hermitian operators and a non-symmetric one-dimensional boundary-value problem on a half-axis. Dokl. Akad. Nauk SSSR 1957, 115: 214-216.