Open Access

Direct and inverse spectral problems for discrete Sturm-Liouville problem with generalized function potential

Advances in Difference Equations20162016:172

https://doi.org/10.1186/s13662-016-0898-z

Received: 14 March 2016

Accepted: 14 June 2016

Published: 30 June 2016

Abstract

In this work, we study the inverse problem, from the generalized spectral function (GSF), for difference equations constructed from Sturm-Liouville equations with a generalized function potential. We give formulas that recover the matrix J, which need not be symmetric, from the GSF, and we study the structure of the GSF.

Keywords

difference equations; inverse problems; generalized spectral function

MSC

39A12; 34A55; 34L15

1 Introduction

In this paper we deal with the \(N\times N\) tridiagonal matrix
$$ J=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} b_{0} & a_{0} & 0 & \cdots & 0 & 0 & \cdots & 0 & 0 & 0 \\ a_{0} & b_{1} & a_{1} & \cdots & 0 & 0 & \cdots & 0 & 0 & 0 \\ 0 & a_{1} & b_{2} & \cdots & 0 & 0 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & b_{M} & a_{M} & \cdots & 0 & 0 & 0 \\ 0 & 0 & 0 & \cdots & c_{M} & d_{M+1} & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 0 & \cdots & d_{N-3} & c_{N-3} & 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 & \cdots & c_{N-3} & d_{N-2} & c_{N-2} \\ 0 & 0 & 0 & \cdots & 0 & 0 & \cdots & 0 & c_{N-2} & d_{N-1}\end{array}\displaystyle \right ], $$
(1.1)
where \(a_{n}, b_{n}\in\mathbb{C}\), \(a_{n}\neq 0\) and
$$\begin{aligned}& c_{n}=a_{n}/\alpha, \quad n\in \{M,M+1,\ldots,N-2\}, \\& d_{n}=b_{n}/\alpha, \quad n\in \{M+1,M+2,\ldots,N-1 \}, \end{aligned}$$
and \(\alpha \neq 1\) is a positive real number.

The definitions and some properties of the GSF are given in [1–6]. The inverse problem for infinite Jacobi matrices from the GSF was investigated in [3–6]; see also [7]. The inverse spectral problem for an \(N\times N\) tridiagonal symmetric matrix was studied in [8], and the inverse spectral problem with the spectral parameter in the initial conditions was studied in [9]. The goal of this paper is to study the almost symmetric matrix J of the form (1.1). Almost symmetric here means that the entries above and below the main diagonal coincide except for the entries \(a_{M}\) and \(c_{M}\).

The eigenvalue problem we consider in this paper is \(Jy=\lambda y\), where \(y= \{ y_{n} \} _{n=0}^{N-1}\) is a column vector. There exists a relation between this matrix eigenvalue problem and the second order linear difference equation
$$ \begin{aligned} & a_{n-1}y_{n-1}+b_{n}y_{n}+a_{n}y_{n+1}= \lambda \rho _{n}y_{n},\quad n\in \{ 0,1,\ldots,M,\ldots,N-1 \} , \\ & a_{-1}=c_{N-1}=1,\end{aligned} $$
(1.2)
for \(\{ y_{n} \} _{n=-1}^{N}\), with the boundary conditions
$$ y_{-1}=y_{N}=0, $$
(1.3)
where \(\rho _{n}\) is a constant defined by
$$ \rho _{n}=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} 1, & 0\leq n\leq M, \\ \alpha, & M< n\leq N-1,\end{array}\displaystyle \right . \quad \alpha >0,\ \alpha \neq 1. $$
(1.4)
The matrix eigenvalue problem and problem (1.2), (1.3) are equivalent. The problem (1.2), (1.3) is a discrete analogue of the Sturm-Liouville problem with discontinuous coefficients
$$\begin{aligned}& \frac{d}{dx} \biggl[ p(x)\frac{d}{dx}y(x) \biggr] +q(x)y(x)=\lambda \rho (x)y(x), \quad x \in [ a,b ] , \end{aligned}$$
(1.5)
$$\begin{aligned}& y(a)=y(b)=0, \end{aligned}$$
(1.6)
where \(\rho (x)\) is a piecewise function defined by
$$ \rho (x)=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} 1, & a\leq x\leq c, \\ \alpha ^{2}, & c< x\leq b,\end{array}\displaystyle \right . \quad \alpha ^{2}\neq 1, $$
\([ a,b ] \) is a finite interval, α is a real number, and c is a discontinuity point in \([ a,b ] \). On eigenvalues and eigenfunctions of such an equation, see [10], and the inverse problem for this kind equation has been investigated in [11].

2 Generalized spectral function

In this section, we find the characteristic polynomial of the matrix J and then establish the existence of a linear functional from \(\mathbb{C}_{2N} [ \lambda ] \), the ring of all polynomials in λ of degree at most 2N with complex coefficients, to \(\mathbb{C}\). Let us denote by \(\{ P_{n}(\lambda ) \} _{n=-1}^{N}\) the solution of equation (1.2) together with the initial data
$$ y_{-1}=0,\qquad y_{0}=1. $$
(2.1)
Starting with (2.1), we can derive from equation (1.2) iteratively the polynomials \(P_{n}(\lambda )\) of degree n, for \(n=\overline{1,N}\). In this way we obtain the unique solution \(\{ P_{n}(\lambda ) \} _{n=0}^{N}\) of the following recurrence relations:
$$ \begin{aligned} & b_{0}P_{0}(\lambda )+a_{0}P_{1}(\lambda )=\lambda P_{0}(\lambda ), \\ & a_{n-1}P_{n-1}(\lambda )+b_{n}P_{n}(\lambda )+a_{n}P_{n+1}(\lambda )=\lambda P_{n}(\lambda ), \quad n\in \{ 1,2,\ldots,M \} , \\ & c_{n-1}P_{n-1}(\lambda )+d_{n}P_{n}(\lambda )+c_{n}P_{n+1}(\lambda )=\lambda P_{n}(\lambda ), \quad n\in \{ M+1,\ldots,N-1 \} , \qquad c_{N-1}=1,\end{aligned} $$
(2.2)
subject to the initial condition
$$ P_{0}(\lambda )=1. $$
(2.3)

Lemma 1

The following equality holds:
$$ \det (J-\lambda I)=(-1)^{N}a_{0}a_{1} \cdots a_{M}c_{M+1} \cdots c_{N-1}P_{N}( \lambda ). $$
(2.4)
Therefore, the roots of the polynomial \(P_{N}(\lambda )\) and the eigenvalues of the matrix J are coincident.

Proof

We will consider the proof in three cases. For each \(n=\overline{1,M}\), let us define the determinant \(\bigtriangleup _{n}(\lambda )\) as follows:
$$ \bigtriangleup _{n}(\lambda )=\left \vert \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} b_{0}-\lambda & a_{0} & 0 & \cdots & 0 & 0 & 0 \\ a_{0} & b_{1}-\lambda & a_{1} & \cdots & 0 & 0 & 0 \\ 0 & a_{1} & b_{2}-\lambda & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & b_{n-3}-\lambda & a_{n-3} & 0 \\ 0 & 0 & 0 & \cdots & a_{n-3} & b_{n-2}-\lambda & a_{n-2} \\ 0 & 0 & 0 & \cdots & 0 & a_{n-2} & b_{n-1}-\lambda \end{array}\displaystyle \right \vert . $$
Then, expanding the determinant \(\bigtriangleup _{n+1}(\lambda )\) along the elements of its last row, we obtain
$$ \bigtriangleup _{n+1}(\lambda )=(b_{n}-\lambda )\bigtriangleup _{n}(\lambda )-a_{n-1}^{2} \bigtriangleup _{n-1}(\lambda ), \quad n=\overline{1,M}, \bigtriangleup _{0}(\lambda )=1. $$
(2.5)
Now for \(n=\overline{M+2,N}\), let us define \(\bigtriangleup _{n}(\lambda )\) as follows:
$$ \bigtriangleup _{n}(\lambda )=\left \vert \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} b_{0}-\lambda & a_{0} & 0 & \cdots & 0 & 0 & \cdots & 0 & 0 & 0 \\ a_{0} & b_{1}-\lambda & a_{1} & \cdots & 0 & 0 & \cdots & 0 & 0 & 0 \\ 0 & a_{1} & b_{2}-\lambda & \cdots & 0 & 0 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & b_{M}-\lambda & a_{M} & \cdots & 0 & 0 & 0 \\ 0 & 0 & 0 & \cdots & c_{M} & d_{M+1}-\lambda & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 0 & \cdots & d_{n-3}-\lambda & c_{n-3} & 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 & \cdots & c_{n-3} & d_{n-2}-\lambda & c_{n-2} \\ 0 & 0 & 0 & \cdots & 0 & 0 & \cdots & 0 & c_{n-2} & d_{n-1}-\lambda \end{array}\displaystyle \right \vert . $$
By using the same method, we get
$$ \bigtriangleup _{n+1}(\lambda )=(d_{n}-\lambda ) \bigtriangleup _{n}(\lambda )-c_{n-1}^{2} \bigtriangleup _{n-1}(\lambda ), $$
(2.6)
and finally, for \(n=M+1\), we find
$$ \bigtriangleup _{M+2}(\lambda )=(d_{M+1}-\lambda ) \bigtriangleup _{M+1}(\lambda )-a_{M}c_{M} \bigtriangleup _{M}(\lambda ). $$
(2.7)
Dividing (2.5) by the product \(a_{0}\cdots a_{n-1}\), and (2.6), (2.7) by the product \(a_{0}\cdots a_{M}c_{M+1}\cdots c_{n-1}\), we can easily show that the sequence
$$\begin{aligned}& h_{-1} = 0, \qquad h_{0}=1, \qquad h_{n}=(-1)^{n} ( a_{0}\cdots a_{n-1} ) ^{-1}\bigtriangleup _{n}(\lambda ),\quad n=\overline{1,M+1}, \\& h_{n} =(-1)^{n} ( a_{0}\cdots a_{M}c_{M+1} \cdots c_{n-1} ) ^{-1}\bigtriangleup _{n}(\lambda ),\quad n=\overline{M+2,N}, \end{aligned}$$
satisfies (1.2), (2.1). Hence \(h_{n}\) coincides with the unique solution of (1.2), (2.1), that is, \(h_{n}=P_{n}(\lambda )\) for \(n=\overline{0,N}\). Since \(\bigtriangleup _{N}(\lambda )=\det (J-\lambda I)\), combining (2.5), (2.6), and (2.7) we obtain (2.4). □
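Relation (2.4) can be spot-checked numerically. In the following sketch (all names are ours; the entries are those of Example 1 below with \(N=3\), \(M=1\), \(\alpha =2\)) we build J, evaluate \(P_{N}\) by the recurrence (2.2), and compare \(\det (J-\lambda I)\) with \((-1)^{N}a_{0}\cdots a_{M}c_{M+1}\cdots c_{N-1}P_{N}(\lambda )\) at a few sample points, recalling the convention \(c_{N-1}=1\):

```python
import math
import numpy as np

alpha, M = 2.0, 1
a = [math.sqrt(2 / 3), math.sqrt(alpha / 3)]   # a_0, a_1 (entries of Example 1)
b = [1.0, 1.0, alpha]                          # b_2 = alpha * d_2 with d_2 = 1
N = len(b)
c = [x / alpha for x in a]                     # c_n = a_n / alpha
d = [x / alpha for x in b]                     # d_n = b_n / alpha

# J as in (1.1)
J = np.zeros((N, N))
for n in range(N):
    J[n, n] = b[n] if n <= M else d[n]
for n in range(N - 1):
    J[n, n + 1] = a[n] if n <= M else c[n]
    J[n + 1, n] = a[n] if n + 1 <= M else c[n]

def P_N(lam):
    # P_{-1} = 0, P_0 = 1, recurrence (2.2) with c_{N-1} = 1
    pm1, p = 0.0, 1.0
    for n in range(N):
        if n <= M:
            lo, hi = (a[n - 1] if n > 0 else 1.0), a[n]
            pm1, p = p, ((lam - b[n]) * p - lo * pm1) / hi
        else:
            lo, hi = c[n - 1], (c[n] if n < N - 1 else 1.0)
            pm1, p = p, ((lam - d[n]) * p - lo * pm1) / hi
    return p

# prefactor (-1)^N a_0...a_M c_{M+1}...c_{N-1}, with c_{N-1} = 1 omitted
pref = (-1) ** N
for n in range(M + 1):
    pref *= a[n]
for n in range(M + 1, N - 1):
    pref *= c[n]

for lam in (0.3, 0.5, 1.7):
    assert abs(np.linalg.det(J - lam * np.eye(N)) - pref * P_N(lam)) < 1e-9
```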

Theorem 1

There exists a unique linear functional \(\Omega:\mathbb{C}_{2N} [ \lambda ] \rightarrow \mathbb{C}\) such that the following relations hold:
$$\begin{aligned}& \Omega \bigl( P_{m} ( \lambda ) P_{n} ( \lambda ) \bigr) =\frac{\delta _{mn}}{\eta },\quad m,n\in \{ 0,1,\ldots,M,\ldots,N-1 \} , \end{aligned}$$
(2.8)
$$\begin{aligned}& \Omega \bigl( P_{m} ( \lambda ) P_{N} ( \lambda ) \bigr) =0,\quad m\in \{ 0,1,\ldots,M,\ldots,N \} , \end{aligned}$$
(2.9)
where \(\delta _{mn}\) is the Kronecker delta, η is defined by
$$ \eta =\left\{ \textstyle\begin{array}{@{}l@{\quad}l} 1, & m,n\leq M, \\ \alpha, & m,n>M,\end{array}\displaystyle \right. $$
(2.10)
and \(\Omega ( P ( \lambda ) ) \) denotes the value of Ω at a polynomial \(P ( \lambda ) \).

Proof

In order to show the uniqueness of Ω, assume that there exists a linear functional Ω satisfying (2.8) and (2.9). Let us define \(2N+1\) polynomials as follows:
$$ P_{n} ( \lambda ) \quad ( n=\overline{0,N-1} ),\qquad P_{m} ( \lambda ) P_{N} ( \lambda ) \quad ( m=\overline{0,N} ). $$
(2.11)
It is clear that this set of polynomials is a basis for the linear space \(\mathbb{C}_{2N} [ \lambda ]\): the polynomials defined by (2.11) are linearly independent, and their number equals the dimension of \(\mathbb{C}_{2N} [ \lambda ] \). On the other hand, by (2.8) and (2.9), the values of Ω on the polynomials in (2.11) are determined:
$$\begin{aligned}& \Omega \bigl( P_{n} ( \lambda ) \bigr) = \frac{\delta _{0n}}{\eta },\quad n\in \{ 0,1,\ldots,M,\ldots,N-1 \} , \end{aligned}$$
(2.12)
$$\begin{aligned}& \Omega \bigl( P_{m} ( \lambda ) P_{N} ( \lambda ) \bigr) =0,\quad m\in \{ 0,1,\ldots,N \}. \end{aligned}$$
(2.13)
Therefore, by linearity, the functional Ω defined on \(\mathbb{C}_{2N} [ \lambda ] \) is unique.
To show the existence of Ω, we define it on the polynomials (2.11) by (2.12), (2.13) and then extend it to the whole space \(\mathbb{C}_{2N} [ \lambda ] \) by linearity. It can be shown that the resulting functional Ω satisfies (2.8), (2.9). Denote
$$ \Omega \bigl( P_{m} ( \lambda ) P_{n} ( \lambda ) \bigr) =B_{mn},\quad m,n\in \{ 0,1,\ldots,M,\ldots,N \}. $$
(2.14)
It is clear that \(B_{mn}=B_{nm}\), for \(m, n\in \{ 0,1,\ldots,N \}\). From (2.12) and (2.13), we get
$$\begin{aligned}& B_{m0}=B_{0m}=\delta _{m0}, \quad m \in \{ 0,1,\ldots,M \}, \end{aligned}$$
(2.15)
$$\begin{aligned}& B_{m0}=B_{0m}=\frac{\delta _{m0}}{\alpha },\quad m\in \{ M+1,\ldots,N \} , \end{aligned}$$
(2.16)
$$\begin{aligned}& B_{mN}=B_{Nm}=0, \quad m\in \{ 0,1,\ldots,N \}. \end{aligned}$$
(2.17)
Since \(\{ P_{n} ( \lambda ) \} _{0}^{N}\) is the solution of (2.2), we derive from the first equation of (2.2), using (2.3),
$$ \lambda =b_{0}+a_{0}P_{1} ( \lambda ). $$
Inserting this into the remaining equations in (2.2), we get
$$\begin{aligned}& a_{n-1}P_{n-1}(\lambda )+b_{n}P_{n}(\lambda )+a_{n}P_{n+1}(\lambda )=b_{0}P_{n}( \lambda )+a_{0}P_{1} ( \lambda ) P_{n}(\lambda ), \quad n\in \{ 1,2,\ldots,M \} , \\& c_{n-1}P_{n-1}(\lambda )+d_{n}P_{n}(\lambda )+c_{n}P_{n+1}(\lambda )=b_{0}P_{n}( \lambda )+a_{0}P_{1} ( \lambda ) P_{n}(\lambda ), \quad n\in \{ M+1,\ldots,N-1 \}. \end{aligned}$$
If we apply the linear functional Ω to both sides of the last two equations, by taking into account (2.15), (2.16), and (2.17), we get
$$\begin{aligned}& B_{n1}=B_{1n}=\delta _{n1},\quad n \in \{ 0,1,\ldots,M \} , \end{aligned}$$
(2.18)
$$\begin{aligned}& B_{n1}=B_{1n}=\frac{\delta _{n1}}{\alpha },\quad n\in \{ M+1,\ldots,N \}. \end{aligned}$$
(2.19)
Further, recalling the definition of \(\rho _{n}\) in (1.4), we write
$$\begin{aligned}& a_{m-1}P_{m-1}(\lambda )+b_{m}P_{m}(\lambda )+a_{m}P_{m+1}(\lambda )=\lambda \rho _{m}P_{m}( \lambda ), \quad m\in \{ 1,2,\ldots,M,\ldots,N-1 \} , \\& a_{n-1}P_{n-1}(\lambda )+b_{n}P_{n}(\lambda )+a_{n}P_{n+1}(\lambda )=\lambda \rho _{n}P_{n}( \lambda ),\quad n\in \{ 1,2,\ldots,M,\ldots,N-1 \}. \end{aligned}$$
If the first equality is multiplied by \(P_{n}(\lambda )\) and the second by \(P_{m}(\lambda )\), and each product is rewritten, via \(c_{n}=a_{n}/\alpha \), \(d_{n}=b_{n}/\alpha \), so that its right-hand side becomes \(\lambda P_{m}(\lambda )P_{n}(\lambda )\), then equating the two results we obtain:
for \(m,n\in \{ 1,2,\ldots,M \} \),
$$\begin{aligned}& a_{m-1}P_{m-1}(\lambda )P_{n}(\lambda )+b_{m}P_{m}(\lambda )P_{n}(\lambda )+a_{m}P_{m+1}(\lambda )P_{n}(\lambda ) \\& \quad =a_{n-1}P_{n-1}(\lambda )P_{m}(\lambda )+b_{n}P_{n}(\lambda )P_{m}(\lambda )+a_{n}P_{n+1}(\lambda )P_{m}(\lambda ), \end{aligned}$$
for \(m\in \{ 1,2,\ldots,M \}\), \(n\in \{ M+1,\ldots,N-1 \} \),
$$\begin{aligned}& a_{m-1}P_{m-1}(\lambda )P_{n}(\lambda )+b_{m}P_{m}(\lambda )P_{n}(\lambda )+a_{m}P_{m+1}(\lambda )P_{n}(\lambda ) \\& \quad =c_{n-1}P_{n-1}(\lambda )P_{m}(\lambda )+d_{n}P_{n}(\lambda )P_{m}(\lambda )+c_{n}P_{n+1}(\lambda )P_{m}(\lambda ), \end{aligned}$$
for \(m\in \{ M+1,\ldots,N-1 \}\), \(n\in \{ 1,2,\ldots,M \} \),
$$\begin{aligned}& c_{m-1}P_{m-1}(\lambda )P_{n}(\lambda )+d_{m}P_{m}(\lambda )P_{n}(\lambda )+c_{m}P_{m+1}(\lambda )P_{n}(\lambda ) \\& \quad =a_{n-1}P_{n-1}(\lambda )P_{m}(\lambda )+b_{n}P_{n}(\lambda )P_{m}(\lambda )+a_{n}P_{n+1}(\lambda )P_{m}(\lambda ), \end{aligned}$$
for \(m,n\in \{ M+1,\ldots,N-1 \} \),
$$\begin{aligned}& c_{m-1}P_{m-1}(\lambda )P_{n}(\lambda )+d_{m}P_{m}(\lambda )P_{n}(\lambda )+c_{m}P_{m+1}(\lambda )P_{n}(\lambda ) \\& \quad =c_{n-1}P_{n-1}(\lambda )P_{m}(\lambda )+d_{n}P_{n}(\lambda )P_{m}(\lambda )+c_{n}P_{n+1}(\lambda )P_{m}(\lambda ). \end{aligned}$$

If the functional Ω is applied to both sides of these equations and (2.15)-(2.19) are taken into account, we obtain for \(B_{mn}\) the following boundary value problems:

for \(m,n\in \{ 1,2,\ldots,M \} \),
$$ a_{m-1}B_{m-1,n}+b_{m}B_{mn}+a_{m}B_{m+1,n}=a_{n-1}B_{n-1,m}+b_{n}B_{nm}+a_{n}B_{n+1,m}, $$
(2.20)
for \(m\in \{ 1,2,\ldots,M \}\), \(n\in \{ M+1,\ldots,N-1 \} \),
$$ a_{m-1}B_{m-1,n}+b_{m}B_{mn}+a_{m}B_{m+1,n}=c_{n-1}B_{n-1,m}+d_{n}B_{nm}+c_{n}B_{n+1,m}, $$
(2.21)
for \(m\in \{ M+1,\ldots,N-1 \}\), \(n\in \{ 1,2,\ldots,M \} \),
$$ c_{m-1}B_{m-1,n}+d_{m}B_{mn}+c_{m}B_{m+1,n}=a_{n-1}B_{n-1,m}+b_{n}B_{nm}+a_{n}B_{n+1,m}, $$
(2.22)
for \(m,n\in \{ M+1,\ldots,N-1 \} \),
$$ c_{m-1}B_{m-1,n}+d_{m}B_{mn}+c_{m}B_{m+1,n}=c_{n-1}B_{n-1,m}+d_{n}B_{nm}+c_{n}B_{n+1,m}, $$
(2.23)
for \(n\in \{ 0,1,\ldots,M \} \),
$$ B_{n0}=B_{0n}=\delta _{n0},\qquad B_{n1}=B_{1n}=\delta _{n1},\qquad B_{Nn}=B_{nN}=0, $$
(2.24)
for \(n\in \{ M+1,\ldots,N \}\),
$$ B_{n0}=B_{0n}=\frac{\delta _{n0}}{\alpha },\qquad B_{n1}=B_{1n}=\frac{\delta _{n1}}{\alpha }, \qquad B_{Nn}=B_{nN}=0. $$
(2.25)
Starting from the boundary values (2.24), (2.25) and using equations (2.20)-(2.23), we find all \(B_{mn}\) uniquely as follows:
$$\begin{aligned}& B_{mn} =\delta _{mn},\quad m,n\in \{ 0,1,\ldots,M \} , \\& B_{mn} =\frac{\delta _{mn}}{\alpha }, \quad m,n\in \{ M+1,\ldots,N-1 \} , \\& B_{mN} = 0, \quad m\in \{ 0,1,\ldots,M,M+1,\ldots,N \}. \end{aligned}$$
 □

Definition 1

The linear functional Ω defined by Theorem 1 is called the GSF of the matrix J given in (1.1).

3 Inverse problem from the generalized spectral function

In this section, we solve the inverse spectral problem of reconstructing the matrix J from its GSF, and we describe the structure of the GSF. The inverse spectral problem may be stated as follows: give a reconstruction procedure for the matrix J from a given GSF, and find necessary and sufficient conditions for a linear functional Ω on \(\mathbb{C} _{2N} [ \lambda ] \) to be the GSF of some matrix J of the form (1.1). For the necessary and sufficient conditions, we refer to Theorems 2 and 3 in [8]. In this paper, we only derive the formulas used to construct the matrix J.

Recall that \(P_{n} ( \lambda ) \) is a polynomial of degree n, so it can be expressed as
$$ P_{n} ( \lambda ) =\gamma _{n} \Biggl( \lambda ^{n}+\sum_{k=0}^{n-1}\chi _{nk}\lambda ^{k} \Biggr) ,\quad n\in \{ 0,1,\ldots,M, \ldots,N \} , $$
(3.1)
where \(\gamma _{n}\) and \(\chi _{nk}\) are constants. Inserting (3.1) in (2.2) and using the equality of the polynomials, we can find the following equalities between the coefficients \(a_{n}\), \(b_{n}\), \(c_{n}\), \(d_{n}\) and the quantities \(\gamma _{n}\), \(\chi _{nk}\):
$$\begin{aligned}& \begin{aligned} & a_{n}=\frac{\gamma _{n}}{\gamma _{n+1}}\quad ( 0\leq n\leq M ) , \qquad \gamma _{0}=1, \\ & c_{n}=\frac{\gamma _{n}}{\gamma _{n+1}}\quad ( M< n\leq N-2 ) ,\qquad c_{M}= \frac{\gamma _{M}}{\alpha \gamma _{M+1}},\end{aligned} \end{aligned}$$
(3.2)
$$\begin{aligned}& \begin{aligned} & b_{n}=\chi _{n,n-1}-\chi _{n+1,n}\quad ( 0\leq n\leq M ) , \qquad \chi _{0,-1}=0, \\ & d_{n}=\chi _{n,n-1}-\chi _{n+1,n}\quad ( M< n\leq N-1 ). \end{aligned} \end{aligned}$$
(3.3)
It is easily shown that there exists an equivalence between (2.8), (2.9), and
$$\begin{aligned}& \Omega \bigl( \lambda ^{m}P_{n}(\lambda ) \bigr) =\frac{\delta _{mn}}{\eta \gamma _{n}},\quad m=\overline{0,n},n\in \{ 0,1,\ldots,M,\ldots,N-1 \} , \end{aligned}$$
(3.4)
$$\begin{aligned}& \Omega \bigl( \lambda ^{m}P_{N}(\lambda ) \bigr) =0,\quad m=\overline{0,N}, \end{aligned}$$
(3.5)
respectively. Indeed, from (3.1), we can write
$$ \Omega \bigl( P_{m}(\lambda )P_{n}(\lambda ) \bigr) =\gamma _{m}\Omega \bigl( \lambda ^{m}P_{n}( \lambda ) \bigr) +\gamma _{m}\sum_{j=0}^{m-1} \chi _{mj}\Omega \bigl( \lambda ^{j}P_{n}(\lambda ) \bigr). $$
(3.6)
Then, since
$$ \lambda ^{j}=\sum_{i=0}^{j}c_{i}^{ ( j ) }P_{i}( \lambda ),\quad j\in \{ 0,1,\ldots,N \} , $$
where \(c_{i}^{ ( j ) }\) are constants, it follows from (3.6) that (2.8), (2.9) imply (3.4), (3.5); conversely, if (3.4), (3.5) hold, then (2.8), (2.9) follow from (3.6) and (3.1).
Now, let us introduce
$$ t_{l}=\Omega \bigl( \lambda ^{l} \bigr) , \quad l\in \{ 0,1,\ldots,2N \} , $$
(3.7)
which are called ‘power moments’ of the functional Ω.
Writing the expansion (3.1) in (3.4) and (3.5) instead of \(P_{n}(\lambda )\) and \(P_{N}(\lambda )\), respectively, and using the notation in (3.7), we get
$$\begin{aligned}& t_{n+m}+\sum_{k=0}^{n-1} \chi _{nk}t_{k+m}=0, \quad m=\overline{0,n-1}, n\in \{ 1,2,\ldots,N \} , \end{aligned}$$
(3.8)
$$\begin{aligned}& t_{2N}+\sum_{k=0}^{N-1} \chi _{Nk}t_{k+N}=0, \end{aligned}$$
(3.9)
$$\begin{aligned}& t_{2n}+\sum_{k=0}^{n-1} \chi _{nk}t_{k+n}=\frac{1}{\eta \gamma _{n}^{2}},\quad n\in \{ 0,1, \ldots,N-1 \} , \end{aligned}$$
(3.10)
where η is defined in (2.10).

Summarizing the discussion above, we obtain the following procedure for constructing the matrix in (1.1). To find the entries \(a_{n}\), \(b_{n}\), \(c_{n}\), \(d_{n}\) of the required matrix J, it suffices to know the quantities \(\gamma _{n}\), \(\chi _{nk}\). Given a linear functional Ω on \(\mathbb{C}_{2N} [ \lambda ] \) satisfying the conditions of Theorem 2 in [8], we use (3.7) to compute the quantities \(t_{l}\) and, for every fixed \(n\in \{ 1,2,\ldots,N \} \), write down the inhomogeneous system of linear algebraic equations (3.8) with the unknowns \(\chi _{n0}, \chi _{n1},\ldots,\chi _{n,n-1}\). Solving this system uniquely and using (3.10), we find the quantities \(\gamma _{n}\), and hence, recalling (3.2), (3.3), we obtain \(a_{n}\), \(b_{n}\), \(c_{n}\), \(d_{n}\). Therefore, we can construct the matrix J.
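As an executable summary of this procedure, the following sketch (the helper name `reconstruct` is ours; the moments are those of Example 1 below with \(N=3\), \(M=1\), \(\alpha =2\), and the positive square roots are chosen in (3.10)) computes the \(t_{l}\), solves (3.8) for \(\chi _{nk}\), recovers \(\gamma _{n}\) from (3.10), and forms the entries via (3.2), (3.3):

```python
import numpy as np

def reconstruct(t, N, M, alpha):
    # chi_{n0},...,chi_{n,n-1} from system (3.8): sum_k chi_{nk} t_{k+m} = -t_{n+m}
    chi = {}
    for n in range(1, N + 1):
        T = np.array([[t[k + m] for k in range(n)] for m in range(n)])
        chi[n] = np.linalg.solve(T, [-t[n + m] for m in range(n)])
    # gamma_n from (3.10): t_{2n} + sum_k chi_{nk} t_{k+n} = 1/(eta * gamma_n^2)
    gamma = [1.0]
    for n in range(1, N):
        s = t[2 * n] + sum(chi[n][k] * t[k + n] for k in range(n))
        eta = 1.0 if n <= M else alpha
        gamma.append(1.0 / np.sqrt(eta * s))       # positive branch
    # entries via (3.2), (3.3)
    a = [gamma[n] / gamma[n + 1] for n in range(M + 1)]
    c = {M: gamma[M] / (alpha * gamma[M + 1])}
    c.update({n: gamma[n] / gamma[n + 1] for n in range(M + 1, N - 1)})
    off = lambda n: (chi[n][n - 1] if n >= 1 else 0.0) - chi[n + 1][n]
    b = [off(n) for n in range(M + 1)]
    d = {n: off(n) for n in range(M + 1, N)}
    return a, b, c, d

# moments of the functional of Example 1: Omega(P) = (P(0) + P(1) + P(2)) / 3
N, M, alpha = 3, 1, 2.0
t = [(0 ** l + 1 + 2 ** l) / 3 for l in range(2 * N + 1)]
a, b, c, d = reconstruct(t, N, M, alpha)
```

For these moments one recovers \(a_{0}=a_{1}=\sqrt{2/3}\), \(c_{1}=1/\sqrt{6}\) and \(b_{0}=b_{1}=d_{2}=1\), the positive-sign matrix of Example 1.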

Using the numbers \(t_{l}\) defined in (3.7), let us present the determinants
$$ D_{n}=\left \vert \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} t_{0} & t_{1} & \cdots & t_{n} \\ t_{1} & t_{2} & \cdots & t_{n+1} \\ \vdots & \vdots & \ddots & \vdots \\ t_{n} & t_{n+1} & \cdots & t_{2n}\end{array}\displaystyle \right \vert , \quad n= \overline{0,N}. $$
(3.11)
From the definition (3.11), it can be seen that the coefficient determinant of system (3.8) is \(D_{n-1}\). Then, solving system (3.8) by Cramer's rule, we obtain
$$ \chi _{nk}=-\frac{D_{n-1}^{ ( k ) }}{D_{n-1}}, \quad k=\overline{0,n-1}, $$
(3.12)
where \(D_{m}^{ ( k ) }\) (\(k=\overline{0,m}\)) is the determinant obtained from \(D_{m}\) by replacing its \(( k+1 ) \)th column with the vector \(( t_{m+1},t_{m+2},\ldots,t_{2m+1} ) ^{T}\). Next, substituting the expression (3.12) for \(\chi _{nk}\) into the left-hand side of (3.10), we find
$$ \gamma _{n}^{-2}=\frac{\eta D_{n}}{D_{n-1}}, $$
(3.13)
where η is defined in (2.10). Now if we set \(D_{m}^{ ( m ) }=\triangle _{m}\), then we obtain from (3.2), (3.3), by using (3.12), (3.13),
$$\begin{aligned}& a_{n}=\pm \frac{\sqrt{D_{n-1}D_{n+1}}}{D_{n}}\quad ( 0\leq n\leq M-1 ) , \qquad D_{-1}=1, \end{aligned}$$
(3.14)
$$\begin{aligned}& a_{M}=\pm \frac{\sqrt{\alpha D_{M-1}D_{M+1}}}{D_{M}},\qquad c_{M}= \pm \frac{\sqrt{D_{M-1}D_{M+1}}}{\sqrt{\alpha }D_{M}}, \end{aligned}$$
(3.15)
$$\begin{aligned}& c_{n}=\pm \frac{\sqrt{D_{n-1}D_{n+1}}}{D_{n}}\quad ( M< n\leq N-2 ) , \end{aligned}$$
(3.16)
$$\begin{aligned}& b_{n}=\frac{\triangle _{n}}{D_{n}}-\frac{\triangle _{n-1}}{D_{n-1}}\quad ( 0 \leq n\leq M ) , \qquad \triangle _{-1}=0, \end{aligned}$$
(3.17)
$$\begin{aligned}& d_{n}=\frac{\triangle _{n}}{D_{n}}-\frac{\triangle _{n-1}}{D_{n-1}}\quad ( M< n \leq N-1 ) , \triangle _{0}=t_{1}. \end{aligned}$$
(3.18)
Hence, if a functional Ω satisfying the conditions of Theorem 3 in [8] is given, then the entries \(a_{n}\), \(b_{n}\), \(c_{n}\), \(d_{n}\) of the matrix J are obtained from equations (3.14)-(3.18), where \(D_{n}\) is defined by (3.11) and (3.7).
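The closed-form route (3.11)-(3.18) can be checked against the same data. The sketch below (helper names are ours; the moments are those of Example 1 below with \(N=3\), \(M=1\), \(\alpha =2\), positive square roots) builds the Hankel determinants \(D_{n}\) and \(\triangle _{n}\) and applies (3.14)-(3.18):

```python
import numpy as np

t = [(0 ** l + 1 + 2 ** l) / 3 for l in range(7)]   # moments of Example 1
alpha, M, N = 2.0, 1, 3

def D(n):
    # Hankel determinant (3.11); D_{-1} = 1 by convention
    if n < 0:
        return 1.0
    return np.linalg.det(np.array([[t[i + j] for j in range(n + 1)]
                                   for i in range(n + 1)]))

def Tri(n):
    # triangle_n = D_n^{(n)}: last column of D_n replaced by (t_{n+1},...,t_{2n+1})^T
    if n < 0:
        return 0.0
    H = np.array([[t[i + j] for j in range(n + 1)] for i in range(n + 1)])
    H[:, n] = [t[n + 1 + i] for i in range(n + 1)]
    return np.linalg.det(H)

a0 = np.sqrt(D(-1) * D(1)) / D(0)                    # (3.14)
a1 = np.sqrt(alpha * D(0) * D(2)) / D(1)             # (3.15), n = M = 1
c1 = np.sqrt(D(0) * D(2)) / (np.sqrt(alpha) * D(1))  # (3.15)
b0 = Tri(0) / D(0) - 0.0                             # (3.17), triangle_{-1} = 0
b1 = Tri(1) / D(1) - Tri(0) / D(0)                   # (3.17)
d2 = Tri(2) / D(2) - Tri(1) / D(1)                   # (3.18)
```

The recovered values agree with those obtained from the system (3.8), (3.10), as they must.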

In the following theorem, we show that the GSF of J has a special form, and we describe its structure. Let J be a matrix of the form (1.1) and let Ω be the GSF of J. Here we characterize the structure of Ω.

Theorem 2

Let \(\lambda _{1},\ldots,\lambda _{p}\) be all the eigenvalues with the multiplicities \(m_{1},\ldots,m_{p}\), respectively, of the matrix J. These are also the roots of the polynomial (2.4). Then there exist numbers \(\beta _{kj}\) (\(j=\overline{1,m_{k}}\), \(k=\overline{1,p}\)) uniquely determined by the matrix J such that for any polynomial \(P(\lambda )\in \mathbb{C}_{2N} [ \lambda ] \) the following formula holds:
$$ \Omega \bigl( P(\lambda ) \bigr) =\sum _{k=1}^{p}\sum_{j=1}^{m_{k}} \frac{\beta _{kj}}{ ( j-1 ) !}P^{ ( j-1 ) }(\lambda _{k}), $$
(3.19)
where \(P^{ ( j-1 ) }(\lambda )\) denotes the \(( j-1 ) \) th derivative of \(P(\lambda )\) with respect to λ.

Proof

Let J be a matrix of the form (1.1). Consider the difference equation (1.2),
$$ a_{n-1}y_{n-1}+b_{n}y_{n}+a_{n}y_{n+1}= \lambda \rho _{n}y_{n}, \quad n\in \{ 0,1,\ldots,N-1 \} , \qquad a_{-1}=c_{N-1}=1, $$
(3.20)
where \(\{ y_{n} \} _{n=-1}^{N}\) is the desired solution and
$$ \rho _{n}=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} 1, & 0\leq n\leq M, \\ \alpha, & M< n\leq N-1.\end{array}\displaystyle \right . $$
Denote by \(\{ P_{n}(\lambda ) \} _{n=-1}^{N}\) and \(\{ Q_{n}(\lambda ) \} _{n=-1}^{N}\) the solutions of (3.20) satisfying the initial conditions
$$\begin{aligned}& P_{-1}(\lambda )=0,\qquad P_{0}(\lambda )=1, \end{aligned}$$
(3.21)
$$\begin{aligned}& Q_{-1}(\lambda )=-1, \qquad Q_{0}(\lambda )=0. \end{aligned}$$
(3.22)
For each \(n\geq 0\), the degree of polynomial \(P_{n}(\lambda )\) is n and the degree of polynomial \(Q_{n}(\lambda )\) is \(n-1\). It is clear that the entries \(R_{nm}(\lambda )\) of the resolvent matrix \(R(\lambda )=(J-\lambda I)^{-1}\) are of the form
$$ R_{nm}(\lambda )=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} \rho _{n}P_{n}(\lambda ) [ Q_{m}(\lambda )+M(\lambda )P_{m}(\lambda ) ] ,& 0\leq n\leq m\leq N-1, \\ \rho _{n}P_{m}(\lambda ) [ Q_{n}(\lambda )+M(\lambda )P_{n}(\lambda ) ] , & 0\leq m\leq n\leq N-1,\end{array}\displaystyle \right . $$
(3.23)
where
$$ M(\lambda )=-\frac{Q_{N}(\lambda )}{P_{N}(\lambda )}, $$
(3.24)
and \(\rho _{n}\) is defined in (1.4). Let \(f= ( f_{0},f_{1},\ldots,f_{N-1} ) ^{T}\in \mathbb{C}^{N}\) be an arbitrary vector. Since
$$ R(\lambda )f=-\frac{f}{\lambda }+O \biggl( \frac{1}{\lambda ^{2}} \biggr) , $$
as \(\vert \lambda \vert \rightarrow \infty \), we get for each \(n\in \{ 0,1,\ldots,N-1 \} \) and for a sufficiently large positive number r
$$ f_{n}=-\frac{1}{2\pi i} \int_{\Gamma _{r}} \Biggl\{ \sum_{m=0}^{N-1}R_{nm}( \lambda )f_{m} \Biggr\} \,d\lambda + \int_{\Gamma _{r}}O \biggl( \frac{1}{\lambda ^{2}} \biggr) \,d\lambda , $$
(3.25)
where \(\Gamma _{r}\) is the circle in the λ-plane of radius r centered at the origin.
Let all the distinct zeros of \(P_{N}(\lambda )\) in (2.4) be \(\lambda _{1},\ldots,\lambda _{p}\) with multiplicities \(m_{1},\ldots,m_{p}\), respectively. Then
$$ P_{N}(\lambda )=c(\lambda -\lambda _{1})^{m_{1}} \cdots (\lambda -\lambda _{p})^{m_{p}}, $$
(3.26)
where c is a constant. We have \(1\leq p\leq N\) and \(m_{1}+\cdots+m_{p}=N\). By (3.26), we can write \(\frac{Q_{N}(\lambda )}{P_{N}(\lambda )}\) as the sum of partial fractions:
$$ \frac{Q_{N}(\lambda )}{P_{N}(\lambda )}=\sum_{k=1}^{p} \sum_{j=1}^{m_{k}}\frac{\beta _{kj}}{ ( \lambda -\lambda _{k} ) ^{j}}, $$
(3.27)
where \(\beta _{kj}\) are uniquely determined complex numbers depending on the matrix J. Inserting (3.23) in (3.25) and using (3.24), (3.27), we get, by the residue theorem and passing to the limit \(r\rightarrow \infty \),
$$ f_{n}=\sum_{k=1}^{p} \sum_{j=1}^{m_{k}}\frac{\beta _{kj}}{ ( j-1 ) !} \biggl\{ \frac{d^{j-1}}{d\lambda ^{j-1}} \bigl[ \rho _{n}F(\lambda )P_{n}(\lambda ) \bigr] \biggr\} _{\lambda =\lambda _{k}}, \quad n\in \{ 0,1,\ldots,N-1 \} , $$
(3.28)
where
$$ F(\lambda )=\sum_{m=0}^{N-1}f_{m}P_{m}( \lambda ). $$
(3.29)
Now define the functional Ω on \(\mathbb{C} _{2N} [ \lambda ] \) by the formula
$$ \Omega \bigl( P(\lambda ) \bigr) =\sum _{k=1}^{p}\sum_{j=1}^{m_{k}} \frac{\beta _{kj}}{ ( j-1 ) !}P^{ ( j-1 ) }(\lambda _{k}),\quad P(\lambda )\in \mathbb{C} _{2N} [ \lambda ]. $$
(3.30)
Thus, (3.28) can be written as follows:
$$ \frac{f_{n}}{\rho _{n}}=\Omega \bigl( F(\lambda )P_{n}( \lambda ) \bigr) ,\quad n\in \{ 0,1,\ldots,N-1 \}. $$
(3.31)
Now by using (3.29) in (3.31) and the arbitrariness of \(\{ f_{m} \} _{m=0}^{N-1}\), we see that the first relation in Theorem 1,
$$\begin{aligned}& \Omega \bigl( P_{m} ( \lambda ) P_{n} ( \lambda ) \bigr) =\delta _{mn},\quad m,n\in \{ 0,1,\ldots,M \} , \end{aligned}$$
(3.32)
$$\begin{aligned}& \Omega \bigl( P_{m} ( \lambda ) P_{n} ( \lambda ) \bigr) =\frac{\delta _{mn}}{\alpha },\quad m,n\in \{ M+1,\ldots,N-1 \} , \end{aligned}$$
(3.33)
hold. Moreover, from (3.26) and (3.30), we also obtain the second relation in Theorem 1,
$$ \Omega \bigl( P_{m} ( \lambda ) P_{N} ( \lambda ) \bigr) =0,\quad m\in \{ 0,1,\ldots,N \}. $$
(3.34)
These relations mean that the GSF of the matrix J has the form (3.19). □

Now we work out two examples to illustrate our formulas. In the first example, we use (3.8)-(3.10) to determine \(\chi _{n,k}\) and \(\gamma _{n}\).

Example 1

Consider the case \(N=3\), \(M=1\), and the functional Ω defined by the formula
$$ \Omega \bigl( P ( \lambda ) \bigr) =\frac{1}{3} \bigl( P ( 0 ) +P ( 1 ) +P ( 2 ) \bigr). $$
It is clear that the functional defined above has the structure given in Theorem 2 and satisfies the conditions of Theorem 2 in [8], so it can be chosen as a GSF. From (3.7) we calculate all the \(t_{l}\) as follows:
$$ \begin{aligned}& t_{0}=1, \qquad t_{1}=1,\qquad t_{2}=\frac{5}{3},\\ & t_{3}=3,\qquad t_{4}=\frac{17}{3}, \qquad t_{5}=11, \qquad t_{6}=\frac{65}{3}. \end{aligned} $$
(3.35)
Then, solving system (3.8) with the values in (3.35), we get
$$ \begin{aligned} & \chi _{1,0}=-1, \qquad \chi _{2,0}= \frac{1}{3},\qquad \chi _{2,1}=-2,\\ & \chi _{3,0}=0, \qquad \chi _{3,1}=2, \qquad \chi _{3,2}=-3. \end{aligned} $$
(3.36)
Now inserting the quantities in (3.35) and (3.36) into equation (3.10), we obtain
$$ \gamma _{0}=1, \qquad \gamma _{1}=\pm \sqrt{ \frac{3}{2}},\qquad \gamma _{2}=\pm \frac{3}{\sqrt{2\alpha }}. $$
(3.37)
Now it follows from (3.2) and (3.3) that
$$\begin{aligned}& a_{0}=\pm \sqrt{\frac{2}{3}},\qquad a_{1}=\pm \sqrt{\frac{\alpha }{3}},\qquad c_{1}=\pm \sqrt{\frac{1}{3\alpha }}, \\& b_{0}=1, \qquad b_{1}=1,\qquad d_{2}=1, \end{aligned}$$
where (3.36) and (3.37) are used. Consequently, we find the four matrices \(J_{\pm }\) for Ω (the sign of \(a_{0}\) and the common sign of \(a_{1}\), \(c_{1}\) can be chosen independently) as follows:
$$ J_{\pm }=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} b_{0} & a_{0} & 0 \\ a_{0} & b_{1} & a_{1} \\ 0 & c_{1} & d_{2}\end{array}\displaystyle \right ] =\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 1 & \pm \sqrt{\frac{2}{3}} & 0 \\ \pm \sqrt{\frac{2}{3}} & 1 & \pm \sqrt{\frac{\alpha }{3}} \\ 0 & \pm \sqrt{\frac{1}{3\alpha }} & 1\end{array}\displaystyle \right ]. $$
The characteristic polynomial determined by the matrices \(J_{\pm }\) is
$$ \det ( J_{\pm }-\lambda I ) =-\lambda ( \lambda -1 ) ( \lambda -2 ). $$
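This result is easy to verify numerically. The sketch below (the choices \(\alpha =2\) and the plus signs are ours, for illustration) builds \(J_{+}\) and confirms that its eigenvalues are \(0,1,2\):

```python
import math
import numpy as np

alpha = 2.0
# J_+ of Example 1 with the plus signs
J = np.array([
    [1.0, math.sqrt(2 / 3), 0.0],
    [math.sqrt(2 / 3), 1.0, math.sqrt(alpha / 3)],
    [0.0, math.sqrt(1 / (3 * alpha)), 1.0],
])
eigs = np.sort(np.linalg.eigvals(J).real)
assert np.allclose(eigs, [0.0, 1.0, 2.0], atol=1e-9)
```

Any of the four sign choices gives the same spectrum, since the characteristic polynomial depends only on \(a_{0}^{2}\) and the product \(a_{1}c_{1}\).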

In the following example, using Theorem 3 in [8], it can be checked that the necessary and sufficient conditions for the given linear functional Ω to be a GSF hold, and the matrix J can be constructed from (3.14)-(3.18).

Example 2

Let us consider, for \(N=3\) and \(M=1\), the functional Ω defined by the formula
$$ \Omega \bigl( P ( \mu ) \bigr) =P ( \mu ) +3P^{\prime } ( \mu ) +4P^{\prime \prime } ( \mu ) , $$
where μ is an arbitrary fixed number. From (3.7), we obtain
$$ t_{0}=1, \qquad t_{l}=\Omega \bigl( \mu ^{l} \bigr) =\mu ^{l}+3l\mu ^{l-1}+4l ( l-1 ) \mu ^{l-2}, $$
(3.38)
and from (3.38) we get the numbers \(t_{l}\) for \(l=\overline{1,6}\) as follows:
$$\begin{aligned}& \begin{aligned} & t_{1} =\mu +3,\qquad t_{2}=\mu ^{2}+6\mu +8, \\ & t_{3} =\mu ^{3}+9\mu ^{2}+24\mu ,\qquad t_{4}=\mu ^{4}+12\mu ^{3}+48\mu ^{2}, \\ & t_{5} =\mu ^{5}+15\mu ^{4}+80\mu ^{3},\qquad t_{6}=\mu ^{6}+18\mu ^{5}+120\mu ^{4}. \end{aligned} \end{aligned}$$
(3.39)
By using (3.39) and recalling (3.11), we find
$$\begin{aligned}& D_{-1}=1, \qquad D_{0}=t_{0}=1, \end{aligned}$$
(3.40)
$$\begin{aligned}& D_{1}=\left \vert \textstyle\begin{array}{@{}c@{\quad}c@{}} t_{0} & t_{1} \\ t_{1} & t_{2} \end{array}\displaystyle \right \vert =\left \vert \textstyle\begin{array}{@{}c@{\quad}c@{}} 1 & \mu +3 \\ \mu +3 & \mu ^{2}+6\mu +8\end{array}\displaystyle \right \vert =-1, \end{aligned}$$
(3.41)
$$\begin{aligned}& D_{2}=\left \vert \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 1 & \mu +3 & \mu ^{2}+6\mu +8 \\ \mu +3 & \mu ^{2}+6\mu +8 & \mu ^{3}+9\mu ^{2}+24\mu \\ \mu ^{2}+6\mu +8 & \mu ^{3}+9\mu ^{2}+24\mu & \mu ^{4}+12\mu ^{3}+48\mu ^{2}\end{array}\displaystyle \right \vert =-512, \end{aligned}$$
(3.42)
and similarly, after some basic operations, we get \(D_{3}=0\). From the equality \(D_{m}^{ ( m ) }=\triangle _{m}\), we determine
$$\begin{aligned}& \triangle _{-1}=0, \qquad \triangle _{0}=t_{1}= \mu +3, \end{aligned}$$
(3.43)
$$\begin{aligned}& \triangle _{1}=\left \vert \textstyle\begin{array}{@{}c@{\quad}c@{}} 1 & \mu ^{2}+6\mu +8 \\ \mu +3 & \mu ^{3}+9\mu ^{2}+24\mu \end{array}\displaystyle \right \vert =-2\mu -24, \end{aligned}$$
(3.44)
$$\begin{aligned}& \triangle _{2}=\left \vert \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 1 & \mu +3 & \mu ^{3}+9\mu ^{2}+24\mu \\ \mu +3 & \mu ^{2}+6\mu +8 & \mu ^{4}+12\mu ^{3}+48\mu ^{2} \\ \mu ^{2}+6\mu +8 & \mu ^{3}+9\mu ^{2}+24\mu & \mu ^{5}+15\mu ^{4}+80\mu ^{3}\end{array}\displaystyle \right \vert =-1536\mu. \end{aligned}$$
(3.45)
Now, it follows from (3.14), (3.15), and (3.16) that
$$ a_{0}=\pm i, \qquad a_{1}=\pm 16i\sqrt{2\alpha }, \qquad c_{1}=\pm \frac{16i\sqrt{2}}{\sqrt{\alpha }}, $$
and from (3.17), (3.18) that
$$ b_{0}=\mu +3, \qquad b_{1}=\mu +21, \qquad d_{2}=\mu -24, $$
where (3.40)-(3.45) are used. Consequently, we find the four matrices \(J_{\pm }\) for Ω (the sign of \(a_{0}\) and the common sign of \(a_{1}\), \(c_{1}\) can be chosen independently) as follows:
$$ J_{\pm }=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} b_{0} & a_{0} & 0 \\ a_{0} & b_{1} & a_{1} \\ 0 & c_{1} & d_{2}\end{array}\displaystyle \right ] =\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} \mu +3 & \pm i & 0 \\ \pm i & \mu +21 & \pm 16i\sqrt{2\alpha } \\ 0 & \pm \frac{16i\sqrt{2}}{\sqrt{\alpha }} & \mu -24\end{array}\displaystyle \right ]. $$
The characteristic polynomial determined by the matrices \(J_{\pm }\) is
$$ \det ( J_{\pm }-\lambda I ) = ( \mu -\lambda ) ^{3}. $$
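Again, a numerical spot check is possible. The sketch below (the choices \(\mu =0\), \(\alpha =2\) and the plus signs are ours) builds \(J_{+}\) with complex entries and compares \(\det ( J_{+}-\lambda I ) \) with \(( \mu -\lambda ) ^{3}\) at sample points:

```python
import numpy as np

mu, alpha = 0.0, 2.0
# J_+ of Example 2 with the plus signs
J = np.array([
    [mu + 3, 1j, 0],
    [1j, mu + 21, 16j * np.sqrt(2 * alpha)],
    [0, 16j * np.sqrt(2) / np.sqrt(alpha), mu - 24],
], dtype=complex)
for lam in (1.0, -2.0, 5.0):
    det = np.linalg.det(J - lam * np.eye(3))
    assert abs(det - (mu - lam) ** 3) < 1e-6   # characteristic polynomial (mu - lam)^3
```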

Acknowledgements

The first author is thankful to The Scientific and Technological Research Council of Turkey (TUBITAK) for their support with the Ph.D. scholarship. We thank both referees for their useful suggestions.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Faculty of Arts and Sciences, Department of Mathematics, Gaziantep University, Gaziantep, Turkey
(2)
Faculty of Arts and Sciences, Department of Mathematics, Adıyaman University, Adıyaman, Turkey

References

  1. Marchenko, VA: Expansion in eigenfunctions of non-selfadjoint singular second order differential operators. Mat. Sb. 52, 739-788 (1960) (in Russian)
  2. Rofe-Beketov, FS: Expansion in eigenfunctions of infinite systems of differential equations in the non-selfadjoint and selfadjoint cases. Mat. Sb. 51, 293-342 (1960) (in Russian)
  3. Guseinov, GS: Determination of an infinite non-selfadjoint Jacobi matrix from its generalized spectral function. Mat. Zametki 23, 237-248 (1978). English transl.: Math. Notes 23, 130-136 (1978)
  4. Guseinov, GS: The inverse problem from the generalized spectral matrix for a second order non-selfadjoint difference equation on the axis. Izv. Akad. Nauk Azerb. SSR Ser. Fiz.-Tekhn. Mat. Nauk 5, 16-22 (1978) (in Russian)
  5. Kishakevich, YL: Spectral function of Marchenko type for a difference operator of an even order. Mat. Zametki 11, 437-446 (1972). English transl.: Math. Notes 11, 266-271 (1972)
  6. Kishakevich, YL: On an inverse problem for non-selfadjoint difference operators. Mat. Zametki 11, 661-668 (1972). English transl.: Math. Notes 11, 402-406 (1972)
  7. Bohner, M, Koyunbakan, H: Inverse problems for the Sturm-Liouville difference equations. Filomat 30(5), 1297-1304 (2016)
  8. Guseinov, GS: Inverse spectral problems for tridiagonal N by N complex Hamiltonians. SIGMA 5(18), 28 (2009)
  9. Manafov, MD, Bala, B: Inverse spectral problems for tridiagonal N by N complex Hamiltonians with spectral parameter in the initial conditions. Adıyaman Univ. Fen Bilimleri Dergisi 3(1), 20-27 (2013)
  10. Akhmedova, EN, Huseynov, HM: On eigenvalues and eigenfunctions of one class of Sturm-Liouville operators with discontinuous coefficients. Trans. Acad. Sci. Azerb. Ser. Phys.-Tech. Math. Sci. 23(4), 7-18 (2003)
  11. Akhmedova, EN, Huseynov, HM: On inverse problem for Sturm-Liouville operator with discontinuous coefficients. Izv. Saratov Univ. (N.S.), Ser. Math. Mech. Inform. 10(1), 3-9 (2010)

Copyright

© Bala et al. 2016