
Theory and Modern Applications

Multidimensional sampling theorems for multivariate discrete transforms

Abstract

This paper is devoted to the establishment of two-dimensional sampling theorems for discrete transforms whose kernels arise from second order partial difference equations. We define a discrete-type partial difference operator and investigate its spectral properties. Green’s function is constructed and kernels that generate an orthonormal basis of eigenvectors are defined. A discrete Kramer-type lemma is introduced and two sampling theorems of Lagrange interpolation type are proved. Several illustrative examples are presented. The theory extends to higher-order settings.

1 Introduction

The derivation of multidimensional versions of the classical sampling theorem, see [23], has attracted many authors; see e.g. [6, 12–14, 21]. This is largely motivated by the application of multivariate sampling theorems to multidimensional signals, such as images. In this respect, two-dimensional (2-D) sampling theorems are of particular interest; see also [15, 18]. In the aforementioned works, the authors established sampling theorems for multidimensional integral transforms, most of which arise from self-adjoint differential operators, cf. [6].

In [4] the author derived a discrete counterpart of the classical sampling theorem of Whittaker [23]. He also gave a sampling theorem for discrete transforms associated with second order self-adjoint difference operators. The results of [4] extend many sampling theorems for discrete signals derived in [2]; see also [19].

The basic idea of [4] is to apply a discrete version of Kramer’s sampling theorem derived in [3, 7, 11], which can be stated as follows. Let \(l^{2}(\mathbb{J})\) denote the space of all sequences \(\alpha :=(\alpha _{n})_{n\in \mathbb{J}}\), where \(\mathbb{J}\) is a countable index set, satisfying \(\Vert \alpha \Vert ^{2}:=\sum_{n\in \mathbb{J}}|\alpha _{n}|^{2}< \infty \). Then we have the following.

Theorem 1.1

Let \((t_{k})_{k\in \mathbb{J}}\) be a sequence of real numbers and \(K_{n}:\mathbb{C}\longrightarrow \mathbb{C}\), \(n\in \mathbb{J}\), be functions such that, for any \(t\in \mathbb{C}\), \((K_{n}(t))_{n\in \mathbb{J}}\in l^{2}(\mathbb{J})\), and the sequence \(\{K_{n}(t_{k})\}_{k\in \mathbb{J}}\) is a complete orthogonal set of \(l^{2}(\mathbb{J})\). Then the discrete transform

$$ F(t)=\sum_{n\in \mathbb{J}} g_{n} K_{n}(t),\quad g\in l^{2}( \mathbb{J}), $$
(1.1)

has the sampling expansion

$$ F(t)=\sum_{k\in \mathbb{J}}F(t_{k}) \frac{\sum_{n\in \mathbb{J}} K_{n}(t) \overline{K_{n}(t_{k})}}{\sum_{n\in \mathbb{J}} \vert K_{n}(t_{k}) \vert ^{2}}. $$
(1.2)

When \(\mathbb{J}\) is infinite, series (1.2) converges absolutely for \(t\in \mathbb{C}\) and uniformly on compact subsets of \(\mathbb{C}\).
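As a numerical sanity check, not part of the paper, Theorem 1.1 can be illustrated with a concrete kernel: here \(K_{n}(t)\) is taken as a scaled Chebyshev polynomial and the nodes \(t_{k}\) as Chebyshev points, for which \(\{K_{n}(t_{k})\}_{k}\) is an orthonormal set (the DCT matrix); NumPy is an assumption of this sketch.

```python
import numpy as np

# Kernel K_n(t) = w_n T_{n-1}(t) (scaled Chebyshev polynomials) and nodes
# t_k = cos((2k-1)pi/(2N)); the matrix of samples is the orthonormal DCT.
N = 7
n = np.arange(N)
w = np.where(n == 0, np.sqrt(1.0 / N), np.sqrt(2.0 / N))
t_k = np.cos((2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N))

K = lambda t: w * np.cos(n * np.arccos(t))      # (K_1(t), ..., K_N(t))
C = np.array([K(tk) for tk in t_k])             # row k: samples at t_k
assert np.allclose(C @ C.T, np.eye(N))          # complete orthonormal set

g = np.random.default_rng(4).normal(size=N)
F = lambda t: g @ K(t)                          # the transform (1.1)

# sampling expansion (1.2), exact since {K(t_k)}_k is an orthonormal basis
t = 0.37
recon = sum(F(tk) * (K(t) @ K(tk)) / (K(tk) @ K(tk)) for tk in t_k)
assert abs(F(t) - recon) < 1e-10
```

Here \(F\) is a polynomial of degree at most \(N-1\), so the finite expansion recovers it exactly at every \(t\in [-1,1]\).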

The kernel \(K_{n}(t)\) may arise from difference operators as in [4, 7, 9, 10]. This can be illustrated as follows. Consider the eigenvalue problem

$$\begin{aligned}& r^{-1}(n) \bigl\{ \nabla \bigl[p(n)\Delta y(n) \bigr]+q(n)y(n) \bigr\} = ty(n),\quad n\in \mathbb{J}=\{1,\ldots ,N\}, t\in \mathbb{C}, \end{aligned}$$
(1.3)
$$\begin{aligned}& M_{1}(y):=y(0)+hy(1)=0,\qquad M_{2}(y):=y(N+1)+ly(N)=0, \end{aligned}$$
(1.4)

where \(\Delta y(n):=y(n+1)-y(n)\), \(\nabla y(n):=y(n)-y(n-1)\), i.e. Δ and ∇ are the forward and the backward difference operators, and h, l are real numbers. For definiteness and self-adjointness the functions \(p(n)\), \(r(n)\) are assumed to be strictly positive.

Let \(\phi (n,t)\) be the solution of Eq. (1.3) such that \(M_{1}(\phi (n,t))=0\). The eigenvalues of the problem are the zeros of \(M_{2}(\phi )\), which is a polynomial of degree N in t; they are simple, so the problem has N distinct real eigenvalues, denoted by \(\{ t_{k} \} _{k=1}^{N}\). The corresponding eigenfunctions \(\{ \phi (n,t_{k}) \} _{k=1}^{N}\) are real-valued and form an orthogonal basis of \(l^{2}(\mathbb{J})\); cf. e.g. [16]. Let

$$ \Pi (t)= \textstyle\begin{cases} \prod_{k=1}^{N} (1-\frac{t}{t_{k}} ),&\text{if zero is not an eigenvalue}, \\ t\prod_{k=2}^{N} (1-\frac{t}{t_{k}} ),&\text{if $t_{1}=0$ is an eigenvalue}. \end{cases} $$
(1.5)

One of the sampling results of [4] can be stated as the following Lagrange interpolation theorem.

Theorem 1.2

If \(f(n)\in l^{2}(\mathbb{J})\) and

$$ F(t)=\sum_{n=1}^{N} f(n) \phi (n,t),\quad t \in \mathbb{C}, $$
(1.6)

then

$$ F(t)=\sum_{k=1}^{N} F(t_{k}) \frac{\Pi (t)}{(t-t_{k})\Pi '(t_{k})}. $$
(1.7)
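Theorem 1.2 can be checked numerically. The following sketch, not part of the paper, takes the model case \(p=r=1\), \(q=0\), \(h=l=0\) of (1.3)–(1.4) and assumes NumPy; a monic variant of Π is used, which leaves the fraction in (1.7) unchanged.

```python
import numpy as np

# Model case of (1.3)-(1.4): p = r = 1, q = 0, h = l = 0, so that the
# operator is y(n+1) - 2y(n) + y(n-1) with y(0) = y(N+1) = 0.
N = 8
A = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
t_k = np.linalg.eigvalsh(A)                 # the N simple, real eigenvalues

def phi(t):
    # solution of (1.3) with phi(0) = 0, phi(1) = 1, sampled at n = 1..N
    y = np.empty(N + 2)
    y[0], y[1] = 0.0, 1.0
    for n in range(1, N + 1):
        y[n + 1] = (t + 2.0) * y[n] - y[n - 1]
    return y[1:N + 1]

f = np.random.default_rng(0).normal(size=N)  # arbitrary l^2 sequence
F = lambda t: f @ phi(t)                     # the transform (1.6)

def lagrange(t):
    # right-hand side of (1.7); Pi is taken monic, the ratio is the same
    Pi = np.prod(t - t_k)
    dPi = [np.prod(tk - np.delete(t_k, k)) for k, tk in enumerate(t_k)]
    return sum(F(tk) * Pi / ((t - tk) * d) for tk, d in zip(t_k, dPi))

assert abs(F(0.3141) - lagrange(0.3141)) < 1e-7
```

Since \(\phi (n,t)\) is a polynomial of degree \(n-1\) in t, \(F\) has degree at most \(N-1\), and Lagrange interpolation at the N eigenvalues recovers it exactly.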

In [1] sampling results were obtained for Eq. (1.3) with the general boundary conditions

$$ \begin{aligned} &\alpha _{11}y(0)+\alpha _{12}y(1)+\beta _{11}y(N)+ \beta _{12}y(N+1)=0, \\ &\alpha _{21}y(0)+\alpha _{22}y(1)+\beta _{21}y(N)+\beta _{22}y(N+1)=0, \end{aligned} $$
(1.8)

where \(\alpha _{11}\beta _{22}-\beta _{12}\alpha _{21}\ne 0\).

In this paper we establish two-dimensional sampling theorems associated with a discrete-type Dirichlet problem. To this end, we define a second order partial difference operator in the next section, where we also impose conditions on the potential that make the problem separable into two ordinary discrete Sturm–Liouville systems. Section 3 is devoted to the construction of the Green’s function of the system and the derivation of its eigenfunction expansion. Section 4 contains the sampling results of this paper, and the last section presents some worked examples. The 2-D theory can be extended similarly to higher-order settings, yielding discrete counterparts of the results of [5, 24].

2 A two-dimensional discrete operator

In this section we define the two-dimensional discrete eigenvalue problem of this paper. Let \(\mathbb{I}=\mathbb{Z}_{N}\times \mathbb{Z}_{M}\), where \(\mathbb{Z}_{N}=\{1,2,\ldots ,N\}\), \(\mathbb{Z}_{M}=\{1,2,\ldots ,M\}\), and N, M are fixed positive integers. We will write \({\mathbf{{n}}}=(n,m)\in \mathbb{I}\). Let \(\ell ^{2}(\mathbb{I})\) denote the space of all complex-valued functions

$$ \alpha :\mathbb{I}\longrightarrow \mathbb{C}\qquad \bigl({\mathbf{{n}}} \mapsto \alpha ({\mathbf{{n}}}) \bigr), $$

with the inner product and norm

$$ \begin{aligned} &\langle \alpha ,\beta \rangle :=\sum_{n=1}^{N} \sum_{m=1}^{M} \alpha (n,m) \overline{ \beta (n,m)}, \\ &\Vert \alpha \Vert ^{2}:=\sum_{n=1}^{N} \sum_{m=1}^{M} \bigl\vert \alpha (n,m) \bigr\vert ^{2}, \quad \alpha , \beta \in \ell ^{2}( \mathbb{I}). \end{aligned} $$
(2.1)

For \(Y\in \ell ^{2}(\mathbb{I})\), let \(\Delta _{n}\) and \(\nabla _{n}\) be the partial forward and backward difference operators defined, respectively, by

$$ \begin{aligned} &\Delta _{n} Y(n,m) :=Y(n+1,m)-Y(n,m), \\ &\nabla _{n} Y(n,m) :=Y(n,m)-Y(n-1,m). \end{aligned} $$

Similarly we define \(\Delta _{m}\) and \(\nabla _{m}\). Let

$$ \boldsymbol{\Delta}=\Delta _{n}\nabla _{n}+\Delta _{m}\nabla _{m}. $$

Consider the second order partial difference equation

$$ \boldsymbol{\Delta}Y({\mathbf{{n}}})+Q({\mathbf{{n}}})Y({ \mathbf{{n}}})= {t} Y({\mathbf{{n}}}),\quad {\mathbf{{n}}}\in \mathbb{I}, $$
(2.2)

with the separate-type boundary conditions

$$ \begin{pmatrix} U_{11}(Y) \\ U_{12}(Y) \\ U_{21}(Y) \\ U_{22}(Y)\end{pmatrix} = \begin{pmatrix} 1&h_{1}&0&0&0&0&0&0 \\ 0&0&l_{1}&1&0&0&0&0 \\ 0&0&0&0&1&h_{2}&0&0 \\ 0&0&0&0&0&0&l_{2}&1 \end{pmatrix} \begin{pmatrix} Y(0,m) \\ Y(1,m) \\ Y(N,m) \\ Y(N+1,m) \\ Y(n,0) \\ Y(n,1) \\ Y(n,M) \\ Y(n,M+1) \end{pmatrix} = \mathbf{0}. $$
(2.3)

Here \(h_{i}\), \(l_{i}\), \(i=1,2\), are real numbers, \(Q({\mathbf{{n}}})\) is a real-valued function defined on \(\mathbb{I}\), and \({t}\in \mathbb{C}\) is the eigenvalue parameter.

Assuming that \(Q({\mathbf{{n}}})=q(n)+p(m)\) and letting \(Y({\mathbf{{n}}})=y(n) z(m)\) make problem (2.2)–(2.3) separable, i.e., it can be split into two self-adjoint Sturm–Liouville problems with separate-type boundary conditions as follows:

$$\begin{aligned}& \begin{aligned} &D_{1}y= \Delta _{n}\nabla _{n}y(n)+q(n)y(n)= \lambda y(n), \\ &U_{11}(y) =y(0)+h_{1} y(1)=0, \\ &U_{12}(y) =y(N+1)+l_{1} y(N)=0, \end{aligned} \end{aligned}$$
(2.4)
$$\begin{aligned}& \begin{aligned} &D_{2}z= \Delta _{m}\nabla _{m}z(m)+p(m)z(m)= \mu z(m), \\ &U_{21}(z) =z(0)+h_{2} z(1)=0, \\ &U_{22}(z) =z(M+1)+l_{2} z(M)=0, \end{aligned} \end{aligned}$$
(2.5)

where \(\lambda +\mu =t\). Let \(\phi (n,\lambda )\) be the solution of \(D_{1}y=\lambda y\) uniquely determined by the initial conditions

$$ \phi (0,\lambda )=-h_{1}, \qquad \phi (1,\lambda )=1, \quad \lambda \in \mathbb{C}, $$

and \(\psi (m,\mu )\) be the solution of \(D_{2}z=\mu z\) uniquely determined by the initial conditions

$$ \psi (0,\mu )=-h_{2},\qquad \psi (1,\mu )=1,\quad \mu \in \mathbb{C}. $$

Thus [16], both \(\phi (n,\lambda )\) and \(\psi (m,\mu )\) are, respectively, polynomials in λ and μ of degree \(n-1\) and \(m-1\). Noting that

$$ U_{11}(\phi )=0,\qquad U_{21}(\psi )=0, $$
(2.6)

the eigenvalues of (2.4) and (2.5) are, respectively, the solutions of the equations

$$ U_{12}(\phi )=0,\qquad U_{22}(\psi )=0. $$
(2.7)

Following the theory developed in [16], the eigenvalues of (2.4) and (2.5) are real and distinct, and they form the sets

$$ \{\lambda _{k}\}_{k=1}^{N}, \{\mu _{l}\}_{l=1}^{M}\subset \mathbb{R}, $$
(2.8)

respectively. Denote the sets of corresponding eigenvectors by

$$ \bigl\{ \phi _{k}(n)=\phi (n,\lambda _{k})\bigr\} _{k=1}^{N},\qquad \bigl\{ \psi _{l}(m)= \psi (m,\mu _{l})\bigr\} _{l=1}^{M}. $$
(2.9)

Therefore, a solution of (2.2) is

$$ \Phi ({\mathbf{{n}}},\lambda ,\mu )=\Phi (n,m,\lambda ,\mu )= \phi (n, \lambda )\psi (m, \mu ),\quad {\mathbf{{n}}}\in \mathbb{I}, \lambda ,\mu \in \mathbb{C}. $$
(2.10)

We also conclude that problem (2.2)–(2.3) has the set of eigenvalues

$$ {t}_{kl}=\lambda _{k}+\mu _{l},\quad 1\leqslant k\leqslant N, 1 \leqslant l\leqslant M, $$
(2.11)

with the corresponding real-valued eigenvectors

$$ \Phi _{kl}(n,m)=\Phi (n,m,\lambda _{k},\mu _{l})=\phi _{k}(n) \psi _{l}(m),\quad 1\leqslant k\leqslant N, 1\leqslant l\leqslant M. $$
(2.12)

Unlike the one-dimensional case, the eigenvalues are not necessarily simple. Indeed, if \(\lambda _{k}\), \(\mu _{l}\) are eigenvalues of problems (2.4) and (2.5), respectively, with \(\lambda _{k}+\mu _{l}=t\) fixed, then t is an eigenvalue of (2.2)–(2.3) corresponding to every eigenvector of the form \(\Phi _{kl}(n,m)\). Hence, the eigenvalues of (2.2)–(2.3) can be listed as \(\{t_{K}\}_{K=1}^{N\times M}\), where each eigenvalue is repeated according to its (geometric) multiplicity. Note that the eigenvalues (2.11) are real and the eigenvectors (2.12) are real-valued functions.

Lemma 2.1

The eigenvectors (2.12) form an orthogonal basis of \(\ell ^{2}(\mathbb{I})\).

Proof

Since all eigenvalues of each of the problems (2.4) and (2.5) are simple, their eigenvectors \(\{\phi _{k}(n)\}_{k=1}^{N}\), \(\{\psi _{l}(m)\}_{l=1}^{M}\) form orthogonal bases of \(\ell ^{2}(\mathbb{Z}_{N})\), \(\ell ^{2}(\mathbb{Z}_{M})\), respectively [16]. Hence the eigenvectors \(\{\Phi _{kl}(n,m)\}_{k=1,l=1}^{N,M}\) of (2.2)–(2.3) are also orthogonal in \(\ell ^{2}(\mathbb{I})\). We have

$$ \begin{aligned} \langle \Phi _{kl},\Phi _{k'l'} \rangle &=\sum _{n=1}^{N}\sum _{m=1}^{M} \Phi _{kl}(n,m)\Phi _{k'l'}(n,m) \\ &=\sum_{n=1}^{N}\phi _{k}(n) \phi _{k'}(n) \sum_{m=1}^{M} \psi _{l}(m) \psi _{l'}(m) \\ &= \Vert \phi _{k} \Vert ^{2} \Vert \psi _{l} \Vert ^{2}\delta _{kk'}\delta _{ll'}. \end{aligned} $$

Since \(\ell ^{2}(\mathbb{I})\) has dimension NM, the set \(\{\Phi _{kl}\}_{k=1,l=1}^{N,M}\) is an orthogonal basis of \(\ell ^{2}(\mathbb{I})\). □
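Lemma 2.1 can be observed numerically. The following sketch, not from the paper, models (2.4)–(2.5) by symmetric tridiagonal matrices (the assumptions \(q=p=0\) and \(h_{i}=l_{i}=0\) are for this sketch only) and checks that the product eigenvectors form an orthonormal basis of \(\ell ^{2}(\mathbb{I})\); NumPy is assumed.

```python
import numpy as np

# Model of (2.4)-(2.5): q = p = 0, Dirichlet-type conditions h_i = l_i = 0,
# so each one-dimensional operator is the tridiagonal matrix below.
N, M = 5, 4
tri = lambda n: (np.diag(-2.0 * np.ones(n))
                 + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
_, U = np.linalg.eigh(tri(N))   # columns: normalized eigenvectors phi_k
_, V = np.linalg.eigh(tri(M))   # columns: normalized eigenvectors psi_l

# Phi_{kl}(n, m) = phi_k(n) psi_l(m), flattened to vectors of length N*M
B = np.array([np.outer(U[:, k], V[:, l]).ravel()
              for k in range(N) for l in range(M)]).T
# Gram matrix is the identity: NM orthonormal vectors, hence a basis
assert np.allclose(B.T @ B, np.eye(N * M))
```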

In the following lemma, we prove that (2.11) and (2.12) are the only eigenvalues and eigenvectors of problem (2.2)–(2.3), which is a discrete counterpart of the classical result of [22, p. 114].

Lemma 2.2

The function (2.10) generates all eigenvectors of the problem (2.2)(2.3).

Proof

Assume that \(\theta (n,\lambda )\) and \(\chi (m,\mu )\) are the normalizations of \(\phi (n,\lambda )\) and \(\psi (m, \mu )\), respectively. Thus, \(\{\theta _{k}(n)=\theta (n,\lambda _{k})\}_{k=1}^{N}\) is an orthonormal basis in \(\ell ^{2}(\mathbb{Z}_{N})\), and \(\{\chi _{l}(m)=\chi (m,\mu _{l})\}_{l=1}^{M}\) is an orthonormal basis in \(\ell ^{2}(\mathbb{Z}_{M})\). Therefore,

$$ \Theta _{kl}({\mathbf{n}})= \Theta _{kl}(n,m)=\theta _{k}(n)\chi _{l}(m),\quad 1\leqslant k\leqslant N, 1 \leqslant l\leqslant M, $$

is an orthonormal basis in \(\ell ^{2}(\mathbb{I})\). Let \(f(n,m)\in \ell ^{2}(\mathbb{I})\). For each \(m\in \mathbb{Z}_{M}\), define

$$ \zeta _{k}(m)=\sum_{n=1}^{N}f(n,m) \theta _{k}(n). $$

Thus, \(\{\zeta _{k}(m)\}_{k=1}^{N}\) are merely the Fourier coefficients of the function \(f(\cdot ,m)\in \ell ^{2}(\mathbb{Z}_{N})\) for each fixed \(m\in \mathbb{Z}_{M}\). Parseval’s relation, cf. [17, p. 170], applied to the problem (2.4), yields

$$ \sum_{n=1}^{N} \bigl\vert f(n,m) \bigr\vert ^{2}=\sum_{k=1}^{N} \bigl\vert \zeta _{k}(m) \bigr\vert ^{2}. $$
(2.13)

On the other hand, in a similar manner, if

$$ c_{k,l}=\sum_{m=1}^{M} \zeta _{k}(m) \chi _{l}(m) =\sum _{m=1}^{M} \sum_{n=1}^{N}f(n,m) \Theta _{kl}(n,m), $$
(2.14)

then Parseval’s relation leads us to

$$ \sum_{m=1}^{M} \bigl\vert \zeta _{k}(m) \bigr\vert ^{2}=\sum _{l=1}^{M} \vert c_{k,l} \vert ^{2}. $$
(2.15)

From (2.13) and (2.15), we get

$$ \sum_{m=1}^{M}\sum _{n=1}^{N} \bigl\vert f(n,m) \bigr\vert ^{2}=\sum_{m=1}^{M} \sum_{k=1}^{N} \bigl\vert \zeta _{k}(m) \bigr\vert ^{2} =\sum _{l=1}^{M}\sum_{k=1}^{N} \vert c_{k,l} \vert ^{2}. $$
(2.16)

Now suppose that \(f(n,m)\) is an eigenvector of (2.2)–(2.3) different from those in (2.12). Then, by the orthogonality of eigenvectors and Eq. (2.14), \(c_{k,l}=0\) for all \(1\leqslant k\leqslant N\), \(1\leqslant l\leqslant M\). By (2.16), \(f(n,m)\equiv 0\), \((n,m)\in \mathbb{I}\), which contradicts the assumption that \(f(n,m)\) is an eigenvector. □

It is worthwhile to mention that the theory outlined above is a discrete counterpart of the Dirichlet boundary-value problem with an additive potential; see [5, 24] for the treatment of the associated sampling theorems.

3 Construction of Green’s function

In this section we construct Green’s function associated with the eigenvalue problem (2.2)–(2.3).

Theorem 3.1

The Green’s function of the problem (2.2)(2.3) is

$$ G({\mathbf{{n}},\mathbf{{j}}},t)=\sum _{k=1}^{N}\sum_{l=1}^{M} \frac{\Theta _{kl}({\mathbf{{n}}}) \Theta _{kl}({{\mathbf{{j}}}})}{\lambda _{k}+\mu _{l}-t}, \quad {\mathbf{{j}}}=(j,i). $$
(3.1)

Proof

To get Green’s function of the problem, we seek a solution of the equation

$$ \boldsymbol{\Delta}y({\mathbf{{n}}})+ \bigl(Q({\mathbf{{n}}})-t \bigr)y({\mathbf{{n}}})=f({\mathbf{{n}}}),\quad {\mathbf{{n}}}\in \mathbb{I}, f\in \ell ^{2}(\mathbb{I}), $$
(3.2)

with the boundary conditions (2.3). Since f is an \(\ell ^{2}(\mathbb{I})\) function, it has the Fourier expansion

$$ f(n,m)=\sum_{k=1}^{N}\sum _{l=1}^{M}b_{kl} \Theta _{kl}({\mathbf{n}}), $$

where

$$ b_{kl}=\sum_{j=1}^{N}\sum _{i=1}^{M}f(j,i) \Theta _{kl}({\mathbf{j}}). $$

Let \(y({\mathbf{{n}}})=y({\mathbf{{n}}},t)\) be a solution of (3.2) with (2.3). Then it has the expansion

$$ y({\mathbf{{n}}},t)=\sum_{k=1}^{N}\sum _{l=1}^{M}B_{kl} \Theta _{kl}({ \mathbf{n}}). $$

Then (3.2) is satisfied if

$$ B_{kl}=\frac{b_{kl}}{\lambda _{k}+\mu _{l}-t}. $$

Therefore,

$$ \begin{aligned} y({\mathbf{{n}}},t)&=\sum _{k=1}^{N}\sum_{l=1}^{M} \frac{b_{kl}}{\lambda _{k}+\mu _{l}-t} \Theta _{kl}({\mathbf{{n}}}) \\ &=\sum_{k=1}^{N}\sum _{l=1}^{M}\sum_{j=1}^{N} \sum_{i=1}^{M} \frac{\Theta _{kl}({\mathbf{n}})\Theta _{kl}({\mathbf{j}})}{\lambda _{k}+\mu _{l}-t} f(j,i) \\ &=\sum_{j=1}^{N}\sum _{i=1}^{M}G({\mathbf{{n}}},{\mathbf{{j}}},t) f({ \mathbf{{j}}}). \end{aligned} $$

This proves (3.1). □

The classical multidimensional Green’s functions can be found in [22].
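As a numerical sketch of Theorem 3.1, not part of the paper, one can verify that the matrix built from (3.1) inverts the operator \(\boldsymbol{\Delta}-t\) on \(\ell ^{2}(\mathbb{I})\); the assumptions \(Q=0\), Dirichlet-type conditions, and NumPy are specific to this sketch.

```python
import numpy as np

# Model of Theorem 3.1 (Q = 0, Dirichlet-type conditions): the Green's
# matrix built from (3.1) solves (Delta - t) y = f, cf. (3.2).
N, M, t = 4, 3, 0.25 + 0.1j                  # t off the (real) spectrum
tri = lambda n: (np.diag(-2.0 * np.ones(n))
                 + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
lam, U = np.linalg.eigh(tri(N))
mu, V = np.linalg.eigh(tri(M))

L = np.kron(tri(N), np.eye(M)) + np.kron(np.eye(N), tri(M))  # Delta on l^2(I)
Theta = np.kron(U, V)                        # orthonormal eigenvectors of L
spec = (lam[:, None] + mu[None, :]).ravel()  # eigenvalues lambda_k + mu_l

G = Theta @ np.diag(1.0 / (spec - t)) @ Theta.T   # (3.1) as an NM x NM matrix
f = np.random.default_rng(1).normal(size=N * M)
y = G @ f                                    # candidate solution of (3.2)
assert np.allclose((L - t * np.eye(N * M)) @ y, f)
```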

4 Sampling theorems

This section contains the sampling theorems of this paper. We start by introducing a 2-D Kramer-type sampling theorem. Assume that

$$ K:\mathbb{I}\times \mathbb{C}^{2}\longrightarrow \mathbb{C}\qquad \bigl(({ \mathbf{{n}}},\lambda ,\mu )\mapsto K({\mathbf{{n}}},\lambda ,\mu ) \bigr), $$

is such that, for any \((\lambda ,\mu )\in \mathbb{C}^{2}\), \(K(\cdot ,\lambda ,\mu )\in \ell ^{2}(\mathbb{I})\), and that there exists a set of points

$$ \bigl\{ (\lambda _{k},\mu _{l}), 1\leqslant k\leqslant N, 1\leqslant l \leqslant M\bigr\} \subset \mathbb{C}^{2}, $$

such that \(\{K({\mathbf{{n}}},\lambda _{k},\mu _{l})\}\) is a complete orthogonal set in \(\ell ^{2}(\mathbb{I})\). The following theorem gives a two-dimensional discrete version of Kramer’s sampling theorem.

Theorem 4.1

The discrete transform

$$ F(\lambda ,\mu )=\sum_{n=1}^{N} \sum_{m=1}^{M} f({\mathbf{{n}}}) K({ \mathbf{{n}}}, \lambda ,\mu ),\quad f\in \ell ^{2}(\mathbb{I}), $$
(4.1)

has the sampling expansion

$$ F(\lambda ,\mu )=\sum_{k=1}^{N} \sum_{l=1}^{M} F(\lambda _{k},\mu _{l}) \frac{\sum_{n=1}^{N}\sum_{m=1}^{M} K({\mathbf{{n}}},\lambda ,\mu ) \overline{K({\mathbf{{n}}},\lambda _{k}, \mu _{l})}}{ \Vert K(\cdot ,\lambda _{k},\mu _{l}) \Vert ^{2}}. $$
(4.2)

The proof is by applying Parseval’s relation to (4.1); cf. [17, p. 175]. In the following we will show that the kernel \(K({\mathbf{{n}}},\lambda ,\mu )\) can arise from solutions of partial difference equations.

We give two sampling theorems associated with the problem (2.2)–(2.3). In the first theorem we take the kernel \(K({\mathbf{{n}}},\lambda ,\mu )\) of the discrete transform (4.1) as the solution (2.10); \(\Phi ({\mathbf{{n}}},\lambda ,\mu )\), of the problem (2.2)–(2.3), while Green’s function is involved in the kernel of the second one. Then the sampling expansions (4.2) will be two-dimensional and one-dimensional Lagrange-type interpolations, respectively, as we will see.

Theorem 4.2

The discrete transform

$$ F(\lambda ,\mu )=\sum_{n=1}^{N} \sum_{m=1}^{M} f({\mathbf{{n}}}) \Phi ({ \mathbf{{n}}},\lambda ,\mu ), \quad f\in \ell ^{2}(\mathbb{I}), $$
(4.3)

has the sampling expansion

$$ \begin{aligned} F(\lambda ,\mu )&=\sum _{k=1}^{N}\sum_{l=1}^{M} F( \lambda _{k},\mu _{l}) \frac{G(\lambda ) H(\mu )}{(\lambda -\lambda _{k})(\mu -\mu _{l}) G'(\lambda _{k}) H'(\mu _{l})} \\ &=\sum_{k=1}^{N}\sum _{l=1}^{M} F(\lambda _{k},\mu _{l}) \frac{K(\lambda ) L(\mu )}{(\lambda -\lambda _{k})(\mu -\mu _{l}) K'(\lambda _{k}) L'(\mu _{l})}, \end{aligned} $$
(4.4)

where \(G(\lambda )=U_{12}(\phi )\), \(H(\mu )=U_{22}(\psi )\), and \(K(\lambda )=\prod_{k=1}^{N}(\lambda -\lambda _{k})\), \(L( \mu )=\prod_{l=1}^{M}(\mu -\mu _{l})\).

Proof

Using Theorem 4.1, we obtain

$$ F(\lambda ,\mu )=\sum_{k=1}^{N} \sum_{l=1}^{M} F(\lambda _{k},\mu _{l}) \frac{\sum_{n=1}^{N}\sum_{m=1}^{M} \Phi ({\mathbf{{n}}},\lambda ,\mu ) {\Phi ({\mathbf{{n}}},\lambda _{k},\mu _{l})}}{ \Vert \Phi _{kl}(\cdot ) \Vert ^{2}}. $$
(4.5)

Since \(\phi (n,\lambda )\) satisfies the first relation of (2.6), it satisfies Green’s formula [16, p. 13]:

$$ \phi (N+1,s){\phi }(N,u)-\phi (N,s){\phi }(N+1,u)= (s-u)\sum _{n=1}^{N} \phi (n,s){\phi }(n,u),\quad s, u \in \mathbb{C}. $$
(4.6)

Moreover, since \(\phi _{k}(n)\) is an eigenfunction of (2.4), it satisfies the boundary conditions. Then \(\phi _{k}(N+1)=-l_{1}\phi _{k}(N)\). Thus, for \(s=\lambda \), \(u=\lambda _{k}\), (4.6) leads to

$$ \begin{aligned} \sum_{n=1}^{N} \phi (n,\lambda )\phi _{k}(n)&= \frac{\phi _{k}(N)}{\lambda -\lambda _{k}} \bigl[\phi (N+1,\lambda )+l_{1} \phi (N,\lambda ) \bigr] \\ &=\frac{\phi _{k}(N)}{\lambda -\lambda _{k}} G(\lambda ). \end{aligned} $$

Similarly,

$$ \sum_{m=1}^{M}\psi (m,\mu )\psi _{l}(m)= \frac{\psi _{l}(M)}{\mu -\mu _{l}} H(\mu ). $$

Therefore,

$$ \begin{aligned} \sum_{n=1}^{N} \sum_{m=1}^{M} \Phi ({\mathbf{{n}}}, \lambda , \mu ) {\Phi ({\mathbf{{n}}},\lambda _{k},\mu _{l})} &= \Biggl(\sum_{n=1}^{N} \phi (n,\lambda )\phi _{k}(n) \Biggr) \Biggl(\sum _{m=1}^{M}\psi (m, \mu )\psi _{l}(m) \Biggr) \\ &=\phi _{k}(N) \psi _{l}(M) \frac{G(\lambda ) H(\mu )}{(\lambda -\lambda _{k})(\mu -\mu _{l})}. \end{aligned} $$
(4.7)

Letting \(\lambda \rightarrow \lambda _{k}\), \(\mu \rightarrow \mu _{l}\) in (4.7), we obtain

$$ \bigl\Vert \Phi _{kl}(\cdot ) \bigr\Vert ^{2}=\phi _{k}(N) \psi _{l}(M) G'(\lambda _{k}) H'(\mu _{l}). $$
(4.8)

Combining Eqs. (4.5), (4.7) and (4.8), we obtain

$$ F(\lambda ,\mu )=\sum_{k=1}^{N} \sum_{l=1}^{M} F(\lambda _{k},\mu _{l}) \frac{G(\lambda ) H(\mu )}{(\lambda -\lambda _{k})(\mu -\mu _{l}) G'(\lambda _{k}) H'(\mu _{l})}. $$
(4.9)

Since \(G(\lambda )\) and \(H(\mu )\) are polynomials in λ and μ of degrees N and M, with simple zeros at \(\{\lambda _{k}\}_{k=1}^{N}\) and \(\{\mu _{l}\}_{l=1}^{M}\), respectively,

$$ G(\lambda )=c_{1}\prod_{k=1}^{N}( \lambda -\lambda _{k})=c_{1} K( \lambda ),\qquad H(\mu )=c_{2}\prod_{l=1}^{M}(\mu - \mu _{l})=c_{2} L( \mu ), $$

where \(c_{1}\), \(c_{2}\) are nonzero constants. Since \(G(\lambda )/G'(\lambda _{k})=K(\lambda )/K'(\lambda _{k})\) and \(H(\mu )/ H'(\mu _{l})=L(\mu )/L'(\mu _{l})\), Eq. (4.9) reduces to (4.4). □

Equation (4.4) is a two-dimensional Lagrange interpolation formula, cf. [20, p. 166], [8, p. 39], whose fundamental polynomials are products of one-dimensional ones. In the following theorem, where the kernel is the Green’s function, we obtain a one-dimensional Lagrange interpolation representation, whose fundamental polynomial involves both families of sample points.

Assume that the distinct eigenvalues of the problem (2.2)–(2.3) are \(\{\lambda _{k}+\mu _{l}\}_{k=1, l=1}^{s_{1},s_{2}}\). Since the Green’s function (3.1) has simple poles at the eigenvalues, the function

$$ \Psi ({\mathbf{{n}}},t)=P(t) G({\mathbf{{n}}},{\mathbf{{j}}}_{0},t),\qquad P(t)=\prod_{k=1}^{s_{1}} \prod _{l=1}^{s_{2}} (t-\lambda _{k}-\mu _{l} ), $$
(4.10)

is an entire function of t, where \({\mathbf{{j}}}_{0}\in \mathbb{I}\) is fixed.

Theorem 4.3

For the discrete transform

$$ H(t)=\sum_{n=1}^{N} \sum_{m=1}^{M} h({\mathbf{{n}}}) \Psi ({ \mathbf{{n}}},t),\quad h\in \ell ^{2}(\mathbb{I}), $$
(4.11)

we have the sampling expansion

$$ H(t)=\sum_{k=1}^{s_{1}} \sum_{l=1}^{s_{2}} H(\lambda _{k}+\mu _{l}) \frac{P(t)}{(t-\lambda _{k}-\mu _{l}) P'(\lambda _{k}+\mu _{l})}. $$
(4.12)

Proof

If the eigenvalue \(\lambda _{k}+\mu _{l}\) has multiplicity \(\nu _{kl}\), with corresponding normalized eigenvectors \(\{\Theta _{kl}^{i}({\mathbf{{n}}})\}_{i=1}^{\nu _{kl}}\), then (3.1) can be rewritten as

$$ G({\mathbf{{n}}},{\mathbf{{j}}},t)=\sum _{k=1}^{s_{1}}\sum_{l=1}^{s_{2}} \sum_{i=1}^{ \nu _{kl}} \frac{\Theta _{kl}^{i}({\mathbf{{n}}}) \Theta _{kl}^{i}({\mathbf{{j}}})}{\lambda _{k}+\mu _{l}-t}. $$
(4.13)

Applying Parseval’s relation to (4.11), cf. [17, p. 175], we have

$$ H(t)= \bigl\langle \Psi (\cdot ,t),\overline{h}(\cdot ) \bigr\rangle =\sum _{k=1}^{s_{1}} \sum_{l=1}^{s_{2}} \sum_{i=1}^{\nu _{kl}} \bigl\langle \Psi (\cdot ,t), \Theta _{kl}^{i}(\cdot ) \bigr\rangle \bigl\langle \Theta _{kl}^{i}( \cdot ), \overline{h}(\cdot ) \bigr\rangle . $$
(4.14)

By the orthogonality property, we get

$$ \begin{aligned} \bigl\langle \Psi (\cdot ,t),\Theta _{kl}^{i}(\cdot ) \bigr\rangle &= P(t) \sum _{k'=1}^{s_{1}}\sum_{l'=1}^{s_{2}} \sum_{i'=1}^{\nu _{k'l'}} \frac{\Theta _{k'l'}^{i'}({\mathbf{{j}}}_{0})}{\lambda _{k'}+\mu _{l'}-t} \sum_{n=1}^{N}\sum _{m=1}^{M}\Theta _{k'l'}^{i'}({ \mathbf{{n}}})\Theta _{kl}^{i}({ \mathbf{{n}}}) \\ &=P(t)\frac{\Theta _{kl}^{i}({\mathbf{{j}}}_{0})}{\lambda _{k}+\mu _{l}-t}. \end{aligned} $$
(4.15)

Also,

$$ \begin{aligned} H(\lambda _{k}+\mu _{l})&=\lim_{t\rightarrow {\lambda _{k}+ \mu _{l}}} \bigl\langle \Psi (\cdot ,t),\overline{h}( \cdot ) \bigr\rangle \\ &=\lim_{t\rightarrow {\lambda _{k}+\mu _{l}}} \Biggl(P(t)\sum_{k'=1}^{s_{1}} \sum_{l'=1}^{s_{2}}\sum _{i'=1}^{\nu _{k'l'}} \frac{\Theta _{k'l'}^{i'}({\mathbf{{j}}}_{0})}{\lambda _{k'}+\mu _{l'}-t} \bigl\langle \Theta _{k'l'}^{i'}(\cdot ),\overline{h}(\cdot ) \bigr\rangle \Biggr) \\ &=-P'(\lambda _{k}+\mu _{l})\sum _{i=1}^{\nu _{kl}} \Theta _{kl}^{i}({ \mathbf{{j}}}_{0}) \bigl\langle \Theta _{kl}^{i}(\cdot ),\overline{h}(\cdot ) \bigr\rangle . \end{aligned} $$
(4.16)

Substituting from (4.15) in (4.14), then using (4.16), one obtains (4.12). □
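Theorem 4.3 can also be checked numerically, multiplicities included. The sketch below, not from the paper, uses the model \(Q=0\) with Dirichlet-type conditions and \(N=M=3\), for which the sums \(\lambda _{k}+\mu _{l}\) are genuinely multiple; the limit (4.16) is used to evaluate H at the eigenvalues, and NumPy is an assumption of the sketch.

```python
import numpy as np

# Model of Theorem 4.3 with multiple eigenvalues (Q = 0, Dirichlet-type).
N = M = 3
tri = lambda n: (np.diag(-2.0 * np.ones(n))
                 + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
lam, U = np.linalg.eigh(tri(N))
mu, V = np.linalg.eigh(tri(M))
Theta = np.kron(U, V)                            # orthonormal eigenvectors
spec = (lam[:, None] + mu[None, :]).ravel()      # lambda_k + mu_l, repeated
nodes = np.unique(np.round(spec, 8))             # the distinct eigenvalues
P = lambda t: np.prod(t - nodes)                 # monic form of P in (4.10)
dP = np.array([np.prod(nodes[k] - np.delete(nodes, k))
               for k in range(len(nodes))])      # P'(t_K)

j0 = 0                                           # a fixed point j_0 of I
h = np.random.default_rng(3).normal(size=N * M)

def H(t):
    # H(t) = sum_n h(n) Psi(n, t); at an eigenvalue the limit (4.16) applies
    c = np.empty(len(spec))
    for i, s in enumerate(spec):
        k = int(np.argmin(np.abs(nodes - s)))
        c[i] = -dP[k] if abs(s - t) < 1e-6 else P(t) / (s - t)
    return h @ (Theta @ (c * Theta[j0]))

def ell(t, k):                                   # Lagrange basis at the nodes
    others = np.delete(nodes, k)
    return np.prod((t - others) / (nodes[k] - others))

t = 0.123
assert abs(H(t) - sum(H(nodes[k]) * ell(t, k)
                      for k in range(len(nodes)))) < 1e-6
```

Here \(\Psi ({\mathbf{n}},t)\) is a polynomial in t of degree one less than the number of distinct eigenvalues, so interpolation at those points is exact, as (4.12) asserts.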

Example 4.4

Consider the boundary value problem

$$ \boldsymbol{\Delta}Y({\mathbf{{n}}})+4 Y({\mathbf{{n}}})= {t} Y({\mathbf{{n}}}),\quad {\mathbf{{n}}} \in \mathbb{I}, $$
(4.17)

with the boundary conditions

$$ Y(0,m)=0, \qquad Y(N+1,m)=0,\qquad Y(n,0)=0,\qquad Y(n,M+1)=0. $$
(4.18)

This problem is separable into

$$ \begin{aligned} & y(n+1)+y(n-1) = {\lambda } y(n),\qquad y(0)=0,\qquad y(N+1)=0,\quad n\in \mathbb{Z}_{N}, \\ &z(m+1)+z(m-1) = {\mu } z(m), \qquad z(0)=0, \qquad z(M+1)=0, \quad m\in \mathbb{Z}_{M}, \end{aligned} $$
(4.19)

where \(t=\lambda +\mu \). The solution of the first problem of (4.19) under the condition \(y(0)=0\) is

$$ y(n)= \textstyle\begin{cases} \frac{\sin n\sigma }{\sin \sigma }, &\cos \sigma =\frac{\lambda }{2}, \vert \frac{\lambda }{2} \vert \leqslant 1, \\ \frac{\sinh n\omega }{\sinh \omega }, &\cosh \omega = \vert \frac{\lambda }{2} \vert , \vert \frac{\lambda }{2} \vert >1. \end{cases} $$

The first case of \(y(n)\) generates all eigenvalues and eigenvectors of the first problem of (4.19), so we consider only the case \(|\lambda |\leqslant 2\). Let \(\phi (n,\lambda )=\frac{\sin n\sigma }{\sin \sigma }\). The eigenvalues of the first problem of (4.19) are then the solutions of \(\phi (N+1,\lambda )=0\), which gives \(\sigma _{k}=k\pi /(N+1)\); hence the eigenvalues and the eigenvectors are

$$ \lambda _{k}=2\cos \frac{k\pi }{N+1}, \qquad \phi _{k}(n)= \frac{\sin \frac{kn\pi }{N+1}}{\sin \frac{k\pi }{N+1}},\quad k=1,2, \ldots ,N. $$

The other values of k lead to the same eigenvalues. Similarly for the second problem of (4.19) we have

$$ \mu _{l}=2\cos \frac{l\pi }{M+1},\qquad \psi _{l}(m)= \frac{\sin \frac{lm\pi }{M+1}}{\sin \frac{l\pi }{M+1}},\quad l=1,2, \ldots ,M. $$

Here we have

$$ G(\lambda )= \frac{\sin ((N+1)\cos ^{-1}\frac{\lambda }{2} )}{\sin (\cos ^{-1}\frac{\lambda }{2} )},\qquad G'(\lambda _{k})=\frac{N+1}{2} \frac{(-1)^{k+1}}{\sin ^{2} (\frac{k\pi }{N+1} )}. $$

Also

$$ H(\mu )= \frac{\sin ((M+1)\cos ^{-1}\frac{\mu }{2} )}{\sin (\cos ^{-1}\frac{\mu }{2} )},\qquad H'(\mu _{l})= \frac{M+1}{2} \frac{(-1)^{l+1}}{\sin ^{2} (\frac{l\pi }{M+1} )}. $$

Thus, the transform

$$ F(\lambda ,\mu )=\sum_{n=1}^{N}\sum _{m=1}^{M} f(n,m) \frac{\sin ( n\cos ^{-1}\frac{\lambda }{2} )}{\sin (\cos ^{-1}\frac{\lambda }{2} )} \frac{\sin ( m\cos ^{-1}\frac{\mu }{2} )}{\sin (\cos ^{-1}\frac{\mu }{2} )},\quad f\in \ell ^{2}(\mathbb{I}), $$

has the expansion

$$ \begin{aligned} F(\lambda ,\mu )&=\frac{4}{(N+1)(M+1)}\sum _{k=1}^{N}\sum_{l=1}^{M}(-1)^{k+l} F \biggl(2\cos \frac{k\pi }{N+1},2\cos \frac{l\pi }{M+1} \biggr) \\ &\quad {}\times \frac{\sin ((N+1)\cos ^{-1}\frac{\lambda }{2} )\sin ^{2} (\frac{k\pi }{N+1} )}{ (\lambda -2\cos \frac{k\pi }{N+1} ) \sin (\cos ^{-1}\frac{\lambda }{2} )} \frac{\sin ((M+1)\cos ^{-1}\frac{\mu }{2} )\sin ^{2} (\frac{l\pi }{M+1} )}{ (\mu -2\cos \frac{l\pi }{M+1} ) \sin (\cos ^{-1}\frac{\mu }{2} )}. \end{aligned} $$
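Example 4.4 can be verified numerically for small N, M. The sketch below, assuming NumPy, checks the closed-form eigenvalues against the matrix form of the first problem of (4.19), and checks expansion (4.4) in its equivalent tensor-product Lagrange form, since \(G(\lambda )/((\lambda -\lambda _{k})G'(\lambda _{k}))\) is the k-th Lagrange basis polynomial at the nodes \(\{\lambda _{k}\}\).

```python
import numpy as np

# Numerical check of Example 4.4 for small N, M.
N, M = 6, 5
lam = 2 * np.cos(np.arange(1, N + 1) * np.pi / (N + 1))
mu = 2 * np.cos(np.arange(1, M + 1) * np.pi / (M + 1))

# y(n+1) + y(n-1) = lambda y(n), y(0) = y(N+1) = 0, as a matrix problem
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
assert np.allclose(np.sort(np.linalg.eigvalsh(A)), np.sort(lam))

def kernel(j, x):              # sin(j sigma)/sin(sigma), cos(sigma) = x/2
    s = np.arccos(x / 2.0)
    return np.sin(j * s) / np.sin(s)

f = np.random.default_rng(2).normal(size=(N, M))
F = lambda a, b: sum(f[n, m] * kernel(n + 1, a) * kernel(m + 1, b)
                     for n in range(N) for m in range(M))

def ell(x, nodes, k):          # one-dimensional Lagrange basis polynomial
    others = np.delete(nodes, k)
    return np.prod((x - others) / (nodes[k] - others))

a, b = 0.7, -0.4
approx = sum(F(lam[k], mu[l]) * ell(a, lam, k) * ell(b, mu, l)
             for k in range(N) for l in range(M))
assert abs(F(a, b) - approx) < 1e-7
```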

Example 4.5

Consider the partial difference problem (4.17) with the boundary conditions

$$ \begin{aligned} &Y(0,m)-Y(1,m)=0, \qquad Y(N+1,m)=0, \\ &Y(n,0)+Y(n,1)=0, \qquad Y(n,M+1)=0, \end{aligned} $$
(4.20)

which is separable into

$$ \begin{aligned} &y(n+1)+y(n-1) = \lambda y(n),\qquad y(0)-y(1)=0,\qquad y(N+1)=0,\quad n\in \mathbb{Z}_{N}, \\ &z(m+1)+z(m-1) = \mu z(m), \qquad z(0)+z(1)=0, \qquad z(M+1)=0, \quad m\in \mathbb{Z}_{M}, \end{aligned} $$
(4.21)

where \(t=\lambda +\mu \). The solutions which generate all the eigenfunctions of (4.21), in the notation of Sect. 2, are

$$ \phi (n,\lambda )=\frac{\cos (n-\frac{1}{2})\sigma }{\cos \frac{\sigma }{2}},\qquad \psi (m,\mu )= \frac{\sin (m-\frac{1}{2})\eta }{\sin \frac{\eta }{2}}, $$

where \(\cos \sigma =\frac{\lambda }{2}\), \(\cos \eta =\frac{\mu }{2}\). The solutions of \(\phi (N+1,\lambda )=0\), \(\psi (M+1,\mu )=0\) give \(\sigma _{k}=\frac{(2k-1)\pi }{2N+1}\) and \(\eta _{l}=\frac{2l\pi }{2M+1}\). Then the eigenvalues and the eigenvectors are

$$ \begin{aligned} &\lambda _{k} =2\cos \frac{(2k-1)\pi }{2N+1},\qquad \phi _{k}(n)= \frac{\cos \frac{(n-\frac{1}{2})(2k-1)\pi }{2N+1}}{\cos \frac{(2k-1)\pi }{2(2N+1)}},\quad k=1,2,\ldots ,N, \\ &\mu _{l} =2\cos \frac{2l\pi }{2M+1}, \qquad \psi _{l}(m)= \frac{\sin \frac{2(m-\frac{1}{2})l\pi }{2M+1}}{\sin \frac{2l\pi }{2(2M+1)}},\quad l=1,2,\ldots ,M. \end{aligned} $$

Here we have

$$ \begin{aligned} &G(\lambda ) = \frac{\cos ((N+\frac{1}{2})\cos ^{-1}\frac{\lambda }{2} )}{\cos (\frac{\cos ^{-1}\frac{\lambda }{2}}{2} )},\qquad G'( \lambda _{k})=\frac{N+\frac{1}{2}}{2} \frac{(-1)^{k-1}}{\cos \frac{(2k-1)\pi }{4N+2}\sin \frac{(2k-1)\pi }{2N+1}}, \\ &H(\mu ) = \frac{\sin ((M+\frac{1}{2})\cos ^{-1}\frac{\mu }{2} )}{\sin (\frac{\cos ^{-1}\frac{\mu }{2}}{2} )},\qquad H'(\mu _{l})= \frac{M+\frac{1}{2}}{2} \frac{(-1)^{l-1}}{\sin \frac{2l\pi }{4M+2}\sin \frac{2l\pi }{2M+1}}. \end{aligned} $$

If \(f\in \ell ^{2}(\mathbb{I})\), then the transform

$$ F(\lambda ,\mu )=\sum_{n=1}^{N}\sum _{m=1}^{M} f(n,m) \frac{\cos ((n-\frac{1}{2})\cos ^{-1}\frac{\lambda }{2} )}{\cos (\frac{\cos ^{-1}\frac{\lambda }{2}}{2} )} \frac{\sin ((m-\frac{1}{2})\cos ^{-1}\frac{\mu }{2} )}{\sin (\frac{\cos ^{-1}\frac{\mu }{2}}{2} )}, $$

has the expansion

$$ \begin{aligned} &F(\lambda ,\mu ) \\ &\quad =4\sum_{k=1}^{N} \sum_{l=1}^{M}(-1)^{k+l}F \biggl(2\cos \frac{(2k-1)\pi }{2N+1},2\cos \frac{2l\pi }{2M+1} \biggr) \\ &\qquad {}\times \frac{\cos ((N+\frac{1}{2})\cos ^{-1}\frac{\lambda }{2} )\cos \frac{(2k-1)\pi }{4N+2}\sin \frac{(2k-1)\pi }{2N+1}}{(N+\frac{1}{2}) (\lambda -2\cos \frac{(2k-1)\pi }{2N+1} ) \cos (\frac{\cos ^{-1}\frac{\lambda }{2}}{2} )} \frac{\sin ((M+\frac{1}{2})\cos ^{-1}\frac{\mu }{2} )\sin \frac{2l\pi }{4M+2}\sin \frac{2l\pi }{2M+1}}{(M+\frac{1}{2}) (\mu -2\cos \frac{2l\pi }{2M+1} ) \sin (\frac{\cos ^{-1}\frac{\mu }{2}}{2} )}. \end{aligned} $$

Availability of data and materials

Not applicable.

References

  1. Abd-Alla, M.Z., Annaby, M.H., Hassan, H.A., Ayad, H.A.: Difference operators, Green’s matrix, sampling theory and applications in signal processing. J. Differ. Equ. Appl. (2018). https://doi.org/10.1080/10236198.2018.1480613


  2. Ahmed, N., Rao, K.R.: Orthogonal Transforms for Digital Signal Processing. Springer, Berlin (1975)


  3. Annaby, M.H.: Sampling expansions for discrete transforms and their relationship with interpolation series. Analysis 18, 55–64 (1998)


  4. Annaby, M.H.: Finite Lagrange and Cauchy sampling expansions associated with regular difference equations. J. Differ. Equ. Appl. 4, 551–569 (1998)


  5. Annaby, M.H.: One and multidimensional sampling theorems associated with Dirichlet problems. Math. Methods Appl. Sci. 21, 361–374 (1998)


  6. Annaby, M.H.: Multivariate sampling theorems associated with multiparameter differential operators. Proc. Edinb. Math. Soc. 48, 257–277 (2005)


  7. Annaby, M.H., García, A.G., Hernández-Mendina, M.A.: On sampling and second order difference equations. Analysis 19, 79–92 (1999)


  8. Davis, P.J.: Interpolation and Approximation. Dover, New York (1975)


  9. García, A.G., Hernández-Mendina, M.A.: Finite sampling theorem associated with second order difference problem. In: Proceedings of the 1995 Workshop in Sampling and Applications, Aveiro, Portugal, pp. 385–390 (1997)


  10. García, A.G., Hernández-Mendina, M.A.: Sampling theorems and difference Sturm–Liouville problems. J. Differ. Equ. Appl. 2, 695–717 (2000)

    Article  MathSciNet  Google Scholar 

  11. García, A.G., Hernández-Mendina, M.A.: The discrete Kramer sampling theorem and indeterminate moment problems. J. Comput. Appl. Math. 134, 13–22 (2001)

    Article  MathSciNet  Google Scholar 

  12. Gosselin, R.P.: On the the \(L^{p}\)-theory of cardinal series. Ann. Math. 78, 567–581 (1963)

    Article  MathSciNet  Google Scholar 

  13. Hassan, H.A.: Sampling theorem by Green’s function in a space of vector-functions. Turk. J. Math. 41, 67–79 (2017)

    Article  MathSciNet  Google Scholar 

  14. Hori, T.: Higher-order sampling of 2-D frequency distributions by using Smith normal forms and Vandermonde determinants. Multidimens. Syst. Signal Process. (2019). https://doi.org/10.1007/s11045-019-00665-4

    Article  MATH  Google Scholar 

  15. Hori, T.: Mixed-order sampling of 2-D frequency distributions by using the concept of common superset. Multidimens. Syst. Signal Process. 30, 1237–1262 (2019)

    Article  MathSciNet  Google Scholar 

  16. Jirari, A.: Second-order Sturm–Liouville difference equations and orthogonal polynomials. Mem. Am. Math. Soc. 542, 1–138 (1995)

    MathSciNet  MATH  Google Scholar 

  17. Kreyszig, E.: Introductory Functional Analysis with Applications. Wiley, New York (1989)

    MATH  Google Scholar 

  18. Marvasti, F.: Interpolation of 2-D signals from their isolated zeros. Multidimens. Syst. Signal Process. 1, 87–97 (1990)

    Article  Google Scholar 

  19. Nering, E.D.: Linear Algebra and Matrix Theory. Wiley, New York (1970)

    MATH  Google Scholar 

  20. Phillips, G.M.: Interpolation and Approximation by Polynomials. Springer, New York (2003)

    Book  Google Scholar 

  21. Prosser, R.T.: A multidimensional sampling theorem. J. Math. Anal. Appl. 16, 574–584 (1966)

    Article  MathSciNet  Google Scholar 

  22. Titchmarsh, E.C.: Eigenfunct in Expansions Associated with Second-Order Differential Equations, Part II. Oxford University Press, London (1958)

    Google Scholar 

  23. Whittaker, E.: On the functions which are represented by the expansion of the interpolation theory. Proc. R. Soc. Edinb. A 35, 181–194 (1915)

    Article  Google Scholar 

  24. Zayed, A.I.: Kramer’s sampling theorems for multidimensional signals and its relationship with Lagrange-type interpolation. Multidimens. Syst. Signal Process. 3, 323–340 (1915)

    Article  MathSciNet  Google Scholar 

Acknowledgements

The author is very grateful to Professor M.H. Annaby for valuable suggestions during the work.

Funding

No funding available.

Author information

Contributions

The author worked on the derivation of the mathematical results and read and approved the final manuscript.

Corresponding author

Correspondence to H. A. Hassan.

Ethics declarations

Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Hassan, H.A. Multidimensional sampling theorems for multivariate discrete transforms. Adv Differ Equ 2021, 206 (2021). https://doi.org/10.1186/s13662-021-03368-y
