

Estimation of divergences on time scales via the Green function and Fink’s identity

Abstract

The aim of the present paper is to obtain new generalizations of an inequality for n-convex functions involving Csiszár divergence on time scales by using the Green function together with Fink’s identity. Some new results in h-discrete calculus and quantum calculus are also presented. Moreover, inequalities for some divergence measures are deduced.

1 Introduction

The theory of time scales was initiated by Hilger in 1988 as a framework that unifies difference and differential calculus in a consistent way. The books of Bohner and Peterson [8, 9] give a compact treatment and cover a large part of time scales calculus. The theory provides insight into, and a precise understanding of, the differences between discrete and continuous systems.

In the past years, new developments in the theory and applications of dynamic derivatives on time scales have emerged. Many results from the continuous case carry over to the discrete one very easily, while others turn out to be completely different. The study of time scales reveals such discrepancies and helps us understand the differences between the two cases; results in time scale calculus thus unify and extend their continuous and discrete counterparts. This hybrid theory is also used extensively for dynamic inequalities.

Various linear and nonlinear integral inequalities on time scales have been established by many authors [3, 4, 32, 35].

Quantum calculus, or q-calculus, is often called calculus without limits. In 1910, Jackson [18] described the q-analogues of the derivative and integral operators along with their applications; he was the first to develop q-calculus in a systematic form. It is worth noting that quantum integral inequalities are often more significant and constructive than their classical counterparts, mainly because they can capture the hereditary properties of the phenomena and techniques under consideration.

Recently, there has been rapid development in q-calculus. Consequently, new generalizations of the classical approach to quantum calculus have been proposed and analyzed in the literature; see [10, 17, 27, 44] and the references therein. The concepts of quantum calculus on finite intervals were introduced by Tariboon and Ntouyas [37, 38], who obtained q-analogues of certain classical mathematical objects, which motivated numerous researchers to explore the subject in detail. Subsequently, several new quantum counterparts of classical results have been established; see [7, 29, 34].

A divergence measure quantifies the distance between two probability distributions. Divergence measures are used to solve a variety of problems in probability theory, and several types that compare two probability distributions appear in statistics and information theory. Information and divergence measures are very useful and play a vital part in various areas, namely sensor networks [24], testing the order of a Markov chain [26], finance [33], economics [39], and the approximation of probability distributions [14]. Shannon entropy and related measures are often used in fields such as information theory, molecular ecology, population genetics, statistical physics, and dynamical systems (see [13, 25]). Kullback–Leibler divergence is one of the best known information divergences and is used in information theory, mathematical statistics, and signal processing (see [42]). Jeffreys distance and triangular discrimination have many applications in statistics, information theory, and pattern recognition (see [23, 40, 41]).

Recently, various types of bounds on distance, divergence, and information measures have been obtained (see [2, 6, 12, 15, 19, 22, 36] and the references therein). In [1], Adeel et al. generalized Levinson’s inequality for 3-convex functions by using two Green functions and applied the results to information theory via the f-divergence, Rényi divergence, and Shannon entropy. In [21], Khan et al. introduced a new functional based on the classical f-divergence functional and obtained estimates for the new functional, the f-divergence, and Rényi divergence. In [11], Butt et al. established new refinements of Popoviciu’s inequality for higher order convex functions utilizing Abel–Gontscharoff interpolation in combination with new Green functions; they obtained new inequalities for n-convex functions and gave applications in information theory by finding new estimates for relative, Shannon, and Zipf–Mandelbrot entropies.

Motivated by the above discussion, we generalize an inequality involving Csiszár divergence on time scales for n-convex functions by using the Green function along with Fink’s identity. In addition, we estimate Kullback–Leibler divergence, differential entropy, Shannon entropy, Jeffreys distance, and triangular discrimination on time scales by using the obtained results.

2 Preliminaries

Throughout this paper, assume that \(\mathbb{T}\) is a time scale, \(a, b \in\mathbb{T}\) with \(a < b\). The following definitions and results are given in [8].

For \(\zeta\in\mathbb{T}\), the forward jump operator \(\sigma: \mathbb {T} \rightarrow\mathbb{T}\) is defined as follows:

$$ \sigma(\zeta) = \inf\{\lambda\in\mathbb{T}: \lambda> \zeta\}. $$

A function \(g : \mathbb{T} \rightarrow\mathbb{R}\) is known as right-dense continuous (rd-continuous), provided it is continuous at right-dense points in \(\mathbb{T}\) and its left-sided limit exists (finite) at left-dense points in \(\mathbb{T}\). The set of all rd-continuous functions will be denoted in this paper by \(C_{rd}\). \(\mathbb{T}^{k}\) is defined as follows:

$$\mathbb{T}^{k} = \textstyle\begin{cases} \mathbb{T}\backslash(\rho(\sup\mathbb{T}), \sup\mathbb{T}] & \text{if }\sup\mathbb{T} < \infty,\\ \mathbb{T} &\text{if } \sup\mathbb{T} = \infty. \end{cases} $$

Suppose that \(g : \mathbb{T} \rightarrow\mathbb{R}\) and \(\zeta\in \mathbb{T}^{k}\). The delta derivative \(g^{\Delta}(\zeta)\) is defined to be the number (provided it exists) with the property that for each \(\epsilon> 0\) there exists a neighborhood U of ζ such that

$$ \bigl\vert g\bigl(\sigma(\zeta)\bigr) - g(\lambda) - g^{\Delta}(\zeta) \bigl(\sigma(\zeta) - \lambda\bigr) \bigr\vert \leq\epsilon\bigl\vert \sigma(\zeta) - \lambda\bigr\vert $$

holds for all \(\lambda\in U\). Then g is said to be delta differentiable at ζ.

For \(\mathbb{T}= \mathbb{R}\), \(g^{\Delta}\) is the usual derivative \(g^{\prime}\), while for \(\mathbb{T} = \mathbb{Z}\), \(g^{\Delta}\) turns into the forward difference operator \(\Delta g(\zeta) = g(\zeta+1) - g(\zeta)\). If \(\mathbb{T} = \overline{q^{\mathbb{Z}}} = \{q^{n}: n \in\mathbb{Z} \} \cup\{0\}\) with \(q > 1\), then \(g^{\Delta}\) becomes the so-called q-difference operator

$$ g^{\Delta}(\zeta) = \frac{g(q\zeta) - g(\zeta)}{(q-1)\zeta},\qquad g^{\Delta}(0) = \lim_{\lambda\rightarrow0} \frac{g(\lambda) - g(0)}{\lambda}. $$
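For readers who prefer a computational view, the short Python sketch below (our illustration, not taken from [8]; the function \(g(\zeta) = \zeta^{2}\) and all sample points are chosen arbitrarily) evaluates these three specializations of the delta derivative.

```python
# Illustration: delta derivative of g(z) = z**2 on three time scales.
def g(z):
    return z ** 2

# T = R: the usual derivative, approximated here by a central difference.
zeta = 1.5
usual = (g(zeta + 1e-6) - g(zeta - 1e-6)) / 2e-6            # ~ 2*zeta = 3.0

# T = Z: forward difference  Delta g(z) = g(z + 1) - g(z).
zeta_int = 3
forward = g(zeta_int + 1) - g(zeta_int)                     # 16 - 9 = 7

# T = q^Z (q > 1): q-difference  (g(q*z) - g(z)) / ((q - 1)*z).
q, zeta_q = 2.0, 4.0
q_diff = (g(q * zeta_q) - g(zeta_q)) / ((q - 1) * zeta_q)   # (64 - 16) / 4 = 12

print(usual, forward, q_diff)
```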

Theorem A

(Existence of antiderivatives)

Every rd-continuous function has an antiderivative. If \(x_{0} \in \mathbb{T}\), then F defined by

$$F(x):= \int_{x_{0}}^{x}f(\zeta)\Delta\zeta\quad\textit{for } x \in \mathbb{T}^{k} $$

is an antiderivative of f.

For \(\mathbb{T} = \mathbb{R}\), we obtain \(\int_{a}^{b}f(\zeta)\Delta\zeta= \int_{a}^{b}f(\zeta) d\zeta\), and if \(\mathbb{T} = \mathbb{N}\), then \(\int_{a}^{b}f(\zeta)\Delta\zeta= \sum_{\zeta=a}^{b-1}f(\zeta)\), where \(a, b \in\mathbb{T}\) with \(a\leq b\).
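As a quick numerical illustration of these two specializations (added here only as an example; \(f(\zeta) = \zeta\) and the interval \([1, 5]\) are arbitrary choices), consider the following Python sketch.

```python
# Illustration: the delta integral of f(z) = z on [a, b] = [1, 5].
def f(z):
    return z

a, b = 1, 5

# T = R: ordinary integral, approximated by a fine left Riemann sum (~ 12).
n = 100_000
h = (b - a) / n
integral_R = sum(f(a + k * h) * h for k in range(n))

# T = N (as in the text above): the delta integral reduces to the
# finite sum over zeta = a, ..., b - 1 (here 1 + 2 + 3 + 4 = 10).
integral_N = sum(f(k) for k in range(a, b))

print(integral_R, integral_N)
```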

3 Improvement of the inequality involving Csiszár divergence

Assume \(\mathbb{T}\) to be a time scale and consider the set of all probability densities on \(\mathbb{T}\) to be

$$ \Omega:= \biggl\{ p | p : [a, b]_{\mathbb{T}}\rightarrow[0, \infty), \int_{a}^{b}p(x) \Delta x \leq1 \biggr\} . $$

Let \(\zeta_{1}, \zeta_{2} \in\mathbb{R}\), where \(\zeta_{1} < \zeta _{2}\). Consider the Green function \(G : [\zeta_{1}, \zeta_{2}]\times [\zeta_{1}, \zeta_{2}] \rightarrow\mathbb{R}\) defined by

$$ G(x, s) = \textstyle\begin{cases} \frac{(x-\zeta_{2})(s-\zeta_{1})}{\zeta_{2}-\zeta_{1}}& \text{for } \zeta_{1} \leq s \leq x,\\ \frac{(s-\zeta_{2})(x-\zeta_{1})}{\zeta_{2}-\zeta_{1}}& \text{for } x \leq s \leq\zeta_{2}, \end{cases} $$
(1)

where G is continuous and convex with respect to both x and s. It is well known (see, for example, [20, 28, 30, 43]) that any function \(\Psi\in C^{2}([\zeta_{1}, \zeta_{2}],\mathbb{R})\) can be written as

$$ \Psi(x) = \frac{\zeta_{2}-x}{\zeta_{2}-\zeta_{1}} \Psi(\zeta _{1}) + \frac{x-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \Psi(\zeta_{2}) + \int_{\zeta_{1}}^{\zeta_{2}} G(x, s) \Psi^{\prime\prime}(s) \,ds, $$
(2)

where \(G(x, s)\) is defined in (1).
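A numerical sanity check of representation (2) may be helpful. The Python sketch below is illustrative only: the choices \(\Psi(x) = x^{4}\), \([\zeta_{1}, \zeta_{2}] = [1, 3]\), and \(x = 2\) are ours, and the integral is approximated by a midpoint rule.

```python
# Illustration: checking representation (2) with the Green function (1)
# for Psi(x) = x**4 on [zeta1, zeta2] = [1, 3] at x = 2 (arbitrary choices).
zeta1, zeta2 = 1.0, 3.0

def G(x, s):                      # the Green function (1)
    if s <= x:
        return (x - zeta2) * (s - zeta1) / (zeta2 - zeta1)
    return (s - zeta2) * (x - zeta1) / (zeta2 - zeta1)

def Psi(x):
    return x ** 4

def Psi2(s):                      # second derivative of Psi
    return 12 * s ** 2

x, steps = 2.0, 20_000
h = (zeta2 - zeta1) / steps
integral = sum(G(x, zeta1 + (k + 0.5) * h) * Psi2(zeta1 + (k + 0.5) * h) * h
               for k in range(steps))

rhs = ((zeta2 - x) / (zeta2 - zeta1)) * Psi(zeta1) \
    + ((x - zeta1) / (zeta2 - zeta1)) * Psi(zeta2) + integral
print(Psi(x), rhs)                # both values should be close to 16
```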

In [5], Ansari et al. proved the following inequality.

Theorem B

Let \(\Psi: [0, \infty) \rightarrow\mathbb{R}\) be a convex function on the interval \([\zeta_{1}, \zeta_{2}] \subset[0, \infty)\) and \(\zeta_{1}\leq1 \leq \zeta_{2}\). If \(p_{1}, p_{2} \in\Omega\) with \(\zeta_{1}\leq\frac{p_{1}(y)}{p_{2}(y)} \leq\zeta_{2}\) for all \(y \in\mathbb{T}\), then

$$ \int_{a}^{b}p_{2}(y)\Psi\biggl( \frac{p_{1}(y)}{p_{2}(y)} \biggr)\Delta y \leq\frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}} \Psi( \zeta_{1}) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \Psi(\zeta_{2}). $$
(3)
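The following Python sketch (our illustration; the densities \(p_{1}, p_{2}\) are hypothetical) checks inequality (3) in the simplest discrete setting, where the delta integral reduces to a finite sum and \(\Psi(x) = x^{2}\) is convex.

```python
# Illustration: inequality (3) on a four-point discrete time scale,
# where the delta integral is a finite sum (hypothetical densities).
p1 = [0.1, 0.4, 0.3, 0.2]
p2 = [0.25, 0.25, 0.25, 0.25]

def Psi(x):                       # convex on (0, infinity)
    return x * x

ratios = [u / v for u, v in zip(p1, p2)]
zeta1, zeta2 = min(ratios), max(ratios)          # 0.4 <= 1 <= 1.6

lhs = sum(v * Psi(u / v) for u, v in zip(p1, p2))
rhs = ((zeta2 - 1) / (zeta2 - zeta1)) * Psi(zeta1) \
    + ((1 - zeta1) / (zeta2 - zeta1)) * Psi(zeta2)
print(lhs, rhs, lhs <= rhs)       # 1.2  1.36  True
```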

Motivated by inequality (3), we begin with the following result.

Theorem 1

Under the assumptions of Theorem B, with \(\int_{a}^{b} p_{1}(y) \Delta y = \int_{a}^{b} p_{2}(y) \Delta y = 1\), inequality (3) is equivalent to

$$ \int_{a}^{b}p_{2}(y)G \biggl( \frac{p_{1}(y)}{p_{2}(y)}, s \biggr)\Delta y \leq\frac{\zeta _{2}-1}{\zeta_{2}-\zeta_{1}} G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s), $$
(4)

where \(G(\cdot, s)\) is defined in (1) and \(s \in[\zeta_{1}, \zeta_{2}]\). Moreover, if we reverse the inequality in both (3) and (4), then again (3) and (4) are equivalent.

Proof

Let (3) be valid. Since the function \(G(\cdot, s)\) \((s \in [\zeta_{1}, \zeta_{2}])\) is continuous and convex, (4) holds.

Conversely, let (4) be valid, and let \(\Psi\in C^{2}([\zeta_{1}, \zeta_{2}], \mathbb{R})\). Then, by using (2), one gets

$$\begin{aligned}& \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \Psi(\zeta_{1}) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \Psi(\zeta_{2})- \int_{a}^{b}p_{2}(y)\Psi\biggl( \frac{p_{1}(y)}{p_{2}(y)} \biggr)\Delta y \\& \quad = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1 }} \biggl[\frac{\zeta _{2}-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \Psi( \zeta_{1}) + \frac{\zeta_{1}-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \Psi(\zeta _{2}) + \int_{\zeta_{1}}^{\zeta_{2}} G(\zeta_{1}, s) \Psi^{\prime\prime}(s) \,ds \biggr] \\& \qquad {}+ \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \biggl[\frac{\zeta_{2}-\zeta _{2}}{\zeta_{2}-\zeta_{1}} \Psi( \zeta_{1}) + \frac{\zeta_{2}-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \Psi(\zeta _{2}) + \int_{\zeta_{1}}^{\zeta_{2}} G(\zeta_{2}, s) \Psi^{\prime\prime}(s) \,ds \biggr] \\& \qquad {}- \int_{a}^{b}p_{2}(y) \biggl[ \frac{\zeta_{2}-\frac{p_{1}(y)}{p_{2}(y)}}{\zeta_{2}-\zeta_{1}} \Psi (\zeta_{1}) + \frac{\frac{p_{1}(y)}{p_{2}(y)}-\zeta_{1}}{\zeta_{2}-\zeta _{1}} \Psi( \zeta_{2}) \\& \qquad {}+ \int_{\zeta_{1}}^{\zeta_{2}} G \biggl(\frac{p_{1}(y)}{p_{2}(y)}, s \biggr) \Psi^{\prime\prime}(s) \,ds \biggr]\Delta y. \end{aligned}$$
(5)

Utilize Fubini’s theorem with \(\int_{a}^{b} p_{1}(y) \Delta y = \int _{a}^{b} p_{2}(y) \Delta y = 1\) in (5) to obtain

$$\begin{aligned}& \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \Psi(\zeta_{1}) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \Psi(\zeta_{2})- \int_{a}^{b}p_{2}(y)\Psi\biggl( \frac{p_{1}(y)}{p_{2}(y)} \biggr)\Delta y \\& \quad = \int_{\zeta_{1}}^{\zeta_{2}} \biggl[\frac{\zeta_{2}-1}{\zeta_{2}-\zeta _{1}} G( \zeta_{1}, s)+ \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) \biggr] \Psi^{\prime\prime}(s) \,ds \\& \qquad {}- \int_{\zeta_{1}}^{\zeta_{2}}\Psi^{\prime\prime}(s) \biggl[ \int_{a}^{b} p_{2}(y)G \biggl( \frac{p_{1}(y)}{p_{2}(y)}, s \biggr)\Delta y \biggr]\,ds. \end{aligned}$$

For all \(s \in[\zeta_{1}, \zeta_{2}]\), if the function Ψ is convex, then \(\Psi^{\prime\prime}(s)\geq0\); hence inequality (3) holds for every convex function \(\Psi\in C^{2}([\zeta_{1}, \zeta_{2}],\mathbb{R})\). The last part of the theorem can be proved analogously. □

Remark 1

Under the assumptions of Theorem 1, the following two statements are equivalent:

\((c^{\prime}_{1})\):

If the function \(\Psi\in C([\zeta_{1}, \zeta _{2}],\mathbb{R})\) is concave, then the reverse inequality in (3) holds.

\((c^{\prime}_{2})\):

For all \(s \in[\zeta_{1}, \zeta_{2}]\), the reverse inequality in (4) holds.

In addition, if we reverse the inequality in both statements \((c^{\prime}_{1})\) and \((c^{\prime}_{2})\), then again \((c^{\prime}_{1})\) and \((c^{\prime}_{2})\) are equivalent.

Theorem 2

Under the conditions of Theorem 1, we define the following functional:

$$ \mathfrak{J}_{1}(\Psi) := \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \Psi(\zeta_{1}) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \Psi(\zeta_{2})- \int_{a}^{b}p_{2}(y)\Psi\biggl( \frac{p_{1}(y)}{p_{2}(y)} \biggr)\Delta y $$
(6)

if the inequality in (4) holds for all \(s \in[\zeta_{1}, \zeta_{2}]\).

Remark 2

Suppose that all the assumptions of Theorem 2 hold. If Ψ is continuous and convex, then \(\mathfrak{J}_{1}(\Psi) \geq0\).

The following theorem is proved by Fink in [16].

Theorem 3

Let \(f : [\zeta_{1}, \zeta_{2}]\rightarrow\mathbb{R}\), \(n\geq1\), be such that \(f^{(n-1)}\) is absolutely continuous on \([\zeta_{1}, \zeta_{2}]\), where \(\zeta_{1}, \zeta_{2} \in\mathbb{R}\). Then

$$\begin{aligned} f(x) =&\frac{n}{\zeta_{2}-\zeta_{1}} \int_{\zeta_{1}}^{\zeta_{2}}f(t)\,dt \\ &{}-\sum_{w=1}^{n-1} \biggl( \frac{n-w}{w!} \biggr) \biggl(\frac{f^{(w-1)}(\zeta_{1})(x-\zeta _{1})^{w}-f^{(w-1)}(\zeta_{2})(x-\zeta_{2})^{w}}{ \zeta_{2}-\zeta_{1}} \biggr) \\ &{}+\frac{1}{(n-1)!(\zeta_{2}-\zeta_{1})} \int_{\zeta_{1}}^{\zeta_{2}}(x-t)^{n-1}W^{[\zeta_{1}, \zeta_{2}]}(t, x)f^{(n)}(t)\,dt, \end{aligned}$$
(7)

where

$$\begin{aligned} W^{[\zeta_{1}, \zeta_{2}]}(t, x)= \textstyle\begin{cases} (t-\zeta_{1}), & \zeta_{1} \leq t \leq x \leq\zeta_{2}, \\ (t-\zeta_{2}), & \zeta_{1} \leq x < t \leq\zeta_{2}. \end{cases}\displaystyle \end{aligned}$$
(8)
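Fink’s identity (7) can be verified numerically. The sketch below is illustrative: the choices \(f(x) = e^{x}\), \(n = 3\), \([\zeta_{1}, \zeta_{2}] = [0, 2]\), and \(x = 0.7\) are arbitrary, and the integrals are approximated by a midpoint rule.

```python
import math

# Illustration: Fink's identity (7) for f(x) = exp(x), n = 3, on [0, 2]
# (arbitrary choices); the integrals are approximated by a midpoint rule.
zeta1, zeta2, n, x = 0.0, 2.0, 3, 0.7

f = [math.exp] * (n + 1)          # exp coincides with all of its derivatives

def W(t):                         # the kernel (8), with the fixed x above
    return t - zeta1 if t <= x else t - zeta2

def integrate(func, lo, hi, steps=20_000):
    h = (hi - lo) / steps
    return sum(func(lo + (k + 0.5) * h) * h for k in range(steps))

term1 = n / (zeta2 - zeta1) * integrate(f[0], zeta1, zeta2)
term2 = sum((n - w) / math.factorial(w)
            * (f[w - 1](zeta1) * (x - zeta1) ** w - f[w - 1](zeta2) * (x - zeta2) ** w)
            / (zeta2 - zeta1) for w in range(1, n))
term3 = integrate(lambda t: (x - t) ** (n - 1) * W(t) * f[n](t), zeta1, zeta2) \
        / (math.factorial(n - 1) * (zeta2 - zeta1))

print(f[0](x), term1 - term2 + term3)   # both ~ exp(0.7) = 2.0137...
```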

4 Interpolation of the functional involving Csiszár divergence by Fink’s identity

Theorem 4

Assume \(n \in\mathbb{Z}^{+}\), let \(\Psi: [\zeta_{1}, \zeta_{2}] \rightarrow\mathbb{R}\) be a function such that \(\Psi^{(n-1)}\) is absolutely continuous, and let \(\zeta_{1} \leq1 \leq\zeta_{2}\). If \(p_{1}, p_{2} \in\Omega\) with \(\zeta_{1} \leq\frac {p_{1}(y)}{p_{2}(y)} \leq\zeta_{2}\) for all \(y \in\mathbb{T}\), then we have the following new identity:

$$\begin{aligned} \mathfrak{J}_{1}\bigl(\Psi(x)\bigr) =& \frac{(n-2)(\Psi^{\prime}(\zeta_{2}) - \Psi^{\prime}(\zeta_{1}))}{\zeta _{2}-\zeta_{1}} \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr)\,ds \\ &{}+\frac{1}{\zeta_{2}-\zeta_{1}} \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr)\sum_{w=1}^{n-3} \biggl(\frac{n-w-2}{w!} \biggr) \\ &{}\times\bigl(\Psi^{(w+1)}(\zeta_{1}) (s- \zeta_{1})^{w}-\Psi^{(w+1)}( \zeta_{2}) (s-\zeta_{2})^{w} \bigr)\,ds + \frac{1}{(n-3)!(\zeta_{2}-\zeta_{1})} \\ &{}\times \int_{\zeta_{1}}^{\zeta_{2}}\Psi^{(n)}(t) \biggl( \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr) (s-t)^{n-3}W^{[\zeta_{1}, \zeta_{2}]}(t, s)\,ds \biggr)\,dt, \end{aligned}$$
(9)

where

$$ \mathfrak{J}_{1}\bigl(\Psi(x)\bigr) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \Psi(\zeta_{1}) + \frac{1-\zeta _{1}}{\zeta_{2}-\zeta_{1}} \Psi( \zeta_{2}) - \int_{a}^{b}p_{2}(y)\Psi\biggl( \frac{p_{1}(y)}{p_{2}(y)} \biggr)\Delta y, $$
(10)
$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} G(\zeta_{1}, s) + \frac{1-\zeta _{1}}{\zeta_{2}-\zeta_{1}} G( \zeta_{2}, s) - \int_{a}^{b}p_{2}(y)G \biggl( \frac{p_{1}(y)}{p_{2}(y)}, s \biggr)\Delta y. $$
(11)

Proof

Use (2) in (6) and the linearity of \(\mathfrak {J}_{1}(\cdot)\) to obtain

$$ \mathfrak{J}_{1}\bigl(\Psi(x)\bigr) = \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr)\Psi^{\prime\prime}(s)\,ds. $$
(12)

Replacing n by \(n - 2\), f by \(\Psi^{\prime\prime}\), and x by s in (7), one gets

$$\begin{aligned} \Psi^{\prime\prime}(s) =&\frac{(n-2)(\Psi^{\prime}(\zeta _{2}) - \Psi^{\prime}(\zeta_{1}))}{\zeta_{2}-\zeta_{1}} \\ &{}-\sum_{w=1}^{n-3} \biggl( \frac{n-w-2}{w!} \biggr) \biggl(\frac{\Psi^{(w+1)}(\zeta_{1})(s-\zeta _{1})^{w}-\Psi^{(w+1)}(\zeta_{2})(s-\zeta_{2})^{w}}{ \zeta_{2}-\zeta_{1}} \biggr) \\ &{}+\frac{1}{(n-3)!(\zeta_{2}-\zeta_{1})} \int_{\zeta_{1}}^{\zeta_{2}}(s-t)^{n-3}W^{[\zeta_{1}, \zeta_{2}]}(t, s)\Psi^{(n)}(t)\,dt. \end{aligned}$$
(13)

Use (13) in (12) and rearrange the indices to have

$$\begin{aligned} \mathfrak{J}_{1}\bigl(\Psi(x)\bigr) =& \frac{(n-2)(\Psi^{\prime}(\zeta_{2}) - \Psi^{\prime}(\zeta_{1}))}{\zeta _{2}-\zeta_{1}} \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr)\,ds \\ &{}+ \sum_{w=1}^{n-3} \biggl( \frac{n-w-2}{w!} \biggr) \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr) \\ &{}\times\biggl(\frac{\Psi^{(w+1)}(\zeta_{1})(s-\zeta_{1})^{w}-\Psi ^{(w+1)}(\zeta_{2})(s-\zeta_{2})^{w}}{ \zeta_{2}-\zeta_{1}} \biggr)\,ds +\frac{1}{(n-3)!(\zeta_{2}-\zeta_{1})} \\ &{}\times \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr) \biggl( \int_{\zeta_{1}}^{\zeta_{2}}(s-t)^{n-3}W^{[\zeta_{1}, \zeta_{2}]}(t, s)\Psi^{(n)}(t)\,dt \biggr)\,ds. \end{aligned}$$
(14)

Utilize Fubini’s theorem on the last term of (14) to obtain (9). □
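The key step (12) used in the above proof can also be checked numerically. The Python sketch below is an illustration with hypothetical discrete densities (so the delta integral in (6) becomes a finite sum) and the smooth test function \(\Psi(x) = x^{4}\); it compares \(\mathfrak{J}_{1}(\Psi)\) with \(\int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1}(G(\cdot, s))\Psi^{\prime\prime}(s)\,ds\).

```python
# Illustration: identity (12) for a discrete example (hypothetical densities),
# with Psi(x) = x**4; the ds-integral is approximated by a midpoint rule.
p1 = [0.1, 0.4, 0.3, 0.2]
p2 = [0.25, 0.25, 0.25, 0.25]
ratios = [u / v for u, v in zip(p1, p2)]
zeta1, zeta2 = min(ratios), max(ratios)          # 0.4 and 1.6, so zeta1 <= 1 <= zeta2

def G(x, s):                                     # the Green function (1)
    if s <= x:
        return (x - zeta2) * (s - zeta1) / (zeta2 - zeta1)
    return (s - zeta2) * (x - zeta1) / (zeta2 - zeta1)

def Psi(x):
    return x ** 4

def Psi2(s):                                     # second derivative of Psi
    return 12 * s ** 2

def J1_Psi():                                    # the functional (10), discrete case
    return ((zeta2 - 1) / (zeta2 - zeta1)) * Psi(zeta1) \
         + ((1 - zeta1) / (zeta2 - zeta1)) * Psi(zeta2) \
         - sum(v * Psi(u / v) for u, v in zip(p1, p2))

def J1_G(s):                                     # the functional (11), discrete case
    return ((zeta2 - 1) / (zeta2 - zeta1)) * G(zeta1, s) \
         + ((1 - zeta1) / (zeta2 - zeta1)) * G(zeta2, s) \
         - sum(v * G(u / v, s) for u, v in zip(p1, p2))

steps = 20_000
h = (zeta2 - zeta1) / steps
rhs = sum(J1_G(zeta1 + (k + 0.5) * h) * Psi2(zeta1 + (k + 0.5) * h) * h
          for k in range(steps))
print(J1_Psi(), rhs)                             # the two numbers should agree (~ 1.024)
```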

Example 1

Choose \(\mathbb{T} = \mathbb{R}\) in Theorem 4 to get the same result as one can obtain from [15, (2.1)] by utilizing (1) and (7).

Example 2

Put \(\mathbb{T} = h\mathbb{Z}~(h>0)\) in Theorem 4 to obtain a new identity in h-discrete calculus with the following values:

$$ \mathfrak{J}_{1}\bigl(\Psi(x)\bigr) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta _{1}} \Psi( \zeta_{1}) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \Psi(\zeta_{2}) - \sum_{j=\frac{a}{h}}^{\frac{b}{h} - 1} p_{2}(jh)h \Psi\biggl(\frac{p_{1}(jh)}{p_{2}(jh)} \biggr) $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \sum_{j=\frac{a}{h}}^{\frac{b}{h} - 1} p_{2}(jh)h ~G \biggl(\frac{p_{1}(jh)}{p_{2}(jh)}, s \biggr). $$

Remark 3

Choose \(h = 1\) in Example 2. Suppose that \(a = 0, b = n, p_{1}(j) = (p_{1})_{j}\), and \(p_{2}(j) = (p_{2})_{j}\) to get a new identity in the discrete case with the following values:

$$ \mathfrak{J}_{1}\bigl(\Psi(x)\bigr) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta _{1}} \Psi( \zeta_{1}) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \Psi(\zeta_{2}) - \sum_{j = 1}^{n} (p_{2})_{j} \Psi\biggl(\frac{(p_{1})_{j}}{(p_{2})_{j}} \biggr) $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \sum_{j = 1}^{n} (p_{2})_{j} ~G \biggl(\frac{(p_{1})_{j}}{(p_{2})_{j}}, s \biggr). $$

Example 3

Use \(\mathbb{T} = q^{\mathbb{N}_{0}}~(q > 1), a = q^{l}\), and \(b = q^{n}\) with \(l < n\) in Theorem 4 to obtain a new identity in q-calculus with the following values:

$$ \mathfrak{J}_{1}\bigl(\Psi(x)\bigr) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \Psi(\zeta_{1}) + \frac{1-\zeta _{1}}{\zeta_{2}-\zeta_{1}} \Psi( \zeta_{2}) - \sum_{j=l}^{n - 1} q^{j+1} p_{2}\bigl(q^{j}\bigr) \Psi\biggl( \frac{p_{1}(q^{j})}{p_{2}(q^{j})} \biggr) $$
(15)

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \sum_{j=l}^{n - 1} q^{j+1}p_{2} \bigl(q^{j}\bigr)G \biggl(\frac{p_{1}(q^{j})}{p_{2}(q^{j})}, s \biggr). $$

As a consequence of the identities obtained above, the following theorem yields a generalization of inequalities involving Csiszár divergence on time scales for n-convex \((n \geq3)\) functions.

Theorem 5

Assume the conditions of Theorem 4. Also suppose that Ψ is an n-convex function such that \(\Psi^{(n-1)}\) is absolutely continuous. If

$$ \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr) (s-t)^{n-3}W^{[\zeta_{1}, \zeta_{2}]}(t, s)\,ds \geq0, \quad t \in[\zeta_{1}, \zeta_{2}], $$
(16)

then

$$\begin{aligned} \mathfrak{J}_{1}\bigl(\Psi(x)\bigr) \geq& \frac{(n-2)(\Psi^{\prime}(\zeta_{2}) - \Psi^{\prime}(\zeta_{1}))}{\zeta _{2}-\zeta_{1}} \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr)\,ds \\ &{}+ \frac{1}{\zeta_{2}-\zeta_{1}} \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr) \sum_{w=1}^{n-3} \biggl(\frac{n-w-2}{w!} \biggr) \\ &{}\times\bigl(\Psi^{(w+1)}(\zeta_{1}) (s- \zeta_{1})^{w}-\Psi^{(w+1)}( \zeta_{2}) (s-\zeta_{2})^{w} \bigr)\,ds. \end{aligned}$$
(17)

Proof

Since \(\Psi^{(n-1)}\) is absolutely continuous on \([\zeta_{1}, \zeta_{2}]\), \(\Psi^{(n)}\) exists almost everywhere. As Ψ is n-convex, we have \(\Psi^{(n)}(x)\geq0\) for almost every \(x \in[\zeta_{1}, \zeta_{2}]\) (see [31, p. 16]). Thus Theorem 4 yields the required result. □
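The n-convexity hypotheses used in the applications to information theory below (Theorems 7–10) can be inspected symbolically. The following sketch is illustrative only: it assumes the sympy library is available and merely samples the sixth derivative at a few points rather than giving a proof.

```python
import sympy as sp

# Illustration: sixth derivatives of the functions used in Theorems 7-10;
# positivity of Psi^(6) on (0, infinity) is behind the claims of n-convexity
# for n = 6 (sympy is assumed to be available).
x = sp.symbols('x', positive=True)
functions = [
    -sp.log(x),                    # Theorem 7 (entropy)
    x * sp.log(x),                 # Theorem 8 (Kullback-Leibler divergence)
    (x - 1) * sp.log(x),           # Theorem 9 (Jeffreys distance)
    (x - 1) ** 2 / (x + 1),        # Theorem 10 (triangular discrimination)
]
for Psi in functions:
    d6 = sp.simplify(sp.diff(Psi, x, 6))
    samples = [float(d6.subs(x, v)) for v in (0.5, 1, 2, 5)]
    print(Psi, d6, all(v > 0 for v in samples))
```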

Theorem 6

Suppose that all the assumptions of Theorem 4 hold. Let \(\Psi\in C^{n}[\zeta_{1}, \zeta_{2}]\) be such that \(\Psi^{(n-1)}\) is absolutely continuous. Moreover, for the functional \(\mathfrak{J}_{1}(\cdot)\) given in (6), we get the following:

\((i)\) Inequality (17) holds provided that n is even and \(n \geq4\).

\((ii)\) Let inequality (17) be satisfied and

$$ \sum_{w=1}^{n-3} \biggl(\frac{n-w-2}{w!} \biggr) \bigl(\Psi^{(w+1)}( \zeta_{1}) (s-\zeta_{1})^{w}- \Psi^{(w+1)}(\zeta_{2}) (s-\zeta_{2})^{w} \bigr) \geq0 $$
(18)

for all \(s \in[\zeta_{1}, \zeta_{2}]\). Then

$$ \mathfrak{J}_{1}\bigl(\Psi(\cdot)\bigr) \geq0. $$
(19)

Proof

It is obvious that the Green function \(G(\cdot, s)\) given in (1) is convex. Therefore, by applying Theorem 2 and using Remark 2, one has \(\mathfrak{J}_{1}(G(\cdot, s)) \geq0\).

\((i)\) For even \(n \geq4\), \((s-t)^{n-3}W^{[\zeta_{1}, \zeta_{2}]}(t, s) \geq0\) for all \(s, t \in[\zeta_{1}, \zeta_{2}]\), which together with \(\mathfrak{J}_{1}(G(\cdot, s)) \geq0\) gives (16). As Ψ is n-convex, Theorem 5 yields (17).

\((ii)\) Use (18) in (17) to get (19). □

Remark 4

Grüss-, Čebyšev-, and Ostrowski-type bounds corresponding to the obtained generalizations can also be deduced.

5 Application to information theory

Shannon entropy is a fundamental quantity in information theory and is often used as a measure of uncertainty. The entropy of a random variable is defined in terms of its probability distribution and can serve as a good measure of uncertainty or predictability. Shannon entropy also allows one to estimate the average minimum number of bits needed to encode a string of symbols, based on the alphabet size and the frequencies of the symbols.

5.1 Differential entropy on time scales

Consider a positive density function p on a time scale associated with a continuous random variable X such that \(\int_{a}^{b} p(\zeta)\Delta\zeta = 1\), whenever the integral exists.

In [4], Ansari et al. defined the so-called differential entropy on a time scale by

$$ h_{\bar{b}}(X) := \int_{a}^{b} p(\zeta) \log\frac{1}{p(\zeta)} \Delta\zeta\quad(\bar{b}>1). $$
(20)
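For orientation, the sketch below is an illustration with hypothetical data: taking \(\mathbb{T} = \mathbb{Z}\), the delta integral in (20) becomes a finite sum, and the logarithm base is chosen as \(\bar{b} = 2\).

```python
import math

# Illustration: the differential entropy (20) when T = Z, so the delta integral
# is a finite sum; hypothetical four-point density, logarithm base b_bar = 2.
p = [0.1, 0.4, 0.3, 0.2]          # sums to 1
b_bar = 2
entropy = sum(pi * math.log(1 / pi, b_bar) for pi in p)
print(entropy)                    # ~ 1.85 bits
```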

Theorem 7

Let X be a continuous random variable and \(p_{1}, p_{2} \in\Omega\) with \(\zeta_{1}\leq\frac{p_{1}(y)}{p_{2}(y)} \leq\zeta_{2}\) for all \(y \in\mathbb{T}\). If n is even \((n = 6, 8,\ldots)\), then

$$\begin{aligned} \mathfrak{J}_{1}(\cdot) \geq&\frac {(n-2)}{\zeta_{1}\zeta_{2}} \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr)\,ds + \frac{1}{\zeta_{2}-\zeta_{1}} \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr) \\ &{}\times \sum_{w=1}^{n-3}(-1)^{w} (n-w-2) \biggl[-\frac{(s-\zeta_{1})^{w}}{(\zeta_{1})^{w+1}} + \frac {(s-\zeta_{2})^{w}}{(\zeta_{2})^{w+1}} \biggr]\,ds, \end{aligned}$$
(21)

where

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}} \bigl(-\log(\zeta_{1})\bigr) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \bigl(-\log( \zeta_{2})\bigr)+ \int_{a}^{b}p_{2}(y)\log p_{1}(y)\Delta y + \tilde{h}_{\bar{b}}(X) $$
(22)

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \int_{a}^{b}p_{2}(y)G \biggl( \frac{p_{1}(y)}{p_{2}(y)}, s \biggr) \Delta y. $$

Proof

It is obvious that the Green function \(G(\cdot, s)\) given in (1) is convex; therefore, by using Remark 2, \(\mathfrak{J}_{1}(G(\cdot, s))\geq0\). Let

$$\begin{aligned} \Phi(s):=(s-t)^{n-3}W^{[\zeta_{1}, \zeta_{2}]}(t, s)= \textstyle\begin{cases} (s-t)^{n-3}(t-\zeta_{1}), & \zeta_{1} \leq t \leq s \leq\zeta_{2}, \\ (s-t)^{n-3}(t-\zeta_{2}), & \zeta_{1} \leq s < t \leq\zeta_{2}. \end{cases}\displaystyle \end{aligned}$$

Consequently,

$$\begin{aligned} \Phi^{''}(s):= \textstyle\begin{cases} (n-3)(n-4)(s-t)^{n-5}(t-\zeta_{1}), & \zeta_{1} \leq t \leq s \leq \zeta_{2}, \\ (n-3)(n-4)(s-t)^{n-5}(t-\zeta_{2}), & \zeta_{1} \leq s \leq t \leq \zeta_{2}. \end{cases}\displaystyle \end{aligned}$$

Since Φ is n-convex for even n, where \(n > 4\), (16) holds for even values of \(n \geq6\). The function \(\Psi(x) = -\log x\) is n-convex for \(n = 6, 8, \ldots\) . Use \(\Psi(x) = -\log x\) in Theorem 5 to get (21), where \(\tilde{h}_{\bar{b}}(X)\) is given in (20). □

Example 4

Choose \(\mathbb{T} = \mathbb{R}\) in Theorem 7 to have a new inequality with the following values:

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \bigl(-\log( \zeta_{1})\bigr) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \bigl(-\log( \zeta_{2})\bigr)+ \int_{a}^{b}p_{2}(y)\log p_{1}(y) \,dy + h_{\bar{b}}(X), $$

where

$$ h_{\bar{b}}(X) := \int_{a}^{b} p_{2}(y) \log \frac{1}{p_{2}(y)} \,dy $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \int_{a}^{b}p_{2}(y)G \biggl( \frac{p_{1}(y)}{p_{2}(y)}, s \biggr) \,dy. $$

Example 5

Choose \(\mathbb{T} = h\mathbb{Z}, h > 0\) in Theorem 7 to get a new inequality for the Shannon entropy in h-discrete calculus with the following values:

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \bigl(-\log( \zeta_{1})\bigr) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \bigl(-\log( \zeta_{2})\bigr) + \sum_{j =\frac{a}{h}}^{\frac{b}{h} - 1} p_{2}(jh)h~ \log\bigl[p_{1}(jh)h\bigr] + \tilde{S}, $$

where

$$ \tilde{S} := \sum_{j=\frac{a}{h}}^{\frac{b}{h} - 1} p_{2}(jh)h~ \log\frac{1}{p_{2}(jh)h} $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \sum_{j=\frac{a}{h}}^{\frac{b}{h} - 1} p_{2}(jh)h ~G \biggl(\frac{p_{1}(jh)}{p_{2}(jh)}, s \biggr). $$

Remark 5

Choose \(h = 1\) in Example 5. Suppose that \(a = 0,~b = n,~p_{1}(j) = (p_{1})_{j}\), and \(p_{2}(j) = (p_{2})_{j}\) to get a new inequality involving the discrete Shannon entropy with the following values:

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \bigl(-\log( \zeta_{1})\bigr) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \bigl(-\log( \zeta_{2})\bigr) + \sum_{j = 1}^{n} (p_{2})_{j}~ \log(p_{1})_{j} + S, $$

where

$$ S := \sum_{j = 1}^{n} (p_{2})_{j}~ \log\frac{1}{(p_{2})_{j}} $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \sum_{j = 1}^{n} (p_{2})_{j} ~G \biggl(\frac{(p_{1})_{j}}{(p_{2})_{j}}, s \biggr). $$

Example 6

Choose \(\mathbb{T} = q^{\mathbb{N}_{0}} (q > 1), a = q^{l}\), and \(b = q^{n}\) with \(l < n\) in Theorem 7 to obtain a new inequality for the Shannon entropy in q-calculus with the following values:

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \bigl(-\log( \zeta_{1})\bigr) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \bigl(-\log( \zeta_{2})\bigr) + \sum_{j = l}^{n - 1} q^{j+1}p_{2}\bigl(q^{j}\bigr)~ \log \bigl[p_{1}\bigl(q^{j}\bigr)\bigr] + S_{q}, $$

where

$$ S_{q} := \sum_{j = l}^{n - 1} q^{j+1}p_{2}\bigl(q^{j}\bigr)~ \log \frac{1}{p_{2}(q^{j})} $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \sum_{j = l}^{n - 1} q^{j+1}p_{2} \bigl(q^{j}\bigr) ~G \biggl(\frac{p_{1}(q^{j})}{p_{2}(q^{j})}, s \biggr). $$

5.2 Kullback–Leibler divergence

Kullback–Leibler divergence on time scales is defined in [5] as follows:

$$ D(p_{1}, p_{2}) = \int_{a}^{b} p_{1}(\zeta) \ln \biggl[\frac{p_{1}(\zeta)}{p_{2}(\zeta)} \biggr]\Delta\zeta. $$
(23)
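In the simplest discrete case the delta integral in (23) is a finite sum; the sketch below (hypothetical densities, natural logarithm) computes the resulting Kullback–Leibler divergence.

```python
import math

# Illustration: the Kullback-Leibler divergence (23) when T = Z, i.e. a finite
# sum; the densities are hypothetical and the natural logarithm is used.
p1 = [0.1, 0.4, 0.3, 0.2]
p2 = [0.25, 0.25, 0.25, 0.25]
D = sum(u * math.log(u / v) for u, v in zip(p1, p2))
print(D)                          # ~ 0.106; D >= 0, with equality iff p1 = p2
```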

Theorem 8

Let X be a continuous random variable and \(p_{1}, p_{2} \in\Omega\) with \(\zeta_{1}\leq\frac{p_{1}(y)}{p_{2}(y)} \leq\zeta_{2}\) for all \(y \in\mathbb{T}\). If n is even \((n = 6, 8,\ldots)\), then

$$\begin{aligned} \mathfrak{J}_{1}(\cdot) \geq&\frac {(n-2)(\ln\zeta_{2} - \ln\zeta_{1})}{\zeta_{2}-\zeta_{1}} \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr)\,ds + \frac{1}{\zeta_{2}-\zeta_{1}} \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr) \\ &{}\times \sum_{w=1}^{n-3}(-1)^{w-1} \biggl(\frac{n-w-2}{w} \biggr) \biggl[\frac{(s-\zeta_{1})^{w}}{\zeta ^{w}_{1}} - \frac{(s-\zeta_{2})^{w}}{\zeta^{w}_{2}} \biggr]\,ds, \end{aligned}$$
(24)

where

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \bigl( \zeta_{1}\ln(\zeta_{1})\bigr) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \bigl(\zeta_{2}\ln(\zeta_{2})\bigr) - D(p_{1}, p_{2}) $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \int_{a}^{b}p_{2}(y)G \biggl( \frac{p_{1}(y)}{p_{2}(y)}, s \biggr) \Delta y, $$

where \(D(p_{1}, p_{2})\) is given in (23).

Proof

The function \(\Psi(x) = x\ln x\) is n-convex for \(n = 6, 8, \ldots\) . Use \(\Psi(x) = x\ln x \) in Theorem 5 to get (24). □

Example 7

Choose \(\mathbb{T} = \mathbb{R}\) in Theorem 8 to have a new inequality in classical calculus with the following values:

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \bigl( \zeta_{1}\ln(\zeta_{1})\bigr) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \bigl(\zeta_{2}\ln(\zeta_{2})\bigr) - D_{KL}(p_{1}, p_{2}), $$

where

$$ D_{KL}(p_{1}, p_{2}) := \int_{a}^{b} p_{1}(y) \ln \frac{p_{1}(y)}{p_{2}(y)} \,dy $$

is Kullback–Leibler divergence and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \int_{a}^{b}p_{2}(y)G \biggl( \frac{p_{1}(y)}{p_{2}(y)}, s \biggr) \,dy. $$

Example 8

Choose \(\mathbb{T} = h\mathbb{Z}~(h>0)\) in Theorem 8 to get a new inequality in h-discrete calculus with the following values:

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \bigl( \zeta_{1}\ln(\zeta_{1})\bigr) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \bigl(\zeta_{2}\ln(\zeta_{2})\bigr) - \tilde{D}_{KL}(p_{1}, p_{2}), $$

where

$$ \tilde{D}_{KL}(p_{1}, p_{2}) := \sum _{j=\frac{a}{h}}^{\frac{b}{h} - 1} p_{1}(jh)h \ln \frac{p_{1}(jh)}{p_{2}(jh)} $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \sum_{j=\frac{a}{h}}^{\frac{b}{h} - 1} p_{2}(jh)h ~G \biggl(\frac{p_{1}(jh)}{p_{2}(jh)}, s \biggr). $$

Remark 6

Choose \(h = 1\) in Example 8. Suppose that \(a = 0, b = n, p_{1}(j) = (p_{1})_{j}\), and \(p_{2}(j) = (p_{2})_{j}\) to get a new inequality involving discrete Kullback–Leibler divergence with the following values:

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \bigl( \zeta_{1}\ln(\zeta_{1})\bigr) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \bigl(\zeta_{2}\ln(\zeta_{2})\bigr) - KL(p_{1}, p_{2}), $$

where

$$ KL(p_{1}, p_{2}) := \sum_{j = 1}^{n} (p_{1})_{j} \ln\frac{(p_{1})_{j}}{(p_{2})_{j}} $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \sum_{j = 1}^{n} (p_{2})_{j} ~G \biggl(\frac{(p_{1})_{j}}{(p_{2})_{j}}, s \biggr). $$

Example 9

Choose \(\mathbb{T} = q^{\mathbb{N}_{0}} (q > 1), a = q^{l}\), and \(b = q^{n}\) with \(l < n\) in Theorem 8 to have a new inequality involving Kullback–Leibler divergence in q-calculus with the following values:

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \bigl( \zeta_{1}\ln(\zeta_{1})\bigr) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} \bigl(\zeta_{2}\ln(\zeta_{2})\bigr) - KL_{q}(p_{1}, p_{2}), $$

where

$$ KL_{q}(p_{1}, p_{2}) := \sum _{j = l}^{n - 1} q^{j+1}p_{1} \bigl(q^{j}\bigr)~ \ln\frac{p_{1}(q^{j})}{p_{2}(q^{j})} $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \sum_{j = l}^{n - 1} q^{j+1}p_{2} \bigl(q^{j}\bigr) ~G \biggl(\frac{p_{1}(q^{j})}{p_{2}(q^{j})}, s \biggr). $$

5.3 Jeffreys distance

Jeffreys distance on time scale is defined in [5] as follows:

$$ D_{J}(p_{1}, p_{2}) := \int_{a}^{b} \bigl(p_{1}(\zeta) - p_{2}(\zeta)\bigr) \ln\biggl[\frac{p_{1}(\zeta)}{p_{2}(\zeta)} \biggr]\Delta \zeta. $$
(25)
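As with the Kullback–Leibler divergence, the discrete specialization of (25) is a finite sum; the sketch below (same hypothetical densities as before) evaluates it and illustrates that the Jeffreys distance is symmetric in \(p_{1}\) and \(p_{2}\).

```python
import math

# Illustration: the Jeffreys distance (25) when T = Z (finite sum, hypothetical
# densities).  Note that D_J(p1, p2) = D(p1, p2) + D(p2, p1), so it is symmetric.
p1 = [0.1, 0.4, 0.3, 0.2]
p2 = [0.25, 0.25, 0.25, 0.25]
D_J = sum((u - v) * math.log(u / v) for u, v in zip(p1, p2))
print(D_J)                        # ~ 0.228, and the same value with p1, p2 swapped
```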

Theorem 9

Let X be a continuous random variable and \(p_{1}, p_{2} \in\Omega\) with \(\zeta_{1}\leq\frac{p_{1}(y)}{p_{2}(y)} \leq\zeta_{2}\) for all \(y \in\mathbb{T}\). If n is even \((n = 6, 8,\ldots)\), then

$$\begin{aligned} \mathfrak{J}_{1}(\cdot) \geq& (n-2) \biggl( \frac{\ln\zeta_{2} - \ln\zeta_{1}}{\zeta_{2} - \zeta_{1}} + \frac{1}{\zeta_{1}\zeta_{2}} \biggr) \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr)\,ds + \frac{1}{\zeta_{2}-\zeta_{1}} \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr) \\ &{}\times \sum_{w=1}^{n-3}(-1)^{w+1} \biggl(\frac{n-w-2}{w} \biggr) \biggl[ \biggl(\frac{w}{\zeta^{w+1}_{1}} + \frac{1}{\zeta^{w}_{1}} \biggr) (s-\zeta_{1})^{w} \\ &{}- \biggl(\frac{w}{\zeta^{w+1}_{2}} + \frac{1}{\zeta^{w}_{2}} \biggr) (s- \zeta_{2})^{w} \biggr]\,ds, \end{aligned}$$
(26)

where

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} ( \zeta_{1}-1)\ln(\zeta_{1}) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}}( \zeta_{2} - 1)\ln(\zeta_{2}) - D_{J}(p_{1}, p_{2}) $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \int_{a}^{b}p_{2}(y)G \biggl( \frac{p_{1}(y)}{p_{2}(y)}, s \biggr) \Delta y, $$

where \(D_{J}(p_{1}, p_{2})\) is given in (25).

Proof

The function \(\Psi(x) = (x-1)\ln x\) is n-convex for \(n = 6, 8, \ldots\) . Use \(\Psi(x) = (x-1)\ln x \) in Theorem 5 to get (26). □

Example 10

Choose \(\mathbb{T} = \mathbb{R}\) in Theorem 9 to have a new inequality in classical calculus with the following values:

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} ( \zeta_{1} - 1)\ln(\zeta_{1}) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} ( \zeta_{2}-1)\ln(\zeta_{2}) - D_{J_{a}}(p_{1}, p_{2}), $$

where

$$ D_{J_{a}}(p_{1}, p_{2}) := \int_{a}^{b} \bigl[p_{1}(y)-p_{2}(y) \bigr] \ln\frac{p_{1}(y)}{p_{2}(y)} \,dy $$

is Jeffreys distance and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \int_{a}^{b}p_{2}(y)G \biggl( \frac{p_{1}(y)}{p_{2}(y)}, s \biggr) \,dy. $$

Example 11

Choose \(\mathbb{T} = h\mathbb{Z}~(h>0)\) in Theorem 9 to get a new inequality in h-discrete calculus with the following values:

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} ( \zeta_{1}-1)\ln(\zeta_{1}) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} ( \zeta_{2}-1)\ln(\zeta_{2}) - \tilde{D}_{J_{a}}(p_{1}, p_{2}), $$

where

$$ \tilde{D}_{J_{a}}(p_{1}, p_{2}) := \sum _{j=\frac{a}{h}}^{\frac{b}{h} - 1} (p_{1}- p_{2}) (jh)h \ln\frac{p_{1}(jh)}{p_{2}(jh)} $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \sum_{j=\frac{a}{h}}^{\frac{b}{h} - 1} p_{2}(jh)h ~G \biggl(\frac{p_{1}(jh)}{p_{2}(jh)}, s \biggr). $$

Remark 7

Put \(h = 1\) in Example 11. Suppose that \(a = 0, b = n, p_{1}(j) = (p_{1})_{j}\), and \(p_{2}(j) = (p_{2})_{j}\) to get a new inequality involving discrete Jeffreys distance with the following values:

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} ( \zeta_{1} - 1)\ln(\zeta_{1}) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} ( \zeta_{2} - 1)\ln(\zeta_{2}) - J_{a}(p_{1}, p_{2}), $$

where

$$ J_{a}(p_{1}, p_{2}) := \sum _{j = 1}^{n} (p_{1} - p_{2})_{j} \ln\frac{(p_{1})_{j}}{(p_{2})_{j}} $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \sum_{j = 1}^{n} (p_{2})_{j} ~G \biggl(\frac{(p_{1})_{j}}{(p_{2})_{j}}, s \biggr). $$

Example 12

Choose \(\mathbb{T} = q^{\mathbb{N}_{0}} (q > 1), a = q^{l}\), and \(b = q^{n}\) with \(l < n\) in Theorem 9 to have a new inequality involving Jeffreys distance in q-calculus with the following values:

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} ( \zeta_{1} - 1) \ln(\zeta_{1}) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} ( \zeta_{2} - 1)\ln(\zeta_{2}) - D_{J_{q}}(p_{1}, p_{2}), $$

where

$$ D_{J_{q}}(p_{1}, p_{2}) := \sum _{j = l}^{n - 1} q^{j+1} \bigl[p_{1}\bigl(q^{j}\bigr)-p_{2} \bigl(q^{j}\bigr)\bigr]~ \ln\frac{p_{1}(q^{j})}{p_{2}(q^{j})} $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \sum_{j = l}^{n - 1} q^{j+1}p_{2} \bigl(q^{j}\bigr) G \biggl(\frac{p_{1}(q^{j})}{p_{2}(q^{j})}, s \biggr). $$

5.4 Triangular discrimination

Triangular discrimination on time scales is defined in [5] as follows:

$$ D_{\Delta}(p_{1}, p_{2}) = \int_{a}^{b}\frac{[p_{2}(\zeta) - p_{1}(\zeta)]^{2}}{p_{2}(\zeta) + p_{1}(\zeta)}\Delta\zeta. $$
(27)
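Again, for \(\mathbb{T} = \mathbb{Z}\) the integral in (27) becomes a finite sum; the sketch below (hypothetical densities) evaluates the triangular discrimination.

```python
# Illustration: the triangular discrimination (27) when T = Z (finite sum,
# hypothetical densities); each summand is nonnegative, so D_tri >= 0.
p1 = [0.1, 0.4, 0.3, 0.2]
p2 = [0.25, 0.25, 0.25, 0.25]
D_tri = sum((v - u) ** 2 / (v + u) for u, v in zip(p1, p2))
print(D_tri)                      # ~ 0.109
```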

Theorem 10

Let X be a continuous random variable and \(p_{1}, p_{2} \in\Omega\) with \(\zeta_{1}\leq\frac{p_{1}(y)}{p_{2}(y)} \leq\zeta_{2}\) for all \(y \in\mathbb{T}\). If n is even \((n = 6, 8,\ldots)\), then

$$\begin{aligned} \mathfrak{J}_{1}(\cdot) \geq&\frac {4(n-2)}{\zeta_{2} - \zeta_{1}} \biggl(\frac{1}{(\zeta_{1}+1)^{2}} - \frac{1}{(\zeta_{2} + 1)^{2}} \biggr) \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr)\,ds + \frac{1}{\zeta_{2}-\zeta_{1}} \int_{\zeta_{1}}^{\zeta_{2}}\mathfrak{J}_{1} \bigl(G(\cdot, s)\bigr) \\ &{}\times \sum_{w=1}^{n-3}4(-1)^{w+1}(w+1) (n-w-2) \biggl[\frac{(s-\zeta_{1})^{w}}{(1+\zeta_{1})^{w+2}} - \frac {(s-\zeta_{2})^{w}}{(1+\zeta_{2})^{w+2}} \biggr]\,ds, \end{aligned}$$
(28)

where

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \frac{(\zeta_{1} - 1)^{2}}{\zeta_{1} + 1} + \frac{1-\zeta_{1}}{\zeta _{2}-\zeta_{1}}\frac{(\zeta_{2} - 1)^{2}}{\zeta_{2} + 1} - D_{\Delta}(p_{1}, p_{2}) $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \int_{a}^{b}p_{2}(y)G \biggl( \frac{p_{1}(y)}{p_{2}(y)}, s \biggr) \Delta y, $$

where \(D_{\Delta}(p_{1}, p_{2})\) is given in (27).

Proof

The function \(\Psi(x) = \frac{(x - 1)^{2}}{x + 1}\) is n-convex for \(n = 6, 8, \ldots\) . Use \(\Psi(x) = \frac{(x - 1)^{2}}{x + 1} \) in Theorem 5 to get (28). □

Example 13

Choose \(\mathbb{T} = \mathbb{R}\) in Theorem 10 to have a new inequality in classical calculus with the following values:

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \frac{(\zeta_{1} - 1)^{2}}{\zeta_{1} + 1} + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta _{1}} \frac{(\zeta_{2} - 1)^{2}}{\zeta_{2} + 1} - D_{\Delta_{a}}(p_{1}, p_{2}), $$

where

$$ D_{\Delta_{a}}(p_{1}, p_{2}) := \int_{a}^{b} \frac{[p_{2}(y)-p_{1}(y)]^{2}}{p_{1}(y) + p_{2}(y)} \,dy $$

is triangular discrimination and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \int_{a}^{b}p_{2}(y)G \biggl( \frac{p_{1}(y)}{p_{2}(y)}, s \biggr) \,dy. $$

Example 14

Choose \(\mathbb{T} = h\mathbb{Z}~(h>0)\) in Theorem 10 to get a new inequality in h-discrete calculus with the following values:

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \frac{(\zeta_{1} - 1)^{2}}{\zeta_{1} + 1} + \frac{1-\zeta_{1}}{\zeta _{2}-\zeta_{1}} \frac{(\zeta_{2} - 1)^{2}}{\zeta_{2} + 1} - \tilde{D}_{\Delta_{a}}(p_{1}, p_{2}), $$

where

$$ \tilde{D}_{\Delta_{a}}(p_{1}, p_{2}) := \sum _{j=\frac{a}{h}}^{\frac{b}{h} - 1} h\frac {[p_{2}(hj)-p_{1}(hj)]^{2}}{p_{1}(hj) + p_{2}(hj)} $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \sum_{j=\frac{a}{h}}^{\frac{b}{h} - 1} p_{2}(jh)h ~G \biggl(\frac{p_{1}(jh)}{p_{2}(jh)}, s \biggr). $$

Remark 8

Take \(h = 1\) in Example 14 and consider \(a = 0, b = n, p_{1}(j) = (p_{1})_{j}\), and \(p_{2}(j) = (p_{2})_{j}\) to get a new inequality involving discrete triangular discrimination with the following values:

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \frac{(\zeta_{1} - 1)^{2}}{\zeta_{1} + 1} + \frac{1-\zeta_{1}}{\zeta _{2}-\zeta_{1}} \frac{(\zeta_{2} - 1)^{2}}{\zeta_{2} + 1} - \Delta(p_{1}, p_{2}), $$

where

$$ \Delta(p_{1}, p_{2}) := \sum _{j = 1}^{n} \frac{[(p_{2})_{j} - (p_{1})_{j}]^{2}}{(p_{1})_{j} + (p_{2})_{j}} $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \sum_{j = 1}^{n} (p_{2})_{j} ~G \biggl(\frac{(p_{1})_{j}}{(p_{2})_{j}}, s \biggr). $$

Example 15

Choose \(\mathbb{T} = q^{\mathbb{N}_{0}} (q > 1), a = q^{l}\), and \(b = q^{n}\) with \(l < n\) in Theorem 10 to have a new inequality involving triangular discrimination in q-calculus with the following values:

$$ \mathfrak{J}_{1}(\cdot) = \frac{\zeta_{2}-1}{\zeta_{2}-\zeta_{1}} \frac{(\zeta_{1} - 1)^{2}}{\zeta_{1} + 1} + \frac{1-\zeta_{1}}{\zeta _{2}-\zeta_{1}} \frac{(\zeta_{2} - 1)^{2}}{\zeta_{2} + 1} - D_{\Delta_{q}}(p_{1}, p_{2}), $$

where

$$ D_{\Delta_{q}}(p_{1}, p_{2}) := \sum _{j = l}^{n - 1} q^{j+1} \frac{[p_{2}(q^{j})-p_{1}(q^{j})]^{2}}{p_{1}(q^{j}) + p_{2}(q^{j})} $$

and

$$ \mathfrak{J}_{1}\bigl(G(\cdot, s)\bigr) = \frac{\zeta_{2}-1}{\zeta _{2}-\zeta_{1}}G( \zeta_{1}, s) + \frac{1-\zeta_{1}}{\zeta_{2}-\zeta_{1}} G(\zeta_{2}, s) - \sum_{j = l}^{n - 1} q^{j+1}p_{2} \bigl(q^{j}\bigr) ~G \biggl(\frac{p_{1}(q^{j})}{p_{2}(q^{j})}, s \biggr). $$

Availability of data and materials

Data sharing is not applicable to this paper as no data sets were generated or analyzed during the current study.

References

1. Adeel, M., Khan, K.A., Pečarić, Ð., Pečarić, J.: Generalization of the Levinson inequality with applications to information theory. J. Inequal. Appl. 2019, 230 (2019)
2. Adil Khan, M., Husain, Z., Chu, Y.M.: New estimates for Csiszár divergence and Zipf-Mandelbrot entropy via Jensen-Mercer’s inequality. Complexity 2020, 8928691 (2020)
3. Agarwal, R., Bohner, M., Peterson, A.: Inequalities on time scales: a survey. Math. Inequal. Appl. 4, 535–557 (2001)
4. Ansari, I., Khan, K.A., Nosheen, A., Pečarić, Ð., Pečarić, J.: Shannon type inequalities via time scales theory. Adv. Differ. Equ. 2020, 135 (2020)
5. Ansari, I., Khan, K.A., Nosheen, A., Pečarić, Ð., Pečarić, J.: Some inequalities for Csiszár divergence via theory of time scales. Adv. Differ. Equ. 2020, 698 (2020)
6. Ansari, I., Khan, K.A., Nosheen, A., Pečarić, Ð., Pečarić, J.: Estimation of divergence measures via weighted Jensen inequality on time scales. J. Inequal. Appl. 2021, 93 (2021)
7. Ben Makhlouf, A., Kharrat, M., Hammami, M.A., Baleanu, D.: Henry-Gronwall type q-fractional integral inequalities. Math. Methods Appl. Sci. 44(2), 3–9 (2021)
8. Bohner, M., Peterson, A.: Dynamic Equations on Time Scales. Birkhäuser, Boston (2001)
9. Bohner, M., Peterson, A.: Advances in Dynamic Equations on Time Scales. Birkhäuser, Boston (2003)
10. Brahim, K., Bettaibi, N., Sellemi, M.: On some Feng Qi type q-integral inequalities. J. Inequal. Pure Appl. Math. 9(2), 1–7 (2008)
11. Butt, S.I., Rasheed, T., Pečarić, Ð., Pečarić, J.: Combinatorial extensions of Popoviciu’s inequality via Abel-Gontscharoff polynomial with applications in information theory. Rad Hrvat. Akad. Znan. Umjet. Mat. Znan. 542, 59–80 (2020)
12. Cerone, P., Dragomir, S.S.: Some new Ostrowski-type bounds for the Čebyšev functional and applications. J. Math. Inequal. 8(1), 159–170 (2014)
13. Chao, A., Jost, L., Hsieh, T.C., Ma, K.H., Sherwin, W.B., Rollins, L.A.: Expected Shannon entropy and Shannon differentiation between subpopulations for neutral genes under the finite island model. PLoS ONE 10(6), 1–24 (2015)
14. Chow, C.K., Lin, C.N.: Approximating discrete probability distributions with dependence trees. IEEE Trans. Inf. Theory 14(3), 462–467 (1968)
15. Dragomir, S.S.: Other inequalities for Csiszár divergence and applications. Preprint, RGMIA Res. Rep. Coll. (2000)
16. Fink, A.M.: Bounds of the deviation of a function from its averages. Czechoslov. Math. J. 42(117), 289–310 (1992)
17. Gauchman, H.: Integral inequalities in q-calculus. Comput. Math. Appl. 47(2–3), 281–300 (2004)
18. Jackson, H.: On q-definite integrals. Q. J. Pure Appl. Math. 41, 193–203 (1910)
19. Jain, K.C., Mathur, R.: A symmetric divergence measure and its bounds. Tamkang J. Math. 42(4), 493–503 (2011)
20. Khan, A.R., Pečarić, J., Lipanović, M.R.: n-Exponential convexity for Jensen-type inequalities. J. Math. Inequal. 7(3), 313–335 (2013)
21. Khan, K.A., Niaz, T., Pečarić, Ð., Pečarić, J.: Refinement of Jensen’s inequality and estimation of f- and Rényi divergence via Montgomery identity. J. Inequal. Appl. 2018, 318 (2018)
22. Khan, M.A., Pečarić, Ð., Pečarić, J.: A new refinement of the Jensen inequality with applications in information theory. Bull. Malays. Math. Sci. Soc. 44, 267–278 (2021)
23. Kullback, S.: Information Theory and Statistics. Peter Smith, Gloucester (1978)
24. Leandro, P.: Statistical Inference Based on Divergence Measures. Chapman and Hall, London (2006)
25. Lesne, A.: Shannon entropy: a rigorous notion at the crossroads between probability, information theory, dynamical systems and statistical physics. Math. Struct. Comput. Sci. 24(3), E240311 (2014)
26. Lyndon, S.H.: Region segmentation using information divergence measures. Med. Image Anal. 8, 233–244 (2004)
27. Miao, Y., Qi, F.: Several q-integral inequalities. J. Math. Inequal. 3(1), 115–121 (2009)
28. Niculescu, C.P., Persson, L.E.: Convex Functions and Their Applications. A Contemporary Approach. Springer, New York (2006)
29. Noor, M.A., Awan, M.U., Noor, K.I.: Quantum Ostrowski inequalities for q-differentiable convex functions. J. Math. Inequal. 10(4), 1013–1018 (2016)
30. Pečarić, J., Perić, I., Rodić Lipanović, M.: Uniform treatment of Jensen type inequalities. Math. Rep. 16(2), 183–205 (2014)
31. Pečarić, J., Proschan, F., Tong, Y.L.: Convex Functions, Partial Orderings, and Statistical Applications. Mathematics in Science and Engineering, vol. 187. Academic Press, Boston (1992)
32. Saker, S.H.: Some nonlinear dynamic inequalities on time scales. Math. Inequal. Appl. 14(3), 633–645 (2011)
33. Sen, A.: On Economic Inequality. Oxford University Press, London (1973)
34. Sudsutad, W., Ntouyas, S.K., Tariboon, J.: Quantum integral inequalities for convex functions. J. Math. Inequal. 9(3), 781–793 (2015)
35. Sun, Y.G., Hassan, T.: Some nonlinear dynamic integral inequalities on time scales. Appl. Math. Comput. 220(4), 221–225 (2013)
36. Taneja, I.J.: Bounds on non symmetric divergence measures in terms of symmetric divergence measures. J. Comb. Inf. Syst. Sci. 29(14), 115–134 (2005)
37. Tariboon, J., Ntouyas, S.K.: Quantum calculus on finite intervals and applications to impulsive difference equations. Adv. Differ. Equ. 2013, 1 (2013)
38. Tariboon, J., Ntouyas, S.K.: Quantum integral inequalities on finite intervals. J. Inequal. Appl. 2014, 1 (2014)
39. Theil, H.: Economics and Information Theory. North-Holland, Amsterdam (1967)
40. Topsoe, F.: Some inequalities for information divergence and related measures of discrimination. RGMIA Res. Rep. Collect. 2(1), 85–98 (1999)
41. Tou, J.T., Gonzales, R.C.: Pattern Recognition Principle. Addison-Wesley, Reading (1974)
42. Wedrowska, E.: Application of Kullback-Leibler relative entropy for studies on the divergence of household expenditures structures. Olszt. Econ. J. 6, 133–142 (2011)
43. Widder, D.V.: Completely convex function and Lidstone series. Trans. Am. Math. Soc. 51, 387–398 (1942)
44. Zhu, C., Yang, W., Zhao, Q.: Some new fractional q-integral Grüss-type inequalities and other inequalities. J. Inequal. Appl. 2012(1), 299 (2012)

Acknowledgements

The authors wish to thank the anonymous referees for their very careful reading of the manuscript and fruitful comments and suggestions. The research of the 5th author (Josip Pečarić) is supported by the Ministry of Education and Science of the Russian Federation (Agreement number 02.a03.21.0008).

Funding

There is no funding for this work.

Author information


Contributions

All authors jointly worked on the results and they read and approved the final manuscript.

Corresponding author

Correspondence to Iqrar Ansari.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Ansari, I., Khan, K.A., Nosheen, A. et al. Estimation of divergences on time scales via the Green function and Fink’s identity. Adv Differ Equ 2021, 394 (2021). https://doi.org/10.1186/s13662-021-03550-2

  • DOI: https://doi.org/10.1186/s13662-021-03550-2

Keywords