# The logarithmic concavity of modified Bessel functions of the first kind and its related functions

## Abstract

This research demonstrates the log-convexity and log-concavity of the modified Bessel function of the first kind and related functions. A coefficient-based method is used to verify these properties. One of our results contradicts the conjecture proposed by Neumann in 2007, which states that the modified Bessel function of the first kind $$I_{\nu }$$ is log-concave on $$(0,\infty )$$ given $$\nu >0$$. The log-concavity holds true only on some bounded domain. An application of the other results to Kibble’s bivariate gamma distribution is also demonstrated.

## Introduction

The modified Bessel function of the first kind (MBF-I), $$I_{\nu }$$, is a special function which is one of two solutions of the modified Bessel differential equation,

$$x^{2}y^{\prime \prime }+xy^{\prime }- \bigl(x^{2}+ \nu ^{2} \bigr)y=0.$$

The MBF-I appears as a component in many probability distribution functions [1, 2]. During the last decades, there have been a number of studies related to the logarithmic concavity, as well as the logarithmic convexity, of the MBF-I and related functions. The results help prove other properties related to the MBF-I [3, 4], and they can be useful in statistical optimization problems, since convex optimization is well developed. In this research, the logarithmic convexity and logarithmic concavity of the MBF-I and related functions are studied, and applications of the results are demonstrated.

The MBF-I has no closed-form expression. It is usually expressed by the series of gamma functions, Γ:

$$I_{\nu }(t)= (t/2 )^{\nu }\sum_{k=0}^{\infty } \frac{ (t ^{2}/4 )^{k}}{k!\,\varGamma (\nu +k+1 )}.$$
(1)
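The series (1) converges rapidly and is easy to evaluate numerically. As an illustrative sketch (the helper `bessel_i` below is our own truncated-series implementation, not a library routine), it can be cross-checked against the elementary closed form $$I_{1/2}(t)=\sqrt{2/(\pi t)}\sinh t$$:

```python
import math

def bessel_i(nu, t, terms=60):
    """Truncated series (1): (t/2)^nu * sum_k (t^2/4)^k / (k! * Gamma(nu+k+1))."""
    return (t / 2.0) ** nu * sum(
        (t * t / 4.0) ** k / (math.factorial(k) * math.gamma(nu + k + 1))
        for k in range(terms)
    )

# Spot check against the closed form I_{1/2}(t) = sqrt(2/(pi*t)) * sinh(t).
t = 2.3
exact = math.sqrt(2.0 / (math.pi * t)) * math.sinh(t)
assert abs(bessel_i(0.5, t) - exact) < 1e-12
```

For the parameter ranges used in this paper, sixty terms are far more than needed for double precision.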

The other expression of MBF-I is in the form of the hypergeometric function, $$\gamma _{\nu }(t)$$:

$$I_{\nu } (t ) = 2^{-\nu }\gamma _{\nu } \bigl({t^{2}}\bigr)t^{\nu } / \varGamma (\nu +1),$$
(2)

equivalently,

$$\gamma _{\nu }(t) = 2^{\nu }\varGamma (\nu +1)t^{-\nu /2}I_{\nu } (\sqrt{t} ).$$
(3)

The hypergeometric function $$\gamma _{\nu }(t)$$ is defined as

$$\gamma _{\nu }(t):=\sum_{k=0}^{\infty } \frac{ (1/4 )^{k} t^{k}}{ ( \nu +1 )_{k}\, k!},$$
(4)

where $$\nu >-1$$ and $$(p)_{k}$$ denotes the Pochhammer symbol $$\Gamma(p+k)/\Gamma(p)$$ ( $$p \notin \{0,-1,-2,\cdots\}$$ ). In this research, the derivative test and the rearrangement of series coefficients are used to prove the log-convexity and log-concavity of MBF-I related functions. Thus, the derivative and other properties of the hypergeometric function are reviewed.

\begin{aligned}& \gamma _{\nu }^{(n)}(t) =\sum_{k=0}^{\infty } \frac{\varGamma (\nu +1 ) (1/4 )^{k+n}}{\varGamma (\nu +n+k+1 )k!}t ^{k} , \end{aligned}
(5)
\begin{aligned}& \gamma _{\nu }^{(n)}(0) =\frac{\varGamma (\nu +1 ) (1/4 ) ^{n}}{\varGamma (\nu +n+1 )}, \end{aligned}
(6)

where the term $$\gamma _{\nu }^{(n)}$$ denotes the derivative of order n with respect to t of $$\gamma _{\nu }$$. Equation (2) is found in the literature. The map $$t \mapsto \gamma _{\nu } (t )$$ is log-concave in $$(0, \infty )$$ provided that $$\nu >-1$$. The map $$t\mapsto \gamma _{\nu } (t^{2} )$$ is log-convex in $$(0,\infty )$$ provided that $$\nu >-{1}/{2}$$. The studies of MBF-I were usually performed by using inequalities and the infinite product expansion of $$\gamma _{\nu }$$ [3, 5, 6],

$$\mathcal{I}_{\nu } (t )=\gamma _{\nu } \bigl(t^{2} \bigr) = \prod_{n\geq 1} \bigl(1+t^{2}/j_{\nu ,n}^{2} \bigr).$$
(7)
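The normalized form and its relation to the MBF-I can be spot-checked numerically before being used below. The sketch relies only on the Python standard library; `gamma_nu` and `bessel_i` are our own truncated-series helpers for (4) and (1), not library routines:

```python
import math

def gamma_nu(t, nu, terms=60):
    # Series (4): sum_k (1/4)^k t^k / ((nu+1)_k k!), with (p)_k = Gamma(p+k)/Gamma(p).
    return sum(
        0.25 ** k * t ** k * math.gamma(nu + 1)
        / (math.gamma(nu + 1 + k) * math.factorial(k))
        for k in range(terms)
    )

def bessel_i(nu, t, terms=60):
    # Truncated series (1) for I_nu(t).
    return (t / 2.0) ** nu * sum(
        (t * t / 4.0) ** k / (math.factorial(k) * math.gamma(nu + k + 1))
        for k in range(terms)
    )

# Identity (2): I_nu(t) = 2^(-nu) * gamma_nu(t^2) * t^nu / Gamma(nu+1).
nu, t = 0.7, 1.9
lhs = bessel_i(nu, t)
rhs = 2.0 ** (-nu) * gamma_nu(t * t, nu) * t ** nu / math.gamma(nu + 1)
assert abs(lhs - rhs) < 1e-12
```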

The following lemma is a fundamental identity used to simplify the representation of the MBF-I related functions mentioned in many parts of this paper.

### Lemma 1

\begin{aligned} t^{r}\gamma _{\nu }^{ (k )} (t )\gamma _{\mu } ^{ (m )} (t ) &= \sum_{n=r}^{\infty } \frac{n!}{(n-r)!} \binom{2n-2r+\nu +\mu +k+m}{n-r+\nu +k} \\ &\quad {}\cdot \frac{4^{-n-k-m+r}\varGamma (\nu +1 )\varGamma (\mu +1 )t ^{n}}{\varGamma (n+\nu +\mu +k+m-r+1 )n!}, \end{aligned}
(8)

where $$r,k,m\in \mathbb{N}_{0}$$, $$\mathbb{N}_{0}$$ being the set of natural numbers including 0, and $$\nu , \mu \in \mathbb{R}$$.

### Proof

The term $$t^{r}\gamma _{\nu }^{ (k )} (t ) \gamma _{\mu }^{ (m )} (t )$$ can be expanded as the Taylor series with

$$t^{r}\gamma _{\nu }^{ (k )} (t )\gamma _{\mu } ^{ (m )} (t )=\sum_{n=0}^{\infty }\frac{a_{n}}{n!} t ^{n},$$
(9)

where $$a_{n}:= [ t^{r} \gamma _{\nu }^{ (k )} (t ) \gamma _{\mu }^{ (m )} (t ) ]_{t=0}^{(n)}$$. Then

\begin{aligned}& a_{n}= \Biggl\lbrace \sum_{i=0}^{n} \binom{n}{i} \bigl( t^{r} \bigr)^{(i)} \bigl[ \gamma _{\nu }^{ (k )} (t ) \gamma _{\mu }^{ (m )} (t ) \bigr]^{(n-i)} \Biggr\rbrace _{t=0} \\ \end{aligned}
(10)
\begin{aligned}& \hphantom{a_{n}}= \binom{n}{r} r! \bigl[\gamma _{\nu }^{ (k )} (t ) \gamma _{\mu }^{ (m )} (t ) \bigr]_{t=0}^{ (n-r )} \end{aligned}
(11)
\begin{aligned}& \hphantom{a_{n}}= \frac{n!}{(n-r)!} \Biggl\lbrace \sum_{j=0}^{n-r} \binom{n-r}{j} \bigl[\gamma _{\nu }^{ (k )} (t ) \bigr] ^{ (j )} \bigl[\gamma _{\mu }^{ (m )} (t ) \bigr] ^{ (n-r-j )} \Biggr\rbrace _{t=0} \end{aligned}
(12)
\begin{aligned}& \hphantom{a_{n}}= \frac{n!}{(n-r)!} \sum_{j=0}^{n-r} \binom{n-r}{j} \gamma _{\nu }^{ (k+j )} (0 )\gamma _{\mu }^{ (m+n-r-j )} (0 ), \end{aligned}
(13)
\begin{aligned}& a_{n}= \frac{n!}{(n-r)!} \sum_{j=0}^{n-r} \binom{n-r}{j} \biggl(\frac{\varGamma (\nu +1 ) (1/4 )^{k+j}}{ \varGamma (\nu +k+j+1 )} \biggr) \\& \hphantom{a_{n}={}}{}\cdot \biggl(\frac{\varGamma (\mu +1 ) (1/4 ) ^{m+n-r-j}}{\varGamma (\mu +m+n-r-j+1 )} \biggr) \end{aligned}
(14)
\begin{aligned}& \hphantom{a_{n}}= \frac{n!}{(n-r)!}\frac{4^{-n-k-m+r}\varGamma (\nu +1 ) \varGamma (\mu +1 )}{\varGamma (n+\nu +\mu +k+m-r+1 )} \\& \hphantom{a_{n}={}}{} \cdot \sum_{j=0}^{n-r} \binom{n-r}{j}\binom{n+\nu +\mu +k+m-r}{ \nu +k+j}. \end{aligned}
(15)

By the Chu–Vandermonde identity, that is,

\begin{aligned} \sum_{j=0}^{n}\binom{n}{j} \binom{s}{q+j} = & \binom{n+s}{n+q}, \end{aligned}
(16)

the series in (15) is simplified as follows:

\begin{aligned}& \sum_{j=0}^{n-r} \binom{n-r}{j} \binom{n+\nu +\mu +k+m-r}{ \nu +k+j} \\& \quad = \binom{2n-2r+\nu +\mu +k+m}{n-r+\nu +k}. \end{aligned}
(17)

Correspondingly, by substituting Eqs. (15) and (17) into (9), Eq. (8) is proved. □
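Lemma 1 lends itself to a direct numerical spot check. In the sketch below (our own helpers, built from the series (5) for the left side and from (8) for the right side; the parameter values are arbitrary), both sides agree to machine precision:

```python
import math

def gbinom(a, b):
    # Generalized binomial coefficient via gamma functions.
    return math.gamma(a + 1) / (math.gamma(b + 1) * math.gamma(a - b + 1))

def gamma_nu_deriv(nu, n, t, terms=50):
    # n-th derivative of gamma_nu at t, term by term from series (5).
    return sum(
        math.gamma(nu + 1) * 0.25 ** (k + n)
        / (math.gamma(nu + n + k + 1) * math.factorial(k)) * t ** k
        for k in range(terms)
    )

def lemma1_rhs(nu, mu, k, m, r, t, terms=50):
    # Right-hand side of (8), truncated.
    total = 0.0
    for n in range(r, r + terms):
        total += (
            (math.factorial(n) / math.factorial(n - r))
            * gbinom(2 * n - 2 * r + nu + mu + k + m, n - r + nu + k)
            * 0.25 ** (n + k + m - r)
            * math.gamma(nu + 1) * math.gamma(mu + 1) * t ** n
            / (math.gamma(n + nu + mu + k + m - r + 1) * math.factorial(n))
        )
    return total

nu, mu, k, m, r, t = 0.3, 0.7, 1, 0, 1, 0.8
lhs = t ** r * gamma_nu_deriv(nu, k, t) * gamma_nu_deriv(mu, m, t)
assert abs(lhs - lemma1_rhs(nu, mu, k, m, r, t)) < 1e-10
```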

Lemma 1 gives the coefficients of the Cauchy product in closed form. A similar approach was used to prove a Turán type inequality of the modified Bessel function by determining the signs of the coefficients of the function, as found in [8, 9]. In this work, the convexity and the concavity of functions on the positive real domain $$(0,\infty )$$ are proved by verifying the non-negativity or non-positivity of the second derivative of the corresponding functions. In some cases, addressed in Lemma 2, the coefficient technique establishes that a function is neither entirely non-positive nor entirely non-negative on $$(0,\infty )$$; rather, it is eventually positive.

### Lemma 2

Given $$f(t)=\sum_{k=0}^{\infty } a_{k} t^{k}$$. If there exists an m such that $$a_{n}>0$$ for all $$n>m$$, then there exists a constant $$\tau >0$$ such that $$f(t)>0$$ on $$(\tau ,\infty )$$.

### Proof

Without loss of generality, suppose that m is an integer such that $$a_{k}\leq 0$$ for all $$k\le m$$ and $$a_{n} > 0$$ for all $$n>m$$. The function f can then be split into the sum of a polynomial of degree $$m+1$$ and a series with positive coefficients as follows:

$$f(t)=\sum_{k=0}^{\infty }a_{k}t^{k} =\sum_{k=0}^{m+1}a _{k}t^{k} +\sum_{n=m+2}^{\infty }a_{n}t^{n}.$$
(18)

We have

$$\lim_{t\rightarrow \infty } \Biggl(\sum_{k=0}^{m+1}a_{k}t^{k} \Biggr)\Big/t ^{m+1}=a_{m+1}>0.$$
(19)

By taking $$\varepsilon =a_{m+1}$$ in the definition of the limit, there exists a constant $$\tau >0$$ such that

$$\Biggl(\sum_{k=0}^{m+1} a_{k}t^{k} \Biggr)\Big/t^{m+1}\in (a_{m+1}-\varepsilon , a_{m+1}+\varepsilon ) = (0,2a_{m+1} )\subseteq (0, \infty ),\quad \forall t>\tau .$$
(20)

The sum of the polynomial and the series with positive coefficients is therefore positive on $$(\tau ,\infty )$$. □
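Lemma 2 can be illustrated with a toy series in which only finitely many coefficients are negative (a hypothetical example for illustration, not taken from the results of this paper): $$f(t)=-5-4t+\sum_{k\geq 2}t^{k}/k!$$ is negative near the origin but eventually positive.

```python
import math

def f(t, terms=80):
    # Only a_0 and a_1 are negative; every coefficient from k = 2 onward is positive.
    return -5.0 - 4.0 * t + sum(t ** k / math.factorial(k) for k in range(2, terms))

assert f(0.5) < 0.0                               # negative near the origin ...
assert all(f(t) > 0.0 for t in [6.0, 10.0, 20.0])  # ... but eventually positive
```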

## Main results

### Theorem 3

The map

$$t\mapsto I_{\mu }/I_{\nu }$$
(21)

is log-concave on $$(-\infty ,0 )$$ and $$(0,\infty )$$ if $$0<\nu <\mu$$.

The inequalities

$$1< \frac{I_{\nu }^{2} (t )}{I_{\nu +\delta } (t )I _{\nu -\delta } (t )} < \frac{\varGamma (\nu +\delta +1 ) \varGamma (\nu -\delta +1 )}{\varGamma ^{2} (\nu +1 )}$$
(22)

hold true on $$(-\infty ,0 )$$ and $$(0,\infty )$$, where $$\delta >0$$, $$\nu >-1$$ and $$\nu -\delta +1\in (0,\infty )$$.

### Proof

Let $$\mu =\nu +\delta$$ for some $$\delta >0$$ and

$$R_{\mu ,\nu }(t) := \ln \biggl[\frac{I_{\mu }(t)}{I_{\nu }(t)} \biggr]= \ln \biggl[ \frac{t^{\delta }\varGamma (\nu +1 ) \gamma _{\nu +\delta }(t^{2})}{2^{\delta }\varGamma (\nu +\delta +1 ) \gamma _{\nu }(t^{2})} \biggr].$$
(23)

By taking advantage of Lemma 1, we have

$$R_{\mu ,\nu }^{\prime } (t )= \delta (2\nu +\delta ) \frac{1}{t} \frac{ \sum_{n=0}^{\infty } a_{n} t^{2n}}{ \sum_{n=0}^{\infty } b_{n} t^{2n}},$$
(24)

where

\begin{aligned}& a_{n}= \frac{1}{4^{n}} \frac{\varGamma (\nu +\delta +1 ) \varGamma (\nu +1 )\varGamma (2n+2\nu +\delta )}{ \varGamma (n+2\nu +\delta +1 )\varGamma (n+\nu +\delta +1 ) \varGamma (n+\nu +1 )} , \end{aligned}
(25)
\begin{aligned}& b_{n}= \frac{1}{4^{n}}\frac{\varGamma (\nu +\delta +1 ) \varGamma (\nu +1 )\varGamma (2n+2\nu +\delta +1 )}{ \varGamma (n+2\nu +\delta +1 )\varGamma (n+\nu +\delta +1 ) \varGamma (n+\nu +1 )}. \end{aligned}
(26)

The ratio $$1/t$$ is strictly decreasing on $$(0,\infty )$$. The ratio of the coefficients $$a_{n}$$ and $$b_{n}$$,

$$r_{n}=\frac{a_{n}}{b_{n}}=\frac{1}{2n+2\nu +\delta },$$
(27)

is strictly decreasing in n, so, by the standard monotonicity rule for ratios of power series, the function $$F(t) := ( \sum_{n=0}^{\infty } a_{n} t^{2n} ) / ( \sum_{n=0}^{\infty } b_{n} t^{2n} )$$ is decreasing on $$(0,\infty )$$. Since $$R_{\mu ,\nu }^{\prime } (t )$$ is the product of the positive decreasing functions $$1/t$$ and $$F(t)$$ and the positive constant $$\delta (2 \nu +\delta )$$, $$R_{\mu ,\nu }^{\prime } (t )$$ is positive and strictly decreasing on $$(0,\infty )$$. Thus $$R_{\mu ,\nu }^{\prime } (t )>0$$ and $$R_{\mu ,\nu }^{\prime \prime } (t )<0$$ for all $$t>0$$. Also, $$R_{\mu ,\nu }^{\prime } (t ) =- \vert R_{\mu ,\nu }^{ \prime } (t ) \vert$$ on $$(-\infty ,0 )$$, where $$\vert R_{\mu ,\nu }^{\prime } (t ) \vert$$ is increasing, so $$R_{\mu ,\nu }^{\prime } (t )$$ is decreasing on the negative real domain as well. Thus, $$t\mapsto I_{\mu }/I_{ \nu }$$ is log-concave on $$(-\infty ,0 )$$ and $$(0, \infty )$$. The log-concavity is strict only when $$\mu \neq \nu$$.

To prove the latter part of the theorem, we claim the following property:

$$\frac{\varGamma ^{2} (n+\nu +1 )}{\varGamma (n+\nu +\delta +1 )\varGamma (n+\nu -\delta +1 )}>\frac{\varGamma ^{2} (\nu +1 )}{\varGamma (\nu +\delta +1 )\varGamma (\nu -\delta +1 )}$$
(28)

for any integer $$n \geq 1$$, provided that $$n + \nu -\delta +1>0$$, $$\nu >-1$$, $$\delta >0$$ and $$\nu +1>\delta$$. When $$n+\nu -\delta +1>0$$, we have $$\varGamma (n+\nu +\delta +1 )\varGamma (n+\nu - \delta +1 )>0$$. Consequently, the following holds true:

\begin{aligned} &\frac{\varGamma ^{2} (n+\nu +1 )}{\varGamma (n+\nu + \delta +1 )\varGamma (n+\nu -\delta +1 )} \\ &\quad = \frac{\varGamma ^{2} (n+\nu )}{\varGamma (n+\nu + \delta )\varGamma (n+\nu -\delta )} \cdot \frac{(n+ \nu )^{2}}{(n+\nu )^{2}-\delta ^{2}} \\ &\quad >\frac{\varGamma ^{2} (n+\nu )}{\varGamma (n+\nu + \delta )\varGamma (n+\nu -\delta )}. \end{aligned}
(29)

By induction, Eq. (29) implies

\begin{aligned} &\frac{\varGamma ^{2} (n+\nu +1 )}{\varGamma (n+\nu +\delta +1 )\varGamma (n+\nu -\delta +1 )} \\ &\quad > \frac{\varGamma ^{2} (n+\nu )}{\varGamma (n+\nu + \delta )\varGamma (n+\nu -\delta )} \end{aligned}
(30)
\begin{aligned} &\quad >\frac{\varGamma ^{2} (\nu +1 )}{\varGamma (\nu +\delta +1 ) \varGamma (\nu -\delta +1 )}, \end{aligned}
(31)

for any integer $$n\geq 1$$. According to Lemma 1, The product of $$I_{\nu +\delta }$$ and $$I_{\nu -\delta }$$ becomes

\begin{aligned} I_{\nu +\delta } (t )I_{\nu -\delta } (t ) &= t ^{2\nu } \frac{\gamma _{\nu +\delta }(t^{2})\gamma _{\nu -\delta }(t ^{2})}{2^{2\nu }\varGamma (\nu +\delta +1)\varGamma (\nu -\delta +1)} \end{aligned}
(32)
\begin{aligned} &=\sum_{n=0}^{\infty }\frac{\varGamma (2n+2\nu +1 )}{ \varGamma (n+2\nu +1 )\varGamma (n+\nu +\delta +1 )} \\ &\quad {}\cdot \frac{1}{\varGamma (n+\nu -\delta +1 )}\frac{t^{2n+2 \nu }}{2^{2n+2\nu } n!} \end{aligned}
(33)
\begin{aligned} &=\sum_{n=0}^{\infty } \biggl( \frac{\varGamma ^{2} (n+\nu +1 )}{ \varGamma (n+\nu +\delta +1 )\varGamma (n+\nu -\delta +1 )} \biggr) \\ &\quad {}\cdot \frac{\varGamma (2n+2\nu +1 )}{\varGamma (n+2 \nu +1 )\varGamma ^{2} (n+\nu +1 )} \frac{t^{2n+2\nu }}{2^{2n+2 \nu } n!}. \end{aligned}
(34)

By applying (28), we have

\begin{aligned} I_{\nu +\delta } (t )I_{\nu -\delta } (t )&> \biggl(\frac{\varGamma ^{2} (\nu +1 )}{\varGamma (\nu + \delta +1 )\varGamma (\nu -\delta +1 )} \biggr)t^{2 \nu } \\ &\quad {}\cdot \sum_{n=0}^{\infty } \frac{1}{2^{2n+2\nu }}\frac{ \varGamma (2n+2\nu +1 )}{\varGamma (n+2\nu +1 ) \varGamma ^{2} (n+\nu +1 )}\frac{t^{2n}}{n!}. \end{aligned}
(35)

Since

$$I_{\nu }^{2} (t )=t^{2\nu }\sum _{n=0}^{\infty }\frac{1}{2^{2n+2 \nu }} \frac{\varGamma (2n+2\nu +1 )}{\varGamma (n+2\nu +1 ) \varGamma ^{2} (n+\nu +1 )}\frac{t^{2n}}{n!},$$
(36)

the result is

$$\frac{I_{\nu }^{2} (t )}{I_{\nu +\delta } (t )I _{\nu -\delta } (t )}< \frac{\varGamma (\nu +\delta +1 ) \varGamma (\nu -\delta +1 )}{\varGamma ^{2} (\nu +1 )}.$$
(37)

To prove that $$1<{I_{\nu }^{2} (t )}/{[I_{\nu +\delta } (t )I_{\nu -\delta } (t )]}$$, we claim that

$$g(\nu ,\delta ):=\frac{\varGamma ^{2}(n+\nu +1)}{\varGamma (n+\nu +\delta +1) \varGamma (n+\nu -\delta +1)}< 1.$$
(38)

By letting

$$G (\nu ,\delta ):=\ln \biggl[ \frac{\varGamma ^{2}(n+\nu +1)}{ \varGamma (n+\nu +\delta +1)\varGamma (n+\nu -\delta +1)} \biggr] ,$$
(39)

taking $${\partial }/{\partial \delta }$$ on $$G (\nu ,\delta )$$ yields

$$\frac{\partial }{\partial \delta } G (\nu ,\delta ) =- \psi (n+\nu +\delta +1)+\psi (n+\nu - \delta +1),$$
(40)

where ψ denotes the digamma function. It can be expanded as the sum of the Euler–Mascheroni constant $$\gamma _{E}$$ and a harmonic-type series:

\begin{aligned} \frac{\partial }{\partial \delta } G (\nu ,\delta ) &= - \Biggl(-\gamma _{E}+ \sum_{k=0}^{\infty } \biggl( \frac{1}{k+1}-\frac{1}{k+n+ \nu +\delta +1} \biggr) \Biggr) \\ &\quad {} + \Biggl(-\gamma _{E}+\sum_{k=0}^{\infty } \biggl(\frac{1}{k+1}-\frac{1}{k+n+ \nu -\delta +1} \biggr) \Biggr) \\ &= -2\sum_{k=0}^{\infty } \biggl( \frac{\delta }{(k+n+\nu +1)^{2}- \delta ^{2}} \biggr)< 0, \end{aligned}
(41)

so $$g(\nu ,\delta )$$ is strictly decreasing for $$\delta \in (0, \infty )$$ for given n and ν. Since $$g(\nu ,\delta )\longrightarrow 1$$ as $$\delta \longrightarrow 0$$, we have $$g(\nu ,\delta )<1$$ for $$\delta \in (0,\infty )$$. From the inequalities (37) and (38), we conclude that

\begin{aligned} & I_{\nu }^{2} (t )-I_{\nu +\delta } (t )I_{\nu -\delta } (t ) \\ &\quad = \sum_{n=0}^{\infty }\biggl[ \frac{\varGamma (2n+2\nu +1 )}{ \varGamma (n+2\nu +1 )} \frac{t^{2n+2\nu }}{2^{2n+2\nu }n!} \frac{1}{\varGamma ^{2} (n+\nu +1 )} \\ &\qquad {} \cdot \biggl( 1-\frac{\varGamma ^{2} (n+\nu +1 )}{ \varGamma (n+\nu +\delta +1 )\varGamma (n+\nu -\delta +1 )} \biggr)\biggr] \\ &\quad > 0 . \end{aligned}
(42)

Hence, $$1< {I_{\nu }^{2} (t )}/{[I_{\nu +\delta } (t )I _{\nu -\delta } (t )]}$$. □
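The key claim (38), that $$g(\nu ,\delta )<1$$ for $$\delta >0$$, follows from the log-convexity of Γ and can be probed numerically. A quick sketch (with our own helper `g_ratio`, standard library only, arbitrary sample parameters):

```python
import math

def g_ratio(n, nu, delta):
    # g(nu, delta) from (38): Gamma^2(n+nu+1) / (Gamma(n+nu+delta+1) * Gamma(n+nu-delta+1)).
    return math.gamma(n + nu + 1) ** 2 / (
        math.gamma(n + nu + delta + 1) * math.gamma(n + nu - delta + 1)
    )

for delta in [0.1, 0.5, 1.2]:
    assert g_ratio(3, 0.7, delta) < 1.0          # strictly below 1 for delta > 0
assert abs(g_ratio(3, 0.7, 1e-9) - 1.0) < 1e-6   # tends to 1 as delta -> 0
```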

The inequality (42) is equivalent to the Turán type inequality reported in the literature. There are numerous versions of the inequality. Thiruvenkatachar et al. originally proved the inequality using the coefficient comparison method. Later, Joshi et al. and Baricz took advantage of product expansions to demonstrate the property, but their proofs differ in detail. The inequality is used to prove other properties, as found in [11, 12]. Our study uses the Cauchy product of series expansions, as in Thiruvenkatachar’s paper, but we refine the property to cover the case of non-integer δ.
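The two-sided bound (22) is easy to probe numerically at sample points. In the sketch below, `bessel_i` is our own truncated version of series (1), and the parameter values are arbitrary choices satisfying the hypotheses of Theorem 3:

```python
import math

def bessel_i(nu, t, terms=60):
    # Truncated series (1) for I_nu(t).
    return (t / 2.0) ** nu * sum(
        (t * t / 4.0) ** k / (math.factorial(k) * math.gamma(nu + k + 1))
        for k in range(terms)
    )

nu, delta = 1.0, 0.4
upper = math.gamma(nu + delta + 1) * math.gamma(nu - delta + 1) / math.gamma(nu + 1) ** 2
for t in [0.3, 1.0, 3.0, 7.0]:
    ratio = bessel_i(nu, t) ** 2 / (bessel_i(nu + delta, t) * bessel_i(nu - delta, t))
    assert 1.0 < ratio < upper   # the two-sided inequality (22)
```

The lower bound is the classical Turán inequality; the upper bound is attained only in the limit $$t\to 0$$.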

### Theorem 4

Given $$\nu >-{1}/{2}$$, $$t\mapsto t^{\mu }I_{\nu } (t )$$ is strictly log-concave on $$(0,\infty )$$ for

\begin{aligned}& \textstyle\begin{cases} \mu > -\nu ; & -1/2 < \nu < 1/2, \\ \mu > 1/2; & \nu \ge 1/2, \end{cases}\displaystyle \end{aligned}

and is strictly log-convex on $$(0,\infty )$$ for $$\mu <-\nu$$.

### Proof

According to (2), the MBF-I can be rewritten in terms of $$\gamma _{\nu }(t^{2})$$ as

$$t^{\mu }I_{\nu } (t )=\frac{t^{\mu +\nu }\gamma _{\nu }(t ^{2})}{2^{\nu }\varGamma (\nu +1)}.$$
(43)

Then the derivatives of

$$\ln t^{\mu }I_{\nu } (t ) = (\mu +\nu )\ln t+ \ln \gamma _{\nu }\bigl(t^{2}\bigr)+\ln 2^{\nu }\varGamma (\nu +1)$$
(44)

with respect to t are described by the following:

\begin{aligned} &\frac{d}{dt} \bigl[ \ln t^{\mu }I_{\nu } (t ) \bigr] =\frac{ \mu +\nu }{t}+2t\frac{\gamma _{\nu }^{\prime } (t^{2} )}{ \gamma _{\nu } (t^{2} )}, \end{aligned}
(45)
\begin{aligned} &\frac{d^{2}}{dt^{2}} \bigl[\ln t^{\mu }I_{\nu } (t ) \bigr] \\ &\quad =-\frac{\mu +\nu }{t^{2}}+2\frac{\gamma _{\nu }^{\prime } (t ^{2} )}{\gamma _{\nu } (t^{2} )} +4t^{2} \frac{ \gamma _{\nu }^{\prime \prime } (t^{2} )\gamma _{\nu } (t ^{2} )- [\gamma _{\nu }^{\prime } (t^{2} ) ] ^{2}}{\gamma _{\nu }^{2} (t^{2} )} \end{aligned}
(46)
\begin{aligned} &\quad = \Biggl\lbrace \sum_{n=0}^{\infty } \frac{\varGamma ^{2} (\nu +1 ) \varGamma (2n+2\nu -1 )}{4^{n}\varGamma (n+2\nu +1 ) (\varGamma (n+\nu +1 ) )^{2}} \frac{t^{2n}}{n!} \\ &\qquad {}\cdot \biggl[ \biggl(\frac{2\nu ^{2}-\nu -n}{1-2\nu -2n} \biggr)- \mu \biggr] \Biggr\rbrace \cdot \bigl[ t^{2}\gamma _{\nu }^{2} \bigl(t ^{2} \bigr) \bigr]^{-1}. \end{aligned}
(47)

From (46), we consider the term

$$S_{n}:= \biggl(\frac{2\nu ^{2}-\nu -n}{1-2\nu -2n} \biggr)-\mu .$$
(48)

The condition $$\nu >1/2$$ implies $$1-2\nu -2n\le 1-2\nu <0$$ for $$n\ge 0$$, so we have

\begin{aligned}& \inf \biggl\{ \frac{2\nu ^{2}-\nu -n}{1-2\nu -2n} \biggr\} = \textstyle\begin{cases} 1/2; & -1/2< \nu < 1/2, \\ -\nu ; & \nu \geq 1/2, \end{cases}\displaystyle \end{aligned}
(49)
\begin{aligned}& \sup \biggl\{ \frac{2\nu ^{2}-\nu -n}{1-2\nu -2n} \biggr\} = \textstyle\begin{cases} -\nu ; & -1/2< \nu < 1/2, \\ 1/2; & \nu \geq 1/2. \end{cases}\displaystyle \end{aligned}
(50)

For $$\mu >\sup \{ (2\nu ^{2}-\nu -n )/ (1-2 \nu -2n ) \}$$, we have $$S_{n}<0$$, and the product of the series with non-positive coefficients in (47) and the positive function $$[t\gamma _{\nu }(t^{2})]^{-2}$$ is non-positive. Therefore, $$({d^{2}}/{dt^{2}}) (\ln t^{\mu }I_{\nu } (t ) )\leq 0$$, which makes $$t^{\mu }I_{\nu }$$ log-concave on $$(0,\infty )$$. Notice that, unless $$\mu =\nu =1/2$$, $$S_{n}<0$$ holds strictly for some $$n\in \mathbb{N}$$, which implies that $$({d^{2}}/{dt^{2}}) (\ln t^{\mu }I_{\nu } (t ) )$$ is strictly negative. Therefore, $$t\mapsto t^{\mu }I_{\nu } (t )$$ is strictly log-concave on $$(0,\infty )$$.

For log-convexity, $$\mu <\inf \{ (2\nu ^{2}-\nu -n )/ (1-2\nu -2n ) \}$$ implies $$S_{n}>0$$; by (49), the condition $$\mu <-\nu$$ guarantees this for every $$\nu >-{1}/{2}$$. Thus $$({d^{2}}/{dt^{2}}) (\ln t^{\mu }I_{\nu } (t ) )$$ is the product of a series with positive coefficients and the positive function $$[t\gamma _{\nu }(t^{2})]^{-2}$$, and is therefore positive. Hence $$t^{\mu }I_{\nu }$$ is strictly log-convex on $$(0,\infty )$$. □
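Theorem 4 can be sanity-checked with finite differences. This is a numerical sketch only: `bessel_i` is our own series helper, the step size h and the sample values of μ, ν, and t are arbitrary choices within the theorem's hypotheses:

```python
import math

def bessel_i(nu, t, terms=80):
    # Truncated series (1) for I_nu(t).
    return (t / 2.0) ** nu * sum(
        (t * t / 4.0) ** k / (math.factorial(k) * math.gamma(nu + k + 1))
        for k in range(terms)
    )

def d2_log(f, t, h=1e-3):
    # Central second difference of ln f at t.
    return (math.log(f(t + h)) - 2.0 * math.log(f(t)) + math.log(f(t - h))) / h ** 2

nu = 1.5
f_concave = lambda u: u ** 0.8 * bessel_i(nu, u)     # mu = 0.8 > 1/2 with nu >= 1/2
f_convex = lambda u: u ** (-2.0) * bessel_i(nu, u)   # mu = -2.0 < -nu
for t in [0.5, 2.0, 5.0]:
    assert d2_log(f_concave, t) < 0.0   # strict log-concavity
    assert d2_log(f_convex, t) > 0.0    # strict log-convexity
```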

Theorem 4 gives an alternative approach to proving the log-concavity of the map $$u\mapsto \sqrt{u}I_{\nu } (u )$$ in $$(0,\infty )$$, which has already been reported in the literature. When $$\mu +\nu =0$$, the term $$- (\mu +\nu )/t^{2}\gamma _{\nu }^{2} (t^{2} )$$ vanishes. Then $$D^{2} (\ln t^{\mu }I_{\nu } (t ) )$$ exists for all $$t>0$$, so we can extend the domain of log-convexity to $$\mathbb{R}^{+}$$. The result yields another approach to proving the log-convexity of $$t\mapsto \gamma _{\nu } (t^{2} )=2^{ \nu }\varGamma (\nu +1)t^{-\nu }I_{\nu } (t )$$ on $$\mathbb{R} ^{+}$$ demonstrated by Neumann [3, 5]. In this work, we extend the condition of log-concavity to $$\nu \in (-1/2,1/2 )$$. According to (46), (49) and (50), one can see that $$t\mapsto t^{1/2}I_{\nu } (t )$$ and $$t\mapsto t^{-\nu }I_{\nu } (t )$$, $$\nu >1/2$$, seem to be the sharpest conditions for log-concavity and log-convexity, respectively, in the form of $$t^{\mu }I_{\nu } (t )$$.

### Theorem 5

The map $$(x,y )\mapsto (xy )^{\mu }\gamma _{ \nu } (xy )$$ is log-concave in $$\mathbb{R}^{+}\times \mathbb{R}^{+}$$ provided that $$\mu =\nu +{1}/{2}$$, $$\nu >1/2$$. Equivalently,

$$(x,y )\mapsto (xy )^{\nu /2+1/2}I_{\nu } (\sqrt{xy} )$$
(51)

is log-concave in first quadrant provided that $$\nu >1/2$$.

### Proof

For $$l(x,y)=-\ln [ (xy )^{\mu }\gamma _{\nu } (xy ) ]$$, its derivatives are expressed as follows:

\begin{aligned}& \frac{\partial l}{\partial x}= -\frac{\mu }{x}-y\frac{\gamma _{ \nu }^{\prime } (xy )}{\gamma _{\nu } (xy )}, \end{aligned}
(52)
\begin{aligned}& \frac{\partial ^{2}l}{\partial x^{2}}= \frac{\mu }{x^{2}}-y^{2} \frac{ \gamma _{\nu }^{\prime \prime } (xy )\gamma _{\nu } (xy )- [\gamma _{\nu }^{\prime } (xy ) ]^{2}}{\gamma _{\nu }^{2} (xy )}, \end{aligned}
(53)
\begin{aligned}& \frac{\partial ^{2}l}{\partial y^{2}}= \frac{\mu }{y^{2}}-x^{2} \frac{ \gamma _{\nu }^{\prime \prime } (xy )\gamma _{\nu } (xy )- [\gamma _{\nu }^{\prime } (xy ) ]^{2}}{\gamma _{\nu }^{2} (xy )}, \end{aligned}
(54)
\begin{aligned}& \frac{\partial ^{2}l}{\partial xy}= -\frac{\gamma _{\nu }^{\prime } (xy )}{\gamma _{\nu } (xy )}-xy\frac{\gamma _{\nu }^{\prime \prime } (xy )\gamma _{\nu } (xy )- [\gamma _{\nu }^{\prime } (xy ) ]^{2}}{\gamma _{\nu }^{2} (xy )}. \end{aligned}
(55)

Then the Hessian matrix of $$l(x,y)$$ is

$$\begin{bmatrix} \frac{\mu }{x^{2}}-y^{2}g_{\nu }^{\prime \prime } (xy ) & -g_{\nu }^{\prime } (xy )-xyg_{\nu }^{\prime \prime } (xy ) \\ -g_{\nu }^{\prime } (xy )-xyg_{\nu }^{\prime \prime } (xy )& \frac{\mu }{y^{2}}-x^{2}g_{\nu }^{\prime \prime } (xy ) \end{bmatrix} ,$$
(56)

where

\begin{aligned}& g_{\nu }(t)= \ln \gamma _{\nu }(t), \\& g_{\nu }^{\prime }(t)= \gamma _{\nu }^{\prime } (t )/ \gamma _{\nu } (t )>0, \\& g_{\nu }^{\prime \prime }(t)= \gamma _{\nu }^{\prime \prime } (t )/ \gamma _{\nu } (t )- \bigl(\gamma _{\nu }^{\prime } (t )/ \gamma _{\nu } (t ) \bigr)^{2}< 0. \end{aligned}

The second derivative $$g_{\nu }^{\prime \prime }$$ is negative because $$t\mapsto \gamma _{\nu }(t)$$ is log-concave in $$(0,\infty )$$ provided that $$\nu >-1$$. We now prove that the Hessian matrix is positive definite by letting

\begin{aligned} E&= \begin{bmatrix} u\\v \end{bmatrix} ^{T} \begin{bmatrix} \frac{\mu }{x^{2}}-y^{2}g_{\nu }^{\prime \prime } (xy ) & -g_{\nu }^{\prime } (xy )-xyg_{\nu }^{\prime \prime } (xy ) \\ -g_{\nu }^{\prime } (xy )-xyg_{\nu }^{\prime \prime } (xy ) & \frac{\mu }{y^{2}}-x^{2}g_{\nu }^{\prime \prime } (xy ) \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} \\ &= \biggl(\frac{u^{2}y^{2}+v^{2}x^{2}}{x^{2}y^{2}} \biggr)\mu - (uy+vx ) ^{2}g_{\nu }^{\prime \prime } (xy )-2uvg_{\nu }^{\prime } (xy ). \end{aligned}
(57)

Case 1: $$uv<0$$. Since $$\mu >0$$, $$g_{\nu }^{\prime \prime } (xy )<0$$, and $$-2uvg_{\nu }^{\prime } (xy )>0$$, it is obvious that

\begin{aligned} E (u,v ) &= \biggl(\frac{u^{2}y^{2}+v^{2}x^{2}}{x^{2}y^{2}} \biggr)\mu - (uy+vx ) ^{2} g_{\nu }^{\prime \prime } (xy )-2uvg_{\nu }^{\prime } (xy ) \\ &> 0. \end{aligned}
(58)

Case 2: $$uv>0$$.

\begin{aligned} E (u,v )&= \biggl(\frac{u^{2}y^{2}+v^{2}x^{2}}{x^{2}y^{2}} \biggr) \mu - (uy+vx )^{2}g_{\nu }^{\prime \prime } (xy )-2uvg _{\nu }^{\prime } (xy ) \\ &=\frac{ (uy-vx )^{2}}{x^{2}y^{2}}\mu - (uy-vx ) ^{2}g_{\nu }^{\prime \prime } (xy ) \\ &\quad {}+\frac{2uv}{xy}\mu -4uvxyg_{\nu }^{\prime \prime } (xy )-2uvg _{\nu }^{\prime } (xy ) \\ &\geq \frac{2uv}{xy}\mu -4uvxyg_{\nu }^{\prime \prime } (xy )-2uvg _{\nu }^{\prime } (xy ) \\ &= \frac{uv}{xy\gamma _{\nu } (xy )\gamma _{\nu } (xy )} \\ &\quad {}\cdot \bigl(2\mu \gamma _{\nu } (xy )\gamma _{\nu } (xy ) -4x ^{2}y^{2}\gamma _{\nu }^{\prime \prime } (xy )\gamma _{ \nu } (xy ) \\ &\quad {}+4x^{2}y^{2} \bigl[\gamma _{\nu }^{\prime } (xy ) \bigr] ^{2}-2xy\gamma _{\nu }^{\prime } (xy ) \gamma _{\nu } (xy )\bigr) \\ &= \frac{uvH(xy)}{xy\gamma _{\nu } (xy )\gamma _{\nu } (xy )}, \end{aligned}
(59)

where

\begin{aligned}& H(xy)= \sum_{n=0}^{\infty } \frac{\varGamma ^{2} (\nu +1 ) \varGamma (2n+2\nu -1 )}{4^{n}\varGamma (n+2\nu +1 ) \varGamma ^{2} (n+\nu +1 )}Q (n,\nu ,\mu )\frac{ (xy )^{n}}{n!}, \end{aligned}
(60)
\begin{aligned}& Q (n,\nu ,\mu )= 2(n+\nu ) \bigl(n+2n\nu +4\nu ^{2}-2\bigr). \end{aligned}
(61)

Next, we find the conditions implying $$Q (n,\nu ,\mu )>0$$. Since $$\nu >0$$, we have $$n+\nu >0$$ for all $$n\geq 0$$. By considering $$n+2n\nu +4\nu ^{2}-2>0$$, we have $$\nu <-1/2$$ or $$\nu >1/2-n/4$$. For the latter case, $$\nu \geq 1/2\geq 1/2-n/4$$ holds for all $$n\geq 0$$. Hence, $$Q (n,\nu ,\mu )>0$$ when $$\nu \geq 1/2$$.

Then we suppose that $$\mu =\nu +{1}/{2}$$, $$\nu >1/2$$. By definition, the Hessian matrix of $$-\ln (xy )^{\mu } \gamma _{\nu } (xy )$$ is a positive definite matrix. The mapping $$(x,y )\mapsto (xy )^{\mu }\gamma _{ \nu } (xy )$$ is log-concave under the condition of $$\mu =\nu +{1}/{2}$$ where $$\nu >1/2$$. Furthermore,

\begin{aligned} (xy )^{\mu }\gamma _{\nu } (xy )&= 2^{\nu } \varGamma (\nu +1 ) (xy )^{\mu -\nu /2}I_{\nu } (\sqrt{xy} ) \\ &= 2^{\nu }\varGamma (\nu +1 ) (xy )^{\nu /2+1/2}I _{\nu } ( \sqrt{xy} ) \end{aligned}
(62)

implies that $$(x,y )\mapsto (xy )^{\nu /2+1/2}I _{\nu } (\sqrt{xy} )$$ is log-concave. □
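The sign conditions used in this proof, $$g_{\nu }^{\prime }>0$$ and $$g_{\nu }^{\prime \prime }<0$$, can be spot-checked by evaluating $$\gamma _{\nu }$$ and its derivatives term by term from (5). A sketch with our own helper, standard library only, at arbitrary sample points:

```python
import math

def gamma_nu_deriv(nu, n, t, terms=60):
    # n-th derivative of gamma_nu(t), term by term from series (5).
    return sum(
        math.gamma(nu + 1) * 0.25 ** (k + n)
        / (math.gamma(nu + n + k + 1) * math.factorial(k)) * t ** k
        for k in range(terms)
    )

for nu in [-0.5, 0.3, 2.0]:       # any nu > -1 should work
    for t in [0.1, 1.0, 4.0]:
        g0 = gamma_nu_deriv(nu, 0, t)
        g1 = gamma_nu_deriv(nu, 1, t)
        g2 = gamma_nu_deriv(nu, 2, t)
        assert g1 / g0 > 0.0              # g_nu' = gamma'/gamma > 0
        assert g2 * g0 - g1 * g1 < 0.0    # g_nu'' < 0, i.e. gamma_nu is log-concave
```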

Theorem 5 can be applied in statistics, as it proves the log-concavity of the log-likelihood function of Kibble’s bivariate gamma distribution [14,15,16], whose probability density function is

\begin{aligned} f(x,y|\nu ,\lambda _{1},\lambda _{2},\rho )&= \frac{ (\lambda _{1} \lambda _{2} )^{\nu }}{ (1-\rho )\varGamma (\nu )} \biggl(\frac{xy}{\rho \lambda _{1}\lambda _{2}} \biggr)^{ \frac{\nu -1}{2}} \\ &\quad {} \cdot \exp \biggl(-\frac{\lambda _{1}x+\lambda _{2}y}{1-\rho } \biggr)I _{\nu -1} \biggl( \frac{2\sqrt{\rho \lambda _{1}\lambda _{2}xy}}{1- \rho } \biggr). \end{aligned}
(63)

In the situation where the degree of freedom ν and the shape parameter ρ are given, the maximum likelihood estimation of the distribution reduces to a convex optimization problem. The details of the proof are as follows:

\begin{aligned} f(x,y|\nu ,\lambda _{1},\lambda _{2},\rho )&= \frac{ (\lambda _{1} \lambda _{2} )^{\nu }}{ (\lambda _{1}\lambda _{2} )^{\frac{ \nu -1}{2}}}\frac{ (xy/\rho )^{\frac{\nu -1}{2}}}{ (1- \rho )\varGamma (\nu )} \\ &\quad {}\cdot \exp \biggl(-\frac{\lambda _{1}x+\lambda _{2}y}{1-\rho } \biggr)I _{\nu -1} \biggl( \frac{2\sqrt{\rho \lambda _{1}\lambda _{2}xy}}{1- \rho } \biggr) \end{aligned}
(64)
\begin{aligned} &= \biggl[\frac{ (xy/\rho )^{\frac{\nu -1}{2}}}{ (1- \rho )\varGamma (\nu )} \biggr] \biggl[\exp \biggl(- \frac{ \lambda _{1}x+\lambda _{2}y}{1-\rho } \biggr) \biggr] \\ &\quad {} \cdot \biggl[ (\lambda _{1}\lambda _{2} )^{ \frac{\nu +1}{2}}I_{\nu -1} \biggl(\frac{2\sqrt{\rho \lambda _{1} \lambda _{2}xy}}{1-\rho } \biggr) \biggr]. \end{aligned}
(65)

Let $$\kappa =2\sqrt{\rho xy}/(1-\rho )$$, and note that log-concavity is preserved under an affine transformation of the domain. Consequently, the following statements are equivalent:

\begin{aligned}& (\lambda _{1},\lambda _{2} )\mapsto (\lambda _{1} \lambda _{2} )^{ \frac{\nu +1}{2}} I_{\nu -1} (\sqrt{ \lambda _{1}\lambda _{2}} )\quad \text{is log-concave}, \end{aligned}
(66)
\begin{aligned}& \bigl(\kappa ^{2}\lambda _{1},\lambda _{2} \bigr) \mapsto \bigl(\kappa ^{2}\lambda _{1}\lambda _{2} \bigr)^{\frac{\nu +1}{2}}I_{\nu -1} (\kappa \sqrt{ \lambda _{1}\lambda _{2}} ) \quad \text{is log-concave}, \end{aligned}
(67)
\begin{aligned}& (\lambda _{1},\lambda _{2} )\mapsto \bigl(\kappa ^{2} \lambda _{1},\lambda _{2} \bigr) \\& \hphantom{(\lambda _{1},\lambda _{2} )}\mapsto \bigl(\kappa ^{2}\lambda _{1}\lambda _{2} \bigr)^{\frac{ \nu +1}{2}}I_{\nu -1} (\kappa \sqrt{\lambda _{1}\lambda _{2}} ) \quad \text{is log-concave}, \end{aligned}
(68)
\begin{aligned}& (\lambda _{1},\lambda _{2} ) \mapsto (\lambda _{1} \lambda _{2} )^{\frac{\nu +1}{2}}I_{\nu -1} ( \kappa \sqrt{ \lambda _{1}\lambda _{2}} ) \quad \text{is log-concave}. \end{aligned}
(69)

By considering the domain of $$\lambda _{1}$$ and $$\lambda _{2}$$, $$f(\lambda _{1},\lambda _{2})$$ is the product of three components:

\begin{aligned}& (xy )^{ \frac{\nu -1}{2}}/ \bigl[\rho ^{\frac{\nu -1}{2}} (1-\rho ) \varGamma (\nu ) \bigr] \quad \text{(constant)}, \end{aligned}
(70)
\begin{aligned}& \exp \bigl[- (\lambda _{1}x+\lambda _{2}y ) / (1-\rho ) \bigr]\quad \text{(log-linear function)}, \end{aligned}
(71)
\begin{aligned}& (\lambda _{1}\lambda _{2} )^{\frac{\nu +1}{2}}I_{\nu -1} (\kappa \sqrt{\lambda _{1}\lambda _{2}} )\quad \text{(log-concave function)}. \end{aligned}
(72)

Thus, the map $$(\lambda _{1},\lambda _{2} )\mapsto f$$ is log-concave in $$\mathbb{R}^{+}\times \mathbb{R}^{+}$$.
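The log-concavity of the third component (72) can be probed along a segment via the midpoint inequality. This is a numerical sketch only: the values ν = 2.5 and κ = 1.3 are arbitrary choices (with ν − 1 > 1/2 so that Theorem 5 applies to $$I_{\nu -1}$$), and `bessel_i` is our own series helper:

```python
import math

def bessel_i(nu, t, terms=60):
    # Truncated series (1) for I_nu(t).
    return (t / 2.0) ** nu * sum(
        (t * t / 4.0) ** k / (math.factorial(k) * math.gamma(nu + k + 1))
        for k in range(terms)
    )

def log_factor(l1, l2, nu=2.5, kappa=1.3):
    # ln of the factor (lambda_1 lambda_2)^((nu+1)/2) * I_{nu-1}(kappa * sqrt(lambda_1 lambda_2)).
    return ((nu + 1.0) / 2.0) * math.log(l1 * l2) \
        + math.log(bessel_i(nu - 1.0, kappa * math.sqrt(l1 * l2)))

# Midpoint concavity along an arbitrary segment in the first quadrant.
a, b = (0.5, 2.0), (3.0, 1.0)
mid = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
assert log_factor(*mid) >= (log_factor(*a) + log_factor(*b)) / 2.0
```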

### Theorem 6

If $$-1/2<\nu <0$$, $$I_{\nu }$$ is log-convex on $$(0,\infty )$$, and if $$\nu >0$$, $$I_{\nu }$$ is log-concave on

$$\bigl(0,\bigl(16\nu ^{4}-16\nu ^{3}-24\nu ^{2}+4\nu +5\bigr)^{1/2}/2 \bigr).$$

In 2007, Neumann conjectured that the modified Bessel function of the first kind is log-concave on $$(0,\infty )$$. To the best of our knowledge, it seems to remain an open question. In our investigation, the function is log-concave only on a specific interval determined by the parameter ν.
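Both the bounded log-concavity interval and the failure of global log-concavity can be observed numerically. For ν = 2 the stated interval is $$(0,\sqrt{45}/2 )\approx (0,3.35 )$$. A finite-difference sketch (our own helpers; the sample points are arbitrary, well inside and well outside the interval):

```python
import math

def bessel_i(nu, t, terms=120):
    # Truncated series (1) for I_nu(t).
    return (t / 2.0) ** nu * sum(
        (t * t / 4.0) ** k / (math.factorial(k) * math.gamma(nu + k + 1))
        for k in range(terms)
    )

def d2_log_i(nu, t, h=1e-3):
    # Central second difference of ln I_nu at t.
    return (math.log(bessel_i(nu, t + h)) - 2.0 * math.log(bessel_i(nu, t))
            + math.log(bessel_i(nu, t - h))) / h ** 2

nu = 2.0
bound = math.sqrt(16 * nu ** 4 - 16 * nu ** 3 - 24 * nu ** 2 + 4 * nu + 5) / 2  # ~3.354
assert d2_log_i(nu, 1.0) < 0.0 and d2_log_i(nu, 2.0) < 0.0  # log-concave inside the interval
assert d2_log_i(nu, 20.0) > 0.0                              # log-concavity fails for large t
```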

### Proof

The second derivative of $$\ln I_{\nu }$$ with respect to t is $$-\nu /t^{2} + 2\gamma _{\nu }^{\prime } (t^{2} ) / \gamma _{\nu } (t^{2} ) + 4t^{2} (\gamma _{\nu }^{ \prime \prime } (t^{2} ) \gamma _{\nu } (t^{2} )- (\gamma _{\nu }^{\prime } (t^{2} ) )^{2} )/ \gamma _{\nu }^{2} (t^{2} )$$. The term can be simplified to $$\varOmega (t )$$/$$[t\gamma _{\nu } (t^{2} ) ] ^{2}$$ where $$\varOmega (t )= -\nu \gamma _{\nu } (t^{2} ) \gamma _{\nu } (t^{2} )+ 2t^{2}\gamma _{\nu }^{\prime } (t^{2} )\gamma _{\nu } (t^{2} )+4t^{4} \gamma _{\nu }^{\prime \prime } (t^{2} )\gamma _{\nu } (t ^{2} )-4t^{4}\gamma _{\nu }^{\prime } (t^{2} ) \gamma _{\nu }^{\prime } (t^{2} )$$. The sign of the second derivative of $$\ln I_{\nu }$$ is determined by $$\varOmega (t )$$ as the denominator is non-negative. By adopting (8), $$\varOmega (t )$$ is rewritten in the form of the series

$$\varOmega (t )=\sum_{n=0}^{\infty }a_{n} \frac{t^{2n}}{n!},$$
(73)

where

$$a_{n}=\frac{\varGamma ^{2} (\nu +1 ) \varGamma (2n+2\nu -1 )}{4^{n} \varGamma (n+2\nu +1 ) \varGamma ^{2} (n+\nu +1 )} \bigl[2 (n+\nu ) \bigl(n-2\nu ^{2}+\nu \bigr) \bigr].$$
(74)

Then we consider

$$\varOmega (t )=a_{0}+\sum_{n=1}^{\infty }a_{n} \frac{t^{2n}}{n!}=-\nu +\sum_{n=1}^{\infty }a_{n} \frac{t^{2n}}{n!}.$$
(75)
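The sign pattern of the coefficients (74) can be tabulated directly. For ν = 2 we have $$2\nu ^{2}-\nu =6$$, so $$a_{n}<0$$ for $$n<6$$, $$a_{6}=0$$, and $$a_{n}>0$$ for $$n>6$$. A standard-library sketch with our own helper `a_coeff`:

```python
import math

def a_coeff(n, nu):
    # Coefficient a_n from (74).
    return (
        math.gamma(nu + 1) ** 2 * math.gamma(2 * n + 2 * nu - 1)
        / (4 ** n * math.gamma(n + 2 * nu + 1) * math.gamma(n + nu + 1) ** 2)
    ) * 2.0 * (n + nu) * (n - 2.0 * nu ** 2 + nu)

nu = 2.0
assert abs(a_coeff(0, nu) + nu) < 1e-12            # a_0 = -nu, matching (75)
assert all(a_coeff(n, nu) < 0.0 for n in range(1, 6))
assert a_coeff(6, nu) == 0.0                        # boundary index n = 2*nu^2 - nu
assert all(a_coeff(n, nu) > 0.0 for n in range(7, 30))
```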

The terms $$\varGamma ^{2} (\nu +1 )$$, $$\varGamma (2n+2 \nu -1 )$$, $$\varGamma (n+2\nu +1 )$$, $$\varGamma (n+ \nu +1 )$$, and $$n+\nu$$ in $$\{a_{n}\}$$ are positive for $$\nu >-1/2$$ and $$n\geq 1$$. Therefore, the sign of $$a_{n}$$ is determined only by $$n-2\nu ^{2}+\nu$$. Given $$-1/2<\nu <0$$, $$n-2\nu ^{2}+\nu$$ is positive for all $$n\geq 1$$, and $$a_{0}=-\nu >0$$, so we conclude that $$a_{n}$$ is positive for any $$n\geq 0$$. For arbitrary $$t\neq 0$$, $$\varOmega (t)$$ is then a positive function, so the second derivative of $$\ln I_{\nu }$$ is positive. This proves the log-convexity of $$I_{\nu }$$ under the first condition. For the case of $$\nu >0$$, $$a_{n}<0$$ is equivalent to $$n-2\nu ^{2}+\nu <0$$, i.e., $$n<2\nu ^{2}-\nu$$, so we can infer that there exists an $$n_{0}$$ such that $$a_{m}\leq 0$$ for any $$m\leq n_{0}$$ and $$a_{n}>0$$ for any $$n>n_{0}$$. By Lemma 2, we conclude that there is a $$t>0$$ such that

\begin{aligned} \varOmega (t )&=\sum_{n=0}^{\infty } \frac{\varGamma ^{2} (\nu +1 ) \varGamma (2n+2\nu -1 )}{4^{n}\varGamma (n+2\nu +1 ) \varGamma ^{2} (n+\nu +1 )} \\ &\quad {}\cdot \bigl[2 (n+\nu ) \bigl(n-\nu (2\nu -1 ) \bigr) \bigr] \frac{t ^{2n}}{n!} \\ &=\sum_{n=0}^{\infty }\frac{\varGamma ^{2} (\nu +1 )\varGamma (2n+2\nu +1 )}{4^{n}\varGamma (n+2\nu +1 )\varGamma ^{2} (n+\nu +1 )} \\ &\quad {}\cdot \biggl(\frac{1}{2}-\frac{ (\nu -\frac{1}{2} ) (\nu + \frac{1}{2} )}{n+\nu -\frac{1}{2}} \biggr) \frac{t^{2n}}{n!}. \end{aligned}
(76)

By Lemma 1, we have

\begin{aligned} \varOmega (t )&=\frac{1}{2}\gamma _{\nu } \bigl(t^{2} \bigr) \gamma _{\nu } \bigl(t^{2} \bigr) -\Biggl\lbrace \frac{ (\nu - \frac{1}{2} ) (\nu +\frac{1}{2} )}{ (\nu +1 )} \\ &\quad {}\cdot \Biggl[ \sum_{n=0}^{\infty } \frac{\varGamma (\nu +1 ) \varGamma (\nu +2 )\varGamma (2n+2\nu +2 )}{4^{n} \varGamma (n+2\nu +2 )\varGamma (n+\nu +2 )\varGamma (n+\nu +1 )} \\ &\quad {} \cdot \frac{ (n+\nu +1 ) (n+2\nu +1 )}{ (n+\nu -\frac{1}{2} ) (2n+2\nu +1 )}\frac{t ^{2n}}{n!}\Biggr] \Biggr\rbrace . \end{aligned}
(77)

For every $$n \in \mathbb{N}_{0}$$ and $$\nu >{1}/{2}$$, we have

\begin{aligned}& (n+\nu )^{2}+ (\nu +2 ) (n+\nu )+ (\nu +1 )> (n+\nu )^{2}-\frac{1}{4}, \end{aligned}
(78)
\begin{aligned}& \frac{ (n+\nu +1 ) (n+2\nu +1 )}{ (n+ \nu -\frac{1}{2} ) (2n+2\nu +1 )}> \frac{1}{2}. \end{aligned}
(79)

Thus, we have the inequality

\begin{aligned} \varOmega (t )&< \frac{1}{2}\gamma _{\nu } \bigl(t^{2} \bigr) \gamma _{\nu } \bigl(t^{2} \bigr) -\Biggl\lbrace \frac{ (\nu - \frac{1}{2} ) (\nu +\frac{1}{2} )}{2 (\nu +1 )} \\ &\quad {}\cdot \Biggl[ \sum_{n=0}^{\infty } \frac{\varGamma (\nu +1 ) \varGamma (\nu +2 )}{4^{n}\varGamma (n+2\nu +2 )}\frac{ \varGamma (2n+2\nu +2 )}{\varGamma (n+\nu +1 ) \varGamma (n+\nu +2 )}\frac{t^{2n}}{n!} \Biggr] \Biggr\rbrace \\ &=\frac{1}{2}\gamma _{\nu } \bigl(t^{2} \bigr) \gamma _{\nu } \bigl(t ^{2} \bigr) \biggl[1- \frac{ (\nu -\frac{1}{2} ) (\nu + \frac{1}{2} )}{ (\nu +1 )}\frac{\gamma _{\nu +1} (t ^{2} )}{\gamma _{\nu } (t^{2} )} \biggr] \\ &=\frac{1}{2}\gamma _{\nu } \bigl(t^{2} \bigr) \gamma _{\nu } \bigl(t ^{2} \bigr) \biggl[1-2 \biggl(\nu - \frac{1}{2} \biggr) \biggl(\nu + \frac{1}{2} \biggr) \frac{I_{\nu +1} (t )}{tI_{\nu } (t )} \biggr]. \end{aligned}
(80)
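
The bound (80) can be spot-checked numerically. The sketch below uses our own helper names: `besseli` implements the series (1), `gamma_nu` the definition (2), and `omega` the series (73)–(74); it compares Ω(t) with the right-hand side of (80) at a few sample points with ν > 1/2:

```python
from math import gamma, sqrt

def besseli(nu, t, terms=60):
    """I_nu(t) via the series (1)."""
    return (t / 2.0) ** nu * sum(
        (t * t / 4.0) ** k / (gamma(k + 1.0) * gamma(nu + k + 1.0))
        for k in range(terms))

def gamma_nu(nu, x):
    """gamma_nu(x) = 2^nu Gamma(nu+1) x^(-nu/2) I_nu(sqrt(x)), from (2)."""
    return 2.0 ** nu * gamma(nu + 1.0) * x ** (-nu / 2.0) * besseli(nu, sqrt(x))

def a_n(n, nu):
    """Series coefficient a_n of (73), from Eq. (74)."""
    pref = (gamma(nu + 1.0) ** 2 * gamma(2 * n + 2 * nu - 1.0)
            / (4.0 ** n * gamma(n + 2 * nu + 1.0) * gamma(n + nu + 1.0) ** 2))
    return pref * 2.0 * (n + nu) * (n - 2.0 * nu ** 2 + nu)

def omega(nu, t, terms=60):
    """Omega(t) via the series (73)."""
    return sum(a_n(n, nu) * t ** (2 * n) / gamma(n + 1.0) for n in range(terms))

def rhs80(nu, t):
    """Right-hand side of inequality (80)."""
    g = gamma_nu(nu, t * t)
    f = 1.0 - 2.0 * (nu - 0.5) * (nu + 0.5) * besseli(nu + 1.0, t) / (t * besseli(nu, t))
    return 0.5 * g * g * f

# Omega(t) stays below the right-hand side of (80) at every sampled point
print(all(omega(nu, t) < rhs80(nu, t)
          for nu in (1.0, 2.0) for t in (0.5, 1.0, 2.0, 5.0)))
```

This is only a pointwise check at chosen samples, not a substitute for the derivation above.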

Owing to the work of Kokologiannaki [17], we can use the inequality

$$-\frac{\nu +1}{t^{2}}+\sqrt{\frac{ (\nu +1 )^{2}}{t^{4}}+\frac{1}{t ^{2}}}< \frac{I_{\nu +1} (t )}{tI_{\nu } (t )}.$$
(81)
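
Inequality (81) is also easy to verify numerically. In the sketch below (our own helper names; `besseli` implements the series (1)), the left-hand side of (81) stays below the ratio $$I_{\nu +1} (t )/(tI_{\nu } (t ))$$ at every sampled point:

```python
from math import gamma, sqrt

def besseli(nu, t, terms=60):
    """I_nu(t) via the series (1)."""
    return (t / 2.0) ** nu * sum(
        (t * t / 4.0) ** k / (gamma(k + 1.0) * gamma(nu + k + 1.0))
        for k in range(terms))

def lhs81(nu, t):
    """Left-hand side of (81)."""
    return -(nu + 1.0) / t ** 2 + sqrt((nu + 1.0) ** 2 / t ** 4 + 1.0 / t ** 2)

def ratio(nu, t):
    """The ratio I_{nu+1}(t) / (t I_nu(t))."""
    return besseli(nu + 1.0, t) / (t * besseli(nu, t))

print(all(lhs81(nu, t) < ratio(nu, t)
          for nu in (0.75, 1.0, 2.0) for t in (0.5, 1.0, 2.0, 5.0, 10.0)))  # True
```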

Inequality (80) then becomes

$$\varOmega (t )< \frac{1}{2} \gamma _{\nu } \bigl(t^{2} \bigr) \gamma _{\nu } \bigl(t^{2} \bigr) \mathcal{F}_{\nu } (t ),$$
(82)

where

\begin{aligned} \mathcal{F}_{\nu } (t )&= 1-2 \biggl(\nu -\frac{1}{2} \biggr) \biggl(\nu +\frac{1}{2} \biggr)\frac{I_{\nu +1} (t )}{tI _{\nu } (t )} \\ &< 1+\frac{2 (\nu -\frac{1}{2} ) (\nu +\frac{1}{2} ) (\nu +1 )}{t^{2}} \\ &\quad {}-2 \biggl(\nu -\frac{1}{2} \biggr) \biggl(\nu +\frac{1}{2} \biggr)\sqrt{\frac{ (\nu +1 )^{2}}{t^{4}}+\frac{1}{t^{2}}}. \end{aligned}
(83)

Since $$\gamma _{\nu } >0$$, inequality (82) shows that $$\varOmega (t )<0$$ whenever $$\mathcal{F}_{\nu } (t )\leq 0$$. Consequently, the log-concavity of the modified Bessel function of the first kind is guaranteed on the interval $$(0,({16\nu ^{4}-16\nu ^{3}-24\nu ^{2}+4 \nu +5})^{1/2}/2)$$. □

According to the proof, MBF-I is not guaranteed to be log-concave on all of $$\mathbb{R}^{+}$$, as was conjectured. However, we suspect that the actual upper bound of the interval on which MBF-I is log-concave is at least $$2\nu ^{2}$$.
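
As a numerical illustration of this remark (a sketch, not a proof; the helper names are ours), one can locate the first sign change of Ω(t) for ν = 2 from the series (73)–(74) and compare it with the proven bound and with 2ν²:

```python
from math import gamma

def a_n(n, nu):
    """Series coefficient a_n of (73), from Eq. (74)."""
    pref = (gamma(nu + 1.0) ** 2 * gamma(2 * n + 2 * nu - 1.0)
            / (4.0 ** n * gamma(n + 2 * nu + 1.0) * gamma(n + nu + 1.0) ** 2))
    return pref * 2.0 * (n + nu) * (n - 2.0 * nu ** 2 + nu)

def omega(nu, t, terms=60):
    """Omega(t) via the series (73); its sign is the sign of (ln I_nu)''(t)."""
    return sum(a_n(n, nu) * t ** (2 * n) / gamma(n + 1.0) for n in range(terms))

nu = 2.0
proven = (16 * nu**4 - 16 * nu**3 - 24 * nu**2 + 4 * nu + 5) ** 0.5 / 2  # about 3.354
# coarse scan: first t in (0, 12] where Omega becomes positive
root = next(t / 100.0 for t in range(10, 1200) if omega(nu, t / 100.0) > 0)
print(proven, 2 * nu ** 2, root)
```

In this sketch the sign change for ν = 2 shows up between t = 8 and t = 9, beyond both the proven bound ≈ 3.35 and the suspected lower estimate 2ν² = 8, which is consistent with the remark above.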

## Conclusion

This research demonstrates the logarithmic concavity and logarithmic convexity properties of the MBF-I and its related functions. The proofs cover the bivariate case as well. Our technique simplifies the series coefficients and exploits the Chu–Vandermonde identity to prove these properties. The results may help in solving optimization problems for univariate and multivariate probabilistic models. The result of Theorem 6 suggests that the mapping $$t \mapsto I_{\nu +1} (t )$$ is log-concave on $$(0,({16\nu ^{4}-16 \nu ^{3}-24\nu ^{2}+4\nu +5})^{1/2}/2)$$ but not on all of $$\mathbb{R} ^{+}$$, as conjectured by Neuman.

## References

1. Yue, S., Ouarda, T., Bobée, B.: A review of bivariate gamma distributions for hydrological application. J. Hydrol. 246(1–4), 1–18 (2001)
2. Hamana, Y., Matsumoto, H.: The probability distributions of the first hitting times of Bessel processes (2012). arXiv:1106.6132
3. Baricz, A.: Functional inequalities involving Bessel and modified Bessel functions of the first kind. Expo. Math. 26(3), 279–293 (2008)
4. Baricz, A., Neuman, E.: Inequalities involving modified Bessel functions of the first kind II. J. Math. Anal. Appl. 332(1), 265–271 (2007)
5. Neuman, E.: Inequalities involving modified Bessel functions of the first kind. J. Math. Anal. Appl. 171(2), 532–536 (1992)
6. Joshi, C.M., Bissu, S.K.: Some inequalities of Bessel and modified Bessel functions. J. Aust. Math. Soc. 50(2), 333 (1991)
7. Koepf, W.: Hypergeometric Summation: An Algorithmic Approach to Summation and Special Function Identities. Vieweg, Braunschweig (1998)
8. Baricz, A.: Functional inequalities for Galué's generalized modified Bessel functions. J. Math. Inequal. 2, 183–193 (2007)
9. Thiruvenkatachar, V., Nanjundiah, T.: Inequalities concerning Bessel functions and orthogonal polynomials. Proc. Math. Sci. 33(6), 373–384 (1951)
10. Binmore, K.G.: Mathematical Analysis: A Straightforward Approach. Cambridge University Press, Cambridge (1977)
11. Laforgia, A.: Bounds for modified Bessel functions. J. Comput. Appl. Math. 34(3), 263–267 (1991)
12. Segura, J.: Bounds for ratios of modified Bessel functions and associated Turán-type inequalities. J. Math. Anal. Appl. 374(2), 516–528 (2011)
13. Baricz, A., Ponnusamy, S., Vuorinen, M.: Functional inequalities for modified Bessel functions. Expo. Math. 29(4), 399–414 (2011)
14. Iliopoulos, G., Karlis, D., Ntzoufras, I.: Bayesian estimation in Kibble's bivariate gamma distribution. Can. J. Stat. 33(4), 571–589 (2005)
15. Nadarajah, S., Gupta, A.K.: Some bivariate gamma distributions. Appl. Math. Lett. 19(8), 767–774 (2006)
16. Nadarajah, S., Kotz, S.: Product moments of Kibble's bivariate gamma distribution. Circuits Syst. Signal Process. 25(4), 567–570 (2006)
17. Kokologiannaki, C.G.: Bounds for functions involving ratios of modified Bessel functions. J. Math. Anal. Appl. 385(2), 737–742 (2012)

## Acknowledgements

We would like to express our special gratitude to our mentors, Dr. Boriboon Novaprateep, Dr. Withoon Chunwachirasiri, and Prof. Yongwimon Lenbury, for their valuable guidance and support. We would also like to thank our colleague Dr. Narongpol Wichailukkana for proofreading the manuscript.

## Funding

This research was supported by the Centre of Excellence in Mathematics, the Commission on Higher Education, Thailand.

## Author information


### Contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Thanit Nanthanasub.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests. 