
New bounds for soft margin estimator via concavity of Gaussian weighting function

Abstract

In the present article, we derive bounds for the soft margin estimator of “Identification of Patient Zero in Static and Temporal Networks: Robustness and Limitations”. To achieve these bounds, we utilize the concavity of the Gaussian weighting function and the well-known Jensen inequality. To obtain more general bounds for the soft margin estimator, we consider certain general functions defined on rectangles. We also use the behavior of the Jaccard similarity function to extract sharper bounds for the soft margin estimator.

1 Introduction

Infectious diseases are among the most dangerous threats to human society. When an infectious disease becomes an epidemic, it causes great loss of human life and damages the economy on a large scale. Epidemic infectious diseases are also particularly dangerous in that they spread rapidly to a massive number of people in a given population within a limited period of time. Many of the factors contributing to epidemic infectious diseases, such as climate change, genetic change, globalization, and urbanization, are to some extent caused by humans. People from many different fields have contributed to the detection of epidemic sources and the control of epidemic spreading. Mathematicians, in particular, have played a vital role in the modeling of epidemic spreading.

Contagion processes are among the most intensively studied dynamic processes on real-life complex networks of public interest [11, 12, 22, 24]. To model epidemic spreading, epidemiologists frequently use compartmental models such as the SIR model [17], the SIS model [16], and the SEIR model [20]. These models are very useful for explicitly modeling and estimating the numbers of susceptible and infected individuals in a population at risk.

Epidemiologists have obtained many models for epidemic source detection by imposing restrictions on the network structure, on the spreading process of the compartmental models (SIR, SIS), or on both [12–14, 23, 25, 28]. They also analyze viral genetic evolution [15, 26] and detect the epidemic source, or perform backtracking, from the given data [10]. Zhu et al. [28] initiated a model in which they established that, on infinite trees, the source node minimizes the maximum distance to the infected nodes. Altarelli et al. [8] estimated the epidemic source by a message-passing method, replacing the independence assumption with a tree-like contact network. Lokhov et al. [21] estimated the probability that a given node produces the observed snapshot by considering the SIR model and using a message-passing algorithm. Antulov-Fantulin et al. [9] proposed a model to analyze source probability estimators: they dropped the independence assumptions on the nodes and on the network structure and analyzed source probability estimators for general compartmental models. The soft margin estimator for the model proposed by Antulov-Fantulin et al. [9] is given by

$$ \hat{P}(\overrightarrow{R}=\overrightarrow{r}_{*}|\Theta =\theta )= \frac{1}{n}\sum_{i=1}^{n} \exp \biggl( -\frac{(\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1)^{2}}{a^{2}} \biggr), $$
(1)

where \(\overrightarrow{R}_{\theta }\) is a binary vector that indicates the random outcome of the epidemic process, \(\{\overrightarrow{r}_{\theta,1},\overrightarrow{r}_{\theta,2}, \ldots,\overrightarrow{r}_{\theta,n}\}\) are sample vectors representing the n independent outcomes of the epidemic process with source θ, \(\varphi:{\mathbb{R}^{n}}\times {\mathbb{R}^{n}}\rightarrow [0,1]\) is the Jaccard similarity function, computed by dividing the cardinality of the intersection of the sets of infected nodes in \(\overrightarrow{r}_{1}\) and \(\overrightarrow{r}_{2}\) by the cardinality of their union, \(\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})\) is a random variable that measures the similarity between the fixed realization vector \(\overrightarrow{r}_{*}\) and the random realization vector \(\overrightarrow{r}_{\theta,i}\), and \(\exp (-\frac{(x-1)^{2}}{a^{2}} )\) is the Gaussian weighting function with \(a>0\).
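For concreteness, the following minimal Python sketch evaluates (1) on toy data; the helper names `jaccard` and `soft_margin_estimator` and the small binary vectors are our own illustrations, not taken from [9].

```python
import numpy as np

def jaccard(r1, r2):
    """Jaccard similarity of two binary realization vectors:
    |intersection| / |union| of their sets of infected nodes."""
    inter = np.logical_and(r1, r2).sum()
    union = np.logical_or(r1, r2).sum()
    return inter / union if union > 0 else 1.0

def soft_margin_estimator(r_star, samples, a):
    """Monte Carlo soft margin estimator (1): average Gaussian weight
    of the Jaccard similarity between the observed snapshot r_star
    and each of the n simulated outcomes."""
    sims = np.array([jaccard(r_star, r) for r in samples])
    return np.mean(np.exp(-((sims - 1.0) ** 2) / a ** 2))

# Toy data: a 6-node network, one observed snapshot, three simulations.
r_star = np.array([1, 1, 0, 1, 0, 0])
samples = [np.array([1, 1, 0, 0, 0, 0]),
           np.array([1, 1, 1, 1, 0, 0]),
           np.array([0, 1, 0, 1, 1, 0])]
print(soft_margin_estimator(r_star, samples, a=np.sqrt(2)))
```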

We will use the following hypothesis for the construction of our results throughout the paper.

\(\mathbf{H:}\) Let \(\overrightarrow{R}_{\theta }\) be a binary vector, \(\{\overrightarrow{r}_{\theta,1},\overrightarrow{r}_{\theta,2}, \ldots,\overrightarrow{r}_{\theta,n}\}\) be n independent vectors, \(\overrightarrow{r}_{*}\) be a fixed realization vector, a be a positive real number, \(\varphi:\mathbb{R}^{n}\times \mathbb{R}^{n}\mapsto [0,1]\) be the Jaccard similarity function, and \(\hat{P}(\overrightarrow{R}=\overrightarrow{r}_{*}|\Theta =\theta )\) be the soft margin estimator given in (1).

In the remainder of this section, we briefly discuss convexity and concavity.

The notion of convex and concave functions is important in all fields of science, especially in mathematics, because of its notable properties. Consequently, many generalized and interesting results for convex and concave functions and their applications have been established [1–7, 18, 19, 27].

Now, the formal definition of convex and concave functions is stated as follows.

Definition 1

Let I be an arbitrary interval in \(\mathbb{R}\). Then the function \(\Psi:I\rightarrow \mathbb{R}\) is convex if the inequality

$$ \Psi \bigl(\lambda x+(1-\lambda )y\bigr)\leq \lambda \Psi (x)+(1- \lambda )\Psi (y) $$
(2)

holds for all \(x,y\in I\) and \(\lambda \in [0,1]\).

If inequality (2) holds in the reverse direction, then the function \(\Psi:I\rightarrow \mathbb{R}\) is said to be concave.

Many inequalities have been proved for convex and concave functions. Among these, one of the most prominent and dynamic is the well-known Jensen inequality. Jensen's inequality is also among the most general inequalities in the sense that many other inequalities can be deduced from it. Its formal statement reads as follows.

Theorem 1

Let I be an interval in \(\mathbb{R}\), \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{n})\) be an n-tuple such that \(x_{i}\in {I}\) for all \(i\in \{1,2,\ldots,n\}\), and \(\mathbf{p}=(p_{1},p_{2},\ldots,p_{n})\) be a positive n-tuple of real entries with \(P_{n}=\sum_{i=1}^{n}p_{i}\). If the function \(\Psi:{I}\rightarrow \mathbb{R}\) is convex, then

$$ \Psi \Biggl(\frac{1}{P_{n}}\sum_{i=1}^{n}p_{i}x_{i} \Biggr)\leq \frac{1}{P_{n}}\sum_{i=1}^{n}p_{i} \Psi (x_{i}). $$
(3)

If the function \(\Psi:{I}\rightarrow \mathbb{R}\) is concave, then inequality (3) holds in the reverse direction.
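As a quick numerical illustration (ours, not part of the classical statement), the following snippet checks the reversed inequality (3) for the Gaussian weighting function with \(a=\sqrt{2}\), which is concave on \([0,1]\) by Lemma 1 below; the sample points and weights are arbitrary.

```python
import numpy as np

# Check the concave case of Jensen's inequality (3) for the Gaussian
# weighting function with a = sqrt(2) (concave on [0, 1] by Lemma 1).
a = np.sqrt(2)
Psi = lambda x: np.exp(-((x - 1.0) ** 2) / a ** 2)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=8)     # x_i in I = [0, 1]
p = rng.uniform(0.1, 1.0, size=8)     # positive weights p_i
P_n = p.sum()

# For concave Psi, Psi(weighted mean) >= weighted mean of Psi.
assert Psi(np.dot(p, x) / P_n) >= np.dot(p, Psi(x)) / P_n
```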

In this paper, we advance the idea of obtaining bounds for the soft margin estimator given in (1) through the existing notion of concave functions. To achieve bounds for the soft margin estimator, we use the concavity of the Gaussian weighting function and Jensen's inequality. To obtain more general bounds, we consider certain general functions defined on rectangles that are monotonic with respect to the first variable. We also utilize the behavior of the Jaccard similarity function to obtain the desired bounds for the soft margin estimator.

2 Main results

In order to build our results, we first establish the following lemma, which underpins the theorems that follow.

Lemma 1

The Gaussian weighting function \(\Psi:[0,1]\rightarrow \mathbb{R}\) defined by

$$ \Psi (x)=\exp \biggl(-\frac{(x-1)^{2}}{a^{2}} \biggr) $$

is concave for all \(a\in [\sqrt{2},\infty )\).

Proof

To show the concavity of the Gaussian function \(\Psi (x)\), we use the second derivative test. Differentiating \(\Psi (x)\) twice with respect to x, we get

$$ \Psi ^{\prime \prime }(x)=\exp \biggl(-\frac{(x-1)^{2}}{a^{2}} \biggr) \biggl[ \frac{4(x-1)^{2}-2a^{2}}{a^{4}} \biggr]. $$

Since

$$ \exp \biggl(-\frac{(x-1)^{2}}{a^{2}} \biggr)>0 \quad\text{and}\quad a^{4}>0, $$

it suffices to show that

$$ 4(x-1)^{2}-2a^{2}\leq 0. $$

Note that

$$ 4(x-1)^{2}\leq 4 \quad\text{for all } x\in [0,1] $$
(4)

and

$$ -2a^{2}\leq -4 \quad\text{for all } a\in [\sqrt{2}, \infty ). $$
(5)

Now, adding (4) and (5), we obtain

$$ 4(x-1)^{2}-2a^{2}\leq 0. $$

Hence

$$ \Psi ^{\prime \prime }(x)=\exp \biggl(-\frac{(x-1)^{2}}{a^{2}} \biggr) \biggl[ \frac{4(x-1)^{2}-2a^{2}}{a^{4}} \biggr]\leq 0 $$

for all \(x\in [0,1]\) and \(a\in [\sqrt{2},\infty )\).

Consequently,

$$ \Psi (x)=\exp \biggl(-\frac{(x-1)^{2}}{a^{2}} \biggr) $$

is a concave function for all \(x\in [0,1]\) and \(a\in [\sqrt{2},\infty )\). □
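The lemma can be spot-checked numerically; the following sketch (an illustration we add here, not part of the proof) evaluates the closed form of \(\Psi ''\) derived above on a grid and also shows that concavity fails for \(a<\sqrt{2}\).

```python
import numpy as np

# Spot check of Lemma 1: the second derivative
# Psi''(x) = exp(-(x-1)^2/a^2) * (4(x-1)^2 - 2a^2) / a^4
# is nonpositive on [0, 1] whenever a >= sqrt(2).
def psi_second(x, a):
    return (np.exp(-((x - 1.0) ** 2) / a ** 2)
            * (4 * (x - 1.0) ** 2 - 2 * a ** 2) / a ** 4)

xs = np.linspace(0.0, 1.0, 101)
for a in (np.sqrt(2), 2.0, 10.0):
    assert np.all(psi_second(xs, a) <= 0.0)

# For a < sqrt(2), concavity fails near x = 0: with a = 1,
# Psi''(0) = 2/e > 0.
assert psi_second(0.0, 1.0) > 0.0
```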

In the following result, we obtain bounds for the soft margin estimator by using the concavity of the Gaussian function.

Theorem 2

Let hypothesis H hold with \(a\in [\sqrt{2},\infty )\). Then

$$\begin{aligned} &\exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr) \\ &\quad \geq \hat{P}( \overrightarrow{R}=\overrightarrow{r}_{*}|\Theta = \theta ) \\ &\quad \geq \Biggl(1-\frac{1}{n}\sum_{i=1}^{n} \varphi ( \overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i}) \Biggr) \exp \biggl(-\frac{1}{a^{2}} \biggr) +\frac{1}{n}\sum _{i=1}^{n} \varphi (\overrightarrow{r}_{*}, \overrightarrow{r}_{\theta,i}). \end{aligned}$$
(6)

Proof

By Lemma 1, the Gaussian function \(\Psi (x)=\exp (-\frac{(x-1)^{2}}{a^{2}} )\) is concave on \([0,1]\) for \(a\in [\sqrt{2},\infty )\). Therefore

$$\begin{aligned} &\Psi (x)=\Psi \bigl((1-x)0+(x-0)1\bigr) \geq (1-x)\Psi (0)+x \Psi (1) \\ &\quad\Rightarrow \quad \exp \biggl(-\frac{(x-1)^{2}}{a^{2}} \biggr) \geq (1-x)\exp \biggl(-\frac{1}{a^{2}} \biggr)+x\exp (0)=(1-x)\exp \biggl(- \frac{1}{a^{2}} \biggr)+x. \end{aligned}$$
(7)

Now, putting \(x=\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})\) in (7), we obtain

$$\begin{aligned} \exp \biggl(- \frac{(\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1)^{2}}{a^{2}} \biggr) \geq \bigl(1- \varphi (\overrightarrow{r}_{*}, \overrightarrow{r}_{\theta,i}) \bigr)\exp \biggl(-\frac{1}{a^{2}} \biggr) + \varphi (\overrightarrow{r}_{*}, \overrightarrow{r}_{\theta,i}). \end{aligned}$$
(8)

Multiplying both sides of (8) by \(\frac{1}{n}\) and taking summation over i, we get

$$\begin{aligned} &\frac{1}{n}\sum_{i=1}^{n} \exp \biggl(- \frac{(\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1)^{2}}{a^{2}} \biggr) \\ &\quad\geq \Biggl(1-\frac{1}{n}\sum_{i=1}^{n} \varphi ( \overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i}) \Biggr) \exp \biggl(-\frac{1}{a^{2}} \biggr) +\frac{1}{n}\sum _{i=1}^{n} \varphi (\overrightarrow{r}_{*}, \overrightarrow{r}_{\theta,i}). \end{aligned}$$
(9)

Moreover, by Lemma 1 the Gaussian function \(\Psi (x)=\exp (-\frac{(x-1)^{2}}{a^{2}} )\) is concave, so Theorem 1 (in its concave form) gives

$$\begin{aligned} \exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr) \geq \frac{1}{n}\sum_{i=1}^{n} \exp \biggl(- \frac{(\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1)^{2}}{a^{2}} \biggr). \end{aligned}$$
(10)

Now, comparing (9) and (10), we obtain (6). □
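The two-sided bound (6) is easy to verify numerically; in the sketch below, uniform random values on \([0,1]\) serve as stand-ins for the similarities \(\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})\).

```python
import numpy as np

# Numerical check of the two-sided bound (6).
a = np.sqrt(2)
rng = np.random.default_rng(1)
sims = rng.uniform(0.0, 1.0, size=1000)   # stand-ins for the similarities

estimator = np.mean(np.exp(-((sims - 1.0) ** 2) / a ** 2))  # P-hat in (1)
m = sims.mean()
upper = np.exp(-((m - 1.0) ** 2) / a ** 2)                  # Jensen bound
lower = (1.0 - m) * np.exp(-1.0 / a ** 2) + m               # chord bound

assert lower <= estimator <= upper
```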

In the following theorem, we obtain sharper bounds for the soft margin estimator by imposing a restriction on the Jaccard function.

Theorem 3

Let all the hypotheses of Theorem 2 hold. If \(0< d\leq \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})\leq D<1\), then

$$\begin{aligned} &\exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr) \\ &\quad \geq \hat{P}( \overrightarrow{R}=\overrightarrow{r}_{*}|\Theta = \theta ) \\ &\quad \geq \frac{D-\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})}{D-d} \exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\qquad{}+ \frac{\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-d}{D-d} \exp \biggl(-\frac{(D-1)^{2}}{a^{2}} \biggr). \end{aligned}$$
(11)

Proof

By Lemma 1, for \(a\in [\sqrt{2},\infty )\) and \(x\in [d,D]\), the Gaussian function \(\Psi (x)=\exp (-\frac{(x-1)^{2}}{a^{2}} )\) is concave. Therefore

$$\begin{aligned} & \Psi (x)=\Psi \biggl(\frac{(D-x)d+(x-d)D}{D-d} \biggr) \geq \frac{D-x}{D-d}\Psi (d)+ \frac{x-d}{D-d}\Psi (D) \\ &\quad \Rightarrow \quad\exp \biggl(-\frac{(x-1)^{2}}{a^{2}} \biggr) \geq \frac{D-x}{D-d}\exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) + \frac{x-d}{D-d}\exp \biggl(-\frac{(D-1)^{2}}{a^{2}} \biggr). \end{aligned}$$
(12)

Now, substituting \(x=\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})\) in (12), multiplying by \(\frac{1}{n}\), and summing over i, we obtain

$$\begin{aligned} &\frac{1}{n}\sum_{i=1}^{n} \exp \biggl(- \frac{(\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1)^{2}}{a^{2}} \biggr) \\ &\quad \geq \frac{D-\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})}{D-d} \exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\qquad{}+ \frac{\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-d}{D-d} \exp \biggl(-\frac{(D-1)^{2}}{a^{2}} \biggr). \end{aligned}$$
(13)

By Lemma 1, the Gaussian function \(\Psi (x)=\exp (-\frac{(x-1)^{2}}{a^{2}} )\) is concave. Therefore, using Theorem 1, we have

$$\begin{aligned} \exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr) \geq \frac{1}{n}\sum_{i=1}^{n} \exp \biggl(- \frac{(\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1)^{2}}{a^{2}} \biggr). \end{aligned}$$
(14)

Now, combining (13) and (14), we obtain

$$\begin{aligned} & \exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr) \\ &\quad \geq \frac{1}{n}\sum_{i=1}^{n}\exp \biggl(- \frac{(\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1)^{2}}{a^{2}} \biggr) \\ &\quad \geq \frac{D-\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})}{D-d} \exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\qquad{}+ \frac{\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-d}{D-d} \exp \biggl(-\frac{(D-1)^{2}}{a^{2}} \biggr), \end{aligned}$$
(15)

which is equivalent to (11). □
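The following sketch checks (11) numerically for the arbitrarily chosen similarity bounds \(d=0.2\) and \(D=0.8\) and also illustrates that the resulting lower bound sharpens that of Theorem 2.

```python
import numpy as np

# Numerical check of (11) with illustrative bounds d = 0.2, D = 0.8.
a, d, D = np.sqrt(2), 0.2, 0.8
Psi = lambda x: np.exp(-((x - 1.0) ** 2) / a ** 2)

rng = np.random.default_rng(2)
sims = rng.uniform(d, D, size=1000)
estimator = Psi(sims).mean()
m = sims.mean()

upper = Psi(m)
lower = (D - m) / (D - d) * Psi(d) + (m - d) / (D - d) * Psi(D)
assert lower <= estimator <= upper

# The chord over [d, D] dominates the chord over [0, 1] at x = m,
# so this lower bound is at least as sharp as the one in Theorem 2.
assert lower >= (1.0 - m) * np.exp(-1.0 / a ** 2) + m
```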

In the following theorem, we obtain more general bounds for the soft margin estimator by considering a general function defined on rectangles that is increasing with respect to the first variable.

Theorem 4

Let hypothesis H hold with \(a\in [\sqrt{2},\infty )\). Also assume that ϒ is an interval in \(\mathbb{R}\), \(F:\Upsilon \times \Upsilon \rightarrow \mathbb{R}\) is an increasing function with respect to the first variable, and \(\phi:[0,1]\rightarrow \Upsilon \) is an arbitrary function. Then

$$\begin{aligned} & F \biggl(\exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr),\phi (y) \biggr) \\ &\quad \geq F \bigl(\hat{P}(\overrightarrow{R}=\overrightarrow{r}_{*}|\Theta = \theta ),\phi (y) \bigr) \\ &\quad \geq \min_{x,y\in [0,1]} F \biggl( (1-x ) \exp \biggl(- \frac{1}{a^{2}} \biggr) +x, \phi (y) \biggr). \end{aligned}$$
(16)

Proof

By utilizing inequality (6) and the fact that F is increasing with respect to the first variable, we get (16). □
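To make Theorem 4 concrete, the sketch below instantiates it with the hypothetical choices \(F(u,v)=ue^{v}\), which is increasing in u, and \(\phi (y)=y\); the minimum on the right of (16) is approximated on a grid.

```python
import numpy as np

# Illustration of (16) with F(u, v) = u * exp(v) and phi(y) = y.
a = np.sqrt(2)
c = np.exp(-1.0 / a ** 2)                      # Psi(0)
F = lambda u, v: u * np.exp(v)
phi = lambda y: y

rng = np.random.default_rng(3)
sims = rng.uniform(0.0, 1.0, size=1000)
estimator = np.mean(np.exp(-((sims - 1.0) ** 2) / a ** 2))
m = sims.mean()
upper = np.exp(-((m - 1.0) ** 2) / a ** 2)

# Approximate the minimum on the right of (16) on a grid over [0, 1]^2;
# the chord (1 - x) * c + x is the lower bound of Theorem 2.
grid = np.linspace(0.0, 1.0, 201)
chord = (1.0 - grid) * c + grid
rhs = min(F(u, phi(y)) for u in chord for y in grid)

y = 0.5  # any fixed y in [0, 1]
assert F(upper, phi(y)) >= F(estimator, phi(y)) >= rhs
```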

In the following result, we obtain further general bounds for the soft margin estimator by using a general function defined on rectangles and imposing a restriction on the Jaccard function.

Theorem 5

Let hypothesis H hold with \(a\in [\sqrt{2},\infty )\). Also assume that ϒ is an interval in \(\mathbb{R}\) and \(F:\Upsilon \times \Upsilon \rightarrow \mathbb{R}\) is an increasing function with respect to the first variable. If \(0< d\leq \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{ \theta,i})\leq D<1\) and \(\phi:[d,D]\rightarrow \Upsilon \) is an arbitrary function, then

$$\begin{aligned} &F \biggl(\exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr),\phi (y) \biggr) \\ &\quad \geq F \bigl(\hat{P}(\overrightarrow{R}=\overrightarrow{r}_{*}|\Theta = \theta ),\phi (y) \bigr) \\ &\quad \geq \min_{x,y\in [d,D]} F \biggl(\frac{D-x}{D-d}\exp \biggl(- \frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\qquad{}+\frac{x-d}{D-d} \exp \biggl(-\frac{(D-1)^{2}}{a^{2}} \biggr),\phi (y) \biggr). \end{aligned}$$
(17)

Furthermore, the right-hand side of (17) is a decreasing function of D and an increasing function of d.

Proof

By utilizing inequality (11) and the fact that F is increasing with respect to the first variable, we obtain (17).

Now, we show that the right-hand side of (17) is a decreasing function of D.

Let \(d\leq k_{1}< k_{2}\leq D\). By Lemma 1, the Gaussian function \(\Psi (x)=\exp (-\frac{(x-1)^{2}}{a^{2}} )\) is concave for \(a\in [\sqrt{2},\infty )\). Therefore, the first-order divided difference of \(\Psi (x)\) is decreasing, that is,

$$\begin{aligned} \frac{\exp (-\frac{(k_{1}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} )}{k_{1}-d} \geq \frac{\exp (-\frac{(k_{2}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} )}{k_{2}-d}. \end{aligned}$$
(18)

Multiplying both sides of (18) by \(x-d\geq 0\) and then adding \(\exp (-\frac{(d-1)^{2}}{a^{2}} )\) to both sides, we get

$$\begin{aligned} &\frac{\exp (-\frac{(k_{1}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} )}{k_{1}-d}(x-d) +\exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\qquad \geq \frac{\exp (-\frac{(k_{2}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} )}{k_{2}-d}(x-d) +\exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\quad \Rightarrow\quad \frac{ \{\exp (-\frac{(k_{1}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} ) \} (x-d) +\exp (-\frac{(d-1)^{2}}{a^{2}} )(k_{1}-d)}{k_{1}-d} \\ & \phantom{\quad \Rightarrow\quad}\quad\geq \frac{ \{\exp (-\frac{(k_{2}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} ) \} (x-d) +\exp (-\frac{(d-1)^{2}}{a^{2}} )(k_{2}-d)}{k_{2}-d} \\ &\quad \Rightarrow \quad \frac{k_{1}-x}{k_{1}-d}\exp \biggl(- \frac{(d-1)^{2}}{a^{2}} \biggr) +\frac{x-d}{k_{1}-d}\exp \biggl(- \frac{(k_{1}-1)^{2}}{a^{2}} \biggr) \\ &\phantom{\quad \Rightarrow\quad}\quad \geq \frac{k_{2}-x}{k_{2}-d}\exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) + \frac{x-d}{k_{2}-d}\exp \biggl(-\frac{(k_{2}-1)^{2}}{a^{2}} \biggr). \end{aligned}$$
(19)

By utilizing (19), the fact that \([d,k_{1}]\subseteq [d,k_{2}]\), and the increasing property of F with respect to the first variable, we obtain

$$\begin{aligned} &\min_{x,y\in [d,k_{1}]}F \biggl(\frac{k_{1}-x}{k_{1}-d}\exp \biggl(- \frac{(d-1)^{2}}{a^{2}} \biggr) +\frac{x-d}{k_{1}-d}\exp \biggl(- \frac{(k_{1}-1)^{2}}{a^{2}} \biggr),\phi (y) \biggr) \\ &\quad \geq \min_{x,y\in [d,k_{2}]}F \biggl( \frac{k_{2}-x}{k_{2}-d} \exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) +\frac{x-d}{k_{2}-d}\exp \biggl(-\frac{(k_{2}-1)^{2}}{a^{2}} \biggr),\phi (y) \biggr). \end{aligned}$$
(20)

Hence, (20) proves that the right-hand side of (17) is a decreasing function of D.

Similarly, we can prove that the right-hand side of (17) is an increasing function of d. □
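The monotonicity claim can likewise be checked numerically; the sketch below uses the hypothetical choices \(F(u,v)=u-v\) (increasing in u) and \(\phi (y)=y\) and observes that the right-hand side of (17) does not increase as D grows.

```python
import numpy as np

# Spot check of the monotonicity claim in Theorem 5 with
# F(u, v) = u - v and phi(y) = y.
a, d = np.sqrt(2), 0.1
Psi = lambda x: np.exp(-((x - 1.0) ** 2) / a ** 2)
F = lambda u, v: u - v

def rhs_min(d, D, num=201):
    """Grid approximation of the minimum on the right of (17)."""
    xs = np.linspace(d, D, num)
    chord = (D - xs) / (D - d) * Psi(d) + (xs - d) / (D - d) * Psi(D)
    return min(F(u, y) for u in chord for y in xs)

vals = [rhs_min(d, D) for D in (0.3, 0.5, 0.7, 0.9)]
assert all(v1 >= v2 for v1, v2 in zip(vals, vals[1:]))
```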

In the following theorem, we obtain general bounds for the soft margin estimator by taking a general function defined on rectangles that is decreasing with respect to the first variable.

Theorem 6

Let hypothesis H hold with \(a\in [\sqrt{2},\infty )\). Also assume that ϒ is an interval in \(\mathbb{R}\), \(F:\Upsilon \times \Upsilon \rightarrow \mathbb{R}\) is a decreasing function with respect to the first variable and \(\phi:[0,1]\rightarrow \Upsilon \) is an arbitrary function. Then

$$\begin{aligned} &F \biggl(\exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr),\phi (y) \biggr) \\ &\quad \leq F \bigl(\hat{P}(\overrightarrow{R}=\overrightarrow{r}_{*}|\Theta = \theta ),\phi (y) \bigr) \\ &\quad \leq \max_{x,y\in [0,1]} F \biggl( (1-x ) \exp \biggl(- \frac{1}{a^{2}} \biggr) +x, \phi (y) \biggr). \end{aligned}$$
(21)

Proof

By utilizing inequality (6) and the fact that F is decreasing with respect to the first variable, we get (21). □

In the next result, we obtain more general bounds for the soft margin estimator by using a general function defined on rectangles that is decreasing with respect to the first variable, again imposing a restriction on the Jaccard function.

Theorem 7

Let hypothesis H hold with \(a\in [\sqrt{2},\infty )\). Also assume that ϒ is an interval in \(\mathbb{R}\) and \(F:\Upsilon \times \Upsilon \rightarrow \mathbb{R}\) is a decreasing function with respect to the first variable. If \(0< d\leq \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{ \theta,i})\leq D<1\) and \(\phi:[d,D]\rightarrow \Upsilon \) is an arbitrary function, then

$$\begin{aligned} &F \biggl(\exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr),\phi (y) \biggr) \\ &\quad \leq F \bigl(\hat{P}(\overrightarrow{R}=\overrightarrow{r}_{*}|\Theta = \theta ),\phi (y) \bigr) \\ &\quad \leq \max_{x,y\in [d,D]} F \biggl(\frac{D-x}{D-d}\exp \biggl(- \frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\qquad{}+\frac{x-d}{D-d} \exp \biggl(-\frac{(D-1)^{2}}{a^{2}} \biggr),\phi (y) \biggr). \end{aligned}$$
(22)

Furthermore, the right-hand side of (22) is an increasing function of D and a decreasing function of d.

Proof

By using inequality (11) and the fact that F is decreasing with respect to the first variable, we get (22).

Now, we show that the right-hand side of (22) is an increasing function of D.

Let \(d\leq k_{1}< k_{2}\leq D\). By Lemma 1, the Gaussian function \(\Psi (x)=\exp (-\frac{(x-1)^{2}}{a^{2}} )\) is concave for \(a\in [\sqrt{2},\infty )\). Therefore, the first-order divided difference of \(\Psi (x)\) is decreasing, that is,

$$\begin{aligned} \frac{\exp (-\frac{(k_{1}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} )}{k_{1}-d} \geq \frac{\exp (-\frac{(k_{2}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} )}{k_{2}-d}. \end{aligned}$$
(23)

Multiplying both sides of (23) by \(x-d\geq 0\) and then adding \(\exp (-\frac{(d-1)^{2}}{a^{2}} )\) to both sides, we get

$$\begin{aligned} &\frac{\exp (-\frac{(k_{1}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} )}{k_{1}-d}(x-d) +\exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\qquad\geq \frac{\exp (-\frac{(k_{2}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} )}{k_{2}-d}(x-d) +\exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\quad \Rightarrow\quad \frac{ \{\exp (-\frac{(k_{1}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} ) \} (x-d) +\exp (-\frac{(d-1)^{2}}{a^{2}} )(k_{1}-d)}{k_{1}-d} \\ &\phantom{\quad \Rightarrow\quad}\quad \geq \frac{ \{\exp (-\frac{(k_{2}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} ) \} (x-d) +\exp (-\frac{(d-1)^{2}}{a^{2}} )(k_{2}-d)}{k_{2}-d} \\ &\quad \Rightarrow\quad \frac{k_{1}-x}{k_{1}-d}\exp \biggl(- \frac{(d-1)^{2}}{a^{2}} \biggr) +\frac{x-d}{k_{1}-d}\exp \biggl(- \frac{(k_{1}-1)^{2}}{a^{2}} \biggr) \\ &\phantom{\quad \Rightarrow\quad}\quad \geq \frac{k_{2}-x}{k_{2}-d}\exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) + \frac{x-d}{k_{2}-d}\exp \biggl(-\frac{(k_{2}-1)^{2}}{a^{2}} \biggr). \end{aligned}$$
(24)

By utilizing (24), the fact that \([d,k_{1}]\subseteq [d,k_{2}]\), and the decreasing property of F with respect to the first variable, we obtain

$$\begin{aligned} &\max_{x,y\in [d,k_{1}]}F \biggl(\frac{k_{1}-x}{k_{1}-d}\exp \biggl(- \frac{(d-1)^{2}}{a^{2}} \biggr) +\frac{x-d}{k_{1}-d}\exp \biggl(- \frac{(k_{1}-1)^{2}}{a^{2}} \biggr),\phi (y) \biggr) \\ &\quad\leq \max_{x,y\in [d,k_{2}]}F \biggl( \frac{k_{2}-x}{k_{2}-d} \exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) +\frac{x-d}{k_{2}-d}\exp \biggl(-\frac{(k_{2}-1)^{2}}{a^{2}} \biggr),\phi (y) \biggr). \end{aligned}$$
(25)

Hence (25) confirms that the right-hand side of (22) is an increasing function of D.

Similarly, we can prove that the right-hand side of (22) is a decreasing function of d. □

3 Conclusion

In this paper, we derived some useful bounds for the soft margin estimator given in (1) with the help of the notion of concavity. In deriving these bounds, we exploited the properties of the Jaccard similarity function. To obtain more general bounds for the soft margin estimator, we considered general functions defined on rectangles that are monotonic with respect to the first variable.

Availability of data and materials

Not applicable.

References

  1. Adil Khan, M., Khan, J., Pečarić, J.: Generalization of Jensen’s and Jensen–Steffensen’s inequalities by generalized majorization theorem. J. Math. Inequal. 11(4), 1049–1074 (2017)

  2. Adil Khan, M., Khan, S., Chu, Y.-M.: A new bound for the Jensen gap with applications in information theory. IEEE Access 8, 98001–98008 (2020)

  3. Adil Khan, M., Latif, N., Pečarić, J.: Generalization of majorization theorem. J. Math. Inequal. 9(3), 847–872 (2015)

  4. Adil Khan, M., Latif, N., Pečarić, J.: Generalization of majorization theorem via Abel–Gontscharoff polynomial. Rad Hrvat. Akad. Znan. Umjet. Mat. Znan. 19(523), 91–116 (2015)

  5. Adil Khan, M., Pečarić, Ð., Pečarić, J.: New refinement of the Jensen inequality associated to certain functions with applications. J. Inequal. Appl. 2020, Article ID 74 (2020)

  6. Adil Khan, M., Wu, S.-H., Ullah, H., Chu, Y.-M.: Discrete majorization type inequalities for convex functions on rectangles. J. Inequal. Appl. 2019, Article ID 16 (2019)

  7. Ahmed, K., Adil Khan, M., Khan, S., Ali, A., Chu, Y.-M.: New estimates for generalized Shannon and Zipf–Mandelbrot entropies via convexity results. Results Phys. 18, Article ID 103305 (2020)

  8. Altarelli, F., Braunstein, A., Dall’Asta, L., Lage-Castellanos, A., Zecchina, R.: Bayesian inference of epidemics on networks via belief propagation. Phys. Rev. Lett. 112(11), 118701 (2014)

  9. Antulov-Fantulin, N., Lančić, A., Šmuc, T., Štefančić, H., Šikić, M.: Identification of patient zero in static and temporal networks: robustness and limitations. Phys. Rev. Lett. 114, 248701 (2015)

  10. Auerbach, D.M., Darrow, W.W., Jaffe, H.W., Curran, J.W.: Cluster of cases of the acquired immune deficiency syndrome. Am. J. Med. 76, 487–491 (1984)

  11. Castellano, C., Pastor-Satorras, R.: Thresholds for epidemic spreading in networks. Phys. Rev. Lett. 105, 218701 (2010)

  12. Colizza, V., Pastor-Satorras, R., Vespignani, A.: Reaction–diffusion processes and metapopulation models in heterogeneous networks. Nat. Phys. 3(4), 276–282 (2007)

  13. Comin, C.H., da F. Costa, L.: Identifying the starting point of a spreading process in complex networks. Phys. Rev. E 84, 056105 (2011)

  14. Dong, W., Zhang, W., Tan, C.W.: Rooting out the rumor culprit from suspects. In: IEEE Int. Symp. on Information Theory, pp. 2671–2675 (2013)

  15. Du, X., Dong, L., Lan, Y., Peng, Y., Wu, A., Zhang, Y., Huang, W., Wang, D., Wang, M., Guo, Y.: Mapping of H3N2 influenza antigenic evolution in China reveals a strategy for vaccine strain recommendation. Nat. Commun. 3(1), 1–9 (2012)

  16. Gray, A., Greenhalgh, D., Hu, L., Mao, X., Pan, J.: A stochastic differential equation SIS epidemic model. SIAM J. Appl. Math. 71(3), 876–902 (2011)

  17. Keeling, M.J., Rohani, P.: Modeling Infectious Diseases in Humans and Animals. Princeton University Press, Princeton (2011)

  18. Khan, S., Adil Khan, M., Butt, S.I., Chu, Y.-M.: A new bound for the Jensen gap pertaining twice differentiable functions with applications. Adv. Differ. Equ. 2020, Article ID 333 (2020)

  19. Khan, S., Adil Khan, M., Chu, Y.-M.: New converses of Jensen’s inequality via Green functions with applications. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 114(3), Article ID 114 (2020)

  20. Legrand, J., Grais, R.F., Boelle, P.Y., Valleron, A.J., Flahault, A.: Understanding the dynamics of Ebola epidemics. Epidemiol. Infect. 135, 610–621 (2007)

  21. Lokhov, A.Y., Mézard, M., Ohta, H., Zdeborová, L.: Inferring the origin of an epidemic with a dynamic message-passing algorithm. Phys. Rev. E 90(1), 012801 (2014)

  22. Pastor-Satorras, R., Vespignani, A.: Epidemic spreading in scale-free networks. Phys. Rev. Lett. 86(14), 3200 (2001)

  23. Pinto, P.C., Thiran, P., Vetterli, M.: Locating the source of diffusion in large-scale networks. Phys. Rev. Lett. 109(6), 068702 (2012)

  24. Vespignani, A.: Modelling dynamical processes in complex socio-technical systems. Nat. Phys. 8(1), 32–39 (2012)

  25. Wang, Z., Dong, W., Zhang, W., Tan, C.W.: Rumor source detection with multiple observations: fundamental limits and algorithms. In: ACM SIGMETRICS Perform. Eval. Rev., pp. 1–13 (2014)

  26. Worobey, M., Han, G.Z., Rambaut, A.: Genesis and pathogenesis of the 1918 pandemic H1N1 influenza A virus. Proc. Natl. Acad. Sci. USA 111, 8107–8112 (2014)

  27. Wu, S., Adil Khan, M., Haleemzai, H.U.: Refinements of majorization inequality involving convex functions via Taylor’s theorem with mean value form of the remainder. Mathematics 7(8), 663 (2019)

  28. Zhu, K., Ying, L.: Information source detection in the SIR model: a sample path based approach. In: Information Theory and Applications Workshop (ITA), pp. 1–9. IEEE Press, New York (2013)


Acknowledgements

The publication was supported by the Ministry of Education and Science of the Russian Federation (Agreement No. 02.a03.21.0008).

Funding

There is no funding.

Author information


Contributions

All the authors contributed equally to the writing of this paper. All the authors read and approved the final manuscript.

Corresponding author

Correspondence to Hidayat Ullah.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Ullah, H., Adil Khan, M. & Pečarić, J. New bounds for soft margin estimator via concavity of Gaussian weighting function. Adv Differ Equ 2020, 644 (2020). https://doi.org/10.1186/s13662-020-03103-z
