# Solution to a Function Equation and Divergence Measures

Chuan-Lei Dong and Jin Liang

**2011**:617564

https://doi.org/10.1155/2011/617564

© C.-L. Dong and J. Liang. 2011

**Received: **1 January 2011

**Accepted: **11 February 2011

**Published: **10 March 2011

## 1. Introduction

As early as 1952, Chernoff [1] used the α-divergence to evaluate classification errors. Since then, the study of various divergence measures has attracted many researchers. It is now known that the Csiszár f-divergence is the unique class of divergences having information monotonicity, from which the dual geometrical structure with the Fisher metric is derived, while the Bregman divergence is another class of divergences, one that gives a dually flat geometrical structure different, in general, from the f-structure. Indeed, a divergence measure between two probability distributions or positive measures has proved a useful tool in optimization, signal processing, machine learning, and statistical inference. For more information on the theory of divergence measures, see, for example, [2–5] and the references therein.

In this paper, we show that the solution of the function equation under consideration is the solution of a linear homogeneous differential equation with constant coefficients. Moreover, new results on divergence measures are given.
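For context on this conclusion, recall the standard structure of such equations (a textbook fact, recorded here for the reader's convenience rather than as part of the paper's argument):

```latex
% A linear homogeneous differential equation with constant coefficients
a_n y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = 0
% has characteristic polynomial
a_n \lambda^n + a_{n-1} \lambda^{n-1} + \cdots + a_1 \lambda + a_0 = 0,
% and its solution space is spanned by the functions
x^k e^{\lambda_j x}, \qquad 0 \le k < m_j,
% where \lambda_j runs over the characteristic roots and m_j is the
% multiplicity of \lambda_j (complex roots a \pm b i contribute the real
% solutions x^k e^{a x}\cos(b x) and x^k e^{a x}\sin(b x)).
```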

Throughout this paper, we let ℝ be the set of real numbers and work on a fixed convex set.

Basic notation:

- ψ: a strictly convex and twice differentiable function;
- g: a differentiable injective map;
- B_ψ: the general vector Bregman divergence;
- f: a strictly convex, twice-continuously differentiable function satisfying f(1) = 0;
- D_f: the vector f-divergence.
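In the standard notation of, e.g., [2, 4], the two divergence classes involved take the following form (the generator symbols ψ and f below follow common usage):

```latex
% General Bregman divergence generated by a strictly convex, twice
% differentiable \psi, for points x, y in a convex set:
B_\psi(x, y) = \psi(x) - \psi(y) - \langle \nabla\psi(y),\, x - y \rangle

% Csiszar f-divergence generated by a strictly convex, twice-continuously
% differentiable f with f(1) = 0, for positive measures p, q:
D_f(p \,\|\, q) = \sum_i q_i \, f\!\left( \frac{p_i}{q_i} \right)
```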

then we say that the divergence is in the intersection of the f-divergence class and the general Bregman divergence class.
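The classical example of a divergence lying in this intersection is the Kullback–Leibler divergence: it is the f-divergence generated by f(t) = t log t and, at the same time, the Bregman divergence generated by the negative entropy ψ(x) = Σᵢ xᵢ log xᵢ. A small numeric sketch of this standard fact (the function names are ours, not the paper's):

```python
import numpy as np

def f_divergence(p, q, f):
    """Csiszar f-divergence  D_f(p || q) = sum_i q_i * f(p_i / q_i)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * f(p / q)))

def bregman_divergence(p, q, psi, grad_psi):
    """Bregman divergence  B_psi(p, q) = psi(p) - psi(q) - <grad psi(q), p - q>."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(psi(p) - psi(q) - np.dot(grad_psi(q), p - q))

# KL divergence realized as an f-divergence: generator f(t) = t log t.
kl_as_f = lambda p, q: f_divergence(p, q, lambda t: t * np.log(t))

# KL divergence realized as a Bregman divergence: psi = negative entropy.
neg_entropy = lambda x: np.sum(x * np.log(x))
grad_neg_entropy = lambda x: np.log(x) + 1.0
kl_as_bregman = lambda p, q: bregman_divergence(p, q, neg_entropy, grad_neg_entropy)

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])

# For probability vectors the two constructions give the same number.
assert abs(kl_as_f(p, q) - kl_as_bregman(p, q)) < 1e-12
```

Note that the agreement uses Σᵢ pᵢ = Σᵢ qᵢ = 1; for general positive measures the two constructions differ by the linear term Σᵢ (pᵢ − qᵢ).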

For more information on some basic concepts of divergence measures, we refer the reader to, for example, [2–5] and references therein.

## 2. Main Results

Theorem 2.1.

Proof.

The proof is then complete.

Theorem 2.2.

Proof.

Thus, a modification of Theorem 2.1 implies the conclusion.

Moreover, it is not hard to deduce the following theorem.

Theorem 2.3.

where the function involved is a strictly monotone, twice-continuously differentiable function. Then the divergence is the f-divergence, or the vector f-divergence, times a positive constant c.
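The positive constant c in the conclusion is natural: replacing the generator f by c·f rescales the f-divergence by exactly c, so an f-divergence is only determined up to this equivalence. A minimal numeric check (the generator and the vectors here are chosen purely for illustration):

```python
import numpy as np

def f_divergence(p, q, f):
    """Csiszar f-divergence  D_f(p || q) = sum_i q_i * f(p_i / q_i)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * f(p / q)))

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])
f = lambda t: t * np.log(t)   # generator of the KL divergence
c = 3.7                       # an arbitrary positive constant

# Scaling the generator by c scales the divergence by the same c.
assert abs(f_divergence(p, q, lambda t: c * f(t)) - c * f_divergence(p, q, f)) < 1e-12
```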

## Declarations

### Acknowledgments

This work was supported partially by the NSF of China and the Specialized Research Fund for the Doctoral Program of Higher Education of China.

## References

- Chernoff H: **A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations.** *Annals of Mathematical Statistics* 1952, **23**:493-507. doi:10.1214/aoms/1177729330
- Amari S: **Information geometry and its applications: convex function and dually flat manifold.** In *Emerging Trends in Visual Computing*, Lecture Notes in Computer Science, Volume 5416. Edited by Nielsen F. Springer, Berlin, Germany; 2009:75-102. doi:10.1007/978-3-642-00826-9_4
- Brègman LM: **The relaxation method of finding a common point of convex sets and its application to the solution of problems in convex programming.** *Computational Mathematics and Mathematical Physics* 1967, **7**:200-217.
- Cichocki A, Zdunek R, Phan AH, Amari S: *Non-Negative Matrix and Tensor Factorizations: Applications to Exploratory Multi-Way Data Analysis and Blind Source Separation*. Wiley, New York, NY, USA; 2009.
- Csiszár I: **Why least squares and maximum entropy? An axiomatic approach to inference for linear inverse problems.** *The Annals of Statistics* 1991, **19**(4):2032-2066. doi:10.1214/aos/1176348385

## Copyright

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.