Logarithmically concave function
In convex analysis, a non-negative function f : R^n → R+ is logarithmically concave (or log-concave for short) if its domain is a convex set, and if it satisfies the inequality
f(θx + (1 − θ)y) ≥ f(x)^θ f(y)^(1−θ)
for all x,y ∈ dom f and 0 < θ < 1. If f is strictly positive, this is equivalent to saying that the logarithm of the function, log ∘ f, is concave; that is,
log f(θx + (1 − θ)y) ≥ θ log f(x) + (1 − θ) log f(y)
for all x,y ∈ dom f and 0 < θ < 1.
Examples of log-concave functions are the 0-1 indicator functions of convex sets (which require the more flexible definition), and the Gaussian function.
Similarly, a function is log-convex if it satisfies the reverse inequality
f(θx + (1 − θ)y) ≤ f(x)^θ f(y)^(1−θ)
for all x,y ∈ dom f and 0 < θ < 1.
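As a quick illustration, the following sketch checks the defining inequality numerically for the Gaussian example at randomly drawn points. It assumes NumPy is available; the test points and tolerance are arbitrary choices made for illustration, not part of the definition.

```python
import numpy as np

# Gaussian example from the text: f(x) = exp(-x**2 / 2)
f = lambda x: np.exp(-x**2 / 2)

rng = np.random.default_rng(0)
for _ in range(10_000):
    x, y = rng.normal(size=2) * 3            # arbitrary test points
    theta = rng.uniform(0, 1)
    lhs = f(theta * x + (1 - theta) * y)
    rhs = f(x)**theta * f(y)**(1 - theta)
    # Defining inequality of log-concavity (up to floating-point tolerance)
    assert lhs >= rhs - 1e-12
print("defining inequality held at all sampled points")
```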
Properties
- A log-concave function is also quasi-concave. This follows from the fact that the logarithm is monotone, implying that the superlevel sets of this function are convex.[1]
- Every concave function that is nonnegative on its domain is log-concave. However, the reverse does not necessarily hold. An example is the Gaussian function f(x) = exp(−x²/2), which is log-concave since log f(x) = −x²/2 is a concave function of x. But f is not concave, since its second derivative is positive for |x| > 1:
  f″(x) = e^(−x²/2) (x² − 1) > 0 for |x| > 1.
- From the above two points, concavity ⇒ log-concavity ⇒ quasiconcavity.
- A twice differentiable, nonnegative function with a convex domain is log-concave if and only if, for all x satisfying f(x) > 0,
  f(x)∇²f(x) ⪯ ∇f(x)∇f(x)^T,[1]
  i.e. f(x)∇²f(x) − ∇f(x)∇f(x)^T is negative semi-definite. For functions of one variable, this condition simplifies to
  f(x)f″(x) ≤ (f′(x))².
  (A symbolic check of this criterion on the Gaussian example appears after this list.)
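The following sketch verifies both points symbolically for the Gaussian example: f itself is not concave (its second derivative changes sign), yet the one-dimensional criterion f f″ ≤ (f′)² holds everywhere. It assumes SymPy is available and is purely illustrative.

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(-x**2 / 2)                      # Gaussian example

# f is not concave: f''(x) = (x**2 - 1)*exp(-x**2/2) > 0 for |x| > 1
print(sp.simplify(sp.diff(f, x, 2)))

# One-dimensional log-concavity criterion: f*f'' <= (f')**2,
# i.e. (f')**2 - f*f'' >= 0; here the gap simplifies to exp(-x**2)
gap = sp.simplify(sp.diff(f, x)**2 - f * sp.diff(f, x, 2))
print(gap)
```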
Operations preserving log-concavity
- Products: The product of log-concave functions is also log-concave. Indeed, if f and g are log-concave functions, then log f and log g are concave by definition. Therefore
  log(f(x)g(x)) = log f(x) + log g(x)
  is concave, and hence also f g is log-concave.
- Marginals: if f(x,y) : R^(n+m) → R is log-concave, then
  g(x) = ∫ f(x,y) dy
  is log-concave (see Prékopa–Leindler inequality).
- This implies that convolution preserves log-concavity, since h(x,y) = f(x − y) g(y) is log-concave if f and g are log-concave, and therefore
  (f ∗ g)(x) = ∫ f(x − y) g(y) dy
  is log-concave (a numerical check appears after this list).
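As a sanity check of the convolution property, the sketch below numerically convolves two log-concave densities (standard normal and Laplace) on a grid and verifies that the discrete second differences of the log of the result are non-positive away from the truncated tails. The grid, the chosen densities, and the tolerance are arbitrary illustration choices; it assumes NumPy.

```python
import numpy as np

dx = 0.01
xs = np.arange(-8, 8 + dx, dx)

# Two log-concave densities: standard normal and Laplace
f = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)
g = 0.5 * np.exp(-np.abs(xs))

# Numerical convolution (f * g)(z) on the grid of sums z = x + y
h = np.convolve(f, g) * dx
zs = np.arange(h.size) * dx + 2 * xs[0]

# Discrete second differences of log(f * g) should be <= 0 (up to grid and
# rounding error) wherever the truncated tails do not distort the result.
core = np.abs(zs[1:-1]) < 6
second_diff = np.diff(np.log(h), 2)
print("max second difference of log(f*g):", second_diff[core].max())
assert second_diff[core].max() < 1e-6
```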
Log-concave distributions
Log-concave distributions are necessary for a number of algorithms, e.g. adaptive rejection sampling. Every distribution with a log-concave density is a maximum entropy probability distribution with specified mean μ and deviation risk measure D.[2] As it happens, many common probability distributions are log-concave. Some examples:[3]
- The normal distribution and multivariate normal distributions.
- The exponential distribution.
- The uniform distribution over any convex set.
- The logistic distribution.
- The extreme value distribution.
- The Laplace distribution.
- The chi distribution.
- The hyperbolic secant distribution.
- The Wishart distribution, where n >= p + 1.[4]
- The Dirichlet distribution, where all parameters are >= 1.[4]
- The gamma distribution if the shape parameter is >= 1.
- The chi-square distribution if the number of degrees of freedom is >= 2.
- The beta distribution if both shape parameters are >= 1.
- The Weibull distribution if the shape parameter is >= 1.
Note that all of the parameter restrictions have the same basic source: the exponent of a non-negative quantity must be non-negative in order for the function to be log-concave.
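The gamma distribution illustrates where such a restriction comes from. The sketch below (assuming SymPy; the normalizing constant is omitted since it does not affect concavity of the log-density) computes the second derivative of the log-density, which is non-positive for all x > 0 exactly when the shape parameter k ≥ 1.

```python
import sympy as sp

x, k, theta = sp.symbols('x k theta', positive=True)

# Gamma log-density (shape k, scale theta), up to an additive constant
log_density = (k - 1) * sp.log(x) - x / theta

# Second derivative: -(k - 1)/x**2, non-positive for all x > 0 iff k >= 1
print(sp.diff(log_density, x, 2))
```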
The following distributions are non-log-concave for all parameters:
- The Student's t-distribution.
- The Cauchy distribution.
- The Pareto distribution.
- The log-normal distribution.
- The F-distribution.
Note that the cumulative distribution function (CDF) of every log-concave distribution is also log-concave. However, some non-log-concave distributions also have log-concave CDFs (a numerical check of one such case appears after the following list):
- The log-normal distribution.
- The Pareto distribution.
- The Weibull distribution when the shape parameter < 1.
- The gamma distribution when the shape parameter < 1.
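For instance, the Weibull density with shape parameter k < 1 is not log-concave, but its CDF F(x) = 1 − exp(−x^k) still is. A minimal numerical sketch of this (assuming NumPy; unit scale, k = 0.5, and the grid are arbitrary illustration choices):

```python
import numpy as np

k = 0.5                       # Weibull shape < 1: the density is not log-concave
dx = 0.01
x = np.arange(dx, 10, dx)

cdf = -np.expm1(-x**k)        # F(x) = 1 - exp(-x**k), unit scale
second_diff = np.diff(np.log(cdf), 2)
print("max second difference of log F:", second_diff.max())   # <= 0 up to rounding
assert second_diff.max() < 1e-9
```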
The following are among the properties of log-concave distributions:
- If a density is log-concave, so is its cumulative distribution function (CDF).
- If a multivariate density is log-concave, so is the marginal density over any subset of variables.
- The sum of two independent log-concave random variables is log-concave. This follows from the fact that the convolution of two log-concave functions is log-concave.
- The product of two log-concave functions is log-concave. This means that joint densities formed by multiplying two probability densities (e.g. the normal-gamma distribution, which always has a shape parameter >= 1) will be log-concave. This property is heavily used in general-purpose Gibbs sampling programs such as BUGS and JAGS, which are thereby able to use adaptive rejection sampling over a wide variety of conditional distributions derived from the product of other distributions.
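The reason log-concavity matters for adaptive rejection sampling is that any tangent to the log-density lies above it, so a piecewise-linear upper hull of tangents gives a piecewise-exponential envelope that can be sampled exactly. The sketch below is a stripped-down, non-adaptive version of that idea (two fixed tangent points, standard normal target); it assumes NumPy and is not the BUGS/JAGS implementation, only an illustration of the envelope construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: standard normal, described by its log-density and that log-density's derivative
log_f = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
dlog_f = lambda x: -x

xl, xr = -1.0, 1.0                       # fixed tangent points on each side of the mode
sl, sr = dlog_f(xl), dlog_f(xr)          # tangent slopes: sl > 0, sr < 0
# Tangents t_i(x) = log_f(x_i) + s_i*(x - x_i) intersect at x_star; concavity of
# log_f guarantees log_f(x) <= min(t_l(x), t_r(x)) everywhere.
x_star = (log_f(xr) - log_f(xl) + sl * xl - sr * xr) / (sl - sr)
t_star = log_f(xl) + sl * (x_star - xl)

# Masses of the two exponential pieces of the envelope exp(t(x))
mass_left = np.exp(t_star) / sl          # integral over (-inf, x_star]
mass_right = np.exp(t_star) / (-sr)      # integral over [x_star, +inf)

def sample(n):
    out = []
    while len(out) < n:
        if rng.uniform() < mass_left / (mass_left + mass_right):
            x = x_star + np.log(rng.uniform()) / sl        # inverse-CDF draw, left piece
            envelope = log_f(xl) + sl * (x - xl)
        else:
            x = x_star + np.log(rng.uniform()) / sr        # inverse-CDF draw, right piece
            envelope = log_f(xr) + sr * (x - xr)
        # Accept with probability f(x) / exp(envelope(x)) <= 1
        if np.log(rng.uniform()) < log_f(x) - envelope:
            out.append(x)
    return np.array(out)

draws = sample(50_000)
print("sample mean and std:", draws.mean(), draws.std())   # close to 0 and 1
```

The adaptive version used in practice refines this envelope with additional tangents as points are evaluated, which is what makes the method efficient for the conditional densities mentioned above.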
Notes
- Boyd, Stephen; Vandenberghe, Lieven (2004). "Log-concave and log-convex functions". Convex Optimization. Cambridge University Press. pp. 104–108. ISBN 0-521-83378-7.
- Grechuk, B.; Molyboha, A.; Zabarankin, M. (2009). "Maximum Entropy Principle with General Deviation Measures". Mathematics of Operations Research. 34 (2): 445–467. doi:10.1287/moor.1090.0377.
- See Bagnoli, Mark; Bergstrom, Ted (2005). "Log-Concave Probability and Its Applications" (PDF). Economic Theory. 26 (2): 445–469. doi:10.1007/s00199-004-0514-4.
- Prékopa, András (1971). "Logarithmic concave measures with application to stochastic programming". Acta Scientiarum Mathematicarum. 32: 301–316.