Likelihood principle
In statistics, the likelihood principle is the proposition that, given a statistical model, all the evidence in a sample relevant to model parameters is contained in the likelihood function.
A likelihood function arises from a probability density function considered as a function of its distributional parameterization argument. For example, consider a model which gives the probability density function ƒX(x | θ) of observable random variable X as a function of a parameter θ. Then for a specific value x of X, the function ℒ(θ | x) = ƒX(x | θ) is a likelihood function of θ: it gives a measure of how "likely" any particular value of θ is, if we know that X has the value x. The density function may be a density with respect to counting measure, i.e. a probability mass function.
Two likelihood functions are equivalent if one is a scalar multiple of the other.[a] The likelihood principle is this: all information from the data that is relevant to inferences about the value of the model parameters is in the equivalence class to which the likelihood function belongs. The strong likelihood principle applies this same criterion to cases such as sequential experiments where the sample of data that is available results from applying a stopping rule to the observations earlier in the experiment.[1]
Example
Suppose
- X is the number of successes in twelve independent Bernoulli trials with probability θ of success on each trial, and
- Y is the number of independent Bernoulli trials needed to get three successes, again with probability θ (= 1/2 for a coin-toss) of success on each trial.
Then the observation that X = 3 induces the likelihood function

\mathcal{L}(\theta \mid X = 3) = \binom{12}{3}\,\theta^3 (1 - \theta)^9 = 220\,\theta^3 (1 - \theta)^9 ,

while the observation that Y = 12 induces the likelihood function

\mathcal{L}(\theta \mid Y = 12) = \binom{11}{2}\,\theta^3 (1 - \theta)^9 = 55\,\theta^3 (1 - \theta)^9 .
The likelihood principle says that, as the data are the same in both cases, the inferences drawn about the value of θ should also be the same. In addition, all the inferential content in the data about the value of θ is contained in the two likelihoods, and is the same if they are proportional to one another. This is the case in the above example, reflecting the fact that the difference between observing X = 3 and observing Y = 12 lies not in the actual data, but merely in the design of the experiment. Specifically, in one case, one has decided in advance to try twelve times; in the other, to keep trying until three successes are observed. The inference about θ should be the same, and this is reflected in the fact that the two likelihoods are proportional to each other.
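The proportionality of the two likelihoods can be checked directly. The following is a minimal sketch (in Python; the function names are illustrative, not from any particular library) that evaluates both likelihoods on a few values of θ and shows that their ratio is the constant 220/55 = 4:

```python
# Sketch: the binomial likelihood for X = 3 successes in n = 12 trials and the
# negative-binomial likelihood for Y = 12 trials to reach r = 3 successes are
# proportional as functions of θ.
from math import comb

def binomial_likelihood(theta, n=12, x=3):
    # P(X = x | θ) with the number of trials fixed in advance
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

def negative_binomial_likelihood(theta, r=3, y=12):
    # P(Y = y | θ): the r-th success occurs on trial y
    return comb(y - 1, r - 1) * theta**r * (1 - theta)**(y - r)

for theta in (0.1, 0.25, 0.5, 0.75):
    ratio = binomial_likelihood(theta) / negative_binomial_likelihood(theta)
    print(theta, ratio)  # the ratio is 220/55 = 4 for every θ
```

Because the constant factor 4 does not depend on θ, it drops out of any likelihood-based comparison, which is why the two designs are equivalent under the likelihood principle.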
This is not always the case, however. The use of frequentist methods involving p-values leads to different inferences for the two cases above,[2] showing that the outcome of frequentist methods depends on the experimental procedure, and thus violates the likelihood principle.
The law of likelihood
A related concept is the law of likelihood, the notion that the extent to which the evidence supports one parameter value or hypothesis against another is indicated by the ratio of their likelihoods, their likelihood ratio. That is,

\Lambda = \frac{\mathcal{L}(a \mid x)}{\mathcal{L}(b \mid x)} = \frac{P(x \mid a)}{P(x \mid b)}

is the degree to which the observation x supports parameter value or hypothesis a against b. If this ratio is 1, the evidence is indifferent; if greater than 1, the evidence supports the value a against b; or if less, then vice versa.
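As a small illustrative sketch (the parameter values 0.25 and 0.5 are chosen only for illustration), the likelihood ratio in the binomial example above, given 3 successes in 12 trials, can be computed directly:

```python
# Sketch: likelihood ratio comparing two candidate values of θ,
# given 3 successes in 12 trials (illustrative numbers only).
from math import comb

def likelihood(theta, n=12, x=3):
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

a, b = 0.25, 0.5
print(likelihood(a) / likelihood(b))  # ≈ 4.8 > 1: the data support θ = 0.25 over θ = 0.5
```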
In Bayesian statistics, this ratio is known as the Bayes factor, and Bayes' rule can be seen as the application of the law of likelihood to inference.
In frequentist inference, the likelihood ratio is used in the likelihood-ratio test, but other non-likelihood tests are used as well. The Neyman–Pearson lemma states the likelihood-ratio test is the most powerful test for comparing two simple hypotheses at a given significance level, which gives a frequentist justification for the law of likelihood.
Combining the likelihood principle with the law of likelihood yields the consequence that the parameter value which maximizes the likelihood function is the value which is most strongly supported by the evidence. This is the basis for the widely used method of maximum likelihood.
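Continuing the binomial example, the likelihood θ³(1 − θ)⁹ is maximized at θ = 3/12 = 0.25. A minimal numerical sketch (a crude grid search, not an efficient estimator) confirms this:

```python
# Sketch: the likelihood from the example above peaks at θ = x/n = 3/12 = 0.25,
# the usual binomial maximum-likelihood estimate.
from math import comb

def likelihood(theta, n=12, x=3):
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

grid = [k / 1000 for k in range(1, 1000)]   # θ values in (0, 1)
theta_hat = max(grid, key=likelihood)
print(theta_hat)                            # 0.25
```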
History
The likelihood principle was first identified by that name in print in 1962 (Barnard et al., Birnbaum, and Savage et al.), but arguments for the same principle, unnamed, and the use of the principle in applications go back to the works of R.A. Fisher in the 1920s. The law of likelihood was identified by that name by I. Hacking (1965). More recently the likelihood principle as a general principle of inference has been championed by A. W. F. Edwards. The likelihood principle has been applied to the philosophy of science by R. Royall.[3]
Birnbaum proved that the likelihood principle follows from two more primitive and seemingly reasonable principles, the conditionality principle and the sufficiency principle:
- The conditionality principle says that if an experiment is chosen by a random process independent of the states of nature θ, then only the experiment actually performed is relevant to inferences about θ.
- The sufficiency principle says that if T(X) is a sufficient statistic for θ, and if in two experiments with data x₁ and x₂ we have T(x₁) = T(x₂), then the evidence about θ given by the two experiments is the same (a sketch illustrating sufficiency for Bernoulli trials follows this list).
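As a simple illustration of the sufficiency principle (a sketch only, not part of Birnbaum's argument): for independent Bernoulli trials the number of successes is sufficient for θ, so two samples with the same count, in any order, yield identical likelihood functions. The sample values below are made up for illustration.

```python
# Sketch: for independent Bernoulli trials the number of successes T(x) = sum(x)
# is a sufficient statistic for θ, so two 0/1 samples with the same count give
# the same likelihood function.
def bernoulli_likelihood(theta, sample):
    prob = 1.0
    for xi in sample:                      # θ for a success, 1 − θ for a failure
        prob *= theta if xi == 1 else (1 - theta)
    return prob

x1 = [1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0]   # 3 successes in 12 trials
x2 = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1]   # different order, same T(x) = 3

for theta in (0.2, 0.5, 0.8):
    print(bernoulli_likelihood(theta, x1), bernoulli_likelihood(theta, x2))
    # both equal θ^3 (1 − θ)^9, up to floating-point rounding
```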
Arguments for and against
Some widely used methods of conventional statistics, for example many significance tests, are not consistent with the likelihood principle.
Let us briefly consider some of the arguments for and against the likelihood principle.
The original Birnbaum argument
Birnbaum's proof of the likelihood principle has been disputed by philosophers of science, including Deborah Mayo[4][5] and statisticians including Michael Evans.[6] On the other hand, a new proof of the likelihood principle has been provided by Greg Gandenberger that addresses some of the counterarguments to the original proof.[7]
Experimental design arguments on the likelihood principle
Unrealized events play a role in some common statistical methods. For example, the result of a significance test depends on the p-value, the probability of a result as extreme as or more extreme than the observation, and that probability may depend on the design of the experiment. To the extent that the likelihood principle is accepted, such methods are therefore rejected.
Some classical significance tests are not based on the likelihood. The following are a simple and a more complicated example of such tests, both using a commonly cited scenario known as the optional stopping problem.
- Example 1 – simple version
Suppose I tell you that I tossed a coin 12 times and in the process observed 3 heads. You might make some inference about the probability of heads and whether the coin was fair.
Suppose now I tell you that I tossed the coin until I observed 3 heads, and that I tossed it 12 times. Will you now make a different inference?
The likelihood function is the same in both cases: it is proportional to

\theta^3 (1 - \theta)^9 .
So according to the likelihood principle, in either case the inference should be the same.
- Example 2 – a more elaborated version of the same statistics
Suppose a number of scientists are assessing the probability of a certain outcome (which we shall call 'success') in experimental trials. Conventional wisdom suggests that if there is no bias towards success or failure then the success probability would be one half. Adam, a scientist, conducted 12 trials, obtaining 3 successes and 9 failures. One of those successes was the 12th and last observation. Then Adam left the lab.
Bill, a colleague in the same lab, continued Adam's work and published Adam's results, along with a significance test. He tested the null hypothesis H0 that p, the success probability, is equal to one half, against the one-sided alternative p < 0.5. The probability under H0 of observing 3 or fewer successes in 12 trials (i.e. a result at least as extreme as the one observed) is

P(X \le 3 \mid H_0) = \sum_{k=0}^{3} \binom{12}{k} \left(\tfrac{1}{2}\right)^{12} ,

which is 299/4096 ≈ 7.3%. Thus the null hypothesis is not rejected at the 5% significance level.
Charlotte, another scientist, reads Bill's paper and writes a letter, saying that it is possible that Adam kept trying until he obtained 3 successes, in which case the probability of needing to conduct 12 or more experiments is given by

P(Y \ge 12 \mid H_0) = 1 - \sum_{k=3}^{11} \binom{k-1}{2} \left(\tfrac{1}{2}\right)^{k} ,

which is 134/4096 ≈ 3.27%. Now the result is statistically significant at the 5% level. Note that there is no contradiction between these two analyses; both computations are correct.
To these scientists, whether a result is significant or not depends on the design of the experiment, not on the likelihood (in the sense of the likelihood function) of the parameter value being 1/2 .
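The two p-values above can be reproduced directly. The following is a minimal sketch; it assumes only the binomial and negative-binomial sampling models described in the example.

```python
# Sketch reproducing the two p-values under H0: θ = 1/2.
from math import comb

# Bill's design: n = 12 trials fixed in advance; p-value is P(X ≤ 3 | H0).
p_fixed_n = sum(comb(12, k) for k in range(4)) / 2**12
print(p_fixed_n)       # 299/4096 ≈ 0.073

# Charlotte's reading: stop after 3 successes; p-value is P(Y ≥ 12 | H0),
# i.e. the probability of at most 2 successes in the first 11 trials.
p_stop_at_3 = sum(comb(11, k) for k in range(3)) / 2**11
print(p_stop_at_3)     # 67/2048 = 134/4096 ≈ 0.0327
```

The same data thus yield different p-values under the two stopping rules, even though the likelihood function is identical in both cases.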
- Summary of the illustrated issues
Results of this kind are considered by some to be arguments against the likelihood principle; for others they exemplify its value and are an argument against significance tests.
Similar themes appear when comparing Fisher's exact test with Pearson's chi-squared test.
The voltmeter story
An argument in favor of the likelihood principle is given by Edwards in his book Likelihood. He cites the following story from J.W. Pratt, slightly condensed here. Note that the likelihood function depends only on what actually happened, and not on what could have happened.
- An engineer draws a random sample of electron tubes and measures their voltages. The measurements range from 75 to 99 Volts. A statistician computes the sample mean and a confidence interval for the true mean. Later the statistician discovers that the voltmeter reads only as far as 100 Volts, so technically, the population appears to be “censored”. If the statistician is orthodox this necessitates a new analysis. However, the engineer says he has another meter reading to 1000 Volts, which he would have used if any voltage had been over 100. This is a relief to the statistician, because it means the population was effectively uncensored after all. But later, the statistician ascertains that the second meter was not working at the time of the measurements. The engineer informs the statistician that he would not have held up the original measurements until the second meter was fixed, and the statistician informs him that new measurements are required. The engineer is astounded. “Next you'll be asking about my oscilloscope!”
- Throwback to Example 2 in the prior section
This story can be translated to Adam's stopping rule above, as follows: Adam stopped immediately after 3 successes, because his boss Bill had instructed him to do so. After the publication of the statistical analysis by Bill, Adam realizes that he has missed a later instruction from Bill to instead conduct 12 trials, and that Bill's paper is based on this second instruction. Adam is very glad that he got his 3 successes after exactly 12 trials, and explains to his friend Charlotte that by coincidence he executed the second instruction. Later, Adam is astonished to hear about Charlotte's letter, explaining that now the result is significant.
Notes
- [a] Geometrically, if they occupy the same point in projective space.
References
- Dodge, Y. (2003) The Oxford Dictionary of Statistical Terms. OUP. ISBN 0-19-920613-9
- Vidakovic, Brani. "The Likelihood Principle" (PDF). H. Milton Stewart School of Industrial & Systems Engineering. Georgia Tech. Retrieved 21 October 2017.
- Royall, Richard (1997). Statistical Evidence: A likelihood paradigm. Boca Raton, FL: Chapman and Hall. ISBN 0-412-04411-0.
- Mayo, D. (2010) "An Error in the Argument from Conditionality and Sufficiency to the Likelihood Principle" in Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability and the Objectivity and Rationality of Science (D Mayo and A. Spanos eds.), Cambridge: Cambridge University Press: 305-314.
- Mayo, Deborah (2014), "On the Birnbaum Argument for the Strong Likelihood Principle", Statistical Science, 29: 227-266 (with Discussion).
- Evans, Michael (2013) What does the proof of Birnbaum's theorem prove?
- Gandenberger, Greg (2014), "A new proof of the likelihood principle", British Journal for the Philosophy of Science, 66: 475-503; doi:10.1093/bjps/axt039.
- Barnard, G.A.; G.M. Jenkins; C.B. Winsten (1962). "Likelihood Inference and Time Series". Journal of the Royal Statistical Society, Series A. 125 (3): 321–372. doi:10.2307/2982406. ISSN 0035-9238. JSTOR 2982406.
- Berger, J.O.; Wolpert, R.L. (1988). The Likelihood Principle (2nd ed.). Haywood, CA: The Institute of Mathematical Statistics. ISBN 0-940600-13-7.
- Birnbaum, Allan (1962). "On the foundations of statistical inference". Journal of the American Statistical Association. 57 (298): 269–326. doi:10.2307/2281640. ISSN 0162-1459. JSTOR 2281640. MR 0138176. (With discussion.)
- Edwards, Anthony W.F. (1972). Likelihood (1st ed.). Cambridge: Cambridge University Press.
- Edwards, Anthony W.F. (1992). Likelihood (2nd ed.). Baltimore: Johns Hopkins University Press. ISBN 0-8018-4445-2.
- Edwards, Anthony W.F. (1974). "The history of likelihood". International Statistical Review. 42 (1): 9–15. doi:10.2307/1402681. ISSN 0306-7734. JSTOR 1402681. MR 0353514.
- Fisher, Ronald A. (1922). "On the Mathematical Foundations of Theoretical Statistics" (PDF fulltext). Philosophical Transactions of the Royal Society A. 222 (594–604): 326. Bibcode:1922RSPTA.222..309F. doi:10.1098/rsta.1922.0009. Retrieved 2008-12-28.
- Hacking, Ian (1965). Logic of Statistical Inference. Cambridge: Cambridge University Press. ISBN 0-521-05165-7.
- Jeffreys, Harold (1961). The Theory of Probability. The Oxford University Press.
- Mayo, Deborah G. (2010), "An Error in the Argument from Conditionality and Sufficiency to the Likelihood Principle" (PDF), in Mayo, D; Spanos, A (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability and the Objectivity and Rationality of Science, Cambridge UK: Cambridge University Press, pp. 305–314, ISBN 9780521180252.
- Royall, Richard M. (1997). Statistical Evidence: A Likelihood Paradigm. London: Chapman & Hall. ISBN 0-412-04411-0.
- Savage, Leonard J.; et al. (1962). The Foundations of Statistical Inference. London: Methuen.
External links
- Anthony W.F. Edwards. "Likelihood".
- Jeff Miller. Earliest Known Uses of Some of the Words of Mathematics (L)
- John Aldrich. Likelihood and Probability in R. A. Fisher’s Statistical Methods for Research Workers