Cook's distance
In statistics, Cook's distance or Cook's D is a commonly used estimate of the influence of a data point when performing a least-squares regression analysis.[1] In a practical ordinary least squares analysis, Cook's distance can be used in several ways: to indicate influential data points that are particularly worth checking for validity; or to indicate regions of the design space where it would be good to be able to obtain more data points. It is named after the American statistician R. Dennis Cook, who introduced the concept in 1977.[2][3]
Definition
Data points with large residuals (outliers) and/or high leverage may distort the outcome and accuracy of a regression. Cook's distance measures the effect of deleting a given observation. Points with a large Cook's distance are considered to merit closer examination in the analysis.
For the algebraic expression, first define the linear regression model

$$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon},$$

where $\boldsymbol{\varepsilon} \sim \mathcal{N}(0, \sigma^2 \mathbf{I})$ is the error term, $\boldsymbol{\beta} = [\beta_0 \; \beta_1 \; \dots \; \beta_{p-1}]^{\mathsf{T}}$ is the coefficient matrix, $p$ is the number of covariates or predictors for each observation, and $\mathbf{X}$ is the $n \times p$ design matrix including a constant. The least squares estimator then is $\mathbf{b} = (\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y}$, and consequently the fitted (predicted) values for the mean of $\mathbf{y}$ are

$$\hat{\mathbf{y}} = \mathbf{X}\mathbf{b} = \mathbf{X}(\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y} = \mathbf{H}\mathbf{y},$$

where $\mathbf{H} \equiv \mathbf{X}(\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}$ is the projection matrix (or hat matrix). The $i$-th diagonal element of $\mathbf{H}$, given by $h_{ii} = \mathbf{x}_i^{\mathsf{T}}(\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{x}_i$,[4] is known as the leverage of the $i$-th observation. Similarly, the $i$-th element of the residual vector $\mathbf{e} = \mathbf{y} - \hat{\mathbf{y}} = (\mathbf{I} - \mathbf{H})\mathbf{y}$ is denoted by $e_i$.

Cook's distance $D_i$ of observation $i$ (for $i = 1, \dots, n$) is defined as the sum of all the changes in the regression model when observation $i$ is removed from it:[5]

$$D_i = \frac{\sum_{j=1}^{n} \left(\hat{y}_j - \hat{y}_{j(i)}\right)^2}{p s^2},$$

where $\hat{y}_{j(i)}$ is the fitted response value for observation $j$ obtained when excluding observation $i$, and $s^2 = \frac{\mathbf{e}^{\mathsf{T}}\mathbf{e}}{n - p}$ is the mean squared error of the regression model.[6]

Equivalently, it can be expressed using the leverage[5] ($h_{ii}$):

$$D_i = \frac{e_i^2}{p s^2}\left[\frac{h_{ii}}{(1 - h_{ii})^2}\right].$$
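As an illustration (not drawn from the cited sources; the data and variable names are invented for this example), the following NumPy sketch computes $D_i$ on simulated data both from the deletion definition and from the closed-form leverage expression above, and checks that the two agree.

```python
# Compute Cook's distance two ways: by refitting with each observation deleted,
# and by the closed-form leverage expression, then verify that they agree.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3                                   # n observations, p columns of X (incl. constant)
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

b = np.linalg.lstsq(X, y, rcond=None)[0]       # least squares estimate b
y_hat = X @ b                                  # fitted values
e = y - y_hat                                  # residuals e_i
s2 = e @ e / (n - p)                           # mean squared error s^2
H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix H
h = np.diag(H)                                 # leverages h_ii

# Closed form: D_i = e_i^2 / (p s^2) * h_ii / (1 - h_ii)^2
D_formula = e**2 / (p * s2) * h / (1 - h) ** 2

# Deletion definition: refit without observation i and compare fitted values
D_deletion = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    b_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    D_deletion[i] = np.sum((y_hat - X @ b_i) ** 2) / (p * s2)

assert np.allclose(D_formula, D_deletion)      # the two formulations agree
```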
Detecting highly influential observations
There are different opinions regarding what cut-off values to use for spotting highly influential points. Since Cook's distance is in the metric of an F distribution with $p$ and $n - p$ (as defined for the design matrix $\mathbf{X}$ above) degrees of freedom, the median point (i.e., $F_{0.5}(p,\, n-p)$) can be used as a cut-off.[7] Since this value is close to 1 for large $n$, a simple operational guideline of $D_i > 1$ has been suggested.[8] Note that the Cook's distance measure does not always correctly identify influential observations.[9]
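Continuing the NumPy sketch from the Definition section (again purely illustrative, reusing the arrays defined there), both cut-offs can be applied as follows; `scipy.stats.f.ppf` gives the median of the $F(p,\, n-p)$ distribution.

```python
# Flag observations whose Cook's distance exceeds either cut-off discussed above.
from scipy.stats import f

cutoff_median = f.ppf(0.5, p, n - p)                 # median of F(p, n - p)
influential_median = np.where(D_formula > cutoff_median)[0]
influential_simple = np.where(D_formula > 1.0)[0]    # simple guideline D_i > 1

print(f"F median cut-off = {cutoff_median:.3f}")
print("Flagged by median-of-F rule:", influential_median)
print("Flagged by D_i > 1 rule:", influential_simple)
```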
Relationship to other influence measures (and interpretation)
$D_i$ can be expressed using the leverage[5] ($0 \le h_{ii} \le 1$) and the square of the internally Studentized residual ($0 \le t_i^2$), as follows:

$$D_i = \frac{e_i^2}{p s^2} \cdot \frac{h_{ii}}{(1 - h_{ii})^2} = \frac{t_i^2}{p} \cdot \frac{h_{ii}}{1 - h_{ii}}.$$
The benefit in the last formulation is that it clearly shows the relationship of $t_i^2$ and $h_{ii}$ to $D_i$ (while $p$ and $n$ are the same for all observations). If $t_i^2$ is large then it (for non-extreme values of $h_{ii}$) will increase $D_i$. If $h_{ii}$ is close to 0 then $D_i$ will be small, while if $h_{ii}$ is close to 1 then $D_i$ will become very large (as long as $t_i^2 > 0$, i.e., the observation $i$ is not exactly on the regression line that was fitted without observation $i$).
$D_i$ is related to DFFITS through the following relationship (here $t_{(i)}$ denotes the externally studentized residual and $s_{(i)}^2$ the mean squared error of the regression model fitted with observation $i$ excluded):

$$D_i = \frac{t_i^2}{p} \cdot \frac{h_{ii}}{1 - h_{ii}} = \frac{1}{p} \cdot \frac{s_{(i)}^2}{s^2} \cdot t_{(i)}^2 \cdot \frac{h_{ii}}{1 - h_{ii}} = \frac{1}{p} \cdot \frac{s_{(i)}^2}{s^2} \cdot \text{DFFITS}_i^2.$$
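These identities can be checked numerically by continuing the NumPy sketch from the Definition section (again only an illustration; the deleted mean squared error $s_{(i)}^2$ is computed here from the standard deletion identity, which is not spelled out in the text above).

```python
# Verify the studentized-residual and DFFITS formulations of Cook's distance.
t_int = e / np.sqrt(s2 * (1 - h))                        # internally studentized residuals t_i
s2_del = ((n - p) * s2 - e**2 / (1 - h)) / (n - p - 1)   # deleted MSE s_(i)^2
t_ext = e / np.sqrt(s2_del * (1 - h))                    # externally studentized residuals t_(i)
dffits = t_ext * np.sqrt(h / (1 - h))                    # DFFITS_i

D_student = t_int**2 / p * h / (1 - h)                   # via internally studentized residuals
D_dffits = (s2_del / s2) * dffits**2 / p                 # via DFFITS

assert np.allclose(D_formula, D_student)
assert np.allclose(D_formula, D_dffits)
```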
can be interpreted as the distance one's estimates move within the confidence ellipsoid that represents a region of plausible values for the parameters. This is shown by an alternative but equivalent representation of Cook's distance in terms of changes to the estimates of the regression parameters between the cases, where the particular observation is either included or excluded from the regression analysis.
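Written out, this parameter-change representation takes the standard equivalent form below (with $\mathbf{b}_{(i)}$ denoting the least squares estimate computed with observation $i$ deleted), which makes the confidence-ellipsoid reading explicit:

$$D_i = \frac{\left(\mathbf{b} - \mathbf{b}_{(i)}\right)^{\mathsf{T}} \mathbf{X}^{\mathsf{T}}\mathbf{X} \left(\mathbf{b} - \mathbf{b}_{(i)}\right)}{p s^2}.$$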
Software implementations
Many statistical programs and packages, such as R and Python, include implementations of Cook's distance.
Language/Program | Function | Notes |
---|---|---|
R | `cooks.distance(model, ...)` | In the base `stats` package; see its documentation |
Python | `CooksDistance().fit(X, y)` | In the Yellowbrick library; see its documentation |
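A minimal usage sketch for the Python entry above might look as follows, assuming the `CooksDistance` visualizer comes from the Yellowbrick library (consistent with the `fit(X, y)` signature in the table); the data are simulated purely for illustration.

```python
# Usage sketch for Yellowbrick's CooksDistance visualizer (assumed from the table above).
import numpy as np
from yellowbrick.regressor import CooksDistance

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=100)

visualizer = CooksDistance()
visualizer.fit(X, y)    # computes the distances and draws a stem plot of D_i
visualizer.show()       # display the plot

# In R, the corresponding call is cooks.distance(model) for a fitted lm/glm object `model`.
```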
Extensions
The high-dimensional influence measure (HIM) is an alternative to Cook's distance for the case $p > n$ (i.e., more predictors than observations).[10] Whereas Cook's distance quantifies an individual observation's influence on the least squares regression coefficient estimate, HIM measures the influence of an observation on the marginal correlations.
References
- Mendenhall, William; Sincich, Terry (1996). A Second Course in Statistics: Regression Analysis (5th ed.). Upper Saddle River, NJ: Prentice-Hall. p. 422. ISBN 0-13-396821-9.
A measure of overall influence an outlying observation has on the estimated coefficients was proposed by R. D. Cook (1979). Cook's distance, Di, is calculated...
- Cook, R. Dennis (February 1977). "Detection of Influential Observations in Linear Regression". Technometrics. American Statistical Association. 19 (1): 15–18. doi:10.2307/1268249. JSTOR 1268249. MR 0436478.
- Cook, R. Dennis (March 1979). "Influential Observations in Linear Regression". Journal of the American Statistical Association. American Statistical Association. 74 (365): 169–174. doi:10.2307/2286747. hdl:11299/199280. JSTOR 2286747. MR 0529533.
- Hayashi, Fumio (2000). Econometrics. Princeton University Press. pp. 21–23. ISBN 1400823838.
- "Cook's Distance".
- "Statistics 512: Applied Linear Models" (PDF). Purdue University. Archived from the original (PDF) on 2016-11-30. Retrieved 2016-03-25.
- Bollen, Kenneth A.; Jackman, Robert W. (1990). "Regression Diagnostics: An Expository Treatment of Outliers and Influential Cases". In Fox, John; Long, J. Scott (eds.). Modern Methods of Data Analysis. Newbury Park, CA: Sage. pp. 266. ISBN 0-8039-3366-5.
- Cook, R. Dennis; Weisberg, Sanford (1982). Residuals and Influence in Regression. New York, NY: Chapman & Hall. hdl:11299/37076. ISBN 0-412-24280-X.
- Kim, Myung Geun (31 May 2017). "A cautionary note on the use of Cook's distance". Communications for Statistical Applications and Methods. 24 (3): 317–324. doi:10.5351/csam.2017.24.3.317. ISSN 2383-4757.
- High-dimensional influence measure
Further reading
- Atkinson, Anthony; Riani, Marco (2000). "Deletion Diagnostics". Robust Diagnostics and Regression Analysis. New York: Springer. pp. 22–25. ISBN 0-387-95017-6.
- Heiberger, Richard M.; Holland, Burt (2013). "Case Statistics". Statistical Analysis and Data Display. Springer Science & Business Media. pp. 312–27. ISBN 9781475742848.
- Krasker, William S.; Kuh, Edwin; Welsch, Roy E. (1983). "Estimation for dirty data and flawed models". Handbook of Econometrics. 1. Elsevier. pp. 651–698. doi:10.1016/S1573-4412(83)01015-6. ISBN 9780444861856.
- Aguinis, Herman; Gottfredson, Ryan K.; Joo, Harry (2013). "Best-Practice Recommendations for Defining, Identifying, and Handling Outliers". Organizational Research Methods. Sage. 16 (2): 270–301. doi:10.1177/1094428112470848. S2CID 54916947. Retrieved 4 December 2015.