Cumulants
Peter McCullagh and John Kolassa (2009), Scholarpedia, 4(3):4699. doi:10.4249/scholarpedia.4699
This article describes a sequence of numbers, called cumulants, that can be used to describe, and in some circumstances to approximate, a univariate or multivariate distribution. Cumulants are not unique in these roles; other sequences, such as moments and their generalizations, may also be used. Cumulants nevertheless have several advantages: they transform in a very simple way when the underlying random variable is subjected to an affine transformation, the cumulants of a sum of independent random variables are simply the sums of the cumulants of the addends, and cumulants describe in a simple way the difference between a distribution and its simplest Gaussian approximation.
Overview and Definitions
Definition
The moment of order r (or rth moment) of a real-valued random variable X is \mu_r = E(X^r)\ . The moment generating function
\tag{1}
M(\xi) = E(e^{\xi X}) = \sum_{r\ge 0} \mu_r \xi^r/r!
is an easy way to combine all of the moments into a single expression. The rth moment is hence the rth derivative of M at the origin.
When X has a distribution given by a density f\ , then
\tag{2}
\mu_r = \int_{-\infty}^\infty x^r f(x)\,dx,
and
\tag{3}
M(\xi) = E(e^{\xi X}) =\int_{-\infty}^\infty\exp(\xi x) f(x) d x.
The cumulants \kappa_r are the coefficients in the Taylor expansion of the cumulant generating function about the origin
\tag{4}
K(\xi) = \log M(\xi) = \sum_{r\ge 1} \kappa_r \xi^r/r!.
Comparing coefficients in the two expansions gives the first few cumulants in terms of moments, \kappa_1 = \mu_1\ , \kappa_2 = \mu_2 - \mu_1^2\ , \kappa_3 = \mu_3 - 3\mu_2\mu_1 + 2\mu_1^3\ . In the reverse direction
\tag{5}
\begin{array}{lcl}
\mu_2 &=& \kappa_2 + \kappa_1^2\\
\mu_3 &=& \kappa_3 + 3\kappa_2\kappa_1 + \kappa_1^3\\
\mu_4 &=& \kappa_4 + 4\kappa_3\kappa_1 + 3\kappa_2^2 + 6\kappa_2\kappa_1^2 + \kappa_1^4.
\end{array}
In particular, \kappa_1 = \mu_1 is the mean of X\ , \kappa_2 is the variance, and \kappa_3 = E((X - \mu_1)^3)\ . Higher-order cumulants are not the same as moments about the mean. Hald (2000) credits Thiele (1889) with the first derivation of cumulants. Fisher (1929) called the quantities \kappa_j cumulative moment functions; Hotelling (1933) claimed credit for the simpler term cumulants. Lauritzen (2002) presents an overview, translation, and reprinting of much of this early work.
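The moment-cumulant relations above can be generated to any order by the recursion \mu_r = \sum_{j=1}^{r} \binom{r-1}{j-1}\kappa_j\mu_{r-j}\ , obtained by comparing coefficients in M'(\xi) = K'(\xi)M(\xi)\ . A minimal sketch in plain Python (the function name is illustrative) reproduces the relations (5):

```python
from math import comb

def moments_from_cumulants(kappa):
    """kappa[r] is the cumulant of order r (kappa[0] is unused); returns the moments mu."""
    r_max = len(kappa) - 1
    mu = [1.0] + [0.0] * r_max                 # mu[0] = E(X^0) = 1
    for r in range(1, r_max + 1):
        mu[r] = sum(comb(r - 1, j - 1) * kappa[j] * mu[r - j] for j in range(1, r + 1))
    return mu

# Example: a Poisson variable with mean 2 has every cumulant equal to 2,
# and raw moments 2, 6, 22, 94.
print(moments_from_cumulants([0.0, 2.0, 2.0, 2.0, 2.0]))
```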
Examples
As above, let {\mathcal R} denote the real numbers. Let {\mathcal R}^+ represent the positive reals, and let {\mathcal N}=\{0,1,\ldots\} be the natural numbers.
Distribution | Density | CGF | Cumulants |
Normal | \frac{\exp(-x^2/2)}{\sqrt{2\pi}}, x\in{\mathcal R} | \xi^2/2 | \kappa_1=0\ , \kappa_2=1\ , \kappa_r=0 for r>2 |
Bernoulli | \pi^x(1-\pi)^{1-x}, x\in\{0,1\} | \log(1-\pi+\pi\exp(\xi)) | \kappa_1=\pi\ , \kappa_2=\pi(1-\pi)\ , \kappa_3=\pi(1-\pi)(1-2\pi) |
Poisson | \frac{\exp(-\lambda)\lambda^x}{x!}, x\in{\mathcal N} | (e^{\xi }-1)\lambda | \kappa_r=\lambda \ \forall r |
Exponential | \frac{\exp(-x/\lambda)}{\lambda}, x\in{\mathcal R}^+ | -\log(1-\lambda\xi) | \kappa_r=\lambda^r(r-1)! \ \forall r |
Geometric | (1-\pi)\pi^x, x\in{\mathcal N} | \log(1-\pi)-\log(1-\pi\exp(\xi)) | \kappa_1=\rho\ , \kappa_2=\rho^2+\rho\ , \kappa_3=2\rho^3+3\rho^2+\rho for \rho=\pi/(1-\pi)\ . |
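Any entry in the table can be checked by differentiating the cumulant generating function at the origin. A minimal symbolic sketch, assuming the sympy package is available, verifies the exponential case \kappa_r=\lambda^r(r-1)!\ :

```python
import sympy as sp

xi, lam = sp.symbols('xi lambda', positive=True)
K = -sp.log(1 - lam * xi)                    # CGF of the exponential distribution
for r in range(1, 6):
    kappa_r = sp.diff(K, xi, r).subs(xi, 0)  # r-th cumulant = r-th derivative at 0
    assert sp.simplify(kappa_r - lam**r * sp.factorial(r - 1)) == 0
    print(r, kappa_r)
```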
Definitions under less restrictive conditions
The Cauchy distribution with density \pi^{-1}/(1+x^2) has no moments because the integral (2) does not converge for any integer r\ge 1\ ; Student's t distribution on five degrees of freedom is symmetric with density (8/(3\pi\surd5))\,(1 + x^2/5)^{-3}\ . The first four moments are 0, 5/3, 0, 25\ . Higher-order moments are not defined. The cumulants up to order four are defined by (4) even though the moment generating function (1) does not exist for any real \xi\neq 0\ .
In both of these cases, the characteristic function M(i\xi) is well-defined for real \xi\ , \exp(-|\xi|) for the Cauchy distribution, and \exp(-|\xi|\surd 5)(1 + |\xi|\surd5 + 5\xi^2/3) for t_5\ . In the latter case, both M(i\xi) and K(i\xi) have Taylor expansions up to order four only, so the moments and cumulants are defined only up to this order. The infinite expansion (1) is justified when the radius of convergence is positive, in which case M(\xi) is finite on an open set containing zero, and all moments and cumulants are finite. However, finiteness of the moments does not imply that M(\xi) exists for any \xi\neq 0\ . The log normal distribution provides a counterexample. It has finite moments \mu_r = e^{r^2/2} of all orders, but (1) diverges for every \xi\neq 0\ .
Uniqueness
The normal distribution N(\mu, \sigma^2) has cumulant generating function \xi\mu + \xi^2 \sigma^2/2\ , a quadratic polynomial implying that all cumulants of order three and higher are zero. Marcinkiewicz (1939) showed that the normal distribution is the only distribution whose cumulant generating function is a polynomial, i.e. the only distribution having a finite number of non-zero cumulants. The Poisson distribution with mean \mu has moment generating function \exp(\mu(e^\xi - 1)) and cumulant generating function \mu(e^\xi -1)\ . Consequently all the cumulants are equal to the mean.
Two distinct distributions may have the same moments, and hence the same cumulants. This statement is fairly obvious for distributions whose moments are all infinite, or even for distributions having infinite higher-order moments. But it is much less obvious for distributions having finite moments of all orders. Heyde (1963) gave one such pair of distributions with densities f_1(x) = \exp(-(\log x)^2/2) / (x\sqrt{2\pi}) and f_2(x) = f_1(x) [1 + \sin(2\pi\log x)/2] for x > 0\ . The first of these is called the log normal distribution. To show that these distributions have the same moments it suffices to show that \int_0^\infty x^k f_1(x) \sin(2\pi\log x)\, dx = 0 for every integer k\ge 0\ . The substitution x = e^{y+k}\ , together with the periodicity of the sine function, reduces this integral to e^{k^2/2}\int_{-\infty}^\infty \phi(y)\sin(2\pi y)\,dy\ , which vanishes because the integrand is an odd function.
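A rough numerical check of the equality of moments, assuming numpy and scipy are available, integrates on the log scale x = e^t\ , where x^k f_1(x)\,dx becomes e^{kt - t^2/2}\,dt/\sqrt{2\pi}\ :

```python
import numpy as np
from scipy.integrate import quad

def moment(k, perturbed):
    """k-th moment of f_1 (perturbed=False) or f_2 (perturbed=True), computed on the log scale."""
    def integrand(t):
        base = np.exp(k * t - t**2 / 2) / np.sqrt(2 * np.pi)
        return base * (1 + np.sin(2 * np.pi * t) / 2) if perturbed else base
    return quad(integrand, -np.inf, np.inf)[0]

for k in range(4):
    # the two computed moments agree with each other and with exp(k^2/2)
    print(k, moment(k, False), moment(k, True), np.exp(k**2 / 2))
```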
If the sequence of moments is such that (1) has a positive radius of convergence, the distribution is uniquely determined by its moments.
Properties
Cumulants of order r \ge 2 are called semi-invariant on account of their behavior under affine transformation of variables (Thiele, 1903, Dressel, 1940). If \kappa_r is the rth cumulant of X\ , the rth cumulant of the affine transformation a + b X is b^r \kappa_r\ , independent of a\ . This behavior is considerably simpler than that of moments. However, moments about the mean are also semi-invariant, so this property alone does not explain why cumulants are useful for statistical purposes.
The term cumulant reflects their behavior under addition of random variables. Let S = X+Y be the sum of two independent random variables. The moment generating function of the sum is the product M_S(\xi) = M_X(\xi) M_Y(\xi)\ , so the cumulant generating function of the sum is the sum
\tag{6}
K_S(\xi) = K_X(\xi) + K_Y(\xi),
and the cumulants of each order accumulate by addition: \kappa_r(S) = \kappa_r(X) + \kappa_r(Y)\ .
For the standardized sum of n independent copies of X\ , the cumulant of order r is \kappa_r n^{1-r/2}\ . Provided that the cumulants are finite, all cumulants of order r\ge 3 of the standardized sum therefore tend to zero, which is a simple demonstration of the central limit theorem.
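Additivity can be illustrated numerically using the third cumulant, which coincides with the third central moment. A Monte-Carlo sketch, assuming numpy is available, adds an exponential variable (\kappa_3 = 2\lambda^3 = 16 for scale \lambda=2) to an independent Poisson variable (\kappa_3 = \lambda = 3):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
x = rng.exponential(scale=2.0, size=n)     # kappa_3 = 2 * 2**3 = 16
y = rng.poisson(lam=3.0, size=n)           # kappa_3 = 3

def k3(sample):
    """Third cumulant = third central moment."""
    return np.mean((sample - sample.mean())**3)

print(k3(x) + k3(y), k3(x + y))            # both estimates are close to 19
```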
Good (1977) obtained an expression for the rth cumulant of X as the rth moment of the discrete Fourier transform of an independent and identically distributed sequence as follows. Let X_1, X_2,\ldots be independent copies of X with rth cumulant \kappa_r\ , and let \omega = e^{2\pi i/n} be a primitive nth root of unity. The discrete Fourier combination Z = X_1 + \omega X_2 + \cdots + \omega^{n-1} X_n then satisfies E(Z^n) = n\kappa_n\ , so the cumulant of order n is recovered from the nth moment of Z\ .
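This identity can be illustrated by simulation. A Monte-Carlo sketch, assuming numpy is available, takes n = 3 and unit-exponential variables, for which \kappa_3 = 2 and hence E(Z^3) = 6\ :

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 3, 10**6
w = np.exp(2j * np.pi * np.arange(n) / n)      # 1, omega, omega^2
x = rng.exponential(size=(reps, n))
z = (x * w).sum(axis=1)                        # discrete Fourier combination
print((z**n).mean())                           # real part close to 6, imaginary part close to 0
```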
Multivariate cumulants
Somewhat surprisingly, the relation between moments and cumulants is simpler and more transparent in the multivariate case than in the univariate case. Let X = (X^1,\ldots, X^k) be the components of a random vector. In a departure from the univariate notation, we write \kappa^r = E(X^r) for the components of the mean vector, \kappa^{rs} = E(X^r X^s) for the components of the second moment matrix, \kappa^{r s t} = E(X^r X^s X^t) for the third moments, and so on. It is convenient notationally to adopt Einstein's summation convention, so \xi_r X^r denotes the linear combination \xi_1 X^1 + \cdots + \xi_k X^k\ , the square of the linear combination is (\xi_r X^r)^2 = \xi_r\xi_s X^r X^s\ , a sum of k^2 terms, and so on for higher powers. The Taylor expansion of the moment generating function M(\xi) = E(\exp(\xi_r X^r)) is M(\xi) = 1 + \xi_r \kappa^r + \textstyle{\frac1{2!}} \xi_r\xi_s \kappa^{rs} + \textstyle{\frac1{3!}} \xi_r\xi_s \xi_t \kappa^{r s t} +\cdots. The joint cumulants, written with separating commas as \kappa^{r,s}, \kappa^{r,s,t},\ldots\ , are the coefficients in the corresponding expansion of K(\xi) = \log M(\xi) = \xi_r \kappa^r + \textstyle{\frac1{2!}} \xi_r\xi_s \kappa^{r,s} + \textstyle{\frac1{3!}} \xi_r\xi_s \xi_t \kappa^{r,s,t} +\cdots.
Comparison of coefficients reveals that each moment \kappa^{rs}, \kappa^{r s t},\ldots is a sum over partitions of the superscripts, each term in the sum being a product of cumulants: \begin{array}{lcl} \kappa^{rs}&=&\kappa^{r,s} + \kappa^r\kappa^s\\ \kappa^{r s t}&=&\kappa^{r,s,t} + \kappa^{r,s}\kappa^t + \kappa^{r,t}\kappa^s + \kappa^{s,t}\kappa^r + \kappa^r\kappa^s\kappa^t\\ &=&\kappa^{r,s,t} + \kappa^{r,s}\kappa^t[3] + \kappa^r\kappa^s\kappa^t\\ \kappa^{r s t u}&=&\kappa^{r,s,t,u} + \kappa^{r,s,t}\kappa^u[4] + \kappa^{r,s}\kappa^{t,u}[3] + \kappa^{r,s}\kappa^t\kappa^u[6] + \kappa^r\kappa^s\kappa^t\kappa^u. \end{array}
These relationships are an instance of Möbius inversion on the partition lattice. The bracketed integers indicate sums of the stated number of similar terms, obtained by permuting the free indices, as illustrated by the two expressions given for \kappa^{r s t}\ .
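In the reverse direction, each joint cumulant is a sum over set partitions of products of joint moments, weighted by the Möbius factor (-1)^{\nu-1}(\nu-1)! where \nu is the number of blocks. The sketch below, in plain Python, takes an arbitrary joint-moment function as input (the interface and names are illustrative):

```python
from math import factorial, prod

def partitions(items):
    """Yield every set partition of a list as a list of blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        yield [[first]] + smaller                        # `first` in a block of its own
        for i in range(len(smaller)):                    # or joined to an existing block
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]

def cumulant(indices, moment):
    """Joint cumulant kappa^{i1,...,ik} from the joint-moment function `moment`."""
    total = 0.0
    for blocks in partitions(list(indices)):
        weight = (-1)**(len(blocks) - 1) * factorial(len(blocks) - 1)
        total += weight * prod(moment(tuple(b)) for b in blocks)
    return total

# One-dimensional illustration: raw moments of a Poisson variable with mean 2,
# for which every cumulant equals 2.
raw = {1: 2.0, 2: 6.0, 3: 22.0}
print(cumulant((1, 1, 1), lambda idx: raw[len(idx)]))    # prints 2.0
```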
Partition notation serves one additional purpose. It establishes moments and cumulants as special cases of generalized cumulants, which include objects of the type \kappa^{r,st} = {\rm cov}(X^r, X^s X^t)\ , \kappa^{rs, t u} = {\rm cov}(X^r X^s, X^t X^u)\ , and \kappa^{rs, t, u} with incompletely partitioned indices. These objects arise very naturally in statistical work involving asymptotic approximation of distributions. They are intermediate between moments and cumulants, and have characteristics of both.
Every generalized cumulant can be expressed as a sum of certain products of ordinary cumulants. Some examples are as follows: \begin{array}{lcl} \kappa^{rs, t} &=& \kappa^{r,s,t} + \kappa^r\kappa^{s,t} + \kappa^s \kappa^{r,t}\\ &=& \kappa^{r,s,t} + \kappa^r\kappa^{s,t}[2]\\ \kappa^{rs,t u} &=& \kappa^{r,s,t,u} + \kappa^{r,s,t}\kappa^u[4] + \kappa^{r,t}\kappa^{s,u}[2] + \kappa^{r,t}\kappa^s\kappa^u[4]\\ \kappa^{rs,t,u} &=& \kappa^{r,s,t,u} + \kappa^{r,t,u}\kappa^s[2] + \kappa^{r,t}\kappa^{s,u}[2] \end{array}
As an example of the way these formulae may be used, let X be a scalar random variable with cumulants \kappa_1,\kappa_2,\kappa_3,\ldots\ . By translating the second formula in the preceding list, we find that the variance of the squared variable is {\rm var}(X^2) = \kappa_4 + 4\kappa_3\kappa_1 + 2\kappa_2^2 + 4\kappa_2\kappa_1^2\ .
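A quick Monte-Carlo check of this formula, assuming numpy is available, uses a Poisson variable, for which every cumulant equals \lambda\ :

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 2.0
x = rng.poisson(lam, size=10**6).astype(float)

k1 = k2 = k3 = k4 = lam                              # all Poisson cumulants equal lambda
formula = k4 + 4*k3*k1 + 2*k2**2 + 4*k2*k1**2        # equals 58 for lambda = 2
print(np.var(x**2), formula)                         # the sample value is close to 58
```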
Exponential families
Let f be a probability distribution on an arbitrary measurable space ({\mathcal X},\nu)\ , and let t\colon{\mathcal X}\to{\mathcal R} be a real-valued random variable with cumulant generating function K(\cdot)\ , finite in a set \Theta containing zero in the interior. The family of distributions on {\mathcal X} with density f_\theta(x) = e^{\theta t(x)} f(x) / M(\theta) = e^{\theta t(x) - K(\theta)} f(x) for \theta\in\Theta is called the exponential family generated by f and t\ . The cumulant generating function of t under f_\theta is K(\theta+\xi) - K(\theta)\ , so the cumulants of t under f_\theta are the derivatives of K evaluated at \theta\ .
Two examples suffice to illustrate the idea. In the first example, {\mathcal X} = \{1,2,\ldots\} is the set of natural numbers, f(x) \propto 1/x^2 and t(x) = -\log(x)\ . The associated exponential family is f_\theta(x) = x^{-\theta}/\zeta(\theta), where \zeta(\theta) is the Riemann zeta function with real argument \theta > 1\ .
In the second example, {\mathcal X}={\mathcal X}_n is the symmetric group or the set of permutations of n letters, x\in{\mathcal X}_n is a permutation, t(x) is the number of cycles, f(x) = 1/n! is the uniform distribution, and M_n(\xi) = \Gamma(n+e^\xi)/(n!\, \Gamma(e^\xi)) for all real \xi\ . The exponential family of distributions on permutations of [n] is f_{n,\theta}(x) = \frac{\Gamma(\lambda)\, \lambda^{t(x)}} {\Gamma(n+\lambda)}, where \lambda = e^\theta\ .
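The construction by exponential tilting, and the fact that the mean of t under f_\theta is K'(\theta)\ , are easy to illustrate numerically on a finite sample space. A minimal sketch, assuming numpy is available (the sample space and statistic below are illustrative):

```python
import numpy as np

x = np.arange(6, dtype=float)                  # a finite sample space {0,...,5}
f = np.full(6, 1/6)                            # base distribution: uniform
t = x                                          # canonical statistic t(x) = x

K = lambda th: np.log((np.exp(th * t) * f).sum())   # CGF of t under f

def tilted(th):
    w = np.exp(th * t) * f
    return w / w.sum()                         # f_theta(x) = exp(theta t(x) - K(theta)) f(x)

theta, eps = 0.7, 1e-6
mean_t = (tilted(theta) * t).sum()
print(mean_t, (K(theta + eps) - K(theta - eps)) / (2 * eps))   # both equal K'(theta)
```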
In the multi-parameter case, t\colon{\mathcal X}\to{\mathcal R}^p is a random vector and \xi\colon{\mathcal R}^p\to{\mathcal R} is a linear functional, so M(\xi) = E(e^{\xi(t)}) is the joint moment generating function. It is sometimes convenient to employ Einstein's implicit summation convention in the form \theta(t) = \theta_i t^i\ , where t^1,\ldots, t^p are the components of t(x)\ , and \theta_1,\ldots, \theta_p are the coefficients of the linear functional. For simplicity of notation in what follows, {\mathcal X}={\mathcal R}^p and t(x) = x is the identity function.
An exponential-family distribution in {\mathcal R}^p has the form f_\theta(x)=\exp(x^j\theta_j-g(x)-\varphi(\theta)), where e^{-g(x)} plays the role of the base density and \varphi(\theta) is the normalizing constant. The function \varphi is the cumulant function of the family: the cumulant generating function of X under f_\theta is \varphi(\theta+\xi)-\varphi(\theta)\ , so the joint cumulants of X are the partial derivatives of \varphi evaluated at \theta\ .
Calculus of cumulants
Statistical work frequently requires the sampling distributions of estimates of cumulants. Such calculations are notationally complicated, and may be simplified by a tool called the umbral calculus. The umbral calculus is a syntax or formal system consisting of certain operations on objects called umbrae, mimicking addition and multiplication of independent real-valued random variables. Rota and Taylor (1994) review this calculus. To each real-valued sequence 1, a_1, a_2,\ldots there corresponds an umbra \alpha such that E(\alpha^r) = a_r\ . This definition goes beyond the random variable context to allow for special umbrae, the singleton and Bell umbrae, corresponding to no real-valued random variable. Using these special umbrae, one develops the notion of an \alpha-cumulant umbra \chi\cdot\alpha by formal product operations in the syntax. Properties of cumulants, k\!-statistics and other polynomial functions are then derived by purely combinatorial operations. Di Nardo et al. (2008) present details.
Streitberg (1990) presents parallels between the calculus of cumulants and the calculus of certain decompositions of multivariate cumulative distribution functions into independent segments; these characterizations in terms of independent segments are called Lancaster interactions.
Moment and Cumulant Measures for Random Measures
Moments and cumulants extend quite naturally to random measures. Let \upsilon be a random measure on a space \Upsilon\ . Then the expectation of \upsilon is defined as that measure such that E(\upsilon)(A)=E(\upsilon(A))\ , for A in a suitable sigma field. Higher-order moments then translate to expectations of product measures. Let \upsilon^{(k)} be the measure defined on \Upsilon^k\ , such that \upsilon^{(k)}(A_1\times\cdots\times A_k)=\prod_{j=1}^k\upsilon(A_j)\ . Then the moment of order k of \upsilon is E(\upsilon^{(k)})\ . A moment generating functional can similarly be defined for \upsilon\ ; a heuristic definition may be constructed through analogy with (1): let \Phi(f)=\sum_{r=0}^\infty \frac{1}{r!}\int f(x_1)\cdots f(x_r)\,\upsilon^{(r)}(d x_1\cdots d x_r) = \exp\left(\int f\,d\upsilon\right)\ ; the moment generating functional is then E(\Phi(f))\ , and cumulant measures are obtained from the expansion of its logarithm, in analogy with (4). Daley and Vere-Jones (1988) give a rigorous treatment of moment and cumulant measures.
Approximation of distributions
Edgeworth approximation
Suppose that Y is a random variable that arises as the sum of n independent and identically-distributed summands, each of which has mean 0\ , unit variance, and higher-order cumulants \kappa_r\ , and let X=Y/\sqrt{n}\ . For ease of exposition, assume that cumulants of all orders exist. Then, using (6), the cumulant generating function of X is given by K(\xi)=\xi^2/2 +\kappa_3\xi^3/(6\sqrt{n}) +\kappa_4\xi^4/(24 n) +\cdots\ , and the moment generating function of X is
\tag{7}
M(\xi)=\exp(\xi^2/2)\exp\left(\kappa_3\xi^3/(6\sqrt{n})+\kappa_4\xi^4/(24 n)+\cdots\right).
Repeated application of integration by parts to (3) shows that
\tag{8}
\xi^r M(\xi) =\int_{-\infty}^\infty\exp(\xi x)(-1)^r f^{(r)}(x) d x,
where f^{(r)} denotes the derivative of f of order r\ . Relation (8) holds if f and its derivatives go to zero quickly as \vert x\vert\to\infty\ . Applying (8) to the normal density \phi(x)=\exp(-x^2/2)/\sqrt{2\pi}\ , and applying the result to (7), gives M(\xi)\approx\int_{-\infty}^\infty\exp(\xi x)\phi(x)\left[1+{{\kappa_3 h^3(x)}\over{6\sqrt{n}}}+{{\kappa_4h^4(x)}\over{24 n}}+ {{\kappa_3^2h^6(x)}\over{72 n}}\right] d x, where the functions h^r are defined by (-1)^r\phi^{(r)}(x)=h^r(x)\phi(x)\ . Matching integrands suggests the approximation
\tag{9}
e_4(x)=\phi(x)\left[1+{{\kappa_3 h^3(x)}\over{6\sqrt{n}}}+{{\kappa_4h^4(x)}\over{24 n}}+ {{\kappa_3^2h^6(x)}\over{72 n}}\right]
to the density of X\ .
In fact, when the summands contributing to Y have a density and a finite cumulant of order 5, the error in the approximation, multiplied by n^{3/2}\ , remains bounded. The functions h^r defined above are the Hermite polynomials. The approximation (9) is known as the Edgeworth series; the subscript refers to the number of cumulants used in its definition. This series can be used to approximate either the cumulative distribution function or the survival function through term-wise integration.
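The series is straightforward to evaluate. A sketch, assuming numpy and scipy are available, compares e_4 with the exact density of the standardized sum of n unit exponentials (\kappa_3 = 2\ , \kappa_4 = 6), for which the sum has a gamma distribution:

```python
import numpy as np
from scipy.stats import gamma, norm

n, k3, k4 = 10, 2.0, 6.0                     # cumulants of a unit exponential summand
h3 = lambda x: x**3 - 3*x                    # Hermite polynomials h^3, h^4, h^6
h4 = lambda x: x**4 - 6*x**2 + 3
h6 = lambda x: x**6 - 15*x**4 + 45*x**2 - 15

def edgeworth(x):
    return norm.pdf(x) * (1 + k3*h3(x)/(6*np.sqrt(n))
                            + k4*h4(x)/(24*n) + k3**2*h6(x)/(72*n))

def exact(x):
    # X = (Y_1 + ... + Y_n - n)/sqrt(n) with Y_i unit exponential, so the sum is Gamma(n, 1)
    return np.sqrt(n) * gamma.pdf(n + x*np.sqrt(n), a=n)

for x in (-1.0, 0.0, 1.0, 2.0):
    print(x, exact(x), edgeworth(x), norm.pdf(x))   # e_4 improves on phi(x) alone
```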
The preceding discussion is intended to be heuristic; Kolassa (2006) presents a rigorous derivation, along with the natural extension to random vectors.
Saddlepoint approximation
The approximation (9) to the density f(x) has the property that |f(x)-e_r(x)|\leq C n^{-(r-1)/2}\ , for some constant C\ , when the cumulant of order r+1 exists; C does not depend on x\ . A similar bound holds for the relative error (f(x)-e_r(x))/f(x)\ , only when x is restricted to a finite interval. Because of the polynomial factor multiplying the first omitted term in (9), the relative error can be expected to behave poorly. One might prefer an approximation that maintains good behavior for values of X in a range that increases as n increases; specifically, one might prefer an approximation that performs well for values of \bar Y=X/\sqrt{n} in a fixed interval.
Assume again that random variables Y_j are independent and identically distributed, each with a cumulant generating function K(\xi) finite for \xi in a neighborhood of 0\ . As above, define the exponential family f_{\bar Y}(\bar y;\theta)=\exp(\theta\bar y-K(\theta))f_{\bar Y}(\bar y), and choose the saddlepoint \hat\theta to solve
\tag{10}
K'(\hat\theta)=\bar y;
this makes the expectation of the distribution with density f_{\bar Y}(\cdot;\hat\theta) equal to the observed value. One then applies (9), with the scale of the ordinate changed to reflect the fact that we are approximating the distribution of X/\sqrt{n}\ , to obtain
\tag{11}
f_{\bar Y}(\bar y)\approx\exp(-\hat\theta\bar y+K(\hat\theta))\, n\,\phi(0)\left[1+{{\hat\kappa_3 h^3(0)}\over{6\sqrt{n}}}+{{\hat\kappa_4h^4(0)}\over{24 n}}+ {{\hat\kappa_3^2h^6(0)}\over{72 n}}\right].
Here \hat\kappa_j are calculated from the derivatives of K in the preceding manner, but in this case evaluated at \hat\theta\ . This approximation may only be applied to values of \bar y for which (10) has solutions in an open neighborhood of 0. Expression (11) represents the saddlepoint approximation to the density of the mean \bar Y\ ; since f_{\bar Y}(\bar y;\theta) has a cumulant generating function defined on an open set containing 0\ , cumulants of all orders exist, the Edgeworth series including \kappa_6 may be applied to f_{\bar Y}(\bar y;\theta)\ , and so the error in the Edgeworth series is of order O(1/n^2)\ . Hence the error in (11) is of the same order, and in this case, is relative and uniform for values of \bar y in a bounded subset of an open subset on which (10) has a solution. This approximation was introduced to the statistics literature by Daniels (1954).
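A numerical sketch, assuming numpy and scipy are available, illustrates the idea for the mean of n unit exponentials, K(\xi) = -\log(1-\xi)\ . It uses the standard leading-order form \sqrt{n/(2\pi K''(\hat\theta))}\exp\{n(K(\hat\theta)-\hat\theta\bar y)\} of the saddlepoint density for a mean, omitting the correction terms displayed in (11); the exact density of the mean is gamma:

```python
import numpy as np
from scipy.stats import gamma

n = 5                                        # number of unit-exponential summands

def saddlepoint(ybar):
    theta = 1 - 1/ybar                       # solves K'(theta) = ybar, equation (10)
    K  = -np.log(1 - theta)                  # K evaluated at the saddlepoint
    K2 = 1/(1 - theta)**2                    # K''(theta)
    return np.sqrt(n / (2*np.pi*K2)) * np.exp(n*(K - theta*ybar))

def exact(ybar):
    return gamma.pdf(ybar, a=n, scale=1/n)   # the mean of n unit exponentials is Gamma(n, 1/n)

for ybar in (0.5, 1.0, 2.0):
    print(ybar, exact(ybar), saddlepoint(ybar))
```

For this family the ratio of the two densities is constant in \bar y (about 1.017 when n=5), so the relative error is uniform and is removed entirely by renormalization.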
The Edgeworth series for the density can be integrated term by term to obtain an approximation to tail probabilities. Integration of the saddlepoint approximation is more delicate. Two main approaches have been investigated. Daniels (1987) expresses f_{\bar Y}(\bar y) exactly as a complex integral involving K(\xi)\ , integrates with respect to \bar y to obtain another complex integral, and reviews techniques for approximating the resulting integrals. Robinson (1982) and Lugannani and Rice (1980) derive tail probability approximations by approximately integrating (11) with respect to \bar y directly.
These saddlepoint and Edgeworth approximations have multivariate and conditional extensions. Davison (1988) exploits the conditional saddlepoint tail probability approximation to perform inference in canonical exponential families.
Samples and sub-samples
A function f\colon{\mathcal R}^n\to{\mathcal R} is symmetric if f(x_1 ,\ldots, x_n) = f(x_{\pi(1)} ,\ldots, x_{\pi(n)}) for each permutation \pi of the arguments. For example, the total T_n = x_1 + \cdots + x_n\ , the average T_n/n\ , the min, max and median are symmetric functions, as are the sum of squares S_n = \sum x_i^2\ , the sample variance s_n^2 = (S_n - T_n^2/n)/(n-1) and the mean absolute difference \sum_{i\ne j} |x_i - x_j|/(n(n-1))\ .
A vector x in {\mathcal R}^n is an ordered list of n real numbers (x_1 ,\ldots, x_n) or a function x\colon[n]\to{\mathcal R} where [n]=\{1 ,\ldots, n\}\ . For m \le n\ , a one-to-one function \varphi\colon[m]\to[n] is a sample of size m\ , the sampled values being x\varphi = (x_{\varphi(1)} ,\ldots, x_{\varphi(m)})\ . All told, there are n(n-1)\cdots(n-m+1) distinct samples of size m that can be taken from a list of length n\ . A sequence of functions f_n\colon{\mathcal R}^n\to{\mathcal R} is consistent under sub-sampling if, for each f_m, f_n\ , f_n(x) = {\rm ave} _\varphi f_m(x\varphi), where the average is taken over all samples \varphi of size m\ .
Although the total and the median are both symmetric functions, neither is consistent under sub-sampling. For example, the median of the numbers (0,1,3) is one, but the average of the medians of samples of size two is 4/3. However, the average \bar x_n = T_n/n is sampling consistent. Likewise the sample variance s_n^2 = \sum(x_i - \bar x)^2/(n-1) with divisor n-1 is sampling consistent, but the mean squared deviation \sum(x_i - \bar x_n)^2/n with divisor n is not. Other sampling consistent functions include Fisher's k\!-statistics, the first few of which are k_{1,n} = \bar x_n\ , k_{2,n} = s_n^2 for n\ge 2\ , k_{3,n} = n\sum(x_i - \bar x_n)^3/((n-1)(n-2)), defined for n\ge 3\ .
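Sub-sampling consistency of the first three k\!-statistics is easy to verify directly on a small list. A brief sketch, assuming numpy is available (ordered samples are enumerated with itertools; the function names are illustrative):

```python
import numpy as np
from itertools import permutations

def k_stats(x):
    """The first three k-statistics k_1, k_2, k_3 of a list of numbers."""
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), x.mean()
    k1 = xbar
    k2 = ((x - xbar)**2).sum() / (n - 1)
    k3 = n * ((x - xbar)**3).sum() / ((n - 1) * (n - 2))
    return np.array([k1, k2, k3])

x = [0.0, 1.0, 3.0, 7.0]
subsample_values = [k_stats(s) for s in permutations(x, 3)]   # every sample of size 3
print(k_stats(x))                         # k-statistics of the full list
print(np.mean(subsample_values, axis=0))  # the averages agree with the line above
```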
For a sequence of independent and identically distributed random variables, the k\!-statistic of order r\le n is the unique symmetric polynomial function such that E(k_{r,n}) = \kappa_r\ . Fisher (1929) derived the variances and covariances. The connection with finite-population sub-sampling was developed by Tukey (1950).
The class of statistics called U\!-statistics is consistent under sub-sampling.
References
- D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes. Springer-Verlag, New York, 1988.
- H. E. Daniels. Saddlepoint approximations in statistics. The Annals of Mathematical Statistics, 25 (4): 631--650, 1954.
- H. E. Daniels. Tail probability approximations. Review of the International Statistical Institute, 55: 37--46, 1987.
- A. C. Davison. Approximate conditional inference in generalized linear models. Journal of the Royal Statistical Society Series B, 50: 445--461, 1988.
- E. Di Nardo, G. Guarino, and D. Senato. A unifying framework for k\!-statistics, polykays and their multivariate generalizations. Bernoulli, 14: 440--468, 2008.
- P. L. Dressel. Statistical seminvariants and their estimates with particular emphasis on their relation to algebraic invariants. The Annals of Mathematical Statistics, 11 (1): 33--57, 1940.
- F. Y. Edgeworth. On the representation of statistical frequency by a series. Journal of the Royal Statistical Society, 70 (1): 102--106, 1907.
- R. A. Fisher. Moments and product moments of sampling distributions. Proceedings of the London Mathematical Society, Series 2, 30: 199--238, 1929.
- I. J. Good. A new formula for k-statistics. The Annals of Statistics, 5 (1): 224--228, 1977.
- A. Hald. The early history of cumulants and the Gram-Charlier series. International Statistical Review, 68: 137--153, 2000.
- C. C. Heyde. On a property of the lognormal distribution. Journal of the Royal Statistical Society. Series B (Methodological), 25 (2): 392--393, 1963.
- H. Hotelling. Review: [untitled]. Journal of the American Statistical Association, 28 (183): 374--375, 1933.
- J. E. Kolassa. Series Approximation Methods in Statistics. Springer-Verlag, New York, 2006.
- S.L. Lauritzen, editor. Thiele: pioneer in statistics. Oxford University Press, New York, 2002.
- R. Lugannani and S. Rice. Saddle point approximation for the distribution of the sum of independent random variables. Advances in Applied Probability, 12: 475--490, 1980.
- J. Marcinkiewicz. Sur une propriété de la loi de Gauss. Mathematische Zeitschrift, 44: 612--618, 1939.
- J. Robinson. Saddlepoint approximations for permutation tests and confidence intervals. Journal of the Royal Statistical Society. Series B (Methodological), 44 (1): 91--101, 1982.
- G.-C. Rota and B. D. Taylor. The classical umbral calculus. SIAM J. Math. Anal, 25 (2): 694--711, 1994.
- B. Streitberg. Lancaster interactions revisited. The Annals of Statistics, 18 (4): 1878--1885, 1990.
- T. N. Thiele. Almindelig Iagttagelseslaere: Sandsynlighedsregning og mindste Kvadraters Methode. C. A. Reitzel, Copenhagen, 1889.
- T. N. Thiele. Theory of Observations. C. & E. Layton, London, 1903.
- J. W. Tukey. Some sampling simplified. Journal of the American Statistical Association, 45 (252): 501--519, 1950.