# Rao score test

Rao’s score test is an alternative to the likelihood ratio and Wald tests. These three tests are referred to in the statistical literature on hypothesis testing as the Holy Trinity. All three are equivalent to the first order of asymptotics but differ to some extent in their second-order properties. None is uniformly superior to the others. Some related tests are Neyman’s $$C(\alpha)$$ and Neyman–Rao tests.

## Definition

Let $$X = (x_1,\ldots, x_n)$$ be an iid sample from a probability density function $$p(x,\theta) \ ,$$ where $$\theta$$ is an $$r$$-vector parameter. Let

$P(X,\theta) = p(x_1,\theta) \ldots p(x_n,\theta)$

The score vector of Fisher is $S (\theta) = \left[ s_1(\theta), \ldots, s_r(\theta) \right]' \quad s_j(\theta) = \frac{1}{P}\frac{\partial P}{\partial \theta_j} \quad j = 1, \ldots, r$

The Fisher information matrix of order $$r \times r$$ is defined by

$I(\theta) =( i_{jk} (\theta)),\quad i_{jk}(\theta) = E ( s_j(\theta) s_k(\theta)).$
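To make the definitions concrete, here is a minimal numerical sketch (the Poisson model and the data values are illustrative, not from the text): for an iid Poisson($$\theta$$) sample, the score is $$s(\theta) = \sum x_i/\theta - n$$ and the Fisher information is $$i(\theta) = n/\theta\ .$$ The analytic score can be checked against a finite-difference derivative of $$\log P\ .$$

```python
import math

def log_lik(data, theta):
    # log P(X, theta) for iid Poisson observations
    return sum(x * math.log(theta) - theta - math.lgamma(x + 1) for x in data)

def score(data, theta):
    # analytic Fisher score: d/dtheta log P = sum(x_i)/theta - n
    return sum(data) / theta - len(data)

def fisher_info(data, theta):
    # i(theta) = E[s(theta)^2] = n / theta for the Poisson model
    return len(data) / theta

data = [2, 3, 1, 4, 0, 2]   # made-up sample
theta = 1.5
h = 1e-6
numeric = (log_lik(data, theta + h) - log_lik(data, theta - h)) / (2 * h)
print(score(data, theta), numeric)  # the two should agree closely
```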

Rao’s score (RS) test for a simple hypothesis $$H_0: \theta = \theta_0\ ,$$ introduced in Rao (1948), is $\tag{1} RSS = S (\theta_0)^\prime \left[I (\theta_0)\right]^{-1} S(\theta_0)$

which has an asymptotic chi-square distribution with $$r$$ degrees of freedom. Test (1) uses only $$\theta_0 \ ,$$ the null value of $$\theta\ ,$$ unlike the Wald test. Consider the composite hypothesis $H_0 : H(\theta) = C$ where
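As an illustration of (1) in the scalar case ($$r = 1$$), the following sketch computes the RS statistic for $$H_0: \theta = \theta_0$$ in a Poisson model, where the statistic reduces to $$n(\bar{x} - \theta_0)^2/\theta_0\ .$$ The model choice and the data are hypothetical.

```python
# Rao score (RS) statistic for H0: theta = theta0 in the Poisson model,
# following (1): RSS = S(theta0)' [I(theta0)]^{-1} S(theta0).
def rs_statistic(data, theta0):
    n = len(data)
    s = sum(data) / theta0 - n   # score evaluated at the null value
    info = n / theta0            # Fisher information at theta0
    return s * s / info          # here r = 1, so the inverse is a scalar

data = [2, 3, 1, 4, 0, 2]        # made-up sample
rss = rs_statistic(data, theta0=1.5)
# reject at the 5% level if rss exceeds the chi-square(1) critical value 3.841
print(rss, rss > 3.841)
```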

$\tag{2} H(\theta)^\prime = (h_1(\theta),\ldots, h_t(\theta) ), \quad C^\prime=(c_1, \ldots, c_t), \quad t \leq r,$

$$h_1,\ldots,h_t$$ are given functions and $$c_1, \ldots,c_t$$ are given constants. Let $$\hat{\theta}$$ be the maximum likelihood estimate (mle) of $$\theta\!$$ under the restriction (2). The RS test for the composite hypothesis (2) is $RSC = S (\hat{\theta})^\prime [I (\hat{\theta})]^{-1} S (\hat{\theta}).$
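As a worked instance of RSC (a sketch under an assumed model, not from the text): take $$X \sim N(\mu, \sigma^2)$$ with the restriction $$h(\theta) = \mu = \mu_0$$ and $$\sigma^2$$ a nuisance parameter. The restricted mle is $$\hat{\sigma}^2 = n^{-1}\sum (x_i - \mu_0)^2\ ,$$ the $$\sigma^2$$-component of the score vanishes at the restricted mle, and RSC reduces to $$n(\bar{x} - \mu_0)^2/\hat{\sigma}^2\ .$$

```python
# RSC for the composite hypothesis H0: mu = mu0 in the N(mu, sigma^2) model,
# with sigma^2 a nuisance parameter (mu0 and the data are made up).
def rsc_normal_mean(data, mu0):
    n = len(data)
    # restricted mle: mu fixed at mu0, sigma^2 maximizes the likelihood
    sigma2 = sum((x - mu0) ** 2 for x in data) / n
    # score vector at the restricted mle; its sigma^2 component is zero,
    # and the information matrix is diagonal, so only the mu part survives
    s_mu = sum(x - mu0 for x in data) / sigma2
    # Fisher information for mu is n / sigma^2, so its inverse is sigma^2 / n
    return s_mu ** 2 * sigma2 / n

data = [2.1, 3.0, 1.4, 4.2, 0.5, 2.3]
rsc = rsc_normal_mean(data, mu0=2.0)
print(rsc)  # compare against the chi-square(1) critical value 3.841
```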

An alternative way of expressing the RSC is as follows. Note that the restricted mle $$\hat{\theta}$$ is a solution of $\tag{3} S (\theta) + [ G(\theta)]^\prime \lambda = 0, \quad H (\theta) = C$

where $$G(\theta)=((\partial h_i/\partial \theta_j ))$$ and $$\lambda$$ is a $$t$$-vector of Lagrange multipliers, so that $$[S (\hat{\theta}) ]^\prime = -\lambda^\prime G(\hat{\theta})\ .$$ Substituting this in the expression for RSC, we have $\tag{4} RSC =\lambda^\prime[ A(\hat{\theta} )]\lambda$

where $A(\theta) = G (\theta) [I (\theta)]^{-1}[ G (\theta)]^\prime$
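The equivalence of the score form and the form (4) can be checked numerically. In this sketch (assumed model and made-up data, not from the text), $$X \sim N(\mu,\sigma^2)$$ with $$h(\theta)=\mu=\mu_0\ ,$$ so $$G = (1, 0)\ ,$$ (3) gives $$\lambda = -s_\mu(\hat{\theta})\ ,$$ and $$A(\hat{\theta}) = \hat{\sigma}^2/n\ .$$

```python
# Check that the Lagrange-multiplier form (4) matches the score form of RSC
# for N(mu, sigma^2) with H0: mu = mu0 (illustrative values throughout).
def both_forms(data, mu0):
    n = len(data)
    sigma2 = sum((x - mu0) ** 2 for x in data) / n   # restricted mle of sigma^2
    s_mu = sum(x - mu0 for x in data) / sigma2       # mu-component of the score
    # score form: S' I^{-1} S, where only the mu component of S is nonzero
    score_form = s_mu ** 2 * sigma2 / n
    # LM form: h(theta) = mu gives G = (1, 0); from (3), lambda = -s_mu,
    # and A = G I^{-1} G' = sigma^2 / n
    lam = -s_mu
    a = sigma2 / n
    lm_form = lam * a * lam
    return score_form, lm_form

score_form, lm_form = both_forms([2.1, 3.0, 1.4, 4.2, 0.5, 2.3], mu0=2.0)
print(score_form, lm_form)  # the two forms agree
```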

Silvey (1959) expressed RSC in the form (4) and called it the Lagrangian Multiplier (LM) test. Neyman (1979) considered the special case of a composite hypothesis $\tag{5} H : \theta_1 = \theta_{10},\theta_2, \ldots, \theta_r.$

where $$\theta_{10}$$ is given and the rest are arbitrary.

The RSC for (5) is known as Neyman’s $$C(\alpha)$$ test in the statistical literature. Hall and Mathiason (1990) considered a more general composite hypothesis of the form $\tag{6} H : \theta_1 = \theta_{10},\ldots, \theta_q = \theta_{q0}, \quad \theta_{q+1}, \ldots, \theta_r.$

where $$\theta_{10},\ldots, \theta_{q0}$$ are all given and the rest are arbitrary.

The RSC for (6) is termed the Neyman–Rao test by them.

## History

In the early years of my appointment at the Indian Statistical Institute, I had the opportunity of interacting with the staff and research scholars and discussing with them new problems in statistics arising out of consultation work. One of the scholars, S. J. Poti, asked me about testing a simple hypothesis $$H : \theta = \theta_0$$ concerning a single parameter $$\theta$$ when there is some prior information about the alternative, such as $$\theta > \theta_0\ .$$ I suggested a procedure by which local power on the right side of $$\theta_0$$ is maximized, leading to a test of the form $$P^\prime(\theta)/ P (\theta) > \lambda$$ where $$P^\prime/ P$$ is Fisher’s score. The result was published in Rao and Poti (1946). We also proposed a general test of the form $$(P^\prime/ P)^2 > \lambda\ ,$$ which is likely to have good local power on either side of $$\theta_0\ .$$

Two years later, I was working on a problem at Cambridge University, UK, which involved testing simple and composite hypotheses concerning multiple parameters when there is information that the alternatives are close to those specified by the null hypothesis. This led to combining the individual test criteria based on Fisher’s scores $$s_1(\theta), \ldots, s_r(\theta)$$ into a single criterion. Following methods used in multivariate analysis, I arrived at statistics of the form (1) and (2), which were approved by R. A. Fisher, my thesis advisor while I was working at Cambridge University during 1946-48. The paper (Rao, 1948), containing the general discussion of score tests, was published in the Proceedings of the Cambridge Philosophical Society.

## Applications

A comprehensive account of the applications of RS tests in econometrics is given in Godfrey (1988). Some applications to statistical inference are given in Bera and Jarque (1981), Breusch and Pagan (1979), Breusch (1978), Godfrey (1978a,b), Byron (1968), Bera and Ullah (1991), and in a series of papers in Vol. 97, pp. 1-200, 2000, of the Journal of Statistical Planning and Inference. Comparisons with the likelihood ratio and Wald tests in terms of power and other properties are summarized in Rao (2005), which also gives references to some key papers and recent developments on RS tests, suggesting modifications in particular applications.