- normal distribution
- multivariate normal distribution
- central and noncentral distributions
- elliptical distribution
- chi-square distribution
- Wishart distribution

One peculiar feature of the normal distribution is that it is completely described by its first two moments--higher order moments provide no new information about the distribution. This feature has both good and bad consequences. On the good side, statistical derivations involving the normal distribution are very much simplified, since higher order moments can be ignored. On the bad side, normal distributions increase the likelihood that the parameters of statistical models will not be identified, because there will be relatively few pieces of distinct information--fewer "knowns"--available for this purpose (Bekker, Merckens and Wansbeek, 1994).
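As a quick illustration of this point (not part of the original FAQ), the following Python sketch draws two normal samples with very different means and variances and shows that their higher-order moments add nothing new: skewness and excess kurtosis stay at zero regardless of the first two moments.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two normal samples with very different first two moments
for mu, sigma in [(0.0, 1.0), (5.0, 3.0)]:
    x = rng.normal(mu, sigma, size=200_000)
    # For a normal distribution, all higher-order moments are fixed
    # functions of mu and sigma: skewness is 0 and excess kurtosis is 0.
    print(f"mu={mu}, sigma={sigma}: "
          f"skew={stats.skew(x):.3f}, "
          f"excess kurtosis={stats.kurtosis(x):.3f}")
```

Both lines of output show skewness and excess kurtosis near zero, which is the sense in which higher moments provide no distinct "knowns" for identification.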

These chi-square distributions are themselves additive. That is, a chi-square-distributed variable with d1 degrees of freedom can be added to one with d2 degrees of freedom to yield a chi-square-distributed variable with d1 + d2 degrees of freedom, as long as the two added variables are independent. Analogous results can be obtained by subtraction, under the same condition. Steiger, Shapiro and Browne (1985) showed that the independence condition was achieved for *a priori*-specified comparisons of the chi-square statistics from nested models. This gives rise to the familiar chi-square difference test.
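The additivity property is easy to check by simulation. This sketch (an illustration, not part of the original FAQ) sums independent chi-square draws with d1 and d2 degrees of freedom and compares the sum's moments to those of a chi-square with d1 + d2 degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2 = 3, 5
n = 100_000

# Independent chi-square draws with d1 and d2 degrees of freedom
x1 = rng.chisquare(d1, size=n)
x2 = rng.chisquare(d2, size=n)
s = x1 + x2  # should follow a chi-square with d1 + d2 degrees of freedom

# A chi-square with df degrees of freedom has mean df and variance 2*df
print(f"sample mean = {s.mean():.3f}  (theory: {d1 + d2})")
print(f"sample var  = {s.var():.3f}  (theory: {2 * (d1 + d2)})")
```

The chi-square difference test works the same machinery in reverse: the difference of two independent chi-square statistics from nested models is itself chi-square distributed, with the difference in degrees of freedom.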

As noted above, if some or all of the original normal variables have nonzero means, then the result will be a noncentral chi-square distribution, with noncentrality parameter λ (lambda). The mean of this distribution is equal to df + λ, and the variance is equal to 2df + 4λ. The effect of the noncentrality parameter is to move the distribution to the right and to make it appear flatter and more symmetrical. Adding two independent noncentral chi-square distributed variables with noncentrality parameters λ1 and λ2 yields a noncentral chi-square distributed variable with noncentrality parameter λ1 + λ2. In principle, nested models can also be evaluated using difference tests, but the noncentrality issue complicates interpretation.
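These formulas can be verified with SciPy's noncentral chi-square distribution (an illustration, not part of the original FAQ): the mean is df + lambda, the variance is 2df + 4*lambda, and the noncentrality parameters of independent variables add.

```python
import numpy as np
from scipy import stats

df, lam = 5, 3.0  # degrees of freedom and noncentrality parameter

d = stats.ncx2(df, lam)
print(d.mean())  # equals df + lam = 8
print(d.var())   # equals 2*df + 4*lam = 22

# Additivity of the noncentrality parameter: the sum of independent
# ncx2(df1, lam1) and ncx2(df2, lam2) draws should behave like
# ncx2(df1 + df2, lam1 + lam2), here with mean (3 + 4) + (1 + 2) = 10
rng = np.random.default_rng(2)
x = (rng.noncentral_chisquare(3, 1.0, size=200_000)
     + rng.noncentral_chisquare(4, 2.0, size=200_000))
print(f"sample mean of sum = {x.mean():.3f}  (theory: 10)")
```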

If the covariance matrix, Σ, is nonsingular, and if the sample size exceeds the number of variables, then W, a matrix related to Σ, will also be nonsingular. This is critical to deriving confidence intervals for the elements of Σ and for simple, partial and multiple correlations.
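The role of sample size can be seen directly (an illustrative sketch, not part of the original FAQ; the population covariance matrix used here is made up). The cross-product matrix W of multivariate normal data is full rank when observations outnumber variables, and necessarily singular otherwise.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 4                         # number of variables
sigma = np.eye(p) + 0.3       # an arbitrary nonsingular population covariance

# Case 1: sample size exceeds the number of variables
n = 50
x = rng.multivariate_normal(np.zeros(p), sigma, size=n)
w = x.T @ x                   # Wishart-distributed cross-product matrix
print(np.linalg.matrix_rank(w))        # full rank p: W is nonsingular

# Case 2: fewer observations than variables
x_small = rng.multivariate_normal(np.zeros(p), sigma, size=p - 1)
w_small = x_small.T @ x_small
print(np.linalg.matrix_rank(w_small))  # at most p - 1: W is singular
```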




Last updated: May 9, 1996