A chi-squared distribution with $k$ degrees of freedom is the distribution of the sum of squares of $k$ independent standard normal random variables. The degrees of freedom counts the number of independent normals involved, and since each squared standard normal has mean 1, the mean of the distribution is $k$. You don't *define* the mean to be the degrees of freedom (d.f.); it follows from the definition of the pdf and the definition of the expectation of a random variable.

The pdf of a chi-squared random variable with $k$ d.f. is $f(x) = \dfrac{x^{k/2-1} e^{-x/2}}{2^{k/2}\,\Gamma(k/2)}$ for $x > 0$. To see that the mean is $k$, note that $x \cdot f(x)$ can be recognized as another chi-squared density, one with $k+2$ d.f., missing its normalizing constant. If you multiply and divide by the relevant normalizing constant so that the integral is 1, you're left with a ratio of normalizing constants out the front (for the two different d.f.), and that ratio works out to exactly $k$.

Historically, Helmert identified this distribution as related to the distribution of the sample variance for iid samples from a normal distribution, though the use of the symbol $\chi^2$, and hence the name "chi-squared", doesn't come until Pearson's work about a generation later. If your question is really "why single out that pdf and give it a name?", the answer is that the sum of squares of independent standard normals is a random variable that arises fairly naturally in a number of contexts, and that is something we would therefore like to have a name for.

One such context: for iid samples from a normal distribution, $(N-1)s^2/\sigma^2$ is a random variable distributed as chi-squared with $N-1$ d.f., so we can use information about the sampling distribution of the variance estimate to find confidence intervals for $\sigma^2$.

Finally, for the chi-squared distribution with $n$ degrees of freedom, the moment generating function is $M_Y(s) = (1 - 2s)^{-n/2}$ for $s < 1/2$.
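As a quick sanity check on the claims above, here is a minimal simulation sketch (plain Python standard library, no external packages; the function and variable names are my own) of the "sum of squares of $k$ standard normals" construction. It checks that the empirical mean is near $k$, the variance is near $2k$, and the empirical MGF at a sample point $s = 0.1$ matches the closed form $(1-2s)^{-k/2}$:

```python
import math
import random

random.seed(42)

def chi2_sample(k):
    """One draw from chi-squared with k d.f.: sum of squares of k iid N(0,1)s."""
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(k))

k = 5
n = 200_000
samples = [chi2_sample(k) for _ in range(n)]

# The mean should be close to k, and the variance close to 2k.
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / (n - 1)

# Empirical MGF E[exp(s*Y)] at s = 0.1 vs. the closed form (1 - 2s)^(-k/2).
s = 0.1
emp_mgf = sum(math.exp(s * x) for x in samples) / n
theory_mgf = (1 - 2 * s) ** (-k / 2)

print(mean, var, emp_mgf, theory_mgf)
```

With 200,000 draws, the Monte Carlo error on all three quantities is small, so the printed empirical values should sit within a few percent of $k = 5$, $2k = 10$, and $(0.8)^{-5/2}$ respectively.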