Why is a t-distribution used for testing a linear regression coefficient?


In practice, using a standard t-test to check the significance of a linear regression coefficient is common practice. The mechanics of the calculation make sense to me.

Why can the t-distribution be used to model the standard test statistic used in testing linear regression hypotheses? The standard test statistic I am referring to here is:

$$T_0 = \frac{\hat\beta - \beta_0}{SE(\hat\beta)}$$
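For concreteness, here is a minimal sketch (not part of the original post) of how this statistic is computed for a slope coefficient under $H_0\colon \beta = 0$, checked against the value statsmodels reports:

```python
# Minimal sketch: compute T0 = (beta_hat - beta_0) / SE(beta_hat) by hand
# for a simulated simple regression and compare it with statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)   # true slope 1.5, noise sd 1

X = sm.add_constant(x)                   # design matrix with intercept column
fit = sm.OLS(y, X).fit()

beta_hat = fit.params[1]                 # estimated slope
se = fit.bse[1]                          # its standard error
t0 = (beta_hat - 0.0) / se               # T0 under H0: beta_0 = 0

print(t0, fit.tvalues[1])                # identical by construction
```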
Nate Parke
A full and complete answer to this question will be quite long, I'm sure. So while you wait for someone to tackle this, you can get a pretty good idea of why this is the case by looking at some notes I found online here: onlinecourses.science.psu.edu/stat501/node/297. Note specifically that $t_{(n-p)}^2 = F(1, n-p)$.
StatsStudent
I cannot believe this is not a duplicate, and yet all the upvotes (both on the question and the answers)... What about this? Or perhaps it is not a duplicate, which means there are (or there were until today) super-basic topics that still have not been covered over the nearly seven years of existence of Cross Validated... Wow...
Richard Hardy
@RichardHardy Hmm, that sounds like a duplicate. While it's more verbose, the question is specifically: "How can I prove that for $\hat\beta_i$, $\frac{\hat\beta_i - \beta_i}{s_{\hat\beta_i}} \sim t_{n-k}$?"
Firebug

Answers:


To understand why we use the t-distribution, you need to know the underlying distributions of $\hat\beta$ and of the residual sum of squares (RSS), as these two put together will give you the t-distribution.

The easier part is the distribution of $\hat\beta$, which is normal. To see this, note that $\hat\beta = (X^TX)^{-1}X^TY$, so it is a linear function of $Y$, where $Y \sim N(X\beta, \sigma^2 I_n)$. As a result it is also normally distributed, $\hat\beta \sim N\left(\beta, \sigma^2 (X^TX)^{-1}\right)$; let me know if you need help deriving the distribution of $\hat\beta$.
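Since that derivation is only alluded to, here is a brief sketch of it (mine, not part of the original answer) in the same notation, writing $Y = X\beta + \varepsilon$ with $\varepsilon \sim N(0, \sigma^2 I_n)$:

```latex
% hat{beta} is an affine function of the Gaussian vector Y, hence Gaussian;
% its mean and covariance follow from the usual rules for linear maps.
\begin{align*}
\hat\beta &= (X^TX)^{-1}X^TY
           = (X^TX)^{-1}X^T(X\beta + \varepsilon)
           = \beta + (X^TX)^{-1}X^T\varepsilon \\
E[\hat\beta] &= \beta \quad \text{(since } E[\varepsilon] = 0\text{)} \\
\operatorname{Var}(\hat\beta)
  &= (X^TX)^{-1}X^T\,(\sigma^2 I_n)\,X(X^TX)^{-1}
   = \sigma^2 (X^TX)^{-1}
\end{align*}
```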

Additionally, $\frac{RSS}{\sigma^2} \sim \chi^2_{n-p}$, where $n$ is the number of observations and $p$ is the number of parameters used in your regression. The proof of this is a bit more involved, but still straightforward to derive (see the proof here: Why is RSS distributed chi square times n-p?).
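As a quick sanity check (illustrative, not from the original answer), a simulation under an assumed small design confirms that $RSS/\sigma^2$ has mean $n-p$ and variance $2(n-p)$, as a $\chi^2_{n-p}$ variable should:

```python
# Monte Carlo check that RSS / sigma^2 behaves like chi^2_{n-p}.
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma = 40, 3, 2.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([1.0, -2.0, 0.5])

draws = []
for _ in range(20_000):
    y = X @ beta + rng.normal(scale=sigma, size=n)
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS fit
    rss = np.sum((y - X @ beta_hat) ** 2)             # residual sum of squares
    draws.append(rss / sigma**2)

print(np.mean(draws), n - p)        # empirical mean vs. n - p = 37
print(np.var(draws), 2 * (n - p))   # empirical variance vs. 2(n - p) = 74
```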

Up until this point I have considered everything in matrix/vector notation, but for simplicity let's focus on a single coefficient $\hat\beta_i$ and use its normal distribution, which gives us:

$$\frac{\hat\beta_i - \beta_i}{\sigma\sqrt{(X^TX)^{-1}_{ii}}} \sim N(0,1)$$

Additionally, from the chi-squared distribution of RSS we have that:

$$\frac{(n-p)s^2}{\sigma^2} \sim \chi^2_{n-p}$$

This is simply a rearrangement of the first chi-squared expression, where we define $s^2 = \frac{RSS}{n-p}$, which is an unbiased estimator for $\sigma^2$; this chi-squared variable is independent of the $N(0,1)$ above. By the definition of the $t_{n-p}$ distribution, dividing a standard normal by the square root of an independent chi-squared variable over its degrees of freedom gives you a t-distribution (for the proof see: A normal divided by the $\sqrt{\chi^2(s)/s}$ gives you a t-distribution -- proof), so you get that:

$$\frac{\hat\beta_i - \beta_i}{s\sqrt{(X^TX)^{-1}_{ii}}} \sim t_{n-p}$$

where $s\sqrt{(X^TX)^{-1}_{ii}} = SE(\hat\beta_i)$.
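To see the whole construction at work, here is an illustrative simulation (mine, not part of the original answer): the standardized coefficient built with $s$ in place of $\sigma$ should be indistinguishable from a $t_{n-p}$ sample:

```python
# Monte Carlo check that (beta_hat_i - beta_i) / (s * sqrt[(X^T X)^{-1}_ii])
# follows a t-distribution with n - p degrees of freedom.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, p, sigma, i = 30, 3, 1.5, 1              # i indexes the tested coefficient
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([0.5, 1.0, -1.0])
xtx_inv_ii = np.linalg.inv(X.T @ X)[i, i]

t_stats = []
for _ in range(20_000):
    y = X @ beta + rng.normal(scale=sigma, size=n)
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    rss = np.sum((y - X @ beta_hat) ** 2)
    s = np.sqrt(rss / (n - p))              # s^2 is unbiased for sigma^2
    t_stats.append((beta_hat[i] - beta[i]) / (s * np.sqrt(xtx_inv_ii)))

# Kolmogorov-Smirnov test against t_{n-p}; a large p-value means the
# simulated statistics are consistent with that distribution.
print(stats.kstest(t_stats, stats.t(df=n - p).cdf))
```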

Let me know if it makes sense.

francium87d
What a great answer! Could you please explain why

$$\frac{\hat\beta_i - \beta_i}{\sigma\sqrt{(X^TX)^{-1}_{ii}}} \sim N(0,1)\,?$$
KingDingeling

The answer is actually very simple: you use the t-distribution because it was pretty much designed specifically for this purpose.

Ok, the nuance here is that it wasn't designed specifically for linear regression. Gosset came up with the distribution of a statistic computed from a sample drawn from a population. For instance, you draw a sample $x_1, x_2, \ldots, x_n$ and calculate its mean $\bar{x} = \sum_{i=1}^n x_i / n$. What is the distribution of the sample mean $\bar{x}$?

If you knew the true (population) standard deviation $\sigma$, then you'd say that the variable $\xi = (\bar{x} - \mu)\sqrt{n}/\sigma$ follows the standard normal distribution $N(0,1)$. The trouble is that you usually do not know $\sigma$ and can only estimate it by $\hat\sigma$. So Gosset figured out the distribution you get when you substitute $\hat\sigma$ for $\sigma$ in the denominator, and the distribution is now named after his pseudonym, "Student t".
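A quick simulation (mine, not Aksakal's) makes the contrast concrete: with $\sigma$ known the statistic is standard normal, while substituting $\hat\sigma$ yields Student's $t_{n-1}$:

```python
# Replacing the known sigma with its sample estimate turns the N(0,1)
# statistic into a t_{n-1} statistic, exactly as Gosset showed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu, sigma, n = 10.0, 2.0, 8

z_vals, t_vals = [], []
for _ in range(20_000):
    x = rng.normal(mu, sigma, size=n)
    xbar = x.mean()
    sigma_hat = x.std(ddof=1)                            # sample sd
    z_vals.append((xbar - mu) * np.sqrt(n) / sigma)      # known sigma
    t_vals.append((xbar - mu) * np.sqrt(n) / sigma_hat)  # estimated sigma

print(stats.kstest(z_vals, stats.norm.cdf))         # consistent with N(0,1)
print(stats.kstest(t_vals, stats.t(df=n - 1).cdf))  # consistent with t_{n-1}
```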

The technicalities of linear regression lead to a situation where we can estimate the standard error $\hat\sigma_{\hat\beta}$ of the coefficient estimate $\hat\beta$, but we do not know the true $\sigma$, so the Student t distribution applies here too.

Aksakal