The SPSS t-test procedure reports two analyses when two independent means are compared: one with equal variances assumed and one with equal variances not assumed. The degrees of freedom (df) when equal variances are assumed are always integer values (and equal to n-2). The df when equal variances are not assumed are non-integer (e.g., 11.467) and differ from n-2. I am looking for an explanation of the logic and method used to compute these non-integer df.
Answers:
The Welch-Satterthwaite d.f. can be expressed as a scaled weighted harmonic mean of the two degrees of freedom, with weights determined by the corresponding squared standard errors of the mean.
The original expression is:

$$\nu_W \;=\; \frac{\left(\dfrac{s_1^2}{n_1}+\dfrac{s_2^2}{n_2}\right)^{2}}{\dfrac{\left(s_1^2/n_1\right)^{2}}{\nu_1}+\dfrac{\left(s_2^2/n_2\right)^{2}}{\nu_2}}\,,\qquad \nu_i=n_i-1.$$
Note that $r_i = s_i^2/n_i$ is the estimated variance of the $i$-th sample mean, or the square of the $i$-th standard error of the mean. Let $r = r_1/r_2$ (the ratio of the estimated variances of the sample means), so that

$$\nu_W \;=\; \frac{(r_1+r_2)^2}{r_1^2/\nu_1 + r_2^2/\nu_2} \;=\; \frac{(r_1+r_2)^2}{r_1^2+r_2^2}\cdot\frac{r_1^2+r_2^2}{r_1^2/\nu_1 + r_2^2/\nu_2}\,.$$
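As a sketch of the computation (the helper name `welch_df` is mine, not from SPSS or the answer), the function below plugs the two sample variances and sizes straight into the Welch-Satterthwaite expression:

```python
def welch_df(s1_sq, n1, s2_sq, n2):
    """Welch-Satterthwaite d.f. from sample variances s_i^2 and sizes n_i."""
    r1 = s1_sq / n1   # estimated variance of the first sample mean
    r2 = s2_sq / n2   # estimated variance of the second sample mean
    return (r1 + r2) ** 2 / (r1 ** 2 / (n1 - 1) + r2 ** 2 / (n2 - 1))

# Equal variances with equal sample sizes recover the pooled d.f. n1 + n2 - 2:
print(welch_df(4.0, 10, 4.0, 10))    # -> 18.0
# Unequal variances generally give a non-integer value:
print(welch_df(4.0, 10, 16.0, 5))    # -> roughly 5.03, well below n1 + n2 - 2 = 13
```

This is the same quantity that R's `t.test` (with its default `var.equal = FALSE`) and SciPy's `scipy.stats.ttest_ind(equal_var=False)` compute internally.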
The first factor is $1+\operatorname{sech}(\log(r))$, which increases from $1$ at $r=0$ to $2$ at $r=1$ and then decreases back to $1$ as $r\to\infty$; it is symmetric in $\log r$.
The second factor is a weighted harmonic mean:

$$\frac{w_1+w_2}{\,w_1/\nu_1+w_2/\nu_2\,}$$

of the d.f., where $w_i=r_i^2$ are the relative weights of the two d.f.
Which is to say: when $r_1/r_2$ is very large, $\nu_W$ converges to $\nu_1$; when $r_1/r_2$ is very close to $0$, it converges to $\nu_2$. When $r_1=r_2$ you get twice the harmonic mean of the two d.f., and when $r_1/\nu_1=r_2/\nu_2$ (e.g., equal variances with equal sample sizes) you get the usual equal-variance $t$-test d.f. $\nu_1+\nu_2$, which is also the maximum possible value for $\nu_W$.
--
With an equal-variance t-test, if the assumptions hold, the square of the denominator is a constant times a chi-square random variate.
The square of the denominator of the Welch t-test isn't (a constant times) a chi-square; however, it's often not too bad an approximation. A relevant discussion can be found here.
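The moment-matching idea behind the approximation can be illustrated by simulation (a sketch using only the standard library; all names are mine): match the mean and variance of the squared denominator $Q = S_1^2/n_1 + S_2^2/n_2$ to those of a scaled $\chi^2_\nu$, which gives $\nu = 2\,\mathrm{E}[Q]^2/\mathrm{Var}(Q)$, i.e. the Satterthwaite formula with the true variances plugged in:

```python
import random
import statistics

random.seed(1)
n1, n2 = 8, 15
var1, var2 = 4.0, 1.0

# Simulate Q = S1^2/n1 + S2^2/n2, the squared denominator of the Welch statistic:
qs = []
for _ in range(20000):
    x = [random.gauss(0, var1 ** 0.5) for _ in range(n1)]
    y = [random.gauss(0, var2 ** 0.5) for _ in range(n2)]
    qs.append(statistics.variance(x) / n1 + statistics.variance(y) / n2)

# A chi^2_nu scaled to have mean E[Q] has variance 2 E[Q]^2 / nu,
# so matching both moments gives nu = 2 E[Q]^2 / Var(Q):
m = statistics.fmean(qs)
v = statistics.variance(qs)
nu_sim = 2 * m * m / v

# Satterthwaite's formula evaluated at the true variances targets the same nu:
r1, r2 = var1 / n1, var2 / n2
nu_formula = (r1 + r2) ** 2 / (r1 ** 2 / (n1 - 1) + r2 ** 2 / (n2 - 1))
print(nu_sim, nu_formula)   # the two should roughly agree
```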
A more textbook-style derivation can be found here.
What you are referring to is the Welch-Satterthwaite correction to the degrees of freedom. The $t$-test with the WS correction applied is often called Welch's $t$-test. (Incidentally, this has nothing to do with SPSS; all statistical software can conduct Welch's $t$-test, they just don't usually report both versions side by side by default, so you wouldn't necessarily be prompted to think about the issue.)

The equation for the correction is very ugly, but can be seen on the Wikipedia page; unless you are very math savvy or a glutton for punishment, I don't recommend trying to work through it to understand the idea. From a loose conceptual standpoint, however, the idea is relatively straightforward: the regular $t$-test assumes the variances are equal in the two groups. If they're not, then the test should not benefit from that assumption. Since the power of the $t$-test can be seen as a function of the residual degrees of freedom, one way to adjust for this is to 'shrink' the df somewhat.

The appropriate df must lie somewhere between the full df and the df of the smaller group. (As @Glen_b notes below, it depends on the relative sizes of $s_1^2/n_1$ vs. $s_2^2/n_2$; if the larger $n$ is associated with a sufficiently smaller variance, the combined df can be lower than the larger of the two df.) The WS correction finds the right proportion of the way from the former to the latter to adjust the df. The test statistic is then assessed against a $t$-distribution with that df.
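A small numeric illustration of this shrinking (the `welch_df` helper is my own sketch, not part of SPSS or the WS reference):

```python
def welch_df(s1_sq, n1, s2_sq, n2):
    """Welch-Satterthwaite d.f. from sample variances and sizes."""
    r1, r2 = s1_sq / n1, s2_sq / n2
    return (r1 + r2) ** 2 / (r1 ** 2 / (n1 - 1) + r2 ** 2 / (n2 - 1))

n1, n2 = 6, 30
print(n1 + n2 - 2)                    # full (pooled) d.f.: 34
# The small, high-variance group dominates the variance of the difference,
# so the d.f. shrink almost all the way down to n1 - 1 = 5 -- below even the
# larger group's d.f. (29), as noted above:
print(welch_df(100.0, n1, 1.0, n2))   # -> roughly 5.02
```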