What is the expected value of $\log(X)$ when $X$ has a $\Gamma(\alpha, \beta)$ distribution?
The parameterization I am using is shape-rate.
expected-value
gamma-distribution
Stefano Vespucci
Answers:
This can be done (perhaps surprisingly) with simple elementary operations (using Richard Feynman's favorite trick of differentiating under the integral sign with respect to a parameter).
We are supposing $X$ has a $\Gamma(\alpha,\beta)$ distribution and we wish to find the expectation of $Y = \log(X)$. First, because $\beta$ is a scale parameter, its effect will be to shift the logarithm by $\log\beta$. (If you use $\beta$ as a rate parameter, as in the question, it will shift the logarithm by $-\log\beta$.) This permits us to work with the case $\beta = 1$.
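The scale-shift claim is easy to check numerically. Here is a minimal sketch (my addition, not part of the original answer) that integrates $\log x$ against the Gamma density with scale $\beta$ and with scale $1$ by a midpoint rule, using only the Python standard library; the difference of the two expectations should be $\log\beta$.

```python
import math

def e_log_gamma(alpha, scale, n=200000, hi=80.0):
    # Numerically integrate log(x) * f(x) over (0, hi) with the midpoint rule,
    # where f is the Gamma(alpha) density with the given *scale* parameter.
    logc = alpha * math.log(scale) + math.lgamma(alpha)  # log normalizing constant
    h = hi / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        logf = (alpha - 1) * math.log(x) - x / scale - logc
        total += math.log(x) * math.exp(logf) * h
    return total

alpha, beta = 2.7, 3.0  # arbitrary illustrative values
shift = e_log_gamma(alpha, beta) - e_log_gamma(alpha, 1.0)
print(shift, math.log(beta))  # the two values agree to several decimals
```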
After this simplification, the probability element of $X$ is
$$f_X(x)\,dx = \frac{1}{\Gamma(\alpha)}\, x^{\alpha} e^{-x}\, \frac{dx}{x}$$
where $\Gamma(\alpha)$ is the normalizing constant
$$\Gamma(\alpha) = \int_0^{\infty} x^{\alpha} e^{-x}\, \frac{dx}{x}.$$
Substituting $x = e^y$, which entails $dx/x = dy$, gives the probability element of $Y$,
$$f_Y(y)\,dy = \frac{1}{\Gamma(\alpha)}\, e^{\alpha y - e^y}\, dy.$$
The possible values of $Y$ now range over all the real numbers $\mathbb{R}$.
Because $f_Y$ must integrate to unity, we obtain (trivially)
$$\Gamma(\alpha) = \int_{\mathbb{R}} e^{\alpha y - e^y}\, dy. \tag{1}$$
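Identity $(1)$ can also be verified numerically. The following sketch (my addition, assuming plain Python with the standard library) approximates the right-hand side with a midpoint rule over a truncated range and compares it to `math.gamma`.

```python
import math

def gamma_via_log_scale(alpha, lo=-30.0, hi=10.0, n=200000):
    # Midpoint-rule approximation of the integral in (1):
    # Gamma(alpha) = integral over R of exp(alpha*y - e^y) dy.
    # The integrand is negligible outside [lo, hi] for moderate alpha.
    h = (hi - lo) / n
    return sum(
        math.exp(alpha * (lo + (i + 0.5) * h) - math.exp(lo + (i + 0.5) * h)) * h
        for i in range(n)
    )

for a in (0.5, 1.0, 3.2):
    print(gamma_via_log_scale(a), math.gamma(a))  # the two columns agree closely
```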
Notice $f_Y(y)$ is a differentiable function of $\alpha$. An easy calculation gives
$$\frac{d}{d\alpha}\, e^{\alpha y - e^y}\, dy = y\, e^{\alpha y - e^y}\, dy = \Gamma(\alpha)\, y\, f_Y(y)\, dy.$$
The next step exploits the relation obtained by dividing both sides of this identity by $\Gamma(\alpha)$, thereby exposing the very object we need to integrate to find the expectation; namely, $y\, f_Y(y)$:
$$E(Y) = \int_{\mathbb{R}} y\, f_Y(y)\, dy = \frac{1}{\Gamma(\alpha)} \int_{\mathbb{R}} \frac{d}{d\alpha}\, e^{\alpha y - e^y}\, dy = \frac{1}{\Gamma(\alpha)} \frac{d}{d\alpha} \int_{\mathbb{R}} e^{\alpha y - e^y}\, dy = \frac{1}{\Gamma(\alpha)} \frac{d}{d\alpha} \Gamma(\alpha) = \frac{d}{d\alpha} \log \Gamma(\alpha) = \psi(\alpha),$$
the logarithmic derivative of the gamma function: the digamma function $\psi$ (the polygamma function of order zero). The integral was computed using identity $(1)$.
Re-introducing the factor $\beta$ shows the general result is
$$E(\log(X)) = \log\beta + \psi(\alpha)$$
for a scale parameterization (where the density function depends on $x/\beta$) or
$$E(\log(X)) = -\log\beta + \psi(\alpha)$$
for a rate parameterization (where the density function depends on $x\beta$).
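As a numerical sanity check of $E(Y) = \psi(\alpha)$ in the $\beta = 1$ case (my addition, not part of the answer): integrate $y\, f_Y(y)$ directly, and compare against a finite-difference approximation of the digamma function built from `math.lgamma`, using only the Python standard library.

```python
import math

def digamma(a, h=1e-5):
    # psi(a) = d/da log Gamma(a), via a central finite difference of lgamma.
    return (math.lgamma(a + h) - math.lgamma(a - h)) / (2 * h)

def e_log_x(alpha, lo=-30.0, hi=10.0, n=200000):
    # E(Y) = integral of y * f_Y(y) dy, with f_Y(y) = exp(alpha*y - e^y) / Gamma(alpha),
    # approximated by the midpoint rule on [lo, hi].
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        y = lo + (i + 0.5) * h
        total += y * math.exp(alpha * y - math.exp(y)) * h
    return total / math.gamma(alpha)

alpha = 4.5  # arbitrary illustrative value
print(e_log_x(alpha), digamma(alpha))  # both approximate psi(4.5)
```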
The answer by @whuber is quite nice; I will essentially restate his answer in a more general form which connects (in my opinion) better with statistical theory, and which makes clear the power of the overall technique.
Consider a family of distributions $\{F_\theta : \theta \in \Theta\}$ which constitutes an exponential family, meaning its members admit a density
$$f_\theta(x) = \exp\{s(x)\,\theta - A(\theta) + h(x)\}$$
with respect to some common dominating measure (usually, Lebesgue or counting measure). Differentiating both sides of
$$\int f_\theta(x)\, dx = 1$$
with respect to $\theta$, we arrive at the score equation
$$\int f'_\theta(x)\, dx = \int \frac{f'_\theta(x)}{f_\theta(x)}\, f_\theta(x)\, dx = \int u_\theta(x)\, f_\theta(x)\, dx = 0 \tag{$\dagger$}$$
where $u_\theta(x) = \frac{d}{d\theta} \log f_\theta(x)$ is the score function and we have defined $f'_\theta(x) = \frac{d}{d\theta} f_\theta(x)$. In the case of an exponential family, we have
$$u_\theta(x) = s(x) - A'(\theta)$$
where $A'(\theta) = \frac{d}{d\theta} A(\theta)$; this is sometimes called the cumulant function, as it is evidently very closely related to the cumulant-generating function. It follows now from $(\dagger)$ that $E_\theta[s(X)] = A'(\theta)$.
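The identity $E_\theta[s(X)] = A'(\theta)$ is easy to see in action on another family. Below is a short illustrative sketch (my addition, not from the answer) using the Poisson family written in natural form with respect to counting measure, where $s(x) = x$ and $A(\theta) = e^\theta$, so the identity predicts $E[X] = A'(\theta) = e^\theta = \lambda$.

```python
import math

# Poisson family in natural form: f_theta(x) = exp(x*theta - e^theta - log(x!))
# for x = 0, 1, 2, ..., so s(x) = x and A(theta) = e^theta.
theta = 0.7  # natural parameter; lambda = e^theta
mean = sum(
    x * math.exp(x * theta - math.exp(theta) - math.lgamma(x + 1))
    for x in range(200)  # the tail beyond 200 is negligible
)
print(mean, math.exp(theta))  # both approximate lambda = e**theta
```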
We now show how this helps us compute the required expectation. We can write the gamma density with fixed $\beta$ as an exponential family:
$$f_\alpha(x) = \frac{\beta^\alpha}{\Gamma(\alpha)}\, x^{\alpha - 1} e^{-\beta x} = \exp\{\log(x)\,\alpha + \alpha \log\beta - \log\Gamma(\alpha) - \beta x - \log x\}.$$
This is an exponential family in $\alpha$ alone, with $s(x) = \log x$ and $A(\alpha) = \log\Gamma(\alpha) - \alpha \log\beta$. It now follows immediately, by computing $\frac{d}{d\alpha} A(\alpha)$, that
$$E[\log X] = \psi(\alpha) - \log\beta.$$
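A final Monte Carlo check of this rate-parameterization result (my addition, using only the Python standard library). Note that `random.gammavariate` takes a shape and a *scale* parameter, so the scale passed in is $1/\beta$; the digamma value is approximated by a finite difference of `math.lgamma`.

```python
import math
import random

random.seed(0)  # make the simulation reproducible
alpha, rate = 3.0, 2.0  # arbitrary illustrative shape and rate

# Monte Carlo estimate of E[log X] for X ~ Gamma(shape=alpha, rate=rate).
n = 200000
mc = sum(math.log(random.gammavariate(alpha, 1.0 / rate)) for _ in range(n)) / n

# psi(alpha) via a central finite difference of log Gamma.
h = 1e-5
psi = (math.lgamma(alpha + h) - math.lgamma(alpha - h)) / (2 * h)

print(mc, psi - math.log(rate))  # the estimate matches psi(alpha) - log(beta)
```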