I'm confused about how to calculate the perplexity of a holdout sample when doing Latent Dirichlet Allocation (LDA). The papers on the topic breeze over it, making me think I'm missing something obvious...
Perplexity is regarded as a good measure of performance for LDA. The idea is that you keep aside a holdout sample, train your LDA on the rest of the data, and then calculate the perplexity of the holdout.
The perplexity is given by the formula:

per(D_test) = exp{ −( Σ_{d=1}^{D} log p(w_d) ) / ( Σ_{d=1}^{D} N_d ) }

(taken from Image retrieval on large-scale image databases, Horster et al.)

Here D is the number of documents (in the test sample, presumably), w_d represents the words in document d, and N_d is the number of words in document d.
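For concreteness, here is a minimal numerical sketch (Python/NumPy, with made-up numbers) of just this formula, assuming the per-document log-likelihoods log p(w_d) are already available from somewhere:

```python
import numpy as np

def perplexity(doc_log_liks, doc_lengths):
    # per(D_test) = exp( - sum_d log p(w_d) / sum_d N_d )
    return np.exp(-np.sum(doc_log_liks) / np.sum(doc_lengths))

# Hypothetical values for three held-out documents
log_p_w = np.array([-450.2, -612.7, -388.9])  # log p(w_d) for each document
N_d = np.array([100, 140, 85])                # N_d: tokens per document
print(perplexity(log_p_w, N_d))               # ~= 87 for these numbers
```

The hard part, of course, is obtaining log p(w_d), which is exactly the question below.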
It is not clear to me how to sensibly calculate p(w_d), since we don't have topic mixtures for the held-out documents. Ideally, we would integrate over the Dirichlet prior for all possible topic mixtures and use the topic multinomials we learned. Calculating this integral doesn't seem to be an easy task, however.
Alternatively, we could attempt to learn an optimal topic mixture for each held-out document (given our learned topics) and use this to calculate the perplexity (a rough sketch of this is given below). This would be doable, but it's not as trivial as papers such as Horster et al. and Blei et al. seem to suggest, and it's not immediately clear to me that the result will be equivalent to the ideal case above.
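To make that second route concrete, here is a rough sketch of the "fold-in" idea (my own illustration, not the exact procedure of those papers; the function name fold_in_theta and all numbers are made up): keep the learned topic-word matrix β fixed and fit a point estimate of the topic mixture θ_d for a held-out document by EM, then score the document with it. Note that this maximises over θ_d rather than integrating it out, so it is not the ideal quantity described above.

```python
import numpy as np

def fold_in_theta(word_ids, counts, beta, n_iter=100):
    """Point estimate of the topic mixture theta for one held-out document,
    keeping the learned topic-word matrix beta (K x V) fixed.
    Returns (theta, log p(w_d | theta, beta))."""
    K = beta.shape[0]
    theta = np.full(K, 1.0 / K)
    phi = beta[:, word_ids]                 # K x W: p(word | topic) for this doc's word types
    for _ in range(n_iter):
        resp = theta[:, None] * phi         # E-step: topic responsibilities per word type
        resp /= resp.sum(axis=0, keepdims=True)
        theta = resp @ counts               # M-step: expected topic counts ...
        theta /= theta.sum()                # ... normalised to a distribution
    log_lik = counts @ np.log(theta @ phi)  # sum_w n_w * log sum_k theta_k * beta_{k,w}
    return theta, log_lik

# Tiny made-up example: 3 learned topics over a 5-word vocabulary
rng = np.random.default_rng(0)
beta = rng.dirichlet(np.ones(5), size=3)    # stand-in for the trained topics
word_ids = np.array([0, 2, 4])              # word types in the held-out document
counts = np.array([3.0, 1.0, 2.0])          # their counts (N_d = 6)
theta_d, log_p_wd = fold_in_theta(word_ids, counts, beta)
```

A version closer to the ideal case would keep the Dirichlet(α) prior on θ_d, e.g. by rerunning the document-level variational updates (or a sampler) with β fixed, rather than taking a plain maximum-likelihood point estimate.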
We know that the parameters of LDA are estimated through variational inference. So

log p(w | α, β) = E_q[log p(θ, z, w | α, β)] − E_q[log q(θ, z)] + D(q(θ, z) || p(θ, z | w, α, β)).

If your variational distribution is close enough to the true posterior, then D(q(θ, z) || p(θ, z | w, α, β)) = 0. So log p(w | α, β) = E[log p(θ, z, w | α, β)] − E[log q(θ, z)], which is the log-likelihood.
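In other words (my reading of this answer): run the per-document variational inference on the held-out documents with the trained α and β held fixed, and plug the resulting lower bound in for log p(w_d) in the perplexity formula. A minimal sketch with hypothetical numbers, reusing the perplexity function from the first snippet above:

```python
import numpy as np

# Hypothetical per-document ELBO values for three held-out documents.
# The ELBO is a lower bound on log p(w_d), so the resulting perplexity
# is an upper bound on the true value (slightly pessimistic).
elbo = np.array([-470.1, -640.3, -401.5])
N_d = np.array([100, 140, 85])
print(np.exp(-np.sum(elbo) / np.sum(N_d)))  # bound-based perplexity, ~= 105 here
```

(If I remember correctly, gensim's LdaModel.log_perplexity reports a per-word bound of this kind, but check the documentation for the log base before exponentiating.)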