Recursive (online) regularized least squares algorithm

12

Can someone point me to a (recursive) online algorithm for Tikhonov regularization (regularized least squares)?

In an offline setting, I would compute $\hat\beta = (X^TX + \lambda I)^{-1}X^TY$ on my original data set, where $\lambda$ is found using n-fold cross-validation. A new value $y$ can then be predicted for a given $x$ via $y = x^T\hat\beta$.

In an online setting, I keep drawing new data points. How can I update $\hat\beta$ as new additional data samples arrive, without doing a full recomputation on the whole data set (original + new)?
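For reference, a minimal NumPy sketch of the offline baseline described above (function names are mine; `lam` stands for the cross-validated $\lambda$):

```python
import numpy as np

def ridge_offline(X, Y, lam):
    """Batch Tikhonov/ridge solution: beta_hat = (X^T X + lam*I)^{-1} X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def predict(x, beta_hat):
    """Predict y = x^T beta_hat for a single feature vector x."""
    return x @ beta_hat
```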

rnoodle
1
Your Tikhonov-regularized least squares is perhaps more commonly referred to as Levenberg–Marquardt in statistical circles, even when applied to purely linear problems (as here). There is a paper on online Levenberg–Marquardt here. I don't know whether that helps.
Glen_b

Answers:

11

$$\hat\beta_n = (XX^T + \lambda I)^{-1}\sum_{i=0}^{n-1} x_i y_i$$

Let $M_n = XX^T + \lambda I$, where $X$ holds the first $n$ samples $x_0,\dots,x_{n-1}$ as columns; then

$$\hat\beta_{n+1} = M_{n+1}^{-1}\left(\sum_{i=0}^{n-1} x_i y_i + x_n y_n\right)$$ and, since

$$M_{n+1} - M_n = x_n x_n^T,$$ we can get

$$\hat\beta_{n+1} = \hat\beta_n + M_{n+1}^{-1} x_n\left(y_n - x_n^T\hat\beta_n\right)$$

By the Woodbury formula, we have

$$M_{n+1}^{-1} = M_n^{-1} - \frac{M_n^{-1} x_n x_n^T M_n^{-1}}{1 + x_n^T M_n^{-1} x_n}$$

As a result,

$$\hat\beta_{n+1} = \hat\beta_n + \frac{M_n^{-1}}{1 + x_n^T M_n^{-1} x_n}\, x_n\left(y_n - x_n^T\hat\beta_n\right)$$

Polyak averaging indicates that you can use $\eta_n = n^{-\alpha}$ to approximate $\frac{M_n^{-1}}{1 + x_n^T M_n^{-1} x_n}$, with $\alpha$ ranging from 0.5 to 1. You may try selecting the best $\alpha$ for your recursion.


I think it also works if you apply a batch gradient algorithm:

$$\hat\beta_{n+1} = \hat\beta_n + \frac{\eta_n}{n}\sum_{i=0}^{n-1} x_i\left(y_i - x_i^T\hat\beta_n\right)$$
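A minimal NumPy sketch of the exact recursion above (my own implementation, assuming each $x_n$ arrives as a 1-D feature vector and starting from $M_0^{-1} = \lambda^{-1}I$, $\hat\beta_0 = 0$):

```python
import numpy as np

class RecursiveRidge:
    """Recursive regularized least squares via the Sherman-Morrison/Woodbury update."""

    def __init__(self, dim, lam):
        self.M_inv = np.eye(dim) / lam   # M_0^{-1} = (lam * I)^{-1}
        self.beta = np.zeros(dim)        # beta_hat_0 = 0

    def update(self, x, y):
        """Incorporate one new sample (x, y) without refitting on all past data."""
        Mx = self.M_inv @ x                  # M_n^{-1} x_n
        denom = 1.0 + x @ Mx                 # 1 + x_n^T M_n^{-1} x_n
        self.beta = self.beta + (Mx / denom) * (y - x @ self.beta)
        self.M_inv = self.M_inv - np.outer(Mx, Mx) / denom
        return self.beta
```

Feeding every sample through `update` reproduces the batch solution $(XX^T + \lambda I)^{-1}\sum_i x_i y_i$ exactly (up to floating-point error), at $O(d^2)$ cost per new point instead of a full refit.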

lennon310
What happens if I update my regressor each time with a batch of new data, where each successive batch comes from a slightly different distribution, i.e. not IID? In that case I want the regressor to take the new data into account but not affect its predictions in the region of the old data (previous batches). Can you point me to any literature you might consider useful?
rnoodle
Good question, but unfortunately I currently cannot tell how much it would affect your model if you keep using the batch gradient formula in the answer, or approximate by applying the matrix form directly: eta^(-alpha) * X(Y - X'beta_n), where X, Y are your new batch samples.
lennon310
Hi, it seems that the regularization coefficient is not involved in the recursive update formula? Or does it only matter in the initialization of the inverse of the M matrix?
Peng Zhao
4

A point that no one has addressed so far is that it generally doesn't make sense to keep the regularization parameter $\lambda$ constant as data points are added. The reason for this is that $\|X\beta - y\|^2$ will typically grow linearly with the number of data points, while the regularization term $\lambda\|\beta\|^2$ won't.
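Concretely, assuming the $n$ observations are comparable in scale, the fit term is a sum over them while the penalty is not:

$$\|X\beta - y\|^2 = \sum_{i=1}^{n}\left(x_i^T\beta - y_i\right)^2 = O(n), \qquad \lambda\|\beta\|^2 = O(1),$$

so for a fixed $\lambda$ the relative weight of the penalty shrinks like $1/n$ as observations accumulate.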

Brian Borchers
That's an interesting point. But exactly why does it "not make sense"? Keeping λ constant surely is mathematically valid, so "not make sense" has to be understood in some kind of statistical context. But what context? What goes wrong? Would there be some kind of easy fix, such as replacing the sums of squares with mean squares?
whuber
Replacing the sum of squares with a scaled version (e.g. the mean squared error) would make sense, but simply using recursive least squares won't accomplish that.
Brian Borchers
As for what would go wrong, depending on your choice of λ, you'd get a very underregularized solution with a large number of data points or a very overregularized solution with a small number of data points.
Brian Borchers
One would suspect that, but if λ is tuned initially after receiving n data points and then more data points are added, whether the resulting solutions with more data points and the same λ are over- or under-regularized would depend on those new datapoints. This can be analyzed by assuming the datapoints act like an iid sample from a multivariate distribution, in which case it appears λ should be set to N/n at stage N. This would change the updating formulas, but in such a regular and simple way that efficient computation might still be possible. (+1)
whuber
3

Perhaps something like stochastic gradient descent could work here. Compute $\hat\beta$ using your equation above on the initial data set; that will be your starting estimate. For each new data point you can then perform one step of gradient descent to update your parameter estimate.
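A rough Python sketch of this suggestion (the step size `eta` and the per-sample split of the penalty are my own choices, not prescribed by the answer):

```python
import numpy as np

def sgd_ridge_step(beta, x, y, lam, n, eta=0.01):
    """One stochastic gradient step on the ridge objective for a single sample (x, y).

    The penalty lam*||beta||^2 is spread over the (approximate) sample count n,
    so each per-sample loss is (x^T beta - y)^2 + (lam / n) * ||beta||^2.
    """
    grad = 2.0 * x * (x @ beta - y) + 2.0 * (lam / n) * beta
    return beta - eta * grad
```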

Max S.
I have since realised that SGD (perhaps mini-batch) is the way to go for online problems like this, i.e. updating function approximations.
rnoodle
1

In linear regression, one possibility is updating the QR decomposition of X directly, as explained here. I guess that, unless you want to re-estimate λ after each new datapoint has been added, something very similar can be done with ridge regression.
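One way to see why this carries over (a sketch under my own framing, not the linked method itself): ridge regression is an ordinary least-squares problem on an augmented design matrix, so whatever machinery updates the QR factorization row by row (e.g. `scipy.linalg.qr_insert`) applies to the augmented system as well.

```python
import numpy as np

def ridge_as_augmented_lstsq(X, y, lam):
    """Solve ridge by ordinary least squares on the augmented system
    [X; sqrt(lam)*I] beta ~= [y; 0], which has the same minimizer as
    ||X beta - y||^2 + lam*||beta||^2."""
    d = X.shape[1]
    X_aug = np.vstack([X, np.sqrt(lam) * np.eye(d)])
    y_aug = np.concatenate([y, np.zeros(d)])
    beta, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
    return beta
```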

Matteo Fasiolo
0

Here is an alternative (and less complex) approach compared to using the Woodbury formula. Note that $X^TX$ and $X^Ty$ can be written as sums. Since we are calculating things online and don't want the sums to blow up, we can alternatively use means ($X^TX/n$ and $X^Ty/n$).

If you write $X$ and $y$ as

$$X = \begin{pmatrix} x_1^T \\ \vdots \\ x_n^T \end{pmatrix}, \qquad y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix},$$

we can write the online updates of $X^TX/n$ and $X^Ty/n$ (calculated up to the $t$-th row) as

$$A_t = \left(1 - \frac{1}{t}\right)A_{t-1} + \frac{1}{t}\, x_t x_t^T,$$

$$b_t = \left(1 - \frac{1}{t}\right)b_{t-1} + \frac{1}{t}\, x_t y_t.$$

Your online estimate of β then becomes

$$\hat\beta_t = (A_t + \lambda I)^{-1} b_t.$$

Note that this also helps with the interpretation of λ remaining constant as you add observations!
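A small NumPy sketch of these running-mean updates (my own illustration, not the package's actual code):

```python
import numpy as np

class OnlineRidgeMeans:
    """Maintains A_t = running mean of x x^T and b_t = running mean of x*y."""

    def __init__(self, dim, lam):
        self.A = np.zeros((dim, dim))
        self.b = np.zeros(dim)
        self.t = 0
        self.lam = lam

    def update(self, x, y):
        """Fold in the t-th observation with weight 1/t."""
        self.t += 1
        w = 1.0 / self.t
        self.A = (1.0 - w) * self.A + w * np.outer(x, x)
        self.b = (1.0 - w) * self.b + w * x * y

    def coef(self):
        """beta_hat_t = (A_t + lam*I)^{-1} b_t."""
        return np.linalg.solve(self.A + self.lam * np.eye(len(self.b)), self.b)
```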

This procedure is how https://github.com/joshday/OnlineStats.jl computes online estimates of linear/ridge regression.

joshday