What does $\log^{O(1)} n$ mean?

I know big-O notation, but this notation makes no sense to me. I also can't find anything about it, because there is no way a search engine would interpret it correctly.

For a bit of context, the sentence I found it in reads "[...] we call a function [efficient] if it uses $O(\log n)$ space and at most $\log^{O(1)} n$ per element."

Oebele
I agree that one shouldn't write such things unless one knows exactly what it means (and tells the reader what that is) and applies the same rules consistently.
Raphael
Yes, one should instead write it as $(\log(n))^{O(1)}$.
Ricky Demer
@RickyDemer That's not the point Raphael is making. $\log^{\text{blah}} n$ means precisely $(\log n)^{\text{blah}}$.
David Richerby
@Raphael This is the standard notation in the field. Anyone in the know would know what it means.
Yuval Filmus
@YuvalFilmus I think the multitude of mutually inconsistent answers is conclusive proof that your claim is false and that one should indeed refrain from using such notation.
Raphael

Answers:


You need to ignore, for a moment, the strong feeling that the "$O$" is in the wrong place and plough on with the definition regardless. $f(n) = \log^{O(1)} n$ means that there exist constants $k$ and $n_0$ such that, for all $n \geq n_0$, $f(n) \leq \log^{k \cdot 1} n = \log^k n$.

Note that $\log^k n$ means $(\log n)^k$. Functions of the form $\log^{O(1)} n$ are often called polylogarithmic and you might hear people say, "$f$ is polylog $n$."

You'll notice that it's easy to prove that $2n = O(n)$, since $2n \leq kn$ for all $n \geq 0$, where $k = 2$. You might be wondering if $2\log n = \log^{O(1)} n$. The answer is yes since, for large enough $n$, $\log n \geq 2$, so $2\log n \leq \log^2 n$ for large enough $n$.
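As a quick illustration of that last claim, here is a minimal numeric spot-check (my sketch, not part of the original answer; it samples a few values and proves nothing). It uses the natural logarithm, for which $\log n \geq 2$ once $n \geq e^2 \approx 7.39$:

```python
import math

# Spot-check (not a proof): once log(n) >= 2, we have
# 2*log(n) <= (log n)^2, witnessing 2*log(n) = log^{O(1)} n
# with exponent k = 2.
for n in [8, 100, 10**6, 10**12]:
    assert 2 * math.log(n) <= math.log(n) ** 2
print("2*log n <= log^2 n held for all sampled n")
```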

On a related note, you'll often see polynomials written as $n^{O(1)}$: same idea.
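For a concrete instance (my unpacking, using the same definition as above): $5n^3 = n^{O(1)}$ because
$$5n^3 \leq n^4 \quad \text{for all } n \geq 5,$$
so $k = 4$ and $n_0 = 5$ serve as witnesses.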

David Richerby
This is not supported by the common placeholder convention.
Raphael
I retract my comment: you write $\leq$ in all the important places, which is sufficient.
Raphael
@Raphael OK. I hadn't had time to check it yet but my feeling was you might be ordering quantifiers differently from the way I am. I'm not actually sure we're defining the same class of functions.
David Richerby
I think you are defining my (2), and Tom defines $\bigcup_{c \in \mathbb{R}_{>0}} \{\log^c n\}$.
Raphael

This is an abuse of notation that can be made sense of by the generally accepted placeholder convention: whenever you find a Landau term $O(f)$, replace it (in your mind, or on the paper) by an arbitrary function $g \in O(f)$.

So if you find

$$f(n) = \log^{O(1)} n$$

you are to read

$$f(n) = \log^{g(n)} n \quad \text{for some } g \in O(1). \tag{1}$$

Note the difference from saying "$\log$ to the power of some constant": $g = n^{1/n}$ is a distinct possibility.
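To check that this $g$ really is in $O(1)$ (a one-line verification, my addition):
$$\lim_{n \to \infty} n^{1/n} = \lim_{n \to \infty} e^{(\ln n)/n} = e^0 = 1,$$
so $n^{1/n}$ is bounded, yet it is not eventually equal to any single constant.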

Warning: The author may be employing even more abuse of notation and want you to read

$$f(n) \in O\!\left(\log^{g(n)} n\right) \quad \text{for some } g \in O(1). \tag{2}$$

Note the difference between (1) and (2); while it works out to define the same set of positive-valued functions here, this does not always work. Do not move $O$ around in expressions without care!

Raphael
I think what makes it tick is that $x \mapsto \log^x(n)$ is monotonic and sufficiently surjective for each fixed $n$. Monotonicity makes the position of the $O$ equivalent and gives you (2) ⇒ (1); going the other way requires $g$ to exist, which could fail if $f(n)$ is outside the range of the function. If you want to point out that moving $O$ around is dangerous and doesn't cover "wild" functions, fine, but in this specific case it's OK for the kind of functions that represent costs.
Gilles 'SO- stop being evil'
@Gilles I weakened the statement to a general warning.
Raphael
This answer has been heavily edited, and now I am confused: do you now claim that (1) and (2) are effectively the same?
Oebele
@Oebele As far as I can tell, they are not in general, but they are here.
Raphael
But something like $3\log^2 n$ does not match (1) but does match (2), right? Or am I just being silly now?
Oebele

It means that the function grows at most as fast as $\log$ to the power of some constant, e.g. $\log^2(n)$ or $\log^5(n)$ or $\log^{99999}(n)$...

Tom van der Zanden
This can be used when the function's growth is known to be bounded by some constant power of the $\log$, but the particular constant is unknown or left unspecified.
Yves Daoust
This is not supported by the common placeholder convention.
Raphael

"At most logO(1)nlogO(1)n" means that there is a constant cc such that what is being measured is O(logcn)O(logcn).

In a more general context, $f(n) \in \log^{O(1)} n$ is equivalent to the statement that there exist (possibly negative) constants $a$ and $b$ such that $f(n) \in O(\log^a n)$ and $f(n) \in \Omega(\log^b n)$.
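As a concrete instance of why negative constants must be allowed (my example, in the spirit of the counterexamples below): $f(n) = 1/\log n$ is in the class, since
$$f(n) = \log^{-1} n \in \log^{O(1)} n,$$
with $a = b = -1$ witnessing both the upper and the lower bound.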

It is easy to overlook the $\Omega(\log^b n)$ lower bound. In a setting where that would matter (which would be very uncommon if you're exclusively interested in studying asymptotic growth), you shouldn't have complete confidence that the author actually meant the lower bound, and would have to rely on the context to make sure.


The literal meaning of the notation $\log^{O(1)} n$ is doing arithmetic on the family of functions $O(1)$, resulting in the family of all functions $\log^{g(n)} n$ where $g(n) \in O(1)$. This works in pretty much the same way that multiplying $O(g(n))$ by $h(n)$ results in $O(g(n)\,h(n))$, except that you get a result that isn't expressed so simply.
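By analogy (my illustration, using the same family-of-functions reading):
$$h(n) \cdot O(g(n)) = \{\, h(n)\, g'(n) : g' \in O(g(n)) \,\} \subseteq O(g(n)\, h(n)),$$
whereas $\log^{O(1)} n = \{\, \log^{g(n)} n : g \in O(1) \,\}$ admits no comparably simple closed form.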


Since the details of the lower bound are probably unfamiliar territory, it's worth looking at some counterexamples. Recall that any $g(n) \in O(1)$ is bounded in magnitude: there is a constant $c$ such that, for all sufficiently large $n$, $|g(n)| < c$.

When looking at asymptotic growth, usually only the upper bound $g(n) < c$ matters since, e.g., you already know the function is positive. However, in full generality you have to pay attention to the lower bound $g(n) > -c$.

This means that, contrary to more typical uses of big-O notation, functions that decrease too rapidly can fail to be in $\log^{O(1)} n$; for example,
$$\frac{1}{n} = \log^{-(\log n)/(\log\log n)} n \notin \log^{O(1)} n$$
because
$$-\frac{\log n}{\log\log n} \notin O(1).$$
The exponent here grows in magnitude too rapidly to be bounded by $O(1)$.
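To see where that exponent comes from (a short derivation, my addition), solve $\log^x n = 1/n$ for $x$, using the convention $\log^x n = (\log n)^x$; taking logarithms of both sides gives
$$x \log\log n = -\log n \quad \Longrightarrow \quad x = -\frac{\log n}{\log\log n},$$
which is unbounded in magnitude and hence not in $O(1)$.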

A counterexample of a somewhat different sort is that $-1 \notin \log^{O(1)} n$.


Can't I just take $b = 0$ and make your claimed lower bound go away?
David Richerby
@DavidRicherby No, $b = 0$ still says that $f$ is bounded below. Hurkyl: why isn't $f(n) = 1/n$ in $\log^{O(1)} n$?
Gilles 'SO- stop being evil'
@Gilles: More content added!
@Gilles OK, sure, it's bounded below by 1. Which is no bound at all for "most" applications of Landau notation in CS.
David Richerby
1) Your "move around $O$" rule does not always work, and I don't think "at most" usually has that meaning; it's just redundant. 2) $O$ never implies a lower bound; that is when you use $\Theta$. 3) Whether and how negative functions are dealt with by a given definition of $O$ (even without abuse of notation) is not universally clear. Most definitions (in analysis of algorithms) exclude them. You seem to assume a definition that bounds the absolute value, which is fine.
Raphael