Comparing strings from the back?

Oscar Benjamin oscar.j.benjamin at gmail.com
Sat Sep 8 07:53:13 EDT 2012


On 2012-09-08, Steven D'Aprano <steve+comp.lang.python at pearwood.info> wrote:
> On Fri, 07 Sep 2012 19:10:16 +0000, Oscar Benjamin wrote:
>
>> On 2012-09-07, Steven D'Aprano <steve+comp.lang.python at pearwood.info>
>> wrote:
>> <snip>
>> 
>> Would you say, then, that dict insertion is O(N)?
>
> Pedantically, yes. 
>
> But since we're allowed to state (or even imply *wink*) whatever 
> assumptions we like, we're allowed to assume "in the absence of 
> significant numbers of hash collisions" and come up with amortized O(1) 
> for dict insertions and lookups.
>
> (Provided, of course, that your computer has an infinite amount of 
> unfragmented memory and the OS never starts paging your dict to disk. 
> Another unstated assumption that gets glossed over when we talk about 
> complexity analysis -- on real world computers, for big enough N, 
> *everything* is O(2**N) or worse.)
>
> Big Oh analysis, despite the formal mathematics used, is not an exact 
> science. Essentially, it is a way of bringing some vague order to hand-
> wavy estimates of complexity, and the apparent mathematical rigour is 
> built on some awfully shaky foundations. But despite that, it actually is 
> useful.
>
> Coming back to strings... given that in any real-world application, you 
> are likely to have some string comparisons on equal strings and some on 
> unequal strings, and more importantly you don't know which are which 
> ahead of time, which attitude is less likely to give you a nasty surprise 
> when you run your code?
>
> "I have many millions of 100K strings to compare against other 100K 
> strings, and string comparisons are O(1) so that will be fast."
>
> "I have many millions of 100K strings to compare against other 100K 
> strings, and string comparisons are O(N) so that will be slow, better 
> find another algorithm."

True. I can't think of a situation where I've used string comparisons
directly in any text-heavy code. Rather, I would use a dict or a set (or a
regex), and hash(str) is always O(N).
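
For membership-type tests the O(N) cost is paid once in hash() rather than
in repeated comparisons. A minimal sketch of the kind of thing I mean (the
names are just illustrative):

seen = set()

def is_new(word):
    # hash(word) walks every character, so the lookup is O(len(word))
    # whether or not the word is present; full equality is only checked
    # against entries that collide in the table.
    if word in seen:
        return False
    seen.add(word)
    return True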

>
>
> Remember too that "for small enough N, everything is O(1)". Getting hung 
> up on Big Oh is just as much a mistake as ignoring it completely.
>
>

I can't think of a situation in my own work where O(N) vs O(1) string
comparisons would cause a significant problem (except perhaps in libraries
that I use but didn't write). However, I can find a number of cases where I
compare numpy.ndarrays for equality. For example, I found

if np.all(a == b):

in some code that I recently wrote. Although np.all() short-circuits, a == b
does not, so that line forces O(N) behaviour onto a situation where the
average case could be better. Unfortunately numpy doesn't seem to provide a
short-circuiting equals() function. array_equal() is what I want, but it does
the same as the above. In future, I'll consider using something like

def cmparray(a, b):
    # Compare the cheap metadata first, then the raw memory (Python 2's buffer()).
    return (a.shape == b.shape and a.dtype == b.dtype
            and buffer(a) == buffer(b))

to take advantage of (what I assume are) short-circuit buffer comparisons.
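
A quick (if crude) way to test that assumption, timing fully equal arrays
against arrays that differ in the very first element (the sizes are
arbitrary):

import numpy as np
import timeit

a = np.zeros(10**7)
b = np.zeros(10**7)   # equal to a everywhere
c = a.copy()
c[0] = 1.0            # differs at the very first element

# If the buffer comparison short-circuits, the (a, c) timing should be much
# smaller than the (a, b) timing; np.all(a == b) does O(N) work either way.
print(timeit.timeit(lambda: cmparray(a, b), number=10))
print(timeit.timeit(lambda: cmparray(a, c), number=10))
print(timeit.timeit(lambda: np.all(a == b), number=10))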

>> Since string comparison is only useful if the strings can be equal or
>> unequal, the average case depends on how often they are equal/unequal as
>> well as the average complexity of both. For random strings the frequency
>> of equal strings decreases very fast as N increases so that the
>> comparison of random strings is O(1).
>
> But that is not an upper bound, and Big Oh analysis is strictly defined 
> in terms of upper bounds.

It is an upper bound, but it is an upper bound on the *expectation value*
assuming a particular distribution of inputs, rather than an upper bound on
all possible inputs.

>>> (I'm talking about the average here -- the actual number of comparisons
>>> can range all the way up to N, but the average is <= 2.)

The average is actually bounded by 1 / (1 - p), where p is the probability
that two characters match. This bound can be arbitrarily large as p approaches
1, as would be the case if, say, one character were much more likely than the
others. The particular assumption that you have made, p = 1/M where M is the
number of possible characters, is actually the *smallest* possible value of p.
For non-uniform real data (English words, for example) p is significantly
greater than 1/M, but in a strict bounds sense we should say that
1/M <= p <= 1.
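
To make that concrete, here is a quick simulation on a two-character
alphabet (the value of q is made up; compare_count just mimics what a naive
equality loop does):

import random

def compare_count(s, t):
    # Number of character comparisons a naive equality check performs.
    n = 0
    for x, y in zip(s, t):
        n += 1
        if x != y:
            break
    return n

def rand_string(n, q):
    # Two-character alphabet in which 'a' occurs with probability q.
    return ''.join('a' if random.random() < q else 'b' for _ in range(n))

q = 0.9
p = q * q + (1 - q) * (1 - q)   # probability that two characters match
avg = sum(compare_count(rand_string(1000, q), rand_string(1000, q))
          for _ in range(10000)) / 10000.0
print(avg, 1.0 / (1 - p))       # the average stays just below 1/(1 - p)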

Oscar



