How to demonstrate big-O cost of algorithms?

Josh Gilbert jgilbert+comp.lang.python at uvm.edu
Wed Jun 2 19:55:29 EDT 2004


Mitja wrote:

> Rusty Shackleford <rs at overlook.homelinux.net>
> (news:slrncbs3d0.9l3.rs at frank.overlook.homelinux.net) wrote:
>> Thanks for that explanation -- it really helped.  I forgot that O(n)
>> can translate into dn + c where d and c are constants.
> 
> ???
> You'd NEVER write, e.g., O(n) = 3n + 7; you'd simply say the algorithm
> is O(n). As already pointed out, this means that complexity (and, with
> that, execution time) is PROPORTIONAL to n. It doesn't give you a
> number of seconds or anything of the sort.

If, on the other hand, you knew that one program took 3n + 7 steps and
another took n^2 / 10^10, you WOULD say that the first is O(n) and the
second is O(n^2). This implies that the first one is faster for
sufficiently large data sets, even though the second one wins for every n
below roughly 3 * 10^10.
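
A quick sketch (the helper names are mine) makes the crossover concrete:

    # Tabulate the two cost functions quoted above and watch the crossover.
    def linear_cost(n):       # the 3n + 7 program: O(n), largish constants
        return 3 * n + 7

    def quadratic_cost(n):    # the n^2 / 10^10 program: O(n^2), tiny constant
        return n ** 2 / 10 ** 10

    for n in (10 ** 3, 10 ** 6, 10 ** 9, 10 ** 10, 10 ** 11):
        faster = "3n+7" if linear_cost(n) < quadratic_cost(n) else "n^2/10^10"
        print(f"n = {n:>15,}   faster: {faster}")

The quadratic program wins every row until the last one.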

And if you don't care how long your one-off code takes to sort a billion
elements, insertion sort might be the fastest algorithm you have.
Especially if computing each element's sort key is expensive; at that
point you'd use a Schwartzian transform.
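
In case it's unfamiliar, here is a minimal sketch of the Schwartzian
transform (decorate-sort-undecorate) in Python; expensive_key is a
hypothetical stand-in for whatever costly computation produces the key:

    def expensive_key(item):
        # Hypothetical stand-in for a costly derivation of the sort key
        # (e.g. parsing a date out of a log line).
        return sum(ord(c) for c in item)

    def schwartzian_sort(items):
        # Decorate: pair each item with its precomputed key.
        decorated = [(expensive_key(item), item) for item in items]
        # Sort the cheap (key, item) pairs.
        decorated.sort()
        # Undecorate: throw the keys away.
        return [item for key, item in decorated]

    print(schwartzian_sort(["banana", "apple", "cherry"]))

Python's list.sort(key=expensive_key) performs the same
decorate-sort-undecorate internally, computing each key exactly once.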

I think the best way to see this stuff is to look at toy examples: an
exponential algorithm with small constants versus a linear-time algorithm
with a miserable initial cost.
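
Something like the following sketch, with cost functions invented purely
for illustration, shows the "worse" algorithm winning until n reaches
about 20:

    # Two invented cost functions: an exponential algorithm with tiny
    # constants versus a linear one with a miserable fixed setup cost.
    def exponential_cheap_steps(n):
        return 2 ** n              # grows fast, but starts tiny

    def linear_huge_setup(n):
        return 10 ** 6 + 5 * n     # asymptotically better, awful at small n

    for n in range(1, 31, 5):
        exp_cost = exponential_cheap_steps(n)
        lin_cost = linear_huge_setup(n)
        faster = "exponential" if exp_cost < lin_cost else "linear"
        print(f"n = {n:2d}   faster: {faster}")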

        Josh Gilbert.
-- 
http://www.uvm.edu/~jgilbert/public_key.gpg


