Big-O notation (was: Re: Python and Schools)

Chad Netzer cnetzer at mail.arc.nasa.gov
Tue Apr 15 17:23:10 EDT 2003


On Tue, 2003-04-15 at 13:32, Mike C. Fletcher wrote:
> Greg Brondo wrote:

> Here's a crack at explaining it:

Good job.  I have a few corrections and minor additions.


>     * O(1) -- time is linear

You meant "time is constant".  The algorithm requires the same number of
operations regardless of input size.
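
For example, a Python dictionary lookup is (on average) O(1).  Here's a
rough sketch of what that looks like in practice (the sizes, repeat count,
and use of timeit below are just my own illustration):

    import timeit

    # The cost of a single dict lookup stays roughly flat no matter how
    # many keys the dict holds.
    for size in (10**3, 10**5, 10**6):
        d = dict.fromkeys(range(size))
        t = timeit.timeit(lambda: d[size // 2], number=100000)
        print("%8d keys: %.4f sec for 100000 lookups" % (size, t))

(Contrast this with scanning a list for a value, which is O(N): the work
grows with the size of the input.)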

>     * O(N log N) -- see O(N) and O(log N), basically, you do an amount
>       of work tied to the size of the whole set for every stage in an O(
>       log N ) operation (this is comparatively rarely discussed in my
>       experience).

Really?  Some of the most important classes of algorithms are O( N log N
); in particular, many common sorting algorithms.  Many of these
algorithms also have degenerate cases in which they become O( N**2 ),
either through naive implementations or just by the nature of the data
set, and it is important to be wary of this phenomenon.
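
To make that degenerate case concrete, here's a rough sketch (my own
illustration, not something from the earlier post): a naive quicksort that
always picks the first element as its pivot is O( N log N ) on random
data, but already-sorted input drives it to O( N**2 ).

    import random

    def naive_quicksort(seq):
        # Always uses the first element as the pivot.  On random data the
        # partitions are usually balanced (O(N log N) on average); on
        # already-sorted input every partition is maximally lopsided,
        # which is the O(N**2) degenerate case (plus very deep recursion).
        if len(seq) <= 1:
            return list(seq)
        pivot = seq[0]
        rest = seq[1:]
        smaller = [x for x in rest if x < pivot]
        larger = [x for x in rest if x >= pivot]
        return naive_quicksort(smaller) + [pivot] + naive_quicksort(larger)

    data = [random.randrange(10000) for i in range(500)]
    assert naive_quicksort(data) == sorted(data)
    # Timing naive_quicksort(sorted(data)) against naive_quicksort(data)
    # shows the gap widening as N grows (and for large N the sorted case
    # will also hit Python's recursion limit).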

Also, one further class that is important to know is O( 2**N ), such as
the brute-force solution to the "travelling salesman problem".  These are
much, MUCH worse than O( N**2 ) algorithms, and even today can rarely be
computed for problem sets much larger than 100 (whereas O( N**2 ) problem
sets can grow to thousands or perhaps even millions and still be
practically solved).  Rather than computing exact "solutions" with these
types of algorithms, one must instead come up with "close-enough" or
"best guess" type algorithms that CAN be computed practically on large
data sets.
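
Just to show how quickly O( 2**N ) work explodes, here's a rough sketch
(my own illustration; I'm using subset-sum rather than travelling salesman
only because the brute force fits in a few lines, but the growth is the
same story): it tries every one of the 2**N subsets.

    from itertools import combinations

    def subset_sum_brute_force(numbers, target):
        # Examines all 2**N subsets of 'numbers'.  Adding just ONE more
        # number doubles the work, so this is hopeless much past N ~ 25.
        for r in range(len(numbers) + 1):
            for combo in combinations(numbers, r):
                if sum(combo) == target:
                    return combo
        return None

    print(subset_sum_brute_force([3, 34, 4, 12, 5, 2], 9))   # -> (4, 5)

In practice, problems like these get handled with the "close-enough" /
"best guess" approaches mentioned above.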

-- 

Chad Netzer
(any opinion expressed is my own and not NASA's or my employer's)