Big-O notation

Bjorn Pettersen BPettersen at NAREX.com
Wed Apr 16 17:35:53 EDT 2003


> From: sik0fewl [mailto:xxdigitalhellxx at hotmail.com] 
> 
> Peter van Kampen <news at datatailors.xs4all.nl> wrote in message 
> news:<slrnb9q90a.mhg.news at datatailors.com>...
> >I think I almost 'get it'. Except who or what decides what Big-O a
> >certain algorithm has. Is that by 'hunch' or by running benchmarks or
> >are there still other ways to recognize 'Big-O's (except for the
> >formal computations that I understand are not very common)? 
> 
> I'll try to give a simple explanation. Big-O is the upper 
> bound of the time it will take to complete the function.
[...]

google(algorithm complexity) == lecture notes
(http://www.geog.buffalo.edu/classes/geo554/week4.pdf)

[also "time complexity", "space complexity"].

 O() ::= The big-O notation gives an indication of the growth 
         rate of a function. It identifies the dominant term 
         of the function.
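A quick way to see "dominant term" concretely: take a made-up cost function f(n) = 3*n**2 + 5*n + 2 (the coefficients are arbitrary, purely for illustration). For large n the n**2 term swamps the rest, so f is O(n**2):

```python
# Hypothetical cost function: f(n) = 3n^2 + 5n + 2.
# Dominant term is 3n^2, so f(n) is O(n^2).
def f(n):
    return 3 * n**2 + 5 * n + 2

# Dividing by the dominant term's n^2 shows the lower-order
# terms fading out: the ratio approaches the constant 3.
for n in (10, 100, 1000):
    print(n, f(n) / n**2)
```

Big-O deliberately throws away the constant 3 along with the lower-order terms; only the growth rate n**2 survives.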

And, for the practically minded...

Rules of Thumb
- A loop that processes each data item in turn is O(n).
- A loop within a loop:
    Do n
      Do m
  is O(n*m).
- If the data set is halved each time through a loop 
  then there will be O(log2 n) iterations. (log2 n is 
  the number of times you repeatedly halve n to reach 1.)
- An algorithm that processes every subset of n data 
  items has complexity O(2**n).
- An algorithm that processes every permutation of n 
  data items has complexity O(n!).
- If the number of operations is the same whatever 
  the number of data items then the complexity function 
  does not depend on n. We say the operation has complexity 
  O(1).
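The rules of thumb are easy to poke at interactively. A small sketch (the data set and its size are arbitrary) that counts the iterations of a halving loop and enumerates subsets and permutations with the standard itertools module:

```python
import itertools

data = list(range(8))          # arbitrary example data set
n = len(data)

# O(n): a loop that touches each item once.
total = sum(data)

# O(log2 n): the working set is halved each time through the loop.
halvings = 0
size = n
while size > 1:
    size //= 2
    halvings += 1
print(halvings)                # 3 halvings for n == 8, i.e. log2(8)

# O(2**n): every subset of n items -- there are 2**n of them.
subsets = [c for r in range(n + 1)
           for c in itertools.combinations(data, r)]
print(len(subsets))            # 256 == 2**8

# O(n!): every permutation of n items -- there are n! of them.
perms = list(itertools.permutations(data))
print(len(perms))              # 40320 == 8!
```

Note how fast the last two blow up: bumping n from 8 to just 20 takes the subset count past a million and the permutation count past 10**18, which is why O(2**n) and O(n!) algorithms are only practical for tiny inputs.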

-- bjorn




