Efficient processing of large numeric data files

George Sakkis george.sakkis at gmail.com
Fri Jan 18 12:50:40 EST 2008


On Jan 18, 12:15 pm, David Sanders <dpsand... at gmail.com> wrote:

> Hi,
>
> I am processing large files of numerical data.  Each line is either a
> single (positive) integer, or a pair of positive integers, where the
> second represents the number of times that the first number is
> repeated in the data -- this is to avoid generating huge raw files,
> since one particular number is often repeated in the data generation
> step.
>
> My question is how to process such files efficiently to obtain a
> frequency histogram of the data (how many times each number occurs in
> the data, taking into account the repetitions).  My current code is as
> follows:
>
> -------------------
> #!/usr/bin/env python
> # Counts the occurrences of integers in a file and makes a histogram of them
> # Allows for a second field which gives the number of counts of each datum
>
> import sys
> args = sys.argv
> num_args = len(args)
>
> if num_args < 2:
>         print "Usage: count.py filename"
>         sys.exit()
>
> name = args[1]
> file = open(name, "r")
>
> hist = {}   # dictionary for histogram
> num = 0
>
> for line in file:
>         data = line.split()
>         first = int(data[0])
>
>         if len(data) == 1:
>                 count = 1
>         else:
>                 count = int(data[1])    # more than one repetition
>
>         if first in hist:       # add the information to the histogram
>                 hist[first]+=count
>         else:
>                 hist[first]=count
>
>         num+=count
>
> keys = hist.keys()
> keys.sort()
>
> print "# i  fraction   hist[i]"
> for i in keys:
>         print i, float(hist[i])/num, hist[i]
> ---------------------
>
> The data files are large (~100 million lines), and this code takes a
> long time to run (compared to just doing wc -l, for example).
>
> Am I doing something very inefficient?  (Any general comments on my
> pythonic (or otherwise) style are also appreciated!)  Is
> "line.split()" efficient, for example?

Without further information, I don't see anything particularly
inefficient. What may help here is if you have any a priori knowledge
about the data, specifically:

- How often does a single number occur compared to a pair of numbers?
E.g. if a single number is much more common than a pair, you can avoid
split() most of the time:
    try:
        first, count = int(line), 1
    except ValueError:
        first, count = map(int, line.split())

Similarly, if the pair is much more frequent than the single number,
just invert the above so that the common case is in the 'try' block
and the infrequent one in 'except', as in the sketch below. However,
if the two cases have similar frequency, or if you have no a priori
knowledge, try/except will likely be slower.
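
For the pair-heavy case, the inverted version might look like this
(an untested sketch, mirroring the snippet above):

    try:
        first, count = map(int, line.split())
    except ValueError:      # only one field on the line
        first, count = int(line), 1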

- What proportion of the first numbers is unique? If it's small
enough, a faster way to update the dict is:
        try:
            hist[first] += count
        except KeyError:
            hist[first] = count

> Is a dictionary the right way to do this?  In any given file, there is
> an upper bound on the data, so it seems to me that some kind of array
> (numpy?) would be more efficient, but the upper bound changes in each
> file.

Yes, dict is the right data structure; since Python 2.5,
collections.defaultdict is an alternative. numpy is good for
processing numeric data once they are already in arrays, not for
populating them.
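
For example, with defaultdict the whole counting loop could be written
along these lines (a rough sketch, reusing the 'file' object opened in
your script; needs Python 2.5+ for the conditional expression):

    from collections import defaultdict

    hist = defaultdict(int)            # missing keys start at 0
    for line in file:
        data = line.split()
        first = int(data[0])
        count = int(data[1]) if len(data) > 1 else 1
        hist[first] += count           # no 'in' check or try/except needed

The main gain is conciseness; whether it is actually faster than the
try/except version depends on your data.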

George


