Newbie completely confused

Istvan Albert istvan.albert at gmail.com
Mon Sep 24 14:38:52 EDT 2007


Two comments,

> ...
> self.item3 = float(foo[c]); c+=1
> self.item4 = float(foo[c]); c+=1
> self.item5 = float(foo[c]); c+=1
> self.item6 = float(foo[c]); c+=1
> ...

this (and your code in general) is mind-boggling, and not in a
good way.
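
just to illustrate, here is a rough, untested sketch of how that kind
of block could be collapsed (the class and variable names below are
made up, and I'm guessing that 'foo' is a list of strings and that the
counter is sitting at the field for item3):

class Record(object):
    def __init__(self, fields):
        # 'fields' plays the role of 'foo' in your code; convert four
        # consecutive string fields in one go instead of repeating
        # "float(foo[c]); c+=1" for every attribute
        (self.item3, self.item4,
         self.item5, self.item6) = [float(x) for x in fields[:4]]

r = Record('1.5 2.5 3.5 4.5'.split())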

As for your original question: I don't think that reading in files of
the size you mention can cause any substantial problems; I think the
problem is somewhere else.

You can run the code below to see that the read times are unaffected
by the order of processing:

----------

import timeit

# make a big file
NUM= 10**5
fp = open('bigfile.txt', 'wt')
longline = ' ABC '* 60 + '\n'
for count in xrange( NUM ):
    fp.write( longline )
fp.close()

# setup: a helper that reads the file into a list, one line at a time
setup1 = """
def readLines():
    data = []
    for line in file('bigfile.txt'):
        data.append( line )
    return data
"""

# a single pass over the file
stmt1 = """
data = readLines()
"""

# two passes over the file
stmt2 = """
data = readLines()
data = readLines()
"""

# read the whole file at once with readlines()
stmt3 = """
data = file('bigfile.txt').readlines()
"""

def run( setup, stmt, N=5 ):
    # run the statement N times and report the average time per pass
    t = timeit.Timer(stmt=stmt, setup=setup)
    msec = 1000 * t.timeit(number=N)/N
    print "%f msec/pass" % msec

if __name__ == '__main__':
    for stmt in (stmt1, stmt2, stmt3):
        run(setup=setup1, stmt=stmt)
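
If reading the file were the real bottleneck, you would expect the
second statement (two reads) to take roughly twice as long per pass as
the first, and the readlines() version to be in the same ballpark as
the explicit loop.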





