[SciPy-user] Efficient file reading
Lorenzo Isella
lorenzo.isella at gmail.com
Tue Jan 6 17:09:35 EST 2009
Dear All,
I sometimes need to read rather large data files (~500 MB).
These are plain text files (usually tables with 500 x 2e5 entries).
It seems to me (though I have not run any serious test/benchmark) that R
is faster than Python at reading/writing files.
Or rather: maybe I am just being too naive with my I/O operations in Python.
I usually simply do the following:
import pylab as p
my_arr = p.load("my_data.txt")
which gets the job done, but is slow in this case. There is probably a
more efficient way of doing this, and I should also add that I know
beforehand the dimensions of the data table I want to read into a SciPy
array.
Any suggestions?
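For comparison, here is a minimal sketch of two alternatives (the file name, dimensions, and whitespace-delimited numeric format are assumptions standing in for the real data): numpy.loadtxt as a direct replacement for pylab.load, and numpy.fromfile, which can exploit the fact that the shape is known beforehand by doing one flat pass over the file and reshaping afterwards.

```python
import numpy as np
import tempfile, os

# Stand-in dimensions for the known table shape (the real table is
# roughly 500 x 2e5, per the post above).
rows, cols = 4, 3
data = np.arange(rows * cols, dtype=float).reshape(rows, cols)

# Write a small whitespace-delimited table standing in for my_data.txt.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    np.savetxt(f, data)
    path = f.name

# 1) General-purpose parser, equivalent to pylab.load:
arr1 = np.loadtxt(path)

# 2) Since the shape is known beforehand, np.fromfile with a text
#    separator reads the numbers in one flat pass (no per-line
#    parsing), and a reshape restores the table:
arr2 = np.fromfile(path, sep=" ").reshape(rows, cols)

os.remove(path)
```

Whether np.fromfile is actually faster than np.loadtxt on a ~500 MB file would need benchmarking on the real data; this only sketches the two APIs.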
Many thanks
Lorenzo