[Tutor] open/closing files and system limits

Wes Bateman wbateman@epicrealm.com
Wed, 13 Sep 2000 15:42:11 -0500 (CDT)


Hello all:

I've got a small Python script that rapidly opens a file, reads its lines,
and closes it.  This procedure runs inside a for loop:

for file in catalogoffilestoprocess.readlines():
    # Only way I could figure to strip the newline from the filename
    currentfile = open(file[:-1])
    for line in currentfile.readlines():
        if blah:
            blah
        elif blah:
            blah
    currentfile.close()


Well, that works all right, except for one thing: my list contains about
1100 files to process, and the script takes an extraordinary amount of time
to run.  Watching it work, I can see that it rushes through a couple
hundred files, stalls for several minutes (anywhere from one to four), then
continues.
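
To pin down where it stalls, I figure I could timestamp each file as it's
opened; something along these lines (an untested sketch, with the catalog
name taken from the command line):

import sys, time

catalogoffilestoprocess = open(sys.argv[1])
start = time.time()
count = 0
for file in catalogoffilestoprocess.readlines():
    count = count + 1
    # A running count and elapsed seconds: the stall should show up
    # as a big jump between two consecutive lines of output.
    print("%5d  %8.1f  %s" % (count, time.time() - start, file[:-1]))
    # ... the rest of the per-file work, unchanged ...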

I believe it has something to do with the default file-descriptor limits
set in my (Linux) kernel.  Could the system be failing to register that the
files were closed, so that the script somehow hits that ceiling?
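
(To see how close I am to that ceiling, I figure the standard resource
module would show the per-process descriptor limits; untested sketch:)

import resource

# Soft and hard per-process limits on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("descriptor limits: soft=%d hard=%d" % (soft, hard))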

Also, running "time ./script catalogoffiles" returns the following (look
at the elapsed time!  I also just noticed the pagefault and swap figures,
but I'm not familiar enough with time's output to know what they
indicate):

463.40user 8.89system 8:05.39elapsed 97%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (11017major+100694minor)pagefaults 1556swaps
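
One other thing I may try, in case the pagefaults and swaps mean the stalls
are memory-related: reading each file one line at a time instead of
slurping it whole with readlines().  A rough, untested sketch of what I
mean (the if/elif logic elided):

import sys

catalog = open(sys.argv[1])            # e.g. ./script catalogoffiles
for name in catalog.readlines():       # the catalog itself is small
    currentfile = open(name[:-1])      # drop the trailing newline
    line = currentfile.readline()
    while line:
        # ... the if/elif processing would go here, one line at a time ...
        line = currentfile.readline()
    currentfile.close()
catalog.close()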


Any ideas?

Thanks :)

Wes