looping through a file

Cecil H. Whitley cwhitley at earthlink.net
Thu May 1 07:38:17 EDT 2003


*snip*
> > import cgi, string
> >
> > form = cgi.FieldStorage()
> >
> > if form.has_key('usernameField') and form.has_key('passwordField'):
> >          users_username = string.strip(form['usernameField'].value)
> >          users_password = string.strip(form['passwordField'].value)
> authorised = 0
> lines = open('users.dat').readlines()
> for i in range(0, len(lines), 2):
>     if (string.strip(lines[i]) == users_username and
>         string.strip(lines[i + 1]) == users_password):
>         authorised = 1
>         break
>
> HTH.
> Miki
>
That's a good one, but wouldn't he be better served by a pickled dictionary?

Script to generate and save the dictionary:

import pickle, string

userdb = {}
lines = open('users.dat').readlines()
for i in range(0, len(lines), 2):
    username = string.strip(lines[i])
    password = string.strip(lines[i+1])
    if userdb.has_key(username):
        print "error in file, multiple users with same id"
    else:
        userdb[username] = password

# Now for the export to the new "db" file userdb.dat
outfile = open("userdb.dat","w")
pickle.dump(userdb,outfile)
outfile.close()
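As a quick sanity check, the parse-and-pickle step round-trips cleanly. A minimal sketch (modern Python, with hypothetical in-memory lines standing in for users.dat):

```python
import pickle

# Fake users.dat contents: username on one line, password on the next.
lines = ["alice\n", "secret1\n", "bob\n", "secret2\n"]

# Build the dict exactly as the script above does, two lines at a time.
userdb = {}
for i in range(0, len(lines), 2):
    userdb[lines[i].strip()] = lines[i + 1].strip()

# pickle.dumps produces the same bytes pickle.dump would write to userdb.dat;
# loading them back must reproduce the original dict.
data = pickle.dumps(userdb)
assert pickle.loads(data) == {"alice": "secret1", "bob": "secret2"}
```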

Now, for the actual usage, stop at authorised = 0 in the previous snippet and add:

import pickle
infile = open("userdb.dat","r")
userdb = pickle.load(infile)
infile.close()
# .get() avoids a KeyError when the username isn't in the db
if userdb.get(users_username) == users_password:
    authorised = 1

Of course, the issue here will be maintaining the user database, but the
"pickling" script can be run as often as needed against the flat file.  Doing
the user lookup "indexed" rather than looped should cut down on the time it
takes to authenticate.  Of course, if it's every user in the world, memory
will be a problem, but I contend that if the file is that big, a read/loop
over a flat file will take a very long time and will induce problems of its
own.  If the userdb is small enough, you might also want to consider using a
memory-mapped file.

Regards,
Cecil Whitley






More information about the Python-list mailing list