Reading a comma delimited file

Kristen Zander kzander at hot.rr.com
Thu Nov 8 10:34:59 EST 2001


I am creating a system to load existing flat files that are not "truly"
comma delimited from the web onto a server, run a macro on them to "fix"
them, and then dump the data into a database.  I've got the files coming
over, the macro runs OK, and the data looks good.  Now I am getting the data
out and trying to put it into a list-of-tuples structure in order to use the
executemany() call in ODBC.  Here's my code so far:

import string

dbfile = open(newname, 'r')
lines = dbfile.readlines()        # readlines() gives the list of lines; readline() reads only one
table = []
newtable = []
for line in lines:                # split the .csv file into rows and take off the newline character
    line = line[:-1]
    r = string.split(line, ',')
    table.append(r)
for x in table:                   # take each list of strings and convert the numbers from strings
    t = []
    for field in x:
        if field.isalpha():
            num = field
        else:
            num = float(field)
        t.append(num)             # append inside the inner loop so every field is kept
    newtable.append(tuple(t))     # make a new table with the list-of-tuples structure

Here's what it looks like when I'm done:

[('Cotulla', 28.449999999999999, 99.216700000000003, 1960.0, 2.0, 11.1, 20.0, 0.0, 167.30000000000001),
 ('Cotulla', 28.449999999999999, 99.216700000000003, 1960.0, 3.0, 6.0999999999999996, 13.9, 0.0, 239.0)]
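
For what it's worth, that list of tuples is exactly the shape executemany()
takes as its second argument, so the insert step can be a single call.  A
minimal sketch, assuming a DB-API 2.0 style ODBC module with qmark
paramstyle, an already-open connection conn, and a made-up table name
"weather" with nine columns (substitute your own names):

    # hypothetical connection, table, and column layout
    cursor = conn.cursor()
    cursor.executemany(
        "INSERT INTO weather VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
        newtable)                 # one parameter tuple per row
    conn.commit()
    cursor.close()

A single executemany() over the whole list is usually faster than issuing
thousands of separate execute() calls.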

That structure is just what I need, except I don't want all those decimal
places.  I've searched the help and I'm sure there is a very simple call to
fix this; can someone help me out?  Also, I'm new to Python and am not sure
this is the best way to accomplish my task, so any suggestions as to how I
might increase speed by doing it another way would be great.  One .csv file
can, and many times will, have up to 14,000 rows of data to be dumped into
the database.
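
On the decimal places: those long trailing digits are just how repr()
displays floats (28.45 has no exact binary representation); str() and print
show the shorter form, and the value that reaches the database is the same
number either way.  If you do want the values limited to a fixed number of
places, round() during the conversion is probably the simplest fix; a
minimal sketch, assuming two places and the same variable names as above:

    for x in table:
        t = []
        for field in x:
            if field.isalpha():
                t.append(field)
            else:
                t.append(round(float(field), 2))   # nearest float to two places
        newtable.append(tuple(t))

    print "%.2f" % 28.449999999999999              # formatting for display -> 28.45

Note that repr() of a rounded value can still show extra digits, since 28.45
itself cannot be stored exactly; string formatting is what controls the
printed digits.  As for speed, 14,000 rows is not large: reading with
readlines(), converting each row as you go (the intermediate table list
could be skipped), and doing one executemany() at the end should be quick.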

Thanks,
Kristen Zander




