I did write that data-analysis project. Actually, what I wrote
was a preprocessor: it takes as input some files which specify
how you would like the data to be analyzed, and translates
this information into FORTRAN (I hear the Scheme guys coughing)
code; I then have a FORTRAN main program which includes the
preprocessor's FORTRAN output and does just a little bookkeeping.
This gets compiled and run. It uses the CERN HBOOK package
for histogramming so it takes advantage of all the existing
code, but it avoids using PAW (CERN's big hitter data-analysis
program); PAW is great for presentation and "analysis" of
data in near-final form but not good for analyzing large
quantities of data when large numbers of parameters need
to be investigated.
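To give a feel for the general idea (this is a sketch of mine, not the
actual code — the spec format "hist <id> <title> <bins> <low> <high>"
and every name in it are made up), a preprocessor like this just reads
spec lines and emits the corresponding FORTRAN, here HBOOK1 booking
calls:

```python
# Hypothetical spec-to-FORTRAN translator, in the spirit of the
# preprocessor described above.  Each "hist" spec line becomes one
# CALL HBOOK1(id, title, nbins, low, high, 0.) booking statement.

def translate(spec_lines):
    """Turn histogram spec lines into HBOOK booking code."""
    fortran = []
    for line in spec_lines:
        words = line.split()
        if not words or words[0] != 'hist':
            continue                      # skip blanks and other keywords
        hid, title, nbins, lo, hi = words[1:6]
        fortran.append(
            "      CALL HBOOK1(%s,'%s',%s,%s,%s,0.)"
            % (hid, title, nbins, lo, hi))
    return '\n'.join(fortran)

spec = ["hist 10 ENERGY 100 0. 500.",
        "hist 20 ANGLE 50 -1. 1."]
print(translate(spec))
```

The output would then be INCLUDEd (or just concatenated) into the
FORTRAN main program before compiling.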
I haven't made a class yet, but I did manage to use dictionaries;
pretty cool. I've been really amazed at how easy it is to
improve your programs once you start; I guess it has to do
with the flexibility of the data types. I had originally written
the preprocessor without dictionaries, and putting them in was
almost no work at all.
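As an illustration of why (again my own toy example, not the real
code): a dictionary lets you dispatch on spec keywords through a
lookup table instead of a chain of if/elif tests, so supporting a new
keyword is a one-line change.

```python
# Hypothetical keyword dispatch via a dictionary.  The handlers and
# keywords here are invented for illustration.

def handle_hist(args):
    return 'book histogram ' + args[0]

def handle_cut(args):
    return 'apply cut ' + args[0]

# Adding a new spec keyword is one new entry in this table.
handlers = {'hist': handle_hist, 'cut': handle_cut}

def process(line):
    keyword, *args = line.split()
    return handlers[keyword](args)

print(process('hist 10'))     # book histogram 10
print(process('cut ptmin'))   # apply cut ptmin
```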
If anybody wants to nose around in the code, lemme know, but
this time I won't bother the list with a posting.