Debugging suggestions, efficiency - handling modules

Bruno Desthuilliers bruno.42.desthuilliers at websiteburo.invalid
Fri Oct 10 07:04:59 EDT 2008


John [H2O] wrote:
> Hello,
> 
> I am writing some scripts that run a few calculations using scipy and plot
> the results with matplotlib (i.e. pylab). What I have found, however, is
> that the bulk of the time it takes to run the script is simply in loading
> modules. 

Is this loading time really that huge ???

> Granted, I am currently using:
> from pylab import *
> 
> However, changing this to the specific classes/functions doesn't make a
> significant difference in the execution time.

Indeed. The 'import' statement does two things: it first loads the 
module and caches it (so subsequent imports of the same module get the 
same module object), then it populates the importing namespace. 
Obviously, the 'heavy' part is the first import of the module (which 
requires some IO and possibly compilation to .pyc).
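
To see the caching at work, here's a rough illustration (the module 
name is just an example, and timings will of course vary):

    import time, sys

    t0 = time.time()
    import pylab                  # first import: IO, possible .pyc compilation
    print('first import : %.3f s' % (time.time() - t0))

    t0 = time.time()
    import pylab                  # second import: just a lookup in sys.modules
    print('second import: %.6f s' % (time.time() - t0))

    print('pylab' in sys.modules) # True - the module object is cached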

> Is there a way to have the modules stay loaded?

where ?

> But rerun the script?

Each execution ('run') of a Python script - using the python 
/path/to/my/script syntax or its point&click equivalent - starts a new 
Python interpreter process, which usually[1] terminates when the script 
ends (whether normally, or because of a sys.exit call or any other 
exception).

[1] the -i option keeps the interpreter running after execution, 
switching to interactive mode.
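
ie (assuming your script lives at /path/to/my/script.py):

    $ python -i /path/to/my/script.py
    ...
    >>> # the interpreter is still up, with the script's namespace
    >>> # (and its imported modules) available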

> One
> solution I can think of is to set break points,

???

> and design my scripts more
> as 'functions', then just run them from the command line.

You should indeed write as much as possible of your script's logic as 
functions. Then you can use the " if __name__ == '__main__': " idiom as 
the main entry point.
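
Something along these lines (just a minimal sketch - function and 
module names are made up):

    # my_script.py
    from pylab import plot, show

    def compute(data):
        # ... your scipy calculations here ...
        return [x * 2 for x in data]

    def main():
        results = compute([1, 2, 3])
        plot(results)
        show()

    if __name__ == '__main__':
        main()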

Now if you're going to use the Python shell as, well, a shell, you may 
want to have a look at IPython, which is a much more featureful one:

http://ipython.scipy.org/moin/Documentation
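
IPython's %run magic is particularly relevant here: it (re)executes a 
script within the already running interpreter, so modules imported on 
the first run stay cached for the following ones. Roughly:

    In [1]: %run /path/to/my/script.py   # first run pays the import cost
    In [2]: %run /path/to/my/script.py   # reuses the cached modules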


HTH


