[SciPy-dev] Latest weave still doesn't work on FreeBSD without a hack / large arrays crash my system

eric eric at scipy.org
Tue Mar 26 14:09:41 EST 2002


Hey Rob,


> It can't find libstdc++.   So I added the path and library name to one of
> the files.  Later when I get home from work I can make a diff.
>
> But here is the data, so it should be a no-brainer.  FreeBSD puts libstdc++
> in /usr/lib.

I don't get this.  weave no longer explicitly specifies stdc++.  It uses g++
instead whenever gcc is detected as the compiler.  g++ should automatically know
where stdc++ is.  When you use "verbose=2" as an argument to blitz(),
what is the output?
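To make that concrete, here is a rough sketch of the kind of call I mean. The array sizes and the update expression are just illustrative, and I've added a plain-NumPy fallback so the snippet still does something if weave isn't importable:

```python
import numpy as np

# Illustrative FDTD-style update; sizes and expression are made up.
a = np.ones((50, 50, 50))
b = np.ones((50, 50, 50))
expr = "a = a + 0.5 * b"

try:
    from scipy import weave
    # verbose=2 makes weave print the compiler command lines it runs,
    # which should show exactly how g++ is being invoked.
    weave.blitz(expr, verbose=2)
except ImportError:
    # weave not installed -- do the equivalent update in plain NumPy.
    a = a + 0.5 * b

print(a[0, 0, 0])
```

The compiler invocation printed at verbose=2 is what I'd want to see to debug the libstdc++ problem.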

>
> Also, I have no problems using weave.blitz() for 50x50x50 FDTD cells.  But
> when I go to 100x100x100 cells I run out of memory and swap.  Why does the
> size of the array affect the compilation?  Isn't an array just a pointer to
> a block of memory?

This doesn't make sense to me either.  The same code should be generated for
both.  In fact, after the FDTD equation(s) have been compiled once, they
shouldn't have to be compiled again -- it should detect the already compiled
version and simply start cranking away.

One thought.  Perhaps the 100x100x100 arrays are eating up a significant
fraction of your memory.  I don't know what formulation you are using, but if
this is 3D, you have 6 field arrays plus 3 material arrays, and you're using
double precision, then the memory usage for the FDTD alone is:

    100*100*100/1e6 * (6 + 3) * 8 = 72 MB
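A quick helper makes it easy to compare the two cases (the array count of 9 and 8 bytes per double are the assumptions from the estimate above):

```python
# Rough FDTD memory estimate: n^3 cells, 'arrays' double-precision
# arrays (assumed 6 field + 3 material), 8 bytes per element, in MB.
def fdtd_memory_mb(n, arrays=9, bytes_per_element=8):
    return n**3 * arrays * bytes_per_element / 1e6

print(fdtd_memory_mb(50))   # 50x50x50 case
print(fdtd_memory_mb(100))  # 100x100x100 case
```

So going from 50^3 to 100^3 is an 8x jump in array memory, which could matter on a small machine even though it shouldn't affect compilation at all.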

If your machine is somewhat limited, this plus the memory that g++ uses when
compiling blitz++ code (which is memory intensive) may be pushing it to its
limits.  This doesn't seem that plausible, but I can't think of another
explanation.

Can you send me a snippet that exhibits this problem?

thanks,
eric

More information about the SciPy-Dev mailing list