[Numpy-discussion] Questions about converting to numpy
Robert Kern
robert.kern at gmail.com
Wed Apr 25 16:11:29 EDT 2007
Russell E. Owen wrote:
> So I finally bit the bullet and converted most of my code from Numeric
> and numarray to numpy. (I haven't yet tried to convert one package that
> makes heavy use of nd_image and has C extensions).
>
> But it left me with a few questions:
>
> - What exception does numpy throw if it runs out of memory? (I can try
> harder to make it do that, but trying to chew up all memory tends to
> slow the machine down and my first tests weren't successful) -- the
> equivalent to numarray.memory.error. The numpy book is silent on the
> issue of what exceptions numpy can throw (at least the index was).
The standard MemoryError exception.
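For illustration, a small sketch (not from the original post) showing how a failed allocation surfaces as the standard Python MemoryError; the requested size here is chosen to exceed any realistic address space:

```python
import numpy as np

try:
    # Request ~8 petabytes; the allocation fails and numpy raises
    # the ordinary built-in MemoryError (exact limits are
    # platform-dependent).
    big = np.zeros((10**15,), dtype=np.float64)
except MemoryError:
    print("caught MemoryError")
```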
> - Is there a list of the data types that we can expect to be available
> on all regular platforms (including 32-bit linux, MacOS X and Windows)
> and of usual speed for computations (instead of some kind of slow
> emulation)?
Not anywhere particular, but these might not be available/useful on all
platforms: float96, float128, float256, complex192, complex256, complex512,
int64, uint64, int128, uint128, longlong, ulonglong, longfloat, clongfloat.
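Since availability varies by build, the portable approach is to probe for the names at runtime rather than assume them; a minimal sketch:

```python
import numpy as np

# Names that may or may not exist on a given platform/build.
# hasattr() is the safe way to test; absent types simply aren't
# defined as attributes of the numpy module.
for name in ("float96", "float128", "int128", "longdouble"):
    print(name, hasattr(np, name))

# longdouble always exists, but its precision (and speed) depends
# on the platform's C "long double".
print(np.dtype(np.longdouble).itemsize)
```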
> - Even after reading the book I'm not really clear on why one would use
> numpy.float_ instead of numpy.float or float for day-to-day programming
> (where the size doesn't matter enough to use float64 or whatever). Any
> hints?
If you wanted an array scalar of the "default" float dtype (whatever that
happened to be), you would have to use float_. Of course, the "default" float
dtype is always (and will always be, AFAICT) float64, so really, you might as
well use that.
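A quick sketch (mine, not from the post) confirming that the default float dtype is float64 in practice:

```python
import numpy as np

# Arrays built from Python floats get the "default" float dtype,
# which is float64 on all common platforms.
a = np.array([1.0, 2.0])
print(a.dtype)  # float64
print(a.dtype == np.float64)
```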
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco