[Numpy-discussion] Notes from meeting with Guido regarding inclusion of array package in Python core
Robert Kern
rkern at ucsd.edu
Thu Mar 10 17:19:18 EST 2005
Chris Barker wrote:
> Perry Greenfield wrote:
>
>> On Mar 10, 2005, at 6:19 PM, Chris Barker wrote:
>>
>>>> a) So long as the extension package has access to the necessary
>>>> array include files, it can build the extension to use the arrays as
>>>> a format without actually having the array package installed.
>
>>>> extension would, when requested to use arrays, see if it could
>>>> import the array package; if not, all use of arrays would
>>>> result in exceptions.
>>>
>>> I'm not sure this is even necessary. In fact, in the above example,
>>> what would most likely happen is that the Helper functions would
>>> check to see if the input object was an array, and then fork the code
>>> if it were. An array couldn't be passed in unless the package were
>>> there, so there would be no need for checking imports or raising
>>> exceptions.
>>>
>> So what would the helper function do if the argument was an array? You
>> mean use the sequence protocol?
>
> Sorry I wasn't clear. The present Helper functions check to see if the
> sequence is a list, and use list-specific code if it is; otherwise, they
> fall back to the sequence protocol, which is why it's slow for Numeric
> arrays. I'm proposing that if the input is an array, they will then use
> array-specific code (perhaps PyArray_ContiguousFromObject, then
> accessing *data directly).
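To make the dispatch concrete, here is a rough Python sketch of the pattern Chris describes (the real wx helpers are C, and the function name, point format, and the Numeric branch are all illustrative assumptions): a list fast path, an array-specific path taken only if Numeric is importable, and the generic sequence-protocol fallback.

```python
def sequence_to_points(seq):
    # Hypothetical helper sketch; the actual wx helpers are written in C.
    if isinstance(seq, list):
        # list-specific fast path
        return [(float(x), float(y)) for x, y in seq]
    try:
        import Numeric  # optional dependency; may well be absent
    except ImportError:
        Numeric = None
    if Numeric is not None and isinstance(seq, Numeric.ArrayType):
        # array-specific path: force a contiguous array, then walk it
        # (the C equivalent would use PyArray_ContiguousFromObject)
        a = Numeric.array(seq)
        return [(float(row[0]), float(row[1])) for row in a]
    # generic sequence-protocol fallback -- the slow path today
    return [(float(item[0]), float(item[1])) for item in seq]
```

An array can only be passed in if the package is installed, so the fallback never needs to raise an import-related exception itself.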
If the über-buffer object (item 1c in Perry's notes) gets implemented in
the standard library, then the Helper functions could test
PyUberBuffer_Check() (or perhaps test for the presence of the extra
Numeric information, whatever), dispatch on the typecode, and iterate
through the data as appropriate. wx's C code doesn't need to know about
the Numeric array struct (and thus doesn't need to include any headers);
it just needs to know how to interpret the metadata provided by the
über-buffer.
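The über-buffer hasn't been implemented, so the following is a purely hypothetical sketch of what a header-free consumer could look like, using Python's struct module codes as stand-in typecodes (the metadata convention and function name are invented for illustration):

```python
import struct

def buffer_sum(data, typecode, count):
    # Hypothetical consumer: dispatch on the typecode carried in the
    # buffer's metadata, then iterate through the raw data -- no array
    # package headers or imports required.
    if typecode not in ('d', 'f', 'i'):
        raise TypeError('unsupported typecode: %r' % (typecode,))
    return sum(struct.unpack('%d%s' % (count, typecode), data))
```

A producer would hand over raw bytes plus metadata, e.g. `buffer_sum(struct.pack('3d', 1.0, 2.0, 3.0), 'd', 3)`.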
What's more, other packages could nearly seamlessly provide data in the
same way. For example, suppose your wx function plopped a pixel image
onto a canvas. It could take one of these buffers as the pixel source.
PIL could be a source. A Numeric array could be a source. A string could
be a source. A Quartz CGBitmapContext could be a source. As long as each
could be adapted to include the conventional metadata, they could all be
sources for the wx function, and none of the packages would need to know
about each other, much less be compiled against one another or depend on
their existence at runtime. I say "nearly seamlessly" only because there
might be an inevitable adaptation layer that adds or modifies the metadata.
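That adaptation layer might look something like this Python sketch (the `(raw_bytes, typecode, length)` convention and the function name are invented; PIL, Numeric, and CGBitmapContext adapters would follow the same shape):

```python
import array

def adapt_to_buffer(obj):
    # Hypothetical adaptation layer: normalize different data sources
    # to a common (raw_bytes, typecode, length) triple.
    if isinstance(obj, bytes):
        # a byte string is already raw data: one unsigned byte per item
        return obj, 'B', len(obj)
    if isinstance(obj, array.array):
        return obj.tobytes(), obj.typecode, len(obj)
    raise TypeError('no adapter registered for %s' % type(obj).__name__)
```

Each package only needs to register its own adapter; the consumer sees nothing but the common metadata.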
The buffer approach seems like the most Pythonic way to go. It
encourages loose coupling and flexibility. It also encourages object
adaptation, a la PyProtocols[1], which I like to push now and again.
[1] http://peak.telecommunity.com/PyProtocols.html
--
Robert Kern
rkern at ucsd.edu
"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter