[Python-Dev] Changing ob_size to [s]size_t

Guido van Rossum guido@python.org
Fri, 07 Jun 2002 10:58:02 -0400


> > I'm not very concerned about strings or lists with more than 2GB
> > items, but I am concerned about other memory buffers.
> 
> Those in the Numeric/numarray community, for one, would also be
> concerned. Although there aren't many data arrays these days that are
> larger than 2GB, there are some beginning to appear. I have no doubt
> that within a few years there will be many more. I'm not sure I 
> understand all the implications of the discussion here, but it sounds
> like an important issue. Currently strings are frequently used as
> a common "medium" to pass binary data from one module to another
> (e.g., from Numeric to PIL); limiting strings to 2GB may prove
> a problem in this area (though frankly, I suspect few will want
> to use them as temporary buffers for objects that size until memories
> have grown a bit more :-). 

Sorry, I should have been more exact.  I meant 2 billion items, not 2
gigabytes.  That should give you an extra factor of 4-8 to play with. :-)
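
(For the curious, a minimal sketch of the field in question -- simplified
and with made-up struct names, not the real object.h declarations; the
point is only that the item count is a C int today and would become a
signed size type.)

    #include <sys/types.h>   /* ssize_t */

    /* Simplified mirror of the variable-size object header; field names
     * follow object.h, but this is illustrative only, not Python.h. */
    typedef struct {
        long     ob_refcnt;   /* reference count (type simplified here) */
        void    *ob_type;     /* type object pointer (simplified) */
        int      ob_size;     /* item count: capped at 2**31-1 items */
    } varobject_int;

    /* The change under discussion: a signed size type, 64 bits on LP64
     * platforms, so the count is no longer capped at 2 billion.  Since
     * list items are pointers (4-8 bytes each), 2**31-1 items already
     * means 8-16 GB of item storage -- the factor of 4-8 above. */
    typedef struct {
        long     ob_refcnt;
        void    *ob_type;
        ssize_t  ob_size;     /* item count: up to 2**63-1 on 64-bit */
    } varobject_ssize;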

We'll fix this in Python 3.0 for sure -- the question is, should we
start fixing it now and binary compatibility be damned, or should we
honor binary compatibility more?
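
(To make the binary-compatibility question concrete, a hedged sketch --
again not real CPython code: widening ob_size shifts the offset of every
field that follows it, so an extension module compiled against the old
header would read the wrong bytes on an LP64 platform unless it is
recompiled.)

    #include <stdio.h>
    #include <stddef.h>      /* offsetof */
    #include <sys/types.h>   /* ssize_t */

    /* Old layout: compiled extensions expect the variable part to start
     * right after a 4-byte ob_size. */
    struct old_varobject {
        long ob_refcnt; void *ob_type; int ob_size; char ob_item[1];
    };

    /* New layout: ob_size widens, so ob_item moves. */
    struct new_varobject {
        long ob_refcnt; void *ob_type; ssize_t ob_size; char ob_item[1];
    };

    int main(void)
    {
        /* On a typical LP64 box these print different offsets (e.g. 20
         * vs 24), which is why the change breaks old binaries. */
        printf("old ob_item offset: %zu\n",
               offsetof(struct old_varobject, ob_item));
        printf("new ob_item offset: %zu\n",
               offsetof(struct new_varobject, ob_item));
        return 0;
    }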

Maybe someone in the Python-with-a-tie camp can comment?

--Guido van Rossum (home page: http://www.python.org/~guido/)