From noreply at buildbot.pypy.org Tue May 1 00:03:08 2012 From: noreply at buildbot.pypy.org (wlav) Date: Tue, 1 May 2012 00:03:08 +0200 (CEST) Subject: [pypy-commit] pypy default: cppyy doc, clarifications Message-ID: <20120430220308.70A0082F50@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: Changeset: r54837:0bbb1685f720 Date: 2012-04-30 15:02 -0700 http://bitbucket.org/pypy/pypy/changeset/0bbb1685f720/ Log: cppyy doc, clarifications diff --git a/pypy/doc/cppyy.rst b/pypy/doc/cppyy.rst --- a/pypy/doc/cppyy.rst +++ b/pypy/doc/cppyy.rst @@ -21,6 +21,26 @@ .. _`llvm`: http://llvm.org/ +Motivation +========== + +The cppyy module offers two unique features, which result in great +performance as well as better functionality and cross-language integration +than would otherwise be possible. +First, cppyy is written in RPython and therefore open to optimizations by the +JIT up until the actual point of call into C++. +This means that there are no conversions necessary between a garbage collected +and a reference counted environment, as is needed for the use of existing +extension modules written or generated for CPython. +It also means that if variables are already unboxed by the JIT, they can be +passed through directly to C++. +Second, Reflex (and cling far more so) adds dynamic features to C++, thus +greatly reducing impedance mismatches between the two languages. +In fact, Reflex is dynamic enough that you could write the runtime bindings +generation in python (as opposed to RPython) and this is used to create very +natural "pythonizations" of the bound code. 
+ + Installation ============ @@ -195,10 +215,12 @@ >>>> d = cppyy.gbl.BaseFactory("name", 42, 3.14) >>>> type(d) - >>>> d.m_i - 42 - >>>> d.m_d - 3.14 + >>>> isinstance(d, cppyy.gbl.Base1) + True + >>>> isinstance(d, cppyy.gbl.Base2) + True + >>>> d.m_i, d.m_d + (42, 3.14) >>>> d.m_name == "name" True >>>> @@ -295,6 +317,9 @@ To select a specific virtual method, do like with normal python classes that override methods: select it from the class that you need, rather than calling the method on the instance. + To select a specific overload, use the __dispatch__ special function, which + takes the name of the desired method and its signature (which can be + obtained from the doc string) as arguments. * **namespaces**: Are represented as python classes. Namespaces are more open-ended than classes, so sometimes initial access may diff --git a/pypy/doc/extending.rst b/pypy/doc/extending.rst --- a/pypy/doc/extending.rst +++ b/pypy/doc/extending.rst @@ -116,13 +116,21 @@ Reflex ====== -This method is only experimental for now, and is being exercised on a branch, -`reflex-support`_, so you will have to build PyPy yourself. +This method is still experimental and is being exercised on a branch, +`reflex-support`_, which adds the `cppyy`_ module. The method works by using the `Reflex package`_ to provide reflection information of the C++ code, which is then used to automatically generate -bindings at runtime, which can then be used from python. +bindings at runtime. +From a python standpoint, there is no difference between generating bindings +at runtime, or having them "statically" generated and available in scripts +or compiled into extension modules: python classes and functions are always +runtime structures, created when a script or module loads. +However, if the backend itself is capable of dynamic behavior, it is a much +better functional match to python, allowing tighter integration and more +natural language mappings. Full details are `available here`_. +.. 
_`cppyy`: cppyy.html .. _`reflex-support`: cppyy.html .. _`Reflex package`: http://root.cern.ch/drupal/content/reflex .. _`available here`: cppyy.html @@ -130,16 +138,33 @@ Pros ---- -If it works, it is mostly automatic, and hence easy in use. -The bindings can make use of direct pointers, in which case the calls are -very fast. +The cppyy module is written in RPython, which makes it possible to keep the +code execution visible to the JIT all the way to the actual point of call into +C++, thus allowing for a very fast interface. +Reflex is currently in use in large software environments in High Energy +Physics (HEP), across many different projects and packages, and its use can be +virtually completely automated in a production environment. +One of its uses in HEP is in providing language bindings for CPython. +Thus, it is possible to use Reflex to have bound code work on both CPython and +on PyPy. +In the medium-term, Reflex will be replaced by `cling`_, which is based on +`llvm`_. +This will affect the backend only; the python-side interface is expected to +remain the same, except that cling adds a lot of dynamic behavior to C++, +enabling further language integration. + +.. _`cling`: http://root.cern.ch/drupal/content/cling +.. _`llvm`: http://llvm.org/ Cons ---- -C++ is a large language, and these bindings are not yet feature-complete. -Although missing features should do no harm if you don't use them, if you do -need a particular feature, it may be necessary to work around it in python -or with a C++ helper function. +C++ is a large language, and cppyy is not yet feature-complete. +Still, the experience gained in developing the equivalent bindings for CPython +means that adding missing features is a simple matter of engineering, not a +question of research. 
+The module is written so that currently missing features should do no harm if +you don't use them, if you do need a particular feature, it may be necessary +to work around it in python or with a C++ helper function. Although Reflex works on various platforms, the bindings with PyPy have only been tested on Linux. From noreply at buildbot.pypy.org Tue May 1 11:27:30 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 1 May 2012 11:27:30 +0200 (CEST) Subject: [pypy-commit] pypy default: test that it is possible to disable each of the optimization steps Message-ID: <20120501092730.986E49B602F@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r54838:0778ff2df9df Date: 2012-05-01 10:53 +0200 http://bitbucket.org/pypy/pypy/changeset/0778ff2df9df/ Log: test that it is possible to disable each of the optimization steps diff --git a/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py @@ -0,0 +1,46 @@ +from pypy.jit.metainterp.optimizeopt.test.test_optimizeopt import OptimizeOptTest +from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin +from pypy.jit.metainterp.resoperation import rop + + +allopts = OptimizeOptTest.enable_opts.split(':') +for optnum in range(len(allopts)): + myopts = allopts[:] + del myopts[optnum] + + class TestLLtype(OptimizeOptTest, LLtypeMixin): + enable_opts = ':'.join(myopts) + + def optimize_loop(self, ops, expected, expected_preamble=None, + call_pure_results=None, expected_short=None): + loop = self.parse(ops) + if expected != "crash!": + expected = self.parse(expected) + if expected_preamble: + expected_preamble = self.parse(expected_preamble) + if expected_short: + expected_short = self.parse(expected_short) + + preamble = self.unroll_and_optimize(loop, call_pure_results) + + for op in preamble.operations + 
loop.operations: + assert op.getopnum() not in (rop.CALL_PURE, + rop.CALL_LOOPINVARIANT, + rop.VIRTUAL_REF_FINISH, + rop.VIRTUAL_REF, + rop.QUASIIMMUT_FIELD, + rop.MARK_OPAQUE_PTR, + rop.RECORD_KNOWN_CLASS) + + def raises(self, e, fn, *args): + try: + fn(*args) + except e: + pass + + opt = allopts[optnum] + exec "TestNo%sLLtype = TestLLtype" % (opt[0].upper() + opt[1:]) + +del TestLLtype +del TestNoUnrollLLtype + diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -105,6 +105,9 @@ return loop + def raises(self, e, fn, *args): + py.test.raises(e, fn, *args) + class OptimizeOptTest(BaseTestWithUnroll): def setup_method(self, meth=None): @@ -2639,7 +2642,7 @@ p2 = new_with_vtable(ConstClass(node_vtable)) jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_2(self): @@ -2651,7 +2654,7 @@ escape(p2) # prevent it from staying Virtual jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_3(self): @@ -2665,7 +2668,7 @@ setfield_gc(p3, p4, descr=nextdescr) jump(p3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_merge_guard_class_guard_value(self): @@ -4411,7 +4414,7 @@ setfield_gc(p1, p3, descr=nextdescr) jump(p3) """ - py.test.raises(BogusPureField, self.optimize_loop, ops, "crash!") + self.raises(BogusPureField, self.optimize_loop, ops, "crash!") def test_dont_complains_different_field(self): ops = """ @@ -5024,7 +5027,7 @@ i2 = int_add(i0, 3) jump(i2) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_ne_const_not(self): ops = """ 
@@ -5074,7 +5077,7 @@ i3 = int_add(i0, 3) jump(i3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_lshift(self): ops = """ From noreply at buildbot.pypy.org Tue May 1 11:27:31 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 1 May 2012 11:27:31 +0200 (CEST) Subject: [pypy-commit] pypy default: When unrolling is enabled OptSimplify should not touch LABEL and JUMP ops Message-ID: <20120501092731.DE9E19B6030@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r54839:8cc6f87c034b Date: 2012-05-01 11:22 +0200 http://bitbucket.org/pypy/pypy/changeset/8cc6f87c034b/ Log: When unrolling is enabled OptSimplify should not touch LABEL and JUMP ops diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -51,7 +51,7 @@ if ('rewrite' not in enable_opts or 'virtualize' not in enable_opts or 'heap' not in enable_opts or 'unroll' not in enable_opts or 'pure' not in enable_opts): - optimizations.append(OptSimplify()) + optimizations.append(OptSimplify(unroll)) return optimizations, unroll diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -4,8 +4,9 @@ from pypy.jit.metainterp.history import TargetToken, JitCellToken class OptSimplify(Optimization): - def __init__(self): + def __init__(self, unroll): self.last_label_descr = None + self.unroll = unroll def optimize_CALL_PURE(self, op): args = op.getarglist() @@ -35,24 +36,26 @@ pass def optimize_LABEL(self, op): - descr = op.getdescr() - if isinstance(descr, JitCellToken): - return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) - self.last_label_descr = op.getdescr() + if not self.unroll: + descr = op.getdescr() + if 
isinstance(descr, JitCellToken): + return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) + self.last_label_descr = op.getdescr() self.emit_operation(op) def optimize_JUMP(self, op): - descr = op.getdescr() - assert isinstance(descr, JitCellToken) - if not descr.target_tokens: - assert self.last_label_descr is not None - target_token = self.last_label_descr - assert isinstance(target_token, TargetToken) - assert target_token.targeting_jitcell_token is descr - op.setdescr(self.last_label_descr) - else: - assert len(descr.target_tokens) == 1 - op.setdescr(descr.target_tokens[0]) + if not self.unroll: + descr = op.getdescr() + assert isinstance(descr, JitCellToken) + if not descr.target_tokens: + assert self.last_label_descr is not None + target_token = self.last_label_descr + assert isinstance(target_token, TargetToken) + assert target_token.targeting_jitcell_token is descr + op.setdescr(self.last_label_descr) + else: + assert len(descr.target_tokens) == 1 + op.setdescr(descr.target_tokens[0]) self.emit_operation(op) dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', From noreply at buildbot.pypy.org Tue May 1 11:27:33 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 1 May 2012 11:27:33 +0200 (CEST) Subject: [pypy-commit] pypy default: merge Message-ID: <20120501092733.84E669B6031@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r54840:699a431ea9a8 Date: 2012-05-01 11:23 +0200 http://bitbucket.org/pypy/pypy/changeset/699a431ea9a8/ Log: merge diff --git a/pypy/doc/cppyy.rst b/pypy/doc/cppyy.rst --- a/pypy/doc/cppyy.rst +++ b/pypy/doc/cppyy.rst @@ -21,6 +21,26 @@ .. _`llvm`: http://llvm.org/ +Motivation +========== + +The cppyy module offers two unique features, which result in great +performance as well as better functionality and cross-language integration +than would otherwise be possible. +First, cppyy is written in RPython and therefore open to optimizations by the +JIT up until the actual point of call into C++. 
+This means that there are no conversions necessary between a garbage collected +and a reference counted environment, as is needed for the use of existing +extension modules written or generated for CPython. +It also means that if variables are already unboxed by the JIT, they can be +passed through directly to C++. +Second, Reflex (and cling far more so) adds dynamic features to C++, thus +greatly reducing impedance mismatches between the two languages. +In fact, Reflex is dynamic enough that you could write the runtime bindings +generation in python (as opposed to RPython) and this is used to create very +natural "pythonizations" of the bound code. + + Installation ============ @@ -195,10 +215,12 @@ >>>> d = cppyy.gbl.BaseFactory("name", 42, 3.14) >>>> type(d) - >>>> d.m_i - 42 - >>>> d.m_d - 3.14 + >>>> isinstance(d, cppyy.gbl.Base1) + True + >>>> isinstance(d, cppyy.gbl.Base2) + True + >>>> d.m_i, d.m_d + (42, 3.14) >>>> d.m_name == "name" True >>>> @@ -295,6 +317,9 @@ To select a specific virtual method, do like with normal python classes that override methods: select it from the class that you need, rather than calling the method on the instance. + To select a specific overload, use the __dispatch__ special function, which + takes the name of the desired method and its signature (which can be + obtained from the doc string) as arguments. * **namespaces**: Are represented as python classes. Namespaces are more open-ended than classes, so sometimes initial access may diff --git a/pypy/doc/extending.rst b/pypy/doc/extending.rst --- a/pypy/doc/extending.rst +++ b/pypy/doc/extending.rst @@ -116,13 +116,21 @@ Reflex ====== -This method is only experimental for now, and is being exercised on a branch, -`reflex-support`_, so you will have to build PyPy yourself. +This method is still experimental and is being exercised on a branch, +`reflex-support`_, which adds the `cppyy`_ module. 
The method works by using the `Reflex package`_ to provide reflection information of the C++ code, which is then used to automatically generate -bindings at runtime, which can then be used from python. +bindings at runtime. +From a python standpoint, there is no difference between generating bindings +at runtime, or having them "statically" generated and available in scripts +or compiled into extension modules: python classes and functions are always +runtime structures, created when a script or module loads. +However, if the backend itself is capable of dynamic behavior, it is a much +better functional match to python, allowing tighter integration and more +natural language mappings. Full details are `available here`_. +.. _`cppyy`: cppyy.html .. _`reflex-support`: cppyy.html .. _`Reflex package`: http://root.cern.ch/drupal/content/reflex .. _`available here`: cppyy.html @@ -130,16 +138,33 @@ Pros ---- -If it works, it is mostly automatic, and hence easy in use. -The bindings can make use of direct pointers, in which case the calls are -very fast. +The cppyy module is written in RPython, which makes it possible to keep the +code execution visible to the JIT all the way to the actual point of call into +C++, thus allowing for a very fast interface. +Reflex is currently in use in large software environments in High Energy +Physics (HEP), across many different projects and packages, and its use can be +virtually completely automated in a production environment. +One of its uses in HEP is in providing language bindings for CPython. +Thus, it is possible to use Reflex to have bound code work on both CPython and +on PyPy. +In the medium-term, Reflex will be replaced by `cling`_, which is based on +`llvm`_. +This will affect the backend only; the python-side interface is expected to +remain the same, except that cling adds a lot of dynamic behavior to C++, +enabling further language integration. + +.. _`cling`: http://root.cern.ch/drupal/content/cling +.. 
_`llvm`: http://llvm.org/ Cons ---- -C++ is a large language, and these bindings are not yet feature-complete. -Although missing features should do no harm if you don't use them, if you do -need a particular feature, it may be necessary to work around it in python -or with a C++ helper function. +C++ is a large language, and cppyy is not yet feature-complete. +Still, the experience gained in developing the equivalent bindings for CPython +means that adding missing features is a simple matter of engineering, not a +question of research. +The module is written so that currently missing features should do no harm if +you don't use them, if you do need a particular feature, it may be necessary +to work around it in python or with a C++ helper function. Although Reflex works on various platforms, the bindings with PyPy have only been tested on Linux. diff --git a/pypy/module/rctime/test/test_rctime.py b/pypy/module/rctime/test/test_rctime.py --- a/pypy/module/rctime/test/test_rctime.py +++ b/pypy/module/rctime/test/test_rctime.py @@ -64,6 +64,7 @@ def test_localtime(self): import time as rctime + import os raises(TypeError, rctime.localtime, "foo") rctime.localtime() rctime.localtime(None) @@ -75,6 +76,10 @@ assert 0 <= (t1 - t0) < 1.2 t = rctime.time() assert rctime.localtime(t) == rctime.localtime(t) + if os.name == 'nt': + raises(ValueError, rctime.localtime, -1) + else: + rctime.localtime(-1) def test_mktime(self): import time as rctime @@ -108,8 +113,8 @@ assert long(rctime.mktime(rctime.gmtime(t))) - rctime.timezone == long(t) ltime = rctime.localtime() assert rctime.mktime(tuple(ltime)) == rctime.mktime(ltime) - - assert rctime.mktime(rctime.localtime(-1)) == -1 + if os.name != 'nt': + assert rctime.mktime(rctime.localtime(-1)) == -1 def test_asctime(self): import time as rctime diff --git a/pypy/module/select/__init__.py b/pypy/module/select/__init__.py --- a/pypy/module/select/__init__.py +++ b/pypy/module/select/__init__.py @@ -2,6 +2,7 @@ from 
pypy.interpreter.mixedmodule import MixedModule import sys +import os class Module(MixedModule): @@ -9,11 +10,13 @@ } interpleveldefs = { - 'poll' : 'interp_select.poll', 'select': 'interp_select.select', 'error' : 'space.fromcache(interp_select.Cache).w_error' } + if os.name =='posix': + interpleveldefs['poll'] = 'interp_select.poll' + if sys.platform.startswith('linux'): interpleveldefs['epoll'] = 'interp_epoll.W_Epoll' from pypy.module.select.interp_epoll import cconfig, public_symbols diff --git a/pypy/module/select/test/test_select.py b/pypy/module/select/test/test_select.py --- a/pypy/module/select/test/test_select.py +++ b/pypy/module/select/test/test_select.py @@ -214,6 +214,8 @@ def test_poll(self): import select + if not hasattr(select, 'poll'): + skip("no select.poll() on this platform") readend, writeend = self.getpair() try: class A(object): diff --git a/pypy/rlib/debug.py b/pypy/rlib/debug.py --- a/pypy/rlib/debug.py +++ b/pypy/rlib/debug.py @@ -1,10 +1,12 @@ import sys, time from pypy.rpython.extregistry import ExtRegistryEntry +from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.rarithmetic import is_valid_int def ll_assert(x, msg): """After translation to C, this becomes an RPyAssert.""" + assert type(x) is bool, "bad type! 
got %r" % (type(x),) assert x, msg class Entry(ExtRegistryEntry): @@ -21,8 +23,13 @@ hop.exception_cannot_occur() hop.genop('debug_assert', vlist) +class FatalError(Exception): + pass + def fatalerror(msg): # print the RPython traceback and abort with a fatal error + if not we_are_translated(): + raise FatalError(msg) from pypy.rpython.lltypesystem import lltype from pypy.rpython.lltypesystem.lloperation import llop llop.debug_print_traceback(lltype.Void) @@ -33,6 +40,8 @@ def fatalerror_notb(msg): # a variant of fatalerror() that doesn't print the RPython traceback + if not we_are_translated(): + raise FatalError(msg) from pypy.rpython.lltypesystem import lltype from pypy.rpython.lltypesystem.lloperation import llop llop.debug_fatalerror(lltype.Void, msg) diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -916,7 +916,7 @@ ll_assert(not self.is_in_nursery(obj), "object in nursery after collection") # similarily, all objects should have this flag: - ll_assert(self.header(obj).tid & GCFLAG_TRACK_YOUNG_PTRS, + ll_assert(self.header(obj).tid & GCFLAG_TRACK_YOUNG_PTRS != 0, "missing GCFLAG_TRACK_YOUNG_PTRS") # the GCFLAG_VISITED should not be set between collections ll_assert(self.header(obj).tid & GCFLAG_VISITED == 0, diff --git a/pypy/rpython/memory/gc/semispace.py b/pypy/rpython/memory/gc/semispace.py --- a/pypy/rpython/memory/gc/semispace.py +++ b/pypy/rpython/memory/gc/semispace.py @@ -640,7 +640,7 @@ between collections.""" tid = self.header(obj).tid if tid & GCFLAG_EXTERNAL: - ll_assert(tid & GCFLAG_FORWARDED, "bug: external+!forwarded") + ll_assert(tid & GCFLAG_FORWARDED != 0, "bug: external+!forwarded") ll_assert(not (self.tospace <= obj < self.free), "external flag but object inside the semispaces") else: diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- 
a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -8,7 +8,6 @@ from pypy.rpython.memory.gcheader import GCHeaderBuilder from pypy.rlib.rarithmetic import ovfcheck from pypy.rlib import rgc -from pypy.rlib.debug import ll_assert from pypy.rlib.objectmodel import we_are_translated from pypy.translator.backendopt import graphanalyze from pypy.translator.backendopt.support import var_needsgc From noreply at buildbot.pypy.org Tue May 1 11:27:34 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Tue, 1 May 2012 11:27:34 +0200 (CEST) Subject: [pypy-commit] pypy default: some comments Message-ID: <20120501092734.BD7819B6032@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r54841:38a19a4dd9f3 Date: 2012-05-01 11:26 +0200 http://bitbucket.org/pypy/pypy/changeset/38a19a4dd9f3/ Log: some comments diff --git a/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py --- a/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py @@ -41,6 +41,6 @@ opt = allopts[optnum] exec "TestNo%sLLtype = TestLLtype" % (opt[0].upper() + opt[1:]) -del TestLLtype -del TestNoUnrollLLtype +del TestLLtype # No need to run the last set twice +del TestNoUnrollLLtype # This case is taken care of by test_optimizebasic From noreply at buildbot.pypy.org Tue May 1 13:47:09 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Tue, 1 May 2012 13:47:09 +0200 (CEST) Subject: [pypy-commit] pypy numpypy-issue1137: use a real correct super call in AppTestNumArray.setup_class Message-ID: <20120501114709.965109B602F@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: numpypy-issue1137 Changeset: r54842:80809f2cd86b Date: 2012-05-01 13:46 +0200 http://bitbucket.org/pypy/pypy/changeset/80809f2cd86b/ Log: use a real correct super call in 
AppTestNumArray.setup_class diff --git a/pypy/module/micronumpy/test/test_base.py b/pypy/module/micronumpy/test/test_base.py --- a/pypy/module/micronumpy/test/test_base.py +++ b/pypy/module/micronumpy/test/test_base.py @@ -10,6 +10,7 @@ import sys class BaseNumpyAppTest(object): + @classmethod def setup_class(cls): if option.runappdirect: if '__pypy__' not in sys.builtin_module_names: diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -195,7 +195,7 @@ class AppTestNumArray(BaseNumpyAppTest): def setup_class(cls): - BaseNumpyAppTest.setup_class.im_func(cls) + super(AppTestNumArray, cls).setup_class() w_tup = cls.space.appexec([], """(): From noreply at buildbot.pypy.org Wed May 2 01:01:58 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:01:58 +0200 (CEST) Subject: [pypy-commit] pypy default: Update cpyext C sources with recent 2.7 version. Message-ID: <20120501230158.E03A382F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r54843:d15f5852eabb Date: 2012-04-28 12:19 +0200 http://bitbucket.org/pypy/pypy/changeset/d15f5852eabb/ Log: Update cpyext C sources with recent 2.7 version. mostly whitespace change, but also - exarkun's rewrite of the PyArg_Parse cleanup list. 
- remove all "#if 0" from these functions diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -927,12 +927,12 @@ source_dir / "pyerrors.c", source_dir / "modsupport.c", source_dir / "getargs.c", + source_dir / "abstract.c", source_dir / "stringobject.c", source_dir / "mysnprintf.c", source_dir / "pythonrun.c", source_dir / "sysmodule.c", source_dir / "bufferobject.c", - source_dir / "object.c", source_dir / "cobject.c", source_dir / "structseq.c", source_dir / "capsule.c", diff --git a/pypy/module/cpyext/include/pyerrors.h b/pypy/module/cpyext/include/pyerrors.h --- a/pypy/module/cpyext/include/pyerrors.h +++ b/pypy/module/cpyext/include/pyerrors.h @@ -29,6 +29,10 @@ # define vsnprintf _vsnprintf #endif +#include +PyAPI_FUNC(int) PyOS_snprintf(char *str, size_t size, const char *format, ...); +PyAPI_FUNC(int) PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va); + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/stringobject.h b/pypy/module/cpyext/include/stringobject.h --- a/pypy/module/cpyext/include/stringobject.h +++ b/pypy/module/cpyext/include/stringobject.h @@ -7,8 +7,6 @@ extern "C" { #endif -int PyOS_snprintf(char *str, size_t size, const char *format, ...); - #define PyString_GET_SIZE(op) PyString_Size(op) #define PyString_AS_STRING(op) PyString_AsString(op) diff --git a/pypy/module/cpyext/src/bufferobject.c b/pypy/module/cpyext/src/bufferobject.c --- a/pypy/module/cpyext/src/bufferobject.c +++ b/pypy/module/cpyext/src/bufferobject.c @@ -13,207 +13,207 @@ static int get_buf(PyBufferObject *self, void **ptr, Py_ssize_t *size, - enum buffer_t buffer_type) + enum buffer_t buffer_type) { - if (self->b_base == NULL) { - assert (ptr != NULL); - *ptr = self->b_ptr; - *size = self->b_size; - } - else { - Py_ssize_t count, offset; - readbufferproc proc = 0; - PyBufferProcs *bp = self->b_base->ob_type->tp_as_buffer; - if 
((*bp->bf_getsegcount)(self->b_base, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return 0; - } - if ((buffer_type == READ_BUFFER) || - ((buffer_type == ANY_BUFFER) && self->b_readonly)) - proc = bp->bf_getreadbuffer; - else if ((buffer_type == WRITE_BUFFER) || - (buffer_type == ANY_BUFFER)) - proc = (readbufferproc)bp->bf_getwritebuffer; - else if (buffer_type == CHAR_BUFFER) { + if (self->b_base == NULL) { + assert (ptr != NULL); + *ptr = self->b_ptr; + *size = self->b_size; + } + else { + Py_ssize_t count, offset; + readbufferproc proc = 0; + PyBufferProcs *bp = self->b_base->ob_type->tp_as_buffer; + if ((*bp->bf_getsegcount)(self->b_base, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return 0; + } + if ((buffer_type == READ_BUFFER) || + ((buffer_type == ANY_BUFFER) && self->b_readonly)) + proc = bp->bf_getreadbuffer; + else if ((buffer_type == WRITE_BUFFER) || + (buffer_type == ANY_BUFFER)) + proc = (readbufferproc)bp->bf_getwritebuffer; + else if (buffer_type == CHAR_BUFFER) { if (!PyType_HasFeature(self->ob_type, - Py_TPFLAGS_HAVE_GETCHARBUFFER)) { - PyErr_SetString(PyExc_TypeError, - "Py_TPFLAGS_HAVE_GETCHARBUFFER needed"); - return 0; - } - proc = (readbufferproc)bp->bf_getcharbuffer; - } - if (!proc) { - char *buffer_type_name; - switch (buffer_type) { - case READ_BUFFER: - buffer_type_name = "read"; - break; - case WRITE_BUFFER: - buffer_type_name = "write"; - break; - case CHAR_BUFFER: - buffer_type_name = "char"; - break; - default: - buffer_type_name = "no"; - break; - } - PyErr_Format(PyExc_TypeError, - "%s buffer type not available", - buffer_type_name); - return 0; - } - if ((count = (*proc)(self->b_base, 0, ptr)) < 0) - return 0; - /* apply constraints to the start/end */ - if (self->b_offset > count) - offset = count; - else - offset = self->b_offset; - *(char **)ptr = *(char **)ptr + offset; - if (self->b_size == Py_END_OF_BUFFER) - *size = count; - 
else - *size = self->b_size; - if (offset + *size > count) - *size = count - offset; - } - return 1; + Py_TPFLAGS_HAVE_GETCHARBUFFER)) { + PyErr_SetString(PyExc_TypeError, + "Py_TPFLAGS_HAVE_GETCHARBUFFER needed"); + return 0; + } + proc = (readbufferproc)bp->bf_getcharbuffer; + } + if (!proc) { + char *buffer_type_name; + switch (buffer_type) { + case READ_BUFFER: + buffer_type_name = "read"; + break; + case WRITE_BUFFER: + buffer_type_name = "write"; + break; + case CHAR_BUFFER: + buffer_type_name = "char"; + break; + default: + buffer_type_name = "no"; + break; + } + PyErr_Format(PyExc_TypeError, + "%s buffer type not available", + buffer_type_name); + return 0; + } + if ((count = (*proc)(self->b_base, 0, ptr)) < 0) + return 0; + /* apply constraints to the start/end */ + if (self->b_offset > count) + offset = count; + else + offset = self->b_offset; + *(char **)ptr = *(char **)ptr + offset; + if (self->b_size == Py_END_OF_BUFFER) + *size = count; + else + *size = self->b_size; + if (offset + *size > count) + *size = count - offset; + } + return 1; } static PyObject * buffer_from_memory(PyObject *base, Py_ssize_t size, Py_ssize_t offset, void *ptr, - int readonly) + int readonly) { - PyBufferObject * b; + PyBufferObject * b; - if (size < 0 && size != Py_END_OF_BUFFER) { - PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); - return NULL; - } - if (offset < 0) { - PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); - return NULL; - } + if (size < 0 && size != Py_END_OF_BUFFER) { + PyErr_SetString(PyExc_ValueError, + "size must be zero or positive"); + return NULL; + } + if (offset < 0) { + PyErr_SetString(PyExc_ValueError, + "offset must be zero or positive"); + return NULL; + } - b = PyObject_NEW(PyBufferObject, &PyBuffer_Type); - if ( b == NULL ) - return NULL; + b = PyObject_NEW(PyBufferObject, &PyBuffer_Type); + if ( b == NULL ) + return NULL; - Py_XINCREF(base); - b->b_base = base; - b->b_ptr = ptr; - b->b_size = size; - 
b->b_offset = offset; - b->b_readonly = readonly; - b->b_hash = -1; + Py_XINCREF(base); + b->b_base = base; + b->b_ptr = ptr; + b->b_size = size; + b->b_offset = offset; + b->b_readonly = readonly; + b->b_hash = -1; - return (PyObject *) b; + return (PyObject *) b; } static PyObject * buffer_from_object(PyObject *base, Py_ssize_t size, Py_ssize_t offset, int readonly) { - if (offset < 0) { - PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); - return NULL; - } - if ( PyBuffer_Check(base) && (((PyBufferObject *)base)->b_base) ) { - /* another buffer, refer to the base object */ - PyBufferObject *b = (PyBufferObject *)base; - if (b->b_size != Py_END_OF_BUFFER) { - Py_ssize_t base_size = b->b_size - offset; - if (base_size < 0) - base_size = 0; - if (size == Py_END_OF_BUFFER || size > base_size) - size = base_size; - } - offset += b->b_offset; - base = b->b_base; - } - return buffer_from_memory(base, size, offset, NULL, readonly); + if (offset < 0) { + PyErr_SetString(PyExc_ValueError, + "offset must be zero or positive"); + return NULL; + } + if ( PyBuffer_Check(base) && (((PyBufferObject *)base)->b_base) ) { + /* another buffer, refer to the base object */ + PyBufferObject *b = (PyBufferObject *)base; + if (b->b_size != Py_END_OF_BUFFER) { + Py_ssize_t base_size = b->b_size - offset; + if (base_size < 0) + base_size = 0; + if (size == Py_END_OF_BUFFER || size > base_size) + size = base_size; + } + offset += b->b_offset; + base = b->b_base; + } + return buffer_from_memory(base, size, offset, NULL, readonly); } PyObject * PyBuffer_FromObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size) { - PyBufferProcs *pb = base->ob_type->tp_as_buffer; + PyBufferProcs *pb = base->ob_type->tp_as_buffer; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_SetString(PyExc_TypeError, "buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) 
+ { + PyErr_SetString(PyExc_TypeError, "buffer object expected"); + return NULL; + } - return buffer_from_object(base, size, offset, 1); + return buffer_from_object(base, size, offset, 1); } PyObject * PyBuffer_FromReadWriteObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size) { - PyBufferProcs *pb = base->ob_type->tp_as_buffer; + PyBufferProcs *pb = base->ob_type->tp_as_buffer; - if ( pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_SetString(PyExc_TypeError, "buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getwritebuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_SetString(PyExc_TypeError, "buffer object expected"); + return NULL; + } - return buffer_from_object(base, size, offset, 0); + return buffer_from_object(base, size, offset, 0); } PyObject * PyBuffer_FromMemory(void *ptr, Py_ssize_t size) { - return buffer_from_memory(NULL, size, 0, ptr, 1); + return buffer_from_memory(NULL, size, 0, ptr, 1); } PyObject * PyBuffer_FromReadWriteMemory(void *ptr, Py_ssize_t size) { - return buffer_from_memory(NULL, size, 0, ptr, 0); + return buffer_from_memory(NULL, size, 0, ptr, 0); } PyObject * PyBuffer_New(Py_ssize_t size) { - PyObject *o; - PyBufferObject * b; + PyObject *o; + PyBufferObject * b; - if (size < 0) { - PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); - return NULL; - } - if (sizeof(*b) > PY_SSIZE_T_MAX - size) { - /* unlikely */ - return PyErr_NoMemory(); - } - /* Inline PyObject_New */ - o = (PyObject *)PyObject_MALLOC(sizeof(*b) + size); - if ( o == NULL ) - return PyErr_NoMemory(); - b = (PyBufferObject *) PyObject_INIT(o, &PyBuffer_Type); + if (size < 0) { + PyErr_SetString(PyExc_ValueError, + "size must be zero or positive"); + return NULL; + } + if (sizeof(*b) > PY_SSIZE_T_MAX - size) { + /* unlikely */ + return PyErr_NoMemory(); + } + /* Inline PyObject_New */ + o = (PyObject *)PyObject_MALLOC(sizeof(*b) + size); + if ( o == NULL ) + return 
PyErr_NoMemory(); + b = (PyBufferObject *) PyObject_INIT(o, &PyBuffer_Type); - b->b_base = NULL; - b->b_ptr = (void *)(b + 1); - b->b_size = size; - b->b_offset = 0; - b->b_readonly = 0; - b->b_hash = -1; + b->b_base = NULL; + b->b_ptr = (void *)(b + 1); + b->b_size = size; + b->b_offset = 0; + b->b_readonly = 0; + b->b_hash = -1; - return o; + return o; } /* Methods */ @@ -221,19 +221,21 @@ static PyObject * buffer_new(PyTypeObject *type, PyObject *args, PyObject *kw) { - PyObject *ob; - Py_ssize_t offset = 0; - Py_ssize_t size = Py_END_OF_BUFFER; + PyObject *ob; + Py_ssize_t offset = 0; + Py_ssize_t size = Py_END_OF_BUFFER; - /*if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) - return NULL;*/ - - if (!_PyArg_NoKeywords("buffer()", kw)) - return NULL; + /* + * if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) + * return NULL; + */ - if (!PyArg_ParseTuple(args, "O|nn:buffer", &ob, &offset, &size)) - return NULL; - return PyBuffer_FromObject(ob, offset, size); + if (!_PyArg_NoKeywords("buffer()", kw)) + return NULL; + + if (!PyArg_ParseTuple(args, "O|nn:buffer", &ob, &offset, &size)) + return NULL; + return PyBuffer_FromObject(ob, offset, size); } PyDoc_STRVAR(buffer_doc, @@ -248,99 +250,99 @@ static void buffer_dealloc(PyBufferObject *self) { - Py_XDECREF(self->b_base); - PyObject_DEL(self); + Py_XDECREF(self->b_base); + PyObject_DEL(self); } static int buffer_compare(PyBufferObject *self, PyBufferObject *other) { - void *p1, *p2; - Py_ssize_t len_self, len_other, min_len; - int cmp; + void *p1, *p2; + Py_ssize_t len_self, len_other, min_len; + int cmp; - if (!get_buf(self, &p1, &len_self, ANY_BUFFER)) - return -1; - if (!get_buf(other, &p2, &len_other, ANY_BUFFER)) - return -1; - min_len = (len_self < len_other) ? len_self : len_other; - if (min_len > 0) { - cmp = memcmp(p1, p2, min_len); - if (cmp != 0) - return cmp < 0 ? -1 : 1; - } - return (len_self < len_other) ? -1 : (len_self > len_other) ? 
1 : 0; + if (!get_buf(self, &p1, &len_self, ANY_BUFFER)) + return -1; + if (!get_buf(other, &p2, &len_other, ANY_BUFFER)) + return -1; + min_len = (len_self < len_other) ? len_self : len_other; + if (min_len > 0) { + cmp = memcmp(p1, p2, min_len); + if (cmp != 0) + return cmp < 0 ? -1 : 1; + } + return (len_self < len_other) ? -1 : (len_self > len_other) ? 1 : 0; } static PyObject * buffer_repr(PyBufferObject *self) { - const char *status = self->b_readonly ? "read-only" : "read-write"; + const char *status = self->b_readonly ? "read-only" : "read-write"; if ( self->b_base == NULL ) - return PyString_FromFormat("<%s buffer ptr %p, size %zd at %p>", - status, - self->b_ptr, - self->b_size, - self); - else - return PyString_FromFormat( - "<%s buffer for %p, size %zd, offset %zd at %p>", - status, - self->b_base, - self->b_size, - self->b_offset, - self); + return PyString_FromFormat("<%s buffer ptr %p, size %zd at %p>", + status, + self->b_ptr, + self->b_size, + self); + else + return PyString_FromFormat( + "<%s buffer for %p, size %zd, offset %zd at %p>", + status, + self->b_base, + self->b_size, + self->b_offset, + self); } static long buffer_hash(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - register Py_ssize_t len; - register unsigned char *p; - register long x; + void *ptr; + Py_ssize_t size; + register Py_ssize_t len; + register unsigned char *p; + register long x; - if ( self->b_hash != -1 ) - return self->b_hash; + if ( self->b_hash != -1 ) + return self->b_hash; - /* XXX potential bugs here, a readonly buffer does not imply that the - * underlying memory is immutable. b_readonly is a necessary but not - * sufficient condition for a buffer to be hashable. Perhaps it would - * be better to only allow hashing if the underlying object is known to - * be immutable (e.g. PyString_Check() is true). Another idea would - * be to call tp_hash on the underlying object and see if it raises - * an error. 
*/ - if ( !self->b_readonly ) - { - PyErr_SetString(PyExc_TypeError, - "writable buffers are not hashable"); - return -1; - } + /* XXX potential bugs here, a readonly buffer does not imply that the + * underlying memory is immutable. b_readonly is a necessary but not + * sufficient condition for a buffer to be hashable. Perhaps it would + * be better to only allow hashing if the underlying object is known to + * be immutable (e.g. PyString_Check() is true). Another idea would + * be to call tp_hash on the underlying object and see if it raises + * an error. */ + if ( !self->b_readonly ) + { + PyErr_SetString(PyExc_TypeError, + "writable buffers are not hashable"); + return -1; + } - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - p = (unsigned char *) ptr; - len = size; - x = *p << 7; - while (--len >= 0) - x = (1000003*x) ^ *p++; - x ^= size; - if (x == -1) - x = -2; - self->b_hash = x; - return x; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return -1; + p = (unsigned char *) ptr; + len = size; + x = *p << 7; + while (--len >= 0) + x = (1000003*x) ^ *p++; + x ^= size; + if (x == -1) + x = -2; + self->b_hash = x; + return x; } static PyObject * buffer_str(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - return PyString_FromStringAndSize((const char *)ptr, size); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + return PyString_FromStringAndSize((const char *)ptr, size); } /* Sequence methods */ @@ -348,374 +350,372 @@ static Py_ssize_t buffer_length(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - return size; + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return -1; + return size; } static PyObject * buffer_concat(PyBufferObject *self, PyObject *other) { - PyBufferProcs *pb = other->ob_type->tp_as_buffer; - void *ptr1, *ptr2; - 
char *p; - PyObject *ob; - Py_ssize_t size, count; + PyBufferProcs *pb = other->ob_type->tp_as_buffer; + void *ptr1, *ptr2; + char *p; + PyObject *ob; + Py_ssize_t size, count; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return NULL; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return NULL; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? */ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return NULL; + } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return NULL; - - /* optimize special case */ - if ( size == 0 ) - { - Py_INCREF(other); - return other; - } + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return NULL; - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return NULL; + /* optimize special case */ + if ( size == 0 ) + { + Py_INCREF(other); + return other; + } - assert(count <= PY_SIZE_MAX - size); + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return NULL; - ob = PyString_FromStringAndSize(NULL, size + count); - if ( ob == NULL ) - return NULL; - p = PyString_AS_STRING(ob); - memcpy(p, ptr1, size); - memcpy(p + size, ptr2, count); + assert(count <= PY_SIZE_MAX - size); - /* there is an extra byte in the string object, so this is safe */ - p[size + count] = '\0'; + ob = PyString_FromStringAndSize(NULL, size + count); + if ( ob == NULL ) + return NULL; + p = PyString_AS_STRING(ob); + memcpy(p, ptr1, size); + memcpy(p + size, ptr2, count); - return ob; + /* there is an extra byte in the string object, so this is safe */ + p[size + count] = '\0'; + + return ob; } static PyObject * 
buffer_repeat(PyBufferObject *self, Py_ssize_t count) { - PyObject *ob; - register char *p; - void *ptr; - Py_ssize_t size; + PyObject *ob; + register char *p; + void *ptr; + Py_ssize_t size; - if ( count < 0 ) - count = 0; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if (count > PY_SSIZE_T_MAX / size) { - PyErr_SetString(PyExc_MemoryError, "result too large"); - return NULL; - } - ob = PyString_FromStringAndSize(NULL, size * count); - if ( ob == NULL ) - return NULL; + if ( count < 0 ) + count = 0; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if (count > PY_SSIZE_T_MAX / size) { + PyErr_SetString(PyExc_MemoryError, "result too large"); + return NULL; + } + ob = PyString_FromStringAndSize(NULL, size * count); + if ( ob == NULL ) + return NULL; - p = PyString_AS_STRING(ob); - while ( count-- ) - { - memcpy(p, ptr, size); - p += size; - } + p = PyString_AS_STRING(ob); + while ( count-- ) + { + memcpy(p, ptr, size); + p += size; + } - /* there is an extra byte in the string object, so this is safe */ - *p = '\0'; + /* there is an extra byte in the string object, so this is safe */ + *p = '\0'; - return ob; + return ob; } static PyObject * buffer_item(PyBufferObject *self, Py_ssize_t idx) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if ( idx < 0 || idx >= size ) { - PyErr_SetString(PyExc_IndexError, "buffer index out of range"); - return NULL; - } - return PyString_FromStringAndSize((char *)ptr + idx, 1); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if ( idx < 0 || idx >= size ) { + PyErr_SetString(PyExc_IndexError, "buffer index out of range"); + return NULL; + } + return PyString_FromStringAndSize((char *)ptr + idx, 1); } static PyObject * buffer_slice(PyBufferObject *self, Py_ssize_t left, Py_ssize_t right) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if ( left < 0 ) - left 
= 0; - if ( right < 0 ) - right = 0; - if ( right > size ) - right = size; - if ( right < left ) - right = left; - return PyString_FromStringAndSize((char *)ptr + left, - right - left); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if ( left < 0 ) + left = 0; + if ( right < 0 ) + right = 0; + if ( right > size ) + right = size; + if ( right < left ) + right = left; + return PyString_FromStringAndSize((char *)ptr + left, + right - left); } static PyObject * buffer_subscript(PyBufferObject *self, PyObject *item) { - void *p; - Py_ssize_t size; - - if (!get_buf(self, &p, &size, ANY_BUFFER)) - return NULL; - + void *p; + Py_ssize_t size; + + if (!get_buf(self, &p, &size, ANY_BUFFER)) + return NULL; if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i == -1 && PyErr_Occurred()) - return NULL; - if (i < 0) - i += size; - return buffer_item(self, i); - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelength, cur, i; + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); + if (i == -1 && PyErr_Occurred()) + return NULL; + if (i < 0) + i += size; + return buffer_item(self, i); + } + else if (PySlice_Check(item)) { + Py_ssize_t start, stop, step, slicelength, cur, i; - if (PySlice_GetIndicesEx((PySliceObject*)item, size, - &start, &stop, &step, &slicelength) < 0) { - return NULL; - } + if (PySlice_GetIndicesEx((PySliceObject*)item, size, + &start, &stop, &step, &slicelength) < 0) { + return NULL; + } - if (slicelength <= 0) - return PyString_FromStringAndSize("", 0); - else if (step == 1) - return PyString_FromStringAndSize((char *)p + start, - stop - start); - else { - PyObject *result; - char *source_buf = (char *)p; - char *result_buf = (char *)PyMem_Malloc(slicelength); + if (slicelength <= 0) + return PyString_FromStringAndSize("", 0); + else if (step == 1) + return PyString_FromStringAndSize((char *)p + start, + stop - start); + else { + PyObject 
*result; + char *source_buf = (char *)p; + char *result_buf = (char *)PyMem_Malloc(slicelength); - if (result_buf == NULL) - return PyErr_NoMemory(); + if (result_buf == NULL) + return PyErr_NoMemory(); - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - result_buf[i] = source_buf[cur]; - } + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + result_buf[i] = source_buf[cur]; + } - result = PyString_FromStringAndSize(result_buf, - slicelength); - PyMem_Free(result_buf); - return result; - } - } - else { - PyErr_SetString(PyExc_TypeError, - "sequence index must be integer"); - return NULL; - } + result = PyString_FromStringAndSize(result_buf, + slicelength); + PyMem_Free(result_buf); + return result; + } + } + else { + PyErr_SetString(PyExc_TypeError, + "sequence index must be integer"); + return NULL; + } } static int buffer_ass_item(PyBufferObject *self, Py_ssize_t idx, PyObject *other) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t size; - Py_ssize_t count; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t size; + Py_ssize_t count; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return -1; + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return -1; - if (idx < 0 || idx >= size) { - PyErr_SetString(PyExc_IndexError, - "buffer assignment index out of range"); - return -1; - } + if (idx < 0 || idx >= size) { + PyErr_SetString(PyExc_IndexError, + "buffer assignment index out of range"); + return -1; + } - pb = other ? other->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? 
*/ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } + pb = other ? other->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? */ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return -1; - if ( count != 1 ) { - PyErr_SetString(PyExc_TypeError, - "right operand must be a single byte"); - return -1; - } + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return -1; + if ( count != 1 ) { + PyErr_SetString(PyExc_TypeError, + "right operand must be a single byte"); + return -1; + } - ((char *)ptr1)[idx] = *(char *)ptr2; - return 0; + ((char *)ptr1)[idx] = *(char *)ptr2; + return 0; } static int buffer_ass_slice(PyBufferObject *self, Py_ssize_t left, Py_ssize_t right, PyObject *other) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t size; - Py_ssize_t slice_len; - Py_ssize_t count; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t size; + Py_ssize_t slice_len; + Py_ssize_t count; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - pb = other ? other->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? 
*/ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return -1; - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return -1; + pb = other ? other->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? */ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return -1; + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return -1; - if ( left < 0 ) - left = 0; - else if ( left > size ) - left = size; - if ( right < left ) - right = left; - else if ( right > size ) - right = size; - slice_len = right - left; + if ( left < 0 ) + left = 0; + else if ( left > size ) + left = size; + if ( right < left ) + right = left; + else if ( right > size ) + right = size; + slice_len = right - left; - if ( count != slice_len ) { - PyErr_SetString( - PyExc_TypeError, - "right operand length must match slice length"); - return -1; - } + if ( count != slice_len ) { + PyErr_SetString( + PyExc_TypeError, + "right operand length must match slice length"); + return -1; + } - if ( slice_len ) - memcpy((char *)ptr1 + left, ptr2, slice_len); + if ( slice_len ) + memcpy((char *)ptr1 + left, ptr2, slice_len); - return 0; + return 0; } static int buffer_ass_subscript(PyBufferObject *self, PyObject *item, PyObject *value) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t selfsize; - Py_ssize_t othersize; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t selfsize; + Py_ssize_t othersize; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is 
read-only"); + return -1; + } - pb = value ? value->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(value, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } - if (!get_buf(self, &ptr1, &selfsize, ANY_BUFFER)) - return -1; - + pb = value ? value->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(value, NULL) != 1 ) + { + /* ### use a different exception type/message? */ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } + if (!get_buf(self, &ptr1, &selfsize, ANY_BUFFER)) + return -1; if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i == -1 && PyErr_Occurred()) - return -1; - if (i < 0) - i += selfsize; - return buffer_ass_item(self, i, value); - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelength; - - if (PySlice_GetIndicesEx((PySliceObject *)item, selfsize, - &start, &stop, &step, &slicelength) < 0) - return -1; + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); + if (i == -1 && PyErr_Occurred()) + return -1; + if (i < 0) + i += selfsize; + return buffer_ass_item(self, i, value); + } + else if (PySlice_Check(item)) { + Py_ssize_t start, stop, step, slicelength; - if ((othersize = (*pb->bf_getreadbuffer)(value, 0, &ptr2)) < 0) - return -1; + if (PySlice_GetIndicesEx((PySliceObject *)item, selfsize, + &start, &stop, &step, &slicelength) < 0) + return -1; - if (othersize != slicelength) { - PyErr_SetString( - PyExc_TypeError, - "right operand length must match slice length"); - return -1; - } + if ((othersize = (*pb->bf_getreadbuffer)(value, 0, &ptr2)) < 
0) + return -1; - if (slicelength == 0) - return 0; - else if (step == 1) { - memcpy((char *)ptr1 + start, ptr2, slicelength); - return 0; - } - else { - Py_ssize_t cur, i; - - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - ((char *)ptr1)[cur] = ((char *)ptr2)[i]; - } + if (othersize != slicelength) { + PyErr_SetString( + PyExc_TypeError, + "right operand length must match slice length"); + return -1; + } - return 0; - } - } else { - PyErr_SetString(PyExc_TypeError, - "buffer indices must be integers"); - return -1; - } + if (slicelength == 0) + return 0; + else if (step == 1) { + memcpy((char *)ptr1 + start, ptr2, slicelength); + return 0; + } + else { + Py_ssize_t cur, i; + + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + ((char *)ptr1)[cur] = ((char *)ptr2)[i]; + } + + return 0; + } + } else { + PyErr_SetString(PyExc_TypeError, + "buffer indices must be integers"); + return -1; + } } /* Buffer methods */ @@ -723,64 +723,64 @@ static Py_ssize_t buffer_getreadbuf(PyBufferObject *self, Py_ssize_t idx, void **pp) { - Py_ssize_t size; - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, pp, &size, READ_BUFFER)) - return -1; - return size; + Py_ssize_t size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, pp, &size, READ_BUFFER)) + return -1; + return size; } static Py_ssize_t buffer_getwritebuf(PyBufferObject *self, Py_ssize_t idx, void **pp) { - Py_ssize_t size; + Py_ssize_t size; - if ( self->b_readonly ) - { - PyErr_SetString(PyExc_TypeError, "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) + { + PyErr_SetString(PyExc_TypeError, "buffer is read-only"); + return -1; + } - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, pp, &size, WRITE_BUFFER)) - 
return -1; - return size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, pp, &size, WRITE_BUFFER)) + return -1; + return size; } static Py_ssize_t buffer_getsegcount(PyBufferObject *self, Py_ssize_t *lenp) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - if (lenp) - *lenp = size; - return 1; + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return -1; + if (lenp) + *lenp = size; + return 1; } static Py_ssize_t buffer_getcharbuf(PyBufferObject *self, Py_ssize_t idx, const char **pp) { - void *ptr; - Py_ssize_t size; - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, &ptr, &size, CHAR_BUFFER)) - return -1; - *pp = (const char *)ptr; - return size; + void *ptr; + Py_ssize_t size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, &ptr, &size, CHAR_BUFFER)) + return -1; + *pp = (const char *)ptr; + return size; } void init_bufferobject(void) @@ -789,67 +789,65 @@ } static PySequenceMethods buffer_as_sequence = { - (lenfunc)buffer_length, /*sq_length*/ - (binaryfunc)buffer_concat, /*sq_concat*/ - (ssizeargfunc)buffer_repeat, /*sq_repeat*/ - (ssizeargfunc)buffer_item, /*sq_item*/ - (ssizessizeargfunc)buffer_slice, /*sq_slice*/ - (ssizeobjargproc)buffer_ass_item, /*sq_ass_item*/ - (ssizessizeobjargproc)buffer_ass_slice, /*sq_ass_slice*/ + (lenfunc)buffer_length, /*sq_length*/ + (binaryfunc)buffer_concat, /*sq_concat*/ + (ssizeargfunc)buffer_repeat, /*sq_repeat*/ + (ssizeargfunc)buffer_item, /*sq_item*/ + (ssizessizeargfunc)buffer_slice, /*sq_slice*/ + (ssizeobjargproc)buffer_ass_item, /*sq_ass_item*/ + (ssizessizeobjargproc)buffer_ass_slice, /*sq_ass_slice*/ }; static PyMappingMethods buffer_as_mapping = { - (lenfunc)buffer_length, 
- (binaryfunc)buffer_subscript, - (objobjargproc)buffer_ass_subscript, + (lenfunc)buffer_length, + (binaryfunc)buffer_subscript, + (objobjargproc)buffer_ass_subscript, }; static PyBufferProcs buffer_as_buffer = { - (readbufferproc)buffer_getreadbuf, - (writebufferproc)buffer_getwritebuf, - (segcountproc)buffer_getsegcount, - (charbufferproc)buffer_getcharbuf, + (readbufferproc)buffer_getreadbuf, + (writebufferproc)buffer_getwritebuf, + (segcountproc)buffer_getsegcount, + (charbufferproc)buffer_getcharbuf, }; PyTypeObject PyBuffer_Type = { - PyObject_HEAD_INIT(NULL) + PyVarObject_HEAD_INIT(NULL, 0) + "buffer", + sizeof(PyBufferObject), 0, - "buffer", - sizeof(PyBufferObject), - 0, - (destructor)buffer_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - (cmpfunc)buffer_compare, /* tp_compare */ - (reprfunc)buffer_repr, /* tp_repr */ - 0, /* tp_as_number */ - &buffer_as_sequence, /* tp_as_sequence */ - &buffer_as_mapping, /* tp_as_mapping */ - (hashfunc)buffer_hash, /* tp_hash */ - 0, /* tp_call */ - (reprfunc)buffer_str, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - &buffer_as_buffer, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GETCHARBUFFER, /* tp_flags */ - buffer_doc, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - buffer_new, /* tp_new */ + (destructor)buffer_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + (cmpfunc)buffer_compare, /* tp_compare */ + (reprfunc)buffer_repr, /* tp_repr */ + 0, /* tp_as_number */ + &buffer_as_sequence, /* tp_as_sequence */ + &buffer_as_mapping, /* tp_as_mapping */ + 
(hashfunc)buffer_hash, /* tp_hash */ + 0, /* tp_call */ + (reprfunc)buffer_str, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + &buffer_as_buffer, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GETCHARBUFFER, /* tp_flags */ + buffer_doc, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + buffer_new, /* tp_new */ }; - diff --git a/pypy/module/cpyext/src/cobject.c b/pypy/module/cpyext/src/cobject.c --- a/pypy/module/cpyext/src/cobject.c +++ b/pypy/module/cpyext/src/cobject.c @@ -50,6 +50,10 @@ PyCObject_AsVoidPtr(PyObject *self) { if (self) { + if (PyCapsule_CheckExact(self)) { + const char *name = PyCapsule_GetName(self); + return (void *)PyCapsule_GetPointer(self, name); + } if (self->ob_type == &PyCObject_Type) return ((PyCObject *)self)->cobject; PyErr_SetString(PyExc_TypeError, diff --git a/pypy/module/cpyext/src/getargs.c b/pypy/module/cpyext/src/getargs.c --- a/pypy/module/cpyext/src/getargs.c +++ b/pypy/module/cpyext/src/getargs.c @@ -7,349 +7,348 @@ #ifdef __cplusplus -extern "C" { +extern "C" { #endif - int PyArg_Parse(PyObject *, const char *, ...); int PyArg_ParseTuple(PyObject *, const char *, ...); int PyArg_VaParse(PyObject *, const char *, va_list); int PyArg_ParseTupleAndKeywords(PyObject *, PyObject *, - const char *, char **, ...); + const char *, char **, ...); int PyArg_VaParseTupleAndKeywords(PyObject *, PyObject *, - const char *, char **, va_list); + const char *, char **, va_list); #define FLAG_COMPAT 1 #define FLAG_SIZE_T 2 -typedef int (*destr_t)(PyObject *, void *); - - -/* Keep track of "objects" that have been allocated or initialized and - which will need to be deallocated or 
cleaned up somehow if overall - parsing fails. -*/ -typedef struct { - void *item; - destr_t destructor; -} freelistentry_t; - -typedef struct { - int first_available; - freelistentry_t *entries; -} freelist_t; - /* Forward */ static int vgetargs1(PyObject *, const char *, va_list *, int); static void seterror(int, const char *, int *, const char *, const char *); -static char *convertitem(PyObject *, const char **, va_list *, int, int *, - char *, size_t, freelist_t *); +static char *convertitem(PyObject *, const char **, va_list *, int, int *, + char *, size_t, PyObject **); static char *converttuple(PyObject *, const char **, va_list *, int, - int *, char *, size_t, int, freelist_t *); + int *, char *, size_t, int, PyObject **); static char *convertsimple(PyObject *, const char **, va_list *, int, char *, - size_t, freelist_t *); + size_t, PyObject **); static Py_ssize_t convertbuffer(PyObject *, void **p, char **); static int getbuffer(PyObject *, Py_buffer *, char**); static int vgetargskeywords(PyObject *, PyObject *, - const char *, char **, va_list *, int); + const char *, char **, va_list *, int); static char *skipitem(const char **, va_list *, int); int PyArg_Parse(PyObject *args, const char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_COMPAT); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_COMPAT); + va_end(va); + return retval; } int _PyArg_Parse_SizeT(PyObject *args, char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_ParseTuple(PyObject *args, const char *format, ...) 
 {
-	int retval;
-	va_list va;
-
-	va_start(va, format);
-	retval = vgetargs1(args, format, &va, 0);
-	va_end(va);
-	return retval;
+    int retval;
+    va_list va;
+
+    va_start(va, format);
+    retval = vgetargs1(args, format, &va, 0);
+    va_end(va);
+    return retval;
 }

 int
 _PyArg_ParseTuple_SizeT(PyObject *args, char *format, ...)
 {
-	int retval;
-	va_list va;
-
-	va_start(va, format);
-	retval = vgetargs1(args, format, &va, FLAG_SIZE_T);
-	va_end(va);
-	return retval;
+    int retval;
+    va_list va;
+
+    va_start(va, format);
+    retval = vgetargs1(args, format, &va, FLAG_SIZE_T);
+    va_end(va);
+    return retval;
 }

 int
 PyArg_VaParse(PyObject *args, const char *format, va_list va)
 {
-	va_list lva;
+    va_list lva;

 #ifdef VA_LIST_IS_ARRAY
-	memcpy(lva, va, sizeof(va_list));
+    memcpy(lva, va, sizeof(va_list));
 #else
 #ifdef __va_copy
-	__va_copy(lva, va);
+    __va_copy(lva, va);
 #else
-	lva = va;
+    lva = va;
 #endif
 #endif

-	return vgetargs1(args, format, &lva, 0);
+    return vgetargs1(args, format, &lva, 0);
 }

 int
 _PyArg_VaParse_SizeT(PyObject *args, char *format, va_list va)
 {
-	va_list lva;
+    va_list lva;

 #ifdef VA_LIST_IS_ARRAY
-	memcpy(lva, va, sizeof(va_list));
+    memcpy(lva, va, sizeof(va_list));
 #else
 #ifdef __va_copy
-	__va_copy(lva, va);
+    __va_copy(lva, va);
 #else
-	lva = va;
+    lva = va;
 #endif
 #endif

-	return vgetargs1(args, format, &lva, FLAG_SIZE_T);
+    return vgetargs1(args, format, &lva, FLAG_SIZE_T);
 }


 /* Handle cleanup of allocated memory in case of exception */

+#define GETARGS_CAPSULE_NAME_CLEANUP_PTR "getargs.cleanup_ptr"
+#define GETARGS_CAPSULE_NAME_CLEANUP_BUFFER "getargs.cleanup_buffer"
+
+static void
+cleanup_ptr(PyObject *self)
+{
+    void *ptr = PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_PTR);
+    if (ptr) {
+        PyMem_FREE(ptr);
+    }
+}
+
+static void
+cleanup_buffer(PyObject *self)
+{
+    Py_buffer *ptr = (Py_buffer *)PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_BUFFER);
+    if (ptr) {
+        PyBuffer_Release(ptr);
+    }
+}
+
 static int
-cleanup_ptr(PyObject *self, void *ptr)
+addcleanup(void *ptr, PyObject **freelist, PyCapsule_Destructor destr)
 {
-	if (ptr) {
-		PyMem_FREE(ptr);
+    PyObject *cobj;
+    const char *name;
+
+    if (!*freelist) {
+        *freelist = PyList_New(0);
+        if (!*freelist) {
+            destr(ptr);
+            return -1;
+        }
     }
+
+    if (destr == cleanup_ptr) {
+        name = GETARGS_CAPSULE_NAME_CLEANUP_PTR;
+    } else if (destr == cleanup_buffer) {
+        name = GETARGS_CAPSULE_NAME_CLEANUP_BUFFER;
+    } else {
+        return -1;
+    }
+    cobj = PyCapsule_New(ptr, name, destr);
+    if (!cobj) {
+        destr(ptr);
+        return -1;
+    }
+    if (PyList_Append(*freelist, cobj)) {
+        Py_DECREF(cobj);
+        return -1;
+    }
+    Py_DECREF(cobj);
     return 0;
 }

 static int
-cleanup_buffer(PyObject *self, void *ptr)
+cleanreturn(int retval, PyObject *freelist)
 {
-	Py_buffer *buf = (Py_buffer *)ptr;
-	if (buf) {
-		PyBuffer_Release(buf);
+    if (freelist && retval != 0) {
+        /* We were successful, reset the destructors so that they
+           don't get called. */
+        Py_ssize_t len = PyList_GET_SIZE(freelist), i;
+        for (i = 0; i < len; i++)
+            PyCapsule_SetDestructor(PyList_GET_ITEM(freelist, i), NULL);
     }
-	return 0;
+    Py_XDECREF(freelist);
+    return retval;
 }

-static int
-addcleanup(void *ptr, freelist_t *freelist, destr_t destructor)
-{
-	int index;
-
-	index = freelist->first_available;
-	freelist->first_available += 1;
-
-	freelist->entries[index].item = ptr;
-	freelist->entries[index].destructor = destructor;
-
-	return 0;
-}
-
-static int
-cleanreturn(int retval, freelist_t *freelist)
-{
-	int index;
-
-	if (retval == 0) {
-		/* A failure occurred, therefore execute all of the cleanup
-		   functions.
-		 */
-		for (index = 0; index < freelist->first_available; ++index) {
-			freelist->entries[index].destructor(NULL,
-					freelist->entries[index].item);
-		}
-	}
-	PyMem_Free(freelist->entries);
-	return retval;
-}

 static int
 vgetargs1(PyObject *args, const char *format, va_list *p_va, int flags)
 {
-	char msgbuf[256];
-	int levels[32];
-	const char *fname = NULL;
-	const char *message = NULL;
-	int min = -1;
-	int max = 0;
-	int level = 0;
-	int endfmt = 0;
-	const char *formatsave = format;
-	Py_ssize_t i, len;
-	char *msg;
-	freelist_t freelist = {0, NULL};
-	int compat = flags & FLAG_COMPAT;
+    char msgbuf[256];
+    int levels[32];
+    const char *fname = NULL;
+    const char *message = NULL;
+    int min = -1;
+    int max = 0;
+    int level = 0;
+    int endfmt = 0;
+    const char *formatsave = format;
+    Py_ssize_t i, len;
+    char *msg;
+    PyObject *freelist = NULL;
+    int compat = flags & FLAG_COMPAT;

-	assert(compat || (args != (PyObject*)NULL));
-	flags = flags & ~FLAG_COMPAT;
+    assert(compat || (args != (PyObject*)NULL));
+    flags = flags & ~FLAG_COMPAT;

-	while (endfmt == 0) {
-		int c = *format++;
-		switch (c) {
-		case '(':
-			if (level == 0)
-				max++;
-			level++;
-			if (level >= 30)
-				Py_FatalError("too many tuple nesting levels "
-					      "in argument format string");
-			break;
-		case ')':
-			if (level == 0)
-				Py_FatalError("excess ')' in getargs format");
-			else
-				level--;
-			break;
-		case '\0':
-			endfmt = 1;
-			break;
-		case ':':
-			fname = format;
-			endfmt = 1;
-			break;
-		case ';':
-			message = format;
-			endfmt = 1;
-			break;
-		default:
-			if (level == 0) {
-				if (c == 'O')
-					max++;
-				else if (isalpha(Py_CHARMASK(c))) {
-					if (c != 'e') /* skip encoded */
-						max++;
-				} else if (c == '|')
-					min = max;
-			}
-			break;
-		}
-	}
-
-	if (level != 0)
-		Py_FatalError(/* '(' */ "missing ')' in getargs format");
-
-	if (min < 0)
-		min = max;
-
-	format = formatsave;
-
-	freelist.entries = PyMem_New(freelistentry_t, max);
+    while (endfmt == 0) {
+        int c = *format++;
+        switch (c) {
+        case '(':
+            if (level == 0)
+                max++;
+            level++;
+            if (level >= 30)
+                Py_FatalError("too many tuple nesting levels "
+                              "in argument format string");
+            break;
+        case ')':
+            if (level == 0)
+                Py_FatalError("excess ')' in getargs format");
+            else
+                level--;
+            break;
+        case '\0':
+            endfmt = 1;
+            break;
+        case ':':
+            fname = format;
+            endfmt = 1;
+            break;
+        case ';':
+            message = format;
+            endfmt = 1;
+            break;
+        default:
+            if (level == 0) {
+                if (c == 'O')
+                    max++;
+                else if (isalpha(Py_CHARMASK(c))) {
+                    if (c != 'e') /* skip encoded */
+                        max++;
+                } else if (c == '|')
+                    min = max;
+            }
+            break;
+        }
+    }

-	if (compat) {
-		if (max == 0) {
-			if (args == NULL)
-				return cleanreturn(1, &freelist);
-			PyOS_snprintf(msgbuf, sizeof(msgbuf),
-				      "%.200s%s takes no arguments",
-				      fname==NULL ? "function" : fname,
-				      fname==NULL ? "" : "()");
-			PyErr_SetString(PyExc_TypeError, msgbuf);
-			return cleanreturn(0, &freelist);
-		}
-		else if (min == 1 && max == 1) {
-			if (args == NULL) {
-				PyOS_snprintf(msgbuf, sizeof(msgbuf),
-					      "%.200s%s takes at least one argument",
-					      fname==NULL ? "function" : fname,
-					      fname==NULL ? "" : "()");
-				PyErr_SetString(PyExc_TypeError, msgbuf);
-				return cleanreturn(0, &freelist);
-			}
-			msg = convertitem(args, &format, p_va, flags, levels,
-					  msgbuf, sizeof(msgbuf), &freelist);
-			if (msg == NULL)
-				return cleanreturn(1, &freelist);
-			seterror(levels[0], msg, levels+1, fname, message);
-			return cleanreturn(0, &freelist);
-		}
-		else {
-			PyErr_SetString(PyExc_SystemError,
-			    "old style getargs format uses new features");
-			return cleanreturn(0, &freelist);
-		}
-	}
-
-	if (!PyTuple_Check(args)) {
-		PyErr_SetString(PyExc_SystemError,
-		    "new style getargs format but argument is not a tuple");
-		return cleanreturn(0, &freelist);
-	}
-
-	len = PyTuple_GET_SIZE(args);
-
-	if (len < min || max < len) {
-		if (message == NULL) {
-			PyOS_snprintf(msgbuf, sizeof(msgbuf),
-				      "%.150s%s takes %s %d argument%s "
-				      "(%ld given)",
-				      fname==NULL ? "function" : fname,
-				      fname==NULL ? "" : "()",
-				      min==max ? "exactly"
-				      : len < min ? "at least" : "at most",
-				      len < min ? min : max,
-				      (len < min ? min : max) == 1 ? "" : "s",
-				      Py_SAFE_DOWNCAST(len, Py_ssize_t, long));
-			message = msgbuf;
-		}
-		PyErr_SetString(PyExc_TypeError, message);
-		return cleanreturn(0, &freelist);
-	}
-
-	for (i = 0; i < len; i++) {
-		if (*format == '|')
-			format++;
-		msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va,
-				  flags, levels, msgbuf,
-				  sizeof(msgbuf), &freelist);
-		if (msg) {
-			seterror(i+1, msg, levels, fname, message);
-			return cleanreturn(0, &freelist);
-		}
-	}
+    if (level != 0)
+        Py_FatalError(/* '(' */ "missing ')' in getargs format");

-	if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) &&
-	    *format != '(' &&
-	    *format != '|' && *format != ':' && *format != ';') {
-		PyErr_Format(PyExc_SystemError,
-			     "bad format string: %.200s", formatsave);
-		return cleanreturn(0, &freelist);
-	}
-
-	return cleanreturn(1, &freelist);
+    if (min < 0)
+        min = max;
+
+    format = formatsave;
+
+    if (compat) {
+        if (max == 0) {
+            if (args == NULL)
+                return 1;
+            PyOS_snprintf(msgbuf, sizeof(msgbuf),
+                          "%.200s%s takes no arguments",
+                          fname==NULL ? "function" : fname,
+                          fname==NULL ? "" : "()");
+            PyErr_SetString(PyExc_TypeError, msgbuf);
+            return 0;
+        }
+        else if (min == 1 && max == 1) {
+            if (args == NULL) {
+                PyOS_snprintf(msgbuf, sizeof(msgbuf),
+                              "%.200s%s takes at least one argument",
+                              fname==NULL ? "function" : fname,
+                              fname==NULL ? "" : "()");
+                PyErr_SetString(PyExc_TypeError, msgbuf);
+                return 0;
+            }
+            msg = convertitem(args, &format, p_va, flags, levels,
+                              msgbuf, sizeof(msgbuf), &freelist);
+            if (msg == NULL)
+                return cleanreturn(1, freelist);
+            seterror(levels[0], msg, levels+1, fname, message);
+            return cleanreturn(0, freelist);
+        }
+        else {
+            PyErr_SetString(PyExc_SystemError,
+                "old style getargs format uses new features");
+            return 0;
+        }
+    }
+
+    if (!PyTuple_Check(args)) {
+        PyErr_SetString(PyExc_SystemError,
+            "new style getargs format but argument is not a tuple");
+        return 0;
+    }
+
+    len = PyTuple_GET_SIZE(args);
+
+    if (len < min || max < len) {
+        if (message == NULL) {
+            PyOS_snprintf(msgbuf, sizeof(msgbuf),
+                          "%.150s%s takes %s %d argument%s "
+                          "(%ld given)",
+                          fname==NULL ? "function" : fname,
+                          fname==NULL ? "" : "()",
+                          min==max ? "exactly"
+                          : len < min ? "at least" : "at most",
+                          len < min ? min : max,
+                          (len < min ? min : max) == 1 ? "" : "s",
+                          Py_SAFE_DOWNCAST(len, Py_ssize_t, long));
+            message = msgbuf;
+        }
+        PyErr_SetString(PyExc_TypeError, message);
+        return 0;
+    }
+
+    for (i = 0; i < len; i++) {
+        if (*format == '|')
+            format++;
+        msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va,
+                          flags, levels, msgbuf,
+                          sizeof(msgbuf), &freelist);
+        if (msg) {
+            seterror(i+1, msg, levels, fname, message);
+            return cleanreturn(0, freelist);
+        }
+    }
+
+    if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) &&
+        *format != '(' &&
+        *format != '|' && *format != ':' && *format != ';') {
+        PyErr_Format(PyExc_SystemError,
+                     "bad format string: %.200s", formatsave);
+        return cleanreturn(0, freelist);
+    }
+
+    return cleanreturn(1, freelist);
 }

@@ -358,37 +357,37 @@
 seterror(int iarg, const char *msg, int *levels, const char *fname,
          const char *message)
 {
-	char buf[512];
-	int i;
-	char *p = buf;
+    char buf[512];
+    int i;
+    char *p = buf;

-	if (PyErr_Occurred())
-		return;
-	else if (message == NULL) {
-		if (fname != NULL) {
-			PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname);
-			p += strlen(p);
-		}
-		if (iarg != 0) {
-			PyOS_snprintf(p, sizeof(buf) - (p - buf),
-				      "argument %d", iarg);
-			i = 0;
-			p += strlen(p);
-			while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) {
-				PyOS_snprintf(p, sizeof(buf) - (p - buf),
-					      ", item %d", levels[i]-1);
-				p += strlen(p);
-				i++;
-			}
-		}
-		else {
-			PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument");
-			p += strlen(p);
-		}
-		PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg);
-		message = buf;
-	}
-	PyErr_SetString(PyExc_TypeError, message);
+    if (PyErr_Occurred())
+        return;
+    else if (message == NULL) {
+        if (fname != NULL) {
+            PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname);
+            p += strlen(p);
+        }
+        if (iarg != 0) {
+            PyOS_snprintf(p, sizeof(buf) - (p - buf),
+                          "argument %d", iarg);
+            i = 0;
+            p += strlen(p);
+            while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) {
+                PyOS_snprintf(p, sizeof(buf) - (p - buf),
+                              ", item %d", levels[i]-1);
+                p += strlen(p);
+                i++;
+            }
+        }
+        else {
+            PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument");
+            p += strlen(p);
+        }
+        PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg);
+        message = buf;
+    }
+    PyErr_SetString(PyExc_TypeError, message);
 }

@@ -404,85 +403,84 @@
       *p_va is undefined,
       *levels is a 0-terminated list of item numbers,
       *msgbuf contains an error message, whose format is:
-	 "must be <desired type>, not <actual type>", where:
-	    <desired type> is the name of the expected type, and
-	    <actual type> is the name of the actual type,
+        "must be <desired type>, not <actual type>", where:
+           <desired type> is the name of the expected type, and
+           <actual type> is the name of the actual type,
       and msgbuf is returned.
 */

 static char *
 converttuple(PyObject *arg, const char **p_format, va_list *p_va, int flags,
-             int *levels, char *msgbuf, size_t bufsize, int toplevel,
-             freelist_t *freelist)
+             int *levels, char *msgbuf, size_t bufsize, int toplevel,
+             PyObject **freelist)
 {
-	int level = 0;
-	int n = 0;
-	const char *format = *p_format;
-	int i;
-
-	for (;;) {
-		int c = *format++;
-		if (c == '(') {
-			if (level == 0)
-				n++;
-			level++;
-		}
-		else if (c == ')') {
-			if (level == 0)
-				break;
-			level--;
-		}
-		else if (c == ':' || c == ';' || c == '\0')
-			break;
-		else if (level == 0 && isalpha(Py_CHARMASK(c)))
-			n++;
-	}
-
-	if (!PySequence_Check(arg) || PyString_Check(arg)) {
-		levels[0] = 0;
-		PyOS_snprintf(msgbuf, bufsize,
-			      toplevel ? "expected %d arguments, not %.50s" :
-			      "must be %d-item sequence, not %.50s",
-			      n,
-			      arg == Py_None ? "None" : arg->ob_type->tp_name);
-		return msgbuf;
-	}
-
-	if ((i = PySequence_Size(arg)) != n) {
-		levels[0] = 0;
-		PyOS_snprintf(msgbuf, bufsize,
-			      toplevel ? "expected %d arguments, not %d" :
-			      "must be sequence of length %d, not %d",
-			      n, i);
-		return msgbuf;
-	}
+    int level = 0;
+    int n = 0;
+    const char *format = *p_format;
+    int i;

-	format = *p_format;
-	for (i = 0; i < n; i++) {
-		char *msg;
-		PyObject *item;
+    for (;;) {
+        int c = *format++;
+        if (c == '(') {
+            if (level == 0)
+                n++;
+            level++;
+        }
+        else if (c == ')') {
+            if (level == 0)
+                break;
+            level--;
+        }
+        else if (c == ':' || c == ';' || c == '\0')
+            break;
+        else if (level == 0 && isalpha(Py_CHARMASK(c)))
+            n++;
+    }
+
+    if (!PySequence_Check(arg) || PyString_Check(arg)) {
+        levels[0] = 0;
+        PyOS_snprintf(msgbuf, bufsize,
+                      toplevel ? "expected %d arguments, not %.50s" :
+                      "must be %d-item sequence, not %.50s",
+                      n,
+                      arg == Py_None ? "None" : arg->ob_type->tp_name);
+        return msgbuf;
+    }
+
+    if ((i = PySequence_Size(arg)) != n) {
+        levels[0] = 0;
+        PyOS_snprintf(msgbuf, bufsize,
+                      toplevel ? "expected %d arguments, not %d" :
+                      "must be sequence of length %d, not %d",
+                      n, i);
+        return msgbuf;
+    }
+
+    format = *p_format;
+    for (i = 0; i < n; i++) {
+        char *msg;
+        PyObject *item;
         item = PySequence_GetItem(arg, i);
-		if (item == NULL) {
-			PyErr_Clear();
-			levels[0] = i+1;
-			levels[1] = 0;
-			strncpy(msgbuf, "is not retrievable",
-				bufsize);
-			return msgbuf;
-		}
-		PyPy_Borrow(arg, item);
-		msg = convertitem(item, &format, p_va, flags, levels+1,
-				  msgbuf, bufsize, freelist);
+        if (item == NULL) {
+            PyErr_Clear();
+            levels[0] = i+1;
+            levels[1] = 0;
+            strncpy(msgbuf, "is not retrievable", bufsize);
+            return msgbuf;
+        }
+        PyPy_Borrow(arg, item);
+        msg = convertitem(item, &format, p_va, flags, levels+1,
+                          msgbuf, bufsize, freelist);
         /* PySequence_GetItem calls tp->sq_item, which INCREFs */
         Py_XDECREF(item);
-		if (msg != NULL) {
-			levels[0] = i+1;
-			return msg;
-		}
-	}
+        if (msg != NULL) {
+            levels[0] = i+1;
+            return msg;
+        }
+    }

-	*p_format = format;
-	return NULL;
+    *p_format = format;
+    return NULL;
 }

@@ -490,45 +488,45 @@
 static char *
 convertitem(PyObject *arg, const char **p_format, va_list *p_va, int flags,
-            int *levels, char *msgbuf, size_t bufsize, freelist_t *freelist)
+            int *levels, char *msgbuf, size_t bufsize, PyObject **freelist)
 {
-	char *msg;
-	const char *format = *p_format;
-
-	if (*format == '(' /* ')' */) {
-		format++;
-		msg = converttuple(arg, &format, p_va, flags, levels, msgbuf,
-				   bufsize, 0, freelist);
-		if (msg == NULL)
-			format++;
-	}
-	else {
-		msg = convertsimple(arg, &format, p_va, flags,
-				    msgbuf, bufsize, freelist);
-		if (msg != NULL)
-			levels[0] = 0;
-	}
-	if (msg == NULL)
-		*p_format = format;
-	return msg;
+    char *msg;
+    const char *format = *p_format;
+
+    if (*format == '(' /* ')' */) {
+        format++;
+        msg = converttuple(arg, &format, p_va, flags, levels, msgbuf,
+                           bufsize, 0, freelist);
+        if (msg == NULL)
+            format++;
+    }
+    else {
+        msg = convertsimple(arg, &format, p_va, flags,
+                            msgbuf, bufsize, freelist);
+        if (msg != NULL)
+            levels[0] = 0;
+    }
+    if (msg == NULL)
+        *p_format = format;
+    return msg;
 }


 #define UNICODE_DEFAULT_ENCODING(arg) \
-	_PyUnicode_AsDefaultEncodedString(arg, NULL)
+    _PyUnicode_AsDefaultEncodedString(arg, NULL)

 /* Format an error message generated by convertsimple(). */

 static char *
 converterr(const char *expected, PyObject *arg, char *msgbuf, size_t bufsize)
 {
-	assert(expected != NULL);
-	assert(arg != NULL);
-	PyOS_snprintf(msgbuf, bufsize,
-		      "must be %.50s, not %.50s", expected,
-		      arg == Py_None ? "None" : arg->ob_type->tp_name);
-	return msgbuf;
+    assert(expected != NULL);
+    assert(arg != NULL);
+    PyOS_snprintf(msgbuf, bufsize,
+                  "must be %.50s, not %.50s", expected,
+                  arg == Py_None ? "None" : arg->ob_type->tp_name);
+    return msgbuf;
 }

 #define CONV_UNICODE "(unicode conversion error)"

@@ -536,14 +534,28 @@
 /* explicitly check for float arguments when integers are expected.  For now
  * signal a warning.  Returns true if an exception was raised. */
 static int
+float_argument_warning(PyObject *arg)
+{
+    if (PyFloat_Check(arg) &&
+        PyErr_Warn(PyExc_DeprecationWarning,
+                   "integer argument expected, got float" ))
+        return 1;
+    else
+        return 0;
+}
+
+/* explicitly check for float arguments when integers are expected.  Raises
+   TypeError and returns true for float arguments. */
+static int
 float_argument_error(PyObject *arg)
 {
-	if (PyFloat_Check(arg) &&
-	    PyErr_Warn(PyExc_DeprecationWarning,
-		       "integer argument expected, got float" ))
-		return 1;
-	else
-		return 0;
+    if (PyFloat_Check(arg)) {
+        PyErr_SetString(PyExc_TypeError,
+                        "integer argument expected, got float");
+        return 1;
+    }
+    else
+        return 0;
 }

 /* Convert a non-tuple argument.  Return NULL if conversion went OK,
@@ -557,836 +569,839 @@
 static char *
 convertsimple(PyObject *arg, const char **p_format, va_list *p_va, int flags,
-              char *msgbuf, size_t bufsize, freelist_t *freelist)
+              char *msgbuf, size_t bufsize, PyObject **freelist)
 {
-	/* For # codes */
-#define FETCH_SIZE int *q=NULL;Py_ssize_t *q2=NULL;\
-	if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \
-	else q=va_arg(*p_va, int*);
-#define STORE_SIZE(s) if (flags & FLAG_SIZE_T) *q2=s; else *q=s;
+    /* For # codes */
+#define FETCH_SIZE int *q=NULL;Py_ssize_t *q2=NULL;\
+    if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \
+    else q=va_arg(*p_va, int*);
+#define STORE_SIZE(s) \
+    if (flags & FLAG_SIZE_T) \
+        *q2=s; \
+    else { \
+        if (INT_MAX < s) { \
+            PyErr_SetString(PyExc_OverflowError, \
+                "size does not fit in an int"); \
+            return converterr("", arg, msgbuf, bufsize); \
+        } \
+        *q=s; \
+    }
 #define BUFFER_LEN ((flags & FLAG_SIZE_T) ? *q2:*q)

-	const char *format = *p_format;
-	char c = *format++;
+    const char *format = *p_format;
+    char c = *format++;
 #ifdef Py_USING_UNICODE
-	PyObject *uarg;
-#endif
-
-	switch (c) {
-
-	case 'b': { /* unsigned byte -- very short int */
-		char *p = va_arg(*p_va, char *);
-		long ival;
-		if (float_argument_error(arg))
-			return converterr("integer", arg, msgbuf, bufsize);
-		ival = PyInt_AsLong(arg);
-		if (ival == -1 && PyErr_Occurred())
-			return converterr("integer", arg, msgbuf, bufsize);
-		else if (ival < 0) {
-			PyErr_SetString(PyExc_OverflowError,
-			"unsigned byte integer is less than minimum");
-			return converterr("integer", arg, msgbuf, bufsize);
-		}
-		else if (ival > UCHAR_MAX) {
-			PyErr_SetString(PyExc_OverflowError,
-			"unsigned byte integer is greater than maximum");
-			return converterr("integer", arg, msgbuf, bufsize);
-		}
-		else
-			*p = (unsigned char) ival;
-		break;
-	}
-
-	case 'B': {/* byte sized bitfield - both signed and unsigned
-		      values allowed */
-		char *p = va_arg(*p_va, char *);
-		long ival;
-		if (float_argument_error(arg))
-			return converterr("integer", arg, msgbuf, bufsize);
-		ival = PyInt_AsUnsignedLongMask(arg);
-		if (ival == -1 && PyErr_Occurred())
-			return converterr("integer", arg, msgbuf, bufsize);
-		else
-			*p = (unsigned char) ival;
-		break;
-	}
-
-	case 'h': {/* signed short int */
-		short *p = va_arg(*p_va, short *);
-		long ival;
-		if (float_argument_error(arg))
-			return converterr("integer", arg, msgbuf, bufsize);
-		ival = PyInt_AsLong(arg);
-		if (ival == -1 && PyErr_Occurred())
-			return converterr("integer", arg, msgbuf, bufsize);
-		else if (ival < SHRT_MIN) {
-			PyErr_SetString(PyExc_OverflowError,
-			"signed short integer is less than minimum");
-			return converterr("integer", arg, msgbuf, bufsize);
-		}
-		else if (ival > SHRT_MAX) {
-			PyErr_SetString(PyExc_OverflowError,
-			"signed short integer is greater than maximum");
-			return converterr("integer", arg, msgbuf, bufsize);
-		}
-		else
-			*p = (short) ival;
-		break;
-	}
-
-	case 'H': { /* short int sized bitfield, both signed and
-		       unsigned allowed */
-		unsigned short *p = va_arg(*p_va, unsigned short *);
-		long ival;
-		if (float_argument_error(arg))
-			return converterr("integer", arg, msgbuf, bufsize);
-		ival = PyInt_AsUnsignedLongMask(arg);
-		if (ival == -1 && PyErr_Occurred())
-			return converterr("integer", arg, msgbuf, bufsize);
-		else
-			*p = (unsigned short) ival;
-		break;
-	}
-	case 'i': {/* signed int */
-		int *p = va_arg(*p_va, int *);
-		long ival;
-		if (float_argument_error(arg))
-			return converterr("integer", arg, msgbuf, bufsize);
-		ival = PyInt_AsLong(arg);
-		if (ival == -1 && PyErr_Occurred())
-			return converterr("integer", arg, msgbuf, bufsize);
-		else if (ival > INT_MAX) {
-			PyErr_SetString(PyExc_OverflowError,
-			    "signed integer is greater than maximum");
-			return converterr("integer", arg, msgbuf, bufsize);
-		}
-		else if (ival < INT_MIN) {
-			PyErr_SetString(PyExc_OverflowError,
-			    "signed integer is less than minimum");
-			return converterr("integer", arg, msgbuf, bufsize);
-		}
-		else
-			*p = ival;
-		break;
-	}
-	case 'I': { /* int sized bitfield, both signed and
-		       unsigned allowed */
-		unsigned int *p = va_arg(*p_va, unsigned int *);
-		unsigned int ival;
-		if (float_argument_error(arg))
-			return converterr("integer", arg, msgbuf, bufsize);
-		ival = (unsigned int)PyInt_AsUnsignedLongMask(arg);
-		if (ival == (unsigned int)-1 && PyErr_Occurred())
-			return converterr("integer", arg, msgbuf, bufsize);
-		else
-			*p = ival;
-		break;
-	}
-	case 'n': /* Py_ssize_t */
-#if SIZEOF_SIZE_T != SIZEOF_LONG
-	{
-		Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *);
-		Py_ssize_t ival;
-		if (float_argument_error(arg))
-			return converterr("integer", arg, msgbuf, bufsize);
-		ival = PyInt_AsSsize_t(arg);
-		if (ival == -1 && PyErr_Occurred())
-			return converterr("integer", arg, msgbuf, bufsize);
-		*p = ival;
-		break;
-	}
-#endif
-	/* Fall through from 'n' to 'l' if Py_ssize_t is int */
-	case 'l': {/* long int */
-		long *p = va_arg(*p_va, long *);
-		long ival;
-		if (float_argument_error(arg))
-			return converterr("integer", arg, msgbuf, bufsize);
-		ival = PyInt_AsLong(arg);
-		if (ival == -1 && PyErr_Occurred())
-			return converterr("integer", arg, msgbuf, bufsize);
-		else
-			*p = ival;
-		break;
-	}
-
-	case 'k': { /* long sized bitfield */
-		unsigned long *p = va_arg(*p_va, unsigned long *);
-		unsigned long ival;
-		if (PyInt_Check(arg))
-			ival = PyInt_AsUnsignedLongMask(arg);
-		else if (PyLong_Check(arg))
-			ival = PyLong_AsUnsignedLongMask(arg);
-		else
-			return converterr("integer", arg, msgbuf, bufsize);
-		*p = ival;
-		break;
-	}
-
-#ifdef HAVE_LONG_LONG
-	case 'L': {/* PY_LONG_LONG */
-		PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * );
-		PY_LONG_LONG ival = PyLong_AsLongLong( arg );
-		if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) {
-			return converterr("long", arg, msgbuf, bufsize);
-		} else {
-			*p = ival;
-		}
-		break;
-	}
-
-	case 'K': { /* long long sized bitfield */
-		unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *);
-		unsigned PY_LONG_LONG ival;
-		if (PyInt_Check(arg))
-			ival = PyInt_AsUnsignedLongMask(arg);
-		else if (PyLong_Check(arg))
-			ival = PyLong_AsUnsignedLongLongMask(arg);
-		else
-			return converterr("integer", arg, msgbuf, bufsize);
-		*p = ival;
-		break;
-	}
-#endif // HAVE_LONG_LONG
-
-	case 'f': {/* float */
-		float *p = va_arg(*p_va, float *);
-		double dval = PyFloat_AsDouble(arg);
-		if (PyErr_Occurred())
-			return converterr("float", arg, msgbuf, bufsize);
-		else
-			*p = (float) dval;
-		break;
-	}
-
-	case 'd': {/* double */
-		double *p = va_arg(*p_va, double *);
-		double dval = PyFloat_AsDouble(arg);
-		if (PyErr_Occurred())
-			return converterr("float", arg, msgbuf, bufsize);
-		else
-			*p = dval;
-		break;
-	}
-
-#ifndef WITHOUT_COMPLEX
-	case 'D': {/* complex double */
-		Py_complex *p = va_arg(*p_va, Py_complex *);
-		Py_complex cval;
-		cval = PyComplex_AsCComplex(arg);
-		if (PyErr_Occurred())
-			return converterr("complex", arg, msgbuf, bufsize);
-		else
-			*p = cval;
-		break;
-	}
-#endif /* WITHOUT_COMPLEX */
-
-	case 'c': {/* char */
-		char *p = va_arg(*p_va, char *);
-		if (PyString_Check(arg) && PyString_Size(arg) == 1)
-			*p = PyString_AS_STRING(arg)[0];
-		else
-			return converterr("char", arg, msgbuf, bufsize);
-		break;
-	}
-	case 's': {/* string */
-		if (*format == '*') {
-			Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
-
-			if (PyString_Check(arg)) {
-				fflush(stdout);
-				PyBuffer_FillInfo(p, arg,
-					PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
-					1, 0);
-			}
-#ifdef Py_USING_UNICODE
-			else if (PyUnicode_Check(arg)) {
-#if 0
-				uarg = UNICODE_DEFAULT_ENCODING(arg);
-				if (uarg == NULL)
-					return converterr(CONV_UNICODE,
-							  arg, msgbuf, bufsize);
-				PyBuffer_FillInfo(p, arg,
-					PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
-					1, 0);
-#else
-				return converterr("string or buffer", arg, msgbuf, bufsize);
-#endif
-			}
-#endif
-			else { /* any buffer-like object */
-				char *buf;
-				if (getbuffer(arg, p, &buf) < 0)
-					return converterr(buf, arg, msgbuf, bufsize);
-			}
-			if (addcleanup(p, freelist, cleanup_buffer)) {
-				return converterr(
-					"(cleanup problem)",
-					arg, msgbuf, bufsize);
-			}
-			format++;
-		} else if (*format == '#') {
-			void **p = (void **)va_arg(*p_va, char **);
-			FETCH_SIZE;
-
-			if (PyString_Check(arg)) {
-				*p = PyString_AS_STRING(arg);
-				STORE_SIZE(PyString_GET_SIZE(arg));
-			}
-#ifdef Py_USING_UNICODE
-			else if (PyUnicode_Check(arg)) {
-				uarg = UNICODE_DEFAULT_ENCODING(arg);
-				if (uarg == NULL)
-					return converterr(CONV_UNICODE,
-							  arg, msgbuf, bufsize);
-				*p = PyString_AS_STRING(uarg);
-				STORE_SIZE(PyString_GET_SIZE(uarg));
-			}
-#endif
-			else { /* any buffer-like object */
-				char *buf;
-				Py_ssize_t count = convertbuffer(arg, p, &buf);
-				if (count < 0)
-					return converterr(buf, arg, msgbuf, bufsize);
-				STORE_SIZE(count);
-			}
-			format++;
-		} else {
-			char **p = va_arg(*p_va, char **);
-
-			if (PyString_Check(arg))
-				*p = PyString_AS_STRING(arg);
-#ifdef Py_USING_UNICODE
-			else if (PyUnicode_Check(arg)) {
-				uarg = UNICODE_DEFAULT_ENCODING(arg);
-				if (uarg == NULL)
-					return converterr(CONV_UNICODE,
-							  arg, msgbuf, bufsize);
-				*p = PyString_AS_STRING(uarg);
-			}
-#endif
-			else
-				return converterr("string", arg, msgbuf, bufsize);
-			if ((Py_ssize_t)strlen(*p) != PyString_Size(arg))
-				return converterr("string without null bytes",
-						  arg, msgbuf, bufsize);
-		}
-		break;
-	}
-
-	case 'z': {/* string, may be NULL (None) */
-		if (*format == '*') {
-			Py_FatalError("'*' format not supported in PyArg_*\n");
-#if 0
-			Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
-
-			if (arg == Py_None)
-				PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0);
-			else if (PyString_Check(arg)) {
-				PyBuffer_FillInfo(p, arg,
-					PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
-					1, 0);
-			}
-#ifdef Py_USING_UNICODE
-			else if (PyUnicode_Check(arg)) {
-				uarg = UNICODE_DEFAULT_ENCODING(arg);
-				if (uarg == NULL)
-					return converterr(CONV_UNICODE,
-							  arg, msgbuf, bufsize);
-				PyBuffer_FillInfo(p, arg,
-					PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
-					1, 0);
-			}
-#endif
-			else { /* any buffer-like object */
-				char *buf;
-				if (getbuffer(arg, p, &buf) < 0)
-					return converterr(buf, arg, msgbuf, bufsize);
-			}
-			if (addcleanup(p, freelist, cleanup_buffer)) {
-				return converterr(
-					"(cleanup problem)",
-					arg, msgbuf, bufsize);
-			}
-			format++;
-#endif
-		} else if (*format == '#') { /* any buffer-like object */
-			void **p = (void **)va_arg(*p_va, char **);
-			FETCH_SIZE;
-
-			if (arg == Py_None) {
-				*p = 0;
-				STORE_SIZE(0);
-			}
-			else if (PyString_Check(arg)) {
-				*p = PyString_AS_STRING(arg);
-				STORE_SIZE(PyString_GET_SIZE(arg));
-			}
-#ifdef Py_USING_UNICODE
-			else if (PyUnicode_Check(arg)) {
-				uarg = UNICODE_DEFAULT_ENCODING(arg);
-				if (uarg == NULL)
-					return converterr(CONV_UNICODE,
-							  arg, msgbuf, bufsize);
-				*p = PyString_AS_STRING(uarg);
-				STORE_SIZE(PyString_GET_SIZE(uarg));
-			}
-#endif
-			else { /* any buffer-like object */
-				char *buf;
-				Py_ssize_t count = convertbuffer(arg, p, &buf);
-				if (count < 0)
-					return converterr(buf, arg, msgbuf, bufsize);
-				STORE_SIZE(count);
-			}
-			format++;
-		} else {
-			char **p = va_arg(*p_va, char **);
-
-			if (arg == Py_None)
-				*p = 0;
-			else if (PyString_Check(arg))
-				*p = PyString_AS_STRING(arg);
-#ifdef Py_USING_UNICODE
-			else if (PyUnicode_Check(arg)) {
-				uarg = UNICODE_DEFAULT_ENCODING(arg);
-				if (uarg == NULL)
-					return converterr(CONV_UNICODE,
-							  arg, msgbuf, bufsize);
-				*p = PyString_AS_STRING(uarg);
-			}
-#endif
-			else
-				return converterr("string or None",
-						  arg, msgbuf, bufsize);
-			if (*format == '#') {
-				FETCH_SIZE;
-				assert(0); /* XXX redundant with if-case */
-				if (arg == Py_None)
-					*q = 0;
-				else
-					*q = PyString_Size(arg);
-				format++;
-			}
-			else if (*p != NULL &&
-				 (Py_ssize_t)strlen(*p) != PyString_Size(arg))
-				return converterr(
-					"string without null bytes or None",
-					arg, msgbuf, bufsize);
-		}
-		break;
-	}
-	case 'e': {/* encoded string */
-		char **buffer;
-		const char *encoding;
-		PyObject *s;
-		Py_ssize_t size;
-		int recode_strings;
-
-		/* Get 'e' parameter: the encoding name */
-		encoding = (const char *)va_arg(*p_va, const char *);
-#ifdef Py_USING_UNICODE
-		if (encoding == NULL)
-			encoding = PyUnicode_GetDefaultEncoding();
+    PyObject *uarg;
 #endif
-		/* Get output buffer parameter:
-		   's' (recode all objects via Unicode) or
-		   't' (only recode non-string objects)
-		*/
-		if (*format == 's')
-			recode_strings = 1;
-		else if (*format == 't')
-			recode_strings = 0;
-		else
-			return converterr(
-				"(unknown parser marker combination)",
-				arg, msgbuf, bufsize);
-		buffer = (char **)va_arg(*p_va, char **);
-		format++;
-		if (buffer == NULL)
-			return converterr("(buffer is NULL)",
-					  arg, msgbuf, bufsize);
-
-		/* Encode object */
-		if (!recode_strings && PyString_Check(arg)) {
-			s = arg;
-			Py_INCREF(s);
-		}
-		else {
+    switch (c) {
+
+    case 'b': { /* unsigned byte -- very short int */
+        char *p = va_arg(*p_va, char *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival < 0) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "unsigned byte integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival > UCHAR_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "unsigned byte integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = (unsigned char) ival;
+        break;
+    }
+
+    case 'B': {/* byte sized bitfield - both signed and unsigned
+                  values allowed */
+        char *p = va_arg(*p_va, char *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsUnsignedLongMask(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = (unsigned char) ival;
+        break;
+    }
+
+    case 'h': {/* signed short int */
+        short *p = va_arg(*p_va, short *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival < SHRT_MIN) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "signed short integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival > SHRT_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "signed short integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = (short) ival;
+        break;
+    }
+
+    case 'H': { /* short int sized bitfield, both signed and
+                   unsigned allowed */
+        unsigned short *p = va_arg(*p_va, unsigned short *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsUnsignedLongMask(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = (unsigned short) ival;
+        break;
+    }
+
+    case 'i': {/* signed int */
+        int *p = va_arg(*p_va, int *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival > INT_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "signed integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival < INT_MIN) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "signed integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'I': { /* int sized bitfield, both signed and
+                   unsigned allowed */
+        unsigned int *p = va_arg(*p_va, unsigned int *);
+        unsigned int ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = (unsigned int)PyInt_AsUnsignedLongMask(arg);
+        if (ival == (unsigned int)-1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'n': /* Py_ssize_t */
+#if SIZEOF_SIZE_T != SIZEOF_LONG
+    {
+        Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *);
+        Py_ssize_t ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsSsize_t(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+#endif
+    /* Fall through from 'n' to 'l' if Py_ssize_t is int */
+    case 'l': {/* long int */
+        long *p = va_arg(*p_va, long *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'k': { /* long sized bitfield */
+        unsigned long *p = va_arg(*p_va, unsigned long *);
+        unsigned long ival;
+        if (PyInt_Check(arg))
+            ival = PyInt_AsUnsignedLongMask(arg);
+        else if (PyLong_Check(arg))
+            ival = PyLong_AsUnsignedLongMask(arg);
+        else
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+
+#ifdef HAVE_LONG_LONG
+    case 'L': {/* PY_LONG_LONG */
+        PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * );
+        PY_LONG_LONG ival;
+        if (float_argument_warning(arg))
+            return converterr("long", arg, msgbuf, bufsize);
+        ival = PyLong_AsLongLong(arg);
+        if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) {
+            return converterr("long", arg, msgbuf, bufsize);
+        } else {
+            *p = ival;
+        }
+        break;
+    }
+
+    case 'K': { /* long long sized bitfield */
+        unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *);
+        unsigned PY_LONG_LONG ival;
+        if (PyInt_Check(arg))
+            ival = PyInt_AsUnsignedLongMask(arg);
+        else if (PyLong_Check(arg))
+            ival = PyLong_AsUnsignedLongLongMask(arg);
+        else
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+#endif
+
+    case 'f': {/* float */
+        float *p = va_arg(*p_va, float *);
+        double dval = PyFloat_AsDouble(arg);
+        if (PyErr_Occurred())
+            return converterr("float", arg, msgbuf, bufsize);
+        else
+            *p = (float) dval;
+        break;
+    }
+
+    case 'd': {/* double */
+        double *p = va_arg(*p_va, double *);
+        double dval = PyFloat_AsDouble(arg);
+        if (PyErr_Occurred())
+            return converterr("float", arg, msgbuf, bufsize);
+        else
+            *p = dval;
+        break;
+    }
+
+#ifndef WITHOUT_COMPLEX
+    case 'D': {/* complex double */
+        Py_complex *p = va_arg(*p_va, Py_complex *);
+        Py_complex cval;
+        cval = PyComplex_AsCComplex(arg);
+        if (PyErr_Occurred())
+            return converterr("complex", arg, msgbuf, bufsize);
+        else
+            *p = cval;
+        break;
+    }
+#endif /* WITHOUT_COMPLEX */
+
+    case 'c': {/* char */
+        char *p = va_arg(*p_va, char *);
+        if (PyString_Check(arg) && PyString_Size(arg) == 1)
+            *p = PyString_AS_STRING(arg)[0];
+        else
+            return converterr("char", arg, msgbuf, bufsize);
+        break;
+    }
+
+    case 's': {/* string */
+        if (*format == '*') {
+            Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
+
+            if (PyString_Check(arg)) {
+                PyBuffer_FillInfo(p, arg,
+                                  PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
+                                  1, 0);
+            }
 #ifdef Py_USING_UNICODE
-			PyObject *u;
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                PyBuffer_FillInfo(p, arg,
+                                  PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
+                                  1, 0);
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                if (getbuffer(arg, p, &buf) < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+            }
+            if (addcleanup(p, freelist, cleanup_buffer)) {
+                return converterr(
+                    "(cleanup problem)",
+                    arg, msgbuf, bufsize);
+            }
+            format++;
+        } else if (*format == '#') {
+            void **p = (void **)va_arg(*p_va, char **);
+            FETCH_SIZE;

-			/* Convert object to Unicode */
-			u = PyUnicode_FromObject(arg);
-			if (u == NULL)
-				return converterr(
-					"string or unicode or text buffer",
-					arg, msgbuf, bufsize);
-
-			/* Encode object; use default error handling */
-			s =
PyUnicode_AsEncodedString(u, - encoding, - NULL); - Py_DECREF(u); - if (s == NULL) - return converterr("(encoding failed)", - arg, msgbuf, bufsize); - if (!PyString_Check(s)) { - Py_DECREF(s); - return converterr( - "(encoder failed to return a string)", - arg, msgbuf, bufsize); - } + if (PyString_Check(arg)) { + *p = PyString_AS_STRING(arg); + STORE_SIZE(PyString_GET_SIZE(arg)); + } +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyString_AS_STRING(uarg); + STORE_SIZE(PyString_GET_SIZE(uarg)); + } +#endif + else { /* any buffer-like object */ + char *buf; + Py_ssize_t count = convertbuffer(arg, p, &buf); + if (count < 0) + return converterr(buf, arg, msgbuf, bufsize); + STORE_SIZE(count); + } + format++; + } else { + char **p = va_arg(*p_va, char **); + + if (PyString_Check(arg)) + *p = PyString_AS_STRING(arg); +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyString_AS_STRING(uarg); + } +#endif + else + return converterr("string", arg, msgbuf, bufsize); + if ((Py_ssize_t)strlen(*p) != PyString_Size(arg)) + return converterr("string without null bytes", + arg, msgbuf, bufsize); + } + break; + } + + case 'z': {/* string, may be NULL (None) */ + if (*format == '*') { + Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *); + + if (arg == Py_None) + PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0); + else if (PyString_Check(arg)) { + PyBuffer_FillInfo(p, arg, + PyString_AS_STRING(arg), PyString_GET_SIZE(arg), + 1, 0); + } +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + PyBuffer_FillInfo(p, arg, + PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg), + 1, 0); + } +#endif + 
else { /* any buffer-like object */ + char *buf; + if (getbuffer(arg, p, &buf) < 0) + return converterr(buf, arg, msgbuf, bufsize); + } + if (addcleanup(p, freelist, cleanup_buffer)) { + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + format++; + } else if (*format == '#') { /* any buffer-like object */ + void **p = (void **)va_arg(*p_va, char **); + FETCH_SIZE; + + if (arg == Py_None) { + *p = 0; + STORE_SIZE(0); + } + else if (PyString_Check(arg)) { + *p = PyString_AS_STRING(arg); + STORE_SIZE(PyString_GET_SIZE(arg)); + } +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyString_AS_STRING(uarg); + STORE_SIZE(PyString_GET_SIZE(uarg)); + } +#endif + else { /* any buffer-like object */ + char *buf; + Py_ssize_t count = convertbuffer(arg, p, &buf); + if (count < 0) + return converterr(buf, arg, msgbuf, bufsize); + STORE_SIZE(count); + } + format++; + } else { + char **p = va_arg(*p_va, char **); + + if (arg == Py_None) + *p = 0; + else if (PyString_Check(arg)) + *p = PyString_AS_STRING(arg); +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyString_AS_STRING(uarg); + } +#endif + else + return converterr("string or None", + arg, msgbuf, bufsize); + if (*format == '#') { + FETCH_SIZE; + assert(0); /* XXX redundant with if-case */ + if (arg == Py_None) { + STORE_SIZE(0); + } else { + STORE_SIZE(PyString_Size(arg)); + } + format++; + } + else if (*p != NULL && + (Py_ssize_t)strlen(*p) != PyString_Size(arg)) + return converterr( + "string without null bytes or None", + arg, msgbuf, bufsize); + } + break; + } + + case 'e': {/* encoded string */ + char **buffer; + const char *encoding; + PyObject *s; + Py_ssize_t size; + int recode_strings; + + /* Get 'e' parameter: the encoding 
name */ + encoding = (const char *)va_arg(*p_va, const char *); +#ifdef Py_USING_UNICODE + if (encoding == NULL) + encoding = PyUnicode_GetDefaultEncoding(); +#endif + + /* Get output buffer parameter: + 's' (recode all objects via Unicode) or + 't' (only recode non-string objects) + */ + if (*format == 's') + recode_strings = 1; + else if (*format == 't') + recode_strings = 0; + else + return converterr( + "(unknown parser marker combination)", + arg, msgbuf, bufsize); + buffer = (char **)va_arg(*p_va, char **); + format++; + if (buffer == NULL) + return converterr("(buffer is NULL)", + arg, msgbuf, bufsize); + + /* Encode object */ + if (!recode_strings && PyString_Check(arg)) { + s = arg; + Py_INCREF(s); + } + else { +#ifdef Py_USING_UNICODE + PyObject *u; + + /* Convert object to Unicode */ + u = PyUnicode_FromObject(arg); + if (u == NULL) + return converterr( + "string or unicode or text buffer", + arg, msgbuf, bufsize); + + /* Encode object; use default error handling */ + s = PyUnicode_AsEncodedString(u, + encoding, + NULL); + Py_DECREF(u); + if (s == NULL) + return converterr("(encoding failed)", + arg, msgbuf, bufsize); + if (!PyString_Check(s)) { + Py_DECREF(s); + return converterr( + "(encoder failed to return a string)", + arg, msgbuf, bufsize); + } #else - return converterr("string", arg, msgbuf, bufsize); + return converterr("string", arg, msgbuf, bufsize); #endif - } - size = PyString_GET_SIZE(s); + } + size = PyString_GET_SIZE(s); - /* Write output; output is guaranteed to be 0-terminated */ - if (*format == '#') { - /* Using buffer length parameter '#': - - - if *buffer is NULL, a new buffer of the - needed size is allocated and the data - copied into it; *buffer is updated to point - to the new buffer; the caller is - responsible for PyMem_Free()ing it after - usage + /* Write output; output is guaranteed to be 0-terminated */ + if (*format == '#') { + /* Using buffer length parameter '#': - - if *buffer is not NULL, the data is - copied to 
*buffer; *buffer_len has to be - set to the size of the buffer on input; - buffer overflow is signalled with an error; - buffer has to provide enough room for the - encoded string plus the trailing 0-byte - - - in both cases, *buffer_len is updated to - the size of the buffer /excluding/ the - trailing 0-byte - - */ - FETCH_SIZE; + - if *buffer is NULL, a new buffer of the + needed size is allocated and the data + copied into it; *buffer is updated to point + to the new buffer; the caller is + responsible for PyMem_Free()ing it after + usage - format++; - if (q == NULL && q2 == NULL) { - Py_DECREF(s); - return converterr( - "(buffer_len is NULL)", - arg, msgbuf, bufsize); - } - if (*buffer == NULL) { - *buffer = PyMem_NEW(char, size + 1); - if (*buffer == NULL) { - Py_DECREF(s); - return converterr( - "(memory error)", - arg, msgbuf, bufsize); - } - if (addcleanup(*buffer, freelist, cleanup_ptr)) { - Py_DECREF(s); - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - } else { - if (size + 1 > BUFFER_LEN) { - Py_DECREF(s); - return converterr( - "(buffer overflow)", - arg, msgbuf, bufsize); - } - } - memcpy(*buffer, - PyString_AS_STRING(s), - size + 1); - STORE_SIZE(size); - } else { - /* Using a 0-terminated buffer: - - - the encoded string has to be 0-terminated - for this variant to work; if it is not, an - error raised + - if *buffer is not NULL, the data is + copied to *buffer; *buffer_len has to be + set to the size of the buffer on input; + buffer overflow is signalled with an error; + buffer has to provide enough room for the + encoded string plus the trailing 0-byte - - a new buffer of the needed size is - allocated and the data copied into it; - *buffer is updated to point to the new - buffer; the caller is responsible for - PyMem_Free()ing it after usage + - in both cases, *buffer_len is updated to + the size of the buffer /excluding/ the + trailing 0-byte - */ - if ((Py_ssize_t)strlen(PyString_AS_STRING(s)) - != size) { - 
Py_DECREF(s); - return converterr( - "encoded string without NULL bytes", - arg, msgbuf, bufsize); - } - *buffer = PyMem_NEW(char, size + 1); - if (*buffer == NULL) { - Py_DECREF(s); - return converterr("(memory error)", - arg, msgbuf, bufsize); - } - if (addcleanup(*buffer, freelist, cleanup_ptr)) { - Py_DECREF(s); - return converterr("(cleanup problem)", - arg, msgbuf, bufsize); - } - memcpy(*buffer, - PyString_AS_STRING(s), - size + 1); - } - Py_DECREF(s); - break; - } + */ + FETCH_SIZE; + + format++; + if (q == NULL && q2 == NULL) { + Py_DECREF(s); + return converterr( + "(buffer_len is NULL)", + arg, msgbuf, bufsize); + } + if (*buffer == NULL) { + *buffer = PyMem_NEW(char, size + 1); + if (*buffer == NULL) { + Py_DECREF(s); + return converterr( + "(memory error)", + arg, msgbuf, bufsize); + } + if (addcleanup(*buffer, freelist, cleanup_ptr)) { + Py_DECREF(s); + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + } else { + if (size + 1 > BUFFER_LEN) { + Py_DECREF(s); + return converterr( + "(buffer overflow)", + arg, msgbuf, bufsize); + } + } + memcpy(*buffer, + PyString_AS_STRING(s), + size + 1); + STORE_SIZE(size); + } else { + /* Using a 0-terminated buffer: + + - the encoded string has to be 0-terminated + for this variant to work; if it is not, an + error raised + + - a new buffer of the needed size is + allocated and the data copied into it; + *buffer is updated to point to the new + buffer; the caller is responsible for + PyMem_Free()ing it after usage + + */ + if ((Py_ssize_t)strlen(PyString_AS_STRING(s)) + != size) { + Py_DECREF(s); + return converterr( + "encoded string without NULL bytes", + arg, msgbuf, bufsize); + } + *buffer = PyMem_NEW(char, size + 1); + if (*buffer == NULL) { + Py_DECREF(s); + return converterr("(memory error)", + arg, msgbuf, bufsize); + } + if (addcleanup(*buffer, freelist, cleanup_ptr)) { + Py_DECREF(s); + return converterr("(cleanup problem)", + arg, msgbuf, bufsize); + } + memcpy(*buffer, + 
PyString_AS_STRING(s), + size + 1); + } + Py_DECREF(s); + break; + } #ifdef Py_USING_UNICODE - case 'u': {/* raw unicode buffer (Py_UNICODE *) */ - if (*format == '#') { /* any buffer-like object */ - void **p = (void **)va_arg(*p_va, char **); - FETCH_SIZE; - if (PyUnicode_Check(arg)) { - *p = PyUnicode_AS_UNICODE(arg); - STORE_SIZE(PyUnicode_GET_SIZE(arg)); - } - else { - return converterr("cannot convert raw buffers", - arg, msgbuf, bufsize); - } - format++; - } else { - Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **); - if (PyUnicode_Check(arg)) - *p = PyUnicode_AS_UNICODE(arg); - else - return converterr("unicode", arg, msgbuf, bufsize); - } - break; - } + case 'u': {/* raw unicode buffer (Py_UNICODE *) */ + if (*format == '#') { /* any buffer-like object */ + void **p = (void **)va_arg(*p_va, char **); + FETCH_SIZE; + if (PyUnicode_Check(arg)) { + *p = PyUnicode_AS_UNICODE(arg); + STORE_SIZE(PyUnicode_GET_SIZE(arg)); + } + else { + return converterr("cannot convert raw buffers", + arg, msgbuf, bufsize); + } + format++; + } else { + Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **); + if (PyUnicode_Check(arg)) + *p = PyUnicode_AS_UNICODE(arg); + else + return converterr("unicode", arg, msgbuf, bufsize); + } + break; + } #endif - case 'S': { /* string object */ - PyObject **p = va_arg(*p_va, PyObject **); - if (PyString_Check(arg)) - *p = arg; - else - return converterr("string", arg, msgbuf, bufsize); - break; - } - + case 'S': { /* string object */ + PyObject **p = va_arg(*p_va, PyObject **); + if (PyString_Check(arg)) + *p = arg; + else + return converterr("string", arg, msgbuf, bufsize); + break; + } + #ifdef Py_USING_UNICODE - case 'U': { /* Unicode object */ - PyObject **p = va_arg(*p_va, PyObject **); - if (PyUnicode_Check(arg)) - *p = arg; - else - return converterr("unicode", arg, msgbuf, bufsize); - break; - } + case 'U': { /* Unicode object */ + PyObject **p = va_arg(*p_va, PyObject **); + if (PyUnicode_Check(arg)) + *p = arg; + else + return 
converterr("unicode", arg, msgbuf, bufsize); + break; + } #endif - case 'O': { /* object */ - PyTypeObject *type; - PyObject **p; - if (*format == '!') { - type = va_arg(*p_va, PyTypeObject*); - p = va_arg(*p_va, PyObject **); - format++; - if (PyType_IsSubtype(arg->ob_type, type)) - *p = arg; - else - return converterr(type->tp_name, arg, msgbuf, bufsize); - } - else if (*format == '?') { - inquiry pred = va_arg(*p_va, inquiry); - p = va_arg(*p_va, PyObject **); - format++; - if ((*pred)(arg)) - *p = arg; - else - return converterr("(unspecified)", - arg, msgbuf, bufsize); - - } - else if (*format == '&') { - typedef int (*converter)(PyObject *, void *); - converter convert = va_arg(*p_va, converter); - void *addr = va_arg(*p_va, void *); - format++; - if (! (*convert)(arg, addr)) - return converterr("(unspecified)", - arg, msgbuf, bufsize); - } - else { - p = va_arg(*p_va, PyObject **); - *p = arg; - } - break; - } - - case 'w': { /* memory buffer, read-write access */ - Py_FatalError("'w' unsupported\n"); -#if 0 - void **p = va_arg(*p_va, void **); - void *res; - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - Py_ssize_t count; + case 'O': { /* object */ + PyTypeObject *type; + PyObject **p; + if (*format == '!') { + type = va_arg(*p_va, PyTypeObject*); + p = va_arg(*p_va, PyObject **); + format++; + if (PyType_IsSubtype(arg->ob_type, type)) + *p = arg; + else + return converterr(type->tp_name, arg, msgbuf, bufsize); - if (pb && pb->bf_releasebuffer && *format != '*') - /* Buffer must be released, yet caller does not use - the Py_buffer protocol. */ - return converterr("pinned buffer", arg, msgbuf, bufsize); + } + else if (*format == '?') { + inquiry pred = va_arg(*p_va, inquiry); + p = va_arg(*p_va, PyObject **); + format++; + if ((*pred)(arg)) + *p = arg; + else + return converterr("(unspecified)", + arg, msgbuf, bufsize); - if (pb && pb->bf_getbuffer && *format == '*') { - /* Caller is interested in Py_buffer, and the object - supports it directly. 
*/ - format++; - if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) { - PyErr_Clear(); - return converterr("read-write buffer", arg, msgbuf, bufsize); - } - if (addcleanup(p, freelist, cleanup_buffer)) { - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C')) - return converterr("contiguous buffer", arg, msgbuf, bufsize); - break; - } + } + else if (*format == '&') { + typedef int (*converter)(PyObject *, void *); + converter convert = va_arg(*p_va, converter); + void *addr = va_arg(*p_va, void *); + format++; + if (! (*convert)(arg, addr)) + return converterr("(unspecified)", + arg, msgbuf, bufsize); + } + else { + p = va_arg(*p_va, PyObject **); + *p = arg; + } + break; + } - if (pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL) - return converterr("read-write buffer", arg, msgbuf, bufsize); - if ((*pb->bf_getsegcount)(arg, NULL) != 1) - return converterr("single-segment read-write buffer", - arg, msgbuf, bufsize); - if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0) - return converterr("(unspecified)", arg, msgbuf, bufsize); - if (*format == '*') { - PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0); - format++; - } - else { - *p = res; - if (*format == '#') { - FETCH_SIZE; - STORE_SIZE(count); - format++; - } - } - break; -#endif - } - - case 't': { /* 8-bit character buffer, read-only access */ - char **p = va_arg(*p_va, char **); - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - Py_ssize_t count; -#if 0 - if (*format++ != '#') - return converterr( - "invalid use of 't' format character", - arg, msgbuf, bufsize); -#endif - if (!PyType_HasFeature(arg->ob_type, - Py_TPFLAGS_HAVE_GETCHARBUFFER) -#if 0 - || pb == NULL || pb->bf_getcharbuffer == NULL || - pb->bf_getsegcount == NULL -#endif - ) - return converterr( - "string or read-only character buffer", - arg, msgbuf, bufsize); -#if 0 - if (pb->bf_getsegcount(arg, NULL) != 1) - return converterr( 
- "string or single-segment read-only buffer", - arg, msgbuf, bufsize); + case 'w': { /* memory buffer, read-write access */ + void **p = va_arg(*p_va, void **); + void *res; + PyBufferProcs *pb = arg->ob_type->tp_as_buffer; + Py_ssize_t count; - if (pb->bf_releasebuffer) - return converterr( - "string or pinned buffer", - arg, msgbuf, bufsize); -#endif - count = pb->bf_getcharbuffer(arg, 0, p); -#if 0 - if (count < 0) - return converterr("(unspecified)", arg, msgbuf, bufsize); -#endif - { - FETCH_SIZE; - STORE_SIZE(count); - ++format; - } - break; - } - default: - return converterr("impossible", arg, msgbuf, bufsize); - - } - - *p_format = format; - return NULL; + if (pb && pb->bf_releasebuffer && *format != '*') + /* Buffer must be released, yet caller does not use + the Py_buffer protocol. */ + return converterr("pinned buffer", arg, msgbuf, bufsize); + + if (pb && pb->bf_getbuffer && *format == '*') { + /* Caller is interested in Py_buffer, and the object + supports it directly. */ + format++; + if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) { + PyErr_Clear(); + return converterr("read-write buffer", arg, msgbuf, bufsize); + } + if (addcleanup(p, freelist, cleanup_buffer)) { + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C')) + return converterr("contiguous buffer", arg, msgbuf, bufsize); + break; + } + + if (pb == NULL || + pb->bf_getwritebuffer == NULL || + pb->bf_getsegcount == NULL) + return converterr("read-write buffer", arg, msgbuf, bufsize); + if ((*pb->bf_getsegcount)(arg, NULL) != 1) + return converterr("single-segment read-write buffer", + arg, msgbuf, bufsize); + if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0) + return converterr("(unspecified)", arg, msgbuf, bufsize); + if (*format == '*') { + PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0); + format++; + } + else { + *p = res; + if (*format == '#') { + FETCH_SIZE; + STORE_SIZE(count); + format++; + 
} + } + break; + } + + case 't': { /* 8-bit character buffer, read-only access */ + char **p = va_arg(*p_va, char **); + PyBufferProcs *pb = arg->ob_type->tp_as_buffer; + Py_ssize_t count; + + if (*format++ != '#') + return converterr( + "invalid use of 't' format character", + arg, msgbuf, bufsize); + if (!PyType_HasFeature(arg->ob_type, + Py_TPFLAGS_HAVE_GETCHARBUFFER) || + pb == NULL || pb->bf_getcharbuffer == NULL || + pb->bf_getsegcount == NULL) + return converterr( + "string or read-only character buffer", + arg, msgbuf, bufsize); + + if (pb->bf_getsegcount(arg, NULL) != 1) + return converterr( + "string or single-segment read-only buffer", + arg, msgbuf, bufsize); + + if (pb->bf_releasebuffer) + return converterr( + "string or pinned buffer", + arg, msgbuf, bufsize); + + count = pb->bf_getcharbuffer(arg, 0, p); + if (count < 0) + return converterr("(unspecified)", arg, msgbuf, bufsize); + { + FETCH_SIZE; + STORE_SIZE(count); + } + break; + } + + default: + return converterr("impossible", arg, msgbuf, bufsize); + + } + + *p_format = format; + return NULL; } static Py_ssize_t convertbuffer(PyObject *arg, void **p, char **errmsg) { - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - Py_ssize_t count; - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL || - pb->bf_releasebuffer != NULL) { - *errmsg = "string or read-only buffer"; - return -1; - } - if ((*pb->bf_getsegcount)(arg, NULL) != 1) { - *errmsg = "string or single-segment read-only buffer"; - return -1; - } - if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) { - *errmsg = "(unspecified)"; - } - return count; + PyBufferProcs *pb = arg->ob_type->tp_as_buffer; + Py_ssize_t count; + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL || + pb->bf_releasebuffer != NULL) { + *errmsg = "string or read-only buffer"; + return -1; + } + if ((*pb->bf_getsegcount)(arg, NULL) != 1) { + *errmsg = "string or single-segment read-only buffer"; + return -1; + 
} + if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) { + *errmsg = "(unspecified)"; + } + return count; } static int getbuffer(PyObject *arg, Py_buffer *view, char **errmsg) { - void *buf; - Py_ssize_t count; - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - if (pb == NULL) { - *errmsg = "string or buffer"; - return -1; - } - if (pb->bf_getbuffer) { - if (pb->bf_getbuffer(arg, view, 0) < 0) { - *errmsg = "convertible to a buffer"; - return -1; - } - if (!PyBuffer_IsContiguous(view, 'C')) { - *errmsg = "contiguous buffer"; - return -1; - } - return 0; - } + void *buf; + Py_ssize_t count; + PyBufferProcs *pb = arg->ob_type->tp_as_buffer; + if (pb == NULL) { + *errmsg = "string or buffer"; + return -1; + } + if (pb->bf_getbuffer) { + if (pb->bf_getbuffer(arg, view, 0) < 0) { + *errmsg = "convertible to a buffer"; + return -1; + } + if (!PyBuffer_IsContiguous(view, 'C')) { + *errmsg = "contiguous buffer"; + return -1; + } + return 0; + } - count = convertbuffer(arg, &buf, errmsg); - if (count < 0) { - *errmsg = "convertible to a buffer"; - return count; - } - PyBuffer_FillInfo(view, NULL, buf, count, 1, 0); - return 0; + count = convertbuffer(arg, &buf, errmsg); + if (count < 0) { + *errmsg = "convertible to a buffer"; + return count; + } + PyBuffer_FillInfo(view, arg, buf, count, 1, 0); + return 0; } /* Support for keyword arguments donated by @@ -1395,501 +1410,487 @@ /* Return false (0) for error, else true. */ int PyArg_ParseTupleAndKeywords(PyObject *args, - PyObject *keywords, - const char *format, - char **kwlist, ...) + PyObject *keywords, + const char *format, + char **kwlist, ...) 
{ - int retval; - va_list va; + int retval; + va_list va; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } - va_start(va, kwlist); - retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0); - va_end(va); - return retval; + va_start(va, kwlist); + retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0); + va_end(va); + return retval; } int _PyArg_ParseTupleAndKeywords_SizeT(PyObject *args, - PyObject *keywords, - const char *format, - char **kwlist, ...) + PyObject *keywords, + const char *format, + char **kwlist, ...) { - int retval; - va_list va; + int retval; + va_list va; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } - va_start(va, kwlist); - retval = vgetargskeywords(args, keywords, format, - kwlist, &va, FLAG_SIZE_T); - va_end(va); - return retval; + va_start(va, kwlist); + retval = vgetargskeywords(args, keywords, format, + kwlist, &va, FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_VaParseTupleAndKeywords(PyObject *args, PyObject *keywords, - const char *format, + const char *format, char **kwlist, va_list va) { - int retval; - va_list lva; + int retval; + va_list lva; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + 
(keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0); - return retval; + retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0); + return retval; } int _PyArg_VaParseTupleAndKeywords_SizeT(PyObject *args, - PyObject *keywords, - const char *format, - char **kwlist, va_list va) + PyObject *keywords, + const char *format, + char **kwlist, va_list va) { - int retval; - va_list lva; + int retval; + va_list lva; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - retval = vgetargskeywords(args, keywords, format, - kwlist, &lva, FLAG_SIZE_T); - return retval; + retval = vgetargskeywords(args, keywords, format, + kwlist, &lva, FLAG_SIZE_T); + return retval; } #define IS_END_OF_FORMAT(c) (c == '\0' || c == ';' || c == ':') static int vgetargskeywords(PyObject *args, PyObject *keywords, const char *format, - char **kwlist, va_list *p_va, int flags) + char **kwlist, va_list *p_va, int flags) { - char msgbuf[512]; - int levels[32]; - const char *fname, *msg, *custom_msg, *keyword; - int min = INT_MAX; - int i, len, nargs, nkeywords; - PyObject *current_arg; - freelist_t freelist = {0, NULL}; + char 
msgbuf[512]; + int levels[32]; + const char *fname, *msg, *custom_msg, *keyword; + int min = INT_MAX; + int i, len, nargs, nkeywords; + PyObject *freelist = NULL, *current_arg; + assert(args != NULL && PyTuple_Check(args)); + assert(keywords == NULL || PyDict_Check(keywords)); + assert(format != NULL); + assert(kwlist != NULL); + assert(p_va != NULL); - assert(args != NULL && PyTuple_Check(args)); - assert(keywords == NULL || PyDict_Check(keywords)); - assert(format != NULL); - assert(kwlist != NULL); - assert(p_va != NULL); + /* grab the function name or custom error msg first (mutually exclusive) */ + fname = strchr(format, ':'); + if (fname) { + fname++; + custom_msg = NULL; + } + else { + custom_msg = strchr(format,';'); + if (custom_msg) + custom_msg++; + } - /* grab the function name or custom error msg first (mutually exclusive) */ - fname = strchr(format, ':'); - if (fname) { - fname++; - custom_msg = NULL; - } - else { - custom_msg = strchr(format,';'); - if (custom_msg) - custom_msg++; - } + /* scan kwlist and get greatest possible nbr of args */ + for (len=0; kwlist[len]; len++) + continue; - /* scan kwlist and get greatest possible nbr of args */ - for (len=0; kwlist[len]; len++) - continue; + nargs = PyTuple_GET_SIZE(args); + nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords); + if (nargs + nkeywords > len) { + PyErr_Format(PyExc_TypeError, "%s%s takes at most %d " + "argument%s (%d given)", + (fname == NULL) ? "function" : fname, + (fname == NULL) ? "" : "()", + len, + (len == 1) ? 
"" : "s", + nargs + nkeywords); + return 0; + } - freelist.entries = PyMem_New(freelistentry_t, len); + /* convert tuple args and keyword args in same loop, using kwlist to drive process */ + for (i = 0; i < len; i++) { + keyword = kwlist[i]; + if (*format == '|') { + min = i; + format++; + } + if (IS_END_OF_FORMAT(*format)) { + PyErr_Format(PyExc_RuntimeError, + "More keyword list entries (%d) than " + "format specifiers (%d)", len, i); + return cleanreturn(0, freelist); + } + current_arg = NULL; + if (nkeywords) { + current_arg = PyDict_GetItemString(keywords, keyword); + } + if (current_arg) { + --nkeywords; + if (i < nargs) { + /* arg present in tuple and in dict */ + PyErr_Format(PyExc_TypeError, + "Argument given by name ('%s') " + "and position (%d)", + keyword, i+1); + return cleanreturn(0, freelist); + } + } + else if (nkeywords && PyErr_Occurred()) + return cleanreturn(0, freelist); + else if (i < nargs) + current_arg = PyTuple_GET_ITEM(args, i); - nargs = PyTuple_GET_SIZE(args); - nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords); - if (nargs + nkeywords > len) { - PyErr_Format(PyExc_TypeError, "%s%s takes at most %d " - "argument%s (%d given)", - (fname == NULL) ? "function" : fname, - (fname == NULL) ? "" : "()", - len, - (len == 1) ? 
"" : "s", - nargs + nkeywords); - return cleanreturn(0, &freelist); - } + if (current_arg) { + msg = convertitem(current_arg, &format, p_va, flags, + levels, msgbuf, sizeof(msgbuf), &freelist); + if (msg) { + seterror(i+1, msg, levels, fname, custom_msg); + return cleanreturn(0, freelist); + } + continue; + } - /* convert tuple args and keyword args in same loop, using kwlist to drive process */ - for (i = 0; i < len; i++) { - keyword = kwlist[i]; - if (*format == '|') { - min = i; - format++; - } - if (IS_END_OF_FORMAT(*format)) { - PyErr_Format(PyExc_RuntimeError, - "More keyword list entries (%d) than " - "format specifiers (%d)", len, i); - return cleanreturn(0, &freelist); - } - current_arg = NULL; - if (nkeywords) { - current_arg = PyDict_GetItemString(keywords, keyword); - } - if (current_arg) { - --nkeywords; - if (i < nargs) { - /* arg present in tuple and in dict */ - PyErr_Format(PyExc_TypeError, - "Argument given by name ('%s') " - "and position (%d)", - keyword, i+1); - return cleanreturn(0, &freelist); - } - } - else if (nkeywords && PyErr_Occurred()) - return cleanreturn(0, &freelist); - else if (i < nargs) - current_arg = PyTuple_GET_ITEM(args, i); - - if (current_arg) { - msg = convertitem(current_arg, &format, p_va, flags, - levels, msgbuf, sizeof(msgbuf), &freelist); - if (msg) { - seterror(i+1, msg, levels, fname, custom_msg); - return cleanreturn(0, &freelist); - } - continue; - } + if (i < min) { + PyErr_Format(PyExc_TypeError, "Required argument " + "'%s' (pos %d) not found", + keyword, i+1); + return cleanreturn(0, freelist); + } + /* current code reports success when all required args + * fulfilled and no keyword args left, with no further + * validation. XXX Maybe skip this in debug build ? 
+ */ + if (!nkeywords) + return cleanreturn(1, freelist); - if (i < min) { - PyErr_Format(PyExc_TypeError, "Required argument " - "'%s' (pos %d) not found", - keyword, i+1); - return cleanreturn(0, &freelist); - } - /* current code reports success when all required args - * fulfilled and no keyword args left, with no further - * validation. XXX Maybe skip this in debug build ? - */ - if (!nkeywords) - return cleanreturn(1, &freelist); + /* We are into optional args, skip thru to any remaining + * keyword args */ + msg = skipitem(&format, p_va, flags); + if (msg) { + PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg, + format); + return cleanreturn(0, freelist); + } + } - /* We are into optional args, skip thru to any remaining - * keyword args */ - msg = skipitem(&format, p_va, flags); - if (msg) { - PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg, - format); - return cleanreturn(0, &freelist); - } - } + if (!IS_END_OF_FORMAT(*format) && *format != '|') { + PyErr_Format(PyExc_RuntimeError, + "more argument specifiers than keyword list entries " + "(remaining format:'%s')", format); + return cleanreturn(0, freelist); + } - if (!IS_END_OF_FORMAT(*format) && *format != '|') { - PyErr_Format(PyExc_RuntimeError, - "more argument specifiers than keyword list entries " - "(remaining format:'%s')", format); - return cleanreturn(0, &freelist); - } + /* make sure there are no extraneous keyword arguments */ + if (nkeywords > 0) { + PyObject *key, *value; + Py_ssize_t pos = 0; + while (PyDict_Next(keywords, &pos, &key, &value)) { + int match = 0; + char *ks; + if (!PyString_Check(key)) { + PyErr_SetString(PyExc_TypeError, + "keywords must be strings"); + return cleanreturn(0, freelist); + } + ks = PyString_AsString(key); + for (i = 0; i < len; i++) { + if (!strcmp(ks, kwlist[i])) { + match = 1; + break; + } + } + if (!match) { + PyErr_Format(PyExc_TypeError, + "'%s' is an invalid keyword " + "argument for this function", + ks); + return cleanreturn(0, freelist); + } + } + } - 
/* make sure there are no extraneous keyword arguments */ - if (nkeywords > 0) { - PyObject *key, *value; - Py_ssize_t pos = 0; - while (PyDict_Next(keywords, &pos, &key, &value)) { - int match = 0; - char *ks; - if (!PyString_Check(key)) { - PyErr_SetString(PyExc_TypeError, - "keywords must be strings"); - return cleanreturn(0, &freelist); - } - ks = PyString_AsString(key); - for (i = 0; i < len; i++) { - if (!strcmp(ks, kwlist[i])) { - match = 1; - break; - } - } - if (!match) { - PyErr_Format(PyExc_TypeError, - "'%s' is an invalid keyword " - "argument for this function", - ks); - return cleanreturn(0, &freelist); - } - } - } - - return cleanreturn(1, &freelist); + return cleanreturn(1, freelist); } static char * skipitem(const char **p_format, va_list *p_va, int flags) { - const char *format = *p_format; - char c = *format++; - - switch (c) { + const char *format = *p_format; + char c = *format++; - /* simple codes - * The individual types (second arg of va_arg) are irrelevant */ + switch (c) { - case 'b': /* byte -- very short int */ - case 'B': /* byte as bitfield */ - case 'h': /* short int */ - case 'H': /* short int as bitfield */ - case 'i': /* int */ - case 'I': /* int sized bitfield */ - case 'l': /* long int */ - case 'k': /* long int sized bitfield */ + /* simple codes + * The individual types (second arg of va_arg) are irrelevant */ + + case 'b': /* byte -- very short int */ + case 'B': /* byte as bitfield */ + case 'h': /* short int */ + case 'H': /* short int as bitfield */ + case 'i': /* int */ + case 'I': /* int sized bitfield */ + case 'l': /* long int */ + case 'k': /* long int sized bitfield */ #ifdef HAVE_LONG_LONG - case 'L': /* PY_LONG_LONG */ - case 'K': /* PY_LONG_LONG sized bitfield */ + case 'L': /* PY_LONG_LONG */ + case 'K': /* PY_LONG_LONG sized bitfield */ #endif - case 'f': /* float */ - case 'd': /* double */ + case 'f': /* float */ + case 'd': /* double */ #ifndef WITHOUT_COMPLEX - case 'D': /* complex double */ + case 'D': /* 
complex double */ #endif - case 'c': /* char */ - { - (void) va_arg(*p_va, void *); - break; - } + case 'c': /* char */ + { + (void) va_arg(*p_va, void *); + break; + } - case 'n': /* Py_ssize_t */ - { - (void) va_arg(*p_va, Py_ssize_t *); - break; - } - - /* string codes */ - - case 'e': /* string with encoding */ - { - (void) va_arg(*p_va, const char *); - if (!(*format == 's' || *format == 't')) - /* after 'e', only 's' and 't' is allowed */ - goto err; - format++; - /* explicit fallthrough to string cases */ - } - - case 's': /* string */ - case 'z': /* string or None */ + case 'n': /* Py_ssize_t */ + { + (void) va_arg(*p_va, Py_ssize_t *); + break; + } + + /* string codes */ + + case 'e': /* string with encoding */ + { + (void) va_arg(*p_va, const char *); + if (!(*format == 's' || *format == 't')) + /* after 'e', only 's' and 't' is allowed */ + goto err; + format++; + /* explicit fallthrough to string cases */ + } + + case 's': /* string */ + case 'z': /* string or None */ #ifdef Py_USING_UNICODE - case 'u': /* unicode string */ + case 'u': /* unicode string */ #endif - case 't': /* buffer, read-only */ - case 'w': /* buffer, read-write */ - { - (void) va_arg(*p_va, char **); - if (*format == '#') { - if (flags & FLAG_SIZE_T) - (void) va_arg(*p_va, Py_ssize_t *); - else - (void) va_arg(*p_va, int *); - format++; - } else if ((c == 's' || c == 'z') && *format == '*') { - format++; - } - break; - } + case 't': /* buffer, read-only */ + case 'w': /* buffer, read-write */ + { + (void) va_arg(*p_va, char **); + if (*format == '#') { + if (flags & FLAG_SIZE_T) + (void) va_arg(*p_va, Py_ssize_t *); + else + (void) va_arg(*p_va, int *); + format++; + } else if ((c == 's' || c == 'z') && *format == '*') { + format++; + } + break; + } - /* object codes */ + /* object codes */ - case 'S': /* string object */ + case 'S': /* string object */ #ifdef Py_USING_UNICODE - case 'U': /* unicode string object */ + case 'U': /* unicode string object */ #endif - { - (void) 
va_arg(*p_va, PyObject **); - break; - } - - case 'O': /* object */ - { - if (*format == '!') { - format++; - (void) va_arg(*p_va, PyTypeObject*); - (void) va_arg(*p_va, PyObject **); - } -#if 0 -/* I don't know what this is for */ - else if (*format == '?') { - inquiry pred = va_arg(*p_va, inquiry); - format++; - if ((*pred)(arg)) { - (void) va_arg(*p_va, PyObject **); - } - } -#endif - else if (*format == '&') { - typedef int (*converter)(PyObject *, void *); - (void) va_arg(*p_va, converter); - (void) va_arg(*p_va, void *); - format++; - } - else { - (void) va_arg(*p_va, PyObject **); - } - break; - } + { + (void) va_arg(*p_va, PyObject **); + break; + } - case '(': /* bypass tuple, not handled at all previously */ - { - char *msg; - for (;;) { - if (*format==')') - break; - if (IS_END_OF_FORMAT(*format)) - return "Unmatched left paren in format " - "string"; - msg = skipitem(&format, p_va, flags); - if (msg) - return msg; - } - format++; - break; - } + case 'O': /* object */ + { + if (*format == '!') { + format++; + (void) va_arg(*p_va, PyTypeObject*); + (void) va_arg(*p_va, PyObject **); + } + else if (*format == '&') { + typedef int (*converter)(PyObject *, void *); + (void) va_arg(*p_va, converter); + (void) va_arg(*p_va, void *); + format++; + } + else { + (void) va_arg(*p_va, PyObject **); + } + break; + } - case ')': - return "Unmatched right paren in format string"; + case '(': /* bypass tuple, not handled at all previously */ + { + char *msg; + for (;;) { + if (*format==')') + break; + if (IS_END_OF_FORMAT(*format)) + return "Unmatched left paren in format " + "string"; + msg = skipitem(&format, p_va, flags); + if (msg) + return msg; + } + format++; + break; + } - default: + case ')': + return "Unmatched right paren in format string"; + + default: err: - return "impossible"; - - } + return "impossible"; - *p_format = format; - return NULL; + } + + *p_format = format; + return NULL; } int PyArg_UnpackTuple(PyObject *args, const char *name, Py_ssize_t 
min, Py_ssize_t max, ...) { - Py_ssize_t i, l; - PyObject **o; - va_list vargs; + Py_ssize_t i, l; + PyObject **o; + va_list vargs; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, max); + va_start(vargs, max); #else - va_start(vargs); + va_start(vargs); #endif - assert(min >= 0); - assert(min <= max); - if (!PyTuple_Check(args)) { - PyErr_SetString(PyExc_SystemError, - "PyArg_UnpackTuple() argument list is not a tuple"); - return 0; - } - l = PyTuple_GET_SIZE(args); - if (l < min) { - if (name != NULL) - PyErr_Format( - PyExc_TypeError, - "%s expected %s%zd arguments, got %zd", - name, (min == max ? "" : "at least "), min, l); - else - PyErr_Format( - PyExc_TypeError, - "unpacked tuple should have %s%zd elements," - " but has %zd", - (min == max ? "" : "at least "), min, l); - va_end(vargs); - return 0; - } - if (l > max) { - if (name != NULL) - PyErr_Format( - PyExc_TypeError, - "%s expected %s%zd arguments, got %zd", - name, (min == max ? "" : "at most "), max, l); - else - PyErr_Format( - PyExc_TypeError, - "unpacked tuple should have %s%zd elements," - " but has %zd", - (min == max ? "" : "at most "), max, l); - va_end(vargs); - return 0; - } - for (i = 0; i < l; i++) { - o = va_arg(vargs, PyObject **); - *o = PyTuple_GET_ITEM(args, i); - } - va_end(vargs); - return 1; + assert(min >= 0); + assert(min <= max); + if (!PyTuple_Check(args)) { + PyErr_SetString(PyExc_SystemError, + "PyArg_UnpackTuple() argument list is not a tuple"); + return 0; + } + l = PyTuple_GET_SIZE(args); + if (l < min) { + if (name != NULL) + PyErr_Format( + PyExc_TypeError, + "%s expected %s%zd arguments, got %zd", + name, (min == max ? "" : "at least "), min, l); + else + PyErr_Format( + PyExc_TypeError, + "unpacked tuple should have %s%zd elements," + " but has %zd", + (min == max ? "" : "at least "), min, l); + va_end(vargs); + return 0; + } + if (l > max) { + if (name != NULL) + PyErr_Format( + PyExc_TypeError, + "%s expected %s%zd arguments, got %zd", + name, (min == max ? 
"" : "at most "), max, l); + else + PyErr_Format( + PyExc_TypeError, + "unpacked tuple should have %s%zd elements," + " but has %zd", + (min == max ? "" : "at most "), max, l); + va_end(vargs); + return 0; + } + for (i = 0; i < l; i++) { + o = va_arg(vargs, PyObject **); + *o = PyTuple_GET_ITEM(args, i); + } + va_end(vargs); + return 1; } /* For type constructors that don't take keyword args * - * Sets a TypeError and returns 0 if the kwds dict is + * Sets a TypeError and returns 0 if the kwds dict is * not empty, returns 1 otherwise */ int _PyArg_NoKeywords(const char *funcname, PyObject *kw) { - if (kw == NULL) - return 1; - if (!PyDict_CheckExact(kw)) { - PyErr_BadInternalCall(); - return 0; - } - if (PyDict_Size(kw) == 0) - return 1; - - PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments", - funcname); - return 0; + if (kw == NULL) + return 1; + if (!PyDict_CheckExact(kw)) { + PyErr_BadInternalCall(); + return 0; + } + if (PyDict_Size(kw) == 0) + return 1; + + PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments", + funcname); + return 0; } #ifdef __cplusplus }; diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c --- a/pypy/module/cpyext/src/modsupport.c +++ b/pypy/module/cpyext/src/modsupport.c @@ -33,41 +33,41 @@ static int countformat(const char *format, int endchar) { - int count = 0; - int level = 0; - while (level > 0 || *format != endchar) { - switch (*format) { - case '\0': - /* Premature end */ - PyErr_SetString(PyExc_SystemError, - "unmatched paren in format"); - return -1; - case '(': - case '[': - case '{': - if (level == 0) - count++; - level++; - break; - case ')': - case ']': - case '}': - level--; - break; - case '#': - case '&': - case ',': - case ':': - case ' ': - case '\t': - break; - default: - if (level == 0) - count++; - } - format++; - } - return count; + int count = 0; + int level = 0; + while (level > 0 || *format != endchar) { + switch (*format) { + case '\0': + /* 
Premature end */ + PyErr_SetString(PyExc_SystemError, + "unmatched paren in format"); + return -1; + case '(': + case '[': + case '{': + if (level == 0) + count++; + level++; + break; + case ')': + case ']': + case '}': + level--; + break; + case '#': + case '&': + case ',': + case ':': + case ' ': + case '\t': + break; + default: + if (level == 0) + count++; + } + format++; + } + return count; } @@ -83,582 +83,435 @@ static PyObject * do_mkdict(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *d; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - if ((d = PyDict_New()) == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. */ - for (i = 0; i < n; i+= 2) { - PyObject *k, *v; - int err; - k = do_mkvalue(p_format, p_va, flags); - if (k == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - k = Py_None; - } - v = do_mkvalue(p_format, p_va, flags); - if (v == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - v = Py_None; - } - err = PyDict_SetItem(d, k, v); - Py_DECREF(k); - Py_DECREF(v); - if (err < 0 || itemfailed) { - Py_DECREF(d); - return NULL; - } - } - if (d != NULL && **p_format != endchar) { - Py_DECREF(d); - d = NULL; - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - } - else if (endchar) - ++*p_format; - return d; + PyObject *d; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + if ((d = PyDict_New()) == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. 
*/ + for (i = 0; i < n; i+= 2) { + PyObject *k, *v; + int err; + k = do_mkvalue(p_format, p_va, flags); + if (k == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + k = Py_None; + } + v = do_mkvalue(p_format, p_va, flags); + if (v == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + v = Py_None; + } + err = PyDict_SetItem(d, k, v); + Py_DECREF(k); + Py_DECREF(v); + if (err < 0 || itemfailed) { + Py_DECREF(d); + return NULL; + } + } + if (d != NULL && **p_format != endchar) { + Py_DECREF(d); + d = NULL; + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + } + else if (endchar) + ++*p_format; + return d; } static PyObject * do_mklist(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *v; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - v = PyList_New(n); - if (v == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. */ - for (i = 0; i < n; i++) { - PyObject *w = do_mkvalue(p_format, p_va, flags); - if (w == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - w = Py_None; - } - PyList_SET_ITEM(v, i, w); - } + PyObject *v; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + v = PyList_New(n); + if (v == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. 
*/ + for (i = 0; i < n; i++) { + PyObject *w = do_mkvalue(p_format, p_va, flags); + if (w == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + w = Py_None; + } + PyList_SET_ITEM(v, i, w); + } - if (itemfailed) { - /* do_mkvalue() should have already set an error */ - Py_DECREF(v); - return NULL; - } - if (**p_format != endchar) { - Py_DECREF(v); - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - return NULL; - } - if (endchar) - ++*p_format; - return v; + if (itemfailed) { + /* do_mkvalue() should have already set an error */ + Py_DECREF(v); + return NULL; + } + if (**p_format != endchar) { + Py_DECREF(v); + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + return NULL; + } + if (endchar) + ++*p_format; + return v; } #ifdef Py_USING_UNICODE static int _ustrlen(Py_UNICODE *u) { - int i = 0; - Py_UNICODE *v = u; - while (*v != 0) { i++; v++; } - return i; + int i = 0; + Py_UNICODE *v = u; + while (*v != 0) { i++; v++; } + return i; } #endif static PyObject * do_mktuple(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *v; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - if ((v = PyTuple_New(n)) == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. 
*/ - for (i = 0; i < n; i++) { - PyObject *w = do_mkvalue(p_format, p_va, flags); - if (w == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - w = Py_None; - } - PyTuple_SET_ITEM(v, i, w); - } - if (itemfailed) { - /* do_mkvalue() should have already set an error */ - Py_DECREF(v); - return NULL; - } - if (**p_format != endchar) { - Py_DECREF(v); - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - return NULL; - } - if (endchar) - ++*p_format; - return v; + PyObject *v; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + if ((v = PyTuple_New(n)) == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. */ + for (i = 0; i < n; i++) { + PyObject *w = do_mkvalue(p_format, p_va, flags); + if (w == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + w = Py_None; + } + PyTuple_SET_ITEM(v, i, w); + } + if (itemfailed) { + /* do_mkvalue() should have already set an error */ + Py_DECREF(v); + return NULL; + } + if (**p_format != endchar) { + Py_DECREF(v); + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + return NULL; + } + if (endchar) + ++*p_format; + return v; } static PyObject * do_mkvalue(const char **p_format, va_list *p_va, int flags) { - for (;;) { - switch (*(*p_format)++) { - case '(': - return do_mktuple(p_format, p_va, ')', - countformat(*p_format, ')'), flags); + for (;;) { + switch (*(*p_format)++) { + case '(': + return do_mktuple(p_format, p_va, ')', + countformat(*p_format, ')'), flags); - case '[': - return do_mklist(p_format, p_va, ']', - countformat(*p_format, ']'), flags); + case '[': + return do_mklist(p_format, p_va, ']', + countformat(*p_format, ']'), flags); - case '{': - return do_mkdict(p_format, p_va, '}', - countformat(*p_format, '}'), flags); + case '{': + return do_mkdict(p_format, p_va, '}', + countformat(*p_format, '}'), flags); - case 'b': - case 'B': - case 'h': - case 'i': - return PyInt_FromLong((long)va_arg(*p_va, int)); - - 
case 'H': - return PyInt_FromLong((long)va_arg(*p_va, unsigned int)); + case 'b': + case 'B': + case 'h': + case 'i': + return PyInt_FromLong((long)va_arg(*p_va, int)); - case 'I': - { - unsigned int n; - n = va_arg(*p_va, unsigned int); - if (n > (unsigned long)PyInt_GetMax()) - return PyLong_FromUnsignedLong((unsigned long)n); - else - return PyInt_FromLong(n); - } - - case 'n': + case 'H': + return PyInt_FromLong((long)va_arg(*p_va, unsigned int)); + + case 'I': + { + unsigned int n; + n = va_arg(*p_va, unsigned int); + if (n > (unsigned long)PyInt_GetMax()) + return PyLong_FromUnsignedLong((unsigned long)n); + else + return PyInt_FromLong(n); + } + + case 'n': #if SIZEOF_SIZE_T!=SIZEOF_LONG - return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t)); + return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t)); #endif - /* Fall through from 'n' to 'l' if Py_ssize_t is long */ - case 'l': - return PyInt_FromLong(va_arg(*p_va, long)); + /* Fall through from 'n' to 'l' if Py_ssize_t is long */ + case 'l': + return PyInt_FromLong(va_arg(*p_va, long)); - case 'k': - { - unsigned long n; - n = va_arg(*p_va, unsigned long); - if (n > (unsigned long)PyInt_GetMax()) - return PyLong_FromUnsignedLong(n); - else - return PyInt_FromLong(n); - } + case 'k': + { + unsigned long n; + n = va_arg(*p_va, unsigned long); + if (n > (unsigned long)PyInt_GetMax()) + return PyLong_FromUnsignedLong(n); + else + return PyInt_FromLong(n); + } #ifdef HAVE_LONG_LONG - case 'L': - return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG)); + case 'L': + return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG)); - case 'K': - return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG)); + case 'K': + return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG)); #endif #ifdef Py_USING_UNICODE - case 'u': - { - PyObject *v; - Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *); - Py_ssize_t n; - if (**p_format == '#') { - ++*p_format; - if 
(flags & FLAG_SIZE_T) - n = va_arg(*p_va, Py_ssize_t); - else - n = va_arg(*p_va, int); - } - else - n = -1; - if (u == NULL) { - v = Py_None; - Py_INCREF(v); - } - else { - if (n < 0) - n = _ustrlen(u); - v = PyUnicode_FromUnicode(u, n); - } - return v; - } + case 'u': + { + PyObject *v; + Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *); + Py_ssize_t n; + if (**p_format == '#') { + ++*p_format; + if (flags & FLAG_SIZE_T) + n = va_arg(*p_va, Py_ssize_t); + else + n = va_arg(*p_va, int); + } + else + n = -1; + if (u == NULL) { + v = Py_None; + Py_INCREF(v); + } + else { + if (n < 0) + n = _ustrlen(u); + v = PyUnicode_FromUnicode(u, n); + } + return v; + } #endif - case 'f': - case 'd': - return PyFloat_FromDouble( - (double)va_arg(*p_va, va_double)); + case 'f': + case 'd': + return PyFloat_FromDouble( + (double)va_arg(*p_va, va_double)); #ifndef WITHOUT_COMPLEX - case 'D': - return PyComplex_FromCComplex( - *((Py_complex *)va_arg(*p_va, Py_complex *))); + case 'D': + return PyComplex_FromCComplex( + *((Py_complex *)va_arg(*p_va, Py_complex *))); #endif /* WITHOUT_COMPLEX */ - case 'c': - { - char p[1]; - p[0] = (char)va_arg(*p_va, int); - return PyString_FromStringAndSize(p, 1); - } + case 'c': + { + char p[1]; + p[0] = (char)va_arg(*p_va, int); + return PyString_FromStringAndSize(p, 1); + } - case 's': - case 'z': - { - PyObject *v; - char *str = va_arg(*p_va, char *); - Py_ssize_t n; - if (**p_format == '#') { - ++*p_format; - if (flags & FLAG_SIZE_T) - n = va_arg(*p_va, Py_ssize_t); - else - n = va_arg(*p_va, int); - } - else - n = -1; - if (str == NULL) { - v = Py_None; - Py_INCREF(v); - } - else { - if (n < 0) { - size_t m = strlen(str); - if (m > PY_SSIZE_T_MAX) { - PyErr_SetString(PyExc_OverflowError, - "string too long for Python string"); - return NULL; - } - n = (Py_ssize_t)m; - } - v = PyString_FromStringAndSize(str, n); - } - return v; - } + case 's': + case 'z': + { + PyObject *v; + char *str = va_arg(*p_va, char *); + Py_ssize_t n; + if (**p_format == 
'#') { + ++*p_format; + if (flags & FLAG_SIZE_T) + n = va_arg(*p_va, Py_ssize_t); + else + n = va_arg(*p_va, int); + } + else + n = -1; + if (str == NULL) { + v = Py_None; + Py_INCREF(v); + } + else { + if (n < 0) { + size_t m = strlen(str); + if (m > PY_SSIZE_T_MAX) { + PyErr_SetString(PyExc_OverflowError, + "string too long for Python string"); + return NULL; + } + n = (Py_ssize_t)m; + } + v = PyString_FromStringAndSize(str, n); + } + return v; + } - case 'N': - case 'S': - case 'O': - if (**p_format == '&') { - typedef PyObject *(*converter)(void *); - converter func = va_arg(*p_va, converter); - void *arg = va_arg(*p_va, void *); - ++*p_format; - return (*func)(arg); - } - else { - PyObject *v; - v = va_arg(*p_va, PyObject *); - if (v != NULL) { - if (*(*p_format - 1) != 'N') - Py_INCREF(v); - } - else if (!PyErr_Occurred()) - /* If a NULL was passed - * because a call that should - * have constructed a value - * failed, that's OK, and we - * pass the error on; but if - * no error occurred it's not - * clear that the caller knew - * what she was doing. */ - PyErr_SetString(PyExc_SystemError, - "NULL object passed to Py_BuildValue"); - return v; - } + case 'N': + case 'S': + case 'O': + if (**p_format == '&') { + typedef PyObject *(*converter)(void *); + converter func = va_arg(*p_va, converter); + void *arg = va_arg(*p_va, void *); + ++*p_format; + return (*func)(arg); + } + else { + PyObject *v; + v = va_arg(*p_va, PyObject *); + if (v != NULL) { + if (*(*p_format - 1) != 'N') + Py_INCREF(v); + } + else if (!PyErr_Occurred()) + /* If a NULL was passed + * because a call that should + * have constructed a value + * failed, that's OK, and we + * pass the error on; but if + * no error occurred it's not + * clear that the caller knew + * what she was doing. 
*/ + PyErr_SetString(PyExc_SystemError, + "NULL object passed to Py_BuildValue"); + return v; + } - case ':': - case ',': - case ' ': - case '\t': - break; + case ':': + case ',': + case ' ': + case '\t': + break; - default: - PyErr_SetString(PyExc_SystemError, - "bad format char passed to Py_BuildValue"); - return NULL; + default: + PyErr_SetString(PyExc_SystemError, + "bad format char passed to Py_BuildValue"); + return NULL; - } - } + } + } } PyObject * Py_BuildValue(const char *format, ...) { - va_list va; - PyObject* retval; - va_start(va, format); - retval = va_build_value(format, va, 0); - va_end(va); - return retval; + va_list va; + PyObject* retval; + va_start(va, format); + retval = va_build_value(format, va, 0); + va_end(va); + return retval; } PyObject * _Py_BuildValue_SizeT(const char *format, ...) { - va_list va; - PyObject* retval; - va_start(va, format); - retval = va_build_value(format, va, FLAG_SIZE_T); - va_end(va); - return retval; + va_list va; + PyObject* retval; + va_start(va, format); + retval = va_build_value(format, va, FLAG_SIZE_T); + va_end(va); + return retval; } PyObject * Py_VaBuildValue(const char *format, va_list va) { - return va_build_value(format, va, 0); + return va_build_value(format, va, 0); } PyObject * _Py_VaBuildValue_SizeT(const char *format, va_list va) { - return va_build_value(format, va, FLAG_SIZE_T); + return va_build_value(format, va, FLAG_SIZE_T); } static PyObject * va_build_value(const char *format, va_list va, int flags) { - const char *f = format; - int n = countformat(f, '\0'); - va_list lva; + const char *f = format; + int n = countformat(f, '\0'); + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - if (n < 0) - return NULL; - if (n == 0) { - Py_INCREF(Py_None); - return Py_None; - } - if (n == 1) - return do_mkvalue(&f, &lva, flags); - 
return do_mktuple(&f, &lva, '\0', n, flags); + if (n < 0) + return NULL; + if (n == 0) { + Py_INCREF(Py_None); + return Py_None; + } + if (n == 1) + return do_mkvalue(&f, &lva, flags); + return do_mktuple(&f, &lva, '\0', n, flags); } PyObject * PyEval_CallFunction(PyObject *obj, const char *format, ...) { - va_list vargs; - PyObject *args; - PyObject *res; + va_list vargs; + PyObject *args; + PyObject *res; - va_start(vargs, format); + va_start(vargs, format); - args = Py_VaBuildValue(format, vargs); - va_end(vargs); + args = Py_VaBuildValue(format, vargs); + va_end(vargs); - if (args == NULL) - return NULL; + if (args == NULL) + return NULL; - res = PyEval_CallObject(obj, args); - Py_DECREF(args); + res = PyEval_CallObject(obj, args); + Py_DECREF(args); - return res; + return res; } PyObject * PyEval_CallMethod(PyObject *obj, const char *methodname, const char *format, ...) { - va_list vargs; - PyObject *meth; - PyObject *args; - PyObject *res; + va_list vargs; + PyObject *meth; + PyObject *args; + PyObject *res; - meth = PyObject_GetAttrString(obj, methodname); - if (meth == NULL) - return NULL; + meth = PyObject_GetAttrString(obj, methodname); + if (meth == NULL) + return NULL; - va_start(vargs, format); + va_start(vargs, format); - args = Py_VaBuildValue(format, vargs); - va_end(vargs); + args = Py_VaBuildValue(format, vargs); + va_end(vargs); - if (args == NULL) { - Py_DECREF(meth); - return NULL; - } + if (args == NULL) { + Py_DECREF(meth); + return NULL; + } - res = PyEval_CallObject(meth, args); - Py_DECREF(meth); - Py_DECREF(args); + res = PyEval_CallObject(meth, args); + Py_DECREF(meth); + Py_DECREF(args); - return res; -} - -static PyObject* -call_function_tail(PyObject *callable, PyObject *args) -{ - PyObject *retval; - - if (args == NULL) - return NULL; - - if (!PyTuple_Check(args)) { - PyObject *a; - - a = PyTuple_New(1); - if (a == NULL) { - Py_DECREF(args); - return NULL; - } - PyTuple_SET_ITEM(a, 0, args); - args = a; - } - retval = 
PyObject_Call(callable, args, NULL);
-
-    Py_DECREF(args);
-
-    return retval;
-}
-
-PyObject *
-PyObject_CallFunction(PyObject *callable, const char *format, ...)
-{
-    va_list va;
-    PyObject *args;
-
-    if (format && *format) {
-        va_start(va, format);
-        args = Py_VaBuildValue(format, va);
-        va_end(va);
-    }
-    else
-        args = PyTuple_New(0);
-
-    return call_function_tail(callable, args);
-}
-
-PyObject *
-PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...)
-{
-    va_list va;
-    PyObject *args;
-    PyObject *func = NULL;
-    PyObject *retval = NULL;
-
-    func = PyObject_GetAttrString(o, name);
-    if (func == NULL) {
-        PyErr_SetString(PyExc_AttributeError, name);
-        return 0;
-    }
-
-    if (format && *format) {
-        va_start(va, format);
-        args = Py_VaBuildValue(format, va);
-        va_end(va);
-    }
-    else
-        args = PyTuple_New(0);
-
-    retval = call_function_tail(func, args);
-
-  exit:
-    /* args gets consumed in call_function_tail */
-    Py_XDECREF(func);
-
-    return retval;
-}
-
-static PyObject *
-objargs_mktuple(va_list va)
-{
-    int i, n = 0;
-    va_list countva;
-    PyObject *result, *tmp;
-
-#ifdef VA_LIST_IS_ARRAY
-    memcpy(countva, va, sizeof(va_list));
-#else
-#ifdef __va_copy
-    __va_copy(countva, va);
-#else
-    countva = va;
-#endif
-#endif
-
-    while (((PyObject *)va_arg(countva, PyObject *)) != NULL)
-        ++n;
-    result = PyTuple_New(n);
-    if (result != NULL && n > 0) {
-        for (i = 0; i < n; ++i) {
-            tmp = (PyObject *)va_arg(va, PyObject *);
-            Py_INCREF(tmp);
-            PyTuple_SET_ITEM(result, i, tmp);
-        }
-    }
-    return result;
-}
-
-PyObject *
-PyObject_CallFunctionObjArgs(PyObject *callable, ...)
-{
-    PyObject *args, *tmp;
-    va_list vargs;
-
-    /* count the args */
-    va_start(vargs, callable);
-    args = objargs_mktuple(vargs);
-    va_end(vargs);
-    if (args == NULL)
-        return NULL;
-    tmp = PyObject_Call(callable, args, NULL);
-    Py_DECREF(args);
-
-    return tmp;
-}
-
-PyObject *
-PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...)
-{
-    PyObject *args, *tmp;
-    va_list vargs;
-
-    callable = PyObject_GetAttr(callable, name);
-    if (callable == NULL)
-        return NULL;
-
-    /* count the args */
-    va_start(vargs, name);
-    args = objargs_mktuple(vargs);
-    va_end(vargs);
-    if (args == NULL) {
-        Py_DECREF(callable);
-        return NULL;
-    }
-    tmp = PyObject_Call(callable, args, NULL);
-    Py_DECREF(args);
-    Py_DECREF(callable);
-
-    return tmp;
+    return res;
 }

 /* returns -1 in case of error, 0 if a new key was added, 1 if the key
@@ -666,67 +519,67 @@
 static int
 _PyModule_AddObject_NoConsumeRef(PyObject *m, const char *name, PyObject *o)
 {
-    PyObject *dict, *prev;
-    if (!PyModule_Check(m)) {
-        PyErr_SetString(PyExc_TypeError,
-                        "PyModule_AddObject() needs module as first arg");
-        return -1;
-    }
-    if (!o) {
-        if (!PyErr_Occurred())
-            PyErr_SetString(PyExc_TypeError,
-                            "PyModule_AddObject() needs non-NULL value");
-        return -1;
-    }
+    PyObject *dict, *prev;
+    if (!PyModule_Check(m)) {
+        PyErr_SetString(PyExc_TypeError,
+                        "PyModule_AddObject() needs module as first arg");
+        return -1;
+    }
+    if (!o) {
+        if (!PyErr_Occurred())
+            PyErr_SetString(PyExc_TypeError,
+                            "PyModule_AddObject() needs non-NULL value");
+        return -1;
+    }

-    dict = PyModule_GetDict(m);
-    if (dict == NULL) {
-        /* Internal error -- modules must have a dict! */
-        PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__",
-                     PyModule_GetName(m));
-        return -1;
-    }
-    prev = PyDict_GetItemString(dict, name);
-    if (PyDict_SetItemString(dict, name, o))
-        return -1;
-    return prev != NULL;
+    dict = PyModule_GetDict(m);
+    if (dict == NULL) {
+        /* Internal error -- modules must have a dict! */
+        PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__",
+                     PyModule_GetName(m));
+        return -1;
+    }
+    prev = PyDict_GetItemString(dict, name);
+    if (PyDict_SetItemString(dict, name, o))
+        return -1;
+    return prev != NULL;
 }

 int
 PyModule_AddObject(PyObject *m, const char *name, PyObject *o)
 {
-    int result = _PyModule_AddObject_NoConsumeRef(m, name, o);
-    /* XXX WORKAROUND for a common misusage of PyModule_AddObject:
-       for the common case of adding a new key, we don't consume a
-       reference, but instead just leak it away.  The issue is that
-       people generally don't realize that this function consumes a
-       reference, because on CPython the reference is still stored
-       on the dictionary. */
-    if (result != 0)
-        Py_DECREF(o);
-    return result < 0 ? -1 : 0;
+    int result = _PyModule_AddObject_NoConsumeRef(m, name, o);
+    /* XXX WORKAROUND for a common misusage of PyModule_AddObject:
+       for the common case of adding a new key, we don't consume a
+       reference, but instead just leak it away.  The issue is that
+       people generally don't realize that this function consumes a
+       reference, because on CPython the reference is still stored
+       on the dictionary. */
+    if (result != 0)
+        Py_DECREF(o);
+    return result < 0 ? -1 : 0;
 }

-int
+int
 PyModule_AddIntConstant(PyObject *m, const char *name, long value)
 {
-    int result;
-    PyObject *o = PyInt_FromLong(value);
-    if (!o)
-        return -1;
-    result = _PyModule_AddObject_NoConsumeRef(m, name, o);
-    Py_DECREF(o);
-    return result < 0 ? -1 : 0;
+    int result;
+    PyObject *o = PyInt_FromLong(value);
+    if (!o)
+        return -1;
+    result = _PyModule_AddObject_NoConsumeRef(m, name, o);
+    Py_DECREF(o);
+    return result < 0 ? -1 : 0;
 }

-int
+int
 PyModule_AddStringConstant(PyObject *m, const char *name, const char *value)
 {
-    int result;
-    PyObject *o = PyString_FromString(value);
-    if (!o)
-        return -1;
-    result = _PyModule_AddObject_NoConsumeRef(m, name, o);
-    Py_DECREF(o);
-    return result < 0 ? -1 : 0;
+    int result;
+    PyObject *o = PyString_FromString(value);
+    if (!o)
+        return -1;
+    result = _PyModule_AddObject_NoConsumeRef(m, name, o);
+    Py_DECREF(o);
+    return result < 0 ? -1 : 0;
 }
diff --git a/pypy/module/cpyext/src/mysnprintf.c b/pypy/module/cpyext/src/mysnprintf.c
--- a/pypy/module/cpyext/src/mysnprintf.c
+++ b/pypy/module/cpyext/src/mysnprintf.c
@@ -20,86 +20,86 @@

    Return value (rv):

-        When 0 <= rv < size, the output conversion was unexceptional, and
-        rv characters were written to str (excluding a trailing \0 byte at
-        str[rv]).
+        When 0 <= rv < size, the output conversion was unexceptional, and
+        rv characters were written to str (excluding a trailing \0 byte at
+        str[rv]).

-        When rv >= size, output conversion was truncated, and a buffer of
-        size rv+1 would have been needed to avoid truncation.  str[size-1]
-        is \0 in this case.
+        When rv >= size, output conversion was truncated, and a buffer of
+        size rv+1 would have been needed to avoid truncation.  str[size-1]
+        is \0 in this case.

-        When rv < 0, "something bad happened".  str[size-1] is \0 in this
-        case too, but the rest of str is unreliable.  It could be that
-        an error in format codes was detected by libc, or on platforms
-        with a non-C99 vsnprintf simply that the buffer wasn't big enough
-        to avoid truncation, or on platforms without any vsnprintf that
-        PyMem_Malloc couldn't obtain space for a temp buffer.
+        When rv < 0, "something bad happened".  str[size-1] is \0 in this
+        case too, but the rest of str is unreliable.  It could be that
+        an error in format codes was detected by libc, or on platforms
+        with a non-C99 vsnprintf simply that the buffer wasn't big enough
+        to avoid truncation, or on platforms without any vsnprintf that
+        PyMem_Malloc couldn't obtain space for a temp buffer.

    CAUTION: Unlike C99, str != NULL and size > 0 are required.
 */

 int
+PyOS_snprintf(char *str, size_t size, const char *format, ...)
+{
+    int rc;
+    va_list va;
+
+    va_start(va, format);
+    rc = PyOS_vsnprintf(str, size, format, va);
+    va_end(va);
+    return rc;
+}
+
+int
 PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va)
 {
-    int len;  /* # bytes written, excluding \0 */
+    int len;  /* # bytes written, excluding \0 */
 #ifdef HAVE_SNPRINTF
 #define _PyOS_vsnprintf_EXTRA_SPACE 1
 #else
 #define _PyOS_vsnprintf_EXTRA_SPACE 512
-    char *buffer;
+    char *buffer;
 #endif

-    assert(str != NULL);
-    assert(size > 0);
-    assert(format != NULL);
-    /* We take a size_t as input but return an int.  Sanity check
-     * our input so that it won't cause an overflow in the
-     * vsnprintf return value or the buffer malloc size.  */
-    if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) {
-        len = -666;
-        goto Done;
-    }
+    assert(str != NULL);
+    assert(size > 0);
+    assert(format != NULL);
+    /* We take a size_t as input but return an int.  Sanity check
+     * our input so that it won't cause an overflow in the
+     * vsnprintf return value or the buffer malloc size.  */
+    if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) {
+        len = -666;
+        goto Done;
+    }

 #ifdef HAVE_SNPRINTF
-    len = vsnprintf(str, size, format, va);
+    len = vsnprintf(str, size, format, va);
 #else
-    /* Emulate it. */
-    buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE);
-    if (buffer == NULL) {
-        len = -666;
-        goto Done;
-    }
+    /* Emulate it. */
+    buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE);
+    if (buffer == NULL) {
+        len = -666;
+        goto Done;
+    }

-    len = vsprintf(buffer, format, va);
-    if (len < 0)
-        /* ignore the error */;
+    len = vsprintf(buffer, format, va);
+    if (len < 0)
+        /* ignore the error */;

-    else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE)
-        Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf");
+    else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE)
+        Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf");

-    else {
-        const size_t to_copy = (size_t)len < size ?
-                               (size_t)len : size - 1;
-        assert(to_copy < size);
-        memcpy(str, buffer, to_copy);
-        str[to_copy] = '\0';
-    }
-    PyMem_FREE(buffer);
+    else {
+        const size_t to_copy = (size_t)len < size ?
+                               (size_t)len : size - 1;
+        assert(to_copy < size);
+        memcpy(str, buffer, to_copy);
+        str[to_copy] = '\0';
+    }
+    PyMem_FREE(buffer);
 #endif

 Done:
-    if (size > 0)
-        str[size-1] = '\0';
-    return len;
+    if (size > 0)
+        str[size-1] = '\0';
+    return len;
 #undef _PyOS_vsnprintf_EXTRA_SPACE
 }
-
-int
-PyOS_snprintf(char *str, size_t size, const char *format, ...)
-{
-    int rc;
-    va_list va;
-
-    va_start(va, format);
-    rc = PyOS_vsnprintf(str, size, format, va);
-    va_end(va);
-    return rc;
-}
diff --git a/pypy/module/cpyext/src/object.c b/pypy/module/cpyext/src/object.c
deleted file mode 100644
--- a/pypy/module/cpyext/src/object.c
+++ /dev/null
@@ -1,91 +0,0 @@
-// contains code from abstract.c
-#include
-
-
-static PyObject *
-null_error(void)
-{
-    if (!PyErr_Occurred())
-        PyErr_SetString(PyExc_SystemError,
-                        "null argument to internal routine");
-    return NULL;
-}
-
-int PyObject_AsReadBuffer(PyObject *obj,
-                          const void **buffer,
-                          Py_ssize_t *buffer_len)
-{
-    PyBufferProcs *pb;
-    void *pp;
-    Py_ssize_t len;
-
-    if (obj == NULL || buffer == NULL || buffer_len == NULL) {
-        null_error();
-        return -1;
-    }
-    pb = obj->ob_type->tp_as_buffer;
-    if (pb == NULL ||
-        pb->bf_getreadbuffer == NULL ||
-        pb->bf_getsegcount == NULL) {
-        PyErr_SetString(PyExc_TypeError,
-                        "expected a readable buffer object");
-        return -1;
-    }
-    if ((*pb->bf_getsegcount)(obj, NULL) != 1) {
-        PyErr_SetString(PyExc_TypeError,
-                        "expected a single-segment buffer object");
-        return -1;
-    }
-    len = (*pb->bf_getreadbuffer)(obj, 0, &pp);
-    if (len < 0)
-        return -1;
-    *buffer = pp;
-    *buffer_len = len;
-    return 0;
-}
-
-int PyObject_AsWriteBuffer(PyObject *obj,
-                           void **buffer,
-                           Py_ssize_t *buffer_len)
-{
-    PyBufferProcs *pb;
-    void*pp;
-    Py_ssize_t len;
-
-    if (obj == NULL || buffer == NULL || buffer_len == NULL) {
-        null_error();
-        return -1;
-    }
-    pb = obj->ob_type->tp_as_buffer;
-    if (pb == NULL ||
-        pb->bf_getwritebuffer == NULL ||
-        pb->bf_getsegcount == NULL) {
-        PyErr_SetString(PyExc_TypeError,
-                        "expected a writeable buffer object");
-        return -1;
-    }
-    if ((*pb->bf_getsegcount)(obj, NULL) != 1) {
-        PyErr_SetString(PyExc_TypeError,
-                        "expected a single-segment buffer object");
-        return -1;
-    }
-    len = (*pb->bf_getwritebuffer)(obj,0,&pp);
-    if (len < 0)
-        return -1;
-    *buffer = pp;
-    *buffer_len = len;
-    return 0;
-}
-
-int
-PyObject_CheckReadBuffer(PyObject *obj)
-{
-    PyBufferProcs *pb = obj->ob_type->tp_as_buffer;
-
-    if (pb == NULL ||
-        pb->bf_getreadbuffer == NULL ||
-        pb->bf_getsegcount == NULL ||
-        (*pb->bf_getsegcount)(obj, NULL) != 1)
-        return 0;
-    return 1;
-}
diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c
--- a/pypy/module/cpyext/src/pyerrors.c
+++ b/pypy/module/cpyext/src/pyerrors.c
@@ -4,72 +4,75 @@

 PyObject *
 PyErr_Format(PyObject *exception, const char *format, ...)
 {
-    va_list vargs;
-    PyObject* string;
+    va_list vargs;
+    PyObject* string;

 #ifdef HAVE_STDARG_PROTOTYPES
-    va_start(vargs, format);
+    va_start(vargs, format);
 #else
-    va_start(vargs);
+    va_start(vargs);
 #endif

-    string = PyString_FromFormatV(format, vargs);
-    PyErr_SetObject(exception, string);
-    Py_XDECREF(string);
-    va_end(vargs);
-    return NULL;
+    string = PyString_FromFormatV(format, vargs);
+    PyErr_SetObject(exception, string);
+    Py_XDECREF(string);
+    va_end(vargs);
+    return NULL;
 }

+
+
 PyObject *
 PyErr_NewException(const char *name, PyObject *base, PyObject *dict)
 {
-    char *dot;
-    PyObject *modulename = NULL;
-    PyObject *classname = NULL;
-    PyObject *mydict = NULL;
-    PyObject *bases = NULL;
-    PyObject *result = NULL;
-    dot = strrchr(name, '.');
-    if (dot == NULL) {
-        PyErr_SetString(PyExc_SystemError,
-                        "PyErr_NewException: name must be module.class");
-        return NULL;
-    }
-    if (base == NULL)
-        base = PyExc_Exception;
-    if (dict == NULL) {
-        dict = mydict = PyDict_New();
-        if (dict == NULL)
-            goto failure;
-    }
-    if (PyDict_GetItemString(dict, "__module__") == NULL) {
-        modulename = PyString_FromStringAndSize(name,
-                                                (Py_ssize_t)(dot-name));
-        if (modulename == NULL)
-            goto failure;
-        if (PyDict_SetItemString(dict, "__module__", modulename) != 0)
-            goto failure;
-    }
-    if (PyTuple_Check(base)) {
-        bases = base;
-        /* INCREF as we create a new ref in the else branch */
-        Py_INCREF(bases);
-    } else {
-        bases = PyTuple_Pack(1, base);
-        if (bases == NULL)
-            goto failure;
-    }
-    /* Create a real new-style class. */
-    result = PyObject_CallFunction((PyObject *)&PyType_Type, "sOO",
-                                   dot+1, bases, dict);
+    char *dot;
+    PyObject *modulename = NULL;
+    PyObject *classname = NULL;
+    PyObject *mydict = NULL;
+    PyObject *bases = NULL;
+    PyObject *result = NULL;
+    dot = strrchr(name, '.');
+    if (dot == NULL) {
+        PyErr_SetString(PyExc_SystemError,
+                        "PyErr_NewException: name must be module.class");
+        return NULL;
+    }
+    if (base == NULL)
+        base = PyExc_Exception;
+    if (dict == NULL) {
+        dict = mydict = PyDict_New();
+        if (dict == NULL)
+            goto failure;
+    }
+    if (PyDict_GetItemString(dict, "__module__") == NULL) {
+        modulename = PyString_FromStringAndSize(name,
+                                                (Py_ssize_t)(dot-name));
+        if (modulename == NULL)
+            goto failure;
+        if (PyDict_SetItemString(dict, "__module__", modulename) != 0)
+            goto failure;
+    }
+    if (PyTuple_Check(base)) {
+        bases = base;
+        /* INCREF as we create a new ref in the else branch */
+        Py_INCREF(bases);
+    } else {
+        bases = PyTuple_Pack(1, base);
+        if (bases == NULL)
+            goto failure;
+    }
+    /* Create a real new-style class. */
+    result = PyObject_CallFunction((PyObject *)&PyType_Type, "sOO",
+                                   dot+1, bases, dict);

 failure:
-    Py_XDECREF(bases);
-    Py_XDECREF(mydict);
-    Py_XDECREF(classname);
-    Py_XDECREF(modulename);
-    return result;
+    Py_XDECREF(bases);
+    Py_XDECREF(mydict);
+    Py_XDECREF(classname);
+    Py_XDECREF(modulename);
+    return result;
 }

+
 /* Create an exception with docstring */
 PyObject *
 PyErr_NewExceptionWithDoc(const char *name, const char *doc,
                           PyObject *base, PyObject *dict)
diff --git a/pypy/module/cpyext/src/pysignals.c b/pypy/module/cpyext/src/pysignals.c
--- a/pypy/module/cpyext/src/pysignals.c
+++ b/pypy/module/cpyext/src/pysignals.c
@@ -17,17 +17,34 @@
 PyOS_getsig(int sig)
 {
 #ifdef SA_RESTART
-    /* assume sigaction exists */
-    struct sigaction context;
-    if (sigaction(sig, NULL, &context) == -1)
-        return SIG_ERR;
-    return context.sa_handler;
+    /* assume sigaction exists */
+    struct sigaction context;
+    if (sigaction(sig, NULL, &context) == -1)
+        return SIG_ERR;
+    return context.sa_handler;
 #else
-    PyOS_sighandler_t handler;
-    handler = signal(sig, SIG_IGN);
-    if (handler != SIG_ERR)
-        signal(sig, handler);
-    return handler;
+    PyOS_sighandler_t handler;
+/* Special signal handling for the secure CRT in Visual Studio 2005 */
+#if defined(_MSC_VER) && _MSC_VER >= 1400
+    switch (sig) {
+    /* Only these signals are valid */
+    case SIGINT:
+    case SIGILL:
+    case SIGFPE:
+    case SIGSEGV:
+    case SIGTERM:
+    case SIGBREAK:
+    case SIGABRT:
+        break;
+    /* Don't call signal() with other values or it will assert */
+    default:
+        return SIG_ERR;
+    }
+#endif /* _MSC_VER && _MSC_VER >= 1400 */
+    handler = signal(sig, SIG_IGN);
+    if (handler != SIG_ERR)
+        signal(sig, handler);
+    return handler;
 #endif
 }

@@ -35,21 +52,21 @@
 PyOS_setsig(int sig, PyOS_sighandler_t handler)
 {
 #ifdef SA_RESTART
-    /* assume sigaction exists */
-    struct sigaction context, ocontext;
-    context.sa_handler = handler;
-    sigemptyset(&context.sa_mask);
-    context.sa_flags = 0;
-    if (sigaction(sig, &context, &ocontext) == -1)
-        return SIG_ERR;
-    return ocontext.sa_handler;
+    /* assume sigaction exists */
+    struct sigaction context, ocontext;
+    context.sa_handler = handler;
+    sigemptyset(&context.sa_mask);
+    context.sa_flags = 0;
+    if (sigaction(sig, &context, &ocontext) == -1)
+        return SIG_ERR;
+    return ocontext.sa_handler;
 #else
-    PyOS_sighandler_t oldhandler;
-    oldhandler = signal(sig, handler);
+    PyOS_sighandler_t oldhandler;
+    oldhandler = signal(sig, handler);
 #ifndef MS_WINDOWS
-    /* should check if this exists */
-    siginterrupt(sig, 1);
+    /* should check if this exists */
+    siginterrupt(sig, 1);
 #endif
-    return oldhandler;
+    return oldhandler;
 #endif
 }
diff --git a/pypy/module/cpyext/src/pythonrun.c b/pypy/module/cpyext/src/pythonrun.c
--- a/pypy/module/cpyext/src/pythonrun.c
+++ b/pypy/module/cpyext/src/pythonrun.c
@@ -9,28 +9,28 @@
 void
 Py_FatalError(const char *msg)
 {
-    fprintf(stderr, "Fatal Python error: %s\n", msg);
-    fflush(stderr); /* it helps in Windows debug build */
+    fprintf(stderr, "Fatal Python error: %s\n", msg);
+    fflush(stderr); /* it helps in Windows debug build */

 #ifdef MS_WINDOWS
-    {
-        size_t len = strlen(msg);
-        WCHAR* buffer;
-        size_t i;
+    {
+        size_t len = strlen(msg);
+        WCHAR* buffer;
+        size_t i;

-        /* Convert the message to wchar_t. This uses a simple one-to-one
-           conversion, assuming that the this error message actually uses ASCII
-           only. If this ceases to be true, we will have to convert. */
-        buffer = alloca( (len+1) * (sizeof *buffer));
-        for( i=0; i<=len; ++i)
-            buffer[i] = msg[i];
-        OutputDebugStringW(L"Fatal Python error: ");
-        OutputDebugStringW(buffer);
-        OutputDebugStringW(L"\n");
-    }
+        /* Convert the message to wchar_t. This uses a simple one-to-one
+           conversion, assuming that the this error message actually uses ASCII
+           only. If this ceases to be true, we will have to convert. */
+        buffer = alloca( (len+1) * (sizeof *buffer));
+        for( i=0; i<=len; ++i)
+            buffer[i] = msg[i];
+        OutputDebugStringW(L"Fatal Python error: ");
+        OutputDebugStringW(buffer);
+        OutputDebugStringW(L"\n");
+    }
 #ifdef _DEBUG
-    DebugBreak();
+    DebugBreak();
 #endif
 #endif /* MS_WINDOWS */
-    abort();
+    abort();
 }
diff --git a/pypy/module/cpyext/src/stringobject.c b/pypy/module/cpyext/src/stringobject.c
--- a/pypy/module/cpyext/src/stringobject.c
+++ b/pypy/module/cpyext/src/stringobject.c
@@ -4,246 +4,247 @@
 PyObject *
 PyString_FromFormatV(const char *format, va_list vargs)
 {
-    va_list count;
-    Py_ssize_t n = 0;
-    const char* f;
-    char *s;
-    PyObject* string;
+    va_list count;
+    Py_ssize_t n = 0;
+    const char* f;
+    char *s;
+    PyObject* string;

 #ifdef VA_LIST_IS_ARRAY
-    Py_MEMCPY(count, vargs, sizeof(va_list));
+    Py_MEMCPY(count, vargs, sizeof(va_list));
 #else
 #ifdef __va_copy
-    __va_copy(count, vargs);
+    __va_copy(count, vargs);
 #else
-    count = vargs;
+    count = vargs;
 #endif
 #endif

-    /* step 1: figure out how large a buffer we need */
-    for (f = format; *f; f++) {
-        if (*f == '%') {
+    /* step 1: figure out how large a buffer we need */
+    for (f = format; *f; f++) {
+        if (*f == '%') {
 #ifdef HAVE_LONG_LONG
-            int longlongflag = 0;
+            int longlongflag = 0;
 #endif
-            const char* p = f;
-            while (*++f && *f != '%' && !isalpha(Py_CHARMASK(*f)))
-                ;
+            const char* p = f;
+            while (*++f && *f != '%' && !isalpha(Py_CHARMASK(*f)))
+                ;

-            /* skip the 'l' or 'z' in {%ld, %zd, %lu, %zu} since
-             * they don't affect the amount of space we reserve.
-             */
-            if (*f == 'l') {
-                if (f[1] == 'd' || f[1] == 'u') {
-                    ++f;
-                }
+            /* skip the 'l' or 'z' in {%ld, %zd, %lu, %zu} since
+             * they don't affect the amount of space we reserve.
+             */
+            if (*f == 'l') {
+                if (f[1] == 'd' || f[1] == 'u') {
+                    ++f;
+                }
 #ifdef HAVE_LONG_LONG
-                else if (f[1] == 'l' &&
-                         (f[2] == 'd' || f[2] == 'u')) {
-                    longlongflag = 1;
-                    f += 2;
-                }
+                else if (f[1] == 'l' &&
+                         (f[2] == 'd' || f[2] == 'u')) {
+                    longlongflag = 1;
+                    f += 2;
+                }
 #endif
-            }
-            else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) {
-                ++f;
-            }
+            }
+            else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) {
+                ++f;
+            }

-            switch (*f) {
-            case 'c':
-                (void)va_arg(count, int);
-                /* fall through... */
-            case '%':
-                n++;
-                break;
-            case 'd': case 'u': case 'i': case 'x':
-                (void) va_arg(count, int);
+            switch (*f) {
+            case 'c':
+                (void)va_arg(count, int);
+                /* fall through... */
+            case '%':
+                n++;
+                break;
+            case 'd': case 'u': case 'i': case 'x':
+                (void) va_arg(count, int);
 #ifdef HAVE_LONG_LONG
-                /* Need at most
-                   ceil(log10(256)*SIZEOF_LONG_LONG) digits,
-                   plus 1 for the sign.  53/22 is an upper
-                   bound for log10(256). */
-                if (longlongflag)
-                    n += 2 + (SIZEOF_LONG_LONG*53-1) / 22;
-                else
+                /* Need at most
+                   ceil(log10(256)*SIZEOF_LONG_LONG) digits,
+                   plus 1 for the sign.  53/22 is an upper
+                   bound for log10(256). */
+                if (longlongflag)
+                    n += 2 + (SIZEOF_LONG_LONG*53-1) / 22;
+                else
 #endif
-                    /* 20 bytes is enough to hold a 64-bit
-                       integer.  Decimal takes the most
-                       space.  This isn't enough for
-                       octal. */
-                    n += 20;
+                    /* 20 bytes is enough to hold a 64-bit
+                       integer.  Decimal takes the most
+                       space.  This isn't enough for
+                       octal. */
+                    n += 20;

-                break;
-            case 's':
-                s = va_arg(count, char*);
-                n += strlen(s);
-                break;
-            case 'p':
-                (void) va_arg(count, int);
-                /* maximum 64-bit pointer representation:
-                 * 0xffffffffffffffff
-                 * so 19 characters is enough.
-                 * XXX I count 18 -- what's the extra for?
-                 */
-                n += 19;
-                break;
-            default:
-                /* if we stumble upon an unknown
-                   formatting code, copy the rest of
-                   the format string to the output
-                   string. (we cannot just skip the
-                   code, since there's no way to know
-                   what's in the argument list) */
-                n += strlen(p);
-                goto expand;
-            }
-        } else
-            n++;
-    }
+                break;
+            case 's':
+                s = va_arg(count, char*);
+                n += strlen(s);
+                break;
+            case 'p':
+                (void) va_arg(count, int);
+                /* maximum 64-bit pointer representation:
+                 * 0xffffffffffffffff
+                 * so 19 characters is enough.
+                 * XXX I count 18 -- what's the extra for?
+                 */
+                n += 19;
+                break;
+            default:
+                /* if we stumble upon an unknown
+                   formatting code, copy the rest of
+                   the format string to the output
+                   string. (we cannot just skip the
+                   code, since there's no way to know
+                   what's in the argument list) */
+                n += strlen(p);
+                goto expand;
+            }
+        } else
+            n++;
+    }

 expand:
-    /* step 2: fill the buffer */
-    /* Since we've analyzed how much space we need for the worst case,
-       use sprintf directly instead of the slower PyOS_snprintf. */
-    string = PyString_FromStringAndSize(NULL, n);
-    if (!string)
-        return NULL;
+    /* step 2: fill the buffer */
+    /* Since we've analyzed how much space we need for the worst case,
+       use sprintf directly instead of the slower PyOS_snprintf. */
+    string = PyString_FromStringAndSize(NULL, n);
+    if (!string)
+        return NULL;

-    s = PyString_AsString(string);
+    s = PyString_AsString(string);

-    for (f = format; *f; f++) {
-        if (*f == '%') {
-            const char* p = f++;
-            Py_ssize_t i;
-            int longflag = 0;
+    for (f = format; *f; f++) {
+        if (*f == '%') {
+            const char* p = f++;
+            Py_ssize_t i;
+            int longflag = 0;
 #ifdef HAVE_LONG_LONG
-            int longlongflag = 0;
+            int longlongflag = 0;
 #endif
-            int size_tflag = 0;
-            /* parse the width.precision part (we're only
-               interested in the precision value, if any) */
-            n = 0;
-            while (isdigit(Py_CHARMASK(*f)))
-                n = (n*10) + *f++ - '0';
-            if (*f == '.') {
-                f++;
-                n = 0;
-                while (isdigit(Py_CHARMASK(*f)))
-                    n = (n*10) + *f++ - '0';
-            }
-            while (*f && *f != '%' && !isalpha(Py_CHARMASK(*f)))
-                f++;
-            /* Handle %ld, %lu, %lld and %llu. */
-            if (*f == 'l') {
-                if (f[1] == 'd' || f[1] == 'u') {
-                    longflag = 1;
-                    ++f;
-                }
+            int size_tflag = 0;
+            /* parse the width.precision part (we're only
+               interested in the precision value, if any) */
+            n = 0;
+            while (isdigit(Py_CHARMASK(*f)))
+                n = (n*10) + *f++ - '0';
+            if (*f == '.') {
+                f++;
+                n = 0;
+                while (isdigit(Py_CHARMASK(*f)))
+                    n = (n*10) + *f++ - '0';
+            }
+            while (*f && *f != '%' && !isalpha(Py_CHARMASK(*f)))
+                f++;
+            /* Handle %ld, %lu, %lld and %llu. */
+            if (*f == 'l') {
+                if (f[1] == 'd' || f[1] == 'u') {
+                    longflag = 1;
+                    ++f;
+                }
 #ifdef HAVE_LONG_LONG
-                else if (f[1] == 'l' &&
-                         (f[2] == 'd' || f[2] == 'u')) {
-                    longlongflag = 1;
-                    f += 2;
-                }
+                else if (f[1] == 'l' &&
+                         (f[2] == 'd' || f[2] == 'u')) {
+                    longlongflag = 1;
+                    f += 2;
+                }
 #endif
-            }
-            /* handle the size_t flag. */
-            else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) {
-                size_tflag = 1;
-                ++f;
-            }
+            }
+            /* handle the size_t flag. */
+            else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) {
+                size_tflag = 1;
+                ++f;
+            }

-            switch (*f) {
-            case 'c':
-                *s++ = va_arg(vargs, int);
-                break;
-            case 'd':
-                if (longflag)
-                    sprintf(s, "%ld", va_arg(vargs, long));
+            switch (*f) {
+            case 'c':
+                *s++ = va_arg(vargs, int);
+                break;
+            case 'd':
+                if (longflag)
+                    sprintf(s, "%ld", va_arg(vargs, long));
 #ifdef HAVE_LONG_LONG
-                else if (longlongflag)
-                    sprintf(s, "%" PY_FORMAT_LONG_LONG "d",
-                            va_arg(vargs, PY_LONG_LONG));
+                else if (longlongflag)
+                    sprintf(s, "%" PY_FORMAT_LONG_LONG "d",
+                            va_arg(vargs, PY_LONG_LONG));
 #endif
-                else if (size_tflag)
-                    sprintf(s, "%" PY_FORMAT_SIZE_T "d",
-                            va_arg(vargs, Py_ssize_t));
-                else
-                    sprintf(s, "%d", va_arg(vargs, int));
-                s += strlen(s);
-                break;
-            case 'u':
-                if (longflag)
-                    sprintf(s, "%lu",
-                            va_arg(vargs, unsigned long));
+                else if (size_tflag)
+                    sprintf(s, "%" PY_FORMAT_SIZE_T "d",
+                            va_arg(vargs, Py_ssize_t));
+                else
+                    sprintf(s, "%d", va_arg(vargs, int));
+                s += strlen(s);
+                break;
+            case 'u':
+                if (longflag)
+                    sprintf(s, "%lu",
+                            va_arg(vargs, unsigned long));
 #ifdef HAVE_LONG_LONG
-                else if (longlongflag)
-                    sprintf(s, "%" PY_FORMAT_LONG_LONG "u",
-                            va_arg(vargs, PY_LONG_LONG));
+                else if (longlongflag)
+                    sprintf(s, "%" PY_FORMAT_LONG_LONG "u",
+                            va_arg(vargs, PY_LONG_LONG));
 #endif
-                else if (size_tflag)
-                    sprintf(s, "%" PY_FORMAT_SIZE_T "u",
-                            va_arg(vargs, size_t));
-                else
-                    sprintf(s, "%u",
-                            va_arg(vargs, unsigned int));
-                s += strlen(s);
-                break;
-            case 'i':
-                sprintf(s, "%i", va_arg(vargs, int));
-                s += strlen(s);
-                break;
-            case 'x':
-                sprintf(s, "%x", va_arg(vargs, int));
-                s += strlen(s);
-                break;
-            case 's':
-                p = va_arg(vargs, char*);
-                i = strlen(p);
-                if (n > 0 && i > n)
-                    i = n;
-                Py_MEMCPY(s, p, i);
-                s += i;
-                break;
-            case 'p':
-                sprintf(s, "%p", va_arg(vargs, void*));
-                /* %p is ill-defined: ensure leading 0x. */
-                if (s[1] == 'X')
-                    s[1] = 'x';
-                else if (s[1] != 'x') {
-                    memmove(s+2, s, strlen(s)+1);
-                    s[0] = '0';
-                    s[1] = 'x';
-                }
-                s += strlen(s);
-                break;
-            case '%':
-                *s++ = '%';
-                break;
-            default:
-                strcpy(s, p);
-                s += strlen(s);
-                goto end;
-            }
-        } else
-            *s++ = *f;
-    }
+                else if (size_tflag)
+                    sprintf(s, "%" PY_FORMAT_SIZE_T "u",
+                            va_arg(vargs, size_t));
+                else
+                    sprintf(s, "%u",
+                            va_arg(vargs, unsigned int));
+                s += strlen(s);
+                break;
+            case 'i':
+                sprintf(s, "%i", va_arg(vargs, int));
+                s += strlen(s);
+                break;
+            case 'x':
+                sprintf(s, "%x", va_arg(vargs, int));
+                s += strlen(s);
+                break;
+            case 's':
+                p = va_arg(vargs, char*);
+                i = strlen(p);
+                if (n > 0 && i > n)
+                    i = n;
+                Py_MEMCPY(s, p, i);
+                s += i;
+                break;
+            case 'p':
+                sprintf(s, "%p", va_arg(vargs, void*));
+                /* %p is ill-defined: ensure leading 0x. */
+                if (s[1] == 'X')
+                    s[1] = 'x';
+                else if (s[1] != 'x') {
+                    memmove(s+2, s, strlen(s)+1);
+                    s[0] = '0';
+                    s[1] = 'x';
+                }
+                s += strlen(s);
+                break;
+            case '%':
+                *s++ = '%';
+                break;
+            default:
+                strcpy(s, p);
+                s += strlen(s);
+                goto end;
+            }
+        } else
+            *s++ = *f;
+    }

 end:
-    _PyString_Resize(&string, s - PyString_AS_STRING(string));
-    return string;
+    if (_PyString_Resize(&string, s - PyString_AS_STRING(string)))
+        return NULL;
+    return string;
 }

 PyObject *
 PyString_FromFormat(const char *format, ...)
 {
-    PyObject* ret;
-    va_list vargs;
+    PyObject* ret;
+    va_list vargs;

 #ifdef HAVE_STDARG_PROTOTYPES
-    va_start(vargs, format);
+    va_start(vargs, format);
 #else
-    va_start(vargs);
+    va_start(vargs);
 #endif
-    ret = PyString_FromFormatV(format, vargs);
-    va_end(vargs);
-    return ret;
+    ret = PyString_FromFormatV(format, vargs);
+    va_end(vargs);
+    return ret;
 }
diff --git a/pypy/module/cpyext/src/structseq.c b/pypy/module/cpyext/src/structseq.c
--- a/pypy/module/cpyext/src/structseq.c
+++ b/pypy/module/cpyext/src/structseq.c
@@ -175,32 +175,33 @@
     if (min_len != max_len) {
         if (len < min_len) {
             PyErr_Format(PyExc_TypeError,
-           "%.500s() takes an at least %zd-sequence (%zd-sequence given)",
-                         type->tp_name, min_len, len);
-            Py_DECREF(arg);
-            return NULL;
+           "%.500s() takes an at least %zd-sequence (%zd-sequence given)",
+                         type->tp_name, min_len, len);
+            Py_DECREF(arg);
+            return NULL;
         }

         if (len > max_len) {
             PyErr_Format(PyExc_TypeError,
-           "%.500s() takes an at most %zd-sequence (%zd-sequence given)",
-                         type->tp_name, max_len, len);
-            Py_DECREF(arg);
-            return NULL;
+           "%.500s() takes an at most %zd-sequence (%zd-sequence given)",
+                         type->tp_name, max_len, len);
+            Py_DECREF(arg);
+            return NULL;
         }
     }
     else {
         if (len != min_len) {
             PyErr_Format(PyExc_TypeError,
-           "%.500s() takes a %zd-sequence (%zd-sequence given)",
-                        type->tp_name, min_len, len);
-            Py_DECREF(arg);
-            return NULL;
+           "%.500s() takes a %zd-sequence (%zd-sequence given)",
+                        type->tp_name, min_len, len);
+            Py_DECREF(arg);
+            return NULL;
         }
     }

     res = (PyStructSequence*) PyStructSequence_New(type);
     if (res == NULL) {
+        Py_DECREF(arg);
         return NULL;
     }
     for (i = 0; i < len; ++i) {
diff --git a/pypy/module/cpyext/src/sysmodule.c b/pypy/module/cpyext/src/sysmodule.c
--- a/pypy/module/cpyext/src/sysmodule.c
+++ b/pypy/module/cpyext/src/sysmodule.c
@@ -100,4 +100,3 @@
     sys_write("stderr", stderr, format, va);
     va_end(va);
 }
-
diff --git a/pypy/module/cpyext/src/varargwrapper.c b/pypy/module/cpyext/src/varargwrapper.c
--- a/pypy/module/cpyext/src/varargwrapper.c
+++ b/pypy/module/cpyext/src/varargwrapper.c
@@ -1,21 +1,25 @@
 #include
 #include

-PyObject * PyTuple_Pack(Py_ssize_t size, ...)
+PyObject *
+PyTuple_Pack(Py_ssize_t n, ...)
 {
-    va_list ap;
-    PyObject *cur, *tuple;
-    int i;
+    Py_ssize_t i;
+    PyObject *o;
+    PyObject *result;
+    va_list vargs;

-    tuple = PyTuple_New(size);
-    va_start(ap, size);
-    for (i = 0; i < size; i++) {
-        cur = va_arg(ap, PyObject*);
-        Py_INCREF(cur);
-        if (PyTuple_SetItem(tuple, i, cur) < 0)
+    va_start(vargs, n);
+    result = PyTuple_New(n);
+    if (result == NULL)
+        return NULL;
+    for (i = 0; i < n; i++) {
+        o = va_arg(vargs, PyObject *);
+        Py_INCREF(o);
+        if (PyTuple_SetItem(result, i, o) < 0)
             return NULL;
     }
-    va_end(ap);
-    return tuple;
+    va_end(vargs);
+    return result;
 }

From noreply at buildbot.pypy.org  Wed May  2 01:02:00 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Wed, 2 May 2012 01:02:00 +0200 (CEST)
Subject: [pypy-commit] pypy default: Forgot to add this file
Message-ID: <20120501230200.2BB6582F50@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: 
Changeset: r54844:d95ebc2b9ef2
Date: 2012-04-28 23:33 +0200
http://bitbucket.org/pypy/pypy/changeset/d95ebc2b9ef2/

Log:	Forgot to add this file

diff --git a/pypy/module/cpyext/src/abstract.c b/pypy/module/cpyext/src/abstract.c
new file mode 100644
--- /dev/null
+++ b/pypy/module/cpyext/src/abstract.c
@@ -0,0 +1,269 @@
+/* Abstract Object Interface */
+
+#include "Python.h"
+
+/* Shorthands to return certain errors */
+
+static PyObject *
+type_error(const char *msg, PyObject *obj)
+{
+    PyErr_Format(PyExc_TypeError, msg, obj->ob_type->tp_name);
+    return NULL;
+}
+
+static PyObject *
+null_error(void)
+{
+    if (!PyErr_Occurred())
+        PyErr_SetString(PyExc_SystemError,
+                        "null argument to internal routine");
+    return NULL;
+}
+
+/* Operations on any object */
+
+int
+PyObject_CheckReadBuffer(PyObject *obj)
+{
+    PyBufferProcs *pb = obj->ob_type->tp_as_buffer;
+
+    if (pb == NULL ||
+        pb->bf_getreadbuffer == NULL ||
+        pb->bf_getsegcount == NULL ||
+        (*pb->bf_getsegcount)(obj, NULL) != 1)
+        return 0;
+    return 1;
+}
+
+int PyObject_AsReadBuffer(PyObject *obj,
+                          const void **buffer,
+                          Py_ssize_t *buffer_len)
+{
+    PyBufferProcs *pb;
+    void *pp;
+    Py_ssize_t len;
+
+    if (obj == NULL || buffer == NULL || buffer_len == NULL) {
+        null_error();
+        return -1;
+    }
+    pb = obj->ob_type->tp_as_buffer;
+    if (pb == NULL ||
+        pb->bf_getreadbuffer == NULL ||
+        pb->bf_getsegcount == NULL) {
+        PyErr_SetString(PyExc_TypeError,
+                        "expected a readable buffer object");
+        return -1;
+    }
+    if ((*pb->bf_getsegcount)(obj, NULL) != 1) {
+        PyErr_SetString(PyExc_TypeError,
+                        "expected a single-segment buffer object");
+        return -1;
+    }
+    len = (*pb->bf_getreadbuffer)(obj, 0, &pp);
+    if (len < 0)
+        return -1;
+    *buffer = pp;
+    *buffer_len = len;
+    return 0;
+}
+
+int PyObject_AsWriteBuffer(PyObject *obj,
+                           void **buffer,
+                           Py_ssize_t *buffer_len)
+{
+    PyBufferProcs *pb;
+    void*pp;
+    Py_ssize_t len;
+
+    if (obj == NULL || buffer == NULL || buffer_len == NULL) {
+        null_error();
+        return -1;
+    }
+    pb = obj->ob_type->tp_as_buffer;
+    if (pb == NULL ||
+        pb->bf_getwritebuffer == NULL ||
+        pb->bf_getsegcount == NULL) {
+        PyErr_SetString(PyExc_TypeError,
+                        "expected a writeable buffer object");
+        return -1;
+    }
+    if ((*pb->bf_getsegcount)(obj, NULL) != 1) {
+        PyErr_SetString(PyExc_TypeError,
+                        "expected a single-segment buffer object");
+        return -1;
+    }
+    len = (*pb->bf_getwritebuffer)(obj,0,&pp);
+    if (len < 0)
+        return -1;
+    *buffer = pp;
+    *buffer_len = len;
+    return 0;
+}
+
+/* Operations on callable objects */
+
+static PyObject*
+call_function_tail(PyObject *callable, PyObject *args)
+{
+    PyObject *retval;
+
+    if (args == NULL)
+        return NULL;
+
+    if (!PyTuple_Check(args)) {
+        PyObject *a;
+
+        a = PyTuple_New(1);
+        if (a == NULL) {
+            Py_DECREF(args);
+            return NULL;
+        }
+        PyTuple_SET_ITEM(a, 0, args);
+        args = a;
+    }
+    retval = PyObject_Call(callable, args, NULL);
+
+    Py_DECREF(args);
+
+    return retval;
+}
+
+PyObject *
+PyObject_CallFunction(PyObject *callable, const char *format, ...)
+{
+    va_list va;
+    PyObject *args;
+
+    if (callable == NULL)
+        return null_error();
+
+    if (format && *format) {
+        va_start(va, format);
+        args = Py_VaBuildValue(format, va);
+        va_end(va);
+    }
+    else
+        args = PyTuple_New(0);
+
+    return call_function_tail(callable, args);
+}
+
+PyObject *
+PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...)
+{
+    va_list va;
+    PyObject *args;
+    PyObject *func = NULL;
+    PyObject *retval = NULL;
+
+    if (o == NULL || name == NULL)
+        return null_error();
+
+    func = PyObject_GetAttrString(o, name);
+    if (func == NULL) {
+        PyErr_SetString(PyExc_AttributeError, name);
+        return 0;
+    }
+
+    if (!PyCallable_Check(func)) {
+        type_error("attribute of type '%.200s' is not callable", func);
+        goto exit;
+    }
+
+    if (format && *format) {
+        va_start(va, format);
+        args = Py_VaBuildValue(format, va);
+        va_end(va);
+    }
+    else
+        args = PyTuple_New(0);
+
+    retval = call_function_tail(func, args);
+
+  exit:
+    /* args gets consumed in call_function_tail */
+    Py_XDECREF(func);
+
+    return retval;
+}
+
+static PyObject *
+objargs_mktuple(va_list va)
+{
+    int i, n = 0;
+    va_list countva;
+    PyObject *result, *tmp;
+
+#ifdef VA_LIST_IS_ARRAY
+    memcpy(countva, va, sizeof(va_list));
+#else
+#ifdef __va_copy
+    __va_copy(countva, va);
+#else
+    countva = va;
+#endif
+#endif
+
+    while (((PyObject *)va_arg(countva, PyObject *)) != NULL)
+        ++n;
+    result = PyTuple_New(n);
+    if (result != NULL && n > 0) {
+        for (i = 0; i < n; ++i) {
+            tmp = (PyObject *)va_arg(va, PyObject *);
+            Py_INCREF(tmp);
+            PyTuple_SET_ITEM(result, i, tmp);
+        }
+    }
+    return result;
+}
+
+PyObject *
+PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...)
+{
+    PyObject *args, *tmp;
+    va_list vargs;
+
+    if (callable == NULL || name == NULL)
+        return null_error();
+
+    callable = PyObject_GetAttr(callable, name);
+    if (callable == NULL)
+        return NULL;
+
+    /* count the args */
+    va_start(vargs, name);
+    args = objargs_mktuple(vargs);
+    va_end(vargs);
+    if (args == NULL) {
+        Py_DECREF(callable);
+        return NULL;
+    }
+    tmp = PyObject_Call(callable, args, NULL);
+    Py_DECREF(args);
+    Py_DECREF(callable);
+
+    return tmp;
+}
+
+PyObject *
+PyObject_CallFunctionObjArgs(PyObject *callable, ...)
+{
+    PyObject *args, *tmp;
+    va_list vargs;
+
+    if (callable == NULL)
+        return null_error();
+
+    /* count the args */
+    va_start(vargs, callable);
+    args = objargs_mktuple(vargs);
+    va_end(vargs);
+    if (args == NULL)
+        return NULL;
+    tmp = PyObject_Call(callable, args, NULL);
+    Py_DECREF(args);
+
+    return tmp;
+}
+

From noreply at buildbot.pypy.org  Wed May  2 01:02:01 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Wed, 2 May 2012 01:02:01 +0200 (CEST)
Subject: [pypy-commit] pypy default: cpyext: add PyComplex_FromCComplex
Message-ID: <20120501230201.65ED982F50@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: 
Changeset: r54845:3617c6ce9ef2
Date: 2012-04-29 21:47 +0200
http://bitbucket.org/pypy/pypy/changeset/3617c6ce9ef2/

Log:	cpyext: add PyComplex_FromCComplex

diff --git a/pypy/module/cpyext/complexobject.py b/pypy/module/cpyext/complexobject.py
--- a/pypy/module/cpyext/complexobject.py
+++ b/pypy/module/cpyext/complexobject.py
@@ -33,6 +33,11 @@
     # CPython also accepts anything
     return 0.0

+@cpython_api([Py_complex_ptr], PyObject)
+def _PyComplex_FromCComplex(space, v):
+    """Create a new Python complex number object from a C Py_complex value."""
+    return space.newcomplex(v.c_real, v.c_imag)
+
 # lltype does not handle functions returning a structure.  This implements a
 # helper function, which takes as argument a reference to the return value.
 @cpython_api([PyObject, Py_complex_ptr], lltype.Void)
diff --git a/pypy/module/cpyext/include/complexobject.h b/pypy/module/cpyext/include/complexobject.h
--- a/pypy/module/cpyext/include/complexobject.h
+++ b/pypy/module/cpyext/include/complexobject.h
@@ -21,6 +21,8 @@
     return result;
 }

+#define PyComplex_FromCComplex(c) _PyComplex_FromCComplex(&c)
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/pypy/module/cpyext/test/test_complexobject.py b/pypy/module/cpyext/test/test_complexobject.py
--- a/pypy/module/cpyext/test/test_complexobject.py
+++ b/pypy/module/cpyext/test/test_complexobject.py
@@ -31,3 +31,12 @@
         assert module.as_tuple(12-34j) == (12, -34)
         assert module.as_tuple(-3.14) == (-3.14, 0.0)
         raises(TypeError, module.as_tuple, "12")
+
+    def test_FromCComplex(self):
+        module = self.import_extension('foo', [
+            ("test", "METH_NOARGS",
+             """
+                 Py_complex c = {1.2, 3.4};
+                 return PyComplex_FromCComplex(c);
+             """)])
+        assert module.test() == 1.2 + 3.4j

From noreply at buildbot.pypy.org  Wed May  2 01:02:02 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Wed, 2 May 2012 01:02:02 +0200 (CEST)
Subject: [pypy-commit] pypy default: cpyext: add PyLong_AsSsize_t()
Message-ID: <20120501230202.A4C1F82F50@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: 
Changeset: r54846:fcff0efbb7a3
Date: 2012-04-30 22:44 +0200
http://bitbucket.org/pypy/pypy/changeset/fcff0efbb7a3/

Log:	cpyext: add PyLong_AsSsize_t()

diff --git a/pypy/module/cpyext/longobject.py b/pypy/module/cpyext/longobject.py
--- a/pypy/module/cpyext/longobject.py
+++ b/pypy/module/cpyext/longobject.py
@@ -1,6 +1,7 @@
 from pypy.rpython.lltypesystem import lltype, rffi
-from pypy.module.cpyext.api import (cpython_api, PyObject, build_type_checkers,
-    CONST_STRING, ADDR, CANNOT_FAIL)
+from pypy.module.cpyext.api import (
+    cpython_api, PyObject, build_type_checkers, Py_ssize_t,
+    CONST_STRING, ADDR, CANNOT_FAIL)
 from pypy.objspace.std.longobject import W_LongObject
 from pypy.interpreter.error import
OperationError from pypy.module.cpyext.intobject import PyInt_AsUnsignedLongMask @@ -56,6 +57,14 @@ and -1 will be returned.""" return space.int_w(w_long) + at cpython_api([PyObject], Py_ssize_t, error=-1) +def PyLong_AsSsize_t(space, w_long): + """Return a C Py_ssize_t representation of the contents of pylong. If + pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised + and -1 will be returned. + """ + return space.int_w(w_long) + @cpython_api([PyObject], rffi.LONGLONG, error=-1) def PyLong_AsLongLong(space, w_long): """ diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1431,14 +1431,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([PyObject], Py_ssize_t, error=-1) -def PyLong_AsSsize_t(space, pylong): - """Return a C Py_ssize_t representation of the contents of pylong. If - pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised - and -1 will be returned. - """ - raise NotImplementedError - @cpython_api([PyObject, rffi.CCHARP], rffi.INT_real, error=-1) def PyMapping_DelItemString(space, o, key): """Remove the mapping for object key from the object o. 
Return -1 on diff --git a/pypy/module/cpyext/test/test_longobject.py b/pypy/module/cpyext/test/test_longobject.py --- a/pypy/module/cpyext/test/test_longobject.py +++ b/pypy/module/cpyext/test/test_longobject.py @@ -31,6 +31,11 @@ value = api.PyLong_AsUnsignedLong(w_value) assert value == (sys.maxint - 1) * 2 + def test_as_ssize_t(self, space, api): + w_value = space.newlong(2) + value = api.PyLong_AsSsize_t(w_value) + assert value == 2 + def test_fromdouble(self, space, api): w_value = api.PyLong_FromDouble(-12.74) assert space.unwrap(w_value) == -12 From noreply at buildbot.pypy.org Wed May 2 01:02:03 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:02:03 +0200 (CEST) Subject: [pypy-commit] pypy default: cpyext: support T_PYSSIZET struct members Message-ID: <20120501230203.DD9AA82F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r54847:7fd659f71123 Date: 2012-04-30 23:51 +0200 http://bitbucket.org/pypy/pypy/changeset/7fd659f71123/ Log: cpyext: support T_PYSSIZET struct members diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -40,7 +40,8 @@ when the value is NULL, instead of converting to None. */ #define T_LONGLONG 17 -#define T_ULONGLONG 18 +#define T_ULONGLONG 18 +#define T_PYSSIZET 19 /* Flags. These constants are also in structmemberdefs.py. 
*/ #define READONLY 1 diff --git a/pypy/module/cpyext/structmember.py b/pypy/module/cpyext/structmember.py --- a/pypy/module/cpyext/structmember.py +++ b/pypy/module/cpyext/structmember.py @@ -10,7 +10,7 @@ PyString_FromString, PyString_FromStringAndSize) from pypy.module.cpyext.floatobject import PyFloat_AsDouble from pypy.module.cpyext.longobject import ( - PyLong_AsLongLong, PyLong_AsUnsignedLongLong) + PyLong_AsLongLong, PyLong_AsUnsignedLongLong, PyLong_AsSsize_t) from pypy.module.cpyext.typeobjectdefs import PyMemberDef from pypy.rlib.unroll import unrolling_iterable @@ -28,6 +28,7 @@ (T_DOUBLE, rffi.DOUBLE, PyFloat_AsDouble), (T_LONGLONG, rffi.LONGLONG, PyLong_AsLongLong), (T_ULONGLONG, rffi.ULONGLONG, PyLong_AsUnsignedLongLong), + (T_PYSSIZET, rffi.SSIZE_T, PyLong_AsSsize_t), ]) diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -18,6 +18,7 @@ T_OBJECT_EX = 16 T_LONGLONG = 17 T_ULONGLONG = 18 +T_PYSSIZET = 19 READONLY = RO = 1 READ_RESTRICTED = 2 diff --git a/pypy/module/cpyext/test/foo.c b/pypy/module/cpyext/test/foo.c --- a/pypy/module/cpyext/test/foo.c +++ b/pypy/module/cpyext/test/foo.c @@ -19,6 +19,7 @@ double foo_double; long long foo_longlong; unsigned long long foo_ulonglong; + Py_ssize_t foo_ssizet; } fooobject; static PyTypeObject footype; @@ -172,7 +173,8 @@ {"float_member", T_FLOAT, offsetof(fooobject, foo_float), 0, NULL}, {"double_member", T_DOUBLE, offsetof(fooobject, foo_double), 0, NULL}, {"longlong_member", T_LONGLONG, offsetof(fooobject, foo_longlong), 0, NULL}, - {"ulonglong_member", T_ULONGLONG, offsetof(fooobject, foo_ulonglong), 0, NULL}, + {"ulonglong_member", T_ULONGLONG, offsetof(fooobject, foo_ulonglong), 0, NULL}, + {"ssizet_member", T_PYSSIZET, offsetof(fooobject, foo_ssizet), 0, NULL}, {NULL} /* Sentinel */ }; diff --git a/pypy/module/cpyext/test/test_typeobject.py 
b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -107,6 +107,7 @@ obj.double_member = 9.25; assert obj.double_member == 9.25 obj.longlong_member = -2**59; assert obj.longlong_member == -2**59 obj.ulonglong_member = 2**63; assert obj.ulonglong_member == 2**63 + obj.ssizet_member = 2**31; assert obj.ssizet_member == 2**31 # def test_staticmethod(self): From noreply at buildbot.pypy.org Wed May 2 01:02:05 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:02:05 +0200 (CEST) Subject: [pypy-commit] pypy default: cpyext: use a recent version of array and _sre modules in tests Message-ID: <20120501230205.3F8BD82F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r54848:75ec4c67a6e9 Date: 2012-04-30 23:51 +0200 http://bitbucket.org/pypy/pypy/changeset/75ec4c67a6e9/ Log: cpyext: use a recent version of array and _sre modules in tests diff --git a/pypy/module/cpyext/test/_sre.c b/pypy/module/cpyext/test/_sre.c --- a/pypy/module/cpyext/test/_sre.c +++ b/pypy/module/cpyext/test/_sre.c @@ -81,9 +81,6 @@ #define PyObject_DEL(op) PyMem_DEL((op)) #endif -#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) -#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) - /* -------------------------------------------------------------------- */ #if defined(_MSC_VER) @@ -1689,7 +1686,7 @@ if (PyUnicode_Check(string)) { /* unicode strings doesn't always support the buffer interface */ ptr = (void*) PyUnicode_AS_DATA(string); - bytes = PyUnicode_GET_DATA_SIZE(string); + /* bytes = PyUnicode_GET_DATA_SIZE(string); */ size = PyUnicode_GET_SIZE(string); charsize = sizeof(Py_UNICODE); @@ -2601,46 +2598,22 @@ {NULL, NULL} }; -static PyObject* -pattern_getattr(PatternObject* self, char* name) -{ - PyObject* res; - - res = Py_FindMethod(pattern_methods, (PyObject*) self, name); - - if (res) - return res; - - PyErr_Clear(); - - /* attributes */ - if 
(!strcmp(name, "pattern")) { - Py_INCREF(self->pattern); - return self->pattern; - } - - if (!strcmp(name, "flags")) - return Py_BuildValue("i", self->flags); - - if (!strcmp(name, "groups")) - return Py_BuildValue("i", self->groups); - - if (!strcmp(name, "groupindex") && self->groupindex) { - Py_INCREF(self->groupindex); - return self->groupindex; - } - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; -} +#define PAT_OFF(x) offsetof(PatternObject, x) +static PyMemberDef pattern_members[] = { + {"pattern", T_OBJECT, PAT_OFF(pattern), READONLY}, + {"flags", T_INT, PAT_OFF(flags), READONLY}, + {"groups", T_PYSSIZET, PAT_OFF(groups), READONLY}, + {"groupindex", T_OBJECT, PAT_OFF(groupindex), READONLY}, + {NULL} /* Sentinel */ +}; statichere PyTypeObject Pattern_Type = { PyObject_HEAD_INIT(NULL) 0, "_" SRE_MODULE ".SRE_Pattern", sizeof(PatternObject), sizeof(SRE_CODE), (destructor)pattern_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)pattern_getattr, /*tp_getattr*/ + 0, /* tp_print */ + 0, /* tp_getattrn */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ @@ -2653,12 +2626,16 @@ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ - Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ pattern_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(PatternObject, weakreflist), /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + pattern_methods, /* tp_methods */ + pattern_members, /* tp_members */ }; static int _validate(PatternObject *self); /* Forward */ @@ -2767,7 +2744,7 @@ #if defined(VVERBOSE) #define VTRACE(v) printf v #else -#define VTRACE(v) +#define VTRACE(v) do {} while(0) /* do nothing */ #endif /* Report failure */ @@ -2970,13 +2947,13 @@ <1=skip> <2=flags> <3=min> <4=max>; If SRE_INFO_PREFIX or SRE_INFO_CHARSET is in the flags, more follows. 
*/ - SRE_CODE flags, min, max, i; + SRE_CODE flags, i; SRE_CODE *newcode; GET_SKIP; newcode = code+skip-1; GET_ARG; flags = arg; - GET_ARG; min = arg; - GET_ARG; max = arg; + GET_ARG; /* min */ + GET_ARG; /* max */ /* Check that only valid flags are present */ if ((flags & ~(SRE_INFO_PREFIX | SRE_INFO_LITERAL | @@ -2992,9 +2969,9 @@ FAIL; /* Validate the prefix */ if (flags & SRE_INFO_PREFIX) { - SRE_CODE prefix_len, prefix_skip; + SRE_CODE prefix_len; GET_ARG; prefix_len = arg; - GET_ARG; prefix_skip = arg; + GET_ARG; /* prefix skip */ /* Here comes the prefix string */ if (code+prefix_len < code || code+prefix_len > newcode) FAIL; @@ -3565,7 +3542,7 @@ #endif } -static PyMethodDef match_methods[] = { +static struct PyMethodDef match_methods[] = { {"group", (PyCFunction) match_group, METH_VARARGS}, {"start", (PyCFunction) match_start, METH_VARARGS}, {"end", (PyCFunction) match_end, METH_VARARGS}, @@ -3578,80 +3555,90 @@ {NULL, NULL} }; -static PyObject* -match_getattr(MatchObject* self, char* name) +static PyObject * +match_lastindex_get(MatchObject *self) { - PyObject* res; - - res = Py_FindMethod(match_methods, (PyObject*) self, name); - if (res) - return res; - - PyErr_Clear(); - - if (!strcmp(name, "lastindex")) { - if (self->lastindex >= 0) - return Py_BuildValue("i", self->lastindex); - Py_INCREF(Py_None); - return Py_None; + if (self->lastindex >= 0) + return Py_BuildValue("i", self->lastindex); + Py_INCREF(Py_None); + return Py_None; +} + +static PyObject * +match_lastgroup_get(MatchObject *self) +{ + if (self->pattern->indexgroup && self->lastindex >= 0) { + PyObject* result = PySequence_GetItem( + self->pattern->indexgroup, self->lastindex + ); + if (result) + return result; + PyErr_Clear(); } - - if (!strcmp(name, "lastgroup")) { - if (self->pattern->indexgroup && self->lastindex >= 0) { - PyObject* result = PySequence_GetItem( - self->pattern->indexgroup, self->lastindex - ); - if (result) - return result; - PyErr_Clear(); - } - Py_INCREF(Py_None); - 
return Py_None; - } - - if (!strcmp(name, "string")) { - if (self->string) { - Py_INCREF(self->string); - return self->string; - } else { - Py_INCREF(Py_None); - return Py_None; - } - } - - if (!strcmp(name, "regs")) { - if (self->regs) { - Py_INCREF(self->regs); - return self->regs; - } else - return match_regs(self); - } - - if (!strcmp(name, "re")) { - Py_INCREF(self->pattern); - return (PyObject*) self->pattern; - } - - if (!strcmp(name, "pos")) - return Py_BuildValue("i", self->pos); - - if (!strcmp(name, "endpos")) - return Py_BuildValue("i", self->endpos); - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; + Py_INCREF(Py_None); + return Py_None; } +static PyObject * +match_regs_get(MatchObject *self) +{ + if (self->regs) { + Py_INCREF(self->regs); + return self->regs; + } else + return match_regs(self); +} + +static PyGetSetDef match_getset[] = { + {"lastindex", (getter)match_lastindex_get, (setter)NULL}, + {"lastgroup", (getter)match_lastgroup_get, (setter)NULL}, + {"regs", (getter)match_regs_get, (setter)NULL}, + {NULL} +}; + +#define MATCH_OFF(x) offsetof(MatchObject, x) +static PyMemberDef match_members[] = { + {"string", T_OBJECT, MATCH_OFF(string), READONLY}, + {"re", T_OBJECT, MATCH_OFF(pattern), READONLY}, + {"pos", T_PYSSIZET, MATCH_OFF(pos), READONLY}, + {"endpos", T_PYSSIZET, MATCH_OFF(endpos), READONLY}, + {NULL} +}; + + /* FIXME: implement setattr("string", None) as a special case (to detach the associated string, if any */ -statichere PyTypeObject Match_Type = { - PyObject_HEAD_INIT(NULL) - 0, "_" SRE_MODULE ".SRE_Match", +static PyTypeObject Match_Type = { + PyVarObject_HEAD_INIT(NULL, 0) + "_" SRE_MODULE ".SRE_Match", sizeof(MatchObject), sizeof(Py_ssize_t), - (destructor)match_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)match_getattr /*tp_getattr*/ + (destructor)match_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* 
tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + match_methods, /* tp_methods */ + match_members, /* tp_members */ + match_getset, /* tp_getset */ }; static PyObject* @@ -3800,34 +3787,42 @@ {NULL, NULL} }; -static PyObject* -scanner_getattr(ScannerObject* self, char* name) -{ - PyObject* res; - - res = Py_FindMethod(scanner_methods, (PyObject*) self, name); - if (res) - return res; - - PyErr_Clear(); - - /* attributes */ - if (!strcmp(name, "pattern")) { - Py_INCREF(self->pattern); - return self->pattern; - } - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; -} +#define SCAN_OFF(x) offsetof(ScannerObject, x) +static PyMemberDef scanner_members[] = { + {"pattern", T_OBJECT, SCAN_OFF(pattern), READONLY}, + {NULL} /* Sentinel */ +}; statichere PyTypeObject Scanner_Type = { PyObject_HEAD_INIT(NULL) 0, "_" SRE_MODULE ".SRE_Scanner", sizeof(ScannerObject), 0, (destructor)scanner_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)scanner_getattr, /*tp_getattr*/ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_reserved */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + scanner_methods, /* tp_methods */ + scanner_members, /* tp_members */ + 0, /* tp_getset */ }; static PyObject* @@ -3879,8 +3874,9 @@ PyObject* x; /* Patch object types */ - 
Pattern_Type.ob_type = Match_Type.ob_type = - Scanner_Type.ob_type = &PyType_Type; + if (PyType_Ready(&Pattern_Type) || PyType_Ready(&Match_Type) || + PyType_Ready(&Scanner_Type)) + return; m = Py_InitModule("_" SRE_MODULE, _functions); if (m == NULL) diff --git a/pypy/module/cpyext/test/array.c b/pypy/module/cpyext/test/array.c --- a/pypy/module/cpyext/test/array.c +++ b/pypy/module/cpyext/test/array.c @@ -11,13 +11,10 @@ #include <stddef.h> #else /* !STDC_HEADERS */ #ifdef HAVE_SYS_TYPES_H -#include <sys/types.h> /* For size_t */ +#include <sys/types.h> /* For size_t */ #endif /* HAVE_SYS_TYPES_H */ #endif /* !STDC_HEADERS */ -#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) -#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) - struct arrayobject; /* Forward */ /* All possible arraydescr values are defined in the vector "descriptors" @@ -25,18 +22,18 @@ * functions aren't visible yet. */ struct arraydescr { - int typecode; - int itemsize; - PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); - int (*setitem)(struct arrayobject *, Py_ssize_t, PyObject *); + int typecode; + int itemsize; + PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); + int (*setitem)(struct arrayobject *, Py_ssize_t, PyObject *); }; typedef struct arrayobject { - PyObject_VAR_HEAD - char *ob_item; - Py_ssize_t allocated; - struct arraydescr *ob_descr; - PyObject *weakreflist; /* List of weak references */ + PyObject_VAR_HEAD + char *ob_item; + Py_ssize_t allocated; + struct arraydescr *ob_descr; + PyObject *weakreflist; /* List of weak references */ } arrayobject; static PyTypeObject Arraytype; @@ -47,49 +44,49 @@ static int array_resize(arrayobject *self, Py_ssize_t newsize) { - char *items; - size_t _new_size; + char *items; + size_t _new_size; - /* Bypass realloc() when a previous overallocation is large enough - to accommodate the newsize. If the newsize is 16 smaller than the - current size, then proceed with the realloc() to shrink the list. 
- */ + /* Bypass realloc() when a previous overallocation is large enough + to accommodate the newsize. If the newsize is 16 smaller than the + current size, then proceed with the realloc() to shrink the list. + */ - if (self->allocated >= newsize && - Py_SIZE(self) < newsize + 16 && - self->ob_item != NULL) { - Py_SIZE(self) = newsize; - return 0; - } + if (self->allocated >= newsize && + Py_SIZE(self) < newsize + 16 && + self->ob_item != NULL) { + Py_SIZE(self) = newsize; + return 0; + } - /* This over-allocates proportional to the array size, making room - * for additional growth. The over-allocation is mild, but is - * enough to give linear-time amortized behavior over a long - * sequence of appends() in the presence of a poorly-performing - * system realloc(). - * The growth pattern is: 0, 4, 8, 16, 25, 34, 46, 56, 67, 79, ... - * Note, the pattern starts out the same as for lists but then - * grows at a smaller rate so that larger arrays only overallocate - * by about 1/16th -- this is done because arrays are presumed to be more - * memory critical. - */ + /* This over-allocates proportional to the array size, making room + * for additional growth. The over-allocation is mild, but is + * enough to give linear-time amortized behavior over a long + * sequence of appends() in the presence of a poorly-performing + * system realloc(). + * The growth pattern is: 0, 4, 8, 16, 25, 34, 46, 56, 67, 79, ... + * Note, the pattern starts out the same as for lists but then + * grows at a smaller rate so that larger arrays only overallocate + * by about 1/16th -- this is done because arrays are presumed to be more + * memory critical. + */ - _new_size = (newsize >> 4) + (Py_SIZE(self) < 8 ? 
3 : 7) + newsize; - items = self->ob_item; - /* XXX The following multiplication and division does not optimize away - like it does for lists since the size is not known at compile time */ - if (_new_size <= ((~(size_t)0) / self->ob_descr->itemsize)) - PyMem_RESIZE(items, char, (_new_size * self->ob_descr->itemsize)); - else - items = NULL; - if (items == NULL) { - PyErr_NoMemory(); - return -1; - } - self->ob_item = items; - Py_SIZE(self) = newsize; - self->allocated = _new_size; - return 0; + _new_size = (newsize >> 4) + (Py_SIZE(self) < 8 ? 3 : 7) + newsize; + items = self->ob_item; + /* XXX The following multiplication and division does not optimize away + like it does for lists since the size is not known at compile time */ + if (_new_size <= ((~(size_t)0) / self->ob_descr->itemsize)) + PyMem_RESIZE(items, char, (_new_size * self->ob_descr->itemsize)); + else + items = NULL; + if (items == NULL) { + PyErr_NoMemory(); + return -1; + } + self->ob_item = items; + Py_SIZE(self) = newsize; + self->allocated = _new_size; + return 0; } /**************************************************************************** @@ -107,308 +104,308 @@ static PyObject * c_getitem(arrayobject *ap, Py_ssize_t i) { - return PyString_FromStringAndSize(&((char *)ap->ob_item)[i], 1); + return PyString_FromStringAndSize(&((char *)ap->ob_item)[i], 1); } static int c_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - char x; - if (!PyArg_Parse(v, "c;array item must be char", &x)) - return -1; - if (i >= 0) - ((char *)ap->ob_item)[i] = x; - return 0; + char x; + if (!PyArg_Parse(v, "c;array item must be char", &x)) + return -1; + if (i >= 0) + ((char *)ap->ob_item)[i] = x; + return 0; } static PyObject * b_getitem(arrayobject *ap, Py_ssize_t i) { - long x = ((char *)ap->ob_item)[i]; - if (x >= 128) - x -= 256; - return PyInt_FromLong(x); + long x = ((char *)ap->ob_item)[i]; + if (x >= 128) + x -= 256; + return PyInt_FromLong(x); } static int b_setitem(arrayobject *ap, Py_ssize_t i, 
PyObject *v) { - short x; - /* PyArg_Parse's 'b' formatter is for an unsigned char, therefore - must use the next size up that is signed ('h') and manually do - the overflow checking */ - if (!PyArg_Parse(v, "h;array item must be integer", &x)) - return -1; - else if (x < -128) { - PyErr_SetString(PyExc_OverflowError, - "signed char is less than minimum"); - return -1; - } - else if (x > 127) { - PyErr_SetString(PyExc_OverflowError, - "signed char is greater than maximum"); - return -1; - } - if (i >= 0) - ((char *)ap->ob_item)[i] = (char)x; - return 0; + short x; + /* PyArg_Parse's 'b' formatter is for an unsigned char, therefore + must use the next size up that is signed ('h') and manually do + the overflow checking */ + if (!PyArg_Parse(v, "h;array item must be integer", &x)) + return -1; + else if (x < -128) { + PyErr_SetString(PyExc_OverflowError, + "signed char is less than minimum"); + return -1; + } + else if (x > 127) { + PyErr_SetString(PyExc_OverflowError, + "signed char is greater than maximum"); + return -1; + } + if (i >= 0) + ((char *)ap->ob_item)[i] = (char)x; + return 0; } static PyObject * BB_getitem(arrayobject *ap, Py_ssize_t i) { - long x = ((unsigned char *)ap->ob_item)[i]; - return PyInt_FromLong(x); + long x = ((unsigned char *)ap->ob_item)[i]; + return PyInt_FromLong(x); } static int BB_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned char x; - /* 'B' == unsigned char, maps to PyArg_Parse's 'b' formatter */ - if (!PyArg_Parse(v, "b;array item must be integer", &x)) - return -1; - if (i >= 0) - ((char *)ap->ob_item)[i] = x; - return 0; + unsigned char x; + /* 'B' == unsigned char, maps to PyArg_Parse's 'b' formatter */ + if (!PyArg_Parse(v, "b;array item must be integer", &x)) + return -1; + if (i >= 0) + ((char *)ap->ob_item)[i] = x; + return 0; } #ifdef Py_USING_UNICODE static PyObject * u_getitem(arrayobject *ap, Py_ssize_t i) { - return PyUnicode_FromUnicode(&((Py_UNICODE *) ap->ob_item)[i], 1); + return 
PyUnicode_FromUnicode(&((Py_UNICODE *) ap->ob_item)[i], 1); } static int u_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - Py_UNICODE *p; - Py_ssize_t len; + Py_UNICODE *p; + Py_ssize_t len; - if (!PyArg_Parse(v, "u#;array item must be unicode character", &p, &len)) - return -1; - if (len != 1) { - PyErr_SetString(PyExc_TypeError, - "array item must be unicode character"); - return -1; - } - if (i >= 0) - ((Py_UNICODE *)ap->ob_item)[i] = p[0]; - return 0; + if (!PyArg_Parse(v, "u#;array item must be unicode character", &p, &len)) + return -1; + if (len != 1) { + PyErr_SetString(PyExc_TypeError, + "array item must be unicode character"); + return -1; + } + if (i >= 0) + ((Py_UNICODE *)ap->ob_item)[i] = p[0]; + return 0; } #endif static PyObject * h_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((short *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((short *)ap->ob_item)[i]); } static int h_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - short x; - /* 'h' == signed short, maps to PyArg_Parse's 'h' formatter */ - if (!PyArg_Parse(v, "h;array item must be integer", &x)) - return -1; - if (i >= 0) - ((short *)ap->ob_item)[i] = x; - return 0; + short x; + /* 'h' == signed short, maps to PyArg_Parse's 'h' formatter */ + if (!PyArg_Parse(v, "h;array item must be integer", &x)) + return -1; + if (i >= 0) + ((short *)ap->ob_item)[i] = x; + return 0; } static PyObject * HH_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((unsigned short *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((unsigned short *)ap->ob_item)[i]); } static int HH_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - int x; - /* PyArg_Parse's 'h' formatter is for a signed short, therefore - must use the next size up and manually do the overflow checking */ - if (!PyArg_Parse(v, "i;array item must be integer", &x)) - return -1; - else if (x < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned short is less than minimum"); - 
return -1; - } - else if (x > USHRT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned short is greater than maximum"); - return -1; - } - if (i >= 0) - ((short *)ap->ob_item)[i] = (short)x; - return 0; + int x; + /* PyArg_Parse's 'h' formatter is for a signed short, therefore + must use the next size up and manually do the overflow checking */ + if (!PyArg_Parse(v, "i;array item must be integer", &x)) + return -1; + else if (x < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned short is less than minimum"); + return -1; + } + else if (x > USHRT_MAX) { + PyErr_SetString(PyExc_OverflowError, + "unsigned short is greater than maximum"); + return -1; + } + if (i >= 0) + ((short *)ap->ob_item)[i] = (short)x; + return 0; } static PyObject * i_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((int *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((int *)ap->ob_item)[i]); } static int i_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - int x; - /* 'i' == signed int, maps to PyArg_Parse's 'i' formatter */ - if (!PyArg_Parse(v, "i;array item must be integer", &x)) - return -1; - if (i >= 0) - ((int *)ap->ob_item)[i] = x; - return 0; + int x; + /* 'i' == signed int, maps to PyArg_Parse's 'i' formatter */ + if (!PyArg_Parse(v, "i;array item must be integer", &x)) + return -1; + if (i >= 0) + ((int *)ap->ob_item)[i] = x; + return 0; } static PyObject * II_getitem(arrayobject *ap, Py_ssize_t i) { - return PyLong_FromUnsignedLong( - (unsigned long) ((unsigned int *)ap->ob_item)[i]); + return PyLong_FromUnsignedLong( + (unsigned long) ((unsigned int *)ap->ob_item)[i]); } static int II_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned long x; - if (PyLong_Check(v)) { - x = PyLong_AsUnsignedLong(v); - if (x == (unsigned long) -1 && PyErr_Occurred()) - return -1; - } - else { - long y; - if (!PyArg_Parse(v, "l;array item must be integer", &y)) - return -1; - if (y < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned int 
is less than minimum"); - return -1; - } - x = (unsigned long)y; + unsigned long x; + if (PyLong_Check(v)) { + x = PyLong_AsUnsignedLong(v); + if (x == (unsigned long) -1 && PyErr_Occurred()) + return -1; + } + else { + long y; + if (!PyArg_Parse(v, "l;array item must be integer", &y)) + return -1; + if (y < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned int is less than minimum"); + return -1; + } + x = (unsigned long)y; - } - if (x > UINT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned int is greater than maximum"); - return -1; - } + } + if (x > UINT_MAX) { + PyErr_SetString(PyExc_OverflowError, + "unsigned int is greater than maximum"); + return -1; + } - if (i >= 0) - ((unsigned int *)ap->ob_item)[i] = (unsigned int)x; - return 0; + if (i >= 0) + ((unsigned int *)ap->ob_item)[i] = (unsigned int)x; + return 0; } static PyObject * l_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong(((long *)ap->ob_item)[i]); + return PyInt_FromLong(((long *)ap->ob_item)[i]); } static int l_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - long x; - if (!PyArg_Parse(v, "l;array item must be integer", &x)) - return -1; - if (i >= 0) - ((long *)ap->ob_item)[i] = x; - return 0; + long x; + if (!PyArg_Parse(v, "l;array item must be integer", &x)) + return -1; + if (i >= 0) + ((long *)ap->ob_item)[i] = x; + return 0; } static PyObject * LL_getitem(arrayobject *ap, Py_ssize_t i) { - return PyLong_FromUnsignedLong(((unsigned long *)ap->ob_item)[i]); + return PyLong_FromUnsignedLong(((unsigned long *)ap->ob_item)[i]); } static int LL_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned long x; - if (PyLong_Check(v)) { - x = PyLong_AsUnsignedLong(v); - if (x == (unsigned long) -1 && PyErr_Occurred()) - return -1; - } - else { - long y; - if (!PyArg_Parse(v, "l;array item must be integer", &y)) - return -1; - if (y < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned long is less than minimum"); - return -1; - } - x = (unsigned 
long)y; + unsigned long x; + if (PyLong_Check(v)) { + x = PyLong_AsUnsignedLong(v); + if (x == (unsigned long) -1 && PyErr_Occurred()) + return -1; + } + else { + long y; + if (!PyArg_Parse(v, "l;array item must be integer", &y)) + return -1; + if (y < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned long is less than minimum"); + return -1; + } + x = (unsigned long)y; - } - if (x > ULONG_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned long is greater than maximum"); - return -1; - } + } + if (x > ULONG_MAX) { + PyErr_SetString(PyExc_OverflowError, + "unsigned long is greater than maximum"); + return -1; + } - if (i >= 0) - ((unsigned long *)ap->ob_item)[i] = x; - return 0; + if (i >= 0) + ((unsigned long *)ap->ob_item)[i] = x; + return 0; } static PyObject * f_getitem(arrayobject *ap, Py_ssize_t i) { - return PyFloat_FromDouble((double) ((float *)ap->ob_item)[i]); + return PyFloat_FromDouble((double) ((float *)ap->ob_item)[i]); } static int f_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - float x; - if (!PyArg_Parse(v, "f;array item must be float", &x)) - return -1; - if (i >= 0) - ((float *)ap->ob_item)[i] = x; - return 0; + float x; + if (!PyArg_Parse(v, "f;array item must be float", &x)) + return -1; + if (i >= 0) + ((float *)ap->ob_item)[i] = x; + return 0; } static PyObject * d_getitem(arrayobject *ap, Py_ssize_t i) { - return PyFloat_FromDouble(((double *)ap->ob_item)[i]); + return PyFloat_FromDouble(((double *)ap->ob_item)[i]); } static int d_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - double x; - if (!PyArg_Parse(v, "d;array item must be float", &x)) - return -1; - if (i >= 0) - ((double *)ap->ob_item)[i] = x; - return 0; + double x; + if (!PyArg_Parse(v, "d;array item must be float", &x)) + return -1; + if (i >= 0) + ((double *)ap->ob_item)[i] = x; + return 0; } /* Description of types */ static struct arraydescr descriptors[] = { - {'c', sizeof(char), c_getitem, c_setitem}, - {'b', sizeof(char), b_getitem, 
b_setitem}, - {'B', sizeof(char), BB_getitem, BB_setitem}, + {'c', sizeof(char), c_getitem, c_setitem}, + {'b', sizeof(char), b_getitem, b_setitem}, + {'B', sizeof(char), BB_getitem, BB_setitem}, #ifdef Py_USING_UNICODE - {'u', sizeof(Py_UNICODE), u_getitem, u_setitem}, + {'u', sizeof(Py_UNICODE), u_getitem, u_setitem}, #endif - {'h', sizeof(short), h_getitem, h_setitem}, - {'H', sizeof(short), HH_getitem, HH_setitem}, - {'i', sizeof(int), i_getitem, i_setitem}, - {'I', sizeof(int), II_getitem, II_setitem}, - {'l', sizeof(long), l_getitem, l_setitem}, - {'L', sizeof(long), LL_getitem, LL_setitem}, - {'f', sizeof(float), f_getitem, f_setitem}, - {'d', sizeof(double), d_getitem, d_setitem}, - {'\0', 0, 0, 0} /* Sentinel */ + {'h', sizeof(short), h_getitem, h_setitem}, + {'H', sizeof(short), HH_getitem, HH_setitem}, + {'i', sizeof(int), i_getitem, i_setitem}, + {'I', sizeof(int), II_getitem, II_setitem}, + {'l', sizeof(long), l_getitem, l_setitem}, + {'L', sizeof(long), LL_getitem, LL_setitem}, + {'f', sizeof(float), f_getitem, f_setitem}, + {'d', sizeof(double), d_getitem, d_setitem}, + {'\0', 0, 0, 0} /* Sentinel */ }; /**************************************************************************** @@ -418,78 +415,78 @@ static PyObject * newarrayobject(PyTypeObject *type, Py_ssize_t size, struct arraydescr *descr) { - arrayobject *op; - size_t nbytes; + arrayobject *op; + size_t nbytes; - if (size < 0) { - PyErr_BadInternalCall(); - return NULL; - } + if (size < 0) { + PyErr_BadInternalCall(); + return NULL; + } - nbytes = size * descr->itemsize; - /* Check for overflow */ - if (nbytes / descr->itemsize != (size_t)size) { - return PyErr_NoMemory(); - } - op = (arrayobject *) type->tp_alloc(type, 0); - if (op == NULL) { - return NULL; - } - op->ob_descr = descr; - op->allocated = size; - op->weakreflist = NULL; - Py_SIZE(op) = size; - if (size <= 0) { - op->ob_item = NULL; - } - else { - op->ob_item = PyMem_NEW(char, nbytes); - if (op->ob_item == NULL) { - 
Py_DECREF(op);
-            return PyErr_NoMemory();
-        }
-    }
-    return (PyObject *) op;
+    nbytes = size * descr->itemsize;
+    /* Check for overflow */
+    if (nbytes / descr->itemsize != (size_t)size) {
+        return PyErr_NoMemory();
+    }
+    op = (arrayobject *) type->tp_alloc(type, 0);
+    if (op == NULL) {
+        return NULL;
+    }
+    op->ob_descr = descr;
+    op->allocated = size;
+    op->weakreflist = NULL;
+    Py_SIZE(op) = size;
+    if (size <= 0) {
+        op->ob_item = NULL;
+    }
+    else {
+        op->ob_item = PyMem_NEW(char, nbytes);
+        if (op->ob_item == NULL) {
+            Py_DECREF(op);
+            return PyErr_NoMemory();
+        }
+    }
+    return (PyObject *) op;
 }
 
 static PyObject *
 getarrayitem(PyObject *op, Py_ssize_t i)
 {
-    register arrayobject *ap;
-    assert(array_Check(op));
-    ap = (arrayobject *)op;
-    assert(i>=0 && i<Py_SIZE(ap));
-    return (*ap->ob_descr->getitem)(ap, i);
+    register arrayobject *ap;
+    assert(array_Check(op));
+    ap = (arrayobject *)op;
+    assert(i>=0 && i<Py_SIZE(ap));
+    return (*ap->ob_descr->getitem)(ap, i);
 }
 
 static int
 ins1(arrayobject *self, Py_ssize_t where, PyObject *v)
 {
-    char *items;
-    Py_ssize_t n = Py_SIZE(self);
-    if (v == NULL) {
-        PyErr_BadInternalCall();
-        return -1;
-    }
-    if ((*self->ob_descr->setitem)(self, -1, v) < 0)
-        return -1;
+    char *items;
+    Py_ssize_t n = Py_SIZE(self);
+    if (v == NULL) {
+        PyErr_BadInternalCall();
+        return -1;
+    }
+    if ((*self->ob_descr->setitem)(self, -1, v) < 0)
+        return -1;
-    if (array_resize(self, n+1) == -1)
-        return -1;
-    items = self->ob_item;
-    if (where < 0) {
-        where += n;
-        if (where < 0)
-            where = 0;
-    }
-    if (where > n)
-        where = n;
-    /* appends don't need to call memmove() */
-    if (where != n)
-        memmove(items + (where+1)*self->ob_descr->itemsize,
-            items + where*self->ob_descr->itemsize,
-            (n-where)*self->ob_descr->itemsize);
-    return (*self->ob_descr->setitem)(self, where, v);
+    if (array_resize(self, n+1) == -1)
+        return -1;
+    items = self->ob_item;
+    if (where < 0) {
+        where += n;
+        if (where < 0)
+            where = 0;
+    }
+    if (where > n)
+        where = n;
+    /* appends don't need to call memmove() */
+    if (where != n)
+
memmove(items + (where+1)*self->ob_descr->itemsize, + items + where*self->ob_descr->itemsize, + (n-where)*self->ob_descr->itemsize); + return (*self->ob_descr->setitem)(self, where, v); } /* Methods */ @@ -497,141 +494,141 @@ static void array_dealloc(arrayobject *op) { - if (op->weakreflist != NULL) - PyObject_ClearWeakRefs((PyObject *) op); - if (op->ob_item != NULL) - PyMem_DEL(op->ob_item); - Py_TYPE(op)->tp_free((PyObject *)op); + if (op->weakreflist != NULL) + PyObject_ClearWeakRefs((PyObject *) op); + if (op->ob_item != NULL) + PyMem_DEL(op->ob_item); + Py_TYPE(op)->tp_free((PyObject *)op); } static PyObject * array_richcompare(PyObject *v, PyObject *w, int op) { - arrayobject *va, *wa; - PyObject *vi = NULL; - PyObject *wi = NULL; - Py_ssize_t i, k; - PyObject *res; + arrayobject *va, *wa; + PyObject *vi = NULL; + PyObject *wi = NULL; + Py_ssize_t i, k; + PyObject *res; - if (!array_Check(v) || !array_Check(w)) { - Py_INCREF(Py_NotImplemented); - return Py_NotImplemented; - } + if (!array_Check(v) || !array_Check(w)) { + Py_INCREF(Py_NotImplemented); + return Py_NotImplemented; + } - va = (arrayobject *)v; - wa = (arrayobject *)w; + va = (arrayobject *)v; + wa = (arrayobject *)w; - if (Py_SIZE(va) != Py_SIZE(wa) && (op == Py_EQ || op == Py_NE)) { - /* Shortcut: if the lengths differ, the arrays differ */ - if (op == Py_EQ) - res = Py_False; - else - res = Py_True; - Py_INCREF(res); - return res; - } + if (Py_SIZE(va) != Py_SIZE(wa) && (op == Py_EQ || op == Py_NE)) { + /* Shortcut: if the lengths differ, the arrays differ */ + if (op == Py_EQ) + res = Py_False; + else + res = Py_True; + Py_INCREF(res); + return res; + } - /* Search for the first index where items are different */ - k = 1; - for (i = 0; i < Py_SIZE(va) && i < Py_SIZE(wa); i++) { - vi = getarrayitem(v, i); - wi = getarrayitem(w, i); - if (vi == NULL || wi == NULL) { - Py_XDECREF(vi); - Py_XDECREF(wi); - return NULL; - } - k = PyObject_RichCompareBool(vi, wi, Py_EQ); - if (k == 0) - break; /* 
Keeping vi and wi alive! */ - Py_DECREF(vi); - Py_DECREF(wi); - if (k < 0) - return NULL; - } + /* Search for the first index where items are different */ + k = 1; + for (i = 0; i < Py_SIZE(va) && i < Py_SIZE(wa); i++) { + vi = getarrayitem(v, i); + wi = getarrayitem(w, i); + if (vi == NULL || wi == NULL) { + Py_XDECREF(vi); + Py_XDECREF(wi); + return NULL; + } + k = PyObject_RichCompareBool(vi, wi, Py_EQ); + if (k == 0) + break; /* Keeping vi and wi alive! */ + Py_DECREF(vi); + Py_DECREF(wi); + if (k < 0) + return NULL; + } - if (k) { - /* No more items to compare -- compare sizes */ - Py_ssize_t vs = Py_SIZE(va); - Py_ssize_t ws = Py_SIZE(wa); - int cmp; - switch (op) { - case Py_LT: cmp = vs < ws; break; - case Py_LE: cmp = vs <= ws; break; - case Py_EQ: cmp = vs == ws; break; - case Py_NE: cmp = vs != ws; break; - case Py_GT: cmp = vs > ws; break; - case Py_GE: cmp = vs >= ws; break; - default: return NULL; /* cannot happen */ - } - if (cmp) - res = Py_True; - else - res = Py_False; - Py_INCREF(res); - return res; - } + if (k) { + /* No more items to compare -- compare sizes */ + Py_ssize_t vs = Py_SIZE(va); + Py_ssize_t ws = Py_SIZE(wa); + int cmp; + switch (op) { + case Py_LT: cmp = vs < ws; break; + case Py_LE: cmp = vs <= ws; break; + case Py_EQ: cmp = vs == ws; break; + case Py_NE: cmp = vs != ws; break; + case Py_GT: cmp = vs > ws; break; + case Py_GE: cmp = vs >= ws; break; + default: return NULL; /* cannot happen */ + } + if (cmp) + res = Py_True; + else + res = Py_False; + Py_INCREF(res); + return res; + } - /* We have an item that differs. First, shortcuts for EQ/NE */ - if (op == Py_EQ) { - Py_INCREF(Py_False); - res = Py_False; - } - else if (op == Py_NE) { - Py_INCREF(Py_True); - res = Py_True; - } - else { - /* Compare the final item again using the proper operator */ - res = PyObject_RichCompare(vi, wi, op); - } - Py_DECREF(vi); - Py_DECREF(wi); - return res; + /* We have an item that differs. 
First, shortcuts for EQ/NE */ + if (op == Py_EQ) { + Py_INCREF(Py_False); + res = Py_False; + } + else if (op == Py_NE) { + Py_INCREF(Py_True); + res = Py_True; + } + else { + /* Compare the final item again using the proper operator */ + res = PyObject_RichCompare(vi, wi, op); + } + Py_DECREF(vi); + Py_DECREF(wi); + return res; } static Py_ssize_t array_length(arrayobject *a) { - return Py_SIZE(a); + return Py_SIZE(a); } static PyObject * array_item(arrayobject *a, Py_ssize_t i) { - if (i < 0 || i >= Py_SIZE(a)) { - PyErr_SetString(PyExc_IndexError, "array index out of range"); - return NULL; - } - return getarrayitem((PyObject *)a, i); + if (i < 0 || i >= Py_SIZE(a)) { + PyErr_SetString(PyExc_IndexError, "array index out of range"); + return NULL; + } + return getarrayitem((PyObject *)a, i); } static PyObject * array_slice(arrayobject *a, Py_ssize_t ilow, Py_ssize_t ihigh) { - arrayobject *np; - if (ilow < 0) - ilow = 0; - else if (ilow > Py_SIZE(a)) - ilow = Py_SIZE(a); - if (ihigh < 0) - ihigh = 0; - if (ihigh < ilow) - ihigh = ilow; - else if (ihigh > Py_SIZE(a)) - ihigh = Py_SIZE(a); - np = (arrayobject *) newarrayobject(&Arraytype, ihigh - ilow, a->ob_descr); - if (np == NULL) - return NULL; - memcpy(np->ob_item, a->ob_item + ilow * a->ob_descr->itemsize, - (ihigh-ilow) * a->ob_descr->itemsize); - return (PyObject *)np; + arrayobject *np; + if (ilow < 0) + ilow = 0; + else if (ilow > Py_SIZE(a)) + ilow = Py_SIZE(a); + if (ihigh < 0) + ihigh = 0; + if (ihigh < ilow) + ihigh = ilow; + else if (ihigh > Py_SIZE(a)) + ihigh = Py_SIZE(a); + np = (arrayobject *) newarrayobject(&Arraytype, ihigh - ilow, a->ob_descr); + if (np == NULL) + return NULL; + memcpy(np->ob_item, a->ob_item + ilow * a->ob_descr->itemsize, + (ihigh-ilow) * a->ob_descr->itemsize); + return (PyObject *)np; } static PyObject * array_copy(arrayobject *a, PyObject *unused) { - return array_slice(a, 0, Py_SIZE(a)); + return array_slice(a, 0, Py_SIZE(a)); } PyDoc_STRVAR(copy_doc, @@ -642,297 
+639,297 @@ static PyObject * array_concat(arrayobject *a, PyObject *bb) { - Py_ssize_t size; - arrayobject *np; - if (!array_Check(bb)) { - PyErr_Format(PyExc_TypeError, - "can only append array (not \"%.200s\") to array", - Py_TYPE(bb)->tp_name); - return NULL; - } + Py_ssize_t size; + arrayobject *np; + if (!array_Check(bb)) { + PyErr_Format(PyExc_TypeError, + "can only append array (not \"%.200s\") to array", + Py_TYPE(bb)->tp_name); + return NULL; + } #define b ((arrayobject *)bb) - if (a->ob_descr != b->ob_descr) { - PyErr_BadArgument(); - return NULL; - } - if (Py_SIZE(a) > PY_SSIZE_T_MAX - Py_SIZE(b)) { - return PyErr_NoMemory(); - } - size = Py_SIZE(a) + Py_SIZE(b); - np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); - if (np == NULL) { - return NULL; - } - memcpy(np->ob_item, a->ob_item, Py_SIZE(a)*a->ob_descr->itemsize); - memcpy(np->ob_item + Py_SIZE(a)*a->ob_descr->itemsize, - b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); - return (PyObject *)np; + if (a->ob_descr != b->ob_descr) { + PyErr_BadArgument(); + return NULL; + } + if (Py_SIZE(a) > PY_SSIZE_T_MAX - Py_SIZE(b)) { + return PyErr_NoMemory(); + } + size = Py_SIZE(a) + Py_SIZE(b); + np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); + if (np == NULL) { + return NULL; + } + memcpy(np->ob_item, a->ob_item, Py_SIZE(a)*a->ob_descr->itemsize); + memcpy(np->ob_item + Py_SIZE(a)*a->ob_descr->itemsize, + b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); + return (PyObject *)np; #undef b } static PyObject * array_repeat(arrayobject *a, Py_ssize_t n) { - Py_ssize_t i; - Py_ssize_t size; - arrayobject *np; - char *p; - Py_ssize_t nbytes; - if (n < 0) - n = 0; - if ((Py_SIZE(a) != 0) && (n > PY_SSIZE_T_MAX / Py_SIZE(a))) { - return PyErr_NoMemory(); - } - size = Py_SIZE(a) * n; - np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); - if (np == NULL) - return NULL; - p = np->ob_item; - nbytes = Py_SIZE(a) * a->ob_descr->itemsize; - for (i = 0; i < n; i++) { - 
memcpy(p, a->ob_item, nbytes); - p += nbytes; - } - return (PyObject *) np; + Py_ssize_t i; + Py_ssize_t size; + arrayobject *np; + char *p; + Py_ssize_t nbytes; + if (n < 0) + n = 0; + if ((Py_SIZE(a) != 0) && (n > PY_SSIZE_T_MAX / Py_SIZE(a))) { + return PyErr_NoMemory(); + } + size = Py_SIZE(a) * n; + np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); + if (np == NULL) + return NULL; + p = np->ob_item; + nbytes = Py_SIZE(a) * a->ob_descr->itemsize; + for (i = 0; i < n; i++) { + memcpy(p, a->ob_item, nbytes); + p += nbytes; + } + return (PyObject *) np; } static int array_ass_slice(arrayobject *a, Py_ssize_t ilow, Py_ssize_t ihigh, PyObject *v) { - char *item; - Py_ssize_t n; /* Size of replacement array */ - Py_ssize_t d; /* Change in size */ + char *item; + Py_ssize_t n; /* Size of replacement array */ + Py_ssize_t d; /* Change in size */ #define b ((arrayobject *)v) - if (v == NULL) - n = 0; - else if (array_Check(v)) { - n = Py_SIZE(b); - if (a == b) { - /* Special case "a[i:j] = a" -- copy b first */ - int ret; - v = array_slice(b, 0, n); - if (!v) - return -1; - ret = array_ass_slice(a, ilow, ihigh, v); - Py_DECREF(v); - return ret; - } - if (b->ob_descr != a->ob_descr) { - PyErr_BadArgument(); - return -1; - } - } - else { - PyErr_Format(PyExc_TypeError, - "can only assign array (not \"%.200s\") to array slice", - Py_TYPE(v)->tp_name); - return -1; - } - if (ilow < 0) - ilow = 0; - else if (ilow > Py_SIZE(a)) - ilow = Py_SIZE(a); - if (ihigh < 0) - ihigh = 0; - if (ihigh < ilow) - ihigh = ilow; - else if (ihigh > Py_SIZE(a)) - ihigh = Py_SIZE(a); - item = a->ob_item; - d = n - (ihigh-ilow); - if (d < 0) { /* Delete -d items */ - memmove(item + (ihigh+d)*a->ob_descr->itemsize, - item + ihigh*a->ob_descr->itemsize, - (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); - Py_SIZE(a) += d; - PyMem_RESIZE(item, char, Py_SIZE(a)*a->ob_descr->itemsize); - /* Can't fail */ - a->ob_item = item; - a->allocated = Py_SIZE(a); - } - else if (d > 0) { /* Insert d 
items */ - PyMem_RESIZE(item, char, - (Py_SIZE(a) + d)*a->ob_descr->itemsize); - if (item == NULL) { - PyErr_NoMemory(); - return -1; - } - memmove(item + (ihigh+d)*a->ob_descr->itemsize, - item + ihigh*a->ob_descr->itemsize, - (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); - a->ob_item = item; - Py_SIZE(a) += d; - a->allocated = Py_SIZE(a); - } - if (n > 0) - memcpy(item + ilow*a->ob_descr->itemsize, b->ob_item, - n*b->ob_descr->itemsize); - return 0; + if (v == NULL) + n = 0; + else if (array_Check(v)) { + n = Py_SIZE(b); + if (a == b) { + /* Special case "a[i:j] = a" -- copy b first */ + int ret; + v = array_slice(b, 0, n); + if (!v) + return -1; + ret = array_ass_slice(a, ilow, ihigh, v); + Py_DECREF(v); + return ret; + } + if (b->ob_descr != a->ob_descr) { + PyErr_BadArgument(); + return -1; + } + } + else { + PyErr_Format(PyExc_TypeError, + "can only assign array (not \"%.200s\") to array slice", + Py_TYPE(v)->tp_name); + return -1; + } + if (ilow < 0) + ilow = 0; + else if (ilow > Py_SIZE(a)) + ilow = Py_SIZE(a); + if (ihigh < 0) + ihigh = 0; + if (ihigh < ilow) + ihigh = ilow; + else if (ihigh > Py_SIZE(a)) + ihigh = Py_SIZE(a); + item = a->ob_item; + d = n - (ihigh-ilow); + if (d < 0) { /* Delete -d items */ + memmove(item + (ihigh+d)*a->ob_descr->itemsize, + item + ihigh*a->ob_descr->itemsize, + (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); + Py_SIZE(a) += d; + PyMem_RESIZE(item, char, Py_SIZE(a)*a->ob_descr->itemsize); + /* Can't fail */ + a->ob_item = item; + a->allocated = Py_SIZE(a); + } + else if (d > 0) { /* Insert d items */ + PyMem_RESIZE(item, char, + (Py_SIZE(a) + d)*a->ob_descr->itemsize); + if (item == NULL) { + PyErr_NoMemory(); + return -1; + } + memmove(item + (ihigh+d)*a->ob_descr->itemsize, + item + ihigh*a->ob_descr->itemsize, + (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); + a->ob_item = item; + Py_SIZE(a) += d; + a->allocated = Py_SIZE(a); + } + if (n > 0) + memcpy(item + ilow*a->ob_descr->itemsize, b->ob_item, + n*b->ob_descr->itemsize); + 
return 0; #undef b } static int array_ass_item(arrayobject *a, Py_ssize_t i, PyObject *v) { - if (i < 0 || i >= Py_SIZE(a)) { - PyErr_SetString(PyExc_IndexError, - "array assignment index out of range"); - return -1; - } - if (v == NULL) - return array_ass_slice(a, i, i+1, v); - return (*a->ob_descr->setitem)(a, i, v); + if (i < 0 || i >= Py_SIZE(a)) { + PyErr_SetString(PyExc_IndexError, + "array assignment index out of range"); + return -1; + } + if (v == NULL) + return array_ass_slice(a, i, i+1, v); + return (*a->ob_descr->setitem)(a, i, v); } static int setarrayitem(PyObject *a, Py_ssize_t i, PyObject *v) { - assert(array_Check(a)); - return array_ass_item((arrayobject *)a, i, v); + assert(array_Check(a)); + return array_ass_item((arrayobject *)a, i, v); } static int array_iter_extend(arrayobject *self, PyObject *bb) { - PyObject *it, *v; + PyObject *it, *v; - it = PyObject_GetIter(bb); - if (it == NULL) - return -1; + it = PyObject_GetIter(bb); + if (it == NULL) + return -1; - while ((v = PyIter_Next(it)) != NULL) { - if (ins1(self, (int) Py_SIZE(self), v) != 0) { - Py_DECREF(v); - Py_DECREF(it); - return -1; - } - Py_DECREF(v); - } - Py_DECREF(it); - if (PyErr_Occurred()) - return -1; - return 0; + while ((v = PyIter_Next(it)) != NULL) { + if (ins1(self, Py_SIZE(self), v) != 0) { + Py_DECREF(v); + Py_DECREF(it); + return -1; + } + Py_DECREF(v); + } + Py_DECREF(it); + if (PyErr_Occurred()) + return -1; + return 0; } static int array_do_extend(arrayobject *self, PyObject *bb) { - Py_ssize_t size; - char *old_item; + Py_ssize_t size; + char *old_item; - if (!array_Check(bb)) - return array_iter_extend(self, bb); + if (!array_Check(bb)) + return array_iter_extend(self, bb); #define b ((arrayobject *)bb) - if (self->ob_descr != b->ob_descr) { - PyErr_SetString(PyExc_TypeError, - "can only extend with array of same kind"); - return -1; - } - if ((Py_SIZE(self) > PY_SSIZE_T_MAX - Py_SIZE(b)) || - ((Py_SIZE(self) + Py_SIZE(b)) > PY_SSIZE_T_MAX / 
self->ob_descr->itemsize)) { - PyErr_NoMemory(); - return -1; - } - size = Py_SIZE(self) + Py_SIZE(b); - old_item = self->ob_item; - PyMem_RESIZE(self->ob_item, char, size*self->ob_descr->itemsize); - if (self->ob_item == NULL) { - self->ob_item = old_item; - PyErr_NoMemory(); - return -1; - } - memcpy(self->ob_item + Py_SIZE(self)*self->ob_descr->itemsize, - b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); - Py_SIZE(self) = size; - self->allocated = size; + if (self->ob_descr != b->ob_descr) { + PyErr_SetString(PyExc_TypeError, + "can only extend with array of same kind"); + return -1; + } + if ((Py_SIZE(self) > PY_SSIZE_T_MAX - Py_SIZE(b)) || + ((Py_SIZE(self) + Py_SIZE(b)) > PY_SSIZE_T_MAX / self->ob_descr->itemsize)) { + PyErr_NoMemory(); + return -1; + } + size = Py_SIZE(self) + Py_SIZE(b); + old_item = self->ob_item; + PyMem_RESIZE(self->ob_item, char, size*self->ob_descr->itemsize); + if (self->ob_item == NULL) { + self->ob_item = old_item; + PyErr_NoMemory(); + return -1; + } + memcpy(self->ob_item + Py_SIZE(self)*self->ob_descr->itemsize, + b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); + Py_SIZE(self) = size; + self->allocated = size; - return 0; + return 0; #undef b } static PyObject * array_inplace_concat(arrayobject *self, PyObject *bb) { - if (!array_Check(bb)) { - PyErr_Format(PyExc_TypeError, - "can only extend array with array (not \"%.200s\")", - Py_TYPE(bb)->tp_name); - return NULL; - } - if (array_do_extend(self, bb) == -1) - return NULL; - Py_INCREF(self); - return (PyObject *)self; + if (!array_Check(bb)) { + PyErr_Format(PyExc_TypeError, + "can only extend array with array (not \"%.200s\")", + Py_TYPE(bb)->tp_name); + return NULL; + } + if (array_do_extend(self, bb) == -1) + return NULL; + Py_INCREF(self); + return (PyObject *)self; } static PyObject * array_inplace_repeat(arrayobject *self, Py_ssize_t n) { - char *items, *p; - Py_ssize_t size, i; + char *items, *p; + Py_ssize_t size, i; - if (Py_SIZE(self) > 0) { - if (n < 0) - n = 0; - items 
= self->ob_item; - if ((self->ob_descr->itemsize != 0) && - (Py_SIZE(self) > PY_SSIZE_T_MAX / self->ob_descr->itemsize)) { - return PyErr_NoMemory(); - } - size = Py_SIZE(self) * self->ob_descr->itemsize; - if (n == 0) { - PyMem_FREE(items); - self->ob_item = NULL; - Py_SIZE(self) = 0; - self->allocated = 0; - } - else { - if (size > PY_SSIZE_T_MAX / n) { - return PyErr_NoMemory(); - } - PyMem_RESIZE(items, char, n * size); - if (items == NULL) - return PyErr_NoMemory(); - p = items; - for (i = 1; i < n; i++) { - p += size; - memcpy(p, items, size); - } - self->ob_item = items; - Py_SIZE(self) *= n; - self->allocated = Py_SIZE(self); - } - } - Py_INCREF(self); - return (PyObject *)self; + if (Py_SIZE(self) > 0) { + if (n < 0) + n = 0; + items = self->ob_item; + if ((self->ob_descr->itemsize != 0) && + (Py_SIZE(self) > PY_SSIZE_T_MAX / self->ob_descr->itemsize)) { + return PyErr_NoMemory(); + } + size = Py_SIZE(self) * self->ob_descr->itemsize; + if (n == 0) { + PyMem_FREE(items); + self->ob_item = NULL; + Py_SIZE(self) = 0; + self->allocated = 0; + } + else { + if (size > PY_SSIZE_T_MAX / n) { + return PyErr_NoMemory(); + } + PyMem_RESIZE(items, char, n * size); + if (items == NULL) + return PyErr_NoMemory(); + p = items; + for (i = 1; i < n; i++) { + p += size; + memcpy(p, items, size); + } + self->ob_item = items; + Py_SIZE(self) *= n; + self->allocated = Py_SIZE(self); + } + } + Py_INCREF(self); + return (PyObject *)self; } static PyObject * ins(arrayobject *self, Py_ssize_t where, PyObject *v) { - if (ins1(self, where, v) != 0) - return NULL; - Py_INCREF(Py_None); - return Py_None; + if (ins1(self, where, v) != 0) + return NULL; + Py_INCREF(Py_None); + return Py_None; } static PyObject * array_count(arrayobject *self, PyObject *v) { - Py_ssize_t count = 0; - Py_ssize_t i; + Py_ssize_t count = 0; + Py_ssize_t i; - for (i = 0; i < Py_SIZE(self); i++) { - PyObject *selfi = getarrayitem((PyObject *)self, i); - int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); - 
Py_DECREF(selfi); - if (cmp > 0) - count++; - else if (cmp < 0) - return NULL; - } - return PyInt_FromSsize_t(count); + for (i = 0; i < Py_SIZE(self); i++) { + PyObject *selfi = getarrayitem((PyObject *)self, i); + int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); + Py_DECREF(selfi); + if (cmp > 0) + count++; + else if (cmp < 0) + return NULL; + } + return PyInt_FromSsize_t(count); } PyDoc_STRVAR(count_doc, @@ -943,20 +940,20 @@ static PyObject * array_index(arrayobject *self, PyObject *v) { - Py_ssize_t i; + Py_ssize_t i; - for (i = 0; i < Py_SIZE(self); i++) { - PyObject *selfi = getarrayitem((PyObject *)self, i); - int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); - Py_DECREF(selfi); - if (cmp > 0) { - return PyInt_FromLong((long)i); - } - else if (cmp < 0) - return NULL; - } - PyErr_SetString(PyExc_ValueError, "array.index(x): x not in list"); - return NULL; + for (i = 0; i < Py_SIZE(self); i++) { + PyObject *selfi = getarrayitem((PyObject *)self, i); + int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); + Py_DECREF(selfi); + if (cmp > 0) { + return PyInt_FromLong((long)i); + } + else if (cmp < 0) + return NULL; + } + PyErr_SetString(PyExc_ValueError, "array.index(x): x not in list"); + return NULL; } PyDoc_STRVAR(index_doc, @@ -967,38 +964,38 @@ static int array_contains(arrayobject *self, PyObject *v) { - Py_ssize_t i; - int cmp; + Py_ssize_t i; + int cmp; - for (i = 0, cmp = 0 ; cmp == 0 && i < Py_SIZE(self); i++) { - PyObject *selfi = getarrayitem((PyObject *)self, i); - cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); - Py_DECREF(selfi); - } - return cmp; + for (i = 0, cmp = 0 ; cmp == 0 && i < Py_SIZE(self); i++) { + PyObject *selfi = getarrayitem((PyObject *)self, i); + cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); + Py_DECREF(selfi); + } + return cmp; } static PyObject * array_remove(arrayobject *self, PyObject *v) { - int i; + int i; - for (i = 0; i < Py_SIZE(self); i++) { - PyObject *selfi = getarrayitem((PyObject *)self,i); - int cmp = 
PyObject_RichCompareBool(selfi, v, Py_EQ);
-        Py_DECREF(selfi);
-        if (cmp > 0) {
-            if (array_ass_slice(self, i, i+1,
-                   (PyObject *)NULL) != 0)
-                return NULL;
-            Py_INCREF(Py_None);
-            return Py_None;
-        }
-        else if (cmp < 0)
-            return NULL;
-    }
-    PyErr_SetString(PyExc_ValueError, "array.remove(x): x not in list");
-    return NULL;
+    for (i = 0; i < Py_SIZE(self); i++) {
+        PyObject *selfi = getarrayitem((PyObject *)self,i);
+        int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ);
+        Py_DECREF(selfi);
+        if (cmp > 0) {
+            if (array_ass_slice(self, i, i+1,
+                   (PyObject *)NULL) != 0)
+                return NULL;
+            Py_INCREF(Py_None);
+            return Py_None;
+        }
+        else if (cmp < 0)
+            return NULL;
+    }
+    PyErr_SetString(PyExc_ValueError, "array.remove(x): x not in list");
+    return NULL;
 }
 
 PyDoc_STRVAR(remove_doc,
@@ -1009,27 +1006,27 @@
 static PyObject *
 array_pop(arrayobject *self, PyObject *args)
 {
-    Py_ssize_t i = -1;
-    PyObject *v;
-    if (!PyArg_ParseTuple(args, "|n:pop", &i))
-        return NULL;
-    if (Py_SIZE(self) == 0) {
-        /* Special-case most common failure cause */
-        PyErr_SetString(PyExc_IndexError, "pop from empty array");
-        return NULL;
-    }
-    if (i < 0)
-        i += Py_SIZE(self);
-    if (i < 0 || i >= Py_SIZE(self)) {
-        PyErr_SetString(PyExc_IndexError, "pop index out of range");
-        return NULL;
-    }
-    v = getarrayitem((PyObject *)self,i);
-    if (array_ass_slice(self, i, i+1, (PyObject *)NULL) != 0) {
-        Py_DECREF(v);
-        return NULL;
-    }
-    return v;
+    Py_ssize_t i = -1;
+    PyObject *v;
+    if (!PyArg_ParseTuple(args, "|n:pop", &i))
+        return NULL;
+    if (Py_SIZE(self) == 0) {
+        /* Special-case most common failure cause */
+        PyErr_SetString(PyExc_IndexError, "pop from empty array");
+        return NULL;
+    }
+    if (i < 0)
+        i += Py_SIZE(self);
+    if (i < 0 || i >= Py_SIZE(self)) {
+        PyErr_SetString(PyExc_IndexError, "pop index out of range");
+        return NULL;
+    }
+    v = getarrayitem((PyObject *)self,i);
+    if (array_ass_slice(self, i, i+1, (PyObject *)NULL) != 0) {
+        Py_DECREF(v);
+        return NULL;
+    }
+    return v;
 }
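[editor's note: the `array_pop` hunk above follows a pattern worth isolating: normalize a possibly negative index, bounds-check it, fetch the item, then close the gap with an overlapping-safe move. A minimal standalone sketch of that pattern, without the CPython object machinery; the `pop_long` helper and its names are illustrative, not part of the module:]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Pop the item at index i (negative i counts from the end), shifting the
 * tail left to close the gap.  Returns 0 on success, -1 on a bad index.
 * Mirrors array_pop's order of operations: normalize, bounds-check,
 * fetch, then delete. */
static int pop_long(long *items, size_t *n, ptrdiff_t i, long *out)
{
    if (*n == 0)
        return -1;                      /* "pop from empty array" */
    if (i < 0)
        i += (ptrdiff_t)*n;             /* negative index counts from end */
    if (i < 0 || (size_t)i >= *n)
        return -1;                      /* "pop index out of range" */
    *out = items[i];
    /* source and destination overlap, so memmove, not memcpy */
    memmove(items + i, items + i + 1, (*n - (size_t)i - 1) * sizeof(long));
    *n -= 1;
    return 0;
}
```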
PyDoc_STRVAR(pop_doc, @@ -1040,10 +1037,10 @@ static PyObject * array_extend(arrayobject *self, PyObject *bb) { - if (array_do_extend(self, bb) == -1) - return NULL; - Py_INCREF(Py_None); - return Py_None; + if (array_do_extend(self, bb) == -1) + return NULL; + Py_INCREF(Py_None); + return Py_None; } PyDoc_STRVAR(extend_doc, @@ -1054,11 +1051,11 @@ static PyObject * array_insert(arrayobject *self, PyObject *args) { - Py_ssize_t i; - PyObject *v; - if (!PyArg_ParseTuple(args, "nO:insert", &i, &v)) - return NULL; - return ins(self, i, v); + Py_ssize_t i; + PyObject *v; + if (!PyArg_ParseTuple(args, "nO:insert", &i, &v)) + return NULL; + return ins(self, i, v); } PyDoc_STRVAR(insert_doc, @@ -1070,15 +1067,15 @@ static PyObject * array_buffer_info(arrayobject *self, PyObject *unused) { - PyObject* retval = NULL; - retval = PyTuple_New(2); - if (!retval) - return NULL; + PyObject* retval = NULL; + retval = PyTuple_New(2); + if (!retval) + return NULL; - PyTuple_SET_ITEM(retval, 0, PyLong_FromVoidPtr(self->ob_item)); - PyTuple_SET_ITEM(retval, 1, PyInt_FromLong((long)(Py_SIZE(self)))); + PyTuple_SET_ITEM(retval, 0, PyLong_FromVoidPtr(self->ob_item)); + PyTuple_SET_ITEM(retval, 1, PyInt_FromLong((long)(Py_SIZE(self)))); - return retval; + return retval; } PyDoc_STRVAR(buffer_info_doc, @@ -1093,7 +1090,7 @@ static PyObject * array_append(arrayobject *self, PyObject *v) { - return ins(self, (int) Py_SIZE(self), v); + return ins(self, Py_SIZE(self), v); } PyDoc_STRVAR(append_doc, @@ -1105,52 +1102,52 @@ static PyObject * array_byteswap(arrayobject *self, PyObject *unused) { - char *p; - Py_ssize_t i; + char *p; + Py_ssize_t i; - switch (self->ob_descr->itemsize) { - case 1: - break; - case 2: - for (p = self->ob_item, i = Py_SIZE(self); --i >= 0; p += 2) { - char p0 = p[0]; - p[0] = p[1]; - p[1] = p0; - } - break; - case 4: - for (p = self->ob_item, i = Py_SIZE(self); --i >= 0; p += 4) { - char p0 = p[0]; - char p1 = p[1]; - p[0] = p[3]; - p[1] = p[2]; - p[2] = p1; - p[3] = 
p0; - } - break; - case 8: - for (p = self->ob_item, i = Py_SIZE(self); --i >= 0; p += 8) { - char p0 = p[0]; - char p1 = p[1]; - char p2 = p[2]; - char p3 = p[3]; - p[0] = p[7]; - p[1] = p[6]; - p[2] = p[5]; - p[3] = p[4]; - p[4] = p3; - p[5] = p2; - p[6] = p1; - p[7] = p0; - } - break; - default: - PyErr_SetString(PyExc_RuntimeError, - "don't know how to byteswap this array type"); - return NULL; - } - Py_INCREF(Py_None); - return Py_None; + switch (self->ob_descr->itemsize) { + case 1: + break; + case 2: + for (p = self->ob_item, i = Py_SIZE(self); --i >= 0; p += 2) { + char p0 = p[0]; + p[0] = p[1]; + p[1] = p0; + } + break; + case 4: + for (p = self->ob_item, i = Py_SIZE(self); --i >= 0; p += 4) { + char p0 = p[0]; + char p1 = p[1]; + p[0] = p[3]; + p[1] = p[2]; + p[2] = p1; + p[3] = p0; + } + break; + case 8: + for (p = self->ob_item, i = Py_SIZE(self); --i >= 0; p += 8) { + char p0 = p[0]; + char p1 = p[1]; + char p2 = p[2]; + char p3 = p[3]; + p[0] = p[7]; + p[1] = p[6]; + p[2] = p[5]; + p[3] = p[4]; + p[4] = p3; + p[5] = p2; + p[6] = p1; + p[7] = p0; + } + break; + default: + PyErr_SetString(PyExc_RuntimeError, + "don't know how to byteswap this array type"); + return NULL; + } + Py_INCREF(Py_None); + return Py_None; } PyDoc_STRVAR(byteswap_doc, @@ -1160,64 +1157,30 @@ 4, or 8 bytes in size, RuntimeError is raised."); static PyObject * -array_reduce(arrayobject *array) -{ - PyObject *dict, *result; - - dict = PyObject_GetAttrString((PyObject *)array, "__dict__"); - if (dict == NULL) { - PyErr_Clear(); - dict = Py_None; - Py_INCREF(dict); - } - if (Py_SIZE(array) > 0) { - if (array->ob_descr->itemsize - > PY_SSIZE_T_MAX / array->ob_size) { - return PyErr_NoMemory(); - } - result = Py_BuildValue("O(cs#)O", - Py_TYPE(array), - array->ob_descr->typecode, - array->ob_item, - Py_SIZE(array) * array->ob_descr->itemsize, - dict); - } else { - result = Py_BuildValue("O(c)O", - Py_TYPE(array), - array->ob_descr->typecode, - dict); - } - Py_DECREF(dict); - return 
result; -} - -PyDoc_STRVAR(array_doc, "Return state information for pickling."); - -static PyObject * array_reverse(arrayobject *self, PyObject *unused) { - register Py_ssize_t itemsize = self->ob_descr->itemsize; - register char *p, *q; - /* little buffer to hold items while swapping */ - char tmp[256]; /* 8 is probably enough -- but why skimp */ - assert((size_t)itemsize <= sizeof(tmp)); + register Py_ssize_t itemsize = self->ob_descr->itemsize; + register char *p, *q; + /* little buffer to hold items while swapping */ + char tmp[256]; /* 8 is probably enough -- but why skimp */ + assert((size_t)itemsize <= sizeof(tmp)); - if (Py_SIZE(self) > 1) { - for (p = self->ob_item, - q = self->ob_item + (Py_SIZE(self) - 1)*itemsize; - p < q; - p += itemsize, q -= itemsize) { - /* memory areas guaranteed disjoint, so memcpy - * is safe (& memmove may be slower). - */ - memcpy(tmp, p, itemsize); - memcpy(p, q, itemsize); - memcpy(q, tmp, itemsize); - } - } + if (Py_SIZE(self) > 1) { + for (p = self->ob_item, + q = self->ob_item + (Py_SIZE(self) - 1)*itemsize; + p < q; + p += itemsize, q -= itemsize) { + /* memory areas guaranteed disjoint, so memcpy + * is safe (& memmove may be slower). 
+ */ + memcpy(tmp, p, itemsize); + memcpy(p, q, itemsize); + memcpy(q, tmp, itemsize); + } + } - Py_INCREF(Py_None); - return Py_None; + Py_INCREF(Py_None); + return Py_None; } PyDoc_STRVAR(reverse_doc, @@ -1228,50 +1191,56 @@ static PyObject * array_fromfile(arrayobject *self, PyObject *args) { - PyObject *f; - Py_ssize_t n; - FILE *fp; - if (!PyArg_ParseTuple(args, "On:fromfile", &f, &n)) - return NULL; - fp = PyFile_AsFile(f); - if (fp == NULL) { - PyErr_SetString(PyExc_TypeError, "arg1 must be open file"); - return NULL; - } - if (n > 0) { - char *item = self->ob_item; - Py_ssize_t itemsize = self->ob_descr->itemsize; - size_t nread; - Py_ssize_t newlength; - size_t newbytes; - /* Be careful here about overflow */ - if ((newlength = Py_SIZE(self) + n) <= 0 || - (newbytes = newlength * itemsize) / itemsize != - (size_t)newlength) - goto nomem; - PyMem_RESIZE(item, char, newbytes); - if (item == NULL) { - nomem: - PyErr_NoMemory(); - return NULL; - } - self->ob_item = item; - Py_SIZE(self) += n; - self->allocated = Py_SIZE(self); - nread = fread(item + (Py_SIZE(self) - n) * itemsize, - itemsize, n, fp); - if (nread < (size_t)n) { - Py_SIZE(self) -= (n - nread); - PyMem_RESIZE(item, char, Py_SIZE(self)*itemsize); - self->ob_item = item; - self->allocated = Py_SIZE(self); - PyErr_SetString(PyExc_EOFError, - "not enough items in file"); - return NULL; - } - } - Py_INCREF(Py_None); - return Py_None; + PyObject *f; + Py_ssize_t n; + FILE *fp; + if (!PyArg_ParseTuple(args, "On:fromfile", &f, &n)) + return NULL; + fp = PyFile_AsFile(f); + if (fp == NULL) { + PyErr_SetString(PyExc_TypeError, "arg1 must be open file"); + return NULL; + } + if (n > 0) { + char *item = self->ob_item; + Py_ssize_t itemsize = self->ob_descr->itemsize; + size_t nread; + Py_ssize_t newlength; + size_t newbytes; + /* Be careful here about overflow */ + if ((newlength = Py_SIZE(self) + n) <= 0 || + (newbytes = newlength * itemsize) / itemsize != + (size_t)newlength) + goto nomem; + 
PyMem_RESIZE(item, char, newbytes); + if (item == NULL) { + nomem: + PyErr_NoMemory(); + return NULL; + } + self->ob_item = item; + Py_SIZE(self) += n; + self->allocated = Py_SIZE(self); + nread = fread(item + (Py_SIZE(self) - n) * itemsize, + itemsize, n, fp); + if (nread < (size_t)n) { + Py_SIZE(self) -= (n - nread); + PyMem_RESIZE(item, char, Py_SIZE(self)*itemsize); + self->ob_item = item; + self->allocated = Py_SIZE(self); + if (ferror(fp)) { + PyErr_SetFromErrno(PyExc_IOError); + clearerr(fp); + } + else { + PyErr_SetString(PyExc_EOFError, + "not enough items in file"); + } + return NULL; + } + } + Py_INCREF(Py_None); + return Py_None; } PyDoc_STRVAR(fromfile_doc, @@ -1284,33 +1253,33 @@ static PyObject * array_fromfile_as_read(arrayobject *self, PyObject *args) { - if (PyErr_WarnPy3k("array.read() not supported in 3.x; " - "use array.fromfile()", 1) < 0) - return NULL; - return array_fromfile(self, args); + if (PyErr_WarnPy3k("array.read() not supported in 3.x; " + "use array.fromfile()", 1) < 0) + return NULL; + return array_fromfile(self, args); } static PyObject * array_tofile(arrayobject *self, PyObject *f) { - FILE *fp; + FILE *fp; - fp = PyFile_AsFile(f); - if (fp == NULL) { - PyErr_SetString(PyExc_TypeError, "arg must be open file"); - return NULL; - } - if (self->ob_size > 0) { - if (fwrite(self->ob_item, self->ob_descr->itemsize, - self->ob_size, fp) != (size_t)self->ob_size) { - PyErr_SetFromErrno(PyExc_IOError); - clearerr(fp); - return NULL; - } - } - Py_INCREF(Py_None); - return Py_None; + fp = PyFile_AsFile(f); + if (fp == NULL) { + PyErr_SetString(PyExc_TypeError, "arg must be open file"); + return NULL; + } + if (self->ob_size > 0) { + if (fwrite(self->ob_item, self->ob_descr->itemsize, + self->ob_size, fp) != (size_t)self->ob_size) { + PyErr_SetFromErrno(PyExc_IOError); + clearerr(fp); + return NULL; + } + } + Py_INCREF(Py_None); + return Py_None; } PyDoc_STRVAR(tofile_doc, @@ -1323,53 +1292,53 @@ static PyObject * 
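Illustrative sketch of the fromfile() behavior the hunk above changes: a genuine
I/O error now raises IOError (OSError in Python 3), while a short read keeps the
items that were read and then raises EOFError. The example uses a temporary file
since fromfile() requires a real file object:

```python
import array
import tempfile

src = array.array('i', [1, 2, 3])
dst = array.array('i')
with tempfile.TemporaryFile() as f:
    src.tofile(f)            # write three items
    f.seek(0)
    try:
        dst.fromfile(f, 5)   # ask for 5 items; only 3 are in the file
    except EOFError:
        pass                 # short read, not an I/O error
# the three items read before EOF were kept
assert dst.tolist() == [1, 2, 3]
```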
array_tofile_as_write(arrayobject *self, PyObject *f) { - if (PyErr_WarnPy3k("array.write() not supported in 3.x; " - "use array.tofile()", 1) < 0) - return NULL; - return array_tofile(self, f); + if (PyErr_WarnPy3k("array.write() not supported in 3.x; " + "use array.tofile()", 1) < 0) + return NULL; + return array_tofile(self, f); } static PyObject * array_fromlist(arrayobject *self, PyObject *list) { - Py_ssize_t n; - Py_ssize_t itemsize = self->ob_descr->itemsize; + Py_ssize_t n; + Py_ssize_t itemsize = self->ob_descr->itemsize; - if (!PyList_Check(list)) { - PyErr_SetString(PyExc_TypeError, "arg must be list"); - return NULL; - } - n = PyList_Size(list); - if (n > 0) { - char *item = self->ob_item; - Py_ssize_t i; - PyMem_RESIZE(item, char, (Py_SIZE(self) + n) * itemsize); - if (item == NULL) { - PyErr_NoMemory(); - return NULL; - } - self->ob_item = item; - Py_SIZE(self) += n; - self->allocated = Py_SIZE(self); - for (i = 0; i < n; i++) { - PyObject *v = PyList_GetItem(list, i); - if ((*self->ob_descr->setitem)(self, - Py_SIZE(self) - n + i, v) != 0) { - Py_SIZE(self) -= n; - if (itemsize && (self->ob_size > PY_SSIZE_T_MAX / itemsize)) { - return PyErr_NoMemory(); - } - PyMem_RESIZE(item, char, - Py_SIZE(self) * itemsize); - self->ob_item = item; - self->allocated = Py_SIZE(self); - return NULL; - } - } - } - Py_INCREF(Py_None); - return Py_None; + if (!PyList_Check(list)) { + PyErr_SetString(PyExc_TypeError, "arg must be list"); + return NULL; + } + n = PyList_Size(list); + if (n > 0) { + char *item = self->ob_item; + Py_ssize_t i; + PyMem_RESIZE(item, char, (Py_SIZE(self) + n) * itemsize); + if (item == NULL) { + PyErr_NoMemory(); + return NULL; + } + self->ob_item = item; + Py_SIZE(self) += n; + self->allocated = Py_SIZE(self); + for (i = 0; i < n; i++) { + PyObject *v = PyList_GetItem(list, i); + if ((*self->ob_descr->setitem)(self, + Py_SIZE(self) - n + i, v) != 0) { + Py_SIZE(self) -= n; + if (itemsize && (self->ob_size > PY_SSIZE_T_MAX / itemsize)) { + 
return PyErr_NoMemory(); + } + PyMem_RESIZE(item, char, + Py_SIZE(self) * itemsize); + self->ob_item = item; + self->allocated = Py_SIZE(self); + return NULL; + } + } + } + Py_INCREF(Py_None); + return Py_None; } PyDoc_STRVAR(fromlist_doc, @@ -1381,20 +1350,20 @@ static PyObject * array_tolist(arrayobject *self, PyObject *unused) { - PyObject *list = PyList_New(Py_SIZE(self)); - Py_ssize_t i; + PyObject *list = PyList_New(Py_SIZE(self)); + Py_ssize_t i; - if (list == NULL) - return NULL; - for (i = 0; i < Py_SIZE(self); i++) { - PyObject *v = getarrayitem((PyObject *)self, i); - if (v == NULL) { - Py_DECREF(list); - return NULL; - } - PyList_SetItem(list, i, v); - } - return list; + if (list == NULL) + return NULL; + for (i = 0; i < Py_SIZE(self); i++) { + PyObject *v = getarrayitem((PyObject *)self, i); + if (v == NULL) { + Py_DECREF(list); + return NULL; + } + PyList_SetItem(list, i, v); + } + return list; } PyDoc_STRVAR(tolist_doc, @@ -1406,36 +1375,36 @@ static PyObject * array_fromstring(arrayobject *self, PyObject *args) { - char *str; - Py_ssize_t n; - int itemsize = self->ob_descr->itemsize; - if (!PyArg_ParseTuple(args, "s#:fromstring", &str, &n)) - return NULL; - if (n % itemsize != 0) { - PyErr_SetString(PyExc_ValueError, - "string length not a multiple of item size"); - return NULL; - } - n = n / itemsize; - if (n > 0) { - char *item = self->ob_item; - if ((n > PY_SSIZE_T_MAX - Py_SIZE(self)) || - ((Py_SIZE(self) + n) > PY_SSIZE_T_MAX / itemsize)) { - return PyErr_NoMemory(); - } - PyMem_RESIZE(item, char, (Py_SIZE(self) + n) * itemsize); - if (item == NULL) { - PyErr_NoMemory(); - return NULL; - } - self->ob_item = item; - Py_SIZE(self) += n; - self->allocated = Py_SIZE(self); - memcpy(item + (Py_SIZE(self) - n) * itemsize, - str, itemsize*n); - } - Py_INCREF(Py_None); - return Py_None; + char *str; + Py_ssize_t n; + int itemsize = self->ob_descr->itemsize; + if (!PyArg_ParseTuple(args, "s#:fromstring", &str, &n)) + return NULL; + if (n % itemsize != 
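Illustrative sketch of the fromlist()/tolist() pair above. Note the error path in
array_fromlist(): if any element fails to convert, the array is resized back to
its old length, so a partial append is rolled back:

```python
import array

a = array.array('d')
a.fromlist([1.0, 2.5])
assert a.tolist() == [1.0, 2.5]
try:
    a.fromlist([3.0, "oops"])   # second element raises TypeError
except TypeError:
    pass
# rollback: the 3.0 that was already stored is discarded too
assert a.tolist() == [1.0, 2.5]
```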
0) { + PyErr_SetString(PyExc_ValueError, + "string length not a multiple of item size"); + return NULL; + } + n = n / itemsize; + if (n > 0) { + char *item = self->ob_item; + if ((n > PY_SSIZE_T_MAX - Py_SIZE(self)) || + ((Py_SIZE(self) + n) > PY_SSIZE_T_MAX / itemsize)) { + return PyErr_NoMemory(); + } + PyMem_RESIZE(item, char, (Py_SIZE(self) + n) * itemsize); + if (item == NULL) { + PyErr_NoMemory(); + return NULL; + } + self->ob_item = item; + Py_SIZE(self) += n; + self->allocated = Py_SIZE(self); + memcpy(item + (Py_SIZE(self) - n) * itemsize, + str, itemsize*n); + } + Py_INCREF(Py_None); + return Py_None; } PyDoc_STRVAR(fromstring_doc, @@ -1448,12 +1417,12 @@ static PyObject * array_tostring(arrayobject *self, PyObject *unused) { - if (self->ob_size <= PY_SSIZE_T_MAX / self->ob_descr->itemsize) { - return PyString_FromStringAndSize(self->ob_item, - Py_SIZE(self) * self->ob_descr->itemsize); - } else { - return PyErr_NoMemory(); - } + if (self->ob_size <= PY_SSIZE_T_MAX / self->ob_descr->itemsize) { + return PyString_FromStringAndSize(self->ob_item, + Py_SIZE(self) * self->ob_descr->itemsize); + } else { + return PyErr_NoMemory(); + } } PyDoc_STRVAR(tostring_doc, @@ -1468,36 +1437,36 @@ static PyObject * array_fromunicode(arrayobject *self, PyObject *args) { - Py_UNICODE *ustr; - Py_ssize_t n; + Py_UNICODE *ustr; + Py_ssize_t n; - if (!PyArg_ParseTuple(args, "u#:fromunicode", &ustr, &n)) - return NULL; - if (self->ob_descr->typecode != 'u') { - PyErr_SetString(PyExc_ValueError, - "fromunicode() may only be called on " - "type 'u' arrays"); - return NULL; - } - if (n > 0) { - Py_UNICODE *item = (Py_UNICODE *) self->ob_item; - if (Py_SIZE(self) > PY_SSIZE_T_MAX - n) { - return PyErr_NoMemory(); - } - PyMem_RESIZE(item, Py_UNICODE, Py_SIZE(self) + n); - if (item == NULL) { - PyErr_NoMemory(); - return NULL; - } - self->ob_item = (char *) item; - Py_SIZE(self) += n; - self->allocated = Py_SIZE(self); - memcpy(item + Py_SIZE(self) - n, - ustr, n * 
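Illustrative sketch of the fromstring()/tostring() pair above, which copy raw
machine values. These methods were later renamed frombytes()/tobytes() in
Python 3; the renamed spellings are used here so the snippet runs on modern
interpreters, but the semantics match the C code shown:

```python
import array

a = array.array('h', [1, 2, 3])
raw = a.tobytes()
assert len(raw) == 3 * a.itemsize
b = array.array('h')
b.frombytes(raw)
assert b == a
try:
    b.frombytes(b'\x00')   # length not a multiple of itemsize
except ValueError:
    pass
```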
sizeof(Py_UNICODE)); - } + if (!PyArg_ParseTuple(args, "u#:fromunicode", &ustr, &n)) + return NULL; + if (self->ob_descr->typecode != 'u') { + PyErr_SetString(PyExc_ValueError, + "fromunicode() may only be called on " + "type 'u' arrays"); + return NULL; + } + if (n > 0) { + Py_UNICODE *item = (Py_UNICODE *) self->ob_item; + if (Py_SIZE(self) > PY_SSIZE_T_MAX - n) { + return PyErr_NoMemory(); + } + PyMem_RESIZE(item, Py_UNICODE, Py_SIZE(self) + n); + if (item == NULL) { + PyErr_NoMemory(); + return NULL; + } + self->ob_item = (char *) item; + Py_SIZE(self) += n; + self->allocated = Py_SIZE(self); + memcpy(item + Py_SIZE(self) - n, + ustr, n * sizeof(Py_UNICODE)); + } - Py_INCREF(Py_None); - return Py_None; + Py_INCREF(Py_None); + return Py_None; } PyDoc_STRVAR(fromunicode_doc, @@ -1512,12 +1481,12 @@ static PyObject * array_tounicode(arrayobject *self, PyObject *unused) { - if (self->ob_descr->typecode != 'u') { - PyErr_SetString(PyExc_ValueError, - "tounicode() may only be called on type 'u' arrays"); - return NULL; - } - return PyUnicode_FromUnicode((Py_UNICODE *) self->ob_item, Py_SIZE(self)); + if (self->ob_descr->typecode != 'u') { + PyErr_SetString(PyExc_ValueError, + "tounicode() may only be called on type 'u' arrays"); + return NULL; + } + return PyUnicode_FromUnicode((Py_UNICODE *) self->ob_item, Py_SIZE(self)); } PyDoc_STRVAR(tounicode_doc, @@ -1530,325 +1499,357 @@ #endif /* Py_USING_UNICODE */ +static PyObject * +array_reduce(arrayobject *array) +{ + PyObject *dict, *result, *list; + + dict = PyObject_GetAttrString((PyObject *)array, "__dict__"); + if (dict == NULL) { + if (!PyErr_ExceptionMatches(PyExc_AttributeError)) + return NULL; + PyErr_Clear(); + dict = Py_None; + Py_INCREF(dict); + } + /* Unlike in Python 3.x, we never use the more efficient memory + * representation of an array for pickling. 
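Illustrative sketch of the typecode restriction enforced in the fromunicode()/
tounicode() hunks above (the 'u' typecode is deprecated in recent Python 3
releases but the check is the same):

```python
import array

a = array.array('u', 'héllo')
assert a.tounicode() == 'héllo'
try:
    array.array('i', [1]).tounicode()   # only 'u' arrays are allowed
except ValueError:
    pass
```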
This is unfortunately + * necessary to allow array objects to be unpickled by Python 3.x, + * since str objects from 2.x are always decoded to unicode in + * Python 3.x. + */ + list = array_tolist(array, NULL); + if (list == NULL) { + Py_DECREF(dict); + return NULL; + } + result = Py_BuildValue( + "O(cO)O", Py_TYPE(array), array->ob_descr->typecode, list, dict); + Py_DECREF(list); + Py_DECREF(dict); + return result; +} + +PyDoc_STRVAR(reduce_doc, "Return state information for pickling."); static PyObject * array_get_typecode(arrayobject *a, void *closure) { - char tc = a->ob_descr->typecode; - return PyString_FromStringAndSize(&tc, 1); + char tc = a->ob_descr->typecode; + return PyString_FromStringAndSize(&tc, 1); } static PyObject * array_get_itemsize(arrayobject *a, void *closure) { - return PyInt_FromLong((long)a->ob_descr->itemsize); + return PyInt_FromLong((long)a->ob_descr->itemsize); } static PyGetSetDef array_getsets [] = { - {"typecode", (getter) array_get_typecode, NULL, - "the typecode character used to create the array"}, - {"itemsize", (getter) array_get_itemsize, NULL, - "the size, in bytes, of one array item"}, - {NULL} + {"typecode", (getter) array_get_typecode, NULL, + "the typecode character used to create the array"}, + {"itemsize", (getter) array_get_itemsize, NULL, + "the size, in bytes, of one array item"}, + {NULL} }; static PyMethodDef array_methods[] = { - {"append", (PyCFunction)array_append, METH_O, - append_doc}, - {"buffer_info", (PyCFunction)array_buffer_info, METH_NOARGS, - buffer_info_doc}, - {"byteswap", (PyCFunction)array_byteswap, METH_NOARGS, - byteswap_doc}, - {"__copy__", (PyCFunction)array_copy, METH_NOARGS, - copy_doc}, - {"count", (PyCFunction)array_count, METH_O, - count_doc}, - {"__deepcopy__",(PyCFunction)array_copy, METH_O, - copy_doc}, - {"extend", (PyCFunction)array_extend, METH_O, - extend_doc}, - {"fromfile", (PyCFunction)array_fromfile, METH_VARARGS, - fromfile_doc}, - {"fromlist", (PyCFunction)array_fromlist, 
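Illustrative sketch of the pickling support added by array_reduce() above: the
array is reduced to (type, (typecode, item-list), dict), trading pickle size for
a representation that Python 3.x can also unpickle:

```python
import array
import pickle

a = array.array('h', [1, -2, 3])
b = pickle.loads(pickle.dumps(a))
assert b == a and b.typecode == 'h'
```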
METH_O, - fromlist_doc}, - {"fromstring", (PyCFunction)array_fromstring, METH_VARARGS, - fromstring_doc}, + {"append", (PyCFunction)array_append, METH_O, + append_doc}, + {"buffer_info", (PyCFunction)array_buffer_info, METH_NOARGS, + buffer_info_doc}, + {"byteswap", (PyCFunction)array_byteswap, METH_NOARGS, + byteswap_doc}, + {"__copy__", (PyCFunction)array_copy, METH_NOARGS, + copy_doc}, + {"count", (PyCFunction)array_count, METH_O, + count_doc}, + {"__deepcopy__",(PyCFunction)array_copy, METH_O, + copy_doc}, + {"extend", (PyCFunction)array_extend, METH_O, + extend_doc}, + {"fromfile", (PyCFunction)array_fromfile, METH_VARARGS, + fromfile_doc}, + {"fromlist", (PyCFunction)array_fromlist, METH_O, + fromlist_doc}, + {"fromstring", (PyCFunction)array_fromstring, METH_VARARGS, + fromstring_doc}, #ifdef Py_USING_UNICODE - {"fromunicode", (PyCFunction)array_fromunicode, METH_VARARGS, - fromunicode_doc}, + {"fromunicode", (PyCFunction)array_fromunicode, METH_VARARGS, + fromunicode_doc}, #endif - {"index", (PyCFunction)array_index, METH_O, - index_doc}, - {"insert", (PyCFunction)array_insert, METH_VARARGS, - insert_doc}, - {"pop", (PyCFunction)array_pop, METH_VARARGS, - pop_doc}, - {"read", (PyCFunction)array_fromfile_as_read, METH_VARARGS, - fromfile_doc}, - {"__reduce__", (PyCFunction)array_reduce, METH_NOARGS, - array_doc}, - {"remove", (PyCFunction)array_remove, METH_O, - remove_doc}, - {"reverse", (PyCFunction)array_reverse, METH_NOARGS, - reverse_doc}, -/* {"sort", (PyCFunction)array_sort, METH_VARARGS, - sort_doc},*/ - {"tofile", (PyCFunction)array_tofile, METH_O, - tofile_doc}, - {"tolist", (PyCFunction)array_tolist, METH_NOARGS, - tolist_doc}, - {"tostring", (PyCFunction)array_tostring, METH_NOARGS, - tostring_doc}, + {"index", (PyCFunction)array_index, METH_O, + index_doc}, + {"insert", (PyCFunction)array_insert, METH_VARARGS, + insert_doc}, + {"pop", (PyCFunction)array_pop, METH_VARARGS, + pop_doc}, + {"read", (PyCFunction)array_fromfile_as_read, METH_VARARGS, 
+ fromfile_doc}, + {"__reduce__", (PyCFunction)array_reduce, METH_NOARGS, + reduce_doc}, + {"remove", (PyCFunction)array_remove, METH_O, + remove_doc}, + {"reverse", (PyCFunction)array_reverse, METH_NOARGS, + reverse_doc}, +/* {"sort", (PyCFunction)array_sort, METH_VARARGS, + sort_doc},*/ + {"tofile", (PyCFunction)array_tofile, METH_O, + tofile_doc}, + {"tolist", (PyCFunction)array_tolist, METH_NOARGS, + tolist_doc}, + {"tostring", (PyCFunction)array_tostring, METH_NOARGS, + tostring_doc}, #ifdef Py_USING_UNICODE - {"tounicode", (PyCFunction)array_tounicode, METH_NOARGS, - tounicode_doc}, + {"tounicode", (PyCFunction)array_tounicode, METH_NOARGS, + tounicode_doc}, #endif - {"write", (PyCFunction)array_tofile_as_write, METH_O, - tofile_doc}, - {NULL, NULL} /* sentinel */ + {"write", (PyCFunction)array_tofile_as_write, METH_O, + tofile_doc}, + {NULL, NULL} /* sentinel */ }; static PyObject * array_repr(arrayobject *a) { - char buf[256], typecode; - PyObject *s, *t, *v = NULL; - Py_ssize_t len; + char buf[256], typecode; + PyObject *s, *t, *v = NULL; + Py_ssize_t len; - len = Py_SIZE(a); - typecode = a->ob_descr->typecode; - if (len == 0) { - PyOS_snprintf(buf, sizeof(buf), "array('%c')", typecode); - return PyString_FromString(buf); - } - - if (typecode == 'c') - v = array_tostring(a, NULL); + len = Py_SIZE(a); + typecode = a->ob_descr->typecode; + if (len == 0) { + PyOS_snprintf(buf, sizeof(buf), "array('%c')", typecode); + return PyString_FromString(buf); + } + + if (typecode == 'c') + v = array_tostring(a, NULL); #ifdef Py_USING_UNICODE - else if (typecode == 'u') - v = array_tounicode(a, NULL); + else if (typecode == 'u') + v = array_tounicode(a, NULL); #endif - else - v = array_tolist(a, NULL); - t = PyObject_Repr(v); - Py_XDECREF(v); + else + v = array_tolist(a, NULL); + t = PyObject_Repr(v); + Py_XDECREF(v); - PyOS_snprintf(buf, sizeof(buf), "array('%c', ", typecode); - s = PyString_FromString(buf); - PyString_ConcatAndDel(&s, t); - PyString_ConcatAndDel(&s, 
PyString_FromString(")")); - return s; + PyOS_snprintf(buf, sizeof(buf), "array('%c', ", typecode); + s = PyString_FromString(buf); + PyString_ConcatAndDel(&s, t); + PyString_ConcatAndDel(&s, PyString_FromString(")")); + return s; } static PyObject* array_subscr(arrayobject* self, PyObject* item) { - if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i==-1 && PyErr_Occurred()) { - return NULL; - } - if (i < 0) - i += Py_SIZE(self); - return array_item(self, i); - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelength, cur, i; - PyObject* result; - arrayobject* ar; - int itemsize = self->ob_descr->itemsize; + if (PyIndex_Check(item)) { + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); + if (i==-1 && PyErr_Occurred()) { + return NULL; + } + if (i < 0) + i += Py_SIZE(self); + return array_item(self, i); + } + else if (PySlice_Check(item)) { + Py_ssize_t start, stop, step, slicelength, cur, i; + PyObject* result; + arrayobject* ar; + int itemsize = self->ob_descr->itemsize; - if (PySlice_GetIndicesEx((PySliceObject*)item, Py_SIZE(self), - &start, &stop, &step, &slicelength) < 0) { - return NULL; - } + if (PySlice_GetIndicesEx((PySliceObject*)item, Py_SIZE(self), + &start, &stop, &step, &slicelength) < 0) { + return NULL; + } - if (slicelength <= 0) { - return newarrayobject(&Arraytype, 0, self->ob_descr); - } - else if (step == 1) { - PyObject *result = newarrayobject(&Arraytype, - slicelength, self->ob_descr); - if (result == NULL) - return NULL; - memcpy(((arrayobject *)result)->ob_item, - self->ob_item + start * itemsize, - slicelength * itemsize); - return result; - } - else { - result = newarrayobject(&Arraytype, slicelength, self->ob_descr); - if (!result) return NULL; + if (slicelength <= 0) { + return newarrayobject(&Arraytype, 0, self->ob_descr); + } + else if (step == 1) { + PyObject *result = newarrayobject(&Arraytype, + slicelength, self->ob_descr); + if (result == NULL) + return 
NULL; + memcpy(((arrayobject *)result)->ob_item, + self->ob_item + start * itemsize, + slicelength * itemsize); + return result; + } + else { + result = newarrayobject(&Arraytype, slicelength, self->ob_descr); + if (!result) return NULL; - ar = (arrayobject*)result; + ar = (arrayobject*)result; - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - memcpy(ar->ob_item + i*itemsize, - self->ob_item + cur*itemsize, - itemsize); - } - - return result; - } - } - else { - PyErr_SetString(PyExc_TypeError, - "array indices must be integers"); - return NULL; - } + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + memcpy(ar->ob_item + i*itemsize, + self->ob_item + cur*itemsize, + itemsize); + } + + return result; + } + } + else { + PyErr_SetString(PyExc_TypeError, + "array indices must be integers"); + return NULL; + } } static int array_ass_subscr(arrayobject* self, PyObject* item, PyObject* value) { - Py_ssize_t start, stop, step, slicelength, needed; - arrayobject* other; - int itemsize; + Py_ssize_t start, stop, step, slicelength, needed; + arrayobject* other; + int itemsize; - if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - - if (i == -1 && PyErr_Occurred()) - return -1; - if (i < 0) - i += Py_SIZE(self); - if (i < 0 || i >= Py_SIZE(self)) { - PyErr_SetString(PyExc_IndexError, - "array assignment index out of range"); - return -1; - } - if (value == NULL) { - /* Fall through to slice assignment */ - start = i; - stop = i + 1; - step = 1; - slicelength = 1; - } - else - return (*self->ob_descr->setitem)(self, i, value); - } - else if (PySlice_Check(item)) { - if (PySlice_GetIndicesEx((PySliceObject *)item, - Py_SIZE(self), &start, &stop, - &step, &slicelength) < 0) { - return -1; - } - } - else { - PyErr_SetString(PyExc_TypeError, - "array indices must be integer"); - return -1; - } - if (value == NULL) { - other = NULL; - needed = 0; - } - else if (array_Check(value)) { - other = (arrayobject 
*)value; - needed = Py_SIZE(other); - if (self == other) { - /* Special case "self[i:j] = self" -- copy self first */ - int ret; - value = array_slice(other, 0, needed); - if (value == NULL) - return -1; - ret = array_ass_subscr(self, item, value); - Py_DECREF(value); - return ret; - } - if (other->ob_descr != self->ob_descr) { - PyErr_BadArgument(); - return -1; - } - } - else { - PyErr_Format(PyExc_TypeError, - "can only assign array (not \"%.200s\") to array slice", - Py_TYPE(value)->tp_name); - return -1; - } - itemsize = self->ob_descr->itemsize; - /* for 'a[2:1] = ...', the insertion point is 'start', not 'stop' */ - if ((step > 0 && stop < start) || - (step < 0 && stop > start)) - stop = start; - if (step == 1) { - if (slicelength > needed) { - memmove(self->ob_item + (start + needed) * itemsize, - self->ob_item + stop * itemsize, - (Py_SIZE(self) - stop) * itemsize); - if (array_resize(self, Py_SIZE(self) + - needed - slicelength) < 0) - return -1; - } - else if (slicelength < needed) { - if (array_resize(self, Py_SIZE(self) + - needed - slicelength) < 0) - return -1; - memmove(self->ob_item + (start + needed) * itemsize, - self->ob_item + stop * itemsize, - (Py_SIZE(self) - start - needed) * itemsize); - } - if (needed > 0) - memcpy(self->ob_item + start * itemsize, - other->ob_item, needed * itemsize); - return 0; - } - else if (needed == 0) { - /* Delete slice */ - size_t cur; - Py_ssize_t i; + if (PyIndex_Check(item)) { + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (step < 0) { - stop = start + 1; - start = stop + step * (slicelength - 1) - 1; - step = -step; - } - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - Py_ssize_t lim = step - 1; + if (i == -1 && PyErr_Occurred()) + return -1; + if (i < 0) + i += Py_SIZE(self); + if (i < 0 || i >= Py_SIZE(self)) { + PyErr_SetString(PyExc_IndexError, + "array assignment index out of range"); + return -1; + } + if (value == NULL) { + /* Fall through to slice assignment */ + 
start = i; + stop = i + 1; + step = 1; + slicelength = 1; + } + else + return (*self->ob_descr->setitem)(self, i, value); + } + else if (PySlice_Check(item)) { + if (PySlice_GetIndicesEx((PySliceObject *)item, + Py_SIZE(self), &start, &stop, + &step, &slicelength) < 0) { + return -1; + } + } + else { + PyErr_SetString(PyExc_TypeError, + "array indices must be integer"); + return -1; + } + if (value == NULL) { + other = NULL; + needed = 0; + } + else if (array_Check(value)) { + other = (arrayobject *)value; + needed = Py_SIZE(other); + if (self == other) { + /* Special case "self[i:j] = self" -- copy self first */ + int ret; + value = array_slice(other, 0, needed); + if (value == NULL) + return -1; + ret = array_ass_subscr(self, item, value); + Py_DECREF(value); + return ret; + } + if (other->ob_descr != self->ob_descr) { + PyErr_BadArgument(); + return -1; + } + } + else { + PyErr_Format(PyExc_TypeError, + "can only assign array (not \"%.200s\") to array slice", + Py_TYPE(value)->tp_name); + return -1; + } + itemsize = self->ob_descr->itemsize; + /* for 'a[2:1] = ...', the insertion point is 'start', not 'stop' */ + if ((step > 0 && stop < start) || + (step < 0 && stop > start)) + stop = start; + if (step == 1) { + if (slicelength > needed) { + memmove(self->ob_item + (start + needed) * itemsize, + self->ob_item + stop * itemsize, + (Py_SIZE(self) - stop) * itemsize); + if (array_resize(self, Py_SIZE(self) + + needed - slicelength) < 0) + return -1; + } + else if (slicelength < needed) { + if (array_resize(self, Py_SIZE(self) + + needed - slicelength) < 0) + return -1; + memmove(self->ob_item + (start + needed) * itemsize, + self->ob_item + stop * itemsize, + (Py_SIZE(self) - start - needed) * itemsize); + } + if (needed > 0) + memcpy(self->ob_item + start * itemsize, + other->ob_item, needed * itemsize); + return 0; + } + else if (needed == 0) { + /* Delete slice */ + size_t cur; + Py_ssize_t i; - if (cur + step >= (size_t)Py_SIZE(self)) - lim = Py_SIZE(self) - 
cur - 1; - memmove(self->ob_item + (cur - i) * itemsize, - self->ob_item + (cur + 1) * itemsize, - lim * itemsize); - } - cur = start + slicelength * step; - if (cur < (size_t)Py_SIZE(self)) { - memmove(self->ob_item + (cur-slicelength) * itemsize, - self->ob_item + cur * itemsize, - (Py_SIZE(self) - cur) * itemsize); - } - if (array_resize(self, Py_SIZE(self) - slicelength) < 0) - return -1; - return 0; - } - else { - Py_ssize_t cur, i; + if (step < 0) { + stop = start + 1; + start = stop + step * (slicelength - 1) - 1; + step = -step; + } + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + Py_ssize_t lim = step - 1; - if (needed != slicelength) { - PyErr_Format(PyExc_ValueError, - "attempt to assign array of size %zd " - "to extended slice of size %zd", - needed, slicelength); - return -1; - } - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - memcpy(self->ob_item + cur * itemsize, - other->ob_item + i * itemsize, - itemsize); - } - return 0; - } + if (cur + step >= (size_t)Py_SIZE(self)) + lim = Py_SIZE(self) - cur - 1; + memmove(self->ob_item + (cur - i) * itemsize, + self->ob_item + (cur + 1) * itemsize, + lim * itemsize); + } + cur = start + slicelength * step; + if (cur < (size_t)Py_SIZE(self)) { + memmove(self->ob_item + (cur-slicelength) * itemsize, + self->ob_item + cur * itemsize, + (Py_SIZE(self) - cur) * itemsize); + } + if (array_resize(self, Py_SIZE(self) - slicelength) < 0) + return -1; + return 0; + } + else { + Py_ssize_t cur, i; + + if (needed != slicelength) { + PyErr_Format(PyExc_ValueError, + "attempt to assign array of size %zd " + "to extended slice of size %zd", + needed, slicelength); + return -1; + } + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + memcpy(self->ob_item + cur * itemsize, + other->ob_item + i * itemsize, + itemsize); + } + return 0; + } } static PyMappingMethods array_as_mapping = { - (lenfunc)array_length, - (binaryfunc)array_subscr, - (objobjargproc)array_ass_subscr + 
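Illustrative sketch of the slicing semantics implemented by array_subscr() and
array_ass_subscr() above: plain slice assignment may resize the array, only
arrays of the same typecode may be assigned, and an extended slice must match
in length:

```python
import array

a = array.array('i', range(6))
assert a[1:4].tolist() == [1, 2, 3]
assert a[::2].tolist() == [0, 2, 4]
a[1:3] = array.array('i', [9, 9, 9])   # grows the array
assert a.tolist() == [0, 9, 9, 9, 3, 4, 5]
try:
    a[::2] = array.array('i', [1])     # wrong length for extended slice
except ValueError:
    pass
try:
    a[0:1] = [7]                       # not an array object
except TypeError:
    pass
```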
(lenfunc)array_length, + (binaryfunc)array_subscr, + (objobjargproc)array_ass_subscr }; static const void *emptybuf = ""; @@ -1856,164 +1857,164 @@ static Py_ssize_t array_buffer_getreadbuf(arrayobject *self, Py_ssize_t index, const void **ptr) { - if ( index != 0 ) { - PyErr_SetString(PyExc_SystemError, - "Accessing non-existent array segment"); - return -1; - } - *ptr = (void *)self->ob_item; - if (*ptr == NULL) - *ptr = emptybuf; - return Py_SIZE(self)*self->ob_descr->itemsize; + if ( index != 0 ) { + PyErr_SetString(PyExc_SystemError, + "Accessing non-existent array segment"); + return -1; + } + *ptr = (void *)self->ob_item; + if (*ptr == NULL) + *ptr = emptybuf; + return Py_SIZE(self)*self->ob_descr->itemsize; } static Py_ssize_t array_buffer_getwritebuf(arrayobject *self, Py_ssize_t index, const void **ptr) { - if ( index != 0 ) { - PyErr_SetString(PyExc_SystemError, - "Accessing non-existent array segment"); - return -1; - } - *ptr = (void *)self->ob_item; - if (*ptr == NULL) - *ptr = emptybuf; - return Py_SIZE(self)*self->ob_descr->itemsize; + if ( index != 0 ) { + PyErr_SetString(PyExc_SystemError, + "Accessing non-existent array segment"); + return -1; + } + *ptr = (void *)self->ob_item; + if (*ptr == NULL) + *ptr = emptybuf; + return Py_SIZE(self)*self->ob_descr->itemsize; } static Py_ssize_t array_buffer_getsegcount(arrayobject *self, Py_ssize_t *lenp) { - if ( lenp ) - *lenp = Py_SIZE(self)*self->ob_descr->itemsize; - return 1; + if ( lenp ) + *lenp = Py_SIZE(self)*self->ob_descr->itemsize; + return 1; } static PySequenceMethods array_as_sequence = { - (lenfunc)array_length, /*sq_length*/ - (binaryfunc)array_concat, /*sq_concat*/ - (ssizeargfunc)array_repeat, /*sq_repeat*/ - (ssizeargfunc)array_item, /*sq_item*/ - (ssizessizeargfunc)array_slice, /*sq_slice*/ - (ssizeobjargproc)array_ass_item, /*sq_ass_item*/ - (ssizessizeobjargproc)array_ass_slice, /*sq_ass_slice*/ - (objobjproc)array_contains, /*sq_contains*/ - (binaryfunc)array_inplace_concat, 
/*sq_inplace_concat*/ - (ssizeargfunc)array_inplace_repeat /*sq_inplace_repeat*/ + (lenfunc)array_length, /*sq_length*/ + (binaryfunc)array_concat, /*sq_concat*/ + (ssizeargfunc)array_repeat, /*sq_repeat*/ + (ssizeargfunc)array_item, /*sq_item*/ + (ssizessizeargfunc)array_slice, /*sq_slice*/ + (ssizeobjargproc)array_ass_item, /*sq_ass_item*/ + (ssizessizeobjargproc)array_ass_slice, /*sq_ass_slice*/ + (objobjproc)array_contains, /*sq_contains*/ + (binaryfunc)array_inplace_concat, /*sq_inplace_concat*/ + (ssizeargfunc)array_inplace_repeat /*sq_inplace_repeat*/ }; static PyBufferProcs array_as_buffer = { - (readbufferproc)array_buffer_getreadbuf, - (writebufferproc)array_buffer_getwritebuf, - (segcountproc)array_buffer_getsegcount, - NULL, + (readbufferproc)array_buffer_getreadbuf, + (writebufferproc)array_buffer_getwritebuf, + (segcountproc)array_buffer_getsegcount, + NULL, }; static PyObject * array_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { - char c; - PyObject *initial = NULL, *it = NULL; - struct arraydescr *descr; - - if (type == &Arraytype && !_PyArg_NoKeywords("array.array()", kwds)) - return NULL; - - if (!PyArg_ParseTuple(args, "c|O:array", &c, &initial)) - return NULL; - - if (!(initial == NULL || PyList_Check(initial) - || PyString_Check(initial) || PyTuple_Check(initial) - || (c == 'u' && PyUnicode_Check(initial)))) { - it = PyObject_GetIter(initial); - if (it == NULL) - return NULL; - /* We set initial to NULL so that the subsequent code - will create an empty array of the appropriate type - and afterwards we can use array_iter_extend to populate - the array. 
- */ - initial = NULL; - } - for (descr = descriptors; descr->typecode != '\0'; descr++) { - if (descr->typecode == c) { - PyObject *a; - Py_ssize_t len; + char c; + PyObject *initial = NULL, *it = NULL; + struct arraydescr *descr; - if (initial == NULL || !(PyList_Check(initial) - || PyTuple_Check(initial))) - len = 0; - else - len = PySequence_Size(initial); + if (type == &Arraytype && !_PyArg_NoKeywords("array.array()", kwds)) + return NULL; - a = newarrayobject(type, len, descr); - if (a == NULL) - return NULL; + if (!PyArg_ParseTuple(args, "c|O:array", &c, &initial)) + return NULL; - if (len > 0) { - Py_ssize_t i; - for (i = 0; i < len; i++) { - PyObject *v = - PySequence_GetItem(initial, i); - if (v == NULL) { - Py_DECREF(a); - return NULL; - } - if (setarrayitem(a, i, v) != 0) { - Py_DECREF(v); - Py_DECREF(a); - return NULL; - } - Py_DECREF(v); - } - } else if (initial != NULL && PyString_Check(initial)) { - PyObject *t_initial, *v; - t_initial = PyTuple_Pack(1, initial); - if (t_initial == NULL) { - Py_DECREF(a); - return NULL; - } - v = array_fromstring((arrayobject *)a, - t_initial); - Py_DECREF(t_initial); - if (v == NULL) { - Py_DECREF(a); - return NULL; - } - Py_DECREF(v); + if (!(initial == NULL || PyList_Check(initial) + || PyString_Check(initial) || PyTuple_Check(initial) + || (c == 'u' && PyUnicode_Check(initial)))) { + it = PyObject_GetIter(initial); + if (it == NULL) + return NULL; + /* We set initial to NULL so that the subsequent code + will create an empty array of the appropriate type + and afterwards we can use array_iter_extend to populate + the array. 
+ */ + initial = NULL; + } + for (descr = descriptors; descr->typecode != '\0'; descr++) { + if (descr->typecode == c) { + PyObject *a; + Py_ssize_t len; + + if (initial == NULL || !(PyList_Check(initial) + || PyTuple_Check(initial))) + len = 0; + else + len = PySequence_Size(initial); + + a = newarrayobject(type, len, descr); + if (a == NULL) + return NULL; + + if (len > 0) { + Py_ssize_t i; + for (i = 0; i < len; i++) { + PyObject *v = + PySequence_GetItem(initial, i); + if (v == NULL) { + Py_DECREF(a); + return NULL; + } + if (setarrayitem(a, i, v) != 0) { + Py_DECREF(v); + Py_DECREF(a); + return NULL; + } + Py_DECREF(v); + } + } else if (initial != NULL && PyString_Check(initial)) { + PyObject *t_initial, *v; + t_initial = PyTuple_Pack(1, initial); + if (t_initial == NULL) { + Py_DECREF(a); + return NULL; + } + v = array_fromstring((arrayobject *)a, + t_initial); + Py_DECREF(t_initial); + if (v == NULL) { + Py_DECREF(a); + return NULL; + } + Py_DECREF(v); #ifdef Py_USING_UNICODE - } else if (initial != NULL && PyUnicode_Check(initial)) { - Py_ssize_t n = PyUnicode_GET_DATA_SIZE(initial); - if (n > 0) { - arrayobject *self = (arrayobject *)a; - char *item = self->ob_item; - item = (char *)PyMem_Realloc(item, n); - if (item == NULL) { - PyErr_NoMemory(); - Py_DECREF(a); - return NULL; - } - self->ob_item = item; - Py_SIZE(self) = n / sizeof(Py_UNICODE); - memcpy(item, PyUnicode_AS_DATA(initial), n); - self->allocated = Py_SIZE(self); - } + } else if (initial != NULL && PyUnicode_Check(initial)) { + Py_ssize_t n = PyUnicode_GET_DATA_SIZE(initial); + if (n > 0) { + arrayobject *self = (arrayobject *)a; + char *item = self->ob_item; + item = (char *)PyMem_Realloc(item, n); + if (item == NULL) { + PyErr_NoMemory(); + Py_DECREF(a); + return NULL; + } + self->ob_item = item; + Py_SIZE(self) = n / sizeof(Py_UNICODE); + memcpy(item, PyUnicode_AS_DATA(initial), n); + self->allocated = Py_SIZE(self); + } #endif - } - if (it != NULL) { - if (array_iter_extend((arrayobject 
*)a, it) == -1) { - Py_DECREF(it); - Py_DECREF(a); - return NULL; - } - Py_DECREF(it); - } - return a; - } - } - PyErr_SetString(PyExc_ValueError, - "bad typecode (must be c, b, B, u, h, H, i, I, l, L, f or d)"); - return NULL; + } + if (it != NULL) { + if (array_iter_extend((arrayobject *)a, it) == -1) { + Py_DECREF(it); + Py_DECREF(a); + return NULL; + } + Py_DECREF(it); + } + return a; + } + } + PyErr_SetString(PyExc_ValueError, + "bad typecode (must be c, b, B, u, h, H, i, I, l, L, f or d)"); + return NULL; } @@ -2049,7 +2050,7 @@ \n\ Return a new array whose items are restricted by typecode, and\n\ initialized from the optional initializer value, which must be a list,\n\ -string. or iterable over elements of the appropriate type.\n\ +string or iterable over elements of the appropriate type.\n\ \n\ Arrays represent basic values and behave very much like lists, except\n\ the type of objects stored in them is constrained.\n\ @@ -2084,55 +2085,55 @@ static PyObject *array_iter(arrayobject *ao); static PyTypeObject Arraytype = { - PyVarObject_HEAD_INIT(NULL, 0) - "array.array", - sizeof(arrayobject), - 0, - (destructor)array_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_compare */ - (reprfunc)array_repr, /* tp_repr */ - 0, /* tp_as_number*/ - &array_as_sequence, /* tp_as_sequence*/ - &array_as_mapping, /* tp_as_mapping*/ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - &array_as_buffer, /* tp_as_buffer*/ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ - arraytype_doc, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - array_richcompare, /* tp_richcompare */ - offsetof(arrayobject, weakreflist), /* tp_weaklistoffset */ - (getiterfunc)array_iter, /* tp_iter */ - 0, /* tp_iternext */ - array_methods, /* tp_methods */ - 0, /* tp_members */ - array_getsets, /* tp_getset */ - 0, /* tp_base */ - 0, 
/* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - PyType_GenericAlloc, /* tp_alloc */ - array_new, /* tp_new */ - PyObject_Del, /* tp_free */ + PyVarObject_HEAD_INIT(NULL, 0) + "array.array", + sizeof(arrayobject), + 0, + (destructor)array_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + (reprfunc)array_repr, /* tp_repr */ + 0, /* tp_as_number*/ + &array_as_sequence, /* tp_as_sequence*/ + &array_as_mapping, /* tp_as_mapping*/ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + &array_as_buffer, /* tp_as_buffer*/ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ + arraytype_doc, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + array_richcompare, /* tp_richcompare */ + offsetof(arrayobject, weakreflist), /* tp_weaklistoffset */ + (getiterfunc)array_iter, /* tp_iter */ + 0, /* tp_iternext */ + array_methods, /* tp_methods */ + 0, /* tp_members */ + array_getsets, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + PyType_GenericAlloc, /* tp_alloc */ + array_new, /* tp_new */ + PyObject_Del, /* tp_free */ }; /*********************** Array Iterator **************************/ typedef struct { - PyObject_HEAD - Py_ssize_t index; - arrayobject *ao; - PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); + PyObject_HEAD + Py_ssize_t index; + arrayobject *ao; + PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); } arrayiterobject; static PyTypeObject PyArrayIter_Type; @@ -2142,79 +2143,79 @@ static PyObject * array_iter(arrayobject *ao) { - arrayiterobject *it; + arrayiterobject *it; - if (!array_Check(ao)) { - PyErr_BadInternalCall(); - return NULL; - } + if (!array_Check(ao)) { + PyErr_BadInternalCall(); + return NULL; + } - it = 
PyObject_GC_New(arrayiterobject, &PyArrayIter_Type); - if (it == NULL) - return NULL; + it = PyObject_GC_New(arrayiterobject, &PyArrayIter_Type); + if (it == NULL) + return NULL; - Py_INCREF(ao); - it->ao = ao; - it->index = 0; - it->getitem = ao->ob_descr->getitem; - PyObject_GC_Track(it); - return (PyObject *)it; + Py_INCREF(ao); + it->ao = ao; + it->index = 0; + it->getitem = ao->ob_descr->getitem; + PyObject_GC_Track(it); + return (PyObject *)it; } static PyObject * arrayiter_next(arrayiterobject *it) { - assert(PyArrayIter_Check(it)); - if (it->index < Py_SIZE(it->ao)) - return (*it->getitem)(it->ao, it->index++); - return NULL; + assert(PyArrayIter_Check(it)); + if (it->index < Py_SIZE(it->ao)) + return (*it->getitem)(it->ao, it->index++); + return NULL; } static void arrayiter_dealloc(arrayiterobject *it) { - PyObject_GC_UnTrack(it); - Py_XDECREF(it->ao); - PyObject_GC_Del(it); + PyObject_GC_UnTrack(it); + Py_XDECREF(it->ao); + PyObject_GC_Del(it); } static int arrayiter_traverse(arrayiterobject *it, visitproc visit, void *arg) { - Py_VISIT(it->ao); - return 0; + Py_VISIT(it->ao); + return 0; } static PyTypeObject PyArrayIter_Type = { - PyVarObject_HEAD_INIT(NULL, 0) - "arrayiterator", /* tp_name */ - sizeof(arrayiterobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)arrayiter_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_compare */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */ - 0, /* tp_doc */ - (traverseproc)arrayiter_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)arrayiter_next, /* tp_iternext */ - 0, /* tp_methods */ 
+ PyVarObject_HEAD_INIT(NULL, 0) + "arrayiterator", /* tp_name */ + sizeof(arrayiterobject), /* tp_basicsize */ + 0, /* tp_itemsize */ + /* methods */ + (destructor)arrayiter_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */ + 0, /* tp_doc */ + (traverseproc)arrayiter_traverse, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + PyObject_SelfIter, /* tp_iter */ + (iternextfunc)arrayiter_next, /* tp_iternext */ + 0, /* tp_methods */ }; @@ -2229,17 +2230,17 @@ PyMODINIT_FUNC initarray(void) { - PyObject *m; + PyObject *m; - Arraytype.ob_type = &PyType_Type; - PyArrayIter_Type.ob_type = &PyType_Type; - m = Py_InitModule3("array", a_methods, module_doc); - if (m == NULL) - return; + Arraytype.ob_type = &PyType_Type; + PyArrayIter_Type.ob_type = &PyType_Type; + m = Py_InitModule3("array", a_methods, module_doc); + if (m == NULL) + return; - Py_INCREF((PyObject *)&Arraytype); - PyModule_AddObject(m, "ArrayType", (PyObject *)&Arraytype); - Py_INCREF((PyObject *)&Arraytype); - PyModule_AddObject(m, "array", (PyObject *)&Arraytype); - /* No need to check the error here, the caller will do that */ + Py_INCREF((PyObject *)&Arraytype); + PyModule_AddObject(m, "ArrayType", (PyObject *)&Arraytype); + Py_INCREF((PyObject *)&Arraytype); + PyModule_AddObject(m, "array", (PyObject *)&Arraytype); + /* No need to check the error here, the caller will do that */ } From noreply at buildbot.pypy.org Wed May 2 01:02:06 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:02:06 +0200 (CEST) Subject: [pypy-commit] pypy default: cpyext: add PyLong_FromSsize_t() Message-ID: 
<20120501230206.9ECB982F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r54849:f4fd4ab5e166 Date: 2012-05-01 00:01 +0200 http://bitbucket.org/pypy/pypy/changeset/f4fd4ab5e166/ Log: cpyext: add PyLong_FromSsize_t() diff --git a/pypy/module/cpyext/longobject.py b/pypy/module/cpyext/longobject.py --- a/pypy/module/cpyext/longobject.py +++ b/pypy/module/cpyext/longobject.py @@ -16,6 +16,13 @@ """Return a new PyLongObject object from v, or NULL on failure.""" return space.newlong(val) + at cpython_api([Py_ssize_t], PyObject) +def PyLong_FromSsize_t(space, val): + """Return a new PyLongObject object from a C Py_ssize_t, or + NULL on failure. + """ + return space.newlong(val) + @cpython_api([rffi.LONGLONG], PyObject) def PyLong_FromLongLong(space, val): """Return a new PyLongObject object from a C long long, or NULL diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1405,13 +1405,6 @@ """ raise NotImplementedError - at cpython_api([Py_ssize_t], PyObject) -def PyLong_FromSsize_t(space, v): - """Return a new PyLongObject object from a C Py_ssize_t, or - NULL on failure. 
- """ - raise NotImplementedError - @cpython_api([rffi.SIZE_T], PyObject) def PyLong_FromSize_t(space, v): """Return a new PyLongObject object from a C size_t, or diff --git a/pypy/module/cpyext/test/test_longobject.py b/pypy/module/cpyext/test/test_longobject.py --- a/pypy/module/cpyext/test/test_longobject.py +++ b/pypy/module/cpyext/test/test_longobject.py @@ -35,6 +35,7 @@ w_value = space.newlong(2) value = api.PyLong_AsSsize_t(w_value) assert value == 2 + assert space.eq_w(w_value, api.PyLong_FromSsize_t(2)) def test_fromdouble(self, space, api): w_value = api.PyLong_FromDouble(-12.74) From noreply at buildbot.pypy.org Wed May 2 01:02:07 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:02:07 +0200 (CEST) Subject: [pypy-commit] pypy default: cpyext: Implement _PyFloat_Unpack4() and _PyFloat_Unpack8(). Message-ID: <20120501230207.DB80682F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r54850:ac5690f5e02b Date: 2012-05-01 18:02 +0200 http://bitbucket.org/pypy/pypy/changeset/ac5690f5e02b/ Log: cpyext: Implement _PyFloat_Unpack4() and _PyFloat_Unpack8(). They are used by the py3k version of arraymodule. 
diff --git a/pypy/module/cpyext/floatobject.py b/pypy/module/cpyext/floatobject.py --- a/pypy/module/cpyext/floatobject.py +++ b/pypy/module/cpyext/floatobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.api import ( CANNOT_FAIL, cpython_api, PyObject, build_type_checkers, CONST_STRING) from pypy.interpreter.error import OperationError +from pypy.rlib.rstruct import runpack PyFloat_Check, PyFloat_CheckExact = build_type_checkers("Float") @@ -33,3 +34,19 @@ backward compatibility.""" return space.call_function(space.w_float, w_obj) + at cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0) +def _PyFloat_Unpack4(space, ptr, le): + input = rffi.charpsize2str(ptr, 4) + if rffi.cast(lltype.Signed, le): + return runpack.runpack("f", input) + + at cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0) +def _PyFloat_Unpack8(space, ptr, le): + input = rffi.charpsize2str(ptr, 8) + if rffi.cast(lltype.Signed, le): + return runpack.runpack("d", input) + diff --git a/pypy/module/cpyext/test/test_floatobject.py b/pypy/module/cpyext/test/test_floatobject.py --- a/pypy/module/cpyext/test/test_floatobject.py +++ b/pypy/module/cpyext/test/test_floatobject.py @@ -1,5 +1,6 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase +from pypy.rpython.lltypesystem import rffi class TestFloatObject(BaseApiTest): def test_floatobject(self, space, api): @@ -20,6 +21,16 @@ assert space.eq_w(api.PyNumber_Float(space.wrap(Coerce())), space.wrap(42.5)) + def test_unpack(self, space, api): + with rffi.scoped_str2charp("\x9a\x99\x99?") as ptr: + assert abs(api._PyFloat_Unpack4(ptr, 1) - 1.2) < 1e-7 + with rffi.scoped_str2charp("?\x99\x99\x9a") as ptr: + assert abs(api._PyFloat_Unpack4(ptr, 0) - 1.2) < 1e-7 + with rffi.scoped_str2charp("\x1f\x85\xebQ\xb8\x1e\t@") as ptr: + assert abs(api._PyFloat_Unpack8(ptr, 1) - 3.14) < 1e-15 + with rffi.scoped_str2charp("@\t\x1e\xb8Q\xeb\x85\x1f") as ptr: + 
assert abs(api._PyFloat_Unpack8(ptr, 0) - 3.14) < 1e-15 + class AppTestFloatObject(AppTestCpythonExtensionBase): def test_fromstring(self): module = self.import_extension('foo', [ From noreply at buildbot.pypy.org Wed May 2 01:02:09 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:02:09 +0200 (CEST) Subject: [pypy-commit] pypy default: cpyext: Add PyUnicode_DecodeUTF32() Message-ID: <20120501230209.25A7082F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r54851:3495b7bb0a18 Date: 2012-05-01 18:32 +0200 http://bitbucket.org/pypy/pypy/changeset/3495b7bb0a18/ Log: cpyext: Add PyUnicode_DecodeUTF32() diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1965,35 +1965,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) -def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): - """Decode length bytes from a UTF-32 encoded buffer string and return the - corresponding Unicode object. errors (if non-NULL) defines the error - handling. It defaults to "strict". - - If byteorder is non-NULL, the decoder starts decoding using the given byte - order: - - *byteorder == -1: little endian - *byteorder == 0: native order - *byteorder == 1: big endian - - If *byteorder is zero, and the first four bytes of the input data are a - byte order mark (BOM), the decoder switches to this byte order and the BOM is - not copied into the resulting Unicode string. If *byteorder is -1 or - 1, any byte order mark is copied to the output. - - After completion, *byteorder is set to the current byte order at the end - of input data. - - In a narrow build codepoints outside the BMP will be decoded as surrogate pairs. - - If byteorder is NULL, the codec starts in native order mode. 
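[Editor's note, not part of the archive] The byteorder semantics this changeset implements can be sketched with Python's built-in codecs. This is only an analogue: the commit routes the work through runicode.str_decode_utf_32_helper, but the behavior is the same — an explicit endianness ("utf-32-le"/"utf-32-be") never consumes a BOM, while the byteorder-0 "native" mode (plain "utf-32") consumes a leading BOM and switches to the byte order it indicates.

```python
# Explicit little/big endian, matching byteorder == -1 / +1 in the C API;
# the byte strings are the same payloads used in the commit's tests.
assert b'a\x00\x00\x00b\x00\x00\x00'.decode('utf-32-le') == u'ab'
assert b'\x00\x00\x00a\x00\x00\x00b'.decode('utf-32-be') == u'ab'
# byteorder == 0: a leading BOM (here FF FE 00 00, little endian) is
# consumed and determines the byte order of the rest of the data.
assert b'\xff\xfe\x00\x00a\x00\x00\x00b\x00\x00\x00'.decode('utf-32') == u'ab'
```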
- - Return NULL if an exception was raised by the codec. - """ - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP, Py_ssize_t], PyObject) def PyUnicode_DecodeUTF32Stateful(space, s, size, errors, byteorder, consumed): """If consumed is NULL, behave like PyUnicode_DecodeUTF32(). If diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -356,6 +356,42 @@ test("\xFE\xFF\x00\x61\x00\x62\x00\x63\x00\x64", 0, 1) test("\xFF\xFE\x61\x00\x62\x00\x63\x00\x64\x00", 0, -1) + def test_decode_utf32(self, space, api): + def test(encoded, endian, realendian=None): + encoded_charp = rffi.str2charp(encoded) + strict_charp = rffi.str2charp("strict") + if endian is not None: + if endian < 0: + value = -1 + elif endian > 0: + value = 1 + else: + value = 0 + pendian = lltype.malloc(rffi.INTP.TO, 1, flavor='raw') + pendian[0] = rffi.cast(rffi.INT, value) + else: + pendian = None + + w_ustr = api.PyUnicode_DecodeUTF32(encoded_charp, len(encoded), strict_charp, pendian) + assert space.eq_w(space.call_method(w_ustr, 'encode', space.wrap('ascii')), + space.wrap("ab")) + + rffi.free_charp(encoded_charp) + rffi.free_charp(strict_charp) + if pendian: + if realendian is not None: + assert rffi.cast(rffi.INT, realendian) == pendian[0] + lltype.free(pendian, flavor='raw') + + test("\x61\x00\x00\x00\x62\x00\x00\x00", -1) + + test("\x61\x00\x00\x00\x62\x00\x00\x00", None) + + test("\x00\x00\x00\x61\x00\x00\x00\x62", 1) + + test("\x00\x00\xFE\xFF\x00\x00\x00\x61\x00\x00\x00\x62", 0, 1) + test("\xFF\xFE\x00\x00\x61\x00\x00\x00\x62\x00\x00\x00", 0, -1) + def test_compare(self, space, api): assert api.PyUnicode_Compare(space.wrap('a'), space.wrap('b')) == -1 diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ 
b/pypy/module/cpyext/unicodeobject.py @@ -525,9 +525,8 @@ string = rffi.charpsize2str(s, size) - #FIXME: I don't like these prefixes - if pbyteorder is not None: # correct NULL check? - llbyteorder = rffi.cast(lltype.Signed, pbyteorder[0]) # compatible with int? + if pbyteorder is not None: + llbyteorder = rffi.cast(lltype.Signed, pbyteorder[0]) if llbyteorder < 0: byteorder = "little" elif llbyteorder > 0: @@ -542,11 +541,67 @@ else: errors = None - result, length, byteorder = runicode.str_decode_utf_16_helper(string, size, - errors, - True, # final ? false for multiple passes? - None, # errorhandler - byteorder) + result, length, byteorder = runicode.str_decode_utf_16_helper( + string, size, errors, + True, # final ? false for multiple passes? + None, # errorhandler + byteorder) + if pbyteorder is not None: + pbyteorder[0] = rffi.cast(rffi.INT, byteorder) + + return space.wrap(result) + + at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) +def PyUnicode_DecodeUTF32(space, s, size, llerrors, pbyteorder): + """Decode length bytes from a UTF-32 encoded buffer string and + return the corresponding Unicode object. errors (if non-NULL) + defines the error handling. It defaults to "strict". + + If byteorder is non-NULL, the decoder starts decoding using the + given byte order: + *byteorder == -1: little endian + *byteorder == 0: native order + *byteorder == 1: big endian + + If *byteorder is zero, and the first four bytes of the input data + are a byte order mark (BOM), the decoder switches to this byte + order and the BOM is not copied into the resulting Unicode string. + If *byteorder is -1 or 1, any byte order mark is copied to the + output. + + After completion, *byteorder is set to the current byte order at + the end of input data. + + In a narrow build codepoints outside the BMP will be decoded as + surrogate pairs. + + If byteorder is NULL, the codec starts in native order mode. + + Return NULL if an exception was raised by the codec. 
+ """ + string = rffi.charpsize2str(s, size) + + if pbyteorder: + llbyteorder = rffi.cast(lltype.Signed, pbyteorder[0]) + if llbyteorder < 0: + byteorder = "little" + elif llbyteorder > 0: + byteorder = "big" + else: + byteorder = "native" + else: + byteorder = "native" + + if llerrors: + errors = rffi.charp2str(llerrors) + else: + errors = None + + result, length, byteorder = runicode.str_decode_utf_32_helper( + string, size, errors, + True, # final ? false for multiple passes? + None, # errorhandler + byteorder) if pbyteorder is not None: pbyteorder[0] = rffi.cast(rffi.INT, byteorder) From noreply at buildbot.pypy.org Wed May 2 01:02:10 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:02:10 +0200 (CEST) Subject: [pypy-commit] pypy default: cpyext: add support for PyUnicode_FromStringAndSize(NULL, size) Message-ID: <20120501230210.6F0F482F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r54852:be37d8224c03 Date: 2012-05-01 18:49 +0200 http://bitbucket.org/pypy/pypy/changeset/be37d8224c03/ Log: cpyext: add support for PyUnicode_FromStringAndSize(NULL, size) diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -4,7 +4,7 @@ from pypy.module.cpyext.unicodeobject import ( Py_UNICODE, PyUnicodeObject, new_empty_unicode) from pypy.module.cpyext.api import PyObjectP, PyObject -from pypy.module.cpyext.pyobject import Py_DecRef +from pypy.module.cpyext.pyobject import Py_DecRef, from_ref from pypy.rpython.lltypesystem import rffi, lltype import sys, py @@ -145,7 +145,9 @@ w_res = api.PyUnicode_FromString(s) assert space.unwrap(w_res) == u'späm' - w_res = api.PyUnicode_FromStringAndSize(s, 4) + res = api.PyUnicode_FromStringAndSize(s, 4) + w_res = from_ref(space, res) + api.Py_DecRef(res) assert space.unwrap(w_res) == u'spä' rffi.free_charp(s) diff --git
a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -415,10 +415,10 @@ needed data. The buffer is copied into the new object. If the buffer is not NULL, the return value might be a shared object. Therefore, modification of the resulting Unicode object is only allowed when u is NULL.""" - if not s: - raise NotImplementedError - w_str = space.wrap(rffi.charpsize2str(s, size)) - return space.call_method(w_str, 'decode', space.wrap("utf-8")) + if s: + return make_ref(space, PyUnicode_DecodeUTF8(space, s, size, None)) + else: + return rffi.cast(PyObject, new_empty_unicode(space, size)) @cpython_api([rffi.INT_real], PyObject) def PyUnicode_FromOrdinal(space, ordinal): @@ -477,6 +477,7 @@ else: w_errors = space.w_None return space.call_method(w_s, 'decode', space.wrap(encoding), w_errors) + globals()['PyUnicode_Decode%s' % suffix] = PyUnicode_DecodeXXX @cpython_api([CONST_WSTRING, Py_ssize_t, CONST_STRING], PyObject) @func_renamer('PyUnicode_Encode%s' % suffix) @@ -490,6 +491,7 @@ else: w_errors = space.w_None return space.call_method(w_u, 'encode', space.wrap(encoding), w_errors) + globals()['PyUnicode_Encode%s' % suffix] = PyUnicode_EncodeXXX make_conversion_functions('UTF8', 'utf-8') make_conversion_functions('ASCII', 'ascii') From noreply at buildbot.pypy.org Wed May 2 01:02:11 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:02:11 +0200 (CEST) Subject: [pypy-commit] pypy default: translation fix Message-ID: <20120501230211.A872B82F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r54853:c5c6c9577065 Date: 2012-05-01 23:52 +0200 http://bitbucket.org/pypy/pypy/changeset/c5c6c9577065/ Log: translation fix diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -416,7 +416,8 @@ NULL, the 
return value might be a shared object. Therefore, modification of the resulting Unicode object is only allowed when u is NULL.""" if s: - return make_ref(space, PyUnicode_DecodeUTF8(space, s, size, None)) + return make_ref(space, PyUnicode_DecodeUTF8( + space, s, size, lltype.nullptr(rffi.CCHARP.TO))) else: return rffi.cast(PyObject, new_empty_unicode(space, size)) From noreply at buildbot.pypy.org Wed May 2 01:02:14 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:02:14 +0200 (CEST) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20120501230214.0394982F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r54854:4fc21e56dbc9 Date: 2012-05-02 01:00 +0200 http://bitbucket.org/pypy/pypy/changeset/4fc21e56dbc9/ Log: merge heads diff --git a/pypy/doc/cppyy.rst b/pypy/doc/cppyy.rst --- a/pypy/doc/cppyy.rst +++ b/pypy/doc/cppyy.rst @@ -21,6 +21,26 @@ .. _`llvm`: http://llvm.org/ +Motivation +========== + +The cppyy module offers two unique features, which result in great +performance as well as better functionality and cross-language integration +than would otherwise be possible. +First, cppyy is written in RPython and therefore open to optimizations by the +JIT up until the actual point of call into C++. +This means that there are no conversions necessary between a garbage collected +and a reference counted environment, as is needed for the use of existing +extension modules written or generated for CPython. +It also means that if variables are already unboxed by the JIT, they can be +passed through directly to C++. +Second, Reflex (and cling far more so) adds dynamic features to C++, thus +greatly reducing impedance mismatches between the two languages. +In fact, Reflex is dynamic enough that you could write the runtime bindings +generation in python (as opposed to RPython) and this is used to create very +natural "pythonizations" of the bound code. 
+ + Installation ============ @@ -195,10 +215,12 @@ >>>> d = cppyy.gbl.BaseFactory("name", 42, 3.14) >>>> type(d) - >>>> d.m_i - 42 - >>>> d.m_d - 3.14 + >>>> isinstance(d, cppyy.gbl.Base1) + True + >>>> isinstance(d, cppyy.gbl.Base2) + True + >>>> d.m_i, d.m_d + (42, 3.14) >>>> d.m_name == "name" True >>>> @@ -295,6 +317,9 @@ To select a specific virtual method, do like with normal python classes that override methods: select it from the class that you need, rather than calling the method on the instance. + To select a specific overload, use the __dispatch__ special function, which + takes the name of the desired method and its signature (which can be + obtained from the doc string) as arguments. * **namespaces**: Are represented as python classes. Namespaces are more open-ended than classes, so sometimes initial access may diff --git a/pypy/doc/extending.rst b/pypy/doc/extending.rst --- a/pypy/doc/extending.rst +++ b/pypy/doc/extending.rst @@ -116,13 +116,21 @@ Reflex ====== -This method is only experimental for now, and is being exercised on a branch, -`reflex-support`_, so you will have to build PyPy yourself. +This method is still experimental and is being exercised on a branch, +`reflex-support`_, which adds the `cppyy`_ module. The method works by using the `Reflex package`_ to provide reflection information of the C++ code, which is then used to automatically generate -bindings at runtime, which can then be used from python. +bindings at runtime. +From a python standpoint, there is no difference between generating bindings +at runtime, or having them "statically" generated and available in scripts +or compiled into extension modules: python classes and functions are always +runtime structures, created when a script or module loads. +However, if the backend itself is capable of dynamic behavior, it is a much +better functional match to python, allowing tighter integration and more +natural language mappings. Full details are `available here`_. +.. 
_`cppyy`: cppyy.html .. _`reflex-support`: cppyy.html .. _`Reflex package`: http://root.cern.ch/drupal/content/reflex .. _`available here`: cppyy.html @@ -130,16 +138,33 @@ Pros ---- -If it works, it is mostly automatic, and hence easy in use. -The bindings can make use of direct pointers, in which case the calls are -very fast. +The cppyy module is written in RPython, which makes it possible to keep the +code execution visible to the JIT all the way to the actual point of call into +C++, thus allowing for a very fast interface. +Reflex is currently in use in large software environments in High Energy +Physics (HEP), across many different projects and packages, and its use can be +virtually completely automated in a production environment. +One of its uses in HEP is in providing language bindings for CPython. +Thus, it is possible to use Reflex to have bound code work on both CPython and +on PyPy. +In the medium-term, Reflex will be replaced by `cling`_, which is based on +`llvm`_. +This will affect the backend only; the python-side interface is expected to +remain the same, except that cling adds a lot of dynamic behavior to C++, +enabling further language integration. + +.. _`cling`: http://root.cern.ch/drupal/content/cling +.. _`llvm`: http://llvm.org/ Cons ---- -C++ is a large language, and these bindings are not yet feature-complete. -Although missing features should do no harm if you don't use them, if you do -need a particular feature, it may be necessary to work around it in python -or with a C++ helper function. +C++ is a large language, and cppyy is not yet feature-complete. +Still, the experience gained in developing the equivalent bindings for CPython +means that adding missing features is a simple matter of engineering, not a +question of research. 
+The module is written so that currently missing features should do no harm if +you don't use them, if you do need a particular feature, it may be necessary +to work around it in python or with a C++ helper function. Although Reflex works on various platforms, the bindings with PyPy have only been tested on Linux. diff --git a/pypy/jit/backend/llsupport/test/test_asmmemmgr.py b/pypy/jit/backend/llsupport/test/test_asmmemmgr.py --- a/pypy/jit/backend/llsupport/test/test_asmmemmgr.py +++ b/pypy/jit/backend/llsupport/test/test_asmmemmgr.py @@ -217,7 +217,8 @@ encoded = ''.join(writtencode).encode('hex').upper() ataddr = '@%x' % addr assert log == [('test-logname-section', - [('debug_print', 'CODE_DUMP', ataddr, '+0 ', encoded)])] + [('debug_print', 'SYS_EXECUTABLE', '??'), + ('debug_print', 'CODE_DUMP', ataddr, '+0 ', encoded)])] lltype.free(p, flavor='raw') diff --git a/pypy/jit/metainterp/jitexc.py b/pypy/jit/metainterp/jitexc.py --- a/pypy/jit/metainterp/jitexc.py +++ b/pypy/jit/metainterp/jitexc.py @@ -12,7 +12,6 @@ """ _go_through_llinterp_uncaught_ = True # ugh - def _get_standard_error(rtyper, Class): exdata = rtyper.getexceptiondata() clsdef = rtyper.annotator.bookkeeper.getuniqueclassdef(Class) diff --git a/pypy/jit/metainterp/optimize.py b/pypy/jit/metainterp/optimize.py --- a/pypy/jit/metainterp/optimize.py +++ b/pypy/jit/metainterp/optimize.py @@ -5,3 +5,9 @@ """Raised when the optimize*.py detect that the loop that we are trying to build cannot possibly make sense as a long-running loop (e.g. 
it cannot run 2 complete iterations).""" + + def __init__(self, msg='?'): + debug_start("jit-abort") + debug_print(msg) + debug_stop("jit-abort") + self.msg = msg diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -49,8 +49,9 @@ optimizations.append(OptFfiCall()) if ('rewrite' not in enable_opts or 'virtualize' not in enable_opts - or 'heap' not in enable_opts or 'unroll' not in enable_opts): - optimizations.append(OptSimplify()) + or 'heap' not in enable_opts or 'unroll' not in enable_opts + or 'pure' not in enable_opts): + optimizations.append(OptSimplify(unroll)) return optimizations, unroll diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -257,8 +257,8 @@ opnum == rop.COPYSTRCONTENT or # no effect on GC struct/array opnum == rop.COPYUNICODECONTENT): # no effect on GC struct/array return - assert opnum != rop.CALL_PURE if (opnum == rop.CALL or + opnum == rop.CALL_PURE or opnum == rop.CALL_MAY_FORCE or opnum == rop.CALL_RELEASE_GIL or opnum == rop.CALL_ASSEMBLER): @@ -481,7 +481,7 @@ # already between the tracing and now. In this case, we are # simply ignoring the QUASIIMMUT_FIELD hint and compiling it # as a regular getfield. - if not qmutdescr.is_still_valid(): + if not qmutdescr.is_still_valid_for(structvalue.get_key_box()): self._remove_guard_not_invalidated = True return # record as an out-of-line guard diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -191,10 +191,13 @@ # GUARD_OVERFLOW, then the loop is invalid. 
lastop = self.last_emitted_operation if lastop is None: - raise InvalidLoop + raise InvalidLoop('An INT_xxx_OVF was proven not to overflow but' + + 'guarded with GUARD_OVERFLOW') opnum = lastop.getopnum() if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF): - raise InvalidLoop + raise InvalidLoop('An INT_xxx_OVF was proven not to overflow but' + + 'guarded with GUARD_OVERFLOW') + self.emit_operation(op) def optimize_INT_ADD_OVF(self, op): diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -525,6 +525,7 @@ @specialize.argtype(0) def _emit_operation(self, op): + assert op.getopnum() != rop.CALL_PURE for i in range(op.numargs()): arg = op.getarg(i) try: diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -208,7 +208,8 @@ box = value.box assert isinstance(box, Const) if not box.same_constant(constbox): - raise InvalidLoop + raise InvalidLoop('A GURAD_{VALUE,TRUE,FALSE} was proven to' + + 'always fail') return if emit_operation: self.emit_operation(op) @@ -220,7 +221,7 @@ if value.is_null(): return elif value.is_nonnull(): - raise InvalidLoop + raise InvalidLoop('A GUARD_ISNULL was proven to always fail') self.emit_operation(op) value.make_constant(self.optimizer.cpu.ts.CONST_NULL) @@ -229,7 +230,7 @@ if value.is_nonnull(): return elif value.is_null(): - raise InvalidLoop + raise InvalidLoop('A GUARD_NONNULL was proven to always fail') self.emit_operation(op) value.make_nonnull(op) @@ -278,7 +279,7 @@ if realclassbox is not None: if realclassbox.same_constant(expectedclassbox): return - raise InvalidLoop + raise InvalidLoop('A GUARD_CLASS was proven to always fail') if value.last_guard: # there already has been a guard_nonnull or guard_class or # 
guard_nonnull_class on this value. @@ -301,7 +302,8 @@ def optimize_GUARD_NONNULL_CLASS(self, op): value = self.getvalue(op.getarg(0)) if value.is_null(): - raise InvalidLoop + raise InvalidLoop('A GUARD_NONNULL_CLASS was proven to always ' + + 'fail') self.optimize_GUARD_CLASS(op) def optimize_CALL_LOOPINVARIANT(self, op): diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -4,8 +4,9 @@ from pypy.jit.metainterp.history import TargetToken, JitCellToken class OptSimplify(Optimization): - def __init__(self): + def __init__(self, unroll): self.last_label_descr = None + self.unroll = unroll def optimize_CALL_PURE(self, op): args = op.getarglist() @@ -35,24 +36,26 @@ pass def optimize_LABEL(self, op): - descr = op.getdescr() - if isinstance(descr, JitCellToken): - return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) - self.last_label_descr = op.getdescr() + if not self.unroll: + descr = op.getdescr() + if isinstance(descr, JitCellToken): + return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) + self.last_label_descr = op.getdescr() self.emit_operation(op) def optimize_JUMP(self, op): - descr = op.getdescr() - assert isinstance(descr, JitCellToken) - if not descr.target_tokens: - assert self.last_label_descr is not None - target_token = self.last_label_descr - assert isinstance(target_token, TargetToken) - assert target_token.targeting_jitcell_token is descr - op.setdescr(self.last_label_descr) - else: - assert len(descr.target_tokens) == 1 - op.setdescr(descr.target_tokens[0]) + if not self.unroll: + descr = op.getdescr() + assert isinstance(descr, JitCellToken) + if not descr.target_tokens: + assert self.last_label_descr is not None + target_token = self.last_label_descr + assert isinstance(target_token, TargetToken) + assert target_token.targeting_jitcell_token is descr + op.setdescr(self.last_label_descr) + 
else: + assert len(descr.target_tokens) == 1 + op.setdescr(descr.target_tokens[0]) self.emit_operation(op) dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', diff --git a/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py @@ -0,0 +1,46 @@ +from pypy.jit.metainterp.optimizeopt.test.test_optimizeopt import OptimizeOptTest +from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin +from pypy.jit.metainterp.resoperation import rop + + +allopts = OptimizeOptTest.enable_opts.split(':') +for optnum in range(len(allopts)): + myopts = allopts[:] + del myopts[optnum] + + class TestLLtype(OptimizeOptTest, LLtypeMixin): + enable_opts = ':'.join(myopts) + + def optimize_loop(self, ops, expected, expected_preamble=None, + call_pure_results=None, expected_short=None): + loop = self.parse(ops) + if expected != "crash!": + expected = self.parse(expected) + if expected_preamble: + expected_preamble = self.parse(expected_preamble) + if expected_short: + expected_short = self.parse(expected_short) + + preamble = self.unroll_and_optimize(loop, call_pure_results) + + for op in preamble.operations + loop.operations: + assert op.getopnum() not in (rop.CALL_PURE, + rop.CALL_LOOPINVARIANT, + rop.VIRTUAL_REF_FINISH, + rop.VIRTUAL_REF, + rop.QUASIIMMUT_FIELD, + rop.MARK_OPAQUE_PTR, + rop.RECORD_KNOWN_CLASS) + + def raises(self, e, fn, *args): + try: + fn(*args) + except e: + pass + + opt = allopts[optnum] + exec "TestNo%sLLtype = TestLLtype" % (opt[0].upper() + opt[1:]) + +del TestLLtype # No need to run the last set twice +del TestNoUnrollLLtype # This case is taken care of by test_optimizebasic + diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py ---
a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -5090,7 +5090,6 @@ class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass - ##class TestOOtype(BaseTestOptimizeBasic, OOtypeMixin): ## def test_instanceof(self): diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -105,6 +105,9 @@ return loop + def raises(self, e, fn, *args): + py.test.raises(e, fn, *args) + class OptimizeOptTest(BaseTestWithUnroll): def setup_method(self, meth=None): @@ -2639,7 +2642,7 @@ p2 = new_with_vtable(ConstClass(node_vtable)) jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_2(self): @@ -2651,7 +2654,7 @@ escape(p2) # prevent it from staying Virtual jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_3(self): @@ -2665,7 +2668,7 @@ setfield_gc(p3, p4, descr=nextdescr) jump(p3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_merge_guard_class_guard_value(self): @@ -4411,7 +4414,7 @@ setfield_gc(p1, p3, descr=nextdescr) jump(p3) """ - py.test.raises(BogusPureField, self.optimize_loop, ops, "crash!") + self.raises(BogusPureField, self.optimize_loop, ops, "crash!") def test_dont_complains_different_field(self): ops = """ @@ -5024,7 +5027,7 @@ i2 = int_add(i0, 3) jump(i2) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_ne_const_not(self): ops = """ @@ -5074,7 +5077,7 @@ i3 = int_add(i0, 3) jump(i3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + 
self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_lshift(self): ops = """ @@ -6533,9 +6536,9 @@ def test_quasi_immut_2(self): ops = """ [] - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i1 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i1 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) escape(i1) jump() """ @@ -6585,13 +6588,13 @@ def test_call_may_force_invalidated_guards_reload(self): ops = """ [i0a, i0b] - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i1 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i1 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) call_may_force(i0b, descr=mayforcevirtdescr) - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i2 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i2 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) i3 = escape(i1) i4 = escape(i2) jump(i3, i4) @@ -7813,6 +7816,52 @@ """ self.optimize_loop(ops, expected) + def test_issue1080_infinitie_loop_virtual(self): + ops = """ + [p10] + p52 = getfield_gc(p10, descr=nextdescr) # inst_storage + p54 = getarrayitem_gc(p52, 0, descr=arraydescr) + p69 = getfield_gc_pure(p54, descr=otherdescr) # inst_w_function + + quasiimmut_field(p69, descr=quasiimmutdescr) + guard_not_invalidated() [] + p71 = getfield_gc(p69, descr=quasifielddescr) # inst_code + guard_value(p71, -4247) [] + + p106 = new_with_vtable(ConstClass(node_vtable)) + p108 = new_array(3, descr=arraydescr) + p110 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p110, ConstPtr(myptr2), descr=otherdescr) # inst_w_function + setarrayitem_gc(p108, 0, p110, descr=arraydescr) + setfield_gc(p106, p108, descr=nextdescr) # inst_storage + jump(p106) + """ + 
expected = """ + [] + p72 = getfield_gc(ConstPtr(myptr2), descr=quasifielddescr) + guard_value(p72, -4247) [] + jump() + """ + self.optimize_loop(ops, expected) + + + def test_issue1080_infinitie_loop_simple(self): + ops = """ + [p69] + quasiimmut_field(p69, descr=quasiimmutdescr) + guard_not_invalidated() [] + p71 = getfield_gc(p69, descr=quasifielddescr) # inst_code + guard_value(p71, -4247) [] + jump(ConstPtr(myptr)) + """ + expected = """ + [] + p72 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + guard_value(p72, -4247) [] + jump() + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -122,6 +122,7 @@ quasi.inst_field = -4247 quasifielddescr = cpu.fielddescrof(QUASI, 'inst_field') quasibox = BoxPtr(lltype.cast_opaque_ptr(llmemory.GCREF, quasi)) + quasiptr = quasibox.value quasiimmutdescr = QuasiImmutDescr(cpu, quasibox, quasifielddescr, cpu.fielddescrof(QUASI, 'mutate_field')) diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -315,7 +315,10 @@ try: jumpargs = virtual_state.make_inputargs(values, self.optimizer) except BadVirtualState: - raise InvalidLoop + raise InvalidLoop('The state of the optimizer at the end of ' + + 'the peeled loop is inconsistent with the ' + + 'VirtualState at the beginning of the peeled ' + + 'loop') jumpop.initarglist(jumpargs) # Inline the short preamble at the end of the loop @@ -325,7 +328,11 @@ for i in range(len(short_inputargs)): if short_inputargs[i] in args: if args[short_inputargs[i]] != jmp_to_short_args[i]: - raise InvalidLoop + raise InvalidLoop('The short preamble wants the ' + + 'same box passed to
multiple of its ' + + 'inputargs, but the jump at the ' + + 'end of this bridge does not do that.') + args[short_inputargs[i]] = jmp_to_short_args[i] self.short_inliner = Inliner(short_inputargs, jmp_to_short_args) for op in self.short[1:]: @@ -378,7 +385,10 @@ #final_virtual_state.debug_print("Bad virtual state at end of loop, ", # bad) #debug_stop('jit-log-virtualstate') - raise InvalidLoop + raise InvalidLoop('The virtual state at the end of the peeled ' + + 'loop is not compatible with the virtual ' + + 'state at the start of the loop which makes ' + + 'it impossible to close the loop') #debug_stop('jit-log-virtualstate') @@ -526,8 +536,8 @@ args = jumpop.getarglist() modifier = VirtualStateAdder(self.optimizer) virtual_state = modifier.get_virtual_state(args) - #debug_start('jit-log-virtualstate') - #virtual_state.debug_print("Looking for ") + debug_start('jit-log-virtualstate') + virtual_state.debug_print("Looking for ") for target in cell_token.target_tokens: if not target.virtual_state: @@ -536,10 +546,10 @@ extra_guards = [] bad = {} - #debugmsg = 'Did not match ' + debugmsg = 'Did not match ' if target.virtual_state.generalization_of(virtual_state, bad): ok = True - #debugmsg = 'Matched ' + debugmsg = 'Matched ' else: try: cpu = self.optimizer.cpu @@ -548,13 +558,13 @@ extra_guards) ok = True - #debugmsg = 'Guarded to match ' + debugmsg = 'Guarded to match ' except InvalidLoop: pass - #target.virtual_state.debug_print(debugmsg, bad) + target.virtual_state.debug_print(debugmsg, bad) if ok: - #debug_stop('jit-log-virtualstate') + debug_stop('jit-log-virtualstate') values = [self.getvalue(arg) for arg in jumpop.getarglist()] @@ -581,7 +591,7 @@ jumpop.setdescr(cell_token.target_tokens[0]) self.optimizer.send_extra_operation(jumpop) return True - #debug_stop('jit-log-virtualstate') + debug_stop('jit-log-virtualstate') return False class ValueImporter(object): diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py 
b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -27,11 +27,15 @@ if self.generalization_of(other, renum, {}): return if renum[self.position] != other.position: - raise InvalidLoop + raise InvalidLoop('The numbering of the virtual states does not ' + + 'match. This means that two virtual fields ' + + 'have been set to the same Box in one of the ' + + 'virtual states but not in the other.') self._generate_guards(other, box, cpu, extra_guards) def _generate_guards(self, other, box, cpu, extra_guards): - raise InvalidLoop + raise InvalidLoop('Generating guards for making the VirtualStates ' + + 'at hand match has not been implemented') def enum_forced_boxes(self, boxes, value, optimizer): raise NotImplementedError @@ -346,10 +350,12 @@ def _generate_guards(self, other, box, cpu, extra_guards): if not isinstance(other, NotVirtualStateInfo): - raise InvalidLoop + raise InvalidLoop('The VirtualStates do not match as a ' + + 'virtual appears where a pointer is needed ' + + 'and it is too late to force it.') if self.lenbound or other.lenbound: - raise InvalidLoop + raise InvalidLoop('The array length bounds do not match.') if self.level == LEVEL_KNOWNCLASS and \ box.nonnull() and \ @@ -400,7 +406,8 @@ return # Remaining cases are probably not interesting - raise InvalidLoop + raise InvalidLoop('Generating guards for making the VirtualStates ' + + 'at hand match has not been implemented') if self.level == LEVEL_CONSTANT: import pdb; pdb.set_trace() raise NotImplementedError diff --git a/pypy/jit/metainterp/quasiimmut.py b/pypy/jit/metainterp/quasiimmut.py --- a/pypy/jit/metainterp/quasiimmut.py +++ b/pypy/jit/metainterp/quasiimmut.py @@ -120,8 +120,10 @@ self.fielddescr, self.structbox) return fieldbox.constbox() - def is_still_valid(self): + def is_still_valid_for(self, structconst): assert self.structbox is not None + if not
self.structbox.constbox().same_constant(structconst): + return False cpu = self.cpu gcref = self.structbox.getref_base() qmut = get_current_qmut_instance(cpu, gcref, self.mutatefielddescr) diff --git a/pypy/jit/metainterp/test/test_quasiimmut.py b/pypy/jit/metainterp/test/test_quasiimmut.py --- a/pypy/jit/metainterp/test/test_quasiimmut.py +++ b/pypy/jit/metainterp/test/test_quasiimmut.py @@ -8,7 +8,7 @@ from pypy.jit.metainterp.quasiimmut import get_current_qmut_instance from pypy.jit.metainterp.test.support import LLJitMixin from pypy.jit.codewriter.policy import StopAtXPolicy -from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe +from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe, promote def test_get_current_qmut_instance(): @@ -506,6 +506,27 @@ "guard_not_invalidated": 2 }) + def test_issue1080(self): + myjitdriver = JitDriver(greens=[], reds=["n", "sa", "a"]) + class Foo(object): + _immutable_fields_ = ["x?"] + def __init__(self, x): + self.x = x + one, two = Foo(1), Foo(2) + def main(n): + sa = 0 + a = one + while n: + myjitdriver.jit_merge_point(n=n, sa=sa, a=a) + sa += a.x + if a.x == 1: + a = two + elif a.x == 2: + a = one + n -= 1 + return sa + res = self.meta_interp(main, [10]) + assert res == main(10) class TestLLtypeGreenFieldsTests(QuasiImmutTests, LLJitMixin): pass diff --git a/pypy/module/rctime/test/test_rctime.py b/pypy/module/rctime/test/test_rctime.py --- a/pypy/module/rctime/test/test_rctime.py +++ b/pypy/module/rctime/test/test_rctime.py @@ -64,6 +64,7 @@ def test_localtime(self): import time as rctime + import os raises(TypeError, rctime.localtime, "foo") rctime.localtime() rctime.localtime(None) @@ -75,6 +76,10 @@ assert 0 <= (t1 - t0) < 1.2 t = rctime.time() assert rctime.localtime(t) == rctime.localtime(t) + if os.name == 'nt': + raises(ValueError, rctime.localtime, -1) + else: + rctime.localtime(-1) def test_mktime(self): import time as rctime @@ -108,8 +113,8 @@ assert long(rctime.mktime(rctime.gmtime(t))) 
- rctime.timezone == long(t) ltime = rctime.localtime() assert rctime.mktime(tuple(ltime)) == rctime.mktime(ltime) - - assert rctime.mktime(rctime.localtime(-1)) == -1 + if os.name != 'nt': + assert rctime.mktime(rctime.localtime(-1)) == -1 def test_asctime(self): import time as rctime diff --git a/pypy/module/select/__init__.py b/pypy/module/select/__init__.py --- a/pypy/module/select/__init__.py +++ b/pypy/module/select/__init__.py @@ -2,6 +2,7 @@ from pypy.interpreter.mixedmodule import MixedModule import sys +import os class Module(MixedModule): @@ -9,11 +10,13 @@ } interpleveldefs = { - 'poll' : 'interp_select.poll', 'select': 'interp_select.select', 'error' : 'space.fromcache(interp_select.Cache).w_error' } + if os.name =='posix': + interpleveldefs['poll'] = 'interp_select.poll' + if sys.platform.startswith('linux'): interpleveldefs['epoll'] = 'interp_epoll.W_Epoll' from pypy.module.select.interp_epoll import cconfig, public_symbols diff --git a/pypy/module/select/test/test_select.py b/pypy/module/select/test/test_select.py --- a/pypy/module/select/test/test_select.py +++ b/pypy/module/select/test/test_select.py @@ -214,6 +214,8 @@ def test_poll(self): import select + if not hasattr(select, 'poll'): + skip("no select.poll() on this platform") readend, writeend = self.getpair() try: class A(object): diff --git a/pypy/rlib/debug.py b/pypy/rlib/debug.py --- a/pypy/rlib/debug.py +++ b/pypy/rlib/debug.py @@ -1,10 +1,12 @@ import sys, time from pypy.rpython.extregistry import ExtRegistryEntry +from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.rarithmetic import is_valid_int def ll_assert(x, msg): """After translation to C, this becomes an RPyAssert.""" + assert type(x) is bool, "bad type! 
got %r" % (type(x),) assert x, msg class Entry(ExtRegistryEntry): @@ -21,8 +23,13 @@ hop.exception_cannot_occur() hop.genop('debug_assert', vlist) +class FatalError(Exception): + pass + def fatalerror(msg): # print the RPython traceback and abort with a fatal error + if not we_are_translated(): + raise FatalError(msg) from pypy.rpython.lltypesystem import lltype from pypy.rpython.lltypesystem.lloperation import llop llop.debug_print_traceback(lltype.Void) @@ -33,6 +40,8 @@ def fatalerror_notb(msg): # a variant of fatalerror() that doesn't print the RPython traceback + if not we_are_translated(): + raise FatalError(msg) from pypy.rpython.lltypesystem import lltype from pypy.rpython.lltypesystem.lloperation import llop llop.debug_fatalerror(lltype.Void, msg) diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -916,7 +916,7 @@ ll_assert(not self.is_in_nursery(obj), "object in nursery after collection") # similarily, all objects should have this flag: - ll_assert(self.header(obj).tid & GCFLAG_TRACK_YOUNG_PTRS, + ll_assert(self.header(obj).tid & GCFLAG_TRACK_YOUNG_PTRS != 0, "missing GCFLAG_TRACK_YOUNG_PTRS") # the GCFLAG_VISITED should not be set between collections ll_assert(self.header(obj).tid & GCFLAG_VISITED == 0, diff --git a/pypy/rpython/memory/gc/semispace.py b/pypy/rpython/memory/gc/semispace.py --- a/pypy/rpython/memory/gc/semispace.py +++ b/pypy/rpython/memory/gc/semispace.py @@ -640,7 +640,7 @@ between collections.""" tid = self.header(obj).tid if tid & GCFLAG_EXTERNAL: - ll_assert(tid & GCFLAG_FORWARDED, "bug: external+!forwarded") + ll_assert(tid & GCFLAG_FORWARDED != 0, "bug: external+!forwarded") ll_assert(not (self.tospace <= obj < self.free), "external flag but object inside the semispaces") else: diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- 
a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -8,7 +8,6 @@ from pypy.rpython.memory.gcheader import GCHeaderBuilder from pypy.rlib.rarithmetic import ovfcheck from pypy.rlib import rgc -from pypy.rlib.debug import ll_assert from pypy.rlib.objectmodel import we_are_translated from pypy.translator.backendopt import graphanalyze from pypy.translator.backendopt.support import var_needsgc From noreply at buildbot.pypy.org Wed May 2 01:03:58 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:03:58 +0200 (CEST) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20120501230358.A0F3A82F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r54855:d8adf28f689b Date: 2012-04-28 22:23 +0200 http://bitbucket.org/pypy/pypy/changeset/d8adf28f689b/ Log: hg merge default diff --git a/lib-python/modified-2.7/test/test_peepholer.py b/lib-python/modified-2.7/test/test_peepholer.py --- a/lib-python/modified-2.7/test/test_peepholer.py +++ b/lib-python/modified-2.7/test/test_peepholer.py @@ -145,12 +145,15 @@ def test_binary_subscr_on_unicode(self): # valid code get optimized - asm = dis_single('u"foo"[0]') - self.assertIn("(u'f')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) - asm = dis_single('u"\u0061\uffff"[1]') - self.assertIn("(u'\\uffff')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) + # XXX for now we always disable this optimization + # XXX see CPython's issue5057 + if 0: + asm = dis_single('u"foo"[0]') + self.assertIn("(u'f')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + asm = dis_single('u"\u0061\uffff"[1]') + self.assertIn("(u'\\uffff')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) # invalid code doesn't get optimized # out of range diff --git a/lib_pypy/_ctypes/builtin.py b/lib_pypy/_ctypes/builtin.py --- a/lib_pypy/_ctypes/builtin.py +++ b/lib_pypy/_ctypes/builtin.py @@ -3,7 +3,8 @@ try: from thread import _local as local except ImportError: 
- local = object # no threads + class local(object): # no threads + pass class ConvMode: encoding = 'ascii' diff --git a/lib_pypy/_ctypes_test.py b/lib_pypy/_ctypes_test.py --- a/lib_pypy/_ctypes_test.py +++ b/lib_pypy/_ctypes_test.py @@ -21,7 +21,7 @@ # Compile .c file include_dir = os.path.join(thisdir, '..', 'include') if sys.platform == 'win32': - ccflags = [] + ccflags = ['-D_CRT_SECURE_NO_WARNINGS'] else: ccflags = ['-fPIC'] res = compiler.compile([os.path.join(thisdir, '_ctypes_test.c')], @@ -34,6 +34,13 @@ if sys.platform == 'win32': # XXX libpypy-c.lib is currently not installed automatically library = os.path.join(thisdir, '..', 'include', 'libpypy-c') + if not os.path.exists(library + '.lib'): + #For a nightly build + library = os.path.join(thisdir, '..', 'include', 'python27') + if not os.path.exists(library + '.lib'): + # For a local translation + library = os.path.join(thisdir, '..', 'pypy', 'translator', + 'goal', 'libpypy-c') libraries = [library, 'oleaut32'] extra_ldargs = ['/MANIFEST'] # needed for VC10 else: diff --git a/lib_pypy/_testcapi.py b/lib_pypy/_testcapi.py --- a/lib_pypy/_testcapi.py +++ b/lib_pypy/_testcapi.py @@ -16,7 +16,7 @@ # Compile .c file include_dir = os.path.join(thisdir, '..', 'include') if sys.platform == 'win32': - ccflags = [] + ccflags = ['-D_CRT_SECURE_NO_WARNINGS'] else: ccflags = ['-fPIC', '-Wimplicit-function-declaration'] res = compiler.compile([os.path.join(thisdir, '_testcapimodule.c')], @@ -29,6 +29,13 @@ if sys.platform == 'win32': # XXX libpypy-c.lib is currently not installed automatically library = os.path.join(thisdir, '..', 'include', 'libpypy-c') + if not os.path.exists(library + '.lib'): + #For a nightly build + library = os.path.join(thisdir, '..', 'include', 'python27') + if not os.path.exists(library + '.lib'): + # For a local translation + library = os.path.join(thisdir, '..', 'pypy', 'translator', + 'goal', 'libpypy-c') libraries = [library, 'oleaut32'] extra_ldargs = ['/MANIFEST', # needed for VC10 
'/EXPORT:init_testcapi'] diff --git a/lib_pypy/pyrepl/reader.py b/lib_pypy/pyrepl/reader.py --- a/lib_pypy/pyrepl/reader.py +++ b/lib_pypy/pyrepl/reader.py @@ -152,8 +152,8 @@ (r'\', 'delete'), (r'\', 'backspace'), (r'\M-\', 'backward-kill-word'), - (r'\', 'end'), - (r'\', 'home'), + (r'\', 'end-of-line'), # was 'end' + (r'\', 'beginning-of-line'), # was 'home' (r'\', 'help'), (r'\EOF', 'end'), # the entries in the terminfo database for xterms (r'\EOH', 'home'), # seem to be wrong. this is a less than ideal diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -229,8 +229,8 @@ return thing elif hasattr(thing, '__name__'): # mostly types and functions return thing.__name__ - elif hasattr(thing, 'name'): # mostly ClassDescs - return thing.name + elif hasattr(thing, 'name') and isinstance(thing.name, str): + return thing.name # mostly ClassDescs elif isinstance(thing, tuple): return '_'.join(map(nameof, thing)) else: diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -321,10 +321,14 @@ default=False), BoolOption("getattributeshortcut", "track types that override __getattribute__", - default=False), + default=False, + # weakrefs needed, because of get_subclasses() + requires=[("translation.rweakref", True)]), BoolOption("newshortcut", "cache and shortcut calling __new__ from builtin types", - default=False), + default=False, + # weakrefs needed, because of get_subclasses() + requires=[("translation.rweakref", True)]), BoolOption("logspaceoptypes", "a instrumentation option: before exit, print the types seen by " @@ -338,7 +342,9 @@ requires=[("objspace.std.builtinshortcut", True)]), BoolOption("withidentitydict", "track types that override __hash__, __eq__ or __cmp__ and use a special dict strategy for those which do not", - default=False), + default=False, + # weakrefs needed, because of 
get_subclasses() + requires=[("translation.rweakref", True)]), ]), ]) diff --git a/pypy/doc/cppyy.rst b/pypy/doc/cppyy.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/cppyy.rst @@ -0,0 +1,554 @@ +============================ +cppyy: C++ bindings for PyPy +============================ + +The cppyy module provides C++ bindings for PyPy by using the reflection +information extracted from C++ header files by means of the +`Reflex package`_. +For this to work, you have to both install Reflex and build PyPy from the +reflex-support branch. +As indicated by this being a branch, support for Reflex is still +experimental. +However, it is functional enough to put it in the hands of those who want +to give it a try. +In the medium term, cppyy will move away from Reflex and instead use +`cling`_ as its backend, which is based on `llvm`_. +Although that will change the logistics on the generation of reflection +information, it will not change the python-side interface. + +.. _`Reflex package`: http://root.cern.ch/drupal/content/reflex +.. _`cling`: http://root.cern.ch/drupal/content/cling +.. _`llvm`: http://llvm.org/ + + +Installation +============ + +For now, the easiest way of getting the latest version of Reflex, is by +installing the ROOT package. +Besides getting the latest version of Reflex, another advantage is that with +the full ROOT package, you can also use your Reflex-bound code on `CPython`_. +`Download`_ a binary or install from `source`_. +Some Linux and Mac systems may have ROOT provided in the list of scientific +software of their packager. +A current, standalone version of Reflex should be provided at some point, +once the dependencies and general packaging have been thought out. +Also, make sure you have a version of `gccxml`_ installed, which is most +easily provided by the packager of your system. +If you read up on gccxml, you'll probably notice that it is no longer being +developed and hence will not provide C++11 support. 
+That's why the medium term plan is to move to `cling`_. + +.. _`Download`: http://root.cern.ch/drupal/content/downloading-root +.. _`source`: http://root.cern.ch/drupal/content/installing-root-source +.. _`gccxml`: http://www.gccxml.org + +Next, get the `PyPy sources`_, select the reflex-support branch, and build +pypy-c. +For the build to succeed, the ``$ROOTSYS`` environment variable must point to +the location of your ROOT installation:: + + $ hg clone https://bitbucket.org/pypy/pypy + $ cd pypy + $ hg up reflex-support + $ cd pypy/translator/goal + $ python translate.py -O jit --gcrootfinder=shadowstack targetpypystandalone.py --withmod-cppyy + +This will build a ``pypy-c`` that includes the cppyy module, and through that, +Reflex support. +Of course, if you already have a pre-built version of the ``pypy`` interpreter, +you can use that for the translation rather than ``python``. + +.. _`PyPy sources`: https://bitbucket.org/pypy/pypy/overview + + +Basic example +============= + +Now test with a trivial example whether all packages are properly installed +and functional. +First, create a C++ header file with some class in it (note that all functions +are made inline for convenience; a real-world example would of course have a +corresponding source file):: + + $ cat MyClass.h + class MyClass { + public: + MyClass(int i = -99) : m_myint(i) {} + + int GetMyInt() { return m_myint; } + void SetMyInt(int i) { m_myint = i; } + + public: + int m_myint; + }; + +Then, generate the bindings using ``genreflex`` (part of ROOT), and compile the +code:: + + $ genreflex MyClass.h + $ g++ -fPIC -rdynamic -O2 -shared -I$ROOTSYS/include MyClass_rflx.cpp -o libMyClassDict.so + +Now you're ready to use the bindings. 
+Since the bindings are designed to look pythonistic, it should be +straightforward:: + + $ pypy-c + >>>> import cppyy + >>>> cppyy.load_reflection_info("libMyClassDict.so") + + >>>> myinst = cppyy.gbl.MyClass(42) + >>>> print myinst.GetMyInt() + 42 + >>>> myinst.SetMyInt(33) + >>>> print myinst.m_myint + 33 + >>>> myinst.m_myint = 77 + >>>> print myinst.GetMyInt() + 77 + >>>> help(cppyy.gbl.MyClass) # shows that normal python introspection works + +That's all there is to it! + + +Advanced example +================ +The following snippet of C++ is very contrived, to allow showing that such +pathological code can be handled and to show how certain features play out in +practice:: + + $ cat MyAdvanced.h + #include <string> + + class Base1 { + public: + Base1(int i) : m_i(i) {} + virtual ~Base1() {} + int m_i; + }; + + class Base2 { + public: + Base2(double d) : m_d(d) {} + virtual ~Base2() {} + double m_d; + }; + + class C; + + class Derived : public virtual Base1, public virtual Base2 { + public: + Derived(const std::string& name, int i, double d) : Base1(i), Base2(d), m_name(name) {} + virtual C* gimeC() { return (C*)0; } + std::string m_name; + }; + + Base1* BaseFactory(const std::string& name, int i, double d) { + return new Derived(name, i, d); + } + +This code is still only in a header file, with all functions inline, for +convenience of the example. +If the implementations live in a separate source file or shared library, the +only change needed is to link those in when building the reflection library. + +If you were to run ``genreflex`` like above in the basic example, you will +find that not all classes of interest will be reflected, nor will be the +global factory function. +In particular, ``std::string`` will be missing, since it is not defined in +this header file, but in a header file that is included.
+In practical terms, general classes such as ``std::string`` should live in a +core reflection set, but for the moment assume we want to have it in the +reflection library that we are building for this example. + +The ``genreflex`` script can be steered using a so-called `selection file`_, +which is a simple XML file specifying, either explicitly or by using a +pattern, which classes, variables, namespaces, etc. to select from the given +header file. +With the aid of a selection file, a large project can be easily managed: +simply ``#include`` all relevant headers into a single header file that is +handed to ``genreflex``. +Then, apply a selection file to pick up all the relevant classes. +For our purposes, the following rather straightforward selection will do +(the name ``lcgdict`` for the root is historical, but required):: + + $ cat MyAdvanced.xml + <lcgdict> + <class pattern="Base?" /> + <class name="Derived" /> + <class name="std::string" /> + <function name="BaseFactory" /> + </lcgdict> + +.. _`selection file`: http://root.cern.ch/drupal/content/generating-reflex-dictionaries + +Now the reflection info can be generated and compiled:: + + $ genreflex MyAdvanced.h --selection=MyAdvanced.xml + $ g++ -fPIC -rdynamic -O2 -shared -I$ROOTSYS/include MyAdvanced_rflx.cpp -o libAdvExDict.so + +and subsequently be used from PyPy:: + + >>>> import cppyy + >>>> cppyy.load_reflection_info("libAdvExDict.so") + + >>>> d = cppyy.gbl.BaseFactory("name", 42, 3.14) + >>>> type(d) + <class '__main__.Derived'> + >>>> d.m_i + 42 + >>>> d.m_d + 3.14 + >>>> d.m_name == "name" + True + >>>> + +Again, that's all there is to it! + +A couple of things to note, though. +If you look back at the C++ definition of the ``BaseFactory`` function, +you will see that it declares the return type to be a ``Base1``, yet the +bindings return an object of the actual type ``Derived``. +This choice is made for a couple of reasons. +First, it makes method dispatching easier: if bound objects are always their +most derived type, then it is easy to calculate any offsets, if necessary.
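This first point can be mimicked in plain python (a sketch only: the classes and the `base_factory` function below are ordinary python stand-ins for the bound C++ classes, not actual cppyy bindings). Because the factory hands back the most derived type, both bases and all data members are reachable without any cast:

```python
class Base1(object):
    def __init__(self, i):
        self.m_i = i

class Base2(object):
    def __init__(self, d):
        self.m_d = d

class Derived(Base1, Base2):
    def __init__(self, name, i, d):
        Base1.__init__(self, i)
        Base2.__init__(self, d)
        self.m_name = name

def base_factory(name, i, d):
    # C++ declares the return type as Base1*, but the bindings hand back
    # the most derived type known, so no downcast is ever needed
    return Derived(name, i, d)

d = base_factory("name", 42, 3.14)
assert isinstance(d, Base1) and isinstance(d, Base2)
assert (d.m_i, d.m_d, d.m_name) == (42, 3.14, "name")
```

The same shape is what the real bindings produce: an object that is an instance of both base classes at once, with `m_i`, `m_d`, and `m_name` directly accessible.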
+Second, it makes memory management easier: the combination of the type and +the memory address uniquely identifies an object. +That way, it can be recycled and object identity can be maintained if it is +entered as a function argument into C++ and comes back to PyPy as a return +value. +Last, but not least, casting is decidedly unpythonistic. +By always providing the most derived type known, casting becomes unnecessary. +For example, the data member of ``Base2`` is simply directly available. +Note also that the unreflected ``gimeC`` method of ``Derived`` does not +preclude use of ``Derived`` itself. +It is only the ``gimeC`` method that is unusable as long as class ``C`` is +unknown to the system. + + +Features +======== + +The following is not meant to be an exhaustive list, since cppyy is still +under active development. +Furthermore, the intention is that every feature is as natural as possible on +the python side, so if you find something missing in the list below, simply +try it out. +It is not always possible to provide an exact mapping between python and C++ +(active memory management is one such case), but by and large, if the use of a +feature does not strike you as obvious, it is more likely to simply be a bug. +That is a strong statement to make, but also a worthy goal. + +* **abstract classes**: Are represented as python classes, since they are + needed to complete the inheritance hierarchies, but will raise an exception + if an attempt is made to instantiate from them. + +* **arrays**: Supported for builtin data types only, as used from module + ``array``. + Out-of-bounds checking is limited to those cases where the size is known at + compile time (and hence part of the reflection info). + +* **builtin data types**: Map onto the expected equivalent python types, with + the caveat that there may be size differences, and thus it is possible that + exceptions are raised if an overflow is detected. + +* **casting**: Is supposed to be unnecessary.
+ Object pointer returns from functions provide the most derived class known + in the hierarchy of the object being returned. + This is important to preserve object identity as well as to make casting, + a pure C++ feature after all, superfluous. + +* **classes and structs**: Get mapped onto python classes, where they can be + instantiated as expected. + If classes are inner classes or live in a namespace, their naming and + location will reflect that. + +* **data members**: Public data members are represented as python properties + and provide read and write access on instances as expected. + +* **default arguments**: C++ default arguments work as expected, but python + keywords are not supported. + It is technically possible to support keywords, but for the C++ interface, + the formal argument names have no meaning and are not considered part of the + API, hence it is not a good idea to use keywords. + +* **doc strings**: The doc string of a method or function contains the C++ + arguments and return types of all overloads of that name, as applicable. + +* **enums**: Are translated as ints with no further checking. + +* **functions**: Work as expected and live in their appropriate namespace + (which can be the global one, ``cppyy.gbl``). + +* **inheritance**: All combinations of inheritance on the C++ side (single, + multiple, virtual) are supported in the binding. + However, new python classes can only use single inheritance from a bound C++ + class. + Multiple inheritance would introduce two "this" pointers in the binding. + This is a current, not a fundamental, limitation. + The C++ side will not see any overridden methods on the python side, as + cross-inheritance is planned but not yet supported. + +* **methods**: Are represented as python methods and work as expected. + They are first-class objects and can be bound to an instance. + Virtual C++ methods work as expected.
+ To select a specific virtual method, do as with normal python classes + that override methods: select it from the class that you need, rather than + calling the method on the instance. + +* **namespaces**: Are represented as python classes. + Namespaces are more open-ended than classes, so sometimes initial access may + result in updates as data and functions are looked up and constructed + lazily. + Thus the result of ``dir()`` on a namespace should not be relied upon: it + only shows the already accessed members. (TODO: to be fixed by implementing + __dir__.) + The global namespace is ``cppyy.gbl``. + +* **operator conversions**: If defined in the C++ class and a python + equivalent exists (i.e. all builtin integer and floating point types, as well + as ``bool``), it will map onto that python conversion. + Note that ``char*`` is mapped onto ``__str__``. + +* **operator overloads**: If defined in the C++ class and if a python + equivalent is available (not always the case, think e.g. of ``operator||``), + then they work as expected. + Special care needs to be taken for global operator overloads in C++: first, + make sure that they are actually reflected, especially for the global + overloads for ``operator==`` and ``operator!=`` of STL iterators in the case + of gcc. + Second, make sure that reflection info is loaded in the proper order, + i.e. that these global overloads are available before use. + +* **pointers**: For builtin data types, see arrays. + For objects, a pointer to an object and an object look the same, unless + the pointer is a data member. + In that case, assigning to the data member will cause a copy of the pointer + and care should be taken about the object's lifetime. + If a pointer is a global variable, the C++ side can replace the underlying + object and the python side will immediately reflect that. + +* **static data members**: Are represented as python property objects on the + class and the meta-class.
+ Both read and write access is as expected. + +* **static methods**: Are represented as python's ``staticmethod`` objects + and can be called both from the class as well as from instances. + +* **strings**: The std::string class is considered a builtin C++ type and + mixes quite well with python's str. + Python's str can be passed where a ``const char*`` is expected, and an str + will be returned if the return type is ``const char*``. + +* **templated classes**: Are represented in a meta-class style in python. + This looks a little bit confusing, but conceptually is rather natural. + For example, given the class ``std::vector<int>``, the meta-class part would + be ``std.vector`` in python. + Then, to get the instantiation on ``int``, do ``std.vector(int)`` and to + create an instance of that class, do ``std.vector(int)()``. + Note that templates can be built up by handing actual types to the class + instantiation (as done in this vector example), or by passing in the list of + template arguments as a string. + The former is a lot easier to work with if you have template instantiations + using classes that themselves are templates (etc.) in the arguments. + All classes must already exist in the loaded reflection info. + +* **typedefs**: Are simple python references to the actual classes to which + they refer. + +* **unary operators**: Are supported if a python equivalent exists, and if the + operator is defined in the C++ class. + +You can always find more detailed examples and see the full set of supported +features by looking at the tests in pypy/module/cppyy/test. + +If a feature or reflection info is missing, this is supposed to be handled +gracefully. +In fact, there are unit tests explicitly for this purpose (even as their use +becomes less interesting over time, as the number of missing features +decreases). +Only when a missing feature is used should there be an exception.
+For example, if no reflection info is available for a return type, then a +class that has a method with that return type can still be used. +Only that one specific method cannot be used. + + +Templates +========= + +A bit of special care needs to be taken for the use of templates. +For a templated class to be completely available, it must be guaranteed that +said class is fully instantiated, and hence all executable C++ code is +generated and compiled in. +The easiest way to fulfill that guarantee is by explicit instantiation in the +header file that is handed to ``genreflex``. +The following example should make that clear:: + + $ cat MyTemplate.h + #include <vector> + + class MyClass { + public: + MyClass(int i = -99) : m_i(i) {} + MyClass(const MyClass& s) : m_i(s.m_i) {} + MyClass& operator=(const MyClass& s) { m_i = s.m_i; return *this; } + ~MyClass() {} + int m_i; + }; + + template class std::vector<MyClass>; + +If you know for certain that all symbols will be linked in from other sources, +you can also declare the explicit template instantiation ``extern``. + +Unfortunately, this is not enough for gcc. +The iterators, if they are going to be used, need to be instantiated as well, +as do the comparison operators on those iterators, as these live in an +internal namespace, rather than in the iterator classes. +One way to handle this is to deal with it once in a macro, then reuse that +macro for all ``vector`` classes.
+Thus, the header above needs this, instead of just the explicit instantiation +of the ``vector``:: + + #define STLTYPES_EXPLICIT_INSTANTIATION_DECL(STLTYPE, TTYPE) \ + template class std::STLTYPE< TTYPE >; \ + template class __gnu_cxx::__normal_iterator<TTYPE*, std::STLTYPE< TTYPE > >; \ + template class __gnu_cxx::__normal_iterator<const TTYPE*, std::STLTYPE< TTYPE > >;\ + namespace __gnu_cxx { \ + template bool operator==(const std::STLTYPE< TTYPE >::iterator&, \ + const std::STLTYPE< TTYPE >::iterator&); \ + template bool operator!=(const std::STLTYPE< TTYPE >::iterator&, \ + const std::STLTYPE< TTYPE >::iterator&); \ + } + + STLTYPES_EXPLICIT_INSTANTIATION_DECL(vector, MyClass) + +Then, still for gcc, the selection file needs to contain the full hierarchy as +well as the global overloads for comparisons for the iterators:: + + $ cat MyTemplate.xml + <lcgdict> + <class pattern="std::vector<*>" /> + <class pattern="__gnu_cxx::__normal_iterator<*>" /> + <class pattern="__gnu_cxx::new_allocator<*>" /> + <class name="std::_Vector_base<MyClass,std::allocator<MyClass> >" /> + <class name="std::_Vector_base<MyClass,std::allocator<MyClass> >::_Vector_impl" /> + <class name="std::allocator<MyClass>" /> + <function name="__gnu_cxx::operator=="/> + <function name="__gnu_cxx::operator!="/> + + <class name="MyClass" /> + </lcgdict> + +Run the normal ``genreflex`` and compilation steps:: + + $ genreflex MyTemplate.h --selection=MyTemplate.xml + $ g++ -fPIC -rdynamic -O2 -shared -I$ROOTSYS/include MyTemplate_rflx.cpp -o libTemplateDict.so + +Note: this is a dirty corner that clearly could do with some automation, +even if the macro already helps. +Such automation is planned. +In fact, in the cling world, the backend can perform the template +instantiations and generate the reflection info on the fly, and none of the +above will be necessary any longer. + +Subsequent use should be as expected. +Note the meta-class style of "instantiating" the template:: + + >>>> import cppyy + >>>> cppyy.load_reflection_info("libTemplateDict.so") + >>>> std = cppyy.gbl.std + >>>> MyClass = cppyy.gbl.MyClass + >>>> v = std.vector(MyClass)() + >>>> v += [MyClass(1), MyClass(2), MyClass(3)] + >>>> for m in v: + .... print m.m_i, + .... + 1 2 3 + >>>> + +Other templates work similarly. +The arguments to the template instantiation can either be a string with the +full list of arguments, or the explicit classes.
+The latter makes for easier code writing if the classes passed to the +instantiation are themselves templates. + + +The fast lane +============= + +The following is an experimental feature of cppyy, and that makes it doubly +experimental, so caveat emptor. +With a slight modification of Reflex, it can provide function pointers for +C++ methods, and hence allow PyPy to call those pointers directly, rather than +calling C++ through a Reflex stub. +This results in a rather significant speed-up. +Mind you, the normal stub path is not exactly slow, so for now only use this +out of curiosity or if you really need it. + +To install this patch of Reflex, locate the file genreflex-methptrgetter.patch +in pypy/module/cppyy and apply it to the genreflex python scripts found in +``$ROOTSYS/lib``:: + + $ cd $ROOTSYS/lib + $ patch -p2 < genreflex-methptrgetter.patch + +With this patch, ``genreflex`` will have grown the ``--with-methptrgetter`` +option. +Use this option when running ``genreflex``, and add the +``-Wno-pmf-conversions`` option to ``g++`` when compiling. +The rest works the same way: the fast path will be used transparently (which +also means that you can't actually find out whether it is in use, other than +by running a micro-benchmark). + + +CPython +======= + +Most of the ideas in cppyy come originally from the `PyROOT`_ project. +Although PyROOT does not support Reflex directly, it has an alter ego called +"PyCintex" that, in a somewhat roundabout way, does. +If you installed ROOT, rather than just Reflex, PyCintex should be available +immediately if you add ``$ROOTSYS/lib`` to the ``PYTHONPATH`` environment +variable. + +.. _`PyROOT`: http://root.cern.ch/drupal/content/pyroot + +There are a couple of minor differences between PyCintex and cppyy, most to do +with naming. +The one that you will run into directly, is that PyCintex uses a function +called ``loadDictionary`` rather than ``load_reflection_info``. 
+The reason for this is that Reflex calls the shared libraries that contain +reflection info "dictionaries." +However, in python, the name `dictionary` already has a well-defined meaning, +so a more descriptive name was chosen for cppyy. +In addition, PyCintex requires that the names of shared libraries so loaded +start with "lib". +The basic example above, rewritten for PyCintex thus goes like this:: + + $ python + >>> import PyCintex + >>> PyCintex.loadDictionary("libMyClassDict.so") + >>> myinst = PyCintex.gbl.MyClass(42) + >>> print myinst.GetMyInt() + 42 + >>> myinst.SetMyInt(33) + >>> print myinst.m_myint + 33 + >>> myinst.m_myint = 77 + >>> print myinst.GetMyInt() + 77 + >>> help(PyCintex.gbl.MyClass) # shows that normal python introspection works + +Other naming differences are such things as taking an address of an object. +In PyCintex, this is done with ``AddressOf`` whereas in cppyy the choice was +made to follow the naming as in ``ctypes`` and hence use ``addressof`` +(PyROOT/PyCintex predate ``ctypes`` by several years, and the ROOT project +follows camel-case, hence the differences). + +Of course, this is python, so if any of the naming is not to your liking, all +you have to do is provide a wrapper script that you import instead of +importing the ``cppyy`` or ``PyCintex`` modules directly. +In that wrapper script you can rename methods exactly the way you need it. + +In the cling world, all these differences will be resolved. diff --git a/pypy/doc/extending.rst b/pypy/doc/extending.rst --- a/pypy/doc/extending.rst +++ b/pypy/doc/extending.rst @@ -23,6 +23,8 @@ * Write them in RPython as mixedmodule_, using *rffi* as bindings. +* Write them in C++ and bind them through Reflex_ (EXPERIMENTAL) + .. _ctypes: #CTypes .. _\_ffi: #LibFFI .. _mixedmodule: #Mixed Modules @@ -110,3 +112,34 @@ XXX we should provide detailed docs about lltype and rffi, especially if we want people to follow that way.
+ +Reflex +====== + +This method is only experimental for now, and is being exercised on a branch, +`reflex-support`_, so you will have to build PyPy yourself. +The method works by using the `Reflex package`_ to provide reflection +information of the C++ code, which is then used to automatically generate +bindings at runtime, which can then be used from python. +Full details are `available here`_. + +.. _`reflex-support`: cppyy.html +.. _`Reflex package`: http://root.cern.ch/drupal/content/reflex +.. _`available here`: cppyy.html + +Pros +---- + +If it works, it is mostly automatic, and hence easy in use. +The bindings can make use of direct pointers, in which case the calls are +very fast. + +Cons +---- + +C++ is a large language, and these bindings are not yet feature-complete. +Although missing features should do no harm if you don't use them, if you do +need a particular feature, it may be necessary to work around it in python +or with a C++ helper function. +Although Reflex works on various platforms, the bindings with PyPy have only +been tested on Linux. diff --git a/pypy/doc/windows.rst b/pypy/doc/windows.rst --- a/pypy/doc/windows.rst +++ b/pypy/doc/windows.rst @@ -24,7 +24,8 @@ translation. Failing that, they will pick the most recent Visual Studio compiler they can find. In addition, the target architecture (32 bits, 64 bits) is automatically selected. A 32 bit build can only be built -using a 32 bit Python and vice versa. +using a 32 bit Python and vice versa. By default pypy is built using the +Multi-threaded DLL (/MD) runtime environment. **Note:** PyPy is currently not supported for 64 bit Windows, and translation will fail in this case. @@ -102,10 +103,12 @@ Download the source code of expat on sourceforge: http://sourceforge.net/projects/expat/ and extract it in the base -directory. Then open the project file ``expat.dsw`` with Visual +directory. Version 2.1.0 is known to pass tests. 
Then open the project +file ``expat.dsw`` with Visual Studio; follow the instruction for converting the project files, -switch to the "Release" configuration, and build the solution (the -``expat`` project is actually enough for pypy). +switch to the "Release" configuration, reconfigure the runtime for +Multi-threaded DLL (/MD) and build the solution (the ``expat`` project +is actually enough for pypy). Then, copy the file ``win32\bin\release\libexpat.dll`` somewhere in your PATH. diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -98,6 +98,10 @@ Collects the arguments of a function call. Instances should be considered immutable. + + Some parts of this class are written in a slightly convoluted style to help + the JIT. It is really crucial to get this right, because Python's argument + semantics are complex, but calls occur everywhere. """ ### Construction ### @@ -184,7 +188,13 @@ space = self.space keywords, values_w = space.view_as_kwargs(w_starstararg) if keywords is not None: # this path also taken for empty dicts - self._add_keywordargs_no_unwrapping(keywords, values_w) + if self.keywords is None: + self.keywords = keywords[:] # copy to make non-resizable + self.keywords_w = values_w[:] + else: + self._check_not_duplicate_kwargs(keywords, values_w) + self.keywords = self.keywords + keywords + self.keywords_w = self.keywords_w + values_w return not jit.isconstant(len(self.keywords)) if space.isinstance_w(w_starstararg, space.w_dict): keys_w = space.unpackiterable(w_starstararg) @@ -242,22 +252,16 @@ @jit.look_inside_iff(lambda self, keywords, keywords_w: jit.isconstant(len(keywords) and jit.isconstant(self.keywords))) - def _add_keywordargs_no_unwrapping(self, keywords, keywords_w): - if self.keywords is None: - self.keywords = keywords[:] # copy to make non-resizable - self.keywords_w = keywords_w[:] - else: - # looks quadratic, but the JIT should remove all of it 
nicely. - # Also, all the lists should be small - for key in keywords: - for otherkey in self.keywords: - if otherkey == key: - raise operationerrfmt(self.space.w_TypeError, - "got multiple values " - "for keyword argument " - "'%s'", key) - self.keywords = self.keywords + keywords - self.keywords_w = self.keywords_w + keywords_w + def _check_not_duplicate_kwargs(self, keywords, keywords_w): + # looks quadratic, but the JIT should remove all of it nicely. + # Also, all the lists should be small + for key in keywords: + for otherkey in self.keywords: + if otherkey == key: + raise operationerrfmt(self.space.w_TypeError, + "got multiple values " + "for keyword argument " + "'%s'", key) def fixedunpack(self, argcount): """The simplest argument parsing: get the 'argcount' arguments, diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ -311,14 +311,19 @@ # produce compatible pycs. if (self.space.isinstance_w(w_obj, self.space.w_unicode) and self.space.isinstance_w(w_const, self.space.w_unicode)): - unistr = self.space.unicode_w(w_const) - if len(unistr) == 1: - ch = ord(unistr[0]) - else: - ch = 0 - if (ch > 0xFFFF or - (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): - return subs + #unistr = self.space.unicode_w(w_const) + #if len(unistr) == 1: + # ch = ord(unistr[0]) + #else: + # ch = 0 + #if (ch > 0xFFFF or + # (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): + # --XXX-- for now we always disable optimization of + # u'...'[constant] because the tests above are not + # enough to fix issue5057 (CPython has the same + # problem as of April 24, 2012). 
+ # See test_const_fold_unicode_subscr + return subs return ast.Const(w_const, subs.lineno, subs.col_offset) diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -910,7 +910,8 @@ return "abc"[0] """ counts = self.count_instructions(source) - assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + if 0: # xxx later? + assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} # getitem outside of the BMP should not be optimized source = """def f(): @@ -920,12 +921,20 @@ assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, ops.RETURN_VALUE: 1} + source = """def f(): + return u"\U00012345abcdef"[3] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, + ops.RETURN_VALUE: 1} + monkeypatch.setattr(optimize, "MAXUNICODE", 0xFFFF) source = """def f(): return "\uE01F"[0] """ counts = self.count_instructions(source) - assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + if 0: # xxx later? + assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} monkeypatch.undo() # getslice is not yet optimized. diff --git a/pypy/interpreter/function.py b/pypy/interpreter/function.py --- a/pypy/interpreter/function.py +++ b/pypy/interpreter/function.py @@ -51,7 +51,9 @@ def __repr__(self): # return "function %s.%s" % (self.space, self.name) # maybe we want this shorter: - name = getattr(self, 'name', '?') + name = getattr(self, 'name', None) + if not isinstance(name, str): + name = '?' 
return "<%s %s>" % (self.__class__.__name__, name) def call_args(self, args): diff --git a/pypy/jit/backend/llsupport/asmmemmgr.py b/pypy/jit/backend/llsupport/asmmemmgr.py --- a/pypy/jit/backend/llsupport/asmmemmgr.py +++ b/pypy/jit/backend/llsupport/asmmemmgr.py @@ -277,6 +277,8 @@ from pypy.jit.backend.hlinfo import highleveljitinfo if highleveljitinfo.sys_executable: debug_print('SYS_EXECUTABLE', highleveljitinfo.sys_executable) + else: + debug_print('SYS_EXECUTABLE', '??') # HEX = '0123456789ABCDEF' dump = [] diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -1656,15 +1656,21 @@ else: # XXX hard-coded assumption: to go from an object to its class # we use the following algorithm: - # - read the typeid from mem(locs[0]), i.e. at offset 0 - # - keep the lower 16 bits read there - # - multiply by 4 and use it as an offset in type_info_group - # - add 16 bytes, to go past the TYPE_INFO structure + # - read the typeid from mem(locs[0]), i.e. at offset 0; + # this is a complete word (N=4 bytes on 32-bit, N=8 on + # 64-bits) + # - keep the lower half of what is read there (i.e. + # truncate to an unsigned 'N / 2' bytes value) + # - multiply by 4 (on 32-bits only) and use it as an + # offset in type_info_group + # - add 16/32 bytes, to go past the TYPE_INFO structure loc = locs[1] assert isinstance(loc, ImmedLoc) classptr = loc.value # here, we have to go back from 'classptr' to the value expected - # from reading the 16 bits in the object header + # from reading the half-word in the object header. Note that + # this half-word is at offset 0 on a little-endian machine; + # it would be at offset 2 or 4 on a big-endian machine. 
from pypy.rpython.memory.gctypelayout import GCData sizeof_ti = rffi.sizeof(GCData.TYPE_INFO) type_info_group = llop.gc_get_type_info_group(llmemory.Address) diff --git a/pypy/jit/metainterp/heapcache.py b/pypy/jit/metainterp/heapcache.py --- a/pypy/jit/metainterp/heapcache.py +++ b/pypy/jit/metainterp/heapcache.py @@ -20,6 +20,7 @@ self.dependencies = {} # contains frame boxes that are not virtualizables self.nonstandard_virtualizables = {} + # heap cache # maps descrs to {from_box, to_box} dicts self.heap_cache = {} @@ -29,6 +30,26 @@ # cache the length of arrays self.length_cache = {} + # replace_box is called surprisingly often, therefore it's not efficient + # to go over all the dicts and fix them. + # instead, these two dicts are kept, and a replace_box adds an entry to + # each of them. + # every time one of the dicts heap_cache, heap_array_cache, length_cache + # is accessed, suitable indirections need to be performed + + # this looks all very subtle, but in practice the patterns of + # replacements should not be that complex. Usually a box is replaced by + # a const, once. Also, if something goes wrong, the effect is that less + # caching than possible is done, which is not a huge problem. 
+ self.input_indirections = {} + self.output_indirections = {} + + def _input_indirection(self, box): + return self.input_indirections.get(box, box) + + def _output_indirection(self, box): + return self.output_indirections.get(box, box) + def invalidate_caches(self, opnum, descr, argboxes): self.mark_escaped(opnum, argboxes) self.clear_caches(opnum, descr, argboxes) @@ -132,14 +153,16 @@ self.arraylen_now_known(box, lengthbox) def getfield(self, box, descr): + box = self._input_indirection(box) d = self.heap_cache.get(descr, None) if d: tobox = d.get(box, None) - if tobox: - return tobox + return self._output_indirection(tobox) return None def getfield_now_known(self, box, descr, fieldbox): + box = self._input_indirection(box) + fieldbox = self._input_indirection(fieldbox) self.heap_cache.setdefault(descr, {})[box] = fieldbox def setfield(self, box, descr, fieldbox): @@ -148,6 +171,8 @@ self.heap_cache[descr] = new_d def _do_write_with_aliasing(self, d, box, fieldbox): + box = self._input_indirection(box) + fieldbox = self._input_indirection(fieldbox) # slightly subtle logic here # a write to an arbitrary box, all other boxes can alias this one if not d or box not in self.new_boxes: @@ -166,6 +191,7 @@ return new_d def getarrayitem(self, box, descr, indexbox): + box = self._input_indirection(box) if not isinstance(indexbox, ConstInt): return index = indexbox.getint() @@ -173,9 +199,11 @@ if cache: indexcache = cache.get(index, None) if indexcache is not None: - return indexcache.get(box, None) + return self._output_indirection(indexcache.get(box, None)) def getarrayitem_now_known(self, box, descr, indexbox, valuebox): + box = self._input_indirection(box) + valuebox = self._input_indirection(valuebox) if not isinstance(indexbox, ConstInt): return index = indexbox.getint() @@ -198,25 +226,13 @@ cache[index] = self._do_write_with_aliasing(indexcache, box, valuebox) def arraylen(self, box): - return self.length_cache.get(box, None) + box = self._input_indirection(box) 
+ return self._output_indirection(self.length_cache.get(box, None)) def arraylen_now_known(self, box, lengthbox): - self.length_cache[box] = lengthbox - - def _replace_box(self, d, oldbox, newbox): - new_d = {} - for frombox, tobox in d.iteritems(): - if frombox is oldbox: - frombox = newbox - if tobox is oldbox: - tobox = newbox - new_d[frombox] = tobox - return new_d + box = self._input_indirection(box) + self.length_cache[box] = self._input_indirection(lengthbox) def replace_box(self, oldbox, newbox): - for descr, d in self.heap_cache.iteritems(): - self.heap_cache[descr] = self._replace_box(d, oldbox, newbox) - for descr, d in self.heap_array_cache.iteritems(): - for index, cache in d.iteritems(): - d[index] = self._replace_box(cache, oldbox, newbox) - self.length_cache = self._replace_box(self.length_cache, oldbox, newbox) + self.input_indirections[self._output_indirection(newbox)] = self._input_indirection(oldbox) + self.output_indirections[self._input_indirection(oldbox)] = self._output_indirection(newbox) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -7,7 +7,7 @@ import pypy.jit.metainterp.optimizeopt.optimizer as optimizeopt import pypy.jit.metainterp.optimizeopt.virtualize as virtualize from pypy.jit.metainterp.optimize import InvalidLoop -from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt +from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt, get_const_ptr_for_string from pypy.jit.metainterp import executor, compile, resume, history from pypy.jit.metainterp.resoperation import rop, opname, ResOperation from pypy.rlib.rarithmetic import LONG_BIT @@ -5067,6 +5067,25 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_call_pure_vstring_const(self): + ops = """ + [] + p0 = newstr(3) + 
        strsetitem(p0, 0, 97)
+        strsetitem(p0, 1, 98)
+        strsetitem(p0, 2, 99)
+        i0 = call_pure(123, p0, descr=nonwritedescr)
+        finish(i0)
+        """
+        expected = """
+        []
+        finish(5)
+        """
+        call_pure_results = {
+            (ConstInt(123), get_const_ptr_for_string("abc"),): ConstInt(5),
+        }
+        self.optimize_loop(ops, expected, call_pure_results)
+

 class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin):
     pass

diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py
--- a/pypy/jit/metainterp/pyjitpl.py
+++ b/pypy/jit/metainterp/pyjitpl.py
@@ -1223,7 +1223,7 @@
     def run_one_step(self):
         # Execute the frame forward.  This method contains a loop that leaves
         # whenever the 'opcode_implementations' (which is one of the 'opimpl_'
-        # methods) returns True.  This is the case when the current frame
+        # methods) raises ChangeFrame.  This is the case when the current frame
         # changes, due to a call or a return.
         try:
             staticdata = self.metainterp.staticdata

diff --git a/pypy/jit/metainterp/test/test_heapcache.py b/pypy/jit/metainterp/test/test_heapcache.py
--- a/pypy/jit/metainterp/test/test_heapcache.py
+++ b/pypy/jit/metainterp/test/test_heapcache.py
@@ -2,12 +2,14 @@
 from pypy.jit.metainterp.resoperation import rop
 from pypy.jit.metainterp.history import ConstInt

-box1 = object()
-box2 = object()
-box3 = object()
-box4 = object()
+box1 = "box1"
+box2 = "box2"
+box3 = "box3"
+box4 = "box4"
+box5 = "box5"
 lengthbox1 = object()
 lengthbox2 = object()
+lengthbox3 = object()
 descr1 = object()
 descr2 = object()
 descr3 = object()
@@ -276,11 +278,43 @@
         h.setfield(box1, descr2, box3)
         h.setfield(box2, descr3, box3)
         h.replace_box(box1, box4)
-        assert h.getfield(box1, descr1) is None
-        assert h.getfield(box1, descr2) is None
         assert h.getfield(box4, descr1) is box2
         assert h.getfield(box4, descr2) is box3
         assert h.getfield(box2, descr3) is box3
+        h.setfield(box4, descr1, box3)
+        assert h.getfield(box4, descr1) is box3
+
+        h = HeapCache()
+        h.setfield(box1, descr1, box2)
+        h.setfield(box1, descr2, box3)
+        h.setfield(box2, descr3, box3)
+        h.replace_box(box3, box4)
+        assert h.getfield(box1, descr1) is box2
+        assert h.getfield(box1, descr2) is box4
+        assert h.getfield(box2, descr3) is box4
+
+    def test_replace_box_twice(self):
+        h = HeapCache()
+        h.setfield(box1, descr1, box2)
+        h.setfield(box1, descr2, box3)
+        h.setfield(box2, descr3, box3)
+        h.replace_box(box1, box4)
+        h.replace_box(box4, box5)
+        assert h.getfield(box5, descr1) is box2
+        assert h.getfield(box5, descr2) is box3
+        assert h.getfield(box2, descr3) is box3
+        h.setfield(box5, descr1, box3)
+        assert h.getfield(box4, descr1) is box3
+
+        h = HeapCache()
+        h.setfield(box1, descr1, box2)
+        h.setfield(box1, descr2, box3)
+        h.setfield(box2, descr3, box3)
+        h.replace_box(box3, box4)
+        h.replace_box(box4, box5)
+        assert h.getfield(box1, descr1) is box2
+        assert h.getfield(box1, descr2) is box5
+        assert h.getfield(box2, descr3) is box5

     def test_replace_box_array(self):
         h = HeapCache()
@@ -291,9 +325,6 @@
         h.setarrayitem(box3, descr2, index2, box1)
         h.setarrayitem(box2, descr3, index2, box3)
         h.replace_box(box1, box4)
-        assert h.getarrayitem(box1, descr1, index1) is None
-        assert h.getarrayitem(box1, descr2, index1) is None
-        assert h.arraylen(box1) is None
         assert h.arraylen(box4) is lengthbox1
         assert h.getarrayitem(box4, descr1, index1) is box2
         assert h.getarrayitem(box4, descr2, index1) is box3
@@ -304,6 +335,27 @@
         h.replace_box(lengthbox1, lengthbox2)
         assert h.arraylen(box4) is lengthbox2

+    def test_replace_box_array_twice(self):
+        h = HeapCache()
+        h.setarrayitem(box1, descr1, index1, box2)
+        h.setarrayitem(box1, descr2, index1, box3)
+        h.arraylen_now_known(box1, lengthbox1)
+        h.setarrayitem(box2, descr1, index2, box1)
+        h.setarrayitem(box3, descr2, index2, box1)
+        h.setarrayitem(box2, descr3, index2, box3)
+        h.replace_box(box1, box4)
+        h.replace_box(box4, box5)
+        assert h.arraylen(box4) is lengthbox1
+        assert h.getarrayitem(box5, descr1, index1) is box2
+        assert h.getarrayitem(box5, descr2, index1) is box3
+        assert h.getarrayitem(box2, descr1, index2) is box5
+        assert h.getarrayitem(box3, descr2, index2) is box5
+        assert h.getarrayitem(box2, descr3, index2) is box3
+
+        h.replace_box(lengthbox1, lengthbox2)
+        h.replace_box(lengthbox2, lengthbox3)
+        assert h.arraylen(box4) is lengthbox3
+
     def test_ll_arraycopy(self):
         h = HeapCache()
         h.new_array(box1, lengthbox1)

diff --git a/pypy/module/_io/interp_iobase.py b/pypy/module/_io/interp_iobase.py
--- a/pypy/module/_io/interp_iobase.py
+++ b/pypy/module/_io/interp_iobase.py
@@ -345,9 +345,13 @@
     def add(self, w_iobase):
         assert w_iobase.streamholder is None
-        holder = StreamHolder(w_iobase)
-        w_iobase.streamholder = holder
-        self.streams[holder] = None
+        if rweakref.has_weakref_support():
+            holder = StreamHolder(w_iobase)
+            w_iobase.streamholder = holder
+            self.streams[holder] = None
+        #else:
+        #   no support for weakrefs, so ignore and we
+        #   will not get autoflushing

     def remove(self, w_iobase):
         holder = w_iobase.streamholder

diff --git a/pypy/module/_multiprocessing/test/test_connection.py b/pypy/module/_multiprocessing/test/test_connection.py
--- a/pypy/module/_multiprocessing/test/test_connection.py
+++ b/pypy/module/_multiprocessing/test/test_connection.py
@@ -157,13 +157,15 @@
         raises(IOError, _multiprocessing.Connection, -15)

     def test_byte_order(self):
+        import socket
+        if not 'fromfd' in dir(socket):
+            skip('No fromfd in socket')
         # The exact format of net strings (length in network byte
         # order) is important for interoperation with others
         # implementations.
         rhandle, whandle = self.make_pair()
         whandle.send_bytes("abc")
         whandle.send_bytes("defg")
-        import socket
         sock = socket.fromfd(rhandle.fileno(),
                              socket.AF_INET, socket.SOCK_STREAM)
         data1 = sock.recv(7)

diff --git a/pypy/module/_winreg/test/test_winreg.py b/pypy/module/_winreg/test/test_winreg.py
--- a/pypy/module/_winreg/test/test_winreg.py
+++ b/pypy/module/_winreg/test/test_winreg.py
@@ -198,7 +198,10 @@
         import nt
         r = ExpandEnvironmentStrings(u"%windir%\\test")
         assert isinstance(r, unicode)
-        assert r == nt.environ["WINDIR"] + "\\test"
+        if 'WINDIR' in nt.environ.keys():
+            assert r == nt.environ["WINDIR"] + "\\test"
+        else:
+            assert r == nt.environ["windir"] + "\\test"

     def test_long_key(self):
         from _winreg import (

diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py
--- a/pypy/module/cpyext/api.py
+++ b/pypy/module/cpyext/api.py
@@ -102,8 +102,8 @@
 """.split()
 for name in constant_names:
     setattr(CConfig_constants, name, rffi_platform.ConstantInteger(name))
-udir.join('pypy_decl.h').write("/* Will be filled later */")
-udir.join('pypy_macros.h').write("/* Will be filled later */")
+udir.join('pypy_decl.h').write("/* Will be filled later */\n")
+udir.join('pypy_macros.h').write("/* Will be filled later */\n")
 globals().update(rffi_platform.configure(CConfig_constants))

 def copy_header_files(dstdir):
@@ -924,12 +924,12 @@
     source_dir / "pyerrors.c",
     source_dir / "modsupport.c",
     source_dir / "getargs.c",
+    source_dir / "abstract.c",
     source_dir / "unicodeobject.c",
     source_dir / "mysnprintf.c",
     source_dir / "pythonrun.c",
     source_dir / "sysmodule.c",
     source_dir / "bufferobject.c",
-    source_dir / "object.c",
     source_dir / "cobject.c",
     source_dir / "structseq.c",
     source_dir / "capsule.c",

diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h
--- a/pypy/module/cpyext/include/object.h
+++ b/pypy/module/cpyext/include/object.h
@@ -38,10 +38,19 @@
     PyObject_VAR_HEAD
 } PyVarObject;

+#ifndef PYPY_DEBUG_REFCOUNT
 #define Py_INCREF(ob)    (Py_IncRef((PyObject *)ob))
 #define Py_DECREF(ob)    (Py_DecRef((PyObject *)ob))
 #define Py_XINCREF(ob)   (Py_IncRef((PyObject *)ob))
 #define Py_XDECREF(ob)   (Py_DecRef((PyObject *)ob))
+#else
+#define Py_INCREF(ob)    (((PyObject *)ob)->ob_refcnt++)
+#define Py_DECREF(ob)    ((((PyObject *)ob)->ob_refcnt > 1) ? \
+        ((PyObject *)ob)->ob_refcnt-- : (Py_DecRef((PyObject *)ob)))
+
+#define Py_XINCREF(op) do { if ((op) == NULL) ; else Py_INCREF(op); } while (0)
+#define Py_XDECREF(op) do { if ((op) == NULL) ; else Py_DECREF(op); } while (0)
+#endif

 #define Py_CLEAR(op)                    \
     do {                                \

diff --git a/pypy/module/cpyext/include/pyerrors.h b/pypy/module/cpyext/include/pyerrors.h
--- a/pypy/module/cpyext/include/pyerrors.h
+++ b/pypy/module/cpyext/include/pyerrors.h
@@ -29,6 +29,10 @@
 # define vsnprintf _vsnprintf
 #endif

+#include <stdarg.h>
+PyAPI_FUNC(int) PyOS_snprintf(char *str, size_t size, const char *format, ...);
+PyAPI_FUNC(int) PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va);
+
 #ifdef __cplusplus
 }
 #endif

diff --git a/pypy/module/cpyext/include/stringobject.h b/pypy/module/cpyext/include/stringobject.h
--- a/pypy/module/cpyext/include/stringobject.h
+++ b/pypy/module/cpyext/include/stringobject.h
@@ -7,8 +7,6 @@
 extern "C" {
 #endif

-int PyOS_snprintf(char *str, size_t size, const char *format, ...);
-
 #define PyString_GET_SIZE(op) PyString_Size(op)
 #define PyString_AS_STRING(op) PyString_AsString(op)

diff --git a/pypy/module/cpyext/iterator.py b/pypy/module/cpyext/iterator.py
--- a/pypy/module/cpyext/iterator.py
+++ b/pypy/module/cpyext/iterator.py
@@ -22,7 +22,7 @@
     cannot be iterated."""
     return space.iter(w_obj)

-@cpython_api([PyObject], PyObject, error=CANNOT_FAIL)
+@cpython_api([PyObject], PyObject)
 def PyIter_Next(space, w_obj):
     """Return the next value from the iteration o.  If the object is an
     iterator, this retrieves the next value from the iteration, and returns

diff --git a/pypy/module/cpyext/listobject.py b/pypy/module/cpyext/listobject.py
--- a/pypy/module/cpyext/listobject.py
+++ b/pypy/module/cpyext/listobject.py
@@ -110,6 +110,16 @@
     space.call_method(w_list, "reverse")
     return 0

+@cpython_api([PyObject, Py_ssize_t, Py_ssize_t], PyObject)
+def PyList_GetSlice(space, w_list, low, high):
+    """Return a list of the objects in list containing the objects between low
+    and high.  Return NULL and set an exception if unsuccessful.  Analogous
+    to list[low:high].  Negative indices, as when slicing from Python, are not
+    supported."""
+    w_start = space.wrap(low)
+    w_stop = space.wrap(high)
+    return space.getslice(w_list, w_start, w_stop)
+
 @cpython_api([PyObject, Py_ssize_t, Py_ssize_t, PyObject], rffi.INT_real, error=-1)
 def PyList_SetSlice(space, w_list, low, high, w_sequence):
     """Set the slice of list between low and high to the contents of

diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py
--- a/pypy/module/cpyext/object.py
+++ b/pypy/module/cpyext/object.py
@@ -390,6 +390,15 @@
     This is the equivalent of the Python expression hash(o)."""
     return space.int_w(space.hash(w_obj))

+@cpython_api([PyObject], lltype.Signed, error=-1)
+def PyObject_HashNotImplemented(space, o):
+    """Set a TypeError indicating that type(o) is not hashable and return -1.
+    This function receives special treatment when stored in a tp_hash slot,
+    allowing a type to explicitly indicate to the interpreter that it is not
+    hashable.
+    """
+    raise OperationError(space.w_TypeError, space.wrap("unhashable type"))
+
 @cpython_api([PyObject], PyObject)
 def PyObject_Dir(space, w_o):
     """This is equivalent to the Python expression dir(o), returning a (possibly

diff --git a/pypy/module/cpyext/pyerrors.py b/pypy/module/cpyext/pyerrors.py
--- a/pypy/module/cpyext/pyerrors.py
+++ b/pypy/module/cpyext/pyerrors.py
@@ -313,7 +313,10 @@
     """This function simulates the effect of a SIGINT signal arriving --- the
     next time PyErr_CheckSignals() is called, KeyboardInterrupt will be raised.
     It may be called without holding the interpreter lock."""
-    space.check_signal_action.set_interrupt()
+    if space.check_signal_action is not None:
+        space.check_signal_action.set_interrupt()
+    #else:
+    #    no 'signal' module present, ignore...  We can't return an error here

 @cpython_api([PyObjectP, PyObjectP, PyObjectP], lltype.Void)
 def PyErr_GetExcInfo(space, ptype, pvalue, ptraceback):

diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py
--- a/pypy/module/cpyext/slotdefs.py
+++ b/pypy/module/cpyext/slotdefs.py
@@ -7,7 +7,7 @@
     cpython_api, generic_cpy_call, PyObject, Py_ssize_t)
 from pypy.module.cpyext.typeobjectdefs import (
     unaryfunc, wrapperfunc, ternaryfunc, PyTypeObjectPtr, binaryfunc,
-    getattrfunc, getattrofunc, setattrofunc, lenfunc, ssizeargfunc,
+    getattrfunc, getattrofunc, setattrofunc, lenfunc, ssizeargfunc, inquiry,
     ssizessizeargfunc, ssizeobjargproc, iternextfunc, initproc,
     richcmpfunc, cmpfunc, hashfunc, descrgetfunc, descrsetfunc,
     objobjproc, objobjargproc, readbufferproc)
@@ -60,6 +60,16 @@
     args_w = space.fixedview(w_args)
     return generic_cpy_call(space, func_binary, w_self, args_w[0])

+def wrap_inquirypred(space, w_self, w_args, func):
+    func_inquiry = rffi.cast(inquiry, func)
+    check_num_args(space, w_args, 0)
+    args_w = space.fixedview(w_args)
+    res = generic_cpy_call(space, func_inquiry, w_self)
+    res = rffi.cast(lltype.Signed, res)
+    if res == -1:
+
space.fromcache(State).check_and_raise_exception() + return space.wrap(bool(res)) + def wrap_getattr(space, w_self, w_args, func): func_target = rffi.cast(getattrfunc, func) check_num_args(space, w_args, 1) diff --git a/pypy/module/cpyext/src/bufferobject.c b/pypy/module/cpyext/src/bufferobject.c --- a/pypy/module/cpyext/src/bufferobject.c +++ b/pypy/module/cpyext/src/bufferobject.c @@ -34,13 +34,13 @@ proc = bp->bf_getreadbuffer; else if ((buffer_type == WRITE_BUFFER) || (buffer_type == ANY_BUFFER)) - proc = (readbufferproc)bp->bf_getwritebuffer; + proc = (readbufferproc)bp->bf_getwritebuffer; else if (buffer_type == CHAR_BUFFER) { if (!PyType_HasFeature(self->ob_type, - Py_TPFLAGS_HAVE_GETCHARBUFFER)) { - PyErr_SetString(PyExc_TypeError, - "Py_TPFLAGS_HAVE_GETCHARBUFFER needed"); - return 0; + Py_TPFLAGS_HAVE_GETCHARBUFFER)) { + PyErr_SetString(PyExc_TypeError, + "Py_TPFLAGS_HAVE_GETCHARBUFFER needed"); + return 0; } proc = (readbufferproc)bp->bf_getcharbuffer; } @@ -86,18 +86,18 @@ static PyObject * buffer_from_memory(PyObject *base, Py_ssize_t size, Py_ssize_t offset, void *ptr, - int readonly) + int readonly) { PyBufferObject * b; if (size < 0 && size != Py_END_OF_BUFFER) { PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); + "size must be zero or positive"); return NULL; } if (offset < 0) { PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); + "offset must be zero or positive"); return NULL; } @@ -121,7 +121,7 @@ { if (offset < 0) { PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); + "offset must be zero or positive"); return NULL; } if ( PyBuffer_Check(base) && (((PyBufferObject *)base)->b_base) ) { @@ -193,7 +193,7 @@ if (size < 0) { PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); + "size must be zero or positive"); return NULL; } if (sizeof(*b) > PY_SSIZE_T_MAX - size) { @@ -225,9 +225,11 @@ Py_ssize_t offset = 0; Py_ssize_t size = Py_END_OF_BUFFER; - /*if 
(PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) - return NULL;*/ - + /* + * if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) + * return NULL; + */ + if (!_PyArg_NoKeywords("buffer()", kw)) return NULL; @@ -278,12 +280,11 @@ const char *status = self->b_readonly ? "read-only" : "read-write"; if ( self->b_base == NULL ) - return PyUnicode_FromFormat( - "<%s buffer ptr %p, size %zd at %p>", - status, - self->b_ptr, - self->b_size, - self); + return PyUnicode_FromFormat("<%s buffer ptr %p, size %zd at %p>", + status, + self->b_ptr, + self->b_size, + self); else return PyUnicode_FromFormat( "<%s buffer for %p, size %zd, offset %zd at %p>", @@ -316,7 +317,7 @@ if ( !self->b_readonly ) { PyErr_SetString(PyExc_TypeError, - "writable buffers are not hashable"); + "writable buffers are not hashable"); return -1; } @@ -376,13 +377,13 @@ { /* ### use a different exception type/message? */ PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); + "single-segment buffer object expected"); return NULL; } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return NULL; - + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return NULL; + /* optimize special case */ if ( size == 0 ) { @@ -395,12 +396,12 @@ assert(count <= PY_SIZE_MAX - size); - ob = PyString_FromStringAndSize(NULL, size + count); + ob = PyString_FromStringAndSize(NULL, size + count); if ( ob == NULL ) return NULL; - p = PyString_AS_STRING(ob); - memcpy(p, ptr1, size); - memcpy(p + size, ptr2, count); + p = PyString_AS_STRING(ob); + memcpy(p, ptr1, size); + memcpy(p + size, ptr2, count); /* there is an extra byte in the string object, so this is safe */ p[size + count] = '\0'; @@ -471,7 +472,7 @@ if ( right < left ) right = left; return PyString_FromStringAndSize((char *)ptr + left, - right - left); + right - left); } static PyObject * @@ -479,10 +480,9 @@ { void *p; Py_ssize_t size; - + if (!get_buf(self, &p, &size, ANY_BUFFER)) return NULL; - if (PyIndex_Check(item)) { 
Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); if (i == -1 && PyErr_Occurred()) @@ -495,7 +495,7 @@ Py_ssize_t start, stop, step, slicelength, cur, i; if (PySlice_GetIndicesEx((PySliceObject*)item, size, - &start, &stop, &step, &slicelength) < 0) { + &start, &stop, &step, &slicelength) < 0) { return NULL; } @@ -503,7 +503,7 @@ return PyString_FromStringAndSize("", 0); else if (step == 1) return PyString_FromStringAndSize((char *)p + start, - stop - start); + stop - start); else { PyObject *result; char *source_buf = (char *)p; @@ -518,14 +518,14 @@ } result = PyString_FromStringAndSize(result_buf, - slicelength); + slicelength); PyMem_Free(result_buf); return result; } } else { PyErr_SetString(PyExc_TypeError, - "sequence index must be integer"); + "sequence index must be integer"); return NULL; } } @@ -540,7 +540,7 @@ if ( self->b_readonly ) { PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); + "buffer is read-only"); return -1; } @@ -549,7 +549,7 @@ if (idx < 0 || idx >= size) { PyErr_SetString(PyExc_IndexError, - "buffer assignment index out of range"); + "buffer assignment index out of range"); return -1; } @@ -565,7 +565,7 @@ { /* ### use a different exception type/message? */ PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); + "single-segment buffer object expected"); return -1; } @@ -573,7 +573,7 @@ return -1; if ( count != 1 ) { PyErr_SetString(PyExc_TypeError, - "right operand must be a single byte"); + "right operand must be a single byte"); return -1; } @@ -592,7 +592,7 @@ if ( self->b_readonly ) { PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); + "buffer is read-only"); return -1; } @@ -608,7 +608,7 @@ { /* ### use a different exception type/message? 
*/ PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); + "single-segment buffer object expected"); return -1; } if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) @@ -649,7 +649,7 @@ if ( self->b_readonly ) { PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); + "buffer is read-only"); return -1; } @@ -665,12 +665,11 @@ { /* ### use a different exception type/message? */ PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); + "single-segment buffer object expected"); return -1; } if (!get_buf(self, &ptr1, &selfsize, ANY_BUFFER)) return -1; - if (PyIndex_Check(item)) { Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); if (i == -1 && PyErr_Occurred()) @@ -681,9 +680,9 @@ } else if (PySlice_Check(item)) { Py_ssize_t start, stop, step, slicelength; - + if (PySlice_GetIndicesEx((PySliceObject *)item, selfsize, - &start, &stop, &step, &slicelength) < 0) + &start, &stop, &step, &slicelength) < 0) return -1; if ((othersize = (*pb->bf_getreadbuffer)(value, 0, &ptr2)) < 0) @@ -704,7 +703,7 @@ } else { Py_ssize_t cur, i; - + for (cur = start, i = 0; i < slicelength; cur += step, i++) { ((char *)ptr1)[cur] = ((char *)ptr2)[i]; @@ -714,7 +713,7 @@ } } else { PyErr_SetString(PyExc_TypeError, - "buffer indices must be integers"); + "buffer indices must be integers"); return -1; } } @@ -727,7 +726,7 @@ Py_ssize_t size; if ( idx != 0 ) { PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); + "accessing non-existent buffer segment"); return -1; } if (!get_buf(self, pp, &size, READ_BUFFER)) @@ -748,7 +747,7 @@ if ( idx != 0 ) { PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); + "accessing non-existent buffer segment"); return -1; } if (!get_buf(self, pp, &size, WRITE_BUFFER)) @@ -775,7 +774,7 @@ Py_ssize_t size; if ( idx != 0 ) { PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); + "accessing non-existent buffer segment"); return -1; } if 
(!get_buf(self, &ptr, &size, CHAR_BUFFER)) @@ -813,44 +812,42 @@ }; PyTypeObject PyBuffer_Type = { - PyObject_HEAD_INIT(NULL) - 0, + PyVarObject_HEAD_INIT(NULL, 0) "buffer", sizeof(PyBufferObject), 0, - (destructor)buffer_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - (cmpfunc)buffer_compare, /* tp_compare */ - (reprfunc)buffer_repr, /* tp_repr */ - 0, /* tp_as_number */ - &buffer_as_sequence, /* tp_as_sequence */ - &buffer_as_mapping, /* tp_as_mapping */ - (hashfunc)buffer_hash, /* tp_hash */ - 0, /* tp_call */ - (reprfunc)buffer_str, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - &buffer_as_buffer, /* tp_as_buffer */ + (destructor)buffer_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + (cmpfunc)buffer_compare, /* tp_compare */ + (reprfunc)buffer_repr, /* tp_repr */ + 0, /* tp_as_number */ + &buffer_as_sequence, /* tp_as_sequence */ + &buffer_as_mapping, /* tp_as_mapping */ + (hashfunc)buffer_hash, /* tp_hash */ + 0, /* tp_call */ + (reprfunc)buffer_str, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + &buffer_as_buffer, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GETCHARBUFFER, /* tp_flags */ - buffer_doc, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - buffer_new, /* tp_new */ + buffer_doc, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* 
tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + buffer_new, /* tp_new */ }; - diff --git a/pypy/module/cpyext/src/cobject.c b/pypy/module/cpyext/src/cobject.c --- a/pypy/module/cpyext/src/cobject.c +++ b/pypy/module/cpyext/src/cobject.c @@ -50,6 +50,10 @@ PyCObject_AsVoidPtr(PyObject *self) { if (self) { + if (PyCapsule_CheckExact(self)) { + const char *name = PyCapsule_GetName(self); + return (void *)PyCapsule_GetPointer(self, name); + } if (self->ob_type == &PyCObject_Type) return ((PyCObject *)self)->cobject; PyErr_SetString(PyExc_TypeError, diff --git a/pypy/module/cpyext/src/getargs.c b/pypy/module/cpyext/src/getargs.c --- a/pypy/module/cpyext/src/getargs.c +++ b/pypy/module/cpyext/src/getargs.c @@ -7,349 +7,348 @@ #ifdef __cplusplus -extern "C" { +extern "C" { #endif - int PyArg_Parse(PyObject *, const char *, ...); int PyArg_ParseTuple(PyObject *, const char *, ...); int PyArg_VaParse(PyObject *, const char *, va_list); int PyArg_ParseTupleAndKeywords(PyObject *, PyObject *, - const char *, char **, ...); + const char *, char **, ...); int PyArg_VaParseTupleAndKeywords(PyObject *, PyObject *, - const char *, char **, va_list); + const char *, char **, va_list); #define FLAG_COMPAT 1 #define FLAG_SIZE_T 2 -typedef int (*destr_t)(PyObject *, void *); - - -/* Keep track of "objects" that have been allocated or initialized and - which will need to be deallocated or cleaned up somehow if overall - parsing fails. 
-*/ -typedef struct { - void *item; - destr_t destructor; -} freelistentry_t; - -typedef struct { - int first_available; - freelistentry_t *entries; -} freelist_t; - /* Forward */ static int vgetargs1(PyObject *, const char *, va_list *, int); static void seterror(int, const char *, int *, const char *, const char *); -static char *convertitem(PyObject *, const char **, va_list *, int, int *, - char *, size_t, freelist_t *); +static char *convertitem(PyObject *, const char **, va_list *, int, int *, + char *, size_t, PyObject **); static char *converttuple(PyObject *, const char **, va_list *, int, - int *, char *, size_t, int, freelist_t *); + int *, char *, size_t, int, PyObject **); static char *convertsimple(PyObject *, const char **, va_list *, int, char *, - size_t, freelist_t *); + size_t, PyObject **); static Py_ssize_t convertbuffer(PyObject *, void **p, char **); static int getbuffer(PyObject *, Py_buffer *, char**); static int vgetargskeywords(PyObject *, PyObject *, - const char *, char **, va_list *, int); + const char *, char **, va_list *, int); static char *skipitem(const char **, va_list *, int); int PyArg_Parse(PyObject *args, const char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_COMPAT); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_COMPAT); + va_end(va); + return retval; } int _PyArg_Parse_SizeT(PyObject *args, char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_ParseTuple(PyObject *args, const char *format, ...) 
{ - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, 0); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, 0); + va_end(va); + return retval; } int _PyArg_ParseTuple_SizeT(PyObject *args, char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_SIZE_T); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_VaParse(PyObject *args, const char *format, va_list va) { - va_list lva; + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - return vgetargs1(args, format, &lva, 0); + return vgetargs1(args, format, &lva, 0); } int _PyArg_VaParse_SizeT(PyObject *args, char *format, va_list va) { - va_list lva; + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - return vgetargs1(args, format, &lva, FLAG_SIZE_T); + return vgetargs1(args, format, &lva, FLAG_SIZE_T); } /* Handle cleanup of allocated memory in case of exception */ +#define GETARGS_CAPSULE_NAME_CLEANUP_PTR "getargs.cleanup_ptr" +#define GETARGS_CAPSULE_NAME_CLEANUP_BUFFER "getargs.cleanup_buffer" + +static void +cleanup_ptr(PyObject *self) +{ + void *ptr = PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_PTR); + if (ptr) { + PyMem_FREE(ptr); + } +} + +static void +cleanup_buffer(PyObject *self) +{ + Py_buffer *ptr = (Py_buffer *)PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_BUFFER); + if (ptr) { + PyBuffer_Release(ptr); + } +} + static int -cleanup_ptr(PyObject 
*self, void *ptr) +addcleanup(void *ptr, PyObject **freelist, PyCapsule_Destructor destr) { - if (ptr) { - PyMem_FREE(ptr); + PyObject *cobj; + const char *name; + + if (!*freelist) { + *freelist = PyList_New(0); + if (!*freelist) { + destr(ptr); + return -1; + } } + + if (destr == cleanup_ptr) { + name = GETARGS_CAPSULE_NAME_CLEANUP_PTR; + } else if (destr == cleanup_buffer) { + name = GETARGS_CAPSULE_NAME_CLEANUP_BUFFER; + } else { + return -1; + } + cobj = PyCapsule_New(ptr, name, destr); + if (!cobj) { + destr(ptr); + return -1; + } + if (PyList_Append(*freelist, cobj)) { + Py_DECREF(cobj); + return -1; + } + Py_DECREF(cobj); return 0; } static int -cleanup_buffer(PyObject *self, void *ptr) +cleanreturn(int retval, PyObject *freelist) { - Py_buffer *buf = (Py_buffer *)ptr; - if (buf) { - PyBuffer_Release(buf); + if (freelist && retval != 0) { + /* We were successful, reset the destructors so that they + don't get called. */ + Py_ssize_t len = PyList_GET_SIZE(freelist), i; + for (i = 0; i < len; i++) + PyCapsule_SetDestructor(PyList_GET_ITEM(freelist, i), NULL); } - return 0; + Py_XDECREF(freelist); + return retval; } -static int -addcleanup(void *ptr, freelist_t *freelist, destr_t destructor) -{ - int index; - - index = freelist->first_available; - freelist->first_available += 1; - - freelist->entries[index].item = ptr; - freelist->entries[index].destructor = destructor; - - return 0; -} - -static int -cleanreturn(int retval, freelist_t *freelist) -{ - int index; - - if (retval == 0) { - /* A failure occurred, therefore execute all of the cleanup - functions. 
- */ - for (index = 0; index < freelist->first_available; ++index) { - freelist->entries[index].destructor(NULL, - freelist->entries[index].item); - } - } - PyMem_Free(freelist->entries); - return retval; -} static int vgetargs1(PyObject *args, const char *format, va_list *p_va, int flags) { - char msgbuf[256]; - int levels[32]; - const char *fname = NULL; - const char *message = NULL; - int min = -1; - int max = 0; - int level = 0; - int endfmt = 0; - const char *formatsave = format; - Py_ssize_t i, len; - char *msg; - freelist_t freelist = {0, NULL}; - int compat = flags & FLAG_COMPAT; + char msgbuf[256]; + int levels[32]; + const char *fname = NULL; + const char *message = NULL; + int min = -1; + int max = 0; + int level = 0; + int endfmt = 0; + const char *formatsave = format; + Py_ssize_t i, len; + char *msg; + PyObject *freelist = NULL; + int compat = flags & FLAG_COMPAT; - assert(compat || (args != (PyObject*)NULL)); - flags = flags & ~FLAG_COMPAT; + assert(compat || (args != (PyObject*)NULL)); + flags = flags & ~FLAG_COMPAT; - while (endfmt == 0) { - int c = *format++; - switch (c) { - case '(': - if (level == 0) - max++; - level++; - if (level >= 30) - Py_FatalError("too many tuple nesting levels " - "in argument format string"); - break; - case ')': - if (level == 0) - Py_FatalError("excess ')' in getargs format"); - else - level--; - break; - case '\0': - endfmt = 1; - break; - case ':': - fname = format; - endfmt = 1; - break; - case ';': - message = format; - endfmt = 1; - break; - default: - if (level == 0) { - if (c == 'O') - max++; - else if (isalpha(Py_CHARMASK(c))) { - if (c != 'e') /* skip encoded */ - max++; - } else if (c == '|') - min = max; - } - break; - } - } - - if (level != 0) - Py_FatalError(/* '(' */ "missing ')' in getargs format"); - - if (min < 0) - min = max; - - format = formatsave; - - freelist.entries = PyMem_New(freelistentry_t, max); + while (endfmt == 0) { + int c = *format++; + switch (c) { + case '(': + if (level == 0) + 
max++; + level++; + if (level >= 30) + Py_FatalError("too many tuple nesting levels " + "in argument format string"); + break; + case ')': + if (level == 0) + Py_FatalError("excess ')' in getargs format"); + else + level--; + break; + case '\0': + endfmt = 1; + break; + case ':': + fname = format; + endfmt = 1; + break; + case ';': + message = format; + endfmt = 1; + break; + default: + if (level == 0) { + if (c == 'O') + max++; + else if (isalpha(Py_CHARMASK(c))) { + if (c != 'e') /* skip encoded */ + max++; + } else if (c == '|') + min = max; + } + break; + } + } - if (compat) { - if (max == 0) { - if (args == NULL) - return cleanreturn(1, &freelist); - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.200s%s takes no arguments", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()"); - PyErr_SetString(PyExc_TypeError, msgbuf); - return cleanreturn(0, &freelist); - } - else if (min == 1 && max == 1) { - if (args == NULL) { - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.200s%s takes at least one argument", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()"); - PyErr_SetString(PyExc_TypeError, msgbuf); - return cleanreturn(0, &freelist); - } - msg = convertitem(args, &format, p_va, flags, levels, - msgbuf, sizeof(msgbuf), &freelist); - if (msg == NULL) - return cleanreturn(1, &freelist); - seterror(levels[0], msg, levels+1, fname, message); - return cleanreturn(0, &freelist); - } - else { - PyErr_SetString(PyExc_SystemError, - "old style getargs format uses new features"); - return cleanreturn(0, &freelist); - } - } - - if (!PyTuple_Check(args)) { - PyErr_SetString(PyExc_SystemError, - "new style getargs format but argument is not a tuple"); - return cleanreturn(0, &freelist); - } - - len = PyTuple_GET_SIZE(args); - - if (len < min || max < len) { - if (message == NULL) { - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.150s%s takes %s %d argument%s " - "(%ld given)", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()", - min==max ? 
"exactly" - : len < min ? "at least" : "at most", - len < min ? min : max, - (len < min ? min : max) == 1 ? "" : "s", - Py_SAFE_DOWNCAST(len, Py_ssize_t, long)); - message = msgbuf; - } - PyErr_SetString(PyExc_TypeError, message); - return cleanreturn(0, &freelist); - } - - for (i = 0; i < len; i++) { - if (*format == '|') - format++; - msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va, - flags, levels, msgbuf, - sizeof(msgbuf), &freelist); - if (msg) { - seterror(i+1, msg, levels, fname, message); - return cleanreturn(0, &freelist); - } - } + if (level != 0) + Py_FatalError(/* '(' */ "missing ')' in getargs format"); - if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) && - *format != '(' && - *format != '|' && *format != ':' && *format != ';') { - PyErr_Format(PyExc_SystemError, - "bad format string: %.200s", formatsave); - return cleanreturn(0, &freelist); - } - - return cleanreturn(1, &freelist); + if (min < 0) + min = max; + + format = formatsave; + + if (compat) { + if (max == 0) { + if (args == NULL) + return 1; + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.200s%s takes no arguments", + fname==NULL ? "function" : fname, + fname==NULL ? "" : "()"); + PyErr_SetString(PyExc_TypeError, msgbuf); + return 0; + } + else if (min == 1 && max == 1) { + if (args == NULL) { + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.200s%s takes at least one argument", + fname==NULL ? "function" : fname, + fname==NULL ? 
"" : "()"); + PyErr_SetString(PyExc_TypeError, msgbuf); + return 0; + } + msg = convertitem(args, &format, p_va, flags, levels, + msgbuf, sizeof(msgbuf), &freelist); + if (msg == NULL) + return cleanreturn(1, freelist); + seterror(levels[0], msg, levels+1, fname, message); + return cleanreturn(0, freelist); + } + else { + PyErr_SetString(PyExc_SystemError, + "old style getargs format uses new features"); + return 0; + } + } + + if (!PyTuple_Check(args)) { + PyErr_SetString(PyExc_SystemError, + "new style getargs format but argument is not a tuple"); + return 0; + } + + len = PyTuple_GET_SIZE(args); + + if (len < min || max < len) { + if (message == NULL) { + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.150s%s takes %s %d argument%s " + "(%ld given)", + fname==NULL ? "function" : fname, + fname==NULL ? "" : "()", + min==max ? "exactly" + : len < min ? "at least" : "at most", + len < min ? min : max, + (len < min ? min : max) == 1 ? "" : "s", + Py_SAFE_DOWNCAST(len, Py_ssize_t, long)); + message = msgbuf; + } + PyErr_SetString(PyExc_TypeError, message); + return 0; + } + + for (i = 0; i < len; i++) { + if (*format == '|') + format++; + msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va, + flags, levels, msgbuf, + sizeof(msgbuf), &freelist); + if (msg) { + seterror(i+1, msg, levels, fname, msg); + return cleanreturn(0, freelist); + } + } + + if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) && + *format != '(' && + *format != '|' && *format != ':' && *format != ';') { + PyErr_Format(PyExc_SystemError, + "bad format string: %.200s", formatsave); + return cleanreturn(0, freelist); + } + + return cleanreturn(1, freelist); } @@ -358,37 +357,37 @@ seterror(int iarg, const char *msg, int *levels, const char *fname, const char *message) { - char buf[512]; - int i; - char *p = buf; + char buf[512]; + int i; + char *p = buf; - if (PyErr_Occurred()) - return; - else if (message == NULL) { - if (fname != NULL) { - PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname); - p 
+= strlen(p); - } - if (iarg != 0) { - PyOS_snprintf(p, sizeof(buf) - (p - buf), - "argument %d", iarg); - i = 0; - p += strlen(p); - while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) { - PyOS_snprintf(p, sizeof(buf) - (p - buf), - ", item %d", levels[i]-1); - p += strlen(p); - i++; - } - } - else { - PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument"); - p += strlen(p); - } - PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg); - message = buf; - } - PyErr_SetString(PyExc_TypeError, message); + if (PyErr_Occurred()) + return; + else if (message == NULL) { + if (fname != NULL) { + PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname); + p += strlen(p); + } + if (iarg != 0) { + PyOS_snprintf(p, sizeof(buf) - (p - buf), + "argument %d", iarg); + i = 0; + p += strlen(p); + while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) { + PyOS_snprintf(p, sizeof(buf) - (p - buf), + ", item %d", levels[i]-1); + p += strlen(p); + i++; + } + } + else { + PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument"); + p += strlen(p); + } + PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg); + message = buf; + } + PyErr_SetString(PyExc_TypeError, message); } @@ -404,85 +403,84 @@ *p_va is undefined, *levels is a 0-terminated list of item numbers, *msgbuf contains an error message, whose format is: - "must be <desired type>, not <actual type>", where: - <desired type> is the name of the expected type, and - <actual type> is the name of the actual type, + "must be <desired type>, not <actual type>", where: + <desired type> is the name of the expected type, and + <actual type> is the name of the actual type, and msgbuf is returned. 
*/ static char * converttuple(PyObject *arg, const char **p_format, va_list *p_va, int flags, - int *levels, char *msgbuf, size_t bufsize, int toplevel, - freelist_t *freelist) + int *levels, char *msgbuf, size_t bufsize, int toplevel, + PyObject **freelist) { - int level = 0; - int n = 0; - const char *format = *p_format; - int i; - - for (;;) { - int c = *format++; - if (c == '(') { - if (level == 0) - n++; - level++; - } - else if (c == ')') { - if (level == 0) - break; - level--; - } - else if (c == ':' || c == ';' || c == '\0') - break; - else if (level == 0 && isalpha(Py_CHARMASK(c))) - n++; - } - - if (!PySequence_Check(arg) || PyString_Check(arg)) { - levels[0] = 0; - PyOS_snprintf(msgbuf, bufsize, - toplevel ? "expected %d arguments, not %.50s" : - "must be %d-item sequence, not %.50s", - n, - arg == Py_None ? "None" : arg->ob_type->tp_name); - return msgbuf; - } - - if ((i = PySequence_Size(arg)) != n) { - levels[0] = 0; - PyOS_snprintf(msgbuf, bufsize, - toplevel ? "expected %d arguments, not %d" : - "must be sequence of length %d, not %d", - n, i); - return msgbuf; - } + int level = 0; + int n = 0; + const char *format = *p_format; + int i; - format = *p_format; - for (i = 0; i < n; i++) { - char *msg; - PyObject *item; + for (;;) { + int c = *format++; + if (c == '(') { + if (level == 0) + n++; + level++; + } + else if (c == ')') { + if (level == 0) + break; + level--; + } + else if (c == ':' || c == ';' || c == '\0') + break; + else if (level == 0 && isalpha(Py_CHARMASK(c))) + n++; + } + + if (!PySequence_Check(arg) || PyString_Check(arg)) { + levels[0] = 0; + PyOS_snprintf(msgbuf, bufsize, + toplevel ? "expected %d arguments, not %.50s" : + "must be %d-item sequence, not %.50s", + n, + arg == Py_None ? "None" : arg->ob_type->tp_name); + return msgbuf; + } + + if ((i = PySequence_Size(arg)) != n) { + levels[0] = 0; + PyOS_snprintf(msgbuf, bufsize, + toplevel ? 
"expected %d arguments, not %d" : + "must be sequence of length %d, not %d", + n, i); + return msgbuf; + } + + format = *p_format; + for (i = 0; i < n; i++) { + char *msg; + PyObject *item; item = PySequence_GetItem(arg, i); - if (item == NULL) { - PyErr_Clear(); - levels[0] = i+1; - levels[1] = 0; - strncpy(msgbuf, "is not retrievable", - bufsize); - return msgbuf; - } - PyPy_Borrow(arg, item); - msg = convertitem(item, &format, p_va, flags, levels+1, - msgbuf, bufsize, freelist); + if (item == NULL) { + PyErr_Clear(); + levels[0] = i+1; + levels[1] = 0; + strncpy(msgbuf, "is not retrievable", bufsize); + return msgbuf; + } + PyPy_Borrow(arg, item); + msg = convertitem(item, &format, p_va, flags, levels+1, + msgbuf, bufsize, freelist); /* PySequence_GetItem calls tp->sq_item, which INCREFs */ Py_XDECREF(item); - if (msg != NULL) { - levels[0] = i+1; - return msg; - } - } + if (msg != NULL) { + levels[0] = i+1; + return msg; + } + } - *p_format = format; - return NULL; + *p_format = format; + return NULL; } @@ -490,45 +488,45 @@ static char * convertitem(PyObject *arg, const char **p_format, va_list *p_va, int flags, - int *levels, char *msgbuf, size_t bufsize, freelist_t *freelist) + int *levels, char *msgbuf, size_t bufsize, PyObject **freelist) { - char *msg; - const char *format = *p_format; - - if (*format == '(' /* ')' */) { - format++; - msg = converttuple(arg, &format, p_va, flags, levels, msgbuf, - bufsize, 0, freelist); - if (msg == NULL) - format++; - } - else { - msg = convertsimple(arg, &format, p_va, flags, - msgbuf, bufsize, freelist); - if (msg != NULL) - levels[0] = 0; - } - if (msg == NULL) - *p_format = format; - return msg; + char *msg; + const char *format = *p_format; + + if (*format == '(' /* ')' */) { + format++; + msg = converttuple(arg, &format, p_va, flags, levels, msgbuf, + bufsize, 0, freelist); + if (msg == NULL) + format++; + } + else { + msg = convertsimple(arg, &format, p_va, flags, + msgbuf, bufsize, freelist); + if (msg != NULL) + 
levels[0] = 0; + } + if (msg == NULL) + *p_format = format; + return msg; } #define UNICODE_DEFAULT_ENCODING(arg) \ - _PyUnicode_AsDefaultEncodedString(arg, NULL) + _PyUnicode_AsDefaultEncodedString(arg, NULL) /* Format an error message generated by convertsimple(). */ static char * converterr(const char *expected, PyObject *arg, char *msgbuf, size_t bufsize) { - assert(expected != NULL); - assert(arg != NULL); - PyOS_snprintf(msgbuf, bufsize, - "must be %.50s, not %.50s", expected, - arg == Py_None ? "None" : arg->ob_type->tp_name); - return msgbuf; + assert(expected != NULL); + assert(arg != NULL); + PyOS_snprintf(msgbuf, bufsize, + "must be %.50s, not %.50s", expected, + arg == Py_None ? "None" : arg->ob_type->tp_name); + return msgbuf; } #define CONV_UNICODE "(unicode conversion error)" @@ -536,14 +534,28 @@ /* explicitly check for float arguments when integers are expected. For now * signal a warning. Returns true if an exception was raised. */ static int +float_argument_warning(PyObject *arg) +{ + if (PyFloat_Check(arg) && + PyErr_Warn(PyExc_DeprecationWarning, + "integer argument expected, got float" )) + return 1; + else + return 0; +} + +/* explicitly check for float arguments when integers are expected. Raises + TypeError and returns true for float arguments. */ +static int float_argument_error(PyObject *arg) { - if (PyFloat_Check(arg) && - PyErr_Warn(PyExc_DeprecationWarning, - "integer argument expected, got float" )) - return 1; - else - return 0; + if (PyFloat_Check(arg)) { + PyErr_SetString(PyExc_TypeError, + "integer argument expected, got float"); + return 1; + } + else + return 0; } /* Convert a non-tuple argument. 
Return NULL if conversion went OK, @@ -557,836 +569,839 @@ static char * convertsimple(PyObject *arg, const char **p_format, va_list *p_va, int flags, - char *msgbuf, size_t bufsize, freelist_t *freelist) + char *msgbuf, size_t bufsize, PyObject **freelist) { - /* For # codes */ -#define FETCH_SIZE int *q=NULL;Py_ssize_t *q2=NULL;\ - if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \ - else q=va_arg(*p_va, int*); -#define STORE_SIZE(s) if (flags & FLAG_SIZE_T) *q2=s; else *q=s; + /* For # codes */ +#define FETCH_SIZE int *q=NULL;Py_ssize_t *q2=NULL;\ + if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \ + else q=va_arg(*p_va, int*); +#define STORE_SIZE(s) \ + if (flags & FLAG_SIZE_T) \ + *q2=s; \ + else { \ + if (INT_MAX < s) { \ + PyErr_SetString(PyExc_OverflowError, \ + "size does not fit in an int"); \ + return converterr("", arg, msgbuf, bufsize); \ + } \ + *q=s; \ + } #define BUFFER_LEN ((flags & FLAG_SIZE_T) ? *q2:*q) - const char *format = *p_format; - char c = *format++; + const char *format = *p_format; + char c = *format++; #ifdef Py_USING_UNICODE - PyObject *uarg; -#endif - - switch (c) { - - case 'b': { /* unsigned byte -- very short int */ - char *p = va_arg(*p_va, char *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned byte integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival > UCHAR_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned byte integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else - *p = (unsigned char) ival; - break; - } - - case 'B': {/* byte sized bitfield - both signed and unsigned - values allowed */ - char *p = va_arg(*p_va, char *); - long ival; - if 
(float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsUnsignedLongMask(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = (unsigned char) ival; - break; - } - - case 'h': {/* signed short int */ - short *p = va_arg(*p_va, short *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival < SHRT_MIN) { - PyErr_SetString(PyExc_OverflowError, - "signed short integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival > SHRT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "signed short integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else - *p = (short) ival; - break; - } - - case 'H': { /* short int sized bitfield, both signed and - unsigned allowed */ - unsigned short *p = va_arg(*p_va, unsigned short *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsUnsignedLongMask(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = (unsigned short) ival; - break; - } - case 'i': {/* signed int */ - int *p = va_arg(*p_va, int *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival > INT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "signed integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival < INT_MIN) { - PyErr_SetString(PyExc_OverflowError, - "signed integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - 
else - *p = ival; - break; - } - case 'I': { /* int sized bitfield, both signed and - unsigned allowed */ - unsigned int *p = va_arg(*p_va, unsigned int *); - unsigned int ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = (unsigned int)PyInt_AsUnsignedLongMask(arg); - if (ival == (unsigned int)-1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = ival; - break; - } - case 'n': /* Py_ssize_t */ -#if SIZEOF_SIZE_T != SIZEOF_LONG - { - Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *); - Py_ssize_t ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsSsize_t(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - *p = ival; - break; - } -#endif - /* Fall through from 'n' to 'l' if Py_ssize_t is int */ - case 'l': {/* long int */ - long *p = va_arg(*p_va, long *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = ival; - break; - } - - case 'k': { /* long sized bitfield */ - unsigned long *p = va_arg(*p_va, unsigned long *); - unsigned long ival; - if (PyInt_Check(arg)) - ival = PyInt_AsUnsignedLongMask(arg); - else if (PyLong_Check(arg)) - ival = PyLong_AsUnsignedLongMask(arg); - else - return converterr("integer", arg, msgbuf, bufsize); - *p = ival; - break; - } - -#ifdef HAVE_LONG_LONG - case 'L': {/* PY_LONG_LONG */ - PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * ); - PY_LONG_LONG ival = PyLong_AsLongLong( arg ); - if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) { - return converterr("long", arg, msgbuf, bufsize); - } else { - *p = ival; - } - break; - } - - case 'K': { /* long long sized bitfield */ - unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *); - unsigned PY_LONG_LONG ival; 
- if (PyInt_Check(arg)) - ival = PyInt_AsUnsignedLongMask(arg); - else if (PyLong_Check(arg)) - ival = PyLong_AsUnsignedLongLongMask(arg); - else - return converterr("integer", arg, msgbuf, bufsize); - *p = ival; - break; - } -#endif // HAVE_LONG_LONG - - case 'f': {/* float */ - float *p = va_arg(*p_va, float *); - double dval = PyFloat_AsDouble(arg); - if (PyErr_Occurred()) - return converterr("float", arg, msgbuf, bufsize); - else - *p = (float) dval; - break; - } - - case 'd': {/* double */ - double *p = va_arg(*p_va, double *); - double dval = PyFloat_AsDouble(arg); - if (PyErr_Occurred()) - return converterr("float", arg, msgbuf, bufsize); - else - *p = dval; - break; - } - -#ifndef WITHOUT_COMPLEX - case 'D': {/* complex double */ - Py_complex *p = va_arg(*p_va, Py_complex *); - Py_complex cval; - cval = PyComplex_AsCComplex(arg); - if (PyErr_Occurred()) - return converterr("complex", arg, msgbuf, bufsize); - else - *p = cval; - break; - } -#endif /* WITHOUT_COMPLEX */ - - case 'c': {/* char */ - char *p = va_arg(*p_va, char *); - if (PyString_Check(arg) && PyString_Size(arg) == 1) - *p = PyString_AS_STRING(arg)[0]; - else - return converterr("char", arg, msgbuf, bufsize); - break; - } - case 's': {/* string */ - if (*format == '*') { - Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *); - - if (PyString_Check(arg)) { - fflush(stdout); - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(arg), PyString_GET_SIZE(arg), - 1, 0); - } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { -#if 0 - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg), - 1, 0); -#else - return converterr("string or buffer", arg, msgbuf, bufsize); -#endif - } -#endif - else { /* any buffer-like object */ - char *buf; - if (getbuffer(arg, p, &buf) < 0) - return converterr(buf, arg, msgbuf, bufsize); - } - if (addcleanup(p, freelist, 
cleanup_buffer)) { - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - format++; - } else if (*format == '#') { - void **p = (void **)va_arg(*p_va, char **); - FETCH_SIZE; - - if (PyString_Check(arg)) { - *p = PyString_AS_STRING(arg); - STORE_SIZE(PyString_GET_SIZE(arg)); - } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - STORE_SIZE(PyString_GET_SIZE(uarg)); - } -#endif - else { /* any buffer-like object */ - char *buf; - Py_ssize_t count = convertbuffer(arg, p, &buf); - if (count < 0) - return converterr(buf, arg, msgbuf, bufsize); - STORE_SIZE(count); - } - format++; - } else { - char **p = va_arg(*p_va, char **); - - if (PyString_Check(arg)) - *p = PyString_AS_STRING(arg); -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - } -#endif - else - return converterr("string", arg, msgbuf, bufsize); - if ((Py_ssize_t)strlen(*p) != PyString_Size(arg)) - return converterr("string without null bytes", - arg, msgbuf, bufsize); - } - break; - } - - case 'z': {/* string, may be NULL (None) */ - if (*format == '*') { - Py_FatalError("'*' format not supported in PyArg_*\n"); -#if 0 - Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *); - - if (arg == Py_None) - PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0); - else if (PyString_Check(arg)) { - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(arg), PyString_GET_SIZE(arg), - 1, 0); - } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg), - 1, 0); - } -#endif - else { /* any 
buffer-like object */ - char *buf; - if (getbuffer(arg, p, &buf) < 0) - return converterr(buf, arg, msgbuf, bufsize); - } - if (addcleanup(p, freelist, cleanup_buffer)) { - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - format++; -#endif - } else if (*format == '#') { /* any buffer-like object */ - void **p = (void **)va_arg(*p_va, char **); - FETCH_SIZE; - - if (arg == Py_None) { - *p = 0; - STORE_SIZE(0); - } - else if (PyString_Check(arg)) { - *p = PyString_AS_STRING(arg); - STORE_SIZE(PyString_GET_SIZE(arg)); - } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - STORE_SIZE(PyString_GET_SIZE(uarg)); - } -#endif - else { /* any buffer-like object */ - char *buf; - Py_ssize_t count = convertbuffer(arg, p, &buf); - if (count < 0) - return converterr(buf, arg, msgbuf, bufsize); - STORE_SIZE(count); - } - format++; - } else { - char **p = va_arg(*p_va, char **); - - if (arg == Py_None) - *p = 0; - else if (PyString_Check(arg)) - *p = PyString_AS_STRING(arg); -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - } -#endif - else - return converterr("string or None", - arg, msgbuf, bufsize); - if (*format == '#') { - FETCH_SIZE; - assert(0); /* XXX redundant with if-case */ - if (arg == Py_None) - *q = 0; - else - *q = PyString_Size(arg); - format++; - } - else if (*p != NULL && - (Py_ssize_t)strlen(*p) != PyString_Size(arg)) - return converterr( - "string without null bytes or None", - arg, msgbuf, bufsize); - } - break; - } - case 'e': {/* encoded string */ - char **buffer; - const char *encoding; - PyObject *s; - Py_ssize_t size; - int recode_strings; - - /* Get 'e' parameter: the encoding name */ - encoding = (const char 
*)va_arg(*p_va, const char *); -#ifdef Py_USING_UNICODE - if (encoding == NULL) - encoding = PyUnicode_GetDefaultEncoding(); + PyObject *uarg; #endif - /* Get output buffer parameter: - 's' (recode all objects via Unicode) or - 't' (only recode non-string objects) - */ - if (*format == 's') - recode_strings = 1; - else if (*format == 't') - recode_strings = 0; - else - return converterr( - "(unknown parser marker combination)", - arg, msgbuf, bufsize); - buffer = (char **)va_arg(*p_va, char **); - format++; - if (buffer == NULL) - return converterr("(buffer is NULL)", - arg, msgbuf, bufsize); - - /* Encode object */ - if (!recode_strings && PyString_Check(arg)) { - s = arg; - Py_INCREF(s); - } - else { + switch (c) { + + case 'b': { /* unsigned byte -- very short int */ + char *p = va_arg(*p_va, char *); + long ival; + if (float_argument_error(arg)) + return converterr("integer", arg, msgbuf, bufsize); + ival = PyInt_AsLong(arg); + if (ival == -1 && PyErr_Occurred()) + return converterr("integer", arg, msgbuf, bufsize); + else if (ival < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned byte integer is less than minimum"); + return converterr("integer", arg, msgbuf, bufsize); + } + else if (ival > UCHAR_MAX) { + PyErr_SetString(PyExc_OverflowError, + "unsigned byte integer is greater than maximum"); + return converterr("integer", arg, msgbuf, bufsize); + } + else + *p = (unsigned char) ival; + break; + } + + case 'B': {/* byte sized bitfield - both signed and unsigned + values allowed */ + char *p = va_arg(*p_va, char *); + long ival; + if (float_argument_error(arg)) + return converterr("integer", arg, msgbuf, bufsize); + ival = PyInt_AsUnsignedLongMask(arg); + if (ival == -1 && PyErr_Occurred()) + return converterr("integer", arg, msgbuf, bufsize); + else + *p = (unsigned char) ival; + break; + } + + case 'h': {/* signed short int */ + short *p = va_arg(*p_va, short *); + long ival; + if (float_argument_error(arg)) + return converterr("integer", arg, msgbuf, 
bufsize); + ival = PyInt_AsLong(arg); + if (ival == -1 && PyErr_Occurred()) + return converterr("integer", arg, msgbuf, bufsize); + else if (ival < SHRT_MIN) { + PyErr_SetString(PyExc_OverflowError, + "signed short integer is less than minimum"); + return converterr("integer", arg, msgbuf, bufsize); + } + else if (ival > SHRT_MAX) { + PyErr_SetString(PyExc_OverflowError, + "signed short integer is greater than maximum"); + return converterr("integer", arg, msgbuf, bufsize); + } + else + *p = (short) ival; + break; + } + + case 'H': { /* short int sized bitfield, both signed and + unsigned allowed */ + unsigned short *p = va_arg(*p_va, unsigned short *); + long ival; + if (float_argument_error(arg)) + return converterr("integer", arg, msgbuf, bufsize); + ival = PyInt_AsUnsignedLongMask(arg); + if (ival == -1 && PyErr_Occurred()) + return converterr("integer", arg, msgbuf, bufsize); + else + *p = (unsigned short) ival; + break; + } + + case 'i': {/* signed int */ + int *p = va_arg(*p_va, int *); + long ival; + if (float_argument_error(arg)) + return converterr("integer", arg, msgbuf, bufsize); + ival = PyInt_AsLong(arg); + if (ival == -1 && PyErr_Occurred()) + return converterr("integer", arg, msgbuf, bufsize); + else if (ival > INT_MAX) { + PyErr_SetString(PyExc_OverflowError, + "signed integer is greater than maximum"); + return converterr("integer", arg, msgbuf, bufsize); + } + else if (ival < INT_MIN) { + PyErr_SetString(PyExc_OverflowError, + "signed integer is less than minimum"); + return converterr("integer", arg, msgbuf, bufsize); + } + else + *p = ival; + break; + } + + case 'I': { /* int sized bitfield, both signed and + unsigned allowed */ + unsigned int *p = va_arg(*p_va, unsigned int *); + unsigned int ival; + if (float_argument_error(arg)) + return converterr("integer", arg, msgbuf, bufsize); + ival = (unsigned int)PyInt_AsUnsignedLongMask(arg); + if (ival == (unsigned int)-1 && PyErr_Occurred()) + return converterr("integer", arg, msgbuf, bufsize); + 
else + *p = ival; + break; + } + + case 'n': /* Py_ssize_t */ +#if SIZEOF_SIZE_T != SIZEOF_LONG + { + Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *); + Py_ssize_t ival; + if (float_argument_error(arg)) + return converterr("integer", arg, msgbuf, bufsize); + ival = PyInt_AsSsize_t(arg); + if (ival == -1 && PyErr_Occurred()) + return converterr("integer", arg, msgbuf, bufsize); + *p = ival; + break; + } +#endif + /* Fall through from 'n' to 'l' if Py_ssize_t is int */ + case 'l': {/* long int */ + long *p = va_arg(*p_va, long *); + long ival; + if (float_argument_error(arg)) + return converterr("integer", arg, msgbuf, bufsize); + ival = PyInt_AsLong(arg); + if (ival == -1 && PyErr_Occurred()) + return converterr("integer", arg, msgbuf, bufsize); + else + *p = ival; + break; + } + + case 'k': { /* long sized bitfield */ + unsigned long *p = va_arg(*p_va, unsigned long *); + unsigned long ival; + if (PyInt_Check(arg)) + ival = PyInt_AsUnsignedLongMask(arg); + else if (PyLong_Check(arg)) + ival = PyLong_AsUnsignedLongMask(arg); + else + return converterr("integer", arg, msgbuf, bufsize); + *p = ival; + break; + } + +#ifdef HAVE_LONG_LONG + case 'L': {/* PY_LONG_LONG */ + PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * ); + PY_LONG_LONG ival; + if (float_argument_warning(arg)) + return converterr("long", arg, msgbuf, bufsize); + ival = PyLong_AsLongLong(arg); + if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) { + return converterr("long", arg, msgbuf, bufsize); + } else { + *p = ival; + } + break; + } + + case 'K': { /* long long sized bitfield */ + unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *); + unsigned PY_LONG_LONG ival; + if (PyInt_Check(arg)) + ival = PyInt_AsUnsignedLongMask(arg); + else if (PyLong_Check(arg)) + ival = PyLong_AsUnsignedLongLongMask(arg); + else + return converterr("integer", arg, msgbuf, bufsize); + *p = ival; + break; + } +#endif + + case 'f': {/* float */ + float *p = va_arg(*p_va, float *); + double dval = 
PyFloat_AsDouble(arg); + if (PyErr_Occurred()) + return converterr("float", arg, msgbuf, bufsize); + else + *p = (float) dval; + break; + } + + case 'd': {/* double */ + double *p = va_arg(*p_va, double *); + double dval = PyFloat_AsDouble(arg); + if (PyErr_Occurred()) + return converterr("float", arg, msgbuf, bufsize); + else + *p = dval; + break; + } + +#ifndef WITHOUT_COMPLEX + case 'D': {/* complex double */ + Py_complex *p = va_arg(*p_va, Py_complex *); + Py_complex cval; + cval = PyComplex_AsCComplex(arg); + if (PyErr_Occurred()) + return converterr("complex", arg, msgbuf, bufsize); + else + *p = cval; + break; + } +#endif /* WITHOUT_COMPLEX */ + + case 'c': {/* char */ + char *p = va_arg(*p_va, char *); + if (PyString_Check(arg) && PyString_Size(arg) == 1) + *p = PyString_AS_STRING(arg)[0]; + else + return converterr("char", arg, msgbuf, bufsize); + break; + } + + case 's': {/* string */ + if (*format == '*') { + Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *); + + if (PyString_Check(arg)) { + PyBuffer_FillInfo(p, arg, + PyString_AS_STRING(arg), PyString_GET_SIZE(arg), + 1, 0); + } #ifdef Py_USING_UNICODE - PyObject *u; + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + PyBuffer_FillInfo(p, arg, + PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg), + 1, 0); + } +#endif + else { /* any buffer-like object */ + char *buf; + if (getbuffer(arg, p, &buf) < 0) + return converterr(buf, arg, msgbuf, bufsize); + } + if (addcleanup(p, freelist, cleanup_buffer)) { + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + format++; + } else if (*format == '#') { + void **p = (void **)va_arg(*p_va, char **); + FETCH_SIZE; - /* Convert object to Unicode */ - u = PyUnicode_FromObject(arg); - if (u == NULL) - return converterr( - "string or unicode or text buffer", - arg, msgbuf, bufsize); - - /* Encode object; use default error handling */ - s = 
PyUnicode_AsEncodedString(u, - encoding, - NULL); - Py_DECREF(u); - if (s == NULL) - return converterr("(encoding failed)", - arg, msgbuf, bufsize); - if (!PyString_Check(s)) { - Py_DECREF(s); - return converterr( - "(encoder failed to return a string)", - arg, msgbuf, bufsize); - } + if (PyString_Check(arg)) { + *p = PyString_AS_STRING(arg); + STORE_SIZE(PyString_GET_SIZE(arg)); + } +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyString_AS_STRING(uarg); + STORE_SIZE(PyString_GET_SIZE(uarg)); + } +#endif + else { /* any buffer-like object */ + char *buf; + Py_ssize_t count = convertbuffer(arg, p, &buf); + if (count < 0) + return converterr(buf, arg, msgbuf, bufsize); + STORE_SIZE(count); + } + format++; + } else { + char **p = va_arg(*p_va, char **); + + if (PyString_Check(arg)) + *p = PyString_AS_STRING(arg); +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyString_AS_STRING(uarg); + } +#endif + else + return converterr("string", arg, msgbuf, bufsize); + if ((Py_ssize_t)strlen(*p) != PyString_Size(arg)) + return converterr("string without null bytes", + arg, msgbuf, bufsize); + } + break; + } + + case 'z': {/* string, may be NULL (None) */ + if (*format == '*') { + Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *); + + if (arg == Py_None) + PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0); + else if (PyString_Check(arg)) { + PyBuffer_FillInfo(p, arg, + PyString_AS_STRING(arg), PyString_GET_SIZE(arg), + 1, 0); + } +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + PyBuffer_FillInfo(p, arg, + PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg), + 1, 0); + } +#endif + 
else { /* any buffer-like object */ + char *buf; + if (getbuffer(arg, p, &buf) < 0) + return converterr(buf, arg, msgbuf, bufsize); + } + if (addcleanup(p, freelist, cleanup_buffer)) { + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + format++; + } else if (*format == '#') { /* any buffer-like object */ + void **p = (void **)va_arg(*p_va, char **); + FETCH_SIZE; + + if (arg == Py_None) { + *p = 0; + STORE_SIZE(0); + } + else if (PyString_Check(arg)) { + *p = PyString_AS_STRING(arg); + STORE_SIZE(PyString_GET_SIZE(arg)); + } +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyString_AS_STRING(uarg); + STORE_SIZE(PyString_GET_SIZE(uarg)); + } +#endif + else { /* any buffer-like object */ + char *buf; + Py_ssize_t count = convertbuffer(arg, p, &buf); + if (count < 0) + return converterr(buf, arg, msgbuf, bufsize); + STORE_SIZE(count); + } + format++; + } else { + char **p = va_arg(*p_va, char **); + + if (arg == Py_None) + *p = 0; + else if (PyString_Check(arg)) + *p = PyString_AS_STRING(arg); +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyString_AS_STRING(uarg); + } +#endif + else + return converterr("string or None", + arg, msgbuf, bufsize); + if (*format == '#') { + FETCH_SIZE; + assert(0); /* XXX redundant with if-case */ + if (arg == Py_None) { + STORE_SIZE(0); + } else { + STORE_SIZE(PyString_Size(arg)); + } + format++; + } + else if (*p != NULL && + (Py_ssize_t)strlen(*p) != PyString_Size(arg)) + return converterr( + "string without null bytes or None", + arg, msgbuf, bufsize); + } + break; + } + + case 'e': {/* encoded string */ + char **buffer; + const char *encoding; + PyObject *s; + Py_ssize_t size; + int recode_strings; + + /* Get 'e' parameter: the encoding 
name */ + encoding = (const char *)va_arg(*p_va, const char *); +#ifdef Py_USING_UNICODE + if (encoding == NULL) + encoding = PyUnicode_GetDefaultEncoding(); +#endif + + /* Get output buffer parameter: + 's' (recode all objects via Unicode) or + 't' (only recode non-string objects) + */ + if (*format == 's') + recode_strings = 1; + else if (*format == 't') + recode_strings = 0; + else + return converterr( + "(unknown parser marker combination)", + arg, msgbuf, bufsize); + buffer = (char **)va_arg(*p_va, char **); + format++; + if (buffer == NULL) + return converterr("(buffer is NULL)", + arg, msgbuf, bufsize); + + /* Encode object */ + if (!recode_strings && PyString_Check(arg)) { + s = arg; + Py_INCREF(s); + } + else { +#ifdef Py_USING_UNICODE + PyObject *u; + + /* Convert object to Unicode */ + u = PyUnicode_FromObject(arg); + if (u == NULL) + return converterr( + "string or unicode or text buffer", + arg, msgbuf, bufsize); + + /* Encode object; use default error handling */ + s = PyUnicode_AsEncodedString(u, + encoding, + NULL); + Py_DECREF(u); + if (s == NULL) + return converterr("(encoding failed)", + arg, msgbuf, bufsize); + if (!PyString_Check(s)) { + Py_DECREF(s); + return converterr( + "(encoder failed to return a string)", + arg, msgbuf, bufsize); + } #else - return converterr("string", arg, msgbuf, bufsize); + return converterr("string", arg, msgbuf, bufsize); #endif - } - size = PyString_GET_SIZE(s); + } + size = PyString_GET_SIZE(s); - /* Write output; output is guaranteed to be 0-terminated */ - if (*format == '#') { - /* Using buffer length parameter '#': - - - if *buffer is NULL, a new buffer of the - needed size is allocated and the data - copied into it; *buffer is updated to point - to the new buffer; the caller is - responsible for PyMem_Free()ing it after - usage + /* Write output; output is guaranteed to be 0-terminated */ + if (*format == '#') { + /* Using buffer length parameter '#': - - if *buffer is not NULL, the data is - copied to 
*buffer; *buffer_len has to be - set to the size of the buffer on input; - buffer overflow is signalled with an error; - buffer has to provide enough room for the - encoded string plus the trailing 0-byte - - - in both cases, *buffer_len is updated to - the size of the buffer /excluding/ the - trailing 0-byte - - */ - FETCH_SIZE; + - if *buffer is NULL, a new buffer of the + needed size is allocated and the data + copied into it; *buffer is updated to point + to the new buffer; the caller is + responsible for PyMem_Free()ing it after + usage - format++; - if (q == NULL && q2 == NULL) { - Py_DECREF(s); - return converterr( - "(buffer_len is NULL)", - arg, msgbuf, bufsize); - } - if (*buffer == NULL) { - *buffer = PyMem_NEW(char, size + 1); - if (*buffer == NULL) { - Py_DECREF(s); - return converterr( - "(memory error)", - arg, msgbuf, bufsize); - } - if (addcleanup(*buffer, freelist, cleanup_ptr)) { - Py_DECREF(s); - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - } else { - if (size + 1 > BUFFER_LEN) { - Py_DECREF(s); - return converterr( - "(buffer overflow)", - arg, msgbuf, bufsize); - } - } - memcpy(*buffer, - PyString_AS_STRING(s), - size + 1); - STORE_SIZE(size); - } else { - /* Using a 0-terminated buffer: - - - the encoded string has to be 0-terminated - for this variant to work; if it is not, an - error raised + - if *buffer is not NULL, the data is + copied to *buffer; *buffer_len has to be + set to the size of the buffer on input; + buffer overflow is signalled with an error; + buffer has to provide enough room for the + encoded string plus the trailing 0-byte - - a new buffer of the needed size is - allocated and the data copied into it; - *buffer is updated to point to the new - buffer; the caller is responsible for - PyMem_Free()ing it after usage + - in both cases, *buffer_len is updated to + the size of the buffer /excluding/ the + trailing 0-byte - */ - if ((Py_ssize_t)strlen(PyString_AS_STRING(s)) - != size) { - 
Py_DECREF(s); - return converterr( - "encoded string without NULL bytes", - arg, msgbuf, bufsize); - } - *buffer = PyMem_NEW(char, size + 1); - if (*buffer == NULL) { - Py_DECREF(s); - return converterr("(memory error)", - arg, msgbuf, bufsize); - } - if (addcleanup(*buffer, freelist, cleanup_ptr)) { - Py_DECREF(s); - return converterr("(cleanup problem)", - arg, msgbuf, bufsize); - } - memcpy(*buffer, - PyString_AS_STRING(s), - size + 1); - } - Py_DECREF(s); - break; - } + */ + FETCH_SIZE; + + format++; + if (q == NULL && q2 == NULL) { + Py_DECREF(s); + return converterr( + "(buffer_len is NULL)", + arg, msgbuf, bufsize); + } + if (*buffer == NULL) { + *buffer = PyMem_NEW(char, size + 1); + if (*buffer == NULL) { + Py_DECREF(s); + return converterr( + "(memory error)", + arg, msgbuf, bufsize); + } + if (addcleanup(*buffer, freelist, cleanup_ptr)) { + Py_DECREF(s); + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + } else { + if (size + 1 > BUFFER_LEN) { + Py_DECREF(s); + return converterr( + "(buffer overflow)", + arg, msgbuf, bufsize); + } + } + memcpy(*buffer, + PyString_AS_STRING(s), + size + 1); + STORE_SIZE(size); + } else { + /* Using a 0-terminated buffer: + + - the encoded string has to be 0-terminated + for this variant to work; if it is not, an + error raised + + - a new buffer of the needed size is + allocated and the data copied into it; + *buffer is updated to point to the new + buffer; the caller is responsible for + PyMem_Free()ing it after usage + + */ + if ((Py_ssize_t)strlen(PyString_AS_STRING(s)) + != size) { + Py_DECREF(s); + return converterr( + "encoded string without NULL bytes", + arg, msgbuf, bufsize); + } + *buffer = PyMem_NEW(char, size + 1); + if (*buffer == NULL) { + Py_DECREF(s); + return converterr("(memory error)", + arg, msgbuf, bufsize); + } + if (addcleanup(*buffer, freelist, cleanup_ptr)) { + Py_DECREF(s); + return converterr("(cleanup problem)", + arg, msgbuf, bufsize); + } + memcpy(*buffer, + 
PyString_AS_STRING(s), + size + 1); + } + Py_DECREF(s); + break; + } #ifdef Py_USING_UNICODE - case 'u': {/* raw unicode buffer (Py_UNICODE *) */ - if (*format == '#') { /* any buffer-like object */ - void **p = (void **)va_arg(*p_va, char **); - FETCH_SIZE; - if (PyUnicode_Check(arg)) { - *p = PyUnicode_AS_UNICODE(arg); - STORE_SIZE(PyUnicode_GET_SIZE(arg)); - } - else { - return converterr("cannot convert raw buffers", - arg, msgbuf, bufsize); - } - format++; - } else { - Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **); - if (PyUnicode_Check(arg)) - *p = PyUnicode_AS_UNICODE(arg); - else - return converterr("unicode", arg, msgbuf, bufsize); - } - break; - } + case 'u': {/* raw unicode buffer (Py_UNICODE *) */ + if (*format == '#') { /* any buffer-like object */ + void **p = (void **)va_arg(*p_va, char **); + FETCH_SIZE; + if (PyUnicode_Check(arg)) { + *p = PyUnicode_AS_UNICODE(arg); + STORE_SIZE(PyUnicode_GET_SIZE(arg)); + } + else { + return converterr("cannot convert raw buffers", + arg, msgbuf, bufsize); + } + format++; + } else { + Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **); + if (PyUnicode_Check(arg)) + *p = PyUnicode_AS_UNICODE(arg); + else + return converterr("unicode", arg, msgbuf, bufsize); + } + break; + } #endif - case 'S': { /* string object */ - PyObject **p = va_arg(*p_va, PyObject **); - if (PyString_Check(arg)) - *p = arg; - else - return converterr("string", arg, msgbuf, bufsize); - break; - } - + case 'S': { /* string object */ + PyObject **p = va_arg(*p_va, PyObject **); + if (PyString_Check(arg)) + *p = arg; + else + return converterr("string", arg, msgbuf, bufsize); + break; + } + #ifdef Py_USING_UNICODE - case 'U': { /* Unicode object */ - PyObject **p = va_arg(*p_va, PyObject **); - if (PyUnicode_Check(arg)) - *p = arg; - else - return converterr("unicode", arg, msgbuf, bufsize); - break; - } + case 'U': { /* Unicode object */ + PyObject **p = va_arg(*p_va, PyObject **); + if (PyUnicode_Check(arg)) + *p = arg; + else + return 
converterr("unicode", arg, msgbuf, bufsize); + break; + } #endif - case 'O': { /* object */ - PyTypeObject *type; - PyObject **p; - if (*format == '!') { - type = va_arg(*p_va, PyTypeObject*); - p = va_arg(*p_va, PyObject **); - format++; - if (PyType_IsSubtype(arg->ob_type, type)) - *p = arg; - else - return converterr(type->tp_name, arg, msgbuf, bufsize); - } - else if (*format == '?') { - inquiry pred = va_arg(*p_va, inquiry); - p = va_arg(*p_va, PyObject **); - format++; - if ((*pred)(arg)) - *p = arg; - else - return converterr("(unspecified)", - arg, msgbuf, bufsize); - - } - else if (*format == '&') { - typedef int (*converter)(PyObject *, void *); - converter convert = va_arg(*p_va, converter); - void *addr = va_arg(*p_va, void *); - format++; - if (! (*convert)(arg, addr)) - return converterr("(unspecified)", - arg, msgbuf, bufsize); - } - else { - p = va_arg(*p_va, PyObject **); - *p = arg; - } - break; - } - - case 'w': { /* memory buffer, read-write access */ - Py_FatalError("'w' unsupported\n"); -#if 0 - void **p = va_arg(*p_va, void **); - void *res; - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - Py_ssize_t count; + case 'O': { /* object */ + PyTypeObject *type; + PyObject **p; + if (*format == '!') { + type = va_arg(*p_va, PyTypeObject*); + p = va_arg(*p_va, PyObject **); + format++; + if (PyType_IsSubtype(arg->ob_type, type)) + *p = arg; + else + return converterr(type->tp_name, arg, msgbuf, bufsize); - if (pb && pb->bf_releasebuffer && *format != '*') - /* Buffer must be released, yet caller does not use - the Py_buffer protocol. */ - return converterr("pinned buffer", arg, msgbuf, bufsize); + } + else if (*format == '?') { + inquiry pred = va_arg(*p_va, inquiry); + p = va_arg(*p_va, PyObject **); + format++; + if ((*pred)(arg)) + *p = arg; + else + return converterr("(unspecified)", + arg, msgbuf, bufsize); - if (pb && pb->bf_getbuffer && *format == '*') { - /* Caller is interested in Py_buffer, and the object - supports it directly. 
*/ - format++; - if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) { - PyErr_Clear(); - return converterr("read-write buffer", arg, msgbuf, bufsize); - } - if (addcleanup(p, freelist, cleanup_buffer)) { - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C')) - return converterr("contiguous buffer", arg, msgbuf, bufsize); - break; - } + } + else if (*format == '&') { + typedef int (*converter)(PyObject *, void *); + converter convert = va_arg(*p_va, converter); + void *addr = va_arg(*p_va, void *); + format++; + if (! (*convert)(arg, addr)) + return converterr("(unspecified)", + arg, msgbuf, bufsize); + } + else { + p = va_arg(*p_va, PyObject **); + *p = arg; + } + break; + } - if (pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL) - return converterr("read-write buffer", arg, msgbuf, bufsize); - if ((*pb->bf_getsegcount)(arg, NULL) != 1) - return converterr("single-segment read-write buffer", - arg, msgbuf, bufsize); - if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0) - return converterr("(unspecified)", arg, msgbuf, bufsize); - if (*format == '*') { - PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0); - format++; - } - else { - *p = res; - if (*format == '#') { - FETCH_SIZE; - STORE_SIZE(count); - format++; - } - } - break; -#endif - } - - case 't': { /* 8-bit character buffer, read-only access */ - char **p = va_arg(*p_va, char **); - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - Py_ssize_t count; -#if 0 - if (*format++ != '#') - return converterr( - "invalid use of 't' format character", - arg, msgbuf, bufsize); -#endif - if (!PyType_HasFeature(arg->ob_type, - Py_TPFLAGS_HAVE_GETCHARBUFFER) -#if 0 - || pb == NULL || pb->bf_getcharbuffer == NULL || - pb->bf_getsegcount == NULL -#endif - ) - return converterr( - "string or read-only character buffer", - arg, msgbuf, bufsize); -#if 0 - if (pb->bf_getsegcount(arg, NULL) != 1) - return converterr( 
- "string or single-segment read-only buffer", - arg, msgbuf, bufsize); + case 'w': { /* memory buffer, read-write access */ + void **p = va_arg(*p_va, void **); + void *res; + PyBufferProcs *pb = arg->ob_type->tp_as_buffer; + Py_ssize_t count; - if (pb->bf_releasebuffer) - return converterr( - "string or pinned buffer", - arg, msgbuf, bufsize); -#endif - count = pb->bf_getcharbuffer(arg, 0, p); -#if 0 - if (count < 0) - return converterr("(unspecified)", arg, msgbuf, bufsize); -#endif - { - FETCH_SIZE; - STORE_SIZE(count); - ++format; - } - break; - } - default: - return converterr("impossible", arg, msgbuf, bufsize); - - } - - *p_format = format; - return NULL; + if (pb && pb->bf_releasebuffer && *format != '*') + /* Buffer must be released, yet caller does not use + the Py_buffer protocol. */ + return converterr("pinned buffer", arg, msgbuf, bufsize); + + if (pb && pb->bf_getbuffer && *format == '*') { + /* Caller is interested in Py_buffer, and the object + supports it directly. */ + format++; + if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) { + PyErr_Clear(); + return converterr("read-write buffer", arg, msgbuf, bufsize); + } + if (addcleanup(p, freelist, cleanup_buffer)) { + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C')) + return converterr("contiguous buffer", arg, msgbuf, bufsize); + break; + } + + if (pb == NULL || + pb->bf_getwritebuffer == NULL || + pb->bf_getsegcount == NULL) + return converterr("read-write buffer", arg, msgbuf, bufsize); + if ((*pb->bf_getsegcount)(arg, NULL) != 1) + return converterr("single-segment read-write buffer", + arg, msgbuf, bufsize); + if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0) + return converterr("(unspecified)", arg, msgbuf, bufsize); + if (*format == '*') { + PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0); + format++; + } + else { + *p = res; + if (*format == '#') { + FETCH_SIZE; + STORE_SIZE(count); + format++; + 
} + } + break; + } + + case 't': { /* 8-bit character buffer, read-only access */ + char **p = va_arg(*p_va, char **); + PyBufferProcs *pb = arg->ob_type->tp_as_buffer; + Py_ssize_t count; + + if (*format++ != '#') + return converterr( + "invalid use of 't' format character", + arg, msgbuf, bufsize); + if (!PyType_HasFeature(arg->ob_type, + Py_TPFLAGS_HAVE_GETCHARBUFFER) || + pb == NULL || pb->bf_getcharbuffer == NULL || + pb->bf_getsegcount == NULL) + return converterr( + "string or read-only character buffer", + arg, msgbuf, bufsize); + + if (pb->bf_getsegcount(arg, NULL) != 1) + return converterr( + "string or single-segment read-only buffer", + arg, msgbuf, bufsize); + + if (pb->bf_releasebuffer) + return converterr( + "string or pinned buffer", + arg, msgbuf, bufsize); + + count = pb->bf_getcharbuffer(arg, 0, p); + if (count < 0) + return converterr("(unspecified)", arg, msgbuf, bufsize); + { + FETCH_SIZE; + STORE_SIZE(count); + } + break; + } + + default: + return converterr("impossible", arg, msgbuf, bufsize); + + } + + *p_format = format; + return NULL; } static Py_ssize_t convertbuffer(PyObject *arg, void **p, char **errmsg) { - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - Py_ssize_t count; - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL || - pb->bf_releasebuffer != NULL) { - *errmsg = "string or read-only buffer"; - return -1; - } - if ((*pb->bf_getsegcount)(arg, NULL) != 1) { - *errmsg = "string or single-segment read-only buffer"; - return -1; - } - if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) { - *errmsg = "(unspecified)"; - } - return count; + PyBufferProcs *pb = arg->ob_type->tp_as_buffer; + Py_ssize_t count; + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL || + pb->bf_releasebuffer != NULL) { + *errmsg = "string or read-only buffer"; + return -1; + } + if ((*pb->bf_getsegcount)(arg, NULL) != 1) { + *errmsg = "string or single-segment read-only buffer"; + return -1; + 
}
+    if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) {
+        *errmsg = "(unspecified)";
+    }
+    return count;
 }

 static int
 getbuffer(PyObject *arg, Py_buffer *view, char **errmsg)
 {
-    void *buf;
-    Py_ssize_t count;
-    PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
-    if (pb == NULL) {
-        *errmsg = "string or buffer";
-        return -1;
-    }
-    if (pb->bf_getbuffer) {
-        if (pb->bf_getbuffer(arg, view, 0) < 0) {
-            *errmsg = "convertible to a buffer";
-            return -1;
-        }
-        if (!PyBuffer_IsContiguous(view, 'C')) {
-            *errmsg = "contiguous buffer";
-            return -1;
-        }
-        return 0;
-    }
+    void *buf;
+    Py_ssize_t count;
+    PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
+    if (pb == NULL) {
+        *errmsg = "string or buffer";
+        return -1;
+    }
+    if (pb->bf_getbuffer) {
+        if (pb->bf_getbuffer(arg, view, 0) < 0) {
+            *errmsg = "convertible to a buffer";
+            return -1;
+        }
+        if (!PyBuffer_IsContiguous(view, 'C')) {
+            *errmsg = "contiguous buffer";
+            return -1;
+        }
+        return 0;
+    }

-    count = convertbuffer(arg, &buf, errmsg);
-    if (count < 0) {
-        *errmsg = "convertible to a buffer";
-        return count;
-    }
-    PyBuffer_FillInfo(view, NULL, buf, count, 1, 0);
-    return 0;
+    count = convertbuffer(arg, &buf, errmsg);
+    if (count < 0) {
+        *errmsg = "convertible to a buffer";
+        return count;
+    }
+    PyBuffer_FillInfo(view, arg, buf, count, 1, 0);
+    return 0;
 }

 /* Support for keyword arguments donated by
@@ -1395,501 +1410,487 @@
 /* Return false (0) for error, else true. */
 int
 PyArg_ParseTupleAndKeywords(PyObject *args,
-                            PyObject *keywords,
-                            const char *format,
-                            char **kwlist, ...)
+                            PyObject *keywords,
+                            const char *format,
+                            char **kwlist, ...)
{ - int retval; - va_list va; + int retval; + va_list va; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } - va_start(va, kwlist); - retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0); - va_end(va); - return retval; + va_start(va, kwlist); + retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0); + va_end(va); + return retval; } int _PyArg_ParseTupleAndKeywords_SizeT(PyObject *args, - PyObject *keywords, - const char *format, - char **kwlist, ...) + PyObject *keywords, + const char *format, + char **kwlist, ...) { - int retval; - va_list va; + int retval; + va_list va; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } - va_start(va, kwlist); - retval = vgetargskeywords(args, keywords, format, - kwlist, &va, FLAG_SIZE_T); - va_end(va); - return retval; + va_start(va, kwlist); + retval = vgetargskeywords(args, keywords, format, + kwlist, &va, FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_VaParseTupleAndKeywords(PyObject *args, PyObject *keywords, - const char *format, + const char *format, char **kwlist, va_list va) { - int retval; - va_list lva; + int retval; + va_list lva; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + 
(keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0); - return retval; + retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0); + return retval; } int _PyArg_VaParseTupleAndKeywords_SizeT(PyObject *args, - PyObject *keywords, - const char *format, - char **kwlist, va_list va) + PyObject *keywords, + const char *format, + char **kwlist, va_list va) { - int retval; - va_list lva; + int retval; + va_list lva; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - retval = vgetargskeywords(args, keywords, format, - kwlist, &lva, FLAG_SIZE_T); - return retval; + retval = vgetargskeywords(args, keywords, format, + kwlist, &lva, FLAG_SIZE_T); + return retval; } #define IS_END_OF_FORMAT(c) (c == '\0' || c == ';' || c == ':') static int vgetargskeywords(PyObject *args, PyObject *keywords, const char *format, - char **kwlist, va_list *p_va, int flags) + char **kwlist, va_list *p_va, int flags) { - char msgbuf[512]; - int levels[32]; - const char *fname, *msg, *custom_msg, *keyword; - int min = INT_MAX; - int i, len, nargs, nkeywords; - PyObject *current_arg; - freelist_t freelist = {0, NULL}; + char 
msgbuf[512]; + int levels[32]; + const char *fname, *msg, *custom_msg, *keyword; + int min = INT_MAX; + int i, len, nargs, nkeywords; + PyObject *freelist = NULL, *current_arg; + assert(args != NULL && PyTuple_Check(args)); + assert(keywords == NULL || PyDict_Check(keywords)); + assert(format != NULL); + assert(kwlist != NULL); + assert(p_va != NULL); - assert(args != NULL && PyTuple_Check(args)); - assert(keywords == NULL || PyDict_Check(keywords)); - assert(format != NULL); - assert(kwlist != NULL); - assert(p_va != NULL); + /* grab the function name or custom error msg first (mutually exclusive) */ + fname = strchr(format, ':'); + if (fname) { + fname++; + custom_msg = NULL; + } + else { + custom_msg = strchr(format,';'); + if (custom_msg) + custom_msg++; + } - /* grab the function name or custom error msg first (mutually exclusive) */ - fname = strchr(format, ':'); - if (fname) { - fname++; - custom_msg = NULL; - } - else { - custom_msg = strchr(format,';'); - if (custom_msg) - custom_msg++; - } + /* scan kwlist and get greatest possible nbr of args */ + for (len=0; kwlist[len]; len++) + continue; - /* scan kwlist and get greatest possible nbr of args */ - for (len=0; kwlist[len]; len++) - continue; + nargs = PyTuple_GET_SIZE(args); + nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords); + if (nargs + nkeywords > len) { + PyErr_Format(PyExc_TypeError, "%s%s takes at most %d " + "argument%s (%d given)", + (fname == NULL) ? "function" : fname, + (fname == NULL) ? "" : "()", + len, + (len == 1) ? 
"" : "s", + nargs + nkeywords); + return 0; + } - freelist.entries = PyMem_New(freelistentry_t, len); + /* convert tuple args and keyword args in same loop, using kwlist to drive process */ + for (i = 0; i < len; i++) { + keyword = kwlist[i]; + if (*format == '|') { + min = i; + format++; + } + if (IS_END_OF_FORMAT(*format)) { + PyErr_Format(PyExc_RuntimeError, + "More keyword list entries (%d) than " + "format specifiers (%d)", len, i); + return cleanreturn(0, freelist); + } + current_arg = NULL; + if (nkeywords) { + current_arg = PyDict_GetItemString(keywords, keyword); + } + if (current_arg) { + --nkeywords; + if (i < nargs) { + /* arg present in tuple and in dict */ + PyErr_Format(PyExc_TypeError, + "Argument given by name ('%s') " + "and position (%d)", + keyword, i+1); + return cleanreturn(0, freelist); + } + } + else if (nkeywords && PyErr_Occurred()) + return cleanreturn(0, freelist); + else if (i < nargs) + current_arg = PyTuple_GET_ITEM(args, i); - nargs = PyTuple_GET_SIZE(args); - nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords); - if (nargs + nkeywords > len) { - PyErr_Format(PyExc_TypeError, "%s%s takes at most %d " - "argument%s (%d given)", - (fname == NULL) ? "function" : fname, - (fname == NULL) ? "" : "()", - len, - (len == 1) ? 
"" : "s", - nargs + nkeywords); - return cleanreturn(0, &freelist); - } + if (current_arg) { + msg = convertitem(current_arg, &format, p_va, flags, + levels, msgbuf, sizeof(msgbuf), &freelist); + if (msg) { + seterror(i+1, msg, levels, fname, custom_msg); + return cleanreturn(0, freelist); + } + continue; + } - /* convert tuple args and keyword args in same loop, using kwlist to drive process */ - for (i = 0; i < len; i++) { - keyword = kwlist[i]; - if (*format == '|') { - min = i; - format++; - } - if (IS_END_OF_FORMAT(*format)) { - PyErr_Format(PyExc_RuntimeError, - "More keyword list entries (%d) than " - "format specifiers (%d)", len, i); - return cleanreturn(0, &freelist); - } - current_arg = NULL; - if (nkeywords) { - current_arg = PyDict_GetItemString(keywords, keyword); - } - if (current_arg) { - --nkeywords; - if (i < nargs) { - /* arg present in tuple and in dict */ - PyErr_Format(PyExc_TypeError, - "Argument given by name ('%s') " - "and position (%d)", - keyword, i+1); - return cleanreturn(0, &freelist); - } - } - else if (nkeywords && PyErr_Occurred()) - return cleanreturn(0, &freelist); - else if (i < nargs) - current_arg = PyTuple_GET_ITEM(args, i); - - if (current_arg) { - msg = convertitem(current_arg, &format, p_va, flags, - levels, msgbuf, sizeof(msgbuf), &freelist); - if (msg) { - seterror(i+1, msg, levels, fname, custom_msg); - return cleanreturn(0, &freelist); - } - continue; - } + if (i < min) { + PyErr_Format(PyExc_TypeError, "Required argument " + "'%s' (pos %d) not found", + keyword, i+1); + return cleanreturn(0, freelist); + } + /* current code reports success when all required args + * fulfilled and no keyword args left, with no further + * validation. XXX Maybe skip this in debug build ? 
+ */ + if (!nkeywords) + return cleanreturn(1, freelist); - if (i < min) { - PyErr_Format(PyExc_TypeError, "Required argument " - "'%s' (pos %d) not found", - keyword, i+1); - return cleanreturn(0, &freelist); - } - /* current code reports success when all required args - * fulfilled and no keyword args left, with no further - * validation. XXX Maybe skip this in debug build ? - */ - if (!nkeywords) - return cleanreturn(1, &freelist); + /* We are into optional args, skip thru to any remaining + * keyword args */ + msg = skipitem(&format, p_va, flags); + if (msg) { + PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg, + format); + return cleanreturn(0, freelist); + } + } - /* We are into optional args, skip thru to any remaining - * keyword args */ - msg = skipitem(&format, p_va, flags); - if (msg) { - PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg, - format); - return cleanreturn(0, &freelist); - } - } + if (!IS_END_OF_FORMAT(*format) && *format != '|') { + PyErr_Format(PyExc_RuntimeError, + "more argument specifiers than keyword list entries " + "(remaining format:'%s')", format); + return cleanreturn(0, freelist); + } - if (!IS_END_OF_FORMAT(*format) && *format != '|') { - PyErr_Format(PyExc_RuntimeError, - "more argument specifiers than keyword list entries " - "(remaining format:'%s')", format); - return cleanreturn(0, &freelist); - } + /* make sure there are no extraneous keyword arguments */ + if (nkeywords > 0) { + PyObject *key, *value; + Py_ssize_t pos = 0; + while (PyDict_Next(keywords, &pos, &key, &value)) { + int match = 0; + char *ks; + if (!PyString_Check(key)) { + PyErr_SetString(PyExc_TypeError, + "keywords must be strings"); + return cleanreturn(0, freelist); + } + ks = PyString_AsString(key); + for (i = 0; i < len; i++) { + if (!strcmp(ks, kwlist[i])) { + match = 1; + break; + } + } + if (!match) { + PyErr_Format(PyExc_TypeError, + "'%s' is an invalid keyword " + "argument for this function", + ks); + return cleanreturn(0, freelist); + } + } + } - 
/* make sure there are no extraneous keyword arguments */ - if (nkeywords > 0) { - PyObject *key, *value; - Py_ssize_t pos = 0; - while (PyDict_Next(keywords, &pos, &key, &value)) { - int match = 0; - char *ks; - if (!PyString_Check(key)) { - PyErr_SetString(PyExc_TypeError, - "keywords must be strings"); - return cleanreturn(0, &freelist); - } - ks = PyString_AsString(key); - for (i = 0; i < len; i++) { - if (!strcmp(ks, kwlist[i])) { - match = 1; - break; - } - } - if (!match) { - PyErr_Format(PyExc_TypeError, - "'%s' is an invalid keyword " - "argument for this function", - ks); - return cleanreturn(0, &freelist); - } - } - } - - return cleanreturn(1, &freelist); + return cleanreturn(1, freelist); } static char * skipitem(const char **p_format, va_list *p_va, int flags) { - const char *format = *p_format; - char c = *format++; - - switch (c) { + const char *format = *p_format; + char c = *format++; - /* simple codes - * The individual types (second arg of va_arg) are irrelevant */ + switch (c) { - case 'b': /* byte -- very short int */ - case 'B': /* byte as bitfield */ - case 'h': /* short int */ - case 'H': /* short int as bitfield */ - case 'i': /* int */ - case 'I': /* int sized bitfield */ - case 'l': /* long int */ - case 'k': /* long int sized bitfield */ + /* simple codes + * The individual types (second arg of va_arg) are irrelevant */ + + case 'b': /* byte -- very short int */ + case 'B': /* byte as bitfield */ + case 'h': /* short int */ + case 'H': /* short int as bitfield */ + case 'i': /* int */ + case 'I': /* int sized bitfield */ + case 'l': /* long int */ + case 'k': /* long int sized bitfield */ #ifdef HAVE_LONG_LONG - case 'L': /* PY_LONG_LONG */ - case 'K': /* PY_LONG_LONG sized bitfield */ + case 'L': /* PY_LONG_LONG */ + case 'K': /* PY_LONG_LONG sized bitfield */ #endif - case 'f': /* float */ - case 'd': /* double */ + case 'f': /* float */ + case 'd': /* double */ #ifndef WITHOUT_COMPLEX - case 'D': /* complex double */ + case 'D': /* 
complex double */ #endif - case 'c': /* char */ - { - (void) va_arg(*p_va, void *); - break; - } + case 'c': /* char */ + { + (void) va_arg(*p_va, void *); + break; + } - case 'n': /* Py_ssize_t */ - { - (void) va_arg(*p_va, Py_ssize_t *); - break; - } - - /* string codes */ - - case 'e': /* string with encoding */ - { - (void) va_arg(*p_va, const char *); - if (!(*format == 's' || *format == 't')) - /* after 'e', only 's' and 't' is allowed */ - goto err; - format++; - /* explicit fallthrough to string cases */ - } - - case 's': /* string */ - case 'z': /* string or None */ + case 'n': /* Py_ssize_t */ + { + (void) va_arg(*p_va, Py_ssize_t *); + break; + } + + /* string codes */ + + case 'e': /* string with encoding */ + { + (void) va_arg(*p_va, const char *); + if (!(*format == 's' || *format == 't')) + /* after 'e', only 's' and 't' is allowed */ + goto err; + format++; + /* explicit fallthrough to string cases */ + } + + case 's': /* string */ + case 'z': /* string or None */ #ifdef Py_USING_UNICODE - case 'u': /* unicode string */ + case 'u': /* unicode string */ #endif - case 't': /* buffer, read-only */ - case 'w': /* buffer, read-write */ - { - (void) va_arg(*p_va, char **); - if (*format == '#') { - if (flags & FLAG_SIZE_T) - (void) va_arg(*p_va, Py_ssize_t *); - else - (void) va_arg(*p_va, int *); - format++; - } else if ((c == 's' || c == 'z') && *format == '*') { - format++; - } - break; - } + case 't': /* buffer, read-only */ + case 'w': /* buffer, read-write */ + { + (void) va_arg(*p_va, char **); + if (*format == '#') { + if (flags & FLAG_SIZE_T) + (void) va_arg(*p_va, Py_ssize_t *); + else + (void) va_arg(*p_va, int *); + format++; + } else if ((c == 's' || c == 'z') && *format == '*') { + format++; + } + break; + } - /* object codes */ + /* object codes */ - case 'S': /* string object */ + case 'S': /* string object */ #ifdef Py_USING_UNICODE - case 'U': /* unicode string object */ + case 'U': /* unicode string object */ #endif - { - (void) 
va_arg(*p_va, PyObject **); - break; - } - - case 'O': /* object */ - { - if (*format == '!') { - format++; - (void) va_arg(*p_va, PyTypeObject*); - (void) va_arg(*p_va, PyObject **); - } -#if 0 -/* I don't know what this is for */ - else if (*format == '?') { - inquiry pred = va_arg(*p_va, inquiry); - format++; - if ((*pred)(arg)) { - (void) va_arg(*p_va, PyObject **); - } - } -#endif - else if (*format == '&') { - typedef int (*converter)(PyObject *, void *); - (void) va_arg(*p_va, converter); - (void) va_arg(*p_va, void *); - format++; - } - else { - (void) va_arg(*p_va, PyObject **); - } - break; - } + { + (void) va_arg(*p_va, PyObject **); + break; + } - case '(': /* bypass tuple, not handled at all previously */ - { - char *msg; - for (;;) { - if (*format==')') - break; - if (IS_END_OF_FORMAT(*format)) - return "Unmatched left paren in format " - "string"; - msg = skipitem(&format, p_va, flags); - if (msg) - return msg; - } - format++; - break; - } + case 'O': /* object */ + { + if (*format == '!') { + format++; + (void) va_arg(*p_va, PyTypeObject*); + (void) va_arg(*p_va, PyObject **); + } + else if (*format == '&') { + typedef int (*converter)(PyObject *, void *); + (void) va_arg(*p_va, converter); + (void) va_arg(*p_va, void *); + format++; + } + else { + (void) va_arg(*p_va, PyObject **); + } + break; + } - case ')': - return "Unmatched right paren in format string"; + case '(': /* bypass tuple, not handled at all previously */ + { + char *msg; + for (;;) { + if (*format==')') + break; + if (IS_END_OF_FORMAT(*format)) + return "Unmatched left paren in format " + "string"; + msg = skipitem(&format, p_va, flags); + if (msg) + return msg; + } + format++; + break; + } - default: + case ')': + return "Unmatched right paren in format string"; + + default: err: - return "impossible"; - - } + return "impossible"; - *p_format = format; - return NULL; + } + + *p_format = format; + return NULL; } int PyArg_UnpackTuple(PyObject *args, const char *name, Py_ssize_t 
min, Py_ssize_t max, ...) { - Py_ssize_t i, l; - PyObject **o; - va_list vargs; + Py_ssize_t i, l; + PyObject **o; + va_list vargs; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, max); + va_start(vargs, max); #else - va_start(vargs); + va_start(vargs); #endif - assert(min >= 0); - assert(min <= max); - if (!PyTuple_Check(args)) { - PyErr_SetString(PyExc_SystemError, - "PyArg_UnpackTuple() argument list is not a tuple"); - return 0; - } - l = PyTuple_GET_SIZE(args); - if (l < min) { - if (name != NULL) - PyErr_Format( - PyExc_TypeError, - "%s expected %s%zd arguments, got %zd", - name, (min == max ? "" : "at least "), min, l); - else - PyErr_Format( - PyExc_TypeError, - "unpacked tuple should have %s%zd elements," - " but has %zd", - (min == max ? "" : "at least "), min, l); - va_end(vargs); - return 0; - } - if (l > max) { - if (name != NULL) - PyErr_Format( - PyExc_TypeError, - "%s expected %s%zd arguments, got %zd", - name, (min == max ? "" : "at most "), max, l); - else - PyErr_Format( - PyExc_TypeError, - "unpacked tuple should have %s%zd elements," - " but has %zd", - (min == max ? "" : "at most "), max, l); - va_end(vargs); - return 0; - } - for (i = 0; i < l; i++) { - o = va_arg(vargs, PyObject **); - *o = PyTuple_GET_ITEM(args, i); - } - va_end(vargs); - return 1; + assert(min >= 0); + assert(min <= max); + if (!PyTuple_Check(args)) { + PyErr_SetString(PyExc_SystemError, + "PyArg_UnpackTuple() argument list is not a tuple"); + return 0; + } + l = PyTuple_GET_SIZE(args); + if (l < min) { + if (name != NULL) + PyErr_Format( + PyExc_TypeError, + "%s expected %s%zd arguments, got %zd", + name, (min == max ? "" : "at least "), min, l); + else + PyErr_Format( + PyExc_TypeError, + "unpacked tuple should have %s%zd elements," + " but has %zd", + (min == max ? "" : "at least "), min, l); + va_end(vargs); + return 0; + } + if (l > max) { + if (name != NULL) + PyErr_Format( + PyExc_TypeError, + "%s expected %s%zd arguments, got %zd", + name, (min == max ? 
"" : "at most "), max, l); + else + PyErr_Format( + PyExc_TypeError, + "unpacked tuple should have %s%zd elements," + " but has %zd", + (min == max ? "" : "at most "), max, l); + va_end(vargs); + return 0; + } + for (i = 0; i < l; i++) { + o = va_arg(vargs, PyObject **); + *o = PyTuple_GET_ITEM(args, i); + } + va_end(vargs); + return 1; } /* For type constructors that don't take keyword args * - * Sets a TypeError and returns 0 if the kwds dict is + * Sets a TypeError and returns 0 if the kwds dict is * not empty, returns 1 otherwise */ int _PyArg_NoKeywords(const char *funcname, PyObject *kw) { - if (kw == NULL) - return 1; - if (!PyDict_CheckExact(kw)) { - PyErr_BadInternalCall(); - return 0; - } - if (PyDict_Size(kw) == 0) - return 1; - - PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments", - funcname); - return 0; + if (kw == NULL) + return 1; + if (!PyDict_CheckExact(kw)) { + PyErr_BadInternalCall(); + return 0; + } + if (PyDict_Size(kw) == 0) + return 1; + + PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments", + funcname); + return 0; } #ifdef __cplusplus }; diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c --- a/pypy/module/cpyext/src/modsupport.c +++ b/pypy/module/cpyext/src/modsupport.c @@ -33,41 +33,41 @@ static int countformat(const char *format, int endchar) { - int count = 0; - int level = 0; - while (level > 0 || *format != endchar) { - switch (*format) { - case '\0': - /* Premature end */ - PyErr_SetString(PyExc_SystemError, - "unmatched paren in format"); - return -1; - case '(': - case '[': - case '{': - if (level == 0) - count++; - level++; - break; - case ')': - case ']': - case '}': - level--; - break; - case '#': - case '&': - case ',': - case ':': - case ' ': - case '\t': - break; - default: - if (level == 0) - count++; - } - format++; - } - return count; + int count = 0; + int level = 0; + while (level > 0 || *format != endchar) { + switch (*format) { + case '\0': + /* 
Premature end */ + PyErr_SetString(PyExc_SystemError, + "unmatched paren in format"); + return -1; + case '(': + case '[': + case '{': + if (level == 0) + count++; + level++; + break; + case ')': + case ']': + case '}': + level--; + break; + case '#': + case '&': + case ',': + case ':': + case ' ': + case '\t': + break; + default: + if (level == 0) + count++; + } + format++; + } + return count; } @@ -83,582 +83,435 @@ static PyObject * do_mkdict(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *d; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - if ((d = PyDict_New()) == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. */ - for (i = 0; i < n; i+= 2) { - PyObject *k, *v; - int err; - k = do_mkvalue(p_format, p_va, flags); - if (k == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - k = Py_None; - } - v = do_mkvalue(p_format, p_va, flags); - if (v == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - v = Py_None; - } - err = PyDict_SetItem(d, k, v); - Py_DECREF(k); - Py_DECREF(v); - if (err < 0 || itemfailed) { - Py_DECREF(d); - return NULL; - } - } - if (d != NULL && **p_format != endchar) { - Py_DECREF(d); - d = NULL; - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - } - else if (endchar) - ++*p_format; - return d; + PyObject *d; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + if ((d = PyDict_New()) == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. 
*/ + for (i = 0; i < n; i+= 2) { + PyObject *k, *v; + int err; + k = do_mkvalue(p_format, p_va, flags); + if (k == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + k = Py_None; + } + v = do_mkvalue(p_format, p_va, flags); + if (v == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + v = Py_None; + } + err = PyDict_SetItem(d, k, v); + Py_DECREF(k); + Py_DECREF(v); + if (err < 0 || itemfailed) { + Py_DECREF(d); + return NULL; + } + } + if (d != NULL && **p_format != endchar) { + Py_DECREF(d); + d = NULL; + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + } + else if (endchar) + ++*p_format; + return d; } static PyObject * do_mklist(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *v; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - v = PyList_New(n); - if (v == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. */ - for (i = 0; i < n; i++) { - PyObject *w = do_mkvalue(p_format, p_va, flags); - if (w == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - w = Py_None; - } - PyList_SET_ITEM(v, i, w); - } + PyObject *v; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + v = PyList_New(n); + if (v == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. 
*/ + for (i = 0; i < n; i++) { + PyObject *w = do_mkvalue(p_format, p_va, flags); + if (w == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + w = Py_None; + } + PyList_SET_ITEM(v, i, w); + } - if (itemfailed) { - /* do_mkvalue() should have already set an error */ - Py_DECREF(v); - return NULL; - } - if (**p_format != endchar) { - Py_DECREF(v); - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - return NULL; - } - if (endchar) - ++*p_format; - return v; + if (itemfailed) { + /* do_mkvalue() should have already set an error */ + Py_DECREF(v); + return NULL; + } + if (**p_format != endchar) { + Py_DECREF(v); + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + return NULL; + } + if (endchar) + ++*p_format; + return v; } #ifdef Py_USING_UNICODE static int _ustrlen(Py_UNICODE *u) { - int i = 0; - Py_UNICODE *v = u; - while (*v != 0) { i++; v++; } - return i; + int i = 0; + Py_UNICODE *v = u; + while (*v != 0) { i++; v++; } + return i; } #endif static PyObject * do_mktuple(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *v; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - if ((v = PyTuple_New(n)) == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. 
*/ - for (i = 0; i < n; i++) { - PyObject *w = do_mkvalue(p_format, p_va, flags); - if (w == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - w = Py_None; - } - PyTuple_SET_ITEM(v, i, w); - } - if (itemfailed) { - /* do_mkvalue() should have already set an error */ - Py_DECREF(v); - return NULL; - } - if (**p_format != endchar) { - Py_DECREF(v); - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - return NULL; - } - if (endchar) - ++*p_format; - return v; + PyObject *v; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + if ((v = PyTuple_New(n)) == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. */ + for (i = 0; i < n; i++) { + PyObject *w = do_mkvalue(p_format, p_va, flags); + if (w == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + w = Py_None; + } + PyTuple_SET_ITEM(v, i, w); + } + if (itemfailed) { + /* do_mkvalue() should have already set an error */ + Py_DECREF(v); + return NULL; + } + if (**p_format != endchar) { + Py_DECREF(v); + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + return NULL; + } + if (endchar) + ++*p_format; + return v; } static PyObject * do_mkvalue(const char **p_format, va_list *p_va, int flags) { - for (;;) { - switch (*(*p_format)++) { - case '(': - return do_mktuple(p_format, p_va, ')', - countformat(*p_format, ')'), flags); + for (;;) { + switch (*(*p_format)++) { + case '(': + return do_mktuple(p_format, p_va, ')', + countformat(*p_format, ')'), flags); - case '[': - return do_mklist(p_format, p_va, ']', - countformat(*p_format, ']'), flags); + case '[': + return do_mklist(p_format, p_va, ']', + countformat(*p_format, ']'), flags); - case '{': - return do_mkdict(p_format, p_va, '}', - countformat(*p_format, '}'), flags); + case '{': + return do_mkdict(p_format, p_va, '}', + countformat(*p_format, '}'), flags); - case 'b': - case 'B': - case 'h': - case 'i': - return PyInt_FromLong((long)va_arg(*p_va, int)); - - 
case 'H': - return PyInt_FromLong((long)va_arg(*p_va, unsigned int)); + case 'b': + case 'B': + case 'h': + case 'i': + return PyInt_FromLong((long)va_arg(*p_va, int)); - case 'I': - { - unsigned int n; - n = va_arg(*p_va, unsigned int); - if (n > (unsigned long)PyInt_GetMax()) - return PyLong_FromUnsignedLong((unsigned long)n); - else - return PyInt_FromLong(n); - } - - case 'n': + case 'H': + return PyInt_FromLong((long)va_arg(*p_va, unsigned int)); + + case 'I': + { + unsigned int n; + n = va_arg(*p_va, unsigned int); + if (n > (unsigned long)PyInt_GetMax()) + return PyLong_FromUnsignedLong((unsigned long)n); + else + return PyInt_FromLong(n); + } + + case 'n': #if SIZEOF_SIZE_T!=SIZEOF_LONG - return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t)); + return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t)); #endif - /* Fall through from 'n' to 'l' if Py_ssize_t is long */ - case 'l': - return PyInt_FromLong(va_arg(*p_va, long)); + /* Fall through from 'n' to 'l' if Py_ssize_t is long */ + case 'l': + return PyInt_FromLong(va_arg(*p_va, long)); - case 'k': - { - unsigned long n; - n = va_arg(*p_va, unsigned long); - if (n > (unsigned long)PyInt_GetMax()) - return PyLong_FromUnsignedLong(n); - else - return PyInt_FromLong(n); - } + case 'k': + { + unsigned long n; + n = va_arg(*p_va, unsigned long); + if (n > (unsigned long)PyInt_GetMax()) + return PyLong_FromUnsignedLong(n); + else + return PyInt_FromLong(n); + } #ifdef HAVE_LONG_LONG - case 'L': - return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG)); + case 'L': + return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG)); - case 'K': - return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG)); + case 'K': + return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG)); #endif #ifdef Py_USING_UNICODE - case 'u': - { - PyObject *v; - Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *); - Py_ssize_t n; - if (**p_format == '#') { - ++*p_format; - if 
(flags & FLAG_SIZE_T) - n = va_arg(*p_va, Py_ssize_t); - else - n = va_arg(*p_va, int); - } - else - n = -1; - if (u == NULL) { - v = Py_None; - Py_INCREF(v); - } - else { - if (n < 0) - n = _ustrlen(u); - v = PyUnicode_FromUnicode(u, n); - } - return v; - } + case 'u': + { + PyObject *v; + Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *); + Py_ssize_t n; + if (**p_format == '#') { + ++*p_format; + if (flags & FLAG_SIZE_T) + n = va_arg(*p_va, Py_ssize_t); + else + n = va_arg(*p_va, int); + } + else + n = -1; + if (u == NULL) { + v = Py_None; + Py_INCREF(v); + } + else { + if (n < 0) + n = _ustrlen(u); + v = PyUnicode_FromUnicode(u, n); + } + return v; + } #endif - case 'f': - case 'd': - return PyFloat_FromDouble( - (double)va_arg(*p_va, va_double)); + case 'f': + case 'd': + return PyFloat_FromDouble( + (double)va_arg(*p_va, va_double)); #ifndef WITHOUT_COMPLEX - case 'D': - return PyComplex_FromCComplex( - *((Py_complex *)va_arg(*p_va, Py_complex *))); + case 'D': + return PyComplex_FromCComplex( + *((Py_complex *)va_arg(*p_va, Py_complex *))); #endif /* WITHOUT_COMPLEX */ - case 'c': - { - char p[1]; - p[0] = (char)va_arg(*p_va, int); - return PyString_FromStringAndSize(p, 1); - } + case 'c': + { + char p[1]; + p[0] = (char)va_arg(*p_va, int); + return PyString_FromStringAndSize(p, 1); + } - case 's': - case 'z': - { - PyObject *v; - char *str = va_arg(*p_va, char *); - Py_ssize_t n; - if (**p_format == '#') { - ++*p_format; - if (flags & FLAG_SIZE_T) - n = va_arg(*p_va, Py_ssize_t); - else - n = va_arg(*p_va, int); - } - else - n = -1; - if (str == NULL) { - v = Py_None; - Py_INCREF(v); - } - else { - if (n < 0) { - size_t m = strlen(str); - if (m > PY_SSIZE_T_MAX) { - PyErr_SetString(PyExc_OverflowError, - "string too long for Python string"); - return NULL; - } - n = (Py_ssize_t)m; - } - v = PyString_FromStringAndSize(str, n); - } - return v; - } + case 's': + case 'z': + { + PyObject *v; + char *str = va_arg(*p_va, char *); + Py_ssize_t n; + if (**p_format == 
'#') { + ++*p_format; + if (flags & FLAG_SIZE_T) + n = va_arg(*p_va, Py_ssize_t); + else + n = va_arg(*p_va, int); + } + else + n = -1; + if (str == NULL) { + v = Py_None; + Py_INCREF(v); + } + else { + if (n < 0) { + size_t m = strlen(str); + if (m > PY_SSIZE_T_MAX) { + PyErr_SetString(PyExc_OverflowError, + "string too long for Python string"); + return NULL; + } + n = (Py_ssize_t)m; + } + v = PyString_FromStringAndSize(str, n); + } + return v; + } - case 'N': - case 'S': - case 'O': - if (**p_format == '&') { - typedef PyObject *(*converter)(void *); - converter func = va_arg(*p_va, converter); - void *arg = va_arg(*p_va, void *); - ++*p_format; - return (*func)(arg); - } - else { - PyObject *v; - v = va_arg(*p_va, PyObject *); - if (v != NULL) { - if (*(*p_format - 1) != 'N') - Py_INCREF(v); - } - else if (!PyErr_Occurred()) - /* If a NULL was passed - * because a call that should - * have constructed a value - * failed, that's OK, and we - * pass the error on; but if - * no error occurred it's not - * clear that the caller knew - * what she was doing. */ - PyErr_SetString(PyExc_SystemError, - "NULL object passed to Py_BuildValue"); - return v; - } + case 'N': + case 'S': + case 'O': + if (**p_format == '&') { + typedef PyObject *(*converter)(void *); + converter func = va_arg(*p_va, converter); + void *arg = va_arg(*p_va, void *); + ++*p_format; + return (*func)(arg); + } + else { + PyObject *v; + v = va_arg(*p_va, PyObject *); + if (v != NULL) { + if (*(*p_format - 1) != 'N') + Py_INCREF(v); + } + else if (!PyErr_Occurred()) + /* If a NULL was passed + * because a call that should + * have constructed a value + * failed, that's OK, and we + * pass the error on; but if + * no error occurred it's not + * clear that the caller knew + * what she was doing. 
*/ + PyErr_SetString(PyExc_SystemError, + "NULL object passed to Py_BuildValue"); + return v; + } - case ':': - case ',': - case ' ': - case '\t': - break; + case ':': + case ',': + case ' ': + case '\t': + break; - default: - PyErr_SetString(PyExc_SystemError, - "bad format char passed to Py_BuildValue"); - return NULL; + default: + PyErr_SetString(PyExc_SystemError, + "bad format char passed to Py_BuildValue"); + return NULL; - } - } + } + } } PyObject * Py_BuildValue(const char *format, ...) { - va_list va; - PyObject* retval; - va_start(va, format); - retval = va_build_value(format, va, 0); - va_end(va); - return retval; + va_list va; + PyObject* retval; + va_start(va, format); + retval = va_build_value(format, va, 0); + va_end(va); + return retval; } PyObject * _Py_BuildValue_SizeT(const char *format, ...) { - va_list va; - PyObject* retval; - va_start(va, format); - retval = va_build_value(format, va, FLAG_SIZE_T); - va_end(va); - return retval; + va_list va; + PyObject* retval; + va_start(va, format); + retval = va_build_value(format, va, FLAG_SIZE_T); + va_end(va); + return retval; } PyObject * Py_VaBuildValue(const char *format, va_list va) { - return va_build_value(format, va, 0); + return va_build_value(format, va, 0); } PyObject * _Py_VaBuildValue_SizeT(const char *format, va_list va) { - return va_build_value(format, va, FLAG_SIZE_T); + return va_build_value(format, va, FLAG_SIZE_T); } static PyObject * va_build_value(const char *format, va_list va, int flags) { - const char *f = format; - int n = countformat(f, '\0'); - va_list lva; + const char *f = format; + int n = countformat(f, '\0'); + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - if (n < 0) - return NULL; - if (n == 0) { - Py_INCREF(Py_None); - return Py_None; - } - if (n == 1) - return do_mkvalue(&f, &lva, flags); - 
return do_mktuple(&f, &lva, '\0', n, flags); + if (n < 0) + return NULL; + if (n == 0) { + Py_INCREF(Py_None); + return Py_None; + } + if (n == 1) + return do_mkvalue(&f, &lva, flags); + return do_mktuple(&f, &lva, '\0', n, flags); } PyObject * PyEval_CallFunction(PyObject *obj, const char *format, ...) { - va_list vargs; - PyObject *args; - PyObject *res; + va_list vargs; + PyObject *args; + PyObject *res; - va_start(vargs, format); + va_start(vargs, format); - args = Py_VaBuildValue(format, vargs); - va_end(vargs); + args = Py_VaBuildValue(format, vargs); + va_end(vargs); - if (args == NULL) - return NULL; + if (args == NULL) + return NULL; - res = PyEval_CallObject(obj, args); - Py_DECREF(args); + res = PyEval_CallObject(obj, args); + Py_DECREF(args); - return res; + return res; } PyObject * PyEval_CallMethod(PyObject *obj, const char *methodname, const char *format, ...) { - va_list vargs; - PyObject *meth; - PyObject *args; - PyObject *res; + va_list vargs; + PyObject *meth; + PyObject *args; + PyObject *res; - meth = PyObject_GetAttrString(obj, methodname); - if (meth == NULL) - return NULL; + meth = PyObject_GetAttrString(obj, methodname); + if (meth == NULL) + return NULL; - va_start(vargs, format); + va_start(vargs, format); - args = Py_VaBuildValue(format, vargs); - va_end(vargs); + args = Py_VaBuildValue(format, vargs); + va_end(vargs); - if (args == NULL) { - Py_DECREF(meth); - return NULL; - } + if (args == NULL) { + Py_DECREF(meth); + return NULL; + } - res = PyEval_CallObject(meth, args); - Py_DECREF(meth); - Py_DECREF(args); + res = PyEval_CallObject(meth, args); + Py_DECREF(meth); + Py_DECREF(args); - return res; -} - -static PyObject* -call_function_tail(PyObject *callable, PyObject *args) -{ - PyObject *retval; - - if (args == NULL) - return NULL; - - if (!PyTuple_Check(args)) { - PyObject *a; - - a = PyTuple_New(1); - if (a == NULL) { - Py_DECREF(args); - return NULL; - } - PyTuple_SET_ITEM(a, 0, args); - args = a; - } - retval = 
PyObject_Call(callable, args, NULL); - - Py_DECREF(args); - - return retval; -} - -PyObject * -PyObject_CallFunction(PyObject *callable, const char *format, ...) -{ - va_list va; - PyObject *args; - - if (format && *format) { - va_start(va, format); - args = Py_VaBuildValue(format, va); - va_end(va); - } - else - args = PyTuple_New(0); - - return call_function_tail(callable, args); -} - -PyObject * -PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) -{ - va_list va; - PyObject *args; - PyObject *func = NULL; - PyObject *retval = NULL; - - func = PyObject_GetAttrString(o, name); - if (func == NULL) { - PyErr_SetString(PyExc_AttributeError, name); - return 0; - } - - if (format && *format) { - va_start(va, format); - args = Py_VaBuildValue(format, va); - va_end(va); - } - else - args = PyTuple_New(0); - - retval = call_function_tail(func, args); - - exit: - /* args gets consumed in call_function_tail */ - Py_XDECREF(func); - - return retval; -} - -static PyObject * -objargs_mktuple(va_list va) -{ - int i, n = 0; - va_list countva; - PyObject *result, *tmp; - -#ifdef VA_LIST_IS_ARRAY - memcpy(countva, va, sizeof(va_list)); -#else -#ifdef __va_copy - __va_copy(countva, va); -#else - countva = va; -#endif -#endif - - while (((PyObject *)va_arg(countva, PyObject *)) != NULL) - ++n; - result = PyTuple_New(n); - if (result != NULL && n > 0) { - for (i = 0; i < n; ++i) { - tmp = (PyObject *)va_arg(va, PyObject *); - Py_INCREF(tmp); - PyTuple_SET_ITEM(result, i, tmp); - } - } - return result; -} - -PyObject * -PyObject_CallFunctionObjArgs(PyObject *callable, ...) -{ - PyObject *args, *tmp; - va_list vargs; - - /* count the args */ - va_start(vargs, callable); - args = objargs_mktuple(vargs); - va_end(vargs); - if (args == NULL) - return NULL; - tmp = PyObject_Call(callable, args, NULL); - Py_DECREF(args); - - return tmp; -} - -PyObject * -PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...) 
-{ - PyObject *args, *tmp; - va_list vargs; - - callable = PyObject_GetAttr(callable, name); - if (callable == NULL) - return NULL; - - /* count the args */ - va_start(vargs, name); - args = objargs_mktuple(vargs); - va_end(vargs); - if (args == NULL) { - Py_DECREF(callable); - return NULL; - } - tmp = PyObject_Call(callable, args, NULL); - Py_DECREF(args); - Py_DECREF(callable); - - return tmp; + return res; } /* returns -1 in case of error, 0 if a new key was added, 1 if the key @@ -666,67 +519,67 @@ static int _PyModule_AddObject_NoConsumeRef(PyObject *m, const char *name, PyObject *o) { - PyObject *dict, *prev; - if (!PyModule_Check(m)) { - PyErr_SetString(PyExc_TypeError, - "PyModule_AddObject() needs module as first arg"); - return -1; - } - if (!o) { - if (!PyErr_Occurred()) - PyErr_SetString(PyExc_TypeError, - "PyModule_AddObject() needs non-NULL value"); - return -1; - } + PyObject *dict, *prev; + if (!PyModule_Check(m)) { + PyErr_SetString(PyExc_TypeError, + "PyModule_AddObject() needs module as first arg"); + return -1; + } + if (!o) { + if (!PyErr_Occurred()) + PyErr_SetString(PyExc_TypeError, + "PyModule_AddObject() needs non-NULL value"); + return -1; + } - dict = PyModule_GetDict(m); - if (dict == NULL) { - /* Internal error -- modules must have a dict! */ - PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__", - PyModule_GetName(m)); - return -1; - } - prev = PyDict_GetItemString(dict, name); - if (PyDict_SetItemString(dict, name, o)) - return -1; - return prev != NULL; + dict = PyModule_GetDict(m); + if (dict == NULL) { + /* Internal error -- modules must have a dict! 
*/ + PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__", + PyModule_GetName(m)); + return -1; + } + prev = PyDict_GetItemString(dict, name); + if (PyDict_SetItemString(dict, name, o)) + return -1; + return prev != NULL; } int PyModule_AddObject(PyObject *m, const char *name, PyObject *o) { - int result = _PyModule_AddObject_NoConsumeRef(m, name, o); - /* XXX WORKAROUND for a common misusage of PyModule_AddObject: - for the common case of adding a new key, we don't consume a - reference, but instead just leak it away. The issue is that - people generally don't realize that this function consumes a - reference, because on CPython the reference is still stored - on the dictionary. */ - if (result != 0) - Py_DECREF(o); - return result < 0 ? -1 : 0; + int result = _PyModule_AddObject_NoConsumeRef(m, name, o); + /* XXX WORKAROUND for a common misusage of PyModule_AddObject: + for the common case of adding a new key, we don't consume a + reference, but instead just leak it away. The issue is that + people generally don't realize that this function consumes a + reference, because on CPython the reference is still stored + on the dictionary. */ + if (result != 0) + Py_DECREF(o); + return result < 0 ? -1 : 0; } -int +int PyModule_AddIntConstant(PyObject *m, const char *name, long value) { - int result; - PyObject *o = PyInt_FromLong(value); - if (!o) - return -1; - result = _PyModule_AddObject_NoConsumeRef(m, name, o); - Py_DECREF(o); - return result < 0 ? -1 : 0; + int result; + PyObject *o = PyInt_FromLong(value); + if (!o) + return -1; + result = _PyModule_AddObject_NoConsumeRef(m, name, o); + Py_DECREF(o); + return result < 0 ? -1 : 0; } -int +int PyModule_AddStringConstant(PyObject *m, const char *name, const char *value) { - int result; - PyObject *o = PyString_FromString(value); - if (!o) - return -1; - result = _PyModule_AddObject_NoConsumeRef(m, name, o); - Py_DECREF(o); - return result < 0 ? 
-1 : 0;
+ int result;
+ PyObject *o = PyString_FromString(value);
+ if (!o)
+ return -1;
+ result = _PyModule_AddObject_NoConsumeRef(m, name, o);
+ Py_DECREF(o);
+ return result < 0 ? -1 : 0;
}
diff --git a/pypy/module/cpyext/src/mysnprintf.c b/pypy/module/cpyext/src/mysnprintf.c
--- a/pypy/module/cpyext/src/mysnprintf.c
+++ b/pypy/module/cpyext/src/mysnprintf.c
@@ -20,86 +20,86 @@
Return value (rv):
- When 0 <= rv < size, the output conversion was unexceptional, and
- rv characters were written to str (excluding a trailing \0 byte at
- str[rv]).
+ When 0 <= rv < size, the output conversion was unexceptional, and
+ rv characters were written to str (excluding a trailing \0 byte at
+ str[rv]).
- When rv >= size, output conversion was truncated, and a buffer of
- size rv+1 would have been needed to avoid truncation. str[size-1]
- is \0 in this case.
+ When rv >= size, output conversion was truncated, and a buffer of
+ size rv+1 would have been needed to avoid truncation. str[size-1]
+ is \0 in this case.
- When rv < 0, "something bad happened". str[size-1] is \0 in this
- case too, but the rest of str is unreliable. It could be that
- an error in format codes was detected by libc, or on platforms
- with a non-C99 vsnprintf simply that the buffer wasn't big enough
- to avoid truncation, or on platforms without any vsnprintf that
- PyMem_Malloc couldn't obtain space for a temp buffer.
+ When rv < 0, "something bad happened". str[size-1] is \0 in this
+ case too, but the rest of str is unreliable. It could be that
+ an error in format codes was detected by libc, or on platforms
+ with a non-C99 vsnprintf simply that the buffer wasn't big enough
+ to avoid truncation, or on platforms without any vsnprintf that
+ PyMem_Malloc couldn't obtain space for a temp buffer.
CAUTION: Unlike C99, str != NULL and size > 0 are required.
*/
int
+PyOS_snprintf(char *str, size_t size, const char *format, ...)
+{
+ int rc;
+ va_list va;
+
+ va_start(va, format);
+ rc = PyOS_vsnprintf(str, size, format, va);
+ va_end(va);
+ return rc;
+}
+
+int
PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va)
{
- int len; /* # bytes written, excluding \0 */
+ int len; /* # bytes written, excluding \0 */
#ifdef HAVE_SNPRINTF
#define _PyOS_vsnprintf_EXTRA_SPACE 1
#else
#define _PyOS_vsnprintf_EXTRA_SPACE 512
- char *buffer;
+ char *buffer;
#endif
- assert(str != NULL);
- assert(size > 0);
- assert(format != NULL);
- /* We take a size_t as input but return an int. Sanity check
- * our input so that it won't cause an overflow in the
- * vsnprintf return value or the buffer malloc size. */
- if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) {
- len = -666;
- goto Done;
- }
+ assert(str != NULL);
+ assert(size > 0);
+ assert(format != NULL);
+ /* We take a size_t as input but return an int. Sanity check
+ * our input so that it won't cause an overflow in the
+ * vsnprintf return value or the buffer malloc size. */
+ if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) {
+ len = -666;
+ goto Done;
+ }
#ifdef HAVE_SNPRINTF
- len = vsnprintf(str, size, format, va);
+ len = vsnprintf(str, size, format, va);
#else
- /* Emulate it. */
- buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE);
- if (buffer == NULL) {
- len = -666;
- goto Done;
- }
+ /* Emulate it. */
+ buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE);
+ if (buffer == NULL) {
+ len = -666;
+ goto Done;
+ }
- len = vsprintf(buffer, format, va);
- if (len < 0)
- /* ignore the error */;
+ len = vsprintf(buffer, format, va);
+ if (len < 0)
+ /* ignore the error */;
- else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE)
- Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf");
+ else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE)
+ Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf");
- else {
- const size_t to_copy = (size_t)len < size ?
-        (size_t)len : size - 1;
-        assert(to_copy < size);
-        memcpy(str, buffer, to_copy);
-        str[to_copy] = '\0';
-    }
-    PyMem_FREE(buffer);
+    else {
+        const size_t to_copy = (size_t)len < size ?
+                               (size_t)len : size - 1;
+        assert(to_copy < size);
+        memcpy(str, buffer, to_copy);
+        str[to_copy] = '\0';
+    }
+    PyMem_FREE(buffer);
 #endif
 Done:
-    if (size > 0)
-        str[size-1] = '\0';
-    return len;
+    if (size > 0)
+        str[size-1] = '\0';
+    return len;
 #undef _PyOS_vsnprintf_EXTRA_SPACE
 }
-
-int
-PyOS_snprintf(char *str, size_t size, const char *format, ...)
-{
-    int rc;
-    va_list va;
-
-    va_start(va, format);
-    rc = PyOS_vsnprintf(str, size, format, va);
-    va_end(va);
-    return rc;
-}
diff --git a/pypy/module/cpyext/src/object.c b/pypy/module/cpyext/src/object.c
deleted file mode 100644
--- a/pypy/module/cpyext/src/object.c
+++ /dev/null
@@ -1,91 +0,0 @@
-// contains code from abstract.c
-#include <Python.h>
-
-
-static PyObject *
-null_error(void)
-{
-    if (!PyErr_Occurred())
-        PyErr_SetString(PyExc_SystemError,
-                        "null argument to internal routine");
-    return NULL;
-}
-
-int PyObject_AsReadBuffer(PyObject *obj,
-                          const void **buffer,
-                          Py_ssize_t *buffer_len)
-{
-    PyBufferProcs *pb;
-    void *pp;
-    Py_ssize_t len;
-
-    if (obj == NULL || buffer == NULL || buffer_len == NULL) {
-        null_error();
-        return -1;
-    }
-    pb = obj->ob_type->tp_as_buffer;
-    if (pb == NULL ||
-        pb->bf_getreadbuffer == NULL ||
-        pb->bf_getsegcount == NULL) {
-        PyErr_SetString(PyExc_TypeError,
-                        "expected a readable buffer object");
-        return -1;
-    }
-    if ((*pb->bf_getsegcount)(obj, NULL) != 1) {
-        PyErr_SetString(PyExc_TypeError,
-                        "expected a single-segment buffer object");
-        return -1;
-    }
-    len = (*pb->bf_getreadbuffer)(obj, 0, &pp);
-    if (len < 0)
-        return -1;
-    *buffer = pp;
-    *buffer_len = len;
-    return 0;
-}
-
-int PyObject_AsWriteBuffer(PyObject *obj,
-                           void **buffer,
-                           Py_ssize_t *buffer_len)
-{
-    PyBufferProcs *pb;
-    void*pp;
-    Py_ssize_t len;
-
-    if (obj == NULL || buffer == NULL || buffer_len == NULL) {
-        null_error();
-        return -1;
-    }
-    pb = obj->ob_type->tp_as_buffer;
-    if (pb == NULL ||
-        pb->bf_getwritebuffer == NULL ||
-        pb->bf_getsegcount == NULL) {
-        PyErr_SetString(PyExc_TypeError,
-                        "expected a writeable buffer object");
-        return -1;
-    }
-    if ((*pb->bf_getsegcount)(obj, NULL) != 1) {
-        PyErr_SetString(PyExc_TypeError,
-                        "expected a single-segment buffer object");
-        return -1;
-    }
-    len = (*pb->bf_getwritebuffer)(obj,0,&pp);
-    if (len < 0)
-        return -1;
-    *buffer = pp;
-    *buffer_len = len;
-    return 0;
-}
-
-int
-PyObject_CheckReadBuffer(PyObject *obj)
-{
-    PyBufferProcs *pb = obj->ob_type->tp_as_buffer;
-
-    if (pb == NULL ||
-        pb->bf_getreadbuffer == NULL ||
-        pb->bf_getsegcount == NULL ||
-        (*pb->bf_getsegcount)(obj, NULL) != 1)
-        return 0;
-    return 1;
-}
diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c
--- a/pypy/module/cpyext/src/pyerrors.c
+++ b/pypy/module/cpyext/src/pyerrors.c
@@ -25,7 +25,7 @@
 PyObject *
 PyErr_NewException(const char *name, PyObject *base, PyObject *dict)
 {
-    const char *dot;
+    char *dot;
     PyObject *modulename = NULL;
     PyObject *classname = NULL;
     PyObject *mydict = NULL;
diff --git a/pypy/module/cpyext/src/pysignals.c b/pypy/module/cpyext/src/pysignals.c
--- a/pypy/module/cpyext/src/pysignals.c
+++ b/pypy/module/cpyext/src/pysignals.c
@@ -17,17 +17,34 @@
 PyOS_getsig(int sig)
 {
 #ifdef SA_RESTART
-  /* assume sigaction exists */
-  struct sigaction context;
-  if (sigaction(sig, NULL, &context) == -1)
-    return SIG_ERR;
-  return context.sa_handler;
+    /* assume sigaction exists */
+    struct sigaction context;
+    if (sigaction(sig, NULL, &context) == -1)
+        return SIG_ERR;
+    return context.sa_handler;
 #else
-  PyOS_sighandler_t handler;
-  handler = signal(sig, SIG_IGN);
-  if (handler != SIG_ERR)
-    signal(sig, handler);
-  return handler;
+    PyOS_sighandler_t handler;
+/* Special signal handling for the secure CRT in Visual Studio 2005 */
+#if defined(_MSC_VER) && _MSC_VER >= 1400
+    switch (sig) {
+    /* Only these signals are valid */
+    case SIGINT:
+    case SIGILL:
+    case SIGFPE:
+    case SIGSEGV:
+    case SIGTERM:
+    case SIGBREAK:
+    case SIGABRT:
+        break;
+    /* Don't call signal() with other values or it will assert */
+    default:
+        return SIG_ERR;
+    }
+#endif /* _MSC_VER && _MSC_VER >= 1400 */
+    handler = signal(sig, SIG_IGN);
+    if (handler != SIG_ERR)
+        signal(sig, handler);
+    return handler;
 #endif
 }
 
@@ -35,21 +52,21 @@
 PyOS_setsig(int sig, PyOS_sighandler_t handler)
 {
 #ifdef SA_RESTART
-  /* assume sigaction exists */
-  struct sigaction context, ocontext;
-  context.sa_handler = handler;
-  sigemptyset(&context.sa_mask);
-  context.sa_flags = 0;
-  if (sigaction(sig, &context, &ocontext) == -1)
-    return SIG_ERR;
-  return ocontext.sa_handler;
+    /* assume sigaction exists */
+    struct sigaction context, ocontext;
+    context.sa_handler = handler;
+    sigemptyset(&context.sa_mask);
+    context.sa_flags = 0;
+    if (sigaction(sig, &context, &ocontext) == -1)
+        return SIG_ERR;
+    return ocontext.sa_handler;
 #else
-  PyOS_sighandler_t oldhandler;
-  oldhandler = signal(sig, handler);
+    PyOS_sighandler_t oldhandler;
+    oldhandler = signal(sig, handler);
 #ifndef MS_WINDOWS
-  /* should check if this exists */
-  siginterrupt(sig, 1);
+    /* should check if this exists */
+    siginterrupt(sig, 1);
 #endif
-  return oldhandler;
+    return oldhandler;
 #endif
 }
diff --git a/pypy/module/cpyext/src/pythonrun.c b/pypy/module/cpyext/src/pythonrun.c
--- a/pypy/module/cpyext/src/pythonrun.c
+++ b/pypy/module/cpyext/src/pythonrun.c
@@ -9,28 +9,28 @@
 void
 Py_FatalError(const char *msg)
 {
-  fprintf(stderr, "Fatal Python error: %s\n", msg);
-  fflush(stderr); /* it helps in Windows debug build */
+    fprintf(stderr, "Fatal Python error: %s\n", msg);
+    fflush(stderr); /* it helps in Windows debug build */
 #ifdef MS_WINDOWS
-  {
-    size_t len = strlen(msg);
-    WCHAR* buffer;
-    size_t i;
+    {
+        size_t len = strlen(msg);
+        WCHAR* buffer;
+        size_t i;
 
-    /* Convert the message to wchar_t. This uses a simple one-to-one
-       conversion, assuming that the this error message actually uses ASCII
-       only. If this ceases to be true, we will have to convert. */
-    buffer = alloca( (len+1) * (sizeof *buffer));
-    for( i=0; i<=len; ++i)
-      buffer[i] = msg[i];
-    OutputDebugStringW(L"Fatal Python error: ");
-    OutputDebugStringW(buffer);
-    OutputDebugStringW(L"\n");
-  }
+        /* Convert the message to wchar_t. This uses a simple one-to-one
+           conversion, assuming that the this error message actually uses ASCII
+           only. If this ceases to be true, we will have to convert. */
+        buffer = alloca( (len+1) * (sizeof *buffer));
+        for( i=0; i<=len; ++i)
+            buffer[i] = msg[i];
+        OutputDebugStringW(L"Fatal Python error: ");
+        OutputDebugStringW(buffer);
+        OutputDebugStringW(L"\n");
+    }
 #ifdef _DEBUG
-  DebugBreak();
+    DebugBreak();
 #endif
 #endif /* MS_WINDOWS */
-  abort();
+    abort();
 }
diff --git a/pypy/module/cpyext/src/structseq.c b/pypy/module/cpyext/src/structseq.c
--- a/pypy/module/cpyext/src/structseq.c
+++ b/pypy/module/cpyext/src/structseq.c
@@ -175,32 +175,33 @@
     if (min_len != max_len) {
         if (len < min_len) {
             PyErr_Format(PyExc_TypeError,
-                "%.500s() takes an at least %zd-sequence (%zd-sequence given)",
-                         type->tp_name, min_len, len);
-            Py_DECREF(arg);
-            return NULL;
+                "%.500s() takes an at least %zd-sequence (%zd-sequence given)",
+                type->tp_name, min_len, len);
+            Py_DECREF(arg);
+            return NULL;
         }
         if (len > max_len) {
             PyErr_Format(PyExc_TypeError,
-                "%.500s() takes an at most %zd-sequence (%zd-sequence given)",
-                         type->tp_name, max_len, len);
-            Py_DECREF(arg);
-            return NULL;
+                "%.500s() takes an at most %zd-sequence (%zd-sequence given)",
+                type->tp_name, max_len, len);
+            Py_DECREF(arg);
+            return NULL;
         }
     }
     else {
         if (len != min_len) {
             PyErr_Format(PyExc_TypeError,
-                "%.500s() takes a %zd-sequence (%zd-sequence given)",
-                        type->tp_name, min_len, len);
-            Py_DECREF(arg);
-            return NULL;
+                "%.500s() takes a %zd-sequence (%zd-sequence given)",
+                type->tp_name, min_len, len);
+            Py_DECREF(arg);
+            return NULL;
         }
     }
     res = (PyStructSequence*) PyStructSequence_New(type);
     if (res == NULL) {
+        Py_DECREF(arg);
         return NULL;
     }
     for (i = 0; i < len; ++i) {
diff --git a/pypy/module/cpyext/src/sysmodule.c b/pypy/module/cpyext/src/sysmodule.c
--- a/pypy/module/cpyext/src/sysmodule.c
+++ b/pypy/module/cpyext/src/sysmodule.c
@@ -100,4 +100,3 @@
     sys_write("stderr", stderr, format, va);
     va_end(va);
 }
-
diff --git a/pypy/module/cpyext/src/varargwrapper.c b/pypy/module/cpyext/src/varargwrapper.c
--- a/pypy/module/cpyext/src/varargwrapper.c
+++ b/pypy/module/cpyext/src/varargwrapper.c
@@ -1,21 +1,25 @@
 #include <Python.h>
 #include <stdarg.h>
 
-PyObject * PyTuple_Pack(Py_ssize_t size, ...)
+PyObject *
+PyTuple_Pack(Py_ssize_t n, ...)
 {
-    va_list ap;
-    PyObject *cur, *tuple;
-    int i;
+    Py_ssize_t i;
+    PyObject *o;
+    PyObject *result;
+    va_list vargs;
 
-    tuple = PyTuple_New(size);
-    va_start(ap, size);
-    for (i = 0; i < size; i++) {
-        cur = va_arg(ap, PyObject*);
-        Py_INCREF(cur);
-        if (PyTuple_SetItem(tuple, i, cur) < 0)
+    va_start(vargs, n);
+    result = PyTuple_New(n);
+    if (result == NULL)
+        return NULL;
+    for (i = 0; i < n; i++) {
+        o = va_arg(vargs, PyObject *);
+        Py_INCREF(o);
+        if (PyTuple_SetItem(result, i, o) < 0)
             return NULL;
     }
-    va_end(ap);
-    return tuple;
+    va_end(vargs);
+    return result;
 }
diff --git a/pypy/module/cpyext/stringobject.py b/pypy/module/cpyext/stringobject.py
--- a/pypy/module/cpyext/stringobject.py
+++ b/pypy/module/cpyext/stringobject.py
@@ -288,6 +288,26 @@
     w_errors = space.wrap(rffi.charp2str(errors))
     return space.call_method(w_str, 'encode', w_encoding, w_errors)
 
+@cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject)
+def PyString_AsDecodedObject(space, w_str, encoding, errors):
+    """Decode a string object by passing it to the codec registered
+    for encoding and return the result as Python object. encoding and
+    errors have the same meaning as the parameters of the same name in
+    the string encode() method. The codec to be used is looked up
+    using the Python codec registry. Return NULL if an exception was
+    raised by the codec.
+
+    This function is not available in 3.x and does not have a PyBytes alias."""
+    if not PyString_Check(space, w_str):
+        PyErr_BadArgument(space)
+
+    w_encoding = w_errors = space.w_None
+    if encoding:
+        w_encoding = space.wrap(rffi.charp2str(encoding))
+    if errors:
+        w_errors = space.wrap(rffi.charp2str(errors))
+    return space.call_method(w_str, "decode", w_encoding, w_errors)
+
 @cpython_api([PyObject, PyObject], PyObject)
 def _PyString_Join(space, w_sep, w_seq):
     return space.call_method(w_sep, 'join', w_seq)
diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py
--- a/pypy/module/cpyext/stubs.py
+++ b/pypy/module/cpyext/stubs.py
@@ -1405,17 +1405,6 @@
     """
     raise NotImplementedError
 
-@cpython_api([PyObject, Py_ssize_t, Py_ssize_t], PyObject)
-def PyList_GetSlice(space, list, low, high):
-    """Return a list of the objects in list containing the objects between low
-    and high.  Return NULL and set an exception if unsuccessful.  Analogous
-    to list[low:high].  Negative indices, as when slicing from Python, are not
-    supported.
-
-    This function used an int for low and high.  This might
-    require changes in your code for properly supporting 64-bit systems."""
-    raise NotImplementedError
-
 @cpython_api([Py_ssize_t], PyObject)
 def PyLong_FromSsize_t(space, v):
     """Return a new PyLongObject object from a C Py_ssize_t, or
@@ -1588,15 +1577,6 @@
     for PyObject_Str()."""
     raise NotImplementedError
 
-@cpython_api([PyObject], lltype.Signed, error=-1)
-def PyObject_HashNotImplemented(space, o):
-    """Set a TypeError indicating that type(o) is not hashable and return -1.
-    This function receives special treatment when stored in a tp_hash slot,
-    allowing a type to explicitly indicate to the interpreter that it is not
-    hashable.
- """ - raise NotImplementedError - @cpython_api([], PyFrameObject) def PyEval_GetFrame(space): """Return the current thread state's frame, which is NULL if no frame is @@ -1719,17 +1699,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject) -def PyString_AsDecodedObject(space, str, encoding, errors): - """Decode a string object by passing it to the codec registered for encoding and - return the result as Python object. encoding and errors have the same - meaning as the parameters of the same name in the string encode() method. - The codec to be used is looked up using the Python codec registry. Return NULL - if an exception was raised by the codec. - - This function is not available in 3.x and does not have a PyBytes alias.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_Encode(space, s, size, encoding, errors): """Encode the char buffer of the given size by passing it to the codec diff --git a/pypy/module/cpyext/test/test_iterator.py b/pypy/module/cpyext/test/test_iterator.py --- a/pypy/module/cpyext/test/test_iterator.py +++ b/pypy/module/cpyext/test/test_iterator.py @@ -15,3 +15,8 @@ assert space.unwrap(api.PyIter_Next(w_iter)) == 3 assert api.PyIter_Next(w_iter) is None assert not api.PyErr_Occurred() + + def test_iternext_error(self,space, api): + assert api.PyIter_Next(space.w_None) is None + assert api.PyErr_Occurred() is space.w_TypeError + api.PyErr_Clear() diff --git a/pypy/module/cpyext/test/test_listobject.py b/pypy/module/cpyext/test/test_listobject.py --- a/pypy/module/cpyext/test/test_listobject.py +++ b/pypy/module/cpyext/test/test_listobject.py @@ -58,6 +58,11 @@ w_t = api.PyList_AsTuple(w_l) assert space.unwrap(w_t) == (3, 2, 1) + def test_list_getslice(self, space, api): + w_l = space.newlist([space.wrap(3), space.wrap(2), space.wrap(1)]) + w_s = 
api.PyList_GetSlice(w_l, 1, 5) + assert space.unwrap(w_s) == [2, 1] + class AppTestListObject(AppTestCpythonExtensionBase): def test_listobject(self): import sys diff --git a/pypy/module/cpyext/test/test_stringobject.py b/pypy/module/cpyext/test/test_stringobject.py --- a/pypy/module/cpyext/test/test_stringobject.py +++ b/pypy/module/cpyext/test/test_stringobject.py @@ -273,6 +273,43 @@ rffi.free_charp(buf) assert w_s1 is w_s2 + def test_AsEncodedObject(self, space, api): + ptr = space.wrap('abc') + + errors = rffi.str2charp("strict") + + encoding = rffi.str2charp("hex") + res = api.PyString_AsEncodedObject( + ptr, encoding, errors) + assert space.unwrap(res) == "616263" + + res = api.PyString_AsEncodedObject( + ptr, encoding, lltype.nullptr(rffi.CCHARP.TO)) + assert space.unwrap(res) == "616263" + rffi.free_charp(encoding) + + encoding = rffi.str2charp("unknown_encoding") + self.raises(space, api, LookupError, api.PyString_AsEncodedObject, + ptr, encoding, errors) + rffi.free_charp(encoding) + + rffi.free_charp(errors) + + res = api.PyString_AsEncodedObject( + ptr, lltype.nullptr(rffi.CCHARP.TO), lltype.nullptr(rffi.CCHARP.TO)) + assert space.unwrap(res) == "abc" + + self.raises(space, api, TypeError, api.PyString_AsEncodedObject, + space.wrap(2), lltype.nullptr(rffi.CCHARP.TO), lltype.nullptr(rffi.CCHARP.TO) + ) + + def test_AsDecodedObject(self, space, api): + w_str = space.wrap('caf\xe9') + encoding = rffi.str2charp("latin-1") + w_res = api.PyString_AsDecodedObject(w_str, encoding, None) + rffi.free_charp(encoding) + assert space.unwrap(w_res) == u"caf\xe9" + def test_eq(self, space, api): assert 1 == api._PyString_Eq(space.wrapbytes("hello"), space.wrapbytes("hello")) assert 0 == api._PyString_Eq(space.wrapbytes("hello"), space.wrapbytes("world")) diff --git a/pypy/module/cpyext/test/test_typeobject.py b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -488,3 +488,55 
@@ assert type(it) is type(iter([])) assert module.tp_iternext(it) == 1 raises(StopIteration, module.tp_iternext, it) + + def test_bool(self): + module = self.import_extension('foo', [ + ("newInt", "METH_VARARGS", + """ + IntLikeObject *intObj; + long intval; + PyObject *name; + + if (!PyArg_ParseTuple(args, "i", &intval)) + return NULL; + + IntLike_Type.tp_as_number = &intlike_as_number; + intlike_as_number.nb_nonzero = intlike_nb_nonzero; + if (PyType_Ready(&IntLike_Type) < 0) return NULL; + intObj = PyObject_New(IntLikeObject, &IntLike_Type); + if (!intObj) { + return NULL; + } + + intObj->value = intval; + return (PyObject *)intObj; + """)], + """ + typedef struct + { + PyObject_HEAD + int value; + } IntLikeObject; + + static int + intlike_nb_nonzero(IntLikeObject *v) + { + if (v->value == -42) { + PyErr_SetNone(PyExc_ValueError); + return -1; + } + return v->value; + } + + PyTypeObject IntLike_Type = { + PyObject_HEAD_INIT(0) + /*ob_size*/ 0, + /*tp_name*/ "IntLike", + /*tp_basicsize*/ sizeof(IntLikeObject), + }; + static PyNumberMethods intlike_as_number; + """) + assert not bool(module.newInt(0)) + assert bool(module.newInt(1)) + assert bool(module.newInt(-1)) + raises(ValueError, bool, module.newInt(-42)) diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -488,10 +488,22 @@ def test_tailmatch(self, space, api): w_str = space.wrap(u"abcdef") - assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 2, 10, 1) == 1 - assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 1, 5, -1) == 1 + # prefix match + assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 2, 9, -1) == 1 + assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 2, 4, -1) == 0 # ends at 'd' + assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 1, 6, -1) == 0 # starts at 'b' + assert api.PyUnicode_Tailmatch(w_str, 
space.wrap("cdf"), 2, 6, -1) == 0 + # suffix match + assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 1, 5, 1) == 1 + assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 3, 5, 1) == 0 # starts at 'd' + assert api.PyUnicode_Tailmatch(w_str, space.wrap("cde"), 1, 6, 1) == 0 # ends at 'f' + assert api.PyUnicode_Tailmatch(w_str, space.wrap("bde"), 1, 5, 1) == 0 + # type checks self.raises(space, api, TypeError, api.PyUnicode_Tailmatch, w_str, space.wrap(3), 2, 10, 1) + self.raises(space, api, TypeError, + api.PyUnicode_Tailmatch, space.wrap(3), space.wrap("abc"), + 2, 10, 1) def test_count(self, space, api): w_str = space.wrap(u"abcabdab") diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -597,7 +597,7 @@ suffix match), 0 otherwise. Return -1 if an error occurred.""" str = space.unicode_w(w_str) substr = space.unicode_w(w_substr) - if rffi.cast(lltype.Signed, direction) >= 0: + if rffi.cast(lltype.Signed, direction) <= 0: return stringtype.stringstartswith(str, substr, start, end) else: return stringtype.stringendswith(str, substr, start, end) diff --git a/pypy/module/micronumpy/__init__.py b/pypy/module/micronumpy/__init__.py --- a/pypy/module/micronumpy/__init__.py +++ b/pypy/module/micronumpy/__init__.py @@ -5,6 +5,7 @@ interpleveldefs = { 'debug_repr': 'interp_extras.debug_repr', 'remove_invalidates': 'interp_extras.remove_invalidates', + 'set_invalidation': 'interp_extras.set_invalidation', } appleveldefs = {} @@ -30,6 +31,7 @@ 'isna': 'interp_numarray.isna', 'concatenate': 'interp_numarray.concatenate', 'repeat': 'interp_numarray.repeat', + 'where': 'interp_arrayops.where', 'set_string_function': 'appbridge.set_string_function', diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -10,6 +10,7 @@ from 
pypy.module.micronumpy.interp_dtype import get_dtype_cache from pypy.module.micronumpy.interp_numarray import (Scalar, BaseArray, scalar_w, W_NDimArray, array) +from pypy.module.micronumpy.interp_arrayops import where from pypy.module.micronumpy import interp_ufuncs from pypy.rlib.objectmodel import specialize, instantiate @@ -35,6 +36,7 @@ SINGLE_ARG_FUNCTIONS = ["sum", "prod", "max", "min", "all", "any", "unegative", "flat", "tostring"] TWO_ARG_FUNCTIONS = ["dot", 'take'] +THREE_ARG_FUNCTIONS = ['where'] class FakeSpace(object): w_ValueError = None @@ -445,14 +447,25 @@ arg = self.args[1].execute(interp) if not isinstance(arg, BaseArray): raise ArgumentNotAnArray - if not isinstance(arg, BaseArray): - raise ArgumentNotAnArray if self.name == "dot": w_res = arr.descr_dot(interp.space, arg) elif self.name == 'take': w_res = arr.descr_take(interp.space, arg) else: assert False # unreachable code + elif self.name in THREE_ARG_FUNCTIONS: + if len(self.args) != 3: + raise ArgumentMismatch + arg1 = self.args[1].execute(interp) + arg2 = self.args[2].execute(interp) + if not isinstance(arg1, BaseArray): + raise ArgumentNotAnArray + if not isinstance(arg2, BaseArray): + raise ArgumentNotAnArray + if self.name == "where": + w_res = where(interp.space, arr, arg1, arg2) + else: + assert False else: raise WrongFunctionName if isinstance(w_res, BaseArray): diff --git a/pypy/module/micronumpy/interp_arrayops.py b/pypy/module/micronumpy/interp_arrayops.py new file mode 100644 --- /dev/null +++ b/pypy/module/micronumpy/interp_arrayops.py @@ -0,0 +1,90 @@ + +from pypy.module.micronumpy.interp_numarray import convert_to_array,\ + VirtualArray +from pypy.module.micronumpy import signature + +class WhereArray(VirtualArray): + def __init__(self, space, arr, x, y): + self.arr = arr + self.x = x + self.y = y + VirtualArray.__init__(self, 'where', arr.shape[:], + x.find_dtype()) + + def create_sig(self): + if self.forced_result is not None: + return self.forced_result.create_sig() + 
return signature.WhereSignature(self.res_dtype, self.arr.find_dtype(), + self.arr.create_sig(), + self.x.create_sig(), + self.y.create_sig()) + + def _del_sources(self): + self.arr = None + self.x = None + self.y = None + +def where(space, w_arr, w_x, w_y): + """where(condition, [x, y]) + + Return elements, either from `x` or `y`, depending on `condition`. + + If only `condition` is given, return ``condition.nonzero()``. + + Parameters + ---------- + condition : array_like, bool + When True, yield `x`, otherwise yield `y`. + x, y : array_like, optional + Values from which to choose. `x` and `y` need to have the same + shape as `condition`. + + Returns + ------- + out : ndarray or tuple of ndarrays + If both `x` and `y` are specified, the output array contains + elements of `x` where `condition` is True, and elements from + `y` elsewhere. + + If only `condition` is given, return the tuple + ``condition.nonzero()``, the indices where `condition` is True. + + See Also + -------- + nonzero, choose + + Notes + ----- + If `x` and `y` are given and input arrays are 1-D, `where` is + equivalent to:: + + [xv if c else yv for (c,xv,yv) in zip(condition,x,y)] + + Examples + -------- + >>> np.where([[True, False], [True, True]], + ... [[1, 2], [3, 4]], + ... [[9, 8], [7, 6]]) + array([[1, 8], + [3, 4]]) + + >>> np.where([[0, 1], [1, 0]]) + (array([0, 1]), array([1, 0])) + + >>> x = np.arange(9.).reshape(3, 3) + >>> np.where( x > 5 ) + (array([2, 2, 2]), array([0, 1, 2])) + >>> x[np.where( x > 3.0 )] # Note: result is 1D. + array([ 4., 5., 6., 7., 8.]) + >>> np.where(x < 5, x, -1) # Note: broadcasting. 
+    array([[ 0.,  1.,  2.],
+           [ 3.,  4., -1.],
+           [-1., -1., -1.]])
+
+
+    NOTE: support for not passing x and y is unsupported
+    """
+    arr = convert_to_array(space, w_arr)
+    x = convert_to_array(space, w_x)
+    y = convert_to_array(space, w_y)
+    return WhereArray(space, arr, x, y)
diff --git a/pypy/module/micronumpy/interp_extras.py b/pypy/module/micronumpy/interp_extras.py
--- a/pypy/module/micronumpy/interp_extras.py
+++ b/pypy/module/micronumpy/interp_extras.py
@@ -1,5 +1,5 @@
 from pypy.interpreter.gateway import unwrap_spec
-from pypy.module.micronumpy.interp_numarray import BaseArray
+from pypy.module.micronumpy.interp_numarray import BaseArray, get_numarray_cache
 
 @unwrap_spec(array=BaseArray)
@@ -13,3 +13,7 @@
     """
     del array.invalidates[:]
     return space.w_None
+
+@unwrap_spec(arg=bool)
+def set_invalidation(space, arg):
+    get_numarray_cache(space).enable_invalidation = arg
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -72,9 +72,10 @@
             arr.force_if_needed()
         del self.invalidates[:]
 
-    def add_invalidates(self, other):
-        self.invalidates.append(other)
-
+    def add_invalidates(self, space, other):
+        if get_numarray_cache(space).enable_invalidation:
+            self.invalidates.append(other)
+
     def descr__new__(space, w_subtype, w_size, w_dtype=None):
         dtype = space.interp_w(interp_dtype.W_Dtype,
             space.call_function(space.gettypefor(interp_dtype.W_Dtype), w_dtype)
@@ -1583,3 +1584,10 @@
             arr.fill(space, space.wrap(False))
         return arr
     return space.wrap(False)
+
+class NumArrayCache(object):
+    def __init__(self, space):
+        self.enable_invalidation = True
+
+def get_numarray_cache(space):
+    return space.fromcache(NumArrayCache)
diff --git a/pypy/module/micronumpy/interp_ufuncs.py b/pypy/module/micronumpy/interp_ufuncs.py
--- a/pypy/module/micronumpy/interp_ufuncs.py
+++ b/pypy/module/micronumpy/interp_ufuncs.py
@@ -278,7 +278,7 @@
         else:
             w_res = Call1(self.func, self.name, w_obj.shape, calc_dtype, res_dtype,
                           w_obj)
-        w_obj.add_invalidates(w_res)
+        w_obj.add_invalidates(space, w_res)
         return w_res
@@ -347,8 +347,8 @@
         w_res = Call2(self.func, self.name,
                       new_shape, calc_dtype,
                       res_dtype, w_lhs, w_rhs, out)
-        w_lhs.add_invalidates(w_res)
-        w_rhs.add_invalidates(w_res)
+        w_lhs.add_invalidates(space, w_res)
+        w_rhs.add_invalidates(space, w_res)
         if out:
             w_res.get_concrete()
         return w_res
diff --git a/pypy/module/micronumpy/signature.py b/pypy/module/micronumpy/signature.py
--- a/pypy/module/micronumpy/signature.py
+++ b/pypy/module/micronumpy/signature.py
@@ -498,3 +498,63 @@
             arr.left.setitem(iterator.offset, value)
     def debug_repr(self):
         return 'AxisReduceSig(%s, %s)' % (self.name, self.right.debug_repr())
+
+class WhereSignature(Signature):
+    _immutable_fields_ = ['dtype', 'arrdtype', 'arrsig', 'xsig', 'ysig']
+
+    def __init__(self, dtype, arrdtype, arrsig, xsig, ysig):
+        self.dtype = dtype
+        self.arrdtype = arrdtype
+        self.arrsig = arrsig
+        self.xsig = xsig
+        self.ysig = ysig
+
+    def hash(self):
+        return (intmask(self.arrsig.hash() << 1) ^
+                intmask(self.xsig.hash() << 2) ^
+                intmask(self.ysig.hash() << 3))
+
+    def eq(self, other, compare_array_no=True):
+        if type(self) is not type(other):
+            return False
+        assert isinstance(other, WhereSignature)
+        return (self.arrsig.eq(other.arrsig, compare_array_no) and
+                self.xsig.eq(other.xsig, compare_array_no) and
+                self.ysig.eq(other.ysig, compare_array_no))
+
+    def _invent_array_numbering(self, arr, cache):
+        from pypy.module.micronumpy.interp_arrayops import WhereArray
+        assert isinstance(arr, WhereArray)
+        self.arrsig._invent_array_numbering(arr.arr, cache)
+        self.xsig._invent_array_numbering(arr.x, cache)
+        self.ysig._invent_array_numbering(arr.y, cache)
+
+    def _invent_numbering(self, cache, allnumbers):
+        self.arrsig._invent_numbering(cache, allnumbers)
+        self.xsig._invent_numbering(cache, allnumbers)
+        self.ysig._invent_numbering(cache, allnumbers)
+
+    def _create_iter(self, iterlist, arraylist, arr, transforms):
+        from pypy.module.micronumpy.interp_arrayops import WhereArray
+
+        assert isinstance(arr, WhereArray)
+        # XXX this does not support broadcasting correctly
+        self.arrsig._create_iter(iterlist, arraylist, arr.arr, transforms)
+        self.xsig._create_iter(iterlist, arraylist, arr.x, transforms)
+        self.ysig._create_iter(iterlist, arraylist, arr.y, transforms)
+
+    def eval(self, frame, arr):
+        from pypy.module.micronumpy.interp_arrayops import WhereArray
+        assert isinstance(arr, WhereArray)
+        lhs = self.xsig.eval(frame, arr.x).convert_to(self.dtype)
+        rhs = self.ysig.eval(frame, arr.y).convert_to(self.dtype)
+        w_val = self.arrsig.eval(frame, arr.arr)
+        if self.arrdtype.itemtype.bool(w_val):
+            return lhs
+        else:
+            return rhs
+
+    def debug_repr(self):
+        return 'Where(%s, %s, %s)' % (self.arrsig.debug_repr(),
+                                      self.xsig.debug_repr(),
+                                      self.ysig.debug_repr())
diff --git a/pypy/module/micronumpy/test/test_arrayops.py b/pypy/module/micronumpy/test/test_arrayops.py
new file mode 100644
--- /dev/null
+++ b/pypy/module/micronumpy/test/test_arrayops.py
@@ -0,0 +1,16 @@
+
+from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest
+
+class AppTestNumSupport(BaseNumpyAppTest):
+    def test_where(self):
+        from _numpypy import where, ones, zeros, array
+        a = [1, 2, 3, 0, -3]
+        a = where(array(a) > 0, ones(5), zeros(5))
+        assert (a == [1, 1, 1, 0, 0]).all()
+
+    def test_where_invalidates(self):
+        from _numpypy import where, ones, zeros, array
+        a = array([1, 2, 3, 0, -3])
+        b = where(a > 0, ones(5), zeros(5))
+        a[0] = 0
+        assert (b == [1, 1, 1, 0, 0]).all()
diff --git a/pypy/module/micronumpy/test/test_compile.py b/pypy/module/micronumpy/test/test_compile.py
--- a/pypy/module/micronumpy/test/test_compile.py
+++ b/pypy/module/micronumpy/test/test_compile.py
@@ -270,3 +270,13 @@
         b -> 2
         """)
         assert interp.results[0].value == 3
+
+    def test_where(self):
+        interp = self.run('''
+        a = [1, 0, 3, 0]
+        b = [1, 1, 1, 1]
+        c = [0, 0, 0, 0]
+        d = where(a, b, c)
+        d -> 1
+        ''')
+        assert interp.results[0].value == 0
diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py
--- a/pypy/module/micronumpy/test/test_dtypes.py
+++ b/pypy/module/micronumpy/test/test_dtypes.py
@@ -184,6 +184,33 @@
 
         assert dtype("float") is dtype(float)
 
+    def test_index_int8(self):
+        from _numpypy import array, int8
+
+        a = array(range(10), dtype=int8)
+        b = array([0] * 10, dtype=int8)
+        for idx in b: a[idx] += 1
+
+    def test_index_int16(self):
+        from _numpypy import array, int16
+
+        a = array(range(10), dtype=int16)
+        b = array([0] * 10, dtype=int16)
+        for idx in b: a[idx] += 1
+
+    def test_index_int32(self):
+        from _numpypy import array, int32
+
+        a = array(range(10), dtype=int32)
+        b = array([0] * 10, dtype=int32)
+        for idx in b: a[idx] += 1
+
+    def test_index_int64(self):
+        from _numpypy import array, int64
+
+        a = array(range(10), dtype=int64)
+        b = array([0] * 10, dtype=int64)
+        for idx in b: a[idx] += 1
 
 class AppTestTypes(BaseNumpyAppTest):
     def test_abstract_types(self):
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -1,9 +1,8 @@
 
 import py
 
-from pypy.conftest import gettestobjspace, option
+from pypy.conftest import option
 from pypy.interpreter.error import OperationError
-from pypy.module.micronumpy import signature
 from pypy.module.micronumpy.appbridge import get_appbridge_cache
 from pypy.module.micronumpy.interp_iter import Chunk, Chunks
 from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement
@@ -1831,6 +1830,19 @@
         a[a & 1 == 1] = array([8, 9, 10])
         assert (a == [[0, 8], [2, 9], [4, 10]]).all()
 
+    def test_array_indexing_bool_setitem_multidim(self):
+        from _numpypy import arange
+        a = arange(10).reshape(5, 2)
+        a[a & 1 == 0] = 15
+        assert (a == [[15, 1], [15, 3], [15, 5], [15, 7], [15, 9]]).all()
+
+    def test_array_indexing_bool_setitem_2(self):
+        from _numpypy import arange
+        a = arange(10).reshape(5, 2)
+        a = a[::2]
+        a[a & 1 == 0] = 15
+        assert (a == [[15, 1], [15, 5], [15, 9]]).all()
+
     def test_copy_kwarg(self):
         from _numpypy import array
         x = array([1, 2, 3])
diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py
--- a/pypy/module/micronumpy/test/test_ufuncs.py
+++ b/pypy/module/micronumpy/test/test_ufuncs.py
@@ -1,7 +1,6 @@
 from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest
 
-
 class AppTestUfuncs(BaseNumpyAppTest):
     def test_ufunc_instance(self):
         from _numpypy import add, ufunc
@@ -149,7 +148,11 @@
         assert math.isnan(fmax(0, nan))
         assert math.isnan(fmax(nan, nan))
         # The numpy docs specify that the FIRST NaN should be used if both are NaN
-        assert math.copysign(1.0, fmax(nnan, nan)) == -1.0
+        # Since comparisons with nnan and nan all return false,
+        # use copysign on both sides to sidestep bug in nan representaion
+        # on Microsoft win32
+        assert math.copysign(1., fmax(nnan, nan)) == math.copysign(1., nnan)
+
     def test_fmin(self):
         from _numpypy import fmin
@@ -165,7 +168,9 @@
         assert math.isnan(fmin(0, nan))
         assert math.isnan(fmin(nan, nan))
         # The numpy docs specify that the FIRST NaN should be used if both are NaN
-        assert math.copysign(1.0, fmin(nnan, nan)) == -1.0
+        # use copysign on both sides to sidestep bug in nan representaion
+        # on Microsoft win32
+        assert math.copysign(1., fmin(nnan, nan)) == math.copysign(1., nnan)
 
     def test_fmod(self):
         from _numpypy import fmod
@@ -762,6 +767,8 @@
     def test_logaddexp(self):
         import math
+        import sys
+        float_max, float_min = sys.float_info.max, sys.float_info.min
         from _numpypy import logaddexp
 
         # From the numpy documentation
@@ -772,7 +779,8 @@
 
         assert logaddexp(0, 0) == math.log(2)
         assert logaddexp(float('-inf'), 0) == 0
-        assert logaddexp(12345678, 12345678) == float('inf')
+        assert logaddexp(float_max, float_max) == float_max
+        assert logaddexp(float_min, float_min) == math.log(2)
 
         assert math.isnan(logaddexp(float('nan'), 1))
         assert math.isnan(logaddexp(1, float('nan')))
@@ -785,6 +793,8 @@
     def test_logaddexp2(self):
         import math
+        import sys
+        float_max, float_min = sys.float_info.max, sys.float_info.min
         from _numpypy import logaddexp2
         log2 = math.log(2)
@@ -796,7 +806,8 @@
 
         assert logaddexp2(0, 0) == 1
         assert logaddexp2(float('-inf'), 0) == 0
-        assert logaddexp2(12345678, 12345678) == float('inf')
+        assert logaddexp2(float_max, float_max) == float_max
+        assert logaddexp2(float_min, float_min) == 1.0
 
         assert math.isnan(logaddexp2(float('nan'), 1))
         assert math.isnan(logaddexp2(1, float('nan')))
diff --git a/pypy/module/micronumpy/types.py b/pypy/module/micronumpy/types.py
--- a/pypy/module/micronumpy/types.py
+++ b/pypy/module/micronumpy/types.py
@@ -17,6 +17,7 @@
      'render_as_void': True})
 degToRad = math.pi / 180.0
 log2 = math.log(2)
+log2e = 1./log2
 
 def simple_unary_op(func):
     specialize.argtype(1)(func)
@@ -841,45 +842,26 @@
 
     @simple_binary_op
     def logaddexp(self, v1, v2):
-        try:
-            v1e = math.exp(v1)
-        except OverflowError:
-            v1e = rfloat.INFINITY
-        try:
-            v2e = math.exp(v2)
-        except OverflowError:
-            v2e = rfloat.INFINITY
+        tmp = v1 - v2
+        if tmp > 0:
+            return v1 + rfloat.log1p(math.exp(-tmp))
+        elif tmp <= 0:
+            return v2 + rfloat.log1p(math.exp(tmp))
+        else:
+            return v1 + v2
 
-        v12e = v1e + v2e
-        try:
-            return math.log(v12e)
-        except ValueError:
-            if v12e == 0.0:
-                # CPython raises ValueError here, so we have to check
-                # the value to find the correct numpy return value
-                return -rfloat.INFINITY
-            return rfloat.NAN
+    def npy_log2_1p(self, v):
+        return log2e * rfloat.log1p(v)
 
     @simple_binary_op
     def logaddexp2(self, v1, v2):
-        try:
-            v1e = math.pow(2, v1)
-        except OverflowError:
-            v1e = rfloat.INFINITY
-        try:
-            v2e = math.pow(2, v2)
-        except OverflowError:
-            v2e = rfloat.INFINITY
-
-        v12e = v1e + v2e
-        try:
-            return math.log(v12e) / log2
-        except ValueError:
-            if v12e == 0.0:
-                # CPython raises ValueError here, so we have to check
-                # the value to find the correct numpy return value
-                return -rfloat.INFINITY
-            return rfloat.NAN
+        tmp = v1 - v2
+        if tmp > 0:
+            return v1 + self.npy_log2_1p(math.pow(2, -tmp))
+        if tmp <= 0:
+            return v2 + self.npy_log2_1p(math.pow(2, tmp))
+        else:
+            return v1 + v2
 
 class NonNativeFloat(NonNativePrimitive, Float):
     _mixin_ = True
diff --git a/pypy/module/pypyjit/test_pypy_c/test_00_model.py b/pypy/module/pypyjit/test_pypy_c/test_00_model.py
--- a/pypy/module/pypyjit/test_pypy_c/test_00_model.py
+++ b/pypy/module/pypyjit/test_pypy_c/test_00_model.py
@@ -54,7 +54,8 @@
             cmdline += ['--jit', ','.join(jitcmdline)]
         cmdline.append(str(self.filepath))
         #
-        env={'PYPYLOG': self.log_string + ':' + str(logfile)}
+        env = os.environ.copy()
+        env['PYPYLOG'] = self.log_string + ':' + str(logfile)
         pipe = subprocess.Popen(cmdline,
                                 env=env,
                                 stdout=subprocess.PIPE,
diff --git a/pypy/module/pypyjit/test_pypy_c/test_exception.py b/pypy/module/pypyjit/test_pypy_c/test_exception.py
--- a/pypy/module/pypyjit/test_pypy_c/test_exception.py
+++ b/pypy/module/pypyjit/test_pypy_c/test_exception.py
@@ -91,3 +91,29 @@
             --TICK--
             jump(..., descr=...)
         """)
+
+    def test_continue_in_finally(self):
+        # check that 'continue' inside a try:finally: block is correctly
+        # detected as closing a loop
+        py.test.skip("is this case important?")
+        def f(n):
+            i = 0
+            while 1:
+                try:
+                    if i < n:
+                        continue
+                finally:
+                    i += 1
+            return i
+
+        log = self.run(f, [2000])
+        assert log.result == 2001
+        loop, = log.loops_by_filename(self.filepath)
+        assert loop.match("""
+            i40 = int_add_ovf(i31, 1)
+            guard_no_overflow(descr=...)
+            i41 = int_lt(i40, i33)
+            guard_true(i41, descr=...)
+            --TICK--
+            jump(..., descr=...)
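As a side note on the `types.py` hunk above: the rewritten `logaddexp`/`logaddexp2` avoid the old overflow-prone exp-then-log approach by factoring out the larger argument, so `exp()` is only ever called on a non-positive value. A minimal pure-Python sketch of the same identity (plain `math`, outside RPython, so `math.log1p` stands in for `rfloat.log1p`):

```python
import math

def logaddexp(v1, v2):
    # log(exp(v1) + exp(v2)) without overflowing exp():
    # factor out the larger argument, leaving exp() of a value <= 0.
    tmp = v1 - v2
    if tmp > 0:
        return v1 + math.log1p(math.exp(-tmp))
    elif tmp <= 0:
        return v2 + math.log1p(math.exp(tmp))
    else:
        # tmp is NaN (one of the inputs is NaN): propagate it
        return v1 + v2

def logaddexp2(v1, v2):
    # the same identity in base 2: log2(2**v1 + 2**v2)
    log2e = 1.0 / math.log(2)
    tmp = v1 - v2
    if tmp > 0:
        return v1 + log2e * math.log1p(math.pow(2, -tmp))
    elif tmp <= 0:
        return v2 + log2e * math.log1p(math.pow(2, tmp))
    else:
        return v1 + v2
```

With this shape, `logaddexp(float_max, float_max)` stays finite (the added `log(2)` is absorbed by float rounding), which is exactly what the updated `test_logaddexp` asserts.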
+ """) diff --git a/pypy/module/rctime/interp_time.py b/pypy/module/rctime/interp_time.py --- a/pypy/module/rctime/interp_time.py +++ b/pypy/module/rctime/interp_time.py @@ -24,10 +24,9 @@ from pypy.module.thread import ll_thread as thread eci = ExternalCompilationInfo( + includes = ['windows.h'], post_include_bits = ["BOOL pypy_timemodule_setCtrlHandler(HANDLE event);"], separate_module_sources=[''' - #include - static HANDLE interrupt_event; static BOOL WINAPI CtrlHandlerRoutine( @@ -573,7 +572,7 @@ if i < length and format[i] == '#': # not documented by python i += 1 - if i >= length or format[i] not in "aAbBcdfHIjmMpSUwWxXyYzZ%": + if i >= length or format[i] not in "aAbBcdHIjmMpSUwWxXyYzZ%": raise OperationError(space.w_ValueError, space.wrap("invalid format string")) i += 1 diff --git a/pypy/module/zipimport/interp_zipimport.py b/pypy/module/zipimport/interp_zipimport.py --- a/pypy/module/zipimport/interp_zipimport.py +++ b/pypy/module/zipimport/interp_zipimport.py @@ -229,7 +229,11 @@ startpos = fullname.rfind('.') + 1 # 0 when not found assert startpos >= 0 subname = fullname[startpos:] - return self.prefix + subname.replace('.', '/') + if ZIPSEP == os.path.sep: + return self.prefix + subname.replace('.', '/') + else: + return self.prefix.replace(os.path.sep, ZIPSEP) + \ + subname.replace('.', '/') def make_co_filename(self, filename): """ diff --git a/pypy/module/zipimport/test/test_zipimport.py b/pypy/module/zipimport/test/test_zipimport.py --- a/pypy/module/zipimport/test/test_zipimport.py +++ b/pypy/module/zipimport/test/test_zipimport.py @@ -314,13 +314,11 @@ assert z.get_filename("package") == mod.__file__ def test_subdirectory_twice(self): - import os, zipimport + #import os, zipimport self.writefile("package/__init__.py", "") self.writefile("package/subpackage/__init__.py", "") self.writefile("package/subpackage/foo.py", "") - import sys - print(sys.path) mod = __import__('package.subpackage.foo', None, None, []) assert mod diff --git 
a/pypy/objspace/descroperation.py b/pypy/objspace/descroperation.py --- a/pypy/objspace/descroperation.py +++ b/pypy/objspace/descroperation.py @@ -391,7 +391,8 @@ def contains(space, w_container, w_item): w_descr = space.lookup(w_container, '__contains__') if w_descr is not None: - return space.get_and_call_function(w_descr, w_container, w_item) + w_result = space.get_and_call_function(w_descr, w_container, w_item) + return space.nonzero(w_result) return space.sequence_contains(w_container, w_item) def sequence_contains(space, w_container, w_item): diff --git a/pypy/objspace/flow/operation.py b/pypy/objspace/flow/operation.py --- a/pypy/objspace/flow/operation.py +++ b/pypy/objspace/flow/operation.py @@ -358,8 +358,8 @@ result = op(*args) except Exception, e: etype = e.__class__ - msg = "generated by a constant operation: %s" % ( - name) + msg = "generated by a constant operation:\n\t%s%r" % ( + name, tuple(args)) raise OperationThatShouldNotBePropagatedError( self.wrap(etype), self.wrap(msg)) else: diff --git a/pypy/objspace/std/model.py b/pypy/objspace/std/model.py --- a/pypy/objspace/std/model.py +++ b/pypy/objspace/std/model.py @@ -355,7 +355,10 @@ __slots__ = () def __repr__(self): - s = '%s(%s)' % (self.__class__.__name__, getattr(self, 'name', '')) + name = getattr(self, 'name', '') + if not isinstance(name, str): + name = '' + s = '%s(%s)' % (self.__class__.__name__, name) w_cls = getattr(self, 'w__class__', None) if w_cls is not None and w_cls is not self: s += ' instance of %s' % self.w__class__ diff --git a/pypy/objspace/std/test/test_methodcache.py b/pypy/objspace/std/test/test_methodcache.py --- a/pypy/objspace/std/test/test_methodcache.py +++ b/pypy/objspace/std/test/test_methodcache.py @@ -70,8 +70,31 @@ assert a.f() == 42 + i A.f = eval("lambda self: %s" % (42 + i + 1, )) cache_counter = __pypy__.method_cache_counter("f") - # the cache hits come from A.f = ..., which first does a lookup on A as - # well + # + # a bit of explanation about what's 
going on. (1) is the line "a.f()" + # and (2) is "A.f = ...". + # + # at line (1) we do the lookup on type(a).f + # + # at line (2) we do a setattr on A. However, descr_setattr does also a + # lookup of type(A).f i.e. type.f, to check if by chance 'f' is a data + # descriptor. + # + # At the first iteration: + # (1) is a miss because it's the first lookup of A.f. The result is cached + # + # (2) is a miss because it is the first lookup of type.f. The + # (non-existant) result is cached. The version of A changes, and 'f' + # is changed to be a cell object, so that subsequest assignments won't + # change the version of A + # + # At the second iteration: + # (1) is a miss because the version of A changed just before + # (2) is a hit, because type.f is cached. The version of A no longer changes + # + # At the third and subsequent iterations: + # (1) is a hit, because the version of A did not change + # (2) is a hit, see above assert cache_counter == (17, 3) def test_subclasses(self): diff --git a/pypy/objspace/test/test_descroperation.py b/pypy/objspace/test/test_descroperation.py --- a/pypy/objspace/test/test_descroperation.py +++ b/pypy/objspace/test/test_descroperation.py @@ -640,5 +640,30 @@ except (Exception) as e_len: assert str(e_bool) == str(e_len) + def test_bool___contains__(self): + class X(object): + def __contains__(self, item): + if item == 'foo': + return 42 + else: + return 'hello world' + x = X() + res = 'foo' in x + assert res is True + res = 'bar' in x + assert res is True + # + class MyError(Exception): + pass + class CannotConvertToBool(object): + def __nonzero__(self): + raise MyError + class X(object): + def __contains__(self, item): + return CannotConvertToBool() + x = X() + raises(MyError, "'foo' in x") + + class AppTestWithBuiltinShortcut(AppTest_Descroperation): OPTIONS = {'objspace.std.builtinshortcut': True} diff --git a/pypy/pytest.ini b/pypy/pytest.ini --- a/pypy/pytest.ini +++ b/pypy/pytest.ini @@ -1,2 +1,2 @@ [pytest] -addopts = 
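The `descroperation` fix above wraps the result of a user-defined `__contains__` in `space.nonzero`, so the `in` operator always yields a real bool even when `__contains__` returns some other truthy object. The new test can be condensed into a standalone illustration (CPython behaves the same way, which is what the fix brings PyPy in line with):

```python
class X(object):
    def __contains__(self, item):
        # deliberately return non-bool truthy values
        if item == 'foo':
            return 42
        else:
            return 'hello world'

x = X()
# 'in' calls __contains__ and then truth-tests the result, so the
# whole expression evaluates to an actual bool, never 42 or a string
res_foo = 'foo' in x
res_bar = 'bar' in x
```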
--assert=plain -rf +addopts = --assert=reinterp -rf diff --git a/pypy/rlib/parsing/test/test_ebnfparse.py b/pypy/rlib/parsing/test/test_ebnfparse.py --- a/pypy/rlib/parsing/test/test_ebnfparse.py +++ b/pypy/rlib/parsing/test/test_ebnfparse.py @@ -103,6 +103,7 @@ """) parse = make_parse_function(regexs, rules) tree = parse("prefix(\n\tlonger(and_nested(term(X))), Xya, _, X0, _).") + assert tree.children[0].children[0].children[2].children[0].getsourcepos().lineno == 1 assert tree is not None tree = parse(""" foo(X, Y) :- bar(Y, X), bar(Y, X) ; foobar(X, Y, 1234, atom).""") diff --git a/pypy/rlib/parsing/tree.py b/pypy/rlib/parsing/tree.py --- a/pypy/rlib/parsing/tree.py +++ b/pypy/rlib/parsing/tree.py @@ -23,6 +23,9 @@ self.symbol = symbol self.additional_info = additional_info self.token = token + + def getsourcepos(self): + return self.token.source_pos def __repr__(self): return "Symbol(%r, %r)" % (self.symbol, self.additional_info) @@ -49,6 +52,9 @@ self.children = children self.symbol = symbol + def getsourcepos(self): + return self.children[0].getsourcepos() + def __str__(self): return "%s(%s)" % (self.symbol, ", ".join([str(c) for c in self.children])) diff --git a/pypy/rlib/rbigint.py b/pypy/rlib/rbigint.py --- a/pypy/rlib/rbigint.py +++ b/pypy/rlib/rbigint.py @@ -40,7 +40,7 @@ # In that case, do 5 bits at a time. The potential drawback is that # a table of 2**5 intermediate results is computed. 
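For exponents below the cutoff, `rbigint` keeps plain left-to-right binary exponentiation (HAC Algorithm 14.79). Restated over ordinary Python ints rather than the digit-based RPython representation, the algorithm is simply:

```python
def pow_binary(a, b, m):
    # left-to-right square-and-multiply: scan the exponent's bits from
    # the most significant end; square every step, multiply in 'a' when
    # the current bit is set
    z = 1
    for i in range(b.bit_length() - 1, -1, -1):
        z = z * z % m
        if (b >> i) & 1:
            z = z * a % m
    return z
```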
-## FIVEARY_CUTOFF = 8 disabled for now +FIVEARY_CUTOFF = 8 def _mask_digit(x): @@ -474,7 +474,7 @@ # python adaptation: moved macros REDUCE(X) and MULT(X, Y, result) # into helper function result = _help_mult(x, y, c) - if 1: ## b.numdigits() <= FIVEARY_CUTOFF: + if b.numdigits() <= FIVEARY_CUTOFF: # Left-to-right binary exponentiation (HAC Algorithm 14.79) # http://www.cacr.math.uwaterloo.ca/hac/about/chap14.pdf i = b.numdigits() - 1 @@ -487,30 +487,51 @@ z = _help_mult(z, a, c) j >>= 1 i -= 1 -## else: -## This code is disabled for now, because it assumes that -## SHIFT is a multiple of 5. It could be fixed but it looks -## like it's more troubles than benefits... -## -## # Left-to-right 5-ary exponentiation (HAC Algorithm 14.82) -## # This is only useful in the case where c != None. -## # z still holds 1L -## table = [z] * 32 -## table[0] = z -## for i in range(1, 32): -## table[i] = _help_mult(table[i-1], a, c) -## i = b.numdigits() - 1 -## while i >= 0: -## bi = b.digit(i) -## j = SHIFT - 5 -## while j >= 0: -## index = (bi >> j) & 0x1f -## for k in range(5): -## z = _help_mult(z, z, c) -## if index: -## z = _help_mult(z, table[index], c) -## j -= 5 -## i -= 1 + else: + # Left-to-right 5-ary exponentiation (HAC Algorithm 14.82) + # This is only useful in the case where c != None. + # z still holds 1L + table = [z] * 32 + table[0] = z + for i in range(1, 32): + table[i] = _help_mult(table[i-1], a, c) + i = b.numdigits() + # Note that here SHIFT is not a multiple of 5. The difficulty + # is to extract 5 bits at a time from 'b', starting from the + # most significant digits, so that at the end of the algorithm + # it falls exactly to zero. 
+ # m = max number of bits = i * SHIFT + # m+ = m rounded up to the next multiple of 5 + # j = (m+) % SHIFT = (m+) - (i * SHIFT) + # (computed without doing "i * SHIFT", which might overflow) + j = i % 5 + if j != 0: + j = 5 - j + if not we_are_translated(): + assert j == (i*SHIFT+4)//5*5 - i*SHIFT + # + accum = r_uint(0) + while True: + j -= 5 + if j >= 0: + index = (accum >> j) & 0x1f + else: + # 'accum' does not have enough digit. + # must get the next digit from 'b' in order to complete + i -= 1 + if i < 0: + break # done + bi = b.udigit(i) + index = ((accum << (-j)) | (bi >> (j+SHIFT))) & 0x1f + accum = bi + j += SHIFT + # + for k in range(5): + z = _help_mult(z, z, c) + if index: + z = _help_mult(z, table[index], c) + # + assert j == -5 if negativeOutput and z.sign != 0: z = z.sub(c) diff --git a/pypy/rlib/ropenssl.py b/pypy/rlib/ropenssl.py --- a/pypy/rlib/ropenssl.py +++ b/pypy/rlib/ropenssl.py @@ -4,8 +4,10 @@ from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rlib.unroll import unrolling_iterable -import sys +import sys, os +link_files = [] +testonly_libraries = [] if sys.platform == 'win32' and platform.name != 'mingw32': libraries = ['libeay32', 'ssleay32', 'user32', 'advapi32', 'gdi32', 'msvcrt', 'ws2_32'] @@ -18,8 +20,18 @@ # so that openssl/ssl.h can repair this nonsense. 
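The re-enabled 5-ary branch above precomputes a 32-entry table and consumes the exponent five bits at a time, rounding the bit count up to a multiple of 5 so the window index falls exactly to zero at the end. A simplified sketch over Python ints, which sidesteps the digit-boundary bookkeeping the RPython code needs because `b.udigit(i)` only exposes `SHIFT` bits at a time:

```python
def pow_fiveary(a, b, m):
    # table[i] == a**i % m for i in range(32)
    table = [1]
    for _ in range(31):
        table.append(table[-1] * a % m)
    z = 1
    nbits = (b.bit_length() + 4) // 5 * 5   # round up to a multiple of 5
    for j in range(nbits - 5, -1, -5):
        for _ in range(5):                  # five squarings per window
            z = z * z % m
        index = (b >> j) & 0x1f             # next 5-bit window of b
        if index:
            z = z * table[index] % m
    return z
```

The payoff is one table lookup and multiply per five exponent bits instead of up to five multiplies, which only helps when a modulus is present (otherwise the table entries grow without bound).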
'wincrypt.h'] else: - libraries = ['ssl', 'crypto'] + libraries = ['z'] includes = [] + if (sys.platform.startswith('linux') and + os.path.exists('/usr/lib/libssl.a') and + os.path.exists('/usr/lib/libcrypto.a')): + # use static linking to avoid the infinite + # amount of troubles due to symbol versions + # and 0.9.8/1.0.0 + link_files += ['/usr/lib/libssl.a', '/usr/lib/libcrypto.a'] + testonly_libraries += ['ssl', 'crypto'] + else: + libraries += ['ssl', 'crypto'] includes += [ 'openssl/ssl.h', @@ -31,6 +43,8 @@ eci = ExternalCompilationInfo( libraries = libraries, + link_files = link_files, + testonly_libraries = testonly_libraries, includes = includes, export_symbols = [], post_include_bits = [ diff --git a/pypy/rlib/rposix.py b/pypy/rlib/rposix.py --- a/pypy/rlib/rposix.py +++ b/pypy/rlib/rposix.py @@ -1,9 +1,11 @@ import os -from pypy.rpython.lltypesystem.rffi import CConstant, CExternVariable, INT +from pypy.rpython.lltypesystem.rffi import (CConstant, CExternVariable, + INT, CCHARPP) from pypy.rpython.lltypesystem import lltype, ll2ctypes, rffi from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rlib.rarithmetic import intmask from pypy.rlib.objectmodel import specialize +from pypy.rlib import jit class CConstantErrno(CConstant): # these accessors are used when calling get_errno() or set_errno() @@ -18,9 +20,69 @@ def __setitem__(self, index, value): assert index == 0 ll2ctypes.TLS.errno = value +if os.name == 'nt': + separate_module_sources =[''' + /* Lifted completely from CPython 3.3 Modules/posix_module.c */ + #include /* for _msize */ + typedef struct { + intptr_t osfhnd; + char osfile; + } my_ioinfo; + extern __declspec(dllimport) char * __pioinfo[]; + #define IOINFO_L2E 5 + #define IOINFO_ARRAY_ELTS (1 << IOINFO_L2E) + #define IOINFO_ARRAYS 64 + #define _NHANDLE_ (IOINFO_ARRAYS * IOINFO_ARRAY_ELTS) + #define FOPEN 0x01 + #define _NO_CONSOLE_FILENO (intptr_t)-2 + /* This function emulates what the windows CRT + does to validate 
file handles */ + int + _PyVerify_fd(int fd) + { + const int i1 = fd >> IOINFO_L2E; + const int i2 = fd & ((1 << IOINFO_L2E) - 1); + + static size_t sizeof_ioinfo = 0; + + /* Determine the actual size of the ioinfo structure, + * as used by the CRT loaded in memory + */ + if (sizeof_ioinfo == 0 && __pioinfo[0] != NULL) { + sizeof_ioinfo = _msize(__pioinfo[0]) / IOINFO_ARRAY_ELTS; + } + if (sizeof_ioinfo == 0) { + /* This should not happen... */ + goto fail; + } + + /* See that it isn't a special CLEAR fileno */ + if (fd != _NO_CONSOLE_FILENO) { + /* Microsoft CRT would check that 0<=fd<_nhandle but we can't do that. Instead + * we check pointer validity and other info + */ + if (0 <= i1 && i1 < IOINFO_ARRAYS && __pioinfo[i1] != NULL) { + /* finally, check that the file is open */ + my_ioinfo* info = (my_ioinfo*)(__pioinfo[i1] + i2 * sizeof_ioinfo); + if (info->osfile & FOPEN) { + return 1; + } + } + } + fail: + errno = EBADF; + return 0; + } + ''',] + export_symbols = ['_PyVerify_fd'] +else: + separate_module_sources = [] + export_symbols = [] errno_eci = ExternalCompilationInfo( - includes=['errno.h'] + includes=['errno.h','stdio.h'], + separate_module_sources = separate_module_sources, + export_symbols = export_symbols, ) _get_errno, _set_errno = CExternVariable(INT, 'errno', errno_eci, @@ -35,6 +97,21 @@ def set_errno(errno): _set_errno(rffi.cast(INT, errno)) +if os.name == 'nt': + _validate_fd = rffi.llexternal( + "_PyVerify_fd", [rffi.INT], rffi.INT, + compilation_info=errno_eci, + ) + @jit.dont_look_inside + def validate_fd(fd): + if not _validate_fd(fd): + raise OSError(get_errno(), 'Bad file descriptor') +else: + def _validate_fd(fd): + return 1 + + def validate_fd(fd): + return 1 def closerange(fd_low, fd_high): # this behaves like os.closerange() from Python 2.6. 
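On Windows the C snippet above reaches into the CRT's `__pioinfo` tables so `_PyVerify_fd` can reject bad descriptors without tripping the CRT's invalid-parameter handler. A portable but slower approximation of the same check, purely as an illustration of what "validate" means here (this is not what the C code does):

```python
import os

def validate_fd(fd):
    # fstat() raises OSError (EBADF) for a closed or never-opened
    # descriptor, so it doubles as a validity probe
    try:
        os.fstat(fd)
        return True
    except OSError:
        return False
```

This mirrors the shape of the rposix test: a freshly opened file's descriptor validates, and the same descriptor fails the check after `close()`.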
diff --git a/pypy/rlib/runicode.py b/pypy/rlib/runicode.py --- a/pypy/rlib/runicode.py +++ b/pypy/rlib/runicode.py @@ -1239,7 +1239,7 @@ pos += 1 continue - if 0xD800 <= oc < 0xDC00 and pos + 1 < size: + if MAXUNICODE < 65536 and 0xD800 <= oc < 0xDC00 and pos + 1 < size: # Map UTF-16 surrogate pairs to Unicode \UXXXXXXXX escapes pos += 1 oc2 = ord(s[pos]) @@ -1355,6 +1355,20 @@ pos = 0 while pos < size: oc = ord(s[pos]) + + if MAXUNICODE < 65536 and 0xD800 <= oc < 0xDC00 and pos + 1 < size: + # Map UTF-16 surrogate pairs to Unicode \UXXXXXXXX escapes + pos += 1 + oc2 = ord(s[pos]) + + if 0xDC00 <= oc2 <= 0xDFFF: + ucs = (((oc & 0x03FF) << 10) | (oc2 & 0x03FF)) + 0x00010000 + raw_unicode_escape_helper(result, ucs) + pos += 1 + continue + # Fall through: isolated surrogates are copied as-is + pos -= 1 + if oc < 0x100: result.append(chr(oc)) else: diff --git a/pypy/rlib/rweakref.py b/pypy/rlib/rweakref.py --- a/pypy/rlib/rweakref.py +++ b/pypy/rlib/rweakref.py @@ -9,6 +9,9 @@ ref = weakref.ref # basic regular weakrefs are supported in RPython +def has_weakref_support(): + return True # returns False if --no-translation-rweakref + class RWeakValueDictionary(object): """A dictionary containing weak values.""" @@ -68,6 +71,20 @@ from pypy.annotation.bookkeeper import getbookkeeper from pypy.tool.pairtype import pairtype +class Entry(extregistry.ExtRegistryEntry): + _about_ = has_weakref_support + + def compute_result_annotation(self): + translator = self.bookkeeper.annotator.translator + res = translator.config.translation.rweakref + return self.bookkeeper.immutablevalue(res) + + def specialize_call(self, hop): + from pypy.rpython.lltypesystem import lltype + hop.exception_cannot_occur() + return hop.inputconst(lltype.Bool, hop.s_result.const) + + class SomeWeakValueDict(annmodel.SomeObject): knowntype = RWeakValueDictionary diff --git a/pypy/rlib/test/test_rbigint.py b/pypy/rlib/test/test_rbigint.py --- a/pypy/rlib/test/test_rbigint.py +++ 
b/pypy/rlib/test/test_rbigint.py @@ -379,6 +379,18 @@ for n, expected in [(37, 9), (1291, 931), (67889, 39464)]: v = two.pow(t, rbigint.fromint(n)) assert v.toint() == expected + # + # more tests, comparing against CPython's answer + enabled = sample(range(5*32), 10) + for i in range(5*32): + t = t.mul(two) # add one random bit + if random() >= 0.5: + t = t.add(rbigint.fromint(1)) + if i not in enabled: + continue # don't take forever + n = randint(1, sys.maxint) + v = two.pow(t, rbigint.fromint(n)) + assert v.toint() == pow(2, t.tolong(), n) def test_pow_lln(self): x = 10L diff --git a/pypy/rlib/test/test_rposix.py b/pypy/rlib/test/test_rposix.py --- a/pypy/rlib/test/test_rposix.py +++ b/pypy/rlib/test/test_rposix.py @@ -131,3 +131,15 @@ os.rmdir(self.ufilename) except Exception: pass + + def test_validate_fd(self): + if os.name != 'nt': + skip('relevant for windows only') + assert rposix._validate_fd(0) == 1 + fid = open(str(udir.join('validate_test.txt')), 'w') + fd = fid.fileno() + assert rposix._validate_fd(fd) == 1 + fid.close() + assert rposix._validate_fd(fd) == 0 + + diff --git a/pypy/rlib/test/test_runicode.py b/pypy/rlib/test/test_runicode.py --- a/pypy/rlib/test/test_runicode.py +++ b/pypy/rlib/test/test_runicode.py @@ -731,3 +731,18 @@ res = interpret(f, [0x10140]) assert res == 0x10140 + + def test_encode_surrogate_pair(self): + u = runicode.UNICHR(0xD800) + runicode.UNICHR(0xDC00) + if runicode.MAXUNICODE < 65536: + # Narrow unicode build, consider utf16 surrogate pairs + assert runicode.unicode_encode_unicode_escape( + u, len(u), True) == r'\U00010000' + assert runicode.unicode_encode_raw_unicode_escape( + u, len(u), True) == r'\U00010000' + else: + # Wide unicode build, don't merge utf16 surrogate pairs + assert runicode.unicode_encode_unicode_escape( + u, len(u), True) == r'\ud800\udc00' + assert runicode.unicode_encode_raw_unicode_escape( + u, len(u), True) == r'\ud800\udc00' diff --git a/pypy/rlib/test/test_rweakref.py 
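The `runicode` change above folds UTF-16 surrogate pairs back into a single code point before escaping them on narrow builds. The arithmetic is compact enough to state on its own (the function name is illustrative; in the diff the expression appears inline):

```python
def combine_surrogates(hi, lo):
    # hi is a lead surrogate (0xD800..0xDBFF), lo a trail surrogate
    # (0xDC00..0xDFFF); each contributes 10 bits of the code point,
    # offset by 0x10000 for the astral planes
    assert 0xD800 <= hi <= 0xDBFF and 0xDC00 <= lo <= 0xDFFF
    return (((hi & 0x03FF) << 10) | (lo & 0x03FF)) + 0x00010000
```

So the pair `\ud800\udc00` maps to U+10000, which is exactly what the new `test_encode_surrogate_pair` expects on a narrow unicode build.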
b/pypy/rlib/test/test_rweakref.py new file mode 100644 --- /dev/null +++ b/pypy/rlib/test/test_rweakref.py @@ -0,0 +1,14 @@ +from pypy.rlib.rweakref import has_weakref_support +from pypy.rpython.test.test_llinterp import interpret + + +def test_has_weakref_support(): + assert has_weakref_support() + + res = interpret(lambda: has_weakref_support(), [], + **{'translation.rweakref': True}) + assert res == True + + res = interpret(lambda: has_weakref_support(), [], + **{'translation.rweakref': False}) + assert res == False diff --git a/pypy/rpython/annlowlevel.py b/pypy/rpython/annlowlevel.py --- a/pypy/rpython/annlowlevel.py +++ b/pypy/rpython/annlowlevel.py @@ -488,6 +488,8 @@ else: TO = PTR if not hasattr(object, '_carry_around_for_tests'): + if object is None: + return lltype.nullptr(PTR.TO) assert not hasattr(object, '_TYPE') object._carry_around_for_tests = True object._TYPE = TO @@ -557,6 +559,8 @@ """NOT_RPYTHON: hack. Reverse the hacking done in cast_object_to_ptr().""" if isinstance(lltype.typeOf(ptr), lltype.Ptr): ptr = ptr._as_obj() + if ptr is None: + return None if not isinstance(ptr, Class): raise NotImplementedError("cast_base_ptr_to_instance: casting %r to %r" % (ptr, Class)) diff --git a/pypy/rpython/lltypesystem/ll2ctypes.py b/pypy/rpython/lltypesystem/ll2ctypes.py --- a/pypy/rpython/lltypesystem/ll2ctypes.py +++ b/pypy/rpython/lltypesystem/ll2ctypes.py @@ -1072,7 +1072,7 @@ try: eci = _eci_cache[old_eci] except KeyError: - eci = old_eci.compile_shared_lib() + eci = old_eci.compile_shared_lib(ignore_a_files=True) _eci_cache[old_eci] = eci libraries = eci.testonly_libraries + eci.libraries + eci.frameworks diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -1167,7 +1167,7 @@ try: return self._lookup_adtmeth(field_name) except AttributeError: - raise AttributeError("%r instance has no field %r" % (self._T._name, + raise 
AttributeError("%r instance has no field %r" % (self._T, field_name)) def __setattr__(self, field_name, val): diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -1168,8 +1168,11 @@ DIRENTP = lltype.Ptr(DIRENT) os_opendir = self.llexternal('opendir', [rffi.CCHARP], DIRP, compilation_info=compilation_info) + # XXX macro=True is hack to make sure we get the correct kind of + # dirent struct (which depends on defines) os_readdir = self.llexternal('readdir', [DIRP], DIRENTP, - compilation_info=compilation_info) + compilation_info=compilation_info, + macro=True) os_closedir = self.llexternal('closedir', [DIRP], rffi.INT, compilation_info=compilation_info) diff --git a/pypy/rpython/module/test/test_ll_os.py b/pypy/rpython/module/test/test_ll_os.py --- a/pypy/rpython/module/test/test_ll_os.py +++ b/pypy/rpython/module/test/test_ll_os.py @@ -4,6 +4,7 @@ import pypy from pypy.tool.udir import udir from pypy.translator.c.test.test_genc import compile +from pypy.rpython.module import ll_os #has side effect of registering functions from pypy.rpython import extregistry import errno diff --git a/pypy/rpython/test/test_llann.py b/pypy/rpython/test/test_llann.py --- a/pypy/rpython/test/test_llann.py +++ b/pypy/rpython/test/test_llann.py @@ -9,6 +9,7 @@ from pypy.rpython.annlowlevel import MixLevelHelperAnnotator from pypy.rpython.annlowlevel import PseudoHighLevelCallable from pypy.rpython.annlowlevel import llhelper, cast_instance_to_base_ptr +from pypy.rpython.annlowlevel import cast_base_ptr_to_instance from pypy.rpython.annlowlevel import base_ptr_lltype from pypy.rpython.llinterp import LLInterpreter from pypy.rpython.test.test_llinterp import interpret @@ -502,7 +503,10 @@ self.y = y def f(x, y): - a = A(x, y) + if x > 20: + a = None + else: + a = A(x, y) a1 = cast_instance_to_base_ptr(a) return a1 @@ -510,3 +514,30 @@ assert typeOf(res) == base_ptr_lltype() assert fishllattr(res, 
'x') == 5 assert fishllattr(res, 'y') == 10 + + res = interpret(f, [25, 10]) + assert res == nullptr(base_ptr_lltype().TO) + + +def test_cast_base_ptr_to_instance(): + class A: + def __init__(self, x, y): + self.x = x + self.y = y + + def f(x, y): + if x > 20: + a = None + else: + a = A(x, y) + a1 = cast_instance_to_base_ptr(a) + b = cast_base_ptr_to_instance(A, a1) + return a is b + + assert f(5, 10) is True + assert f(25, 10) is True + + res = interpret(f, [5, 10]) + assert res is True + res = interpret(f, [25, 10]) + assert res is True diff --git a/pypy/rpython/tool/rffi_platform.py b/pypy/rpython/tool/rffi_platform.py --- a/pypy/rpython/tool/rffi_platform.py +++ b/pypy/rpython/tool/rffi_platform.py @@ -379,7 +379,7 @@ self.name = name def prepare_code(self): - yield 'if ((%s) < 0) {' % (self.name,) + yield 'if ((%s) <= 0) {' % (self.name,) yield ' long long x = (long long)(%s);' % (self.name,) yield ' printf("value: %lld\\n", x);' yield '} else {' @@ -401,7 +401,7 @@ def prepare_code(self): yield '#ifdef %s' % self.macro yield 'dump("defined", 1);' - yield 'if ((%s) < 0) {' % (self.macro,) + yield 'if ((%s) <= 0) {' % (self.macro,) yield ' long long x = (long long)(%s);' % (self.macro,) yield ' printf("value: %lld\\n", x);' yield '} else {' diff --git a/pypy/tool/compare_last_builds.py b/pypy/tool/compare_last_builds.py new file mode 100644 --- /dev/null +++ b/pypy/tool/compare_last_builds.py @@ -0,0 +1,122 @@ +import os +import urllib2 +import json +import sys +import md5 + +wanted = sys.argv[1:] +if not wanted: + wanted = ['default'] +base = "http://buildbot.pypy.org/json/builders/" + +cachedir = os.environ.get('PYPY_BUILDS_CACHE') +if cachedir and not os.path.exists(cachedir): + os.makedirs(cachedir) + + + +def get_json(url, cache=cachedir): + return json.loads(get_data(url, cache)) + + +def get_data(url, cache=cachedir): + url = str(url) + if cache: + digest = md5.md5() + digest.update(url) + digest = digest.hexdigest() + cachepath = os.path.join(cachedir, 
digest) + if os.path.exists(cachepath): + with open(cachepath) as fp: + return fp.read() + + print 'GET', url + fp = urllib2.urlopen(url) + try: + data = fp.read() + if cache: + with open(cachepath, 'wb') as cp: + cp.write(data) + return data + finally: + fp.close() + +def parse_log(log): + items = [] + for v in log.splitlines(1): + if not v[0].isspace() and v[1].isspace(): + items.append(v) + return sorted(items) #sort cause testrunner order is non-deterministic + +def gather_logdata(build): + logdata = get_data(str(build['log']) + '?as_text=1') + logdata = logdata.replace('', '') + logdata = logdata.replace('', '') + del build['log'] + build['log'] = parse_log(logdata) + + +def branch_mapping(l): + keep = 3 - len(wanted) + d = {} + for x in reversed(l): + gather_logdata(x) + if not x['log']: + continue + b = x['branch'] + if b not in d: + d[b] = [] + d[b].insert(0, x) + if len(d[b]) > keep: + d[b].pop() + return d + +def cleanup_build(d): + for a in 'times eta steps slave reason sourceStamp blame currentStep text'.split(): + del d[a] + + props = d.pop(u'logs') + for name, val in props: + if name == u'pytestLog': + d['log'] = val + props = d.pop(u'properties') + for name, val, _ in props: + if name == u'branch': + d['branch'] = val or 'default' + return d + +def collect_builds(d): + name = str(d['basedir']) + builds = d['cachedBuilds'] + l = [] + for build in builds: + d = get_json(base + '%s/builds/%s' % (name, build)) + cleanup_build(d) + l.append(d) + + l = [x for x in l if x['branch'] in wanted and 'log' in x] + d = branch_mapping(l) + return [x for lst in d.values() for x in lst] + + +def only_linux32(d): + return d['own-linux-x86-32'] + + +own_builds = get_json(base, cache=False)['own-linux-x86-32'] + +builds = collect_builds(own_builds) + + +builds.sort(key=lambda x: (wanted.index(x['branch']), x['number'])) +logs = [x.pop('log') for x in builds] +for b, s in zip(builds, logs): + b['resultset'] = len(s) +import pprint +pprint.pprint(builds) + +from difflib 
import Differ + +for x in Differ().compare(*logs): + if x[0]!=' ': + sys.stdout.write(x) diff --git a/pypy/tool/pytest/pypy_test_failure_demo.py b/pypy/tool/pytest/pypy_test_failure_demo.py --- a/pypy/tool/pytest/pypy_test_failure_demo.py +++ b/pypy/tool/pytest/pypy_test_failure_demo.py @@ -8,6 +8,10 @@ def test_interp_func(space): assert space.is_true(space.w_None) +def test_interp_reinterpret(space): + a = 1 + assert a == 2 + class TestInterpTest: def test_interp_method(self): assert self.space.is_true(self.space.w_False) diff --git a/pypy/translator/c/extfunc.py b/pypy/translator/c/extfunc.py --- a/pypy/translator/c/extfunc.py +++ b/pypy/translator/c/extfunc.py @@ -5,7 +5,6 @@ from pypy.rpython.lltypesystem.rstr import STR, mallocstr from pypy.rpython.lltypesystem import rstr from pypy.rpython.lltypesystem import rlist -from pypy.rpython.module import ll_time, ll_os # table of functions hand-written in src/ll_*.h # Note about *.im_func: The annotator and the rtyper expect direct diff --git a/pypy/translator/c/src/cjkcodecs/cjkcodecs.h b/pypy/translator/c/src/cjkcodecs/cjkcodecs.h --- a/pypy/translator/c/src/cjkcodecs/cjkcodecs.h +++ b/pypy/translator/c/src/cjkcodecs/cjkcodecs.h @@ -210,15 +210,15 @@ #define BEGIN_CODECS_LIST /* empty */ #define _CODEC(name) \ - static const MultibyteCodec _pypy_cjkcodec_##name; \ - const MultibyteCodec *pypy_cjkcodec_##name(void) { \ + static MultibyteCodec _pypy_cjkcodec_##name; \ + MultibyteCodec *pypy_cjkcodec_##name(void) { \ if (_pypy_cjkcodec_##name.codecinit != NULL) { \ int r = _pypy_cjkcodec_##name.codecinit(_pypy_cjkcodec_##name.config); \ assert(r == 0); \ } \ return &_pypy_cjkcodec_##name; \ } \ - static const MultibyteCodec _pypy_cjkcodec_##name + static MultibyteCodec _pypy_cjkcodec_##name #define _STATEFUL_METHODS(enc) \ enc##_encode, \ enc##_encode_init, \ diff --git a/pypy/translator/c/src/cjkcodecs/multibytecodec.h b/pypy/translator/c/src/cjkcodecs/multibytecodec.h --- 
a/pypy/translator/c/src/cjkcodecs/multibytecodec.h +++ b/pypy/translator/c/src/cjkcodecs/multibytecodec.h @@ -131,7 +131,7 @@ /* list of codecs defined in the .c files */ #define DEFINE_CODEC(name) \ - const MultibyteCodec *pypy_cjkcodec_##name(void); + MultibyteCodec *pypy_cjkcodec_##name(void); // _codecs_cn DEFINE_CODEC(gb2312) diff --git a/pypy/translator/c/src/exception.h b/pypy/translator/c/src/exception.h --- a/pypy/translator/c/src/exception.h +++ b/pypy/translator/c/src/exception.h @@ -43,6 +43,16 @@ filename, lineno, functionname); } #endif +#else /* !DO_LOG_EXC: define the function anyway, so that we can shut + off the prints of a debug_exc by remaking only testing_1.o */ +void RPyDebugReturnShowException(const char *msg, const char *filename, + long lineno, const char *functionname); +#ifndef PYPY_NOT_MAIN_FILE +void RPyDebugReturnShowException(const char *msg, const char *filename, + long lineno, const char *functionname) +{ +} +#endif #endif /* DO_LOG_EXC */ /* Hint: functions and macros not defined here, like RPyRaiseException, diff --git a/pypy/translator/tool/cbuild.py b/pypy/translator/tool/cbuild.py --- a/pypy/translator/tool/cbuild.py +++ b/pypy/translator/tool/cbuild.py @@ -267,9 +267,12 @@ d['separate_module_files'] = () return files, ExternalCompilationInfo(**d) - def compile_shared_lib(self, outputfilename=None): + def compile_shared_lib(self, outputfilename=None, ignore_a_files=False): self = self.convert_sources_to_files() - if not self.separate_module_files: + if ignore_a_files: + if not [fn for fn in self.link_files if fn.endswith('.a')]: + ignore_a_files = False # there are none + if not self.separate_module_files and not ignore_a_files: if sys.platform != 'win32': return self if not self.export_symbols: @@ -288,6 +291,13 @@ num += 1 basepath.ensure(dir=1) outputfilename = str(pth.dirpath().join(pth.purebasename)) + + if ignore_a_files: + d = self._copy_attributes() + d['link_files'] = [fn for fn in d['link_files'] + if not 
fn.endswith('.a')] + self = ExternalCompilationInfo(**d) + lib = str(host.compile([], self, outputfilename=outputfilename, standalone=False)) d = self._copy_attributes() From noreply at buildbot.pypy.org Wed May 2 01:04:00 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:04:00 +0200 (CEST) Subject: [pypy-commit] pypy py3k: update cpyext src directory with CPython 3.2 source code Message-ID: <20120501230400.70AEB82F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r54856:da56b5415d72 Date: 2012-04-28 23:11 +0200 http://bitbucket.org/pypy/pypy/changeset/da56b5415d72/ Log: update cpyext src directory with CPython 3.2 source code diff --git a/pypy/module/cpyext/src/getargs.c b/pypy/module/cpyext/src/getargs.c --- a/pypy/module/cpyext/src/getargs.c +++ b/pypy/module/cpyext/src/getargs.c @@ -94,15 +94,7 @@ { va_list lva; -#ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); -#else -#ifdef __va_copy - __va_copy(lva, va); -#else - lva = va; -#endif -#endif + Py_VA_COPY(lva, va); return vgetargs1(args, format, &lva, 0); } @@ -112,15 +104,7 @@ { va_list lva; -#ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); -#else -#ifdef __va_copy - __va_copy(lva, va); -#else - lva = va; -#endif -#endif + Py_VA_COPY(lva, va); return vgetargs1(args, format, &lva, FLAG_SIZE_T); } @@ -130,13 +114,14 @@ #define GETARGS_CAPSULE_NAME_CLEANUP_PTR "getargs.cleanup_ptr" #define GETARGS_CAPSULE_NAME_CLEANUP_BUFFER "getargs.cleanup_buffer" +#define GETARGS_CAPSULE_NAME_CLEANUP_CONVERT "getargs.cleanup_convert" static void cleanup_ptr(PyObject *self) { void *ptr = PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_PTR); if (ptr) { - PyMem_FREE(ptr); + PyMem_FREE(ptr); } } @@ -150,10 +135,19 @@ } static int -addcleanup(void *ptr, PyObject **freelist, PyCapsule_Destructor destr) +addcleanup(void *ptr, PyObject **freelist, int is_buffer) { PyObject *cobj; const char *name; + PyCapsule_Destructor destr; + + if (is_buffer) 
{ + destr = cleanup_buffer; + name = GETARGS_CAPSULE_NAME_CLEANUP_BUFFER; + } else { + destr = cleanup_ptr; + name = GETARGS_CAPSULE_NAME_CLEANUP_PTR; + } if (!*freelist) { *freelist = PyList_New(0); @@ -163,13 +157,6 @@ } } - if (destr == cleanup_ptr) { - name = GETARGS_CAPSULE_NAME_CLEANUP_PTR; - } else if (destr == cleanup_buffer) { - name = GETARGS_CAPSULE_NAME_CLEANUP_BUFFER; - } else { - return -1; - } cobj = PyCapsule_New(ptr, name, destr); if (!cobj) { destr(ptr); @@ -183,6 +170,46 @@ return 0; } +static void +cleanup_convert(PyObject *self) +{ + typedef int (*destr_t)(PyObject *, void *); + destr_t destr = (destr_t)PyCapsule_GetContext(self); + void *ptr = PyCapsule_GetPointer(self, + GETARGS_CAPSULE_NAME_CLEANUP_CONVERT); + if (ptr && destr) + destr(NULL, ptr); +} + +static int +addcleanup_convert(void *ptr, PyObject **freelist, int (*destr)(PyObject*,void*)) +{ + PyObject *cobj; + if (!*freelist) { + *freelist = PyList_New(0); + if (!*freelist) { + destr(NULL, ptr); + return -1; + } + } + cobj = PyCapsule_New(ptr, GETARGS_CAPSULE_NAME_CLEANUP_CONVERT, + cleanup_convert); + if (!cobj) { + destr(NULL, ptr); + return -1; + } + if (PyCapsule_SetContext(cobj, destr) == -1) { + /* This really should not happen. */ + Py_FatalError("capsule refused setting of context."); + } + if (PyList_Append(*freelist, cobj)) { + Py_DECREF(cobj); /* This will also call destr. */ + return -1; + } + Py_DECREF(cobj); + return 0; +} + static int cleanreturn(int retval, PyObject *freelist) { @@ -437,7 +464,7 @@ n++; } - if (!PySequence_Check(arg) || PyString_Check(arg)) { + if (!PySequence_Check(arg) || PyBytes_Check(arg)) { levels[0] = 0; PyOS_snprintf(msgbuf, bufsize, toplevel ? "expected %d arguments, not %.50s" : @@ -531,27 +558,14 @@ #define CONV_UNICODE "(unicode conversion error)" -/* explicitly check for float arguments when integers are expected. For now - * signal a warning. Returns true if an exception was raised. 
*/ -static int -float_argument_warning(PyObject *arg) -{ - if (PyFloat_Check(arg) && - PyErr_Warn(PyExc_DeprecationWarning, - "integer argument expected, got float" )) - return 1; - else - return 0; -} - -/* explicitly check for float arguments when integers are expected. Raises - TypeError and returns true for float arguments. */ +/* Explicitly check for float arguments when integers are expected. + Return 1 for error, 0 if ok. */ static int float_argument_error(PyObject *arg) { if (PyFloat_Check(arg)) { PyErr_SetString(PyExc_TypeError, - "integer argument expected, got float"); + "integer argument expected, got float" ); return 1; } else @@ -587,12 +601,11 @@ *q=s; \ } #define BUFFER_LEN ((flags & FLAG_SIZE_T) ? *q2:*q) +#define RETURN_ERR_OCCURRED return msgbuf const char *format = *p_format; char c = *format++; -#ifdef Py_USING_UNICODE PyObject *uarg; -#endif switch (c) { @@ -600,19 +613,19 @@ char *p = va_arg(*p_va, char *); long ival; if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); + RETURN_ERR_OCCURRED; + ival = PyLong_AsLong(arg); if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); + RETURN_ERR_OCCURRED; else if (ival < 0) { PyErr_SetString(PyExc_OverflowError, - "unsigned byte integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); + "unsigned byte integer is less than minimum"); + RETURN_ERR_OCCURRED; } else if (ival > UCHAR_MAX) { PyErr_SetString(PyExc_OverflowError, - "unsigned byte integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); + "unsigned byte integer is greater than maximum"); + RETURN_ERR_OCCURRED; } else *p = (unsigned char) ival; @@ -624,10 +637,10 @@ char *p = va_arg(*p_va, char *); long ival; if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsUnsignedLongMask(arg); + RETURN_ERR_OCCURRED; + ival = 
PyLong_AsUnsignedLongMask(arg); if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); + RETURN_ERR_OCCURRED; else *p = (unsigned char) ival; break; @@ -637,19 +650,19 @@ short *p = va_arg(*p_va, short *); long ival; if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); + RETURN_ERR_OCCURRED; + ival = PyLong_AsLong(arg); if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); + RETURN_ERR_OCCURRED; else if (ival < SHRT_MIN) { PyErr_SetString(PyExc_OverflowError, - "signed short integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); + "signed short integer is less than minimum"); + RETURN_ERR_OCCURRED; } else if (ival > SHRT_MAX) { PyErr_SetString(PyExc_OverflowError, - "signed short integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); + "signed short integer is greater than maximum"); + RETURN_ERR_OCCURRED; } else *p = (short) ival; @@ -661,10 +674,10 @@ unsigned short *p = va_arg(*p_va, unsigned short *); long ival; if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsUnsignedLongMask(arg); + RETURN_ERR_OCCURRED; + ival = PyLong_AsUnsignedLongMask(arg); if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); + RETURN_ERR_OCCURRED; else *p = (unsigned short) ival; break; @@ -674,19 +687,19 @@ int *p = va_arg(*p_va, int *); long ival; if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); + RETURN_ERR_OCCURRED; + ival = PyLong_AsLong(arg); if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); + RETURN_ERR_OCCURRED; else if (ival > INT_MAX) { PyErr_SetString(PyExc_OverflowError, - "signed integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); + "signed integer is 
greater than maximum"); + RETURN_ERR_OCCURRED; } else if (ival < INT_MIN) { PyErr_SetString(PyExc_OverflowError, - "signed integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); + "signed integer is less than minimum"); + RETURN_ERR_OCCURRED; } else *p = ival; @@ -698,38 +711,40 @@ unsigned int *p = va_arg(*p_va, unsigned int *); unsigned int ival; if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = (unsigned int)PyInt_AsUnsignedLongMask(arg); + RETURN_ERR_OCCURRED; + ival = (unsigned int)PyLong_AsUnsignedLongMask(arg); if (ival == (unsigned int)-1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); + RETURN_ERR_OCCURRED; else *p = ival; break; } case 'n': /* Py_ssize_t */ -#if SIZEOF_SIZE_T != SIZEOF_LONG { + PyObject *iobj; Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *); - Py_ssize_t ival; + Py_ssize_t ival = -1; if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsSsize_t(arg); + RETURN_ERR_OCCURRED; + iobj = PyNumber_Index(arg); + if (iobj != NULL) { + ival = PyLong_AsSsize_t(iobj); + Py_DECREF(iobj); + } if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); + RETURN_ERR_OCCURRED; *p = ival; break; } -#endif - /* Fall through from 'n' to 'l' if Py_ssize_t is int */ case 'l': {/* long int */ long *p = va_arg(*p_va, long *); long ival; if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); + RETURN_ERR_OCCURRED; + ival = PyLong_AsLong(arg); if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); + RETURN_ERR_OCCURRED; else *p = ival; break; @@ -738,9 +753,7 @@ case 'k': { /* long sized bitfield */ unsigned long *p = va_arg(*p_va, unsigned long *); unsigned long ival; - if (PyInt_Check(arg)) - ival = PyInt_AsUnsignedLongMask(arg); - else if (PyLong_Check(arg)) + if (PyLong_Check(arg)) ival = 
PyLong_AsUnsignedLongMask(arg); else return converterr("integer", arg, msgbuf, bufsize); @@ -752,23 +765,20 @@ case 'L': {/* PY_LONG_LONG */ PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * ); PY_LONG_LONG ival; - if (float_argument_warning(arg)) - return converterr("long", arg, msgbuf, bufsize); + if (float_argument_error(arg)) + RETURN_ERR_OCCURRED; ival = PyLong_AsLongLong(arg); - if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) { - return converterr("long", arg, msgbuf, bufsize); - } else { + if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred()) + RETURN_ERR_OCCURRED; + else *p = ival; - } break; } case 'K': { /* long long sized bitfield */ unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *); unsigned PY_LONG_LONG ival; - if (PyInt_Check(arg)) - ival = PyInt_AsUnsignedLongMask(arg); - else if (PyLong_Check(arg)) + if (PyLong_Check(arg)) ival = PyLong_AsUnsignedLongLongMask(arg); else return converterr("integer", arg, msgbuf, bufsize); @@ -781,7 +791,7 @@ float *p = va_arg(*p_va, float *); double dval = PyFloat_AsDouble(arg); if (PyErr_Occurred()) - return converterr("float", arg, msgbuf, bufsize); + RETURN_ERR_OCCURRED; else *p = (float) dval; break; @@ -791,84 +801,124 @@ double *p = va_arg(*p_va, double *); double dval = PyFloat_AsDouble(arg); if (PyErr_Occurred()) - return converterr("float", arg, msgbuf, bufsize); + RETURN_ERR_OCCURRED; else *p = dval; break; } -#ifndef WITHOUT_COMPLEX case 'D': {/* complex double */ Py_complex *p = va_arg(*p_va, Py_complex *); Py_complex cval; cval = PyComplex_AsCComplex(arg); if (PyErr_Occurred()) - return converterr("complex", arg, msgbuf, bufsize); + RETURN_ERR_OCCURRED; else *p = cval; break; } -#endif /* WITHOUT_COMPLEX */ case 'c': {/* char */ char *p = va_arg(*p_va, char *); - if (PyString_Check(arg) && PyString_Size(arg) == 1) - *p = PyString_AS_STRING(arg)[0]; + if (PyBytes_Check(arg) && PyBytes_Size(arg) == 1) + *p = PyBytes_AS_STRING(arg)[0]; else - return converterr("char", arg, msgbuf, bufsize); + 
return converterr("a byte string of length 1", arg, msgbuf, bufsize); break; } - case 's': {/* string */ + case 'C': {/* unicode char */ + int *p = va_arg(*p_va, int *); + if (PyUnicode_Check(arg) && + PyUnicode_GET_SIZE(arg) == 1) + *p = PyUnicode_AS_UNICODE(arg)[0]; + else + return converterr("a unicode character", arg, msgbuf, bufsize); + break; + } + + /* XXX WAAAAH! 's', 'y', 'z', 'u', 'Z', 'e', 'w' codes all + need to be cleaned up! */ + + case 'y': {/* any buffer-like object, but not PyUnicode */ + void **p = (void **)va_arg(*p_va, char **); + char *buf; + Py_ssize_t count; if (*format == '*') { + if (getbuffer(arg, (Py_buffer*)p, &buf) < 0) + return converterr(buf, arg, msgbuf, bufsize); + format++; + if (addcleanup(p, freelist, 1)) { + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + break; + } + count = convertbuffer(arg, p, &buf); + if (count < 0) + return converterr(buf, arg, msgbuf, bufsize); + if (*format == '#') { + FETCH_SIZE; + STORE_SIZE(count); + format++; + } else { + if (strlen(*p) != count) + return converterr( + "bytes without null bytes", + arg, msgbuf, bufsize); + } + break; + } + + case 's': /* text string */ + case 'z': /* text string or None */ + { + if (*format == '*') { + /* "s*" or "z*" */ Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *); - if (PyString_Check(arg)) { - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(arg), PyString_GET_SIZE(arg), - 1, 0); - } -#ifdef Py_USING_UNICODE + if (c == 'z' && arg == Py_None) + PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0); else if (PyUnicode_Check(arg)) { uarg = UNICODE_DEFAULT_ENCODING(arg); if (uarg == NULL) return converterr(CONV_UNICODE, arg, msgbuf, bufsize); PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg), + PyBytes_AS_STRING(uarg), PyBytes_GET_SIZE(uarg), 1, 0); } -#endif else { /* any buffer-like object */ char *buf; if (getbuffer(arg, p, &buf) < 0) return converterr(buf, arg, msgbuf, bufsize); } - if (addcleanup(p, freelist, 
cleanup_buffer)) { + if (addcleanup(p, freelist, 1)) { return converterr( "(cleanup problem)", arg, msgbuf, bufsize); } format++; - } else if (*format == '#') { + } else if (*format == '#') { /* any buffer-like object */ + /* "s#" or "z#" */ void **p = (void **)va_arg(*p_va, char **); FETCH_SIZE; - if (PyString_Check(arg)) { - *p = PyString_AS_STRING(arg); - STORE_SIZE(PyString_GET_SIZE(arg)); + if (c == 'z' && arg == Py_None) { + *p = NULL; + STORE_SIZE(0); } -#ifdef Py_USING_UNICODE else if (PyUnicode_Check(arg)) { uarg = UNICODE_DEFAULT_ENCODING(arg); if (uarg == NULL) return converterr(CONV_UNICODE, arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - STORE_SIZE(PyString_GET_SIZE(uarg)); + *p = PyBytes_AS_STRING(uarg); + STORE_SIZE(PyBytes_GET_SIZE(uarg)); } -#endif else { /* any buffer-like object */ + /* XXX Really? */ char *buf; Py_ssize_t count = convertbuffer(arg, p, &buf); if (count < 0) @@ -877,125 +927,66 @@ } format++; } else { + /* "s" or "z" */ char **p = va_arg(*p_va, char **); + uarg = NULL; - if (PyString_Check(arg)) - *p = PyString_AS_STRING(arg); -#ifdef Py_USING_UNICODE + if (c == 'z' && arg == Py_None) + *p = NULL; else if (PyUnicode_Check(arg)) { uarg = UNICODE_DEFAULT_ENCODING(arg); if (uarg == NULL) return converterr(CONV_UNICODE, arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); + *p = PyBytes_AS_STRING(uarg); } -#endif else - return converterr("string", arg, msgbuf, bufsize); - if ((Py_ssize_t)strlen(*p) != PyString_Size(arg)) - return converterr("string without null bytes", + return converterr(c == 'z' ? "str or None" : "str", arg, msgbuf, bufsize); + if (*p != NULL && uarg != NULL && + (Py_ssize_t) strlen(*p) != PyBytes_GET_SIZE(uarg)) + return converterr( + c == 'z' ? 
"str without null bytes or None" + : "str without null bytes", + arg, msgbuf, bufsize); } break; } - case 'z': {/* string, may be NULL (None) */ - if (*format == '*') { - Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *); - - if (arg == Py_None) - PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0); - else if (PyString_Check(arg)) { - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(arg), PyString_GET_SIZE(arg), - 1, 0); - } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg), - 1, 0); - } -#endif - else { /* any buffer-like object */ - char *buf; - if (getbuffer(arg, p, &buf) < 0) - return converterr(buf, arg, msgbuf, bufsize); - } - if (addcleanup(p, freelist, cleanup_buffer)) { - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - format++; - } else if (*format == '#') { /* any buffer-like object */ - void **p = (void **)va_arg(*p_va, char **); + case 'u': /* raw unicode buffer (Py_UNICODE *) */ + case 'Z': /* raw unicode buffer or None */ + { + if (*format == '#') { /* any buffer-like object */ + /* "s#" or "Z#" */ + Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **); FETCH_SIZE; - if (arg == Py_None) { - *p = 0; + if (c == 'Z' && arg == Py_None) { + *p = NULL; STORE_SIZE(0); } - else if (PyString_Check(arg)) { - *p = PyString_AS_STRING(arg); - STORE_SIZE(PyString_GET_SIZE(arg)); + else if (PyUnicode_Check(arg)) { + *p = PyUnicode_AS_UNICODE(arg); + STORE_SIZE(PyUnicode_GET_SIZE(arg)); } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - STORE_SIZE(PyString_GET_SIZE(uarg)); - } -#endif - else { /* any buffer-like object */ - char *buf; - Py_ssize_t count = convertbuffer(arg, p, &buf); - if 
(count < 0) - return converterr(buf, arg, msgbuf, bufsize); - STORE_SIZE(count); - } + else + return converterr("str or None", arg, msgbuf, bufsize); format++; } else { - char **p = va_arg(*p_va, char **); + /* "s" or "Z" */ + Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **); - if (arg == Py_None) - *p = 0; - else if (PyString_Check(arg)) - *p = PyString_AS_STRING(arg); -#ifdef Py_USING_UNICODE + if (c == 'Z' && arg == Py_None) + *p = NULL; else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - } -#endif - else - return converterr("string or None", + *p = PyUnicode_AS_UNICODE(arg); + if (Py_UNICODE_strlen(*p) != PyUnicode_GET_SIZE(arg)) + return converterr( + "str without null character or None", + arg, msgbuf, bufsize); + } else + return converterr(c == 'Z' ? "str or None" : "str", arg, msgbuf, bufsize); - if (*format == '#') { - FETCH_SIZE; - assert(0); /* XXX redundant with if-case */ - if (arg == Py_None) { - STORE_SIZE(0); - } else { - STORE_SIZE(PyString_Size(arg)); - } - format++; - } - else if (*p != NULL && - (Py_ssize_t)strlen(*p) != PyString_Size(arg)) - return converterr( - "string without null bytes or None", - arg, msgbuf, bufsize); } break; } @@ -1004,15 +995,14 @@ char **buffer; const char *encoding; PyObject *s; + int recode_strings; Py_ssize_t size; - int recode_strings; + const char *ptr; /* Get 'e' parameter: the encoding name */ encoding = (const char *)va_arg(*p_va, const char *); -#ifdef Py_USING_UNICODE if (encoding == NULL) encoding = PyUnicode_GetDefaultEncoding(); -#endif /* Get output buffer parameter: 's' (recode all objects via Unicode) or @@ -1033,12 +1023,15 @@ arg, msgbuf, bufsize); /* Encode object */ - if (!recode_strings && PyString_Check(arg)) { + if (!recode_strings && + (PyBytes_Check(arg) || PyByteArray_Check(arg))) { s = arg; Py_INCREF(s); + if (PyObject_AsCharBuffer(s, &ptr, &size) < 0) + return 
converterr("(AsCharBuffer failed)", + arg, msgbuf, bufsize); } else { -#ifdef Py_USING_UNICODE PyObject *u; /* Convert object to Unicode */ @@ -1056,17 +1049,17 @@ if (s == NULL) return converterr("(encoding failed)", arg, msgbuf, bufsize); - if (!PyString_Check(s)) { + if (!PyBytes_Check(s)) { Py_DECREF(s); return converterr( - "(encoder failed to return a string)", + "(encoder failed to return bytes)", arg, msgbuf, bufsize); } -#else - return converterr("string", arg, msgbuf, bufsize); -#endif + size = PyBytes_GET_SIZE(s); + ptr = PyBytes_AS_STRING(s); + if (ptr == NULL) + ptr = ""; } - size = PyString_GET_SIZE(s); /* Write output; output is guaranteed to be 0-terminated */ if (*format == '#') { @@ -1104,11 +1097,10 @@ *buffer = PyMem_NEW(char, size + 1); if (*buffer == NULL) { Py_DECREF(s); - return converterr( - "(memory error)", - arg, msgbuf, bufsize); + PyErr_NoMemory(); + RETURN_ERR_OCCURRED; } - if (addcleanup(*buffer, freelist, cleanup_ptr)) { + if (addcleanup(*buffer, freelist, 0)) { Py_DECREF(s); return converterr( "(cleanup problem)", @@ -1122,9 +1114,7 @@ arg, msgbuf, bufsize); } } - memcpy(*buffer, - PyString_AS_STRING(s), - size + 1); + memcpy(*buffer, ptr, size+1); STORE_SIZE(size); } else { /* Using a 0-terminated buffer: @@ -1140,8 +1130,7 @@ PyMem_Free()ing it after usage */ - if ((Py_ssize_t)strlen(PyString_AS_STRING(s)) - != size) { + if ((Py_ssize_t)strlen(ptr) != size) { Py_DECREF(s); return converterr( "encoded string without NULL bytes", @@ -1150,66 +1139,46 @@ *buffer = PyMem_NEW(char, size + 1); if (*buffer == NULL) { Py_DECREF(s); - return converterr("(memory error)", - arg, msgbuf, bufsize); + PyErr_NoMemory(); + RETURN_ERR_OCCURRED; } - if (addcleanup(*buffer, freelist, cleanup_ptr)) { + if (addcleanup(*buffer, freelist, 0)) { Py_DECREF(s); return converterr("(cleanup problem)", arg, msgbuf, bufsize); } - memcpy(*buffer, - PyString_AS_STRING(s), - size + 1); + memcpy(*buffer, ptr, size+1); } Py_DECREF(s); break; } -#ifdef 
Py_USING_UNICODE - case 'u': {/* raw unicode buffer (Py_UNICODE *) */ - if (*format == '#') { /* any buffer-like object */ - void **p = (void **)va_arg(*p_va, char **); - FETCH_SIZE; - if (PyUnicode_Check(arg)) { - *p = PyUnicode_AS_UNICODE(arg); - STORE_SIZE(PyUnicode_GET_SIZE(arg)); - } - else { - return converterr("cannot convert raw buffers", - arg, msgbuf, bufsize); - } - format++; - } else { - Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **); - if (PyUnicode_Check(arg)) - *p = PyUnicode_AS_UNICODE(arg); - else - return converterr("unicode", arg, msgbuf, bufsize); - } - break; - } -#endif - - case 'S': { /* string object */ + case 'S': { /* PyBytes object */ PyObject **p = va_arg(*p_va, PyObject **); - if (PyString_Check(arg)) + if (PyBytes_Check(arg)) *p = arg; else - return converterr("string", arg, msgbuf, bufsize); + return converterr("bytes", arg, msgbuf, bufsize); break; } -#ifdef Py_USING_UNICODE - case 'U': { /* Unicode object */ + case 'Y': { /* PyByteArray object */ + PyObject **p = va_arg(*p_va, PyObject **); + if (PyByteArray_Check(arg)) + *p = arg; + else + return converterr("bytearray", arg, msgbuf, bufsize); + break; + } + + case 'U': { /* PyUnicode object */ PyObject **p = va_arg(*p_va, PyObject **); if (PyUnicode_Check(arg)) *p = arg; else - return converterr("unicode", arg, msgbuf, bufsize); + return converterr("str", arg, msgbuf, bufsize); break; } -#endif case 'O': { /* object */ PyTypeObject *type; @@ -1224,25 +1193,19 @@ return converterr(type->tp_name, arg, msgbuf, bufsize); } - else if (*format == '?') { - inquiry pred = va_arg(*p_va, inquiry); - p = va_arg(*p_va, PyObject **); - format++; - if ((*pred)(arg)) - *p = arg; - else - return converterr("(unspecified)", - arg, msgbuf, bufsize); - - } else if (*format == '&') { typedef int (*converter)(PyObject *, void *); converter convert = va_arg(*p_va, converter); void *addr = va_arg(*p_va, void *); + int res; format++; - if (! (*convert)(arg, addr)) + if (! 
(res = (*convert)(arg, addr))) return converterr("(unspecified)", arg, msgbuf, bufsize); + if (res == Py_CLEANUP_SUPPORTED && + addcleanup_convert(addr, freelist, convert) == -1) + return converterr("(cleanup problem)", + arg, msgbuf, bufsize); } else { p = va_arg(*p_va, PyObject **); @@ -1252,92 +1215,29 @@ } - case 'w': { /* memory buffer, read-write access */ + case 'w': { /* "w*": memory buffer, read-write access */ void **p = va_arg(*p_va, void **); - void *res; - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - Py_ssize_t count; - if (pb && pb->bf_releasebuffer && *format != '*') - /* Buffer must be released, yet caller does not use - the Py_buffer protocol. */ - return converterr("pinned buffer", arg, msgbuf, bufsize); + if (*format != '*') + return converterr( + "invalid use of 'w' format character", + arg, msgbuf, bufsize); + format++; - if (pb && pb->bf_getbuffer && *format == '*') { - /* Caller is interested in Py_buffer, and the object - supports it directly. */ - format++; - if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) { - PyErr_Clear(); - return converterr("read-write buffer", arg, msgbuf, bufsize); - } - if (addcleanup(p, freelist, cleanup_buffer)) { - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C')) - return converterr("contiguous buffer", arg, msgbuf, bufsize); - break; + /* Caller is interested in Py_buffer, and the object + supports it directly. 
*/ + if (PyObject_GetBuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) { + PyErr_Clear(); + return converterr("read-write buffer", arg, msgbuf, bufsize); } - - if (pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL) - return converterr("read-write buffer", arg, msgbuf, bufsize); - if ((*pb->bf_getsegcount)(arg, NULL) != 1) - return converterr("single-segment read-write buffer", - arg, msgbuf, bufsize); - if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0) - return converterr("(unspecified)", arg, msgbuf, bufsize); - if (*format == '*') { - PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0); - format++; + if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C')) { + PyBuffer_Release((Py_buffer*)p); + return converterr("contiguous buffer", arg, msgbuf, bufsize); } - else { - *p = res; - if (*format == '#') { - FETCH_SIZE; - STORE_SIZE(count); - format++; - } - } - break; - } - - case 't': { /* 8-bit character buffer, read-only access */ - char **p = va_arg(*p_va, char **); - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - Py_ssize_t count; - - if (*format++ != '#') + if (addcleanup(p, freelist, 1)) { return converterr( - "invalid use of 't' format character", + "(cleanup problem)", arg, msgbuf, bufsize); - if (!PyType_HasFeature(arg->ob_type, - Py_TPFLAGS_HAVE_GETCHARBUFFER) || - pb == NULL || pb->bf_getcharbuffer == NULL || - pb->bf_getsegcount == NULL) - return converterr( - "string or read-only character buffer", - arg, msgbuf, bufsize); - - if (pb->bf_getsegcount(arg, NULL) != 1) - return converterr( - "string or single-segment read-only buffer", - arg, msgbuf, bufsize); - - if (pb->bf_releasebuffer) - return converterr( - "string or pinned buffer", - arg, msgbuf, bufsize); - - count = pb->bf_getcharbuffer(arg, 0, p); - if (count < 0) - return converterr("(unspecified)", arg, msgbuf, bufsize); - { - FETCH_SIZE; - STORE_SIZE(count); } break; } @@ -1349,58 +1249,47 @@ *p_format = format; return NULL; + +#undef FETCH_SIZE +#undef STORE_SIZE 
+#undef BUFFER_LEN +#undef RETURN_ERR_OCCURRED } static Py_ssize_t convertbuffer(PyObject *arg, void **p, char **errmsg) { - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; + PyBufferProcs *pb = Py_TYPE(arg)->tp_as_buffer; Py_ssize_t count; - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL || - pb->bf_releasebuffer != NULL) { - *errmsg = "string or read-only buffer"; + Py_buffer view; + + *errmsg = NULL; + *p = NULL; + if (pb != NULL && pb->bf_releasebuffer != NULL) { + *errmsg = "read-only pinned buffer"; return -1; } - if ((*pb->bf_getsegcount)(arg, NULL) != 1) { - *errmsg = "string or single-segment read-only buffer"; + + if (getbuffer(arg, &view, errmsg) < 0) return -1; - } - if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) { - *errmsg = "(unspecified)"; - } + count = view.len; + *p = view.buf; + PyBuffer_Release(&view); return count; } static int getbuffer(PyObject *arg, Py_buffer *view, char **errmsg) { - void *buf; - Py_ssize_t count; - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - if (pb == NULL) { - *errmsg = "string or buffer"; + if (PyObject_GetBuffer(arg, view, PyBUF_SIMPLE) != 0) { + *errmsg = "bytes or buffer"; return -1; } - if (pb->bf_getbuffer) { - if (pb->bf_getbuffer(arg, view, 0) < 0) { - *errmsg = "convertible to a buffer"; - return -1; - } - if (!PyBuffer_IsContiguous(view, 'C')) { - *errmsg = "contiguous buffer"; - return -1; - } - return 0; + if (!PyBuffer_IsContiguous(view, 'C')) { + PyBuffer_Release(view); + *errmsg = "contiguous buffer"; + return -1; } - - count = convertbuffer(arg, &buf, errmsg); - if (count < 0) { - *errmsg = "convertible to a buffer"; - return count; - } - PyBuffer_FillInfo(view, arg, buf, count, 1, 0); return 0; } @@ -1476,15 +1365,7 @@ return 0; } -#ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); -#else -#ifdef __va_copy - __va_copy(lva, va); -#else - lva = va; -#endif -#endif + Py_VA_COPY(lva, va); retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 
0); return retval; @@ -1508,21 +1389,28 @@ return 0; } -#ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); -#else -#ifdef __va_copy - __va_copy(lva, va); -#else - lva = va; -#endif -#endif + Py_VA_COPY(lva, va); retval = vgetargskeywords(args, keywords, format, kwlist, &lva, FLAG_SIZE_T); return retval; } +int +PyArg_ValidateKeywordArguments(PyObject *kwargs) +{ + if (!PyDict_Check(kwargs)) { + PyErr_BadInternalCall(); + return 0; + } + if (!_PyDict_HasOnlyStringKeys(kwargs)) { + PyErr_SetString(PyExc_TypeError, + "keyword arguments must be strings"); + return 0; + } + return 1; +} + #define IS_END_OF_FORMAT(c) (c == '\0' || c == ';' || c == ':') static int @@ -1651,23 +1539,26 @@ while (PyDict_Next(keywords, &pos, &key, &value)) { int match = 0; char *ks; - if (!PyString_Check(key)) { + if (!PyUnicode_Check(key)) { PyErr_SetString(PyExc_TypeError, "keywords must be strings"); return cleanreturn(0, freelist); } - ks = PyString_AsString(key); - for (i = 0; i < len; i++) { - if (!strcmp(ks, kwlist[i])) { - match = 1; - break; + /* check that _PyUnicode_AsString() result is not NULL */ + ks = _PyUnicode_AsString(key); + if (ks != NULL) { + for (i = 0; i < len; i++) { + if (!strcmp(ks, kwlist[i])) { + match = 1; + break; + } } } if (!match) { PyErr_Format(PyExc_TypeError, - "'%s' is an invalid keyword " + "'%U' is an invalid keyword " "argument for this function", - ks); + key); return cleanreturn(0, freelist); } } @@ -1702,10 +1593,9 @@ #endif case 'f': /* float */ case 'd': /* double */ -#ifndef WITHOUT_COMPLEX case 'D': /* complex double */ -#endif case 'c': /* char */ + case 'C': /* unicode char */ { (void) va_arg(*p_va, void *); break; @@ -1731,10 +1621,8 @@ case 's': /* string */ case 'z': /* string or None */ -#ifdef Py_USING_UNICODE + case 'y': /* bytes */ case 'u': /* unicode string */ -#endif - case 't': /* buffer, read-only */ case 'w': /* buffer, read-write */ { (void) va_arg(*p_va, char **); @@ -1744,7 +1632,7 @@ else (void) va_arg(*p_va, int *); 
format++; - } else if ((c == 's' || c == 'z') && *format == '*') { + } else if ((c == 's' || c == 'z' || c == 'y') && *format == '*') { format++; } break; @@ -1753,9 +1641,8 @@ /* object codes */ case 'S': /* string object */ -#ifdef Py_USING_UNICODE + case 'Y': /* string object */ case 'U': /* unicode string object */ -#endif { (void) va_arg(*p_va, PyObject **); break; diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c --- a/pypy/module/cpyext/src/modsupport.c +++ b/pypy/module/cpyext/src/modsupport.c @@ -11,23 +11,6 @@ /* Package context -- the full module name for package imports */ char *_Py_PackageContext = NULL; -/* Py_InitModule4() parameters: - - name is the module name - - methods is the list of top-level functions - - doc is the documentation string - - passthrough is passed as self to functions defined in the module - - api_version is the value of PYTHON_API_VERSION at the time the - module was compiled - - Return value is a borrowed reference to the module object; or NULL - if an error occurred (in Python 1.4 and before, errors were fatal). - Errors may still leak memory. 
-*/ - -static char api_version_warning[] = -"Python C API version mismatch for module %.100s:\ - This Python has API version %d, module %.100s has version %d."; - /* Helper for mkvalue() to scan the length of a format */ static int @@ -165,7 +148,6 @@ return v; } -#ifdef Py_USING_UNICODE static int _ustrlen(Py_UNICODE *u) { @@ -174,7 +156,6 @@ while (*v != 0) { i++; v++; } return i; } -#endif static PyObject * do_mktuple(const char **p_format, va_list *p_va, int endchar, int n, int flags) @@ -234,37 +215,31 @@ case 'B': case 'h': case 'i': - return PyInt_FromLong((long)va_arg(*p_va, int)); + return PyLong_FromLong((long)va_arg(*p_va, int)); case 'H': - return PyInt_FromLong((long)va_arg(*p_va, unsigned int)); + return PyLong_FromLong((long)va_arg(*p_va, unsigned int)); case 'I': { unsigned int n; n = va_arg(*p_va, unsigned int); - if (n > (unsigned long)PyInt_GetMax()) - return PyLong_FromUnsignedLong((unsigned long)n); - else - return PyInt_FromLong(n); + return PyLong_FromUnsignedLong(n); } case 'n': #if SIZEOF_SIZE_T!=SIZEOF_LONG - return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t)); + return PyLong_FromSsize_t(va_arg(*p_va, Py_ssize_t)); #endif /* Fall through from 'n' to 'l' if Py_ssize_t is long */ case 'l': - return PyInt_FromLong(va_arg(*p_va, long)); + return PyLong_FromLong(va_arg(*p_va, long)); case 'k': { unsigned long n; n = va_arg(*p_va, unsigned long); - if (n > (unsigned long)PyInt_GetMax()) - return PyLong_FromUnsignedLong(n); - else - return PyInt_FromLong(n); + return PyLong_FromUnsignedLong(n); } #ifdef HAVE_LONG_LONG @@ -274,7 +249,6 @@ case 'K': return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG)); #endif -#ifdef Py_USING_UNICODE case 'u': { PyObject *v; @@ -300,27 +274,35 @@ } return v; } -#endif case 'f': case 'd': return PyFloat_FromDouble( (double)va_arg(*p_va, va_double)); -#ifndef WITHOUT_COMPLEX case 'D': return PyComplex_FromCComplex( *((Py_complex *)va_arg(*p_va, Py_complex *))); -#endif /* 
WITHOUT_COMPLEX */ case 'c': { char p[1]; p[0] = (char)va_arg(*p_va, int); - return PyString_FromStringAndSize(p, 1); + return PyBytes_FromStringAndSize(p, 1); + } + case 'C': + { + int i = va_arg(*p_va, int); + if (i < 0 || i > PyUnicode_GetMax()) { + PyErr_SetString(PyExc_OverflowError, + "%c arg not in range(0x110000)"); + return NULL; + } + return PyUnicode_FromOrdinal(i); } case 's': case 'z': + case 'U': /* XXX deprecated alias */ { PyObject *v; char *str = va_arg(*p_va, char *); @@ -348,7 +330,40 @@ } n = (Py_ssize_t)m; } - v = PyString_FromStringAndSize(str, n); + v = PyUnicode_FromStringAndSize(str, n); + } + return v; + } + + case 'y': + { + PyObject *v; + char *str = va_arg(*p_va, char *); + Py_ssize_t n; + if (**p_format == '#') { + ++*p_format; + if (flags & FLAG_SIZE_T) + n = va_arg(*p_va, Py_ssize_t); + else + n = va_arg(*p_va, int); + } + else + n = -1; + if (str == NULL) { + v = Py_None; + Py_INCREF(v); + } + else { + if (n < 0) { + size_t m = strlen(str); + if (m > PY_SSIZE_T_MAX) { + PyErr_SetString(PyExc_OverflowError, + "string too long for Python bytes"); + return NULL; + } + n = (Py_ssize_t)m; + } + v = PyBytes_FromStringAndSize(str, n); } return v; } @@ -441,15 +456,7 @@ int n = countformat(f, '\0'); va_list lva; -#ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); -#else -#ifdef __va_copy - __va_copy(lva, va); -#else - lva = va; -#endif -#endif + Py_VA_COPY(lva, va); if (n < 0) return NULL; @@ -564,7 +571,7 @@ PyModule_AddIntConstant(PyObject *m, const char *name, long value) { int result; - PyObject *o = PyInt_FromLong(value); + PyObject *o = PyLong_FromLong(value); if (!o) return -1; result = _PyModule_AddObject_NoConsumeRef(m, name, o); @@ -576,7 +583,7 @@ PyModule_AddStringConstant(PyObject *m, const char *name, const char *value) { int result; - PyObject *o = PyString_FromString(value); + PyObject *o = PyUnicode_FromString(value); if (!o) return -1; result = _PyModule_AddObject_NoConsumeRef(m, name, o); diff --git 
a/pypy/module/cpyext/src/mysnprintf.c b/pypy/module/cpyext/src/mysnprintf.c --- a/pypy/module/cpyext/src/mysnprintf.c +++ b/pypy/module/cpyext/src/mysnprintf.c @@ -1,5 +1,4 @@ #include "Python.h" -#include /* snprintf() wrappers. If the platform has vsnprintf, we use it, else we emulate it in a half-hearted way. Even if the platform has it, we wrap diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c --- a/pypy/module/cpyext/src/pyerrors.c +++ b/pypy/module/cpyext/src/pyerrors.c @@ -25,7 +25,7 @@ PyObject * PyErr_NewException(const char *name, PyObject *base, PyObject *dict) { - char *dot; + const char *dot; PyObject *modulename = NULL; PyObject *classname = NULL; PyObject *mydict = NULL; diff --git a/pypy/module/cpyext/src/pysignals.c b/pypy/module/cpyext/src/pysignals.c --- a/pypy/module/cpyext/src/pysignals.c +++ b/pypy/module/cpyext/src/pysignals.c @@ -48,6 +48,11 @@ #endif } +/* + * All of the code in this function must only use async-signal-safe functions, + * listed at `man 7 signal` or + * http://www.opengroup.org/onlinepubs/009695399/functions/xsh_chap02_04.html. 
+ */
 PyOS_sighandler_t
 PyOS_setsig(int sig, PyOS_sighandler_t handler)
 {
diff --git a/pypy/module/cpyext/src/pythonrun.c b/pypy/module/cpyext/src/pythonrun.c
--- a/pypy/module/cpyext/src/pythonrun.c
+++ b/pypy/module/cpyext/src/pythonrun.c
@@ -11,7 +11,9 @@
 {
     fprintf(stderr, "Fatal Python error: %s\n", msg);
     fflush(stderr); /* it helps in Windows debug build */
-
+    if (PyErr_Occurred()) {
+        PyErr_PrintEx(0);
+    }
 #ifdef MS_WINDOWS
     {
         size_t len = strlen(msg);
diff --git a/pypy/module/cpyext/src/structseq.c b/pypy/module/cpyext/src/structseq.c
--- a/pypy/module/cpyext/src/structseq.c
+++ b/pypy/module/cpyext/src/structseq.c
@@ -14,14 +14,14 @@
 char *PyStructSequence_UnnamedField = "unnamed field";
 
 #define VISIBLE_SIZE(op) Py_SIZE(op)
-#define VISIBLE_SIZE_TP(tp) PyInt_AsLong( \
+#define VISIBLE_SIZE_TP(tp) PyLong_AsLong( \
                    PyDict_GetItemString((tp)->tp_dict, visible_length_key))
 
-#define REAL_SIZE_TP(tp) PyInt_AsLong( \
+#define REAL_SIZE_TP(tp) PyLong_AsLong( \
                    PyDict_GetItemString((tp)->tp_dict, real_length_key))
 #define REAL_SIZE(op) REAL_SIZE_TP(Py_TYPE(op))
 
-#define UNNAMED_FIELDS_TP(tp) PyInt_AsLong( \
+#define UNNAMED_FIELDS_TP(tp) PyLong_AsLong( \
                    PyDict_GetItemString((tp)->tp_dict, unnamed_fields_key))
 #define UNNAMED_FIELDS(op) UNNAMED_FIELDS_TP(Py_TYPE(op))
 
@@ -30,113 +30,42 @@
 PyStructSequence_New(PyTypeObject *type)
 {
     PyStructSequence *obj;
+    Py_ssize_t size = REAL_SIZE_TP(type), i;
 
-    obj = PyObject_New(PyStructSequence, type);
+    obj = PyObject_GC_NewVar(PyStructSequence, type, size);
     if (obj == NULL)
         return NULL;
+    /* Hack the size of the variable object, so invisible fields don't appear
+       to Python code.
*/ Py_SIZE(obj) = VISIBLE_SIZE_TP(type); + for (i = 0; i < size; i++) + obj->ob_item[i] = NULL; - return (PyObject*) obj; + return (PyObject*)obj; +} + +void +PyStructSequence_SetItem(PyObject* op, Py_ssize_t i, PyObject* v) +{ + PyStructSequence_SET_ITEM(op, i, v); +} + +PyObject* +PyStructSequence_GetItem(PyObject* op, Py_ssize_t i) +{ + return PyStructSequence_GET_ITEM(op, i); } static void structseq_dealloc(PyStructSequence *obj) { Py_ssize_t i, size; - + size = REAL_SIZE(obj); for (i = 0; i < size; ++i) { Py_XDECREF(obj->ob_item[i]); } - PyObject_Del(obj); -} - -static Py_ssize_t -structseq_length(PyStructSequence *obj) -{ - return VISIBLE_SIZE(obj); -} - -static PyObject* -structseq_item(PyStructSequence *obj, Py_ssize_t i) -{ - if (i < 0 || i >= VISIBLE_SIZE(obj)) { - PyErr_SetString(PyExc_IndexError, "tuple index out of range"); - return NULL; - } - Py_INCREF(obj->ob_item[i]); - return obj->ob_item[i]; -} - -static PyObject* -structseq_slice(PyStructSequence *obj, Py_ssize_t low, Py_ssize_t high) -{ - PyTupleObject *np; - Py_ssize_t i; - - if (low < 0) - low = 0; - if (high > VISIBLE_SIZE(obj)) - high = VISIBLE_SIZE(obj); - if (high < low) - high = low; - np = (PyTupleObject *)PyTuple_New(high-low); - if (np == NULL) - return NULL; - for(i = low; i < high; ++i) { - PyObject *v = obj->ob_item[i]; - Py_INCREF(v); - PyTuple_SET_ITEM(np, i-low, v); - } - return (PyObject *) np; -} - -static PyObject * -structseq_subscript(PyStructSequence *self, PyObject *item) -{ - if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i == -1 && PyErr_Occurred()) - return NULL; - - if (i < 0) - i += VISIBLE_SIZE(self); - - if (i < 0 || i >= VISIBLE_SIZE(self)) { - PyErr_SetString(PyExc_IndexError, - "tuple index out of range"); - return NULL; - } - Py_INCREF(self->ob_item[i]); - return self->ob_item[i]; - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelen, cur, i; - PyObject *result; - - if 
(PySlice_GetIndicesEx((PySliceObject *)item, - VISIBLE_SIZE(self), &start, &stop, - &step, &slicelen) < 0) { - return NULL; - } - if (slicelen <= 0) - return PyTuple_New(0); - result = PyTuple_New(slicelen); - if (result == NULL) - return NULL; - for (cur = start, i = 0; i < slicelen; - cur += step, i++) { - PyObject *v = self->ob_item[cur]; - Py_INCREF(v); - PyTuple_SET_ITEM(result, i, v); - } - return result; - } - else { - PyErr_SetString(PyExc_TypeError, - "structseq index must be integer"); - return NULL; - } + PyObject_GC_Del(obj); } static PyObject * @@ -175,33 +104,32 @@ if (min_len != max_len) { if (len < min_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes an at least %zd-sequence (%zd-sequence given)", - type->tp_name, min_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes an at least %zd-sequence (%zd-sequence given)", + type->tp_name, min_len, len); + Py_DECREF(arg); + return NULL; } if (len > max_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes an at most %zd-sequence (%zd-sequence given)", - type->tp_name, max_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes an at most %zd-sequence (%zd-sequence given)", + type->tp_name, max_len, len); + Py_DECREF(arg); + return NULL; } } else { if (len != min_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes a %zd-sequence (%zd-sequence given)", - type->tp_name, min_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes a %zd-sequence (%zd-sequence given)", + type->tp_name, min_len, len); + Py_DECREF(arg); + return NULL; } } res = (PyStructSequence*) PyStructSequence_New(type); if (res == NULL) { - Py_DECREF(arg); return NULL; } for (i = 0; i < len; ++i) { @@ -224,11 +152,6 @@ return (PyObject*) res; } -static PyObject * -make_tuple(PyStructSequence *obj) -{ - return structseq_slice(obj, 0, VISIBLE_SIZE(obj)); -} static PyObject * structseq_repr(PyStructSequence *obj) @@ -237,7 +160,6 @@ #define REPR_BUFFER_SIZE 512 #define TYPE_MAXSIZE 100 - PyObject *tup; 
PyTypeObject *typ = Py_TYPE(obj); int i, removelast = 0; Py_ssize_t len; @@ -247,10 +169,6 @@ /* pointer to end of writeable buffer; safes space for "...)\0" */ endofbuf= &buf[REPR_BUFFER_SIZE-5]; - if ((tup = make_tuple(obj)) == NULL) { - return NULL; - } - /* "typename(", limited to TYPE_MAXSIZE */ len = strlen(typ->tp_name) > TYPE_MAXSIZE ? TYPE_MAXSIZE : strlen(typ->tp_name); @@ -263,19 +181,17 @@ char *cname, *crepr; cname = typ->tp_members[i].name; - - val = PyTuple_GetItem(tup, i); - if (cname == NULL || val == NULL) { + if (cname == NULL) { + PyErr_Format(PyExc_SystemError, "In structseq_repr(), member %d name is NULL" + " for type %.500s", i, typ->tp_name); return NULL; } + val = PyStructSequence_GET_ITEM(obj, i); repr = PyObject_Repr(val); - if (repr == NULL) { - Py_DECREF(tup); + if (repr == NULL) return NULL; - } - crepr = PyString_AsString(repr); + crepr = _PyUnicode_AsString(repr); if (crepr == NULL) { - Py_DECREF(tup); Py_DECREF(repr); return NULL; } @@ -301,7 +217,6 @@ break; } } - Py_DECREF(tup); if (removelast) { /* overwrite last ", " */ pbuf-=2; @@ -309,63 +224,7 @@ *pbuf++ = ')'; *pbuf = '\0'; - return PyString_FromString(buf); -} - -static PyObject * -structseq_concat(PyStructSequence *obj, PyObject *b) -{ - PyObject *tup, *result; - tup = make_tuple(obj); - result = PySequence_Concat(tup, b); - Py_DECREF(tup); - return result; -} - -static PyObject * -structseq_repeat(PyStructSequence *obj, Py_ssize_t n) -{ - PyObject *tup, *result; - tup = make_tuple(obj); - result = PySequence_Repeat(tup, n); - Py_DECREF(tup); - return result; -} - -static int -structseq_contains(PyStructSequence *obj, PyObject *o) -{ - PyObject *tup; - int result; - tup = make_tuple(obj); - if (!tup) - return -1; - result = PySequence_Contains(tup, o); - Py_DECREF(tup); - return result; -} - -static long -structseq_hash(PyObject *obj) -{ - PyObject *tup; - long result; - tup = make_tuple((PyStructSequence*) obj); - if (!tup) - return -1; - result = PyObject_Hash(tup); - 
Py_DECREF(tup); - return result; -} - -static PyObject * -structseq_richcompare(PyObject *obj, PyObject *o2, int op) -{ - PyObject *tup, *result; - tup = make_tuple((PyStructSequence*) obj); - result = PyObject_RichCompare(tup, o2, op); - Py_DECREF(tup); - return result; + return PyUnicode_FromString(buf); } static PyObject * @@ -410,53 +269,36 @@ return result; } -static PySequenceMethods structseq_as_sequence = { - (lenfunc)structseq_length, - (binaryfunc)structseq_concat, /* sq_concat */ - (ssizeargfunc)structseq_repeat, /* sq_repeat */ - (ssizeargfunc)structseq_item, /* sq_item */ - (ssizessizeargfunc)structseq_slice, /* sq_slice */ - 0, /* sq_ass_item */ - 0, /* sq_ass_slice */ - (objobjproc)structseq_contains, /* sq_contains */ -}; - -static PyMappingMethods structseq_as_mapping = { - (lenfunc)structseq_length, - (binaryfunc)structseq_subscript, -}; - static PyMethodDef structseq_methods[] = { - {"__reduce__", (PyCFunction)structseq_reduce, - METH_NOARGS, NULL}, + {"__reduce__", (PyCFunction)structseq_reduce, METH_NOARGS, NULL}, {NULL, NULL} }; static PyTypeObject _struct_sequence_template = { PyVarObject_HEAD_INIT(&PyType_Type, 0) NULL, /* tp_name */ - 0, /* tp_basicsize */ - 0, /* tp_itemsize */ + sizeof(PyStructSequence) - sizeof(PyObject *), /* tp_basicsize */ + sizeof(PyObject *), /* tp_itemsize */ (destructor)structseq_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ - 0, /* tp_compare */ + 0, /* tp_reserved */ (reprfunc)structseq_repr, /* tp_repr */ 0, /* tp_as_number */ - &structseq_as_sequence, /* tp_as_sequence */ - &structseq_as_mapping, /* tp_as_mapping */ - structseq_hash, /* tp_hash */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT, /* tp_flags */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ NULL, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ - structseq_richcompare, /* 
tp_richcompare */ + 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ @@ -483,7 +325,7 @@ #ifdef Py_TRACE_REFS /* if the type object was chained, unchain it first before overwriting its storage */ - if (type->_ob_next) { + if (type->ob_base.ob_base._ob_next) { _Py_ForgetReference((PyObject*)type); } #endif @@ -495,11 +337,9 @@ n_members = i; memcpy(type, &_struct_sequence_template, sizeof(PyTypeObject)); + type->tp_base = &PyTuple_Type; type->tp_name = desc->name; type->tp_doc = desc->doc; - type->tp_basicsize = sizeof(PyStructSequence)+ - sizeof(PyObject*)*(n_members-1); - type->tp_itemsize = 0; members = PyMem_NEW(PyMemberDef, n_members-n_unnamed_members+1); if (members == NULL) @@ -527,7 +367,7 @@ dict = type->tp_dict; #define SET_DICT_FROM_INT(key, value) \ do { \ - PyObject *v = PyInt_FromLong((long) value); \ + PyObject *v = PyLong_FromLong((long) value); \ if (v != NULL) { \ PyDict_SetItemString(dict, key, v); \ Py_DECREF(v); \ @@ -538,3 +378,11 @@ SET_DICT_FROM_INT(real_length_key, n_members); SET_DICT_FROM_INT(unnamed_fields_key, n_unnamed_members); } + +PyTypeObject* +PyStructSequence_NewType(PyStructSequence_Desc *desc) +{ + PyTypeObject *result = (PyTypeObject*)PyType_GenericAlloc(&PyType_Type, 0); + PyStructSequence_InitType(result, desc); + return result; +} diff --git a/pypy/module/cpyext/src/sysmodule.c b/pypy/module/cpyext/src/sysmodule.c --- a/pypy/module/cpyext/src/sysmodule.c +++ b/pypy/module/cpyext/src/sysmodule.c @@ -68,7 +68,7 @@ PyErr_Fetch(&error_type, &error_value, &error_traceback); file = PySys_GetObject(name); - written = vsnprintf(buffer, sizeof(buffer), format, va); + written = PyOS_vsnprintf(buffer, sizeof(buffer), format, va); if (sys_pyfile_write(buffer, file) != 0) { PyErr_Clear(); fputs(buffer, fp); diff --git a/pypy/module/cpyext/src/unicodeobject.c b/pypy/module/cpyext/src/unicodeobject.c --- a/pypy/module/cpyext/src/unicodeobject.c +++ b/pypy/module/cpyext/src/unicodeobject.c @@ -504,6 
+504,8 @@
     return NULL;
 }
 
+#undef appendstring
+
 PyObject *
 PyUnicode_FromFormat(const char *format, ...)
 {

From noreply at buildbot.pypy.org  Wed May 2 01:04:01 2012
From: noreply at buildbot.pypy.org (amauryfa)
Date: Wed, 2 May 2012 01:04:01 +0200 (CEST)
Subject: [pypy-commit] pypy py3k: hg merge default
Message-ID: <20120501230401.B5B9B82F50@wyvern.cs.uni-duesseldorf.de>

Author: Amaury Forgeot d'Arc
Branch: py3k
Changeset: r54857:bcf26626c1c6
Date: 2012-04-30 22:49 +0200
http://bitbucket.org/pypy/pypy/changeset/bcf26626c1c6/

Log: hg merge default

diff --git a/pypy/module/cpyext/complexobject.py b/pypy/module/cpyext/complexobject.py
--- a/pypy/module/cpyext/complexobject.py
+++ b/pypy/module/cpyext/complexobject.py
@@ -33,6 +33,11 @@
     # CPython also accepts anything
     return 0.0
 
+@cpython_api([Py_complex_ptr], PyObject)
+def _PyComplex_FromCComplex(space, v):
+    """Create a new Python complex number object from a C Py_complex value."""
+    return space.newcomplex(v.c_real, v.c_imag)
+
 # lltype does not handle functions returning a structure.  This implements a
 # helper function, which takes as argument a reference to the return value.
 @cpython_api([PyObject, Py_complex_ptr], lltype.Void)
diff --git a/pypy/module/cpyext/include/complexobject.h b/pypy/module/cpyext/include/complexobject.h
--- a/pypy/module/cpyext/include/complexobject.h
+++ b/pypy/module/cpyext/include/complexobject.h
@@ -21,6 +21,8 @@
     return result;
 }
 
+#define PyComplex_FromCComplex(c) _PyComplex_FromCComplex(&c)
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/pypy/module/cpyext/longobject.py b/pypy/module/cpyext/longobject.py
--- a/pypy/module/cpyext/longobject.py
+++ b/pypy/module/cpyext/longobject.py
@@ -1,6 +1,7 @@
 from pypy.rpython.lltypesystem import lltype, rffi
-from pypy.module.cpyext.api import (cpython_api, PyObject, build_type_checkers,
-    CONST_STRING, ADDR, CANNOT_FAIL)
+from pypy.module.cpyext.api import (
+    cpython_api, PyObject, build_type_checkers, Py_ssize_t,
+    CONST_STRING, ADDR, CANNOT_FAIL)
 from pypy.objspace.std.longobject import W_LongObject
 from pypy.interpreter.error import OperationError
 from pypy.module.cpyext.intobject import PyInt_AsUnsignedLongMask
@@ -56,6 +57,14 @@
     and -1 will be returned."""
     return space.int_w(w_long)
 
+@cpython_api([PyObject], Py_ssize_t, error=-1)
+def PyLong_AsSsize_t(space, w_long):
+    """Return a C Py_ssize_t representation of the contents of pylong.  If
+    pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised
+    and -1 will be returned.
+ """ + return space.int_w(w_long) + @cpython_api([PyObject], rffi.LONGLONG, error=-1) def PyLong_AsLongLong(space, w_long): """ diff --git a/pypy/module/cpyext/src/abstract.c b/pypy/module/cpyext/src/abstract.c new file mode 100644 --- /dev/null +++ b/pypy/module/cpyext/src/abstract.c @@ -0,0 +1,269 @@ +/* Abstract Object Interface */ + +#include "Python.h" + +/* Shorthands to return certain errors */ + +static PyObject * +type_error(const char *msg, PyObject *obj) +{ + PyErr_Format(PyExc_TypeError, msg, obj->ob_type->tp_name); + return NULL; +} + +static PyObject * +null_error(void) +{ + if (!PyErr_Occurred()) + PyErr_SetString(PyExc_SystemError, + "null argument to internal routine"); + return NULL; +} + +/* Operations on any object */ + +int +PyObject_CheckReadBuffer(PyObject *obj) +{ + PyBufferProcs *pb = obj->ob_type->tp_as_buffer; + + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL || + (*pb->bf_getsegcount)(obj, NULL) != 1) + return 0; + return 1; +} + +int PyObject_AsReadBuffer(PyObject *obj, + const void **buffer, + Py_ssize_t *buffer_len) +{ + PyBufferProcs *pb; + void *pp; + Py_ssize_t len; + + if (obj == NULL || buffer == NULL || buffer_len == NULL) { + null_error(); + return -1; + } + pb = obj->ob_type->tp_as_buffer; + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL) { + PyErr_SetString(PyExc_TypeError, + "expected a readable buffer object"); + return -1; + } + if ((*pb->bf_getsegcount)(obj, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "expected a single-segment buffer object"); + return -1; + } + len = (*pb->bf_getreadbuffer)(obj, 0, &pp); + if (len < 0) + return -1; + *buffer = pp; + *buffer_len = len; + return 0; +} + +int PyObject_AsWriteBuffer(PyObject *obj, + void **buffer, + Py_ssize_t *buffer_len) +{ + PyBufferProcs *pb; + void*pp; + Py_ssize_t len; + + if (obj == NULL || buffer == NULL || buffer_len == NULL) { + null_error(); + return -1; + } + pb = 
obj->ob_type->tp_as_buffer; + if (pb == NULL || + pb->bf_getwritebuffer == NULL || + pb->bf_getsegcount == NULL) { + PyErr_SetString(PyExc_TypeError, + "expected a writeable buffer object"); + return -1; + } + if ((*pb->bf_getsegcount)(obj, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "expected a single-segment buffer object"); + return -1; + } + len = (*pb->bf_getwritebuffer)(obj,0,&pp); + if (len < 0) + return -1; + *buffer = pp; + *buffer_len = len; + return 0; +} + +/* Operations on callable objects */ + +static PyObject* +call_function_tail(PyObject *callable, PyObject *args) +{ + PyObject *retval; + + if (args == NULL) + return NULL; + + if (!PyTuple_Check(args)) { + PyObject *a; + + a = PyTuple_New(1); + if (a == NULL) { + Py_DECREF(args); + return NULL; + } + PyTuple_SET_ITEM(a, 0, args); + args = a; + } + retval = PyObject_Call(callable, args, NULL); + + Py_DECREF(args); + + return retval; +} + +PyObject * +PyObject_CallFunction(PyObject *callable, const char *format, ...) +{ + va_list va; + PyObject *args; + + if (callable == NULL) + return null_error(); + + if (format && *format) { + va_start(va, format); + args = Py_VaBuildValue(format, va); + va_end(va); + } + else + args = PyTuple_New(0); + + return call_function_tail(callable, args); +} + +PyObject * +PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) 
+{ + va_list va; + PyObject *args; + PyObject *func = NULL; + PyObject *retval = NULL; + + if (o == NULL || name == NULL) + return null_error(); + + func = PyObject_GetAttrString(o, name); + if (func == NULL) { + PyErr_SetString(PyExc_AttributeError, name); + return 0; + } + + if (!PyCallable_Check(func)) { + type_error("attribute of type '%.200s' is not callable", func); + goto exit; + } + + if (format && *format) { + va_start(va, format); + args = Py_VaBuildValue(format, va); + va_end(va); + } + else + args = PyTuple_New(0); + + retval = call_function_tail(func, args); + + exit: + /* args gets consumed in call_function_tail */ + Py_XDECREF(func); + + return retval; +} + +static PyObject * +objargs_mktuple(va_list va) +{ + int i, n = 0; + va_list countva; + PyObject *result, *tmp; + +#ifdef VA_LIST_IS_ARRAY + memcpy(countva, va, sizeof(va_list)); +#else +#ifdef __va_copy + __va_copy(countva, va); +#else + countva = va; +#endif +#endif + + while (((PyObject *)va_arg(countva, PyObject *)) != NULL) + ++n; + result = PyTuple_New(n); + if (result != NULL && n > 0) { + for (i = 0; i < n; ++i) { + tmp = (PyObject *)va_arg(va, PyObject *); + Py_INCREF(tmp); + PyTuple_SET_ITEM(result, i, tmp); + } + } + return result; +} + +PyObject * +PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...) +{ + PyObject *args, *tmp; + va_list vargs; + + if (callable == NULL || name == NULL) + return null_error(); + + callable = PyObject_GetAttr(callable, name); + if (callable == NULL) + return NULL; + + /* count the args */ + va_start(vargs, name); + args = objargs_mktuple(vargs); + va_end(vargs); + if (args == NULL) { + Py_DECREF(callable); + return NULL; + } + tmp = PyObject_Call(callable, args, NULL); + Py_DECREF(args); + Py_DECREF(callable); + + return tmp; +} + +PyObject * +PyObject_CallFunctionObjArgs(PyObject *callable, ...) 
+{ + PyObject *args, *tmp; + va_list vargs; + + if (callable == NULL) + return null_error(); + + /* count the args */ + va_start(vargs, callable); + args = objargs_mktuple(vargs); + va_end(vargs); + if (args == NULL) + return NULL; + tmp = PyObject_Call(callable, args, NULL); + Py_DECREF(args); + + return tmp; +} + diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1431,14 +1431,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([PyObject], Py_ssize_t, error=-1) -def PyLong_AsSsize_t(space, pylong): - """Return a C Py_ssize_t representation of the contents of pylong. If - pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised - and -1 will be returned. - """ - raise NotImplementedError - @cpython_api([PyObject, rffi.CCHARP], rffi.INT_real, error=-1) def PyMapping_DelItemString(space, o, key): """Remove the mapping for object key from the object o. 
Return -1 on diff --git a/pypy/module/cpyext/test/test_complexobject.py b/pypy/module/cpyext/test/test_complexobject.py --- a/pypy/module/cpyext/test/test_complexobject.py +++ b/pypy/module/cpyext/test/test_complexobject.py @@ -31,3 +31,12 @@ assert module.as_tuple(12-34j) == (12, -34) assert module.as_tuple(-3.14) == (-3.14, 0.0) raises(TypeError, module.as_tuple, "12") + + def test_FromCComplex(self): + module = self.import_extension('foo', [ + ("test", "METH_NOARGS", + """ + Py_complex c = {1.2, 3.4}; + return PyComplex_FromCComplex(c); + """)]) + assert module.test() == 1.2 + 3.4j diff --git a/pypy/module/cpyext/test/test_longobject.py b/pypy/module/cpyext/test/test_longobject.py --- a/pypy/module/cpyext/test/test_longobject.py +++ b/pypy/module/cpyext/test/test_longobject.py @@ -31,6 +31,11 @@ value = api.PyLong_AsUnsignedLong(w_value) assert value == (sys.maxint - 1) * 2 + def test_as_ssize_t(self, space, api): + w_value = space.newlong(2) + value = api.PyLong_AsSsize_t(w_value) + assert value == 2 + def test_fromdouble(self, space, api): w_value = api.PyLong_FromDouble(-12.74) assert space.unwrap(w_value) == -12 From noreply at buildbot.pypy.org Wed May 2 01:04:03 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:04:03 +0200 (CEST) Subject: [pypy-commit] pypy py3k: Various fixes to cpyext, until the test suite executes all tests. Message-ID: <20120501230403.157FD82F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r54858:c346a55c6258 Date: 2012-04-30 23:01 +0200 http://bitbucket.org/pypy/pypy/changeset/c346a55c6258/ Log: Various fixes to cpyext, until the test suite executes all tests. 
(103 failures, 227 passed)

diff --git a/pypy/module/cpyext/dictobject.py b/pypy/module/cpyext/dictobject.py
--- a/pypy/module/cpyext/dictobject.py
+++ b/pypy/module/cpyext/dictobject.py
@@ -215,3 +215,11 @@
     w_frozendict = make_frozendict(space)
     return space.call_function(w_frozendict, w_dict)
 
+@cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL)
+def _PyDict_HasOnlyStringKeys(space, w_dict):
+    keys_w = space.unpackiterable(w_dict)
+    for w_key in keys_w:
+        if not space.isinstance_w(w_key, space.w_unicode):
+            return 0
+    return 1
+
diff --git a/pypy/module/cpyext/include/bytesobject.h b/pypy/module/cpyext/include/bytesobject.h
--- a/pypy/module/cpyext/include/bytesobject.h
+++ b/pypy/module/cpyext/include/bytesobject.h
@@ -25,3 +25,6 @@
 #define _PyBytes_Join _PyString_Join
 #define PyBytes_AsStringAndSize PyString_AsStringAndSize
 #define _PyBytes_InsertThousandsGrouping _PyString_InsertThousandsGrouping
+
+#define PyByteArray_Check(obj) \
+    PyObject_IsInstance(obj, (PyObject *)&PyByteArray_Type)
diff --git a/pypy/module/cpyext/include/modsupport.h b/pypy/module/cpyext/include/modsupport.h
--- a/pypy/module/cpyext/include/modsupport.h
+++ b/pypy/module/cpyext/include/modsupport.h
@@ -7,6 +7,8 @@
 extern "C" {
 #endif
 
+#define Py_CLEANUP_SUPPORTED 0x20000
+
 #define PYTHON_API_VERSION 1013
 #define PYTHON_API_STRING "1013"
diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h
--- a/pypy/module/cpyext/include/object.h
+++ b/pypy/module/cpyext/include/object.h
@@ -529,6 +529,8 @@
 #define PyObject_GC_New(type, typeobj) \
                 ( (type *) _PyObject_GC_New(typeobj) )
+#define PyObject_GC_NewVar(type, typeobj, size) \
+                ( (type *) _PyObject_GC_NewVar(typeobj, size) )
 
 /* A dummy PyGC_Head, just to please some tests.  Don't use it!
*/ typedef union _gc_head { diff --git a/pypy/module/cpyext/include/structseq.h b/pypy/module/cpyext/include/structseq.h --- a/pypy/module/cpyext/include/structseq.h +++ b/pypy/module/cpyext/include/structseq.h @@ -8,21 +8,21 @@ #endif typedef struct PyStructSequence_Field { - char *name; - char *doc; + char *name; + char *doc; } PyStructSequence_Field; typedef struct PyStructSequence_Desc { - char *name; - char *doc; - struct PyStructSequence_Field *fields; - int n_in_sequence; + char *name; + char *doc; + struct PyStructSequence_Field *fields; + int n_in_sequence; } PyStructSequence_Desc; extern char* PyStructSequence_UnnamedField; PyAPI_FUNC(void) PyStructSequence_InitType(PyTypeObject *type, - PyStructSequence_Desc *desc); + PyStructSequence_Desc *desc); PyAPI_FUNC(PyObject *) PyStructSequence_New(PyTypeObject* type); @@ -32,8 +32,9 @@ } PyStructSequence; /* Macro, *only* to be used to fill in brand new objects */ -#define PyStructSequence_SET_ITEM(op, i, v) \ - (((PyStructSequence *)(op))->ob_item[i] = v) +#define PyStructSequence_SET_ITEM(op, i, v) PyTuple_SET_ITEM(op, i, v) + +#define PyStructSequence_GET_ITEM(op, i) PyTuple_GET_ITEM(op, i) #ifdef __cplusplus } diff --git a/pypy/module/cpyext/include/unicodeobject.h b/pypy/module/cpyext/include/unicodeobject.h --- a/pypy/module/cpyext/include/unicodeobject.h +++ b/pypy/module/cpyext/include/unicodeobject.h @@ -29,6 +29,16 @@ PyObject *PyUnicode_FromFormatV(const char *format, va_list vargs); PyObject *PyUnicode_FromFormat(const char *format, ...); +Py_LOCAL_INLINE(size_t) Py_UNICODE_strlen(const Py_UNICODE *u) +{ + int res = 0; + while(*u++) + res++; + return res; +} + + + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -57,6 +57,10 @@ def _PyObject_GC_New(space, type): return _PyObject_New(space, type) + at cpython_api([PyTypeObjectPtr, Py_ssize_t], PyObject) +def 
_PyObject_GC_NewVar(space, type, itemcount): + return _PyObject_NewVar(space, type, itemcount) + @cpython_api([rffi.VOIDP], lltype.Void) def PyObject_GC_Del(space, obj): PyObject_Del(space, obj) diff --git a/pypy/module/cpyext/test/test_classobject.py b/pypy/module/cpyext/test/test_classobject.py deleted file mode 100644 --- a/pypy/module/cpyext/test/test_classobject.py +++ /dev/null @@ -1,65 +0,0 @@ -from pypy.module.cpyext.test.test_api import BaseApiTest -from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase -from pypy.interpreter.function import Function, Method - -class TestClassObject(BaseApiTest): - def test_newinstance(self, space, api): - w_class = space.appexec([], """(): - class C: - x = None - def __init__(self): - self.x = 1 - return C - """) - - assert api.PyClass_Check(w_class) - - w_instance = api.PyInstance_NewRaw(w_class, None) - assert api.PyInstance_Check(w_instance) - assert space.getattr(w_instance, space.wrap('x')) is space.w_None - - w_instance = api.PyInstance_NewRaw(w_class, space.wrap(dict(a=3))) - assert space.getattr(w_instance, space.wrap('x')) is space.w_None - assert space.unwrap(space.getattr(w_instance, space.wrap('a'))) == 3 - - def test_lookup(self, space, api): - w_instance = space.appexec([], """(): - class C: - def __init__(self): - self.x = None - def f(self): pass - return C() - """) - - assert api.PyInstance_Check(w_instance) - assert api.PyObject_GetAttr(w_instance, space.wrap('x')) is space.w_None - assert api._PyInstance_Lookup(w_instance, space.wrap('x')) is space.w_None - assert api._PyInstance_Lookup(w_instance, space.wrap('y')) is None - assert not api.PyErr_Occurred() - - # getattr returns a bound method - assert not isinstance(api.PyObject_GetAttr(w_instance, space.wrap('f')), Function) - # _PyInstance_Lookup returns the raw descriptor - assert isinstance(api._PyInstance_Lookup(w_instance, space.wrap('f')), Function) - - def test_pyclass_new(self, space, api): - w_bases = space.newtuple([]) - 
w_dict = space.newdict() - w_name = space.wrap("C") - w_class = api.PyClass_New(w_bases, w_dict, w_name) - assert not space.isinstance_w(w_class, space.w_type) - w_instance = space.call_function(w_class) - assert api.PyInstance_Check(w_instance) - assert space.is_true(space.call_method(space.builtin, "isinstance", - w_instance, w_class)) - -class AppTestStringObject(AppTestCpythonExtensionBase): - def test_class_type(self): - module = self.import_extension('foo', [ - ("get_classtype", "METH_NOARGS", - """ - Py_INCREF(&PyClass_Type); - return &PyClass_Type; - """)]) - class C: pass - assert module.get_classtype() is type(C) diff --git a/pypy/module/cpyext/test/test_stringobject.py b/pypy/module/cpyext/test/test_stringobject.py --- a/pypy/module/cpyext/test/test_stringobject.py +++ b/pypy/module/cpyext/test/test_stringobject.py @@ -139,42 +139,6 @@ ]) module.getstring() - def test_format_v(self): - module = self.import_extension('foo', [ - ("test_string_format_v", "METH_VARARGS", - ''' - return helper("bla %d ble %s\\n", - PyInt_AsLong(PyTuple_GetItem(args, 0)), - PyString_AsString(PyTuple_GetItem(args, 1))); - ''' - ) - ], prologue=''' - PyObject* helper(char* fmt, ...) 
- { - va_list va; - PyObject* res; - va_start(va, fmt); - res = PyString_FromFormatV(fmt, va); - va_end(va); - return res; - } - ''') - res = module.test_string_format_v(1, "xyz") - assert res == "bla 1 ble xyz\n" - - def test_format(self): - module = self.import_extension('foo', [ - ("test_string_format", "METH_VARARGS", - ''' - return PyString_FromFormat("bla %d ble %s\\n", - PyInt_AsLong(PyTuple_GetItem(args, 0)), - PyString_AsString(PyTuple_GetItem(args, 1))); - ''' - ) - ]) - res = module.test_string_format(1, "xyz") - assert res == "bla 1 ble xyz\n" - def test_intern_inplace(self): module = self.import_extension('foo', [ ("test_intern_inplace", "METH_O", From noreply at buildbot.pypy.org Wed May 2 01:04:04 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:04:04 +0200 (CEST) Subject: [pypy-commit] pypy py3k: To fake a py3k generator on top of python2, it's necessary to Message-ID: <20120501230404.5506E82F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r54859:3f67f5b4d3d4 Date: 2012-04-30 23:28 +0200 http://bitbucket.org/pypy/pypy/changeset/3f67f5b4d3d4/ Log: To fake a py3k generator on top of python2, it's necessary to
This fixes space.wrap((x for x in range(10))) diff --git a/pypy/objspace/std/fake.py b/pypy/objspace/std/fake.py --- a/pypy/objspace/std/fake.py +++ b/pypy/objspace/std/fake.py @@ -77,6 +77,8 @@ kw[meth_name] = EvenMoreObscureWrapping(__builtin__.eval("lambda m,*args,**kwds: m.%s(*args,**kwds)" % meth_name)) else: for s, v in cpy_type.__dict__.items(): + if s == 'next': + s = '__next__' if not (cpy_type is unicode and s in ['__add__', '__contains__']): if s != '__getattribute__' or cpy_type is type(sys) or cpy_type is type(Exception): kw[s] = v From noreply at buildbot.pypy.org Wed May 2 01:04:05 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:04:05 +0200 (CEST) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20120501230405.C578782F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r54860:ac2b9fa0c351 Date: 2012-04-30 23:52 +0200 http://bitbucket.org/pypy/pypy/changeset/ac2b9fa0c351/ Log: hg merge default diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -40,7 +40,8 @@ when the value is NULL, instead of converting to None. */ #define T_LONGLONG 17 -#define T_ULONGLONG 18 +#define T_ULONGLONG 18 +#define T_PYSSIZET 19 /* Flags. These constants are also in structmemberdefs.py. 
*/ #define READONLY 1 diff --git a/pypy/module/cpyext/structmember.py b/pypy/module/cpyext/structmember.py --- a/pypy/module/cpyext/structmember.py +++ b/pypy/module/cpyext/structmember.py @@ -10,7 +10,7 @@ PyString_FromString, PyString_FromStringAndSize) from pypy.module.cpyext.floatobject import PyFloat_AsDouble from pypy.module.cpyext.longobject import ( - PyLong_AsLongLong, PyLong_AsUnsignedLongLong) + PyLong_AsLongLong, PyLong_AsUnsignedLongLong, PyLong_AsSsize_t) from pypy.module.cpyext.typeobjectdefs import PyMemberDef from pypy.rlib.unroll import unrolling_iterable @@ -28,6 +28,7 @@ (T_DOUBLE, rffi.DOUBLE, PyFloat_AsDouble), (T_LONGLONG, rffi.LONGLONG, PyLong_AsLongLong), (T_ULONGLONG, rffi.ULONGLONG, PyLong_AsUnsignedLongLong), + (T_PYSSIZET, rffi.SSIZE_T, PyLong_AsSsize_t), ]) diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -18,6 +18,7 @@ T_OBJECT_EX = 16 T_LONGLONG = 17 T_ULONGLONG = 18 +T_PYSSIZET = 19 READONLY = RO = 1 READ_RESTRICTED = 2 diff --git a/pypy/module/cpyext/test/_sre.c b/pypy/module/cpyext/test/_sre.c --- a/pypy/module/cpyext/test/_sre.c +++ b/pypy/module/cpyext/test/_sre.c @@ -81,9 +81,6 @@ #define PyObject_DEL(op) PyMem_DEL((op)) #endif -#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) -#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) - /* -------------------------------------------------------------------- */ #if defined(_MSC_VER) @@ -1689,7 +1686,7 @@ if (PyUnicode_Check(string)) { /* unicode strings doesn't always support the buffer interface */ ptr = (void*) PyUnicode_AS_DATA(string); - bytes = PyUnicode_GET_DATA_SIZE(string); + /* bytes = PyUnicode_GET_DATA_SIZE(string); */ size = PyUnicode_GET_SIZE(string); charsize = sizeof(Py_UNICODE); @@ -2601,46 +2598,22 @@ {NULL, NULL} }; -static PyObject* -pattern_getattr(PatternObject* self, char* name) -{ - PyObject* res; - - res = 
Py_FindMethod(pattern_methods, (PyObject*) self, name); - - if (res) - return res; - - PyErr_Clear(); - - /* attributes */ - if (!strcmp(name, "pattern")) { - Py_INCREF(self->pattern); - return self->pattern; - } - - if (!strcmp(name, "flags")) - return Py_BuildValue("i", self->flags); - - if (!strcmp(name, "groups")) - return Py_BuildValue("i", self->groups); - - if (!strcmp(name, "groupindex") && self->groupindex) { - Py_INCREF(self->groupindex); - return self->groupindex; - } - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; -} +#define PAT_OFF(x) offsetof(PatternObject, x) +static PyMemberDef pattern_members[] = { + {"pattern", T_OBJECT, PAT_OFF(pattern), READONLY}, + {"flags", T_INT, PAT_OFF(flags), READONLY}, + {"groups", T_PYSSIZET, PAT_OFF(groups), READONLY}, + {"groupindex", T_OBJECT, PAT_OFF(groupindex), READONLY}, + {NULL} /* Sentinel */ +}; statichere PyTypeObject Pattern_Type = { PyObject_HEAD_INIT(NULL) 0, "_" SRE_MODULE ".SRE_Pattern", sizeof(PatternObject), sizeof(SRE_CODE), (destructor)pattern_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)pattern_getattr, /*tp_getattr*/ + 0, /* tp_print */ + 0, /* tp_getattrn */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ @@ -2653,12 +2626,16 @@ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ - Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ pattern_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(PatternObject, weakreflist), /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + pattern_methods, /* tp_methods */ + pattern_members, /* tp_members */ }; static int _validate(PatternObject *self); /* Forward */ @@ -2767,7 +2744,7 @@ #if defined(VVERBOSE) #define VTRACE(v) printf v #else -#define VTRACE(v) +#define VTRACE(v) do {} while(0) /* do nothing */ #endif /* Report failure */ @@ -2970,13 +2947,13 @@ <1=skip> <2=flags> <3=min> <4=max>; If SRE_INFO_PREFIX or SRE_INFO_CHARSET is 
in the flags, more follows. */ - SRE_CODE flags, min, max, i; + SRE_CODE flags, i; SRE_CODE *newcode; GET_SKIP; newcode = code+skip-1; GET_ARG; flags = arg; - GET_ARG; min = arg; - GET_ARG; max = arg; + GET_ARG; /* min */ + GET_ARG; /* max */ /* Check that only valid flags are present */ if ((flags & ~(SRE_INFO_PREFIX | SRE_INFO_LITERAL | @@ -2992,9 +2969,9 @@ FAIL; /* Validate the prefix */ if (flags & SRE_INFO_PREFIX) { - SRE_CODE prefix_len, prefix_skip; + SRE_CODE prefix_len; GET_ARG; prefix_len = arg; - GET_ARG; prefix_skip = arg; + GET_ARG; /* prefix skip */ /* Here comes the prefix string */ if (code+prefix_len < code || code+prefix_len > newcode) FAIL; @@ -3565,7 +3542,7 @@ #endif } -static PyMethodDef match_methods[] = { +static struct PyMethodDef match_methods[] = { {"group", (PyCFunction) match_group, METH_VARARGS}, {"start", (PyCFunction) match_start, METH_VARARGS}, {"end", (PyCFunction) match_end, METH_VARARGS}, @@ -3578,80 +3555,90 @@ {NULL, NULL} }; -static PyObject* -match_getattr(MatchObject* self, char* name) +static PyObject * +match_lastindex_get(MatchObject *self) { - PyObject* res; - - res = Py_FindMethod(match_methods, (PyObject*) self, name); - if (res) - return res; - - PyErr_Clear(); - - if (!strcmp(name, "lastindex")) { - if (self->lastindex >= 0) - return Py_BuildValue("i", self->lastindex); - Py_INCREF(Py_None); - return Py_None; + if (self->lastindex >= 0) + return Py_BuildValue("i", self->lastindex); + Py_INCREF(Py_None); + return Py_None; +} + +static PyObject * +match_lastgroup_get(MatchObject *self) +{ + if (self->pattern->indexgroup && self->lastindex >= 0) { + PyObject* result = PySequence_GetItem( + self->pattern->indexgroup, self->lastindex + ); + if (result) + return result; + PyErr_Clear(); } - - if (!strcmp(name, "lastgroup")) { - if (self->pattern->indexgroup && self->lastindex >= 0) { - PyObject* result = PySequence_GetItem( - self->pattern->indexgroup, self->lastindex - ); - if (result) - return result; - PyErr_Clear(); - 
} - Py_INCREF(Py_None); - return Py_None; - } - - if (!strcmp(name, "string")) { - if (self->string) { - Py_INCREF(self->string); - return self->string; - } else { - Py_INCREF(Py_None); - return Py_None; - } - } - - if (!strcmp(name, "regs")) { - if (self->regs) { - Py_INCREF(self->regs); - return self->regs; - } else - return match_regs(self); - } - - if (!strcmp(name, "re")) { - Py_INCREF(self->pattern); - return (PyObject*) self->pattern; - } - - if (!strcmp(name, "pos")) - return Py_BuildValue("i", self->pos); - - if (!strcmp(name, "endpos")) - return Py_BuildValue("i", self->endpos); - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; + Py_INCREF(Py_None); + return Py_None; } +static PyObject * +match_regs_get(MatchObject *self) +{ + if (self->regs) { + Py_INCREF(self->regs); + return self->regs; + } else + return match_regs(self); +} + +static PyGetSetDef match_getset[] = { + {"lastindex", (getter)match_lastindex_get, (setter)NULL}, + {"lastgroup", (getter)match_lastgroup_get, (setter)NULL}, + {"regs", (getter)match_regs_get, (setter)NULL}, + {NULL} +}; + +#define MATCH_OFF(x) offsetof(MatchObject, x) +static PyMemberDef match_members[] = { + {"string", T_OBJECT, MATCH_OFF(string), READONLY}, + {"re", T_OBJECT, MATCH_OFF(pattern), READONLY}, + {"pos", T_PYSSIZET, MATCH_OFF(pos), READONLY}, + {"endpos", T_PYSSIZET, MATCH_OFF(endpos), READONLY}, + {NULL} +}; + + /* FIXME: implement setattr("string", None) as a special case (to detach the associated string, if any */ -statichere PyTypeObject Match_Type = { - PyObject_HEAD_INIT(NULL) - 0, "_" SRE_MODULE ".SRE_Match", +static PyTypeObject Match_Type = { + PyVarObject_HEAD_INIT(NULL, 0) + "_" SRE_MODULE ".SRE_Match", sizeof(MatchObject), sizeof(Py_ssize_t), - (destructor)match_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)match_getattr /*tp_getattr*/ + (destructor)match_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* 
tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + match_methods, /* tp_methods */ + match_members, /* tp_members */ + match_getset, /* tp_getset */ }; static PyObject* @@ -3800,34 +3787,42 @@ {NULL, NULL} }; -static PyObject* -scanner_getattr(ScannerObject* self, char* name) -{ - PyObject* res; - - res = Py_FindMethod(scanner_methods, (PyObject*) self, name); - if (res) - return res; - - PyErr_Clear(); - - /* attributes */ - if (!strcmp(name, "pattern")) { - Py_INCREF(self->pattern); - return self->pattern; - } - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; -} +#define SCAN_OFF(x) offsetof(ScannerObject, x) +static PyMemberDef scanner_members[] = { + {"pattern", T_OBJECT, SCAN_OFF(pattern), READONLY}, + {NULL} /* Sentinel */ +}; statichere PyTypeObject Scanner_Type = { PyObject_HEAD_INIT(NULL) 0, "_" SRE_MODULE ".SRE_Scanner", sizeof(ScannerObject), 0, (destructor)scanner_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)scanner_getattr, /*tp_getattr*/ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_reserved */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + scanner_methods, /* tp_methods */ + scanner_members, /* tp_members */ + 0, /* tp_getset */ }; static PyObject* @@ -3879,8 +3874,9 @@ PyObject* x; /* Patch 
object types */ - Pattern_Type.ob_type = Match_Type.ob_type = - Scanner_Type.ob_type = &PyType_Type; + if (PyType_Ready(&Pattern_Type) || PyType_Ready(&Match_Type) || + PyType_Ready(&Scanner_Type)) + return; m = Py_InitModule("_" SRE_MODULE, _functions); if (m == NULL) diff --git a/pypy/module/cpyext/test/array.c b/pypy/module/cpyext/test/array.c --- a/pypy/module/cpyext/test/array.c +++ b/pypy/module/cpyext/test/array.c @@ -11,13 +11,10 @@ #include #else /* !STDC_HEADERS */ #ifdef HAVE_SYS_TYPES_H -#include /* For size_t */ +#include /* For size_t */ #endif /* HAVE_SYS_TYPES_H */ #endif /* !STDC_HEADERS */ -#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) -#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) - struct arrayobject; /* Forward */ /* All possible arraydescr values are defined in the vector "descriptors" @@ -25,18 +22,18 @@ * functions aren't visible yet. */ struct arraydescr { - int typecode; - int itemsize; - PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); - int (*setitem)(struct arrayobject *, Py_ssize_t, PyObject *); + int typecode; + int itemsize; + PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); + int (*setitem)(struct arrayobject *, Py_ssize_t, PyObject *); }; typedef struct arrayobject { - PyObject_VAR_HEAD - char *ob_item; - Py_ssize_t allocated; - struct arraydescr *ob_descr; - PyObject *weakreflist; /* List of weak references */ + PyObject_VAR_HEAD + char *ob_item; + Py_ssize_t allocated; + struct arraydescr *ob_descr; + PyObject *weakreflist; /* List of weak references */ } arrayobject; static PyTypeObject Arraytype; @@ -47,49 +44,49 @@ static int array_resize(arrayobject *self, Py_ssize_t newsize) { - char *items; - size_t _new_size; + char *items; + size_t _new_size; - /* Bypass realloc() when a previous overallocation is large enough - to accommodate the newsize. If the newsize is 16 smaller than the - current size, then proceed with the realloc() to shrink the list. 
- */ + /* Bypass realloc() when a previous overallocation is large enough + to accommodate the newsize. If the newsize is 16 smaller than the + current size, then proceed with the realloc() to shrink the list. + */ - if (self->allocated >= newsize && - Py_SIZE(self) < newsize + 16 && - self->ob_item != NULL) { - Py_SIZE(self) = newsize; - return 0; - } + if (self->allocated >= newsize && + Py_SIZE(self) < newsize + 16 && + self->ob_item != NULL) { + Py_SIZE(self) = newsize; + return 0; + } - /* This over-allocates proportional to the array size, making room - * for additional growth. The over-allocation is mild, but is - * enough to give linear-time amortized behavior over a long - * sequence of appends() in the presence of a poorly-performing - * system realloc(). - * The growth pattern is: 0, 4, 8, 16, 25, 34, 46, 56, 67, 79, ... - * Note, the pattern starts out the same as for lists but then - * grows at a smaller rate so that larger arrays only overallocate - * by about 1/16th -- this is done because arrays are presumed to be more - * memory critical. - */ + /* This over-allocates proportional to the array size, making room + * for additional growth. The over-allocation is mild, but is + * enough to give linear-time amortized behavior over a long + * sequence of appends() in the presence of a poorly-performing + * system realloc(). + * The growth pattern is: 0, 4, 8, 16, 25, 34, 46, 56, 67, 79, ... + * Note, the pattern starts out the same as for lists but then + * grows at a smaller rate so that larger arrays only overallocate + * by about 1/16th -- this is done because arrays are presumed to be more + * memory critical. + */ - _new_size = (newsize >> 4) + (Py_SIZE(self) < 8 ? 
3 : 7) + newsize; - items = self->ob_item; - /* XXX The following multiplication and division does not optimize away - like it does for lists since the size is not known at compile time */ - if (_new_size <= ((~(size_t)0) / self->ob_descr->itemsize)) - PyMem_RESIZE(items, char, (_new_size * self->ob_descr->itemsize)); - else - items = NULL; - if (items == NULL) { - PyErr_NoMemory(); - return -1; - } - self->ob_item = items; - Py_SIZE(self) = newsize; - self->allocated = _new_size; - return 0; + _new_size = (newsize >> 4) + (Py_SIZE(self) < 8 ? 3 : 7) + newsize; + items = self->ob_item; + /* XXX The following multiplication and division does not optimize away + like it does for lists since the size is not known at compile time */ + if (_new_size <= ((~(size_t)0) / self->ob_descr->itemsize)) + PyMem_RESIZE(items, char, (_new_size * self->ob_descr->itemsize)); + else + items = NULL; + if (items == NULL) { + PyErr_NoMemory(); + return -1; + } + self->ob_item = items; + Py_SIZE(self) = newsize; + self->allocated = _new_size; + return 0; } /**************************************************************************** @@ -107,308 +104,308 @@ static PyObject * c_getitem(arrayobject *ap, Py_ssize_t i) { - return PyString_FromStringAndSize(&((char *)ap->ob_item)[i], 1); + return PyString_FromStringAndSize(&((char *)ap->ob_item)[i], 1); } static int c_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - char x; - if (!PyArg_Parse(v, "c;array item must be char", &x)) - return -1; - if (i >= 0) - ((char *)ap->ob_item)[i] = x; - return 0; + char x; + if (!PyArg_Parse(v, "c;array item must be char", &x)) + return -1; + if (i >= 0) + ((char *)ap->ob_item)[i] = x; + return 0; } static PyObject * b_getitem(arrayobject *ap, Py_ssize_t i) { - long x = ((char *)ap->ob_item)[i]; - if (x >= 128) - x -= 256; - return PyInt_FromLong(x); + long x = ((char *)ap->ob_item)[i]; + if (x >= 128) + x -= 256; + return PyInt_FromLong(x); } static int b_setitem(arrayobject *ap, Py_ssize_t i, 
PyObject *v) { - short x; - /* PyArg_Parse's 'b' formatter is for an unsigned char, therefore - must use the next size up that is signed ('h') and manually do - the overflow checking */ - if (!PyArg_Parse(v, "h;array item must be integer", &x)) - return -1; - else if (x < -128) { - PyErr_SetString(PyExc_OverflowError, - "signed char is less than minimum"); - return -1; - } - else if (x > 127) { - PyErr_SetString(PyExc_OverflowError, - "signed char is greater than maximum"); - return -1; - } - if (i >= 0) - ((char *)ap->ob_item)[i] = (char)x; - return 0; + short x; + /* PyArg_Parse's 'b' formatter is for an unsigned char, therefore + must use the next size up that is signed ('h') and manually do + the overflow checking */ + if (!PyArg_Parse(v, "h;array item must be integer", &x)) + return -1; + else if (x < -128) { + PyErr_SetString(PyExc_OverflowError, + "signed char is less than minimum"); + return -1; + } + else if (x > 127) { + PyErr_SetString(PyExc_OverflowError, + "signed char is greater than maximum"); + return -1; + } + if (i >= 0) + ((char *)ap->ob_item)[i] = (char)x; + return 0; } static PyObject * BB_getitem(arrayobject *ap, Py_ssize_t i) { - long x = ((unsigned char *)ap->ob_item)[i]; - return PyInt_FromLong(x); + long x = ((unsigned char *)ap->ob_item)[i]; + return PyInt_FromLong(x); } static int BB_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned char x; - /* 'B' == unsigned char, maps to PyArg_Parse's 'b' formatter */ - if (!PyArg_Parse(v, "b;array item must be integer", &x)) - return -1; - if (i >= 0) - ((char *)ap->ob_item)[i] = x; - return 0; + unsigned char x; + /* 'B' == unsigned char, maps to PyArg_Parse's 'b' formatter */ + if (!PyArg_Parse(v, "b;array item must be integer", &x)) + return -1; + if (i >= 0) + ((char *)ap->ob_item)[i] = x; + return 0; } #ifdef Py_USING_UNICODE static PyObject * u_getitem(arrayobject *ap, Py_ssize_t i) { - return PyUnicode_FromUnicode(&((Py_UNICODE *) ap->ob_item)[i], 1); + return 
PyUnicode_FromUnicode(&((Py_UNICODE *) ap->ob_item)[i], 1); } static int u_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - Py_UNICODE *p; - Py_ssize_t len; + Py_UNICODE *p; + Py_ssize_t len; - if (!PyArg_Parse(v, "u#;array item must be unicode character", &p, &len)) - return -1; - if (len != 1) { - PyErr_SetString(PyExc_TypeError, - "array item must be unicode character"); - return -1; - } - if (i >= 0) - ((Py_UNICODE *)ap->ob_item)[i] = p[0]; - return 0; + if (!PyArg_Parse(v, "u#;array item must be unicode character", &p, &len)) + return -1; + if (len != 1) { + PyErr_SetString(PyExc_TypeError, + "array item must be unicode character"); + return -1; + } + if (i >= 0) + ((Py_UNICODE *)ap->ob_item)[i] = p[0]; + return 0; } #endif static PyObject * h_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((short *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((short *)ap->ob_item)[i]); } static int h_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - short x; - /* 'h' == signed short, maps to PyArg_Parse's 'h' formatter */ - if (!PyArg_Parse(v, "h;array item must be integer", &x)) - return -1; - if (i >= 0) - ((short *)ap->ob_item)[i] = x; - return 0; + short x; + /* 'h' == signed short, maps to PyArg_Parse's 'h' formatter */ + if (!PyArg_Parse(v, "h;array item must be integer", &x)) + return -1; + if (i >= 0) + ((short *)ap->ob_item)[i] = x; + return 0; } static PyObject * HH_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((unsigned short *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((unsigned short *)ap->ob_item)[i]); } static int HH_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - int x; - /* PyArg_Parse's 'h' formatter is for a signed short, therefore - must use the next size up and manually do the overflow checking */ - if (!PyArg_Parse(v, "i;array item must be integer", &x)) - return -1; - else if (x < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned short is less than minimum"); - 
return -1; - } - else if (x > USHRT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned short is greater than maximum"); - return -1; - } - if (i >= 0) - ((short *)ap->ob_item)[i] = (short)x; - return 0; + int x; + /* PyArg_Parse's 'h' formatter is for a signed short, therefore + must use the next size up and manually do the overflow checking */ + if (!PyArg_Parse(v, "i;array item must be integer", &x)) + return -1; + else if (x < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned short is less than minimum"); + return -1; + } + else if (x > USHRT_MAX) { + PyErr_SetString(PyExc_OverflowError, + "unsigned short is greater than maximum"); + return -1; + } + if (i >= 0) + ((short *)ap->ob_item)[i] = (short)x; + return 0; } static PyObject * i_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((int *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((int *)ap->ob_item)[i]); } static int i_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - int x; - /* 'i' == signed int, maps to PyArg_Parse's 'i' formatter */ - if (!PyArg_Parse(v, "i;array item must be integer", &x)) - return -1; - if (i >= 0) - ((int *)ap->ob_item)[i] = x; - return 0; + int x; + /* 'i' == signed int, maps to PyArg_Parse's 'i' formatter */ + if (!PyArg_Parse(v, "i;array item must be integer", &x)) + return -1; + if (i >= 0) + ((int *)ap->ob_item)[i] = x; + return 0; } static PyObject * II_getitem(arrayobject *ap, Py_ssize_t i) { - return PyLong_FromUnsignedLong( - (unsigned long) ((unsigned int *)ap->ob_item)[i]); + return PyLong_FromUnsignedLong( + (unsigned long) ((unsigned int *)ap->ob_item)[i]); } static int II_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned long x; - if (PyLong_Check(v)) { - x = PyLong_AsUnsignedLong(v); - if (x == (unsigned long) -1 && PyErr_Occurred()) - return -1; - } - else { - long y; - if (!PyArg_Parse(v, "l;array item must be integer", &y)) - return -1; - if (y < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned int 
is less than minimum"); - return -1; - } - x = (unsigned long)y; + unsigned long x; + if (PyLong_Check(v)) { + x = PyLong_AsUnsignedLong(v); + if (x == (unsigned long) -1 && PyErr_Occurred()) + return -1; + } + else { + long y; + if (!PyArg_Parse(v, "l;array item must be integer", &y)) + return -1; + if (y < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned int is less than minimum"); + return -1; + } + x = (unsigned long)y; - } - if (x > UINT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned int is greater than maximum"); - return -1; - } + } + if (x > UINT_MAX) { + PyErr_SetString(PyExc_OverflowError, + "unsigned int is greater than maximum"); + return -1; + } - if (i >= 0) - ((unsigned int *)ap->ob_item)[i] = (unsigned int)x; - return 0; + if (i >= 0) + ((unsigned int *)ap->ob_item)[i] = (unsigned int)x; + return 0; } static PyObject * l_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong(((long *)ap->ob_item)[i]); + return PyInt_FromLong(((long *)ap->ob_item)[i]); } static int l_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - long x; - if (!PyArg_Parse(v, "l;array item must be integer", &x)) - return -1; - if (i >= 0) - ((long *)ap->ob_item)[i] = x; - return 0; + long x; + if (!PyArg_Parse(v, "l;array item must be integer", &x)) + return -1; + if (i >= 0) + ((long *)ap->ob_item)[i] = x; + return 0; } static PyObject * LL_getitem(arrayobject *ap, Py_ssize_t i) { - return PyLong_FromUnsignedLong(((unsigned long *)ap->ob_item)[i]); + return PyLong_FromUnsignedLong(((unsigned long *)ap->ob_item)[i]); } static int LL_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned long x; - if (PyLong_Check(v)) { - x = PyLong_AsUnsignedLong(v); - if (x == (unsigned long) -1 && PyErr_Occurred()) - return -1; - } - else { - long y; - if (!PyArg_Parse(v, "l;array item must be integer", &y)) - return -1; - if (y < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned long is less than minimum"); - return -1; - } - x = (unsigned 
long)y; + unsigned long x; + if (PyLong_Check(v)) { + x = PyLong_AsUnsignedLong(v); + if (x == (unsigned long) -1 && PyErr_Occurred()) + return -1; + } + else { + long y; + if (!PyArg_Parse(v, "l;array item must be integer", &y)) + return -1; + if (y < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned long is less than minimum"); + return -1; + } + x = (unsigned long)y; - } - if (x > ULONG_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned long is greater than maximum"); - return -1; - } + } + if (x > ULONG_MAX) { + PyErr_SetString(PyExc_OverflowError, + "unsigned long is greater than maximum"); + return -1; + } - if (i >= 0) - ((unsigned long *)ap->ob_item)[i] = x; - return 0; + if (i >= 0) + ((unsigned long *)ap->ob_item)[i] = x; + return 0; } static PyObject * f_getitem(arrayobject *ap, Py_ssize_t i) { - return PyFloat_FromDouble((double) ((float *)ap->ob_item)[i]); + return PyFloat_FromDouble((double) ((float *)ap->ob_item)[i]); } static int f_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - float x; - if (!PyArg_Parse(v, "f;array item must be float", &x)) - return -1; - if (i >= 0) - ((float *)ap->ob_item)[i] = x; - return 0; + float x; + if (!PyArg_Parse(v, "f;array item must be float", &x)) + return -1; + if (i >= 0) + ((float *)ap->ob_item)[i] = x; + return 0; } static PyObject * d_getitem(arrayobject *ap, Py_ssize_t i) { - return PyFloat_FromDouble(((double *)ap->ob_item)[i]); + return PyFloat_FromDouble(((double *)ap->ob_item)[i]); } static int d_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - double x; - if (!PyArg_Parse(v, "d;array item must be float", &x)) - return -1; - if (i >= 0) - ((double *)ap->ob_item)[i] = x; - return 0; + double x; + if (!PyArg_Parse(v, "d;array item must be float", &x)) + return -1; + if (i >= 0) + ((double *)ap->ob_item)[i] = x; + return 0; } /* Description of types */ static struct arraydescr descriptors[] = { - {'c', sizeof(char), c_getitem, c_setitem}, - {'b', sizeof(char), b_getitem, 
b_setitem}, - {'B', sizeof(char), BB_getitem, BB_setitem}, + {'c', sizeof(char), c_getitem, c_setitem}, + {'b', sizeof(char), b_getitem, b_setitem}, + {'B', sizeof(char), BB_getitem, BB_setitem}, #ifdef Py_USING_UNICODE - {'u', sizeof(Py_UNICODE), u_getitem, u_setitem}, + {'u', sizeof(Py_UNICODE), u_getitem, u_setitem}, #endif - {'h', sizeof(short), h_getitem, h_setitem}, - {'H', sizeof(short), HH_getitem, HH_setitem}, - {'i', sizeof(int), i_getitem, i_setitem}, - {'I', sizeof(int), II_getitem, II_setitem}, - {'l', sizeof(long), l_getitem, l_setitem}, - {'L', sizeof(long), LL_getitem, LL_setitem}, - {'f', sizeof(float), f_getitem, f_setitem}, - {'d', sizeof(double), d_getitem, d_setitem}, - {'\0', 0, 0, 0} /* Sentinel */ + {'h', sizeof(short), h_getitem, h_setitem}, + {'H', sizeof(short), HH_getitem, HH_setitem}, + {'i', sizeof(int), i_getitem, i_setitem}, + {'I', sizeof(int), II_getitem, II_setitem}, + {'l', sizeof(long), l_getitem, l_setitem}, + {'L', sizeof(long), LL_getitem, LL_setitem}, + {'f', sizeof(float), f_getitem, f_setitem}, + {'d', sizeof(double), d_getitem, d_setitem}, + {'\0', 0, 0, 0} /* Sentinel */ }; /**************************************************************************** @@ -418,78 +415,78 @@ static PyObject * newarrayobject(PyTypeObject *type, Py_ssize_t size, struct arraydescr *descr) { - arrayobject *op; - size_t nbytes; + arrayobject *op; + size_t nbytes; - if (size < 0) { - PyErr_BadInternalCall(); - return NULL; - } + if (size < 0) { + PyErr_BadInternalCall(); + return NULL; + } - nbytes = size * descr->itemsize; - /* Check for overflow */ - if (nbytes / descr->itemsize != (size_t)size) { - return PyErr_NoMemory(); - } - op = (arrayobject *) type->tp_alloc(type, 0); - if (op == NULL) { - return NULL; - } - op->ob_descr = descr; - op->allocated = size; - op->weakreflist = NULL; - Py_SIZE(op) = size; - if (size <= 0) { - op->ob_item = NULL; - } - else { - op->ob_item = PyMem_NEW(char, nbytes); - if (op->ob_item == NULL) { - 
Py_DECREF(op); - return PyErr_NoMemory(); - } - } - return (PyObject *) op; + nbytes = size * descr->itemsize; + /* Check for overflow */ + if (nbytes / descr->itemsize != (size_t)size) { + return PyErr_NoMemory(); + } + op = (arrayobject *) type->tp_alloc(type, 0); + if (op == NULL) { + return NULL; + } + op->ob_descr = descr; + op->allocated = size; + op->weakreflist = NULL; + Py_SIZE(op) = size; + if (size <= 0) { + op->ob_item = NULL; + } + else { + op->ob_item = PyMem_NEW(char, nbytes); + if (op->ob_item == NULL) { + Py_DECREF(op); + return PyErr_NoMemory(); + } + } + return (PyObject *) op; } static PyObject * getarrayitem(PyObject *op, Py_ssize_t i) { - register arrayobject *ap; - assert(array_Check(op)); - ap = (arrayobject *)op; - assert(i>=0 && iob_descr->getitem)(ap, i); + register arrayobject *ap; + assert(array_Check(op)); + ap = (arrayobject *)op; + assert(i>=0 && iob_descr->getitem)(ap, i); } static int ins1(arrayobject *self, Py_ssize_t where, PyObject *v) { - char *items; - Py_ssize_t n = Py_SIZE(self); - if (v == NULL) { - PyErr_BadInternalCall(); - return -1; - } - if ((*self->ob_descr->setitem)(self, -1, v) < 0) - return -1; + char *items; + Py_ssize_t n = Py_SIZE(self); + if (v == NULL) { + PyErr_BadInternalCall(); + return -1; + } + if ((*self->ob_descr->setitem)(self, -1, v) < 0) + return -1; - if (array_resize(self, n+1) == -1) - return -1; - items = self->ob_item; - if (where < 0) { - where += n; - if (where < 0) - where = 0; - } - if (where > n) - where = n; - /* appends don't need to call memmove() */ - if (where != n) - memmove(items + (where+1)*self->ob_descr->itemsize, - items + where*self->ob_descr->itemsize, - (n-where)*self->ob_descr->itemsize); - return (*self->ob_descr->setitem)(self, where, v); + if (array_resize(self, n+1) == -1) + return -1; + items = self->ob_item; + if (where < 0) { + where += n; + if (where < 0) + where = 0; + } + if (where > n) + where = n; + /* appends don't need to call memmove() */ + if (where != n) + 
memmove(items + (where+1)*self->ob_descr->itemsize, + items + where*self->ob_descr->itemsize, + (n-where)*self->ob_descr->itemsize); + return (*self->ob_descr->setitem)(self, where, v); } /* Methods */ @@ -497,141 +494,141 @@ static void array_dealloc(arrayobject *op) { - if (op->weakreflist != NULL) - PyObject_ClearWeakRefs((PyObject *) op); - if (op->ob_item != NULL) - PyMem_DEL(op->ob_item); - Py_TYPE(op)->tp_free((PyObject *)op); + if (op->weakreflist != NULL) + PyObject_ClearWeakRefs((PyObject *) op); + if (op->ob_item != NULL) + PyMem_DEL(op->ob_item); + Py_TYPE(op)->tp_free((PyObject *)op); } static PyObject * array_richcompare(PyObject *v, PyObject *w, int op) { - arrayobject *va, *wa; - PyObject *vi = NULL; - PyObject *wi = NULL; - Py_ssize_t i, k; - PyObject *res; + arrayobject *va, *wa; + PyObject *vi = NULL; + PyObject *wi = NULL; + Py_ssize_t i, k; + PyObject *res; - if (!array_Check(v) || !array_Check(w)) { - Py_INCREF(Py_NotImplemented); - return Py_NotImplemented; - } + if (!array_Check(v) || !array_Check(w)) { + Py_INCREF(Py_NotImplemented); + return Py_NotImplemented; + } - va = (arrayobject *)v; - wa = (arrayobject *)w; + va = (arrayobject *)v; + wa = (arrayobject *)w; - if (Py_SIZE(va) != Py_SIZE(wa) && (op == Py_EQ || op == Py_NE)) { - /* Shortcut: if the lengths differ, the arrays differ */ - if (op == Py_EQ) - res = Py_False; - else - res = Py_True; - Py_INCREF(res); - return res; - } + if (Py_SIZE(va) != Py_SIZE(wa) && (op == Py_EQ || op == Py_NE)) { + /* Shortcut: if the lengths differ, the arrays differ */ + if (op == Py_EQ) + res = Py_False; + else + res = Py_True; + Py_INCREF(res); + return res; + } - /* Search for the first index where items are different */ - k = 1; - for (i = 0; i < Py_SIZE(va) && i < Py_SIZE(wa); i++) { - vi = getarrayitem(v, i); - wi = getarrayitem(w, i); - if (vi == NULL || wi == NULL) { - Py_XDECREF(vi); - Py_XDECREF(wi); - return NULL; - } - k = PyObject_RichCompareBool(vi, wi, Py_EQ); - if (k == 0) - break; /* 
Keeping vi and wi alive! */ - Py_DECREF(vi); - Py_DECREF(wi); - if (k < 0) - return NULL; - } + /* Search for the first index where items are different */ + k = 1; + for (i = 0; i < Py_SIZE(va) && i < Py_SIZE(wa); i++) { + vi = getarrayitem(v, i); + wi = getarrayitem(w, i); + if (vi == NULL || wi == NULL) { + Py_XDECREF(vi); + Py_XDECREF(wi); + return NULL; + } + k = PyObject_RichCompareBool(vi, wi, Py_EQ); + if (k == 0) + break; /* Keeping vi and wi alive! */ + Py_DECREF(vi); + Py_DECREF(wi); + if (k < 0) + return NULL; + } - if (k) { - /* No more items to compare -- compare sizes */ - Py_ssize_t vs = Py_SIZE(va); - Py_ssize_t ws = Py_SIZE(wa); - int cmp; - switch (op) { - case Py_LT: cmp = vs < ws; break; - case Py_LE: cmp = vs <= ws; break; - case Py_EQ: cmp = vs == ws; break; - case Py_NE: cmp = vs != ws; break; - case Py_GT: cmp = vs > ws; break; - case Py_GE: cmp = vs >= ws; break; - default: return NULL; /* cannot happen */ - } - if (cmp) - res = Py_True; - else - res = Py_False; - Py_INCREF(res); - return res; - } + if (k) { + /* No more items to compare -- compare sizes */ + Py_ssize_t vs = Py_SIZE(va); + Py_ssize_t ws = Py_SIZE(wa); + int cmp; + switch (op) { + case Py_LT: cmp = vs < ws; break; + case Py_LE: cmp = vs <= ws; break; + case Py_EQ: cmp = vs == ws; break; + case Py_NE: cmp = vs != ws; break; + case Py_GT: cmp = vs > ws; break; + case Py_GE: cmp = vs >= ws; break; + default: return NULL; /* cannot happen */ + } + if (cmp) + res = Py_True; + else + res = Py_False; + Py_INCREF(res); + return res; + } - /* We have an item that differs. First, shortcuts for EQ/NE */ - if (op == Py_EQ) { - Py_INCREF(Py_False); - res = Py_False; - } - else if (op == Py_NE) { - Py_INCREF(Py_True); - res = Py_True; - } - else { - /* Compare the final item again using the proper operator */ - res = PyObject_RichCompare(vi, wi, op); - } - Py_DECREF(vi); - Py_DECREF(wi); - return res; + /* We have an item that differs. 
First, shortcuts for EQ/NE */ + if (op == Py_EQ) { + Py_INCREF(Py_False); + res = Py_False; + } + else if (op == Py_NE) { + Py_INCREF(Py_True); + res = Py_True; + } + else { + /* Compare the final item again using the proper operator */ + res = PyObject_RichCompare(vi, wi, op); + } + Py_DECREF(vi); + Py_DECREF(wi); + return res; } static Py_ssize_t array_length(arrayobject *a) { - return Py_SIZE(a); + return Py_SIZE(a); } static PyObject * array_item(arrayobject *a, Py_ssize_t i) { - if (i < 0 || i >= Py_SIZE(a)) { - PyErr_SetString(PyExc_IndexError, "array index out of range"); - return NULL; - } - return getarrayitem((PyObject *)a, i); + if (i < 0 || i >= Py_SIZE(a)) { + PyErr_SetString(PyExc_IndexError, "array index out of range"); + return NULL; + } + return getarrayitem((PyObject *)a, i); } static PyObject * array_slice(arrayobject *a, Py_ssize_t ilow, Py_ssize_t ihigh) { - arrayobject *np; - if (ilow < 0) - ilow = 0; - else if (ilow > Py_SIZE(a)) - ilow = Py_SIZE(a); - if (ihigh < 0) - ihigh = 0; - if (ihigh < ilow) - ihigh = ilow; - else if (ihigh > Py_SIZE(a)) - ihigh = Py_SIZE(a); - np = (arrayobject *) newarrayobject(&Arraytype, ihigh - ilow, a->ob_descr); - if (np == NULL) - return NULL; - memcpy(np->ob_item, a->ob_item + ilow * a->ob_descr->itemsize, - (ihigh-ilow) * a->ob_descr->itemsize); - return (PyObject *)np; + arrayobject *np; + if (ilow < 0) + ilow = 0; + else if (ilow > Py_SIZE(a)) + ilow = Py_SIZE(a); + if (ihigh < 0) + ihigh = 0; + if (ihigh < ilow) + ihigh = ilow; + else if (ihigh > Py_SIZE(a)) + ihigh = Py_SIZE(a); + np = (arrayobject *) newarrayobject(&Arraytype, ihigh - ilow, a->ob_descr); + if (np == NULL) + return NULL; + memcpy(np->ob_item, a->ob_item + ilow * a->ob_descr->itemsize, + (ihigh-ilow) * a->ob_descr->itemsize); + return (PyObject *)np; } static PyObject * array_copy(arrayobject *a, PyObject *unused) { - return array_slice(a, 0, Py_SIZE(a)); + return array_slice(a, 0, Py_SIZE(a)); } PyDoc_STRVAR(copy_doc, @@ -642,297 
+639,297 @@ static PyObject * array_concat(arrayobject *a, PyObject *bb) { - Py_ssize_t size; - arrayobject *np; - if (!array_Check(bb)) { - PyErr_Format(PyExc_TypeError, - "can only append array (not \"%.200s\") to array", - Py_TYPE(bb)->tp_name); - return NULL; - } + Py_ssize_t size; + arrayobject *np; + if (!array_Check(bb)) { + PyErr_Format(PyExc_TypeError, + "can only append array (not \"%.200s\") to array", + Py_TYPE(bb)->tp_name); + return NULL; + } #define b ((arrayobject *)bb) - if (a->ob_descr != b->ob_descr) { - PyErr_BadArgument(); - return NULL; - } - if (Py_SIZE(a) > PY_SSIZE_T_MAX - Py_SIZE(b)) { - return PyErr_NoMemory(); - } - size = Py_SIZE(a) + Py_SIZE(b); - np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); - if (np == NULL) { - return NULL; - } - memcpy(np->ob_item, a->ob_item, Py_SIZE(a)*a->ob_descr->itemsize); - memcpy(np->ob_item + Py_SIZE(a)*a->ob_descr->itemsize, - b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); - return (PyObject *)np; + if (a->ob_descr != b->ob_descr) { + PyErr_BadArgument(); + return NULL; + } + if (Py_SIZE(a) > PY_SSIZE_T_MAX - Py_SIZE(b)) { + return PyErr_NoMemory(); + } + size = Py_SIZE(a) + Py_SIZE(b); + np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); + if (np == NULL) { + return NULL; + } + memcpy(np->ob_item, a->ob_item, Py_SIZE(a)*a->ob_descr->itemsize); + memcpy(np->ob_item + Py_SIZE(a)*a->ob_descr->itemsize, + b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); + return (PyObject *)np; #undef b } static PyObject * array_repeat(arrayobject *a, Py_ssize_t n) { - Py_ssize_t i; - Py_ssize_t size; - arrayobject *np; - char *p; - Py_ssize_t nbytes; - if (n < 0) - n = 0; - if ((Py_SIZE(a) != 0) && (n > PY_SSIZE_T_MAX / Py_SIZE(a))) { - return PyErr_NoMemory(); - } - size = Py_SIZE(a) * n; - np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); - if (np == NULL) - return NULL; - p = np->ob_item; - nbytes = Py_SIZE(a) * a->ob_descr->itemsize; - for (i = 0; i < n; i++) { - 
memcpy(p, a->ob_item, nbytes); - p += nbytes; - } - return (PyObject *) np; + Py_ssize_t i; + Py_ssize_t size; + arrayobject *np; + char *p; + Py_ssize_t nbytes; + if (n < 0) + n = 0; + if ((Py_SIZE(a) != 0) && (n > PY_SSIZE_T_MAX / Py_SIZE(a))) { + return PyErr_NoMemory(); + } + size = Py_SIZE(a) * n; + np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); + if (np == NULL) + return NULL; + p = np->ob_item; + nbytes = Py_SIZE(a) * a->ob_descr->itemsize; + for (i = 0; i < n; i++) { + memcpy(p, a->ob_item, nbytes); + p += nbytes; + } + return (PyObject *) np; } static int array_ass_slice(arrayobject *a, Py_ssize_t ilow, Py_ssize_t ihigh, PyObject *v) { - char *item; - Py_ssize_t n; /* Size of replacement array */ - Py_ssize_t d; /* Change in size */ + char *item; + Py_ssize_t n; /* Size of replacement array */ + Py_ssize_t d; /* Change in size */ #define b ((arrayobject *)v) - if (v == NULL) - n = 0; - else if (array_Check(v)) { - n = Py_SIZE(b); - if (a == b) { - /* Special case "a[i:j] = a" -- copy b first */ - int ret; - v = array_slice(b, 0, n); - if (!v) - return -1; - ret = array_ass_slice(a, ilow, ihigh, v); - Py_DECREF(v); - return ret; - } - if (b->ob_descr != a->ob_descr) { - PyErr_BadArgument(); - return -1; - } - } - else { - PyErr_Format(PyExc_TypeError, - "can only assign array (not \"%.200s\") to array slice", - Py_TYPE(v)->tp_name); - return -1; - } - if (ilow < 0) - ilow = 0; - else if (ilow > Py_SIZE(a)) - ilow = Py_SIZE(a); - if (ihigh < 0) - ihigh = 0; - if (ihigh < ilow) - ihigh = ilow; - else if (ihigh > Py_SIZE(a)) - ihigh = Py_SIZE(a); - item = a->ob_item; - d = n - (ihigh-ilow); - if (d < 0) { /* Delete -d items */ - memmove(item + (ihigh+d)*a->ob_descr->itemsize, - item + ihigh*a->ob_descr->itemsize, - (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); - Py_SIZE(a) += d; - PyMem_RESIZE(item, char, Py_SIZE(a)*a->ob_descr->itemsize); - /* Can't fail */ - a->ob_item = item; - a->allocated = Py_SIZE(a); - } - else if (d > 0) { /* Insert d 
items */ - PyMem_RESIZE(item, char, - (Py_SIZE(a) + d)*a->ob_descr->itemsize); - if (item == NULL) { - PyErr_NoMemory(); - return -1; - } - memmove(item + (ihigh+d)*a->ob_descr->itemsize, - item + ihigh*a->ob_descr->itemsize, - (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); - a->ob_item = item; - Py_SIZE(a) += d; - a->allocated = Py_SIZE(a); - } - if (n > 0) - memcpy(item + ilow*a->ob_descr->itemsize, b->ob_item, - n*b->ob_descr->itemsize); - return 0; + if (v == NULL) + n = 0; + else if (array_Check(v)) { + n = Py_SIZE(b); + if (a == b) { + /* Special case "a[i:j] = a" -- copy b first */ + int ret; + v = array_slice(b, 0, n); + if (!v) + return -1; + ret = array_ass_slice(a, ilow, ihigh, v); + Py_DECREF(v); + return ret; + } + if (b->ob_descr != a->ob_descr) { + PyErr_BadArgument(); + return -1; + } + } + else { + PyErr_Format(PyExc_TypeError, + "can only assign array (not \"%.200s\") to array slice", + Py_TYPE(v)->tp_name); + return -1; + } + if (ilow < 0) + ilow = 0; + else if (ilow > Py_SIZE(a)) + ilow = Py_SIZE(a); + if (ihigh < 0) + ihigh = 0; + if (ihigh < ilow) + ihigh = ilow; + else if (ihigh > Py_SIZE(a)) + ihigh = Py_SIZE(a); + item = a->ob_item; + d = n - (ihigh-ilow); + if (d < 0) { /* Delete -d items */ + memmove(item + (ihigh+d)*a->ob_descr->itemsize, + item + ihigh*a->ob_descr->itemsize, + (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); + Py_SIZE(a) += d; + PyMem_RESIZE(item, char, Py_SIZE(a)*a->ob_descr->itemsize); + /* Can't fail */ + a->ob_item = item; + a->allocated = Py_SIZE(a); + } + else if (d > 0) { /* Insert d items */ + PyMem_RESIZE(item, char, + (Py_SIZE(a) + d)*a->ob_descr->itemsize); + if (item == NULL) { + PyErr_NoMemory(); + return -1; + } + memmove(item + (ihigh+d)*a->ob_descr->itemsize, + item + ihigh*a->ob_descr->itemsize, + (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); + a->ob_item = item; + Py_SIZE(a) += d; + a->allocated = Py_SIZE(a); + } + if (n > 0) + memcpy(item + ilow*a->ob_descr->itemsize, b->ob_item, + n*b->ob_descr->itemsize); + 
return 0; #undef b } static int array_ass_item(arrayobject *a, Py_ssize_t i, PyObject *v) { - if (i < 0 || i >= Py_SIZE(a)) { - PyErr_SetString(PyExc_IndexError, - "array assignment index out of range"); - return -1; - } - if (v == NULL) - return array_ass_slice(a, i, i+1, v); - return (*a->ob_descr->setitem)(a, i, v); + if (i < 0 || i >= Py_SIZE(a)) { + PyErr_SetString(PyExc_IndexError, + "array assignment index out of range"); + return -1; + } + if (v == NULL) + return array_ass_slice(a, i, i+1, v); + return (*a->ob_descr->setitem)(a, i, v); } static int setarrayitem(PyObject *a, Py_ssize_t i, PyObject *v) { - assert(array_Check(a)); - return array_ass_item((arrayobject *)a, i, v); + assert(array_Check(a)); + return array_ass_item((arrayobject *)a, i, v); } static int array_iter_extend(arrayobject *self, PyObject *bb) { - PyObject *it, *v; + PyObject *it, *v; - it = PyObject_GetIter(bb); - if (it == NULL) - return -1; + it = PyObject_GetIter(bb); + if (it == NULL) + return -1; - while ((v = PyIter_Next(it)) != NULL) { - if (ins1(self, (int) Py_SIZE(self), v) != 0) { - Py_DECREF(v); - Py_DECREF(it); - return -1; - } - Py_DECREF(v); - } - Py_DECREF(it); - if (PyErr_Occurred()) - return -1; - return 0; + while ((v = PyIter_Next(it)) != NULL) { + if (ins1(self, Py_SIZE(self), v) != 0) { + Py_DECREF(v); + Py_DECREF(it); + return -1; + } + Py_DECREF(v); + } + Py_DECREF(it); + if (PyErr_Occurred()) + return -1; + return 0; } static int array_do_extend(arrayobject *self, PyObject *bb) { - Py_ssize_t size; - char *old_item; + Py_ssize_t size; + char *old_item; - if (!array_Check(bb)) - return array_iter_extend(self, bb); + if (!array_Check(bb)) + return array_iter_extend(self, bb); #define b ((arrayobject *)bb) - if (self->ob_descr != b->ob_descr) { - PyErr_SetString(PyExc_TypeError, - "can only extend with array of same kind"); - return -1; - } - if ((Py_SIZE(self) > PY_SSIZE_T_MAX - Py_SIZE(b)) || - ((Py_SIZE(self) + Py_SIZE(b)) > PY_SSIZE_T_MAX / 
self->ob_descr->itemsize)) { - PyErr_NoMemory(); - return -1; - } - size = Py_SIZE(self) + Py_SIZE(b); - old_item = self->ob_item; - PyMem_RESIZE(self->ob_item, char, size*self->ob_descr->itemsize); - if (self->ob_item == NULL) { - self->ob_item = old_item; - PyErr_NoMemory(); - return -1; - } - memcpy(self->ob_item + Py_SIZE(self)*self->ob_descr->itemsize, - b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); - Py_SIZE(self) = size; - self->allocated = size; + if (self->ob_descr != b->ob_descr) { + PyErr_SetString(PyExc_TypeError, + "can only extend with array of same kind"); + return -1; + } + if ((Py_SIZE(self) > PY_SSIZE_T_MAX - Py_SIZE(b)) || + ((Py_SIZE(self) + Py_SIZE(b)) > PY_SSIZE_T_MAX / self->ob_descr->itemsize)) { + PyErr_NoMemory(); + return -1; + } + size = Py_SIZE(self) + Py_SIZE(b); + old_item = self->ob_item; + PyMem_RESIZE(self->ob_item, char, size*self->ob_descr->itemsize); + if (self->ob_item == NULL) { + self->ob_item = old_item; + PyErr_NoMemory(); + return -1; + } + memcpy(self->ob_item + Py_SIZE(self)*self->ob_descr->itemsize, + b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); + Py_SIZE(self) = size; + self->allocated = size; - return 0; + return 0; #undef b } static PyObject * array_inplace_concat(arrayobject *self, PyObject *bb) { - if (!array_Check(bb)) { - PyErr_Format(PyExc_TypeError, - "can only extend array with array (not \"%.200s\")", - Py_TYPE(bb)->tp_name); - return NULL; - } - if (array_do_extend(self, bb) == -1) - return NULL; - Py_INCREF(self); - return (PyObject *)self; + if (!array_Check(bb)) { + PyErr_Format(PyExc_TypeError, + "can only extend array with array (not \"%.200s\")", + Py_TYPE(bb)->tp_name); + return NULL; + } + if (array_do_extend(self, bb) == -1) + return NULL; + Py_INCREF(self); + return (PyObject *)self; } static PyObject * array_inplace_repeat(arrayobject *self, Py_ssize_t n) { - char *items, *p; - Py_ssize_t size, i; + char *items, *p; + Py_ssize_t size, i; - if (Py_SIZE(self) > 0) { - if (n < 0) - n = 0; - items 
= self->ob_item; - if ((self->ob_descr->itemsize != 0) && - (Py_SIZE(self) > PY_SSIZE_T_MAX / self->ob_descr->itemsize)) { - return PyErr_NoMemory(); - } - size = Py_SIZE(self) * self->ob_descr->itemsize; - if (n == 0) { - PyMem_FREE(items); - self->ob_item = NULL; - Py_SIZE(self) = 0; - self->allocated = 0; - } - else { - if (size > PY_SSIZE_T_MAX / n) { - return PyErr_NoMemory(); - } - PyMem_RESIZE(items, char, n * size); - if (items == NULL) - return PyErr_NoMemory(); - p = items; - for (i = 1; i < n; i++) { - p += size; - memcpy(p, items, size); - } - self->ob_item = items; - Py_SIZE(self) *= n; - self->allocated = Py_SIZE(self); - } - } - Py_INCREF(self); - return (PyObject *)self; + if (Py_SIZE(self) > 0) { + if (n < 0) + n = 0; + items = self->ob_item; + if ((self->ob_descr->itemsize != 0) && + (Py_SIZE(self) > PY_SSIZE_T_MAX / self->ob_descr->itemsize)) { + return PyErr_NoMemory(); + } + size = Py_SIZE(self) * self->ob_descr->itemsize; + if (n == 0) { + PyMem_FREE(items); + self->ob_item = NULL; + Py_SIZE(self) = 0; + self->allocated = 0; + } + else { + if (size > PY_SSIZE_T_MAX / n) { + return PyErr_NoMemory(); + } + PyMem_RESIZE(items, char, n * size); + if (items == NULL) + return PyErr_NoMemory(); + p = items; + for (i = 1; i < n; i++) { + p += size; + memcpy(p, items, size); + } + self->ob_item = items; + Py_SIZE(self) *= n; + self->allocated = Py_SIZE(self); + } + } + Py_INCREF(self); + return (PyObject *)self; } static PyObject * ins(arrayobject *self, Py_ssize_t where, PyObject *v) { - if (ins1(self, where, v) != 0) - return NULL; - Py_INCREF(Py_None); - return Py_None; + if (ins1(self, where, v) != 0) + return NULL; + Py_INCREF(Py_None); + return Py_None; } static PyObject * array_count(arrayobject *self, PyObject *v) { - Py_ssize_t count = 0; - Py_ssize_t i; + Py_ssize_t count = 0; + Py_ssize_t i; - for (i = 0; i < Py_SIZE(self); i++) { - PyObject *selfi = getarrayitem((PyObject *)self, i); - int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); - 
Py_DECREF(selfi); - if (cmp > 0) - count++; - else if (cmp < 0) - return NULL; - } - return PyInt_FromSsize_t(count); + for (i = 0; i < Py_SIZE(self); i++) { + PyObject *selfi = getarrayitem((PyObject *)self, i); + int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); + Py_DECREF(selfi); + if (cmp > 0) + count++; + else if (cmp < 0) + return NULL; + } + return PyInt_FromSsize_t(count); } PyDoc_STRVAR(count_doc, @@ -943,20 +940,20 @@ static PyObject * array_index(arrayobject *self, PyObject *v) { - Py_ssize_t i; + Py_ssize_t i; - for (i = 0; i < Py_SIZE(self); i++) { - PyObject *selfi = getarrayitem((PyObject *)self, i); - int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); - Py_DECREF(selfi); - if (cmp > 0) { - return PyInt_FromLong((long)i); - } - else if (cmp < 0) - return NULL; - } - PyErr_SetString(PyExc_ValueError, "array.index(x): x not in list"); - return NULL; + for (i = 0; i < Py_SIZE(self); i++) { + PyObject *selfi = getarrayitem((PyObject *)self, i); + int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); + Py_DECREF(selfi); + if (cmp > 0) { + return PyInt_FromLong((long)i); + } + else if (cmp < 0) + return NULL; + } + PyErr_SetString(PyExc_ValueError, "array.index(x): x not in list"); + return NULL; } PyDoc_STRVAR(index_doc, @@ -967,38 +964,38 @@ static int array_contains(arrayobject *self, PyObject *v) { - Py_ssize_t i; - int cmp; + Py_ssize_t i; + int cmp; - for (i = 0, cmp = 0 ; cmp == 0 && i < Py_SIZE(self); i++) { - PyObject *selfi = getarrayitem((PyObject *)self, i); - cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); - Py_DECREF(selfi); - } - return cmp; + for (i = 0, cmp = 0 ; cmp == 0 && i < Py_SIZE(self); i++) { + PyObject *selfi = getarrayitem((PyObject *)self, i); + cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); + Py_DECREF(selfi); + } + return cmp; } static PyObject * array_remove(arrayobject *self, PyObject *v) { - int i; + int i; - for (i = 0; i < Py_SIZE(self); i++) { - PyObject *selfi = getarrayitem((PyObject *)self,i); - int cmp = 
PyObject_RichCompareBool(selfi, v, Py_EQ); - Py_DECREF(selfi); - if (cmp > 0) { - if (array_ass_slice(self, i, i+1, - (PyObject *)NULL) != 0) - return NULL; - Py_INCREF(Py_None); - return Py_None; - } - else if (cmp < 0) - return NULL; - } - PyErr_SetString(PyExc_ValueError, "array.remove(x): x not in list"); - return NULL; + for (i = 0; i < Py_SIZE(self); i++) { + PyObject *selfi = getarrayitem((PyObject *)self,i); + int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); + Py_DECREF(selfi); + if (cmp > 0) { + if (array_ass_slice(self, i, i+1, + (PyObject *)NULL) != 0) + return NULL; + Py_INCREF(Py_None); + return Py_None; + } + else if (cmp < 0) + return NULL; + } + PyErr_SetString(PyExc_ValueError, "array.remove(x): x not in list"); + return NULL; } PyDoc_STRVAR(remove_doc, @@ -1009,27 +1006,27 @@ static PyObject * array_pop(arrayobject *self, PyObject *args) { - Py_ssize_t i = -1; - PyObject *v; - if (!PyArg_ParseTuple(args, "|n:pop", &i)) - return NULL; - if (Py_SIZE(self) == 0) { - /* Special-case most common failure cause */ - PyErr_SetString(PyExc_IndexError, "pop from empty array"); - return NULL; - } - if (i < 0) - i += Py_SIZE(self); - if (i < 0 || i >= Py_SIZE(self)) { - PyErr_SetString(PyExc_IndexError, "pop index out of range"); - return NULL; - } - v = getarrayitem((PyObject *)self,i); - if (array_ass_slice(self, i, i+1, (PyObject *)NULL) != 0) { - Py_DECREF(v); - return NULL; - } - return v; + Py_ssize_t i = -1; + PyObject *v; + if (!PyArg_ParseTuple(args, "|n:pop", &i)) + return NULL; + if (Py_SIZE(self) == 0) { + /* Special-case most common failure cause */ + PyErr_SetString(PyExc_IndexError, "pop from empty array"); + return NULL; + } + if (i < 0) + i += Py_SIZE(self); + if (i < 0 || i >= Py_SIZE(self)) { + PyErr_SetString(PyExc_IndexError, "pop index out of range"); + return NULL; + } + v = getarrayitem((PyObject *)self,i); + if (array_ass_slice(self, i, i+1, (PyObject *)NULL) != 0) { + Py_DECREF(v); + return NULL; + } + return v; } 
PyDoc_STRVAR(pop_doc, @@ -1040,10 +1037,10 @@ static PyObject * array_extend(arrayobject *self, PyObject *bb) { - if (array_do_extend(self, bb) == -1) - return NULL; - Py_INCREF(Py_None); - return Py_None; + if (array_do_extend(self, bb) == -1) + return NULL; + Py_INCREF(Py_None); + return Py_None; } PyDoc_STRVAR(extend_doc, @@ -1054,11 +1051,11 @@ static PyObject * array_insert(arrayobject *self, PyObject *args) { - Py_ssize_t i; - PyObject *v; - if (!PyArg_ParseTuple(args, "nO:insert", &i, &v)) - return NULL; - return ins(self, i, v); + Py_ssize_t i; + PyObject *v; + if (!PyArg_ParseTuple(args, "nO:insert", &i, &v)) + return NULL; + return ins(self, i, v); } PyDoc_STRVAR(insert_doc, @@ -1070,15 +1067,15 @@ static PyObject * array_buffer_info(arrayobject *self, PyObject *unused) { - PyObject* retval = NULL; - retval = PyTuple_New(2); - if (!retval) - return NULL; + PyObject* retval = NULL; + retval = PyTuple_New(2); + if (!retval) + return NULL; - PyTuple_SET_ITEM(retval, 0, PyLong_FromVoidPtr(self->ob_item)); - PyTuple_SET_ITEM(retval, 1, PyInt_FromLong((long)(Py_SIZE(self)))); + PyTuple_SET_ITEM(retval, 0, PyLong_FromVoidPtr(self->ob_item)); + PyTuple_SET_ITEM(retval, 1, PyInt_FromLong((long)(Py_SIZE(self)))); - return retval; + return retval; } PyDoc_STRVAR(buffer_info_doc, @@ -1093,7 +1090,7 @@ static PyObject * array_append(arrayobject *self, PyObject *v) { - return ins(self, (int) Py_SIZE(self), v); + return ins(self, Py_SIZE(self), v); } PyDoc_STRVAR(append_doc, @@ -1105,52 +1102,52 @@ static PyObject * array_byteswap(arrayobject *self, PyObject *unused) { - char *p; - Py_ssize_t i; + char *p; + Py_ssize_t i; - switch (self->ob_descr->itemsize) { - case 1: - break; - case 2: - for (p = self->ob_item, i = Py_SIZE(self); --i >= 0; p += 2) { - char p0 = p[0]; - p[0] = p[1]; - p[1] = p0; - } - break; - case 4: - for (p = self->ob_item, i = Py_SIZE(self); --i >= 0; p += 4) { - char p0 = p[0]; - char p1 = p[1]; - p[0] = p[3]; - p[1] = p[2]; - p[2] = p1; - p[3] = 
p0; - } - break; - case 8: - for (p = self->ob_item, i = Py_SIZE(self); --i >= 0; p += 8) { - char p0 = p[0]; - char p1 = p[1]; - char p2 = p[2]; - char p3 = p[3]; - p[0] = p[7]; - p[1] = p[6]; - p[2] = p[5]; - p[3] = p[4]; - p[4] = p3; - p[5] = p2; - p[6] = p1; - p[7] = p0; - } - break; - default: - PyErr_SetString(PyExc_RuntimeError, - "don't know how to byteswap this array type"); - return NULL; - } - Py_INCREF(Py_None); - return Py_None; + switch (self->ob_descr->itemsize) { + case 1: + break; + case 2: + for (p = self->ob_item, i = Py_SIZE(self); --i >= 0; p += 2) { + char p0 = p[0]; + p[0] = p[1]; + p[1] = p0; + } + break; + case 4: + for (p = self->ob_item, i = Py_SIZE(self); --i >= 0; p += 4) { + char p0 = p[0]; + char p1 = p[1]; + p[0] = p[3]; + p[1] = p[2]; + p[2] = p1; + p[3] = p0; + } + break; + case 8: + for (p = self->ob_item, i = Py_SIZE(self); --i >= 0; p += 8) { + char p0 = p[0]; + char p1 = p[1]; + char p2 = p[2]; + char p3 = p[3]; + p[0] = p[7]; + p[1] = p[6]; + p[2] = p[5]; + p[3] = p[4]; + p[4] = p3; + p[5] = p2; + p[6] = p1; + p[7] = p0; + } + break; + default: + PyErr_SetString(PyExc_RuntimeError, + "don't know how to byteswap this array type"); + return NULL; + } + Py_INCREF(Py_None); + return Py_None; } PyDoc_STRVAR(byteswap_doc, @@ -1160,64 +1157,30 @@ 4, or 8 bytes in size, RuntimeError is raised."); static PyObject * -array_reduce(arrayobject *array) -{ - PyObject *dict, *result; - - dict = PyObject_GetAttrString((PyObject *)array, "__dict__"); - if (dict == NULL) { - PyErr_Clear(); - dict = Py_None; - Py_INCREF(dict); - } - if (Py_SIZE(array) > 0) { - if (array->ob_descr->itemsize - > PY_SSIZE_T_MAX / array->ob_size) { - return PyErr_NoMemory(); - } - result = Py_BuildValue("O(cs#)O", - Py_TYPE(array), - array->ob_descr->typecode, - array->ob_item, - Py_SIZE(array) * array->ob_descr->itemsize, - dict); - } else { - result = Py_BuildValue("O(c)O", - Py_TYPE(array), - array->ob_descr->typecode, - dict); - } - Py_DECREF(dict); - return 
result; -} - -PyDoc_STRVAR(array_doc, "Return state information for pickling."); - -static PyObject * array_reverse(arrayobject *self, PyObject *unused) { - register Py_ssize_t itemsize = self->ob_descr->itemsize; - register char *p, *q; - /* little buffer to hold items while swapping */ - char tmp[256]; /* 8 is probably enough -- but why skimp */ - assert((size_t)itemsize <= sizeof(tmp)); + register Py_ssize_t itemsize = self->ob_descr->itemsize; + register char *p, *q; + /* little buffer to hold items while swapping */ + char tmp[256]; /* 8 is probably enough -- but why skimp */ + assert((size_t)itemsize <= sizeof(tmp)); - if (Py_SIZE(self) > 1) { - for (p = self->ob_item, - q = self->ob_item + (Py_SIZE(self) - 1)*itemsize; - p < q; - p += itemsize, q -= itemsize) { - /* memory areas guaranteed disjoint, so memcpy - * is safe (& memmove may be slower). - */ - memcpy(tmp, p, itemsize); - memcpy(p, q, itemsize); - memcpy(q, tmp, itemsize); - } - } + if (Py_SIZE(self) > 1) { + for (p = self->ob_item, + q = self->ob_item + (Py_SIZE(self) - 1)*itemsize; + p < q; + p += itemsize, q -= itemsize) { + /* memory areas guaranteed disjoint, so memcpy + * is safe (& memmove may be slower). 
+ */ + memcpy(tmp, p, itemsize); + memcpy(p, q, itemsize); + memcpy(q, tmp, itemsize); + } + } - Py_INCREF(Py_None); - return Py_None; + Py_INCREF(Py_None); + return Py_None; } PyDoc_STRVAR(reverse_doc, @@ -1228,50 +1191,56 @@ static PyObject * array_fromfile(arrayobject *self, PyObject *args) { - PyObject *f; - Py_ssize_t n; - FILE *fp; - if (!PyArg_ParseTuple(args, "On:fromfile", &f, &n)) - return NULL; - fp = PyFile_AsFile(f); - if (fp == NULL) { - PyErr_SetString(PyExc_TypeError, "arg1 must be open file"); - return NULL; - } - if (n > 0) { - char *item = self->ob_item; - Py_ssize_t itemsize = self->ob_descr->itemsize; - size_t nread; - Py_ssize_t newlength; - size_t newbytes; - /* Be careful here about overflow */ - if ((newlength = Py_SIZE(self) + n) <= 0 || - (newbytes = newlength * itemsize) / itemsize != - (size_t)newlength) - goto nomem; - PyMem_RESIZE(item, char, newbytes); - if (item == NULL) { - nomem: - PyErr_NoMemory(); - return NULL; - } - self->ob_item = item; - Py_SIZE(self) += n; - self->allocated = Py_SIZE(self); - nread = fread(item + (Py_SIZE(self) - n) * itemsize, - itemsize, n, fp); - if (nread < (size_t)n) { - Py_SIZE(self) -= (n - nread); - PyMem_RESIZE(item, char, Py_SIZE(self)*itemsize); - self->ob_item = item; - self->allocated = Py_SIZE(self); - PyErr_SetString(PyExc_EOFError, - "not enough items in file"); - return NULL; - } - } - Py_INCREF(Py_None); - return Py_None; + PyObject *f; + Py_ssize_t n; + FILE *fp; + if (!PyArg_ParseTuple(args, "On:fromfile", &f, &n)) + return NULL; + fp = PyFile_AsFile(f); + if (fp == NULL) { + PyErr_SetString(PyExc_TypeError, "arg1 must be open file"); + return NULL; + } + if (n > 0) { + char *item = self->ob_item; + Py_ssize_t itemsize = self->ob_descr->itemsize; + size_t nread; + Py_ssize_t newlength; + size_t newbytes; + /* Be careful here about overflow */ + if ((newlength = Py_SIZE(self) + n) <= 0 || + (newbytes = newlength * itemsize) / itemsize != + (size_t)newlength) + goto nomem; + 
PyMem_RESIZE(item, char, newbytes); + if (item == NULL) { + nomem: + PyErr_NoMemory(); + return NULL; + } + self->ob_item = item; + Py_SIZE(self) += n; + self->allocated = Py_SIZE(self); + nread = fread(item + (Py_SIZE(self) - n) * itemsize, + itemsize, n, fp); + if (nread < (size_t)n) { + Py_SIZE(self) -= (n - nread); + PyMem_RESIZE(item, char, Py_SIZE(self)*itemsize); + self->ob_item = item; + self->allocated = Py_SIZE(self); + if (ferror(fp)) { + PyErr_SetFromErrno(PyExc_IOError); + clearerr(fp); + } + else { + PyErr_SetString(PyExc_EOFError, + "not enough items in file"); + } + return NULL; + } + } + Py_INCREF(Py_None); + return Py_None; } PyDoc_STRVAR(fromfile_doc, @@ -1284,33 +1253,33 @@ static PyObject * array_fromfile_as_read(arrayobject *self, PyObject *args) { - if (PyErr_WarnPy3k("array.read() not supported in 3.x; " - "use array.fromfile()", 1) < 0) - return NULL; - return array_fromfile(self, args); + if (PyErr_WarnPy3k("array.read() not supported in 3.x; " + "use array.fromfile()", 1) < 0) + return NULL; + return array_fromfile(self, args); } static PyObject * array_tofile(arrayobject *self, PyObject *f) { - FILE *fp; + FILE *fp; - fp = PyFile_AsFile(f); - if (fp == NULL) { - PyErr_SetString(PyExc_TypeError, "arg must be open file"); - return NULL; - } - if (self->ob_size > 0) { - if (fwrite(self->ob_item, self->ob_descr->itemsize, - self->ob_size, fp) != (size_t)self->ob_size) { - PyErr_SetFromErrno(PyExc_IOError); - clearerr(fp); - return NULL; - } - } - Py_INCREF(Py_None); - return Py_None; + fp = PyFile_AsFile(f); + if (fp == NULL) { + PyErr_SetString(PyExc_TypeError, "arg must be open file"); + return NULL; + } + if (self->ob_size > 0) { + if (fwrite(self->ob_item, self->ob_descr->itemsize, + self->ob_size, fp) != (size_t)self->ob_size) { + PyErr_SetFromErrno(PyExc_IOError); + clearerr(fp); + return NULL; + } + } + Py_INCREF(Py_None); + return Py_None; } PyDoc_STRVAR(tofile_doc, @@ -1323,53 +1292,53 @@ static PyObject * 
array_tofile_as_write(arrayobject *self, PyObject *f) { - if (PyErr_WarnPy3k("array.write() not supported in 3.x; " - "use array.tofile()", 1) < 0) - return NULL; - return array_tofile(self, f); + if (PyErr_WarnPy3k("array.write() not supported in 3.x; " + "use array.tofile()", 1) < 0) + return NULL; + return array_tofile(self, f); } static PyObject * array_fromlist(arrayobject *self, PyObject *list) { - Py_ssize_t n; - Py_ssize_t itemsize = self->ob_descr->itemsize; + Py_ssize_t n; + Py_ssize_t itemsize = self->ob_descr->itemsize; - if (!PyList_Check(list)) { - PyErr_SetString(PyExc_TypeError, "arg must be list"); - return NULL; - } - n = PyList_Size(list); - if (n > 0) { - char *item = self->ob_item; - Py_ssize_t i; - PyMem_RESIZE(item, char, (Py_SIZE(self) + n) * itemsize); - if (item == NULL) { - PyErr_NoMemory(); - return NULL; - } - self->ob_item = item; - Py_SIZE(self) += n; - self->allocated = Py_SIZE(self); - for (i = 0; i < n; i++) { - PyObject *v = PyList_GetItem(list, i); - if ((*self->ob_descr->setitem)(self, - Py_SIZE(self) - n + i, v) != 0) { - Py_SIZE(self) -= n; - if (itemsize && (self->ob_size > PY_SSIZE_T_MAX / itemsize)) { - return PyErr_NoMemory(); - } - PyMem_RESIZE(item, char, - Py_SIZE(self) * itemsize); - self->ob_item = item; - self->allocated = Py_SIZE(self); - return NULL; - } - } - } - Py_INCREF(Py_None); - return Py_None; + if (!PyList_Check(list)) { + PyErr_SetString(PyExc_TypeError, "arg must be list"); + return NULL; + } + n = PyList_Size(list); + if (n > 0) { + char *item = self->ob_item; + Py_ssize_t i; + PyMem_RESIZE(item, char, (Py_SIZE(self) + n) * itemsize); + if (item == NULL) { + PyErr_NoMemory(); + return NULL; + } + self->ob_item = item; + Py_SIZE(self) += n; + self->allocated = Py_SIZE(self); + for (i = 0; i < n; i++) { + PyObject *v = PyList_GetItem(list, i); + if ((*self->ob_descr->setitem)(self, + Py_SIZE(self) - n + i, v) != 0) { + Py_SIZE(self) -= n; + if (itemsize && (self->ob_size > PY_SSIZE_T_MAX / itemsize)) { + 
return PyErr_NoMemory(); + } + PyMem_RESIZE(item, char, + Py_SIZE(self) * itemsize); + self->ob_item = item; + self->allocated = Py_SIZE(self); + return NULL; + } + } + } + Py_INCREF(Py_None); + return Py_None; } PyDoc_STRVAR(fromlist_doc, @@ -1381,20 +1350,20 @@ static PyObject * array_tolist(arrayobject *self, PyObject *unused) { - PyObject *list = PyList_New(Py_SIZE(self)); - Py_ssize_t i; + PyObject *list = PyList_New(Py_SIZE(self)); + Py_ssize_t i; - if (list == NULL) - return NULL; - for (i = 0; i < Py_SIZE(self); i++) { - PyObject *v = getarrayitem((PyObject *)self, i); - if (v == NULL) { - Py_DECREF(list); - return NULL; - } - PyList_SetItem(list, i, v); - } - return list; + if (list == NULL) + return NULL; + for (i = 0; i < Py_SIZE(self); i++) { + PyObject *v = getarrayitem((PyObject *)self, i); + if (v == NULL) { + Py_DECREF(list); + return NULL; + } + PyList_SetItem(list, i, v); + } + return list; } PyDoc_STRVAR(tolist_doc, @@ -1406,36 +1375,36 @@ static PyObject * array_fromstring(arrayobject *self, PyObject *args) { - char *str; - Py_ssize_t n; - int itemsize = self->ob_descr->itemsize; - if (!PyArg_ParseTuple(args, "s#:fromstring", &str, &n)) - return NULL; - if (n % itemsize != 0) { - PyErr_SetString(PyExc_ValueError, - "string length not a multiple of item size"); - return NULL; - } - n = n / itemsize; - if (n > 0) { - char *item = self->ob_item; - if ((n > PY_SSIZE_T_MAX - Py_SIZE(self)) || - ((Py_SIZE(self) + n) > PY_SSIZE_T_MAX / itemsize)) { - return PyErr_NoMemory(); - } - PyMem_RESIZE(item, char, (Py_SIZE(self) + n) * itemsize); - if (item == NULL) { - PyErr_NoMemory(); - return NULL; - } - self->ob_item = item; - Py_SIZE(self) += n; - self->allocated = Py_SIZE(self); - memcpy(item + (Py_SIZE(self) - n) * itemsize, - str, itemsize*n); - } - Py_INCREF(Py_None); - return Py_None; + char *str; + Py_ssize_t n; + int itemsize = self->ob_descr->itemsize; + if (!PyArg_ParseTuple(args, "s#:fromstring", &str, &n)) + return NULL; + if (n % itemsize != 
0) { + PyErr_SetString(PyExc_ValueError, + "string length not a multiple of item size"); + return NULL; + } + n = n / itemsize; + if (n > 0) { + char *item = self->ob_item; + if ((n > PY_SSIZE_T_MAX - Py_SIZE(self)) || + ((Py_SIZE(self) + n) > PY_SSIZE_T_MAX / itemsize)) { + return PyErr_NoMemory(); + } + PyMem_RESIZE(item, char, (Py_SIZE(self) + n) * itemsize); + if (item == NULL) { + PyErr_NoMemory(); + return NULL; + } + self->ob_item = item; + Py_SIZE(self) += n; + self->allocated = Py_SIZE(self); + memcpy(item + (Py_SIZE(self) - n) * itemsize, + str, itemsize*n); + } + Py_INCREF(Py_None); + return Py_None; } PyDoc_STRVAR(fromstring_doc, @@ -1448,12 +1417,12 @@ static PyObject * array_tostring(arrayobject *self, PyObject *unused) { - if (self->ob_size <= PY_SSIZE_T_MAX / self->ob_descr->itemsize) { - return PyString_FromStringAndSize(self->ob_item, - Py_SIZE(self) * self->ob_descr->itemsize); - } else { - return PyErr_NoMemory(); - } + if (self->ob_size <= PY_SSIZE_T_MAX / self->ob_descr->itemsize) { + return PyString_FromStringAndSize(self->ob_item, + Py_SIZE(self) * self->ob_descr->itemsize); + } else { + return PyErr_NoMemory(); + } } PyDoc_STRVAR(tostring_doc, @@ -1468,36 +1437,36 @@ static PyObject * array_fromunicode(arrayobject *self, PyObject *args) { - Py_UNICODE *ustr; - Py_ssize_t n; + Py_UNICODE *ustr; + Py_ssize_t n; - if (!PyArg_ParseTuple(args, "u#:fromunicode", &ustr, &n)) - return NULL; - if (self->ob_descr->typecode != 'u') { - PyErr_SetString(PyExc_ValueError, - "fromunicode() may only be called on " - "type 'u' arrays"); - return NULL; - } - if (n > 0) { - Py_UNICODE *item = (Py_UNICODE *) self->ob_item; - if (Py_SIZE(self) > PY_SSIZE_T_MAX - n) { - return PyErr_NoMemory(); - } - PyMem_RESIZE(item, Py_UNICODE, Py_SIZE(self) + n); - if (item == NULL) { - PyErr_NoMemory(); - return NULL; - } - self->ob_item = (char *) item; - Py_SIZE(self) += n; - self->allocated = Py_SIZE(self); - memcpy(item + Py_SIZE(self) - n, - ustr, n * 
sizeof(Py_UNICODE)); - } + if (!PyArg_ParseTuple(args, "u#:fromunicode", &ustr, &n)) + return NULL; + if (self->ob_descr->typecode != 'u') { + PyErr_SetString(PyExc_ValueError, + "fromunicode() may only be called on " + "type 'u' arrays"); + return NULL; + } + if (n > 0) { + Py_UNICODE *item = (Py_UNICODE *) self->ob_item; + if (Py_SIZE(self) > PY_SSIZE_T_MAX - n) { + return PyErr_NoMemory(); + } + PyMem_RESIZE(item, Py_UNICODE, Py_SIZE(self) + n); + if (item == NULL) { + PyErr_NoMemory(); + return NULL; + } + self->ob_item = (char *) item; + Py_SIZE(self) += n; + self->allocated = Py_SIZE(self); + memcpy(item + Py_SIZE(self) - n, + ustr, n * sizeof(Py_UNICODE)); + } - Py_INCREF(Py_None); - return Py_None; + Py_INCREF(Py_None); + return Py_None; } PyDoc_STRVAR(fromunicode_doc, @@ -1512,12 +1481,12 @@ static PyObject * array_tounicode(arrayobject *self, PyObject *unused) { - if (self->ob_descr->typecode != 'u') { - PyErr_SetString(PyExc_ValueError, - "tounicode() may only be called on type 'u' arrays"); - return NULL; - } - return PyUnicode_FromUnicode((Py_UNICODE *) self->ob_item, Py_SIZE(self)); + if (self->ob_descr->typecode != 'u') { + PyErr_SetString(PyExc_ValueError, + "tounicode() may only be called on type 'u' arrays"); + return NULL; + } + return PyUnicode_FromUnicode((Py_UNICODE *) self->ob_item, Py_SIZE(self)); } PyDoc_STRVAR(tounicode_doc, @@ -1530,325 +1499,357 @@ #endif /* Py_USING_UNICODE */ +static PyObject * +array_reduce(arrayobject *array) +{ + PyObject *dict, *result, *list; + + dict = PyObject_GetAttrString((PyObject *)array, "__dict__"); + if (dict == NULL) { + if (!PyErr_ExceptionMatches(PyExc_AttributeError)) + return NULL; + PyErr_Clear(); + dict = Py_None; + Py_INCREF(dict); + } + /* Unlike in Python 3.x, we never use the more efficient memory + * representation of an array for pickling. 
This is unfortunately + * necessary to allow array objects to be unpickled by Python 3.x, + * since str objects from 2.x are always decoded to unicode in + * Python 3.x. + */ + list = array_tolist(array, NULL); + if (list == NULL) { + Py_DECREF(dict); + return NULL; + } + result = Py_BuildValue( + "O(cO)O", Py_TYPE(array), array->ob_descr->typecode, list, dict); + Py_DECREF(list); + Py_DECREF(dict); + return result; +} + +PyDoc_STRVAR(reduce_doc, "Return state information for pickling."); static PyObject * array_get_typecode(arrayobject *a, void *closure) { - char tc = a->ob_descr->typecode; - return PyString_FromStringAndSize(&tc, 1); + char tc = a->ob_descr->typecode; + return PyString_FromStringAndSize(&tc, 1); } static PyObject * array_get_itemsize(arrayobject *a, void *closure) { - return PyInt_FromLong((long)a->ob_descr->itemsize); + return PyInt_FromLong((long)a->ob_descr->itemsize); } static PyGetSetDef array_getsets [] = { - {"typecode", (getter) array_get_typecode, NULL, - "the typecode character used to create the array"}, - {"itemsize", (getter) array_get_itemsize, NULL, - "the size, in bytes, of one array item"}, - {NULL} + {"typecode", (getter) array_get_typecode, NULL, + "the typecode character used to create the array"}, + {"itemsize", (getter) array_get_itemsize, NULL, + "the size, in bytes, of one array item"}, + {NULL} }; static PyMethodDef array_methods[] = { - {"append", (PyCFunction)array_append, METH_O, - append_doc}, - {"buffer_info", (PyCFunction)array_buffer_info, METH_NOARGS, - buffer_info_doc}, - {"byteswap", (PyCFunction)array_byteswap, METH_NOARGS, - byteswap_doc}, - {"__copy__", (PyCFunction)array_copy, METH_NOARGS, - copy_doc}, - {"count", (PyCFunction)array_count, METH_O, - count_doc}, - {"__deepcopy__",(PyCFunction)array_copy, METH_O, - copy_doc}, - {"extend", (PyCFunction)array_extend, METH_O, - extend_doc}, - {"fromfile", (PyCFunction)array_fromfile, METH_VARARGS, - fromfile_doc}, - {"fromlist", (PyCFunction)array_fromlist, 
METH_O, - fromlist_doc}, - {"fromstring", (PyCFunction)array_fromstring, METH_VARARGS, - fromstring_doc}, + {"append", (PyCFunction)array_append, METH_O, + append_doc}, + {"buffer_info", (PyCFunction)array_buffer_info, METH_NOARGS, + buffer_info_doc}, + {"byteswap", (PyCFunction)array_byteswap, METH_NOARGS, + byteswap_doc}, + {"__copy__", (PyCFunction)array_copy, METH_NOARGS, + copy_doc}, + {"count", (PyCFunction)array_count, METH_O, + count_doc}, + {"__deepcopy__",(PyCFunction)array_copy, METH_O, + copy_doc}, + {"extend", (PyCFunction)array_extend, METH_O, + extend_doc}, + {"fromfile", (PyCFunction)array_fromfile, METH_VARARGS, + fromfile_doc}, + {"fromlist", (PyCFunction)array_fromlist, METH_O, + fromlist_doc}, + {"fromstring", (PyCFunction)array_fromstring, METH_VARARGS, + fromstring_doc}, #ifdef Py_USING_UNICODE - {"fromunicode", (PyCFunction)array_fromunicode, METH_VARARGS, - fromunicode_doc}, + {"fromunicode", (PyCFunction)array_fromunicode, METH_VARARGS, + fromunicode_doc}, #endif - {"index", (PyCFunction)array_index, METH_O, - index_doc}, - {"insert", (PyCFunction)array_insert, METH_VARARGS, - insert_doc}, - {"pop", (PyCFunction)array_pop, METH_VARARGS, - pop_doc}, - {"read", (PyCFunction)array_fromfile_as_read, METH_VARARGS, - fromfile_doc}, - {"__reduce__", (PyCFunction)array_reduce, METH_NOARGS, - array_doc}, - {"remove", (PyCFunction)array_remove, METH_O, - remove_doc}, - {"reverse", (PyCFunction)array_reverse, METH_NOARGS, - reverse_doc}, -/* {"sort", (PyCFunction)array_sort, METH_VARARGS, - sort_doc},*/ - {"tofile", (PyCFunction)array_tofile, METH_O, - tofile_doc}, - {"tolist", (PyCFunction)array_tolist, METH_NOARGS, - tolist_doc}, - {"tostring", (PyCFunction)array_tostring, METH_NOARGS, - tostring_doc}, + {"index", (PyCFunction)array_index, METH_O, + index_doc}, + {"insert", (PyCFunction)array_insert, METH_VARARGS, + insert_doc}, + {"pop", (PyCFunction)array_pop, METH_VARARGS, + pop_doc}, + {"read", (PyCFunction)array_fromfile_as_read, METH_VARARGS, 
+ fromfile_doc}, + {"__reduce__", (PyCFunction)array_reduce, METH_NOARGS, + reduce_doc}, + {"remove", (PyCFunction)array_remove, METH_O, + remove_doc}, + {"reverse", (PyCFunction)array_reverse, METH_NOARGS, + reverse_doc}, +/* {"sort", (PyCFunction)array_sort, METH_VARARGS, + sort_doc},*/ + {"tofile", (PyCFunction)array_tofile, METH_O, + tofile_doc}, + {"tolist", (PyCFunction)array_tolist, METH_NOARGS, + tolist_doc}, + {"tostring", (PyCFunction)array_tostring, METH_NOARGS, + tostring_doc}, #ifdef Py_USING_UNICODE - {"tounicode", (PyCFunction)array_tounicode, METH_NOARGS, - tounicode_doc}, + {"tounicode", (PyCFunction)array_tounicode, METH_NOARGS, + tounicode_doc}, #endif - {"write", (PyCFunction)array_tofile_as_write, METH_O, - tofile_doc}, - {NULL, NULL} /* sentinel */ + {"write", (PyCFunction)array_tofile_as_write, METH_O, + tofile_doc}, + {NULL, NULL} /* sentinel */ }; static PyObject * array_repr(arrayobject *a) { - char buf[256], typecode; - PyObject *s, *t, *v = NULL; - Py_ssize_t len; + char buf[256], typecode; + PyObject *s, *t, *v = NULL; + Py_ssize_t len; - len = Py_SIZE(a); - typecode = a->ob_descr->typecode; - if (len == 0) { - PyOS_snprintf(buf, sizeof(buf), "array('%c')", typecode); - return PyString_FromString(buf); - } - - if (typecode == 'c') - v = array_tostring(a, NULL); + len = Py_SIZE(a); + typecode = a->ob_descr->typecode; + if (len == 0) { + PyOS_snprintf(buf, sizeof(buf), "array('%c')", typecode); + return PyString_FromString(buf); + } + + if (typecode == 'c') + v = array_tostring(a, NULL); #ifdef Py_USING_UNICODE - else if (typecode == 'u') - v = array_tounicode(a, NULL); + else if (typecode == 'u') + v = array_tounicode(a, NULL); #endif - else - v = array_tolist(a, NULL); - t = PyObject_Repr(v); - Py_XDECREF(v); + else + v = array_tolist(a, NULL); + t = PyObject_Repr(v); + Py_XDECREF(v); - PyOS_snprintf(buf, sizeof(buf), "array('%c', ", typecode); - s = PyString_FromString(buf); - PyString_ConcatAndDel(&s, t); - PyString_ConcatAndDel(&s, 
PyString_FromString(")")); - return s; + PyOS_snprintf(buf, sizeof(buf), "array('%c', ", typecode); + s = PyString_FromString(buf); + PyString_ConcatAndDel(&s, t); + PyString_ConcatAndDel(&s, PyString_FromString(")")); + return s; } static PyObject* array_subscr(arrayobject* self, PyObject* item) { - if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i==-1 && PyErr_Occurred()) { - return NULL; - } - if (i < 0) - i += Py_SIZE(self); - return array_item(self, i); - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelength, cur, i; - PyObject* result; - arrayobject* ar; - int itemsize = self->ob_descr->itemsize; + if (PyIndex_Check(item)) { + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); + if (i==-1 && PyErr_Occurred()) { + return NULL; + } + if (i < 0) + i += Py_SIZE(self); + return array_item(self, i); + } + else if (PySlice_Check(item)) { + Py_ssize_t start, stop, step, slicelength, cur, i; + PyObject* result; + arrayobject* ar; + int itemsize = self->ob_descr->itemsize; - if (PySlice_GetIndicesEx((PySliceObject*)item, Py_SIZE(self), - &start, &stop, &step, &slicelength) < 0) { - return NULL; - } + if (PySlice_GetIndicesEx((PySliceObject*)item, Py_SIZE(self), + &start, &stop, &step, &slicelength) < 0) { + return NULL; + } - if (slicelength <= 0) { - return newarrayobject(&Arraytype, 0, self->ob_descr); - } - else if (step == 1) { - PyObject *result = newarrayobject(&Arraytype, - slicelength, self->ob_descr); - if (result == NULL) - return NULL; - memcpy(((arrayobject *)result)->ob_item, - self->ob_item + start * itemsize, - slicelength * itemsize); - return result; - } - else { - result = newarrayobject(&Arraytype, slicelength, self->ob_descr); - if (!result) return NULL; + if (slicelength <= 0) { + return newarrayobject(&Arraytype, 0, self->ob_descr); + } + else if (step == 1) { + PyObject *result = newarrayobject(&Arraytype, + slicelength, self->ob_descr); + if (result == NULL) + return 
NULL; + memcpy(((arrayobject *)result)->ob_item, + self->ob_item + start * itemsize, + slicelength * itemsize); + return result; + } + else { + result = newarrayobject(&Arraytype, slicelength, self->ob_descr); + if (!result) return NULL; - ar = (arrayobject*)result; + ar = (arrayobject*)result; - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - memcpy(ar->ob_item + i*itemsize, - self->ob_item + cur*itemsize, - itemsize); - } - - return result; - } - } - else { - PyErr_SetString(PyExc_TypeError, - "array indices must be integers"); - return NULL; - } + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + memcpy(ar->ob_item + i*itemsize, + self->ob_item + cur*itemsize, + itemsize); + } + + return result; + } + } + else { + PyErr_SetString(PyExc_TypeError, + "array indices must be integers"); + return NULL; + } } static int array_ass_subscr(arrayobject* self, PyObject* item, PyObject* value) { - Py_ssize_t start, stop, step, slicelength, needed; - arrayobject* other; - int itemsize; + Py_ssize_t start, stop, step, slicelength, needed; + arrayobject* other; + int itemsize; - if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - - if (i == -1 && PyErr_Occurred()) - return -1; - if (i < 0) - i += Py_SIZE(self); - if (i < 0 || i >= Py_SIZE(self)) { - PyErr_SetString(PyExc_IndexError, - "array assignment index out of range"); - return -1; - } - if (value == NULL) { - /* Fall through to slice assignment */ - start = i; - stop = i + 1; - step = 1; - slicelength = 1; - } - else - return (*self->ob_descr->setitem)(self, i, value); - } - else if (PySlice_Check(item)) { - if (PySlice_GetIndicesEx((PySliceObject *)item, - Py_SIZE(self), &start, &stop, - &step, &slicelength) < 0) { - return -1; - } - } - else { - PyErr_SetString(PyExc_TypeError, - "array indices must be integer"); - return -1; - } - if (value == NULL) { - other = NULL; - needed = 0; - } - else if (array_Check(value)) { - other = (arrayobject 
*)value; - needed = Py_SIZE(other); - if (self == other) { - /* Special case "self[i:j] = self" -- copy self first */ - int ret; - value = array_slice(other, 0, needed); - if (value == NULL) - return -1; - ret = array_ass_subscr(self, item, value); - Py_DECREF(value); - return ret; - } - if (other->ob_descr != self->ob_descr) { - PyErr_BadArgument(); - return -1; - } - } - else { - PyErr_Format(PyExc_TypeError, - "can only assign array (not \"%.200s\") to array slice", - Py_TYPE(value)->tp_name); - return -1; - } - itemsize = self->ob_descr->itemsize; - /* for 'a[2:1] = ...', the insertion point is 'start', not 'stop' */ - if ((step > 0 && stop < start) || - (step < 0 && stop > start)) - stop = start; - if (step == 1) { - if (slicelength > needed) { - memmove(self->ob_item + (start + needed) * itemsize, - self->ob_item + stop * itemsize, - (Py_SIZE(self) - stop) * itemsize); - if (array_resize(self, Py_SIZE(self) + - needed - slicelength) < 0) - return -1; - } - else if (slicelength < needed) { - if (array_resize(self, Py_SIZE(self) + - needed - slicelength) < 0) - return -1; - memmove(self->ob_item + (start + needed) * itemsize, - self->ob_item + stop * itemsize, - (Py_SIZE(self) - start - needed) * itemsize); - } - if (needed > 0) - memcpy(self->ob_item + start * itemsize, - other->ob_item, needed * itemsize); - return 0; - } - else if (needed == 0) { - /* Delete slice */ - size_t cur; - Py_ssize_t i; + if (PyIndex_Check(item)) { + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (step < 0) { - stop = start + 1; - start = stop + step * (slicelength - 1) - 1; - step = -step; - } - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - Py_ssize_t lim = step - 1; + if (i == -1 && PyErr_Occurred()) + return -1; + if (i < 0) + i += Py_SIZE(self); + if (i < 0 || i >= Py_SIZE(self)) { + PyErr_SetString(PyExc_IndexError, + "array assignment index out of range"); + return -1; + } + if (value == NULL) { + /* Fall through to slice assignment */ + 
start = i; + stop = i + 1; + step = 1; + slicelength = 1; + } + else + return (*self->ob_descr->setitem)(self, i, value); + } + else if (PySlice_Check(item)) { + if (PySlice_GetIndicesEx((PySliceObject *)item, + Py_SIZE(self), &start, &stop, + &step, &slicelength) < 0) { + return -1; + } + } + else { + PyErr_SetString(PyExc_TypeError, + "array indices must be integer"); + return -1; + } + if (value == NULL) { + other = NULL; + needed = 0; + } + else if (array_Check(value)) { + other = (arrayobject *)value; + needed = Py_SIZE(other); + if (self == other) { + /* Special case "self[i:j] = self" -- copy self first */ + int ret; + value = array_slice(other, 0, needed); + if (value == NULL) + return -1; + ret = array_ass_subscr(self, item, value); + Py_DECREF(value); + return ret; + } + if (other->ob_descr != self->ob_descr) { + PyErr_BadArgument(); + return -1; + } + } + else { + PyErr_Format(PyExc_TypeError, + "can only assign array (not \"%.200s\") to array slice", + Py_TYPE(value)->tp_name); + return -1; + } + itemsize = self->ob_descr->itemsize; + /* for 'a[2:1] = ...', the insertion point is 'start', not 'stop' */ + if ((step > 0 && stop < start) || + (step < 0 && stop > start)) + stop = start; + if (step == 1) { + if (slicelength > needed) { + memmove(self->ob_item + (start + needed) * itemsize, + self->ob_item + stop * itemsize, + (Py_SIZE(self) - stop) * itemsize); + if (array_resize(self, Py_SIZE(self) + + needed - slicelength) < 0) + return -1; + } + else if (slicelength < needed) { + if (array_resize(self, Py_SIZE(self) + + needed - slicelength) < 0) + return -1; + memmove(self->ob_item + (start + needed) * itemsize, + self->ob_item + stop * itemsize, + (Py_SIZE(self) - start - needed) * itemsize); + } + if (needed > 0) + memcpy(self->ob_item + start * itemsize, + other->ob_item, needed * itemsize); + return 0; + } + else if (needed == 0) { + /* Delete slice */ + size_t cur; + Py_ssize_t i; - if (cur + step >= (size_t)Py_SIZE(self)) - lim = Py_SIZE(self) - 
cur - 1; - memmove(self->ob_item + (cur - i) * itemsize, - self->ob_item + (cur + 1) * itemsize, - lim * itemsize); - } - cur = start + slicelength * step; - if (cur < (size_t)Py_SIZE(self)) { - memmove(self->ob_item + (cur-slicelength) * itemsize, - self->ob_item + cur * itemsize, - (Py_SIZE(self) - cur) * itemsize); - } - if (array_resize(self, Py_SIZE(self) - slicelength) < 0) - return -1; - return 0; - } - else { - Py_ssize_t cur, i; + if (step < 0) { + stop = start + 1; + start = stop + step * (slicelength - 1) - 1; + step = -step; + } + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + Py_ssize_t lim = step - 1; - if (needed != slicelength) { - PyErr_Format(PyExc_ValueError, - "attempt to assign array of size %zd " - "to extended slice of size %zd", - needed, slicelength); - return -1; - } - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - memcpy(self->ob_item + cur * itemsize, - other->ob_item + i * itemsize, - itemsize); - } - return 0; - } + if (cur + step >= (size_t)Py_SIZE(self)) + lim = Py_SIZE(self) - cur - 1; + memmove(self->ob_item + (cur - i) * itemsize, + self->ob_item + (cur + 1) * itemsize, + lim * itemsize); + } + cur = start + slicelength * step; + if (cur < (size_t)Py_SIZE(self)) { + memmove(self->ob_item + (cur-slicelength) * itemsize, + self->ob_item + cur * itemsize, + (Py_SIZE(self) - cur) * itemsize); + } + if (array_resize(self, Py_SIZE(self) - slicelength) < 0) + return -1; + return 0; + } + else { + Py_ssize_t cur, i; + + if (needed != slicelength) { + PyErr_Format(PyExc_ValueError, + "attempt to assign array of size %zd " + "to extended slice of size %zd", + needed, slicelength); + return -1; + } + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + memcpy(self->ob_item + cur * itemsize, + other->ob_item + i * itemsize, + itemsize); + } + return 0; + } } static PyMappingMethods array_as_mapping = { - (lenfunc)array_length, - (binaryfunc)array_subscr, - (objobjargproc)array_ass_subscr + 
(lenfunc)array_length, + (binaryfunc)array_subscr, + (objobjargproc)array_ass_subscr }; static const void *emptybuf = ""; @@ -1856,164 +1857,164 @@ static Py_ssize_t array_buffer_getreadbuf(arrayobject *self, Py_ssize_t index, const void **ptr) { - if ( index != 0 ) { - PyErr_SetString(PyExc_SystemError, - "Accessing non-existent array segment"); - return -1; - } - *ptr = (void *)self->ob_item; - if (*ptr == NULL) - *ptr = emptybuf; - return Py_SIZE(self)*self->ob_descr->itemsize; + if ( index != 0 ) { + PyErr_SetString(PyExc_SystemError, + "Accessing non-existent array segment"); + return -1; + } + *ptr = (void *)self->ob_item; + if (*ptr == NULL) + *ptr = emptybuf; + return Py_SIZE(self)*self->ob_descr->itemsize; } static Py_ssize_t array_buffer_getwritebuf(arrayobject *self, Py_ssize_t index, const void **ptr) { - if ( index != 0 ) { - PyErr_SetString(PyExc_SystemError, - "Accessing non-existent array segment"); - return -1; - } - *ptr = (void *)self->ob_item; - if (*ptr == NULL) - *ptr = emptybuf; - return Py_SIZE(self)*self->ob_descr->itemsize; + if ( index != 0 ) { + PyErr_SetString(PyExc_SystemError, + "Accessing non-existent array segment"); + return -1; + } + *ptr = (void *)self->ob_item; + if (*ptr == NULL) + *ptr = emptybuf; + return Py_SIZE(self)*self->ob_descr->itemsize; } static Py_ssize_t array_buffer_getsegcount(arrayobject *self, Py_ssize_t *lenp) { - if ( lenp ) - *lenp = Py_SIZE(self)*self->ob_descr->itemsize; - return 1; + if ( lenp ) + *lenp = Py_SIZE(self)*self->ob_descr->itemsize; + return 1; } static PySequenceMethods array_as_sequence = { - (lenfunc)array_length, /*sq_length*/ - (binaryfunc)array_concat, /*sq_concat*/ - (ssizeargfunc)array_repeat, /*sq_repeat*/ - (ssizeargfunc)array_item, /*sq_item*/ - (ssizessizeargfunc)array_slice, /*sq_slice*/ - (ssizeobjargproc)array_ass_item, /*sq_ass_item*/ - (ssizessizeobjargproc)array_ass_slice, /*sq_ass_slice*/ - (objobjproc)array_contains, /*sq_contains*/ - (binaryfunc)array_inplace_concat, 
/*sq_inplace_concat*/ - (ssizeargfunc)array_inplace_repeat /*sq_inplace_repeat*/ + (lenfunc)array_length, /*sq_length*/ + (binaryfunc)array_concat, /*sq_concat*/ + (ssizeargfunc)array_repeat, /*sq_repeat*/ + (ssizeargfunc)array_item, /*sq_item*/ + (ssizessizeargfunc)array_slice, /*sq_slice*/ + (ssizeobjargproc)array_ass_item, /*sq_ass_item*/ + (ssizessizeobjargproc)array_ass_slice, /*sq_ass_slice*/ + (objobjproc)array_contains, /*sq_contains*/ + (binaryfunc)array_inplace_concat, /*sq_inplace_concat*/ + (ssizeargfunc)array_inplace_repeat /*sq_inplace_repeat*/ }; static PyBufferProcs array_as_buffer = { - (readbufferproc)array_buffer_getreadbuf, - (writebufferproc)array_buffer_getwritebuf, - (segcountproc)array_buffer_getsegcount, - NULL, + (readbufferproc)array_buffer_getreadbuf, + (writebufferproc)array_buffer_getwritebuf, + (segcountproc)array_buffer_getsegcount, + NULL, }; static PyObject * array_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { - char c; - PyObject *initial = NULL, *it = NULL; - struct arraydescr *descr; - - if (type == &Arraytype && !_PyArg_NoKeywords("array.array()", kwds)) - return NULL; - - if (!PyArg_ParseTuple(args, "c|O:array", &c, &initial)) - return NULL; - - if (!(initial == NULL || PyList_Check(initial) - || PyString_Check(initial) || PyTuple_Check(initial) - || (c == 'u' && PyUnicode_Check(initial)))) { - it = PyObject_GetIter(initial); - if (it == NULL) - return NULL; - /* We set initial to NULL so that the subsequent code - will create an empty array of the appropriate type - and afterwards we can use array_iter_extend to populate - the array. 
- */ - initial = NULL; - } - for (descr = descriptors; descr->typecode != '\0'; descr++) { - if (descr->typecode == c) { - PyObject *a; - Py_ssize_t len; + char c; + PyObject *initial = NULL, *it = NULL; + struct arraydescr *descr; - if (initial == NULL || !(PyList_Check(initial) - || PyTuple_Check(initial))) - len = 0; - else - len = PySequence_Size(initial); + if (type == &Arraytype && !_PyArg_NoKeywords("array.array()", kwds)) + return NULL; - a = newarrayobject(type, len, descr); - if (a == NULL) - return NULL; + if (!PyArg_ParseTuple(args, "c|O:array", &c, &initial)) + return NULL; - if (len > 0) { - Py_ssize_t i; - for (i = 0; i < len; i++) { - PyObject *v = - PySequence_GetItem(initial, i); - if (v == NULL) { - Py_DECREF(a); - return NULL; - } - if (setarrayitem(a, i, v) != 0) { - Py_DECREF(v); - Py_DECREF(a); - return NULL; - } - Py_DECREF(v); - } - } else if (initial != NULL && PyString_Check(initial)) { - PyObject *t_initial, *v; - t_initial = PyTuple_Pack(1, initial); - if (t_initial == NULL) { - Py_DECREF(a); - return NULL; - } - v = array_fromstring((arrayobject *)a, - t_initial); - Py_DECREF(t_initial); - if (v == NULL) { - Py_DECREF(a); - return NULL; - } - Py_DECREF(v); + if (!(initial == NULL || PyList_Check(initial) + || PyString_Check(initial) || PyTuple_Check(initial) + || (c == 'u' && PyUnicode_Check(initial)))) { + it = PyObject_GetIter(initial); + if (it == NULL) + return NULL; + /* We set initial to NULL so that the subsequent code + will create an empty array of the appropriate type + and afterwards we can use array_iter_extend to populate + the array. 
+ */ + initial = NULL; + } + for (descr = descriptors; descr->typecode != '\0'; descr++) { + if (descr->typecode == c) { + PyObject *a; + Py_ssize_t len; + + if (initial == NULL || !(PyList_Check(initial) + || PyTuple_Check(initial))) + len = 0; + else + len = PySequence_Size(initial); + + a = newarrayobject(type, len, descr); + if (a == NULL) + return NULL; + + if (len > 0) { + Py_ssize_t i; + for (i = 0; i < len; i++) { + PyObject *v = + PySequence_GetItem(initial, i); + if (v == NULL) { + Py_DECREF(a); + return NULL; + } + if (setarrayitem(a, i, v) != 0) { + Py_DECREF(v); + Py_DECREF(a); + return NULL; + } + Py_DECREF(v); + } + } else if (initial != NULL && PyString_Check(initial)) { + PyObject *t_initial, *v; + t_initial = PyTuple_Pack(1, initial); + if (t_initial == NULL) { + Py_DECREF(a); + return NULL; + } + v = array_fromstring((arrayobject *)a, + t_initial); + Py_DECREF(t_initial); + if (v == NULL) { + Py_DECREF(a); + return NULL; + } + Py_DECREF(v); #ifdef Py_USING_UNICODE - } else if (initial != NULL && PyUnicode_Check(initial)) { - Py_ssize_t n = PyUnicode_GET_DATA_SIZE(initial); - if (n > 0) { - arrayobject *self = (arrayobject *)a; - char *item = self->ob_item; - item = (char *)PyMem_Realloc(item, n); - if (item == NULL) { - PyErr_NoMemory(); - Py_DECREF(a); - return NULL; - } - self->ob_item = item; - Py_SIZE(self) = n / sizeof(Py_UNICODE); - memcpy(item, PyUnicode_AS_DATA(initial), n); - self->allocated = Py_SIZE(self); - } + } else if (initial != NULL && PyUnicode_Check(initial)) { + Py_ssize_t n = PyUnicode_GET_DATA_SIZE(initial); + if (n > 0) { + arrayobject *self = (arrayobject *)a; + char *item = self->ob_item; + item = (char *)PyMem_Realloc(item, n); + if (item == NULL) { + PyErr_NoMemory(); + Py_DECREF(a); + return NULL; + } + self->ob_item = item; + Py_SIZE(self) = n / sizeof(Py_UNICODE); + memcpy(item, PyUnicode_AS_DATA(initial), n); + self->allocated = Py_SIZE(self); + } #endif - } - if (it != NULL) { - if (array_iter_extend((arrayobject 
*)a, it) == -1) { - Py_DECREF(it); - Py_DECREF(a); - return NULL; - } - Py_DECREF(it); - } - return a; - } - } - PyErr_SetString(PyExc_ValueError, - "bad typecode (must be c, b, B, u, h, H, i, I, l, L, f or d)"); - return NULL; + } + if (it != NULL) { + if (array_iter_extend((arrayobject *)a, it) == -1) { + Py_DECREF(it); + Py_DECREF(a); + return NULL; + } + Py_DECREF(it); + } + return a; + } + } + PyErr_SetString(PyExc_ValueError, + "bad typecode (must be c, b, B, u, h, H, i, I, l, L, f or d)"); + return NULL; } @@ -2049,7 +2050,7 @@ \n\ Return a new array whose items are restricted by typecode, and\n\ initialized from the optional initializer value, which must be a list,\n\ -string. or iterable over elements of the appropriate type.\n\ +string or iterable over elements of the appropriate type.\n\ \n\ Arrays represent basic values and behave very much like lists, except\n\ the type of objects stored in them is constrained.\n\ @@ -2084,55 +2085,55 @@ static PyObject *array_iter(arrayobject *ao); static PyTypeObject Arraytype = { - PyVarObject_HEAD_INIT(NULL, 0) - "array.array", - sizeof(arrayobject), - 0, - (destructor)array_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_compare */ - (reprfunc)array_repr, /* tp_repr */ - 0, /* tp_as_number*/ - &array_as_sequence, /* tp_as_sequence*/ - &array_as_mapping, /* tp_as_mapping*/ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - &array_as_buffer, /* tp_as_buffer*/ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ - arraytype_doc, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - array_richcompare, /* tp_richcompare */ - offsetof(arrayobject, weakreflist), /* tp_weaklistoffset */ - (getiterfunc)array_iter, /* tp_iter */ - 0, /* tp_iternext */ - array_methods, /* tp_methods */ - 0, /* tp_members */ - array_getsets, /* tp_getset */ - 0, /* tp_base */ - 0, 
/* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - PyType_GenericAlloc, /* tp_alloc */ - array_new, /* tp_new */ - PyObject_Del, /* tp_free */ + PyVarObject_HEAD_INIT(NULL, 0) + "array.array", + sizeof(arrayobject), + 0, + (destructor)array_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + (reprfunc)array_repr, /* tp_repr */ + 0, /* tp_as_number*/ + &array_as_sequence, /* tp_as_sequence*/ + &array_as_mapping, /* tp_as_mapping*/ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + &array_as_buffer, /* tp_as_buffer*/ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ + arraytype_doc, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + array_richcompare, /* tp_richcompare */ + offsetof(arrayobject, weakreflist), /* tp_weaklistoffset */ + (getiterfunc)array_iter, /* tp_iter */ + 0, /* tp_iternext */ + array_methods, /* tp_methods */ + 0, /* tp_members */ + array_getsets, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + PyType_GenericAlloc, /* tp_alloc */ + array_new, /* tp_new */ + PyObject_Del, /* tp_free */ }; /*********************** Array Iterator **************************/ typedef struct { - PyObject_HEAD - Py_ssize_t index; - arrayobject *ao; - PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); + PyObject_HEAD + Py_ssize_t index; + arrayobject *ao; + PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); } arrayiterobject; static PyTypeObject PyArrayIter_Type; @@ -2142,79 +2143,79 @@ static PyObject * array_iter(arrayobject *ao) { - arrayiterobject *it; + arrayiterobject *it; - if (!array_Check(ao)) { - PyErr_BadInternalCall(); - return NULL; - } + if (!array_Check(ao)) { + PyErr_BadInternalCall(); + return NULL; + } - it = 
PyObject_GC_New(arrayiterobject, &PyArrayIter_Type); - if (it == NULL) - return NULL; + it = PyObject_GC_New(arrayiterobject, &PyArrayIter_Type); + if (it == NULL) + return NULL; - Py_INCREF(ao); - it->ao = ao; - it->index = 0; - it->getitem = ao->ob_descr->getitem; - PyObject_GC_Track(it); - return (PyObject *)it; + Py_INCREF(ao); + it->ao = ao; + it->index = 0; + it->getitem = ao->ob_descr->getitem; + PyObject_GC_Track(it); + return (PyObject *)it; } static PyObject * arrayiter_next(arrayiterobject *it) { - assert(PyArrayIter_Check(it)); - if (it->index < Py_SIZE(it->ao)) - return (*it->getitem)(it->ao, it->index++); - return NULL; + assert(PyArrayIter_Check(it)); + if (it->index < Py_SIZE(it->ao)) + return (*it->getitem)(it->ao, it->index++); + return NULL; } static void arrayiter_dealloc(arrayiterobject *it) { - PyObject_GC_UnTrack(it); - Py_XDECREF(it->ao); - PyObject_GC_Del(it); + PyObject_GC_UnTrack(it); + Py_XDECREF(it->ao); + PyObject_GC_Del(it); } static int arrayiter_traverse(arrayiterobject *it, visitproc visit, void *arg) { - Py_VISIT(it->ao); - return 0; + Py_VISIT(it->ao); + return 0; } static PyTypeObject PyArrayIter_Type = { - PyVarObject_HEAD_INIT(NULL, 0) - "arrayiterator", /* tp_name */ - sizeof(arrayiterobject), /* tp_basicsize */ - 0, /* tp_itemsize */ - /* methods */ - (destructor)arrayiter_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_compare */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */ - 0, /* tp_doc */ - (traverseproc)arrayiter_traverse, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - PyObject_SelfIter, /* tp_iter */ - (iternextfunc)arrayiter_next, /* tp_iternext */ - 0, /* tp_methods */ 
+ PyVarObject_HEAD_INIT(NULL, 0) + "arrayiterator", /* tp_name */ + sizeof(arrayiterobject), /* tp_basicsize */ + 0, /* tp_itemsize */ + /* methods */ + (destructor)arrayiter_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */ + 0, /* tp_doc */ + (traverseproc)arrayiter_traverse, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + PyObject_SelfIter, /* tp_iter */ + (iternextfunc)arrayiter_next, /* tp_iternext */ + 0, /* tp_methods */ }; @@ -2229,17 +2230,17 @@ PyMODINIT_FUNC initarray(void) { - PyObject *m; + PyObject *m; - Arraytype.ob_type = &PyType_Type; - PyArrayIter_Type.ob_type = &PyType_Type; - m = Py_InitModule3("array", a_methods, module_doc); - if (m == NULL) - return; + Arraytype.ob_type = &PyType_Type; + PyArrayIter_Type.ob_type = &PyType_Type; + m = Py_InitModule3("array", a_methods, module_doc); + if (m == NULL) + return; - Py_INCREF((PyObject *)&Arraytype); - PyModule_AddObject(m, "ArrayType", (PyObject *)&Arraytype); - Py_INCREF((PyObject *)&Arraytype); - PyModule_AddObject(m, "array", (PyObject *)&Arraytype); - /* No need to check the error here, the caller will do that */ + Py_INCREF((PyObject *)&Arraytype); + PyModule_AddObject(m, "ArrayType", (PyObject *)&Arraytype); + Py_INCREF((PyObject *)&Arraytype); + PyModule_AddObject(m, "array", (PyObject *)&Arraytype); + /* No need to check the error here, the caller will do that */ } diff --git a/pypy/module/cpyext/test/foo.c b/pypy/module/cpyext/test/foo.c --- a/pypy/module/cpyext/test/foo.c +++ b/pypy/module/cpyext/test/foo.c @@ -19,6 +19,7 @@ double foo_double; long long foo_longlong; unsigned long long 
foo_ulonglong; + Py_ssize_t foo_ssizet; } fooobject; static PyTypeObject footype; @@ -172,7 +173,8 @@ {"float_member", T_FLOAT, offsetof(fooobject, foo_float), 0, NULL}, {"double_member", T_DOUBLE, offsetof(fooobject, foo_double), 0, NULL}, {"longlong_member", T_LONGLONG, offsetof(fooobject, foo_longlong), 0, NULL}, - {"ulonglong_member", T_ULONGLONG, offsetof(fooobject, foo_ulonglong), 0, NULL}, + {"ulonglong_member", T_ULONGLONG, offsetof(fooobject, foo_ulonglong), 0, NULL}, + {"ssizet_member", T_PYSSIZET, offsetof(fooobject, foo_ssizet), 0, NULL}, {NULL} /* Sentinel */ }; diff --git a/pypy/module/cpyext/test/test_typeobject.py b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -107,6 +107,7 @@ obj.double_member = 9.25; assert obj.double_member == 9.25 obj.longlong_member = -2**59; assert obj.longlong_member == -2**59 obj.ulonglong_member = 2**63; assert obj.ulonglong_member == 2**63 + obj.ssizet_member = 2**31; assert obj.ssizet_member == 2**31 # def test_staticmethod(self): From noreply at buildbot.pypy.org Wed May 2 01:04:07 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:04:07 +0200 (CEST) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20120501230407.1B18482F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r54861:3b723dd7df4f Date: 2012-05-01 18:04 +0200 http://bitbucket.org/pypy/pypy/changeset/3b723dd7df4f/ Log: hg merge default diff --git a/pypy/module/cpyext/floatobject.py b/pypy/module/cpyext/floatobject.py --- a/pypy/module/cpyext/floatobject.py +++ b/pypy/module/cpyext/floatobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.api import ( CANNOT_FAIL, cpython_api, PyObject, build_type_checkers, CONST_STRING) from pypy.interpreter.error import OperationError +from pypy.rlib.rstruct import runpack PyFloat_Check, PyFloat_CheckExact = build_type_checkers("Float") @@ -33,3 +34,19 @@ 
backward compatibility.""" return space.call_function(space.w_float, w_obj) + at cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0) +def _PyFloat_Unpack4(space, ptr, le): + input = rffi.charpsize2str(ptr, 4) + if rffi.cast(lltype.Signed, le): + return runpack.runpack("f", input) + + at cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0) +def _PyFloat_Unpack8(space, ptr, le): + input = rffi.charpsize2str(ptr, 8) + if rffi.cast(lltype.Signed, le): + return runpack.runpack("d", input) + diff --git a/pypy/module/cpyext/longobject.py b/pypy/module/cpyext/longobject.py --- a/pypy/module/cpyext/longobject.py +++ b/pypy/module/cpyext/longobject.py @@ -16,6 +16,13 @@ """Return a new PyLongObject object from v, or NULL on failure.""" return space.newlong(val) + at cpython_api([Py_ssize_t], PyObject) +def PyLong_FromSsize_t(space, val): + """Return a new PyLongObject object from a C Py_ssize_t, or + NULL on failure. + """ + return space.newlong(val) + @cpython_api([rffi.LONGLONG], PyObject) def PyLong_FromLongLong(space, val): """Return a new PyLongObject object from a C long long, or NULL diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1405,13 +1405,6 @@ """ raise NotImplementedError - at cpython_api([Py_ssize_t], PyObject) -def PyLong_FromSsize_t(space, v): - """Return a new PyLongObject object from a C Py_ssize_t, or - NULL on failure. 
- """ - raise NotImplementedError - @cpython_api([rffi.SIZE_T], PyObject) def PyLong_FromSize_t(space, v): """Return a new PyLongObject object from a C size_t, or diff --git a/pypy/module/cpyext/test/test_floatobject.py b/pypy/module/cpyext/test/test_floatobject.py --- a/pypy/module/cpyext/test/test_floatobject.py +++ b/pypy/module/cpyext/test/test_floatobject.py @@ -1,5 +1,6 @@ from pypy.module.cpyext.test.test_api import BaseApiTest from pypy.module.cpyext.test.test_cpyext import AppTestCpythonExtensionBase +from pypy.rpython.lltypesystem import rffi class TestFloatObject(BaseApiTest): def test_floatobject(self, space, api): @@ -20,6 +21,16 @@ assert space.eq_w(api.PyNumber_Float(space.wrap(Coerce())), space.wrap(42.5)) + def test_unpack(self, space, api): + with rffi.scoped_str2charp("\x9a\x99\x99?") as ptr: + assert abs(api._PyFloat_Unpack4(ptr, 1) - 1.2) < 1e-7 + with rffi.scoped_str2charp("?\x99\x99\x9a") as ptr: + assert abs(api._PyFloat_Unpack4(ptr, 0) - 1.2) < 1e-7 + with rffi.scoped_str2charp("\x1f\x85\xebQ\xb8\x1e\t@") as ptr: + assert abs(api._PyFloat_Unpack8(ptr, 1) - 3.14) < 1e-15 + with rffi.scoped_str2charp("@\t\x1e\xb8Q\xeb\x85\x1f") as ptr: + assert abs(api._PyFloat_Unpack8(ptr, 0) - 3.14) < 1e-15 + class AppTestFloatObject(AppTestCpythonExtensionBase): def test_fromstring(self): module = self.import_extension('foo', [ diff --git a/pypy/module/cpyext/test/test_longobject.py b/pypy/module/cpyext/test/test_longobject.py --- a/pypy/module/cpyext/test/test_longobject.py +++ b/pypy/module/cpyext/test/test_longobject.py @@ -35,6 +35,7 @@ w_value = space.newlong(2) value = api.PyLong_AsSsize_t(w_value) assert value == 2 + assert space.eq_w(w_value, api.PyLong_FromSsize_t(2)) def test_fromdouble(self, space, api): w_value = api.PyLong_FromDouble(-12.74) From noreply at buildbot.pypy.org Wed May 2 01:04:08 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:04:08 +0200 (CEST) Subject: [pypy-commit] pypy py3k: hg merge default 
Message-ID: <20120501230408.5B78C82F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r54862:1fb96540cdbc Date: 2012-05-01 18:33 +0200 http://bitbucket.org/pypy/pypy/changeset/1fb96540cdbc/ Log: hg merge default diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1947,35 +1947,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) -def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): - """Decode length bytes from a UTF-32 encoded buffer string and return the - corresponding Unicode object. errors (if non-NULL) defines the error - handling. It defaults to "strict". - - If byteorder is non-NULL, the decoder starts decoding using the given byte - order: - - *byteorder == -1: little endian - *byteorder == 0: native order - *byteorder == 1: big endian - - If *byteorder is zero, and the first four bytes of the input data are a - byte order mark (BOM), the decoder switches to this byte order and the BOM is - not copied into the resulting Unicode string. If *byteorder is -1 or - 1, any byte order mark is copied to the output. - - After completion, *byteorder is set to the current byte order at the end - of input data. - - In a narrow build codepoints outside the BMP will be decoded as surrogate pairs. - - If byteorder is NULL, the codec starts in native order mode. - - Return NULL if an exception was raised by the codec. - """ - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP, Py_ssize_t], PyObject) def PyUnicode_DecodeUTF32Stateful(space, s, size, errors, byteorder, consumed): """If consumed is NULL, behave like PyUnicode_DecodeUTF32(). 
If diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -391,6 +391,42 @@ test("\xFE\xFF\x00\x61\x00\x62\x00\x63\x00\x64", 0, 1) test("\xFF\xFE\x61\x00\x62\x00\x63\x00\x64\x00", 0, -1) + def test_decode_utf32(self, space, api): + def test(encoded, endian, realendian=None): + encoded_charp = rffi.str2charp(encoded) + strict_charp = rffi.str2charp("strict") + if endian is not None: + if endian < 0: + value = -1 + elif endian > 0: + value = 1 + else: + value = 0 + pendian = lltype.malloc(rffi.INTP.TO, 1, flavor='raw') + pendian[0] = rffi.cast(rffi.INT, value) + else: + pendian = None + + w_ustr = api.PyUnicode_DecodeUTF32(encoded_charp, len(encoded), strict_charp, pendian) + assert space.eq_w(space.call_method(w_ustr, 'encode', space.wrap('ascii')), + space.wrap("ab")) + + rffi.free_charp(encoded_charp) + rffi.free_charp(strict_charp) + if pendian: + if realendian is not None: + assert rffi.cast(rffi.INT, realendian) == pendian[0] + lltype.free(pendian, flavor='raw') + + test("\x61\x00\x00\x00\x62\x00\x00\x00", -1) + + test("\x61\x00\x00\x00\x62\x00\x00\x00", None) + + test("\x00\x00\x00\x61\x00\x00\x00\x62", 1) + + test("\x00\x00\xFE\xFF\x00\x00\x00\x61\x00\x00\x00\x62", 0, 1) + test("\xFF\xFE\x00\x00\x61\x00\x00\x00\x62\x00\x00\x00", 0, -1) + def test_compare(self, space, api): assert api.PyUnicode_Compare(space.wrap('a'), space.wrap('b')) == -1 diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -529,9 +529,8 @@ string = rffi.charpsize2str(s, size) - #FIXME: I don't like these prefixes - if pbyteorder is not None: # correct NULL check? - llbyteorder = rffi.cast(lltype.Signed, pbyteorder[0]) # compatible with int? 
+ if pbyteorder is not None: + llbyteorder = rffi.cast(lltype.Signed, pbyteorder[0]) if llbyteorder < 0: byteorder = "little" elif llbyteorder > 0: @@ -546,11 +545,67 @@ else: errors = None - result, length, byteorder = runicode.str_decode_utf_16_helper(string, size, - errors, - True, # final ? false for multiple passes? - None, # errorhandler - byteorder) + result, length, byteorder = runicode.str_decode_utf_16_helper( + string, size, errors, + True, # final ? false for multiple passes? + None, # errorhandler + byteorder) + if pbyteorder is not None: + pbyteorder[0] = rffi.cast(rffi.INT, byteorder) + + return space.wrap(result) + + at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) +def PyUnicode_DecodeUTF32(space, s, size, llerrors, pbyteorder): + """Decode length bytes from a UTF-32 encoded buffer string and + return the corresponding Unicode object. errors (if non-NULL) + defines the error handling. It defaults to "strict". + + If byteorder is non-NULL, the decoder starts decoding using the + given byte order: + *byteorder == -1: little endian + *byteorder == 0: native order + *byteorder == 1: big endian + + If *byteorder is zero, and the first four bytes of the input data + are a byte order mark (BOM), the decoder switches to this byte + order and the BOM is not copied into the resulting Unicode string. + If *byteorder is -1 or 1, any byte order mark is copied to the + output. + + After completion, *byteorder is set to the current byte order at + the end of input data. + + In a narrow build codepoints outside the BMP will be decoded as + surrogate pairs. + + If byteorder is NULL, the codec starts in native order mode. + + Return NULL if an exception was raised by the codec. 
+ """ + string = rffi.charpsize2str(s, size) + + if pbyteorder: + llbyteorder = rffi.cast(lltype.Signed, pbyteorder[0]) + if llbyteorder < 0: + byteorder = "little" + elif llbyteorder > 0: + byteorder = "big" + else: + byteorder = "native" + else: + byteorder = "native" + + if llerrors: + errors = rffi.charp2str(llerrors) + else: + errors = None + + result, length, byteorder = runicode.str_decode_utf_32_helper( + string, size, errors, + True, # final ? false for multiple passes? + None, # errorhandler + byteorder) if pbyteorder is not None: pbyteorder[0] = rffi.cast(rffi.INT, byteorder) From noreply at buildbot.pypy.org Wed May 2 01:04:09 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:04:09 +0200 (CEST) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20120501230409.974DE82F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r54863:67e35dd3ead5 Date: 2012-05-01 18:50 +0200 http://bitbucket.org/pypy/pypy/changeset/67e35dd3ead5/ Log: hg merge default diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -4,7 +4,7 @@ from pypy.module.cpyext.unicodeobject import ( Py_UNICODE, PyUnicodeObject, new_empty_unicode) from pypy.module.cpyext.api import PyObjectP, PyObject -from pypy.module.cpyext.pyobject import Py_DecRef +from pypy.module.cpyext.pyobject import Py_DecRef, from_ref from pypy.rpython.lltypesystem import rffi, lltype import sys, py @@ -180,7 +180,9 @@ w_res = api.PyUnicode_FromString(s) assert space.unwrap(w_res) == u'sp�m' - w_res = api.PyUnicode_FromStringAndSize(s, 4) + res = api.PyUnicode_FromStringAndSize(s, 4) + w_res = from_ref(space, res) + api.Py_DecRef(res) assert space.unwrap(w_res) == u'sp�' rffi.free_charp(s) diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- 
a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -419,10 +419,10 @@ needed data. The buffer is copied into the new object. If the buffer is not NULL, the return value might be a shared object. Therefore, modification of the resulting Unicode object is only allowed when u is NULL.""" - if not s: - raise NotImplementedError - w_str = space.wrap(rffi.charpsize2str(s, size)) - return space.call_method(w_str, 'decode', space.wrap("utf-8")) + if s: + return make_ref(space, PyUnicode_DecodeUTF8(space, s, size, None)) + else: + return rffi.cast(PyObject, new_empty_unicode(space, size)) @cpython_api([rffi.INT_real], PyObject) def PyUnicode_FromOrdinal(space, ordinal): @@ -481,6 +481,7 @@ else: w_errors = space.w_None return space.call_method(w_s, 'decode', space.wrap(encoding), w_errors) + globals()['PyUnicode_Decode%s' % suffix] = PyUnicode_DecodeXXX @cpython_api([CONST_WSTRING, Py_ssize_t, CONST_STRING], PyObject) @func_renamer('PyUnicode_Encode%s' % suffix) @@ -494,6 +495,7 @@ else: w_errors = space.w_None return space.call_method(w_u, 'encode', space.wrap(encoding), w_errors) + globals()['PyUnicode_Encode%s' % suffix] = PyUnicode_EncodeXXX make_conversion_functions('UTF8', 'utf-8') make_conversion_functions('ASCII', 'ascii') From noreply at buildbot.pypy.org Wed May 2 01:04:10 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:04:10 +0200 (CEST) Subject: [pypy-commit] pypy py3k: cpyext: copy the 3.2 version of arraymodule.c Message-ID: <20120501230410.E4BE182F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r54864:475eb0b21db9 Date: 2012-05-01 18:51 +0200 http://bitbucket.org/pypy/pypy/changeset/475eb0b21db9/ Log: cpyext: copy the 3.2 version of arraymodule.c diff --git a/pypy/module/cpyext/test/array.c b/pypy/module/cpyext/test/array.c --- a/pypy/module/cpyext/test/array.c +++ b/pypy/module/cpyext/test/array.c @@ -22,10 +22,13 @@ * functions aren't visible yet. 
*/ struct arraydescr { - int typecode; + Py_UNICODE typecode; int itemsize; PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); int (*setitem)(struct arrayobject *, Py_ssize_t, PyObject *); + char *formats; + int is_integer_type; + int is_signed; }; typedef struct arrayobject { @@ -34,6 +37,7 @@ Py_ssize_t allocated; struct arraydescr *ob_descr; PyObject *weakreflist; /* List of weak references */ + int ob_exports; /* Number of exported buffers */ } arrayobject; static PyTypeObject Arraytype; @@ -47,9 +51,15 @@ char *items; size_t _new_size; + if (self->ob_exports > 0 && newsize != Py_SIZE(self)) { + PyErr_SetString(PyExc_BufferError, + "cannot resize an array that is exporting buffers"); + return -1; + } + /* Bypass realloc() when a previous overallocation is large enough to accommodate the newsize. If the newsize is 16 smaller than the - current size, then proceed with the realloc() to shrink the list. + current size, then proceed with the realloc() to shrink the array. */ if (self->allocated >= newsize && @@ -59,6 +69,14 @@ return 0; } + if (newsize == 0) { + PyMem_FREE(self->ob_item); + self->ob_item = NULL; + Py_SIZE(self) = 0; + self->allocated = 0; + return 0; + } + /* This over-allocates proportional to the array size, making room * for additional growth. 
The over-allocation is mild, but is * enough to give linear-time amortized behavior over a long @@ -102,29 +120,12 @@ ****************************************************************************/ static PyObject * -c_getitem(arrayobject *ap, Py_ssize_t i) -{ - return PyString_FromStringAndSize(&((char *)ap->ob_item)[i], 1); -} - -static int -c_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) -{ - char x; - if (!PyArg_Parse(v, "c;array item must be char", &x)) - return -1; - if (i >= 0) - ((char *)ap->ob_item)[i] = x; - return 0; -} - -static PyObject * b_getitem(arrayobject *ap, Py_ssize_t i) { long x = ((char *)ap->ob_item)[i]; if (x >= 128) x -= 256; - return PyInt_FromLong(x); + return PyLong_FromLong(x); } static int @@ -155,7 +156,7 @@ BB_getitem(arrayobject *ap, Py_ssize_t i) { long x = ((unsigned char *)ap->ob_item)[i]; - return PyInt_FromLong(x); + return PyLong_FromLong(x); } static int @@ -170,7 +171,6 @@ return 0; } -#ifdef Py_USING_UNICODE static PyObject * u_getitem(arrayobject *ap, Py_ssize_t i) { @@ -194,14 +194,15 @@ ((Py_UNICODE *)ap->ob_item)[i] = p[0]; return 0; } -#endif + static PyObject * h_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((short *)ap->ob_item)[i]); + return PyLong_FromLong((long) ((short *)ap->ob_item)[i]); } + static int h_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { @@ -217,7 +218,7 @@ static PyObject * HH_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((unsigned short *)ap->ob_item)[i]); + return PyLong_FromLong((long) ((unsigned short *)ap->ob_item)[i]); } static int @@ -246,7 +247,7 @@ static PyObject * i_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((int *)ap->ob_item)[i]); + return PyLong_FromLong((long) ((int *)ap->ob_item)[i]); } static int @@ -303,7 +304,7 @@ static PyObject * l_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong(((long *)ap->ob_item)[i]); + return PyLong_FromLong(((long *)ap->ob_item)[i]); } static 
int @@ -389,23 +390,25 @@ return 0; } -/* Description of types */ + +/* Description of types. + * + * Don't forget to update typecode_to_mformat_code() if you add a new + * typecode. + */ static struct arraydescr descriptors[] = { - {'c', sizeof(char), c_getitem, c_setitem}, - {'b', sizeof(char), b_getitem, b_setitem}, - {'B', sizeof(char), BB_getitem, BB_setitem}, -#ifdef Py_USING_UNICODE - {'u', sizeof(Py_UNICODE), u_getitem, u_setitem}, -#endif - {'h', sizeof(short), h_getitem, h_setitem}, - {'H', sizeof(short), HH_getitem, HH_setitem}, - {'i', sizeof(int), i_getitem, i_setitem}, - {'I', sizeof(int), II_getitem, II_setitem}, - {'l', sizeof(long), l_getitem, l_setitem}, - {'L', sizeof(long), LL_getitem, LL_setitem}, - {'f', sizeof(float), f_getitem, f_setitem}, - {'d', sizeof(double), d_getitem, d_setitem}, - {'\0', 0, 0, 0} /* Sentinel */ + {'b', 1, b_getitem, b_setitem, "b", 1, 1}, + {'B', 1, BB_getitem, BB_setitem, "B", 1, 0}, + {'u', sizeof(Py_UNICODE), u_getitem, u_setitem, "u", 0, 0}, + {'h', sizeof(short), h_getitem, h_setitem, "h", 1, 1}, + {'H', sizeof(short), HH_getitem, HH_setitem, "H", 1, 0}, + {'i', sizeof(int), i_getitem, i_setitem, "i", 1, 1}, + {'I', sizeof(int), II_getitem, II_setitem, "I", 1, 0}, + {'l', sizeof(long), l_getitem, l_setitem, "l", 1, 1}, + {'L', sizeof(long), LL_getitem, LL_setitem, "L", 1, 0}, + {'f', sizeof(float), f_getitem, f_setitem, "f", 0, 0}, + {'d', sizeof(double), d_getitem, d_setitem, "d", 0, 0}, + {'\0', 0, 0, 0, 0, 0, 0} /* Sentinel */ }; /**************************************************************************** @@ -446,6 +449,7 @@ return PyErr_NoMemory(); } } + op->ob_exports = 0; return (PyObject *) op; } @@ -670,11 +674,9 @@ static PyObject * array_repeat(arrayobject *a, Py_ssize_t n) { - Py_ssize_t i; Py_ssize_t size; arrayobject *np; - char *p; - Py_ssize_t nbytes; + Py_ssize_t oldbytes, newbytes; if (n < 0) n = 0; if ((Py_SIZE(a) != 0) && (n > PY_SSIZE_T_MAX / Py_SIZE(a))) { @@ -684,13 +686,23 @@ np = 
(arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); if (np == NULL) return NULL; - p = np->ob_item; - nbytes = Py_SIZE(a) * a->ob_descr->itemsize; - for (i = 0; i < n; i++) { - memcpy(p, a->ob_item, nbytes); - p += nbytes; + if (n == 0) + return (PyObject *)np; + oldbytes = Py_SIZE(a) * a->ob_descr->itemsize; + newbytes = oldbytes * n; + /* this follows the code in unicode_repeat */ + if (oldbytes == 1) { + memset(np->ob_item, a->ob_item[0], newbytes); + } else { + Py_ssize_t done = oldbytes; + Py_MEMCPY(np->ob_item, a->ob_item, oldbytes); + while (done < newbytes) { + Py_ssize_t ncopy = (done <= newbytes-done) ? done : newbytes-done; + Py_MEMCPY(np->ob_item+done, np->ob_item, ncopy); + done += ncopy; + } } - return (PyObject *) np; + return (PyObject *)np; } static int @@ -737,29 +749,27 @@ ihigh = Py_SIZE(a); item = a->ob_item; d = n - (ihigh-ilow); + /* Issue #4509: If the array has exported buffers and the slice + assignment would change the size of the array, fail early to make + sure we don't modify it. 
*/ + if (d != 0 && a->ob_exports > 0) { + PyErr_SetString(PyExc_BufferError, + "cannot resize an array that is exporting buffers"); + return -1; + } if (d < 0) { /* Delete -d items */ memmove(item + (ihigh+d)*a->ob_descr->itemsize, item + ihigh*a->ob_descr->itemsize, (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); - Py_SIZE(a) += d; - PyMem_RESIZE(item, char, Py_SIZE(a)*a->ob_descr->itemsize); - /* Can't fail */ - a->ob_item = item; - a->allocated = Py_SIZE(a); + if (array_resize(a, Py_SIZE(a) + d) == -1) + return -1; } else if (d > 0) { /* Insert d items */ - PyMem_RESIZE(item, char, - (Py_SIZE(a) + d)*a->ob_descr->itemsize); - if (item == NULL) { - PyErr_NoMemory(); + if (array_resize(a, Py_SIZE(a) + d)) return -1; - } memmove(item + (ihigh+d)*a->ob_descr->itemsize, item + ihigh*a->ob_descr->itemsize, (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); - a->ob_item = item; - Py_SIZE(a) += d; - a->allocated = Py_SIZE(a); } if (n > 0) memcpy(item + ilow*a->ob_descr->itemsize, b->ob_item, @@ -814,8 +824,7 @@ static int array_do_extend(arrayobject *self, PyObject *bb) { - Py_ssize_t size; - char *old_item; + Py_ssize_t size, oldsize, bbsize; if (!array_Check(bb)) return array_iter_extend(self, bb); @@ -830,18 +839,14 @@ PyErr_NoMemory(); return -1; } - size = Py_SIZE(self) + Py_SIZE(b); - old_item = self->ob_item; - PyMem_RESIZE(self->ob_item, char, size*self->ob_descr->itemsize); - if (self->ob_item == NULL) { - self->ob_item = old_item; - PyErr_NoMemory(); + oldsize = Py_SIZE(self); + /* Get the size of bb before resizing the array since bb could be self. 
*/ + bbsize = Py_SIZE(bb); + size = oldsize + Py_SIZE(b); + if (array_resize(self, size) == -1) return -1; - } - memcpy(self->ob_item + Py_SIZE(self)*self->ob_descr->itemsize, - b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); - Py_SIZE(self) = size; - self->allocated = size; + memcpy(self->ob_item + oldsize * self->ob_descr->itemsize, + b->ob_item, bbsize * b->ob_descr->itemsize); return 0; #undef b @@ -877,27 +882,15 @@ return PyErr_NoMemory(); } size = Py_SIZE(self) * self->ob_descr->itemsize; - if (n == 0) { - PyMem_FREE(items); - self->ob_item = NULL; - Py_SIZE(self) = 0; - self->allocated = 0; + if (n > 0 && size > PY_SSIZE_T_MAX / n) { + return PyErr_NoMemory(); } - else { - if (size > PY_SSIZE_T_MAX / n) { - return PyErr_NoMemory(); - } - PyMem_RESIZE(items, char, n * size); - if (items == NULL) - return PyErr_NoMemory(); - p = items; - for (i = 1; i < n; i++) { - p += size; - memcpy(p, items, size); - } - self->ob_item = items; - Py_SIZE(self) *= n; - self->allocated = Py_SIZE(self); + if (array_resize(self, n * Py_SIZE(self)) == -1) + return NULL; + items = p = self->ob_item; + for (i = 1; i < n; i++) { + p += size; + memcpy(p, items, size); } } Py_INCREF(self); @@ -929,7 +922,7 @@ else if (cmp < 0) return NULL; } - return PyInt_FromSsize_t(count); + return PyLong_FromSsize_t(count); } PyDoc_STRVAR(count_doc, @@ -947,7 +940,7 @@ int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); Py_DECREF(selfi); if (cmp > 0) { - return PyInt_FromLong((long)i); + return PyLong_FromLong((long)i); } else if (cmp < 0) return NULL; @@ -1073,7 +1066,7 @@ return NULL; PyTuple_SET_ITEM(retval, 0, PyLong_FromVoidPtr(self->ob_item)); - PyTuple_SET_ITEM(retval, 1, PyInt_FromLong((long)(Py_SIZE(self)))); + PyTuple_SET_ITEM(retval, 1, PyLong_FromLong((long)(Py_SIZE(self)))); return retval; } @@ -1188,96 +1181,97 @@ \n\ Reverse the order of the items in the array."); + +/* Forward */ +static PyObject *array_frombytes(arrayobject *self, PyObject *args); + static PyObject * 
array_fromfile(arrayobject *self, PyObject *args) { - PyObject *f; - Py_ssize_t n; - FILE *fp; + PyObject *f, *b, *res; + Py_ssize_t itemsize = self->ob_descr->itemsize; + Py_ssize_t n, nbytes; + int not_enough_bytes; + if (!PyArg_ParseTuple(args, "On:fromfile", &f, &n)) return NULL; - fp = PyFile_AsFile(f); - if (fp == NULL) { - PyErr_SetString(PyExc_TypeError, "arg1 must be open file"); + + nbytes = n * itemsize; + if (nbytes < 0 || nbytes/itemsize != n) { + PyErr_NoMemory(); return NULL; } - if (n > 0) { - char *item = self->ob_item; - Py_ssize_t itemsize = self->ob_descr->itemsize; - size_t nread; - Py_ssize_t newlength; - size_t newbytes; - /* Be careful here about overflow */ - if ((newlength = Py_SIZE(self) + n) <= 0 || - (newbytes = newlength * itemsize) / itemsize != - (size_t)newlength) - goto nomem; - PyMem_RESIZE(item, char, newbytes); - if (item == NULL) { - nomem: - PyErr_NoMemory(); - return NULL; - } - self->ob_item = item; - Py_SIZE(self) += n; - self->allocated = Py_SIZE(self); - nread = fread(item + (Py_SIZE(self) - n) * itemsize, - itemsize, n, fp); - if (nread < (size_t)n) { - Py_SIZE(self) -= (n - nread); - PyMem_RESIZE(item, char, Py_SIZE(self)*itemsize); - self->ob_item = item; - self->allocated = Py_SIZE(self); - if (ferror(fp)) { - PyErr_SetFromErrno(PyExc_IOError); - clearerr(fp); - } - else { - PyErr_SetString(PyExc_EOFError, - "not enough items in file"); - } - return NULL; - } + + b = PyObject_CallMethod(f, "read", "n", nbytes); + if (b == NULL) + return NULL; + + if (!PyBytes_Check(b)) { + PyErr_SetString(PyExc_TypeError, + "read() didn't return bytes"); + Py_DECREF(b); + return NULL; } - Py_INCREF(Py_None); - return Py_None; + + not_enough_bytes = (PyBytes_GET_SIZE(b) != nbytes); + + args = Py_BuildValue("(O)", b); + Py_DECREF(b); + if (args == NULL) + return NULL; + + res = array_frombytes(self, args); + Py_DECREF(args); + if (res == NULL) + return NULL; + + if (not_enough_bytes) { + PyErr_SetString(PyExc_EOFError, + "read() didn't 
return enough bytes"); + Py_DECREF(res); + return NULL; + } + + return res; } PyDoc_STRVAR(fromfile_doc, "fromfile(f, n)\n\ \n\ Read n objects from the file object f and append them to the end of the\n\ -array. Also called as read."); - - -static PyObject * -array_fromfile_as_read(arrayobject *self, PyObject *args) -{ - if (PyErr_WarnPy3k("array.read() not supported in 3.x; " - "use array.fromfile()", 1) < 0) - return NULL; - return array_fromfile(self, args); -} +array."); static PyObject * array_tofile(arrayobject *self, PyObject *f) { - FILE *fp; + Py_ssize_t nbytes = Py_SIZE(self) * self->ob_descr->itemsize; + /* Write 64K blocks at a time */ + /* XXX Make the block size settable */ + int BLOCKSIZE = 64*1024; + Py_ssize_t nblocks = (nbytes + BLOCKSIZE - 1) / BLOCKSIZE; + Py_ssize_t i; - fp = PyFile_AsFile(f); - if (fp == NULL) { - PyErr_SetString(PyExc_TypeError, "arg must be open file"); - return NULL; + if (Py_SIZE(self) == 0) + goto done; + + for (i = 0; i < nblocks; i++) { + char* ptr = self->ob_item + i*BLOCKSIZE; + Py_ssize_t size = BLOCKSIZE; + PyObject *bytes, *res; + if (i*BLOCKSIZE + size > nbytes) + size = nbytes - i*BLOCKSIZE; + bytes = PyBytes_FromStringAndSize(ptr, size); + if (bytes == NULL) + return NULL; + res = PyObject_CallMethod(f, "write", "O", bytes); + Py_DECREF(bytes); + if (res == NULL) + return NULL; + Py_DECREF(res); /* drop write result */ } - if (self->ob_size > 0) { - if (fwrite(self->ob_item, self->ob_descr->itemsize, - self->ob_size, fp) != (size_t)self->ob_size) { - PyErr_SetFromErrno(PyExc_IOError); - clearerr(fp); - return NULL; - } - } + + done: Py_INCREF(Py_None); return Py_None; } @@ -1285,25 +1279,13 @@ PyDoc_STRVAR(tofile_doc, "tofile(f)\n\ \n\ -Write all items (as machine values) to the file object f. 
Also called as\n\ -write."); - - -static PyObject * -array_tofile_as_write(arrayobject *self, PyObject *f) -{ - if (PyErr_WarnPy3k("array.write() not supported in 3.x; " - "use array.tofile()", 1) < 0) - return NULL; - return array_tofile(self, f); -} +Write all items (as machine values) to the file object f."); static PyObject * array_fromlist(arrayobject *self, PyObject *list) { Py_ssize_t n; - Py_ssize_t itemsize = self->ob_descr->itemsize; if (!PyList_Check(list)) { PyErr_SetString(PyExc_TypeError, "arg must be list"); @@ -1311,28 +1293,15 @@ } n = PyList_Size(list); if (n > 0) { - char *item = self->ob_item; - Py_ssize_t i; - PyMem_RESIZE(item, char, (Py_SIZE(self) + n) * itemsize); - if (item == NULL) { - PyErr_NoMemory(); + Py_ssize_t i, old_size; + old_size = Py_SIZE(self); + if (array_resize(self, old_size + n) == -1) return NULL; - } - self->ob_item = item; - Py_SIZE(self) += n; - self->allocated = Py_SIZE(self); for (i = 0; i < n; i++) { PyObject *v = PyList_GetItem(list, i); if ((*self->ob_descr->setitem)(self, Py_SIZE(self) - n + i, v) != 0) { - Py_SIZE(self) -= n; - if (itemsize && (self->ob_size > PY_SSIZE_T_MAX / itemsize)) { - return PyErr_NoMemory(); - } - PyMem_RESIZE(item, char, - Py_SIZE(self) * itemsize); - self->ob_item = item; - self->allocated = Py_SIZE(self); + array_resize(self, old_size); return NULL; } } @@ -1346,7 +1315,6 @@ \n\ Append items to array from list."); - static PyObject * array_tolist(arrayobject *self, PyObject *unused) { @@ -1371,97 +1339,139 @@ \n\ Convert array to an ordinary list with the same items."); - static PyObject * -array_fromstring(arrayobject *self, PyObject *args) +frombytes(arrayobject *self, Py_buffer *buffer) { - char *str; + int itemsize = self->ob_descr->itemsize; Py_ssize_t n; - int itemsize = self->ob_descr->itemsize; - if (!PyArg_ParseTuple(args, "s#:fromstring", &str, &n)) + if (buffer->itemsize != 1) { + PyBuffer_Release(buffer); + PyErr_SetString(PyExc_TypeError, "string/buffer of bytes 
required."); return NULL; + } + n = buffer->len; if (n % itemsize != 0) { + PyBuffer_Release(buffer); PyErr_SetString(PyExc_ValueError, "string length not a multiple of item size"); return NULL; } n = n / itemsize; if (n > 0) { - char *item = self->ob_item; - if ((n > PY_SSIZE_T_MAX - Py_SIZE(self)) || - ((Py_SIZE(self) + n) > PY_SSIZE_T_MAX / itemsize)) { + Py_ssize_t old_size = Py_SIZE(self); + if ((n > PY_SSIZE_T_MAX - old_size) || + ((old_size + n) > PY_SSIZE_T_MAX / itemsize)) { + PyBuffer_Release(buffer); return PyErr_NoMemory(); } - PyMem_RESIZE(item, char, (Py_SIZE(self) + n) * itemsize); - if (item == NULL) { - PyErr_NoMemory(); + if (array_resize(self, old_size + n) == -1) { + PyBuffer_Release(buffer); return NULL; } - self->ob_item = item; - Py_SIZE(self) += n; - self->allocated = Py_SIZE(self); - memcpy(item + (Py_SIZE(self) - n) * itemsize, - str, itemsize*n); + memcpy(self->ob_item + old_size * itemsize, + buffer->buf, n * itemsize); } + PyBuffer_Release(buffer); Py_INCREF(Py_None); return Py_None; } +static PyObject * +array_fromstring(arrayobject *self, PyObject *args) +{ + Py_buffer buffer; + if (PyErr_WarnEx(PyExc_DeprecationWarning, + "fromstring() is deprecated. Use frombytes() instead.", 2) != 0) + return NULL; + if (!PyArg_ParseTuple(args, "s*:fromstring", &buffer)) + return NULL; + else + return frombytes(self, &buffer); +} + PyDoc_STRVAR(fromstring_doc, "fromstring(string)\n\ \n\ Appends items from the string, interpreting it as an array of machine\n\ -values,as if it had been read from a file using the fromfile() method)."); +values, as if it had been read from a file using the fromfile() method).\n\ +\n\ +This method is deprecated. 
Use frombytes instead."); static PyObject * -array_tostring(arrayobject *self, PyObject *unused) +array_frombytes(arrayobject *self, PyObject *args) { - if (self->ob_size <= PY_SSIZE_T_MAX / self->ob_descr->itemsize) { - return PyString_FromStringAndSize(self->ob_item, + Py_buffer buffer; + if (!PyArg_ParseTuple(args, "y*:frombytes", &buffer)) + return NULL; + else + return frombytes(self, &buffer); +} + +PyDoc_STRVAR(frombytes_doc, +"frombytes(bytestring)\n\ +\n\ +Appends items from the string, interpreting it as an array of machine\n\ +values, as if it had been read from a file using the fromfile() method)."); + + +static PyObject * +array_tobytes(arrayobject *self, PyObject *unused) +{ + if (Py_SIZE(self) <= PY_SSIZE_T_MAX / self->ob_descr->itemsize) { + return PyBytes_FromStringAndSize(self->ob_item, Py_SIZE(self) * self->ob_descr->itemsize); } else { return PyErr_NoMemory(); } } -PyDoc_STRVAR(tostring_doc, -"tostring() -> string\n\ +PyDoc_STRVAR(tobytes_doc, +"tobytes() -> bytes\n\ \n\ -Convert the array to an array of machine values and return the string\n\ +Convert the array to an array of machine values and return the bytes\n\ representation."); +static PyObject * +array_tostring(arrayobject *self, PyObject *unused) +{ + if (PyErr_WarnEx(PyExc_DeprecationWarning, + "tostring() is deprecated. Use tobytes() instead.", 2) != 0) + return NULL; + return array_tobytes(self, unused); +} -#ifdef Py_USING_UNICODE +PyDoc_STRVAR(tostring_doc, +"tostring() -> bytes\n\ +\n\ +Convert the array to an array of machine values and return the bytes\n\ +representation.\n\ +\n\ +This method is deprecated. 
Use tobytes instead."); + + static PyObject * array_fromunicode(arrayobject *self, PyObject *args) { Py_UNICODE *ustr; Py_ssize_t n; + Py_UNICODE typecode; if (!PyArg_ParseTuple(args, "u#:fromunicode", &ustr, &n)) return NULL; - if (self->ob_descr->typecode != 'u') { + typecode = self->ob_descr->typecode; + if ((typecode != 'u')) { PyErr_SetString(PyExc_ValueError, "fromunicode() may only be called on " - "type 'u' arrays"); + "unicode type arrays"); return NULL; } if (n > 0) { - Py_UNICODE *item = (Py_UNICODE *) self->ob_item; - if (Py_SIZE(self) > PY_SSIZE_T_MAX - n) { - return PyErr_NoMemory(); - } - PyMem_RESIZE(item, Py_UNICODE, Py_SIZE(self) + n); - if (item == NULL) { - PyErr_NoMemory(); + Py_ssize_t old_size = Py_SIZE(self); + if (array_resize(self, old_size + n) == -1) return NULL; - } - self->ob_item = (char *) item; - Py_SIZE(self) += n; - self->allocated = Py_SIZE(self); - memcpy(item + Py_SIZE(self) - n, + memcpy(self->ob_item + old_size * sizeof(Py_UNICODE), ustr, n * sizeof(Py_UNICODE)); } @@ -1473,17 +1483,19 @@ "fromunicode(ustr)\n\ \n\ Extends this array with data from the unicode string ustr.\n\ -The array must be a type 'u' array; otherwise a ValueError\n\ -is raised. Use array.fromstring(ustr.decode(...)) to\n\ +The array must be a unicode type array; otherwise a ValueError\n\ +is raised. Use array.frombytes(ustr.decode(...)) to\n\ append Unicode data to an array of some other type."); static PyObject * array_tounicode(arrayobject *self, PyObject *unused) { - if (self->ob_descr->typecode != 'u') { + Py_UNICODE typecode; + typecode = self->ob_descr->typecode; + if ((typecode != 'u')) { PyErr_SetString(PyExc_ValueError, - "tounicode() may only be called on type 'u' arrays"); + "tounicode() may only be called on unicode type arrays"); return NULL; } return PyUnicode_FromUnicode((Py_UNICODE *) self->ob_item, Py_SIZE(self)); @@ -1493,16 +1505,458 @@ "tounicode() -> unicode\n\ \n\ Convert the array to a unicode string. 
The array must be\n\ -a type 'u' array; otherwise a ValueError is raised. Use\n\ +a unicode type array; otherwise a ValueError is raised. Use\n\ array.tostring().decode() to obtain a unicode string from\n\ an array of some other type."); -#endif /* Py_USING_UNICODE */ + + +/*********************** Pickling support ************************/ + +enum machine_format_code { + UNKNOWN_FORMAT = -1, + /* UNKNOWN_FORMAT is used to indicate that the machine format for an + * array type code cannot be interpreted. When this occurs, a list of + * Python objects is used to represent the content of the array + * instead of using the memory content of the array directly. In that + * case, the array_reconstructor mechanism is bypassed completely, and + * the standard array constructor is used instead. + * + * This will most likely occur when the machine doesn't use IEEE + * floating-point numbers. + */ + + UNSIGNED_INT8 = 0, + SIGNED_INT8 = 1, + UNSIGNED_INT16_LE = 2, + UNSIGNED_INT16_BE = 3, + SIGNED_INT16_LE = 4, + SIGNED_INT16_BE = 5, + UNSIGNED_INT32_LE = 6, + UNSIGNED_INT32_BE = 7, + SIGNED_INT32_LE = 8, + SIGNED_INT32_BE = 9, + UNSIGNED_INT64_LE = 10, + UNSIGNED_INT64_BE = 11, + SIGNED_INT64_LE = 12, + SIGNED_INT64_BE = 13, + IEEE_754_FLOAT_LE = 14, + IEEE_754_FLOAT_BE = 15, + IEEE_754_DOUBLE_LE = 16, + IEEE_754_DOUBLE_BE = 17, + UTF16_LE = 18, + UTF16_BE = 19, + UTF32_LE = 20, + UTF32_BE = 21 +}; +#define MACHINE_FORMAT_CODE_MIN 0 +#define MACHINE_FORMAT_CODE_MAX 21 + +static const struct mformatdescr { + size_t size; + int is_signed; + int is_big_endian; +} mformat_descriptors[] = { + {1, 0, 0}, /* 0: UNSIGNED_INT8 */ + {1, 1, 0}, /* 1: SIGNED_INT8 */ + {2, 0, 0}, /* 2: UNSIGNED_INT16_LE */ + {2, 0, 1}, /* 3: UNSIGNED_INT16_BE */ + {2, 1, 0}, /* 4: SIGNED_INT16_LE */ + {2, 1, 1}, /* 5: SIGNED_INT16_BE */ + {4, 0, 0}, /* 6: UNSIGNED_INT32_LE */ + {4, 0, 1}, /* 7: UNSIGNED_INT32_BE */ + {4, 1, 0}, /* 8: SIGNED_INT32_LE */ + {4, 1, 1}, /* 9: SIGNED_INT32_BE */ + {8, 0, 0}, 
/* 10: UNSIGNED_INT64_LE */ + {8, 0, 1}, /* 11: UNSIGNED_INT64_BE */ + {8, 1, 0}, /* 12: SIGNED_INT64_LE */ + {8, 1, 1}, /* 13: SIGNED_INT64_BE */ + {4, 0, 0}, /* 14: IEEE_754_FLOAT_LE */ + {4, 0, 1}, /* 15: IEEE_754_FLOAT_BE */ + {8, 0, 0}, /* 16: IEEE_754_DOUBLE_LE */ + {8, 0, 1}, /* 17: IEEE_754_DOUBLE_BE */ + {4, 0, 0}, /* 18: UTF16_LE */ + {4, 0, 1}, /* 19: UTF16_BE */ + {8, 0, 0}, /* 20: UTF32_LE */ + {8, 0, 1} /* 21: UTF32_BE */ +}; + + +/* + * Internal: This function is used to find the machine format of a given + * array type code. This returns UNKNOWN_FORMAT when the machine format cannot + * be found. + */ +static enum machine_format_code +typecode_to_mformat_code(int typecode) +{ +#ifdef WORDS_BIGENDIAN + const int is_big_endian = 1; +#else + const int is_big_endian = 0; +#endif + size_t intsize; + int is_signed; + + switch (typecode) { + case 'b': + return SIGNED_INT8; + case 'B': + return UNSIGNED_INT8; + + case 'u': + if (sizeof(Py_UNICODE) == 2) { + return UTF16_LE + is_big_endian; + } + if (sizeof(Py_UNICODE) == 4) { + return UTF32_LE + is_big_endian; + } + return UNKNOWN_FORMAT; + + case 'f': + if (sizeof(float) == 4) { + const float y = 16711938.0; + if (memcmp(&y, "\x4b\x7f\x01\x02", 4) == 0) + return IEEE_754_FLOAT_BE; + if (memcmp(&y, "\x02\x01\x7f\x4b", 4) == 0) + return IEEE_754_FLOAT_LE; + } + return UNKNOWN_FORMAT; + + case 'd': + if (sizeof(double) == 8) { + const double x = 9006104071832581.0; + if (memcmp(&x, "\x43\x3f\xff\x01\x02\x03\x04\x05", 8) == 0) + return IEEE_754_DOUBLE_BE; + if (memcmp(&x, "\x05\x04\x03\x02\x01\xff\x3f\x43", 8) == 0) + return IEEE_754_DOUBLE_LE; + } + return UNKNOWN_FORMAT; + + /* Integers */ + case 'h': + intsize = sizeof(short); + is_signed = 1; + break; + case 'H': + intsize = sizeof(short); + is_signed = 0; + break; + case 'i': + intsize = sizeof(int); + is_signed = 1; + break; + case 'I': + intsize = sizeof(int); + is_signed = 0; + break; + case 'l': + intsize = sizeof(long); + is_signed = 1; + break; + 
case 'L': + intsize = sizeof(long); + is_signed = 0; + break; + default: + return UNKNOWN_FORMAT; + } + switch (intsize) { + case 2: + return UNSIGNED_INT16_LE + is_big_endian + (2 * is_signed); + case 4: + return UNSIGNED_INT32_LE + is_big_endian + (2 * is_signed); + case 8: + return UNSIGNED_INT64_LE + is_big_endian + (2 * is_signed); + default: + return UNKNOWN_FORMAT; + } +} + +/* Forward declaration. */ +static PyObject *array_new(PyTypeObject *type, PyObject *args, PyObject *kwds); + +/* + * Internal: This function wraps the array constructor--i.e., array_new()--to + * allow the creation of array objects from C code without having to deal + * directly with the tuple argument of array_new(). The typecode argument is a + * Unicode character value, like 'i' or 'f' for example, representing an array + * type code. The items argument is a bytes or a list object which + contains the initial value of the array. + * + * On success, this function returns the array object created. Otherwise, + * NULL is returned to indicate a failure. + */ +static PyObject * +make_array(PyTypeObject *arraytype, Py_UNICODE typecode, PyObject *items) +{ + PyObject *new_args; + PyObject *array_obj; + PyObject *typecode_obj; + + assert(arraytype != NULL); + assert(items != NULL); + + typecode_obj = PyUnicode_FromUnicode(&typecode, 1); + if (typecode_obj == NULL) + return NULL; + + new_args = PyTuple_New(2); + if (new_args == NULL) + return NULL; + Py_INCREF(items); + PyTuple_SET_ITEM(new_args, 0, typecode_obj); + PyTuple_SET_ITEM(new_args, 1, items); + + array_obj = array_new(arraytype, new_args, NULL); + Py_DECREF(new_args); + if (array_obj == NULL) + return NULL; + + return array_obj; +} + +/* + * This function is a special constructor used when unpickling an array. It + * provides a portable way to rebuild an array from its memory representation. 
+ */ +static PyObject * +array_reconstructor(PyObject *self, PyObject *args) +{ + PyTypeObject *arraytype; + PyObject *items; + PyObject *converted_items; + PyObject *result; + int typecode_int; + Py_UNICODE typecode; + enum machine_format_code mformat_code; + struct arraydescr *descr; + + if (!PyArg_ParseTuple(args, "OCiO:array._array_reconstructor", + &arraytype, &typecode_int, &mformat_code, &items)) + return NULL; + + typecode = (Py_UNICODE)typecode_int; + + if (!PyType_Check(arraytype)) { + PyErr_Format(PyExc_TypeError, + "first argument must be a type object, not %.200s", + Py_TYPE(arraytype)->tp_name); + return NULL; + } + if (!PyType_IsSubtype(arraytype, &Arraytype)) { + PyErr_Format(PyExc_TypeError, + "%.200s is not a subtype of %.200s", + arraytype->tp_name, Arraytype.tp_name); + return NULL; + } + for (descr = descriptors; descr->typecode != '\0'; descr++) { + if (descr->typecode == typecode) + break; + } + if (descr->typecode == '\0') { + PyErr_SetString(PyExc_ValueError, + "second argument must be a valid type code"); + return NULL; + } + if (mformat_code < MACHINE_FORMAT_CODE_MIN || + mformat_code > MACHINE_FORMAT_CODE_MAX) { + PyErr_SetString(PyExc_ValueError, + "third argument must be a valid machine format code."); + return NULL; + } + if (!PyBytes_Check(items)) { + PyErr_Format(PyExc_TypeError, + "fourth argument should be bytes, not %.200s", + Py_TYPE(items)->tp_name); + return NULL; + } + + /* Fast path: No decoding has to be done. */ + if (mformat_code == typecode_to_mformat_code(typecode) || + mformat_code == UNKNOWN_FORMAT) { + return make_array(arraytype, typecode, items); + } + + /* Slow path: Decode the byte string according to the given machine + * format code. This occurs when the computer unpickling the array + * object is architecturally different from the one that pickled the + * array. 
+ */ + if (Py_SIZE(items) % mformat_descriptors[mformat_code].size != 0) { + PyErr_SetString(PyExc_ValueError, + "string length not a multiple of item size"); + return NULL; + } + switch (mformat_code) { + case IEEE_754_FLOAT_LE: + case IEEE_754_FLOAT_BE: { + int i; + int le = (mformat_code == IEEE_754_FLOAT_LE) ? 1 : 0; + Py_ssize_t itemcount = Py_SIZE(items) / 4; + const unsigned char *memstr = + (unsigned char *)PyBytes_AS_STRING(items); + + converted_items = PyList_New(itemcount); + if (converted_items == NULL) + return NULL; + for (i = 0; i < itemcount; i++) { + PyObject *pyfloat = PyFloat_FromDouble( + _PyFloat_Unpack4(&memstr[i * 4], le)); + if (pyfloat == NULL) { + Py_DECREF(converted_items); + return NULL; + } + PyList_SET_ITEM(converted_items, i, pyfloat); + } + break; + } + case IEEE_754_DOUBLE_LE: + case IEEE_754_DOUBLE_BE: { + int i; + int le = (mformat_code == IEEE_754_DOUBLE_LE) ? 1 : 0; + Py_ssize_t itemcount = Py_SIZE(items) / 8; + const unsigned char *memstr = + (unsigned char *)PyBytes_AS_STRING(items); + + converted_items = PyList_New(itemcount); + if (converted_items == NULL) + return NULL; + for (i = 0; i < itemcount; i++) { + PyObject *pyfloat = PyFloat_FromDouble( + _PyFloat_Unpack8(&memstr[i * 8], le)); + if (pyfloat == NULL) { + Py_DECREF(converted_items); + return NULL; + } + PyList_SET_ITEM(converted_items, i, pyfloat); + } + break; + } + case UTF16_LE: + case UTF16_BE: { + int byteorder = (mformat_code == UTF16_LE) ? -1 : 1; + converted_items = PyUnicode_DecodeUTF16( + PyBytes_AS_STRING(items), Py_SIZE(items), + "strict", &byteorder); + if (converted_items == NULL) + return NULL; + break; + } + case UTF32_LE: + case UTF32_BE: { + int byteorder = (mformat_code == UTF32_LE) ? 
-1 : 1; + converted_items = PyUnicode_DecodeUTF32( + PyBytes_AS_STRING(items), Py_SIZE(items), + "strict", &byteorder); + if (converted_items == NULL) + return NULL; + break; + } + + case UNSIGNED_INT8: + case SIGNED_INT8: + case UNSIGNED_INT16_LE: + case UNSIGNED_INT16_BE: + case SIGNED_INT16_LE: + case SIGNED_INT16_BE: + case UNSIGNED_INT32_LE: + case UNSIGNED_INT32_BE: + case SIGNED_INT32_LE: + case SIGNED_INT32_BE: + case UNSIGNED_INT64_LE: + case UNSIGNED_INT64_BE: + case SIGNED_INT64_LE: + case SIGNED_INT64_BE: { + int i; + const struct mformatdescr mf_descr = + mformat_descriptors[mformat_code]; + Py_ssize_t itemcount = Py_SIZE(items) / mf_descr.size; + const unsigned char *memstr = + (unsigned char *)PyBytes_AS_STRING(items); + struct arraydescr *descr; + + /* If possible, try to pack array's items using a data type + * that fits better. This may result in an array with narrower + * or wider elements. + * + * For example, if a 32-bit machine pickles an L-code array of + * unsigned longs, then the array will be unpickled by a 64-bit + * machine as an I-code array of unsigned ints. + * + * XXX: Is it possible to write a unit test for this? + */ + for (descr = descriptors; descr->typecode != '\0'; descr++) { + if (descr->is_integer_type && + descr->itemsize == mf_descr.size && + descr->is_signed == mf_descr.is_signed) + typecode = descr->typecode; + } + + converted_items = PyList_New(itemcount); + if (converted_items == NULL) + return NULL; + for (i = 0; i < itemcount; i++) { + PyObject *pylong; + + pylong = _PyLong_FromByteArray( + &memstr[i * mf_descr.size], + mf_descr.size, + !mf_descr.is_big_endian, + mf_descr.is_signed); + if (pylong == NULL) { + Py_DECREF(converted_items); + return NULL; + } + PyList_SET_ITEM(converted_items, i, pylong); + } + break; + } + case UNKNOWN_FORMAT: + /* Impossible, but needed to shut up GCC about the unhandled + * enumeration value. 
+ */ + default: + PyErr_BadArgument(); + return NULL; + } + + result = make_array(arraytype, typecode, converted_items); + Py_DECREF(converted_items); + return result; +} static PyObject * -array_reduce(arrayobject *array) +array_reduce_ex(arrayobject *array, PyObject *value) { - PyObject *dict, *result, *list; + PyObject *dict; + PyObject *result; + PyObject *array_str; + int typecode = array->ob_descr->typecode; + int mformat_code; + static PyObject *array_reconstructor = NULL; + long protocol; + + if (array_reconstructor == NULL) { + PyObject *array_module = PyImport_ImportModule("array"); + if (array_module == NULL) + return NULL; + array_reconstructor = PyObject_GetAttrString( + array_module, + "_array_reconstructor"); + Py_DECREF(array_module); + if (array_reconstructor == NULL) + return NULL; + } + + if (!PyLong_Check(value)) { + PyErr_SetString(PyExc_TypeError, + "__reduce_ex__ argument should be an integer"); + return NULL; + } + protocol = PyLong_AsLong(value); + if (protocol == -1 && PyErr_Occurred()) + return NULL; dict = PyObject_GetAttrString((PyObject *)array, "__dict__"); if (dict == NULL) { @@ -1512,20 +1966,41 @@ dict = Py_None; Py_INCREF(dict); } - /* Unlike in Python 3.x, we never use the more efficient memory - * representation of an array for pickling. This is unfortunately - * necessary to allow array objects to be unpickled by Python 3.x, - * since str objects from 2.x are always decoded to unicode in - * Python 3.x. - */ - list = array_tolist(array, NULL); - if (list == NULL) { + + mformat_code = typecode_to_mformat_code(typecode); + if (mformat_code == UNKNOWN_FORMAT || protocol < 3) { + /* Convert the array to a list if we got something weird + * (e.g., non-IEEE floats), or we are pickling the array using + * a Python 2.x compatible protocol. + * + * It is necessary to use a list representation for Python 2.x + * compatible pickle protocol, since Python 2's str objects + * are unpickled as unicode by Python 3. 
Thus it is impossible + * to make arrays unpicklable by Python 3 by using their memory + * representation, unless we resort to ugly hacks such as + * coercing unicode objects to bytes in array_reconstructor. + */ + PyObject *list; + list = array_tolist(array, NULL); + if (list == NULL) { + Py_DECREF(dict); + return NULL; + } + result = Py_BuildValue( + "O(CO)O", Py_TYPE(array), typecode, list, dict); + Py_DECREF(list); + Py_DECREF(dict); + return result; + } + + array_str = array_tobytes(array, NULL); + if (array_str == NULL) { Py_DECREF(dict); return NULL; } result = Py_BuildValue( - "O(cO)O", Py_TYPE(array), array->ob_descr->typecode, list, dict); - Py_DECREF(list); + "O(OCiN)O", array_reconstructor, Py_TYPE(array), typecode, + mformat_code, array_str, dict); Py_DECREF(dict); return result; } @@ -1535,14 +2010,14 @@ static PyObject * array_get_typecode(arrayobject *a, void *closure) { - char tc = a->ob_descr->typecode; - return PyString_FromStringAndSize(&tc, 1); + Py_UNICODE tc = a->ob_descr->typecode; + return PyUnicode_FromUnicode(&tc, 1); } static PyObject * array_get_itemsize(arrayobject *a, void *closure) { - return PyInt_FromLong((long)a->ob_descr->itemsize); + return PyLong_FromLong((long)a->ob_descr->itemsize); } static PyGetSetDef array_getsets [] = { @@ -1574,19 +2049,17 @@ fromlist_doc}, {"fromstring", (PyCFunction)array_fromstring, METH_VARARGS, fromstring_doc}, -#ifdef Py_USING_UNICODE + {"frombytes", (PyCFunction)array_frombytes, METH_VARARGS, + frombytes_doc}, {"fromunicode", (PyCFunction)array_fromunicode, METH_VARARGS, fromunicode_doc}, -#endif {"index", (PyCFunction)array_index, METH_O, index_doc}, {"insert", (PyCFunction)array_insert, METH_VARARGS, insert_doc}, {"pop", (PyCFunction)array_pop, METH_VARARGS, pop_doc}, - {"read", (PyCFunction)array_fromfile_as_read, METH_VARARGS, - fromfile_doc}, - {"__reduce__", (PyCFunction)array_reduce, METH_NOARGS, + {"__reduce_ex__", (PyCFunction)array_reduce_ex, METH_O, reduce_doc}, {"remove", 
(PyCFunction)array_remove, METH_O, remove_doc}, @@ -1600,44 +2073,32 @@ tolist_doc}, {"tostring", (PyCFunction)array_tostring, METH_NOARGS, tostring_doc}, -#ifdef Py_USING_UNICODE + {"tobytes", (PyCFunction)array_tobytes, METH_NOARGS, + tobytes_doc}, {"tounicode", (PyCFunction)array_tounicode, METH_NOARGS, tounicode_doc}, -#endif - {"write", (PyCFunction)array_tofile_as_write, METH_O, - tofile_doc}, {NULL, NULL} /* sentinel */ }; static PyObject * array_repr(arrayobject *a) { - char buf[256], typecode; - PyObject *s, *t, *v = NULL; + Py_UNICODE typecode; + PyObject *s, *v = NULL; Py_ssize_t len; len = Py_SIZE(a); typecode = a->ob_descr->typecode; if (len == 0) { - PyOS_snprintf(buf, sizeof(buf), "array('%c')", typecode); - return PyString_FromString(buf); + return PyUnicode_FromFormat("array('%c')", (int)typecode); } - - if (typecode == 'c') - v = array_tostring(a, NULL); -#ifdef Py_USING_UNICODE - else if (typecode == 'u') + if ((typecode == 'u')) v = array_tounicode(a, NULL); -#endif else v = array_tolist(a, NULL); - t = PyObject_Repr(v); - Py_XDECREF(v); - PyOS_snprintf(buf, sizeof(buf), "array('%c', ", typecode); - s = PyString_FromString(buf); - PyString_ConcatAndDel(&s, t); - PyString_ConcatAndDel(&s, PyString_FromString(")")); + s = PyUnicode_FromFormat("array('%c', %R)", (int)typecode, v); + Py_DECREF(v); return s; } @@ -1659,7 +2120,7 @@ arrayobject* ar; int itemsize = self->ob_descr->itemsize; - if (PySlice_GetIndicesEx((PySliceObject*)item, Py_SIZE(self), + if (PySlice_GetIndicesEx(item, Py_SIZE(self), &start, &stop, &step, &slicelength) < 0) { return NULL; } @@ -1730,7 +2191,7 @@ return (*self->ob_descr->setitem)(self, i, value); } else if (PySlice_Check(item)) { - if (PySlice_GetIndicesEx((PySliceObject *)item, + if (PySlice_GetIndicesEx(item, Py_SIZE(self), &start, &stop, &step, &slicelength) < 0) { return -1; @@ -1774,18 +2235,28 @@ if ((step > 0 && stop < start) || (step < 0 && stop > start)) stop = start; + + /* Issue #4509: If the array has 
exported buffers and the slice + assignment would change the size of the array, fail early to make + sure we don't modify it. */ + if ((needed == 0 || slicelength != needed) && self->ob_exports > 0) { + PyErr_SetString(PyExc_BufferError, + "cannot resize an array that is exporting buffers"); + return -1; + } + if (step == 1) { if (slicelength > needed) { memmove(self->ob_item + (start + needed) * itemsize, self->ob_item + stop * itemsize, (Py_SIZE(self) - stop) * itemsize); if (array_resize(self, Py_SIZE(self) + - needed - slicelength) < 0) + needed - slicelength) < 0) return -1; } else if (slicelength < needed) { if (array_resize(self, Py_SIZE(self) + - needed - slicelength) < 0) + needed - slicelength) < 0) return -1; memmove(self->ob_item + (start + needed) * itemsize, self->ob_item + stop * itemsize, @@ -1854,40 +2325,49 @@ static const void *emptybuf = ""; -static Py_ssize_t -array_buffer_getreadbuf(arrayobject *self, Py_ssize_t index, const void **ptr) + +static int +array_buffer_getbuf(arrayobject *self, Py_buffer *view, int flags) { - if ( index != 0 ) { - PyErr_SetString(PyExc_SystemError, - "Accessing non-existent array segment"); - return -1; + if (view==NULL) goto finish; + + view->buf = (void *)self->ob_item; + view->obj = (PyObject*)self; + Py_INCREF(self); + if (view->buf == NULL) + view->buf = (void *)emptybuf; + view->len = (Py_SIZE(self)) * self->ob_descr->itemsize; + view->readonly = 0; + view->ndim = 1; + view->itemsize = self->ob_descr->itemsize; + view->suboffsets = NULL; + view->shape = NULL; + if ((flags & PyBUF_ND)==PyBUF_ND) { + view->shape = &((Py_SIZE(self))); } - *ptr = (void *)self->ob_item; - if (*ptr == NULL) - *ptr = emptybuf; - return Py_SIZE(self)*self->ob_descr->itemsize; + view->strides = NULL; + if ((flags & PyBUF_STRIDES)==PyBUF_STRIDES) + view->strides = &(view->itemsize); + view->format = NULL; + view->internal = NULL; + if ((flags & PyBUF_FORMAT) == PyBUF_FORMAT) { + view->format = self->ob_descr->formats; +#ifdef 
Py_UNICODE_WIDE + if (self->ob_descr->typecode == 'u') { + view->format = "w"; + } +#endif + } + + finish: + self->ob_exports++; + return 0; } -static Py_ssize_t -array_buffer_getwritebuf(arrayobject *self, Py_ssize_t index, const void **ptr) +static void +array_buffer_relbuf(arrayobject *self, Py_buffer *view) { - if ( index != 0 ) { - PyErr_SetString(PyExc_SystemError, - "Accessing non-existent array segment"); - return -1; - } - *ptr = (void *)self->ob_item; - if (*ptr == NULL) - *ptr = emptybuf; - return Py_SIZE(self)*self->ob_descr->itemsize; -} - -static Py_ssize_t -array_buffer_getsegcount(arrayobject *self, Py_ssize_t *lenp) -{ - if ( lenp ) - *lenp = Py_SIZE(self)*self->ob_descr->itemsize; - return 1; + self->ob_exports--; } static PySequenceMethods array_as_sequence = { @@ -1895,37 +2375,39 @@ (binaryfunc)array_concat, /*sq_concat*/ (ssizeargfunc)array_repeat, /*sq_repeat*/ (ssizeargfunc)array_item, /*sq_item*/ - (ssizessizeargfunc)array_slice, /*sq_slice*/ + 0, /*sq_slice*/ (ssizeobjargproc)array_ass_item, /*sq_ass_item*/ - (ssizessizeobjargproc)array_ass_slice, /*sq_ass_slice*/ + 0, /*sq_ass_slice*/ (objobjproc)array_contains, /*sq_contains*/ (binaryfunc)array_inplace_concat, /*sq_inplace_concat*/ (ssizeargfunc)array_inplace_repeat /*sq_inplace_repeat*/ }; static PyBufferProcs array_as_buffer = { - (readbufferproc)array_buffer_getreadbuf, - (writebufferproc)array_buffer_getwritebuf, - (segcountproc)array_buffer_getsegcount, - NULL, + (getbufferproc)array_buffer_getbuf, + (releasebufferproc)array_buffer_relbuf }; static PyObject * array_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { - char c; + int c; PyObject *initial = NULL, *it = NULL; struct arraydescr *descr; if (type == &Arraytype && !_PyArg_NoKeywords("array.array()", kwds)) return NULL; - if (!PyArg_ParseTuple(args, "c|O:array", &c, &initial)) + if (!PyArg_ParseTuple(args, "C|O:array", &c, &initial)) return NULL; if (!(initial == NULL || PyList_Check(initial) - || 
PyString_Check(initial) || PyTuple_Check(initial) - || (c == 'u' && PyUnicode_Check(initial)))) { + || PyByteArray_Check(initial) + || PyBytes_Check(initial) + || PyTuple_Check(initial) + || ((c=='u') && PyUnicode_Check(initial)) + || (array_Check(initial) + && c == ((arrayobject*)initial)->ob_descr->typecode))) { it = PyObject_GetIter(initial); if (it == NULL) return NULL; @@ -1941,17 +2423,20 @@ PyObject *a; Py_ssize_t len; - if (initial == NULL || !(PyList_Check(initial) - || PyTuple_Check(initial))) + if (initial == NULL) len = 0; + else if (PyList_Check(initial)) + len = PyList_GET_SIZE(initial); + else if (PyTuple_Check(initial) || array_Check(initial)) + len = Py_SIZE(initial); else - len = PySequence_Size(initial); + len = 0; a = newarrayobject(type, len, descr); if (a == NULL) return NULL; - if (len > 0) { + if (len > 0 && !array_Check(initial)) { Py_ssize_t i; for (i = 0; i < len; i++) { PyObject *v = @@ -1967,14 +2452,16 @@ } Py_DECREF(v); } - } else if (initial != NULL && PyString_Check(initial)) { + } + else if (initial != NULL && (PyByteArray_Check(initial) || + PyBytes_Check(initial))) { PyObject *t_initial, *v; t_initial = PyTuple_Pack(1, initial); if (t_initial == NULL) { Py_DECREF(a); return NULL; } - v = array_fromstring((arrayobject *)a, + v = array_frombytes((arrayobject *)a, t_initial); Py_DECREF(t_initial); if (v == NULL) { @@ -1982,8 +2469,8 @@ return NULL; } Py_DECREF(v); -#ifdef Py_USING_UNICODE - } else if (initial != NULL && PyUnicode_Check(initial)) { + } + else if (initial != NULL && PyUnicode_Check(initial)) { Py_ssize_t n = PyUnicode_GET_DATA_SIZE(initial); if (n > 0) { arrayobject *self = (arrayobject *)a; @@ -1999,7 +2486,11 @@ memcpy(item, PyUnicode_AS_DATA(initial), n); self->allocated = Py_SIZE(self); } -#endif + } + else if (initial != NULL && array_Check(initial)) { + arrayobject *self = (arrayobject *)a; + arrayobject *other = (arrayobject *)initial; + memcpy(self->ob_item, other->ob_item, len * other->ob_descr->itemsize); } 
if (it != NULL) { if (array_iter_extend((arrayobject *)a, it) == -1) { @@ -2013,7 +2504,7 @@ } } PyErr_SetString(PyExc_ValueError, - "bad typecode (must be c, b, B, u, h, H, i, I, l, L, f or d)"); + "bad typecode (must be b, B, u, h, H, i, I, l, L, f or d)"); return NULL; } @@ -2027,10 +2518,9 @@ is a single character. The following type codes are defined:\n\ \n\ Type code C Type Minimum size in bytes \n\ - 'c' character 1 \n\ 'b' signed integer 1 \n\ 'B' unsigned integer 1 \n\ - 'u' Unicode character 2 \n\ + 'u' Unicode character 2 (see note) \n\ 'h' signed integer 2 \n\ 'H' unsigned integer 2 \n\ 'i' signed integer 2 \n\ @@ -2040,6 +2530,9 @@ 'f' floating point 4 \n\ 'd' floating point 8 \n\ \n\ +NOTE: The 'u' typecode corresponds to Python's unicode character. On \n\ +narrow builds this is 2 bytes; on wide builds it is 4 bytes.\n\ +\n\ The constructor is:\n\ \n\ array(typecode [, initializer]) -- create a new array\n\ @@ -2050,7 +2543,7 @@ \n\ Return a new array whose items are restricted by typecode, and\n\ initialized from the optional initializer value, which must be a list,\n\ -string or iterable over elements of the appropriate type.\n\ +string. 
or iterable over elements of the appropriate type.\n\ \n\ Arrays represent basic values and behave very much like lists, except\n\ the type of objects stored in them is constrained.\n\ @@ -2068,13 +2561,11 @@ index() -- return index of first occurrence of an object\n\ insert() -- insert a new item into the array at a provided position\n\ pop() -- remove and return item (default last)\n\ -read() -- DEPRECATED, use fromfile()\n\ remove() -- remove first occurrence of an object\n\ reverse() -- reverse the order of the items in the array\n\ tofile() -- write all items to a file object\n\ tolist() -- return the array converted to an ordinary list\n\ tostring() -- return the array converted to a string\n\ -write() -- DEPRECATED, use tofile()\n\ \n\ Attributes:\n\ \n\ @@ -2093,7 +2584,7 @@ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ - 0, /* tp_compare */ + 0, /* tp_reserved */ (reprfunc)array_repr, /* tp_repr */ 0, /* tp_as_number*/ &array_as_sequence, /* tp_as_sequence*/ @@ -2104,7 +2595,7 @@ PyObject_GenericGetAttr, /* tp_getattro */ 0, /* tp_setattro */ &array_as_buffer, /* tp_as_buffer*/ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */ arraytype_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ @@ -2196,7 +2687,7 @@ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ - 0, /* tp_compare */ + 0, /* tp_reserved */ 0, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ @@ -2223,6 +2714,8 @@ /* No functions in array module. */ static PyMethodDef a_methods[] = { + {"_array_reconstructor", array_reconstructor, METH_VARARGS, + PyDoc_STR("Internal. 
Used for pickling support.")}, {NULL, NULL, 0, NULL} /* Sentinel */ }; @@ -2231,9 +2724,14 @@ initarray(void) { PyObject *m; + PyObject *typecodes; + Py_ssize_t size = 0; + register Py_UNICODE *p; + struct arraydescr *descr; - Arraytype.ob_type = &PyType_Type; - PyArrayIter_Type.ob_type = &PyType_Type; + if (PyType_Ready(&Arraytype) < 0) + return; + Py_TYPE(&PyArrayIter_Type) = &PyType_Type; m = Py_InitModule3("array", a_methods, module_doc); if (m == NULL) return; @@ -2242,5 +2740,16 @@ PyModule_AddObject(m, "ArrayType", (PyObject *)&Arraytype); Py_INCREF((PyObject *)&Arraytype); PyModule_AddObject(m, "array", (PyObject *)&Arraytype); - /* No need to check the error here, the caller will do that */ + + for (descr=descriptors; descr->typecode != '\0'; descr++) { + size++; + } + + typecodes = PyUnicode_FromStringAndSize(NULL, size); + p = PyUnicode_AS_UNICODE(typecodes); + for (descr = descriptors; descr->typecode != '\0'; descr++) { + *p++ = (char)descr->typecode; + } + + PyModule_AddObject(m, "typecodes", (PyObject *)typecodes); } From noreply at buildbot.pypy.org Wed May 2 01:04:12 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:04:12 +0200 (CEST) Subject: [pypy-commit] pypy py3k: cpyext: Update _sre.c used in tests with a recent 3.2 version, Message-ID: <20120501230412.3826F82F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r54865:7235f7fa634e Date: 2012-05-01 19:17 +0200 http://bitbucket.org/pypy/pypy/changeset/7235f7fa634e/ Log: cpyext: Update _sre.c used in tests with a recent 3.2 version, and convert some print statements diff --git a/pypy/module/cpyext/test/_sre.c b/pypy/module/cpyext/test/_sre.c --- a/pypy/module/cpyext/test/_sre.c +++ b/pypy/module/cpyext/test/_sre.c @@ -58,12 +58,8 @@ /* defining this one enables tracing */ #undef VERBOSE -#if PY_VERSION_HEX >= 0x01060000 -#if PY_VERSION_HEX < 0x02020000 || defined(Py_USING_UNICODE) /* defining this enables unicode support (default under 
1.6a1 and later) */ #define HAVE_UNICODE -#endif -#endif /* -------------------------------------------------------------------- */ /* optional features */ @@ -71,9 +67,6 @@ /* enables fast searching */ #define USE_FAST_SEARCH -/* enables aggressive inlining (always on for Visual C) */ -#undef USE_INLINE - /* enables copy/deepcopy handling (work in progress) */ #undef USE_BUILTIN_COPY @@ -1681,41 +1674,44 @@ Py_ssize_t size, bytes; int charsize; void* ptr; - -#if defined(HAVE_UNICODE) + Py_buffer view; + + /* Unicode objects do not support the buffer API. So, get the data + directly instead. */ if (PyUnicode_Check(string)) { - /* unicode strings doesn't always support the buffer interface */ - ptr = (void*) PyUnicode_AS_DATA(string); - /* bytes = PyUnicode_GET_DATA_SIZE(string); */ - size = PyUnicode_GET_SIZE(string); - charsize = sizeof(Py_UNICODE); - - } else { -#endif + ptr = (void *)PyUnicode_AS_DATA(string); + *p_length = PyUnicode_GET_SIZE(string); + *p_charsize = sizeof(Py_UNICODE); + return ptr; + } /* get pointer to string buffer */ + view.len = -1; buffer = Py_TYPE(string)->tp_as_buffer; - if (!buffer || !buffer->bf_getreadbuffer || !buffer->bf_getsegcount || - buffer->bf_getsegcount(string, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, "expected string or buffer"); - return NULL; + if (!buffer || !buffer->bf_getbuffer || + (*buffer->bf_getbuffer)(string, &view, PyBUF_SIMPLE) < 0) { + PyErr_SetString(PyExc_TypeError, "expected string or buffer"); + return NULL; } /* determine buffer size */ - bytes = buffer->bf_getreadbuffer(string, 0, &ptr); + bytes = view.len; + ptr = view.buf; + + /* Release the buffer immediately --- possibly dangerous + but doing something else would require some re-factoring + */ + PyBuffer_Release(&view); + if (bytes < 0) { PyErr_SetString(PyExc_TypeError, "buffer has negative size"); return NULL; } /* determine character size */ -#if PY_VERSION_HEX >= 0x01060000 size = PyObject_Size(string); -#else - size = 
PyObject_Length(string); -#endif - - if (PyString_Check(string) || bytes == size) + + if (PyBytes_Check(string) || bytes == size) charsize = 1; #if defined(HAVE_UNICODE) else if (bytes == (Py_ssize_t) (size * sizeof(Py_UNICODE))) @@ -1726,13 +1722,13 @@ return NULL; } -#if defined(HAVE_UNICODE) - } -#endif - *p_length = size; *p_charsize = charsize; + if (ptr == NULL) { + PyErr_SetString(PyExc_ValueError, + "Buffer is NULL"); + } return ptr; } @@ -1755,6 +1751,17 @@ if (!ptr) return NULL; + if (charsize == 1 && pattern->charsize > 1) { + PyErr_SetString(PyExc_TypeError, + "can't use a string pattern on a bytes-like object"); + return NULL; + } + if (charsize > 1 && pattern->charsize == 1) { + PyErr_SetString(PyExc_TypeError, + "can't use a bytes pattern on a string-like object"); + return NULL; + } + /* adjust boundaries */ if (start < 0) start = 0; @@ -1949,7 +1956,7 @@ if (!args) return NULL; - name = PyString_FromString(module); + name = PyUnicode_FromString(module); if (!name) return NULL; mod = PyImport_Import(name); @@ -2607,15 +2614,15 @@ {NULL} /* Sentinel */ }; -statichere PyTypeObject Pattern_Type = { - PyObject_HEAD_INIT(NULL) - 0, "_" SRE_MODULE ".SRE_Pattern", +static PyTypeObject Pattern_Type = { + PyVarObject_HEAD_INIT(NULL, 0) + "_" SRE_MODULE ".SRE_Pattern", sizeof(PatternObject), sizeof(SRE_CODE), - (destructor)pattern_dealloc, /*tp_dealloc*/ - 0, /* tp_print */ - 0, /* tp_getattrn */ + (destructor)pattern_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ 0, /* tp_setattr */ - 0, /* tp_compare */ + 0, /* tp_reserved */ 0, /* tp_repr */ 0, /* tp_as_number */ 0, /* tp_as_sequence */ @@ -2626,7 +2633,7 @@ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT, /* tp_flags */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ pattern_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ @@ -2673,8 +2680,7 @@ for (i = 0; i < n; i++) { PyObject *o = PyList_GET_ITEM(code, i); - unsigned long value = PyInt_Check(o) ? 
(unsigned long)PyInt_AsLong(o) - : PyLong_AsUnsignedLong(o); + unsigned long value = PyLong_AsUnsignedLong(o); self->code[i] = (SRE_CODE) value; if ((unsigned long) self->code[i] != value) { PyErr_SetString(PyExc_OverflowError, @@ -2688,6 +2694,16 @@ return NULL; } + if (pattern == Py_None) + self->charsize = -1; + else { + Py_ssize_t p_length; + if (!getstring(pattern, &p_length, &self->charsize)) { + Py_DECREF(self); + return NULL; + } + } + Py_INCREF(pattern); self->pattern = pattern; @@ -2744,7 +2760,7 @@ #if defined(VVERBOSE) #define VTRACE(v) printf v #else -#define VTRACE(v) do {} while(0) /* do nothing */ +#define VTRACE(v) #endif /* Report failure */ @@ -2947,13 +2963,13 @@ <1=skip> <2=flags> <3=min> <4=max>; If SRE_INFO_PREFIX or SRE_INFO_CHARSET is in the flags, more follows. */ - SRE_CODE flags, i; + SRE_CODE flags, min, max, i; SRE_CODE *newcode; GET_SKIP; newcode = code+skip-1; GET_ARG; flags = arg; - GET_ARG; /* min */ - GET_ARG; /* max */ + GET_ARG; min = arg; + GET_ARG; max = arg; /* Check that only valid flags are present */ if ((flags & ~(SRE_INFO_PREFIX | SRE_INFO_LITERAL | @@ -2969,9 +2985,9 @@ FAIL; /* Validate the prefix */ if (flags & SRE_INFO_PREFIX) { - SRE_CODE prefix_len; + SRE_CODE prefix_len, prefix_skip; GET_ARG; prefix_len = arg; - GET_ARG; /* prefix skip */ + GET_ARG; prefix_skip = arg; /* Here comes the prefix string */ if (code+prefix_len < code || code+prefix_len > newcode) FAIL; @@ -3221,16 +3237,20 @@ { Py_ssize_t i; - if (PyInt_Check(index)) - return PyInt_AsSsize_t(index); + if (index == NULL) + /* Default value */ + return 0; + + if (PyLong_Check(index)) + return PyLong_AsSsize_t(index); i = -1; if (self->pattern->groupindex) { index = PyObject_GetItem(self->pattern->groupindex, index); if (index) { - if (PyInt_Check(index) || PyLong_Check(index)) - i = PyInt_AsSsize_t(index); + if (PyLong_Check(index)) + i = PyLong_AsSsize_t(index); Py_DECREF(index); } else PyErr_Clear(); @@ -3371,7 +3391,7 @@ { Py_ssize_t index; - 
PyObject* index_ = Py_False; /* zero */ + PyObject* index_ = NULL; if (!PyArg_UnpackTuple(args, "start", 0, 1, &index_)) return NULL; @@ -3394,7 +3414,7 @@ { Py_ssize_t index; - PyObject* index_ = Py_False; /* zero */ + PyObject* index_ = NULL; if (!PyArg_UnpackTuple(args, "end", 0, 1, &index_)) return NULL; @@ -3422,12 +3442,12 @@ if (!pair) return NULL; - item = PyInt_FromSsize_t(i1); + item = PyLong_FromSsize_t(i1); if (!item) goto error; PyTuple_SET_ITEM(pair, 0, item); - item = PyInt_FromSsize_t(i2); + item = PyLong_FromSsize_t(i2); if (!item) goto error; PyTuple_SET_ITEM(pair, 1, item); @@ -3444,7 +3464,7 @@ { Py_ssize_t index; - PyObject* index_ = Py_False; /* zero */ + PyObject* index_ = NULL; if (!PyArg_UnpackTuple(args, "span", 0, 1, &index_)) return NULL; @@ -3542,7 +3562,7 @@ #endif } -static struct PyMethodDef match_methods[] = { +static PyMethodDef match_methods[] = { {"group", (PyCFunction) match_group, METH_VARARGS}, {"start", (PyCFunction) match_start, METH_VARARGS}, {"end", (PyCFunction) match_end, METH_VARARGS}, @@ -3605,37 +3625,36 @@ {NULL} }; - /* FIXME: implement setattr("string", None) as a special case (to detach the associated string, if any */ static PyTypeObject Match_Type = { - PyVarObject_HEAD_INIT(NULL, 0) + PyVarObject_HEAD_INIT(NULL,0) "_" SRE_MODULE ".SRE_Match", sizeof(MatchObject), sizeof(Py_ssize_t), - (destructor)match_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - 0, /* tp_compare */ - 0, /* tp_repr */ - 0, /* tp_as_number */ - 0, /* tp_as_sequence */ - 0, /* tp_as_mapping */ - 0, /* tp_hash */ - 0, /* tp_call */ - 0, /* tp_str */ - 0, /* tp_getattro */ - 0, /* tp_setattro */ - 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT, - 0, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ + (destructor)match_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr 
*/ + 0, /* tp_reserved */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ match_methods, /* tp_methods */ match_members, /* tp_members */ match_getset, /* tp_getset */ @@ -3793,11 +3812,11 @@ {NULL} /* Sentinel */ }; -statichere PyTypeObject Scanner_Type = { - PyObject_HEAD_INIT(NULL) - 0, "_" SRE_MODULE ".SRE_Scanner", +static PyTypeObject Scanner_Type = { + PyVarObject_HEAD_INIT(NULL, 0) + "_" SRE_MODULE ".SRE_Scanner", sizeof(ScannerObject), 0, - (destructor)scanner_dealloc, /*tp_dealloc*/ + (destructor)scanner_dealloc,/* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ @@ -3863,11 +3882,7 @@ {NULL, NULL} }; -#if PY_VERSION_HEX < 0x02030000 -DL_EXPORT(void) init_sre(void) -#else PyMODINIT_FUNC init_sre(void) -#endif { PyObject* m; PyObject* d; @@ -3883,19 +3898,19 @@ return; d = PyModule_GetDict(m); - x = PyInt_FromLong(SRE_MAGIC); + x = PyLong_FromLong(SRE_MAGIC); if (x) { PyDict_SetItemString(d, "MAGIC", x); Py_DECREF(x); } - x = PyInt_FromLong(sizeof(SRE_CODE)); + x = PyLong_FromLong(sizeof(SRE_CODE)); if (x) { PyDict_SetItemString(d, "CODESIZE", x); Py_DECREF(x); } - x = PyString_FromString(copyright); + x = PyUnicode_FromString(copyright); if (x) { PyDict_SetItemString(d, "copyright", x); Py_DECREF(x); diff --git a/pypy/module/cpyext/test/test_typeobject.py b/pypy/module/cpyext/test/test_typeobject.py --- a/pypy/module/cpyext/test/test_typeobject.py +++ b/pypy/module/cpyext/test/test_typeobject.py @@ -14,12 +14,12 @@ assert 'foo' in sys.modules assert "copy" in dir(module.fooType) obj = module.new() - print obj.foo + print(obj.foo) assert obj.foo == 42 - print "Obj has 
type", type(obj) + print("Obj has type", type(obj)) assert type(obj) is module.fooType - print "type of obj has type", type(type(obj)) - print "type of type of obj has type", type(type(type(obj))) + print("type of obj has type", type(type(obj))) + print("type of type of obj has type", type(type(type(obj)))) assert module.fooType.__doc__ == "foo is for testing." def test_typeobject_method_descriptor(self): @@ -36,7 +36,7 @@ assert repr(module.fooType.__call__) == "" assert obj2(foo=1, bar=2) == dict(foo=1, bar=2) - print obj.foo + print(obj.foo) assert obj.foo == 42 assert obj.int_member == obj.foo From noreply at buildbot.pypy.org Wed May 2 01:04:13 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:04:13 +0200 (CEST) Subject: [pypy-commit] pypy py3k: cpyext: fixes in test_unicodeobject.py Message-ID: <20120501230413.8335582F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r54866:b600cc5d3490 Date: 2012-05-01 22:49 +0200 http://bitbucket.org/pypy/pypy/changeset/b600cc5d3490/ Log: cpyext: fixes in test_unicodeobject.py diff --git a/pypy/module/cpyext/test/test_unicodeobject.py b/pypy/module/cpyext/test/test_unicodeobject.py --- a/pypy/module/cpyext/test/test_unicodeobject.py +++ b/pypy/module/cpyext/test/test_unicodeobject.py @@ -24,7 +24,7 @@ if(PyUnicode_GetSize(s) == 11) { result = 1; } - if(s->ob_type->tp_basicsize != sizeof(void*)*4) + if(s->ob_type->tp_basicsize != sizeof(void*)*5) result = 0; Py_DECREF(s); return PyBool_FromLong(result); @@ -41,11 +41,11 @@ """ return PyBool_FromLong(PyUnicode_Check(PyTuple_GetItem(args, 0))); """)]) - assert module.get_hello1() == u'Hello world' + assert module.get_hello1() == 'Hello world' assert module.test_GetSize() raises(TypeError, module.test_GetSize_exception) - assert module.test_is_unicode(u"") + assert module.test_is_unicode("") assert not module.test_is_unicode(()) def test_unicode_buffer_init(self): @@ -72,7 +72,7 @@ ]) s = module.getunicode() assert 
len(s) == 4 - assert s == u'a�\x00c' + assert s == 'a\xe9\x00c' def test_format_v(self): module = self.import_extension('foo', [ @@ -123,18 +123,6 @@ space.sys.get('getdefaultencoding') ) assert encoding == space.unwrap(w_default_encoding) - invalid = rffi.str2charp('invalid') - utf_8 = rffi.str2charp('utf-8') - prev_encoding = rffi.str2charp(space.unwrap(w_default_encoding)) - assert api.PyUnicode_SetDefaultEncoding(invalid) == -1 - assert api.PyErr_Occurred() is space.w_LookupError - api.PyErr_Clear() - assert api.PyUnicode_SetDefaultEncoding(utf_8) == 0 - assert rffi.charp2str(api.PyUnicode_GetDefaultEncoding()) == 'utf-8' - assert api.PyUnicode_SetDefaultEncoding(prev_encoding) == 0 - rffi.free_charp(invalid) - rffi.free_charp(utf_8) - rffi.free_charp(prev_encoding) def test_AS(self, space, api): word = space.wrap(u'spam') @@ -146,7 +134,7 @@ assert array2[i] == char assert array3[i] == char self.raises(space, api, TypeError, api.PyUnicode_AsUnicode, - space.wrap('spam')) + space.wrapbytes('spam')) utf_8 = rffi.str2charp('utf-8') encoded = api.PyUnicode_AsEncodedString(space.wrap(u'sp�m'), @@ -158,7 +146,7 @@ self.raises(space, api, TypeError, api.PyUnicode_AsEncodedString, space.newtuple([1, 2, 3]), None, None) self.raises(space, api, TypeError, api.PyUnicode_AsEncodedString, - space.wrap(''), None, None) + space.wrapbytes(''), None, None) ascii = rffi.str2charp('ascii') replace = rffi.str2charp('replace') encoded = api.PyUnicode_AsEncodedString(space.wrap(u'sp�m'), @@ -297,7 +285,7 @@ w_u = space.wrap(u'a') assert api.PyUnicode_FromObject(w_u) is w_u assert space.unwrap( - api.PyUnicode_FromObject(space.wrap('test'))) == 'test' + api.PyUnicode_FromObject(space.wrapbytes('test'))) == "b'test'" def test_decode(self, space, api): b_text = rffi.str2charp('caf\x82xx') @@ -305,7 +293,7 @@ assert space.unwrap( api.PyUnicode_Decode(b_text, 4, b_encoding, None)) == u'caf\xe9' - w_text = api.PyUnicode_FromEncodedObject(space.wrap("test"), b_encoding, None) + w_text = 
api.PyUnicode_FromEncodedObject(space.wrapbytes("test"), b_encoding, None) assert space.is_true(space.isinstance(w_text, space.w_unicode)) assert space.unwrap(w_text) == "test" @@ -341,7 +329,7 @@ def test(ustr): w_ustr = space.wrap(ustr.decode('Unicode-Escape')) result = api.PyUnicode_AsUnicodeEscapeString(w_ustr) - assert space.eq_w(space.wrap(ustr), result) + assert space.eq_w(space.wrapbytes(ustr), result) test('\\u674f\\u7f8e') test('\\u0105\\u0107\\u017c\\u017a') @@ -352,7 +340,7 @@ w_ustr = space.wrap(ustr.decode("ascii")) result = api.PyUnicode_AsASCIIString(w_ustr) - assert space.eq_w(space.wrap(ustr), result) + assert space.eq_w(space.wrapbytes(ustr), result) w_ustr = space.wrap(u"abcd\xe9f") self.raises(space, api, UnicodeEncodeError, api.PyUnicode_AsASCIIString, w_ustr) @@ -375,7 +363,7 @@ w_ustr = api.PyUnicode_DecodeUTF16(encoded_charp, len(encoded), strict_charp, pendian) assert space.eq_w(space.call_method(w_ustr, 'encode', space.wrap('ascii')), - space.wrap("abcd")) + space.wrapbytes("abcd")) rffi.free_charp(encoded_charp) rffi.free_charp(strict_charp) @@ -411,7 +399,7 @@ w_ustr = api.PyUnicode_DecodeUTF32(encoded_charp, len(encoded), strict_charp, pendian) assert space.eq_w(space.call_method(w_ustr, 'encode', space.wrap('ascii')), - space.wrap("ab")) + space.wrapbytes("ab")) rffi.free_charp(encoded_charp) rffi.free_charp(strict_charp) @@ -467,7 +455,7 @@ uni = u'abcdefg' data = rffi.unicode2wcharp(uni) w_s = api.PyUnicode_EncodeASCII(data, len(uni), lltype.nullptr(rffi.CCHARP.TO)) - assert space.eq_w(space.wrap("abcdefg"), w_s) + assert space.eq_w(space.wrapbytes("abcdefg"), w_s) rffi.free_wcharp(data) u = u'�bcd�fg' @@ -487,13 +475,13 @@ uni = u'abcdefg' data = rffi.unicode2wcharp(uni) w_s = api.PyUnicode_EncodeLatin1(data, len(uni), lltype.nullptr(rffi.CCHARP.TO)) - assert space.eq_w(space.wrap("abcdefg"), w_s) + assert space.eq_w(space.wrapbytes("abcdefg"), w_s) rffi.free_wcharp(data) ustr = "abcdef" w_ustr = space.wrap(ustr.decode("ascii")) 
result = api.PyUnicode_AsLatin1String(w_ustr) - assert space.eq_w(space.wrap(ustr), result) + assert space.eq_w(space.wrapbytes(ustr), result) def test_format(self, space, api): w_format = space.wrap(u'hi %s') @@ -560,13 +548,13 @@ def test_split(self, space, api): w_str = space.wrap(u"a\nb\nc\nd") - assert "[u'a', u'b', u'c', u'd']" == space.unwrap(space.repr( + assert "['a', 'b', 'c', 'd']" == space.unwrap(space.repr( api.PyUnicode_Split(w_str, space.wrap('\n'), -1))) - assert r"[u'a', u'b', u'c\nd']" == space.unwrap(space.repr( + assert r"['a', 'b', 'c\nd']" == space.unwrap(space.repr( api.PyUnicode_Split(w_str, space.wrap('\n'), 2))) - assert r"[u'a', u'b', u'c d']" == space.unwrap(space.repr( + assert r"['a', 'b', 'c d']" == space.unwrap(space.repr( api.PyUnicode_Split(space.wrap(u'a\nb c d'), None, 2))) - assert "[u'a', u'b', u'c', u'd']" == space.unwrap(space.repr( + assert "['a', 'b', 'c', 'd']" == space.unwrap(space.repr( api.PyUnicode_Splitlines(w_str, 0))) - assert r"[u'a\n', u'b\n', u'c\n', u'd']" == space.unwrap(space.repr( + assert r"['a\n', 'b\n', 'c\n', 'd']" == space.unwrap(space.repr( api.PyUnicode_Splitlines(w_str, 1))) diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -354,10 +354,10 @@ in the unicode() built-in function. The codec to be used is looked up using the Python codec registry. Return NULL if an exception was raised by the codec.""" - w_str = space.wrap(rffi.charpsize2str(s, size)) + w_str = space.wrapbytes(rffi.charpsize2str(s, size)) w_encoding = space.wrap(rffi.charp2str(encoding)) if errors: - w_errors = space.wrap(rffi.charp2str(errors)) + w_errors = space.wrapbytes(rffi.charp2str(errors)) else: w_errors = space.w_None return space.call_method(w_str, 'decode', w_encoding, w_errors) @@ -432,7 +432,7 @@ (UCS2), and range(0x110000) on wide builds (UCS4). 
A ValueError is raised in case it is not.""" w_ordinal = space.wrap(rffi.cast(lltype.Signed, ordinal)) - return space.call_function(space.builtin.get('unichr'), w_ordinal) + return space.call_function(space.builtin.get('chr'), w_ordinal) @cpython_api([PyObjectP, Py_ssize_t], rffi.INT_real, error=-1) def PyUnicode_Resize(space, ref, newsize): @@ -475,7 +475,7 @@ encoded string s. Return NULL if an exception was raised by the codec. """ - w_s = space.wrap(rffi.charpsize2str(s, size)) + w_s = space.wrapbytes(rffi.charpsize2str(s, size)) if errors: w_errors = space.wrap(rffi.charp2str(errors)) else: @@ -617,7 +617,11 @@ def PyUnicode_Compare(space, w_left, w_right): """Compare two strings and return -1, 0, 1 for less than, equal, and greater than, respectively.""" - return space.int_w(space.cmp(w_left, w_right)) + if space.is_true(space.lt(w_left, w_right)): + return -1 + if space.is_true(space.lt(w_right, w_left)): + return 1 + return 0 @cpython_api([rffi.CWCHARP, rffi.CWCHARP, Py_ssize_t], lltype.Void) def Py_UNICODE_COPY(space, target, source, length): From noreply at buildbot.pypy.org Wed May 2 01:04:14 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Wed, 2 May 2012 01:04:14 +0200 (CEST) Subject: [pypy-commit] pypy py3k: Adapt many cpyext tests to python3. Message-ID: <20120501230414.D5A9882F50@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r54867:7c743ad72f55 Date: 2012-05-02 00:59 +0200 http://bitbucket.org/pypy/pypy/changeset/7c743ad72f55/ Log: Adapt many cpyext tests to python3. 
diff --git a/pypy/module/cpyext/dictobject.py b/pypy/module/cpyext/dictobject.py --- a/pypy/module/cpyext/dictobject.py +++ b/pypy/module/cpyext/dictobject.py @@ -111,19 +111,19 @@ def PyDict_Keys(space, w_obj): """Return a PyListObject containing all the keys from the dictionary, as in the dictionary method dict.keys().""" - return space.call_method(w_obj, "keys") + return space.call_function(space.w_list, space.call_method(w_obj, "keys")) @cpython_api([PyObject], PyObject) def PyDict_Values(space, w_obj): """Return a PyListObject containing all the values from the dictionary p, as in the dictionary method dict.values().""" - return space.call_method(w_obj, "values") + return space.call_function(space.w_list, space.call_method(w_obj, "values")) @cpython_api([PyObject], PyObject) def PyDict_Items(space, w_obj): """Return a PyListObject containing all the items from the dictionary, as in the dictionary method dict.items().""" - return space.call_method(w_obj, "items") + return space.call_function(space.w_list, space.call_method(w_obj, "items")) @cpython_api([PyObject, Py_ssize_tP, PyObjectP, PyObjectP], rffi.INT_real, error=CANNOT_FAIL) def PyDict_Next(space, w_dict, ppos, pkey, pvalue): diff --git a/pypy/module/cpyext/eval.py b/pypy/module/cpyext/eval.py --- a/pypy/module/cpyext/eval.py +++ b/pypy/module/cpyext/eval.py @@ -94,7 +94,7 @@ Py_eval_input = 258 def compile_string(space, source, filename, start, flags=0): - w_source = space.wrap(source) + w_source = space.wrapbytes(source) start = rffi.cast(lltype.Signed, start) if start == Py_file_input: mode = 'exec' diff --git a/pypy/module/cpyext/funcobject.py b/pypy/module/cpyext/funcobject.py --- a/pypy/module/cpyext/funcobject.py +++ b/pypy/module/cpyext/funcobject.py @@ -116,9 +116,10 @@ return [space.str_w(w_item) for w_item in space.fixedview(w_list)] @cpython_api([rffi.INT_real, rffi.INT_real, rffi.INT_real, rffi.INT_real, + rffi.INT_real, PyObject, PyObject, PyObject, PyObject, PyObject, PyObject, PyObject, 
PyObject, rffi.INT_real, PyObject], PyCodeObject) -def PyCode_New(space, argcount, nlocals, stacksize, flags, +def PyCode_New(space, argcount, kwonlyargcount, nlocals, stacksize, flags, w_code, w_consts, w_names, w_varnames, w_freevars, w_cellvars, w_filename, w_funcname, firstlineno, w_lnotab): """Return a new code object. If you need a dummy code object to @@ -147,6 +148,7 @@ """Creates a new empty code object with the specified source location.""" return space.wrap(PyCode(space, argcount=0, + kwonlyargcount=0, nlocals=0, stacksize=0, flags=0, diff --git a/pypy/module/cpyext/import_.py b/pypy/module/cpyext/import_.py --- a/pypy/module/cpyext/import_.py +++ b/pypy/module/cpyext/import_.py @@ -24,7 +24,7 @@ w_builtin = space.getitem(w_globals, space.wrap('__builtins__')) else: # No globals -- use standard builtins, and fake globals - w_builtin = space.getbuiltinmodule('__builtin__') + w_builtin = space.getbuiltinmodule('builtins') w_globals = space.newdict() space.setitem(w_globals, space.wrap("__builtins__"), w_builtin) @@ -121,5 +121,6 @@ pathname = code.co_filename w_mod = importing.add_module(space, w_name) space.setattr(w_mod, space.wrap('__file__'), space.wrap(pathname)) - importing.exec_code_module(space, w_mod, code, pathname) + cpathname = importing.make_compiled_pathname(pathname) + importing.exec_code_module(space, w_mod, code, pathname, cpathname) return w_mod diff --git a/pypy/module/cpyext/intobject.py b/pypy/module/cpyext/intobject.py --- a/pypy/module/cpyext/intobject.py +++ b/pypy/module/cpyext/intobject.py @@ -76,12 +76,8 @@ unsigned long. This function does not check for overflow. 
""" w_int = space.int(w_obj) - if space.is_true(space.isinstance(w_int, space.w_int)): - num = space.int_w(w_int) - return r_uint(num) - else: - num = space.bigint_w(w_int) - return num.uintmask() + num = space.bigint_w(w_int) + return num.uintmask() @cpython_api([PyObject], lltype.Signed, error=CANNOT_FAIL) def PyInt_AS_LONG(space, w_int): diff --git a/pypy/module/cpyext/longobject.py b/pypy/module/cpyext/longobject.py --- a/pypy/module/cpyext/longobject.py +++ b/pypy/module/cpyext/longobject.py @@ -9,7 +9,7 @@ from pypy.rlib.rarithmetic import intmask -PyLong_Check, PyLong_CheckExact = build_type_checkers("Long") +PyLong_Check, PyLong_CheckExact = build_type_checkers("Long", "w_int") @cpython_api([lltype.Signed], PyObject) def PyLong_FromLong(space, val): diff --git a/pypy/module/cpyext/mapping.py b/pypy/module/cpyext/mapping.py --- a/pypy/module/cpyext/mapping.py +++ b/pypy/module/cpyext/mapping.py @@ -22,20 +22,23 @@ def PyMapping_Keys(space, w_obj): """On success, return a list of the keys in object o. On failure, return NULL. This is equivalent to the Python expression o.keys().""" - return space.call_method(w_obj, "keys") + return space.call_function(space.w_list, + space.call_method(w_obj, "keys")) @cpython_api([PyObject], PyObject) def PyMapping_Values(space, w_obj): """On success, return a list of the values in object o. On failure, return NULL. This is equivalent to the Python expression o.values().""" - return space.call_method(w_obj, "values") + return space.call_function(space.w_list, + space.call_method(w_obj, "values")) @cpython_api([PyObject], PyObject) def PyMapping_Items(space, w_obj): """On success, return a list of the items in object o, where each item is a tuple containing a key-value pair. On failure, return NULL. 
This is equivalent to the Python expression o.items().""" - return space.call_method(w_obj, "items") + return space.call_function(space.w_list, + space.call_method(w_obj, "items")) @cpython_api([PyObject, CONST_STRING], PyObject) def PyMapping_GetItemString(space, w_obj, key): diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -249,26 +249,6 @@ function.""" return space.call_function(space.w_unicode, w_obj) - at cpython_api([PyObject, PyObject], rffi.INT_real, error=-1) -def PyObject_Compare(space, w_o1, w_o2): - """ - Compare the values of o1 and o2 using a routine provided by o1, if one - exists, otherwise with a routine provided by o2. Returns the result of the - comparison on success. On error, the value returned is undefined; use - PyErr_Occurred() to detect an error. This is equivalent to the Python - expression cmp(o1, o2).""" - return space.int_w(space.cmp(w_o1, w_o2)) - - at cpython_api([PyObject, PyObject, rffi.INTP], rffi.INT_real, error=-1) -def PyObject_Cmp(space, w_o1, w_o2, result): - """Compare the values of o1 and o2 using a routine provided by o1, if one - exists, otherwise with a routine provided by o2. The result of the - comparison is returned in result. Returns -1 on failure. 
This is the - equivalent of the Python statement result = cmp(o1, o2).""" - res = space.int_w(space.cmp(w_o1, w_o2)) - result[0] = rffi.cast(rffi.INT, res) - return 0 - @cpython_api([PyObject, PyObject, rffi.INT_real], PyObject) def PyObject_RichCompare(space, w_o1, w_o2, opid_int): """Compare the values of o1 and o2 using the operation specified by opid, diff --git a/pypy/module/cpyext/pyfile.py b/pypy/module/cpyext/pyfile.py --- a/pypy/module/cpyext/pyfile.py +++ b/pypy/module/cpyext/pyfile.py @@ -1,6 +1,6 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( - cpython_api, CANNOT_FAIL, CONST_STRING, FILEP, build_type_checkers) + cpython_api, CANNOT_FAIL, CONST_STRING, FILEP) from pypy.module.cpyext.pyobject import PyObject, borrow_from from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.interpreter.error import OperationError @@ -38,9 +38,9 @@ On success, return a new file object that is opened on the file given by filename, with a file mode given by mode, where mode has the same semantics as the standard C routine fopen(). 
On failure, return NULL.""" - w_filename = space.wrap(rffi.charp2str(filename)) + w_filename = space.wrapbytes(rffi.charp2str(filename)) w_mode = space.wrap(rffi.charp2str(mode)) - return space.call_method(space.builtin, 'file', w_filename, w_mode) + return space.call_method(space.builtin, 'open', w_filename, w_mode) @cpython_api([FILEP, CONST_STRING, CONST_STRING, rffi.VOIDP], PyObject) def PyFile_FromFile(space, fp, name, mode, close): diff --git a/pypy/module/cpyext/structmember.py b/pypy/module/cpyext/structmember.py --- a/pypy/module/cpyext/structmember.py +++ b/pypy/module/cpyext/structmember.py @@ -55,7 +55,7 @@ if member_type == T_STRING: result = rffi.cast(rffi.CCHARPP, addr) if result[0]: - w_result = PyString_FromString(space, result[0]) + w_result = PyUnicode_FromString(space, result[0]) else: w_result = space.w_None elif member_type == T_STRING_INPLACE: diff --git a/pypy/module/cpyext/test/foo.c b/pypy/module/cpyext/test/foo.c --- a/pypy/module/cpyext/test/foo.c +++ b/pypy/module/cpyext/test/foo.c @@ -89,7 +89,7 @@ static PyObject * foo_get_name(PyObject *self, void *closure) { - return PyString_FromStringAndSize("Foo Example", 11); + return PyUnicode_FromStringAndSize("Foo Example", 11); } static PyObject * diff --git a/pypy/module/cpyext/test/test_cell.py b/pypy/module/cpyext/test/test_cell.py --- a/pypy/module/cpyext/test/test_cell.py +++ b/pypy/module/cpyext/test/test_cell.py @@ -16,5 +16,5 @@ return o return g - cell_type = type(f(0).func_closure[0]) + cell_type = type(f(0).__closure__[0]) assert d["cell"] is cell_type diff --git a/pypy/module/cpyext/test/test_dictobject.py b/pypy/module/cpyext/test/test_dictobject.py --- a/pypy/module/cpyext/test/test_dictobject.py +++ b/pypy/module/cpyext/test/test_dictobject.py @@ -139,9 +139,13 @@ api.Py_DecRef(py_dict) # release borrowed references assert space.eq_w(space.newlist(keys_w), - space.call_method(w_dict, "keys")) + space.call_function( + space.w_list, + space.call_method(w_dict, "keys"))) assert 
space.eq_w(space.newlist(values_w), - space.call_method(w_dict, "values")) + space.call_function( + space.w_list, + space.call_method(w_dict, "values"))) def test_dictproxy(self, space, api): w_dict = space.sys.get('modules') diff --git a/pypy/module/cpyext/test/test_eval.py b/pypy/module/cpyext/test/test_eval.py --- a/pypy/module/cpyext/test/test_eval.py +++ b/pypy/module/cpyext/test/test_eval.py @@ -117,7 +117,7 @@ flags = lltype.malloc(PyCompilerFlags, flavor='raw') flags.c_cf_flags = rffi.cast(rffi.INT, consts.PyCF_SOURCE_IS_UTF8) w_globals = space.newdict() - buf = rffi.str2charp("a = u'caf\xc3\xa9'") + buf = rffi.str2charp("a = 'caf\xc3\xa9'") try: api.PyRun_StringFlags(buf, Py_single_input, w_globals, w_globals, flags) @@ -229,7 +229,7 @@ module = self.import_extension('foo', [ ("call_func", "METH_VARARGS", """ - PyObject *t = PyString_FromString("t"); + PyObject *t = PyUnicode_FromString("t"); PyObject *res = PyObject_CallFunctionObjArgs( PyTuple_GetItem(args, 0), Py_None, NULL); @@ -238,8 +238,8 @@ """), ("call_method", "METH_VARARGS", """ - PyObject *t = PyString_FromString("t"); - PyObject *count = PyString_FromString("count"); + PyObject *t = PyUnicode_FromString("t"); + PyObject *count = PyUnicode_FromString("count"); PyObject *res = PyObject_CallMethodObjArgs( PyTuple_GetItem(args, 0), count, t, NULL); @@ -277,15 +277,15 @@ mod = module.exec_code(code) assert mod.__name__ == "cpyext_test_modname" assert mod.__file__ == "someFile" - print dir(mod) - print mod.__dict__ + print(dir(mod)) + print(mod.__dict__) assert mod.f(42) == 47 mod = module.exec_code_ex(code) assert mod.__name__ == "cpyext_test_modname" assert mod.__file__ == "otherFile" - print dir(mod) - print mod.__dict__ + print(dir(mod)) + print(mod.__dict__) assert mod.f(42) == 47 def test_merge_compiler_flags(self): @@ -301,7 +301,7 @@ assert module.get_flags() == (0, 0) ns = {'module':module} - exec """from __future__ import division \nif 1: + exec("""from __future__ import division \nif 1: 
def nested_flags(): - return module.get_flags()""" in ns + return module.get_flags()""", ns) assert ns['nested_flags']() == (1, 0x2000) # CO_FUTURE_DIVISION diff --git a/pypy/module/cpyext/test/test_funcobject.py b/pypy/module/cpyext/test/test_funcobject.py --- a/pypy/module/cpyext/test/test_funcobject.py +++ b/pypy/module/cpyext/test/test_funcobject.py @@ -29,8 +29,8 @@ return C().method """) - w_function = space.getattr(w_method, space.wrap("im_func")) - w_self = space.getattr(w_method, space.wrap("im_self")) + w_function = space.getattr(w_method, space.wrap("__func__")) + w_self = space.getattr(w_method, space.wrap("__self__")) assert space.is_w(api.PyMethod_Function(w_method), w_function) assert space.is_w(api.PyMethod_Self(w_method), w_self) diff --git a/pypy/module/cpyext/test/test_getargs.py b/pypy/module/cpyext/test/test_getargs.py --- a/pypy/module/cpyext/test/test_getargs.py +++ b/pypy/module/cpyext/test/test_getargs.py @@ -126,37 +126,22 @@ PyBuffer_Release(&buf); return result; ''') - assert 'foo\0bar\0baz' == pybuffer('foo\0bar\0baz') - - - def test_pyarg_parse_string_old_buffer(self): - pybuffer = self.import_parser( - ''' - Py_buffer buf; - PyObject *result; - if (!PyArg_ParseTuple(args, "s*", &buf)) { - return NULL; - } - result = PyString_FromStringAndSize(buf.buf, buf.len); - PyBuffer_Release(&buf); - return result; - ''') - assert 'foo\0bar\0baz' == pybuffer(buffer('foo\0bar\0baz')) + assert b'foo\0bar\0baz' == pybuffer('foo\0bar\0baz') def test_pyarg_parse_charbuf_and_length(self): """ - The `t#` format specifier can be used to parse a read-only 8-bit + The `s#` format specifier can be used to parse a read-only 8-bit character buffer into a char* and int giving its length in bytes. 
""" charbuf = self.import_parser( ''' char *buf; int len; - if (!PyArg_ParseTuple(args, "t#", &buf, &len)) { + if (!PyArg_ParseTuple(args, "s#", &buf, &len)) { return NULL; } return PyString_FromStringAndSize(buf, len); ''') raises(TypeError, "charbuf(10)") - assert 'foo\0bar\0baz' == charbuf('foo\0bar\0baz') + assert b'foo\0bar\0baz' == charbuf('foo\0bar\0baz') diff --git a/pypy/module/cpyext/test/test_import.py b/pypy/module/cpyext/test/test_import.py --- a/pypy/module/cpyext/test/test_import.py +++ b/pypy/module/cpyext/test/test_import.py @@ -4,9 +4,9 @@ class TestImport(BaseApiTest): def test_import(self, space, api): - pdb = api.PyImport_Import(space.wrap("pdb")) - assert pdb - assert space.getattr(pdb, space.wrap("pm")) + inspect = api.PyImport_Import(space.wrap("inspect")) + assert inspect + assert space.getattr(inspect, space.wrap("ismethod")) def test_addmodule(self, space, api): with rffi.scoped_str2charp("sys") as modname: @@ -19,7 +19,7 @@ space.wrap('__name__'))) == 'foobar' def test_getmoduledict(self, space, api): - testmod = "_functools" + testmod = "contextlib" w_pre_dict = api.PyImport_GetModuleDict() assert not space.is_true(space.contains(w_pre_dict, space.wrap(testmod))) @@ -32,10 +32,10 @@ assert space.is_true(space.contains(w_dict, space.wrap(testmod))) def test_reload(self, space, api): - pdb = api.PyImport_Import(space.wrap("pdb")) - space.delattr(pdb, space.wrap("set_trace")) - pdb = api.PyImport_ReloadModule(pdb) - assert space.getattr(pdb, space.wrap("set_trace")) + inspect = api.PyImport_Import(space.wrap("inspect")) + space.delattr(inspect, space.wrap("getframeinfo")) + inspect = api.PyImport_ReloadModule(inspect) + assert space.getattr(inspect, space.wrap("getframeinfo")) class AppTestImportLogic(AppTestCpythonExtensionBase): def test_import_logic(self): diff --git a/pypy/module/cpyext/test/test_intobject.py b/pypy/module/cpyext/test/test_intobject.py --- a/pypy/module/cpyext/test/test_intobject.py +++ 
b/pypy/module/cpyext/test/test_intobject.py @@ -64,7 +64,7 @@ ]) values = module.values() types = [type(x) for x in values] - assert types == [int, long, int, int] + assert types == [int, int, int, int] def test_int_subtype(self): module = self.import_extension( diff --git a/pypy/module/cpyext/test/test_listobject.py b/pypy/module/cpyext/test/test_listobject.py --- a/pypy/module/cpyext/test/test_listobject.py +++ b/pypy/module/cpyext/test/test_listobject.py @@ -125,11 +125,11 @@ assert len(l) == 1 assert l[0] == 14 - l = range(6) + l = list(range(6)) module.setslice(l, ['a']) assert l == [0, 'a', 4, 5] - l = range(6) + l = list(range(6)) module.setslice(l, None) assert l == [0, 4, 5] diff --git a/pypy/module/cpyext/test/test_longobject.py b/pypy/module/cpyext/test/test_longobject.py --- a/pypy/module/cpyext/test/test_longobject.py +++ b/pypy/module/cpyext/test/test_longobject.py @@ -158,7 +158,7 @@ int little_endian, is_signed; if (!PyArg_ParseTuple(args, "ii", &little_endian, &is_signed)) return NULL; - return _PyLong_FromByteArray("\x9A\xBC", 2, + return _PyLong_FromByteArray("\\x9A\\xBC", 2, little_endian, is_signed); """), ]) diff --git a/pypy/module/cpyext/test/test_memoryobject.py b/pypy/module/cpyext/test/test_memoryobject.py --- a/pypy/module/cpyext/test/test_memoryobject.py +++ b/pypy/module/cpyext/test/test_memoryobject.py @@ -7,7 +7,7 @@ space.wrap((2, 7)))): py.test.skip("unsupported before Python 2.7") - w_hello = space.wrap("hello") + w_hello = space.wrapbytes("hello") w_view = api.PyMemoryView_FromObject(w_hello) w_bytes = space.call_method(w_view, "tobytes") assert space.unwrap(w_bytes) == "hello" diff --git a/pypy/module/cpyext/test/test_methodobject.py b/pypy/module/cpyext/test/test_methodobject.py --- a/pypy/module/cpyext/test/test_methodobject.py +++ b/pypy/module/cpyext/test/test_methodobject.py @@ -44,7 +44,7 @@ ''' if(PyCFunction_Check(args)) { PyCFunctionObject* func = (PyCFunctionObject*)args; - return 
PyString_FromString(func->m_ml->ml_name); + return PyUnicode_FromString(func->m_ml->ml_name); } else { Py_RETURN_FALSE; @@ -108,7 +108,7 @@ ml.c_ml_meth = rffi.cast(PyCFunction_typedef, c_func.get_llhelper(space)) - method = api.PyDescr_NewMethod(space.w_str, ml) + method = api.PyDescr_NewMethod(space.w_unicode, ml) assert repr(method).startswith( " 4 - cmpr >= 4 + raises(TypeError, "cmpr < 4") + raises(TypeError, "cmpr <= 4") + raises(TypeError, "cmpr > 4") + raises(TypeError, "cmpr >= 4") assert cmpr.__le__(4) is NotImplemented diff --git a/pypy/module/cpyext/test/test_weakref.py b/pypy/module/cpyext/test/test_weakref.py --- a/pypy/module/cpyext/test/test_weakref.py +++ b/pypy/module/cpyext/test/test_weakref.py @@ -18,7 +18,7 @@ def test_proxy(self, space, api): w_obj = space.w_Warning # some weakrefable object w_proxy = api.PyWeakref_NewProxy(w_obj, None) - assert space.unwrap(space.str(w_proxy)) == "" + assert space.unwrap(space.str(w_proxy)) == "" assert space.unwrap(space.repr(w_proxy)).startswith(' Author: Amaury Forgeot d'Arc Branch: py3k Changeset: r54868:c2293a4badc4 Date: 2012-05-02 01:02 +0200 http://bitbucket.org/pypy/pypy/changeset/c2293a4badc4/ Log: hg merge default diff --git a/pypy/doc/cppyy.rst b/pypy/doc/cppyy.rst --- a/pypy/doc/cppyy.rst +++ b/pypy/doc/cppyy.rst @@ -21,6 +21,26 @@ .. _`llvm`: http://llvm.org/ +Motivation +========== + +The cppyy module offers two unique features, which result in great +performance as well as better functionality and cross-language integration +than would otherwise be possible. +First, cppyy is written in RPython and therefore open to optimizations by the +JIT up until the actual point of call into C++. +This means that there are no conversions necessary between a garbage collected +and a reference counted environment, as is needed for the use of existing +extension modules written or generated for CPython. 
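The test_typeobject hunk earlier in this mail replaces silent comparisons with raises(TypeError, ...): on Python 3, ordering comparisons no longer fall back to the arbitrary default ordering that Python 2 applied between unrelated types. A pure-Python sketch, where Cmp is a hypothetical stand-in for the extension type used in the test:

```python
class Cmp(object):
    # Hypothetical stand-in for the test's extension type: only
    # __le__ is defined, and it declines the comparison.
    def __le__(self, other):
        return NotImplemented

cmpr = Cmp()

# Calling the slot directly still exposes NotImplemented...
assert cmpr.__le__(4) is NotImplemented

# ...but the operator itself raises on Python 3, because when both
# operands return NotImplemented there is no fallback ordering.
try:
    cmpr <= 4
    raised = False
except TypeError:
    raised = True
assert raised
```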
+It also means that if variables are already unboxed by the JIT, they can be +passed through directly to C++. +Second, Reflex (and cling far more so) adds dynamic features to C++, thus +greatly reducing impedance mismatches between the two languages. +In fact, Reflex is dynamic enough that you could write the runtime bindings +generation in python (as opposed to RPython) and this is used to create very +natural "pythonizations" of the bound code. + + Installation ============ @@ -195,10 +215,12 @@ >>>> d = cppyy.gbl.BaseFactory("name", 42, 3.14) >>>> type(d) - >>>> d.m_i - 42 - >>>> d.m_d - 3.14 + >>>> isinstance(d, cppyy.gbl.Base1) + True + >>>> isinstance(d, cppyy.gbl.Base2) + True + >>>> d.m_i, d.m_d + (42, 3.14) >>>> d.m_name == "name" True >>>> @@ -295,6 +317,9 @@ To select a specific virtual method, do like with normal python classes that override methods: select it from the class that you need, rather than calling the method on the instance. + To select a specific overload, use the __dispatch__ special function, which + takes the name of the desired method and its signature (which can be + obtained from the doc string) as arguments. * **namespaces**: Are represented as python classes. Namespaces are more open-ended than classes, so sometimes initial access may diff --git a/pypy/doc/extending.rst b/pypy/doc/extending.rst --- a/pypy/doc/extending.rst +++ b/pypy/doc/extending.rst @@ -116,13 +116,21 @@ Reflex ====== -This method is only experimental for now, and is being exercised on a branch, -`reflex-support`_, so you will have to build PyPy yourself. +This method is still experimental and is being exercised on a branch, +`reflex-support`_, which adds the `cppyy`_ module. The method works by using the `Reflex package`_ to provide reflection information of the C++ code, which is then used to automatically generate -bindings at runtime, which can then be used from python. +bindings at runtime. 
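The `__dispatch__` note in the cppyy.rst hunk above can be sketched as follows. This is illustrative only: the dictionary file, class name, and signature string are made up, and it runs only on a PyPy build with the experimental cppyy module and a matching Reflex dictionary.

```
import cppyy
cppyy.load_reflection_info("MyClassDict.so")   # hypothetical dictionary

obj = cppyy.gbl.MyClass()
# obj.get is assumed overloaded, e.g. get(int) and get(double);
# select one by method name plus signature, which can be read off
# the method's doc string:
get_double = obj.get.__dispatch__('get', 'double')
get_double(3.14)
```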
+From a python standpoint, there is no difference between generating bindings +at runtime, or having them "statically" generated and available in scripts +or compiled into extension modules: python classes and functions are always +runtime structures, created when a script or module loads. +However, if the backend itself is capable of dynamic behavior, it is a much +better functional match to python, allowing tighter integration and more +natural language mappings. Full details are `available here`_. +.. _`cppyy`: cppyy.html .. _`reflex-support`: cppyy.html .. _`Reflex package`: http://root.cern.ch/drupal/content/reflex .. _`available here`: cppyy.html @@ -130,16 +138,33 @@ Pros ---- -If it works, it is mostly automatic, and hence easy in use. -The bindings can make use of direct pointers, in which case the calls are -very fast. +The cppyy module is written in RPython, which makes it possible to keep the +code execution visible to the JIT all the way to the actual point of call into +C++, thus allowing for a very fast interface. +Reflex is currently in use in large software environments in High Energy +Physics (HEP), across many different projects and packages, and its use can be +virtually completely automated in a production environment. +One of its uses in HEP is in providing language bindings for CPython. +Thus, it is possible to use Reflex to have bound code work on both CPython and +on PyPy. +In the medium-term, Reflex will be replaced by `cling`_, which is based on +`llvm`_. +This will affect the backend only; the python-side interface is expected to +remain the same, except that cling adds a lot of dynamic behavior to C++, +enabling further language integration. + +.. _`cling`: http://root.cern.ch/drupal/content/cling +.. _`llvm`: http://llvm.org/ Cons ---- -C++ is a large language, and these bindings are not yet feature-complete. 
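The point above, that python classes and functions are always runtime structures, is easy to demonstrate: a class built dynamically with type(), as runtime-generated bindings effectively do, is indistinguishable from one written as a class statement.

```python
# Build a class at runtime, the way generated bindings effectively do;
# the names here are purely illustrative.
Point = type("Point", (object,), {
    "__init__": lambda self, x: setattr(self, "x", x),
    "double":   lambda self: self.x * 2,
})

p = Point(21)
assert p.double() == 42
# Nothing distinguishes it from a statically written class:
assert isinstance(p, Point)
assert Point.__name__ == "Point"
```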
-Although missing features should do no harm if you don't use them, if you do -need a particular feature, it may be necessary to work around it in python -or with a C++ helper function. +C++ is a large language, and cppyy is not yet feature-complete. +Still, the experience gained in developing the equivalent bindings for CPython +means that adding missing features is a simple matter of engineering, not a +question of research. +The module is written so that currently missing features should do no harm if +you don't use them; if you do need a particular feature, it may be necessary +to work around it in python or with a C++ helper function. Although Reflex works on various platforms, the bindings with PyPy have only been tested on Linux. diff --git a/pypy/jit/backend/llsupport/test/test_asmmemmgr.py b/pypy/jit/backend/llsupport/test/test_asmmemmgr.py --- a/pypy/jit/backend/llsupport/test/test_asmmemmgr.py +++ b/pypy/jit/backend/llsupport/test/test_asmmemmgr.py @@ -217,7 +217,8 @@ encoded = ''.join(writtencode).encode('hex').upper() ataddr = '@%x' % addr assert log == [('test-logname-section', - [('debug_print', 'CODE_DUMP', ataddr, '+0 ', encoded)])] + [('debug_print', 'SYS_EXECUTABLE', '??'), + ('debug_print', 'CODE_DUMP', ataddr, '+0 ', encoded)])] lltype.free(p, flavor='raw') diff --git a/pypy/jit/metainterp/jitexc.py b/pypy/jit/metainterp/jitexc.py --- a/pypy/jit/metainterp/jitexc.py +++ b/pypy/jit/metainterp/jitexc.py @@ -12,7 +12,6 @@ """ _go_through_llinterp_uncaught_ = True # ugh - def _get_standard_error(rtyper, Class): exdata = rtyper.getexceptiondata() clsdef = rtyper.annotator.bookkeeper.getuniqueclassdef(Class) diff --git a/pypy/jit/metainterp/optimize.py b/pypy/jit/metainterp/optimize.py --- a/pypy/jit/metainterp/optimize.py +++ b/pypy/jit/metainterp/optimize.py @@ -5,3 +5,9 @@ """Raised when the optimize*.py detect that the loop that we are trying to build cannot possibly make sense as a long-running loop (e.g. 
it cannot run 2 complete iterations).""" + + def __init__(self, msg='?'): + debug_start("jit-abort") + debug_print(msg) + debug_stop("jit-abort") + self.msg = msg diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -49,8 +49,9 @@ optimizations.append(OptFfiCall()) if ('rewrite' not in enable_opts or 'virtualize' not in enable_opts - or 'heap' not in enable_opts or 'unroll' not in enable_opts): - optimizations.append(OptSimplify()) + or 'heap' not in enable_opts or 'unroll' not in enable_opts + or 'pure' not in enable_opts): + optimizations.append(OptSimplify(unroll)) return optimizations, unroll diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -257,8 +257,8 @@ opnum == rop.COPYSTRCONTENT or # no effect on GC struct/array opnum == rop.COPYUNICODECONTENT): # no effect on GC struct/array return - assert opnum != rop.CALL_PURE if (opnum == rop.CALL or + opnum == rop.CALL_PURE or opnum == rop.CALL_MAY_FORCE or opnum == rop.CALL_RELEASE_GIL or opnum == rop.CALL_ASSEMBLER): @@ -481,7 +481,7 @@ # already between the tracing and now. In this case, we are # simply ignoring the QUASIIMMUT_FIELD hint and compiling it # as a regular getfield. - if not qmutdescr.is_still_valid(): + if not qmutdescr.is_still_valid_for(structvalue.get_key_box()): self._remove_guard_not_invalidated = True return # record as an out-of-line guard diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -191,10 +191,13 @@ # GUARD_OVERFLOW, then the loop is invalid. 
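The optimize.py change above gives InvalidLoop a message and logs it as a side effect of construction, so every raise site records its reason without extra code. A minimal sketch of the pattern, with a plain list standing in for PyPy's debug_start/debug_print/debug_stop sections:

```python
log = []  # stand-in for the 'jit-abort' debug log section

class InvalidLoop(Exception):
    """Raised when a loop cannot make sense as a long-running loop."""
    def __init__(self, msg='?'):
        # logging happens in the constructor, so the reason is
        # recorded even before the exception propagates
        log.append(('jit-abort', msg))
        self.msg = msg

try:
    raise InvalidLoop('A GUARD_ISNULL was proven to always fail')
except InvalidLoop as e:
    assert e.msg == 'A GUARD_ISNULL was proven to always fail'
assert log == [('jit-abort', 'A GUARD_ISNULL was proven to always fail')]
```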
lastop = self.last_emitted_operation if lastop is None: - raise InvalidLoop + raise InvalidLoop('An INT_xxx_OVF was proven not to overflow but ' + + 'guarded with GUARD_OVERFLOW') opnum = lastop.getopnum() if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF): - raise InvalidLoop + raise InvalidLoop('An INT_xxx_OVF was proven not to overflow but ' + + 'guarded with GUARD_OVERFLOW') + self.emit_operation(op) def optimize_INT_ADD_OVF(self, op): diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -525,6 +525,7 @@ @specialize.argtype(0) def _emit_operation(self, op): + assert op.getopnum() != rop.CALL_PURE for i in range(op.numargs()): arg = op.getarg(i) try: diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -208,7 +208,8 @@ box = value.box assert isinstance(box, Const) if not box.same_constant(constbox): - raise InvalidLoop + raise InvalidLoop('A GUARD_{VALUE,TRUE,FALSE} was proven to ' + + 'always fail') return if emit_operation: self.emit_operation(op) @@ -220,7 +221,7 @@ if value.is_null(): return elif value.is_nonnull(): - raise InvalidLoop + raise InvalidLoop('A GUARD_ISNULL was proven to always fail') self.emit_operation(op) value.make_constant(self.optimizer.cpu.ts.CONST_NULL) @@ -229,7 +230,7 @@ if value.is_nonnull(): return elif value.is_null(): - raise InvalidLoop + raise InvalidLoop('A GUARD_NONNULL was proven to always fail') self.emit_operation(op) value.make_nonnull(op) @@ -278,7 +279,7 @@ if realclassbox is not None: if realclassbox.same_constant(expectedclassbox): return - raise InvalidLoop + raise InvalidLoop('A GUARD_CLASS was proven to always fail') if value.last_guard: # there already has been a guard_nonnull or guard_class or # 
guard_nonnull_class on this value. @@ -301,7 +302,8 @@ def optimize_GUARD_NONNULL_CLASS(self, op): value = self.getvalue(op.getarg(0)) if value.is_null(): - raise InvalidLoop + raise InvalidLoop('A GUARD_NONNULL_CLASS was proven to always ' + + 'fail') self.optimize_GUARD_CLASS(op) def optimize_CALL_LOOPINVARIANT(self, op): diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -4,8 +4,9 @@ from pypy.jit.metainterp.history import TargetToken, JitCellToken class OptSimplify(Optimization): - def __init__(self): + def __init__(self, unroll): self.last_label_descr = None + self.unroll = unroll def optimize_CALL_PURE(self, op): args = op.getarglist() @@ -35,24 +36,26 @@ pass def optimize_LABEL(self, op): - descr = op.getdescr() - if isinstance(descr, JitCellToken): - return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) - self.last_label_descr = op.getdescr() + if not self.unroll: + descr = op.getdescr() + if isinstance(descr, JitCellToken): + return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) + self.last_label_descr = op.getdescr() self.emit_operation(op) def optimize_JUMP(self, op): - descr = op.getdescr() - assert isinstance(descr, JitCellToken) - if not descr.target_tokens: - assert self.last_label_descr is not None - target_token = self.last_label_descr - assert isinstance(target_token, TargetToken) - assert target_token.targeting_jitcell_token is descr - op.setdescr(self.last_label_descr) - else: - assert len(descr.target_tokens) == 1 - op.setdescr(descr.target_tokens[0]) + if not self.unroll: + descr = op.getdescr() + assert isinstance(descr, JitCellToken) + if not descr.target_tokens: + assert self.last_label_descr is not None + target_token = self.last_label_descr + assert isinstance(target_token, TargetToken) + assert target_token.targeting_jitcell_token is descr + op.setdescr(self.last_label_descr) + 
else: + assert len(descr.target_tokens) == 1 + op.setdescr(descr.target_tokens[0]) self.emit_operation(op) dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', diff --git a/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py @@ -0,0 +1,46 @@ +from pypy.jit.metainterp.optimizeopt.test.test_optimizeopt import OptimizeOptTest +from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin +from pypy.jit.metainterp.resoperation import rop + + +allopts = OptimizeOptTest.enable_opts.split(':') +for optnum in range(len(allopts)): + myopts = allopts[:] + del myopts[optnum] + + class TestLLtype(OptimizeOptTest, LLtypeMixin): + enable_opts = ':'.join(myopts) + + def optimize_loop(self, ops, expected, expected_preamble=None, + call_pure_results=None, expected_short=None): + loop = self.parse(ops) + if expected != "crash!": + expected = self.parse(expected) + if expected_preamble: + expected_preamble = self.parse(expected_preamble) + if expected_short: + expected_short = self.parse(expected_short) + + preamble = self.unroll_and_optimize(loop, call_pure_results) + + for op in preamble.operations + loop.operations: + assert op.getopnum() not in (rop.CALL_PURE, + rop.CALL_LOOPINVARIANT, + rop.VIRTUAL_REF_FINISH, + rop.VIRTUAL_REF, + rop.QUASIIMMUT_FIELD, + rop.MARK_OPAQUE_PTR, + rop.RECORD_KNOWN_CLASS) + + def raises(self, e, fn, *args): + try: + fn(*args) + except e: + pass + + opt = allopts[optnum] + exec "TestNo%sLLtype = TestLLtype" % (opt[0].upper() + opt[1:]) + +del TestLLtype # No need to run the last set twice +del TestNoUnrollLLtype # This case is take care of by test_optimizebasic + diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- 
a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -5090,7 +5090,6 @@ class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass - ##class TestOOtype(BaseTestOptimizeBasic, OOtypeMixin): ## def test_instanceof(self): diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -105,6 +105,9 @@ return loop + def raises(self, e, fn, *args): + py.test.raises(e, fn, *args) + class OptimizeOptTest(BaseTestWithUnroll): def setup_method(self, meth=None): @@ -2639,7 +2642,7 @@ p2 = new_with_vtable(ConstClass(node_vtable)) jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_2(self): @@ -2651,7 +2654,7 @@ escape(p2) # prevent it from staying Virtual jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_3(self): @@ -2665,7 +2668,7 @@ setfield_gc(p3, p4, descr=nextdescr) jump(p3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_merge_guard_class_guard_value(self): @@ -4411,7 +4414,7 @@ setfield_gc(p1, p3, descr=nextdescr) jump(p3) """ - py.test.raises(BogusPureField, self.optimize_loop, ops, "crash!") + self.raises(BogusPureField, self.optimize_loop, ops, "crash!") def test_dont_complains_different_field(self): ops = """ @@ -5024,7 +5027,7 @@ i2 = int_add(i0, 3) jump(i2) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_ne_const_not(self): ops = """ @@ -5074,7 +5077,7 @@ i3 = int_add(i0, 3) jump(i3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + 
self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_lshift(self): ops = """ @@ -6533,9 +6536,9 @@ def test_quasi_immut_2(self): ops = """ [] - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i1 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i1 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) escape(i1) jump() """ @@ -6585,13 +6588,13 @@ def test_call_may_force_invalidated_guards_reload(self): ops = """ [i0a, i0b] - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i1 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i1 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) call_may_force(i0b, descr=mayforcevirtdescr) - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i2 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i2 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) i3 = escape(i1) i4 = escape(i2) jump(i3, i4) @@ -7813,6 +7816,52 @@ """ self.optimize_loop(ops, expected) + def test_issue1080_infinitie_loop_virtual(self): + ops = """ + [p10] + p52 = getfield_gc(p10, descr=nextdescr) # inst_storage + p54 = getarrayitem_gc(p52, 0, descr=arraydescr) + p69 = getfield_gc_pure(p54, descr=otherdescr) # inst_w_function + + quasiimmut_field(p69, descr=quasiimmutdescr) + guard_not_invalidated() [] + p71 = getfield_gc(p69, descr=quasifielddescr) # inst_code + guard_value(p71, -4247) [] + + p106 = new_with_vtable(ConstClass(node_vtable)) + p108 = new_array(3, descr=arraydescr) + p110 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p110, ConstPtr(myptr2), descr=otherdescr) # inst_w_function + setarrayitem_gc(p108, 0, p110, descr=arraydescr) + setfield_gc(p106, p108, descr=nextdescr) # inst_storage + jump(p106) + """ + 
expected = """ + [] + p72 = getfield_gc(ConstPtr(myptr2), descr=quasifielddescr) + guard_value(p72, -4247) [] + jump() + """ + self.optimize_loop(ops, expected) + + + def test_issue1080_infinitie_loop_simple(self): + ops = """ + [p69] + quasiimmut_field(p69, descr=quasiimmutdescr) + guard_not_invalidated() [] + p71 = getfield_gc(p69, descr=quasifielddescr) # inst_code + guard_value(p71, -4247) [] + jump(ConstPtr(myptr)) + """ + expected = """ + [] + p72 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + guard_value(p72, -4247) [] + jump() + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -122,6 +122,7 @@ quasi.inst_field = -4247 quasifielddescr = cpu.fielddescrof(QUASI, 'inst_field') quasibox = BoxPtr(lltype.cast_opaque_ptr(llmemory.GCREF, quasi)) + quasiptr = quasibox.value quasiimmutdescr = QuasiImmutDescr(cpu, quasibox, quasifielddescr, cpu.fielddescrof(QUASI, 'mutate_field')) diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -315,7 +315,10 @@ try: jumpargs = virtual_state.make_inputargs(values, self.optimizer) except BadVirtualState: - raise InvalidLoop + raise InvalidLoop('The state of the optimizer at the end of the ' + + 'peeled loop is inconsistent with the ' + + 'VirtualState at the beginning of the peeled ' + + 'loop') jumpop.initarglist(jumpargs) # Inline the short preamble at the end of the loop @@ -325,7 +328,11 @@ for i in range(len(short_inputargs)): if short_inputargs[i] in args: if args[short_inputargs[i]] != jmp_to_short_args[i]: - raise InvalidLoop + raise InvalidLoop('The short preamble wants the ' + + 'same box passed to 
multiple of its ' + + 'inputargs, but the jump at the ' + + 'end of this bridge does not do that.') + args[short_inputargs[i]] = jmp_to_short_args[i] self.short_inliner = Inliner(short_inputargs, jmp_to_short_args) for op in self.short[1:]: @@ -378,7 +385,10 @@ #final_virtual_state.debug_print("Bad virtual state at end of loop, ", # bad) #debug_stop('jit-log-virtualstate') - raise InvalidLoop + raise InvalidLoop('The virtual state at the end of the peeled ' + + 'loop is not compatible with the virtual ' + + 'state at the start of the loop which makes ' + + 'it impossible to close the loop') #debug_stop('jit-log-virtualstate') @@ -526,8 +536,8 @@ args = jumpop.getarglist() modifier = VirtualStateAdder(self.optimizer) virtual_state = modifier.get_virtual_state(args) - #debug_start('jit-log-virtualstate') - #virtual_state.debug_print("Looking for ") + debug_start('jit-log-virtualstate') + virtual_state.debug_print("Looking for ") for target in cell_token.target_tokens: if not target.virtual_state: @@ -536,10 +546,10 @@ extra_guards = [] bad = {} - #debugmsg = 'Did not match ' + debugmsg = 'Did not match ' if target.virtual_state.generalization_of(virtual_state, bad): ok = True - #debugmsg = 'Matched ' + debugmsg = 'Matched ' else: try: cpu = self.optimizer.cpu @@ -548,13 +558,13 @@ extra_guards) ok = True - #debugmsg = 'Guarded to match ' + debugmsg = 'Guarded to match ' except InvalidLoop: pass - #target.virtual_state.debug_print(debugmsg, bad) + target.virtual_state.debug_print(debugmsg, bad) if ok: - #debug_stop('jit-log-virtualstate') + debug_stop('jit-log-virtualstate') values = [self.getvalue(arg) for arg in jumpop.getarglist()] @@ -581,7 +591,7 @@ jumpop.setdescr(cell_token.target_tokens[0]) self.optimizer.send_extra_operation(jumpop) return True - #debug_stop('jit-log-virtualstate') + debug_stop('jit-log-virtualstate') return False class ValueImporter(object): diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py 
b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -27,11 +27,15 @@ if self.generalization_of(other, renum, {}): return if renum[self.position] != other.position: - raise InvalidLoop + raise InvalidLoop('The numbering of the virtual states does not ' + + 'match. This means that two virtual fields ' + + 'have been set to the same Box in one of the ' + + 'virtual states but not in the other.') self._generate_guards(other, box, cpu, extra_guards) def _generate_guards(self, other, box, cpu, extra_guards): - raise InvalidLoop + raise InvalidLoop('Generating guards for making the VirtualStates ' + + 'at hand match has not been implemented') def enum_forced_boxes(self, boxes, value, optimizer): raise NotImplementedError @@ -346,10 +350,12 @@ def _generate_guards(self, other, box, cpu, extra_guards): if not isinstance(other, NotVirtualStateInfo): - raise InvalidLoop + raise InvalidLoop('The VirtualStates do not match as a ' + + 'virtual appears where a pointer is needed ' + + 'and it is too late to force it.') if self.lenbound or other.lenbound: - raise InvalidLoop + raise InvalidLoop('The array length bounds do not match.') if self.level == LEVEL_KNOWNCLASS and \ box.nonnull() and \ @@ -400,7 +406,8 @@ return # Remaining cases are probably not interesting - raise InvalidLoop + raise InvalidLoop('Generating guards for making the VirtualStates ' + + 'at hand match has not been implemented') if self.level == LEVEL_CONSTANT: import pdb; pdb.set_trace() raise NotImplementedError diff --git a/pypy/jit/metainterp/quasiimmut.py b/pypy/jit/metainterp/quasiimmut.py --- a/pypy/jit/metainterp/quasiimmut.py +++ b/pypy/jit/metainterp/quasiimmut.py @@ -120,8 +120,10 @@ self.fielddescr, self.structbox) return fieldbox.constbox() - def is_still_valid(self): + def is_still_valid_for(self, structconst): assert self.structbox is not None + if not 
self.structbox.constbox().same_constant(structconst): + return False cpu = self.cpu gcref = self.structbox.getref_base() qmut = get_current_qmut_instance(cpu, gcref, self.mutatefielddescr) diff --git a/pypy/jit/metainterp/test/test_quasiimmut.py b/pypy/jit/metainterp/test/test_quasiimmut.py --- a/pypy/jit/metainterp/test/test_quasiimmut.py +++ b/pypy/jit/metainterp/test/test_quasiimmut.py @@ -8,7 +8,7 @@ from pypy.jit.metainterp.quasiimmut import get_current_qmut_instance from pypy.jit.metainterp.test.support import LLJitMixin from pypy.jit.codewriter.policy import StopAtXPolicy -from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe +from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe, promote def test_get_current_qmut_instance(): @@ -506,6 +506,27 @@ "guard_not_invalidated": 2 }) + def test_issue1080(self): + myjitdriver = JitDriver(greens=[], reds=["n", "sa", "a"]) + class Foo(object): + _immutable_fields_ = ["x?"] + def __init__(self, x): + self.x = x + one, two = Foo(1), Foo(2) + def main(n): + sa = 0 + a = one + while n: + myjitdriver.jit_merge_point(n=n, sa=sa, a=a) + sa += a.x + if a.x == 1: + a = two + elif a.x == 2: + a = one + n -= 1 + return sa + res = self.meta_interp(main, [10]) + assert res == main(10) class TestLLtypeGreenFieldsTests(QuasiImmutTests, LLJitMixin): pass diff --git a/pypy/module/cpyext/unicodeobject.py b/pypy/module/cpyext/unicodeobject.py --- a/pypy/module/cpyext/unicodeobject.py +++ b/pypy/module/cpyext/unicodeobject.py @@ -420,7 +420,8 @@ NULL, the return value might be a shared object. 
Therefore, modification of the resulting Unicode object is only allowed when u is NULL.""" if s: - return make_ref(space, PyUnicode_DecodeUTF8(space, s, size, None)) + return make_ref(space, PyUnicode_DecodeUTF8( + space, s, size, lltype.nullptr(rffi.CCHARP.TO))) else: return rffi.cast(PyObject, new_empty_unicode(space, size)) diff --git a/pypy/module/rctime/test/test_rctime.py b/pypy/module/rctime/test/test_rctime.py --- a/pypy/module/rctime/test/test_rctime.py +++ b/pypy/module/rctime/test/test_rctime.py @@ -64,6 +64,7 @@ def test_localtime(self): import time as rctime + import os raises(TypeError, rctime.localtime, "foo") rctime.localtime() rctime.localtime(None) @@ -75,6 +76,10 @@ assert 0 <= (t1 - t0) < 1.2 t = rctime.time() assert rctime.localtime(t) == rctime.localtime(t) + if os.name == 'nt': + raises(ValueError, rctime.localtime, -1) + else: + rctime.localtime(-1) def test_mktime(self): import time as rctime @@ -108,8 +113,8 @@ assert int(rctime.mktime(rctime.gmtime(t))) - rctime.timezone == int(t) ltime = rctime.localtime() assert rctime.mktime(tuple(ltime)) == rctime.mktime(ltime) - - assert rctime.mktime(rctime.localtime(-1)) == -1 + if os.name != 'nt': + assert rctime.mktime(rctime.localtime(-1)) == -1 def test_asctime(self): import time as rctime diff --git a/pypy/module/select/__init__.py b/pypy/module/select/__init__.py --- a/pypy/module/select/__init__.py +++ b/pypy/module/select/__init__.py @@ -2,6 +2,7 @@ from pypy.interpreter.mixedmodule import MixedModule import sys +import os class Module(MixedModule): @@ -9,11 +10,13 @@ } interpleveldefs = { - 'poll' : 'interp_select.poll', 'select': 'interp_select.select', 'error' : 'space.fromcache(interp_select.Cache).w_error' } + if os.name =='posix': + interpleveldefs['poll'] = 'interp_select.poll' + if sys.platform.startswith('linux'): interpleveldefs['epoll'] = 'interp_epoll.W_Epoll' from pypy.module.select.interp_epoll import cconfig, public_symbols diff --git a/pypy/module/select/test/test_select.py 
b/pypy/module/select/test/test_select.py --- a/pypy/module/select/test/test_select.py +++ b/pypy/module/select/test/test_select.py @@ -214,6 +214,8 @@ def test_poll(self): import select + if not hasattr(select, 'poll'): + skip("no select.poll() on this platform") readend, writeend = self.getpair() try: class A(object): diff --git a/pypy/rlib/debug.py b/pypy/rlib/debug.py --- a/pypy/rlib/debug.py +++ b/pypy/rlib/debug.py @@ -1,10 +1,12 @@ import sys, time from pypy.rpython.extregistry import ExtRegistryEntry +from pypy.rlib.objectmodel import we_are_translated from pypy.rlib.rarithmetic import is_valid_int def ll_assert(x, msg): """After translation to C, this becomes an RPyAssert.""" + assert type(x) is bool, "bad type! got %r" % (type(x),) assert x, msg class Entry(ExtRegistryEntry): @@ -21,8 +23,13 @@ hop.exception_cannot_occur() hop.genop('debug_assert', vlist) +class FatalError(Exception): + pass + def fatalerror(msg): # print the RPython traceback and abort with a fatal error + if not we_are_translated(): + raise FatalError(msg) from pypy.rpython.lltypesystem import lltype from pypy.rpython.lltypesystem.lloperation import llop llop.debug_print_traceback(lltype.Void) @@ -33,6 +40,8 @@ def fatalerror_notb(msg): # a variant of fatalerror() that doesn't print the RPython traceback + if not we_are_translated(): + raise FatalError(msg) from pypy.rpython.lltypesystem import lltype from pypy.rpython.lltypesystem.lloperation import llop llop.debug_fatalerror(lltype.Void, msg) diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -916,7 +916,7 @@ ll_assert(not self.is_in_nursery(obj), "object in nursery after collection") # similarily, all objects should have this flag: - ll_assert(self.header(obj).tid & GCFLAG_TRACK_YOUNG_PTRS, + ll_assert(self.header(obj).tid & GCFLAG_TRACK_YOUNG_PTRS != 0, "missing GCFLAG_TRACK_YOUNG_PTRS") # the GCFLAG_VISITED should 
not be set between collections ll_assert(self.header(obj).tid & GCFLAG_VISITED == 0, diff --git a/pypy/rpython/memory/gc/semispace.py b/pypy/rpython/memory/gc/semispace.py --- a/pypy/rpython/memory/gc/semispace.py +++ b/pypy/rpython/memory/gc/semispace.py @@ -640,7 +640,7 @@ between collections.""" tid = self.header(obj).tid if tid & GCFLAG_EXTERNAL: - ll_assert(tid & GCFLAG_FORWARDED, "bug: external+!forwarded") + ll_assert(tid & GCFLAG_FORWARDED != 0, "bug: external+!forwarded") ll_assert(not (self.tospace <= obj < self.free), "external flag but object inside the semispaces") else: diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -8,7 +8,6 @@ from pypy.rpython.memory.gcheader import GCHeaderBuilder from pypy.rlib.rarithmetic import ovfcheck from pypy.rlib import rgc -from pypy.rlib.debug import ll_assert from pypy.rlib.objectmodel import we_are_translated from pypy.translator.backendopt import graphanalyze from pypy.translator.backendopt.support import var_needsgc From noreply at buildbot.pypy.org Wed May 2 11:34:11 2012 From: noreply at buildbot.pypy.org (timo_jbo) Date: Wed, 2 May 2012 11:34:11 +0200 (CEST) Subject: [pypy-commit] pypy numpypy-issue1137: make setup_class unnecessary again. Message-ID: <20120502093411.55A5182F50@wyvern.cs.uni-duesseldorf.de> Author: Timo Paulssen Branch: numpypy-issue1137 Changeset: r54869:69e9f3547488 Date: 2012-05-02 11:33 +0200 http://bitbucket.org/pypy/pypy/changeset/69e9f3547488/ Log: make setup_class unnecessary again. 
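The ``w_CustomIndexObject`` helper moved around in the diff below builds on Python's ``__index__`` protocol. As a minimal, plain-Python illustration of that protocol (a list stands in for a numpypy array; no numpypy install is assumed):

```python
class CustomIndexObject(object):
    """Any object implementing __index__ can be used as a sequence index."""
    def __init__(self, index):
        self.index = index

    def __index__(self):
        return self.index

a = [10, 20, 30]
assert a[CustomIndexObject(1)] == 20   # subscripting calls __index__
assert a[CustomIndexObject(-1)] == 30  # negative indices work unchanged
```

This is why the tests can index arrays with such objects at all: the interpreter converts them through ``__index__`` before doing the lookup.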
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -194,17 +194,16 @@ assert _to_coords(13, 'F') == [1, 0, 2] class AppTestNumArray(BaseNumpyAppTest): - def setup_class(cls): - super(AppTestNumArray, cls).setup_class() - - w_tup = cls.space.appexec([], """(): - + def w_CustomIndexObject(self, index): class CustomIndexObject(object): def __init__(self, index): self.index = index def __index__(self): return self.index + return CustomIndexObject(index) + + def w_CustomIndexIntObject(self, index, value): class CustomIndexIntObject(object): def __init__(self, index, value): self.index = index @@ -214,18 +213,16 @@ def __int__(self): return self.value + return CustomIndexIntObject(index, value) + + def w_CustomIntObject(self, value): class CustomIntObject(object): def __init__(self, value): self.value = value def __index__(self): return self.value - return CustomIndexObject, CustomIndexIntObject, CustomIntObject""") - - tup = cls.space.unpackiterable(w_tup) - cls.w_CustomIndexObject = tup[0] - cls.w_CustomIndexIntObject = tup[1] - cls.w_CustomIntObject = tup[2] + return CustomIntObject(value) def test_ndarray(self): from _numpypy import ndarray, array, dtype From noreply at buildbot.pypy.org Wed May 2 15:42:54 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Wed, 2 May 2012 15:42:54 +0200 (CEST) Subject: [pypy-commit] pypy stdlib-unification: merge from default Message-ID: <20120502134254.AF1CE82F50@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: stdlib-unification Changeset: r54870:edc8d0961161 Date: 2012-05-02 15:40 +0200 http://bitbucket.org/pypy/pypy/changeset/edc8d0961161/ Log: merge from default diff too long, truncating to 10000 out of 16100 lines diff --git a/lib-python/2.7/test/test_peepholer.py b/lib-python/2.7/test/test_peepholer.py --- 
a/lib-python/2.7/test/test_peepholer.py +++ b/lib-python/2.7/test/test_peepholer.py @@ -145,12 +145,15 @@ def test_binary_subscr_on_unicode(self): # valid code get optimized - asm = dis_single('u"foo"[0]') - self.assertIn("(u'f')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) - asm = dis_single('u"\u0061\uffff"[1]') - self.assertIn("(u'\\uffff')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) + # XXX for now we always disable this optimization + # XXX see CPython's issue5057 + if 0: + asm = dis_single('u"foo"[0]') + self.assertIn("(u'f')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + asm = dis_single('u"\u0061\uffff"[1]') + self.assertIn("(u'\\uffff')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) # invalid code doesn't get optimized # out of range diff --git a/lib_pypy/_ctypes/builtin.py b/lib_pypy/_ctypes/builtin.py --- a/lib_pypy/_ctypes/builtin.py +++ b/lib_pypy/_ctypes/builtin.py @@ -3,7 +3,8 @@ try: from thread import _local as local except ImportError: - local = object # no threads + class local(object): # no threads + pass class ConvMode: encoding = 'ascii' diff --git a/lib_pypy/_ctypes_test.py b/lib_pypy/_ctypes_test.py --- a/lib_pypy/_ctypes_test.py +++ b/lib_pypy/_ctypes_test.py @@ -21,7 +21,7 @@ # Compile .c file include_dir = os.path.join(thisdir, '..', 'include') if sys.platform == 'win32': - ccflags = [] + ccflags = ['-D_CRT_SECURE_NO_WARNINGS'] else: ccflags = ['-fPIC'] res = compiler.compile([os.path.join(thisdir, '_ctypes_test.c')], @@ -34,6 +34,13 @@ if sys.platform == 'win32': # XXX libpypy-c.lib is currently not installed automatically library = os.path.join(thisdir, '..', 'include', 'libpypy-c') + if not os.path.exists(library + '.lib'): + #For a nightly build + library = os.path.join(thisdir, '..', 'include', 'python27') + if not os.path.exists(library + '.lib'): + # For a local translation + library = os.path.join(thisdir, '..', 'pypy', 'translator', + 'goal', 'libpypy-c') libraries = [library, 'oleaut32'] extra_ldargs = ['/MANIFEST'] # 
needed for VC10 else: diff --git a/lib_pypy/_testcapi.py b/lib_pypy/_testcapi.py --- a/lib_pypy/_testcapi.py +++ b/lib_pypy/_testcapi.py @@ -16,7 +16,7 @@ # Compile .c file include_dir = os.path.join(thisdir, '..', 'include') if sys.platform == 'win32': - ccflags = [] + ccflags = ['-D_CRT_SECURE_NO_WARNINGS'] else: ccflags = ['-fPIC', '-Wimplicit-function-declaration'] res = compiler.compile([os.path.join(thisdir, '_testcapimodule.c')], @@ -29,6 +29,13 @@ if sys.platform == 'win32': # XXX libpypy-c.lib is currently not installed automatically library = os.path.join(thisdir, '..', 'include', 'libpypy-c') + if not os.path.exists(library + '.lib'): + #For a nightly build + library = os.path.join(thisdir, '..', 'include', 'python27') + if not os.path.exists(library + '.lib'): + # For a local translation + library = os.path.join(thisdir, '..', 'pypy', 'translator', + 'goal', 'libpypy-c') libraries = [library, 'oleaut32'] extra_ldargs = ['/MANIFEST', # needed for VC10 '/EXPORT:init_testcapi'] diff --git a/lib_pypy/numpypy/core/fromnumeric.py b/lib_pypy/numpypy/core/fromnumeric.py --- a/lib_pypy/numpypy/core/fromnumeric.py +++ b/lib_pypy/numpypy/core/fromnumeric.py @@ -411,7 +411,8 @@ [3, 7]]]) """ - raise NotImplementedError('Waiting on interp level method') + swapaxes = a.swapaxes + return swapaxes(axis1, axis2) def transpose(a, axes=None): diff --git a/lib_pypy/pyrepl/reader.py b/lib_pypy/pyrepl/reader.py --- a/lib_pypy/pyrepl/reader.py +++ b/lib_pypy/pyrepl/reader.py @@ -152,8 +152,8 @@ (r'\', 'delete'), (r'\', 'backspace'), (r'\M-\', 'backward-kill-word'), - (r'\', 'end'), - (r'\', 'home'), + (r'\', 'end-of-line'), # was 'end' + (r'\', 'beginning-of-line'), # was 'home' (r'\', 'help'), (r'\EOF', 'end'), # the entries in the terminfo database for xterms (r'\EOH', 'home'), # seem to be wrong. 
this is a less than ideal diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -229,8 +229,8 @@ return thing elif hasattr(thing, '__name__'): # mostly types and functions return thing.__name__ - elif hasattr(thing, 'name'): # mostly ClassDescs - return thing.name + elif hasattr(thing, 'name') and isinstance(thing.name, str): + return thing.name # mostly ClassDescs elif isinstance(thing, tuple): return '_'.join(map(nameof, thing)) else: diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -320,10 +320,14 @@ default=False), BoolOption("getattributeshortcut", "track types that override __getattribute__", - default=False), + default=False, + # weakrefs needed, because of get_subclasses() + requires=[("translation.rweakref", True)]), BoolOption("newshortcut", "cache and shortcut calling __new__ from builtin types", - default=False), + default=False, + # weakrefs needed, because of get_subclasses() + requires=[("translation.rweakref", True)]), BoolOption("logspaceoptypes", "a instrumentation option: before exit, print the types seen by " @@ -337,7 +341,9 @@ requires=[("objspace.std.builtinshortcut", True)]), BoolOption("withidentitydict", "track types that override __hash__, __eq__ or __cmp__ and use a special dict strategy for those which do not", - default=False), + default=False, + # weakrefs needed, because of get_subclasses() + requires=[("translation.rweakref", True)]), ]), ]) diff --git a/pypy/doc/cppyy.rst b/pypy/doc/cppyy.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/cppyy.rst @@ -0,0 +1,579 @@ +============================ +cppyy: C++ bindings for PyPy +============================ + +The cppyy module provides C++ bindings for PyPy by using the reflection +information extracted from C++ header files by means of the +`Reflex package`_. 
+For this to work, you have to both install Reflex and build PyPy from the +reflex-support branch. +As indicated by this being a branch, support for Reflex is still +experimental. +However, it is functional enough to put it in the hands of those who want +to give it a try. +In the medium term, cppyy will move away from Reflex and instead use +`cling`_ as its backend, which is based on `llvm`_. +Although that will change the logistics on the generation of reflection +information, it will not change the python-side interface. + +.. _`Reflex package`: http://root.cern.ch/drupal/content/reflex +.. _`cling`: http://root.cern.ch/drupal/content/cling +.. _`llvm`: http://llvm.org/ + + +Motivation +========== + +The cppyy module offers two unique features, which result in great +performance as well as better functionality and cross-language integration +than would otherwise be possible. +First, cppyy is written in RPython and therefore open to optimizations by the +JIT up until the actual point of call into C++. +This means that there are no conversions necessary between a garbage collected +and a reference counted environment, as is needed for the use of existing +extension modules written or generated for CPython. +It also means that if variables are already unboxed by the JIT, they can be +passed through directly to C++. +Second, Reflex (and cling far more so) adds dynamic features to C++, thus +greatly reducing impedance mismatches between the two languages. +In fact, Reflex is dynamic enough that you could write the runtime bindings +generation in python (as opposed to RPython) and this is used to create very +natural "pythonizations" of the bound code. + + +Installation +============ + +For now, the easiest way of getting the latest version of Reflex, is by +installing the ROOT package. +Besides getting the latest version of Reflex, another advantage is that with +the full ROOT package, you can also use your Reflex-bound code on `CPython`_. 
+`Download`_ a binary or install from `source`_. +Some Linux and Mac systems may have ROOT provided in the list of scientific +software of their packager. +A current, standalone version of Reflex should be provided at some point, +once the dependencies and general packaging have been thought out. +Also, make sure you have a version of `gccxml`_ installed, which is most +easily provided by the packager of your system. +If you read up on gccxml, you'll probably notice that it is no longer being +developed and hence will not provide C++11 support. +That's why the medium term plan is to move to `cling`_. + +.. _`Download`: http://root.cern.ch/drupal/content/downloading-root +.. _`source`: http://root.cern.ch/drupal/content/installing-root-source +.. _`gccxml`: http://www.gccxml.org + +Next, get the `PyPy sources`_, select the reflex-support branch, and build +pypy-c. +For the build to succeed, the ``$ROOTSYS`` environment variable must point to +the location of your ROOT installation:: + + $ hg clone https://bitbucket.org/pypy/pypy + $ cd pypy + $ hg up reflex-support + $ cd pypy/translator/goal + $ python translate.py -O jit --gcrootfinder=shadowstack targetpypystandalone.py --withmod-cppyy + +This will build a ``pypy-c`` that includes the cppyy module, and through that, +Reflex support. +Of course, if you already have a pre-built version of the ``pypy`` interpreter, +you can use that for the translation rather than ``python``. + +.. _`PyPy sources`: https://bitbucket.org/pypy/pypy/overview + + +Basic example +============= + +Now test with a trivial example whether all packages are properly installed +and functional. 
+First, create a C++ header file with some class in it (note that all functions +are made inline for convenience; a real-world example would of course have a +corresponding source file):: + + $ cat MyClass.h + class MyClass { + public: + MyClass(int i = -99) : m_myint(i) {} + + int GetMyInt() { return m_myint; } + void SetMyInt(int i) { m_myint = i; } + + public: + int m_myint; + }; + +Then, generate the bindings using ``genreflex`` (part of ROOT), and compile the +code:: + + $ genreflex MyClass.h + $ g++ -fPIC -rdynamic -O2 -shared -I$ROOTSYS/include MyClass_rflx.cpp -o libMyClassDict.so + +Now you're ready to use the bindings. +Since the bindings are designed to look pythonistic, it should be +straightforward:: + + $ pypy-c + >>>> import cppyy + >>>> cppyy.load_reflection_info("libMyClassDict.so") + + >>>> myinst = cppyy.gbl.MyClass(42) + >>>> print myinst.GetMyInt() + 42 + >>>> myinst.SetMyInt(33) + >>>> print myinst.m_myint + 33 + >>>> myinst.m_myint = 77 + >>>> print myinst.GetMyInt() + 77 + >>>> help(cppyy.gbl.MyClass) # shows that normal python introspection works + +That's all there is to it! 
+
+
+Advanced example
+================
+The following snippet of C++ is very contrived, to allow showing that such
+pathological code can be handled and to show how certain features play out in
+practice::
+
+    $ cat MyAdvanced.h
+    #include <string>
+
+    class Base1 {
+    public:
+        Base1(int i) : m_i(i) {}
+        virtual ~Base1() {}
+        int m_i;
+    };
+
+    class Base2 {
+    public:
+        Base2(double d) : m_d(d) {}
+        virtual ~Base2() {}
+        double m_d;
+    };
+
+    class C;
+
+    class Derived : public virtual Base1, public virtual Base2 {
+    public:
+        Derived(const std::string& name, int i, double d) : Base1(i), Base2(d), m_name(name) {}
+        virtual C* gimeC() { return (C*)0; }
+        std::string m_name;
+    };
+
+    Base1* BaseFactory(const std::string& name, int i, double d) {
+        return new Derived(name, i, d);
+    }
+
+This code is still only in a header file, with all functions inline, for
+convenience of the example.
+If the implementations live in a separate source file or shared library, the
+only change needed is to link those in when building the reflection library.
+
+If you were to run ``genreflex`` like above in the basic example, you will
+find that not all classes of interest will be reflected, nor will be the
+global factory function.
+In particular, ``std::string`` will be missing, since it is not defined in
+this header file, but in a header file that is included.
+In practical terms, general classes such as ``std::string`` should live in a
+core reflection set, but for the moment assume we want to have it in the
+reflection library that we are building for this example.
+
+The ``genreflex`` script can be steered using a so-called `selection file`_,
+which is a simple XML file specifying, either explicitly or by using a
+pattern, which classes, variables, namespaces, etc. to select from the given
+header file.
+With the aid of a selection file, a large project can be easily managed:
+simply ``#include`` all relevant headers into a single header file that is
+handed to ``genreflex``.
+Then, apply a selection file to pick up all the relevant classes.
+For our purposes, the following rather straightforward selection will do
+(the name ``lcgdict`` for the root is historical, but required)::
+
+    $ cat MyAdvanced.xml
+    <lcgdict>
+        <class pattern="Base?" />
+        <class name="Derived" />
+        <class name="std::string" />
+        <function name="BaseFactory" />
+    </lcgdict>
+
+.. _`selection file`: http://root.cern.ch/drupal/content/generating-reflex-dictionaries
+
+Now the reflection info can be generated and compiled::
+
+    $ genreflex MyAdvanced.h --selection=MyAdvanced.xml
+    $ g++ -fPIC -rdynamic -O2 -shared -I$ROOTSYS/include MyAdvanced_rflx.cpp -o libAdvExDict.so
+
+and subsequently be used from PyPy::
+
+    >>>> import cppyy
+    >>>> cppyy.load_reflection_info("libAdvExDict.so")
+    <CPPLibrary object at 0xb6fd7c4c>
+    >>>> d = cppyy.gbl.BaseFactory("name", 42, 3.14)
+    >>>> type(d)
+    <class '__main__.Derived'>
+    >>>> isinstance(d, cppyy.gbl.Base1)
+    True
+    >>>> isinstance(d, cppyy.gbl.Base2)
+    True
+    >>>> d.m_i, d.m_d
+    (42, 3.14)
+    >>>> d.m_name == "name"
+    True
+    >>>>
+
+Again, that's all there is to it!
+
+A couple of things to note, though.
+If you look back at the C++ definition of the ``BaseFactory`` function,
+you will see that it declares the return type to be a ``Base1``, yet the
+bindings return an object of the actual type ``Derived``.
+This choice is made for a couple of reasons.
+First, it makes method dispatching easier: if bound objects are always their
+most derived type, then it is easy to calculate any offsets, if necessary.
+Second, it makes memory management easier: the combination of the type and
+the memory address uniquely identifies an object.
+That way, it can be recycled and object identity can be maintained if it is
+entered as a function argument into C++ and comes back to PyPy as a return
+value.
+Last, but not least, casting is decidedly unpythonistic.
+By always providing the most derived type known, casting becomes unnecessary.
+For example, the data member of ``Base2`` is simply directly available.
+Note also that the unreflected ``gimeC`` method of ``Derived`` does not
+preclude its use.
+It is only the ``gimeC`` method that is unusable as long as class ``C`` is +unknown to the system. + + +Features +======== + +The following is not meant to be an exhaustive list, since cppyy is still +under active development. +Furthermore, the intention is that every feature is as natural as possible on +the python side, so if you find something missing in the list below, simply +try it out. +It is not always possible to provide exact mapping between python and C++ +(active memory management is one such case), but by and large, if the use of a +feature does not strike you as obvious, it is more likely to simply be a bug. +That is a strong statement to make, but also a worthy goal. + +* **abstract classes**: Are represented as python classes, since they are + needed to complete the inheritance hierarchies, but will raise an exception + if an attempt is made to instantiate from them. + +* **arrays**: Supported for builtin data types only, as used from module + ``array``. + Out-of-bounds checking is limited to those cases where the size is known at + compile time (and hence part of the reflection info). + +* **builtin data types**: Map onto the expected equivalent python types, with + the caveat that there may be size differences, and thus it is possible that + exceptions are raised if an overflow is detected. + +* **casting**: Is supposed to be unnecessary. + Object pointer returns from functions provide the most derived class known + in the hierarchy of the object being returned. + This is important to preserve object identity as well as to make casting, + a pure C++ feature after all, superfluous. + +* **classes and structs**: Get mapped onto python classes, where they can be + instantiated as expected. + If classes are inner classes or live in a namespace, their naming and + location will reflect that. + +* **data members**: Public data members are represented as python properties + and provide read and write access on instances as expected. 
+ +* **default arguments**: C++ default arguments work as expected, but python + keywords are not supported. + It is technically possible to support keywords, but for the C++ interface, + the formal argument names have no meaning and are not considered part of the + API, hence it is not a good idea to use keywords. + +* **doc strings**: The doc string of a method or function contains the C++ + arguments and return types of all overloads of that name, as applicable. + +* **enums**: Are translated as ints with no further checking. + +* **functions**: Work as expected and live in their appropriate namespace + (which can be the global one, ``cppyy.gbl``). + +* **inheritance**: All combinations of inheritance on the C++ (single, + multiple, virtual) are supported in the binding. + However, new python classes can only use single inheritance from a bound C++ + class. + Multiple inheritance would introduce two "this" pointers in the binding. + This is a current, not a fundamental, limitation. + The C++ side will not see any overridden methods on the python side, as + cross-inheritance is planned but not yet supported. + +* **methods**: Are represented as python methods and work as expected. + They are first class objects and can be bound to an instance. + Virtual C++ methods work as expected. + To select a specific virtual method, do like with normal python classes + that override methods: select it from the class that you need, rather than + calling the method on the instance. + To select a specific overload, use the __dispatch__ special function, which + takes the name of the desired method and its signature (which can be + obtained from the doc string) as arguments. + +* **namespaces**: Are represented as python classes. + Namespaces are more open-ended than classes, so sometimes initial access may + result in updates as data and functions are looked up and constructed + lazily. 
+ Thus the result of ``dir()`` on a namespace should not be relied upon: it + only shows the already accessed members. (TODO: to be fixed by implementing + __dir__.) + The global namespace is ``cppyy.gbl``. + +* **operator conversions**: If defined in the C++ class and a python + equivalent exists (i.e. all builtin integer and floating point types, as well + as ``bool``), it will map onto that python conversion. + Note that ``char*`` is mapped onto ``__str__``. + +* **operator overloads**: If defined in the C++ class and if a python + equivalent is available (not always the case, think e.g. of ``operator||``), + then they work as expected. + Special care needs to be taken for global operator overloads in C++: first, + make sure that they are actually reflected, especially for the global + overloads for ``operator==`` and ``operator!=`` of STL iterators in the case + of gcc. + Second, make sure that reflection info is loaded in the proper order. + I.e. that these global overloads are available before use. + +* **pointers**: For builtin data types, see arrays. + For objects, a pointer to an object and an object looks the same, unless + the pointer is a data member. + In that case, assigning to the data member will cause a copy of the pointer + and care should be taken about the object's life time. + If a pointer is a global variable, the C++ side can replace the underlying + object and the python side will immediately reflect that. + +* **static data members**: Are represented as python property objects on the + class and the meta-class. + Both read and write access is as expected. + +* **static methods**: Are represented as python's ``staticmethod`` objects + and can be called both from the class as well as from instances. + +* **strings**: The std::string class is considered a builtin C++ type and + mixes quite well with python's str. 
+ Python's str can be passed where a ``const char*`` is expected, and an str + will be returned if the return type is ``const char*``. + +* **templated classes**: Are represented in a meta-class style in python. + This looks a little bit confusing, but conceptually is rather natural. + For example, given the class ``std::vector``, the meta-class part would + be ``std.vector`` in python. + Then, to get the instantiation on ``int``, do ``std.vector(int)`` and to + create an instance of that class, do ``std.vector(int)()``. + Note that templates can be build up by handing actual types to the class + instantiation (as done in this vector example), or by passing in the list of + template arguments as a string. + The former is a lot easier to work with if you have template instantiations + using classes that themselves are templates (etc.) in the arguments. + All classes must already exist in the loaded reflection info. + +* **typedefs**: Are simple python references to the actual classes to which + they refer. + +* **unary operators**: Are supported if a python equivalent exists, and if the + operator is defined in the C++ class. + +You can always find more detailed examples and see the full of supported +features by looking at the tests in pypy/module/cppyy/test. + +If a feature or reflection info is missing, this is supposed to be handled +gracefully. +In fact, there are unit tests explicitly for this purpose (even as their use +becomes less interesting over time, as the number of missing features +decreases). +Only when a missing feature is used, should there be an exception. +For example, if no reflection info is available for a return type, then a +class that has a method with that return type can still be used. +Only that one specific method can not be used. + + +Templates +========= + +A bit of special care needs to be taken for the use of templates. 
+For a templated class to be completely available, it must be guaranteed that +said class is fully instantiated, and hence all executable C++ code is +generated and compiled in. +The easiest way to fulfill that guarantee is by explicit instantiation in the +header file that is handed to ``genreflex``. +The following example should make that clear:: + + $ cat MyTemplate.h + #include + + class MyClass { + public: + MyClass(int i = -99) : m_i(i) {} + MyClass(const MyClass& s) : m_i(s.m_i) {} + MyClass& operator=(const MyClass& s) { m_i = s.m_i; return *this; } + ~MyClass() {} + int m_i; + }; + + template class std::vector; + +If you know for certain that all symbols will be linked in from other sources, +you can also declare the explicit template instantiation ``extern``. + +Unfortunately, this is not enough for gcc. +The iterators, if they are going to be used, need to be instantiated as well, +as do the comparison operators on those iterators, as these live in an +internal namespace, rather than in the iterator classes. +One way to handle this, is to deal with this once in a macro, then reuse that +macro for all ``vector`` classes. 
+Thus, the header above needs this, instead of just the explicit instantiation
+of the ``vector``::
+
+    #define STLTYPES_EXPLICIT_INSTANTIATION_DECL(STLTYPE, TTYPE) \
+    template class std::STLTYPE< TTYPE >; \
+    template class __gnu_cxx::__normal_iterator<TTYPE*, std::STLTYPE< TTYPE > >; \
+    template class __gnu_cxx::__normal_iterator<const TTYPE*, std::STLTYPE< TTYPE > >; \
+    namespace __gnu_cxx { \
+    template bool operator==(const std::STLTYPE< TTYPE >::iterator&, \
+                             const std::STLTYPE< TTYPE >::iterator&); \
+    template bool operator!=(const std::STLTYPE< TTYPE >::iterator&, \
+                             const std::STLTYPE< TTYPE >::iterator&); \
+    }
+
+    STLTYPES_EXPLICIT_INSTANTIATION_DECL(vector, MyClass)
+
+Then, still for gcc, the selection file needs to contain the full hierarchy as
+well as the global overloads for comparisons for the iterators::
+
+    $ cat MyTemplate.xml
+    <lcgdict>
+        <class pattern="std::vector<*>" />
+        <class pattern="__gnu_cxx::__normal_iterator<*>" />
+        <class pattern="__gnu_cxx::new_allocator<*>" />
+        <class name="std::_Vector_base<MyClass, std::allocator<MyClass> >" />
+        <class name="std::_Vector_base<MyClass, std::allocator<MyClass> >::_Vector_impl" />
+        <class name="std::allocator<MyClass>" />
+        <function name="__gnu_cxx::operator==" />
+        <function name="__gnu_cxx::operator!=" />
+
+        <class name="MyClass" />
+    </lcgdict>
+
+Run the normal ``genreflex`` and compilation steps::
+
+    $ genreflex MyTemplate.h --selection=MyTemplate.xml
+    $ g++ -fPIC -rdynamic -O2 -shared -I$ROOTSYS/include MyTemplate_rflx.cpp -o libTemplateDict.so
+
+Note: this is a dirty corner that clearly could do with some automation,
+even if the macro already helps.
+Such automation is planned.
+In fact, in the cling world, the backend can perform the template
+instantiations and generate the reflection info on the fly, and none of the
+above will any longer be necessary.
+
+Subsequent use should be as expected.
+Note the meta-class style of "instantiating" the template::
+Passing the explicit classes makes for easier code writing if the classes
+handed to the instantiation are themselves templates.
+
+
+The fast lane
+=============
+
+The following is an experimental feature of cppyy, and that makes it doubly
+experimental, so caveat emptor.
+With a slight modification of Reflex, it can provide function pointers for
+C++ methods, and hence allow PyPy to call those pointers directly, rather than
+calling C++ through a Reflex stub.
+This results in a rather significant speed-up.
+Mind you, the normal stub path is not exactly slow, so for now only use this
+out of curiosity or if you really need it.
+
+To install this patch of Reflex, locate the file genreflex-methptrgetter.patch
+in pypy/module/cppyy and apply it to the genreflex python scripts found in
+``$ROOTSYS/lib``::
+
+    $ cd $ROOTSYS/lib
+    $ patch -p2 < genreflex-methptrgetter.patch
+
+With this patch, ``genreflex`` will have grown the ``--with-methptrgetter``
+option.
+Use this option when running ``genreflex``, and add the
+``-Wno-pmf-conversions`` option to ``g++`` when compiling.
+The rest works the same way: the fast path will be used transparently (which
+also means that you can't actually find out whether it is in use, other than
+by running a micro-benchmark).
+
+
+CPython
+=======
+
+Most of the ideas in cppyy come originally from the `PyROOT`_ project.
+Although PyROOT does not support Reflex directly, it has an alter ego called
+"PyCintex" that, in a somewhat roundabout way, does.
+If you installed ROOT, rather than just Reflex, PyCintex should be available
+immediately if you add ``$ROOTSYS/lib`` to the ``PYTHONPATH`` environment
+variable.
+
+.. _`PyROOT`: http://root.cern.ch/drupal/content/pyroot
+
+There are a couple of minor differences between PyCintex and cppyy, most of
+which have to do with naming.
+The one that you will run into directly is that PyCintex uses a function
+called ``loadDictionary`` rather than ``load_reflection_info``.
+The reason for this is that Reflex calls the shared libraries that contain
+reflection info "dictionaries."
+However, in python, the name ``dictionary`` already has a well-defined meaning,
+so a more descriptive name was chosen for cppyy.
+In addition, PyCintex requires that the names of the shared libraries so
+loaded start with "lib".
+The basic example above, rewritten for PyCintex, goes like this::
+
+    $ python
+    >>> import PyCintex
+    >>> PyCintex.loadDictionary("libMyClassDict.so")
+    >>> myinst = PyCintex.gbl.MyClass(42)
+    >>> print myinst.GetMyInt()
+    42
+    >>> myinst.SetMyInt(33)
+    >>> print myinst.m_myint
+    33
+    >>> myinst.m_myint = 77
+    >>> print myinst.GetMyInt()
+    77
+    >>> help(PyCintex.gbl.MyClass)   # shows that normal python introspection works
+
+Other naming differences are such things as taking an address of an object.
+In PyCintex, this is done with ``AddressOf`` whereas in cppyy the choice was
+made to follow the naming as in ``ctypes`` and hence use ``addressof``
+(PyROOT/PyCintex predate ``ctypes`` by several years, and the ROOT project
+follows camel-case, hence the differences).
+
+Of course, this is python, so if any of the naming is not to your liking, all
+you have to do is provide a wrapper script that you import instead of
+importing the ``cppyy`` or ``PyCintex`` modules directly.
+In that wrapper script you can rename methods exactly the way you need them.
+
+In the cling world, all these differences will be resolved.
diff --git a/pypy/doc/extending.rst b/pypy/doc/extending.rst
--- a/pypy/doc/extending.rst
+++ b/pypy/doc/extending.rst
@@ -23,6 +23,8 @@
 
 * Write them in RPython as mixedmodule_, using *rffi* as bindings.
 
+* Write them in C++ and bind them through Reflex_ (EXPERIMENTAL)
+
 .. _ctypes: #CTypes
 .. _\_ffi: #LibFFI
 .. _mixedmodule: #Mixed Modules
@@ -110,3 +112,59 @@
 
 XXX we should provide detailed docs about lltype and rffi, especially if we
 want people to follow that way.
+
+Reflex
+======
+
+This method is still experimental and is being exercised on a branch,
+`reflex-support`_, which adds the `cppyy`_ module.
+The method works by using the `Reflex package`_ to provide reflection
+information of the C++ code, which is then used to automatically generate
+bindings at runtime.
+From a python standpoint, there is no difference between generating bindings
+at runtime and having them "statically" generated and available in scripts
+or compiled into extension modules: python classes and functions are always
+runtime structures, created when a script or module loads.
+However, if the backend itself is capable of dynamic behavior, it is a much
+better functional match to python, allowing tighter integration and more
+natural language mappings.
+Full details are `available here`_.
+
+.. _`cppyy`: cppyy.html
+.. _`reflex-support`: cppyy.html
+.. _`Reflex package`: http://root.cern.ch/drupal/content/reflex
+.. _`available here`: cppyy.html
+
+Pros
+----
+
+The cppyy module is written in RPython, which makes it possible to keep the
+code execution visible to the JIT all the way to the actual point of call into
+C++, thus allowing for a very fast interface.
+Reflex is currently in use in large software environments in High Energy
+Physics (HEP), across many different projects and packages, and its use can be
+virtually completely automated in a production environment.
+One of its uses in HEP is in providing language bindings for CPython.
+Thus, it is possible to use Reflex to have bound code work on both CPython and
+on PyPy.
+In the medium term, Reflex will be replaced by `cling`_, which is based on
+`llvm`_.
+This will affect the backend only; the python-side interface is expected to
+remain the same, except that cling adds a lot of dynamic behavior to C++,
+enabling further language integration.
+
+.. _`cling`: http://root.cern.ch/drupal/content/cling
+.. _`llvm`: http://llvm.org/
+
+Cons
+----
+
+C++ is a large language, and cppyy is not yet feature-complete.
+Still, the experience gained in developing the equivalent bindings for CPython
+means that adding missing features is a simple matter of engineering, not a
+question of research.
+The module is written so that currently missing features should do no harm if
+you don't use them; if you do need a particular feature, it may be necessary
+to work around it in python or with a C++ helper function.
+Although Reflex works on various platforms, the bindings with PyPy have only
+been tested on Linux.
diff --git a/pypy/doc/windows.rst b/pypy/doc/windows.rst
--- a/pypy/doc/windows.rst
+++ b/pypy/doc/windows.rst
@@ -24,7 +24,8 @@
 translation. Failing that, they will pick the most recent Visual Studio
 compiler they can find. In addition, the target architecture (32 bits,
 64 bits) is automatically selected. A 32 bit build can only be built
-using a 32 bit Python and vice versa.
+using a 32 bit Python and vice versa. By default pypy is built using the
+Multi-threaded DLL (/MD) runtime environment.
 
 **Note:** PyPy is currently not supported for 64 bit Windows, and translation
 will fail in this case.
@@ -102,10 +103,12 @@
 
 Download the source code of expat on sourceforge:
 http://sourceforge.net/projects/expat/ and extract it in the base
-directory. Then open the project file ``expat.dsw`` with Visual
+directory. Version 2.1.0 is known to pass tests. Then open the project
+file ``expat.dsw`` with Visual
 Studio; follow the instruction for converting the project files,
-switch to the "Release" configuration, and build the solution (the
-``expat`` project is actually enough for pypy).
+switch to the "Release" configuration, reconfigure the runtime for
+Multi-threaded DLL (/MD) and build the solution (the ``expat`` project
+is actually enough for pypy).
 
 Then, copy the file ``win32\bin\release\libexpat.dll`` somewhere in
 your PATH.
diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py --- a/pypy/interpreter/argument.py +++ b/pypy/interpreter/argument.py @@ -85,6 +85,10 @@ Collects the arguments of a function call. Instances should be considered immutable. + + Some parts of this class are written in a slightly convoluted style to help + the JIT. It is really crucial to get this right, because Python's argument + semantics are complex, but calls occur everywhere. """ ### Construction ### @@ -171,7 +175,13 @@ space = self.space keywords, values_w = space.view_as_kwargs(w_starstararg) if keywords is not None: # this path also taken for empty dicts - self._add_keywordargs_no_unwrapping(keywords, values_w) + if self.keywords is None: + self.keywords = keywords[:] # copy to make non-resizable + self.keywords_w = values_w[:] + else: + self._check_not_duplicate_kwargs(keywords, values_w) + self.keywords = self.keywords + keywords + self.keywords_w = self.keywords_w + values_w return not jit.isconstant(len(self.keywords)) if space.isinstance_w(w_starstararg, space.w_dict): keys_w = space.unpackiterable(w_starstararg) @@ -229,22 +239,16 @@ @jit.look_inside_iff(lambda self, keywords, keywords_w: jit.isconstant(len(keywords) and jit.isconstant(self.keywords))) - def _add_keywordargs_no_unwrapping(self, keywords, keywords_w): - if self.keywords is None: - self.keywords = keywords[:] # copy to make non-resizable - self.keywords_w = keywords_w[:] - else: - # looks quadratic, but the JIT should remove all of it nicely. - # Also, all the lists should be small - for key in keywords: - for otherkey in self.keywords: - if otherkey == key: - raise operationerrfmt(self.space.w_TypeError, - "got multiple values " - "for keyword argument " - "'%s'", key) - self.keywords = self.keywords + keywords - self.keywords_w = self.keywords_w + keywords_w + def _check_not_duplicate_kwargs(self, keywords, keywords_w): + # looks quadratic, but the JIT should remove all of it nicely. 
+ # Also, all the lists should be small + for key in keywords: + for otherkey in self.keywords: + if otherkey == key: + raise operationerrfmt(self.space.w_TypeError, + "got multiple values " + "for keyword argument " + "'%s'", key) def fixedunpack(self, argcount): """The simplest argument parsing: get the 'argcount' arguments, diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ -304,14 +304,19 @@ # produce compatible pycs. if (self.space.isinstance_w(w_obj, self.space.w_unicode) and self.space.isinstance_w(w_const, self.space.w_unicode)): - unistr = self.space.unicode_w(w_const) - if len(unistr) == 1: - ch = ord(unistr[0]) - else: - ch = 0 - if (ch > 0xFFFF or - (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): - return subs + #unistr = self.space.unicode_w(w_const) + #if len(unistr) == 1: + # ch = ord(unistr[0]) + #else: + # ch = 0 + #if (ch > 0xFFFF or + # (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): + # --XXX-- for now we always disable optimization of + # u'...'[constant] because the tests above are not + # enough to fix issue5057 (CPython has the same + # problem as of April 24, 2012). + # See test_const_fold_unicode_subscr + return subs return ast.Const(w_const, subs.lineno, subs.col_offset) diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -844,7 +844,8 @@ return u"abc"[0] """ counts = self.count_instructions(source) - assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + if 0: # xxx later? 
+ assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} # getitem outside of the BMP should not be optimized source = """def f(): @@ -854,12 +855,20 @@ assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, ops.RETURN_VALUE: 1} + source = """def f(): + return u"\U00012345abcdef"[3] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, + ops.RETURN_VALUE: 1} + monkeypatch.setattr(optimize, "MAXUNICODE", 0xFFFF) source = """def f(): return u"\uE01F"[0] """ counts = self.count_instructions(source) - assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + if 0: # xxx later? + assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} monkeypatch.undo() # getslice is not yet optimized. diff --git a/pypy/interpreter/function.py b/pypy/interpreter/function.py --- a/pypy/interpreter/function.py +++ b/pypy/interpreter/function.py @@ -49,7 +49,9 @@ def __repr__(self): # return "function %s.%s" % (self.space, self.name) # maybe we want this shorter: - name = getattr(self, 'name', '?') + name = getattr(self, 'name', None) + if not isinstance(name, str): + name = '?' 
return "<%s %s>" % (self.__class__.__name__, name) def call_args(self, args): diff --git a/pypy/jit/backend/llsupport/asmmemmgr.py b/pypy/jit/backend/llsupport/asmmemmgr.py --- a/pypy/jit/backend/llsupport/asmmemmgr.py +++ b/pypy/jit/backend/llsupport/asmmemmgr.py @@ -277,6 +277,8 @@ from pypy.jit.backend.hlinfo import highleveljitinfo if highleveljitinfo.sys_executable: debug_print('SYS_EXECUTABLE', highleveljitinfo.sys_executable) + else: + debug_print('SYS_EXECUTABLE', '??') # HEX = '0123456789ABCDEF' dump = [] diff --git a/pypy/jit/backend/llsupport/test/test_asmmemmgr.py b/pypy/jit/backend/llsupport/test/test_asmmemmgr.py --- a/pypy/jit/backend/llsupport/test/test_asmmemmgr.py +++ b/pypy/jit/backend/llsupport/test/test_asmmemmgr.py @@ -217,7 +217,8 @@ encoded = ''.join(writtencode).encode('hex').upper() ataddr = '@%x' % addr assert log == [('test-logname-section', - [('debug_print', 'CODE_DUMP', ataddr, '+0 ', encoded)])] + [('debug_print', 'SYS_EXECUTABLE', '??'), + ('debug_print', 'CODE_DUMP', ataddr, '+0 ', encoded)])] lltype.free(p, flavor='raw') diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py --- a/pypy/jit/backend/x86/assembler.py +++ b/pypy/jit/backend/x86/assembler.py @@ -1656,15 +1656,21 @@ else: # XXX hard-coded assumption: to go from an object to its class # we use the following algorithm: - # - read the typeid from mem(locs[0]), i.e. at offset 0 - # - keep the lower 16 bits read there - # - multiply by 4 and use it as an offset in type_info_group - # - add 16 bytes, to go past the TYPE_INFO structure + # - read the typeid from mem(locs[0]), i.e. at offset 0; + # this is a complete word (N=4 bytes on 32-bit, N=8 on + # 64-bits) + # - keep the lower half of what is read there (i.e. 
+ # truncate to an unsigned 'N / 2' bytes value) + # - multiply by 4 (on 32-bits only) and use it as an + # offset in type_info_group + # - add 16/32 bytes, to go past the TYPE_INFO structure loc = locs[1] assert isinstance(loc, ImmedLoc) classptr = loc.value # here, we have to go back from 'classptr' to the value expected - # from reading the 16 bits in the object header + # from reading the half-word in the object header. Note that + # this half-word is at offset 0 on a little-endian machine; + # it would be at offset 2 or 4 on a big-endian machine. from pypy.rpython.memory.gctypelayout import GCData sizeof_ti = rffi.sizeof(GCData.TYPE_INFO) type_info_group = llop.gc_get_type_info_group(llmemory.Address) diff --git a/pypy/jit/metainterp/heapcache.py b/pypy/jit/metainterp/heapcache.py --- a/pypy/jit/metainterp/heapcache.py +++ b/pypy/jit/metainterp/heapcache.py @@ -20,6 +20,7 @@ self.dependencies = {} # contains frame boxes that are not virtualizables self.nonstandard_virtualizables = {} + # heap cache # maps descrs to {from_box, to_box} dicts self.heap_cache = {} @@ -29,6 +30,26 @@ # cache the length of arrays self.length_cache = {} + # replace_box is called surprisingly often, therefore it's not efficient + # to go over all the dicts and fix them. + # instead, these two dicts are kept, and a replace_box adds an entry to + # each of them. + # every time one of the dicts heap_cache, heap_array_cache, length_cache + # is accessed, suitable indirections need to be performed + + # this looks all very subtle, but in practice the patterns of + # replacements should not be that complex. Usually a box is replaced by + # a const, once. Also, if something goes wrong, the effect is that less + # caching than possible is done, which is not a huge problem. 
+ self.input_indirections = {} + self.output_indirections = {} + + def _input_indirection(self, box): + return self.input_indirections.get(box, box) + + def _output_indirection(self, box): + return self.output_indirections.get(box, box) + def invalidate_caches(self, opnum, descr, argboxes): self.mark_escaped(opnum, argboxes) self.clear_caches(opnum, descr, argboxes) @@ -132,14 +153,16 @@ self.arraylen_now_known(box, lengthbox) def getfield(self, box, descr): + box = self._input_indirection(box) d = self.heap_cache.get(descr, None) if d: tobox = d.get(box, None) - if tobox: - return tobox + return self._output_indirection(tobox) return None def getfield_now_known(self, box, descr, fieldbox): + box = self._input_indirection(box) + fieldbox = self._input_indirection(fieldbox) self.heap_cache.setdefault(descr, {})[box] = fieldbox def setfield(self, box, descr, fieldbox): @@ -148,6 +171,8 @@ self.heap_cache[descr] = new_d def _do_write_with_aliasing(self, d, box, fieldbox): + box = self._input_indirection(box) + fieldbox = self._input_indirection(fieldbox) # slightly subtle logic here # a write to an arbitrary box, all other boxes can alias this one if not d or box not in self.new_boxes: @@ -166,6 +191,7 @@ return new_d def getarrayitem(self, box, descr, indexbox): + box = self._input_indirection(box) if not isinstance(indexbox, ConstInt): return index = indexbox.getint() @@ -173,9 +199,11 @@ if cache: indexcache = cache.get(index, None) if indexcache is not None: - return indexcache.get(box, None) + return self._output_indirection(indexcache.get(box, None)) def getarrayitem_now_known(self, box, descr, indexbox, valuebox): + box = self._input_indirection(box) + valuebox = self._input_indirection(valuebox) if not isinstance(indexbox, ConstInt): return index = indexbox.getint() @@ -198,25 +226,13 @@ cache[index] = self._do_write_with_aliasing(indexcache, box, valuebox) def arraylen(self, box): - return self.length_cache.get(box, None) + box = self._input_indirection(box) 
+ return self._output_indirection(self.length_cache.get(box, None)) def arraylen_now_known(self, box, lengthbox): - self.length_cache[box] = lengthbox - - def _replace_box(self, d, oldbox, newbox): - new_d = {} - for frombox, tobox in d.iteritems(): - if frombox is oldbox: - frombox = newbox - if tobox is oldbox: - tobox = newbox - new_d[frombox] = tobox - return new_d + box = self._input_indirection(box) + self.length_cache[box] = self._input_indirection(lengthbox) def replace_box(self, oldbox, newbox): - for descr, d in self.heap_cache.iteritems(): - self.heap_cache[descr] = self._replace_box(d, oldbox, newbox) - for descr, d in self.heap_array_cache.iteritems(): - for index, cache in d.iteritems(): - d[index] = self._replace_box(cache, oldbox, newbox) - self.length_cache = self._replace_box(self.length_cache, oldbox, newbox) + self.input_indirections[self._output_indirection(newbox)] = self._input_indirection(oldbox) + self.output_indirections[self._input_indirection(oldbox)] = self._output_indirection(newbox) diff --git a/pypy/jit/metainterp/jitexc.py b/pypy/jit/metainterp/jitexc.py --- a/pypy/jit/metainterp/jitexc.py +++ b/pypy/jit/metainterp/jitexc.py @@ -12,7 +12,6 @@ """ _go_through_llinterp_uncaught_ = True # ugh - def _get_standard_error(rtyper, Class): exdata = rtyper.getexceptiondata() clsdef = rtyper.annotator.bookkeeper.getuniqueclassdef(Class) diff --git a/pypy/jit/metainterp/optimize.py b/pypy/jit/metainterp/optimize.py --- a/pypy/jit/metainterp/optimize.py +++ b/pypy/jit/metainterp/optimize.py @@ -5,3 +5,9 @@ """Raised when the optimize*.py detect that the loop that we are trying to build cannot possibly make sense as a long-running loop (e.g. 
it cannot run 2 complete iterations).""" + + def __init__(self, msg='?'): + debug_start("jit-abort") + debug_print(msg) + debug_stop("jit-abort") + self.msg = msg diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -49,8 +49,9 @@ optimizations.append(OptFfiCall()) if ('rewrite' not in enable_opts or 'virtualize' not in enable_opts - or 'heap' not in enable_opts or 'unroll' not in enable_opts): - optimizations.append(OptSimplify()) + or 'heap' not in enable_opts or 'unroll' not in enable_opts + or 'pure' not in enable_opts): + optimizations.append(OptSimplify(unroll)) return optimizations, unroll diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -257,8 +257,8 @@ opnum == rop.COPYSTRCONTENT or # no effect on GC struct/array opnum == rop.COPYUNICODECONTENT): # no effect on GC struct/array return - assert opnum != rop.CALL_PURE if (opnum == rop.CALL or + opnum == rop.CALL_PURE or opnum == rop.CALL_MAY_FORCE or opnum == rop.CALL_RELEASE_GIL or opnum == rop.CALL_ASSEMBLER): @@ -481,7 +481,7 @@ # already between the tracing and now. In this case, we are # simply ignoring the QUASIIMMUT_FIELD hint and compiling it # as a regular getfield. - if not qmutdescr.is_still_valid(): + if not qmutdescr.is_still_valid_for(structvalue.get_key_box()): self._remove_guard_not_invalidated = True return # record as an out-of-line guard diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -191,10 +191,13 @@ # GUARD_OVERFLOW, then the loop is invalid. 
        lastop = self.last_emitted_operation
         if lastop is None:
-            raise InvalidLoop
+            raise InvalidLoop('An INT_xxx_OVF was proven not to overflow but ' +
+                              'guarded with GUARD_OVERFLOW')
         opnum = lastop.getopnum()
         if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF):
-            raise InvalidLoop
+            raise InvalidLoop('An INT_xxx_OVF was proven not to overflow but ' +
+                              'guarded with GUARD_OVERFLOW')
+        self.emit_operation(op)
 
     def optimize_INT_ADD_OVF(self, op):
diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py
--- a/pypy/jit/metainterp/optimizeopt/optimizer.py
+++ b/pypy/jit/metainterp/optimizeopt/optimizer.py
@@ -525,6 +525,7 @@
 
     @specialize.argtype(0)
     def _emit_operation(self, op):
+        assert op.getopnum() != rop.CALL_PURE
         for i in range(op.numargs()):
             arg = op.getarg(i)
             try:
diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py
--- a/pypy/jit/metainterp/optimizeopt/rewrite.py
+++ b/pypy/jit/metainterp/optimizeopt/rewrite.py
@@ -208,7 +208,8 @@
             box = value.box
             assert isinstance(box, Const)
             if not box.same_constant(constbox):
-                raise InvalidLoop
+                raise InvalidLoop('A GUARD_{VALUE,TRUE,FALSE} was proven to ' +
+                                  'always fail')
             return
         if emit_operation:
             self.emit_operation(op)
@@ -220,7 +221,7 @@
         if value.is_null():
             return
         elif value.is_nonnull():
-            raise InvalidLoop
+            raise InvalidLoop('A GUARD_ISNULL was proven to always fail')
         self.emit_operation(op)
         value.make_constant(self.optimizer.cpu.ts.CONST_NULL)
@@ -229,7 +230,7 @@
         if value.is_nonnull():
             return
         elif value.is_null():
-            raise InvalidLoop
+            raise InvalidLoop('A GUARD_NONNULL was proven to always fail')
         self.emit_operation(op)
         value.make_nonnull(op)
@@ -278,7 +279,7 @@
             if realclassbox is not None:
                 if realclassbox.same_constant(expectedclassbox):
                     return
-                raise InvalidLoop
+                raise InvalidLoop('A GUARD_CLASS was proven to always fail')
             if value.last_guard:
                 # there already has been a guard_nonnull or guard_class or
                 #
guard_nonnull_class on this value. @@ -301,7 +302,8 @@ def optimize_GUARD_NONNULL_CLASS(self, op): value = self.getvalue(op.getarg(0)) if value.is_null(): - raise InvalidLoop + raise InvalidLoop('A GUARD_NONNULL_CLASS was proven to always ' + + 'fail') self.optimize_GUARD_CLASS(op) def optimize_CALL_LOOPINVARIANT(self, op): diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -4,8 +4,9 @@ from pypy.jit.metainterp.history import TargetToken, JitCellToken class OptSimplify(Optimization): - def __init__(self): + def __init__(self, unroll): self.last_label_descr = None + self.unroll = unroll def optimize_CALL_PURE(self, op): args = op.getarglist() @@ -35,24 +36,26 @@ pass def optimize_LABEL(self, op): - descr = op.getdescr() - if isinstance(descr, JitCellToken): - return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) - self.last_label_descr = op.getdescr() + if not self.unroll: + descr = op.getdescr() + if isinstance(descr, JitCellToken): + return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) + self.last_label_descr = op.getdescr() self.emit_operation(op) def optimize_JUMP(self, op): - descr = op.getdescr() - assert isinstance(descr, JitCellToken) - if not descr.target_tokens: - assert self.last_label_descr is not None - target_token = self.last_label_descr - assert isinstance(target_token, TargetToken) - assert target_token.targeting_jitcell_token is descr - op.setdescr(self.last_label_descr) - else: - assert len(descr.target_tokens) == 1 - op.setdescr(descr.target_tokens[0]) + if not self.unroll: + descr = op.getdescr() + assert isinstance(descr, JitCellToken) + if not descr.target_tokens: + assert self.last_label_descr is not None + target_token = self.last_label_descr + assert isinstance(target_token, TargetToken) + assert target_token.targeting_jitcell_token is descr + op.setdescr(self.last_label_descr) + 
else: + assert len(descr.target_tokens) == 1 + op.setdescr(descr.target_tokens[0]) self.emit_operation(op) dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', diff --git a/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py @@ -0,0 +1,46 @@ +from pypy.jit.metainterp.optimizeopt.test.test_optimizeopt import OptimizeOptTest +from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin +from pypy.jit.metainterp.resoperation import rop + + +allopts = OptimizeOptTest.enable_opts.split(':') +for optnum in range(len(allopts)): + myopts = allopts[:] + del myopts[optnum] + + class TestLLtype(OptimizeOptTest, LLtypeMixin): + enable_opts = ':'.join(myopts) + + def optimize_loop(self, ops, expected, expected_preamble=None, + call_pure_results=None, expected_short=None): + loop = self.parse(ops) + if expected != "crash!": + expected = self.parse(expected) + if expected_preamble: + expected_preamble = self.parse(expected_preamble) + if expected_short: + expected_short = self.parse(expected_short) + + preamble = self.unroll_and_optimize(loop, call_pure_results) + + for op in preamble.operations + loop.operations: + assert op.getopnum() not in (rop.CALL_PURE, + rop.CALL_LOOPINVARIANT, + rop.VIRTUAL_REF_FINISH, + rop.VIRTUAL_REF, + rop.QUASIIMMUT_FIELD, + rop.MARK_OPAQUE_PTR, + rop.RECORD_KNOWN_CLASS) + + def raises(self, e, fn, *args): + try: + fn(*args) + except e: + pass + + opt = allopts[optnum] + exec "TestNo%sLLtype = TestLLtype" % (opt[0].upper() + opt[1:]) + +del TestLLtype # No need to run the last set twice +del TestNoUnrollLLtype # This case is take care of by test_optimizebasic + diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- 
a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -7,7 +7,7 @@ import pypy.jit.metainterp.optimizeopt.optimizer as optimizeopt import pypy.jit.metainterp.optimizeopt.virtualize as virtualize from pypy.jit.metainterp.optimize import InvalidLoop -from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt +from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt, get_const_ptr_for_string from pypy.jit.metainterp import executor, compile, resume, history from pypy.jit.metainterp.resoperation import rop, opname, ResOperation from pypy.rlib.rarithmetic import LONG_BIT @@ -5067,11 +5067,29 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_call_pure_vstring_const(self): + ops = """ + [] + p0 = newstr(3) + strsetitem(p0, 0, 97) + strsetitem(p0, 1, 98) + strsetitem(p0, 2, 99) + i0 = call_pure(123, p0, descr=nonwritedescr) + finish(i0) + """ + expected = """ + [] + finish(5) + """ + call_pure_results = { + (ConstInt(123), get_const_ptr_for_string("abc"),): ConstInt(5), + } + self.optimize_loop(ops, expected, call_pure_results) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass - ##class TestOOtype(BaseTestOptimizeBasic, OOtypeMixin): ## def test_instanceof(self): diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -105,6 +105,9 @@ return loop + def raises(self, e, fn, *args): + py.test.raises(e, fn, *args) + class OptimizeOptTest(BaseTestWithUnroll): def setup_method(self, meth=None): @@ -2639,7 +2642,7 @@ p2 = new_with_vtable(ConstClass(node_vtable)) jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_2(self): @@ -2651,7 +2654,7 @@ escape(p2) # prevent it 
from staying Virtual jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_3(self): @@ -2665,7 +2668,7 @@ setfield_gc(p3, p4, descr=nextdescr) jump(p3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_merge_guard_class_guard_value(self): @@ -4411,7 +4414,7 @@ setfield_gc(p1, p3, descr=nextdescr) jump(p3) """ - py.test.raises(BogusPureField, self.optimize_loop, ops, "crash!") + self.raises(BogusPureField, self.optimize_loop, ops, "crash!") def test_dont_complains_different_field(self): ops = """ @@ -5024,7 +5027,7 @@ i2 = int_add(i0, 3) jump(i2) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_ne_const_not(self): ops = """ @@ -5074,7 +5077,7 @@ i3 = int_add(i0, 3) jump(i3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_lshift(self): ops = """ @@ -6533,9 +6536,9 @@ def test_quasi_immut_2(self): ops = """ [] - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i1 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i1 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) escape(i1) jump() """ @@ -6585,13 +6588,13 @@ def test_call_may_force_invalidated_guards_reload(self): ops = """ [i0a, i0b] - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i1 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i1 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) call_may_force(i0b, descr=mayforcevirtdescr) - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) 
guard_not_invalidated() [] - i2 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i2 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) i3 = escape(i1) i4 = escape(i2) jump(i3, i4) @@ -7813,6 +7816,52 @@ """ self.optimize_loop(ops, expected) + def test_issue1080_infinitie_loop_virtual(self): + ops = """ + [p10] + p52 = getfield_gc(p10, descr=nextdescr) # inst_storage + p54 = getarrayitem_gc(p52, 0, descr=arraydescr) + p69 = getfield_gc_pure(p54, descr=otherdescr) # inst_w_function + + quasiimmut_field(p69, descr=quasiimmutdescr) + guard_not_invalidated() [] + p71 = getfield_gc(p69, descr=quasifielddescr) # inst_code + guard_value(p71, -4247) [] + + p106 = new_with_vtable(ConstClass(node_vtable)) + p108 = new_array(3, descr=arraydescr) + p110 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p110, ConstPtr(myptr2), descr=otherdescr) # inst_w_function + setarrayitem_gc(p108, 0, p110, descr=arraydescr) + setfield_gc(p106, p108, descr=nextdescr) # inst_storage + jump(p106) + """ + expected = """ + [] + p72 = getfield_gc(ConstPtr(myptr2), descr=quasifielddescr) + guard_value(p72, -4247) [] + jump() + """ + self.optimize_loop(ops, expected) + + + def test_issue1080_infinitie_loop_simple(self): + ops = """ + [p69] + quasiimmut_field(p69, descr=quasiimmutdescr) + guard_not_invalidated() [] + p71 = getfield_gc(p69, descr=quasifielddescr) # inst_code + guard_value(p71, -4247) [] + jump(ConstPtr(myptr)) + """ + expected = """ + [] + p72 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + guard_value(p72, -4247) [] + jump() + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -122,6 +122,7 @@ quasi.inst_field = -4247 quasifielddescr = cpu.fielddescrof(QUASI, 'inst_field') quasibox = 
BoxPtr(lltype.cast_opaque_ptr(llmemory.GCREF, quasi)) + quasiptr = quasibox.value quasiimmutdescr = QuasiImmutDescr(cpu, quasibox, quasifielddescr, cpu.fielddescrof(QUASI, 'mutate_field')) diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -315,7 +315,10 @@ try: jumpargs = virtual_state.make_inputargs(values, self.optimizer) except BadVirtualState: - raise InvalidLoop + raise InvalidLoop('The state of the optimizer at the end of ' + + 'the peeled loop is inconsistent with the ' + + 'VirtualState at the beginning of the peeled ' + + 'loop') jumpop.initarglist(jumpargs) # Inline the short preamble at the end of the loop @@ -325,7 +328,11 @@ for i in range(len(short_inputargs)): if short_inputargs[i] in args: if args[short_inputargs[i]] != jmp_to_short_args[i]: - raise InvalidLoop + raise InvalidLoop('The short preamble wants the ' + + 'same box passed to several of its ' + + 'inputargs, but the jump at the ' + + 'end of this bridge does not do that.') + args[short_inputargs[i]] = jmp_to_short_args[i] self.short_inliner = Inliner(short_inputargs, jmp_to_short_args) for op in self.short[1:]: @@ -378,7 +385,10 @@ #final_virtual_state.debug_print("Bad virtual state at end of loop, ", # bad) #debug_stop('jit-log-virtualstate') - raise InvalidLoop + raise InvalidLoop('The virtual state at the end of the peeled ' + + 'loop is not compatible with the virtual ' + + 'state at the start of the loop, which makes ' + + 'it impossible to close the loop') #debug_stop('jit-log-virtualstate') @@ -526,8 +536,8 @@ args = jumpop.getarglist() modifier = VirtualStateAdder(self.optimizer) virtual_state = modifier.get_virtual_state(args) - #debug_start('jit-log-virtualstate') - #virtual_state.debug_print("Looking for ") + debug_start('jit-log-virtualstate') + virtual_state.debug_print("Looking for ") for target in cell_token.target_tokens: if not
target.virtual_state: @@ -536,10 +546,10 @@ extra_guards = [] bad = {} - #debugmsg = 'Did not match ' + debugmsg = 'Did not match ' if target.virtual_state.generalization_of(virtual_state, bad): ok = True - #debugmsg = 'Matched ' + debugmsg = 'Matched ' else: try: cpu = self.optimizer.cpu @@ -548,13 +558,13 @@ extra_guards) ok = True - #debugmsg = 'Guarded to match ' + debugmsg = 'Guarded to match ' except InvalidLoop: pass - #target.virtual_state.debug_print(debugmsg, bad) + target.virtual_state.debug_print(debugmsg, bad) if ok: - #debug_stop('jit-log-virtualstate') + debug_stop('jit-log-virtualstate') values = [self.getvalue(arg) for arg in jumpop.getarglist()] @@ -581,7 +591,7 @@ jumpop.setdescr(cell_token.target_tokens[0]) self.optimizer.send_extra_operation(jumpop) return True - #debug_stop('jit-log-virtualstate') + debug_stop('jit-log-virtualstate') return False class ValueImporter(object): diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -27,11 +27,15 @@ if self.generalization_of(other, renum, {}): return if renum[self.position] != other.position: - raise InvalidLoop + raise InvalidLoop('The numbering of the virtual states does not ' + + 'match. 
This means that two virtual fields ' + + 'have been set to the same Box in one of the ' + + 'virtual states but not in the other.') self._generate_guards(other, box, cpu, extra_guards) def _generate_guards(self, other, box, cpu, extra_guards): - raise InvalidLoop + raise InvalidLoop('Generating guards for making the VirtualStates ' + + 'at hand match has not been implemented') def enum_forced_boxes(self, boxes, value, optimizer): raise NotImplementedError @@ -346,10 +350,12 @@ def _generate_guards(self, other, box, cpu, extra_guards): if not isinstance(other, NotVirtualStateInfo): - raise InvalidLoop + raise InvalidLoop('The VirtualStates do not match, as a ' + + 'virtual appears where a pointer is needed ' + + 'and it is too late to force it.') if self.lenbound or other.lenbound: - raise InvalidLoop + raise InvalidLoop('The array length bounds do not match.') if self.level == LEVEL_KNOWNCLASS and \ box.nonnull() and \ @@ -400,7 +406,8 @@ return # Remaining cases are probably not interesting - raise InvalidLoop + raise InvalidLoop('Generating guards for making the VirtualStates ' + + 'at hand match has not been implemented') if self.level == LEVEL_CONSTANT: import pdb; pdb.set_trace() raise NotImplementedError diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1223,7 +1223,7 @@ def run_one_step(self): # Execute the frame forward. This method contains a loop that leaves # whenever the 'opcode_implementations' (which is one of the 'opimpl_' - # methods) returns True. This is the case when the current frame + # methods) raises ChangeFrame. This is the case when the current frame # changes, due to a call or a return.
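The corrected pyjitpl.py comment above describes an exception-as-control-flow dispatch loop: the 'opimpl_' handlers run in a loop until one raises ChangeFrame to signal a frame switch. A minimal standalone sketch of that pattern (the class and handler names below are illustrative, not pyjitpl's actual ones):

```python
# Hedged sketch of the dispatch pattern from the comment above: opcode
# handlers execute in a loop until one raises ChangeFrame, signalling
# that execution must continue in a different frame.
class ChangeFrame(Exception):
    """Raised by an opcode handler when the current frame changes."""

class Frame:
    def __init__(self, ops):
        self.ops = ops   # list of opcode handlers to execute
        self.pc = 0      # program counter into self.ops

    def run_one_step(self):
        # Execute opcodes forward until a call/return switches frames.
        try:
            while True:
                op = self.ops[self.pc]
                self.pc += 1
                op()  # an 'opimpl_'-style handler; may raise ChangeFrame
        except ChangeFrame:
            pass  # the caller resumes with the new current frame

def opimpl_add():
    pass  # ordinary opcode: fall through to the next one

def opimpl_call():
    raise ChangeFrame  # frame switch: leave the dispatch loop

f = Frame([opimpl_add, opimpl_call])
f.run_one_step()
assert f.pc == 2  # stopped right after the frame-switching opcode
```

The real run_one_step additionally catches other exceptions and bookkeeps via the metainterp's staticdata; only the ChangeFrame-terminated loop is modelled here.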
try: staticdata = self.metainterp.staticdata diff --git a/pypy/jit/metainterp/quasiimmut.py b/pypy/jit/metainterp/quasiimmut.py --- a/pypy/jit/metainterp/quasiimmut.py +++ b/pypy/jit/metainterp/quasiimmut.py @@ -120,8 +120,10 @@ self.fielddescr, self.structbox) return fieldbox.constbox() - def is_still_valid(self): + def is_still_valid_for(self, structconst): assert self.structbox is not None + if not self.structbox.constbox().same_constant(structconst): + return False cpu = self.cpu gcref = self.structbox.getref_base() qmut = get_current_qmut_instance(cpu, gcref, self.mutatefielddescr) diff --git a/pypy/jit/metainterp/test/test_heapcache.py b/pypy/jit/metainterp/test/test_heapcache.py --- a/pypy/jit/metainterp/test/test_heapcache.py +++ b/pypy/jit/metainterp/test/test_heapcache.py @@ -2,12 +2,14 @@ from pypy.jit.metainterp.resoperation import rop from pypy.jit.metainterp.history import ConstInt -box1 = object() -box2 = object() -box3 = object() -box4 = object() +box1 = "box1" +box2 = "box2" +box3 = "box3" +box4 = "box4" +box5 = "box5" lengthbox1 = object() lengthbox2 = object() +lengthbox3 = object() descr1 = object() descr2 = object() descr3 = object() @@ -276,11 +278,43 @@ h.setfield(box1, descr2, box3) h.setfield(box2, descr3, box3) h.replace_box(box1, box4) - assert h.getfield(box1, descr1) is None - assert h.getfield(box1, descr2) is None assert h.getfield(box4, descr1) is box2 assert h.getfield(box4, descr2) is box3 assert h.getfield(box2, descr3) is box3 + h.setfield(box4, descr1, box3) + assert h.getfield(box4, descr1) is box3 + + h = HeapCache() + h.setfield(box1, descr1, box2) + h.setfield(box1, descr2, box3) + h.setfield(box2, descr3, box3) + h.replace_box(box3, box4) + assert h.getfield(box1, descr1) is box2 + assert h.getfield(box1, descr2) is box4 + assert h.getfield(box2, descr3) is box4 + + def test_replace_box_twice(self): + h = HeapCache() + h.setfield(box1, descr1, box2) + h.setfield(box1, descr2, box3) + h.setfield(box2, descr3, box3) + 
h.replace_box(box1, box4) + h.replace_box(box4, box5) + assert h.getfield(box5, descr1) is box2 + assert h.getfield(box5, descr2) is box3 + assert h.getfield(box2, descr3) is box3 + h.setfield(box5, descr1, box3) + assert h.getfield(box4, descr1) is box3 + + h = HeapCache() + h.setfield(box1, descr1, box2) + h.setfield(box1, descr2, box3) + h.setfield(box2, descr3, box3) + h.replace_box(box3, box4) + h.replace_box(box4, box5) + assert h.getfield(box1, descr1) is box2 + assert h.getfield(box1, descr2) is box5 + assert h.getfield(box2, descr3) is box5 def test_replace_box_array(self): h = HeapCache() @@ -291,9 +325,6 @@ h.setarrayitem(box3, descr2, index2, box1) h.setarrayitem(box2, descr3, index2, box3) h.replace_box(box1, box4) - assert h.getarrayitem(box1, descr1, index1) is None - assert h.getarrayitem(box1, descr2, index1) is None - assert h.arraylen(box1) is None assert h.arraylen(box4) is lengthbox1 assert h.getarrayitem(box4, descr1, index1) is box2 assert h.getarrayitem(box4, descr2, index1) is box3 @@ -304,6 +335,27 @@ h.replace_box(lengthbox1, lengthbox2) assert h.arraylen(box4) is lengthbox2 + def test_replace_box_array_twice(self): + h = HeapCache() + h.setarrayitem(box1, descr1, index1, box2) + h.setarrayitem(box1, descr2, index1, box3) + h.arraylen_now_known(box1, lengthbox1) + h.setarrayitem(box2, descr1, index2, box1) + h.setarrayitem(box3, descr2, index2, box1) + h.setarrayitem(box2, descr3, index2, box3) + h.replace_box(box1, box4) + h.replace_box(box4, box5) + assert h.arraylen(box4) is lengthbox1 + assert h.getarrayitem(box5, descr1, index1) is box2 + assert h.getarrayitem(box5, descr2, index1) is box3 + assert h.getarrayitem(box2, descr1, index2) is box5 + assert h.getarrayitem(box3, descr2, index2) is box5 + assert h.getarrayitem(box2, descr3, index2) is box3 + + h.replace_box(lengthbox1, lengthbox2) + h.replace_box(lengthbox2, lengthbox3) + assert h.arraylen(box4) is lengthbox3 + def test_ll_arraycopy(self): h = HeapCache() h.new_array(box1, 
lengthbox1) diff --git a/pypy/jit/metainterp/test/test_quasiimmut.py b/pypy/jit/metainterp/test/test_quasiimmut.py --- a/pypy/jit/metainterp/test/test_quasiimmut.py +++ b/pypy/jit/metainterp/test/test_quasiimmut.py @@ -8,7 +8,7 @@ from pypy.jit.metainterp.quasiimmut import get_current_qmut_instance from pypy.jit.metainterp.test.support import LLJitMixin from pypy.jit.codewriter.policy import StopAtXPolicy -from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe +from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe, promote def test_get_current_qmut_instance(): @@ -506,6 +506,27 @@ "guard_not_invalidated": 2 }) + def test_issue1080(self): + myjitdriver = JitDriver(greens=[], reds=["n", "sa", "a"]) + class Foo(object): + _immutable_fields_ = ["x?"] + def __init__(self, x): + self.x = x + one, two = Foo(1), Foo(2) + def main(n): + sa = 0 + a = one + while n: + myjitdriver.jit_merge_point(n=n, sa=sa, a=a) + sa += a.x + if a.x == 1: + a = two + elif a.x == 2: + a = one + n -= 1 + return sa + res = self.meta_interp(main, [10]) + assert res == main(10) class TestLLtypeGreenFieldsTests(QuasiImmutTests, LLJitMixin): pass diff --git a/pypy/module/_io/interp_iobase.py b/pypy/module/_io/interp_iobase.py --- a/pypy/module/_io/interp_iobase.py +++ b/pypy/module/_io/interp_iobase.py @@ -341,9 +341,13 @@ def add(self, w_iobase): assert w_iobase.streamholder is None - holder = StreamHolder(w_iobase) - w_iobase.streamholder = holder - self.streams[holder] = None + if rweakref.has_weakref_support(): + holder = StreamHolder(w_iobase) + w_iobase.streamholder = holder + self.streams[holder] = None + #else: + # no support for weakrefs, so ignore and we + # will not get autoflushing def remove(self, w_iobase): holder = w_iobase.streamholder diff --git a/pypy/module/_lsprof/interp_lsprof.py b/pypy/module/_lsprof/interp_lsprof.py --- a/pypy/module/_lsprof/interp_lsprof.py +++ b/pypy/module/_lsprof/interp_lsprof.py @@ -185,6 +185,7 @@ if subentry is not None: 
subentry._stop(tt, it) + at jit.elidable_promote() def create_spec(space, w_arg): if isinstance(w_arg, Method): w_function = w_arg.w_function diff --git a/pypy/module/_multiprocessing/test/test_connection.py b/pypy/module/_multiprocessing/test/test_connection.py --- a/pypy/module/_multiprocessing/test/test_connection.py +++ b/pypy/module/_multiprocessing/test/test_connection.py @@ -157,13 +157,15 @@ raises(IOError, _multiprocessing.Connection, -15) def test_byte_order(self): + import socket + if not 'fromfd' in dir(socket): + skip('No fromfd in socket') # The exact format of net strings (length in network byte # order) is important for interoperation with others # implementations. rhandle, whandle = self.make_pair() whandle.send_bytes("abc") whandle.send_bytes("defg") - import socket sock = socket.fromfd(rhandle.fileno(), socket.AF_INET, socket.SOCK_STREAM) data1 = sock.recv(7) diff --git a/pypy/module/_winreg/test/test_winreg.py b/pypy/module/_winreg/test/test_winreg.py --- a/pypy/module/_winreg/test/test_winreg.py +++ b/pypy/module/_winreg/test/test_winreg.py @@ -198,7 +198,10 @@ import nt r = ExpandEnvironmentStrings(u"%windir%\\test") assert isinstance(r, unicode) - assert r == nt.environ["WINDIR"] + "\\test" + if 'WINDIR' in nt.environ.keys(): + assert r == nt.environ["WINDIR"] + "\\test" + else: + assert r == nt.environ["windir"] + "\\test" def test_long_key(self): from _winreg import ( diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -103,8 +103,8 @@ """.split() for name in constant_names: setattr(CConfig_constants, name, rffi_platform.ConstantInteger(name)) -udir.join('pypy_decl.h').write("/* Will be filled later */") -udir.join('pypy_macros.h').write("/* Will be filled later */") +udir.join('pypy_decl.h').write("/* Will be filled later */\n") +udir.join('pypy_macros.h').write("/* Will be filled later */\n") globals().update(rffi_platform.configure(CConfig_constants)) def 
copy_header_files(dstdir): @@ -927,12 +927,12 @@ source_dir / "pyerrors.c", source_dir / "modsupport.c", source_dir / "getargs.c", + source_dir / "abstract.c", source_dir / "stringobject.c", source_dir / "mysnprintf.c", source_dir / "pythonrun.c", source_dir / "sysmodule.c", source_dir / "bufferobject.c", - source_dir / "object.c", source_dir / "cobject.c", source_dir / "structseq.c", source_dir / "capsule.c", diff --git a/pypy/module/cpyext/complexobject.py b/pypy/module/cpyext/complexobject.py --- a/pypy/module/cpyext/complexobject.py +++ b/pypy/module/cpyext/complexobject.py @@ -33,6 +33,11 @@ # CPython also accepts anything return 0.0 + at cpython_api([Py_complex_ptr], PyObject) +def _PyComplex_FromCComplex(space, v): + """Create a new Python complex number object from a C Py_complex value.""" + return space.newcomplex(v.c_real, v.c_imag) + # lltype does not handle functions returning a structure. This implements a # helper function, which takes as argument a reference to the return value. 
@cpython_api([PyObject, Py_complex_ptr], lltype.Void) diff --git a/pypy/module/cpyext/floatobject.py b/pypy/module/cpyext/floatobject.py --- a/pypy/module/cpyext/floatobject.py +++ b/pypy/module/cpyext/floatobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.api import ( CANNOT_FAIL, cpython_api, PyObject, build_type_checkers, CONST_STRING) from pypy.interpreter.error import OperationError +from pypy.rlib.rstruct import runpack PyFloat_Check, PyFloat_CheckExact = build_type_checkers("Float") @@ -33,3 +34,19 @@ backward compatibility.""" return space.call_function(space.w_float, w_obj) + at cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0) +def _PyFloat_Unpack4(space, ptr, le): + input = rffi.charpsize2str(ptr, 4) + if rffi.cast(lltype.Signed, le): + return runpack.runpack("f", input) + + at cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0) +def _PyFloat_Unpack8(space, ptr, le): + input = rffi.charpsize2str(ptr, 8) + if rffi.cast(lltype.Signed, le): + return runpack.runpack("d", input) + diff --git a/pypy/module/cpyext/include/complexobject.h b/pypy/module/cpyext/include/complexobject.h --- a/pypy/module/cpyext/include/complexobject.h +++ b/pypy/module/cpyext/include/complexobject.h @@ -21,6 +21,8 @@ return result; } +#define PyComplex_FromCComplex(c) _PyComplex_FromCComplex(&c) + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -38,10 +38,19 @@ PyObject_VAR_HEAD } PyVarObject; +#ifndef PYPY_DEBUG_REFCOUNT #define Py_INCREF(ob) (Py_IncRef((PyObject *)ob)) #define Py_DECREF(ob) (Py_DecRef((PyObject *)ob)) #define Py_XINCREF(ob) (Py_IncRef((PyObject *)ob)) #define Py_XDECREF(ob) (Py_DecRef((PyObject *)ob)) +#else +#define Py_INCREF(ob) (((PyObject *)ob)->ob_refcnt++) +#define Py_DECREF(ob) ((((PyObject *)ob)->ob_refcnt > 1) ? 
\ + ((PyObject *)ob)->ob_refcnt-- : (Py_DecRef((PyObject *)ob))) + +#define Py_XINCREF(op) do { if ((op) == NULL) ; else Py_INCREF(op); } while (0) +#define Py_XDECREF(op) do { if ((op) == NULL) ; else Py_DECREF(op); } while (0) +#endif #define Py_CLEAR(op) \ do { \ diff --git a/pypy/module/cpyext/include/pyerrors.h b/pypy/module/cpyext/include/pyerrors.h --- a/pypy/module/cpyext/include/pyerrors.h +++ b/pypy/module/cpyext/include/pyerrors.h @@ -29,6 +29,10 @@ # define vsnprintf _vsnprintf #endif +#include +PyAPI_FUNC(int) PyOS_snprintf(char *str, size_t size, const char *format, ...); +PyAPI_FUNC(int) PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va); + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/stringobject.h b/pypy/module/cpyext/include/stringobject.h --- a/pypy/module/cpyext/include/stringobject.h +++ b/pypy/module/cpyext/include/stringobject.h @@ -7,8 +7,6 @@ extern "C" { #endif -int PyOS_snprintf(char *str, size_t size, const char *format, ...); - #define PyString_GET_SIZE(op) PyString_Size(op) #define PyString_AS_STRING(op) PyString_AsString(op) diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -40,7 +40,8 @@ when the value is NULL, instead of converting to None. */ #define T_LONGLONG 17 -#define T_ULONGLONG 18 +#define T_ULONGLONG 18 +#define T_PYSSIZET 19 /* Flags. These constants are also in structmemberdefs.py. */ #define READONLY 1 diff --git a/pypy/module/cpyext/iterator.py b/pypy/module/cpyext/iterator.py --- a/pypy/module/cpyext/iterator.py +++ b/pypy/module/cpyext/iterator.py @@ -22,7 +22,7 @@ cannot be iterated.""" return space.iter(w_obj) - at cpython_api([PyObject], PyObject, error=CANNOT_FAIL) + at cpython_api([PyObject], PyObject) def PyIter_Next(space, w_obj): """Return the next value from the iteration o. 
If the object is an iterator, this retrieves the next value from the iteration, and returns diff --git a/pypy/module/cpyext/listobject.py b/pypy/module/cpyext/listobject.py --- a/pypy/module/cpyext/listobject.py +++ b/pypy/module/cpyext/listobject.py @@ -110,6 +110,16 @@ space.call_method(w_list, "reverse") return 0 + at cpython_api([PyObject, Py_ssize_t, Py_ssize_t], PyObject) +def PyList_GetSlice(space, w_list, low, high): + """Return a list of the objects in list containing the objects between low + and high. Return NULL and set an exception if unsuccessful. Analogous + to list[low:high]. Negative indices, as when slicing from Python, are not + supported.""" + w_start = space.wrap(low) + w_stop = space.wrap(high) + return space.getslice(w_list, w_start, w_stop) + @cpython_api([PyObject, Py_ssize_t, Py_ssize_t, PyObject], rffi.INT_real, error=-1) def PyList_SetSlice(space, w_list, low, high, w_sequence): """Set the slice of list between low and high to the contents of diff --git a/pypy/module/cpyext/longobject.py b/pypy/module/cpyext/longobject.py --- a/pypy/module/cpyext/longobject.py +++ b/pypy/module/cpyext/longobject.py @@ -1,6 +1,7 @@ from pypy.rpython.lltypesystem import lltype, rffi -from pypy.module.cpyext.api import (cpython_api, PyObject, build_type_checkers, - CONST_STRING, ADDR, CANNOT_FAIL) +from pypy.module.cpyext.api import ( + cpython_api, PyObject, build_type_checkers, Py_ssize_t, + CONST_STRING, ADDR, CANNOT_FAIL) from pypy.objspace.std.longobject import W_LongObject from pypy.interpreter.error import OperationError from pypy.module.cpyext.intobject import PyInt_AsUnsignedLongMask @@ -15,6 +16,13 @@ """Return a new PyLongObject object from v, or NULL on failure.""" return space.newlong(val) + at cpython_api([Py_ssize_t], PyObject) +def PyLong_FromSsize_t(space, val): + """Return a new PyLongObject object from a C Py_ssize_t, or + NULL on failure. 
+ """ + return space.newlong(val) + @cpython_api([rffi.LONGLONG], PyObject) def PyLong_FromLongLong(space, val): """Return a new PyLongObject object from a C long long, or NULL @@ -56,6 +64,14 @@ and -1 will be returned.""" return space.int_w(w_long) + at cpython_api([PyObject], Py_ssize_t, error=-1) +def PyLong_AsSsize_t(space, w_long): + """Return a C Py_ssize_t representation of the contents of pylong. If + pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised + and -1 will be returned. + """ + return space.int_w(w_long) + @cpython_api([PyObject], rffi.LONGLONG, error=-1) def PyLong_AsLongLong(space, w_long): """ diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -381,6 +381,15 @@ This is the equivalent of the Python expression hash(o).""" return space.int_w(space.hash(w_obj)) + at cpython_api([PyObject], lltype.Signed, error=-1) +def PyObject_HashNotImplemented(space, o): + """Set a TypeError indicating that type(o) is not hashable and return -1. + This function receives special treatment when stored in a tp_hash slot, + allowing a type to explicitly indicate to the interpreter that it is not + hashable. + """ + raise OperationError(space.w_TypeError, space.wrap("unhashable type")) + @cpython_api([PyObject], PyObject) def PyObject_Dir(space, w_o): """This is equivalent to the Python expression dir(o), returning a (possibly diff --git a/pypy/module/cpyext/pyerrors.py b/pypy/module/cpyext/pyerrors.py --- a/pypy/module/cpyext/pyerrors.py +++ b/pypy/module/cpyext/pyerrors.py @@ -314,7 +314,10 @@ """This function simulates the effect of a SIGINT signal arriving --- the next time PyErr_CheckSignals() is called, KeyboardInterrupt will be raised. 
It may be called without holding the interpreter lock.""" - space.check_signal_action.set_interrupt() + if space.check_signal_action is not None: + space.check_signal_action.set_interrupt() + #else: + # no 'signal' module present, ignore... We can't return an error here @cpython_api([PyObjectP, PyObjectP, PyObjectP], lltype.Void) def PyErr_GetExcInfo(space, ptype, pvalue, ptraceback): diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -7,7 +7,7 @@ cpython_api, generic_cpy_call, PyObject, Py_ssize_t) from pypy.module.cpyext.typeobjectdefs import ( unaryfunc, wrapperfunc, ternaryfunc, PyTypeObjectPtr, binaryfunc, - getattrfunc, getattrofunc, setattrofunc, lenfunc, ssizeargfunc, + getattrfunc, getattrofunc, setattrofunc, lenfunc, ssizeargfunc, inquiry, ssizessizeargfunc, ssizeobjargproc, iternextfunc, initproc, richcmpfunc, cmpfunc, hashfunc, descrgetfunc, descrsetfunc, objobjproc, objobjargproc, readbufferproc) @@ -60,6 +60,16 @@ args_w = space.fixedview(w_args) return generic_cpy_call(space, func_binary, w_self, args_w[0]) +def wrap_inquirypred(space, w_self, w_args, func): + func_inquiry = rffi.cast(inquiry, func) + check_num_args(space, w_args, 0) + args_w = space.fixedview(w_args) + res = generic_cpy_call(space, func_inquiry, w_self) + res = rffi.cast(lltype.Signed, res) + if res == -1: + space.fromcache(State).check_and_raise_exception() + return space.wrap(bool(res)) + def wrap_getattr(space, w_self, w_args, func): func_target = rffi.cast(getattrfunc, func) check_num_args(space, w_args, 1) diff --git a/pypy/module/cpyext/src/abstract.c b/pypy/module/cpyext/src/abstract.c new file mode 100644 --- /dev/null +++ b/pypy/module/cpyext/src/abstract.c @@ -0,0 +1,269 @@ +/* Abstract Object Interface */ + +#include "Python.h" + +/* Shorthands to return certain errors */ + +static PyObject * +type_error(const char *msg, PyObject *obj) +{ + PyErr_Format(PyExc_TypeError, 
msg, obj->ob_type->tp_name); + return NULL; +} + +static PyObject * +null_error(void) +{ + if (!PyErr_Occurred()) + PyErr_SetString(PyExc_SystemError, + "null argument to internal routine"); + return NULL; +} + +/* Operations on any object */ + +int +PyObject_CheckReadBuffer(PyObject *obj) +{ + PyBufferProcs *pb = obj->ob_type->tp_as_buffer; + + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL || + (*pb->bf_getsegcount)(obj, NULL) != 1) + return 0; + return 1; +} + +int PyObject_AsReadBuffer(PyObject *obj, + const void **buffer, + Py_ssize_t *buffer_len) +{ + PyBufferProcs *pb; + void *pp; + Py_ssize_t len; + + if (obj == NULL || buffer == NULL || buffer_len == NULL) { + null_error(); + return -1; + } + pb = obj->ob_type->tp_as_buffer; + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL) { + PyErr_SetString(PyExc_TypeError, + "expected a readable buffer object"); + return -1; + } + if ((*pb->bf_getsegcount)(obj, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "expected a single-segment buffer object"); + return -1; + } + len = (*pb->bf_getreadbuffer)(obj, 0, &pp); + if (len < 0) + return -1; + *buffer = pp; + *buffer_len = len; + return 0; +} + +int PyObject_AsWriteBuffer(PyObject *obj, + void **buffer, + Py_ssize_t *buffer_len) +{ + PyBufferProcs *pb; + void*pp; + Py_ssize_t len; + + if (obj == NULL || buffer == NULL || buffer_len == NULL) { + null_error(); + return -1; + } + pb = obj->ob_type->tp_as_buffer; + if (pb == NULL || + pb->bf_getwritebuffer == NULL || + pb->bf_getsegcount == NULL) { + PyErr_SetString(PyExc_TypeError, + "expected a writeable buffer object"); + return -1; + } + if ((*pb->bf_getsegcount)(obj, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "expected a single-segment buffer object"); + return -1; + } + len = (*pb->bf_getwritebuffer)(obj,0,&pp); + if (len < 0) + return -1; + *buffer = pp; + *buffer_len = len; + return 0; +} + +/* Operations on callable objects */ + +static 
PyObject* +call_function_tail(PyObject *callable, PyObject *args) +{ + PyObject *retval; + + if (args == NULL) + return NULL; + + if (!PyTuple_Check(args)) { + PyObject *a; + + a = PyTuple_New(1); + if (a == NULL) { + Py_DECREF(args); + return NULL; + } + PyTuple_SET_ITEM(a, 0, args); + args = a; + } + retval = PyObject_Call(callable, args, NULL); + + Py_DECREF(args); + + return retval; +} + +PyObject * +PyObject_CallFunction(PyObject *callable, const char *format, ...) +{ + va_list va; + PyObject *args; + + if (callable == NULL) + return null_error(); + + if (format && *format) { + va_start(va, format); + args = Py_VaBuildValue(format, va); + va_end(va); + } + else + args = PyTuple_New(0); + + return call_function_tail(callable, args); +} + +PyObject * +PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) +{ + va_list va; + PyObject *args; + PyObject *func = NULL; + PyObject *retval = NULL; + + if (o == NULL || name == NULL) + return null_error(); + + func = PyObject_GetAttrString(o, name); + if (func == NULL) { + PyErr_SetString(PyExc_AttributeError, name); + return 0; + } + + if (!PyCallable_Check(func)) { + type_error("attribute of type '%.200s' is not callable", func); + goto exit; + } + + if (format && *format) { + va_start(va, format); + args = Py_VaBuildValue(format, va); + va_end(va); + } + else + args = PyTuple_New(0); + + retval = call_function_tail(func, args); + + exit: + /* args gets consumed in call_function_tail */ + Py_XDECREF(func); + + return retval; +} + +static PyObject * +objargs_mktuple(va_list va) +{ + int i, n = 0; + va_list countva; + PyObject *result, *tmp; + +#ifdef VA_LIST_IS_ARRAY + memcpy(countva, va, sizeof(va_list)); +#else +#ifdef __va_copy + __va_copy(countva, va); +#else + countva = va; +#endif +#endif + + while (((PyObject *)va_arg(countva, PyObject *)) != NULL) + ++n; + result = PyTuple_New(n); + if (result != NULL && n > 0) { + for (i = 0; i < n; ++i) { + tmp = (PyObject *)va_arg(va, PyObject *); + 
Py_INCREF(tmp); + PyTuple_SET_ITEM(result, i, tmp); + } + } + return result; +} + +PyObject * +PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...) +{ + PyObject *args, *tmp; + va_list vargs; + + if (callable == NULL || name == NULL) + return null_error(); + + callable = PyObject_GetAttr(callable, name); + if (callable == NULL) + return NULL; + + /* count the args */ + va_start(vargs, name); + args = objargs_mktuple(vargs); + va_end(vargs); + if (args == NULL) { + Py_DECREF(callable); + return NULL; + } + tmp = PyObject_Call(callable, args, NULL); + Py_DECREF(args); + Py_DECREF(callable); + + return tmp; +} + +PyObject * +PyObject_CallFunctionObjArgs(PyObject *callable, ...) +{ + PyObject *args, *tmp; + va_list vargs; + + if (callable == NULL) + return null_error(); + + /* count the args */ + va_start(vargs, callable); + args = objargs_mktuple(vargs); + va_end(vargs); + if (args == NULL) + return NULL; + tmp = PyObject_Call(callable, args, NULL); + Py_DECREF(args); + + return tmp; +} + diff --git a/pypy/module/cpyext/src/bufferobject.c b/pypy/module/cpyext/src/bufferobject.c --- a/pypy/module/cpyext/src/bufferobject.c +++ b/pypy/module/cpyext/src/bufferobject.c @@ -13,207 +13,207 @@ static int get_buf(PyBufferObject *self, void **ptr, Py_ssize_t *size, - enum buffer_t buffer_type) + enum buffer_t buffer_type) { - if (self->b_base == NULL) { - assert (ptr != NULL); - *ptr = self->b_ptr; - *size = self->b_size; - } - else { - Py_ssize_t count, offset; - readbufferproc proc = 0; - PyBufferProcs *bp = self->b_base->ob_type->tp_as_buffer; - if ((*bp->bf_getsegcount)(self->b_base, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return 0; - } - if ((buffer_type == READ_BUFFER) || - ((buffer_type == ANY_BUFFER) && self->b_readonly)) - proc = bp->bf_getreadbuffer; - else if ((buffer_type == WRITE_BUFFER) || - (buffer_type == ANY_BUFFER)) - proc = (readbufferproc)bp->bf_getwritebuffer; - else if (buffer_type == 
CHAR_BUFFER) { + if (self->b_base == NULL) { + assert (ptr != NULL); + *ptr = self->b_ptr; + *size = self->b_size; + } + else { + Py_ssize_t count, offset; + readbufferproc proc = 0; + PyBufferProcs *bp = self->b_base->ob_type->tp_as_buffer; + if ((*bp->bf_getsegcount)(self->b_base, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return 0; + } + if ((buffer_type == READ_BUFFER) || + ((buffer_type == ANY_BUFFER) && self->b_readonly)) + proc = bp->bf_getreadbuffer; + else if ((buffer_type == WRITE_BUFFER) || + (buffer_type == ANY_BUFFER)) + proc = (readbufferproc)bp->bf_getwritebuffer; + else if (buffer_type == CHAR_BUFFER) { if (!PyType_HasFeature(self->ob_type, - Py_TPFLAGS_HAVE_GETCHARBUFFER)) { - PyErr_SetString(PyExc_TypeError, - "Py_TPFLAGS_HAVE_GETCHARBUFFER needed"); - return 0; - } - proc = (readbufferproc)bp->bf_getcharbuffer; - } - if (!proc) { - char *buffer_type_name; - switch (buffer_type) { - case READ_BUFFER: - buffer_type_name = "read"; - break; - case WRITE_BUFFER: - buffer_type_name = "write"; - break; - case CHAR_BUFFER: - buffer_type_name = "char"; - break; - default: - buffer_type_name = "no"; - break; - } - PyErr_Format(PyExc_TypeError, - "%s buffer type not available", - buffer_type_name); - return 0; - } - if ((count = (*proc)(self->b_base, 0, ptr)) < 0) - return 0; - /* apply constraints to the start/end */ - if (self->b_offset > count) - offset = count; - else - offset = self->b_offset; - *(char **)ptr = *(char **)ptr + offset; - if (self->b_size == Py_END_OF_BUFFER) - *size = count; - else - *size = self->b_size; - if (offset + *size > count) - *size = count - offset; - } - return 1; + Py_TPFLAGS_HAVE_GETCHARBUFFER)) { + PyErr_SetString(PyExc_TypeError, + "Py_TPFLAGS_HAVE_GETCHARBUFFER needed"); + return 0; + } + proc = (readbufferproc)bp->bf_getcharbuffer; + } + if (!proc) { + char *buffer_type_name; + switch (buffer_type) { + case READ_BUFFER: + buffer_type_name = "read"; + break; + case 
WRITE_BUFFER: + buffer_type_name = "write"; + break; + case CHAR_BUFFER: + buffer_type_name = "char"; + break; + default: + buffer_type_name = "no"; + break; + } + PyErr_Format(PyExc_TypeError, + "%s buffer type not available", + buffer_type_name); + return 0; + } + if ((count = (*proc)(self->b_base, 0, ptr)) < 0) + return 0; + /* apply constraints to the start/end */ + if (self->b_offset > count) + offset = count; + else + offset = self->b_offset; + *(char **)ptr = *(char **)ptr + offset; + if (self->b_size == Py_END_OF_BUFFER) + *size = count; + else + *size = self->b_size; + if (offset + *size > count) + *size = count - offset; + } + return 1; } static PyObject * buffer_from_memory(PyObject *base, Py_ssize_t size, Py_ssize_t offset, void *ptr, - int readonly) + int readonly) { - PyBufferObject * b; + PyBufferObject * b; - if (size < 0 && size != Py_END_OF_BUFFER) { - PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); - return NULL; - } - if (offset < 0) { - PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); - return NULL; - } + if (size < 0 && size != Py_END_OF_BUFFER) { + PyErr_SetString(PyExc_ValueError, + "size must be zero or positive"); + return NULL; + } + if (offset < 0) { + PyErr_SetString(PyExc_ValueError, + "offset must be zero or positive"); + return NULL; + } - b = PyObject_NEW(PyBufferObject, &PyBuffer_Type); - if ( b == NULL ) - return NULL; + b = PyObject_NEW(PyBufferObject, &PyBuffer_Type); + if ( b == NULL ) + return NULL; - Py_XINCREF(base); - b->b_base = base; - b->b_ptr = ptr; - b->b_size = size; - b->b_offset = offset; - b->b_readonly = readonly; - b->b_hash = -1; + Py_XINCREF(base); + b->b_base = base; + b->b_ptr = ptr; + b->b_size = size; + b->b_offset = offset; + b->b_readonly = readonly; + b->b_hash = -1; - return (PyObject *) b; + return (PyObject *) b; } static PyObject * buffer_from_object(PyObject *base, Py_ssize_t size, Py_ssize_t offset, int readonly) { - if (offset < 0) { - 
PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); - return NULL; - } - if ( PyBuffer_Check(base) && (((PyBufferObject *)base)->b_base) ) { - /* another buffer, refer to the base object */ - PyBufferObject *b = (PyBufferObject *)base; - if (b->b_size != Py_END_OF_BUFFER) { - Py_ssize_t base_size = b->b_size - offset; - if (base_size < 0) - base_size = 0; - if (size == Py_END_OF_BUFFER || size > base_size) - size = base_size; - } - offset += b->b_offset; - base = b->b_base; - } - return buffer_from_memory(base, size, offset, NULL, readonly); + if (offset < 0) { + PyErr_SetString(PyExc_ValueError, + "offset must be zero or positive"); + return NULL; + } + if ( PyBuffer_Check(base) && (((PyBufferObject *)base)->b_base) ) { + /* another buffer, refer to the base object */ + PyBufferObject *b = (PyBufferObject *)base; + if (b->b_size != Py_END_OF_BUFFER) { + Py_ssize_t base_size = b->b_size - offset; + if (base_size < 0) + base_size = 0; + if (size == Py_END_OF_BUFFER || size > base_size) + size = base_size; + } + offset += b->b_offset; + base = b->b_base; + } + return buffer_from_memory(base, size, offset, NULL, readonly); } PyObject * PyBuffer_FromObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size) { - PyBufferProcs *pb = base->ob_type->tp_as_buffer; + PyBufferProcs *pb = base->ob_type->tp_as_buffer; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_SetString(PyExc_TypeError, "buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_SetString(PyExc_TypeError, "buffer object expected"); + return NULL; + } - return buffer_from_object(base, size, offset, 1); + return buffer_from_object(base, size, offset, 1); } PyObject * PyBuffer_FromReadWriteObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size) { - PyBufferProcs *pb = base->ob_type->tp_as_buffer; + PyBufferProcs *pb = base->ob_type->tp_as_buffer; - if ( 
pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_SetString(PyExc_TypeError, "buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getwritebuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_SetString(PyExc_TypeError, "buffer object expected"); + return NULL; + } - return buffer_from_object(base, size, offset, 0); + return buffer_from_object(base, size, offset, 0); } PyObject * PyBuffer_FromMemory(void *ptr, Py_ssize_t size) { - return buffer_from_memory(NULL, size, 0, ptr, 1); + return buffer_from_memory(NULL, size, 0, ptr, 1); } PyObject * PyBuffer_FromReadWriteMemory(void *ptr, Py_ssize_t size) { - return buffer_from_memory(NULL, size, 0, ptr, 0); + return buffer_from_memory(NULL, size, 0, ptr, 0); } PyObject * PyBuffer_New(Py_ssize_t size) { - PyObject *o; - PyBufferObject * b; + PyObject *o; + PyBufferObject * b; - if (size < 0) { - PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); - return NULL; - } - if (sizeof(*b) > PY_SSIZE_T_MAX - size) { - /* unlikely */ - return PyErr_NoMemory(); - } - /* Inline PyObject_New */ - o = (PyObject *)PyObject_MALLOC(sizeof(*b) + size); - if ( o == NULL ) - return PyErr_NoMemory(); - b = (PyBufferObject *) PyObject_INIT(o, &PyBuffer_Type); + if (size < 0) { + PyErr_SetString(PyExc_ValueError, + "size must be zero or positive"); + return NULL; + } + if (sizeof(*b) > PY_SSIZE_T_MAX - size) { + /* unlikely */ + return PyErr_NoMemory(); + } + /* Inline PyObject_New */ + o = (PyObject *)PyObject_MALLOC(sizeof(*b) + size); + if ( o == NULL ) + return PyErr_NoMemory(); + b = (PyBufferObject *) PyObject_INIT(o, &PyBuffer_Type); - b->b_base = NULL; - b->b_ptr = (void *)(b + 1); - b->b_size = size; - b->b_offset = 0; - b->b_readonly = 0; - b->b_hash = -1; + b->b_base = NULL; + b->b_ptr = (void *)(b + 1); + b->b_size = size; + b->b_offset = 0; + b->b_readonly = 0; + b->b_hash = -1; - return o; + return o; } /* Methods */ @@ -221,19 +221,21 @@ 
static PyObject * buffer_new(PyTypeObject *type, PyObject *args, PyObject *kw) { - PyObject *ob; - Py_ssize_t offset = 0; - Py_ssize_t size = Py_END_OF_BUFFER; + PyObject *ob; + Py_ssize_t offset = 0; + Py_ssize_t size = Py_END_OF_BUFFER; - /*if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) - return NULL;*/ - - if (!_PyArg_NoKeywords("buffer()", kw)) - return NULL; + /* + * if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) + * return NULL; + */ - if (!PyArg_ParseTuple(args, "O|nn:buffer", &ob, &offset, &size)) - return NULL; - return PyBuffer_FromObject(ob, offset, size); + if (!_PyArg_NoKeywords("buffer()", kw)) + return NULL; + + if (!PyArg_ParseTuple(args, "O|nn:buffer", &ob, &offset, &size)) + return NULL; + return PyBuffer_FromObject(ob, offset, size); } PyDoc_STRVAR(buffer_doc, @@ -248,99 +250,99 @@ static void buffer_dealloc(PyBufferObject *self) { - Py_XDECREF(self->b_base); - PyObject_DEL(self); + Py_XDECREF(self->b_base); + PyObject_DEL(self); } static int buffer_compare(PyBufferObject *self, PyBufferObject *other) { - void *p1, *p2; - Py_ssize_t len_self, len_other, min_len; - int cmp; + void *p1, *p2; + Py_ssize_t len_self, len_other, min_len; + int cmp; - if (!get_buf(self, &p1, &len_self, ANY_BUFFER)) - return -1; - if (!get_buf(other, &p2, &len_other, ANY_BUFFER)) - return -1; - min_len = (len_self < len_other) ? len_self : len_other; - if (min_len > 0) { - cmp = memcmp(p1, p2, min_len); - if (cmp != 0) - return cmp < 0 ? -1 : 1; - } - return (len_self < len_other) ? -1 : (len_self > len_other) ? 1 : 0; + if (!get_buf(self, &p1, &len_self, ANY_BUFFER)) + return -1; + if (!get_buf(other, &p2, &len_other, ANY_BUFFER)) + return -1; + min_len = (len_self < len_other) ? len_self : len_other; + if (min_len > 0) { + cmp = memcmp(p1, p2, min_len); + if (cmp != 0) + return cmp < 0 ? -1 : 1; + } + return (len_self < len_other) ? -1 : (len_self > len_other) ? 
1 : 0; } static PyObject * buffer_repr(PyBufferObject *self) { - const char *status = self->b_readonly ? "read-only" : "read-write"; + const char *status = self->b_readonly ? "read-only" : "read-write"; if ( self->b_base == NULL ) - return PyString_FromFormat("<%s buffer ptr %p, size %zd at %p>", - status, - self->b_ptr, - self->b_size, - self); - else - return PyString_FromFormat( - "<%s buffer for %p, size %zd, offset %zd at %p>", - status, - self->b_base, - self->b_size, - self->b_offset, - self); + return PyString_FromFormat("<%s buffer ptr %p, size %zd at %p>", + status, + self->b_ptr, + self->b_size, + self); + else + return PyString_FromFormat( + "<%s buffer for %p, size %zd, offset %zd at %p>", + status, + self->b_base, + self->b_size, + self->b_offset, + self); } static long buffer_hash(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - register Py_ssize_t len; - register unsigned char *p; - register long x; + void *ptr; + Py_ssize_t size; + register Py_ssize_t len; + register unsigned char *p; + register long x; - if ( self->b_hash != -1 ) - return self->b_hash; + if ( self->b_hash != -1 ) + return self->b_hash; - /* XXX potential bugs here, a readonly buffer does not imply that the - * underlying memory is immutable. b_readonly is a necessary but not - * sufficient condition for a buffer to be hashable. Perhaps it would - * be better to only allow hashing if the underlying object is known to - * be immutable (e.g. PyString_Check() is true). Another idea would - * be to call tp_hash on the underlying object and see if it raises - * an error. */ - if ( !self->b_readonly ) - { - PyErr_SetString(PyExc_TypeError, - "writable buffers are not hashable"); - return -1; - } + /* XXX potential bugs here, a readonly buffer does not imply that the + * underlying memory is immutable. b_readonly is a necessary but not + * sufficient condition for a buffer to be hashable. 
Perhaps it would + * be better to only allow hashing if the underlying object is known to + * be immutable (e.g. PyString_Check() is true). Another idea would + * be to call tp_hash on the underlying object and see if it raises + * an error. */ + if ( !self->b_readonly ) + { + PyErr_SetString(PyExc_TypeError, + "writable buffers are not hashable"); + return -1; + } - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - p = (unsigned char *) ptr; - len = size; - x = *p << 7; - while (--len >= 0) - x = (1000003*x) ^ *p++; - x ^= size; - if (x == -1) - x = -2; - self->b_hash = x; - return x; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return -1; + p = (unsigned char *) ptr; + len = size; + x = *p << 7; + while (--len >= 0) + x = (1000003*x) ^ *p++; + x ^= size; + if (x == -1) + x = -2; + self->b_hash = x; + return x; } static PyObject * buffer_str(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - return PyString_FromStringAndSize((const char *)ptr, size); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + return PyString_FromStringAndSize((const char *)ptr, size); } /* Sequence methods */ @@ -348,374 +350,372 @@ static Py_ssize_t buffer_length(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - return size; + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return -1; + return size; } static PyObject * buffer_concat(PyBufferObject *self, PyObject *other) { - PyBufferProcs *pb = other->ob_type->tp_as_buffer; - void *ptr1, *ptr2; - char *p; - PyObject *ob; - Py_ssize_t size, count; + PyBufferProcs *pb = other->ob_type->tp_as_buffer; + void *ptr1, *ptr2; + char *p; + PyObject *ob; + Py_ssize_t size, count; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return NULL; - } - if ( 
(*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return NULL; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? */ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return NULL; + } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return NULL; - - /* optimize special case */ - if ( size == 0 ) - { - Py_INCREF(other); - return other; - } + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return NULL; - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return NULL; + /* optimize special case */ + if ( size == 0 ) + { + Py_INCREF(other); + return other; + } - assert(count <= PY_SIZE_MAX - size); + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return NULL; - ob = PyString_FromStringAndSize(NULL, size + count); - if ( ob == NULL ) - return NULL; - p = PyString_AS_STRING(ob); - memcpy(p, ptr1, size); - memcpy(p + size, ptr2, count); + assert(count <= PY_SIZE_MAX - size); - /* there is an extra byte in the string object, so this is safe */ - p[size + count] = '\0'; + ob = PyString_FromStringAndSize(NULL, size + count); + if ( ob == NULL ) + return NULL; + p = PyString_AS_STRING(ob); + memcpy(p, ptr1, size); + memcpy(p + size, ptr2, count); - return ob; + /* there is an extra byte in the string object, so this is safe */ + p[size + count] = '\0'; + + return ob; } static PyObject * buffer_repeat(PyBufferObject *self, Py_ssize_t count) { - PyObject *ob; - register char *p; - void *ptr; - Py_ssize_t size; + PyObject *ob; + register char *p; + void *ptr; + Py_ssize_t size; - if ( count < 0 ) - count = 0; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if (count > PY_SSIZE_T_MAX / size) { 
- PyErr_SetString(PyExc_MemoryError, "result too large"); - return NULL; - } - ob = PyString_FromStringAndSize(NULL, size * count); - if ( ob == NULL ) - return NULL; + if ( count < 0 ) + count = 0; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if (count > PY_SSIZE_T_MAX / size) { + PyErr_SetString(PyExc_MemoryError, "result too large"); + return NULL; + } + ob = PyString_FromStringAndSize(NULL, size * count); + if ( ob == NULL ) + return NULL; - p = PyString_AS_STRING(ob); - while ( count-- ) - { - memcpy(p, ptr, size); - p += size; - } + p = PyString_AS_STRING(ob); + while ( count-- ) + { + memcpy(p, ptr, size); + p += size; + } - /* there is an extra byte in the string object, so this is safe */ - *p = '\0'; + /* there is an extra byte in the string object, so this is safe */ + *p = '\0'; - return ob; + return ob; } static PyObject * buffer_item(PyBufferObject *self, Py_ssize_t idx) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if ( idx < 0 || idx >= size ) { - PyErr_SetString(PyExc_IndexError, "buffer index out of range"); - return NULL; - } - return PyString_FromStringAndSize((char *)ptr + idx, 1); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if ( idx < 0 || idx >= size ) { + PyErr_SetString(PyExc_IndexError, "buffer index out of range"); + return NULL; + } + return PyString_FromStringAndSize((char *)ptr + idx, 1); } static PyObject * buffer_slice(PyBufferObject *self, Py_ssize_t left, Py_ssize_t right) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if ( left < 0 ) - left = 0; - if ( right < 0 ) - right = 0; - if ( right > size ) - right = size; - if ( right < left ) - right = left; - return PyString_FromStringAndSize((char *)ptr + left, - right - left); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if ( left < 0 ) + left = 0; + if ( right < 
0 ) + right = 0; + if ( right > size ) + right = size; + if ( right < left ) + right = left; + return PyString_FromStringAndSize((char *)ptr + left, + right - left); } static PyObject * buffer_subscript(PyBufferObject *self, PyObject *item) { - void *p; - Py_ssize_t size; - - if (!get_buf(self, &p, &size, ANY_BUFFER)) - return NULL; - + void *p; + Py_ssize_t size; + + if (!get_buf(self, &p, &size, ANY_BUFFER)) + return NULL; if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i == -1 && PyErr_Occurred()) - return NULL; - if (i < 0) - i += size; - return buffer_item(self, i); - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelength, cur, i; + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); + if (i == -1 && PyErr_Occurred()) + return NULL; + if (i < 0) + i += size; + return buffer_item(self, i); + } + else if (PySlice_Check(item)) { + Py_ssize_t start, stop, step, slicelength, cur, i; - if (PySlice_GetIndicesEx((PySliceObject*)item, size, - &start, &stop, &step, &slicelength) < 0) { - return NULL; - } + if (PySlice_GetIndicesEx((PySliceObject*)item, size, + &start, &stop, &step, &slicelength) < 0) { + return NULL; + } - if (slicelength <= 0) - return PyString_FromStringAndSize("", 0); - else if (step == 1) - return PyString_FromStringAndSize((char *)p + start, - stop - start); - else { - PyObject *result; - char *source_buf = (char *)p; - char *result_buf = (char *)PyMem_Malloc(slicelength); + if (slicelength <= 0) + return PyString_FromStringAndSize("", 0); + else if (step == 1) + return PyString_FromStringAndSize((char *)p + start, + stop - start); + else { + PyObject *result; + char *source_buf = (char *)p; + char *result_buf = (char *)PyMem_Malloc(slicelength); - if (result_buf == NULL) - return PyErr_NoMemory(); + if (result_buf == NULL) + return PyErr_NoMemory(); - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - result_buf[i] = source_buf[cur]; - } + for (cur = 
start, i = 0; i < slicelength; + cur += step, i++) { + result_buf[i] = source_buf[cur]; + } - result = PyString_FromStringAndSize(result_buf, - slicelength); - PyMem_Free(result_buf); - return result; - } - } - else { - PyErr_SetString(PyExc_TypeError, - "sequence index must be integer"); - return NULL; - } + result = PyString_FromStringAndSize(result_buf, + slicelength); + PyMem_Free(result_buf); + return result; + } + } + else { + PyErr_SetString(PyExc_TypeError, + "sequence index must be integer"); + return NULL; + } } static int buffer_ass_item(PyBufferObject *self, Py_ssize_t idx, PyObject *other) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t size; - Py_ssize_t count; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t size; + Py_ssize_t count; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return -1; + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return -1; - if (idx < 0 || idx >= size) { - PyErr_SetString(PyExc_IndexError, - "buffer assignment index out of range"); - return -1; - } + if (idx < 0 || idx >= size) { + PyErr_SetString(PyExc_IndexError, + "buffer assignment index out of range"); + return -1; + } - pb = other ? other->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } + pb = other ? 
other->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? */ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return -1; - if ( count != 1 ) { - PyErr_SetString(PyExc_TypeError, - "right operand must be a single byte"); - return -1; - } + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return -1; + if ( count != 1 ) { + PyErr_SetString(PyExc_TypeError, + "right operand must be a single byte"); + return -1; + } - ((char *)ptr1)[idx] = *(char *)ptr2; - return 0; + ((char *)ptr1)[idx] = *(char *)ptr2; + return 0; } static int buffer_ass_slice(PyBufferObject *self, Py_ssize_t left, Py_ssize_t right, PyObject *other) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t size; - Py_ssize_t slice_len; - Py_ssize_t count; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t size; + Py_ssize_t slice_len; + Py_ssize_t count; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - pb = other ? other->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return -1; - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return -1; + pb = other ? 
other->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? */ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return -1; + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return -1; - if ( left < 0 ) - left = 0; - else if ( left > size ) - left = size; - if ( right < left ) - right = left; - else if ( right > size ) - right = size; - slice_len = right - left; + if ( left < 0 ) + left = 0; + else if ( left > size ) + left = size; + if ( right < left ) + right = left; + else if ( right > size ) + right = size; + slice_len = right - left; - if ( count != slice_len ) { - PyErr_SetString( - PyExc_TypeError, - "right operand length must match slice length"); - return -1; - } + if ( count != slice_len ) { + PyErr_SetString( + PyExc_TypeError, + "right operand length must match slice length"); + return -1; + } - if ( slice_len ) - memcpy((char *)ptr1 + left, ptr2, slice_len); + if ( slice_len ) + memcpy((char *)ptr1 + left, ptr2, slice_len); - return 0; + return 0; } static int buffer_ass_subscript(PyBufferObject *self, PyObject *item, PyObject *value) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t selfsize; - Py_ssize_t othersize; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t selfsize; + Py_ssize_t othersize; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - pb = value ? 
value->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(value, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } - if (!get_buf(self, &ptr1, &selfsize, ANY_BUFFER)) - return -1; - + pb = value ? value->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(value, NULL) != 1 ) + { + /* ### use a different exception type/message? */ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } + if (!get_buf(self, &ptr1, &selfsize, ANY_BUFFER)) + return -1; if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i == -1 && PyErr_Occurred()) - return -1; - if (i < 0) - i += selfsize; - return buffer_ass_item(self, i, value); - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelength; - - if (PySlice_GetIndicesEx((PySliceObject *)item, selfsize, - &start, &stop, &step, &slicelength) < 0) - return -1; + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); + if (i == -1 && PyErr_Occurred()) + return -1; + if (i < 0) + i += selfsize; + return buffer_ass_item(self, i, value); + } + else if (PySlice_Check(item)) { + Py_ssize_t start, stop, step, slicelength; - if ((othersize = (*pb->bf_getreadbuffer)(value, 0, &ptr2)) < 0) - return -1; + if (PySlice_GetIndicesEx((PySliceObject *)item, selfsize, + &start, &stop, &step, &slicelength) < 0) + return -1; - if (othersize != slicelength) { - PyErr_SetString( - PyExc_TypeError, - "right operand length must match slice length"); - return -1; - } + if ((othersize = (*pb->bf_getreadbuffer)(value, 0, &ptr2)) < 0) + return -1; - if (slicelength == 0) - 
return 0; - else if (step == 1) { - memcpy((char *)ptr1 + start, ptr2, slicelength); - return 0; - } - else { - Py_ssize_t cur, i; - - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - ((char *)ptr1)[cur] = ((char *)ptr2)[i]; - } + if (othersize != slicelength) { + PyErr_SetString( + PyExc_TypeError, + "right operand length must match slice length"); + return -1; + } - return 0; - } - } else { - PyErr_SetString(PyExc_TypeError, - "buffer indices must be integers"); - return -1; - } + if (slicelength == 0) + return 0; + else if (step == 1) { + memcpy((char *)ptr1 + start, ptr2, slicelength); + return 0; + } + else { + Py_ssize_t cur, i; + + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + ((char *)ptr1)[cur] = ((char *)ptr2)[i]; + } + + return 0; + } + } else { + PyErr_SetString(PyExc_TypeError, + "buffer indices must be integers"); + return -1; + } } /* Buffer methods */ @@ -723,64 +723,64 @@ static Py_ssize_t buffer_getreadbuf(PyBufferObject *self, Py_ssize_t idx, void **pp) { - Py_ssize_t size; - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, pp, &size, READ_BUFFER)) - return -1; - return size; + Py_ssize_t size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, pp, &size, READ_BUFFER)) + return -1; + return size; } static Py_ssize_t buffer_getwritebuf(PyBufferObject *self, Py_ssize_t idx, void **pp) { - Py_ssize_t size; + Py_ssize_t size; - if ( self->b_readonly ) - { - PyErr_SetString(PyExc_TypeError, "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) + { + PyErr_SetString(PyExc_TypeError, "buffer is read-only"); + return -1; + } - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, pp, &size, WRITE_BUFFER)) - return -1; - return size; + if ( idx != 0 ) 
{ + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, pp, &size, WRITE_BUFFER)) + return -1; + return size; } static Py_ssize_t buffer_getsegcount(PyBufferObject *self, Py_ssize_t *lenp) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - if (lenp) - *lenp = size; - return 1; + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return -1; + if (lenp) + *lenp = size; + return 1; } static Py_ssize_t buffer_getcharbuf(PyBufferObject *self, Py_ssize_t idx, const char **pp) { - void *ptr; - Py_ssize_t size; - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, &ptr, &size, CHAR_BUFFER)) - return -1; - *pp = (const char *)ptr; - return size; + void *ptr; + Py_ssize_t size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, &ptr, &size, CHAR_BUFFER)) + return -1; + *pp = (const char *)ptr; + return size; } void init_bufferobject(void) @@ -789,67 +789,65 @@ } static PySequenceMethods buffer_as_sequence = { - (lenfunc)buffer_length, /*sq_length*/ - (binaryfunc)buffer_concat, /*sq_concat*/ - (ssizeargfunc)buffer_repeat, /*sq_repeat*/ - (ssizeargfunc)buffer_item, /*sq_item*/ - (ssizessizeargfunc)buffer_slice, /*sq_slice*/ - (ssizeobjargproc)buffer_ass_item, /*sq_ass_item*/ - (ssizessizeobjargproc)buffer_ass_slice, /*sq_ass_slice*/ + (lenfunc)buffer_length, /*sq_length*/ + (binaryfunc)buffer_concat, /*sq_concat*/ + (ssizeargfunc)buffer_repeat, /*sq_repeat*/ + (ssizeargfunc)buffer_item, /*sq_item*/ + (ssizessizeargfunc)buffer_slice, /*sq_slice*/ + (ssizeobjargproc)buffer_ass_item, /*sq_ass_item*/ + (ssizessizeobjargproc)buffer_ass_slice, /*sq_ass_slice*/ }; static PyMappingMethods buffer_as_mapping = { - (lenfunc)buffer_length, - (binaryfunc)buffer_subscript, - 
(objobjargproc)buffer_ass_subscript, + (lenfunc)buffer_length, + (binaryfunc)buffer_subscript, + (objobjargproc)buffer_ass_subscript, }; static PyBufferProcs buffer_as_buffer = { - (readbufferproc)buffer_getreadbuf, - (writebufferproc)buffer_getwritebuf, - (segcountproc)buffer_getsegcount, - (charbufferproc)buffer_getcharbuf, + (readbufferproc)buffer_getreadbuf, + (writebufferproc)buffer_getwritebuf, + (segcountproc)buffer_getsegcount, + (charbufferproc)buffer_getcharbuf, }; PyTypeObject PyBuffer_Type = { - PyObject_HEAD_INIT(NULL) + PyVarObject_HEAD_INIT(NULL, 0) + "buffer", + sizeof(PyBufferObject), 0, - "buffer", - sizeof(PyBufferObject), - 0, - (destructor)buffer_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - (cmpfunc)buffer_compare, /* tp_compare */ - (reprfunc)buffer_repr, /* tp_repr */ - 0, /* tp_as_number */ - &buffer_as_sequence, /* tp_as_sequence */ - &buffer_as_mapping, /* tp_as_mapping */ - (hashfunc)buffer_hash, /* tp_hash */ - 0, /* tp_call */ - (reprfunc)buffer_str, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - &buffer_as_buffer, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GETCHARBUFFER, /* tp_flags */ - buffer_doc, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - buffer_new, /* tp_new */ + (destructor)buffer_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + (cmpfunc)buffer_compare, /* tp_compare */ + (reprfunc)buffer_repr, /* tp_repr */ + 0, /* tp_as_number */ + &buffer_as_sequence, /* tp_as_sequence */ + &buffer_as_mapping, /* tp_as_mapping */ + (hashfunc)buffer_hash, /* tp_hash */ + 0, /* tp_call 
*/ + (reprfunc)buffer_str, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + &buffer_as_buffer, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GETCHARBUFFER, /* tp_flags */ + buffer_doc, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + buffer_new, /* tp_new */ }; - diff --git a/pypy/module/cpyext/src/cobject.c b/pypy/module/cpyext/src/cobject.c --- a/pypy/module/cpyext/src/cobject.c +++ b/pypy/module/cpyext/src/cobject.c @@ -50,6 +50,10 @@ PyCObject_AsVoidPtr(PyObject *self) { if (self) { + if (PyCapsule_CheckExact(self)) { + const char *name = PyCapsule_GetName(self); + return (void *)PyCapsule_GetPointer(self, name); + } if (self->ob_type == &PyCObject_Type) return ((PyCObject *)self)->cobject; PyErr_SetString(PyExc_TypeError, diff --git a/pypy/module/cpyext/src/getargs.c b/pypy/module/cpyext/src/getargs.c --- a/pypy/module/cpyext/src/getargs.c +++ b/pypy/module/cpyext/src/getargs.c @@ -7,349 +7,348 @@ #ifdef __cplusplus -extern "C" { +extern "C" { #endif - int PyArg_Parse(PyObject *, const char *, ...); int PyArg_ParseTuple(PyObject *, const char *, ...); int PyArg_VaParse(PyObject *, const char *, va_list); int PyArg_ParseTupleAndKeywords(PyObject *, PyObject *, - const char *, char **, ...); + const char *, char **, ...); int PyArg_VaParseTupleAndKeywords(PyObject *, PyObject *, - const char *, char **, va_list); + const char *, char **, va_list); #define FLAG_COMPAT 1 #define FLAG_SIZE_T 2 -typedef int (*destr_t)(PyObject *, void *); - - -/* Keep track of "objects" that have been allocated or initialized and - which will need to be deallocated or cleaned up somehow if overall - parsing fails. 
-*/
-typedef struct {
-  void *item;
-  destr_t destructor;
-} freelistentry_t;
-
-typedef struct {
-  int first_available;
-  freelistentry_t *entries;
-} freelist_t;
-
 /* Forward */
 
 static int vgetargs1(PyObject *, const char *, va_list *, int);
 static void seterror(int, const char *, int *, const char *, const char *);
-static char *convertitem(PyObject *, const char **, va_list *, int, int *,
-			 char *, size_t, freelist_t *);
+static char *convertitem(PyObject *, const char **, va_list *, int, int *,
+                         char *, size_t, PyObject **);
 static char *converttuple(PyObject *, const char **, va_list *, int,
-			  int *, char *, size_t, int, freelist_t *);
+                          int *, char *, size_t, int, PyObject **);
 static char *convertsimple(PyObject *, const char **, va_list *, int, char *,
-			   size_t, freelist_t *);
+                           size_t, PyObject **);
 static Py_ssize_t convertbuffer(PyObject *, void **p, char **);
 static int getbuffer(PyObject *, Py_buffer *, char**);
 
 static int vgetargskeywords(PyObject *, PyObject *,
-			    const char *, char **, va_list *, int);
+                            const char *, char **, va_list *, int);
 static char *skipitem(const char **, va_list *, int);
 
 int
 PyArg_Parse(PyObject *args, const char *format, ...)
 {
-	int retval;
-	va_list va;
-
-	va_start(va, format);
-	retval = vgetargs1(args, format, &va, FLAG_COMPAT);
-	va_end(va);
-	return retval;
+    int retval;
+    va_list va;
+
+    va_start(va, format);
+    retval = vgetargs1(args, format, &va, FLAG_COMPAT);
+    va_end(va);
+    return retval;
 }
 
 int
 _PyArg_Parse_SizeT(PyObject *args, char *format, ...)
 {
-	int retval;
-	va_list va;
-
-	va_start(va, format);
-	retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T);
-	va_end(va);
-	return retval;
+    int retval;
+    va_list va;
+
+    va_start(va, format);
+    retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T);
+    va_end(va);
+    return retval;
 }
 
 int
 PyArg_ParseTuple(PyObject *args, const char *format, ...)
{ - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, 0); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, 0); + va_end(va); + return retval; } int _PyArg_ParseTuple_SizeT(PyObject *args, char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_SIZE_T); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_VaParse(PyObject *args, const char *format, va_list va) { - va_list lva; + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - return vgetargs1(args, format, &lva, 0); + return vgetargs1(args, format, &lva, 0); } int _PyArg_VaParse_SizeT(PyObject *args, char *format, va_list va) { - va_list lva; + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - return vgetargs1(args, format, &lva, FLAG_SIZE_T); + return vgetargs1(args, format, &lva, FLAG_SIZE_T); } /* Handle cleanup of allocated memory in case of exception */ +#define GETARGS_CAPSULE_NAME_CLEANUP_PTR "getargs.cleanup_ptr" +#define GETARGS_CAPSULE_NAME_CLEANUP_BUFFER "getargs.cleanup_buffer" + +static void +cleanup_ptr(PyObject *self) +{ + void *ptr = PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_PTR); + if (ptr) { + PyMem_FREE(ptr); + } +} + +static void +cleanup_buffer(PyObject *self) +{ + Py_buffer *ptr = (Py_buffer *)PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_BUFFER); + if (ptr) { + PyBuffer_Release(ptr); + } +} + static int -cleanup_ptr(PyObject 
*self, void *ptr) +addcleanup(void *ptr, PyObject **freelist, PyCapsule_Destructor destr) { - if (ptr) { - PyMem_FREE(ptr); + PyObject *cobj; + const char *name; + + if (!*freelist) { + *freelist = PyList_New(0); + if (!*freelist) { + destr(ptr); + return -1; + } } + + if (destr == cleanup_ptr) { + name = GETARGS_CAPSULE_NAME_CLEANUP_PTR; + } else if (destr == cleanup_buffer) { + name = GETARGS_CAPSULE_NAME_CLEANUP_BUFFER; + } else { + return -1; + } + cobj = PyCapsule_New(ptr, name, destr); + if (!cobj) { + destr(ptr); + return -1; + } + if (PyList_Append(*freelist, cobj)) { + Py_DECREF(cobj); + return -1; + } + Py_DECREF(cobj); return 0; } static int -cleanup_buffer(PyObject *self, void *ptr) +cleanreturn(int retval, PyObject *freelist) { - Py_buffer *buf = (Py_buffer *)ptr; - if (buf) { - PyBuffer_Release(buf); + if (freelist && retval != 0) { + /* We were successful, reset the destructors so that they + don't get called. */ + Py_ssize_t len = PyList_GET_SIZE(freelist), i; + for (i = 0; i < len; i++) + PyCapsule_SetDestructor(PyList_GET_ITEM(freelist, i), NULL); } - return 0; + Py_XDECREF(freelist); + return retval; } -static int -addcleanup(void *ptr, freelist_t *freelist, destr_t destructor) -{ - int index; - - index = freelist->first_available; - freelist->first_available += 1; - - freelist->entries[index].item = ptr; - freelist->entries[index].destructor = destructor; - - return 0; -} - -static int -cleanreturn(int retval, freelist_t *freelist) -{ - int index; - - if (retval == 0) { - /* A failure occurred, therefore execute all of the cleanup - functions. 
- */ - for (index = 0; index < freelist->first_available; ++index) { - freelist->entries[index].destructor(NULL, - freelist->entries[index].item); - } - } - PyMem_Free(freelist->entries); - return retval; -} static int vgetargs1(PyObject *args, const char *format, va_list *p_va, int flags) { - char msgbuf[256]; - int levels[32]; - const char *fname = NULL; - const char *message = NULL; - int min = -1; - int max = 0; - int level = 0; - int endfmt = 0; - const char *formatsave = format; - Py_ssize_t i, len; - char *msg; - freelist_t freelist = {0, NULL}; - int compat = flags & FLAG_COMPAT; + char msgbuf[256]; + int levels[32]; + const char *fname = NULL; + const char *message = NULL; + int min = -1; + int max = 0; + int level = 0; + int endfmt = 0; + const char *formatsave = format; + Py_ssize_t i, len; + char *msg; + PyObject *freelist = NULL; + int compat = flags & FLAG_COMPAT; - assert(compat || (args != (PyObject*)NULL)); - flags = flags & ~FLAG_COMPAT; + assert(compat || (args != (PyObject*)NULL)); + flags = flags & ~FLAG_COMPAT; - while (endfmt == 0) { - int c = *format++; - switch (c) { - case '(': - if (level == 0) - max++; - level++; - if (level >= 30) - Py_FatalError("too many tuple nesting levels " - "in argument format string"); - break; - case ')': - if (level == 0) - Py_FatalError("excess ')' in getargs format"); - else - level--; - break; - case '\0': - endfmt = 1; - break; - case ':': - fname = format; - endfmt = 1; - break; - case ';': - message = format; - endfmt = 1; - break; - default: - if (level == 0) { - if (c == 'O') - max++; - else if (isalpha(Py_CHARMASK(c))) { - if (c != 'e') /* skip encoded */ - max++; - } else if (c == '|') - min = max; - } - break; - } - } - - if (level != 0) - Py_FatalError(/* '(' */ "missing ')' in getargs format"); - - if (min < 0) - min = max; - - format = formatsave; - - freelist.entries = PyMem_New(freelistentry_t, max); + while (endfmt == 0) { + int c = *format++; + switch (c) { + case '(': + if (level == 0) + 
max++; + level++; + if (level >= 30) + Py_FatalError("too many tuple nesting levels " + "in argument format string"); + break; + case ')': + if (level == 0) + Py_FatalError("excess ')' in getargs format"); + else + level--; + break; + case '\0': + endfmt = 1; + break; + case ':': + fname = format; + endfmt = 1; + break; + case ';': + message = format; + endfmt = 1; + break; + default: + if (level == 0) { + if (c == 'O') + max++; + else if (isalpha(Py_CHARMASK(c))) { + if (c != 'e') /* skip encoded */ + max++; + } else if (c == '|') + min = max; + } + break; + } + } - if (compat) { - if (max == 0) { - if (args == NULL) - return cleanreturn(1, &freelist); - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.200s%s takes no arguments", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()"); - PyErr_SetString(PyExc_TypeError, msgbuf); - return cleanreturn(0, &freelist); - } - else if (min == 1 && max == 1) { - if (args == NULL) { - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.200s%s takes at least one argument", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()"); - PyErr_SetString(PyExc_TypeError, msgbuf); - return cleanreturn(0, &freelist); - } - msg = convertitem(args, &format, p_va, flags, levels, - msgbuf, sizeof(msgbuf), &freelist); - if (msg == NULL) - return cleanreturn(1, &freelist); - seterror(levels[0], msg, levels+1, fname, message); - return cleanreturn(0, &freelist); - } - else { - PyErr_SetString(PyExc_SystemError, - "old style getargs format uses new features"); - return cleanreturn(0, &freelist); - } - } - - if (!PyTuple_Check(args)) { - PyErr_SetString(PyExc_SystemError, - "new style getargs format but argument is not a tuple"); - return cleanreturn(0, &freelist); - } - - len = PyTuple_GET_SIZE(args); - - if (len < min || max < len) { - if (message == NULL) { - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.150s%s takes %s %d argument%s " - "(%ld given)", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()", - min==max ? 
"exactly" - : len < min ? "at least" : "at most", - len < min ? min : max, - (len < min ? min : max) == 1 ? "" : "s", - Py_SAFE_DOWNCAST(len, Py_ssize_t, long)); - message = msgbuf; - } - PyErr_SetString(PyExc_TypeError, message); - return cleanreturn(0, &freelist); - } - - for (i = 0; i < len; i++) { - if (*format == '|') - format++; - msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va, - flags, levels, msgbuf, - sizeof(msgbuf), &freelist); - if (msg) { - seterror(i+1, msg, levels, fname, message); - return cleanreturn(0, &freelist); - } - } + if (level != 0) + Py_FatalError(/* '(' */ "missing ')' in getargs format"); - if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) && - *format != '(' && - *format != '|' && *format != ':' && *format != ';') { - PyErr_Format(PyExc_SystemError, - "bad format string: %.200s", formatsave); - return cleanreturn(0, &freelist); - } - - return cleanreturn(1, &freelist); + if (min < 0) + min = max; + + format = formatsave; + + if (compat) { + if (max == 0) { + if (args == NULL) + return 1; + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.200s%s takes no arguments", + fname==NULL ? "function" : fname, + fname==NULL ? "" : "()"); + PyErr_SetString(PyExc_TypeError, msgbuf); + return 0; + } + else if (min == 1 && max == 1) { + if (args == NULL) { + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.200s%s takes at least one argument", + fname==NULL ? "function" : fname, + fname==NULL ? 
"" : "()"); + PyErr_SetString(PyExc_TypeError, msgbuf); + return 0; + } + msg = convertitem(args, &format, p_va, flags, levels, + msgbuf, sizeof(msgbuf), &freelist); + if (msg == NULL) + return cleanreturn(1, freelist); + seterror(levels[0], msg, levels+1, fname, message); + return cleanreturn(0, freelist); + } + else { + PyErr_SetString(PyExc_SystemError, + "old style getargs format uses new features"); + return 0; + } + } + + if (!PyTuple_Check(args)) { + PyErr_SetString(PyExc_SystemError, + "new style getargs format but argument is not a tuple"); + return 0; + } + + len = PyTuple_GET_SIZE(args); + + if (len < min || max < len) { + if (message == NULL) { + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.150s%s takes %s %d argument%s " + "(%ld given)", + fname==NULL ? "function" : fname, + fname==NULL ? "" : "()", + min==max ? "exactly" + : len < min ? "at least" : "at most", + len < min ? min : max, + (len < min ? min : max) == 1 ? "" : "s", + Py_SAFE_DOWNCAST(len, Py_ssize_t, long)); + message = msgbuf; + } + PyErr_SetString(PyExc_TypeError, message); + return 0; + } + + for (i = 0; i < len; i++) { + if (*format == '|') + format++; + msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va, + flags, levels, msgbuf, + sizeof(msgbuf), &freelist); + if (msg) { + seterror(i+1, msg, levels, fname, message); + return cleanreturn(0, freelist); + } + } + + if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) && + *format != '(' && + *format != '|' && *format != ':' && *format != ';') { + PyErr_Format(PyExc_SystemError, + "bad format string: %.200s", formatsave); + return cleanreturn(0, freelist); + } + + return cleanreturn(1, freelist); } @@ -358,37 +357,37 @@ seterror(int iarg, const char *msg, int *levels, const char *fname, const char *message) { - char buf[512]; - int i; - char *p = buf; + char buf[512]; + int i; + char *p = buf; - if (PyErr_Occurred()) - return; - else if (message == NULL) { - if (fname != NULL) { - PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname); - p
+= strlen(p); - } - if (iarg != 0) { - PyOS_snprintf(p, sizeof(buf) - (p - buf), - "argument %d", iarg); - i = 0; - p += strlen(p); - while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) { - PyOS_snprintf(p, sizeof(buf) - (p - buf), - ", item %d", levels[i]-1); - p += strlen(p); - i++; - } - } - else { - PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument"); - p += strlen(p); - } - PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg); - message = buf; - } - PyErr_SetString(PyExc_TypeError, message); + if (PyErr_Occurred()) + return; + else if (message == NULL) { + if (fname != NULL) { + PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname); + p += strlen(p); + } + if (iarg != 0) { + PyOS_snprintf(p, sizeof(buf) - (p - buf), + "argument %d", iarg); + i = 0; + p += strlen(p); + while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) { + PyOS_snprintf(p, sizeof(buf) - (p - buf), + ", item %d", levels[i]-1); + p += strlen(p); + i++; + } + } + else { + PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument"); + p += strlen(p); + } + PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg); + message = buf; + } + PyErr_SetString(PyExc_TypeError, message); } @@ -404,85 +403,84 @@ *p_va is undefined, *levels is a 0-terminated list of item numbers, *msgbuf contains an error message, whose format is: - "must be <desired type>, not <actual type>", where: - <desired type> is the name of the expected type, and - <actual type> is the name of the actual type, + "must be <desired type>, not <actual type>", where: + <desired type> is the name of the expected type, and + <actual type> is the name of the actual type, and msgbuf is returned.
*/ static char * converttuple(PyObject *arg, const char **p_format, va_list *p_va, int flags, - int *levels, char *msgbuf, size_t bufsize, int toplevel, - freelist_t *freelist) + int *levels, char *msgbuf, size_t bufsize, int toplevel, + PyObject **freelist) { - int level = 0; - int n = 0; - const char *format = *p_format; - int i; - - for (;;) { - int c = *format++; - if (c == '(') { - if (level == 0) - n++; - level++; - } - else if (c == ')') { - if (level == 0) - break; - level--; - } - else if (c == ':' || c == ';' || c == '\0') - break; - else if (level == 0 && isalpha(Py_CHARMASK(c))) - n++; - } - - if (!PySequence_Check(arg) || PyString_Check(arg)) { - levels[0] = 0; - PyOS_snprintf(msgbuf, bufsize, - toplevel ? "expected %d arguments, not %.50s" : - "must be %d-item sequence, not %.50s", - n, - arg == Py_None ? "None" : arg->ob_type->tp_name); - return msgbuf; - } - - if ((i = PySequence_Size(arg)) != n) { - levels[0] = 0; - PyOS_snprintf(msgbuf, bufsize, - toplevel ? "expected %d arguments, not %d" : - "must be sequence of length %d, not %d", - n, i); - return msgbuf; - } + int level = 0; + int n = 0; + const char *format = *p_format; + int i; - format = *p_format; - for (i = 0; i < n; i++) { - char *msg; - PyObject *item; + for (;;) { + int c = *format++; + if (c == '(') { + if (level == 0) + n++; + level++; + } + else if (c == ')') { + if (level == 0) + break; + level--; + } + else if (c == ':' || c == ';' || c == '\0') + break; + else if (level == 0 && isalpha(Py_CHARMASK(c))) + n++; + } + + if (!PySequence_Check(arg) || PyString_Check(arg)) { + levels[0] = 0; + PyOS_snprintf(msgbuf, bufsize, + toplevel ? "expected %d arguments, not %.50s" : + "must be %d-item sequence, not %.50s", + n, + arg == Py_None ? "None" : arg->ob_type->tp_name); + return msgbuf; + } + + if ((i = PySequence_Size(arg)) != n) { + levels[0] = 0; + PyOS_snprintf(msgbuf, bufsize, + toplevel ? 
"expected %d arguments, not %d" : + "must be sequence of length %d, not %d", + n, i); + return msgbuf; + } + + format = *p_format; + for (i = 0; i < n; i++) { + char *msg; + PyObject *item; item = PySequence_GetItem(arg, i); - if (item == NULL) { - PyErr_Clear(); - levels[0] = i+1; - levels[1] = 0; - strncpy(msgbuf, "is not retrievable", - bufsize); - return msgbuf; - } - PyPy_Borrow(arg, item); - msg = convertitem(item, &format, p_va, flags, levels+1, - msgbuf, bufsize, freelist); + if (item == NULL) { + PyErr_Clear(); + levels[0] = i+1; + levels[1] = 0; + strncpy(msgbuf, "is not retrievable", bufsize); + return msgbuf; + } + PyPy_Borrow(arg, item); + msg = convertitem(item, &format, p_va, flags, levels+1, + msgbuf, bufsize, freelist); /* PySequence_GetItem calls tp->sq_item, which INCREFs */ Py_XDECREF(item); - if (msg != NULL) { - levels[0] = i+1; - return msg; - } - } + if (msg != NULL) { + levels[0] = i+1; + return msg; + } + } - *p_format = format; - return NULL; + *p_format = format; + return NULL; } @@ -490,45 +488,45 @@ static char * convertitem(PyObject *arg, const char **p_format, va_list *p_va, int flags, - int *levels, char *msgbuf, size_t bufsize, freelist_t *freelist) + int *levels, char *msgbuf, size_t bufsize, PyObject **freelist) { - char *msg; - const char *format = *p_format; - - if (*format == '(' /* ')' */) { - format++; - msg = converttuple(arg, &format, p_va, flags, levels, msgbuf, - bufsize, 0, freelist); - if (msg == NULL) - format++; - } - else { - msg = convertsimple(arg, &format, p_va, flags, - msgbuf, bufsize, freelist); - if (msg != NULL) - levels[0] = 0; - } - if (msg == NULL) - *p_format = format; - return msg; + char *msg; + const char *format = *p_format; + + if (*format == '(' /* ')' */) { + format++; + msg = converttuple(arg, &format, p_va, flags, levels, msgbuf, + bufsize, 0, freelist); + if (msg == NULL) + format++; + } + else { + msg = convertsimple(arg, &format, p_va, flags, + msgbuf, bufsize, freelist); + if (msg != NULL) + 
levels[0] = 0; + } + if (msg == NULL) + *p_format = format; + return msg; } #define UNICODE_DEFAULT_ENCODING(arg) \ - _PyUnicode_AsDefaultEncodedString(arg, NULL) + _PyUnicode_AsDefaultEncodedString(arg, NULL) /* Format an error message generated by convertsimple(). */ static char * converterr(const char *expected, PyObject *arg, char *msgbuf, size_t bufsize) { - assert(expected != NULL); - assert(arg != NULL); - PyOS_snprintf(msgbuf, bufsize, - "must be %.50s, not %.50s", expected, - arg == Py_None ? "None" : arg->ob_type->tp_name); - return msgbuf; + assert(expected != NULL); + assert(arg != NULL); + PyOS_snprintf(msgbuf, bufsize, + "must be %.50s, not %.50s", expected, + arg == Py_None ? "None" : arg->ob_type->tp_name); + return msgbuf; } #define CONV_UNICODE "(unicode conversion error)" @@ -536,14 +534,28 @@ /* explicitly check for float arguments when integers are expected. For now * signal a warning. Returns true if an exception was raised. */ static int +float_argument_warning(PyObject *arg) +{ + if (PyFloat_Check(arg) && + PyErr_Warn(PyExc_DeprecationWarning, + "integer argument expected, got float" )) + return 1; + else + return 0; +} + +/* explicitly check for float arguments when integers are expected. Raises + TypeError and returns true for float arguments. */ +static int float_argument_error(PyObject *arg) { - if (PyFloat_Check(arg) && - PyErr_Warn(PyExc_DeprecationWarning, - "integer argument expected, got float" )) - return 1; - else - return 0; + if (PyFloat_Check(arg)) { + PyErr_SetString(PyExc_TypeError, + "integer argument expected, got float"); + return 1; + } + else + return 0; } /* Convert a non-tuple argument. 
Return NULL if conversion went OK, @@ -557,836 +569,839 @@ static char * convertsimple(PyObject *arg, const char **p_format, va_list *p_va, int flags, - char *msgbuf, size_t bufsize, freelist_t *freelist) + char *msgbuf, size_t bufsize, PyObject **freelist) { - /* For # codes */ -#define FETCH_SIZE int *q=NULL;Py_ssize_t *q2=NULL;\ - if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \ - else q=va_arg(*p_va, int*); -#define STORE_SIZE(s) if (flags & FLAG_SIZE_T) *q2=s; else *q=s; + /* For # codes */ +#define FETCH_SIZE int *q=NULL;Py_ssize_t *q2=NULL;\ + if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \ + else q=va_arg(*p_va, int*); +#define STORE_SIZE(s) \ + if (flags & FLAG_SIZE_T) \ + *q2=s; \ + else { \ + if (INT_MAX < s) { \ + PyErr_SetString(PyExc_OverflowError, \ + "size does not fit in an int"); \ + return converterr("", arg, msgbuf, bufsize); \ + } \ + *q=s; \ + } #define BUFFER_LEN ((flags & FLAG_SIZE_T) ? *q2:*q) - const char *format = *p_format; - char c = *format++; + const char *format = *p_format; + char c = *format++; #ifdef Py_USING_UNICODE - PyObject *uarg; -#endif - - switch (c) { - - case 'b': { /* unsigned byte -- very short int */ - char *p = va_arg(*p_va, char *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned byte integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival > UCHAR_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned byte integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else - *p = (unsigned char) ival; - break; - } - - case 'B': {/* byte sized bitfield - both signed and unsigned - values allowed */ - char *p = va_arg(*p_va, char *); - long ival; - if 
(float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsUnsignedLongMask(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = (unsigned char) ival; - break; - } - - case 'h': {/* signed short int */ - short *p = va_arg(*p_va, short *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival < SHRT_MIN) { - PyErr_SetString(PyExc_OverflowError, - "signed short integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival > SHRT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "signed short integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else - *p = (short) ival; - break; - } - - case 'H': { /* short int sized bitfield, both signed and - unsigned allowed */ - unsigned short *p = va_arg(*p_va, unsigned short *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsUnsignedLongMask(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = (unsigned short) ival; - break; - } - case 'i': {/* signed int */ - int *p = va_arg(*p_va, int *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival > INT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "signed integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival < INT_MIN) { - PyErr_SetString(PyExc_OverflowError, - "signed integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - 
else - *p = ival; - break; - } - case 'I': { /* int sized bitfield, both signed and - unsigned allowed */ - unsigned int *p = va_arg(*p_va, unsigned int *); - unsigned int ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = (unsigned int)PyInt_AsUnsignedLongMask(arg); - if (ival == (unsigned int)-1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = ival; - break; - } - case 'n': /* Py_ssize_t */ -#if SIZEOF_SIZE_T != SIZEOF_LONG - { - Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *); - Py_ssize_t ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsSsize_t(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - *p = ival; - break; - } -#endif - /* Fall through from 'n' to 'l' if Py_ssize_t is int */ - case 'l': {/* long int */ - long *p = va_arg(*p_va, long *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = ival; - break; - } - - case 'k': { /* long sized bitfield */ - unsigned long *p = va_arg(*p_va, unsigned long *); - unsigned long ival; - if (PyInt_Check(arg)) - ival = PyInt_AsUnsignedLongMask(arg); - else if (PyLong_Check(arg)) - ival = PyLong_AsUnsignedLongMask(arg); - else - return converterr("integer", arg, msgbuf, bufsize); - *p = ival; - break; - } - -#ifdef HAVE_LONG_LONG - case 'L': {/* PY_LONG_LONG */ - PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * ); - PY_LONG_LONG ival = PyLong_AsLongLong( arg ); - if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) { - return converterr("long", arg, msgbuf, bufsize); - } else { - *p = ival; - } - break; - } - - case 'K': { /* long long sized bitfield */ - unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *); - unsigned PY_LONG_LONG ival; 
- if (PyInt_Check(arg)) - ival = PyInt_AsUnsignedLongMask(arg); - else if (PyLong_Check(arg)) - ival = PyLong_AsUnsignedLongLongMask(arg); - else - return converterr("integer", arg, msgbuf, bufsize); - *p = ival; - break; - } -#endif // HAVE_LONG_LONG - - case 'f': {/* float */ - float *p = va_arg(*p_va, float *); - double dval = PyFloat_AsDouble(arg); - if (PyErr_Occurred()) - return converterr("float", arg, msgbuf, bufsize); - else - *p = (float) dval; - break; - } - - case 'd': {/* double */ - double *p = va_arg(*p_va, double *); - double dval = PyFloat_AsDouble(arg); - if (PyErr_Occurred()) - return converterr("float", arg, msgbuf, bufsize); - else - *p = dval; - break; - } - -#ifndef WITHOUT_COMPLEX - case 'D': {/* complex double */ - Py_complex *p = va_arg(*p_va, Py_complex *); - Py_complex cval; - cval = PyComplex_AsCComplex(arg); - if (PyErr_Occurred()) - return converterr("complex", arg, msgbuf, bufsize); - else - *p = cval; - break; - } -#endif /* WITHOUT_COMPLEX */ - - case 'c': {/* char */ - char *p = va_arg(*p_va, char *); - if (PyString_Check(arg) && PyString_Size(arg) == 1) - *p = PyString_AS_STRING(arg)[0]; - else - return converterr("char", arg, msgbuf, bufsize); - break; - } - case 's': {/* string */ - if (*format == '*') { - Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *); - - if (PyString_Check(arg)) { - fflush(stdout); - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(arg), PyString_GET_SIZE(arg), - 1, 0); - } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { -#if 0 - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg), - 1, 0); -#else - return converterr("string or buffer", arg, msgbuf, bufsize); -#endif - } -#endif - else { /* any buffer-like object */ - char *buf; - if (getbuffer(arg, p, &buf) < 0) - return converterr(buf, arg, msgbuf, bufsize); - } - if (addcleanup(p, freelist, 
cleanup_buffer)) { - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - format++; - } else if (*format == '#') { - void **p = (void **)va_arg(*p_va, char **); - FETCH_SIZE; - - if (PyString_Check(arg)) { - *p = PyString_AS_STRING(arg); - STORE_SIZE(PyString_GET_SIZE(arg)); - } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - STORE_SIZE(PyString_GET_SIZE(uarg)); - } -#endif - else { /* any buffer-like object */ - char *buf; - Py_ssize_t count = convertbuffer(arg, p, &buf); - if (count < 0) - return converterr(buf, arg, msgbuf, bufsize); - STORE_SIZE(count); - } - format++; - } else { - char **p = va_arg(*p_va, char **); - - if (PyString_Check(arg)) - *p = PyString_AS_STRING(arg); -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - } -#endif - else - return converterr("string", arg, msgbuf, bufsize); - if ((Py_ssize_t)strlen(*p) != PyString_Size(arg)) - return converterr("string without null bytes", - arg, msgbuf, bufsize); - } - break; - } - - case 'z': {/* string, may be NULL (None) */ - if (*format == '*') { - Py_FatalError("'*' format not supported in PyArg_*\n"); -#if 0 - Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *); - - if (arg == Py_None) - PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0); - else if (PyString_Check(arg)) { - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(arg), PyString_GET_SIZE(arg), - 1, 0); - } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg), - 1, 0); - } -#endif - else { /* any 
buffer-like object */ - char *buf; - if (getbuffer(arg, p, &buf) < 0) - return converterr(buf, arg, msgbuf, bufsize); - } - if (addcleanup(p, freelist, cleanup_buffer)) { - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - format++; -#endif - } else if (*format == '#') { /* any buffer-like object */ - void **p = (void **)va_arg(*p_va, char **); - FETCH_SIZE; - - if (arg == Py_None) { - *p = 0; - STORE_SIZE(0); - } - else if (PyString_Check(arg)) { - *p = PyString_AS_STRING(arg); - STORE_SIZE(PyString_GET_SIZE(arg)); - } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - STORE_SIZE(PyString_GET_SIZE(uarg)); - } -#endif - else { /* any buffer-like object */ - char *buf; - Py_ssize_t count = convertbuffer(arg, p, &buf); - if (count < 0) - return converterr(buf, arg, msgbuf, bufsize); - STORE_SIZE(count); - } - format++; - } else { - char **p = va_arg(*p_va, char **); - - if (arg == Py_None) - *p = 0; - else if (PyString_Check(arg)) - *p = PyString_AS_STRING(arg); -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - } -#endif - else - return converterr("string or None", - arg, msgbuf, bufsize); - if (*format == '#') { - FETCH_SIZE; - assert(0); /* XXX redundant with if-case */ - if (arg == Py_None) - *q = 0; - else - *q = PyString_Size(arg); - format++; - } - else if (*p != NULL && - (Py_ssize_t)strlen(*p) != PyString_Size(arg)) - return converterr( - "string without null bytes or None", - arg, msgbuf, bufsize); - } - break; - } - case 'e': {/* encoded string */ - char **buffer; - const char *encoding; - PyObject *s; - Py_ssize_t size; - int recode_strings; - - /* Get 'e' parameter: the encoding name */ - encoding = (const char 
*)va_arg(*p_va, const char *);
-#ifdef Py_USING_UNICODE
-        if (encoding == NULL)
-            encoding = PyUnicode_GetDefaultEncoding();
+    PyObject *uarg;
 #endif
-
-        /* Get output buffer parameter:
-           's' (recode all objects via Unicode) or
-           't' (only recode non-string objects)
-        */
-        if (*format == 's')
-            recode_strings = 1;
-        else if (*format == 't')
-            recode_strings = 0;
-        else
-            return converterr(
-                "(unknown parser marker combination)",
-                arg, msgbuf, bufsize);
-        buffer = (char **)va_arg(*p_va, char **);
-        format++;
-        if (buffer == NULL)
-            return converterr("(buffer is NULL)",
-                              arg, msgbuf, bufsize);
-
-        /* Encode object */
-        if (!recode_strings && PyString_Check(arg)) {
-            s = arg;
-            Py_INCREF(s);
-        }
-        else {
+
+    switch (c) {
+
+    case 'b': { /* unsigned byte -- very short int */
+        char *p = va_arg(*p_va, char *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival < 0) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "unsigned byte integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival > UCHAR_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "unsigned byte integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = (unsigned char) ival;
+        break;
+    }
+
+    case 'B': {/* byte sized bitfield - both signed and unsigned
+                  values allowed */
+        char *p = va_arg(*p_va, char *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsUnsignedLongMask(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = (unsigned char) ival;
+        break;
+    }
+
+    case 'h': {/* signed short int */
+        short *p = va_arg(*p_va, short *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival < SHRT_MIN) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "signed short integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival > SHRT_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "signed short integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = (short) ival;
+        break;
+    }
+
+    case 'H': { /* short int sized bitfield, both signed and
+                   unsigned allowed */
+        unsigned short *p = va_arg(*p_va, unsigned short *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsUnsignedLongMask(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = (unsigned short) ival;
+        break;
+    }
+
+    case 'i': {/* signed int */
+        int *p = va_arg(*p_va, int *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival > INT_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "signed integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival < INT_MIN) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "signed integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'I': { /* int sized bitfield, both signed and
+                   unsigned allowed */
+        unsigned int *p = va_arg(*p_va, unsigned int *);
+        unsigned int ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = (unsigned int)PyInt_AsUnsignedLongMask(arg);
+        if (ival == (unsigned int)-1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'n': /* Py_ssize_t */
+#if SIZEOF_SIZE_T != SIZEOF_LONG
+    {
+        Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *);
+        Py_ssize_t ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsSsize_t(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+#endif
+    /* Fall through from 'n' to 'l' if Py_ssize_t is int */
+    case 'l': {/* long int */
+        long *p = va_arg(*p_va, long *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'k': { /* long sized bitfield */
+        unsigned long *p = va_arg(*p_va, unsigned long *);
+        unsigned long ival;
+        if (PyInt_Check(arg))
+            ival = PyInt_AsUnsignedLongMask(arg);
+        else if (PyLong_Check(arg))
+            ival = PyLong_AsUnsignedLongMask(arg);
+        else
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+
+#ifdef HAVE_LONG_LONG
+    case 'L': {/* PY_LONG_LONG */
+        PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * );
+        PY_LONG_LONG ival;
+        if (float_argument_warning(arg))
+            return converterr("long", arg, msgbuf, bufsize);
+        ival = PyLong_AsLongLong(arg);
+        if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) {
+            return converterr("long", arg, msgbuf, bufsize);
+        } else {
+            *p = ival;
+        }
+        break;
+    }
+
+    case 'K': { /* long long sized bitfield */
+        unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *);
+        unsigned PY_LONG_LONG ival;
+        if (PyInt_Check(arg))
+            ival = PyInt_AsUnsignedLongMask(arg);
+        else if (PyLong_Check(arg))
+            ival = PyLong_AsUnsignedLongLongMask(arg);
+        else
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+#endif
+
+    case 'f': {/* float */
+        float *p = va_arg(*p_va, float *);
+        double dval = PyFloat_AsDouble(arg);
+        if (PyErr_Occurred())
+            return converterr("float", arg, msgbuf, bufsize);
+        else
+            *p = (float) dval;
+        break;
+    }
+
+    case 'd': {/* double */
+        double *p = va_arg(*p_va, double *);
+        double dval = PyFloat_AsDouble(arg);
+        if (PyErr_Occurred())
+            return converterr("float", arg, msgbuf, bufsize);
+        else
+            *p = dval;
+        break;
+    }
+
+#ifndef WITHOUT_COMPLEX
+    case 'D': {/* complex double */
+        Py_complex *p = va_arg(*p_va, Py_complex *);
+        Py_complex cval;
+        cval = PyComplex_AsCComplex(arg);
+        if (PyErr_Occurred())
+            return converterr("complex", arg, msgbuf, bufsize);
+        else
+            *p = cval;
+        break;
+    }
+#endif /* WITHOUT_COMPLEX */
+
+    case 'c': {/* char */
+        char *p = va_arg(*p_va, char *);
+        if (PyString_Check(arg) && PyString_Size(arg) == 1)
+            *p = PyString_AS_STRING(arg)[0];
+        else
+            return converterr("char", arg, msgbuf, bufsize);
+        break;
+    }
+
+    case 's': {/* string */
+        if (*format == '*') {
+            Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
+
+            if (PyString_Check(arg)) {
+                PyBuffer_FillInfo(p, arg,
+                                  PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
+                                  1, 0);
+            }
 #ifdef Py_USING_UNICODE
-            PyObject *u;
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                PyBuffer_FillInfo(p, arg,
+                                  PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
+                                  1, 0);
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                if (getbuffer(arg, p, &buf) < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+            }
+            if (addcleanup(p, freelist, cleanup_buffer)) {
+                return converterr(
+                    "(cleanup problem)",
+                    arg, msgbuf, bufsize);
+            }
+            format++;
+        } else if (*format == '#') {
+            void **p = (void **)va_arg(*p_va, char **);
+            FETCH_SIZE;
-
-            /* Convert object to Unicode */
-            u = PyUnicode_FromObject(arg);
-            if (u == NULL)
-                return converterr(
-                    "string or unicode or text buffer",
-                    arg, msgbuf, bufsize);
-
-            /* Encode object; use default error handling */
-            s = PyUnicode_AsEncodedString(u,
-                                          encoding,
-                                          NULL);
-            Py_DECREF(u);
-            if (s == NULL)
-                return converterr("(encoding failed)",
-                                  arg, msgbuf, bufsize);
-            if (!PyString_Check(s)) {
-                Py_DECREF(s);
-                return converterr(
-                    "(encoder failed to return a string)",
-                    arg, msgbuf, bufsize);
-            }
+            if (PyString_Check(arg)) {
+                *p = PyString_AS_STRING(arg);
+                STORE_SIZE(PyString_GET_SIZE(arg));
+            }
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                *p = PyString_AS_STRING(uarg);
+                STORE_SIZE(PyString_GET_SIZE(uarg));
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                Py_ssize_t count = convertbuffer(arg, p, &buf);
+                if (count < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+                STORE_SIZE(count);
+            }
+            format++;
+        } else {
+            char **p = va_arg(*p_va, char **);
+
+            if (PyString_Check(arg))
+                *p = PyString_AS_STRING(arg);
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                *p = PyString_AS_STRING(uarg);
+            }
+#endif
+            else
+                return converterr("string", arg, msgbuf, bufsize);
+            if ((Py_ssize_t)strlen(*p) != PyString_Size(arg))
+                return converterr("string without null bytes",
+                                  arg, msgbuf, bufsize);
+        }
+        break;
+    }
+
+    case 'z': {/* string, may be NULL (None) */
+        if (*format == '*') {
+            Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
+
+            if (arg == Py_None)
+                PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0);
+            else if (PyString_Check(arg)) {
+                PyBuffer_FillInfo(p, arg,
+                                  PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
+                                  1, 0);
+            }
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                PyBuffer_FillInfo(p, arg,
+                                  PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
+                                  1, 0);
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                if (getbuffer(arg, p, &buf) < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+            }
+            if (addcleanup(p, freelist, cleanup_buffer)) {
+                return converterr(
+                    "(cleanup problem)",
+                    arg, msgbuf, bufsize);
+            }
+            format++;
+        } else if (*format == '#') { /* any buffer-like object */
+            void **p = (void **)va_arg(*p_va, char **);
+            FETCH_SIZE;
+
+            if (arg == Py_None) {
+                *p = 0;
+                STORE_SIZE(0);
+            }
+            else if (PyString_Check(arg)) {
+                *p = PyString_AS_STRING(arg);
+                STORE_SIZE(PyString_GET_SIZE(arg));
+            }
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                *p = PyString_AS_STRING(uarg);
+                STORE_SIZE(PyString_GET_SIZE(uarg));
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                Py_ssize_t count = convertbuffer(arg, p, &buf);
+                if (count < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+                STORE_SIZE(count);
+            }
+            format++;
+        } else {
+            char **p = va_arg(*p_va, char **);
+
+            if (arg == Py_None)
+                *p = 0;
+            else if (PyString_Check(arg))
+                *p = PyString_AS_STRING(arg);
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                *p = PyString_AS_STRING(uarg);
+            }
+#endif
+            else
+                return converterr("string or None",
+                                  arg, msgbuf, bufsize);
+            if (*format == '#') {
+                FETCH_SIZE;
+                assert(0); /* XXX redundant with if-case */
+                if (arg == Py_None) {
+                    STORE_SIZE(0);
+                } else {
+                    STORE_SIZE(PyString_Size(arg));
+                }
+                format++;
+            }
+            else if (*p != NULL &&
+                     (Py_ssize_t)strlen(*p) != PyString_Size(arg))
+                return converterr(
+                    "string without null bytes or None",
+                    arg, msgbuf, bufsize);
+        }
+        break;
+    }
+
+    case 'e': {/* encoded string */
+        char **buffer;
+        const char *encoding;
+        PyObject *s;
+        Py_ssize_t size;
+        int recode_strings;
+
+        /* Get 'e' parameter: the encoding name */
+        encoding = (const char *)va_arg(*p_va, const char *);
+#ifdef Py_USING_UNICODE
+        if (encoding == NULL)
+            encoding = PyUnicode_GetDefaultEncoding();
+#endif
+
+        /* Get output buffer parameter:
+           's' (recode all objects via Unicode) or
+           't' (only recode non-string objects)
+        */
+        if (*format == 's')
+            recode_strings = 1;
+        else if (*format == 't')
+            recode_strings = 0;
+        else
+            return converterr(
+                "(unknown parser marker combination)",
+                arg, msgbuf, bufsize);
+        buffer = (char **)va_arg(*p_va, char **);
+        format++;
+        if (buffer == NULL)
+            return converterr("(buffer is NULL)",
+                              arg, msgbuf, bufsize);
+
+        /* Encode object */
+        if (!recode_strings && PyString_Check(arg)) {
+            s = arg;
+            Py_INCREF(s);
+        }
+        else {
+#ifdef Py_USING_UNICODE
+            PyObject *u;
+
+            /* Convert object to Unicode */
+            u = PyUnicode_FromObject(arg);
+            if (u == NULL)
+                return converterr(
+                    "string or unicode or text buffer",
+                    arg, msgbuf, bufsize);
+
+            /* Encode object; use default error handling */
+            s = PyUnicode_AsEncodedString(u,
+                                          encoding,
+                                          NULL);
+            Py_DECREF(u);
+            if (s == NULL)
+                return converterr("(encoding failed)",
+                                  arg, msgbuf, bufsize);
+            if (!PyString_Check(s)) {
+                Py_DECREF(s);
+                return converterr(
+                    "(encoder failed to return a string)",
+                    arg, msgbuf, bufsize);
+            }
 #else
-            return converterr("string", arg, msgbuf, bufsize);
+            return converterr("string", arg, msgbuf, bufsize);
#endif
-        }
-        size = PyString_GET_SIZE(s);
+        }
+        size = PyString_GET_SIZE(s);
-
-        /* Write output; output is guaranteed to be 0-terminated */
-        if (*format == '#') {
-            /* Using buffer length parameter '#':
-
-               - if *buffer is NULL, a new buffer of the
-                 needed size is allocated and the data
-                 copied into it; *buffer is updated to point
-                 to the new buffer; the caller is
-                 responsible for PyMem_Free()ing it after
-                 usage
+
+        /* Write output; output is guaranteed to be 0-terminated */
+        if (*format == '#') {
+            /* Using buffer length parameter '#':
-
-               - if *buffer is not NULL, the data is
-                 copied to *buffer; *buffer_len has to be
-                 set to the size of the buffer on input;
-                 buffer overflow is signalled with an error;
-                 buffer has to provide enough room for the
-                 encoded string plus the trailing 0-byte
-
-               - in both cases, *buffer_len is updated to
-                 the size of the buffer /excluding/ the
-                 trailing 0-byte
-
-            */
-            FETCH_SIZE;
+               - if *buffer is NULL, a new buffer of the
+                 needed size is allocated and the data
+                 copied into it; *buffer is updated to point
+                 to the new buffer; the caller is
+                 responsible for PyMem_Free()ing it after
+                 usage
-
-            format++;
-            if (q == NULL && q2 == NULL) {
-                Py_DECREF(s);
-                return converterr(
-                    "(buffer_len is NULL)",
-                    arg, msgbuf, bufsize);
-            }
-            if (*buffer == NULL) {
-                *buffer = PyMem_NEW(char, size + 1);
-                if (*buffer == NULL) {
-                    Py_DECREF(s);
-                    return converterr(
-                        "(memory error)",
-                        arg, msgbuf, bufsize);
-                }
-                if (addcleanup(*buffer, freelist, cleanup_ptr)) {
-                    Py_DECREF(s);
-                    return converterr(
-                        "(cleanup problem)",
-                        arg, msgbuf, bufsize);
-                }
-            } else {
-                if (size + 1 > BUFFER_LEN) {
-                    Py_DECREF(s);
-                    return converterr(
-                        "(buffer overflow)",
-                        arg, msgbuf, bufsize);
-                }
-            }
-            memcpy(*buffer,
-                   PyString_AS_STRING(s),
-                   size + 1);
-            STORE_SIZE(size);
-        } else {
-            /* Using a 0-terminated buffer:
-
-               - the encoded string has to be 0-terminated
-                 for this variant to work; if it is not, an
-                 error raised
+
+               - if *buffer is not NULL, the data is
+                 copied to *buffer; *buffer_len has to be
+                 set to the size of the buffer on input;
+                 buffer overflow is signalled with an error;
+                 buffer has to provide enough room for the
+                 encoded string plus the trailing 0-byte
-
-               - a new buffer of the needed size is
-                 allocated and the data copied into it;
-                 *buffer is updated to point to the new
-                 buffer; the caller is responsible for
-                 PyMem_Free()ing it after usage
+
+               - in both cases, *buffer_len is updated to
+                 the size of the buffer /excluding/ the
+                 trailing 0-byte
-
-            */
-            if ((Py_ssize_t)strlen(PyString_AS_STRING(s))
-                != size) {
-                Py_DECREF(s);
-                return converterr(
-                    "encoded string without NULL bytes",
-                    arg, msgbuf, bufsize);
-            }
-            *buffer = PyMem_NEW(char, size + 1);
-            if (*buffer == NULL) {
-                Py_DECREF(s);
-                return converterr("(memory error)",
-                                  arg, msgbuf, bufsize);
-            }
-            if (addcleanup(*buffer, freelist, cleanup_ptr)) {
-                Py_DECREF(s);
-                return converterr("(cleanup problem)",
-                                  arg, msgbuf, bufsize);
-            }
-            memcpy(*buffer,
-                   PyString_AS_STRING(s),
-                   size + 1);
-        }
-        Py_DECREF(s);
-        break;
-    }
+            */
+            FETCH_SIZE;
+
+            format++;
+            if (q == NULL && q2 == NULL) {
+                Py_DECREF(s);
+                return converterr(
+                    "(buffer_len is NULL)",
+                    arg, msgbuf, bufsize);
+            }
+            if (*buffer == NULL) {
+                *buffer = PyMem_NEW(char, size + 1);
+                if (*buffer == NULL) {
+                    Py_DECREF(s);
+                    return converterr(
+                        "(memory error)",
+                        arg, msgbuf, bufsize);
+                }
+                if (addcleanup(*buffer, freelist, cleanup_ptr)) {
+                    Py_DECREF(s);
+                    return converterr(
+                        "(cleanup problem)",
+                        arg, msgbuf, bufsize);
+                }
+            } else {
+                if (size + 1 > BUFFER_LEN) {
+                    Py_DECREF(s);
+                    return converterr(
+                        "(buffer overflow)",
+                        arg, msgbuf, bufsize);
+                }
+            }
+            memcpy(*buffer,
+                   PyString_AS_STRING(s),
+                   size + 1);
+            STORE_SIZE(size);
+        } else {
+            /* Using a 0-terminated buffer:
+
+               - the encoded string has to be 0-terminated
+                 for this variant to work; if it is not, an
+                 error raised
+
+               - a new buffer of the needed size is
+                 allocated and the data copied into it;
+                 *buffer is updated to point to the new
+                 buffer; the caller is responsible for
+                 PyMem_Free()ing it after usage
+
+            */
+            if ((Py_ssize_t)strlen(PyString_AS_STRING(s))
+                != size) {
+                Py_DECREF(s);
+                return converterr(
+                    "encoded string without NULL bytes",
+                    arg, msgbuf, bufsize);
+            }
+            *buffer = PyMem_NEW(char, size + 1);
+            if (*buffer == NULL) {
+                Py_DECREF(s);
+                return converterr("(memory error)",
+                                  arg, msgbuf, bufsize);
+            }
+            if (addcleanup(*buffer, freelist, cleanup_ptr)) {
+                Py_DECREF(s);
+                return converterr("(cleanup problem)",
+                                  arg, msgbuf, bufsize);
+            }
+            memcpy(*buffer,
+                   PyString_AS_STRING(s),
+                   size + 1);
+        }
+        Py_DECREF(s);
+        break;
+    }

 #ifdef Py_USING_UNICODE
-    case 'u': {/* raw unicode buffer (Py_UNICODE *) */
-        if (*format == '#') { /* any buffer-like object */
-            void **p = (void **)va_arg(*p_va, char **);
-            FETCH_SIZE;
-            if (PyUnicode_Check(arg)) {
-                *p = PyUnicode_AS_UNICODE(arg);
-                STORE_SIZE(PyUnicode_GET_SIZE(arg));
-            }
-            else {
-                return converterr("cannot convert raw buffers",
-                                  arg, msgbuf, bufsize);
-            }
-            format++;
-        } else {
-            Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **);
-            if (PyUnicode_Check(arg))
-                *p = PyUnicode_AS_UNICODE(arg);
-            else
-                return converterr("unicode", arg, msgbuf, bufsize);
-        }
-        break;
-    }
+    case 'u': {/* raw unicode buffer (Py_UNICODE *) */
+        if (*format == '#') { /* any buffer-like object */
+            void **p = (void **)va_arg(*p_va, char **);
+            FETCH_SIZE;
+            if (PyUnicode_Check(arg)) {
+                *p = PyUnicode_AS_UNICODE(arg);
+                STORE_SIZE(PyUnicode_GET_SIZE(arg));
+            }
+            else {
+                return converterr("cannot convert raw buffers",
+                                  arg, msgbuf, bufsize);
+            }
+            format++;
+        } else {
+            Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **);
+            if (PyUnicode_Check(arg))
+                *p = PyUnicode_AS_UNICODE(arg);
+            else
+                return converterr("unicode", arg, msgbuf, bufsize);
+        }
+        break;
+    }
 #endif

-    case 'S': { /* string object */
-        PyObject **p = va_arg(*p_va, PyObject **);
-        if (PyString_Check(arg))
-            *p = arg;
-        else
-            return converterr("string", arg, msgbuf, bufsize);
-        break;
-    }
-
+    case 'S': { /* string object */
+        PyObject **p = va_arg(*p_va, PyObject **);
+        if (PyString_Check(arg))
+            *p = arg;
+        else
+            return converterr("string", arg, msgbuf, bufsize);
+        break;
+    }
+
 #ifdef Py_USING_UNICODE
-    case 'U': { /* Unicode object */
-        PyObject **p = va_arg(*p_va, PyObject **);
-        if (PyUnicode_Check(arg))
-            *p = arg;
-        else
-            return
converterr("unicode", arg, msgbuf, bufsize);
-        break;
-    }
 #endif
-
-    case 'O': { /* object */
-        PyTypeObject *type;
-        PyObject **p;
-        if (*format == '!') {
-            type = va_arg(*p_va, PyTypeObject*);
-            p = va_arg(*p_va, PyObject **);
-            format++;
-            if (PyType_IsSubtype(arg->ob_type, type))
-                *p = arg;
-            else
-                return converterr(type->tp_name, arg, msgbuf, bufsize);
-        }
-        else if (*format == '?') {
-            inquiry pred = va_arg(*p_va, inquiry);
-            p = va_arg(*p_va, PyObject **);
-            format++;
-            if ((*pred)(arg))
-                *p = arg;
-            else
-                return converterr("(unspecified)",
-                                  arg, msgbuf, bufsize);
-
-        }
-        else if (*format == '&') {
-            typedef int (*converter)(PyObject *, void *);
-            converter convert = va_arg(*p_va, converter);
-            void *addr = va_arg(*p_va, void *);
-            format++;
-            if (! (*convert)(arg, addr))
-                return converterr("(unspecified)",
-                                  arg, msgbuf, bufsize);
-        }
-        else {
-            p = va_arg(*p_va, PyObject **);
-            *p = arg;
-        }
-        break;
-    }
-
-    case 'w': { /* memory buffer, read-write access */
-        Py_FatalError("'w' unsupported\n");
-#if 0
-        void **p = va_arg(*p_va, void **);
-        void *res;
-        PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
-        Py_ssize_t count;
+    case 'O': { /* object */
+        PyTypeObject *type;
+        PyObject **p;
+        if (*format == '!') {
+            type = va_arg(*p_va, PyTypeObject*);
+            p = va_arg(*p_va, PyObject **);
+            format++;
+            if (PyType_IsSubtype(arg->ob_type, type))
+                *p = arg;
+            else
+                return converterr(type->tp_name, arg, msgbuf, bufsize);
-
-        if (pb && pb->bf_releasebuffer && *format != '*')
-            /* Buffer must be released, yet caller does not use
-               the Py_buffer protocol. */
-            return converterr("pinned buffer", arg, msgbuf, bufsize);
+        }
+        else if (*format == '?') {
+            inquiry pred = va_arg(*p_va, inquiry);
+            p = va_arg(*p_va, PyObject **);
+            format++;
+            if ((*pred)(arg))
+                *p = arg;
+            else
+                return converterr("(unspecified)",
+                                  arg, msgbuf, bufsize);
-
-        if (pb && pb->bf_getbuffer && *format == '*') {
-            /* Caller is interested in Py_buffer, and the object
-               supports it directly. */
-            format++;
-            if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) {
-                PyErr_Clear();
-                return converterr("read-write buffer", arg, msgbuf, bufsize);
-            }
-            if (addcleanup(p, freelist, cleanup_buffer)) {
-                return converterr(
-                    "(cleanup problem)",
-                    arg, msgbuf, bufsize);
-            }
-            if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C'))
-                return converterr("contiguous buffer", arg, msgbuf, bufsize);
-            break;
-        }
+        }
+        else if (*format == '&') {
+            typedef int (*converter)(PyObject *, void *);
+            converter convert = va_arg(*p_va, converter);
+            void *addr = va_arg(*p_va, void *);
+            format++;
+            if (! (*convert)(arg, addr))
+                return converterr("(unspecified)",
+                                  arg, msgbuf, bufsize);
+        }
+        else {
+            p = va_arg(*p_va, PyObject **);
+            *p = arg;
+        }
+        break;
+    }
-
-        if (pb == NULL ||
-            pb->bf_getwritebuffer == NULL ||
-            pb->bf_getsegcount == NULL)
-            return converterr("read-write buffer", arg, msgbuf, bufsize);
-        if ((*pb->bf_getsegcount)(arg, NULL) != 1)
-            return converterr("single-segment read-write buffer",
-                              arg, msgbuf, bufsize);
-        if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0)
-            return converterr("(unspecified)", arg, msgbuf, bufsize);
-        if (*format == '*') {
-            PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0);
-            format++;
-        }
-        else {
-            *p = res;
-            if (*format == '#') {
-                FETCH_SIZE;
-                STORE_SIZE(count);
-                format++;
-            }
-        }
-        break;
-#endif
-    }
-
-    case 't': { /* 8-bit character buffer, read-only access */
-        char **p = va_arg(*p_va, char **);
-        PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
-        Py_ssize_t count;
-#if 0
-        if (*format++ != '#')
-            return converterr(
-                "invalid use of 't' format character",
-                arg, msgbuf, bufsize);
-#endif
-        if (!PyType_HasFeature(arg->ob_type,
-                               Py_TPFLAGS_HAVE_GETCHARBUFFER)
-#if 0
-            || pb == NULL || pb->bf_getcharbuffer == NULL ||
-            pb->bf_getsegcount == NULL
-#endif
-            )
-            return converterr(
-                "string or read-only character buffer",
-                arg, msgbuf, bufsize);
-#if 0
-        if (pb->bf_getsegcount(arg, NULL) != 1)
-            return converterr(
-                "string or single-segment read-only buffer",
-                arg, msgbuf, bufsize);
+    case 'w': { /* memory buffer, read-write access */
+        void **p = va_arg(*p_va, void **);
+        void *res;
+        PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
+        Py_ssize_t count;
-
-        if (pb->bf_releasebuffer)
-            return converterr(
-                "string or pinned buffer",
-                arg, msgbuf, bufsize);
-#endif
-        count = pb->bf_getcharbuffer(arg, 0, p);
-#if 0
-        if (count < 0)
-            return converterr("(unspecified)", arg, msgbuf, bufsize);
-#endif
-        {
-            FETCH_SIZE;
-            STORE_SIZE(count);
-            ++format;
-        }
-        break;
-    }
-    default:
-        return converterr("impossible", arg, msgbuf, bufsize);
-
-    }
-
-    *p_format = format;
-    return NULL;
+        if (pb && pb->bf_releasebuffer && *format != '*')
+            /* Buffer must be released, yet caller does not use
+               the Py_buffer protocol. */
+            return converterr("pinned buffer", arg, msgbuf, bufsize);
+
+        if (pb && pb->bf_getbuffer && *format == '*') {
+            /* Caller is interested in Py_buffer, and the object
+               supports it directly. */
+            format++;
+            if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) {
+                PyErr_Clear();
+                return converterr("read-write buffer", arg, msgbuf, bufsize);
+            }
+            if (addcleanup(p, freelist, cleanup_buffer)) {
+                return converterr(
+                    "(cleanup problem)",
+                    arg, msgbuf, bufsize);
+            }
+            if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C'))
+                return converterr("contiguous buffer", arg, msgbuf, bufsize);
+            break;
+        }
+
+        if (pb == NULL ||
+            pb->bf_getwritebuffer == NULL ||
+            pb->bf_getsegcount == NULL)
+            return converterr("read-write buffer", arg, msgbuf, bufsize);
+        if ((*pb->bf_getsegcount)(arg, NULL) != 1)
+            return converterr("single-segment read-write buffer",
+                              arg, msgbuf, bufsize);
+        if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0)
+            return converterr("(unspecified)", arg, msgbuf, bufsize);
+        if (*format == '*') {
+            PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0);
+            format++;
+        }
+        else {
+            *p = res;
+            if (*format == '#') {
+                FETCH_SIZE;
+                STORE_SIZE(count);
+                format++;
+            }
+        }
+        break;
+    }
+
+    case 't': { /* 8-bit character buffer, read-only access */
+        char **p = va_arg(*p_va, char **);
+        PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
+        Py_ssize_t count;
+
+        if (*format++ != '#')
+            return converterr(
+                "invalid use of 't' format character",
+                arg, msgbuf, bufsize);
+        if (!PyType_HasFeature(arg->ob_type,
+                               Py_TPFLAGS_HAVE_GETCHARBUFFER) ||
+            pb == NULL || pb->bf_getcharbuffer == NULL ||
+            pb->bf_getsegcount == NULL)
+            return converterr(
+                "string or read-only character buffer",
+                arg, msgbuf, bufsize);
+
+        if (pb->bf_getsegcount(arg, NULL) != 1)
+            return converterr(
+                "string or single-segment read-only buffer",
+                arg, msgbuf, bufsize);
+
+        if (pb->bf_releasebuffer)
+            return converterr(
+                "string or pinned buffer",
+                arg, msgbuf, bufsize);
+
+        count = pb->bf_getcharbuffer(arg, 0, p);
+        if (count < 0)
+            return converterr("(unspecified)", arg, msgbuf, bufsize);
+        {
+            FETCH_SIZE;
+            STORE_SIZE(count);
+        }
+        break;
+    }
+
+    default:
+        return converterr("impossible", arg, msgbuf, bufsize);
+
+    }
+
+    *p_format = format;
+    return NULL;
 }

 static Py_ssize_t
 convertbuffer(PyObject *arg, void **p, char **errmsg)
 {
-    PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
-    Py_ssize_t count;
-    if (pb == NULL ||
-        pb->bf_getreadbuffer == NULL ||
-        pb->bf_getsegcount == NULL ||
-        pb->bf_releasebuffer != NULL) {
-        *errmsg = "string or read-only buffer";
-        return -1;
-    }
-    if ((*pb->bf_getsegcount)(arg, NULL) != 1) {
-        *errmsg = "string or single-segment read-only buffer";
-        return -1;
-    }
-    if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) {
-        *errmsg = "(unspecified)";
-    }
-    return count;
+    PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
+    Py_ssize_t count;
+    if (pb == NULL ||
+        pb->bf_getreadbuffer == NULL ||
+        pb->bf_getsegcount == NULL ||
+        pb->bf_releasebuffer != NULL) {
+        *errmsg = "string or read-only buffer";
+        return -1;
+    }
+    if ((*pb->bf_getsegcount)(arg, NULL) != 1) {
+        *errmsg = "string or single-segment read-only buffer";
+        return -1;
+    }
+    if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) {
+        *errmsg = "(unspecified)";
+    }
+    return count;
 }

 static int
 getbuffer(PyObject *arg, Py_buffer *view, char **errmsg)
 {
-    void *buf;
-    Py_ssize_t count;
-    PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
-    if (pb == NULL) {
-        *errmsg = "string or buffer";
-        return -1;
-    }
-    if (pb->bf_getbuffer) {
-        if (pb->bf_getbuffer(arg, view, 0) < 0) {
-            *errmsg = "convertible to a buffer";
-            return -1;
-        }
-        if (!PyBuffer_IsContiguous(view, 'C')) {
-            *errmsg = "contiguous buffer";
-            return -1;
-        }
-        return 0;
-    }
+    void *buf;
+    Py_ssize_t count;
+    PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
+    if (pb == NULL) {
+        *errmsg = "string or buffer";
+        return -1;
+    }
+    if (pb->bf_getbuffer) {
+        if (pb->bf_getbuffer(arg, view, 0) < 0) {
+            *errmsg = "convertible to a buffer";
+            return -1;
+        }
+        if (!PyBuffer_IsContiguous(view, 'C')) {
+            *errmsg = "contiguous buffer";
+            return -1;
+        }
+        return 0;
+    }
-
-    count = convertbuffer(arg, &buf, errmsg);
-    if (count < 0) {
-        *errmsg = "convertible to a buffer";
-        return count;
-    }
-    PyBuffer_FillInfo(view, NULL, buf, count, 1, 0);
-    return 0;
+
+    count = convertbuffer(arg, &buf, errmsg);
+    if (count < 0) {
+        *errmsg = "convertible to a buffer";
+        return count;
+    }
+    PyBuffer_FillInfo(view, arg, buf, count, 1, 0);
+    return 0;
 }

 /* Support for keyword arguments donated by
@@ -1395,501 +1410,487 @@
 /* Return false (0) for error, else true. */
 int
 PyArg_ParseTupleAndKeywords(PyObject *args,
-                            PyObject *keywords,
-                            const char *format,
-                            char **kwlist, ...)
+                            PyObject *keywords,
+                            const char *format,
+                            char **kwlist, ...)
 {
-    int retval;
-    va_list va;
+    int retval;
+    va_list va;
-
-    if ((args == NULL || !PyTuple_Check(args)) ||
-        (keywords != NULL && !PyDict_Check(keywords)) ||
-        format == NULL ||
-        kwlist == NULL)
-    {
-        PyErr_BadInternalCall();
-        return 0;
-    }
+
+    if ((args == NULL || !PyTuple_Check(args)) ||
+        (keywords != NULL && !PyDict_Check(keywords)) ||
+        format == NULL ||
+        kwlist == NULL)
+    {
+        PyErr_BadInternalCall();
+        return 0;
+    }
-
-    va_start(va, kwlist);
-    retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0);
-    va_end(va);
-    return retval;
+
+    va_start(va, kwlist);
+    retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0);
+    va_end(va);
+    return retval;
 }

 int
 _PyArg_ParseTupleAndKeywords_SizeT(PyObject *args,
-                                   PyObject *keywords,
-                                   const char *format,
-                                   char **kwlist, ...)
+                                   PyObject *keywords,
+                                   const char *format,
+                                   char **kwlist, ...)
 {
-    int retval;
-    va_list va;
+    int retval;
+    va_list va;
-
-    if ((args == NULL || !PyTuple_Check(args)) ||
-        (keywords != NULL && !PyDict_Check(keywords)) ||
-        format == NULL ||
-        kwlist == NULL)
-    {
-        PyErr_BadInternalCall();
-        return 0;
-    }
+
+    if ((args == NULL || !PyTuple_Check(args)) ||
+        (keywords != NULL && !PyDict_Check(keywords)) ||
+        format == NULL ||
+        kwlist == NULL)
+    {
+        PyErr_BadInternalCall();
+        return 0;
+    }
-
-    va_start(va, kwlist);
-    retval = vgetargskeywords(args, keywords, format,
-                              kwlist, &va, FLAG_SIZE_T);
-    va_end(va);
-    return retval;
+
+    va_start(va, kwlist);
+    retval = vgetargskeywords(args, keywords, format,
+                              kwlist, &va, FLAG_SIZE_T);
+    va_end(va);
+    return retval;
 }

 int
 PyArg_VaParseTupleAndKeywords(PyObject *args, PyObject *keywords,
-                              const char *format,
+                              const char *format,
                               char **kwlist, va_list va)
 {
-    int retval;
-    va_list lva;
+    int retval;
+    va_list lva;
-
-    if ((args == NULL || !PyTuple_Check(args)) ||
-        (keywords != NULL && !PyDict_Check(keywords)) ||
-        format == NULL ||
-        kwlist == NULL)
-    {
-        PyErr_BadInternalCall();
-        return 0;
-    }
+
+    if ((args == NULL || !PyTuple_Check(args)) ||
+        (keywords != NULL && !PyDict_Check(keywords)) ||
+        format == NULL ||
+        kwlist == NULL)
+    {
+        PyErr_BadInternalCall();
+        return 0;
+    }

 #ifdef VA_LIST_IS_ARRAY
-    memcpy(lva, va, sizeof(va_list));
+    memcpy(lva, va, sizeof(va_list));
 #else
 #ifdef __va_copy
-    __va_copy(lva, va);
+    __va_copy(lva, va);
 #else
-    lva = va;
+    lva = va;
 #endif
 #endif

-    retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0);
-    return retval;
+    retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0);
+    return retval;
 }

 int
 _PyArg_VaParseTupleAndKeywords_SizeT(PyObject *args,
-                                     PyObject *keywords,
-                                     const char *format,
-                                     char **kwlist, va_list va)
+                                     PyObject *keywords,
+                                     const char *format,
+                                     char **kwlist, va_list va)
 {
-    int retval;
-    va_list lva;
+    int retval;
+    va_list lva;
-
-    if ((args == NULL || !PyTuple_Check(args)) ||
-        (keywords != NULL && !PyDict_Check(keywords)) ||
-        format == NULL ||
-        kwlist == NULL)
-    {
-        PyErr_BadInternalCall();
-        return 0;
-    }
+
+    if ((args == NULL || !PyTuple_Check(args)) ||
+        (keywords != NULL && !PyDict_Check(keywords)) ||
+        format == NULL ||
+        kwlist == NULL)
+    {
+        PyErr_BadInternalCall();
+        return 0;
+    }

 #ifdef VA_LIST_IS_ARRAY
-    memcpy(lva, va, sizeof(va_list));
+    memcpy(lva, va, sizeof(va_list));
 #else
 #ifdef __va_copy
-    __va_copy(lva, va);
+    __va_copy(lva, va);
 #else
-    lva = va;
+    lva = va;
 #endif
 #endif

-    retval = vgetargskeywords(args, keywords, format,
-                              kwlist, &lva, FLAG_SIZE_T);
-    return retval;
+    retval = vgetargskeywords(args, keywords, format,
+                              kwlist, &lva, FLAG_SIZE_T);
+    return retval;
 }

 #define IS_END_OF_FORMAT(c) (c == '\0' || c == ';' || c == ':')

 static int
 vgetargskeywords(PyObject *args, PyObject *keywords, const char *format,
-                 char **kwlist, va_list *p_va, int flags)
+                 char **kwlist, va_list *p_va, int flags)
 {
-    char msgbuf[512];
-    int levels[32];
-    const char *fname, *msg, *custom_msg, *keyword;
-    int min = INT_MAX;
-    int i, len, nargs, nkeywords;
-    PyObject *current_arg;
-    freelist_t freelist = {0, NULL};
+    char msgbuf[512];
+    int levels[32];
+    const char *fname, *msg, *custom_msg, *keyword;
+    int min = INT_MAX;
+    int i, len, nargs, nkeywords;
+    PyObject *freelist = NULL, *current_arg;
+    assert(args != NULL && PyTuple_Check(args));
+    assert(keywords == NULL || PyDict_Check(keywords));
+    assert(format != NULL);
+    assert(kwlist != NULL);
+    assert(p_va != NULL);
-
-    assert(args != NULL && PyTuple_Check(args));
-    assert(keywords == NULL || PyDict_Check(keywords));
-    assert(format != NULL);
-    assert(kwlist != NULL);
-    assert(p_va != NULL);
+
+    /* grab the function name or custom error msg first (mutually exclusive) */
+    fname = strchr(format, ':');
+    if (fname) {
+        fname++;
+        custom_msg = NULL;
+    }
+    else {
+        custom_msg = strchr(format,';');
+        if (custom_msg)
+            custom_msg++;
+    }
-
-    /* grab the function name or custom error msg first (mutually exclusive) */
-    fname = strchr(format, ':');
-    if (fname) {
-        fname++;
-        custom_msg = NULL;
-    }
-    else {
-        custom_msg = strchr(format,';');
-        if (custom_msg)
-            custom_msg++;
-    }
+
+    /* scan kwlist and get greatest possible nbr of args */
+    for (len=0; kwlist[len]; len++)
+        continue;
-
-    /* scan kwlist and get greatest possible nbr of args */
-    for (len=0; kwlist[len]; len++)
-        continue;
+
+    nargs = PyTuple_GET_SIZE(args);
+    nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords);
+    if (nargs + nkeywords > len) {
+        PyErr_Format(PyExc_TypeError, "%s%s takes at most %d "
+                     "argument%s (%d given)",
+                     (fname == NULL) ? "function" : fname,
+                     (fname == NULL) ? "" : "()",
+                     len,
+                     (len == 1) ? "" : "s",
+                     nargs + nkeywords);
+        return 0;
+    }
-
-    freelist.entries = PyMem_New(freelistentry_t, len);
+
+    /* convert tuple args and keyword args in same loop, using kwlist to drive process */
+    for (i = 0; i < len; i++) {
+        keyword = kwlist[i];
+        if (*format == '|') {
+            min = i;
+            format++;
+        }
+        if (IS_END_OF_FORMAT(*format)) {
+            PyErr_Format(PyExc_RuntimeError,
+                         "More keyword list entries (%d) than "
+                         "format specifiers (%d)", len, i);
+            return cleanreturn(0, freelist);
+        }
+        current_arg = NULL;
+        if (nkeywords) {
+            current_arg = PyDict_GetItemString(keywords, keyword);
+        }
+        if (current_arg) {
+            --nkeywords;
+            if (i < nargs) {
+                /* arg present in tuple and in dict */
+                PyErr_Format(PyExc_TypeError,
+                             "Argument given by name ('%s') "
+                             "and position (%d)",
+                             keyword, i+1);
+                return cleanreturn(0, freelist);
+            }
+        }
+        else if (nkeywords && PyErr_Occurred())
+            return cleanreturn(0, freelist);
+        else if (i < nargs)
+            current_arg = PyTuple_GET_ITEM(args, i);
-
-    nargs = PyTuple_GET_SIZE(args);
-    nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords);
-    if (nargs + nkeywords > len) {
-        PyErr_Format(PyExc_TypeError, "%s%s takes at most %d "
-                     "argument%s (%d given)",
-                     (fname == NULL) ? "function" : fname,
-                     (fname == NULL) ? "" : "()",
-                     len,
-                     (len == 1) ? "" : "s",
-                     nargs + nkeywords);
-        return cleanreturn(0, &freelist);
-    }
+
+        if (current_arg) {
+            msg = convertitem(current_arg, &format, p_va, flags,
+                              levels, msgbuf, sizeof(msgbuf), &freelist);
+            if (msg) {
+                seterror(i+1, msg, levels, fname, custom_msg);
+                return cleanreturn(0, freelist);
+            }
+            continue;
+        }
-
-    /* convert tuple args and keyword args in same loop, using kwlist to drive process */
-    for (i = 0; i < len; i++) {
-        keyword = kwlist[i];
-        if (*format == '|') {
-            min = i;
-            format++;
-        }
-        if (IS_END_OF_FORMAT(*format)) {
-            PyErr_Format(PyExc_RuntimeError,
-                         "More keyword list entries (%d) than "
-                         "format specifiers (%d)", len, i);
-            return cleanreturn(0, &freelist);
-        }
-        current_arg = NULL;
-        if (nkeywords) {
-            current_arg = PyDict_GetItemString(keywords, keyword);
-        }
-        if (current_arg) {
-            --nkeywords;
-            if (i < nargs) {
-                /* arg present in tuple and in dict */
-                PyErr_Format(PyExc_TypeError,
-                             "Argument given by name ('%s') "
-                             "and position (%d)",
-                             keyword, i+1);
-                return cleanreturn(0, &freelist);
-            }
-        }
-        else if (nkeywords && PyErr_Occurred())
-            return cleanreturn(0, &freelist);
-        else if (i < nargs)
-            current_arg = PyTuple_GET_ITEM(args, i);
-
-        if (current_arg) {
-            msg = convertitem(current_arg, &format, p_va, flags,
-                              levels, msgbuf, sizeof(msgbuf), &freelist);
-            if (msg) {
-                seterror(i+1, msg, levels, fname, custom_msg);
-                return cleanreturn(0, &freelist);
-            }
-            continue;
-        }
+
+        if (i < min) {
+            PyErr_Format(PyExc_TypeError, "Required argument "
+                         "'%s' (pos %d) not found",
+                         keyword, i+1);
+            return cleanreturn(0, freelist);
+        }
+        /* current code reports success when all required args
+         * fulfilled and no keyword args left, with no further
+         * validation. XXX Maybe skip this in debug build ?
+ */ + if (!nkeywords) + return cleanreturn(1, freelist); - if (i < min) { - PyErr_Format(PyExc_TypeError, "Required argument " - "'%s' (pos %d) not found", - keyword, i+1); - return cleanreturn(0, &freelist); - } - /* current code reports success when all required args - * fulfilled and no keyword args left, with no further - * validation. XXX Maybe skip this in debug build ? - */ - if (!nkeywords) - return cleanreturn(1, &freelist); + /* We are into optional args, skip thru to any remaining + * keyword args */ + msg = skipitem(&format, p_va, flags); + if (msg) { + PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg, + format); + return cleanreturn(0, freelist); + } + } - /* We are into optional args, skip thru to any remaining - * keyword args */ - msg = skipitem(&format, p_va, flags); - if (msg) { - PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg, - format); - return cleanreturn(0, &freelist); - } - } + if (!IS_END_OF_FORMAT(*format) && *format != '|') { + PyErr_Format(PyExc_RuntimeError, + "more argument specifiers than keyword list entries " + "(remaining format:'%s')", format); + return cleanreturn(0, freelist); + } - if (!IS_END_OF_FORMAT(*format) && *format != '|') { - PyErr_Format(PyExc_RuntimeError, - "more argument specifiers than keyword list entries " - "(remaining format:'%s')", format); - return cleanreturn(0, &freelist); - } + /* make sure there are no extraneous keyword arguments */ + if (nkeywords > 0) { + PyObject *key, *value; + Py_ssize_t pos = 0; + while (PyDict_Next(keywords, &pos, &key, &value)) { + int match = 0; + char *ks; + if (!PyString_Check(key)) { + PyErr_SetString(PyExc_TypeError, + "keywords must be strings"); + return cleanreturn(0, freelist); + } + ks = PyString_AsString(key); + for (i = 0; i < len; i++) { + if (!strcmp(ks, kwlist[i])) { + match = 1; + break; + } + } + if (!match) { + PyErr_Format(PyExc_TypeError, + "'%s' is an invalid keyword " + "argument for this function", + ks); + return cleanreturn(0, freelist); + } + } + } - 
/* make sure there are no extraneous keyword arguments */ - if (nkeywords > 0) { - PyObject *key, *value; - Py_ssize_t pos = 0; - while (PyDict_Next(keywords, &pos, &key, &value)) { - int match = 0; - char *ks; - if (!PyString_Check(key)) { - PyErr_SetString(PyExc_TypeError, - "keywords must be strings"); - return cleanreturn(0, &freelist); - } - ks = PyString_AsString(key); - for (i = 0; i < len; i++) { - if (!strcmp(ks, kwlist[i])) { - match = 1; - break; - } - } - if (!match) { - PyErr_Format(PyExc_TypeError, - "'%s' is an invalid keyword " - "argument for this function", - ks); - return cleanreturn(0, &freelist); - } - } - } - - return cleanreturn(1, &freelist); + return cleanreturn(1, freelist); } static char * skipitem(const char **p_format, va_list *p_va, int flags) { - const char *format = *p_format; - char c = *format++; - - switch (c) { + const char *format = *p_format; + char c = *format++; - /* simple codes - * The individual types (second arg of va_arg) are irrelevant */ + switch (c) { - case 'b': /* byte -- very short int */ - case 'B': /* byte as bitfield */ - case 'h': /* short int */ - case 'H': /* short int as bitfield */ - case 'i': /* int */ - case 'I': /* int sized bitfield */ - case 'l': /* long int */ - case 'k': /* long int sized bitfield */ + /* simple codes + * The individual types (second arg of va_arg) are irrelevant */ + + case 'b': /* byte -- very short int */ + case 'B': /* byte as bitfield */ + case 'h': /* short int */ + case 'H': /* short int as bitfield */ + case 'i': /* int */ + case 'I': /* int sized bitfield */ + case 'l': /* long int */ + case 'k': /* long int sized bitfield */ #ifdef HAVE_LONG_LONG - case 'L': /* PY_LONG_LONG */ - case 'K': /* PY_LONG_LONG sized bitfield */ + case 'L': /* PY_LONG_LONG */ + case 'K': /* PY_LONG_LONG sized bitfield */ #endif - case 'f': /* float */ - case 'd': /* double */ + case 'f': /* float */ + case 'd': /* double */ #ifndef WITHOUT_COMPLEX - case 'D': /* complex double */ + case 'D': /* 
complex double */ #endif - case 'c': /* char */ - { - (void) va_arg(*p_va, void *); - break; - } + case 'c': /* char */ + { + (void) va_arg(*p_va, void *); + break; + } - case 'n': /* Py_ssize_t */ - { - (void) va_arg(*p_va, Py_ssize_t *); - break; - } - - /* string codes */ - - case 'e': /* string with encoding */ - { - (void) va_arg(*p_va, const char *); - if (!(*format == 's' || *format == 't')) - /* after 'e', only 's' and 't' is allowed */ - goto err; - format++; - /* explicit fallthrough to string cases */ - } - - case 's': /* string */ - case 'z': /* string or None */ + case 'n': /* Py_ssize_t */ + { + (void) va_arg(*p_va, Py_ssize_t *); + break; + } + + /* string codes */ + + case 'e': /* string with encoding */ + { + (void) va_arg(*p_va, const char *); + if (!(*format == 's' || *format == 't')) + /* after 'e', only 's' and 't' is allowed */ + goto err; + format++; + /* explicit fallthrough to string cases */ + } + + case 's': /* string */ + case 'z': /* string or None */ #ifdef Py_USING_UNICODE - case 'u': /* unicode string */ + case 'u': /* unicode string */ #endif - case 't': /* buffer, read-only */ - case 'w': /* buffer, read-write */ - { - (void) va_arg(*p_va, char **); - if (*format == '#') { - if (flags & FLAG_SIZE_T) - (void) va_arg(*p_va, Py_ssize_t *); - else - (void) va_arg(*p_va, int *); - format++; - } else if ((c == 's' || c == 'z') && *format == '*') { - format++; - } - break; - } + case 't': /* buffer, read-only */ + case 'w': /* buffer, read-write */ + { + (void) va_arg(*p_va, char **); + if (*format == '#') { + if (flags & FLAG_SIZE_T) + (void) va_arg(*p_va, Py_ssize_t *); + else + (void) va_arg(*p_va, int *); + format++; + } else if ((c == 's' || c == 'z') && *format == '*') { + format++; + } + break; + } - /* object codes */ + /* object codes */ - case 'S': /* string object */ + case 'S': /* string object */ #ifdef Py_USING_UNICODE - case 'U': /* unicode string object */ + case 'U': /* unicode string object */ #endif - { - (void) 
va_arg(*p_va, PyObject **); - break; - } - - case 'O': /* object */ - { - if (*format == '!') { - format++; - (void) va_arg(*p_va, PyTypeObject*); - (void) va_arg(*p_va, PyObject **); - } -#if 0 -/* I don't know what this is for */ - else if (*format == '?') { - inquiry pred = va_arg(*p_va, inquiry); - format++; - if ((*pred)(arg)) { - (void) va_arg(*p_va, PyObject **); - } - } -#endif - else if (*format == '&') { - typedef int (*converter)(PyObject *, void *); - (void) va_arg(*p_va, converter); - (void) va_arg(*p_va, void *); - format++; - } - else { - (void) va_arg(*p_va, PyObject **); - } - break; - } + { + (void) va_arg(*p_va, PyObject **); + break; + } - case '(': /* bypass tuple, not handled at all previously */ - { - char *msg; - for (;;) { - if (*format==')') - break; - if (IS_END_OF_FORMAT(*format)) - return "Unmatched left paren in format " - "string"; - msg = skipitem(&format, p_va, flags); - if (msg) - return msg; - } - format++; - break; - } + case 'O': /* object */ + { + if (*format == '!') { + format++; + (void) va_arg(*p_va, PyTypeObject*); + (void) va_arg(*p_va, PyObject **); + } + else if (*format == '&') { + typedef int (*converter)(PyObject *, void *); + (void) va_arg(*p_va, converter); + (void) va_arg(*p_va, void *); + format++; + } + else { + (void) va_arg(*p_va, PyObject **); + } + break; + } - case ')': - return "Unmatched right paren in format string"; + case '(': /* bypass tuple, not handled at all previously */ + { + char *msg; + for (;;) { + if (*format==')') + break; + if (IS_END_OF_FORMAT(*format)) + return "Unmatched left paren in format " + "string"; + msg = skipitem(&format, p_va, flags); + if (msg) + return msg; + } + format++; + break; + } - default: + case ')': + return "Unmatched right paren in format string"; + + default: err: - return "impossible"; - - } + return "impossible"; - *p_format = format; - return NULL; + } + + *p_format = format; + return NULL; } int PyArg_UnpackTuple(PyObject *args, const char *name, Py_ssize_t 
min, Py_ssize_t max, ...) { - Py_ssize_t i, l; - PyObject **o; - va_list vargs; + Py_ssize_t i, l; + PyObject **o; + va_list vargs; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, max); + va_start(vargs, max); #else - va_start(vargs); + va_start(vargs); #endif - assert(min >= 0); - assert(min <= max); - if (!PyTuple_Check(args)) { - PyErr_SetString(PyExc_SystemError, - "PyArg_UnpackTuple() argument list is not a tuple"); - return 0; - } - l = PyTuple_GET_SIZE(args); - if (l < min) { - if (name != NULL) - PyErr_Format( - PyExc_TypeError, - "%s expected %s%zd arguments, got %zd", - name, (min == max ? "" : "at least "), min, l); - else - PyErr_Format( - PyExc_TypeError, - "unpacked tuple should have %s%zd elements," - " but has %zd", - (min == max ? "" : "at least "), min, l); - va_end(vargs); - return 0; - } - if (l > max) { - if (name != NULL) - PyErr_Format( - PyExc_TypeError, - "%s expected %s%zd arguments, got %zd", - name, (min == max ? "" : "at most "), max, l); - else - PyErr_Format( - PyExc_TypeError, - "unpacked tuple should have %s%zd elements," - " but has %zd", - (min == max ? "" : "at most "), max, l); - va_end(vargs); - return 0; - } - for (i = 0; i < l; i++) { - o = va_arg(vargs, PyObject **); - *o = PyTuple_GET_ITEM(args, i); - } - va_end(vargs); - return 1; + assert(min >= 0); + assert(min <= max); + if (!PyTuple_Check(args)) { + PyErr_SetString(PyExc_SystemError, + "PyArg_UnpackTuple() argument list is not a tuple"); + return 0; + } + l = PyTuple_GET_SIZE(args); + if (l < min) { + if (name != NULL) + PyErr_Format( + PyExc_TypeError, + "%s expected %s%zd arguments, got %zd", + name, (min == max ? "" : "at least "), min, l); + else + PyErr_Format( + PyExc_TypeError, + "unpacked tuple should have %s%zd elements," + " but has %zd", + (min == max ? "" : "at least "), min, l); + va_end(vargs); + return 0; + } + if (l > max) { + if (name != NULL) + PyErr_Format( + PyExc_TypeError, + "%s expected %s%zd arguments, got %zd", + name, (min == max ? 
"" : "at most "), max, l); + else + PyErr_Format( + PyExc_TypeError, + "unpacked tuple should have %s%zd elements," + " but has %zd", + (min == max ? "" : "at most "), max, l); + va_end(vargs); + return 0; + } + for (i = 0; i < l; i++) { + o = va_arg(vargs, PyObject **); + *o = PyTuple_GET_ITEM(args, i); + } + va_end(vargs); + return 1; } /* For type constructors that don't take keyword args * - * Sets a TypeError and returns 0 if the kwds dict is + * Sets a TypeError and returns 0 if the kwds dict is * not empty, returns 1 otherwise */ int _PyArg_NoKeywords(const char *funcname, PyObject *kw) { - if (kw == NULL) - return 1; - if (!PyDict_CheckExact(kw)) { - PyErr_BadInternalCall(); - return 0; - } - if (PyDict_Size(kw) == 0) - return 1; - - PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments", - funcname); - return 0; + if (kw == NULL) + return 1; + if (!PyDict_CheckExact(kw)) { + PyErr_BadInternalCall(); + return 0; + } + if (PyDict_Size(kw) == 0) + return 1; + + PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments", + funcname); + return 0; } #ifdef __cplusplus }; diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c --- a/pypy/module/cpyext/src/modsupport.c +++ b/pypy/module/cpyext/src/modsupport.c @@ -33,41 +33,41 @@ static int countformat(const char *format, int endchar) { - int count = 0; - int level = 0; - while (level > 0 || *format != endchar) { - switch (*format) { - case '\0': - /* Premature end */ - PyErr_SetString(PyExc_SystemError, - "unmatched paren in format"); - return -1; - case '(': - case '[': - case '{': - if (level == 0) - count++; - level++; - break; - case ')': - case ']': - case '}': - level--; - break; - case '#': - case '&': - case ',': - case ':': - case ' ': - case '\t': - break; - default: - if (level == 0) - count++; - } - format++; - } - return count; + int count = 0; + int level = 0; + while (level > 0 || *format != endchar) { + switch (*format) { + case '\0': + /* 
Premature end */ + PyErr_SetString(PyExc_SystemError, + "unmatched paren in format"); + return -1; + case '(': + case '[': + case '{': + if (level == 0) + count++; + level++; + break; + case ')': + case ']': + case '}': + level--; + break; + case '#': + case '&': + case ',': + case ':': + case ' ': + case '\t': + break; + default: + if (level == 0) + count++; + } + format++; + } + return count; } @@ -83,582 +83,435 @@ static PyObject * do_mkdict(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *d; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - if ((d = PyDict_New()) == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. */ - for (i = 0; i < n; i+= 2) { - PyObject *k, *v; - int err; - k = do_mkvalue(p_format, p_va, flags); - if (k == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - k = Py_None; - } - v = do_mkvalue(p_format, p_va, flags); - if (v == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - v = Py_None; - } - err = PyDict_SetItem(d, k, v); - Py_DECREF(k); - Py_DECREF(v); - if (err < 0 || itemfailed) { - Py_DECREF(d); - return NULL; - } - } - if (d != NULL && **p_format != endchar) { - Py_DECREF(d); - d = NULL; - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - } - else if (endchar) - ++*p_format; - return d; + PyObject *d; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + if ((d = PyDict_New()) == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. 
*/ + for (i = 0; i < n; i+= 2) { + PyObject *k, *v; + int err; + k = do_mkvalue(p_format, p_va, flags); + if (k == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + k = Py_None; + } + v = do_mkvalue(p_format, p_va, flags); + if (v == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + v = Py_None; + } + err = PyDict_SetItem(d, k, v); + Py_DECREF(k); + Py_DECREF(v); + if (err < 0 || itemfailed) { + Py_DECREF(d); + return NULL; + } + } + if (d != NULL && **p_format != endchar) { + Py_DECREF(d); + d = NULL; + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + } + else if (endchar) + ++*p_format; + return d; } static PyObject * do_mklist(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *v; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - v = PyList_New(n); - if (v == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. */ - for (i = 0; i < n; i++) { - PyObject *w = do_mkvalue(p_format, p_va, flags); - if (w == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - w = Py_None; - } - PyList_SET_ITEM(v, i, w); - } + PyObject *v; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + v = PyList_New(n); + if (v == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. 
*/ + for (i = 0; i < n; i++) { + PyObject *w = do_mkvalue(p_format, p_va, flags); + if (w == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + w = Py_None; + } + PyList_SET_ITEM(v, i, w); + } - if (itemfailed) { - /* do_mkvalue() should have already set an error */ - Py_DECREF(v); - return NULL; - } - if (**p_format != endchar) { - Py_DECREF(v); - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - return NULL; - } - if (endchar) - ++*p_format; - return v; + if (itemfailed) { + /* do_mkvalue() should have already set an error */ + Py_DECREF(v); + return NULL; + } + if (**p_format != endchar) { + Py_DECREF(v); + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + return NULL; + } + if (endchar) + ++*p_format; + return v; } #ifdef Py_USING_UNICODE static int _ustrlen(Py_UNICODE *u) { - int i = 0; - Py_UNICODE *v = u; - while (*v != 0) { i++; v++; } - return i; + int i = 0; + Py_UNICODE *v = u; + while (*v != 0) { i++; v++; } + return i; } #endif static PyObject * do_mktuple(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *v; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - if ((v = PyTuple_New(n)) == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. 
*/ - for (i = 0; i < n; i++) { - PyObject *w = do_mkvalue(p_format, p_va, flags); - if (w == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - w = Py_None; - } - PyTuple_SET_ITEM(v, i, w); - } - if (itemfailed) { - /* do_mkvalue() should have already set an error */ - Py_DECREF(v); - return NULL; - } - if (**p_format != endchar) { - Py_DECREF(v); - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - return NULL; - } - if (endchar) - ++*p_format; - return v; + PyObject *v; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + if ((v = PyTuple_New(n)) == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. */ + for (i = 0; i < n; i++) { + PyObject *w = do_mkvalue(p_format, p_va, flags); + if (w == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + w = Py_None; + } + PyTuple_SET_ITEM(v, i, w); + } + if (itemfailed) { + /* do_mkvalue() should have already set an error */ + Py_DECREF(v); + return NULL; + } + if (**p_format != endchar) { + Py_DECREF(v); + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + return NULL; + } + if (endchar) + ++*p_format; + return v; } static PyObject * do_mkvalue(const char **p_format, va_list *p_va, int flags) { - for (;;) { - switch (*(*p_format)++) { - case '(': - return do_mktuple(p_format, p_va, ')', - countformat(*p_format, ')'), flags); + for (;;) { + switch (*(*p_format)++) { + case '(': + return do_mktuple(p_format, p_va, ')', + countformat(*p_format, ')'), flags); - case '[': - return do_mklist(p_format, p_va, ']', - countformat(*p_format, ']'), flags); + case '[': + return do_mklist(p_format, p_va, ']', + countformat(*p_format, ']'), flags); - case '{': - return do_mkdict(p_format, p_va, '}', - countformat(*p_format, '}'), flags); + case '{': + return do_mkdict(p_format, p_va, '}', + countformat(*p_format, '}'), flags); - case 'b': - case 'B': - case 'h': - case 'i': - return PyInt_FromLong((long)va_arg(*p_va, int)); - - 
case 'H': - return PyInt_FromLong((long)va_arg(*p_va, unsigned int)); + case 'b': + case 'B': + case 'h': + case 'i': + return PyInt_FromLong((long)va_arg(*p_va, int)); - case 'I': - { - unsigned int n; - n = va_arg(*p_va, unsigned int); - if (n > (unsigned long)PyInt_GetMax()) - return PyLong_FromUnsignedLong((unsigned long)n); - else - return PyInt_FromLong(n); - } - - case 'n': + case 'H': + return PyInt_FromLong((long)va_arg(*p_va, unsigned int)); + + case 'I': + { + unsigned int n; + n = va_arg(*p_va, unsigned int); + if (n > (unsigned long)PyInt_GetMax()) + return PyLong_FromUnsignedLong((unsigned long)n); + else + return PyInt_FromLong(n); + } + + case 'n': #if SIZEOF_SIZE_T!=SIZEOF_LONG - return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t)); + return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t)); #endif - /* Fall through from 'n' to 'l' if Py_ssize_t is long */ - case 'l': - return PyInt_FromLong(va_arg(*p_va, long)); + /* Fall through from 'n' to 'l' if Py_ssize_t is long */ + case 'l': + return PyInt_FromLong(va_arg(*p_va, long)); - case 'k': - { - unsigned long n; - n = va_arg(*p_va, unsigned long); - if (n > (unsigned long)PyInt_GetMax()) - return PyLong_FromUnsignedLong(n); - else - return PyInt_FromLong(n); - } + case 'k': + { + unsigned long n; + n = va_arg(*p_va, unsigned long); + if (n > (unsigned long)PyInt_GetMax()) + return PyLong_FromUnsignedLong(n); + else + return PyInt_FromLong(n); + } #ifdef HAVE_LONG_LONG - case 'L': - return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG)); + case 'L': + return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG)); - case 'K': - return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG)); + case 'K': + return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG)); #endif #ifdef Py_USING_UNICODE - case 'u': - { - PyObject *v; - Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *); - Py_ssize_t n; - if (**p_format == '#') { - ++*p_format; - if 
(flags & FLAG_SIZE_T) - n = va_arg(*p_va, Py_ssize_t); - else - n = va_arg(*p_va, int); - } - else - n = -1; - if (u == NULL) { - v = Py_None; - Py_INCREF(v); - } - else { - if (n < 0) - n = _ustrlen(u); - v = PyUnicode_FromUnicode(u, n); - } - return v; - } + case 'u': + { + PyObject *v; + Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *); + Py_ssize_t n; + if (**p_format == '#') { + ++*p_format; + if (flags & FLAG_SIZE_T) + n = va_arg(*p_va, Py_ssize_t); + else + n = va_arg(*p_va, int); + } + else + n = -1; + if (u == NULL) { + v = Py_None; + Py_INCREF(v); + } + else { + if (n < 0) + n = _ustrlen(u); + v = PyUnicode_FromUnicode(u, n); + } + return v; + } #endif - case 'f': - case 'd': - return PyFloat_FromDouble( - (double)va_arg(*p_va, va_double)); + case 'f': + case 'd': + return PyFloat_FromDouble( + (double)va_arg(*p_va, va_double)); #ifndef WITHOUT_COMPLEX - case 'D': - return PyComplex_FromCComplex( - *((Py_complex *)va_arg(*p_va, Py_complex *))); + case 'D': + return PyComplex_FromCComplex( + *((Py_complex *)va_arg(*p_va, Py_complex *))); #endif /* WITHOUT_COMPLEX */ - case 'c': - { - char p[1]; - p[0] = (char)va_arg(*p_va, int); - return PyString_FromStringAndSize(p, 1); - } + case 'c': + { + char p[1]; + p[0] = (char)va_arg(*p_va, int); + return PyString_FromStringAndSize(p, 1); + } - case 's': - case 'z': - { - PyObject *v; - char *str = va_arg(*p_va, char *); - Py_ssize_t n; - if (**p_format == '#') { - ++*p_format; - if (flags & FLAG_SIZE_T) - n = va_arg(*p_va, Py_ssize_t); - else - n = va_arg(*p_va, int); - } - else - n = -1; - if (str == NULL) { - v = Py_None; - Py_INCREF(v); - } - else { - if (n < 0) { - size_t m = strlen(str); - if (m > PY_SSIZE_T_MAX) { - PyErr_SetString(PyExc_OverflowError, - "string too long for Python string"); - return NULL; - } - n = (Py_ssize_t)m; - } - v = PyString_FromStringAndSize(str, n); - } - return v; - } + case 's': + case 'z': + { + PyObject *v; + char *str = va_arg(*p_va, char *); + Py_ssize_t n; + if (**p_format == 
'#') { + ++*p_format; + if (flags & FLAG_SIZE_T) + n = va_arg(*p_va, Py_ssize_t); + else + n = va_arg(*p_va, int); + } + else + n = -1; + if (str == NULL) { + v = Py_None; + Py_INCREF(v); + } + else { + if (n < 0) { + size_t m = strlen(str); + if (m > PY_SSIZE_T_MAX) { + PyErr_SetString(PyExc_OverflowError, + "string too long for Python string"); + return NULL; + } + n = (Py_ssize_t)m; + } + v = PyString_FromStringAndSize(str, n); + } + return v; + } - case 'N': - case 'S': - case 'O': - if (**p_format == '&') { - typedef PyObject *(*converter)(void *); - converter func = va_arg(*p_va, converter); - void *arg = va_arg(*p_va, void *); - ++*p_format; - return (*func)(arg); - } - else { - PyObject *v; - v = va_arg(*p_va, PyObject *); - if (v != NULL) { - if (*(*p_format - 1) != 'N') - Py_INCREF(v); - } - else if (!PyErr_Occurred()) - /* If a NULL was passed - * because a call that should - * have constructed a value - * failed, that's OK, and we - * pass the error on; but if - * no error occurred it's not - * clear that the caller knew - * what she was doing. */ - PyErr_SetString(PyExc_SystemError, - "NULL object passed to Py_BuildValue"); - return v; - } + case 'N': + case 'S': + case 'O': + if (**p_format == '&') { + typedef PyObject *(*converter)(void *); + converter func = va_arg(*p_va, converter); + void *arg = va_arg(*p_va, void *); + ++*p_format; + return (*func)(arg); + } + else { + PyObject *v; + v = va_arg(*p_va, PyObject *); + if (v != NULL) { + if (*(*p_format - 1) != 'N') + Py_INCREF(v); + } + else if (!PyErr_Occurred()) + /* If a NULL was passed + * because a call that should + * have constructed a value + * failed, that's OK, and we + * pass the error on; but if + * no error occurred it's not + * clear that the caller knew + * what she was doing. 
*/ + PyErr_SetString(PyExc_SystemError, + "NULL object passed to Py_BuildValue"); + return v; + } - case ':': - case ',': - case ' ': - case '\t': - break; + case ':': + case ',': + case ' ': + case '\t': + break; - default: - PyErr_SetString(PyExc_SystemError, - "bad format char passed to Py_BuildValue"); - return NULL; + default: + PyErr_SetString(PyExc_SystemError, + "bad format char passed to Py_BuildValue"); + return NULL; - } - } + } + } } PyObject * Py_BuildValue(const char *format, ...) { - va_list va; - PyObject* retval; - va_start(va, format); - retval = va_build_value(format, va, 0); - va_end(va); - return retval; + va_list va; + PyObject* retval; + va_start(va, format); + retval = va_build_value(format, va, 0); + va_end(va); + return retval; } PyObject * _Py_BuildValue_SizeT(const char *format, ...) { - va_list va; - PyObject* retval; - va_start(va, format); - retval = va_build_value(format, va, FLAG_SIZE_T); - va_end(va); - return retval; + va_list va; + PyObject* retval; + va_start(va, format); + retval = va_build_value(format, va, FLAG_SIZE_T); + va_end(va); + return retval; } PyObject * Py_VaBuildValue(const char *format, va_list va) { - return va_build_value(format, va, 0); + return va_build_value(format, va, 0); } PyObject * _Py_VaBuildValue_SizeT(const char *format, va_list va) { - return va_build_value(format, va, FLAG_SIZE_T); + return va_build_value(format, va, FLAG_SIZE_T); } static PyObject * va_build_value(const char *format, va_list va, int flags) { - const char *f = format; - int n = countformat(f, '\0'); - va_list lva; + const char *f = format; + int n = countformat(f, '\0'); + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - if (n < 0) - return NULL; - if (n == 0) { - Py_INCREF(Py_None); - return Py_None; - } - if (n == 1) - return do_mkvalue(&f, &lva, flags); - 
return do_mktuple(&f, &lva, '\0', n, flags); + if (n < 0) + return NULL; + if (n == 0) { + Py_INCREF(Py_None); + return Py_None; + } + if (n == 1) + return do_mkvalue(&f, &lva, flags); + return do_mktuple(&f, &lva, '\0', n, flags); } PyObject * PyEval_CallFunction(PyObject *obj, const char *format, ...) { - va_list vargs; - PyObject *args; - PyObject *res; + va_list vargs; + PyObject *args; + PyObject *res; - va_start(vargs, format); + va_start(vargs, format); - args = Py_VaBuildValue(format, vargs); - va_end(vargs); + args = Py_VaBuildValue(format, vargs); + va_end(vargs); - if (args == NULL) - return NULL; + if (args == NULL) + return NULL; - res = PyEval_CallObject(obj, args); - Py_DECREF(args); + res = PyEval_CallObject(obj, args); + Py_DECREF(args); - return res; + return res; } PyObject * PyEval_CallMethod(PyObject *obj, const char *methodname, const char *format, ...) { - va_list vargs; - PyObject *meth; - PyObject *args; - PyObject *res; + va_list vargs; + PyObject *meth; + PyObject *args; + PyObject *res; - meth = PyObject_GetAttrString(obj, methodname); - if (meth == NULL) - return NULL; + meth = PyObject_GetAttrString(obj, methodname); + if (meth == NULL) + return NULL; - va_start(vargs, format); + va_start(vargs, format); - args = Py_VaBuildValue(format, vargs); - va_end(vargs); + args = Py_VaBuildValue(format, vargs); + va_end(vargs); - if (args == NULL) { - Py_DECREF(meth); - return NULL; - } + if (args == NULL) { + Py_DECREF(meth); + return NULL; + } - res = PyEval_CallObject(meth, args); - Py_DECREF(meth); - Py_DECREF(args); + res = PyEval_CallObject(meth, args); + Py_DECREF(meth); + Py_DECREF(args); - return res; -} - -static PyObject* -call_function_tail(PyObject *callable, PyObject *args) -{ - PyObject *retval; - - if (args == NULL) - return NULL; - - if (!PyTuple_Check(args)) { - PyObject *a; - - a = PyTuple_New(1); - if (a == NULL) { - Py_DECREF(args); - return NULL; - } - PyTuple_SET_ITEM(a, 0, args); - args = a; - } - retval = 
PyObject_Call(callable, args, NULL); - - Py_DECREF(args); - - return retval; -} - -PyObject * -PyObject_CallFunction(PyObject *callable, const char *format, ...) -{ - va_list va; - PyObject *args; - - if (format && *format) { - va_start(va, format); - args = Py_VaBuildValue(format, va); - va_end(va); - } - else - args = PyTuple_New(0); - - return call_function_tail(callable, args); -} - -PyObject * -PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) -{ - va_list va; - PyObject *args; - PyObject *func = NULL; - PyObject *retval = NULL; - - func = PyObject_GetAttrString(o, name); - if (func == NULL) { - PyErr_SetString(PyExc_AttributeError, name); - return 0; - } - - if (format && *format) { - va_start(va, format); - args = Py_VaBuildValue(format, va); - va_end(va); - } - else - args = PyTuple_New(0); - - retval = call_function_tail(func, args); - - exit: - /* args gets consumed in call_function_tail */ - Py_XDECREF(func); - - return retval; -} - -static PyObject * -objargs_mktuple(va_list va) -{ - int i, n = 0; - va_list countva; - PyObject *result, *tmp; - -#ifdef VA_LIST_IS_ARRAY - memcpy(countva, va, sizeof(va_list)); -#else -#ifdef __va_copy - __va_copy(countva, va); -#else - countva = va; -#endif -#endif - - while (((PyObject *)va_arg(countva, PyObject *)) != NULL) - ++n; - result = PyTuple_New(n); - if (result != NULL && n > 0) { - for (i = 0; i < n; ++i) { - tmp = (PyObject *)va_arg(va, PyObject *); - Py_INCREF(tmp); - PyTuple_SET_ITEM(result, i, tmp); - } - } - return result; -} - -PyObject * -PyObject_CallFunctionObjArgs(PyObject *callable, ...) -{ - PyObject *args, *tmp; - va_list vargs; - - /* count the args */ - va_start(vargs, callable); - args = objargs_mktuple(vargs); - va_end(vargs); - if (args == NULL) - return NULL; - tmp = PyObject_Call(callable, args, NULL); - Py_DECREF(args); - - return tmp; -} - -PyObject * -PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...) 
-{ - PyObject *args, *tmp; - va_list vargs; - - callable = PyObject_GetAttr(callable, name); - if (callable == NULL) - return NULL; - - /* count the args */ - va_start(vargs, name); - args = objargs_mktuple(vargs); - va_end(vargs); - if (args == NULL) { - Py_DECREF(callable); - return NULL; - } - tmp = PyObject_Call(callable, args, NULL); - Py_DECREF(args); - Py_DECREF(callable); - - return tmp; + return res; } /* returns -1 in case of error, 0 if a new key was added, 1 if the key @@ -666,67 +519,67 @@ static int _PyModule_AddObject_NoConsumeRef(PyObject *m, const char *name, PyObject *o) { - PyObject *dict, *prev; - if (!PyModule_Check(m)) { - PyErr_SetString(PyExc_TypeError, - "PyModule_AddObject() needs module as first arg"); - return -1; - } - if (!o) { - if (!PyErr_Occurred()) - PyErr_SetString(PyExc_TypeError, - "PyModule_AddObject() needs non-NULL value"); - return -1; - } + PyObject *dict, *prev; + if (!PyModule_Check(m)) { + PyErr_SetString(PyExc_TypeError, + "PyModule_AddObject() needs module as first arg"); + return -1; + } + if (!o) { + if (!PyErr_Occurred()) + PyErr_SetString(PyExc_TypeError, + "PyModule_AddObject() needs non-NULL value"); + return -1; + } - dict = PyModule_GetDict(m); - if (dict == NULL) { - /* Internal error -- modules must have a dict! */ - PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__", - PyModule_GetName(m)); - return -1; - } - prev = PyDict_GetItemString(dict, name); - if (PyDict_SetItemString(dict, name, o)) - return -1; - return prev != NULL; + dict = PyModule_GetDict(m); + if (dict == NULL) { + /* Internal error -- modules must have a dict! 
*/ + PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__", + PyModule_GetName(m)); + return -1; + } + prev = PyDict_GetItemString(dict, name); + if (PyDict_SetItemString(dict, name, o)) + return -1; + return prev != NULL; } int PyModule_AddObject(PyObject *m, const char *name, PyObject *o) { - int result = _PyModule_AddObject_NoConsumeRef(m, name, o); - /* XXX WORKAROUND for a common misusage of PyModule_AddObject: - for the common case of adding a new key, we don't consume a - reference, but instead just leak it away. The issue is that - people generally don't realize that this function consumes a - reference, because on CPython the reference is still stored - on the dictionary. */ - if (result != 0) - Py_DECREF(o); - return result < 0 ? -1 : 0; + int result = _PyModule_AddObject_NoConsumeRef(m, name, o); + /* XXX WORKAROUND for a common misusage of PyModule_AddObject: + for the common case of adding a new key, we don't consume a + reference, but instead just leak it away. The issue is that + people generally don't realize that this function consumes a + reference, because on CPython the reference is still stored + on the dictionary. */ + if (result != 0) + Py_DECREF(o); + return result < 0 ? -1 : 0; } -int +int PyModule_AddIntConstant(PyObject *m, const char *name, long value) { - int result; - PyObject *o = PyInt_FromLong(value); - if (!o) - return -1; - result = _PyModule_AddObject_NoConsumeRef(m, name, o); - Py_DECREF(o); - return result < 0 ? -1 : 0; + int result; + PyObject *o = PyInt_FromLong(value); + if (!o) + return -1; + result = _PyModule_AddObject_NoConsumeRef(m, name, o); + Py_DECREF(o); + return result < 0 ? -1 : 0; } -int +int PyModule_AddStringConstant(PyObject *m, const char *name, const char *value) { - int result; - PyObject *o = PyString_FromString(value); - if (!o) - return -1; - result = _PyModule_AddObject_NoConsumeRef(m, name, o); - Py_DECREF(o); - return result < 0 ? 
-1 : 0; + int result; + PyObject *o = PyString_FromString(value); + if (!o) + return -1; + result = _PyModule_AddObject_NoConsumeRef(m, name, o); + Py_DECREF(o); + return result < 0 ? -1 : 0; } diff --git a/pypy/module/cpyext/src/mysnprintf.c b/pypy/module/cpyext/src/mysnprintf.c --- a/pypy/module/cpyext/src/mysnprintf.c +++ b/pypy/module/cpyext/src/mysnprintf.c @@ -20,86 +20,86 @@ Return value (rv): - When 0 <= rv < size, the output conversion was unexceptional, and - rv characters were written to str (excluding a trailing \0 byte at - str[rv]). + When 0 <= rv < size, the output conversion was unexceptional, and + rv characters were written to str (excluding a trailing \0 byte at + str[rv]). - When rv >= size, output conversion was truncated, and a buffer of - size rv+1 would have been needed to avoid truncation. str[size-1] - is \0 in this case. + When rv >= size, output conversion was truncated, and a buffer of + size rv+1 would have been needed to avoid truncation. str[size-1] + is \0 in this case. - When rv < 0, "something bad happened". str[size-1] is \0 in this - case too, but the rest of str is unreliable. It could be that - an error in format codes was detected by libc, or on platforms - with a non-C99 vsnprintf simply that the buffer wasn't big enough - to avoid truncation, or on platforms without any vsnprintf that - PyMem_Malloc couldn't obtain space for a temp buffer. + When rv < 0, "something bad happened". str[size-1] is \0 in this + case too, but the rest of str is unreliable. It could be that + an error in format codes was detected by libc, or on platforms + with a non-C99 vsnprintf simply that the buffer wasn't big enough + to avoid truncation, or on platforms without any vsnprintf that + PyMem_Malloc couldn't obtain space for a temp buffer. CAUTION: Unlike C99, str != NULL and size > 0 are required. */ int +PyOS_snprintf(char *str, size_t size, const char *format, ...) 
+{ + int rc; + va_list va; + + va_start(va, format); + rc = PyOS_vsnprintf(str, size, format, va); + va_end(va); + return rc; +} + +int PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va) { - int len; /* # bytes written, excluding \0 */ + int len; /* # bytes written, excluding \0 */ #ifdef HAVE_SNPRINTF #define _PyOS_vsnprintf_EXTRA_SPACE 1 #else #define _PyOS_vsnprintf_EXTRA_SPACE 512 - char *buffer; + char *buffer; #endif - assert(str != NULL); - assert(size > 0); - assert(format != NULL); - /* We take a size_t as input but return an int. Sanity check - * our input so that it won't cause an overflow in the - * vsnprintf return value or the buffer malloc size. */ - if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) { - len = -666; - goto Done; - } + assert(str != NULL); + assert(size > 0); + assert(format != NULL); + /* We take a size_t as input but return an int. Sanity check + * our input so that it won't cause an overflow in the + * vsnprintf return value or the buffer malloc size. */ + if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) { + len = -666; + goto Done; + } #ifdef HAVE_SNPRINTF - len = vsnprintf(str, size, format, va); + len = vsnprintf(str, size, format, va); #else - /* Emulate it. */ - buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE); - if (buffer == NULL) { - len = -666; - goto Done; - } + /* Emulate it. */ + buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE); + if (buffer == NULL) { + len = -666; + goto Done; + } - len = vsprintf(buffer, format, va); - if (len < 0) - /* ignore the error */; + len = vsprintf(buffer, format, va); + if (len < 0) + /* ignore the error */; - else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE) - Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf"); + else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE) + Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf"); - else { - const size_t to_copy = (size_t)len < size ? 
- (size_t)len : size - 1; - assert(to_copy < size); - memcpy(str, buffer, to_copy); - str[to_copy] = '\0'; - } - PyMem_FREE(buffer); + else { + const size_t to_copy = (size_t)len < size ? + (size_t)len : size - 1; + assert(to_copy < size); + memcpy(str, buffer, to_copy); + str[to_copy] = '\0'; + } + PyMem_FREE(buffer); #endif Done: - if (size > 0) - str[size-1] = '\0'; - return len; + if (size > 0) + str[size-1] = '\0'; + return len; #undef _PyOS_vsnprintf_EXTRA_SPACE } - -int -PyOS_snprintf(char *str, size_t size, const char *format, ...) -{ - int rc; - va_list va; - - va_start(va, format); - rc = PyOS_vsnprintf(str, size, format, va); - va_end(va); - return rc; -} diff --git a/pypy/module/cpyext/src/object.c b/pypy/module/cpyext/src/object.c deleted file mode 100644 --- a/pypy/module/cpyext/src/object.c +++ /dev/null @@ -1,91 +0,0 @@ -// contains code from abstract.c -#include - - -static PyObject * -null_error(void) -{ - if (!PyErr_Occurred()) - PyErr_SetString(PyExc_SystemError, - "null argument to internal routine"); - return NULL; -} - -int PyObject_AsReadBuffer(PyObject *obj, - const void **buffer, - Py_ssize_t *buffer_len) -{ - PyBufferProcs *pb; - void *pp; - Py_ssize_t len; - - if (obj == NULL || buffer == NULL || buffer_len == NULL) { - null_error(); - return -1; - } - pb = obj->ob_type->tp_as_buffer; - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a readable buffer object"); - return -1; - } - if ((*pb->bf_getsegcount)(obj, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "expected a single-segment buffer object"); - return -1; - } - len = (*pb->bf_getreadbuffer)(obj, 0, &pp); - if (len < 0) - return -1; - *buffer = pp; - *buffer_len = len; - return 0; -} - -int PyObject_AsWriteBuffer(PyObject *obj, - void **buffer, - Py_ssize_t *buffer_len) -{ - PyBufferProcs *pb; - void*pp; - Py_ssize_t len; - - if (obj == NULL || buffer == NULL || buffer_len == NULL) { - 
null_error(); - return -1; - } - pb = obj->ob_type->tp_as_buffer; - if (pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a writeable buffer object"); - return -1; - } - if ((*pb->bf_getsegcount)(obj, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "expected a single-segment buffer object"); - return -1; - } - len = (*pb->bf_getwritebuffer)(obj,0,&pp); - if (len < 0) - return -1; - *buffer = pp; - *buffer_len = len; - return 0; -} - -int -PyObject_CheckReadBuffer(PyObject *obj) -{ - PyBufferProcs *pb = obj->ob_type->tp_as_buffer; - - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL || - (*pb->bf_getsegcount)(obj, NULL) != 1) - return 0; - return 1; -} diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c --- a/pypy/module/cpyext/src/pyerrors.c +++ b/pypy/module/cpyext/src/pyerrors.c @@ -4,72 +4,75 @@ PyObject * PyErr_Format(PyObject *exception, const char *format, ...) 
{ - va_list vargs; - PyObject* string; + va_list vargs; + PyObject* string; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, format); + va_start(vargs, format); #else - va_start(vargs); + va_start(vargs); #endif - string = PyString_FromFormatV(format, vargs); - PyErr_SetObject(exception, string); - Py_XDECREF(string); - va_end(vargs); - return NULL; + string = PyString_FromFormatV(format, vargs); + PyErr_SetObject(exception, string); + Py_XDECREF(string); + va_end(vargs); + return NULL; } + + PyObject * PyErr_NewException(const char *name, PyObject *base, PyObject *dict) { - char *dot; - PyObject *modulename = NULL; - PyObject *classname = NULL; - PyObject *mydict = NULL; - PyObject *bases = NULL; - PyObject *result = NULL; - dot = strrchr(name, '.'); - if (dot == NULL) { - PyErr_SetString(PyExc_SystemError, - "PyErr_NewException: name must be module.class"); - return NULL; - } - if (base == NULL) - base = PyExc_Exception; - if (dict == NULL) { - dict = mydict = PyDict_New(); - if (dict == NULL) - goto failure; - } - if (PyDict_GetItemString(dict, "__module__") == NULL) { - modulename = PyString_FromStringAndSize(name, - (Py_ssize_t)(dot-name)); - if (modulename == NULL) - goto failure; - if (PyDict_SetItemString(dict, "__module__", modulename) != 0) - goto failure; - } - if (PyTuple_Check(base)) { - bases = base; - /* INCREF as we create a new ref in the else branch */ - Py_INCREF(bases); - } else { - bases = PyTuple_Pack(1, base); - if (bases == NULL) - goto failure; - } - /* Create a real new-style class. 
*/ - result = PyObject_CallFunction((PyObject *)&PyType_Type, "sOO", - dot+1, bases, dict); + char *dot; + PyObject *modulename = NULL; + PyObject *classname = NULL; + PyObject *mydict = NULL; + PyObject *bases = NULL; + PyObject *result = NULL; + dot = strrchr(name, '.'); + if (dot == NULL) { + PyErr_SetString(PyExc_SystemError, + "PyErr_NewException: name must be module.class"); + return NULL; + } + if (base == NULL) + base = PyExc_Exception; + if (dict == NULL) { + dict = mydict = PyDict_New(); + if (dict == NULL) + goto failure; + } + if (PyDict_GetItemString(dict, "__module__") == NULL) { + modulename = PyString_FromStringAndSize(name, + (Py_ssize_t)(dot-name)); + if (modulename == NULL) + goto failure; + if (PyDict_SetItemString(dict, "__module__", modulename) != 0) + goto failure; + } + if (PyTuple_Check(base)) { + bases = base; + /* INCREF as we create a new ref in the else branch */ + Py_INCREF(bases); + } else { + bases = PyTuple_Pack(1, base); + if (bases == NULL) + goto failure; + } + /* Create a real new-style class. 
*/ + result = PyObject_CallFunction((PyObject *)&PyType_Type, "sOO", + dot+1, bases, dict); failure: - Py_XDECREF(bases); - Py_XDECREF(mydict); - Py_XDECREF(classname); - Py_XDECREF(modulename); - return result; + Py_XDECREF(bases); + Py_XDECREF(mydict); + Py_XDECREF(classname); + Py_XDECREF(modulename); + return result; } + /* Create an exception with docstring */ PyObject * PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict) diff --git a/pypy/module/cpyext/src/pysignals.c b/pypy/module/cpyext/src/pysignals.c --- a/pypy/module/cpyext/src/pysignals.c +++ b/pypy/module/cpyext/src/pysignals.c @@ -17,17 +17,34 @@ PyOS_getsig(int sig) { #ifdef SA_RESTART - /* assume sigaction exists */ - struct sigaction context; - if (sigaction(sig, NULL, &context) == -1) - return SIG_ERR; - return context.sa_handler; + /* assume sigaction exists */ + struct sigaction context; + if (sigaction(sig, NULL, &context) == -1) + return SIG_ERR; + return context.sa_handler; #else - PyOS_sighandler_t handler; - handler = signal(sig, SIG_IGN); - if (handler != SIG_ERR) - signal(sig, handler); - return handler; + PyOS_sighandler_t handler; +/* Special signal handling for the secure CRT in Visual Studio 2005 */ +#if defined(_MSC_VER) && _MSC_VER >= 1400 + switch (sig) { + /* Only these signals are valid */ + case SIGINT: + case SIGILL: + case SIGFPE: + case SIGSEGV: + case SIGTERM: + case SIGBREAK: + case SIGABRT: + break; + /* Don't call signal() with other values or it will assert */ + default: + return SIG_ERR; + } +#endif /* _MSC_VER && _MSC_VER >= 1400 */ + handler = signal(sig, SIG_IGN); + if (handler != SIG_ERR) + signal(sig, handler); + return handler; #endif } @@ -35,21 +52,21 @@ PyOS_setsig(int sig, PyOS_sighandler_t handler) { #ifdef SA_RESTART - /* assume sigaction exists */ - struct sigaction context, ocontext; - context.sa_handler = handler; - sigemptyset(&context.sa_mask); - context.sa_flags = 0; - if (sigaction(sig, &context, &ocontext) == 
-1) - return SIG_ERR; - return ocontext.sa_handler; + /* assume sigaction exists */ + struct sigaction context, ocontext; + context.sa_handler = handler; + sigemptyset(&context.sa_mask); + context.sa_flags = 0; + if (sigaction(sig, &context, &ocontext) == -1) + return SIG_ERR; + return ocontext.sa_handler; #else - PyOS_sighandler_t oldhandler; - oldhandler = signal(sig, handler); + PyOS_sighandler_t oldhandler; + oldhandler = signal(sig, handler); #ifndef MS_WINDOWS - /* should check if this exists */ - siginterrupt(sig, 1); + /* should check if this exists */ + siginterrupt(sig, 1); #endif - return oldhandler; + return oldhandler; #endif } diff --git a/pypy/module/cpyext/src/pythonrun.c b/pypy/module/cpyext/src/pythonrun.c --- a/pypy/module/cpyext/src/pythonrun.c +++ b/pypy/module/cpyext/src/pythonrun.c @@ -9,28 +9,28 @@ void Py_FatalError(const char *msg) { - fprintf(stderr, "Fatal Python error: %s\n", msg); - fflush(stderr); /* it helps in Windows debug build */ + fprintf(stderr, "Fatal Python error: %s\n", msg); + fflush(stderr); /* it helps in Windows debug build */ #ifdef MS_WINDOWS - { - size_t len = strlen(msg); - WCHAR* buffer; - size_t i; + { + size_t len = strlen(msg); + WCHAR* buffer; + size_t i; - /* Convert the message to wchar_t. This uses a simple one-to-one - conversion, assuming that the this error message actually uses ASCII - only. If this ceases to be true, we will have to convert. */ - buffer = alloca( (len+1) * (sizeof *buffer)); - for( i=0; i<=len; ++i) - buffer[i] = msg[i]; - OutputDebugStringW(L"Fatal Python error: "); - OutputDebugStringW(buffer); - OutputDebugStringW(L"\n"); - } + /* Convert the message to wchar_t. This uses a simple one-to-one + conversion, assuming that the this error message actually uses ASCII + only. If this ceases to be true, we will have to convert. 
*/ + buffer = alloca( (len+1) * (sizeof *buffer)); + for( i=0; i<=len; ++i) + buffer[i] = msg[i]; + OutputDebugStringW(L"Fatal Python error: "); + OutputDebugStringW(buffer); + OutputDebugStringW(L"\n"); + } #ifdef _DEBUG - DebugBreak(); + DebugBreak(); #endif #endif /* MS_WINDOWS */ - abort(); + abort(); } diff --git a/pypy/module/cpyext/src/stringobject.c b/pypy/module/cpyext/src/stringobject.c --- a/pypy/module/cpyext/src/stringobject.c +++ b/pypy/module/cpyext/src/stringobject.c @@ -4,246 +4,247 @@ PyObject * PyString_FromFormatV(const char *format, va_list vargs) { - va_list count; - Py_ssize_t n = 0; - const char* f; - char *s; - PyObject* string; + va_list count; + Py_ssize_t n = 0; + const char* f; + char *s; + PyObject* string; #ifdef VA_LIST_IS_ARRAY - Py_MEMCPY(count, vargs, sizeof(va_list)); + Py_MEMCPY(count, vargs, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(count, vargs); + __va_copy(count, vargs); #else - count = vargs; + count = vargs; #endif #endif - /* step 1: figure out how large a buffer we need */ - for (f = format; *f; f++) { - if (*f == '%') { + /* step 1: figure out how large a buffer we need */ + for (f = format; *f; f++) { + if (*f == '%') { #ifdef HAVE_LONG_LONG - int longlongflag = 0; + int longlongflag = 0; #endif - const char* p = f; - while (*++f && *f != '%' && !isalpha(Py_CHARMASK(*f))) - ; + const char* p = f; + while (*++f && *f != '%' && !isalpha(Py_CHARMASK(*f))) + ; - /* skip the 'l' or 'z' in {%ld, %zd, %lu, %zu} since - * they don't affect the amount of space we reserve. - */ - if (*f == 'l') { - if (f[1] == 'd' || f[1] == 'u') { - ++f; - } + /* skip the 'l' or 'z' in {%ld, %zd, %lu, %zu} since + * they don't affect the amount of space we reserve. 
+ */ + if (*f == 'l') { + if (f[1] == 'd' || f[1] == 'u') { + ++f; + } #ifdef HAVE_LONG_LONG - else if (f[1] == 'l' && - (f[2] == 'd' || f[2] == 'u')) { - longlongflag = 1; - f += 2; - } + else if (f[1] == 'l' && + (f[2] == 'd' || f[2] == 'u')) { + longlongflag = 1; + f += 2; + } #endif - } - else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { - ++f; - } + } + else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { + ++f; + } - switch (*f) { - case 'c': - (void)va_arg(count, int); - /* fall through... */ - case '%': - n++; - break; - case 'd': case 'u': case 'i': case 'x': - (void) va_arg(count, int); + switch (*f) { + case 'c': + (void)va_arg(count, int); + /* fall through... */ + case '%': + n++; + break; + case 'd': case 'u': case 'i': case 'x': + (void) va_arg(count, int); #ifdef HAVE_LONG_LONG - /* Need at most - ceil(log10(256)*SIZEOF_LONG_LONG) digits, - plus 1 for the sign. 53/22 is an upper - bound for log10(256). */ - if (longlongflag) - n += 2 + (SIZEOF_LONG_LONG*53-1) / 22; - else + /* Need at most + ceil(log10(256)*SIZEOF_LONG_LONG) digits, + plus 1 for the sign. 53/22 is an upper + bound for log10(256). */ + if (longlongflag) + n += 2 + (SIZEOF_LONG_LONG*53-1) / 22; + else #endif - /* 20 bytes is enough to hold a 64-bit - integer. Decimal takes the most - space. This isn't enough for - octal. */ - n += 20; + /* 20 bytes is enough to hold a 64-bit + integer. Decimal takes the most + space. This isn't enough for + octal. */ + n += 20; - break; - case 's': - s = va_arg(count, char*); - n += strlen(s); - break; - case 'p': - (void) va_arg(count, int); - /* maximum 64-bit pointer representation: - * 0xffffffffffffffff - * so 19 characters is enough. - * XXX I count 18 -- what's the extra for? - */ - n += 19; - break; - default: - /* if we stumble upon an unknown - formatting code, copy the rest of - the format string to the output - string. 
(we cannot just skip the - code, since there's no way to know - what's in the argument list) */ - n += strlen(p); - goto expand; - } - } else - n++; - } + break; + case 's': + s = va_arg(count, char*); + n += strlen(s); + break; + case 'p': + (void) va_arg(count, int); + /* maximum 64-bit pointer representation: + * 0xffffffffffffffff + * so 19 characters is enough. + * XXX I count 18 -- what's the extra for? + */ + n += 19; + break; + default: + /* if we stumble upon an unknown + formatting code, copy the rest of + the format string to the output + string. (we cannot just skip the + code, since there's no way to know + what's in the argument list) */ + n += strlen(p); + goto expand; + } + } else + n++; + } expand: - /* step 2: fill the buffer */ - /* Since we've analyzed how much space we need for the worst case, - use sprintf directly instead of the slower PyOS_snprintf. */ - string = PyString_FromStringAndSize(NULL, n); - if (!string) - return NULL; + /* step 2: fill the buffer */ + /* Since we've analyzed how much space we need for the worst case, + use sprintf directly instead of the slower PyOS_snprintf. */ + string = PyString_FromStringAndSize(NULL, n); + if (!string) + return NULL; - s = PyString_AsString(string); + s = PyString_AsString(string); - for (f = format; *f; f++) { - if (*f == '%') { - const char* p = f++; - Py_ssize_t i; - int longflag = 0; + for (f = format; *f; f++) { + if (*f == '%') { + const char* p = f++; + Py_ssize_t i; + int longflag = 0; #ifdef HAVE_LONG_LONG - int longlongflag = 0; + int longlongflag = 0; #endif - int size_tflag = 0; - /* parse the width.precision part (we're only - interested in the precision value, if any) */ - n = 0; - while (isdigit(Py_CHARMASK(*f))) - n = (n*10) + *f++ - '0'; - if (*f == '.') { - f++; - n = 0; - while (isdigit(Py_CHARMASK(*f))) - n = (n*10) + *f++ - '0'; - } - while (*f && *f != '%' && !isalpha(Py_CHARMASK(*f))) - f++; - /* Handle %ld, %lu, %lld and %llu. 
*/ - if (*f == 'l') { - if (f[1] == 'd' || f[1] == 'u') { - longflag = 1; - ++f; - } + int size_tflag = 0; + /* parse the width.precision part (we're only + interested in the precision value, if any) */ + n = 0; + while (isdigit(Py_CHARMASK(*f))) + n = (n*10) + *f++ - '0'; + if (*f == '.') { + f++; + n = 0; + while (isdigit(Py_CHARMASK(*f))) + n = (n*10) + *f++ - '0'; + } + while (*f && *f != '%' && !isalpha(Py_CHARMASK(*f))) + f++; + /* Handle %ld, %lu, %lld and %llu. */ + if (*f == 'l') { + if (f[1] == 'd' || f[1] == 'u') { + longflag = 1; + ++f; + } #ifdef HAVE_LONG_LONG - else if (f[1] == 'l' && - (f[2] == 'd' || f[2] == 'u')) { - longlongflag = 1; - f += 2; - } + else if (f[1] == 'l' && + (f[2] == 'd' || f[2] == 'u')) { + longlongflag = 1; + f += 2; + } #endif - } - /* handle the size_t flag. */ - else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { - size_tflag = 1; - ++f; - } + } + /* handle the size_t flag. */ + else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { + size_tflag = 1; + ++f; + } - switch (*f) { - case 'c': - *s++ = va_arg(vargs, int); - break; - case 'd': - if (longflag) - sprintf(s, "%ld", va_arg(vargs, long)); + switch (*f) { + case 'c': + *s++ = va_arg(vargs, int); + break; + case 'd': + if (longflag) + sprintf(s, "%ld", va_arg(vargs, long)); #ifdef HAVE_LONG_LONG - else if (longlongflag) - sprintf(s, "%" PY_FORMAT_LONG_LONG "d", - va_arg(vargs, PY_LONG_LONG)); + else if (longlongflag) + sprintf(s, "%" PY_FORMAT_LONG_LONG "d", + va_arg(vargs, PY_LONG_LONG)); #endif - else if (size_tflag) - sprintf(s, "%" PY_FORMAT_SIZE_T "d", - va_arg(vargs, Py_ssize_t)); - else - sprintf(s, "%d", va_arg(vargs, int)); - s += strlen(s); - break; - case 'u': - if (longflag) - sprintf(s, "%lu", - va_arg(vargs, unsigned long)); + else if (size_tflag) + sprintf(s, "%" PY_FORMAT_SIZE_T "d", + va_arg(vargs, Py_ssize_t)); + else + sprintf(s, "%d", va_arg(vargs, int)); + s += strlen(s); + break; + case 'u': + if (longflag) + sprintf(s, "%lu", + va_arg(vargs, 
unsigned long)); #ifdef HAVE_LONG_LONG - else if (longlongflag) - sprintf(s, "%" PY_FORMAT_LONG_LONG "u", - va_arg(vargs, PY_LONG_LONG)); + else if (longlongflag) + sprintf(s, "%" PY_FORMAT_LONG_LONG "u", + va_arg(vargs, PY_LONG_LONG)); #endif - else if (size_tflag) - sprintf(s, "%" PY_FORMAT_SIZE_T "u", - va_arg(vargs, size_t)); - else - sprintf(s, "%u", - va_arg(vargs, unsigned int)); - s += strlen(s); - break; - case 'i': - sprintf(s, "%i", va_arg(vargs, int)); - s += strlen(s); - break; - case 'x': - sprintf(s, "%x", va_arg(vargs, int)); - s += strlen(s); - break; - case 's': - p = va_arg(vargs, char*); - i = strlen(p); - if (n > 0 && i > n) - i = n; - Py_MEMCPY(s, p, i); - s += i; - break; - case 'p': - sprintf(s, "%p", va_arg(vargs, void*)); - /* %p is ill-defined: ensure leading 0x. */ - if (s[1] == 'X') - s[1] = 'x'; - else if (s[1] != 'x') { - memmove(s+2, s, strlen(s)+1); - s[0] = '0'; - s[1] = 'x'; - } - s += strlen(s); - break; - case '%': - *s++ = '%'; - break; - default: - strcpy(s, p); - s += strlen(s); - goto end; - } - } else - *s++ = *f; - } + else if (size_tflag) + sprintf(s, "%" PY_FORMAT_SIZE_T "u", + va_arg(vargs, size_t)); + else + sprintf(s, "%u", + va_arg(vargs, unsigned int)); + s += strlen(s); + break; + case 'i': + sprintf(s, "%i", va_arg(vargs, int)); + s += strlen(s); + break; + case 'x': + sprintf(s, "%x", va_arg(vargs, int)); + s += strlen(s); + break; + case 's': + p = va_arg(vargs, char*); + i = strlen(p); + if (n > 0 && i > n) + i = n; + Py_MEMCPY(s, p, i); + s += i; + break; + case 'p': + sprintf(s, "%p", va_arg(vargs, void*)); + /* %p is ill-defined: ensure leading 0x. 
*/ + if (s[1] == 'X') + s[1] = 'x'; + else if (s[1] != 'x') { + memmove(s+2, s, strlen(s)+1); + s[0] = '0'; + s[1] = 'x'; + } + s += strlen(s); + break; + case '%': + *s++ = '%'; + break; + default: + strcpy(s, p); + s += strlen(s); + goto end; + } + } else + *s++ = *f; + } end: - _PyString_Resize(&string, s - PyString_AS_STRING(string)); - return string; + if (_PyString_Resize(&string, s - PyString_AS_STRING(string))) + return NULL; + return string; } PyObject * PyString_FromFormat(const char *format, ...) { - PyObject* ret; - va_list vargs; + PyObject* ret; + va_list vargs; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, format); + va_start(vargs, format); #else - va_start(vargs); + va_start(vargs); #endif - ret = PyString_FromFormatV(format, vargs); - va_end(vargs); - return ret; + ret = PyString_FromFormatV(format, vargs); + va_end(vargs); + return ret; } diff --git a/pypy/module/cpyext/src/structseq.c b/pypy/module/cpyext/src/structseq.c --- a/pypy/module/cpyext/src/structseq.c +++ b/pypy/module/cpyext/src/structseq.c @@ -175,32 +175,33 @@ if (min_len != max_len) { if (len < min_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes an at least %zd-sequence (%zd-sequence given)", - type->tp_name, min_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes an at least %zd-sequence (%zd-sequence given)", + type->tp_name, min_len, len); + Py_DECREF(arg); + return NULL; } if (len > max_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes an at most %zd-sequence (%zd-sequence given)", - type->tp_name, max_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes an at most %zd-sequence (%zd-sequence given)", + type->tp_name, max_len, len); + Py_DECREF(arg); + return NULL; } } else { if (len != min_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes a %zd-sequence (%zd-sequence given)", - type->tp_name, min_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes a %zd-sequence (%zd-sequence given)", + type->tp_name, min_len, len); + 
Py_DECREF(arg); + return NULL; } } res = (PyStructSequence*) PyStructSequence_New(type); if (res == NULL) { + Py_DECREF(arg); return NULL; } for (i = 0; i < len; ++i) { diff --git a/pypy/module/cpyext/src/sysmodule.c b/pypy/module/cpyext/src/sysmodule.c --- a/pypy/module/cpyext/src/sysmodule.c +++ b/pypy/module/cpyext/src/sysmodule.c @@ -100,4 +100,3 @@ sys_write("stderr", stderr, format, va); va_end(va); } - diff --git a/pypy/module/cpyext/src/varargwrapper.c b/pypy/module/cpyext/src/varargwrapper.c --- a/pypy/module/cpyext/src/varargwrapper.c +++ b/pypy/module/cpyext/src/varargwrapper.c @@ -1,21 +1,25 @@ #include #include -PyObject * PyTuple_Pack(Py_ssize_t size, ...) +PyObject * +PyTuple_Pack(Py_ssize_t n, ...) { - va_list ap; - PyObject *cur, *tuple; - int i; + Py_ssize_t i; + PyObject *o; + PyObject *result; + va_list vargs; - tuple = PyTuple_New(size); - va_start(ap, size); - for (i = 0; i < size; i++) { - cur = va_arg(ap, PyObject*); - Py_INCREF(cur); - if (PyTuple_SetItem(tuple, i, cur) < 0) + va_start(vargs, n); + result = PyTuple_New(n); + if (result == NULL) + return NULL; + for (i = 0; i < n; i++) { + o = va_arg(vargs, PyObject *); + Py_INCREF(o); + if (PyTuple_SetItem(result, i, o) < 0) return NULL; } - va_end(ap); - return tuple; + va_end(vargs); + return result; } diff --git a/pypy/module/cpyext/stringobject.py b/pypy/module/cpyext/stringobject.py --- a/pypy/module/cpyext/stringobject.py +++ b/pypy/module/cpyext/stringobject.py @@ -294,6 +294,26 @@ w_errors = space.wrap(rffi.charp2str(errors)) return space.call_method(w_str, 'encode', w_encoding, w_errors) + at cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject) +def PyString_AsDecodedObject(space, w_str, encoding, errors): + """Decode a string object by passing it to the codec registered + for encoding and return the result as Python object. encoding and + errors have the same meaning as the parameters of the same name in + the string encode() method. 
The codec to be used is looked up + using the Python codec registry. Return NULL if an exception was + raised by the codec. + + This function is not available in 3.x and does not have a PyBytes alias.""" + if not PyString_Check(space, w_str): + PyErr_BadArgument(space) + + w_encoding = w_errors = space.w_None + if encoding: + w_encoding = space.wrap(rffi.charp2str(encoding)) + if errors: + w_errors = space.wrap(rffi.charp2str(errors)) + return space.call_method(w_str, "decode", w_encoding, w_errors) + @cpython_api([PyObject, PyObject], PyObject) def _PyString_Join(space, w_sep, w_seq): return space.call_method(w_sep, 'join', w_seq) diff --git a/pypy/module/cpyext/structmember.py b/pypy/module/cpyext/structmember.py --- a/pypy/module/cpyext/structmember.py +++ b/pypy/module/cpyext/structmember.py @@ -10,7 +10,7 @@ PyString_FromString, PyString_FromStringAndSize) from pypy.module.cpyext.floatobject import PyFloat_AsDouble from pypy.module.cpyext.longobject import ( - PyLong_AsLongLong, PyLong_AsUnsignedLongLong) + PyLong_AsLongLong, PyLong_AsUnsignedLongLong, PyLong_AsSsize_t) from pypy.module.cpyext.typeobjectdefs import PyMemberDef from pypy.rlib.unroll import unrolling_iterable @@ -28,6 +28,7 @@ (T_DOUBLE, rffi.DOUBLE, PyFloat_AsDouble), (T_LONGLONG, rffi.LONGLONG, PyLong_AsLongLong), (T_ULONGLONG, rffi.ULONGLONG, PyLong_AsUnsignedLongLong), + (T_PYSSIZET, rffi.SSIZE_T, PyLong_AsSsize_t), ]) diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -18,6 +18,7 @@ T_OBJECT_EX = 16 T_LONGLONG = 17 T_ULONGLONG = 18 +T_PYSSIZET = 19 READONLY = RO = 1 READ_RESTRICTED = 2 diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1405,24 +1405,6 @@ """ raise NotImplementedError - at cpython_api([PyObject, Py_ssize_t, Py_ssize_t], PyObject) -def 
PyList_GetSlice(space, list, low, high): - """Return a list of the objects in list containing the objects between low - and high. Return NULL and set an exception if unsuccessful. Analogous - to list[low:high]. Negative indices, as when slicing from Python, are not - supported. - - This function used an int for low and high. This might - require changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - -@cpython_api([Py_ssize_t], PyObject) -def PyLong_FromSsize_t(space, v): - """Return a new PyLongObject object from a C Py_ssize_t, or - NULL on failure. - """ - raise NotImplementedError - @cpython_api([rffi.SIZE_T], PyObject) def PyLong_FromSize_t(space, v): """Return a new PyLongObject object from a C size_t, or @@ -1442,14 +1424,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError -@cpython_api([PyObject], Py_ssize_t, error=-1) -def PyLong_AsSsize_t(space, pylong): - """Return a C Py_ssize_t representation of the contents of pylong. If - pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised - and -1 will be returned. - """ - raise NotImplementedError - @cpython_api([PyObject, rffi.CCHARP], rffi.INT_real, error=-1) def PyMapping_DelItemString(space, o, key): """Remove the mapping for object key from the object o. Return -1 on @@ -1606,15 +1580,6 @@ for PyObject_Str().""" raise NotImplementedError -@cpython_api([PyObject], lltype.Signed, error=-1) -def PyObject_HashNotImplemented(space, o): - """Set a TypeError indicating that type(o) is not hashable and return -1. - This function receives special treatment when stored in a tp_hash slot, - allowing a type to explicitly indicate to the interpreter that it is not - hashable. 
- """ - raise NotImplementedError - @cpython_api([], PyFrameObject) def PyEval_GetFrame(space): """Return the current thread state's frame, which is NULL if no frame is @@ -1737,17 +1702,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError -@cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject) -def PyString_AsDecodedObject(space, str, encoding, errors): - """Decode a string object by passing it to the codec registered for encoding and - return the result as Python object. encoding and errors have the same - meaning as the parameters of the same name in the string encode() method. - The codec to be used is looked up using the Python codec registry. Return NULL - if an exception was raised by the codec. - - This function is not available in 3.x and does not have a PyBytes alias.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_Encode(space, s, size, encoding, errors): """Encode the char buffer of the given size by passing it to the codec @@ -2011,35 +1965,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError -@cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) -def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): - """Decode length bytes from a UTF-32 encoded buffer string and return the - corresponding Unicode object. errors (if non-NULL) defines the error - handling. It defaults to "strict". - - If byteorder is non-NULL, the decoder starts decoding using the given byte - order: - - *byteorder == -1: little endian - *byteorder == 0: native order - *byteorder == 1: big endian - - If *byteorder is zero, and the first four bytes of the input data are a - byte order mark (BOM), the decoder switches to this byte order and the BOM is - not copied into the resulting Unicode string. If *byteorder is -1 or - 1, any byte order mark is copied to the output. 
- - After completion, *byteorder is set to the current byte order at the end - of input data. - - In a narrow build codepoints outside the BMP will be decoded as surrogate pairs. - - If byteorder is NULL, the codec starts in native order mode. - - Return NULL if an exception was raised by the codec. - """ - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP, Py_ssize_t], PyObject) def PyUnicode_DecodeUTF32Stateful(space, s, size, errors, byteorder, consumed): """If consumed is NULL, behave like PyUnicode_DecodeUTF32(). If diff --git a/pypy/module/cpyext/test/_sre.c b/pypy/module/cpyext/test/_sre.c --- a/pypy/module/cpyext/test/_sre.c +++ b/pypy/module/cpyext/test/_sre.c @@ -81,9 +81,6 @@ #define PyObject_DEL(op) PyMem_DEL((op)) #endif -#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) -#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) - /* -------------------------------------------------------------------- */ #if defined(_MSC_VER) @@ -1689,7 +1686,7 @@ if (PyUnicode_Check(string)) { /* unicode strings doesn't always support the buffer interface */ ptr = (void*) PyUnicode_AS_DATA(string); - bytes = PyUnicode_GET_DATA_SIZE(string); + /* bytes = PyUnicode_GET_DATA_SIZE(string); */ size = PyUnicode_GET_SIZE(string); charsize = sizeof(Py_UNICODE); @@ -2601,46 +2598,22 @@ {NULL, NULL} }; -static PyObject* -pattern_getattr(PatternObject* self, char* name) -{ - PyObject* res; - - res = Py_FindMethod(pattern_methods, (PyObject*) self, name); - - if (res) - return res; - - PyErr_Clear(); - - /* attributes */ - if (!strcmp(name, "pattern")) { - Py_INCREF(self->pattern); - return self->pattern; - } - - if (!strcmp(name, "flags")) - return Py_BuildValue("i", self->flags); - - if (!strcmp(name, "groups")) - return Py_BuildValue("i", self->groups); - - if (!strcmp(name, "groupindex") && self->groupindex) { - Py_INCREF(self->groupindex); - return self->groupindex; - } - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; -} 
+#define PAT_OFF(x) offsetof(PatternObject, x) +static PyMemberDef pattern_members[] = { + {"pattern", T_OBJECT, PAT_OFF(pattern), READONLY}, + {"flags", T_INT, PAT_OFF(flags), READONLY}, + {"groups", T_PYSSIZET, PAT_OFF(groups), READONLY}, + {"groupindex", T_OBJECT, PAT_OFF(groupindex), READONLY}, + {NULL} /* Sentinel */ +}; statichere PyTypeObject Pattern_Type = { PyObject_HEAD_INIT(NULL) 0, "_" SRE_MODULE ".SRE_Pattern", sizeof(PatternObject), sizeof(SRE_CODE), (destructor)pattern_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)pattern_getattr, /*tp_getattr*/ + 0, /* tp_print */ + 0, /* tp_getattrn */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ @@ -2653,12 +2626,16 @@ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ - Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ pattern_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(PatternObject, weakreflist), /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + pattern_methods, /* tp_methods */ + pattern_members, /* tp_members */ }; static int _validate(PatternObject *self); /* Forward */ @@ -2767,7 +2744,7 @@ #if defined(VVERBOSE) #define VTRACE(v) printf v #else -#define VTRACE(v) +#define VTRACE(v) do {} while(0) /* do nothing */ #endif /* Report failure */ @@ -2970,13 +2947,13 @@ <1=skip> <2=flags> <3=min> <4=max>; If SRE_INFO_PREFIX or SRE_INFO_CHARSET is in the flags, more follows. 
*/ - SRE_CODE flags, min, max, i; + SRE_CODE flags, i; SRE_CODE *newcode; GET_SKIP; newcode = code+skip-1; GET_ARG; flags = arg; - GET_ARG; min = arg; - GET_ARG; max = arg; + GET_ARG; /* min */ + GET_ARG; /* max */ /* Check that only valid flags are present */ if ((flags & ~(SRE_INFO_PREFIX | SRE_INFO_LITERAL | @@ -2992,9 +2969,9 @@ FAIL; /* Validate the prefix */ if (flags & SRE_INFO_PREFIX) { - SRE_CODE prefix_len, prefix_skip; + SRE_CODE prefix_len; GET_ARG; prefix_len = arg; - GET_ARG; prefix_skip = arg; + GET_ARG; /* prefix skip */ /* Here comes the prefix string */ if (code+prefix_len < code || code+prefix_len > newcode) FAIL; @@ -3565,7 +3542,7 @@ #endif } -static PyMethodDef match_methods[] = { +static struct PyMethodDef match_methods[] = { {"group", (PyCFunction) match_group, METH_VARARGS}, {"start", (PyCFunction) match_start, METH_VARARGS}, {"end", (PyCFunction) match_end, METH_VARARGS}, @@ -3578,80 +3555,90 @@ {NULL, NULL} }; -static PyObject* -match_getattr(MatchObject* self, char* name) +static PyObject * +match_lastindex_get(MatchObject *self) { - PyObject* res; - - res = Py_FindMethod(match_methods, (PyObject*) self, name); - if (res) - return res; - - PyErr_Clear(); - - if (!strcmp(name, "lastindex")) { - if (self->lastindex >= 0) - return Py_BuildValue("i", self->lastindex); - Py_INCREF(Py_None); - return Py_None; + if (self->lastindex >= 0) + return Py_BuildValue("i", self->lastindex); + Py_INCREF(Py_None); + return Py_None; +} + +static PyObject * +match_lastgroup_get(MatchObject *self) +{ + if (self->pattern->indexgroup && self->lastindex >= 0) { + PyObject* result = PySequence_GetItem( + self->pattern->indexgroup, self->lastindex + ); + if (result) + return result; + PyErr_Clear(); } - - if (!strcmp(name, "lastgroup")) { - if (self->pattern->indexgroup && self->lastindex >= 0) { - PyObject* result = PySequence_GetItem( - self->pattern->indexgroup, self->lastindex - ); - if (result) - return result; - PyErr_Clear(); - } - Py_INCREF(Py_None); - 
return Py_None; - } - - if (!strcmp(name, "string")) { - if (self->string) { - Py_INCREF(self->string); - return self->string; From notifications-noreply at bitbucket.org Wed May 2 16:02:21 2012 From: notifications-noreply at bitbucket.org (Bitbucket) Date: Wed, 02 May 2012 14:02:21 -0000 Subject: [pypy-commit] Notification: pypy Message-ID: <20120502140221.23254.92090@bitbucket13.managed.contegix.com> You have received a notification from d3m3vilurr. Hi, I forked pypy. My fork is at https://bitbucket.org/d3m3vilurr/pypy. -- Disable notifications at https://bitbucket.org/account/notifications/ From noreply at buildbot.pypy.org Wed May 2 16:09:38 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 2 May 2012 16:09:38 +0200 (CEST) Subject: [pypy-commit] pypy py3k: py3k-ify by killing the u'' string prefix Message-ID: <20120502140938.5EAD182F50@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r54871:88c5fae18b02 Date: 2012-05-02 14:59 +0200 http://bitbucket.org/pypy/pypy/changeset/88c5fae18b02/ Log: py3k-ify by killing the u'' string prefix diff --git a/pypy/module/_sre/test/test_app_sre.py b/pypy/module/_sre/test/test_app_sre.py --- a/pypy/module/_sre/test/test_app_sre.py +++ b/pypy/module/_sre/test/test_app_sre.py @@ -327,17 +327,17 @@ def test_getlower_no_flags(self): UPPER_AE = "\xc4" s.assert_lower_equal([("a", "a"), ("A", "a"), (UPPER_AE, UPPER_AE), - (u"\u00c4", u"\u00c4"), (u"\u4444", u"\u4444")], 0) + ("\u00c4", "\u00c4"), ("\u4444", "\u4444")], 0) def test_getlower_locale(self): import locale, sre_constants UPPER_AE = "\xc4" LOWER_AE = "\xe4" - UPPER_PI = u"\u03a0" + UPPER_PI = "\u03a0" try: locale.setlocale(locale.LC_ALL, "de_DE") s.assert_lower_equal([("a", "a"), ("A", "a"), (UPPER_AE, LOWER_AE), - (u"\u00c4", u"\u00e4"), (UPPER_PI, UPPER_PI)], + ("\u00c4", "\u00e4"), (UPPER_PI, UPPER_PI)], sre_constants.SRE_FLAG_LOCALE) except locale.Error: # skip test @@ -347,11 +347,11 @@ import sre_constants UPPER_AE = "\xc4" 
LOWER_AE = "\xe4" - UPPER_PI = u"\u03a0" - LOWER_PI = u"\u03c0" + UPPER_PI = "\u03a0" + LOWER_PI = "\u03c0" s.assert_lower_equal([("a", "a"), ("A", "a"), (UPPER_AE, LOWER_AE), - (u"\u00c4", u"\u00e4"), (UPPER_PI, LOWER_PI), - (u"\u4444", u"\u4444")], sre_constants.SRE_FLAG_UNICODE) + ("\u00c4", "\u00e4"), (UPPER_PI, LOWER_PI), + ("\u4444", "\u4444")], sre_constants.SRE_FLAG_UNICODE) class AppTestSimpleSearches: @@ -666,15 +666,15 @@ skip("locale error") def test_at_uni_boundary(self): - UPPER_PI = u"\u03a0" - LOWER_PI = u"\u03c0" + UPPER_PI = "\u03a0" + LOWER_PI = "\u03c0" opcodes = s.encode_literal("bl") + [s.OPCODES["any"], s.OPCODES["at"], s.ATCODES["at_uni_boundary"], s.OPCODES["success"]] - s.assert_match(opcodes, ["bla ha", u"bl%s ja" % UPPER_PI]) - s.assert_no_match(opcodes, [u"bla%s" % LOWER_PI]) + s.assert_match(opcodes, ["bla ha", "bl%s ja" % UPPER_PI]) + s.assert_no_match(opcodes, ["bla%s" % LOWER_PI]) opcodes = s.encode_literal("bl") + [s.OPCODES["any"], s.OPCODES["at"], s.ATCODES["at_uni_non_boundary"], s.OPCODES["success"]] - s.assert_match(opcodes, ["blaha", u"bl%sja" % UPPER_PI]) + s.assert_match(opcodes, ["blaha", "bl%sja" % UPPER_PI]) def test_category_loc_word(self): import locale @@ -685,11 +685,11 @@ opcodes2 = s.encode_literal("b") \ + [s.OPCODES["category"], s.CHCODES["category_loc_not_word"], s.OPCODES["success"]] s.assert_no_match(opcodes1, "b\xFC") - s.assert_no_match(opcodes1, u"b\u00FC") + s.assert_no_match(opcodes1, "b\u00FC") s.assert_match(opcodes2, "b\xFC") locale.setlocale(locale.LC_ALL, "de_DE") s.assert_match(opcodes1, "b\xFC") - s.assert_no_match(opcodes1, u"b\u00FC") + s.assert_no_match(opcodes1, "b\u00FC") s.assert_no_match(opcodes2, "b\xFC") s.void_locale() except locale.Error: @@ -777,10 +777,10 @@ s.assert_no_match(opcodes, ["bb", "bu"]) def test_not_literal_ignore(self): - UPPER_PI = u"\u03a0" + UPPER_PI = "\u03a0" opcodes = s.encode_literal("b") \ + [s.OPCODES["not_literal_ignore"], ord("a"), s.OPCODES["success"]] - 
s.assert_match(opcodes, ["bb", "bu", u"b%s" % UPPER_PI]) + s.assert_match(opcodes, ["bb", "bu", "b%s" % UPPER_PI]) s.assert_no_match(opcodes, ["ba", "bA"]) def test_in_ignore(self): From noreply at buildbot.pypy.org Wed May 2 16:36:57 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Wed, 2 May 2012 16:36:57 +0200 (CEST) Subject: [pypy-commit] pypy stdlib-unification/py3k: merge from py3k Message-ID: <20120502143657.561B282F50@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: stdlib-unification/py3k Changeset: r54872:acf5143221d8 Date: 2012-05-02 16:24 +0200 http://bitbucket.org/pypy/pypy/changeset/acf5143221d8/ Log: merge from py3k diff too long, truncating to 10000 out of 17244 lines diff --git a/lib-python/2.7/test/test_peepholer.py b/lib-python/2.7/test/test_peepholer.py --- a/lib-python/2.7/test/test_peepholer.py +++ b/lib-python/2.7/test/test_peepholer.py @@ -145,12 +145,15 @@ def test_binary_subscr_on_unicode(self): # valid code get optimized - asm = dis_single('u"foo"[0]') - self.assertIn("(u'f')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) - asm = dis_single('u"\u0061\uffff"[1]') - self.assertIn("(u'\\uffff')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) + # XXX for now we always disable this optimization + # XXX see CPython's issue5057 + if 0: + asm = dis_single('u"foo"[0]') + self.assertIn("(u'f')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + asm = dis_single('u"\u0061\uffff"[1]') + self.assertIn("(u'\\uffff')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) # invalid code doesn't get optimized # out of range diff --git a/lib_pypy/_ctypes/builtin.py b/lib_pypy/_ctypes/builtin.py --- a/lib_pypy/_ctypes/builtin.py +++ b/lib_pypy/_ctypes/builtin.py @@ -3,7 +3,8 @@ try: from thread import _local as local except ImportError: - local = object # no threads + class local(object): # no threads + pass class ConvMode: encoding = 'ascii' diff --git a/lib_pypy/_ctypes_test.py b/lib_pypy/_ctypes_test.py --- 
a/lib_pypy/_ctypes_test.py +++ b/lib_pypy/_ctypes_test.py @@ -21,7 +21,7 @@ # Compile .c file include_dir = os.path.join(thisdir, '..', 'include') if sys.platform == 'win32': - ccflags = [] + ccflags = ['-D_CRT_SECURE_NO_WARNINGS'] else: ccflags = ['-fPIC'] res = compiler.compile([os.path.join(thisdir, '_ctypes_test.c')], @@ -34,6 +34,13 @@ if sys.platform == 'win32': # XXX libpypy-c.lib is currently not installed automatically library = os.path.join(thisdir, '..', 'include', 'libpypy-c') + if not os.path.exists(library + '.lib'): + #For a nightly build + library = os.path.join(thisdir, '..', 'include', 'python27') + if not os.path.exists(library + '.lib'): + # For a local translation + library = os.path.join(thisdir, '..', 'pypy', 'translator', + 'goal', 'libpypy-c') libraries = [library, 'oleaut32'] extra_ldargs = ['/MANIFEST'] # needed for VC10 else: diff --git a/lib_pypy/_testcapi.py b/lib_pypy/_testcapi.py --- a/lib_pypy/_testcapi.py +++ b/lib_pypy/_testcapi.py @@ -16,7 +16,7 @@ # Compile .c file include_dir = os.path.join(thisdir, '..', 'include') if sys.platform == 'win32': - ccflags = [] + ccflags = ['-D_CRT_SECURE_NO_WARNINGS'] else: ccflags = ['-fPIC', '-Wimplicit-function-declaration'] res = compiler.compile([os.path.join(thisdir, '_testcapimodule.c')], @@ -29,6 +29,13 @@ if sys.platform == 'win32': # XXX libpypy-c.lib is currently not installed automatically library = os.path.join(thisdir, '..', 'include', 'libpypy-c') + if not os.path.exists(library + '.lib'): + #For a nightly build + library = os.path.join(thisdir, '..', 'include', 'python27') + if not os.path.exists(library + '.lib'): + # For a local translation + library = os.path.join(thisdir, '..', 'pypy', 'translator', + 'goal', 'libpypy-c') libraries = [library, 'oleaut32'] extra_ldargs = ['/MANIFEST', # needed for VC10 '/EXPORT:init_testcapi'] diff --git a/lib_pypy/pyrepl/reader.py b/lib_pypy/pyrepl/reader.py --- a/lib_pypy/pyrepl/reader.py +++ b/lib_pypy/pyrepl/reader.py @@ -152,8 +152,8 @@ 
(r'\<delete>', 'delete'), (r'\<backspace>', 'backspace'), (r'\M-\<backspace>', 'backward-kill-word'), - (r'\<end>', 'end'), - (r'\<home>', 'home'), + (r'\<end>', 'end-of-line'), # was 'end' + (r'\<home>', 'beginning-of-line'), # was 'home' (r'\<f1>', 'help'), (r'\EOF', 'end'), # the entries in the terminfo database for xterms (r'\EOH', 'home'), # seem to be wrong. this is a less than ideal diff --git a/pypy/annotation/description.py b/pypy/annotation/description.py --- a/pypy/annotation/description.py +++ b/pypy/annotation/description.py @@ -229,8 +229,8 @@ return thing elif hasattr(thing, '__name__'): # mostly types and functions return thing.__name__ - elif hasattr(thing, 'name'): # mostly ClassDescs - return thing.name + elif hasattr(thing, 'name') and isinstance(thing.name, str): + return thing.name # mostly ClassDescs elif isinstance(thing, tuple): return '_'.join(map(nameof, thing)) else: diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py --- a/pypy/config/pypyoption.py +++ b/pypy/config/pypyoption.py @@ -321,10 +321,14 @@ default=False), BoolOption("getattributeshortcut", "track types that override __getattribute__", - default=False), + default=False, + # weakrefs needed, because of get_subclasses() + requires=[("translation.rweakref", True)]), BoolOption("newshortcut", "cache and shortcut calling __new__ from builtin types", - default=False), + default=False, + # weakrefs needed, because of get_subclasses() + requires=[("translation.rweakref", True)]), BoolOption("logspaceoptypes", "a instrumentation option: before exit, print the types seen by " @@ -338,7 +342,9 @@ requires=[("objspace.std.builtinshortcut", True)]), BoolOption("withidentitydict", "track types that override __hash__, __eq__ or __cmp__ and use a special dict strategy for those which do not", - default=False), + default=False, + # weakrefs needed, because of get_subclasses() + requires=[("translation.rweakref", True)]), ]), ]) diff --git a/pypy/doc/cppyy.rst b/pypy/doc/cppyy.rst new file mode 100644 --- /dev/null +++ 
b/pypy/doc/cppyy.rst @@ -0,0 +1,579 @@ +============================ +cppyy: C++ bindings for PyPy +============================ + +The cppyy module provides C++ bindings for PyPy by using the reflection +information extracted from C++ header files by means of the +`Reflex package`_. +For this to work, you have to both install Reflex and build PyPy from the +reflex-support branch. +As indicated by this being a branch, support for Reflex is still +experimental. +However, it is functional enough to put it in the hands of those who want +to give it a try. +In the medium term, cppyy will move away from Reflex and instead use +`cling`_ as its backend, which is based on `llvm`_. +Although that will change the logistics on the generation of reflection +information, it will not change the python-side interface. + +.. _`Reflex package`: http://root.cern.ch/drupal/content/reflex +.. _`cling`: http://root.cern.ch/drupal/content/cling +.. _`llvm`: http://llvm.org/ + + +Motivation +========== + +The cppyy module offers two unique features, which result in great +performance as well as better functionality and cross-language integration +than would otherwise be possible. +First, cppyy is written in RPython and therefore open to optimizations by the +JIT up until the actual point of call into C++. +This means that there are no conversions necessary between a garbage collected +and a reference counted environment, as is needed for the use of existing +extension modules written or generated for CPython. +It also means that if variables are already unboxed by the JIT, they can be +passed through directly to C++. +Second, Reflex (and cling far more so) adds dynamic features to C++, thus +greatly reducing impedance mismatches between the two languages. +In fact, Reflex is dynamic enough that you could write the runtime bindings +generation in python (as opposed to RPython) and this is used to create very +natural "pythonizations" of the bound code. 
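[Editor's note] The "pythonization" idea above can be sketched in plain python: because cppyy-bound classes are ordinary python classes, convenience behaviour can be attached to them at runtime. The ``BoundVector`` class below is a hypothetical stand-in so the snippet runs without a compiled Reflex dictionary; with cppyy, one would patch the actual bound class (e.g. a ``std::vector`` instantiation) in exactly the same way.

```python
class BoundVector(object):
    """Stand-in for a cppyy-bound C++ class (illustration only)."""
    def __init__(self, data):
        self._data = list(data)

    def size(self):
        # typical C++-style accessor exposed by a binding
        return len(self._data)


# The "pythonization": attach a natural python protocol to the bound
# class at runtime, so that len() works on every instance.
BoundVector.__len__ = lambda self: self.size()

v = BoundVector([1, 2, 3])
assert len(v) == v.size() == 3
```

The same pattern (assigning methods onto the class object after the bindings are created) is what makes the bound code feel "pythonistic" without touching the RPython layer.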
+ + +Installation +============ + +For now, the easiest way of getting the latest version of Reflex, is by +installing the ROOT package. +Besides getting the latest version of Reflex, another advantage is that with +the full ROOT package, you can also use your Reflex-bound code on `CPython`_. +`Download`_ a binary or install from `source`_. +Some Linux and Mac systems may have ROOT provided in the list of scientific +software of their packager. +A current, standalone version of Reflex should be provided at some point, +once the dependencies and general packaging have been thought out. +Also, make sure you have a version of `gccxml`_ installed, which is most +easily provided by the packager of your system. +If you read up on gccxml, you'll probably notice that it is no longer being +developed and hence will not provide C++11 support. +That's why the medium term plan is to move to `cling`_. + +.. _`Download`: http://root.cern.ch/drupal/content/downloading-root +.. _`source`: http://root.cern.ch/drupal/content/installing-root-source +.. _`gccxml`: http://www.gccxml.org + +Next, get the `PyPy sources`_, select the reflex-support branch, and build +pypy-c. +For the build to succeed, the ``$ROOTSYS`` environment variable must point to +the location of your ROOT installation:: + + $ hg clone https://bitbucket.org/pypy/pypy + $ cd pypy + $ hg up reflex-support + $ cd pypy/translator/goal + $ python translate.py -O jit --gcrootfinder=shadowstack targetpypystandalone.py --withmod-cppyy + +This will build a ``pypy-c`` that includes the cppyy module, and through that, +Reflex support. +Of course, if you already have a pre-built version of the ``pypy`` interpreter, +you can use that for the translation rather than ``python``. + +.. _`PyPy sources`: https://bitbucket.org/pypy/pypy/overview + + +Basic example +============= + +Now test with a trivial example whether all packages are properly installed +and functional. 
+First, create a C++ header file with some class in it (note that all functions +are made inline for convenience; a real-world example would of course have a +corresponding source file):: + + $ cat MyClass.h + class MyClass { + public: + MyClass(int i = -99) : m_myint(i) {} + + int GetMyInt() { return m_myint; } + void SetMyInt(int i) { m_myint = i; } + + public: + int m_myint; + }; + +Then, generate the bindings using ``genreflex`` (part of ROOT), and compile the +code:: + + $ genreflex MyClass.h + $ g++ -fPIC -rdynamic -O2 -shared -I$ROOTSYS/include MyClass_rflx.cpp -o libMyClassDict.so + +Now you're ready to use the bindings. +Since the bindings are designed to look pythonistic, it should be +straightforward:: + + $ pypy-c + >>>> import cppyy + >>>> cppyy.load_reflection_info("libMyClassDict.so") + + >>>> myinst = cppyy.gbl.MyClass(42) + >>>> print myinst.GetMyInt() + 42 + >>>> myinst.SetMyInt(33) + >>>> print myinst.m_myint + 33 + >>>> myinst.m_myint = 77 + >>>> print myinst.GetMyInt() + 77 + >>>> help(cppyy.gbl.MyClass) # shows that normal python introspection works + +That's all there is to it! 
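[Editor's note] As a mental model only, the bound class behaves much like the plain-python equivalent below; the names mirror ``MyClass.h`` above, but the real cppyy binding dispatches each call into the compiled C++ code rather than executing python method bodies.

```python
class MyClass(object):
    """Plain-python model of the behaviour the cppyy binding exposes."""
    def __init__(self, i=-99):      # same default as the C++ constructor
        self.m_myint = i            # public data member -> plain attribute

    def GetMyInt(self):
        return self.m_myint

    def SetMyInt(self, i):
        self.m_myint = i


# Mirrors the interactive session shown above.
myinst = MyClass(42)
assert myinst.GetMyInt() == 42
myinst.SetMyInt(33)
assert myinst.m_myint == 33
myinst.m_myint = 77
assert myinst.GetMyInt() == 77
```

Note how methods and the public data member are interchangeable views of the same state; that is exactly the behaviour the generated bindings provide.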
+ + +Advanced example +================ +The following snippet of C++ is very contrived, to allow showing that such +pathological code can be handled and to show how certain features play out in +practice:: + + $ cat MyAdvanced.h + #include <string> + + class Base1 { + public: + Base1(int i) : m_i(i) {} + virtual ~Base1() {} + int m_i; + }; + + class Base2 { + public: + Base2(double d) : m_d(d) {} + virtual ~Base2() {} + double m_d; + }; + + class C; + + class Derived : public virtual Base1, public virtual Base2 { + public: + Derived(const std::string& name, int i, double d) : Base1(i), Base2(d), m_name(name) {} + virtual C* gimeC() { return (C*)0; } + std::string m_name; + }; + + Base1* BaseFactory(const std::string& name, int i, double d) { + return new Derived(name, i, d); + } + +This code is still only in a header file, with all functions inline, for +convenience of the example. +If the implementations live in a separate source file or shared library, the +only change needed is to link those in when building the reflection library. + +If you were to run ``genreflex`` like above in the basic example, you will +find that not all classes of interest will be reflected, nor will be the +global factory function. +In particular, ``std::string`` will be missing, since it is not defined in +this header file, but in a header file that is included. +In practical terms, general classes such as ``std::string`` should live in a +core reflection set, but for the moment assume we want to have it in the +reflection library that we are building for this example. + +The ``genreflex`` script can be steered using a so-called `selection file`_, +which is a simple XML file specifying, either explicitly or by using a +pattern, which classes, variables, namespaces, etc. to select from the given +header file. +With the aid of a selection file, a large project can be easily managed: +simply ``#include`` all relevant headers into a single header file that is +handed to ``genreflex``. 
+Then, apply a selection file to pick up all the relevant classes. +For our purposes, the following rather straightforward selection will do +(the name ``lcgdict`` for the root is historical, but required):: + + $ cat MyAdvanced.xml + <lcgdict> + <class pattern="Base?" /> + <class name="Derived" /> + <class name="std::string" /> + <function name="BaseFactory" /> + </lcgdict> + +.. _`selection file`: http://root.cern.ch/drupal/content/generating-reflex-dictionaries + +Now the reflection info can be generated and compiled:: + + $ genreflex MyAdvanced.h --selection=MyAdvanced.xml + $ g++ -fPIC -rdynamic -O2 -shared -I$ROOTSYS/include MyAdvanced_rflx.cpp -o libAdvExDict.so + +and subsequently be used from PyPy:: + + >>>> import cppyy + >>>> cppyy.load_reflection_info("libAdvExDict.so") + + >>>> d = cppyy.gbl.BaseFactory("name", 42, 3.14) + >>>> type(d) + <class '__main__.Derived'> + >>>> isinstance(d, cppyy.gbl.Base1) + True + >>>> isinstance(d, cppyy.gbl.Base2) + True + >>>> d.m_i, d.m_d + (42, 3.14) + >>>> d.m_name == "name" + True + >>>> + +Again, that's all there is to it! + +A couple of things to note, though. +If you look back at the C++ definition of the ``BaseFactory`` function, +you will see that it declares the return type to be a ``Base1``, yet the +bindings return an object of the actual type ``Derived``? +This choice is made for a couple of reasons. +First, it makes method dispatching easier: if bound objects are always their +most derived type, then it is easy to calculate any offsets, if necessary. +Second, it makes memory management easier: the combination of the type and +the memory address uniquely identifies an object. +That way, it can be recycled and object identity can be maintained if it is +entered as a function argument into C++ and comes back to PyPy as a return +value. +Last, but not least, casting is decidedly unpythonistic. +By always providing the most derived type known, casting becomes unnecessary. +For example, the data member of ``Base2`` is simply directly available. +Note also that the unreflected ``gimeC`` method of ``Derived`` does not +preclude its use. 
+It is only the ``gimeC`` method that is unusable as long as class ``C`` is +unknown to the system. + + +Features +======== + +The following is not meant to be an exhaustive list, since cppyy is still +under active development. +Furthermore, the intention is that every feature is as natural as possible on +the python side, so if you find something missing in the list below, simply +try it out. +It is not always possible to provide exact mapping between python and C++ +(active memory management is one such case), but by and large, if the use of a +feature does not strike you as obvious, it is more likely to simply be a bug. +That is a strong statement to make, but also a worthy goal. + +* **abstract classes**: Are represented as python classes, since they are + needed to complete the inheritance hierarchies, but will raise an exception + if an attempt is made to instantiate from them. + +* **arrays**: Supported for builtin data types only, as used from module + ``array``. + Out-of-bounds checking is limited to those cases where the size is known at + compile time (and hence part of the reflection info). + +* **builtin data types**: Map onto the expected equivalent python types, with + the caveat that there may be size differences, and thus it is possible that + exceptions are raised if an overflow is detected. + +* **casting**: Is supposed to be unnecessary. + Object pointer returns from functions provide the most derived class known + in the hierarchy of the object being returned. + This is important to preserve object identity as well as to make casting, + a pure C++ feature after all, superfluous. + +* **classes and structs**: Get mapped onto python classes, where they can be + instantiated as expected. + If classes are inner classes or live in a namespace, their naming and + location will reflect that. + +* **data members**: Public data members are represented as python properties + and provide read and write access on instances as expected. 
+ +* **default arguments**: C++ default arguments work as expected, but python + keywords are not supported. + It is technically possible to support keywords, but for the C++ interface, + the formal argument names have no meaning and are not considered part of the + API, hence it is not a good idea to use keywords. + +* **doc strings**: The doc string of a method or function contains the C++ + arguments and return types of all overloads of that name, as applicable. + +* **enums**: Are translated as ints with no further checking. + +* **functions**: Work as expected and live in their appropriate namespace + (which can be the global one, ``cppyy.gbl``). + +* **inheritance**: All combinations of inheritance on the C++ (single, + multiple, virtual) are supported in the binding. + However, new python classes can only use single inheritance from a bound C++ + class. + Multiple inheritance would introduce two "this" pointers in the binding. + This is a current, not a fundamental, limitation. + The C++ side will not see any overridden methods on the python side, as + cross-inheritance is planned but not yet supported. + +* **methods**: Are represented as python methods and work as expected. + They are first class objects and can be bound to an instance. + Virtual C++ methods work as expected. + To select a specific virtual method, do like with normal python classes + that override methods: select it from the class that you need, rather than + calling the method on the instance. + To select a specific overload, use the __dispatch__ special function, which + takes the name of the desired method and its signature (which can be + obtained from the doc string) as arguments. + +* **namespaces**: Are represented as python classes. + Namespaces are more open-ended than classes, so sometimes initial access may + result in updates as data and functions are looked up and constructed + lazily. 
+  Thus the result of ``dir()`` on a namespace should not be relied upon: it
+  only shows the already accessed members. (TODO: to be fixed by implementing
+  ``__dir__``.)
+  The global namespace is ``cppyy.gbl``.
+
+* **operator conversions**: If defined in the C++ class and a python
+  equivalent exists (i.e. all builtin integer and floating point types, as
+  well as ``bool``), it will map onto that python conversion.
+  Note that ``char*`` is mapped onto ``__str__``.
+
+* **operator overloads**: If defined in the C++ class and if a python
+  equivalent is available (not always the case, think e.g. of ``operator||``),
+  then they work as expected.
+  Special care needs to be taken for global operator overloads in C++: first,
+  make sure that they are actually reflected, especially for the global
+  overloads for ``operator==`` and ``operator!=`` of STL iterators in the case
+  of gcc.
+  Second, make sure that reflection info is loaded in the proper order, i.e.
+  that these global overloads are available before use.
+
+* **pointers**: For builtin data types, see arrays.
+  For objects, a pointer to an object and an object look the same, unless
+  the pointer is a data member.
+  In that case, assigning to the data member will cause a copy of the pointer
+  and care should be taken about the object's lifetime.
+  If a pointer is a global variable, the C++ side can replace the underlying
+  object and the python side will immediately reflect that.
+
+* **static data members**: Are represented as python property objects on the
+  class and the meta-class.
+  Both read and write access work as expected.
+
+* **static methods**: Are represented as python's ``staticmethod`` objects
+  and can be called both from the class as well as from instances.
+
+* **strings**: The std::string class is considered a builtin C++ type and
+  mixes quite well with python's str.
+  Python's str can be passed where a ``const char*`` is expected, and an str
+  will be returned if the return type is ``const char*``.
+
+* **templated classes**: Are represented in a meta-class style in python.
+  This looks a little bit confusing, but conceptually is rather natural.
+  For example, given the class ``std::vector<int>``, the meta-class part would
+  be ``std.vector`` in python.
+  Then, to get the instantiation on ``int``, do ``std.vector(int)`` and to
+  create an instance of that class, do ``std.vector(int)()``.
+  Note that templates can be built up by handing actual types to the class
+  instantiation (as done in this vector example), or by passing in the list of
+  template arguments as a string.
+  The former is a lot easier to work with if you have template instantiations
+  using classes that themselves are templates (etc.) in the arguments.
+  All classes must already exist in the loaded reflection info.
+
+* **typedefs**: Are simple python references to the actual classes to which
+  they refer.
+
+* **unary operators**: Are supported if a python equivalent exists, and if the
+  operator is defined in the C++ class.
+
+You can always find more detailed examples and see the full set of supported
+features by looking at the tests in pypy/module/cppyy/test.
+
+If a feature or reflection info is missing, this is supposed to be handled
+gracefully.
+In fact, there are unit tests explicitly for this purpose (even as their use
+becomes less interesting over time, as the number of missing features
+decreases).
+Only when a missing feature is used should there be an exception.
+For example, if no reflection info is available for a return type, then a
+class that has a method with that return type can still be used.
+Only that one specific method cannot be used.
+
+
+Templates
+=========
+
+A bit of special care needs to be taken for the use of templates.
+For a templated class to be completely available, it must be guaranteed that
+said class is fully instantiated, and hence all executable C++ code is
+generated and compiled in.
+The easiest way to fulfill that guarantee is by explicit instantiation in the
+header file that is handed to ``genreflex``.
+The following example should make that clear::
+
+    $ cat MyTemplate.h
+    #include <vector>
+
+    class MyClass {
+    public:
+        MyClass(int i = -99) : m_i(i) {}
+        MyClass(const MyClass& s) : m_i(s.m_i) {}
+        MyClass& operator=(const MyClass& s) { m_i = s.m_i; return *this; }
+        ~MyClass() {}
+        int m_i;
+    };
+
+    template class std::vector<MyClass>;
+
+If you know for certain that all symbols will be linked in from other sources,
+you can also declare the explicit template instantiation ``extern``.
+
+Unfortunately, this is not enough for gcc.
+The iterators, if they are going to be used, need to be instantiated as well,
+as do the comparison operators on those iterators, as these live in an
+internal namespace, rather than in the iterator classes.
+One way to handle this is to deal with it once in a macro, then reuse that
+macro for all ``vector`` classes.
+Thus, the header above needs this, instead of just the explicit instantiation
+of the ``vector``::
+
+    #define STLTYPES_EXPLICIT_INSTANTIATION_DECL(STLTYPE, TTYPE)                 \
+    template class std::STLTYPE< TTYPE >;                                        \
+    template class __gnu_cxx::__normal_iterator<TTYPE*, std::STLTYPE< TTYPE > >; \
+    template class __gnu_cxx::__normal_iterator<const TTYPE*, std::STLTYPE< TTYPE > >;\
+    namespace __gnu_cxx {                                                        \
+    template bool operator==(const std::STLTYPE< TTYPE >::iterator&,             \
+                             const std::STLTYPE< TTYPE >::iterator&);            \
+    template bool operator!=(const std::STLTYPE< TTYPE >::iterator&,             \
+                             const std::STLTYPE< TTYPE >::iterator&);            \
+    }
+
+    STLTYPES_EXPLICIT_INSTANTIATION_DECL(vector, MyClass)
+
+Then, still for gcc, the selection file needs to contain the full hierarchy as
+well as the global overloads for comparisons for the iterators::
+
+    $ cat MyTemplate.xml
+    <lcgdict>
+        <class pattern="std::vector<*>" />
+        <class pattern="__gnu_cxx::__normal_iterator<*>" />
+        <function name="__gnu_cxx::operator=="/>
+        <function name="__gnu_cxx::operator!="/>
+
+        <class name="MyClass" />
+    </lcgdict>
+
+Run the normal ``genreflex`` and compilation steps::
+
+    $ genreflex MyTemplate.h --selection=MyTemplate.xml
+    $ g++ -fPIC -rdynamic -O2 -shared -I$ROOTSYS/include MyTemplate_rflx.cpp -o libTemplateDict.so
+
+Note: this is a dirty corner that clearly could do with some automation,
+even if the macro already helps.
+Such automation is planned.
+In fact, in the cling world, the backend can perform the template
+instantiations and generate the reflection info on the fly, and none of the
+above will any longer be necessary.
+
+Subsequent use should be as expected.
+Note the meta-class style of "instantiating" the template::
+
+    >>>> import cppyy
+    >>>> cppyy.load_reflection_info("libTemplateDict.so")
+    >>>> std = cppyy.gbl.std
+    >>>> MyClass = cppyy.gbl.MyClass
+    >>>> v = std.vector(MyClass)()
+    >>>> v += [MyClass(1), MyClass(2), MyClass(3)]
+    >>>> for m in v:
+    ....     print m.m_i,
+    ....
+    1 2 3
+    >>>>
+
+Other templates work similarly.
+The arguments to the template instantiation can either be a string with the
+full list of arguments, or the explicit classes.
+The latter makes for easier code writing if the classes passed to the
+instantiation are themselves templates.
+
+
+The fast lane
+=============
+
+The following is an experimental feature of cppyy, and that makes it doubly
+experimental, so caveat emptor.
+With a slight modification of Reflex, it can provide function pointers for
+C++ methods, and hence allow PyPy to call those pointers directly, rather than
+calling C++ through a Reflex stub.
+This results in a rather significant speed-up.
+Mind you, the normal stub path is not exactly slow, so for now only use this
+out of curiosity or if you really need it.
+
+To install this patch of Reflex, locate the file genreflex-methptrgetter.patch
+in pypy/module/cppyy and apply it to the genreflex python scripts found in
+``$ROOTSYS/lib``::
+
+    $ cd $ROOTSYS/lib
+    $ patch -p2 < genreflex-methptrgetter.patch
+
+With this patch, ``genreflex`` will have grown the ``--with-methptrgetter``
+option.
+Use this option when running ``genreflex``, and add the
+``-Wno-pmf-conversions`` option to ``g++`` when compiling.
+The rest works the same way: the fast path will be used transparently (which
+also means that you can't actually find out whether it is in use, other than
+by running a micro-benchmark).
+
+
+CPython
+=======
+
+Most of the ideas in cppyy come originally from the `PyROOT`_ project.
+Although PyROOT does not support Reflex directly, it has an alter ego called
+"PyCintex" that, in a somewhat roundabout way, does.
+If you installed ROOT, rather than just Reflex, PyCintex should be available
+immediately if you add ``$ROOTSYS/lib`` to the ``PYTHONPATH`` environment
+variable.
+
+.. _`PyROOT`: http://root.cern.ch/drupal/content/pyroot
+
+There are a couple of minor differences between PyCintex and cppyy, most of
+them to do with naming.
+The one that you will run into directly is that PyCintex uses a function
+called ``loadDictionary`` rather than ``load_reflection_info``.
+The reason for this is that Reflex calls the shared libraries that contain
+reflection info "dictionaries."
+However, in python, the name ``dictionary`` already has a well-defined
+meaning, so a more descriptive name was chosen for cppyy.
+In addition, PyCintex requires that the names of the shared libraries so
+loaded start with "lib".
+The basic example above, rewritten for PyCintex, thus goes like this::
+
+    $ python
+    >>> import PyCintex
+    >>> PyCintex.loadDictionary("libMyClassDict.so")
+    >>> myinst = PyCintex.gbl.MyClass(42)
+    >>> print myinst.GetMyInt()
+    42
+    >>> myinst.SetMyInt(33)
+    >>> print myinst.m_myint
+    33
+    >>> myinst.m_myint = 77
+    >>> print myinst.GetMyInt()
+    77
+    >>> help(PyCintex.gbl.MyClass)   # shows that normal python introspection works
+
+Other naming differences are such things as taking an address of an object.
+In PyCintex, this is done with ``AddressOf`` whereas in cppyy the choice was
+made to follow the naming as in ``ctypes`` and hence use ``addressof``
+(PyROOT/PyCintex predate ``ctypes`` by several years, and the ROOT project
+follows camel-case, hence the differences).
+
+Of course, this is python, so if any of the naming is not to your liking, all
+you have to do is provide a wrapper script that you import instead of
+importing the ``cppyy`` or ``PyCintex`` modules directly.
+In that wrapper script you can rename methods exactly the way you need it.
+
+In the cling world, all these differences will be resolved.
diff --git a/pypy/doc/extending.rst b/pypy/doc/extending.rst
--- a/pypy/doc/extending.rst
+++ b/pypy/doc/extending.rst
@@ -23,6 +23,8 @@
 
 * Write them in RPython as mixedmodule_, using *rffi* as bindings.
 
+* Write them in C++ and bind them through Reflex_ (EXPERIMENTAL)
+
 .. _ctypes: #CTypes
 .. _\_ffi: #LibFFI
 .. _mixedmodule: #Mixed Modules
@@ -110,3 +112,59 @@
 
 XXX we should provide detailed docs about lltype and rffi, especially
 if we want people to follow that way.
+
+Reflex
+======
+
+This method is still experimental and is being exercised on a branch,
+`reflex-support`_, which adds the `cppyy`_ module.
+The method works by using the `Reflex package`_ to provide reflection
+information of the C++ code, which is then used to automatically generate
+bindings at runtime.
+From a python standpoint, there is no difference between generating bindings
+at runtime, or having them "statically" generated and available in scripts
+or compiled into extension modules: python classes and functions are always
+runtime structures, created when a script or module loads.
+However, if the backend itself is capable of dynamic behavior, it is a much
+better functional match to python, allowing tighter integration and more
+natural language mappings.
+Full details are `available here`_.
+
+.. _`cppyy`: cppyy.html
+.. _`reflex-support`: cppyy.html
+.. _`Reflex package`: http://root.cern.ch/drupal/content/reflex
+.. _`available here`: cppyy.html
+
+Pros
+----
+
+The cppyy module is written in RPython, which makes it possible to keep the
+code execution visible to the JIT all the way to the actual point of call into
+C++, thus allowing for a very fast interface.
+Reflex is currently in use in large software environments in High Energy
+Physics (HEP), across many different projects and packages, and its use can be
+virtually completely automated in a production environment.
+One of its uses in HEP is in providing language bindings for CPython.
+Thus, it is possible to use Reflex to have bound code work on both CPython and
+on PyPy.
+In the medium-term, Reflex will be replaced by `cling`_, which is based on
+`llvm`_.
+This will affect the backend only; the python-side interface is expected to
+remain the same, except that cling adds a lot of dynamic behavior to C++,
+enabling further language integration.
+
+.. _`cling`: http://root.cern.ch/drupal/content/cling
+.. _`llvm`: http://llvm.org/
+
+Cons
+----
+
+C++ is a large language, and cppyy is not yet feature-complete.
+Still, the experience gained in developing the equivalent bindings for CPython
+means that adding missing features is a simple matter of engineering, not a
+question of research.
+The module is written so that currently missing features should do no harm if
+you don't use them; if you do need a particular feature, it may be necessary
+to work around it in python or with a C++ helper function.
+Although Reflex works on various platforms, the bindings with PyPy have only
+been tested on Linux.
diff --git a/pypy/doc/windows.rst b/pypy/doc/windows.rst
--- a/pypy/doc/windows.rst
+++ b/pypy/doc/windows.rst
@@ -24,7 +24,8 @@
 translation. Failing that, they will pick the most recent Visual Studio
 compiler they can find. In addition, the target architecture
 (32 bits, 64 bits) is automatically selected. A 32 bit build can only be built
-using a 32 bit Python and vice versa.
+using a 32 bit Python and vice versa. By default pypy is built using the
+Multi-threaded DLL (/MD) runtime environment.
 
 **Note:** PyPy is currently not supported for 64 bit Windows, and translation
 will fail in this case.
@@ -102,10 +103,12 @@
 
 Download the source code of expat on sourceforge:
 http://sourceforge.net/projects/expat/ and extract it in the base
-directory. Then open the project file ``expat.dsw`` with Visual
+directory. Version 2.1.0 is known to pass tests. Then open the project
+file ``expat.dsw`` with Visual
 Studio; follow the instruction for converting the project files,
-switch to the "Release" configuration, and build the solution (the
-``expat`` project is actually enough for pypy).
+switch to the "Release" configuration, reconfigure the runtime for
+Multi-threaded DLL (/MD) and build the solution (the ``expat`` project
+is actually enough for pypy).
 Then, copy the file ``win32\bin\release\libexpat.dll`` somewhere in
 your PATH.
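The pypy/interpreter/argument.py hunk that follows splits the old ``_add_keywordargs_no_unwrapping`` into a pure duplicate-keyword check plus inline list concatenation, so the JIT can look inside the check on its own. As a rough plain-Python sketch (not the actual RPython code; ``merge_kwargs`` and the other names here are illustrative only, not PyPy APIs), the logic amounts to:

```python
# Plain-Python sketch of the duplicate-keyword check factored out in the
# argument.py change below; names are illustrative, not the PyPy API.

def check_not_duplicate_kwargs(existing_keywords, new_keywords):
    # looks quadratic, but all the lists should be small (and the JIT can
    # remove the loops entirely when their lengths are constant)
    for key in new_keywords:
        for otherkey in existing_keywords:
            if otherkey == key:
                raise TypeError(
                    "got multiple values for keyword argument '%s'" % key)

def merge_kwargs(keywords, values_w, new_keywords, new_values_w):
    # check first, then concatenate the parallel name/value lists
    check_not_duplicate_kwargs(keywords, new_keywords)
    return keywords + new_keywords, values_w + new_values_w
```

Separating the check from the mutation is what lets the real code decorate only the check with ``@jit.look_inside_iff``.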
diff --git a/pypy/interpreter/argument.py b/pypy/interpreter/argument.py
--- a/pypy/interpreter/argument.py
+++ b/pypy/interpreter/argument.py
@@ -98,6 +98,10 @@
     Collects the arguments of a function call.
 
     Instances should be considered immutable.
+
+    Some parts of this class are written in a slightly convoluted style to help
+    the JIT. It is really crucial to get this right, because Python's argument
+    semantics are complex, but calls occur everywhere.
     """
 
     ###  Construction  ###
@@ -184,7 +188,13 @@
             space = self.space
             keywords, values_w = space.view_as_kwargs(w_starstararg)
             if keywords is not None: # this path also taken for empty dicts
-                self._add_keywordargs_no_unwrapping(keywords, values_w)
+                if self.keywords is None:
+                    self.keywords = keywords[:] # copy to make non-resizable
+                    self.keywords_w = values_w[:]
+                else:
+                    self._check_not_duplicate_kwargs(keywords, values_w)
+                    self.keywords = self.keywords + keywords
+                    self.keywords_w = self.keywords_w + values_w
                 return not jit.isconstant(len(self.keywords))
             if space.isinstance_w(w_starstararg, space.w_dict):
                 keys_w = space.unpackiterable(w_starstararg)
@@ -242,22 +252,16 @@
     @jit.look_inside_iff(lambda self, keywords, keywords_w:
                          jit.isconstant(len(keywords) and
                          jit.isconstant(self.keywords)))
-    def _add_keywordargs_no_unwrapping(self, keywords, keywords_w):
-        if self.keywords is None:
-            self.keywords = keywords[:] # copy to make non-resizable
-            self.keywords_w = keywords_w[:]
-        else:
-            # looks quadratic, but the JIT should remove all of it nicely.
-            # Also, all the lists should be small
-            for key in keywords:
-                for otherkey in self.keywords:
-                    if otherkey == key:
-                        raise operationerrfmt(self.space.w_TypeError,
-                                              "got multiple values "
-                                              "for keyword argument "
-                                              "'%s'", key)
-            self.keywords = self.keywords + keywords
-            self.keywords_w = self.keywords_w + keywords_w
+    def _check_not_duplicate_kwargs(self, keywords, keywords_w):
+        # looks quadratic, but the JIT should remove all of it nicely.
+        # Also, all the lists should be small
+        for key in keywords:
+            for otherkey in self.keywords:
+                if otherkey == key:
+                    raise operationerrfmt(self.space.w_TypeError,
+                                          "got multiple values "
+                                          "for keyword argument "
+                                          "'%s'", key)
 
     def fixedunpack(self, argcount):
         """The simplest argument parsing: get the 'argcount' arguments,
diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py
--- a/pypy/interpreter/astcompiler/optimize.py
+++ b/pypy/interpreter/astcompiler/optimize.py
@@ -311,14 +311,19 @@
         # produce compatible pycs.
         if (self.space.isinstance_w(w_obj, self.space.w_unicode) and
             self.space.isinstance_w(w_const, self.space.w_unicode)):
-            unistr = self.space.unicode_w(w_const)
-            if len(unistr) == 1:
-                ch = ord(unistr[0])
-            else:
-                ch = 0
-            if (ch > 0xFFFF or
-                (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)):
-                return subs
+            #unistr = self.space.unicode_w(w_const)
+            #if len(unistr) == 1:
+            #    ch = ord(unistr[0])
+            #else:
+            #    ch = 0
+            #if (ch > 0xFFFF or
+            #    (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)):
+            # --XXX-- for now we always disable optimization of
+            # u'...'[constant] because the tests above are not
+            # enough to fix issue5057 (CPython has the same
+            # problem as of April 24, 2012).
+            # See test_const_fold_unicode_subscr
+            return subs
 
         return ast.Const(w_const, subs.lineno, subs.col_offset)
 
diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py
--- a/pypy/interpreter/astcompiler/test/test_compiler.py
+++ b/pypy/interpreter/astcompiler/test/test_compiler.py
@@ -910,7 +910,8 @@
             return "abc"[0]
         """
         counts = self.count_instructions(source)
-        assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1}
+        if 0:   # xxx later?
+            assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1}
 
         # getitem outside of the BMP should not be optimized
         source = """def f():
@@ -920,12 +921,20 @@
         assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1,
                           ops.RETURN_VALUE: 1}
 
+        source = """def f():
+            return u"\U00012345abcdef"[3]
+        """
+        counts = self.count_instructions(source)
+        assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1,
+                          ops.RETURN_VALUE: 1}
+
         monkeypatch.setattr(optimize, "MAXUNICODE", 0xFFFF)
         source = """def f():
             return "\uE01F"[0]
         """
         counts = self.count_instructions(source)
-        assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1}
+        if 0:   # xxx later?
+            assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1}
         monkeypatch.undo()
 
         # getslice is not yet optimized.
diff --git a/pypy/interpreter/function.py b/pypy/interpreter/function.py
--- a/pypy/interpreter/function.py
+++ b/pypy/interpreter/function.py
@@ -51,7 +51,9 @@
 
     def __repr__(self):
         # return "function %s.%s" % (self.space, self.name)
         # maybe we want this shorter:
-        name = getattr(self, 'name', '?')
+        name = getattr(self, 'name', None)
+        if not isinstance(name, str):
+            name = '?'
         return "<%s %s>" % (self.__class__.__name__, name)
 
     def call_args(self, args):
diff --git a/pypy/jit/backend/llsupport/asmmemmgr.py b/pypy/jit/backend/llsupport/asmmemmgr.py
--- a/pypy/jit/backend/llsupport/asmmemmgr.py
+++ b/pypy/jit/backend/llsupport/asmmemmgr.py
@@ -277,6 +277,8 @@
         from pypy.jit.backend.hlinfo import highleveljitinfo
         if highleveljitinfo.sys_executable:
             debug_print('SYS_EXECUTABLE', highleveljitinfo.sys_executable)
+        else:
+            debug_print('SYS_EXECUTABLE', '??')
         #
         HEX = '0123456789ABCDEF'
         dump = []
diff --git a/pypy/jit/backend/llsupport/test/test_asmmemmgr.py b/pypy/jit/backend/llsupport/test/test_asmmemmgr.py
--- a/pypy/jit/backend/llsupport/test/test_asmmemmgr.py
+++ b/pypy/jit/backend/llsupport/test/test_asmmemmgr.py
@@ -217,7 +217,8 @@
     encoded = ''.join(writtencode).encode('hex').upper()
     ataddr = '@%x' % addr
     assert log == [('test-logname-section',
-                    [('debug_print', 'CODE_DUMP', ataddr, '+0 ', encoded)])]
+                    [('debug_print', 'SYS_EXECUTABLE', '??'),
+                     ('debug_print', 'CODE_DUMP', ataddr, '+0 ', encoded)])]
     lltype.free(p, flavor='raw')
diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py
--- a/pypy/jit/backend/x86/assembler.py
+++ b/pypy/jit/backend/x86/assembler.py
@@ -1656,15 +1656,21 @@
         else:
             # XXX hard-coded assumption: to go from an object to its class
             # we use the following algorithm:
-            #   - read the typeid from mem(locs[0]), i.e. at offset 0
-            #   - keep the lower 16 bits read there
-            #   - multiply by 4 and use it as an offset in type_info_group
-            #   - add 16 bytes, to go past the TYPE_INFO structure
+            #   - read the typeid from mem(locs[0]), i.e. at offset 0;
+            #     this is a complete word (N=4 bytes on 32-bit, N=8 on
+            #     64-bits)
+            #   - keep the lower half of what is read there (i.e.
+            #     truncate to an unsigned 'N / 2' bytes value)
+            #   - multiply by 4 (on 32-bits only) and use it as an
+            #     offset in type_info_group
+            #   - add 16/32 bytes, to go past the TYPE_INFO structure
             loc = locs[1]
             assert isinstance(loc, ImmedLoc)
             classptr = loc.value
             # here, we have to go back from 'classptr' to the value expected
-            # from reading the 16 bits in the object header
+            # from reading the half-word in the object header.  Note that
+            # this half-word is at offset 0 on a little-endian machine;
+            # it would be at offset 2 or 4 on a big-endian machine.
             from pypy.rpython.memory.gctypelayout import GCData
             sizeof_ti = rffi.sizeof(GCData.TYPE_INFO)
             type_info_group = llop.gc_get_type_info_group(llmemory.Address)
diff --git a/pypy/jit/metainterp/heapcache.py b/pypy/jit/metainterp/heapcache.py
--- a/pypy/jit/metainterp/heapcache.py
+++ b/pypy/jit/metainterp/heapcache.py
@@ -20,6 +20,7 @@
         self.dependencies = {}
         # contains frame boxes that are not virtualizables
         self.nonstandard_virtualizables = {}
+        # heap cache
         # maps descrs to {from_box, to_box} dicts
         self.heap_cache = {}
@@ -29,6 +30,26 @@
         # cache the length of arrays
         self.length_cache = {}
 
+        # replace_box is called surprisingly often, therefore it's not efficient
+        # to go over all the dicts and fix them.
+        # instead, these two dicts are kept, and a replace_box adds an entry to
+        # each of them.
+        # every time one of the dicts heap_cache, heap_array_cache, length_cache
+        # is accessed, suitable indirections need to be performed
+
+        # this looks all very subtle, but in practice the patterns of
+        # replacements should not be that complex. Usually a box is replaced by
+        # a const, once. Also, if something goes wrong, the effect is that less
+        # caching than possible is done, which is not a huge problem.
+        self.input_indirections = {}
+        self.output_indirections = {}
+
+    def _input_indirection(self, box):
+        return self.input_indirections.get(box, box)
+
+    def _output_indirection(self, box):
+        return self.output_indirections.get(box, box)
+
     def invalidate_caches(self, opnum, descr, argboxes):
         self.mark_escaped(opnum, argboxes)
         self.clear_caches(opnum, descr, argboxes)
@@ -132,14 +153,16 @@
             self.arraylen_now_known(box, lengthbox)
 
     def getfield(self, box, descr):
+        box = self._input_indirection(box)
         d = self.heap_cache.get(descr, None)
         if d:
             tobox = d.get(box, None)
-            if tobox:
-                return tobox
+            return self._output_indirection(tobox)
         return None
 
     def getfield_now_known(self, box, descr, fieldbox):
+        box = self._input_indirection(box)
+        fieldbox = self._input_indirection(fieldbox)
         self.heap_cache.setdefault(descr, {})[box] = fieldbox
 
     def setfield(self, box, descr, fieldbox):
@@ -148,6 +171,8 @@
         self.heap_cache[descr] = new_d
 
     def _do_write_with_aliasing(self, d, box, fieldbox):
+        box = self._input_indirection(box)
+        fieldbox = self._input_indirection(fieldbox)
         # slightly subtle logic here
         # a write to an arbitrary box, all other boxes can alias this one
         if not d or box not in self.new_boxes:
@@ -166,6 +191,7 @@
         return new_d
 
     def getarrayitem(self, box, descr, indexbox):
+        box = self._input_indirection(box)
         if not isinstance(indexbox, ConstInt):
             return
         index = indexbox.getint()
@@ -173,9 +199,11 @@
         if cache:
             indexcache = cache.get(index, None)
             if indexcache is not None:
-                return indexcache.get(box, None)
+                return self._output_indirection(indexcache.get(box, None))
 
     def getarrayitem_now_known(self, box, descr, indexbox, valuebox):
+        box = self._input_indirection(box)
+        valuebox = self._input_indirection(valuebox)
         if not isinstance(indexbox, ConstInt):
             return
         index = indexbox.getint()
@@ -198,25 +226,13 @@
             cache[index] = self._do_write_with_aliasing(indexcache, box, valuebox)
 
     def arraylen(self, box):
-        return self.length_cache.get(box, None)
+        box = self._input_indirection(box)
+        return self._output_indirection(self.length_cache.get(box, None))
 
     def arraylen_now_known(self, box, lengthbox):
-        self.length_cache[box] = lengthbox
-
-    def _replace_box(self, d, oldbox, newbox):
-        new_d = {}
-        for frombox, tobox in d.iteritems():
-            if frombox is oldbox:
-                frombox = newbox
-            if tobox is oldbox:
-                tobox = newbox
-            new_d[frombox] = tobox
-        return new_d
+        box = self._input_indirection(box)
+        self.length_cache[box] = self._input_indirection(lengthbox)
 
     def replace_box(self, oldbox, newbox):
-        for descr, d in self.heap_cache.iteritems():
-            self.heap_cache[descr] = self._replace_box(d, oldbox, newbox)
-        for descr, d in self.heap_array_cache.iteritems():
-            for index, cache in d.iteritems():
-                d[index] = self._replace_box(cache, oldbox, newbox)
-        self.length_cache = self._replace_box(self.length_cache, oldbox, newbox)
+        self.input_indirections[self._output_indirection(newbox)] = self._input_indirection(oldbox)
+        self.output_indirections[self._input_indirection(oldbox)] = self._output_indirection(newbox)
diff --git a/pypy/jit/metainterp/jitexc.py b/pypy/jit/metainterp/jitexc.py
--- a/pypy/jit/metainterp/jitexc.py
+++ b/pypy/jit/metainterp/jitexc.py
@@ -12,7 +12,6 @@
     """
    _go_through_llinterp_uncaught_ = True     # ugh
 
-
 def _get_standard_error(rtyper, Class):
     exdata = rtyper.getexceptiondata()
     clsdef = rtyper.annotator.bookkeeper.getuniqueclassdef(Class)
diff --git a/pypy/jit/metainterp/optimize.py b/pypy/jit/metainterp/optimize.py
--- a/pypy/jit/metainterp/optimize.py
+++ b/pypy/jit/metainterp/optimize.py
@@ -5,3 +5,9 @@
     """Raised when the optimize*.py detect that the loop that
     we are trying to build cannot possibly make sense as a
     long-running loop (e.g. it cannot run 2 complete iterations)."""
+
+    def __init__(self, msg='?'):
+        debug_start("jit-abort")
+        debug_print(msg)
+        debug_stop("jit-abort")
+        self.msg = msg
diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py
--- a/pypy/jit/metainterp/optimizeopt/__init__.py
+++ b/pypy/jit/metainterp/optimizeopt/__init__.py
@@ -49,8 +49,9 @@
         optimizations.append(OptFfiCall())
 
     if ('rewrite' not in enable_opts or 'virtualize' not in enable_opts
-        or 'heap' not in enable_opts or 'unroll' not in enable_opts):
-        optimizations.append(OptSimplify())
+        or 'heap' not in enable_opts or 'unroll' not in enable_opts
+        or 'pure' not in enable_opts):
+        optimizations.append(OptSimplify(unroll))
 
     return optimizations, unroll
 
diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py
--- a/pypy/jit/metainterp/optimizeopt/heap.py
+++ b/pypy/jit/metainterp/optimizeopt/heap.py
@@ -257,8 +257,8 @@
             opnum == rop.COPYSTRCONTENT or      # no effect on GC struct/array
             opnum == rop.COPYUNICODECONTENT):   # no effect on GC struct/array
             return
-        assert opnum != rop.CALL_PURE
         if (opnum == rop.CALL or
+            opnum == rop.CALL_PURE or
            opnum == rop.CALL_MAY_FORCE or
            opnum == rop.CALL_RELEASE_GIL or
            opnum == rop.CALL_ASSEMBLER):
@@ -481,7 +481,7 @@
             # already between the tracing and now.  In this case, we are
             # simply ignoring the QUASIIMMUT_FIELD hint and compiling it
             # as a regular getfield.
-            if not qmutdescr.is_still_valid():
+            if not qmutdescr.is_still_valid_for(structvalue.get_key_box()):
                 self._remove_guard_not_invalidated = True
                 return
             # record as an out-of-line guard
diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py
--- a/pypy/jit/metainterp/optimizeopt/intbounds.py
+++ b/pypy/jit/metainterp/optimizeopt/intbounds.py
@@ -191,10 +191,13 @@
         # GUARD_OVERFLOW, then the loop is invalid.
lastop = self.last_emitted_operation if lastop is None: - raise InvalidLoop + raise InvalidLoop('An INT_xxx_OVF was proven not to overflow but' + + 'guarded with GUARD_OVERFLOW') opnum = lastop.getopnum() if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF): - raise InvalidLoop + raise InvalidLoop('An INT_xxx_OVF was proven not to overflow but' + + 'guarded with GUARD_OVERFLOW') + self.emit_operation(op) def optimize_INT_ADD_OVF(self, op): diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -525,6 +525,7 @@ @specialize.argtype(0) def _emit_operation(self, op): + assert op.getopnum() != rop.CALL_PURE for i in range(op.numargs()): arg = op.getarg(i) try: diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -208,7 +208,8 @@ box = value.box assert isinstance(box, Const) if not box.same_constant(constbox): - raise InvalidLoop + raise InvalidLoop('A GURAD_{VALUE,TRUE,FALSE} was proven to' + + 'always fail') return if emit_operation: self.emit_operation(op) @@ -220,7 +221,7 @@ if value.is_null(): return elif value.is_nonnull(): - raise InvalidLoop + raise InvalidLoop('A GUARD_ISNULL was proven to always fail') self.emit_operation(op) value.make_constant(self.optimizer.cpu.ts.CONST_NULL) @@ -229,7 +230,7 @@ if value.is_nonnull(): return elif value.is_null(): - raise InvalidLoop + raise InvalidLoop('A GUARD_NONNULL was proven to always fail') self.emit_operation(op) value.make_nonnull(op) @@ -278,7 +279,7 @@ if realclassbox is not None: if realclassbox.same_constant(expectedclassbox): return - raise InvalidLoop + raise InvalidLoop('A GUARD_CLASS was proven to always fail') if value.last_guard: # there already has been a guard_nonnull or guard_class or # 
guard_nonnull_class on this value. @@ -301,7 +302,8 @@ def optimize_GUARD_NONNULL_CLASS(self, op): value = self.getvalue(op.getarg(0)) if value.is_null(): - raise InvalidLoop + raise InvalidLoop('A GUARD_NONNULL_CLASS was proven to always ' + + 'fail') self.optimize_GUARD_CLASS(op) def optimize_CALL_LOOPINVARIANT(self, op): diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -4,8 +4,9 @@ from pypy.jit.metainterp.history import TargetToken, JitCellToken class OptSimplify(Optimization): - def __init__(self): + def __init__(self, unroll): self.last_label_descr = None + self.unroll = unroll def optimize_CALL_PURE(self, op): args = op.getarglist() @@ -35,24 +36,26 @@ pass def optimize_LABEL(self, op): - descr = op.getdescr() - if isinstance(descr, JitCellToken): - return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) - self.last_label_descr = op.getdescr() + if not self.unroll: + descr = op.getdescr() + if isinstance(descr, JitCellToken): + return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) + self.last_label_descr = op.getdescr() self.emit_operation(op) def optimize_JUMP(self, op): - descr = op.getdescr() - assert isinstance(descr, JitCellToken) - if not descr.target_tokens: - assert self.last_label_descr is not None - target_token = self.last_label_descr - assert isinstance(target_token, TargetToken) - assert target_token.targeting_jitcell_token is descr - op.setdescr(self.last_label_descr) - else: - assert len(descr.target_tokens) == 1 - op.setdescr(descr.target_tokens[0]) + if not self.unroll: + descr = op.getdescr() + assert isinstance(descr, JitCellToken) + if not descr.target_tokens: + assert self.last_label_descr is not None + target_token = self.last_label_descr + assert isinstance(target_token, TargetToken) + assert target_token.targeting_jitcell_token is descr + op.setdescr(self.last_label_descr) + 
else: + assert len(descr.target_tokens) == 1 + op.setdescr(descr.target_tokens[0]) self.emit_operation(op) dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', diff --git a/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py @@ -0,0 +1,46 @@ +from pypy.jit.metainterp.optimizeopt.test.test_optimizeopt import OptimizeOptTest +from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin +from pypy.jit.metainterp.resoperation import rop + + +allopts = OptimizeOptTest.enable_opts.split(':') +for optnum in range(len(allopts)): + myopts = allopts[:] + del myopts[optnum] + + class TestLLtype(OptimizeOptTest, LLtypeMixin): + enable_opts = ':'.join(myopts) + + def optimize_loop(self, ops, expected, expected_preamble=None, + call_pure_results=None, expected_short=None): + loop = self.parse(ops) + if expected != "crash!": + expected = self.parse(expected) + if expected_preamble: + expected_preamble = self.parse(expected_preamble) + if expected_short: + expected_short = self.parse(expected_short) + + preamble = self.unroll_and_optimize(loop, call_pure_results) + + for op in preamble.operations + loop.operations: + assert op.getopnum() not in (rop.CALL_PURE, + rop.CALL_LOOPINVARIANT, + rop.VIRTUAL_REF_FINISH, + rop.VIRTUAL_REF, + rop.QUASIIMMUT_FIELD, + rop.MARK_OPAQUE_PTR, + rop.RECORD_KNOWN_CLASS) + + def raises(self, e, fn, *args): + try: + fn(*args) + except e: + pass + + opt = allopts[optnum] + exec "TestNo%sLLtype = TestLLtype" % (opt[0].upper() + opt[1:]) + +del TestLLtype # No need to run the last set twice +del TestNoUnrollLLtype # This case is taken care of by test_optimizebasic + diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- 
a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -7,7 +7,7 @@ import pypy.jit.metainterp.optimizeopt.optimizer as optimizeopt import pypy.jit.metainterp.optimizeopt.virtualize as virtualize from pypy.jit.metainterp.optimize import InvalidLoop -from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt +from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt, get_const_ptr_for_string from pypy.jit.metainterp import executor, compile, resume, history from pypy.jit.metainterp.resoperation import rop, opname, ResOperation from pypy.rlib.rarithmetic import LONG_BIT @@ -5067,11 +5067,29 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_call_pure_vstring_const(self): + ops = """ + [] + p0 = newstr(3) + strsetitem(p0, 0, 97) + strsetitem(p0, 1, 98) + strsetitem(p0, 2, 99) + i0 = call_pure(123, p0, descr=nonwritedescr) + finish(i0) + """ + expected = """ + [] + finish(5) + """ + call_pure_results = { + (ConstInt(123), get_const_ptr_for_string("abc"),): ConstInt(5), + } + self.optimize_loop(ops, expected, call_pure_results) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass - ##class TestOOtype(BaseTestOptimizeBasic, OOtypeMixin): ## def test_instanceof(self): diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -105,6 +105,9 @@ return loop + def raises(self, e, fn, *args): + py.test.raises(e, fn, *args) + class OptimizeOptTest(BaseTestWithUnroll): def setup_method(self, meth=None): @@ -2639,7 +2642,7 @@ p2 = new_with_vtable(ConstClass(node_vtable)) jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_2(self): @@ -2651,7 +2654,7 @@ escape(p2) # prevent it 
from staying Virtual jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_3(self): @@ -2665,7 +2668,7 @@ setfield_gc(p3, p4, descr=nextdescr) jump(p3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_merge_guard_class_guard_value(self): @@ -4411,7 +4414,7 @@ setfield_gc(p1, p3, descr=nextdescr) jump(p3) """ - py.test.raises(BogusPureField, self.optimize_loop, ops, "crash!") + self.raises(BogusPureField, self.optimize_loop, ops, "crash!") def test_dont_complains_different_field(self): ops = """ @@ -5024,7 +5027,7 @@ i2 = int_add(i0, 3) jump(i2) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_ne_const_not(self): ops = """ @@ -5074,7 +5077,7 @@ i3 = int_add(i0, 3) jump(i3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_lshift(self): ops = """ @@ -6533,9 +6536,9 @@ def test_quasi_immut_2(self): ops = """ [] - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i1 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i1 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) escape(i1) jump() """ @@ -6585,13 +6588,13 @@ def test_call_may_force_invalidated_guards_reload(self): ops = """ [i0a, i0b] - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i1 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i1 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) call_may_force(i0b, descr=mayforcevirtdescr) - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) 
guard_not_invalidated() [] - i2 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i2 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) i3 = escape(i1) i4 = escape(i2) jump(i3, i4) @@ -7813,6 +7816,52 @@ """ self.optimize_loop(ops, expected) + def test_issue1080_infinite_loop_virtual(self): + ops = """ + [p10] + p52 = getfield_gc(p10, descr=nextdescr) # inst_storage + p54 = getarrayitem_gc(p52, 0, descr=arraydescr) + p69 = getfield_gc_pure(p54, descr=otherdescr) # inst_w_function + + quasiimmut_field(p69, descr=quasiimmutdescr) + guard_not_invalidated() [] + p71 = getfield_gc(p69, descr=quasifielddescr) # inst_code + guard_value(p71, -4247) [] + + p106 = new_with_vtable(ConstClass(node_vtable)) + p108 = new_array(3, descr=arraydescr) + p110 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p110, ConstPtr(myptr2), descr=otherdescr) # inst_w_function + setarrayitem_gc(p108, 0, p110, descr=arraydescr) + setfield_gc(p106, p108, descr=nextdescr) # inst_storage + jump(p106) + """ + expected = """ + [] + p72 = getfield_gc(ConstPtr(myptr2), descr=quasifielddescr) + guard_value(p72, -4247) [] + jump() + """ + self.optimize_loop(ops, expected) + + + def test_issue1080_infinite_loop_simple(self): + ops = """ + [p69] + quasiimmut_field(p69, descr=quasiimmutdescr) + guard_not_invalidated() [] + p71 = getfield_gc(p69, descr=quasifielddescr) # inst_code + guard_value(p71, -4247) [] + jump(ConstPtr(myptr)) + """ + expected = """ + [] + p72 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + guard_value(p72, -4247) [] + jump() + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -122,6 +122,7 @@ quasi.inst_field = -4247 quasifielddescr = cpu.fielddescrof(QUASI, 'inst_field') quasibox = 
BoxPtr(lltype.cast_opaque_ptr(llmemory.GCREF, quasi)) + quasiptr = quasibox.value quasiimmutdescr = QuasiImmutDescr(cpu, quasibox, quasifielddescr, cpu.fielddescrof(QUASI, 'mutate_field')) diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -315,7 +315,10 @@ try: jumpargs = virtual_state.make_inputargs(values, self.optimizer) except BadVirtualState: - raise InvalidLoop + raise InvalidLoop('The state of the optimizer at the end of ' + + 'the peeled loop is inconsistent with the ' + + 'VirtualState at the beginning of the peeled ' + + 'loop') jumpop.initarglist(jumpargs) # Inline the short preamble at the end of the loop @@ -325,7 +328,11 @@ for i in range(len(short_inputargs)): if short_inputargs[i] in args: if args[short_inputargs[i]] != jmp_to_short_args[i]: - raise InvalidLoop + raise InvalidLoop('The short preamble wants the ' + + 'same box passed to multiple of its ' + + 'inputargs, but the jump at the ' + + 'end of this bridge does not do that.') + args[short_inputargs[i]] = jmp_to_short_args[i] self.short_inliner = Inliner(short_inputargs, jmp_to_short_args) for op in self.short[1:]: @@ -378,7 +385,10 @@ #final_virtual_state.debug_print("Bad virtual state at end of loop, ", # bad) #debug_stop('jit-log-virtualstate') - raise InvalidLoop + raise InvalidLoop('The virtual state at the end of the peeled ' + + 'loop is not compatible with the virtual ' + + 'state at the start of the loop which makes ' + + 'it impossible to close the loop') #debug_stop('jit-log-virtualstate') @@ -526,8 +536,8 @@ args = jumpop.getarglist() modifier = VirtualStateAdder(self.optimizer) virtual_state = modifier.get_virtual_state(args) - #debug_start('jit-log-virtualstate') - #virtual_state.debug_print("Looking for ") + debug_start('jit-log-virtualstate') + virtual_state.debug_print("Looking for ") for target in cell_token.target_tokens: if not 
target.virtual_state: @@ -536,10 +546,10 @@ extra_guards = [] bad = {} - #debugmsg = 'Did not match ' + debugmsg = 'Did not match ' if target.virtual_state.generalization_of(virtual_state, bad): ok = True - #debugmsg = 'Matched ' + debugmsg = 'Matched ' else: try: cpu = self.optimizer.cpu @@ -548,13 +558,13 @@ extra_guards) ok = True - #debugmsg = 'Guarded to match ' + debugmsg = 'Guarded to match ' except InvalidLoop: pass - #target.virtual_state.debug_print(debugmsg, bad) + target.virtual_state.debug_print(debugmsg, bad) if ok: - #debug_stop('jit-log-virtualstate') + debug_stop('jit-log-virtualstate') values = [self.getvalue(arg) for arg in jumpop.getarglist()] @@ -581,7 +591,7 @@ jumpop.setdescr(cell_token.target_tokens[0]) self.optimizer.send_extra_operation(jumpop) return True - #debug_stop('jit-log-virtualstate') + debug_stop('jit-log-virtualstate') return False class ValueImporter(object): diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -27,11 +27,15 @@ if self.generalization_of(other, renum, {}): return if renum[self.position] != other.position: - raise InvalidLoop + raise InvalidLoop('The numbering of the virtual states does not ' + + 'match. 
This means that two virtual fields ' + 'have been set to the same Box in one of the ' + 'virtual states but not in the other.') self._generate_guards(other, box, cpu, extra_guards) def _generate_guards(self, other, box, cpu, extra_guards): - raise InvalidLoop + raise InvalidLoop('Generating guards for making the VirtualStates ' + + 'at hand match has not been implemented') def enum_forced_boxes(self, boxes, value, optimizer): raise NotImplementedError @@ -346,10 +350,12 @@ def _generate_guards(self, other, box, cpu, extra_guards): if not isinstance(other, NotVirtualStateInfo): - raise InvalidLoop + raise InvalidLoop('The VirtualStates do not match as a ' + + 'virtual appears where a pointer is needed ' + + 'and it is too late to force it.') if self.lenbound or other.lenbound: - raise InvalidLoop + raise InvalidLoop('The array length bounds do not match.') if self.level == LEVEL_KNOWNCLASS and \ box.nonnull() and \ @@ -400,7 +406,8 @@ return # Remaining cases are probably not interesting - raise InvalidLoop + raise InvalidLoop('Generating guards for making the VirtualStates ' + + 'at hand match has not been implemented') if self.level == LEVEL_CONSTANT: import pdb; pdb.set_trace() raise NotImplementedError diff --git a/pypy/jit/metainterp/pyjitpl.py b/pypy/jit/metainterp/pyjitpl.py --- a/pypy/jit/metainterp/pyjitpl.py +++ b/pypy/jit/metainterp/pyjitpl.py @@ -1223,7 +1223,7 @@ def run_one_step(self): # Execute the frame forward. This method contains a loop that leaves # whenever the 'opcode_implementations' (which is one of the 'opimpl_' - # methods) returns True. This is the case when the current frame + # methods) raises ChangeFrame. This is the case when the current frame # changes, due to a call or a return. 
try: staticdata = self.metainterp.staticdata diff --git a/pypy/jit/metainterp/quasiimmut.py b/pypy/jit/metainterp/quasiimmut.py --- a/pypy/jit/metainterp/quasiimmut.py +++ b/pypy/jit/metainterp/quasiimmut.py @@ -120,8 +120,10 @@ self.fielddescr, self.structbox) return fieldbox.constbox() - def is_still_valid(self): + def is_still_valid_for(self, structconst): assert self.structbox is not None + if not self.structbox.constbox().same_constant(structconst): + return False cpu = self.cpu gcref = self.structbox.getref_base() qmut = get_current_qmut_instance(cpu, gcref, self.mutatefielddescr) diff --git a/pypy/jit/metainterp/test/test_heapcache.py b/pypy/jit/metainterp/test/test_heapcache.py --- a/pypy/jit/metainterp/test/test_heapcache.py +++ b/pypy/jit/metainterp/test/test_heapcache.py @@ -2,12 +2,14 @@ from pypy.jit.metainterp.resoperation import rop from pypy.jit.metainterp.history import ConstInt -box1 = object() -box2 = object() -box3 = object() -box4 = object() +box1 = "box1" +box2 = "box2" +box3 = "box3" +box4 = "box4" +box5 = "box5" lengthbox1 = object() lengthbox2 = object() +lengthbox3 = object() descr1 = object() descr2 = object() descr3 = object() @@ -276,11 +278,43 @@ h.setfield(box1, descr2, box3) h.setfield(box2, descr3, box3) h.replace_box(box1, box4) - assert h.getfield(box1, descr1) is None - assert h.getfield(box1, descr2) is None assert h.getfield(box4, descr1) is box2 assert h.getfield(box4, descr2) is box3 assert h.getfield(box2, descr3) is box3 + h.setfield(box4, descr1, box3) + assert h.getfield(box4, descr1) is box3 + + h = HeapCache() + h.setfield(box1, descr1, box2) + h.setfield(box1, descr2, box3) + h.setfield(box2, descr3, box3) + h.replace_box(box3, box4) + assert h.getfield(box1, descr1) is box2 + assert h.getfield(box1, descr2) is box4 + assert h.getfield(box2, descr3) is box4 + + def test_replace_box_twice(self): + h = HeapCache() + h.setfield(box1, descr1, box2) + h.setfield(box1, descr2, box3) + h.setfield(box2, descr3, box3) + 
h.replace_box(box1, box4) + h.replace_box(box4, box5) + assert h.getfield(box5, descr1) is box2 + assert h.getfield(box5, descr2) is box3 + assert h.getfield(box2, descr3) is box3 + h.setfield(box5, descr1, box3) + assert h.getfield(box4, descr1) is box3 + + h = HeapCache() + h.setfield(box1, descr1, box2) + h.setfield(box1, descr2, box3) + h.setfield(box2, descr3, box3) + h.replace_box(box3, box4) + h.replace_box(box4, box5) + assert h.getfield(box1, descr1) is box2 + assert h.getfield(box1, descr2) is box5 + assert h.getfield(box2, descr3) is box5 def test_replace_box_array(self): h = HeapCache() @@ -291,9 +325,6 @@ h.setarrayitem(box3, descr2, index2, box1) h.setarrayitem(box2, descr3, index2, box3) h.replace_box(box1, box4) - assert h.getarrayitem(box1, descr1, index1) is None - assert h.getarrayitem(box1, descr2, index1) is None - assert h.arraylen(box1) is None assert h.arraylen(box4) is lengthbox1 assert h.getarrayitem(box4, descr1, index1) is box2 assert h.getarrayitem(box4, descr2, index1) is box3 @@ -304,6 +335,27 @@ h.replace_box(lengthbox1, lengthbox2) assert h.arraylen(box4) is lengthbox2 + def test_replace_box_array_twice(self): + h = HeapCache() + h.setarrayitem(box1, descr1, index1, box2) + h.setarrayitem(box1, descr2, index1, box3) + h.arraylen_now_known(box1, lengthbox1) + h.setarrayitem(box2, descr1, index2, box1) + h.setarrayitem(box3, descr2, index2, box1) + h.setarrayitem(box2, descr3, index2, box3) + h.replace_box(box1, box4) + h.replace_box(box4, box5) + assert h.arraylen(box4) is lengthbox1 + assert h.getarrayitem(box5, descr1, index1) is box2 + assert h.getarrayitem(box5, descr2, index1) is box3 + assert h.getarrayitem(box2, descr1, index2) is box5 + assert h.getarrayitem(box3, descr2, index2) is box5 + assert h.getarrayitem(box2, descr3, index2) is box3 + + h.replace_box(lengthbox1, lengthbox2) + h.replace_box(lengthbox2, lengthbox3) + assert h.arraylen(box4) is lengthbox3 + def test_ll_arraycopy(self): h = HeapCache() h.new_array(box1, 
lengthbox1) diff --git a/pypy/jit/metainterp/test/test_quasiimmut.py b/pypy/jit/metainterp/test/test_quasiimmut.py --- a/pypy/jit/metainterp/test/test_quasiimmut.py +++ b/pypy/jit/metainterp/test/test_quasiimmut.py @@ -8,7 +8,7 @@ from pypy.jit.metainterp.quasiimmut import get_current_qmut_instance from pypy.jit.metainterp.test.support import LLJitMixin from pypy.jit.codewriter.policy import StopAtXPolicy -from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe +from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe, promote def test_get_current_qmut_instance(): @@ -506,6 +506,27 @@ "guard_not_invalidated": 2 }) + def test_issue1080(self): + myjitdriver = JitDriver(greens=[], reds=["n", "sa", "a"]) + class Foo(object): + _immutable_fields_ = ["x?"] + def __init__(self, x): + self.x = x + one, two = Foo(1), Foo(2) + def main(n): + sa = 0 + a = one + while n: + myjitdriver.jit_merge_point(n=n, sa=sa, a=a) + sa += a.x + if a.x == 1: + a = two + elif a.x == 2: + a = one + n -= 1 + return sa + res = self.meta_interp(main, [10]) + assert res == main(10) class TestLLtypeGreenFieldsTests(QuasiImmutTests, LLJitMixin): pass diff --git a/pypy/module/_continuation/test/test_zpickle.py b/pypy/module/_continuation/test/test_zpickle.py --- a/pypy/module/_continuation/test/test_zpickle.py +++ b/pypy/module/_continuation/test/test_zpickle.py @@ -1,9 +1,10 @@ +import py from pypy.conftest import gettestobjspace class AppTestCopy: def setup_class(cls): - py3k_skip("_continuation not supported yet") + py.test.py3k_skip("_continuation not supported yet") cls.space = gettestobjspace(usemodules=('_continuation',), CALL_METHOD=True) cls.space.config.translation.continuation = True @@ -107,6 +108,7 @@ version = 0 def setup_class(cls): + py.test.py3k_skip("_continuation not supported yet") cls.space = gettestobjspace(usemodules=('_continuation', 'struct'), CALL_METHOD=True) cls.space.config.translation.continuation = True diff --git 
a/pypy/module/_ffi/test/test__ffi.py b/pypy/module/_ffi/test/test__ffi.py --- a/pypy/module/_ffi/test/test__ffi.py +++ b/pypy/module/_ffi/test/test__ffi.py @@ -121,6 +121,7 @@ return x+y; } """ + py3k_skip('missing support for longs') import sys from _ffi import CDLL, types libfoo = CDLL(self.libfoo_name) @@ -202,6 +203,7 @@ return len; } """ + py3k_skip('missing support for unicode') from _ffi import CDLL, types import _rawffi libfoo = CDLL(self.libfoo_name) @@ -252,6 +254,7 @@ return s; } """ + py3k_skip('missing support for unicode') from _ffi import CDLL, types import _rawffi libfoo = CDLL(self.libfoo_name) @@ -323,6 +326,7 @@ return x+y; } """ + py3k_skip('missing support for longs') import sys from _ffi import CDLL, types libfoo = CDLL(self.libfoo_name) @@ -449,6 +453,7 @@ return x+y; } """ + py3k_skip('missing support for ulonglong') from _ffi import CDLL, types maxint64 = 9223372036854775807 # maxint64+1 does not fit into a # longlong, but it does into a diff --git a/pypy/module/_io/interp_iobase.py b/pypy/module/_io/interp_iobase.py --- a/pypy/module/_io/interp_iobase.py +++ b/pypy/module/_io/interp_iobase.py @@ -345,9 +345,13 @@ def add(self, w_iobase): assert w_iobase.streamholder is None - holder = StreamHolder(w_iobase) - w_iobase.streamholder = holder - self.streams[holder] = None + if rweakref.has_weakref_support(): + holder = StreamHolder(w_iobase) + w_iobase.streamholder = holder + self.streams[holder] = None + #else: + # no support for weakrefs, so ignore and we + # will not get autoflushing def remove(self, w_iobase): holder = w_iobase.streamholder diff --git a/pypy/module/_multiprocessing/test/test_connection.py b/pypy/module/_multiprocessing/test/test_connection.py --- a/pypy/module/_multiprocessing/test/test_connection.py +++ b/pypy/module/_multiprocessing/test/test_connection.py @@ -157,13 +157,15 @@ raises(IOError, _multiprocessing.Connection, -15) def test_byte_order(self): + import socket + if not 'fromfd' in dir(socket): + skip('No fromfd 
in socket') # The exact format of net strings (length in network byte # order) is important for interoperation with others # implementations. rhandle, whandle = self.make_pair() whandle.send_bytes("abc") whandle.send_bytes("defg") - import socket sock = socket.fromfd(rhandle.fileno(), socket.AF_INET, socket.SOCK_STREAM) data1 = sock.recv(7) diff --git a/pypy/module/_rawffi/test/test__rawffi.py b/pypy/module/_rawffi/test/test__rawffi.py --- a/pypy/module/_rawffi/test/test__rawffi.py +++ b/pypy/module/_rawffi/test/test__rawffi.py @@ -262,6 +262,7 @@ assert lib.ptr(1, [], 'i')()[0] == 42 def test_getchar(self): + py3k_skip('bytes vs unicode') import _rawffi lib = _rawffi.CDLL(self.lib_name) get_char = lib.ptr('get_char', ['P', 'H'], 'c') @@ -290,6 +291,7 @@ assert buf[:8] == b'*' + b'\x00'*6 + b'a' def test_returning_str(self): + py3k_skip('bytes vs unicode') import _rawffi lib = _rawffi.CDLL(self.lib_name) char_check = lib.ptr('char_check', ['c', 'c'], 's') @@ -558,6 +560,7 @@ raises(ValueError, "_rawffi.Array('xx')") def test_longs_ulongs(self): + py3k_skip('fails on 32bit') import _rawffi lib = _rawffi.CDLL(self.lib_name) some_huge_value = lib.ptr('some_huge_value', [], 'q') @@ -610,6 +613,7 @@ cb.free() def test_another_callback(self): + py3k_skip('fails on 32bit') import _rawffi lib = _rawffi.CDLL(self.lib_name) runcallback = lib.ptr('runcallback', ['P'], 'q') @@ -781,6 +785,7 @@ a.free() def test_truncate(self): + py3k_skip('int vs long') import _rawffi, struct a = _rawffi.Array('b')(1) a[0] = -5 @@ -1049,6 +1054,7 @@ assert oldnum == _rawffi._num_of_allocated_objects() def test_array_autofree(self): + py3k_skip('bytes vs unicode') import gc, _rawffi gc.collect() oldnum = _rawffi._num_of_allocated_objects() diff --git a/pypy/module/_winreg/test/test_winreg.py b/pypy/module/_winreg/test/test_winreg.py --- a/pypy/module/_winreg/test/test_winreg.py +++ b/pypy/module/_winreg/test/test_winreg.py @@ -198,7 +198,10 @@ import nt r = 
ExpandEnvironmentStrings(u"%windir%\\test") assert isinstance(r, unicode) - assert r == nt.environ["WINDIR"] + "\\test" + if 'WINDIR' in nt.environ.keys(): + assert r == nt.environ["WINDIR"] + "\\test" + else: + assert r == nt.environ["windir"] + "\\test" def test_long_key(self): from _winreg import ( diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -102,8 +102,8 @@ """.split() for name in constant_names: setattr(CConfig_constants, name, rffi_platform.ConstantInteger(name)) -udir.join('pypy_decl.h').write("/* Will be filled later */") -udir.join('pypy_macros.h').write("/* Will be filled later */") +udir.join('pypy_decl.h').write("/* Will be filled later */\n") +udir.join('pypy_macros.h').write("/* Will be filled later */\n") globals().update(rffi_platform.configure(CConfig_constants)) def copy_header_files(dstdir): @@ -924,12 +924,12 @@ source_dir / "pyerrors.c", source_dir / "modsupport.c", source_dir / "getargs.c", + source_dir / "abstract.c", source_dir / "unicodeobject.c", source_dir / "mysnprintf.c", source_dir / "pythonrun.c", source_dir / "sysmodule.c", source_dir / "bufferobject.c", - source_dir / "object.c", source_dir / "cobject.c", source_dir / "structseq.c", source_dir / "capsule.c", diff --git a/pypy/module/cpyext/complexobject.py b/pypy/module/cpyext/complexobject.py --- a/pypy/module/cpyext/complexobject.py +++ b/pypy/module/cpyext/complexobject.py @@ -33,6 +33,11 @@ # CPython also accepts anything return 0.0 + at cpython_api([Py_complex_ptr], PyObject) +def _PyComplex_FromCComplex(space, v): + """Create a new Python complex number object from a C Py_complex value.""" + return space.newcomplex(v.c_real, v.c_imag) + # lltype does not handle functions returning a structure. This implements a # helper function, which takes as argument a reference to the return value. 
@cpython_api([PyObject, Py_complex_ptr], lltype.Void) diff --git a/pypy/module/cpyext/dictobject.py b/pypy/module/cpyext/dictobject.py --- a/pypy/module/cpyext/dictobject.py +++ b/pypy/module/cpyext/dictobject.py @@ -111,19 +111,19 @@ def PyDict_Keys(space, w_obj): """Return a PyListObject containing all the keys from the dictionary, as in the dictionary method dict.keys().""" - return space.call_method(w_obj, "keys") + return space.call_function(space.w_list, space.call_method(w_obj, "keys")) @cpython_api([PyObject], PyObject) def PyDict_Values(space, w_obj): """Return a PyListObject containing all the values from the dictionary p, as in the dictionary method dict.values().""" - return space.call_method(w_obj, "values") + return space.call_function(space.w_list, space.call_method(w_obj, "values")) @cpython_api([PyObject], PyObject) def PyDict_Items(space, w_obj): """Return a PyListObject containing all the items from the dictionary, as in the dictionary method dict.items().""" - return space.call_method(w_obj, "items") + return space.call_function(space.w_list, space.call_method(w_obj, "items")) @cpython_api([PyObject, Py_ssize_tP, PyObjectP, PyObjectP], rffi.INT_real, error=CANNOT_FAIL) def PyDict_Next(space, w_dict, ppos, pkey, pvalue): @@ -215,3 +215,11 @@ w_frozendict = make_frozendict(space) return space.call_function(w_frozendict, w_dict) + at cpython_api([PyObject], rffi.INT_real, error=CANNOT_FAIL) +def _PyDict_HasOnlyStringKeys(space, w_dict): + keys_w = space.unpackiterable(w_dict) + for w_key in keys_w: + if not space.isinstance_w(w_key, space.w_unicode): + return 0 + return 1 + diff --git a/pypy/module/cpyext/eval.py b/pypy/module/cpyext/eval.py --- a/pypy/module/cpyext/eval.py +++ b/pypy/module/cpyext/eval.py @@ -94,7 +94,7 @@ Py_eval_input = 258 def compile_string(space, source, filename, start, flags=0): - w_source = space.wrap(source) + w_source = space.wrapbytes(source) start = rffi.cast(lltype.Signed, start) if start == Py_file_input: mode = 
'exec' diff --git a/pypy/module/cpyext/floatobject.py b/pypy/module/cpyext/floatobject.py --- a/pypy/module/cpyext/floatobject.py +++ b/pypy/module/cpyext/floatobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.api import ( CANNOT_FAIL, cpython_api, PyObject, build_type_checkers, CONST_STRING) from pypy.interpreter.error import OperationError +from pypy.rlib.rstruct import runpack PyFloat_Check, PyFloat_CheckExact = build_type_checkers("Float") @@ -33,3 +34,19 @@ backward compatibility.""" return space.call_function(space.w_float, w_obj) + at cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0) +def _PyFloat_Unpack4(space, ptr, le): + input = rffi.charpsize2str(ptr, 4) + if rffi.cast(lltype.Signed, le): + return runpack.runpack("f", input) + + at cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0) +def _PyFloat_Unpack8(space, ptr, le): + input = rffi.charpsize2str(ptr, 8) + if rffi.cast(lltype.Signed, le): + return runpack.runpack("d", input) + diff --git a/pypy/module/cpyext/funcobject.py b/pypy/module/cpyext/funcobject.py --- a/pypy/module/cpyext/funcobject.py +++ b/pypy/module/cpyext/funcobject.py @@ -116,9 +116,10 @@ return [space.str_w(w_item) for w_item in space.fixedview(w_list)] @cpython_api([rffi.INT_real, rffi.INT_real, rffi.INT_real, rffi.INT_real, + rffi.INT_real, PyObject, PyObject, PyObject, PyObject, PyObject, PyObject, PyObject, PyObject, rffi.INT_real, PyObject], PyCodeObject) -def PyCode_New(space, argcount, nlocals, stacksize, flags, +def PyCode_New(space, argcount, kwonlyargcount, nlocals, stacksize, flags, w_code, w_consts, w_names, w_varnames, w_freevars, w_cellvars, w_filename, w_funcname, firstlineno, w_lnotab): """Return a new code object. 
If you need a dummy code object to @@ -147,6 +148,7 @@ """Creates a new empty code object with the specified source location.""" return space.wrap(PyCode(space, argcount=0, + kwonlyargcount=0, nlocals=0, stacksize=0, flags=0, diff --git a/pypy/module/cpyext/import_.py b/pypy/module/cpyext/import_.py --- a/pypy/module/cpyext/import_.py +++ b/pypy/module/cpyext/import_.py @@ -24,7 +24,7 @@ w_builtin = space.getitem(w_globals, space.wrap('__builtins__')) else: # No globals -- use standard builtins, and fake globals - w_builtin = space.getbuiltinmodule('__builtin__') + w_builtin = space.getbuiltinmodule('builtins') w_globals = space.newdict() space.setitem(w_globals, space.wrap("__builtins__"), w_builtin) @@ -121,5 +121,6 @@ pathname = code.co_filename w_mod = importing.add_module(space, w_name) space.setattr(w_mod, space.wrap('__file__'), space.wrap(pathname)) - importing.exec_code_module(space, w_mod, code, pathname) + cpathname = importing.make_compiled_pathname(pathname) + importing.exec_code_module(space, w_mod, code, pathname, cpathname) return w_mod diff --git a/pypy/module/cpyext/include/bytesobject.h b/pypy/module/cpyext/include/bytesobject.h --- a/pypy/module/cpyext/include/bytesobject.h +++ b/pypy/module/cpyext/include/bytesobject.h @@ -25,3 +25,6 @@ #define _PyBytes_Join _PyString_Join #define PyBytes_AsStringAndSize PyString_AsStringAndSize #define _PyBytes_InsertThousandsGrouping _PyString_InsertThousandsGrouping + +#define PyByteArray_Check(obj) \ + PyObject_IsInstance(obj, (PyObject *)&PyByteArray_Type) diff --git a/pypy/module/cpyext/include/complexobject.h b/pypy/module/cpyext/include/complexobject.h --- a/pypy/module/cpyext/include/complexobject.h +++ b/pypy/module/cpyext/include/complexobject.h @@ -21,6 +21,8 @@ return result; } +#define PyComplex_FromCComplex(c) _PyComplex_FromCComplex(&c) + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/modsupport.h b/pypy/module/cpyext/include/modsupport.h --- 
a/pypy/module/cpyext/include/modsupport.h +++ b/pypy/module/cpyext/include/modsupport.h @@ -7,6 +7,8 @@ extern "C" { #endif +#define Py_CLEANUP_SUPPORTED 0x20000 + #define PYTHON_API_VERSION 1013 #define PYTHON_API_STRING "1013" diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -38,10 +38,19 @@ PyObject_VAR_HEAD } PyVarObject; +#ifndef PYPY_DEBUG_REFCOUNT #define Py_INCREF(ob) (Py_IncRef((PyObject *)ob)) #define Py_DECREF(ob) (Py_DecRef((PyObject *)ob)) #define Py_XINCREF(ob) (Py_IncRef((PyObject *)ob)) #define Py_XDECREF(ob) (Py_DecRef((PyObject *)ob)) +#else +#define Py_INCREF(ob) (((PyObject *)ob)->ob_refcnt++) +#define Py_DECREF(ob) ((((PyObject *)ob)->ob_refcnt > 1) ? \ + ((PyObject *)ob)->ob_refcnt-- : (Py_DecRef((PyObject *)ob))) + +#define Py_XINCREF(op) do { if ((op) == NULL) ; else Py_INCREF(op); } while (0) +#define Py_XDECREF(op) do { if ((op) == NULL) ; else Py_DECREF(op); } while (0) +#endif #define Py_CLEAR(op) \ do { \ @@ -520,6 +529,8 @@ #define PyObject_GC_New(type, typeobj) \ ( (type *) _PyObject_GC_New(typeobj) ) +#define PyObject_GC_NewVar(type, typeobj, size) \ + ( (type *) _PyObject_GC_NewVar(typeobj, size) ) /* A dummy PyGC_Head, just to please some tests. Don't use it! 
*/ typedef union _gc_head { diff --git a/pypy/module/cpyext/include/pyerrors.h b/pypy/module/cpyext/include/pyerrors.h --- a/pypy/module/cpyext/include/pyerrors.h +++ b/pypy/module/cpyext/include/pyerrors.h @@ -29,6 +29,10 @@ # define vsnprintf _vsnprintf #endif +#include +PyAPI_FUNC(int) PyOS_snprintf(char *str, size_t size, const char *format, ...); +PyAPI_FUNC(int) PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va); + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/stringobject.h b/pypy/module/cpyext/include/stringobject.h --- a/pypy/module/cpyext/include/stringobject.h +++ b/pypy/module/cpyext/include/stringobject.h @@ -7,8 +7,6 @@ extern "C" { #endif -int PyOS_snprintf(char *str, size_t size, const char *format, ...); - #define PyString_GET_SIZE(op) PyString_Size(op) #define PyString_AS_STRING(op) PyString_AsString(op) diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -40,7 +40,8 @@ when the value is NULL, instead of converting to None. */ #define T_LONGLONG 17 -#define T_ULONGLONG 18 +#define T_ULONGLONG 18 +#define T_PYSSIZET 19 /* Flags. These constants are also in structmemberdefs.py. 
*/ #define READONLY 1 diff --git a/pypy/module/cpyext/include/structseq.h b/pypy/module/cpyext/include/structseq.h --- a/pypy/module/cpyext/include/structseq.h +++ b/pypy/module/cpyext/include/structseq.h @@ -8,21 +8,21 @@ #endif typedef struct PyStructSequence_Field { - char *name; - char *doc; + char *name; + char *doc; } PyStructSequence_Field; typedef struct PyStructSequence_Desc { - char *name; - char *doc; - struct PyStructSequence_Field *fields; - int n_in_sequence; + char *name; + char *doc; + struct PyStructSequence_Field *fields; + int n_in_sequence; } PyStructSequence_Desc; extern char* PyStructSequence_UnnamedField; PyAPI_FUNC(void) PyStructSequence_InitType(PyTypeObject *type, - PyStructSequence_Desc *desc); + PyStructSequence_Desc *desc); PyAPI_FUNC(PyObject *) PyStructSequence_New(PyTypeObject* type); @@ -32,8 +32,9 @@ } PyStructSequence; /* Macro, *only* to be used to fill in brand new objects */ -#define PyStructSequence_SET_ITEM(op, i, v) \ - (((PyStructSequence *)(op))->ob_item[i] = v) +#define PyStructSequence_SET_ITEM(op, i, v) PyTuple_SET_ITEM(op, i, v) + +#define PyStructSequence_GET_ITEM(op, i) PyTuple_GET_ITEM(op, i) #ifdef __cplusplus } diff --git a/pypy/module/cpyext/include/unicodeobject.h b/pypy/module/cpyext/include/unicodeobject.h --- a/pypy/module/cpyext/include/unicodeobject.h +++ b/pypy/module/cpyext/include/unicodeobject.h @@ -29,6 +29,16 @@ PyObject *PyUnicode_FromFormatV(const char *format, va_list vargs); PyObject *PyUnicode_FromFormat(const char *format, ...); +Py_LOCAL_INLINE(size_t) Py_UNICODE_strlen(const Py_UNICODE *u) +{ + int res = 0; + while(*u++) + res++; + return res; +} + + + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/intobject.py b/pypy/module/cpyext/intobject.py --- a/pypy/module/cpyext/intobject.py +++ b/pypy/module/cpyext/intobject.py @@ -76,12 +76,8 @@ unsigned long. This function does not check for overflow. 
""" w_int = space.int(w_obj) - if space.is_true(space.isinstance(w_int, space.w_int)): - num = space.int_w(w_int) - return r_uint(num) - else: - num = space.bigint_w(w_int) - return num.uintmask() + num = space.bigint_w(w_int) + return num.uintmask() @cpython_api([PyObject], lltype.Signed, error=CANNOT_FAIL) def PyInt_AS_LONG(space, w_int): diff --git a/pypy/module/cpyext/iterator.py b/pypy/module/cpyext/iterator.py --- a/pypy/module/cpyext/iterator.py +++ b/pypy/module/cpyext/iterator.py @@ -22,7 +22,7 @@ cannot be iterated.""" return space.iter(w_obj) - at cpython_api([PyObject], PyObject, error=CANNOT_FAIL) + at cpython_api([PyObject], PyObject) def PyIter_Next(space, w_obj): """Return the next value from the iteration o. If the object is an iterator, this retrieves the next value from the iteration, and returns diff --git a/pypy/module/cpyext/listobject.py b/pypy/module/cpyext/listobject.py --- a/pypy/module/cpyext/listobject.py +++ b/pypy/module/cpyext/listobject.py @@ -110,6 +110,16 @@ space.call_method(w_list, "reverse") return 0 + at cpython_api([PyObject, Py_ssize_t, Py_ssize_t], PyObject) +def PyList_GetSlice(space, w_list, low, high): + """Return a list of the objects in list containing the objects between low + and high. Return NULL and set an exception if unsuccessful. Analogous + to list[low:high]. 
Negative indices, as when slicing from Python, are not + supported.""" + w_start = space.wrap(low) + w_stop = space.wrap(high) + return space.getslice(w_list, w_start, w_stop) + @cpython_api([PyObject, Py_ssize_t, Py_ssize_t, PyObject], rffi.INT_real, error=-1) def PyList_SetSlice(space, w_list, low, high, w_sequence): """Set the slice of list between low and high to the contents of diff --git a/pypy/module/cpyext/longobject.py b/pypy/module/cpyext/longobject.py --- a/pypy/module/cpyext/longobject.py +++ b/pypy/module/cpyext/longobject.py @@ -1,6 +1,7 @@ from pypy.rpython.lltypesystem import lltype, rffi -from pypy.module.cpyext.api import (cpython_api, PyObject, build_type_checkers, - CONST_STRING, ADDR, CANNOT_FAIL) +from pypy.module.cpyext.api import ( + cpython_api, PyObject, build_type_checkers, Py_ssize_t, + CONST_STRING, ADDR, CANNOT_FAIL) from pypy.objspace.std.longobject import W_LongObject from pypy.interpreter.error import OperationError from pypy.module.cpyext.intobject import PyInt_AsUnsignedLongMask @@ -8,13 +9,20 @@ from pypy.rlib.rarithmetic import intmask -PyLong_Check, PyLong_CheckExact = build_type_checkers("Long") +PyLong_Check, PyLong_CheckExact = build_type_checkers("Long", "w_int") @cpython_api([lltype.Signed], PyObject) def PyLong_FromLong(space, val): """Return a new PyLongObject object from v, or NULL on failure.""" return space.newlong(val) + at cpython_api([Py_ssize_t], PyObject) +def PyLong_FromSsize_t(space, val): + """Return a new PyLongObject object from a C Py_ssize_t, or + NULL on failure. + """ + return space.newlong(val) + @cpython_api([rffi.LONGLONG], PyObject) def PyLong_FromLongLong(space, val): """Return a new PyLongObject object from a C long long, or NULL @@ -56,6 +64,14 @@ and -1 will be returned.""" return space.int_w(w_long) + at cpython_api([PyObject], Py_ssize_t, error=-1) +def PyLong_AsSsize_t(space, w_long): + """Return a C Py_ssize_t representation of the contents of pylong. 
If + pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised + and -1 will be returned. + """ + return space.int_w(w_long) + @cpython_api([PyObject], rffi.LONGLONG, error=-1) def PyLong_AsLongLong(space, w_long): """ diff --git a/pypy/module/cpyext/mapping.py b/pypy/module/cpyext/mapping.py --- a/pypy/module/cpyext/mapping.py +++ b/pypy/module/cpyext/mapping.py @@ -22,20 +22,23 @@ def PyMapping_Keys(space, w_obj): """On success, return a list of the keys in object o. On failure, return NULL. This is equivalent to the Python expression o.keys().""" - return space.call_method(w_obj, "keys") + return space.call_function(space.w_list, + space.call_method(w_obj, "keys")) @cpython_api([PyObject], PyObject) def PyMapping_Values(space, w_obj): """On success, return a list of the values in object o. On failure, return NULL. This is equivalent to the Python expression o.values().""" - return space.call_method(w_obj, "values") + return space.call_function(space.w_list, + space.call_method(w_obj, "values")) @cpython_api([PyObject], PyObject) def PyMapping_Items(space, w_obj): """On success, return a list of the items in object o, where each item is a tuple containing a key-value pair. On failure, return NULL. 
This is equivalent to the Python expression o.items().""" - return space.call_method(w_obj, "items") + return space.call_function(space.w_list, + space.call_method(w_obj, "items")) @cpython_api([PyObject, CONST_STRING], PyObject) def PyMapping_GetItemString(space, w_obj, key): diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py --- a/pypy/module/cpyext/object.py +++ b/pypy/module/cpyext/object.py @@ -57,6 +57,10 @@ def _PyObject_GC_New(space, type): return _PyObject_New(space, type) + at cpython_api([PyTypeObjectPtr, Py_ssize_t], PyObject) +def _PyObject_GC_NewVar(space, type, itemcount): + return _PyObject_NewVar(space, type, itemcount) + @cpython_api([rffi.VOIDP], lltype.Void) def PyObject_GC_Del(space, obj): PyObject_Del(space, obj) @@ -245,26 +249,6 @@ function.""" return space.call_function(space.w_unicode, w_obj) - at cpython_api([PyObject, PyObject], rffi.INT_real, error=-1) -def PyObject_Compare(space, w_o1, w_o2): - """ - Compare the values of o1 and o2 using a routine provided by o1, if one - exists, otherwise with a routine provided by o2. Returns the result of the - comparison on success. On error, the value returned is undefined; use - PyErr_Occurred() to detect an error. This is equivalent to the Python - expression cmp(o1, o2).""" - return space.int_w(space.cmp(w_o1, w_o2)) - - at cpython_api([PyObject, PyObject, rffi.INTP], rffi.INT_real, error=-1) -def PyObject_Cmp(space, w_o1, w_o2, result): - """Compare the values of o1 and o2 using a routine provided by o1, if one - exists, otherwise with a routine provided by o2. The result of the - comparison is returned in result. Returns -1 on failure. 
This is the - equivalent of the Python statement result = cmp(o1, o2).""" - res = space.int_w(space.cmp(w_o1, w_o2)) - result[0] = rffi.cast(rffi.INT, res) - return 0 - @cpython_api([PyObject, PyObject, rffi.INT_real], PyObject) def PyObject_RichCompare(space, w_o1, w_o2, opid_int): """Compare the values of o1 and o2 using the operation specified by opid, @@ -390,6 +374,15 @@ This is the equivalent of the Python expression hash(o).""" return space.int_w(space.hash(w_obj)) + at cpython_api([PyObject], lltype.Signed, error=-1) +def PyObject_HashNotImplemented(space, o): + """Set a TypeError indicating that type(o) is not hashable and return -1. + This function receives special treatment when stored in a tp_hash slot, + allowing a type to explicitly indicate to the interpreter that it is not + hashable. + """ + raise OperationError(space.w_TypeError, space.wrap("unhashable type")) + @cpython_api([PyObject], PyObject) def PyObject_Dir(space, w_o): """This is equivalent to the Python expression dir(o), returning a (possibly diff --git a/pypy/module/cpyext/pyerrors.py b/pypy/module/cpyext/pyerrors.py --- a/pypy/module/cpyext/pyerrors.py +++ b/pypy/module/cpyext/pyerrors.py @@ -313,7 +313,10 @@ """This function simulates the effect of a SIGINT signal arriving --- the next time PyErr_CheckSignals() is called, KeyboardInterrupt will be raised. It may be called without holding the interpreter lock.""" - space.check_signal_action.set_interrupt() + if space.check_signal_action is not None: + space.check_signal_action.set_interrupt() + #else: + # no 'signal' module present, ignore... 
We can't return an error here @cpython_api([PyObjectP, PyObjectP, PyObjectP], lltype.Void) def PyErr_GetExcInfo(space, ptype, pvalue, ptraceback): diff --git a/pypy/module/cpyext/pyfile.py b/pypy/module/cpyext/pyfile.py --- a/pypy/module/cpyext/pyfile.py +++ b/pypy/module/cpyext/pyfile.py @@ -1,6 +1,6 @@ from pypy.rpython.lltypesystem import rffi, lltype from pypy.module.cpyext.api import ( - cpython_api, CANNOT_FAIL, CONST_STRING, FILEP, build_type_checkers) + cpython_api, CANNOT_FAIL, CONST_STRING, FILEP) from pypy.module.cpyext.pyobject import PyObject, borrow_from from pypy.module.cpyext.object import Py_PRINT_RAW from pypy.interpreter.error import OperationError @@ -38,9 +38,9 @@ On success, return a new file object that is opened on the file given by filename, with a file mode given by mode, where mode has the same semantics as the standard C routine fopen(). On failure, return NULL.""" - w_filename = space.wrap(rffi.charp2str(filename)) + w_filename = space.wrapbytes(rffi.charp2str(filename)) w_mode = space.wrap(rffi.charp2str(mode)) - return space.call_method(space.builtin, 'file', w_filename, w_mode) + return space.call_method(space.builtin, 'open', w_filename, w_mode) @cpython_api([FILEP, CONST_STRING, CONST_STRING, rffi.VOIDP], PyObject) def PyFile_FromFile(space, fp, name, mode, close): diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -7,7 +7,7 @@ cpython_api, generic_cpy_call, PyObject, Py_ssize_t) from pypy.module.cpyext.typeobjectdefs import ( unaryfunc, wrapperfunc, ternaryfunc, PyTypeObjectPtr, binaryfunc, - getattrfunc, getattrofunc, setattrofunc, lenfunc, ssizeargfunc, + getattrfunc, getattrofunc, setattrofunc, lenfunc, ssizeargfunc, inquiry, ssizessizeargfunc, ssizeobjargproc, iternextfunc, initproc, richcmpfunc, cmpfunc, hashfunc, descrgetfunc, descrsetfunc, objobjproc, objobjargproc, readbufferproc) @@ -60,6 +60,16 @@ args_w = 
space.fixedview(w_args) return generic_cpy_call(space, func_binary, w_self, args_w[0]) +def wrap_inquirypred(space, w_self, w_args, func): + func_inquiry = rffi.cast(inquiry, func) + check_num_args(space, w_args, 0) + args_w = space.fixedview(w_args) + res = generic_cpy_call(space, func_inquiry, w_self) + res = rffi.cast(lltype.Signed, res) + if res == -1: + space.fromcache(State).check_and_raise_exception() + return space.wrap(bool(res)) + def wrap_getattr(space, w_self, w_args, func): func_target = rffi.cast(getattrfunc, func) check_num_args(space, w_args, 1) diff --git a/pypy/module/cpyext/src/abstract.c b/pypy/module/cpyext/src/abstract.c new file mode 100644 --- /dev/null +++ b/pypy/module/cpyext/src/abstract.c @@ -0,0 +1,269 @@ +/* Abstract Object Interface */ + +#include "Python.h" + +/* Shorthands to return certain errors */ + +static PyObject * +type_error(const char *msg, PyObject *obj) +{ + PyErr_Format(PyExc_TypeError, msg, obj->ob_type->tp_name); + return NULL; +} + +static PyObject * +null_error(void) +{ + if (!PyErr_Occurred()) + PyErr_SetString(PyExc_SystemError, + "null argument to internal routine"); + return NULL; +} + +/* Operations on any object */ + +int +PyObject_CheckReadBuffer(PyObject *obj) +{ + PyBufferProcs *pb = obj->ob_type->tp_as_buffer; + + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL || + (*pb->bf_getsegcount)(obj, NULL) != 1) + return 0; + return 1; +} + +int PyObject_AsReadBuffer(PyObject *obj, + const void **buffer, + Py_ssize_t *buffer_len) +{ + PyBufferProcs *pb; + void *pp; + Py_ssize_t len; + + if (obj == NULL || buffer == NULL || buffer_len == NULL) { + null_error(); + return -1; + } + pb = obj->ob_type->tp_as_buffer; + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL) { + PyErr_SetString(PyExc_TypeError, + "expected a readable buffer object"); + return -1; + } + if ((*pb->bf_getsegcount)(obj, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "expected a 
single-segment buffer object"); + return -1; + } + len = (*pb->bf_getreadbuffer)(obj, 0, &pp); + if (len < 0) + return -1; + *buffer = pp; + *buffer_len = len; + return 0; +} + +int PyObject_AsWriteBuffer(PyObject *obj, + void **buffer, + Py_ssize_t *buffer_len) +{ + PyBufferProcs *pb; + void*pp; + Py_ssize_t len; + + if (obj == NULL || buffer == NULL || buffer_len == NULL) { + null_error(); + return -1; + } + pb = obj->ob_type->tp_as_buffer; + if (pb == NULL || + pb->bf_getwritebuffer == NULL || + pb->bf_getsegcount == NULL) { + PyErr_SetString(PyExc_TypeError, + "expected a writeable buffer object"); + return -1; + } + if ((*pb->bf_getsegcount)(obj, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "expected a single-segment buffer object"); + return -1; + } + len = (*pb->bf_getwritebuffer)(obj,0,&pp); + if (len < 0) + return -1; + *buffer = pp; + *buffer_len = len; + return 0; +} + +/* Operations on callable objects */ + +static PyObject* +call_function_tail(PyObject *callable, PyObject *args) +{ + PyObject *retval; + + if (args == NULL) + return NULL; + + if (!PyTuple_Check(args)) { + PyObject *a; + + a = PyTuple_New(1); + if (a == NULL) { + Py_DECREF(args); + return NULL; + } + PyTuple_SET_ITEM(a, 0, args); + args = a; + } + retval = PyObject_Call(callable, args, NULL); + + Py_DECREF(args); + + return retval; +} + +PyObject * +PyObject_CallFunction(PyObject *callable, const char *format, ...) +{ + va_list va; + PyObject *args; + + if (callable == NULL) + return null_error(); + + if (format && *format) { + va_start(va, format); + args = Py_VaBuildValue(format, va); + va_end(va); + } + else + args = PyTuple_New(0); + + return call_function_tail(callable, args); +} + +PyObject * +PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) 
+{ + va_list va; + PyObject *args; + PyObject *func = NULL; + PyObject *retval = NULL; + + if (o == NULL || name == NULL) + return null_error(); + + func = PyObject_GetAttrString(o, name); + if (func == NULL) { + PyErr_SetString(PyExc_AttributeError, name); + return 0; + } + + if (!PyCallable_Check(func)) { + type_error("attribute of type '%.200s' is not callable", func); + goto exit; + } + + if (format && *format) { + va_start(va, format); + args = Py_VaBuildValue(format, va); + va_end(va); + } + else + args = PyTuple_New(0); + + retval = call_function_tail(func, args); + + exit: + /* args gets consumed in call_function_tail */ + Py_XDECREF(func); + + return retval; +} + +static PyObject * +objargs_mktuple(va_list va) +{ + int i, n = 0; + va_list countva; + PyObject *result, *tmp; + +#ifdef VA_LIST_IS_ARRAY + memcpy(countva, va, sizeof(va_list)); +#else +#ifdef __va_copy + __va_copy(countva, va); +#else + countva = va; +#endif +#endif + + while (((PyObject *)va_arg(countva, PyObject *)) != NULL) + ++n; + result = PyTuple_New(n); + if (result != NULL && n > 0) { + for (i = 0; i < n; ++i) { + tmp = (PyObject *)va_arg(va, PyObject *); + Py_INCREF(tmp); + PyTuple_SET_ITEM(result, i, tmp); + } + } + return result; +} + +PyObject * +PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...) +{ + PyObject *args, *tmp; + va_list vargs; + + if (callable == NULL || name == NULL) + return null_error(); + + callable = PyObject_GetAttr(callable, name); + if (callable == NULL) + return NULL; + + /* count the args */ + va_start(vargs, name); + args = objargs_mktuple(vargs); + va_end(vargs); + if (args == NULL) { + Py_DECREF(callable); + return NULL; + } + tmp = PyObject_Call(callable, args, NULL); + Py_DECREF(args); + Py_DECREF(callable); + + return tmp; +} + +PyObject * +PyObject_CallFunctionObjArgs(PyObject *callable, ...) 
+{ + PyObject *args, *tmp; + va_list vargs; + + if (callable == NULL) + return null_error(); + + /* count the args */ + va_start(vargs, callable); + args = objargs_mktuple(vargs); + va_end(vargs); + if (args == NULL) + return NULL; + tmp = PyObject_Call(callable, args, NULL); + Py_DECREF(args); + + return tmp; +} + diff --git a/pypy/module/cpyext/src/bufferobject.c b/pypy/module/cpyext/src/bufferobject.c --- a/pypy/module/cpyext/src/bufferobject.c +++ b/pypy/module/cpyext/src/bufferobject.c @@ -34,13 +34,13 @@ proc = bp->bf_getreadbuffer; else if ((buffer_type == WRITE_BUFFER) || (buffer_type == ANY_BUFFER)) - proc = (readbufferproc)bp->bf_getwritebuffer; + proc = (readbufferproc)bp->bf_getwritebuffer; else if (buffer_type == CHAR_BUFFER) { if (!PyType_HasFeature(self->ob_type, - Py_TPFLAGS_HAVE_GETCHARBUFFER)) { - PyErr_SetString(PyExc_TypeError, - "Py_TPFLAGS_HAVE_GETCHARBUFFER needed"); - return 0; + Py_TPFLAGS_HAVE_GETCHARBUFFER)) { + PyErr_SetString(PyExc_TypeError, + "Py_TPFLAGS_HAVE_GETCHARBUFFER needed"); + return 0; } proc = (readbufferproc)bp->bf_getcharbuffer; } @@ -86,18 +86,18 @@ static PyObject * buffer_from_memory(PyObject *base, Py_ssize_t size, Py_ssize_t offset, void *ptr, - int readonly) + int readonly) { PyBufferObject * b; if (size < 0 && size != Py_END_OF_BUFFER) { PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); + "size must be zero or positive"); return NULL; } if (offset < 0) { PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); + "offset must be zero or positive"); return NULL; } @@ -121,7 +121,7 @@ { if (offset < 0) { PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); + "offset must be zero or positive"); return NULL; } if ( PyBuffer_Check(base) && (((PyBufferObject *)base)->b_base) ) { @@ -193,7 +193,7 @@ if (size < 0) { PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); + "size must be zero or positive"); return NULL; } if (sizeof(*b) > PY_SSIZE_T_MAX - 
size) { @@ -225,9 +225,11 @@ Py_ssize_t offset = 0; Py_ssize_t size = Py_END_OF_BUFFER; - /*if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) - return NULL;*/ - + /* + * if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) + * return NULL; + */ + if (!_PyArg_NoKeywords("buffer()", kw)) return NULL; @@ -278,12 +280,11 @@ const char *status = self->b_readonly ? "read-only" : "read-write"; if ( self->b_base == NULL ) - return PyUnicode_FromFormat( - "<%s buffer ptr %p, size %zd at %p>", - status, - self->b_ptr, - self->b_size, - self); + return PyUnicode_FromFormat("<%s buffer ptr %p, size %zd at %p>", + status, + self->b_ptr, + self->b_size, + self); else return PyUnicode_FromFormat( "<%s buffer for %p, size %zd, offset %zd at %p>", @@ -316,7 +317,7 @@ if ( !self->b_readonly ) { PyErr_SetString(PyExc_TypeError, - "writable buffers are not hashable"); + "writable buffers are not hashable"); return -1; } @@ -376,13 +377,13 @@ { /* ### use a different exception type/message? 
*/ PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); + "single-segment buffer object expected"); return NULL; } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return NULL; - + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return NULL; + /* optimize special case */ if ( size == 0 ) { @@ -395,12 +396,12 @@ assert(count <= PY_SIZE_MAX - size); - ob = PyString_FromStringAndSize(NULL, size + count); + ob = PyString_FromStringAndSize(NULL, size + count); if ( ob == NULL ) return NULL; - p = PyString_AS_STRING(ob); - memcpy(p, ptr1, size); - memcpy(p + size, ptr2, count); + p = PyString_AS_STRING(ob); + memcpy(p, ptr1, size); + memcpy(p + size, ptr2, count); /* there is an extra byte in the string object, so this is safe */ p[size + count] = '\0'; @@ -471,7 +472,7 @@ if ( right < left ) right = left; return PyString_FromStringAndSize((char *)ptr + left, - right - left); + right - left); } static PyObject * @@ -479,10 +480,9 @@ { void *p; Py_ssize_t size; - + if (!get_buf(self, &p, &size, ANY_BUFFER)) return NULL; - if (PyIndex_Check(item)) { Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); if (i == -1 && PyErr_Occurred()) @@ -495,7 +495,7 @@ Py_ssize_t start, stop, step, slicelength, cur, i; if (PySlice_GetIndicesEx((PySliceObject*)item, size, - &start, &stop, &step, &slicelength) < 0) { + &start, &stop, &step, &slicelength) < 0) { return NULL; } @@ -503,7 +503,7 @@ return PyString_FromStringAndSize("", 0); else if (step == 1) return PyString_FromStringAndSize((char *)p + start, - stop - start); + stop - start); else { PyObject *result; char *source_buf = (char *)p; @@ -518,14 +518,14 @@ } result = PyString_FromStringAndSize(result_buf, - slicelength); + slicelength); PyMem_Free(result_buf); return result; } } else { PyErr_SetString(PyExc_TypeError, - "sequence index must be integer"); + "sequence index must be integer"); return NULL; } } @@ -540,7 +540,7 @@ if ( self->b_readonly ) { PyErr_SetString(PyExc_TypeError, - "buffer is 
read-only"); + "buffer is read-only"); return -1; } @@ -549,7 +549,7 @@ if (idx < 0 || idx >= size) { PyErr_SetString(PyExc_IndexError, - "buffer assignment index out of range"); + "buffer assignment index out of range"); return -1; } @@ -565,7 +565,7 @@ { /* ### use a different exception type/message? */ PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); + "single-segment buffer object expected"); return -1; } @@ -573,7 +573,7 @@ return -1; if ( count != 1 ) { PyErr_SetString(PyExc_TypeError, - "right operand must be a single byte"); + "right operand must be a single byte"); return -1; } @@ -592,7 +592,7 @@ if ( self->b_readonly ) { PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); + "buffer is read-only"); return -1; } @@ -608,7 +608,7 @@ { /* ### use a different exception type/message? */ PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); + "single-segment buffer object expected"); return -1; } if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) @@ -649,7 +649,7 @@ if ( self->b_readonly ) { PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); + "buffer is read-only"); return -1; } @@ -665,12 +665,11 @@ { /* ### use a different exception type/message? 
*/ PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); + "single-segment buffer object expected"); return -1; } if (!get_buf(self, &ptr1, &selfsize, ANY_BUFFER)) return -1; - if (PyIndex_Check(item)) { Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); if (i == -1 && PyErr_Occurred()) @@ -681,9 +680,9 @@ } else if (PySlice_Check(item)) { Py_ssize_t start, stop, step, slicelength; - + if (PySlice_GetIndicesEx((PySliceObject *)item, selfsize, - &start, &stop, &step, &slicelength) < 0) + &start, &stop, &step, &slicelength) < 0) return -1; if ((othersize = (*pb->bf_getreadbuffer)(value, 0, &ptr2)) < 0) @@ -704,7 +703,7 @@ } else { Py_ssize_t cur, i; - + for (cur = start, i = 0; i < slicelength; cur += step, i++) { ((char *)ptr1)[cur] = ((char *)ptr2)[i]; @@ -714,7 +713,7 @@ } } else { PyErr_SetString(PyExc_TypeError, - "buffer indices must be integers"); + "buffer indices must be integers"); return -1; } } @@ -727,7 +726,7 @@ Py_ssize_t size; if ( idx != 0 ) { PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); + "accessing non-existent buffer segment"); return -1; } if (!get_buf(self, pp, &size, READ_BUFFER)) @@ -748,7 +747,7 @@ if ( idx != 0 ) { PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); + "accessing non-existent buffer segment"); return -1; } if (!get_buf(self, pp, &size, WRITE_BUFFER)) @@ -775,7 +774,7 @@ Py_ssize_t size; if ( idx != 0 ) { PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); + "accessing non-existent buffer segment"); return -1; } if (!get_buf(self, &ptr, &size, CHAR_BUFFER)) @@ -813,44 +812,42 @@ }; PyTypeObject PyBuffer_Type = { - PyObject_HEAD_INIT(NULL) - 0, + PyVarObject_HEAD_INIT(NULL, 0) "buffer", sizeof(PyBufferObject), 0, - (destructor)buffer_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - (cmpfunc)buffer_compare, /* tp_compare */ - (reprfunc)buffer_repr, /* tp_repr */ - 0, /* 
tp_as_number */ - &buffer_as_sequence, /* tp_as_sequence */ - &buffer_as_mapping, /* tp_as_mapping */ - (hashfunc)buffer_hash, /* tp_hash */ - 0, /* tp_call */ - (reprfunc)buffer_str, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - &buffer_as_buffer, /* tp_as_buffer */ + (destructor)buffer_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + (cmpfunc)buffer_compare, /* tp_compare */ + (reprfunc)buffer_repr, /* tp_repr */ + 0, /* tp_as_number */ + &buffer_as_sequence, /* tp_as_sequence */ + &buffer_as_mapping, /* tp_as_mapping */ + (hashfunc)buffer_hash, /* tp_hash */ + 0, /* tp_call */ + (reprfunc)buffer_str, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + &buffer_as_buffer, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GETCHARBUFFER, /* tp_flags */ - buffer_doc, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - buffer_new, /* tp_new */ + buffer_doc, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + buffer_new, /* tp_new */ }; - diff --git a/pypy/module/cpyext/src/cobject.c b/pypy/module/cpyext/src/cobject.c --- a/pypy/module/cpyext/src/cobject.c +++ b/pypy/module/cpyext/src/cobject.c @@ -50,6 +50,10 @@ PyCObject_AsVoidPtr(PyObject *self) { if (self) { + if (PyCapsule_CheckExact(self)) { + const char *name = 
PyCapsule_GetName(self); + return (void *)PyCapsule_GetPointer(self, name); + } if (self->ob_type == &PyCObject_Type) return ((PyCObject *)self)->cobject; PyErr_SetString(PyExc_TypeError, diff --git a/pypy/module/cpyext/src/getargs.c b/pypy/module/cpyext/src/getargs.c --- a/pypy/module/cpyext/src/getargs.c +++ b/pypy/module/cpyext/src/getargs.c @@ -7,349 +7,375 @@ #ifdef __cplusplus -extern "C" { +extern "C" { #endif - int PyArg_Parse(PyObject *, const char *, ...); int PyArg_ParseTuple(PyObject *, const char *, ...); int PyArg_VaParse(PyObject *, const char *, va_list); int PyArg_ParseTupleAndKeywords(PyObject *, PyObject *, - const char *, char **, ...); + const char *, char **, ...); int PyArg_VaParseTupleAndKeywords(PyObject *, PyObject *, - const char *, char **, va_list); + const char *, char **, va_list); #define FLAG_COMPAT 1 #define FLAG_SIZE_T 2 -typedef int (*destr_t)(PyObject *, void *); - - -/* Keep track of "objects" that have been allocated or initialized and - which will need to be deallocated or cleaned up somehow if overall - parsing fails. 
-*/ -typedef struct { - void *item; - destr_t destructor; -} freelistentry_t; - -typedef struct { - int first_available; - freelistentry_t *entries; -} freelist_t; - /* Forward */ static int vgetargs1(PyObject *, const char *, va_list *, int); static void seterror(int, const char *, int *, const char *, const char *); -static char *convertitem(PyObject *, const char **, va_list *, int, int *, - char *, size_t, freelist_t *); +static char *convertitem(PyObject *, const char **, va_list *, int, int *, + char *, size_t, PyObject **); static char *converttuple(PyObject *, const char **, va_list *, int, - int *, char *, size_t, int, freelist_t *); + int *, char *, size_t, int, PyObject **); static char *convertsimple(PyObject *, const char **, va_list *, int, char *, - size_t, freelist_t *); + size_t, PyObject **); static Py_ssize_t convertbuffer(PyObject *, void **p, char **); static int getbuffer(PyObject *, Py_buffer *, char**); static int vgetargskeywords(PyObject *, PyObject *, - const char *, char **, va_list *, int); + const char *, char **, va_list *, int); static char *skipitem(const char **, va_list *, int); int PyArg_Parse(PyObject *args, const char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_COMPAT); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_COMPAT); + va_end(va); + return retval; } int _PyArg_Parse_SizeT(PyObject *args, char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_ParseTuple(PyObject *args, const char *format, ...) 
{ - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, 0); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, 0); + va_end(va); + return retval; } int _PyArg_ParseTuple_SizeT(PyObject *args, char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_SIZE_T); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_VaParse(PyObject *args, const char *format, va_list va) { - va_list lva; + va_list lva; -#ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); -#else -#ifdef __va_copy - __va_copy(lva, va); -#else - lva = va; -#endif -#endif + Py_VA_COPY(lva, va); - return vgetargs1(args, format, &lva, 0); + return vgetargs1(args, format, &lva, 0); } int _PyArg_VaParse_SizeT(PyObject *args, char *format, va_list va) { - va_list lva; + va_list lva; -#ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); -#else -#ifdef __va_copy - __va_copy(lva, va); -#else - lva = va; -#endif -#endif + Py_VA_COPY(lva, va); - return vgetargs1(args, format, &lva, FLAG_SIZE_T); + return vgetargs1(args, format, &lva, FLAG_SIZE_T); } /* Handle cleanup of allocated memory in case of exception */ -static int -cleanup_ptr(PyObject *self, void *ptr) +#define GETARGS_CAPSULE_NAME_CLEANUP_PTR "getargs.cleanup_ptr" +#define GETARGS_CAPSULE_NAME_CLEANUP_BUFFER "getargs.cleanup_buffer" +#define GETARGS_CAPSULE_NAME_CLEANUP_CONVERT "getargs.cleanup_convert" + +static void +cleanup_ptr(PyObject *self) { + void *ptr = PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_PTR); if (ptr) { PyMem_FREE(ptr); } +} + +static void +cleanup_buffer(PyObject *self) +{ + Py_buffer *ptr = (Py_buffer *)PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_BUFFER); + if (ptr) { + PyBuffer_Release(ptr); + } +} 
+ +static int +addcleanup(void *ptr, PyObject **freelist, int is_buffer) +{ + PyObject *cobj; + const char *name; + PyCapsule_Destructor destr; + + if (is_buffer) { + destr = cleanup_buffer; + name = GETARGS_CAPSULE_NAME_CLEANUP_BUFFER; + } else { + destr = cleanup_ptr; + name = GETARGS_CAPSULE_NAME_CLEANUP_PTR; + } + + if (!*freelist) { + *freelist = PyList_New(0); + if (!*freelist) { + destr(ptr); + return -1; + } + } + + cobj = PyCapsule_New(ptr, name, destr); + if (!cobj) { + destr(ptr); + return -1; + } + if (PyList_Append(*freelist, cobj)) { + Py_DECREF(cobj); + return -1; + } + Py_DECREF(cobj); + return 0; +} + +static void +cleanup_convert(PyObject *self) +{ + typedef int (*destr_t)(PyObject *, void *); + destr_t destr = (destr_t)PyCapsule_GetContext(self); + void *ptr = PyCapsule_GetPointer(self, + GETARGS_CAPSULE_NAME_CLEANUP_CONVERT); + if (ptr && destr) + destr(NULL, ptr); +} + +static int +addcleanup_convert(void *ptr, PyObject **freelist, int (*destr)(PyObject*,void*)) +{ + PyObject *cobj; + if (!*freelist) { + *freelist = PyList_New(0); + if (!*freelist) { + destr(NULL, ptr); + return -1; + } + } + cobj = PyCapsule_New(ptr, GETARGS_CAPSULE_NAME_CLEANUP_CONVERT, + cleanup_convert); + if (!cobj) { + destr(NULL, ptr); + return -1; + } + if (PyCapsule_SetContext(cobj, destr) == -1) { + /* This really should not happen. */ + Py_FatalError("capsule refused setting of context."); + } + if (PyList_Append(*freelist, cobj)) { + Py_DECREF(cobj); /* This will also call destr. */ + return -1; + } + Py_DECREF(cobj); return 0; } static int -cleanup_buffer(PyObject *self, void *ptr) +cleanreturn(int retval, PyObject *freelist) { - Py_buffer *buf = (Py_buffer *)ptr; - if (buf) { - PyBuffer_Release(buf); + if (freelist && retval != 0) { + /* We were successful, reset the destructors so that they + don't get called. 
*/ + Py_ssize_t len = PyList_GET_SIZE(freelist), i; + for (i = 0; i < len; i++) + PyCapsule_SetDestructor(PyList_GET_ITEM(freelist, i), NULL); } - return 0; + Py_XDECREF(freelist); + return retval; } -static int -addcleanup(void *ptr, freelist_t *freelist, destr_t destructor) -{ - int index; - - index = freelist->first_available; - freelist->first_available += 1; - - freelist->entries[index].item = ptr; - freelist->entries[index].destructor = destructor; - - return 0; -} - -static int -cleanreturn(int retval, freelist_t *freelist) -{ - int index; - - if (retval == 0) { - /* A failure occurred, therefore execute all of the cleanup - functions. - */ - for (index = 0; index < freelist->first_available; ++index) { - freelist->entries[index].destructor(NULL, - freelist->entries[index].item); - } - } - PyMem_Free(freelist->entries); - return retval; -} static int vgetargs1(PyObject *args, const char *format, va_list *p_va, int flags) { - char msgbuf[256]; - int levels[32]; - const char *fname = NULL; - const char *message = NULL; - int min = -1; - int max = 0; - int level = 0; - int endfmt = 0; - const char *formatsave = format; - Py_ssize_t i, len; - char *msg; - freelist_t freelist = {0, NULL}; - int compat = flags & FLAG_COMPAT; + char msgbuf[256]; + int levels[32]; + const char *fname = NULL; + const char *message = NULL; + int min = -1; + int max = 0; + int level = 0; + int endfmt = 0; + const char *formatsave = format; + Py_ssize_t i, len; + char *msg; + PyObject *freelist = NULL; + int compat = flags & FLAG_COMPAT; - assert(compat || (args != (PyObject*)NULL)); - flags = flags & ~FLAG_COMPAT; + assert(compat || (args != (PyObject*)NULL)); + flags = flags & ~FLAG_COMPAT; - while (endfmt == 0) { - int c = *format++; - switch (c) { - case '(': - if (level == 0) - max++; - level++; - if (level >= 30) - Py_FatalError("too many tuple nesting levels " - "in argument format string"); - break; - case ')': - if (level == 0) - Py_FatalError("excess ')' in getargs format"); - 
else - level--; - break; - case '\0': - endfmt = 1; - break; - case ':': - fname = format; - endfmt = 1; - break; - case ';': - message = format; - endfmt = 1; - break; - default: - if (level == 0) { - if (c == 'O') - max++; - else if (isalpha(Py_CHARMASK(c))) { - if (c != 'e') /* skip encoded */ - max++; - } else if (c == '|') - min = max; - } - break; - } - } - - if (level != 0) - Py_FatalError(/* '(' */ "missing ')' in getargs format"); - - if (min < 0) - min = max; - - format = formatsave; - - freelist.entries = PyMem_New(freelistentry_t, max); + while (endfmt == 0) { + int c = *format++; + switch (c) { + case '(': + if (level == 0) + max++; + level++; + if (level >= 30) + Py_FatalError("too many tuple nesting levels " + "in argument format string"); + break; + case ')': + if (level == 0) + Py_FatalError("excess ')' in getargs format"); + else + level--; + break; + case '\0': + endfmt = 1; + break; + case ':': + fname = format; + endfmt = 1; + break; + case ';': + message = format; + endfmt = 1; + break; + default: + if (level == 0) { + if (c == 'O') + max++; + else if (isalpha(Py_CHARMASK(c))) { + if (c != 'e') /* skip encoded */ + max++; + } else if (c == '|') + min = max; + } + break; + } + } - if (compat) { - if (max == 0) { - if (args == NULL) - return cleanreturn(1, &freelist); - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.200s%s takes no arguments", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()"); - PyErr_SetString(PyExc_TypeError, msgbuf); - return cleanreturn(0, &freelist); - } - else if (min == 1 && max == 1) { - if (args == NULL) { - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.200s%s takes at least one argument", - fname==NULL ? "function" : fname, - fname==NULL ? 
"" : "()"); - PyErr_SetString(PyExc_TypeError, msgbuf); - return cleanreturn(0, &freelist); - } - msg = convertitem(args, &format, p_va, flags, levels, - msgbuf, sizeof(msgbuf), &freelist); - if (msg == NULL) - return cleanreturn(1, &freelist); - seterror(levels[0], msg, levels+1, fname, message); - return cleanreturn(0, &freelist); - } - else { - PyErr_SetString(PyExc_SystemError, - "old style getargs format uses new features"); - return cleanreturn(0, &freelist); - } - } - - if (!PyTuple_Check(args)) { - PyErr_SetString(PyExc_SystemError, - "new style getargs format but argument is not a tuple"); - return cleanreturn(0, &freelist); - } - - len = PyTuple_GET_SIZE(args); - - if (len < min || max < len) { - if (message == NULL) { - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.150s%s takes %s %d argument%s " - "(%ld given)", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()", - min==max ? "exactly" - : len < min ? "at least" : "at most", - len < min ? min : max, - (len < min ? min : max) == 1 ? 
"" : "s", - Py_SAFE_DOWNCAST(len, Py_ssize_t, long)); - message = msgbuf; - } - PyErr_SetString(PyExc_TypeError, message); - return cleanreturn(0, &freelist); - } - - for (i = 0; i < len; i++) { - if (*format == '|') - format++; - msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va, - flags, levels, msgbuf, - sizeof(msgbuf), &freelist); - if (msg) { - seterror(i+1, msg, levels, fname, message); - return cleanreturn(0, &freelist); - } - } + if (level != 0) + Py_FatalError(/* '(' */ "missing ')' in getargs format"); - if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) && - *format != '(' && - *format != '|' && *format != ':' && *format != ';') { - PyErr_Format(PyExc_SystemError, - "bad format string: %.200s", formatsave); - return cleanreturn(0, &freelist); - } - - return cleanreturn(1, &freelist); + if (min < 0) + min = max; + + format = formatsave; + + if (compat) { + if (max == 0) { + if (args == NULL) + return 1; + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.200s%s takes no arguments", + fname==NULL ? "function" : fname, + fname==NULL ? "" : "()"); + PyErr_SetString(PyExc_TypeError, msgbuf); + return 0; + } + else if (min == 1 && max == 1) { + if (args == NULL) { + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.200s%s takes at least one argument", + fname==NULL ? "function" : fname, + fname==NULL ? 
"" : "()"); + PyErr_SetString(PyExc_TypeError, msgbuf); + return 0; + } + msg = convertitem(args, &format, p_va, flags, levels, + msgbuf, sizeof(msgbuf), &freelist); + if (msg == NULL) + return cleanreturn(1, freelist); + seterror(levels[0], msg, levels+1, fname, message); + return cleanreturn(0, freelist); + } + else { + PyErr_SetString(PyExc_SystemError, + "old style getargs format uses new features"); + return 0; + } + } + + if (!PyTuple_Check(args)) { + PyErr_SetString(PyExc_SystemError, + "new style getargs format but argument is not a tuple"); + return 0; + } + + len = PyTuple_GET_SIZE(args); + + if (len < min || max < len) { + if (message == NULL) { + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.150s%s takes %s %d argument%s " + "(%ld given)", + fname==NULL ? "function" : fname, + fname==NULL ? "" : "()", + min==max ? "exactly" + : len < min ? "at least" : "at most", + len < min ? min : max, + (len < min ? min : max) == 1 ? "" : "s", + Py_SAFE_DOWNCAST(len, Py_ssize_t, long)); + message = msgbuf; + } + PyErr_SetString(PyExc_TypeError, message); + return 0; + } + + for (i = 0; i < len; i++) { + if (*format == '|') + format++; + msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va, + flags, levels, msgbuf, + sizeof(msgbuf), &freelist); + if (msg) { + seterror(i+1, msg, levels, fname, msg); + return cleanreturn(0, freelist); + } + } + + if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) && + *format != '(' && + *format != '|' && *format != ':' && *format != ';') { + PyErr_Format(PyExc_SystemError, + "bad format string: %.200s", formatsave); + return cleanreturn(0, freelist); + } + + return cleanreturn(1, freelist); } @@ -358,37 +384,37 @@ seterror(int iarg, const char *msg, int *levels, const char *fname, const char *message) { - char buf[512]; - int i; - char *p = buf; + char buf[512]; + int i; + char *p = buf; - if (PyErr_Occurred()) - return; - else if (message == NULL) { - if (fname != NULL) { - PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname); - p 
+= strlen(p); - } - if (iarg != 0) { - PyOS_snprintf(p, sizeof(buf) - (p - buf), - "argument %d", iarg); - i = 0; - p += strlen(p); - while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) { - PyOS_snprintf(p, sizeof(buf) - (p - buf), - ", item %d", levels[i]-1); - p += strlen(p); - i++; - } - } - else { - PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument"); - p += strlen(p); - } - PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg); - message = buf; - } - PyErr_SetString(PyExc_TypeError, message); + if (PyErr_Occurred()) + return; + else if (message == NULL) { + if (fname != NULL) { + PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname); + p += strlen(p); + } + if (iarg != 0) { + PyOS_snprintf(p, sizeof(buf) - (p - buf), + "argument %d", iarg); + i = 0; + p += strlen(p); + while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) { + PyOS_snprintf(p, sizeof(buf) - (p - buf), + ", item %d", levels[i]-1); + p += strlen(p); + i++; + } + } + else { + PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument"); + p += strlen(p); + } + PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg); + message = buf; + } + PyErr_SetString(PyExc_TypeError, message); } @@ -404,85 +430,84 @@ *p_va is undefined, *levels is a 0-terminated list of item numbers, *msgbuf contains an error message, whose format is: - "must be , not ", where: - is the name of the expected type, and - is the name of the actual type, + "must be , not ", where: + is the name of the expected type, and + is the name of the actual type, and msgbuf is returned. 
*/ static char * converttuple(PyObject *arg, const char **p_format, va_list *p_va, int flags, - int *levels, char *msgbuf, size_t bufsize, int toplevel, - freelist_t *freelist) + int *levels, char *msgbuf, size_t bufsize, int toplevel, + PyObject **freelist) { - int level = 0; - int n = 0; - const char *format = *p_format; - int i; - - for (;;) { - int c = *format++; - if (c == '(') { - if (level == 0) - n++; - level++; - } - else if (c == ')') { - if (level == 0) - break; - level--; - } - else if (c == ':' || c == ';' || c == '\0') - break; - else if (level == 0 && isalpha(Py_CHARMASK(c))) - n++; - } - - if (!PySequence_Check(arg) || PyString_Check(arg)) { - levels[0] = 0; - PyOS_snprintf(msgbuf, bufsize, - toplevel ? "expected %d arguments, not %.50s" : - "must be %d-item sequence, not %.50s", - n, - arg == Py_None ? "None" : arg->ob_type->tp_name); - return msgbuf; - } - - if ((i = PySequence_Size(arg)) != n) { - levels[0] = 0; - PyOS_snprintf(msgbuf, bufsize, - toplevel ? "expected %d arguments, not %d" : - "must be sequence of length %d, not %d", - n, i); - return msgbuf; - } + int level = 0; + int n = 0; + const char *format = *p_format; + int i; - format = *p_format; - for (i = 0; i < n; i++) { - char *msg; - PyObject *item; + for (;;) { + int c = *format++; + if (c == '(') { + if (level == 0) + n++; + level++; + } + else if (c == ')') { + if (level == 0) + break; + level--; + } + else if (c == ':' || c == ';' || c == '\0') + break; + else if (level == 0 && isalpha(Py_CHARMASK(c))) + n++; + } + + if (!PySequence_Check(arg) || PyBytes_Check(arg)) { + levels[0] = 0; + PyOS_snprintf(msgbuf, bufsize, + toplevel ? "expected %d arguments, not %.50s" : + "must be %d-item sequence, not %.50s", + n, + arg == Py_None ? "None" : arg->ob_type->tp_name); + return msgbuf; + } + + if ((i = PySequence_Size(arg)) != n) { + levels[0] = 0; + PyOS_snprintf(msgbuf, bufsize, + toplevel ? 
"expected %d arguments, not %d" : + "must be sequence of length %d, not %d", + n, i); + return msgbuf; + } + + format = *p_format; + for (i = 0; i < n; i++) { + char *msg; + PyObject *item; item = PySequence_GetItem(arg, i); - if (item == NULL) { - PyErr_Clear(); - levels[0] = i+1; - levels[1] = 0; - strncpy(msgbuf, "is not retrievable", - bufsize); - return msgbuf; - } - PyPy_Borrow(arg, item); - msg = convertitem(item, &format, p_va, flags, levels+1, - msgbuf, bufsize, freelist); + if (item == NULL) { + PyErr_Clear(); + levels[0] = i+1; + levels[1] = 0; + strncpy(msgbuf, "is not retrievable", bufsize); + return msgbuf; + } + PyPy_Borrow(arg, item); + msg = convertitem(item, &format, p_va, flags, levels+1, + msgbuf, bufsize, freelist); /* PySequence_GetItem calls tp->sq_item, which INCREFs */ Py_XDECREF(item); - if (msg != NULL) { - levels[0] = i+1; - return msg; - } - } + if (msg != NULL) { + levels[0] = i+1; + return msg; + } + } - *p_format = format; - return NULL; + *p_format = format; + return NULL; } @@ -490,60 +515,61 @@ static char * convertitem(PyObject *arg, const char **p_format, va_list *p_va, int flags, - int *levels, char *msgbuf, size_t bufsize, freelist_t *freelist) + int *levels, char *msgbuf, size_t bufsize, PyObject **freelist) { - char *msg; - const char *format = *p_format; - - if (*format == '(' /* ')' */) { - format++; - msg = converttuple(arg, &format, p_va, flags, levels, msgbuf, - bufsize, 0, freelist); - if (msg == NULL) - format++; - } - else { - msg = convertsimple(arg, &format, p_va, flags, - msgbuf, bufsize, freelist); - if (msg != NULL) - levels[0] = 0; - } - if (msg == NULL) - *p_format = format; - return msg; + char *msg; + const char *format = *p_format; + + if (*format == '(' /* ')' */) { + format++; + msg = converttuple(arg, &format, p_va, flags, levels, msgbuf, + bufsize, 0, freelist); + if (msg == NULL) + format++; + } + else { + msg = convertsimple(arg, &format, p_va, flags, + msgbuf, bufsize, freelist); + if (msg != NULL) + 
levels[0] = 0; + } + if (msg == NULL) + *p_format = format; + return msg; } #define UNICODE_DEFAULT_ENCODING(arg) \ - _PyUnicode_AsDefaultEncodedString(arg, NULL) + _PyUnicode_AsDefaultEncodedString(arg, NULL) /* Format an error message generated by convertsimple(). */ static char * converterr(const char *expected, PyObject *arg, char *msgbuf, size_t bufsize) { - assert(expected != NULL); - assert(arg != NULL); - PyOS_snprintf(msgbuf, bufsize, - "must be %.50s, not %.50s", expected, - arg == Py_None ? "None" : arg->ob_type->tp_name); - return msgbuf; + assert(expected != NULL); + assert(arg != NULL); + PyOS_snprintf(msgbuf, bufsize, + "must be %.50s, not %.50s", expected, + arg == Py_None ? "None" : arg->ob_type->tp_name); + return msgbuf; } #define CONV_UNICODE "(unicode conversion error)" -/* explicitly check for float arguments when integers are expected. For now - * signal a warning. Returns true if an exception was raised. */ +/* Explicitly check for float arguments when integers are expected. + Return 1 for error, 0 if ok. */ static int float_argument_error(PyObject *arg) { - if (PyFloat_Check(arg) && - PyErr_Warn(PyExc_DeprecationWarning, - "integer argument expected, got float" )) - return 1; - else - return 0; + if (PyFloat_Check(arg)) { + PyErr_SetString(PyExc_TypeError, + "integer argument expected, got float" ); + return 1; + } + else + return 0; } /* Convert a non-tuple argument. 
Return NULL if conversion went OK, @@ -557,836 +583,714 @@ static char * convertsimple(PyObject *arg, const char **p_format, va_list *p_va, int flags, - char *msgbuf, size_t bufsize, freelist_t *freelist) + char *msgbuf, size_t bufsize, PyObject **freelist) { - /* For # codes */ -#define FETCH_SIZE int *q=NULL;Py_ssize_t *q2=NULL;\ - if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \ - else q=va_arg(*p_va, int*); -#define STORE_SIZE(s) if (flags & FLAG_SIZE_T) *q2=s; else *q=s; + /* For # codes */ +#define FETCH_SIZE int *q=NULL;Py_ssize_t *q2=NULL;\ + if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \ + else q=va_arg(*p_va, int*); +#define STORE_SIZE(s) \ + if (flags & FLAG_SIZE_T) \ + *q2=s; \ + else { \ + if (INT_MAX < s) { \ + PyErr_SetString(PyExc_OverflowError, \ + "size does not fit in an int"); \ + return converterr("", arg, msgbuf, bufsize); \ + } \ + *q=s; \ + } #define BUFFER_LEN ((flags & FLAG_SIZE_T) ? *q2:*q) +#define RETURN_ERR_OCCURRED return msgbuf - const char *format = *p_format; - char c = *format++; -#ifdef Py_USING_UNICODE - PyObject *uarg; -#endif - - switch (c) { - - case 'b': { /* unsigned byte -- very short int */ - char *p = va_arg(*p_va, char *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned byte integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival > UCHAR_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned byte integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else - *p = (unsigned char) ival; - break; - } - - case 'B': {/* byte sized bitfield - both signed and unsigned - values allowed */ - char *p = va_arg(*p_va, char *); - long ival; - if (float_argument_error(arg)) - return 
converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsUnsignedLongMask(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = (unsigned char) ival; - break; - } - - case 'h': {/* signed short int */ - short *p = va_arg(*p_va, short *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival < SHRT_MIN) { - PyErr_SetString(PyExc_OverflowError, - "signed short integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival > SHRT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "signed short integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else - *p = (short) ival; - break; - } - - case 'H': { /* short int sized bitfield, both signed and - unsigned allowed */ - unsigned short *p = va_arg(*p_va, unsigned short *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsUnsignedLongMask(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = (unsigned short) ival; - break; - } - case 'i': {/* signed int */ - int *p = va_arg(*p_va, int *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival > INT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "signed integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival < INT_MIN) { - PyErr_SetString(PyExc_OverflowError, - "signed integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else - *p = ival; - break; - } - case 
'I': { /* int sized bitfield, both signed and - unsigned allowed */ - unsigned int *p = va_arg(*p_va, unsigned int *); - unsigned int ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = (unsigned int)PyInt_AsUnsignedLongMask(arg); - if (ival == (unsigned int)-1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = ival; - break; - } - case 'n': /* Py_ssize_t */ -#if SIZEOF_SIZE_T != SIZEOF_LONG - { - Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *); - Py_ssize_t ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsSsize_t(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - *p = ival; - break; - } -#endif - /* Fall through from 'n' to 'l' if Py_ssize_t is int */ - case 'l': {/* long int */ - long *p = va_arg(*p_va, long *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = ival; - break; - } + const char *format = *p_format; + char c = *format++; + PyObject *uarg; - case 'k': { /* long sized bitfield */ - unsigned long *p = va_arg(*p_va, unsigned long *); - unsigned long ival; - if (PyInt_Check(arg)) - ival = PyInt_AsUnsignedLongMask(arg); - else if (PyLong_Check(arg)) - ival = PyLong_AsUnsignedLongMask(arg); - else - return converterr("integer", arg, msgbuf, bufsize); - *p = ival; - break; - } - + switch (c) { + + case 'b': { /* unsigned byte -- very short int */ + char *p = va_arg(*p_va, char *); + long ival; + if (float_argument_error(arg)) + RETURN_ERR_OCCURRED; + ival = PyLong_AsLong(arg); + if (ival == -1 && PyErr_Occurred()) + RETURN_ERR_OCCURRED; + else if (ival < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned byte integer is less than minimum"); + RETURN_ERR_OCCURRED; + } + else if 
(ival > UCHAR_MAX) { + PyErr_SetString(PyExc_OverflowError, + "unsigned byte integer is greater than maximum"); + RETURN_ERR_OCCURRED; + } + else + *p = (unsigned char) ival; + break; + } + + case 'B': {/* byte sized bitfield - both signed and unsigned + values allowed */ + char *p = va_arg(*p_va, char *); + long ival; + if (float_argument_error(arg)) + RETURN_ERR_OCCURRED; + ival = PyLong_AsUnsignedLongMask(arg); + if (ival == -1 && PyErr_Occurred()) + RETURN_ERR_OCCURRED; + else + *p = (unsigned char) ival; + break; + } + + case 'h': {/* signed short int */ + short *p = va_arg(*p_va, short *); + long ival; + if (float_argument_error(arg)) + RETURN_ERR_OCCURRED; + ival = PyLong_AsLong(arg); + if (ival == -1 && PyErr_Occurred()) + RETURN_ERR_OCCURRED; + else if (ival < SHRT_MIN) { + PyErr_SetString(PyExc_OverflowError, + "signed short integer is less than minimum"); + RETURN_ERR_OCCURRED; + } + else if (ival > SHRT_MAX) { + PyErr_SetString(PyExc_OverflowError, + "signed short integer is greater than maximum"); + RETURN_ERR_OCCURRED; + } + else + *p = (short) ival; + break; + } + + case 'H': { /* short int sized bitfield, both signed and + unsigned allowed */ + unsigned short *p = va_arg(*p_va, unsigned short *); + long ival; + if (float_argument_error(arg)) + RETURN_ERR_OCCURRED; + ival = PyLong_AsUnsignedLongMask(arg); + if (ival == -1 && PyErr_Occurred()) + RETURN_ERR_OCCURRED; + else + *p = (unsigned short) ival; + break; + } + + case 'i': {/* signed int */ + int *p = va_arg(*p_va, int *); + long ival; + if (float_argument_error(arg)) + RETURN_ERR_OCCURRED; + ival = PyLong_AsLong(arg); + if (ival == -1 && PyErr_Occurred()) + RETURN_ERR_OCCURRED; + else if (ival > INT_MAX) { + PyErr_SetString(PyExc_OverflowError, + "signed integer is greater than maximum"); + RETURN_ERR_OCCURRED; + } + else if (ival < INT_MIN) { + PyErr_SetString(PyExc_OverflowError, + "signed integer is less than minimum"); + RETURN_ERR_OCCURRED; + } + else + *p = ival; + break; + } + + case 
'I': { /* int sized bitfield, both signed and + unsigned allowed */ + unsigned int *p = va_arg(*p_va, unsigned int *); + unsigned int ival; + if (float_argument_error(arg)) + RETURN_ERR_OCCURRED; + ival = (unsigned int)PyLong_AsUnsignedLongMask(arg); + if (ival == (unsigned int)-1 && PyErr_Occurred()) + RETURN_ERR_OCCURRED; + else + *p = ival; + break; + } + + case 'n': /* Py_ssize_t */ + { + PyObject *iobj; + Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *); + Py_ssize_t ival = -1; + if (float_argument_error(arg)) + RETURN_ERR_OCCURRED; + iobj = PyNumber_Index(arg); + if (iobj != NULL) { + ival = PyLong_AsSsize_t(iobj); + Py_DECREF(iobj); + } + if (ival == -1 && PyErr_Occurred()) + RETURN_ERR_OCCURRED; + *p = ival; + break; + } + case 'l': {/* long int */ + long *p = va_arg(*p_va, long *); + long ival; + if (float_argument_error(arg)) + RETURN_ERR_OCCURRED; + ival = PyLong_AsLong(arg); + if (ival == -1 && PyErr_Occurred()) + RETURN_ERR_OCCURRED; + else + *p = ival; + break; + } + + case 'k': { /* long sized bitfield */ + unsigned long *p = va_arg(*p_va, unsigned long *); + unsigned long ival; + if (PyLong_Check(arg)) + ival = PyLong_AsUnsignedLongMask(arg); + else + return converterr("integer", arg, msgbuf, bufsize); + *p = ival; + break; + } + #ifdef HAVE_LONG_LONG - case 'L': {/* PY_LONG_LONG */ - PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * ); - PY_LONG_LONG ival = PyLong_AsLongLong( arg ); - if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) { - return converterr("long", arg, msgbuf, bufsize); - } else { - *p = ival; - } - break; - } + case 'L': {/* PY_LONG_LONG */ + PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * ); + PY_LONG_LONG ival; + if (float_argument_error(arg)) + RETURN_ERR_OCCURRED; + ival = PyLong_AsLongLong(arg); + if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred()) + RETURN_ERR_OCCURRED; + else + *p = ival; + break; + } - case 'K': { /* long long sized bitfield */ - unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *); - unsigned 
PY_LONG_LONG ival; - if (PyInt_Check(arg)) - ival = PyInt_AsUnsignedLongMask(arg); - else if (PyLong_Check(arg)) - ival = PyLong_AsUnsignedLongLongMask(arg); - else - return converterr("integer", arg, msgbuf, bufsize); - *p = ival; - break; - } -#endif // HAVE_LONG_LONG - - case 'f': {/* float */ - float *p = va_arg(*p_va, float *); - double dval = PyFloat_AsDouble(arg); - if (PyErr_Occurred()) - return converterr("float", arg, msgbuf, bufsize); - else - *p = (float) dval; - break; - } - - case 'd': {/* double */ - double *p = va_arg(*p_va, double *); - double dval = PyFloat_AsDouble(arg); - if (PyErr_Occurred()) - return converterr("float", arg, msgbuf, bufsize); - else - *p = dval; - break; - } - -#ifndef WITHOUT_COMPLEX - case 'D': {/* complex double */ - Py_complex *p = va_arg(*p_va, Py_complex *); - Py_complex cval; - cval = PyComplex_AsCComplex(arg); - if (PyErr_Occurred()) - return converterr("complex", arg, msgbuf, bufsize); - else - *p = cval; - break; - } -#endif /* WITHOUT_COMPLEX */ - - case 'c': {/* char */ - char *p = va_arg(*p_va, char *); - if (PyString_Check(arg) && PyString_Size(arg) == 1) - *p = PyString_AS_STRING(arg)[0]; - else - return converterr("char", arg, msgbuf, bufsize); - break; - } - case 's': {/* string */ - if (*format == '*') { - Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *); - - if (PyString_Check(arg)) { - fflush(stdout); - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(arg), PyString_GET_SIZE(arg), - 1, 0); - } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { -#if 0 - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg), - 1, 0); -#else - return converterr("string or buffer", arg, msgbuf, bufsize); -#endif - } -#endif - else { /* any buffer-like object */ - char *buf; - if (getbuffer(arg, p, &buf) < 0) - return converterr(buf, arg, msgbuf, bufsize); - } - if 
(addcleanup(p, freelist, cleanup_buffer)) { - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - format++; - } else if (*format == '#') { - void **p = (void **)va_arg(*p_va, char **); - FETCH_SIZE; - - if (PyString_Check(arg)) { - *p = PyString_AS_STRING(arg); - STORE_SIZE(PyString_GET_SIZE(arg)); - } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - STORE_SIZE(PyString_GET_SIZE(uarg)); - } -#endif - else { /* any buffer-like object */ - char *buf; - Py_ssize_t count = convertbuffer(arg, p, &buf); - if (count < 0) - return converterr(buf, arg, msgbuf, bufsize); - STORE_SIZE(count); - } - format++; - } else { - char **p = va_arg(*p_va, char **); - - if (PyString_Check(arg)) - *p = PyString_AS_STRING(arg); -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - } -#endif - else - return converterr("string", arg, msgbuf, bufsize); - if ((Py_ssize_t)strlen(*p) != PyString_Size(arg)) - return converterr("string without null bytes", - arg, msgbuf, bufsize); - } - break; - } - - case 'z': {/* string, may be NULL (None) */ - if (*format == '*') { - Py_FatalError("'*' format not supported in PyArg_*\n"); -#if 0 - Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *); - - if (arg == Py_None) - PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0); - else if (PyString_Check(arg)) { - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(arg), PyString_GET_SIZE(arg), - 1, 0); - } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg), - 1, 0); - } -#endif - 
else { /* any buffer-like object */ - char *buf; - if (getbuffer(arg, p, &buf) < 0) - return converterr(buf, arg, msgbuf, bufsize); - } - if (addcleanup(p, freelist, cleanup_buffer)) { - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - format++; -#endif - } else if (*format == '#') { /* any buffer-like object */ - void **p = (void **)va_arg(*p_va, char **); - FETCH_SIZE; - - if (arg == Py_None) { - *p = 0; - STORE_SIZE(0); - } - else if (PyString_Check(arg)) { - *p = PyString_AS_STRING(arg); - STORE_SIZE(PyString_GET_SIZE(arg)); - } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - STORE_SIZE(PyString_GET_SIZE(uarg)); - } -#endif - else { /* any buffer-like object */ - char *buf; - Py_ssize_t count = convertbuffer(arg, p, &buf); - if (count < 0) - return converterr(buf, arg, msgbuf, bufsize); - STORE_SIZE(count); - } - format++; - } else { - char **p = va_arg(*p_va, char **); - - if (arg == Py_None) - *p = 0; - else if (PyString_Check(arg)) - *p = PyString_AS_STRING(arg); -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - } -#endif - else - return converterr("string or None", - arg, msgbuf, bufsize); - if (*format == '#') { - FETCH_SIZE; - assert(0); /* XXX redundant with if-case */ - if (arg == Py_None) - *q = 0; - else - *q = PyString_Size(arg); - format++; - } - else if (*p != NULL && - (Py_ssize_t)strlen(*p) != PyString_Size(arg)) - return converterr( - "string without null bytes or None", - arg, msgbuf, bufsize); - } - break; - } - case 'e': {/* encoded string */ - char **buffer; - const char *encoding; - PyObject *s; - Py_ssize_t size; - int recode_strings; - - /* Get 'e' parameter: the encoding name */ - encoding 
= (const char *)va_arg(*p_va, const char *); -#ifdef Py_USING_UNICODE - if (encoding == NULL) - encoding = PyUnicode_GetDefaultEncoding(); + case 'K': { /* long long sized bitfield */ + unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *); + unsigned PY_LONG_LONG ival; + if (PyLong_Check(arg)) + ival = PyLong_AsUnsignedLongLongMask(arg); + else + return converterr("integer", arg, msgbuf, bufsize); + *p = ival; + break; + } #endif - /* Get output buffer parameter: - 's' (recode all objects via Unicode) or - 't' (only recode non-string objects) - */ - if (*format == 's') - recode_strings = 1; - else if (*format == 't') - recode_strings = 0; - else - return converterr( - "(unknown parser marker combination)", - arg, msgbuf, bufsize); - buffer = (char **)va_arg(*p_va, char **); - format++; - if (buffer == NULL) - return converterr("(buffer is NULL)", - arg, msgbuf, bufsize); - - /* Encode object */ - if (!recode_strings && PyString_Check(arg)) { - s = arg; - Py_INCREF(s); - } - else { -#ifdef Py_USING_UNICODE - PyObject *u; + case 'f': {/* float */ + float *p = va_arg(*p_va, float *); + double dval = PyFloat_AsDouble(arg); + if (PyErr_Occurred()) + RETURN_ERR_OCCURRED; + else + *p = (float) dval; + break; + } - /* Convert object to Unicode */ - u = PyUnicode_FromObject(arg); - if (u == NULL) - return converterr( - "string or unicode or text buffer", - arg, msgbuf, bufsize); - - /* Encode object; use default error handling */ - s = PyUnicode_AsEncodedString(u, - encoding, - NULL); - Py_DECREF(u); - if (s == NULL) - return converterr("(encoding failed)", - arg, msgbuf, bufsize); - if (!PyString_Check(s)) { - Py_DECREF(s); - return converterr( - "(encoder failed to return a string)", - arg, msgbuf, bufsize); - } -#else - return converterr("string", arg, msgbuf, bufsize); -#endif - } - size = PyString_GET_SIZE(s); + case 'd': {/* double */ + double *p = va_arg(*p_va, double *); + double dval = PyFloat_AsDouble(arg); + if (PyErr_Occurred()) + 
RETURN_ERR_OCCURRED; + else + *p = dval; + break; + } - /* Write output; output is guaranteed to be 0-terminated */ - if (*format == '#') { - /* Using buffer length parameter '#': - - - if *buffer is NULL, a new buffer of the - needed size is allocated and the data - copied into it; *buffer is updated to point - to the new buffer; the caller is - responsible for PyMem_Free()ing it after - usage + case 'D': {/* complex double */ + Py_complex *p = va_arg(*p_va, Py_complex *); + Py_complex cval; + cval = PyComplex_AsCComplex(arg); + if (PyErr_Occurred()) + RETURN_ERR_OCCURRED; + else + *p = cval; + break; + } - - if *buffer is not NULL, the data is - copied to *buffer; *buffer_len has to be - set to the size of the buffer on input; - buffer overflow is signalled with an error; - buffer has to provide enough room for the - encoded string plus the trailing 0-byte - - - in both cases, *buffer_len is updated to - the size of the buffer /excluding/ the - trailing 0-byte - - */ - FETCH_SIZE; + case 'c': {/* char */ + char *p = va_arg(*p_va, char *); + if (PyBytes_Check(arg) && PyBytes_Size(arg) == 1) + *p = PyBytes_AS_STRING(arg)[0]; + else + return converterr("a byte string of length 1", arg, msgbuf, bufsize); + break; + } - format++; - if (q == NULL && q2 == NULL) { - Py_DECREF(s); - return converterr( - "(buffer_len is NULL)", - arg, msgbuf, bufsize); - } - if (*buffer == NULL) { - *buffer = PyMem_NEW(char, size + 1); - if (*buffer == NULL) { - Py_DECREF(s); - return converterr( - "(memory error)", - arg, msgbuf, bufsize); - } - if (addcleanup(*buffer, freelist, cleanup_ptr)) { - Py_DECREF(s); - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - } else { - if (size + 1 > BUFFER_LEN) { - Py_DECREF(s); - return converterr( - "(buffer overflow)", - arg, msgbuf, bufsize); - } - } - memcpy(*buffer, - PyString_AS_STRING(s), - size + 1); - STORE_SIZE(size); - } else { - /* Using a 0-terminated buffer: - - - the encoded string has to be 0-terminated - for 
this variant to work; if it is not, an - error raised + case 'C': {/* unicode char */ + int *p = va_arg(*p_va, int *); + if (PyUnicode_Check(arg) && + PyUnicode_GET_SIZE(arg) == 1) + *p = PyUnicode_AS_UNICODE(arg)[0]; + else + return converterr("a unicode character", arg, msgbuf, bufsize); + break; + } - - a new buffer of the needed size is - allocated and the data copied into it; - *buffer is updated to point to the new - buffer; the caller is responsible for - PyMem_Free()ing it after usage + /* XXX WAAAAH! 's', 'y', 'z', 'u', 'Z', 'e', 'w' codes all + need to be cleaned up! */ - */ - if ((Py_ssize_t)strlen(PyString_AS_STRING(s)) - != size) { - Py_DECREF(s); - return converterr( - "encoded string without NULL bytes", - arg, msgbuf, bufsize); - } - *buffer = PyMem_NEW(char, size + 1); - if (*buffer == NULL) { - Py_DECREF(s); - return converterr("(memory error)", - arg, msgbuf, bufsize); - } - if (addcleanup(*buffer, freelist, cleanup_ptr)) { - Py_DECREF(s); - return converterr("(cleanup problem)", - arg, msgbuf, bufsize); - } - memcpy(*buffer, - PyString_AS_STRING(s), - size + 1); - } - Py_DECREF(s); - break; - } + case 'y': {/* any buffer-like object, but not PyUnicode */ + void **p = (void **)va_arg(*p_va, char **); + char *buf; + Py_ssize_t count; + if (*format == '*') { + if (getbuffer(arg, (Py_buffer*)p, &buf) < 0) + return converterr(buf, arg, msgbuf, bufsize); + format++; + if (addcleanup(p, freelist, 1)) { + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + break; + } + count = convertbuffer(arg, p, &buf); + if (count < 0) + return converterr(buf, arg, msgbuf, bufsize); + if (*format == '#') { + FETCH_SIZE; + STORE_SIZE(count); + format++; + } else { + if (strlen(*p) != count) + return converterr( + "bytes without null bytes", + arg, msgbuf, bufsize); + } + break; + } -#ifdef Py_USING_UNICODE - case 'u': {/* raw unicode buffer (Py_UNICODE *) */ - if (*format == '#') { /* any buffer-like object */ - void **p = (void **)va_arg(*p_va, 
char **); - FETCH_SIZE; - if (PyUnicode_Check(arg)) { - *p = PyUnicode_AS_UNICODE(arg); - STORE_SIZE(PyUnicode_GET_SIZE(arg)); - } - else { - return converterr("cannot convert raw buffers", - arg, msgbuf, bufsize); - } - format++; - } else { - Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **); - if (PyUnicode_Check(arg)) - *p = PyUnicode_AS_UNICODE(arg); - else - return converterr("unicode", arg, msgbuf, bufsize); - } - break; - } -#endif + case 's': /* text string */ + case 'z': /* text string or None */ + { + if (*format == '*') { + /* "s*" or "z*" */ + Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *); - case 'S': { /* string object */ - PyObject **p = va_arg(*p_va, PyObject **); - if (PyString_Check(arg)) - *p = arg; - else - return converterr("string", arg, msgbuf, bufsize); - break; - } - -#ifdef Py_USING_UNICODE - case 'U': { /* Unicode object */ - PyObject **p = va_arg(*p_va, PyObject **); - if (PyUnicode_Check(arg)) - *p = arg; - else - return converterr("unicode", arg, msgbuf, bufsize); - break; - } -#endif - case 'O': { /* object */ - PyTypeObject *type; - PyObject **p; - if (*format == '!') { - type = va_arg(*p_va, PyTypeObject*); - p = va_arg(*p_va, PyObject **); - format++; - if (PyType_IsSubtype(arg->ob_type, type)) - *p = arg; - else - return converterr(type->tp_name, arg, msgbuf, bufsize); + if (c == 'z' && arg == Py_None) + PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0); + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + PyBuffer_FillInfo(p, arg, + PyBytes_AS_STRING(uarg), PyBytes_GET_SIZE(uarg), + 1, 0); + } + else { /* any buffer-like object */ + char *buf; + if (getbuffer(arg, p, &buf) < 0) + return converterr(buf, arg, msgbuf, bufsize); + } + if (addcleanup(p, freelist, 1)) { + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + format++; + } else if (*format == '#') { /* any buffer-like object */ + /* "s#" or "z#" */ + 
void **p = (void **)va_arg(*p_va, char **); + FETCH_SIZE; - } - else if (*format == '?') { - inquiry pred = va_arg(*p_va, inquiry); - p = va_arg(*p_va, PyObject **); - format++; - if ((*pred)(arg)) - *p = arg; - else - return converterr("(unspecified)", - arg, msgbuf, bufsize); - - } - else if (*format == '&') { - typedef int (*converter)(PyObject *, void *); - converter convert = va_arg(*p_va, converter); - void *addr = va_arg(*p_va, void *); - format++; - if (! (*convert)(arg, addr)) - return converterr("(unspecified)", - arg, msgbuf, bufsize); - } - else { - p = va_arg(*p_va, PyObject **); - *p = arg; - } - break; - } - - case 'w': { /* memory buffer, read-write access */ - Py_FatalError("'w' unsupported\n"); -#if 0 - void **p = va_arg(*p_va, void **); - void *res; - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - Py_ssize_t count; + if (c == 'z' && arg == Py_None) { + *p = NULL; + STORE_SIZE(0); + } + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyBytes_AS_STRING(uarg); + STORE_SIZE(PyBytes_GET_SIZE(uarg)); + } + else { /* any buffer-like object */ + /* XXX Really? */ + char *buf; + Py_ssize_t count = convertbuffer(arg, p, &buf); + if (count < 0) + return converterr(buf, arg, msgbuf, bufsize); + STORE_SIZE(count); + } + format++; + } else { + /* "s" or "z" */ + char **p = va_arg(*p_va, char **); + uarg = NULL; - if (pb && pb->bf_releasebuffer && *format != '*') - /* Buffer must be released, yet caller does not use - the Py_buffer protocol. */ - return converterr("pinned buffer", arg, msgbuf, bufsize); + if (c == 'z' && arg == Py_None) + *p = NULL; + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyBytes_AS_STRING(uarg); + } + else + return converterr(c == 'z' ? 
"str or None" : "str", + arg, msgbuf, bufsize); + if (*p != NULL && uarg != NULL && + (Py_ssize_t) strlen(*p) != PyBytes_GET_SIZE(uarg)) + return converterr( + c == 'z' ? "str without null bytes or None" + : "str without null bytes", + arg, msgbuf, bufsize); + } + break; + } - if (pb && pb->bf_getbuffer && *format == '*') { - /* Caller is interested in Py_buffer, and the object - supports it directly. */ - format++; - if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) { - PyErr_Clear(); - return converterr("read-write buffer", arg, msgbuf, bufsize); - } - if (addcleanup(p, freelist, cleanup_buffer)) { - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C')) - return converterr("contiguous buffer", arg, msgbuf, bufsize); - break; - } + case 'u': /* raw unicode buffer (Py_UNICODE *) */ + case 'Z': /* raw unicode buffer or None */ + { + if (*format == '#') { /* any buffer-like object */ + /* "s#" or "Z#" */ + Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **); + FETCH_SIZE; - if (pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL) - return converterr("read-write buffer", arg, msgbuf, bufsize); - if ((*pb->bf_getsegcount)(arg, NULL) != 1) - return converterr("single-segment read-write buffer", - arg, msgbuf, bufsize); - if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0) - return converterr("(unspecified)", arg, msgbuf, bufsize); - if (*format == '*') { - PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0); - format++; - } - else { - *p = res; - if (*format == '#') { - FETCH_SIZE; - STORE_SIZE(count); - format++; - } - } - break; -#endif - } - - case 't': { /* 8-bit character buffer, read-only access */ - char **p = va_arg(*p_va, char **); - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - Py_ssize_t count; + if (c == 'Z' && arg == Py_None) { + *p = NULL; + STORE_SIZE(0); + } + else if (PyUnicode_Check(arg)) { + *p = PyUnicode_AS_UNICODE(arg); + 
STORE_SIZE(PyUnicode_GET_SIZE(arg)); + } + else + return converterr("str or None", arg, msgbuf, bufsize); + format++; + } else { + /* "s" or "Z" */ + Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **); -#if 0 - if (*format++ != '#') - return converterr( - "invalid use of 't' format character", - arg, msgbuf, bufsize); -#endif - if (!PyType_HasFeature(arg->ob_type, - Py_TPFLAGS_HAVE_GETCHARBUFFER) -#if 0 - || pb == NULL || pb->bf_getcharbuffer == NULL || - pb->bf_getsegcount == NULL -#endif - ) - return converterr( - "string or read-only character buffer", - arg, msgbuf, bufsize); -#if 0 - if (pb->bf_getsegcount(arg, NULL) != 1) - return converterr( - "string or single-segment read-only buffer", - arg, msgbuf, bufsize); + if (c == 'Z' && arg == Py_None) + *p = NULL; + else if (PyUnicode_Check(arg)) { + *p = PyUnicode_AS_UNICODE(arg); + if (Py_UNICODE_strlen(*p) != PyUnicode_GET_SIZE(arg)) + return converterr( + "str without null character or None", + arg, msgbuf, bufsize); + } else + return converterr(c == 'Z' ? 
"str or None" : "str", + arg, msgbuf, bufsize); + } + break; + } - if (pb->bf_releasebuffer) - return converterr( - "string or pinned buffer", - arg, msgbuf, bufsize); -#endif - count = pb->bf_getcharbuffer(arg, 0, p); -#if 0 - if (count < 0) - return converterr("(unspecified)", arg, msgbuf, bufsize); -#endif - { - FETCH_SIZE; - STORE_SIZE(count); - ++format; - } - break; - } - default: - return converterr("impossible", arg, msgbuf, bufsize); - - } - - *p_format = format; - return NULL; + case 'e': {/* encoded string */ + char **buffer; + const char *encoding; + PyObject *s; + int recode_strings; + Py_ssize_t size; + const char *ptr; + + /* Get 'e' parameter: the encoding name */ + encoding = (const char *)va_arg(*p_va, const char *); + if (encoding == NULL) + encoding = PyUnicode_GetDefaultEncoding(); + + /* Get output buffer parameter: + 's' (recode all objects via Unicode) or + 't' (only recode non-string objects) + */ + if (*format == 's') + recode_strings = 1; + else if (*format == 't') + recode_strings = 0; + else + return converterr( + "(unknown parser marker combination)", + arg, msgbuf, bufsize); + buffer = (char **)va_arg(*p_va, char **); + format++; + if (buffer == NULL) + return converterr("(buffer is NULL)", + arg, msgbuf, bufsize); + + /* Encode object */ + if (!recode_strings && + (PyBytes_Check(arg) || PyByteArray_Check(arg))) { + s = arg; + Py_INCREF(s); + if (PyObject_AsCharBuffer(s, &ptr, &size) < 0) + return converterr("(AsCharBuffer failed)", + arg, msgbuf, bufsize); + } + else { + PyObject *u; + + /* Convert object to Unicode */ + u = PyUnicode_FromObject(arg); + if (u == NULL) + return converterr( + "string or unicode or text buffer", + arg, msgbuf, bufsize); + + /* Encode object; use default error handling */ + s = PyUnicode_AsEncodedString(u, + encoding, + NULL); + Py_DECREF(u); + if (s == NULL) + return converterr("(encoding failed)", + arg, msgbuf, bufsize); + if (!PyBytes_Check(s)) { + Py_DECREF(s); + return converterr( + "(encoder 
failed to return bytes)", + arg, msgbuf, bufsize); + } + size = PyBytes_GET_SIZE(s); + ptr = PyBytes_AS_STRING(s); + if (ptr == NULL) + ptr = ""; + } + + /* Write output; output is guaranteed to be 0-terminated */ + if (*format == '#') { + /* Using buffer length parameter '#': + + - if *buffer is NULL, a new buffer of the + needed size is allocated and the data + copied into it; *buffer is updated to point + to the new buffer; the caller is + responsible for PyMem_Free()ing it after + usage + + - if *buffer is not NULL, the data is + copied to *buffer; *buffer_len has to be + set to the size of the buffer on input; + buffer overflow is signalled with an error; + buffer has to provide enough room for the + encoded string plus the trailing 0-byte + + - in both cases, *buffer_len is updated to + the size of the buffer /excluding/ the + trailing 0-byte + + */ + FETCH_SIZE; + + format++; + if (q == NULL && q2 == NULL) { + Py_DECREF(s); + return converterr( + "(buffer_len is NULL)", + arg, msgbuf, bufsize); + } + if (*buffer == NULL) { + *buffer = PyMem_NEW(char, size + 1); + if (*buffer == NULL) { + Py_DECREF(s); + PyErr_NoMemory(); + RETURN_ERR_OCCURRED; + } + if (addcleanup(*buffer, freelist, 0)) { + Py_DECREF(s); + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + } else { + if (size + 1 > BUFFER_LEN) { + Py_DECREF(s); + return converterr( + "(buffer overflow)", + arg, msgbuf, bufsize); + } + } + memcpy(*buffer, ptr, size+1); + STORE_SIZE(size); + } else { + /* Using a 0-terminated buffer: + + - the encoded string has to be 0-terminated + for this variant to work; if it is not, an + error raised + + - a new buffer of the needed size is + allocated and the data copied into it; + *buffer is updated to point to the new + buffer; the caller is responsible for + PyMem_Free()ing it after usage + + */ + if ((Py_ssize_t)strlen(ptr) != size) { + Py_DECREF(s); + return converterr( + "encoded string without NULL bytes", + arg, msgbuf, bufsize); + } + 
*buffer = PyMem_NEW(char, size + 1); + if (*buffer == NULL) { + Py_DECREF(s); + PyErr_NoMemory(); + RETURN_ERR_OCCURRED; + } + if (addcleanup(*buffer, freelist, 0)) { + Py_DECREF(s); + return converterr("(cleanup problem)", + arg, msgbuf, bufsize); + } + memcpy(*buffer, ptr, size+1); + } + Py_DECREF(s); + break; + } + + case 'S': { /* PyBytes object */ + PyObject **p = va_arg(*p_va, PyObject **); + if (PyBytes_Check(arg)) + *p = arg; + else + return converterr("bytes", arg, msgbuf, bufsize); + break; + } + + case 'Y': { /* PyByteArray object */ + PyObject **p = va_arg(*p_va, PyObject **); + if (PyByteArray_Check(arg)) + *p = arg; + else + return converterr("bytearray", arg, msgbuf, bufsize); + break; + } + + case 'U': { /* PyUnicode object */ + PyObject **p = va_arg(*p_va, PyObject **); + if (PyUnicode_Check(arg)) + *p = arg; + else + return converterr("str", arg, msgbuf, bufsize); + break; + } + + case 'O': { /* object */ + PyTypeObject *type; + PyObject **p; + if (*format == '!') { + type = va_arg(*p_va, PyTypeObject*); + p = va_arg(*p_va, PyObject **); + format++; + if (PyType_IsSubtype(arg->ob_type, type)) + *p = arg; + else + return converterr(type->tp_name, arg, msgbuf, bufsize); + + } + else if (*format == '&') { + typedef int (*converter)(PyObject *, void *); + converter convert = va_arg(*p_va, converter); + void *addr = va_arg(*p_va, void *); + int res; + format++; + if (! 
(res = (*convert)(arg, addr))) + return converterr("(unspecified)", + arg, msgbuf, bufsize); + if (res == Py_CLEANUP_SUPPORTED && + addcleanup_convert(addr, freelist, convert) == -1) + return converterr("(cleanup problem)", + arg, msgbuf, bufsize); + } + else { + p = va_arg(*p_va, PyObject **); + *p = arg; + } + break; + } + + + case 'w': { /* "w*": memory buffer, read-write access */ + void **p = va_arg(*p_va, void **); + + if (*format != '*') + return converterr( + "invalid use of 'w' format character", + arg, msgbuf, bufsize); + format++; + + /* Caller is interested in Py_buffer, and the object + supports it directly. */ + if (PyObject_GetBuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) { + PyErr_Clear(); + return converterr("read-write buffer", arg, msgbuf, bufsize); + } + if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C')) { + PyBuffer_Release((Py_buffer*)p); + return converterr("contiguous buffer", arg, msgbuf, bufsize); + } + if (addcleanup(p, freelist, 1)) { + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + break; + } + + default: + return converterr("impossible", arg, msgbuf, bufsize); + + } + + *p_format = format; + return NULL; + +#undef FETCH_SIZE +#undef STORE_SIZE +#undef BUFFER_LEN +#undef RETURN_ERR_OCCURRED } static Py_ssize_t convertbuffer(PyObject *arg, void **p, char **errmsg) { - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - Py_ssize_t count; - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL || - pb->bf_releasebuffer != NULL) { - *errmsg = "string or read-only buffer"; - return -1; - } - if ((*pb->bf_getsegcount)(arg, NULL) != 1) { - *errmsg = "string or single-segment read-only buffer"; - return -1; - } - if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) { - *errmsg = "(unspecified)"; - } - return count; + PyBufferProcs *pb = Py_TYPE(arg)->tp_as_buffer; + Py_ssize_t count; + Py_buffer view; + + *errmsg = NULL; + *p = NULL; + if (pb != NULL && pb->bf_releasebuffer != NULL) { + *errmsg 
= "read-only pinned buffer"; + return -1; + } + + if (getbuffer(arg, &view, errmsg) < 0) + return -1; + count = view.len; + *p = view.buf; + PyBuffer_Release(&view); + return count; } static int getbuffer(PyObject *arg, Py_buffer *view, char **errmsg) { - void *buf; - Py_ssize_t count; - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - if (pb == NULL) { - *errmsg = "string or buffer"; - return -1; - } - if (pb->bf_getbuffer) { - if (pb->bf_getbuffer(arg, view, 0) < 0) { - *errmsg = "convertible to a buffer"; - return -1; - } - if (!PyBuffer_IsContiguous(view, 'C')) { - *errmsg = "contiguous buffer"; - return -1; - } - return 0; - } - - count = convertbuffer(arg, &buf, errmsg); - if (count < 0) { - *errmsg = "convertible to a buffer"; - return count; - } - PyBuffer_FillInfo(view, NULL, buf, count, 1, 0); - return 0; + if (PyObject_GetBuffer(arg, view, PyBUF_SIMPLE) != 0) { + *errmsg = "bytes or buffer"; + return -1; + } + if (!PyBuffer_IsContiguous(view, 'C')) { + PyBuffer_Release(view); + *errmsg = "contiguous buffer"; + return -1; + } + return 0; } /* Support for keyword arguments donated by @@ -1395,501 +1299,485 @@ /* Return false (0) for error, else true. */ int PyArg_ParseTupleAndKeywords(PyObject *args, - PyObject *keywords, - const char *format, - char **kwlist, ...) + PyObject *keywords, + const char *format, + char **kwlist, ...) 
{ - int retval; - va_list va; + int retval; + va_list va; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } - va_start(va, kwlist); - retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0); - va_end(va); - return retval; + va_start(va, kwlist); + retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0); + va_end(va); + return retval; } int _PyArg_ParseTupleAndKeywords_SizeT(PyObject *args, - PyObject *keywords, - const char *format, - char **kwlist, ...) + PyObject *keywords, + const char *format, + char **kwlist, ...) { - int retval; - va_list va; + int retval; + va_list va; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } - va_start(va, kwlist); - retval = vgetargskeywords(args, keywords, format, - kwlist, &va, FLAG_SIZE_T); - va_end(va); - return retval; + va_start(va, kwlist); + retval = vgetargskeywords(args, keywords, format, + kwlist, &va, FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_VaParseTupleAndKeywords(PyObject *args, PyObject *keywords, - const char *format, + const char *format, char **kwlist, va_list va) { - int retval; - va_list lva; + int retval; + va_list lva; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + 
(keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } -#ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); -#else -#ifdef __va_copy - __va_copy(lva, va); -#else - lva = va; -#endif -#endif + Py_VA_COPY(lva, va); - retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0); - return retval; + retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0); + return retval; } int _PyArg_VaParseTupleAndKeywords_SizeT(PyObject *args, - PyObject *keywords, - const char *format, - char **kwlist, va_list va) + PyObject *keywords, + const char *format, + char **kwlist, va_list va) { - int retval; - va_list lva; + int retval; + va_list lva; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } -#ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); -#else -#ifdef __va_copy - __va_copy(lva, va); -#else - lva = va; -#endif -#endif + Py_VA_COPY(lva, va); - retval = vgetargskeywords(args, keywords, format, - kwlist, &lva, FLAG_SIZE_T); - return retval; + retval = vgetargskeywords(args, keywords, format, + kwlist, &lva, FLAG_SIZE_T); + return retval; +} + +int +PyArg_ValidateKeywordArguments(PyObject *kwargs) +{ + if (!PyDict_Check(kwargs)) { + PyErr_BadInternalCall(); + return 0; + } + if (!_PyDict_HasOnlyStringKeys(kwargs)) { + PyErr_SetString(PyExc_TypeError, + "keyword arguments must be strings"); + return 0; + } + return 1; } #define IS_END_OF_FORMAT(c) (c == '\0' || c == ';' || c == ':') static int vgetargskeywords(PyObject *args, PyObject *keywords, const char *format, - char **kwlist, va_list *p_va, int flags) + char **kwlist, va_list *p_va, int flags) { - char 
msgbuf[512]; - int levels[32]; - const char *fname, *msg, *custom_msg, *keyword; - int min = INT_MAX; - int i, len, nargs, nkeywords; - PyObject *current_arg; - freelist_t freelist = {0, NULL}; + char msgbuf[512]; + int levels[32]; + const char *fname, *msg, *custom_msg, *keyword; + int min = INT_MAX; + int i, len, nargs, nkeywords; + PyObject *freelist = NULL, *current_arg; + assert(args != NULL && PyTuple_Check(args)); + assert(keywords == NULL || PyDict_Check(keywords)); + assert(format != NULL); + assert(kwlist != NULL); + assert(p_va != NULL); - assert(args != NULL && PyTuple_Check(args)); - assert(keywords == NULL || PyDict_Check(keywords)); - assert(format != NULL); - assert(kwlist != NULL); - assert(p_va != NULL); + /* grab the function name or custom error msg first (mutually exclusive) */ + fname = strchr(format, ':'); + if (fname) { + fname++; + custom_msg = NULL; + } + else { + custom_msg = strchr(format,';'); + if (custom_msg) + custom_msg++; + } - /* grab the function name or custom error msg first (mutually exclusive) */ - fname = strchr(format, ':'); - if (fname) { - fname++; - custom_msg = NULL; - } - else { - custom_msg = strchr(format,';'); - if (custom_msg) - custom_msg++; - } + /* scan kwlist and get greatest possible nbr of args */ + for (len=0; kwlist[len]; len++) + continue; - /* scan kwlist and get greatest possible nbr of args */ - for (len=0; kwlist[len]; len++) - continue; + nargs = PyTuple_GET_SIZE(args); + nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords); + if (nargs + nkeywords > len) { + PyErr_Format(PyExc_TypeError, "%s%s takes at most %d " + "argument%s (%d given)", + (fname == NULL) ? "function" : fname, + (fname == NULL) ? "" : "()", + len, + (len == 1) ? 
"" : "s", + nargs + nkeywords); + return 0; + } - freelist.entries = PyMem_New(freelistentry_t, len); + /* convert tuple args and keyword args in same loop, using kwlist to drive process */ + for (i = 0; i < len; i++) { + keyword = kwlist[i]; + if (*format == '|') { + min = i; + format++; + } + if (IS_END_OF_FORMAT(*format)) { + PyErr_Format(PyExc_RuntimeError, + "More keyword list entries (%d) than " + "format specifiers (%d)", len, i); + return cleanreturn(0, freelist); + } + current_arg = NULL; + if (nkeywords) { + current_arg = PyDict_GetItemString(keywords, keyword); + } + if (current_arg) { + --nkeywords; + if (i < nargs) { + /* arg present in tuple and in dict */ + PyErr_Format(PyExc_TypeError, + "Argument given by name ('%s') " + "and position (%d)", + keyword, i+1); + return cleanreturn(0, freelist); + } + } + else if (nkeywords && PyErr_Occurred()) + return cleanreturn(0, freelist); + else if (i < nargs) + current_arg = PyTuple_GET_ITEM(args, i); - nargs = PyTuple_GET_SIZE(args); - nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords); - if (nargs + nkeywords > len) { - PyErr_Format(PyExc_TypeError, "%s%s takes at most %d " - "argument%s (%d given)", - (fname == NULL) ? "function" : fname, - (fname == NULL) ? "" : "()", - len, - (len == 1) ? 
"" : "s", - nargs + nkeywords); - return cleanreturn(0, &freelist); - } + if (current_arg) { + msg = convertitem(current_arg, &format, p_va, flags, + levels, msgbuf, sizeof(msgbuf), &freelist); + if (msg) { + seterror(i+1, msg, levels, fname, custom_msg); + return cleanreturn(0, freelist); + } + continue; + } - /* convert tuple args and keyword args in same loop, using kwlist to drive process */ - for (i = 0; i < len; i++) { - keyword = kwlist[i]; - if (*format == '|') { - min = i; - format++; - } - if (IS_END_OF_FORMAT(*format)) { - PyErr_Format(PyExc_RuntimeError, - "More keyword list entries (%d) than " - "format specifiers (%d)", len, i); - return cleanreturn(0, &freelist); - } - current_arg = NULL; - if (nkeywords) { - current_arg = PyDict_GetItemString(keywords, keyword); - } - if (current_arg) { - --nkeywords; - if (i < nargs) { - /* arg present in tuple and in dict */ - PyErr_Format(PyExc_TypeError, - "Argument given by name ('%s') " - "and position (%d)", - keyword, i+1); - return cleanreturn(0, &freelist); - } - } - else if (nkeywords && PyErr_Occurred()) - return cleanreturn(0, &freelist); - else if (i < nargs) - current_arg = PyTuple_GET_ITEM(args, i); - - if (current_arg) { - msg = convertitem(current_arg, &format, p_va, flags, - levels, msgbuf, sizeof(msgbuf), &freelist); - if (msg) { - seterror(i+1, msg, levels, fname, custom_msg); - return cleanreturn(0, &freelist); - } - continue; - } + if (i < min) { + PyErr_Format(PyExc_TypeError, "Required argument " + "'%s' (pos %d) not found", + keyword, i+1); + return cleanreturn(0, freelist); + } + /* current code reports success when all required args + * fulfilled and no keyword args left, with no further + * validation. XXX Maybe skip this in debug build ? 
+ */ + if (!nkeywords) + return cleanreturn(1, freelist); - if (i < min) { - PyErr_Format(PyExc_TypeError, "Required argument " - "'%s' (pos %d) not found", - keyword, i+1); - return cleanreturn(0, &freelist); - } - /* current code reports success when all required args - * fulfilled and no keyword args left, with no further - * validation. XXX Maybe skip this in debug build ? - */ - if (!nkeywords) - return cleanreturn(1, &freelist); + /* We are into optional args, skip thru to any remaining + * keyword args */ + msg = skipitem(&format, p_va, flags); + if (msg) { + PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg, + format); + return cleanreturn(0, freelist); + } + } - /* We are into optional args, skip thru to any remaining - * keyword args */ - msg = skipitem(&format, p_va, flags); - if (msg) { - PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg, - format); - return cleanreturn(0, &freelist); - } - } + if (!IS_END_OF_FORMAT(*format) && *format != '|') { + PyErr_Format(PyExc_RuntimeError, + "more argument specifiers than keyword list entries " + "(remaining format:'%s')", format); + return cleanreturn(0, freelist); + } - if (!IS_END_OF_FORMAT(*format) && *format != '|') { - PyErr_Format(PyExc_RuntimeError, - "more argument specifiers than keyword list entries " - "(remaining format:'%s')", format); - return cleanreturn(0, &freelist); - } + /* make sure there are no extraneous keyword arguments */ + if (nkeywords > 0) { + PyObject *key, *value; + Py_ssize_t pos = 0; + while (PyDict_Next(keywords, &pos, &key, &value)) { + int match = 0; + char *ks; + if (!PyUnicode_Check(key)) { + PyErr_SetString(PyExc_TypeError, + "keywords must be strings"); + return cleanreturn(0, freelist); + } + /* check that _PyUnicode_AsString() result is not NULL */ + ks = _PyUnicode_AsString(key); + if (ks != NULL) { + for (i = 0; i < len; i++) { + if (!strcmp(ks, kwlist[i])) { + match = 1; + break; + } + } + } + if (!match) { + PyErr_Format(PyExc_TypeError, + "'%U' is an invalid keyword " 
+                             "argument for this function",
+                             key);
+                return cleanreturn(0, freelist);
+            }
+        }
+    }
-        /* make sure there are no extraneous keyword arguments */
-        if (nkeywords > 0) {
-                PyObject *key, *value;
-                Py_ssize_t pos = 0;
-                while (PyDict_Next(keywords, &pos, &key, &value)) {
-                        int match = 0;
-                        char *ks;
-                        if (!PyString_Check(key)) {
-                                PyErr_SetString(PyExc_TypeError,
-                                                "keywords must be strings");
-                                return cleanreturn(0, &freelist);
-                        }
-                        ks = PyString_AsString(key);
-                        for (i = 0; i < len; i++) {
-                                if (!strcmp(ks, kwlist[i])) {
-                                        match = 1;
-                                        break;
-                                }
-                        }
-                        if (!match) {
-                                PyErr_Format(PyExc_TypeError,
-                                             "'%s' is an invalid keyword "
-                                             "argument for this function",
-                                             ks);
-                                return cleanreturn(0, &freelist);
-                        }
-                }
-        }
-
-        return cleanreturn(1, &freelist);
+    return cleanreturn(1, freelist);
 }
 
 static char *
 skipitem(const char **p_format, va_list *p_va, int flags)
 {
-        const char *format = *p_format;
-        char c = *format++;
-
-        switch (c) {
+    const char *format = *p_format;
+    char c = *format++;
-        /* simple codes
-         * The individual types (second arg of va_arg) are irrelevant */
+    switch (c) {
-        case 'b': /* byte -- very short int */
-        case 'B': /* byte as bitfield */
-        case 'h': /* short int */
-        case 'H': /* short int as bitfield */
-        case 'i': /* int */
-        case 'I': /* int sized bitfield */
-        case 'l': /* long int */
-        case 'k': /* long int sized bitfield */
+
+    /* simple codes
+     * The individual types (second arg of va_arg) are irrelevant */
+
+    case 'b': /* byte -- very short int */
+    case 'B': /* byte as bitfield */
+    case 'h': /* short int */
+    case 'H': /* short int as bitfield */
+    case 'i': /* int */
+    case 'I': /* int sized bitfield */
+    case 'l': /* long int */
+    case 'k': /* long int sized bitfield */
 #ifdef HAVE_LONG_LONG
-        case 'L': /* PY_LONG_LONG */
-        case 'K': /* PY_LONG_LONG sized bitfield */
+    case 'L': /* PY_LONG_LONG */
+    case 'K': /* PY_LONG_LONG sized bitfield */
 #endif
-        case 'f': /* float */
-        case 'd': /* double */
-#ifndef WITHOUT_COMPLEX
-        case 'D': /* complex double */
-#endif
-        case 'c': /* char */
-        {
-                (void) va_arg(*p_va, void *);
-                break;
-        }
+    case 'f': /* float */
+    case 'd': /* double */
+    case 'D': /* complex double */
+    case 'c': /* char */
+    case 'C': /* unicode char */
+    {
+        (void) va_arg(*p_va, void *);
+        break;
+    }
-        case 'n': /* Py_ssize_t */
-        {
-                (void) va_arg(*p_va, Py_ssize_t *);
-                break;
-        }
-
-        /* string codes */
-
-        case 'e': /* string with encoding */
-        {
-                (void) va_arg(*p_va, const char *);
-                if (!(*format == 's' || *format == 't'))
-                        /* after 'e', only 's' and 't' is allowed */
-                        goto err;
-                format++;
-                /* explicit fallthrough to string cases */
-        }
-
-        case 's': /* string */
-        case 'z': /* string or None */
-#ifdef Py_USING_UNICODE
-        case 'u': /* unicode string */
-#endif
-        case 't': /* buffer, read-only */
-        case 'w': /* buffer, read-write */
-        {
-                (void) va_arg(*p_va, char **);
-                if (*format == '#') {
-                        if (flags & FLAG_SIZE_T)
-                                (void) va_arg(*p_va, Py_ssize_t *);
-                        else
-                                (void) va_arg(*p_va, int *);
-                        format++;
-                } else if ((c == 's' || c == 'z') && *format == '*') {
-                        format++;
-                }
-                break;
-        }
+    case 'n': /* Py_ssize_t */
+    {
+        (void) va_arg(*p_va, Py_ssize_t *);
+        break;
+    }
-        /* object codes */
+    /* string codes */
-        case 'S': /* string object */
-#ifdef Py_USING_UNICODE
-        case 'U': /* unicode string object */
-#endif
-        {
-                (void) va_arg(*p_va, PyObject **);
-                break;
-        }
-
-        case 'O': /* object */
-        {
-                if (*format == '!') {
-                        format++;
-                        (void) va_arg(*p_va, PyTypeObject*);
-                        (void) va_arg(*p_va, PyObject **);
-                }
-#if 0
-/* I don't know what this is for */
-                else if (*format == '?') {
-                        inquiry pred = va_arg(*p_va, inquiry);
-                        format++;
-                        if ((*pred)(arg)) {
-                                (void) va_arg(*p_va, PyObject **);
-                        }
-                }
-#endif
-                else if (*format == '&') {
-                        typedef int (*converter)(PyObject *, void *);
-                        (void) va_arg(*p_va, converter);
-                        (void) va_arg(*p_va, void *);
-                        format++;
-                }
-                else {
-                        (void) va_arg(*p_va, PyObject **);
-                }
-                break;
-        }
+    case 'e': /* string with encoding */
+ { + (void) va_arg(*p_va, const char *); + if (!(*format == 's' || *format == 't')) + /* after 'e', only 's' and 't' is allowed */ + goto err; + format++; + /* explicit fallthrough to string cases */ + } - case '(': /* bypass tuple, not handled at all previously */ - { - char *msg; - for (;;) { - if (*format==')') - break; - if (IS_END_OF_FORMAT(*format)) - return "Unmatched left paren in format " - "string"; - msg = skipitem(&format, p_va, flags); - if (msg) - return msg; - } - format++; - break; - } + case 's': /* string */ + case 'z': /* string or None */ + case 'y': /* bytes */ + case 'u': /* unicode string */ + case 'w': /* buffer, read-write */ + { + (void) va_arg(*p_va, char **); + if (*format == '#') { + if (flags & FLAG_SIZE_T) + (void) va_arg(*p_va, Py_ssize_t *); + else + (void) va_arg(*p_va, int *); + format++; + } else if ((c == 's' || c == 'z' || c == 'y') && *format == '*') { + format++; + } + break; + } - case ')': - return "Unmatched right paren in format string"; + /* object codes */ - default: + case 'S': /* string object */ + case 'Y': /* string object */ + case 'U': /* unicode string object */ + { + (void) va_arg(*p_va, PyObject **); + break; + } + + case 'O': /* object */ + { + if (*format == '!') { + format++; + (void) va_arg(*p_va, PyTypeObject*); + (void) va_arg(*p_va, PyObject **); + } + else if (*format == '&') { + typedef int (*converter)(PyObject *, void *); + (void) va_arg(*p_va, converter); + (void) va_arg(*p_va, void *); + format++; + } + else { + (void) va_arg(*p_va, PyObject **); + } + break; + } + + case '(': /* bypass tuple, not handled at all previously */ + { + char *msg; + for (;;) { + if (*format==')') + break; + if (IS_END_OF_FORMAT(*format)) + return "Unmatched left paren in format " + "string"; + msg = skipitem(&format, p_va, flags); + if (msg) + return msg; + } + format++; + break; + } + + case ')': + return "Unmatched right paren in format string"; + + default: err: - return "impossible"; - - } + return "impossible"; - 
*p_format = format; - return NULL; + } + + *p_format = format; + return NULL; } int PyArg_UnpackTuple(PyObject *args, const char *name, Py_ssize_t min, Py_ssize_t max, ...) { - Py_ssize_t i, l; - PyObject **o; - va_list vargs; + Py_ssize_t i, l; + PyObject **o; + va_list vargs; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, max); + va_start(vargs, max); #else - va_start(vargs); + va_start(vargs); #endif - assert(min >= 0); - assert(min <= max); - if (!PyTuple_Check(args)) { - PyErr_SetString(PyExc_SystemError, - "PyArg_UnpackTuple() argument list is not a tuple"); - return 0; - } - l = PyTuple_GET_SIZE(args); - if (l < min) { - if (name != NULL) - PyErr_Format( - PyExc_TypeError, - "%s expected %s%zd arguments, got %zd", - name, (min == max ? "" : "at least "), min, l); - else - PyErr_Format( - PyExc_TypeError, - "unpacked tuple should have %s%zd elements," - " but has %zd", - (min == max ? "" : "at least "), min, l); - va_end(vargs); - return 0; - } - if (l > max) { - if (name != NULL) - PyErr_Format( - PyExc_TypeError, - "%s expected %s%zd arguments, got %zd", - name, (min == max ? "" : "at most "), max, l); - else - PyErr_Format( - PyExc_TypeError, - "unpacked tuple should have %s%zd elements," - " but has %zd", - (min == max ? "" : "at most "), max, l); - va_end(vargs); - return 0; - } - for (i = 0; i < l; i++) { - o = va_arg(vargs, PyObject **); - *o = PyTuple_GET_ITEM(args, i); - } - va_end(vargs); - return 1; + assert(min >= 0); + assert(min <= max); + if (!PyTuple_Check(args)) { + PyErr_SetString(PyExc_SystemError, + "PyArg_UnpackTuple() argument list is not a tuple"); + return 0; + } + l = PyTuple_GET_SIZE(args); + if (l < min) { + if (name != NULL) + PyErr_Format( + PyExc_TypeError, + "%s expected %s%zd arguments, got %zd", + name, (min == max ? "" : "at least "), min, l); + else + PyErr_Format( + PyExc_TypeError, + "unpacked tuple should have %s%zd elements," + " but has %zd", + (min == max ? 
"" : "at least "), min, l); + va_end(vargs); + return 0; + } + if (l > max) { + if (name != NULL) + PyErr_Format( + PyExc_TypeError, + "%s expected %s%zd arguments, got %zd", + name, (min == max ? "" : "at most "), max, l); + else + PyErr_Format( + PyExc_TypeError, + "unpacked tuple should have %s%zd elements," + " but has %zd", + (min == max ? "" : "at most "), max, l); + va_end(vargs); + return 0; + } + for (i = 0; i < l; i++) { + o = va_arg(vargs, PyObject **); + *o = PyTuple_GET_ITEM(args, i); + } + va_end(vargs); + return 1; } /* For type constructors that don't take keyword args * - * Sets a TypeError and returns 0 if the kwds dict is + * Sets a TypeError and returns 0 if the kwds dict is * not empty, returns 1 otherwise */ int _PyArg_NoKeywords(const char *funcname, PyObject *kw) { - if (kw == NULL) - return 1; - if (!PyDict_CheckExact(kw)) { - PyErr_BadInternalCall(); - return 0; - } - if (PyDict_Size(kw) == 0) - return 1; - - PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments", - funcname); - return 0; + if (kw == NULL) + return 1; + if (!PyDict_CheckExact(kw)) { + PyErr_BadInternalCall(); + return 0; + } + if (PyDict_Size(kw) == 0) + return 1; + + PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments", + funcname); + return 0; } #ifdef __cplusplus }; diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c --- a/pypy/module/cpyext/src/modsupport.c +++ b/pypy/module/cpyext/src/modsupport.c @@ -11,63 +11,46 @@ /* Package context -- the full module name for package imports */ char *_Py_PackageContext = NULL; -/* Py_InitModule4() parameters: - - name is the module name - - methods is the list of top-level functions - - doc is the documentation string - - passthrough is passed as self to functions defined in the module - - api_version is the value of PYTHON_API_VERSION at the time the - module was compiled - - Return value is a borrowed reference to the module object; or NULL - if an error occurred (in 
Python 1.4 and before, errors were fatal). - Errors may still leak memory. -*/ - -static char api_version_warning[] = -"Python C API version mismatch for module %.100s:\ - This Python has API version %d, module %.100s has version %d."; - /* Helper for mkvalue() to scan the length of a format */ static int countformat(const char *format, int endchar) { - int count = 0; - int level = 0; - while (level > 0 || *format != endchar) { - switch (*format) { - case '\0': - /* Premature end */ - PyErr_SetString(PyExc_SystemError, - "unmatched paren in format"); - return -1; - case '(': - case '[': - case '{': - if (level == 0) - count++; - level++; - break; - case ')': - case ']': - case '}': - level--; - break; - case '#': - case '&': - case ',': - case ':': - case ' ': - case '\t': - break; - default: - if (level == 0) - count++; - } - format++; - } - return count; + int count = 0; + int level = 0; + while (level > 0 || *format != endchar) { + switch (*format) { + case '\0': + /* Premature end */ + PyErr_SetString(PyExc_SystemError, + "unmatched paren in format"); + return -1; + case '(': + case '[': + case '{': + if (level == 0) + count++; + level++; + break; + case ')': + case ']': + case '}': + level--; + break; + case '#': + case '&': + case ',': + case ':': + case ' ': + case '\t': + break; + default: + if (level == 0) + count++; + } + format++; + } + return count; } @@ -83,582 +66,459 @@ static PyObject * do_mkdict(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *d; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - if ((d = PyDict_New()) == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. 
*/ - for (i = 0; i < n; i+= 2) { - PyObject *k, *v; - int err; - k = do_mkvalue(p_format, p_va, flags); - if (k == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - k = Py_None; - } - v = do_mkvalue(p_format, p_va, flags); - if (v == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - v = Py_None; - } - err = PyDict_SetItem(d, k, v); - Py_DECREF(k); - Py_DECREF(v); - if (err < 0 || itemfailed) { - Py_DECREF(d); - return NULL; - } - } - if (d != NULL && **p_format != endchar) { - Py_DECREF(d); - d = NULL; - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - } - else if (endchar) - ++*p_format; - return d; + PyObject *d; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + if ((d = PyDict_New()) == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. */ + for (i = 0; i < n; i+= 2) { + PyObject *k, *v; + int err; + k = do_mkvalue(p_format, p_va, flags); + if (k == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + k = Py_None; + } + v = do_mkvalue(p_format, p_va, flags); + if (v == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + v = Py_None; + } + err = PyDict_SetItem(d, k, v); + Py_DECREF(k); + Py_DECREF(v); + if (err < 0 || itemfailed) { + Py_DECREF(d); + return NULL; + } + } + if (d != NULL && **p_format != endchar) { + Py_DECREF(d); + d = NULL; + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + } + else if (endchar) + ++*p_format; + return d; } static PyObject * do_mklist(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *v; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - v = PyList_New(n); - if (v == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. 
*/ - for (i = 0; i < n; i++) { - PyObject *w = do_mkvalue(p_format, p_va, flags); - if (w == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - w = Py_None; - } - PyList_SET_ITEM(v, i, w); - } + PyObject *v; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + v = PyList_New(n); + if (v == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. */ + for (i = 0; i < n; i++) { + PyObject *w = do_mkvalue(p_format, p_va, flags); + if (w == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + w = Py_None; + } + PyList_SET_ITEM(v, i, w); + } - if (itemfailed) { - /* do_mkvalue() should have already set an error */ - Py_DECREF(v); - return NULL; - } - if (**p_format != endchar) { - Py_DECREF(v); - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - return NULL; - } - if (endchar) - ++*p_format; - return v; + if (itemfailed) { + /* do_mkvalue() should have already set an error */ + Py_DECREF(v); + return NULL; + } + if (**p_format != endchar) { + Py_DECREF(v); + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + return NULL; + } + if (endchar) + ++*p_format; + return v; } -#ifdef Py_USING_UNICODE static int _ustrlen(Py_UNICODE *u) { - int i = 0; - Py_UNICODE *v = u; - while (*v != 0) { i++; v++; } - return i; + int i = 0; + Py_UNICODE *v = u; + while (*v != 0) { i++; v++; } + return i; } -#endif static PyObject * do_mktuple(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *v; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - if ((v = PyTuple_New(n)) == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. 
*/ - for (i = 0; i < n; i++) { - PyObject *w = do_mkvalue(p_format, p_va, flags); - if (w == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - w = Py_None; - } - PyTuple_SET_ITEM(v, i, w); - } - if (itemfailed) { - /* do_mkvalue() should have already set an error */ - Py_DECREF(v); - return NULL; - } - if (**p_format != endchar) { - Py_DECREF(v); - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - return NULL; - } - if (endchar) - ++*p_format; - return v; + PyObject *v; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + if ((v = PyTuple_New(n)) == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. */ + for (i = 0; i < n; i++) { + PyObject *w = do_mkvalue(p_format, p_va, flags); + if (w == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + w = Py_None; + } + PyTuple_SET_ITEM(v, i, w); + } + if (itemfailed) { + /* do_mkvalue() should have already set an error */ + Py_DECREF(v); + return NULL; + } + if (**p_format != endchar) { + Py_DECREF(v); + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + return NULL; + } + if (endchar) + ++*p_format; + return v; } static PyObject * do_mkvalue(const char **p_format, va_list *p_va, int flags) { - for (;;) { - switch (*(*p_format)++) { - case '(': - return do_mktuple(p_format, p_va, ')', - countformat(*p_format, ')'), flags); + for (;;) { + switch (*(*p_format)++) { + case '(': + return do_mktuple(p_format, p_va, ')', + countformat(*p_format, ')'), flags); - case '[': - return do_mklist(p_format, p_va, ']', - countformat(*p_format, ']'), flags); + case '[': + return do_mklist(p_format, p_va, ']', + countformat(*p_format, ']'), flags); - case '{': - return do_mkdict(p_format, p_va, '}', - countformat(*p_format, '}'), flags); + case '{': + return do_mkdict(p_format, p_va, '}', + countformat(*p_format, '}'), flags); - case 'b': - case 'B': - case 'h': - case 'i': - return PyInt_FromLong((long)va_arg(*p_va, int)); - - 
case 'H': - return PyInt_FromLong((long)va_arg(*p_va, unsigned int)); + case 'b': + case 'B': + case 'h': + case 'i': + return PyLong_FromLong((long)va_arg(*p_va, int)); - case 'I': - { - unsigned int n; - n = va_arg(*p_va, unsigned int); - if (n > (unsigned long)PyInt_GetMax()) - return PyLong_FromUnsignedLong((unsigned long)n); - else - return PyInt_FromLong(n); - } - - case 'n': + case 'H': + return PyLong_FromLong((long)va_arg(*p_va, unsigned int)); + + case 'I': + { + unsigned int n; + n = va_arg(*p_va, unsigned int); + return PyLong_FromUnsignedLong(n); + } + + case 'n': #if SIZEOF_SIZE_T!=SIZEOF_LONG - return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t)); + return PyLong_FromSsize_t(va_arg(*p_va, Py_ssize_t)); #endif - /* Fall through from 'n' to 'l' if Py_ssize_t is long */ - case 'l': - return PyInt_FromLong(va_arg(*p_va, long)); + /* Fall through from 'n' to 'l' if Py_ssize_t is long */ + case 'l': + return PyLong_FromLong(va_arg(*p_va, long)); - case 'k': - { - unsigned long n; - n = va_arg(*p_va, unsigned long); - if (n > (unsigned long)PyInt_GetMax()) - return PyLong_FromUnsignedLong(n); - else - return PyInt_FromLong(n); - } + case 'k': + { + unsigned long n; + n = va_arg(*p_va, unsigned long); + return PyLong_FromUnsignedLong(n); + } #ifdef HAVE_LONG_LONG - case 'L': - return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG)); + case 'L': + return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG)); - case 'K': - return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG)); + case 'K': + return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG)); #endif -#ifdef Py_USING_UNICODE - case 'u': - { - PyObject *v; - Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *); - Py_ssize_t n; - if (**p_format == '#') { - ++*p_format; - if (flags & FLAG_SIZE_T) - n = va_arg(*p_va, Py_ssize_t); - else - n = va_arg(*p_va, int); - } - else - n = -1; - if (u == NULL) { - v = Py_None; - Py_INCREF(v); - } 
- else { - if (n < 0) - n = _ustrlen(u); - v = PyUnicode_FromUnicode(u, n); - } - return v; - } -#endif - case 'f': - case 'd': - return PyFloat_FromDouble( - (double)va_arg(*p_va, va_double)); + case 'u': + { + PyObject *v; + Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *); + Py_ssize_t n; + if (**p_format == '#') { + ++*p_format; + if (flags & FLAG_SIZE_T) + n = va_arg(*p_va, Py_ssize_t); + else + n = va_arg(*p_va, int); + } + else + n = -1; + if (u == NULL) { + v = Py_None; + Py_INCREF(v); + } + else { + if (n < 0) + n = _ustrlen(u); + v = PyUnicode_FromUnicode(u, n); + } + return v; + } + case 'f': + case 'd': + return PyFloat_FromDouble( + (double)va_arg(*p_va, va_double)); -#ifndef WITHOUT_COMPLEX - case 'D': - return PyComplex_FromCComplex( - *((Py_complex *)va_arg(*p_va, Py_complex *))); -#endif /* WITHOUT_COMPLEX */ + case 'D': + return PyComplex_FromCComplex( + *((Py_complex *)va_arg(*p_va, Py_complex *))); - case 'c': - { - char p[1]; - p[0] = (char)va_arg(*p_va, int); - return PyString_FromStringAndSize(p, 1); - } + case 'c': + { + char p[1]; + p[0] = (char)va_arg(*p_va, int); + return PyBytes_FromStringAndSize(p, 1); + } + case 'C': + { + int i = va_arg(*p_va, int); + if (i < 0 || i > PyUnicode_GetMax()) { + PyErr_SetString(PyExc_OverflowError, + "%c arg not in range(0x110000)"); + return NULL; + } + return PyUnicode_FromOrdinal(i); + } - case 's': - case 'z': - { - PyObject *v; - char *str = va_arg(*p_va, char *); - Py_ssize_t n; - if (**p_format == '#') { - ++*p_format; - if (flags & FLAG_SIZE_T) - n = va_arg(*p_va, Py_ssize_t); - else - n = va_arg(*p_va, int); - } - else - n = -1; - if (str == NULL) { - v = Py_None; - Py_INCREF(v); - } - else { - if (n < 0) { - size_t m = strlen(str); - if (m > PY_SSIZE_T_MAX) { - PyErr_SetString(PyExc_OverflowError, - "string too long for Python string"); - return NULL; - } - n = (Py_ssize_t)m; - } - v = PyString_FromStringAndSize(str, n); - } - return v; - } + case 's': + case 'z': + case 'U': /* XXX deprecated alias 
*/ + { + PyObject *v; + char *str = va_arg(*p_va, char *); + Py_ssize_t n; + if (**p_format == '#') { + ++*p_format; + if (flags & FLAG_SIZE_T) + n = va_arg(*p_va, Py_ssize_t); + else + n = va_arg(*p_va, int); + } + else + n = -1; + if (str == NULL) { + v = Py_None; + Py_INCREF(v); + } + else { + if (n < 0) { + size_t m = strlen(str); + if (m > PY_SSIZE_T_MAX) { + PyErr_SetString(PyExc_OverflowError, + "string too long for Python string"); + return NULL; + } + n = (Py_ssize_t)m; + } + v = PyUnicode_FromStringAndSize(str, n); + } + return v; + } - case 'N': - case 'S': - case 'O': - if (**p_format == '&') { - typedef PyObject *(*converter)(void *); - converter func = va_arg(*p_va, converter); - void *arg = va_arg(*p_va, void *); - ++*p_format; - return (*func)(arg); - } - else { - PyObject *v; - v = va_arg(*p_va, PyObject *); - if (v != NULL) { - if (*(*p_format - 1) != 'N') - Py_INCREF(v); - } - else if (!PyErr_Occurred()) - /* If a NULL was passed - * because a call that should - * have constructed a value - * failed, that's OK, and we - * pass the error on; but if - * no error occurred it's not - * clear that the caller knew - * what she was doing. 
*/ - PyErr_SetString(PyExc_SystemError, - "NULL object passed to Py_BuildValue"); - return v; - } + case 'y': + { + PyObject *v; + char *str = va_arg(*p_va, char *); + Py_ssize_t n; + if (**p_format == '#') { + ++*p_format; + if (flags & FLAG_SIZE_T) + n = va_arg(*p_va, Py_ssize_t); + else + n = va_arg(*p_va, int); + } + else + n = -1; + if (str == NULL) { + v = Py_None; + Py_INCREF(v); + } + else { + if (n < 0) { + size_t m = strlen(str); + if (m > PY_SSIZE_T_MAX) { + PyErr_SetString(PyExc_OverflowError, + "string too long for Python bytes"); + return NULL; + } + n = (Py_ssize_t)m; + } + v = PyBytes_FromStringAndSize(str, n); + } + return v; + } - case ':': - case ',': - case ' ': - case '\t': - break; + case 'N': + case 'S': + case 'O': + if (**p_format == '&') { + typedef PyObject *(*converter)(void *); + converter func = va_arg(*p_va, converter); + void *arg = va_arg(*p_va, void *); + ++*p_format; + return (*func)(arg); + } + else { + PyObject *v; + v = va_arg(*p_va, PyObject *); + if (v != NULL) { + if (*(*p_format - 1) != 'N') + Py_INCREF(v); + } + else if (!PyErr_Occurred()) + /* If a NULL was passed + * because a call that should + * have constructed a value + * failed, that's OK, and we + * pass the error on; but if + * no error occurred it's not + * clear that the caller knew + * what she was doing. */ + PyErr_SetString(PyExc_SystemError, + "NULL object passed to Py_BuildValue"); + return v; + } - default: - PyErr_SetString(PyExc_SystemError, - "bad format char passed to Py_BuildValue"); - return NULL; + case ':': + case ',': + case ' ': + case '\t': + break; - } - } + default: + PyErr_SetString(PyExc_SystemError, + "bad format char passed to Py_BuildValue"); + return NULL; + + } + } } PyObject * Py_BuildValue(const char *format, ...) 
 {
-        va_list va;
-        PyObject* retval;
-        va_start(va, format);
-        retval = va_build_value(format, va, 0);
-        va_end(va);
-        return retval;
+    va_list va;
+    PyObject* retval;
+    va_start(va, format);
+    retval = va_build_value(format, va, 0);
+    va_end(va);
+    return retval;
 }
 
 PyObject *
 _Py_BuildValue_SizeT(const char *format, ...)
 {
-        va_list va;
-        PyObject* retval;
-        va_start(va, format);
-        retval = va_build_value(format, va, FLAG_SIZE_T);
-        va_end(va);
-        return retval;
+    va_list va;
+    PyObject* retval;
+    va_start(va, format);
+    retval = va_build_value(format, va, FLAG_SIZE_T);
+    va_end(va);
+    return retval;
 }
 
 PyObject *
 Py_VaBuildValue(const char *format, va_list va)
 {
-        return va_build_value(format, va, 0);
+    return va_build_value(format, va, 0);
 }
 
 PyObject *
 _Py_VaBuildValue_SizeT(const char *format, va_list va)
 {
-        return va_build_value(format, va, FLAG_SIZE_T);
+    return va_build_value(format, va, FLAG_SIZE_T);
 }
 
 static PyObject *
 va_build_value(const char *format, va_list va, int flags)
 {
-        const char *f = format;
-        int n = countformat(f, '\0');
-        va_list lva;
+    const char *f = format;
+    int n = countformat(f, '\0');
+    va_list lva;
 
-#ifdef VA_LIST_IS_ARRAY
-        memcpy(lva, va, sizeof(va_list));
-#else
-#ifdef __va_copy
-        __va_copy(lva, va);
-#else
-        lva = va;
-#endif
-#endif
+    Py_VA_COPY(lva, va);
 
-        if (n < 0)
-                return NULL;
-        if (n == 0) {
-                Py_INCREF(Py_None);
-                return Py_None;
-        }
-        if (n == 1)
-                return do_mkvalue(&f, &lva, flags);
-        return do_mktuple(&f, &lva, '\0', n, flags);
+    if (n < 0)
+        return NULL;
+    if (n == 0) {
+        Py_INCREF(Py_None);
+        return Py_None;
+    }
+    if (n == 1)
+        return do_mkvalue(&f, &lva, flags);
+    return do_mktuple(&f, &lva, '\0', n, flags);
 }
 
 PyObject *
 PyEval_CallFunction(PyObject *obj, const char *format, ...)
{ - va_list vargs; - PyObject *args; - PyObject *res; + va_list vargs; + PyObject *args; + PyObject *res; - va_start(vargs, format); + va_start(vargs, format); - args = Py_VaBuildValue(format, vargs); - va_end(vargs); + args = Py_VaBuildValue(format, vargs); + va_end(vargs); - if (args == NULL) - return NULL; + if (args == NULL) + return NULL; - res = PyEval_CallObject(obj, args); - Py_DECREF(args); + res = PyEval_CallObject(obj, args); + Py_DECREF(args); - return res; + return res; } PyObject * PyEval_CallMethod(PyObject *obj, const char *methodname, const char *format, ...) { - va_list vargs; - PyObject *meth; - PyObject *args; - PyObject *res; + va_list vargs; + PyObject *meth; + PyObject *args; + PyObject *res; - meth = PyObject_GetAttrString(obj, methodname); - if (meth == NULL) - return NULL; + meth = PyObject_GetAttrString(obj, methodname); + if (meth == NULL) + return NULL; - va_start(vargs, format); + va_start(vargs, format); - args = Py_VaBuildValue(format, vargs); - va_end(vargs); + args = Py_VaBuildValue(format, vargs); + va_end(vargs); - if (args == NULL) { - Py_DECREF(meth); - return NULL; - } + if (args == NULL) { + Py_DECREF(meth); + return NULL; + } - res = PyEval_CallObject(meth, args); - Py_DECREF(meth); - Py_DECREF(args); + res = PyEval_CallObject(meth, args); + Py_DECREF(meth); + Py_DECREF(args); - return res; -} - -static PyObject* -call_function_tail(PyObject *callable, PyObject *args) -{ - PyObject *retval; - - if (args == NULL) - return NULL; - - if (!PyTuple_Check(args)) { - PyObject *a; - - a = PyTuple_New(1); - if (a == NULL) { - Py_DECREF(args); - return NULL; - } - PyTuple_SET_ITEM(a, 0, args); - args = a; - } - retval = PyObject_Call(callable, args, NULL); - - Py_DECREF(args); - - return retval; -} - -PyObject * -PyObject_CallFunction(PyObject *callable, const char *format, ...) 
-{ - va_list va; - PyObject *args; - - if (format && *format) { - va_start(va, format); - args = Py_VaBuildValue(format, va); - va_end(va); - } - else - args = PyTuple_New(0); - - return call_function_tail(callable, args); -} - -PyObject * -PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) -{ - va_list va; - PyObject *args; - PyObject *func = NULL; - PyObject *retval = NULL; - - func = PyObject_GetAttrString(o, name); - if (func == NULL) { - PyErr_SetString(PyExc_AttributeError, name); - return 0; - } - - if (format && *format) { - va_start(va, format); - args = Py_VaBuildValue(format, va); - va_end(va); - } - else - args = PyTuple_New(0); - - retval = call_function_tail(func, args); - - exit: - /* args gets consumed in call_function_tail */ - Py_XDECREF(func); - - return retval; -} - -static PyObject * -objargs_mktuple(va_list va) -{ - int i, n = 0; - va_list countva; - PyObject *result, *tmp; - -#ifdef VA_LIST_IS_ARRAY - memcpy(countva, va, sizeof(va_list)); -#else -#ifdef __va_copy - __va_copy(countva, va); -#else - countva = va; -#endif -#endif - - while (((PyObject *)va_arg(countva, PyObject *)) != NULL) - ++n; - result = PyTuple_New(n); - if (result != NULL && n > 0) { - for (i = 0; i < n; ++i) { - tmp = (PyObject *)va_arg(va, PyObject *); - Py_INCREF(tmp); - PyTuple_SET_ITEM(result, i, tmp); - } - } - return result; -} - -PyObject * -PyObject_CallFunctionObjArgs(PyObject *callable, ...) -{ - PyObject *args, *tmp; - va_list vargs; - - /* count the args */ - va_start(vargs, callable); - args = objargs_mktuple(vargs); - va_end(vargs); - if (args == NULL) - return NULL; - tmp = PyObject_Call(callable, args, NULL); - Py_DECREF(args); - - return tmp; -} - -PyObject * -PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...) 
-{
-	PyObject *args, *tmp;
-	va_list vargs;
-
-	callable = PyObject_GetAttr(callable, name);
-	if (callable == NULL)
-		return NULL;
-
-	/* count the args */
-	va_start(vargs, name);
-	args = objargs_mktuple(vargs);
-	va_end(vargs);
-	if (args == NULL) {
-		Py_DECREF(callable);
-		return NULL;
-	}
-	tmp = PyObject_Call(callable, args, NULL);
-	Py_DECREF(args);
-	Py_DECREF(callable);
-
-	return tmp;
+    return res;
 }
 
 /* returns -1 in case of error, 0 if a new key was added, 1 if the key
@@ -666,67 +526,67 @@
 static int
 _PyModule_AddObject_NoConsumeRef(PyObject *m, const char *name, PyObject *o)
 {
-	PyObject *dict, *prev;
-	if (!PyModule_Check(m)) {
-		PyErr_SetString(PyExc_TypeError,
-			"PyModule_AddObject() needs module as first arg");
-		return -1;
-	}
-	if (!o) {
-		if (!PyErr_Occurred())
-			PyErr_SetString(PyExc_TypeError,
-				"PyModule_AddObject() needs non-NULL value");
-		return -1;
-	}
+    PyObject *dict, *prev;
+    if (!PyModule_Check(m)) {
+        PyErr_SetString(PyExc_TypeError,
+                        "PyModule_AddObject() needs module as first arg");
+        return -1;
+    }
+    if (!o) {
+        if (!PyErr_Occurred())
+            PyErr_SetString(PyExc_TypeError,
+                            "PyModule_AddObject() needs non-NULL value");
+        return -1;
+    }
 
-	dict = PyModule_GetDict(m);
-	if (dict == NULL) {
-		/* Internal error -- modules must have a dict! */
-		PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__",
-			PyModule_GetName(m));
-		return -1;
-	}
-	prev = PyDict_GetItemString(dict, name);
-	if (PyDict_SetItemString(dict, name, o))
-		return -1;
-	return prev != NULL;
+    dict = PyModule_GetDict(m);
+    if (dict == NULL) {
+        /* Internal error -- modules must have a dict! */
+        PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__",
+                     PyModule_GetName(m));
+        return -1;
+    }
+    prev = PyDict_GetItemString(dict, name);
+    if (PyDict_SetItemString(dict, name, o))
+        return -1;
+    return prev != NULL;
 }
 
 int
 PyModule_AddObject(PyObject *m, const char *name, PyObject *o)
 {
-	int result = _PyModule_AddObject_NoConsumeRef(m, name, o);
-	/* XXX WORKAROUND for a common misusage of PyModule_AddObject:
-	   for the common case of adding a new key, we don't consume a
-	   reference, but instead just leak it away. The issue is that
-	   people generally don't realize that this function consumes a
-	   reference, because on CPython the reference is still stored
-	   on the dictionary. */
-	if (result != 0)
-		Py_DECREF(o);
-	return result < 0 ? -1 : 0;
+    int result = _PyModule_AddObject_NoConsumeRef(m, name, o);
+    /* XXX WORKAROUND for a common misusage of PyModule_AddObject:
+       for the common case of adding a new key, we don't consume a
+       reference, but instead just leak it away. The issue is that
+       people generally don't realize that this function consumes a
+       reference, because on CPython the reference is still stored
+       on the dictionary. */
+    if (result != 0)
+        Py_DECREF(o);
+    return result < 0 ? -1 : 0;
 }
 
-int
+int
 PyModule_AddIntConstant(PyObject *m, const char *name, long value)
 {
-	int result;
-	PyObject *o = PyInt_FromLong(value);
-	if (!o)
-		return -1;
-	result = _PyModule_AddObject_NoConsumeRef(m, name, o);
-	Py_DECREF(o);
-	return result < 0 ? -1 : 0;
+    int result;
+    PyObject *o = PyLong_FromLong(value);
+    if (!o)
+        return -1;
+    result = _PyModule_AddObject_NoConsumeRef(m, name, o);
+    Py_DECREF(o);
+    return result < 0 ? -1 : 0;
 }
 
-int
+int
 PyModule_AddStringConstant(PyObject *m, const char *name, const char *value)
 {
-	int result;
-	PyObject *o = PyString_FromString(value);
-	if (!o)
-		return -1;
-	result = _PyModule_AddObject_NoConsumeRef(m, name, o);
-	Py_DECREF(o);
-	return result < 0 ? -1 : 0;
+    int result;
+    PyObject *o = PyUnicode_FromString(value);
+    if (!o)
+        return -1;
+    result = _PyModule_AddObject_NoConsumeRef(m, name, o);
+    Py_DECREF(o);
+    return result < 0 ? -1 : 0;
 }
diff --git a/pypy/module/cpyext/src/mysnprintf.c b/pypy/module/cpyext/src/mysnprintf.c
--- a/pypy/module/cpyext/src/mysnprintf.c
+++ b/pypy/module/cpyext/src/mysnprintf.c
@@ -1,5 +1,4 @@
 #include "Python.h"
-#include
 
 /* snprintf() wrappers. If the platform has vsnprintf, we use it, else we
    emulate it in a half-hearted way. Even if the platform has it, we wrap
@@ -20,86 +19,86 @@
 
 Return value (rv):
 
-	When 0 <= rv < size, the output conversion was unexceptional, and
-	rv characters were written to str (excluding a trailing \0 byte at
-	str[rv]).
+    When 0 <= rv < size, the output conversion was unexceptional, and
+    rv characters were written to str (excluding a trailing \0 byte at
+    str[rv]).
 
-	When rv >= size, output conversion was truncated, and a buffer of
-	size rv+1 would have been needed to avoid truncation. str[size-1]
-	is \0 in this case.
+    When rv >= size, output conversion was truncated, and a buffer of
+    size rv+1 would have been needed to avoid truncation. str[size-1]
+    is \0 in this case.
 
-	When rv < 0, "something bad happened". str[size-1] is \0 in this
-	case too, but the rest of str is unreliable. It could be that
-	an error in format codes was detected by libc, or on platforms
-	with a non-C99 vsnprintf simply that the buffer wasn't big enough
-	to avoid truncation, or on platforms without any vsnprintf that
-	PyMem_Malloc couldn't obtain space for a temp buffer.
+    When rv < 0, "something bad happened". str[size-1] is \0 in this
+    case too, but the rest of str is unreliable. It could be that
+    an error in format codes was detected by libc, or on platforms
+    with a non-C99 vsnprintf simply that the buffer wasn't big enough
+    to avoid truncation, or on platforms without any vsnprintf that
+    PyMem_Malloc couldn't obtain space for a temp buffer.
CAUTION: Unlike C99, str != NULL and size > 0 are required. */ int +PyOS_snprintf(char *str, size_t size, const char *format, ...) +{ + int rc; + va_list va; + + va_start(va, format); + rc = PyOS_vsnprintf(str, size, format, va); + va_end(va); + return rc; +} + +int PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va) { - int len; /* # bytes written, excluding \0 */ + int len; /* # bytes written, excluding \0 */ #ifdef HAVE_SNPRINTF #define _PyOS_vsnprintf_EXTRA_SPACE 1 #else #define _PyOS_vsnprintf_EXTRA_SPACE 512 - char *buffer; + char *buffer; #endif - assert(str != NULL); - assert(size > 0); - assert(format != NULL); - /* We take a size_t as input but return an int. Sanity check - * our input so that it won't cause an overflow in the - * vsnprintf return value or the buffer malloc size. */ - if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) { - len = -666; - goto Done; - } + assert(str != NULL); + assert(size > 0); + assert(format != NULL); + /* We take a size_t as input but return an int. Sanity check + * our input so that it won't cause an overflow in the + * vsnprintf return value or the buffer malloc size. */ + if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) { + len = -666; + goto Done; + } #ifdef HAVE_SNPRINTF - len = vsnprintf(str, size, format, va); + len = vsnprintf(str, size, format, va); #else - /* Emulate it. */ - buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE); - if (buffer == NULL) { - len = -666; - goto Done; - } + /* Emulate it. 
*/ + buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE); + if (buffer == NULL) { + len = -666; + goto Done; + } - len = vsprintf(buffer, format, va); - if (len < 0) - /* ignore the error */; + len = vsprintf(buffer, format, va); + if (len < 0) + /* ignore the error */; - else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE) - Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf"); + else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE) + Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf"); - else { - const size_t to_copy = (size_t)len < size ? - (size_t)len : size - 1; - assert(to_copy < size); - memcpy(str, buffer, to_copy); - str[to_copy] = '\0'; - } - PyMem_FREE(buffer); + else { + const size_t to_copy = (size_t)len < size ? + (size_t)len : size - 1; + assert(to_copy < size); + memcpy(str, buffer, to_copy); + str[to_copy] = '\0'; + } + PyMem_FREE(buffer); #endif Done: - if (size > 0) - str[size-1] = '\0'; - return len; + if (size > 0) + str[size-1] = '\0'; + return len; #undef _PyOS_vsnprintf_EXTRA_SPACE } - -int -PyOS_snprintf(char *str, size_t size, const char *format, ...) 
-{ - int rc; - va_list va; - - va_start(va, format); - rc = PyOS_vsnprintf(str, size, format, va); - va_end(va); - return rc; -} diff --git a/pypy/module/cpyext/src/object.c b/pypy/module/cpyext/src/object.c deleted file mode 100644 --- a/pypy/module/cpyext/src/object.c +++ /dev/null @@ -1,91 +0,0 @@ -// contains code from abstract.c -#include - - -static PyObject * -null_error(void) -{ - if (!PyErr_Occurred()) - PyErr_SetString(PyExc_SystemError, - "null argument to internal routine"); - return NULL; -} - -int PyObject_AsReadBuffer(PyObject *obj, - const void **buffer, - Py_ssize_t *buffer_len) -{ - PyBufferProcs *pb; - void *pp; - Py_ssize_t len; - - if (obj == NULL || buffer == NULL || buffer_len == NULL) { - null_error(); - return -1; - } - pb = obj->ob_type->tp_as_buffer; - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a readable buffer object"); - return -1; - } - if ((*pb->bf_getsegcount)(obj, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "expected a single-segment buffer object"); - return -1; - } - len = (*pb->bf_getreadbuffer)(obj, 0, &pp); - if (len < 0) - return -1; - *buffer = pp; - *buffer_len = len; - return 0; -} - -int PyObject_AsWriteBuffer(PyObject *obj, - void **buffer, - Py_ssize_t *buffer_len) -{ - PyBufferProcs *pb; - void*pp; - Py_ssize_t len; - - if (obj == NULL || buffer == NULL || buffer_len == NULL) { - null_error(); - return -1; - } - pb = obj->ob_type->tp_as_buffer; - if (pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a writeable buffer object"); - return -1; - } - if ((*pb->bf_getsegcount)(obj, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "expected a single-segment buffer object"); - return -1; - } - len = (*pb->bf_getwritebuffer)(obj,0,&pp); - if (len < 0) - return -1; - *buffer = pp; - *buffer_len = len; - return 0; -} - -int 
-PyObject_CheckReadBuffer(PyObject *obj) -{ - PyBufferProcs *pb = obj->ob_type->tp_as_buffer; - - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL || - (*pb->bf_getsegcount)(obj, NULL) != 1) - return 0; - return 1; -} diff --git a/pypy/module/cpyext/src/pysignals.c b/pypy/module/cpyext/src/pysignals.c --- a/pypy/module/cpyext/src/pysignals.c +++ b/pypy/module/cpyext/src/pysignals.c @@ -17,39 +17,61 @@ PyOS_getsig(int sig) { #ifdef SA_RESTART - /* assume sigaction exists */ - struct sigaction context; - if (sigaction(sig, NULL, &context) == -1) - return SIG_ERR; - return context.sa_handler; + /* assume sigaction exists */ + struct sigaction context; + if (sigaction(sig, NULL, &context) == -1) + return SIG_ERR; + return context.sa_handler; #else - PyOS_sighandler_t handler; - handler = signal(sig, SIG_IGN); - if (handler != SIG_ERR) - signal(sig, handler); - return handler; + PyOS_sighandler_t handler; +/* Special signal handling for the secure CRT in Visual Studio 2005 */ +#if defined(_MSC_VER) && _MSC_VER >= 1400 + switch (sig) { + /* Only these signals are valid */ + case SIGINT: + case SIGILL: + case SIGFPE: + case SIGSEGV: + case SIGTERM: + case SIGBREAK: + case SIGABRT: + break; + /* Don't call signal() with other values or it will assert */ + default: + return SIG_ERR; + } +#endif /* _MSC_VER && _MSC_VER >= 1400 */ + handler = signal(sig, SIG_IGN); + if (handler != SIG_ERR) + signal(sig, handler); + return handler; #endif } +/* + * All of the code in this function must only use async-signal-safe functions, + * listed at `man 7 signal` or + * http://www.opengroup.org/onlinepubs/009695399/functions/xsh_chap02_04.html. 
+ */ PyOS_sighandler_t PyOS_setsig(int sig, PyOS_sighandler_t handler) { #ifdef SA_RESTART - /* assume sigaction exists */ - struct sigaction context, ocontext; - context.sa_handler = handler; - sigemptyset(&context.sa_mask); - context.sa_flags = 0; - if (sigaction(sig, &context, &ocontext) == -1) - return SIG_ERR; - return ocontext.sa_handler; + /* assume sigaction exists */ + struct sigaction context, ocontext; + context.sa_handler = handler; + sigemptyset(&context.sa_mask); + context.sa_flags = 0; + if (sigaction(sig, &context, &ocontext) == -1) + return SIG_ERR; + return ocontext.sa_handler; #else - PyOS_sighandler_t oldhandler; - oldhandler = signal(sig, handler); + PyOS_sighandler_t oldhandler; + oldhandler = signal(sig, handler); #ifndef MS_WINDOWS - /* should check if this exists */ - siginterrupt(sig, 1); + /* should check if this exists */ + siginterrupt(sig, 1); #endif - return oldhandler; + return oldhandler; #endif } diff --git a/pypy/module/cpyext/src/pythonrun.c b/pypy/module/cpyext/src/pythonrun.c --- a/pypy/module/cpyext/src/pythonrun.c +++ b/pypy/module/cpyext/src/pythonrun.c @@ -9,28 +9,30 @@ void Py_FatalError(const char *msg) { - fprintf(stderr, "Fatal Python error: %s\n", msg); - fflush(stderr); /* it helps in Windows debug build */ + fprintf(stderr, "Fatal Python error: %s\n", msg); + fflush(stderr); /* it helps in Windows debug build */ + if (PyErr_Occurred()) { + PyErr_PrintEx(0); + } +#ifdef MS_WINDOWS + { + size_t len = strlen(msg); + WCHAR* buffer; + size_t i; -#ifdef MS_WINDOWS - { - size_t len = strlen(msg); - WCHAR* buffer; - size_t i; - - /* Convert the message to wchar_t. This uses a simple one-to-one - conversion, assuming that the this error message actually uses ASCII - only. If this ceases to be true, we will have to convert. 
 */
-	buffer = alloca( (len+1) * (sizeof *buffer));
-	for( i=0; i<=len; ++i)
-		buffer[i] = msg[i];
-	OutputDebugStringW(L"Fatal Python error: ");
-	OutputDebugStringW(buffer);
-	OutputDebugStringW(L"\n");
-	}
+        /* Convert the message to wchar_t. This uses a simple one-to-one
+           conversion, assuming that this error message actually uses ASCII
+           only. If this ceases to be true, we will have to convert. */
+        buffer = alloca( (len+1) * (sizeof *buffer));
+        for( i=0; i<=len; ++i)
+            buffer[i] = msg[i];
+        OutputDebugStringW(L"Fatal Python error: ");
+        OutputDebugStringW(buffer);
+        OutputDebugStringW(L"\n");
+    }
 #ifdef _DEBUG
-	DebugBreak();
+    DebugBreak();
 #endif
 #endif /* MS_WINDOWS */
-	abort();
+    abort();
 }
diff --git a/pypy/module/cpyext/src/structseq.c b/pypy/module/cpyext/src/structseq.c
--- a/pypy/module/cpyext/src/structseq.c
+++ b/pypy/module/cpyext/src/structseq.c
@@ -14,14 +14,14 @@
 char *PyStructSequence_UnnamedField = "unnamed field";
 
 #define VISIBLE_SIZE(op) Py_SIZE(op)
-#define VISIBLE_SIZE_TP(tp) PyInt_AsLong( \
+#define VISIBLE_SIZE_TP(tp) PyLong_AsLong( \
                             PyDict_GetItemString((tp)->tp_dict, visible_length_key))
 
-#define REAL_SIZE_TP(tp) PyInt_AsLong( \
+#define REAL_SIZE_TP(tp) PyLong_AsLong( \
                          PyDict_GetItemString((tp)->tp_dict, real_length_key))
 #define REAL_SIZE(op) REAL_SIZE_TP(Py_TYPE(op))
 
-#define UNNAMED_FIELDS_TP(tp) PyInt_AsLong( \
+#define UNNAMED_FIELDS_TP(tp) PyLong_AsLong( \
                               PyDict_GetItemString((tp)->tp_dict, unnamed_fields_key))
 #define UNNAMED_FIELDS(op) UNNAMED_FIELDS_TP(Py_TYPE(op))
 
@@ -30,113 +30,42 @@
 PyStructSequence_New(PyTypeObject *type)
 {
     PyStructSequence *obj;
+    Py_ssize_t size = REAL_SIZE_TP(type), i;
 
-    obj = PyObject_New(PyStructSequence, type);
+    obj = PyObject_GC_NewVar(PyStructSequence, type, size);
     if (obj == NULL)
         return NULL;
+    /* Hack the size of the variable object, so invisible fields don't appear
+       to Python code.
*/ Py_SIZE(obj) = VISIBLE_SIZE_TP(type); + for (i = 0; i < size; i++) + obj->ob_item[i] = NULL; - return (PyObject*) obj; + return (PyObject*)obj; +} + +void +PyStructSequence_SetItem(PyObject* op, Py_ssize_t i, PyObject* v) +{ + PyStructSequence_SET_ITEM(op, i, v); +} + +PyObject* +PyStructSequence_GetItem(PyObject* op, Py_ssize_t i) +{ + return PyStructSequence_GET_ITEM(op, i); } static void structseq_dealloc(PyStructSequence *obj) { Py_ssize_t i, size; - + size = REAL_SIZE(obj); for (i = 0; i < size; ++i) { Py_XDECREF(obj->ob_item[i]); } - PyObject_Del(obj); -} - -static Py_ssize_t -structseq_length(PyStructSequence *obj) -{ - return VISIBLE_SIZE(obj); -} - -static PyObject* -structseq_item(PyStructSequence *obj, Py_ssize_t i) -{ - if (i < 0 || i >= VISIBLE_SIZE(obj)) { - PyErr_SetString(PyExc_IndexError, "tuple index out of range"); - return NULL; - } - Py_INCREF(obj->ob_item[i]); - return obj->ob_item[i]; -} - -static PyObject* -structseq_slice(PyStructSequence *obj, Py_ssize_t low, Py_ssize_t high) -{ - PyTupleObject *np; - Py_ssize_t i; - - if (low < 0) - low = 0; - if (high > VISIBLE_SIZE(obj)) - high = VISIBLE_SIZE(obj); - if (high < low) - high = low; - np = (PyTupleObject *)PyTuple_New(high-low); - if (np == NULL) - return NULL; - for(i = low; i < high; ++i) { - PyObject *v = obj->ob_item[i]; - Py_INCREF(v); - PyTuple_SET_ITEM(np, i-low, v); - } - return (PyObject *) np; -} - -static PyObject * -structseq_subscript(PyStructSequence *self, PyObject *item) -{ - if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i == -1 && PyErr_Occurred()) - return NULL; - - if (i < 0) - i += VISIBLE_SIZE(self); - - if (i < 0 || i >= VISIBLE_SIZE(self)) { - PyErr_SetString(PyExc_IndexError, - "tuple index out of range"); - return NULL; - } - Py_INCREF(self->ob_item[i]); - return self->ob_item[i]; - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelen, cur, i; - PyObject *result; - - if 
(PySlice_GetIndicesEx((PySliceObject *)item, - VISIBLE_SIZE(self), &start, &stop, - &step, &slicelen) < 0) { - return NULL; - } - if (slicelen <= 0) - return PyTuple_New(0); - result = PyTuple_New(slicelen); - if (result == NULL) - return NULL; - for (cur = start, i = 0; i < slicelen; - cur += step, i++) { - PyObject *v = self->ob_item[cur]; - Py_INCREF(v); - PyTuple_SET_ITEM(result, i, v); - } - return result; - } - else { - PyErr_SetString(PyExc_TypeError, - "structseq index must be integer"); - return NULL; - } + PyObject_GC_Del(obj); } static PyObject * @@ -223,11 +152,6 @@ return (PyObject*) res; } -static PyObject * -make_tuple(PyStructSequence *obj) -{ - return structseq_slice(obj, 0, VISIBLE_SIZE(obj)); -} static PyObject * structseq_repr(PyStructSequence *obj) @@ -236,7 +160,6 @@ #define REPR_BUFFER_SIZE 512 #define TYPE_MAXSIZE 100 - PyObject *tup; PyTypeObject *typ = Py_TYPE(obj); int i, removelast = 0; Py_ssize_t len; @@ -246,10 +169,6 @@ /* pointer to end of writeable buffer; safes space for "...)\0" */ endofbuf= &buf[REPR_BUFFER_SIZE-5]; - if ((tup = make_tuple(obj)) == NULL) { - return NULL; - } - /* "typename(", limited to TYPE_MAXSIZE */ len = strlen(typ->tp_name) > TYPE_MAXSIZE ? 
TYPE_MAXSIZE : strlen(typ->tp_name); @@ -262,19 +181,17 @@ char *cname, *crepr; cname = typ->tp_members[i].name; - - val = PyTuple_GetItem(tup, i); - if (cname == NULL || val == NULL) { + if (cname == NULL) { + PyErr_Format(PyExc_SystemError, "In structseq_repr(), member %d name is NULL" + " for type %.500s", i, typ->tp_name); return NULL; } + val = PyStructSequence_GET_ITEM(obj, i); repr = PyObject_Repr(val); - if (repr == NULL) { - Py_DECREF(tup); + if (repr == NULL) return NULL; - } - crepr = PyString_AsString(repr); + crepr = _PyUnicode_AsString(repr); if (crepr == NULL) { - Py_DECREF(tup); Py_DECREF(repr); return NULL; } @@ -300,7 +217,6 @@ break; } } - Py_DECREF(tup); if (removelast) { /* overwrite last ", " */ pbuf-=2; @@ -308,63 +224,7 @@ *pbuf++ = ')'; *pbuf = '\0'; - return PyString_FromString(buf); -} - -static PyObject * -structseq_concat(PyStructSequence *obj, PyObject *b) -{ - PyObject *tup, *result; - tup = make_tuple(obj); - result = PySequence_Concat(tup, b); - Py_DECREF(tup); - return result; -} - -static PyObject * -structseq_repeat(PyStructSequence *obj, Py_ssize_t n) -{ - PyObject *tup, *result; - tup = make_tuple(obj); - result = PySequence_Repeat(tup, n); - Py_DECREF(tup); - return result; -} - -static int -structseq_contains(PyStructSequence *obj, PyObject *o) -{ - PyObject *tup; - int result; - tup = make_tuple(obj); - if (!tup) - return -1; - result = PySequence_Contains(tup, o); - Py_DECREF(tup); - return result; -} - -static long -structseq_hash(PyObject *obj) -{ - PyObject *tup; - long result; - tup = make_tuple((PyStructSequence*) obj); - if (!tup) - return -1; - result = PyObject_Hash(tup); - Py_DECREF(tup); - return result; -} - -static PyObject * -structseq_richcompare(PyObject *obj, PyObject *o2, int op) -{ - PyObject *tup, *result; - tup = make_tuple((PyStructSequence*) obj); - result = PyObject_RichCompare(tup, o2, op); - Py_DECREF(tup); - return result; + return PyUnicode_FromString(buf); } static PyObject * @@ -409,53 +269,36 
@@ return result; } -static PySequenceMethods structseq_as_sequence = { - (lenfunc)structseq_length, - (binaryfunc)structseq_concat, /* sq_concat */ - (ssizeargfunc)structseq_repeat, /* sq_repeat */ - (ssizeargfunc)structseq_item, /* sq_item */ - (ssizessizeargfunc)structseq_slice, /* sq_slice */ - 0, /* sq_ass_item */ - 0, /* sq_ass_slice */ - (objobjproc)structseq_contains, /* sq_contains */ -}; - -static PyMappingMethods structseq_as_mapping = { - (lenfunc)structseq_length, - (binaryfunc)structseq_subscript, -}; - static PyMethodDef structseq_methods[] = { - {"__reduce__", (PyCFunction)structseq_reduce, - METH_NOARGS, NULL}, + {"__reduce__", (PyCFunction)structseq_reduce, METH_NOARGS, NULL}, {NULL, NULL} }; static PyTypeObject _struct_sequence_template = { PyVarObject_HEAD_INIT(&PyType_Type, 0) NULL, /* tp_name */ - 0, /* tp_basicsize */ - 0, /* tp_itemsize */ + sizeof(PyStructSequence) - sizeof(PyObject *), /* tp_basicsize */ + sizeof(PyObject *), /* tp_itemsize */ (destructor)structseq_dealloc, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ - 0, /* tp_compare */ + 0, /* tp_reserved */ (reprfunc)structseq_repr, /* tp_repr */ 0, /* tp_as_number */ - &structseq_as_sequence, /* tp_as_sequence */ - &structseq_as_mapping, /* tp_as_mapping */ - structseq_hash, /* tp_hash */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT, /* tp_flags */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ NULL, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ - structseq_richcompare, /* tp_richcompare */ + 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ @@ -482,7 +325,7 @@ #ifdef Py_TRACE_REFS /* if the type object was chained, unchain it first before overwriting its storage */ - if (type->_ob_next) { + if (type->ob_base.ob_base._ob_next) { _Py_ForgetReference((PyObject*)type); } #endif @@ 
-494,11 +337,9 @@ n_members = i; memcpy(type, &_struct_sequence_template, sizeof(PyTypeObject)); + type->tp_base = &PyTuple_Type; type->tp_name = desc->name; type->tp_doc = desc->doc; - type->tp_basicsize = sizeof(PyStructSequence)+ - sizeof(PyObject*)*(n_members-1); - type->tp_itemsize = 0; members = PyMem_NEW(PyMemberDef, n_members-n_unnamed_members+1); if (members == NULL) @@ -526,7 +367,7 @@ dict = type->tp_dict; #define SET_DICT_FROM_INT(key, value) \ do { \ - PyObject *v = PyInt_FromLong((long) value); \ + PyObject *v = PyLong_FromLong((long) value); \ if (v != NULL) { \ PyDict_SetItemString(dict, key, v); \ Py_DECREF(v); \ @@ -537,3 +378,11 @@ SET_DICT_FROM_INT(real_length_key, n_members); SET_DICT_FROM_INT(unnamed_fields_key, n_unnamed_members); } + +PyTypeObject* +PyStructSequence_NewType(PyStructSequence_Desc *desc) +{ + PyTypeObject *result = (PyTypeObject*)PyType_GenericAlloc(&PyType_Type, 0); + PyStructSequence_InitType(result, desc); + return result; +} diff --git a/pypy/module/cpyext/src/sysmodule.c b/pypy/module/cpyext/src/sysmodule.c --- a/pypy/module/cpyext/src/sysmodule.c +++ b/pypy/module/cpyext/src/sysmodule.c @@ -68,7 +68,7 @@ PyErr_Fetch(&error_type, &error_value, &error_traceback); file = PySys_GetObject(name); - written = vsnprintf(buffer, sizeof(buffer), format, va); + written = PyOS_vsnprintf(buffer, sizeof(buffer), format, va); if (sys_pyfile_write(buffer, file) != 0) { PyErr_Clear(); fputs(buffer, fp); @@ -100,4 +100,3 @@ sys_write("stderr", stderr, format, va); va_end(va); } - diff --git a/pypy/module/cpyext/src/unicodeobject.c b/pypy/module/cpyext/src/unicodeobject.c --- a/pypy/module/cpyext/src/unicodeobject.c +++ b/pypy/module/cpyext/src/unicodeobject.c @@ -504,6 +504,8 @@ return NULL; } +#undef appendstring + PyObject * PyUnicode_FromFormat(const char *format, ...) 
 {
diff --git a/pypy/module/cpyext/src/varargwrapper.c b/pypy/module/cpyext/src/varargwrapper.c
--- a/pypy/module/cpyext/src/varargwrapper.c
+++ b/pypy/module/cpyext/src/varargwrapper.c
@@ -1,21 +1,25 @@
 #include <Python.h>
 #include <stdarg.h>
 
-PyObject * PyTuple_Pack(Py_ssize_t size, ...)
+PyObject *
+PyTuple_Pack(Py_ssize_t n, ...)
 {
-    va_list ap;
-    PyObject *cur, *tuple;
-    int i;
+    Py_ssize_t i;
+    PyObject *o;
+    PyObject *result;
+    va_list vargs;
 
-    tuple = PyTuple_New(size);
-    va_start(ap, size);
-    for (i = 0; i < size; i++) {
-        cur = va_arg(ap, PyObject*);
-        Py_INCREF(cur);
-        if (PyTuple_SetItem(tuple, i, cur) < 0)
+    va_start(vargs, n);
+    result = PyTuple_New(n);
+    if (result == NULL)
+        return NULL;
+    for (i = 0; i < n; i++) {
+        o = va_arg(vargs, PyObject *);
+        Py_INCREF(o);
+        if (PyTuple_SetItem(result, i, o) < 0)
             return NULL;
     }
-    va_end(ap);
-    return tuple;
+    va_end(vargs);
+    return result;
 }
diff --git a/pypy/module/cpyext/stringobject.py b/pypy/module/cpyext/stringobject.py
--- a/pypy/module/cpyext/stringobject.py
+++ b/pypy/module/cpyext/stringobject.py
@@ -288,6 +288,26 @@
         w_errors = space.wrap(rffi.charp2str(errors))
     return space.call_method(w_str, 'encode', w_encoding, w_errors)
 
+@cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject)
+def PyString_AsDecodedObject(space, w_str, encoding, errors):
+    """Decode a string object by passing it to the codec registered
+    for encoding and return the result as Python object. encoding and
+    errors have the same meaning as the parameters of the same name in
+    the string encode() method. The codec to be used is looked up
+    using the Python codec registry. Return NULL if an exception was
+    raised by the codec.
+
+    This function is not available in 3.x and does not have a PyBytes alias."""
+    if not PyString_Check(space, w_str):
+        PyErr_BadArgument(space)
+
+    w_encoding = w_errors = space.w_None
+    if encoding:
+        w_encoding = space.wrap(rffi.charp2str(encoding))
+    if errors:
+        w_errors = space.wrap(rffi.charp2str(errors))
+    return space.call_method(w_str, "decode", w_encoding, w_errors)
+
 @cpython_api([PyObject, PyObject], PyObject)
 def _PyString_Join(space, w_sep, w_seq):
     return space.call_method(w_sep, 'join', w_seq)
diff --git a/pypy/module/cpyext/structmember.py b/pypy/module/cpyext/structmember.py
--- a/pypy/module/cpyext/structmember.py
+++ b/pypy/module/cpyext/structmember.py
@@ -10,7 +10,7 @@
     PyString_FromString, PyString_FromStringAndSize)
 from pypy.module.cpyext.floatobject import PyFloat_AsDouble
 from pypy.module.cpyext.longobject import (
-    PyLong_AsLongLong, PyLong_AsUnsignedLongLong)
+    PyLong_AsLongLong, PyLong_AsUnsignedLongLong, PyLong_AsSsize_t)
 from pypy.module.cpyext.typeobjectdefs import PyMemberDef
 from pypy.rlib.unroll import unrolling_iterable
 
@@ -28,6 +28,7 @@
     (T_DOUBLE, rffi.DOUBLE, PyFloat_AsDouble),
     (T_LONGLONG, rffi.LONGLONG, PyLong_AsLongLong),
     (T_ULONGLONG, rffi.ULONGLONG, PyLong_AsUnsignedLongLong),
+    (T_PYSSIZET, rffi.SSIZE_T, PyLong_AsSsize_t),
     ])
@@ -54,7 +55,7 @@
     if member_type == T_STRING:
         result = rffi.cast(rffi.CCHARPP, addr)
         if result[0]:
-            w_result = PyString_FromString(space, result[0])
+            w_result = PyUnicode_FromString(space, result[0])
         else:
             w_result = space.w_None
     elif member_type == T_STRING_INPLACE:
diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py
--- a/pypy/module/cpyext/structmemberdefs.py
+++ b/pypy/module/cpyext/structmemberdefs.py
@@ -18,6 +18,7 @@
 T_OBJECT_EX = 16
 T_LONGLONG = 17
 T_ULONGLONG = 18
+T_PYSSIZET = 19
 
 READONLY = RO = 1
 READ_RESTRICTED = 2
diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py
--- a/pypy/module/cpyext/stubs.py
+++
b/pypy/module/cpyext/stubs.py @@ -1405,24 +1405,6 @@ """ raise NotImplementedError - at cpython_api([PyObject, Py_ssize_t, Py_ssize_t], PyObject) -def PyList_GetSlice(space, list, low, high): - """Return a list of the objects in list containing the objects between low - and high. Return NULL and set an exception if unsuccessful. Analogous - to list[low:high]. Negative indices, as when slicing from Python, are not - supported. - - This function used an int for low and high. This might - require changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([Py_ssize_t], PyObject) -def PyLong_FromSsize_t(space, v): - """Return a new PyLongObject object from a C Py_ssize_t, or - NULL on failure. - """ - raise NotImplementedError - @cpython_api([rffi.SIZE_T], PyObject) def PyLong_FromSize_t(space, v): """Return a new PyLongObject object from a C size_t, or @@ -1442,14 +1424,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([PyObject], Py_ssize_t, error=-1) -def PyLong_AsSsize_t(space, pylong): - """Return a C Py_ssize_t representation of the contents of pylong. If - pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised - and -1 will be returned. - """ - raise NotImplementedError - @cpython_api([PyObject, rffi.CCHARP], rffi.INT_real, error=-1) def PyMapping_DelItemString(space, o, key): """Remove the mapping for object key from the object o. Return -1 on @@ -1588,15 +1562,6 @@ for PyObject_Str().""" raise NotImplementedError - at cpython_api([PyObject], lltype.Signed, error=-1) -def PyObject_HashNotImplemented(space, o): - """Set a TypeError indicating that type(o) is not hashable and return -1. - This function receives special treatment when stored in a tp_hash slot, - allowing a type to explicitly indicate to the interpreter that it is not - hashable. 
- """ - raise NotImplementedError - @cpython_api([], PyFrameObject) def PyEval_GetFrame(space): """Return the current thread state's frame, which is NULL if no frame is @@ -1719,17 +1684,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject) -def PyString_AsDecodedObject(space, str, encoding, errors): - """Decode a string object by passing it to the codec registered for encoding and - return the result as Python object. encoding and errors have the same - meaning as the parameters of the same name in the string encode() method. - The codec to be used is looked up using the Python codec registry. Return NULL - if an exception was raised by the codec. - - This function is not available in 3.x and does not have a PyBytes alias.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_Encode(space, s, size, encoding, errors): """Encode the char buffer of the given size by passing it to the codec @@ -1993,35 +1947,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) -def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): - """Decode length bytes from a UTF-32 encoded buffer string and return the - corresponding Unicode object. errors (if non-NULL) defines the error - handling. It defaults to "strict". - - If byteorder is non-NULL, the decoder starts decoding using the given byte - order: - - *byteorder == -1: little endian - *byteorder == 0: native order - *byteorder == 1: big endian - - If *byteorder is zero, and the first four bytes of the input data are a - byte order mark (BOM), the decoder switches to this byte order and the BOM is - not copied into the resulting Unicode string. If *byteorder is -1 or - 1, any byte order mark is copied to the output. 
- - After completion, *byteorder is set to the current byte order at the end - of input data. - - In a narrow build codepoints outside the BMP will be decoded as surrogate pairs. - - If byteorder is NULL, the codec starts in native order mode. - - Return NULL if an exception was raised by the codec. - """ - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP, Py_ssize_t], PyObject) def PyUnicode_DecodeUTF32Stateful(space, s, size, errors, byteorder, consumed): """If consumed is NULL, behave like PyUnicode_DecodeUTF32(). If diff --git a/pypy/module/cpyext/test/_sre.c b/pypy/module/cpyext/test/_sre.c --- a/pypy/module/cpyext/test/_sre.c +++ b/pypy/module/cpyext/test/_sre.c @@ -58,12 +58,8 @@ /* defining this one enables tracing */ #undef VERBOSE -#if PY_VERSION_HEX >= 0x01060000 -#if PY_VERSION_HEX < 0x02020000 || defined(Py_USING_UNICODE) /* defining this enables unicode support (default under 1.6a1 and later) */ #define HAVE_UNICODE -#endif -#endif /* -------------------------------------------------------------------- */ /* optional features */ @@ -71,9 +67,6 @@ /* enables fast searching */ #define USE_FAST_SEARCH -/* enables aggressive inlining (always on for Visual C) */ -#undef USE_INLINE - /* enables copy/deepcopy handling (work in progress) */ #undef USE_BUILTIN_COPY @@ -81,9 +74,6 @@ #define PyObject_DEL(op) PyMem_DEL((op)) #endif -#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) -#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) - /* -------------------------------------------------------------------- */ #if defined(_MSC_VER) @@ -1684,41 +1674,44 @@ Py_ssize_t size, bytes; int charsize; void* ptr; - -#if defined(HAVE_UNICODE) + Py_buffer view; + + /* Unicode objects do not support the buffer API. So, get the data + directly instead. 
*/ if (PyUnicode_Check(string)) { - /* unicode strings doesn't always support the buffer interface */ - ptr = (void*) PyUnicode_AS_DATA(string); - bytes = PyUnicode_GET_DATA_SIZE(string); - size = PyUnicode_GET_SIZE(string); - charsize = sizeof(Py_UNICODE); - - } else { -#endif + ptr = (void *)PyUnicode_AS_DATA(string); + *p_length = PyUnicode_GET_SIZE(string); + *p_charsize = sizeof(Py_UNICODE); + return ptr; + } /* get pointer to string buffer */ + view.len = -1; buffer = Py_TYPE(string)->tp_as_buffer; - if (!buffer || !buffer->bf_getreadbuffer || !buffer->bf_getsegcount || - buffer->bf_getsegcount(string, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, "expected string or buffer"); - return NULL; + if (!buffer || !buffer->bf_getbuffer || + (*buffer->bf_getbuffer)(string, &view, PyBUF_SIMPLE) < 0) { + PyErr_SetString(PyExc_TypeError, "expected string or buffer"); + return NULL; } /* determine buffer size */ - bytes = buffer->bf_getreadbuffer(string, 0, &ptr); + bytes = view.len; + ptr = view.buf; + + /* Release the buffer immediately --- possibly dangerous + but doing something else would require some re-factoring + */ + PyBuffer_Release(&view); + if (bytes < 0) { PyErr_SetString(PyExc_TypeError, "buffer has negative size"); return NULL; } /* determine character size */ -#if PY_VERSION_HEX >= 0x01060000 size = PyObject_Size(string); -#else - size = PyObject_Length(string); -#endif - - if (PyString_Check(string) || bytes == size) + + if (PyBytes_Check(string) || bytes == size) charsize = 1; #if defined(HAVE_UNICODE) else if (bytes == (Py_ssize_t) (size * sizeof(Py_UNICODE))) @@ -1729,13 +1722,13 @@ return NULL; } -#if defined(HAVE_UNICODE) - } -#endif - *p_length = size; *p_charsize = charsize; + if (ptr == NULL) { + PyErr_SetString(PyExc_ValueError, + "Buffer is NULL"); + } return ptr; } @@ -1758,6 +1751,17 @@ if (!ptr) return NULL; + if (charsize == 1 && pattern->charsize > 1) { + PyErr_SetString(PyExc_TypeError, + "can't use a string pattern on a bytes-like 
object"); + return NULL; + } + if (charsize > 1 && pattern->charsize == 1) { + PyErr_SetString(PyExc_TypeError, + "can't use a bytes pattern on a string-like object"); + return NULL; + } + /* adjust boundaries */ if (start < 0) start = 0; @@ -1952,7 +1956,7 @@ if (!args) return NULL; - name = PyString_FromString(module); + name = PyUnicode_FromString(module); if (!name) return NULL; mod = PyImport_Import(name); @@ -2601,48 +2605,24 @@ {NULL, NULL} }; -static PyObject* -pattern_getattr(PatternObject* self, char* name) -{ - PyObject* res; - - res = Py_FindMethod(pattern_methods, (PyObject*) self, name); - - if (res) - return res; - - PyErr_Clear(); - - /* attributes */ - if (!strcmp(name, "pattern")) { - Py_INCREF(self->pattern); - return self->pattern; - } - - if (!strcmp(name, "flags")) - return Py_BuildValue("i", self->flags); - - if (!strcmp(name, "groups")) - return Py_BuildValue("i", self->groups); - - if (!strcmp(name, "groupindex") && self->groupindex) { - Py_INCREF(self->groupindex); - return self->groupindex; - } - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; -} - -statichere PyTypeObject Pattern_Type = { - PyObject_HEAD_INIT(NULL) - 0, "_" SRE_MODULE ".SRE_Pattern", +#define PAT_OFF(x) offsetof(PatternObject, x) +static PyMemberDef pattern_members[] = { + {"pattern", T_OBJECT, PAT_OFF(pattern), READONLY}, + {"flags", T_INT, PAT_OFF(flags), READONLY}, + {"groups", T_PYSSIZET, PAT_OFF(groups), READONLY}, + {"groupindex", T_OBJECT, PAT_OFF(groupindex), READONLY}, + {NULL} /* Sentinel */ +}; + +static PyTypeObject Pattern_Type = { + PyVarObject_HEAD_INIT(NULL, 0) + "_" SRE_MODULE ".SRE_Pattern", sizeof(PatternObject), sizeof(SRE_CODE), - (destructor)pattern_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)pattern_getattr, /*tp_getattr*/ + (destructor)pattern_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ 0, /* tp_setattr */ - 0, /* tp_compare */ + 0, /* tp_reserved */ 0, /* tp_repr */ 0, /* tp_as_number */ 0, /* 
tp_as_sequence */ @@ -2653,12 +2633,16 @@ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ - Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ pattern_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(PatternObject, weakreflist), /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + pattern_methods, /* tp_methods */ + pattern_members, /* tp_members */ }; static int _validate(PatternObject *self); /* Forward */ @@ -2696,8 +2680,7 @@ for (i = 0; i < n; i++) { PyObject *o = PyList_GET_ITEM(code, i); - unsigned long value = PyInt_Check(o) ? (unsigned long)PyInt_AsLong(o) - : PyLong_AsUnsignedLong(o); + unsigned long value = PyLong_AsUnsignedLong(o); self->code[i] = (SRE_CODE) value; if ((unsigned long) self->code[i] != value) { PyErr_SetString(PyExc_OverflowError, @@ -2711,6 +2694,16 @@ return NULL; } + if (pattern == Py_None) + self->charsize = -1; + else { + Py_ssize_t p_length; + if (!getstring(pattern, &p_length, &self->charsize)) { + Py_DECREF(self); + return NULL; + } + } + Py_INCREF(pattern); self->pattern = pattern; @@ -3244,16 +3237,20 @@ { Py_ssize_t i; - if (PyInt_Check(index)) - return PyInt_AsSsize_t(index); + if (index == NULL) + /* Default value */ + return 0; + + if (PyLong_Check(index)) + return PyLong_AsSsize_t(index); i = -1; if (self->pattern->groupindex) { index = PyObject_GetItem(self->pattern->groupindex, index); if (index) { - if (PyInt_Check(index) || PyLong_Check(index)) - i = PyInt_AsSsize_t(index); + if (PyLong_Check(index)) + i = PyLong_AsSsize_t(index); Py_DECREF(index); } else PyErr_Clear(); @@ -3394,7 +3391,7 @@ { Py_ssize_t index; - PyObject* index_ = Py_False; /* zero */ + PyObject* index_ = NULL; if (!PyArg_UnpackTuple(args, "start", 0, 1, &index_)) return NULL; @@ -3417,7 +3414,7 @@ { Py_ssize_t index; - PyObject* index_ = Py_False; /* zero */ + PyObject* index_ = NULL; if (!PyArg_UnpackTuple(args, "end", 0, 1, &index_)) return NULL; @@ 
-3445,12 +3442,12 @@ if (!pair) return NULL; - item = PyInt_FromSsize_t(i1); + item = PyLong_FromSsize_t(i1); if (!item) goto error; PyTuple_SET_ITEM(pair, 0, item); - item = PyInt_FromSsize_t(i2); + item = PyLong_FromSsize_t(i2); if (!item) goto error; PyTuple_SET_ITEM(pair, 1, item); @@ -3467,7 +3464,7 @@ { Py_ssize_t index; - PyObject* index_ = Py_False; /* zero */ + PyObject* index_ = NULL; if (!PyArg_UnpackTuple(args, "span", 0, 1, &index_)) return NULL; @@ -3578,80 +3575,89 @@ {NULL, NULL} }; -static PyObject* -match_getattr(MatchObject* self, char* name) +static PyObject * +match_lastindex_get(MatchObject *self) { - PyObject* res; - - res = Py_FindMethod(match_methods, (PyObject*) self, name); - if (res) - return res; - - PyErr_Clear(); - - if (!strcmp(name, "lastindex")) { - if (self->lastindex >= 0) - return Py_BuildValue("i", self->lastindex); - Py_INCREF(Py_None); - return Py_None; + if (self->lastindex >= 0) + return Py_BuildValue("i", self->lastindex); + Py_INCREF(Py_None); + return Py_None; +} + +static PyObject * +match_lastgroup_get(MatchObject *self) +{ + if (self->pattern->indexgroup && self->lastindex >= 0) { + PyObject* result = PySequence_GetItem( + self->pattern->indexgroup, self->lastindex + ); + if (result) + return result; + PyErr_Clear(); } - - if (!strcmp(name, "lastgroup")) { - if (self->pattern->indexgroup && self->lastindex >= 0) { - PyObject* result = PySequence_GetItem( - self->pattern->indexgroup, self->lastindex - ); - if (result) - return result; - PyErr_Clear(); - } - Py_INCREF(Py_None); - return Py_None; - } - - if (!strcmp(name, "string")) { - if (self->string) { - Py_INCREF(self->string); - return self->string; - } else { - Py_INCREF(Py_None); - return Py_None; - } - } - - if (!strcmp(name, "regs")) { - if (self->regs) { - Py_INCREF(self->regs); - return self->regs; - } else - return match_regs(self); - } - - if (!strcmp(name, "re")) { - Py_INCREF(self->pattern); - return (PyObject*) self->pattern; - } - - if (!strcmp(name, 
"pos")) - return Py_BuildValue("i", self->pos); - - if (!strcmp(name, "endpos")) - return Py_BuildValue("i", self->endpos); - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; + Py_INCREF(Py_None); + return Py_None; } +static PyObject * +match_regs_get(MatchObject *self) +{ + if (self->regs) { + Py_INCREF(self->regs); + return self->regs; + } else + return match_regs(self); +} + +static PyGetSetDef match_getset[] = { + {"lastindex", (getter)match_lastindex_get, (setter)NULL}, + {"lastgroup", (getter)match_lastgroup_get, (setter)NULL}, + {"regs", (getter)match_regs_get, (setter)NULL}, + {NULL} +}; + +#define MATCH_OFF(x) offsetof(MatchObject, x) +static PyMemberDef match_members[] = { + {"string", T_OBJECT, MATCH_OFF(string), READONLY}, + {"re", T_OBJECT, MATCH_OFF(pattern), READONLY}, + {"pos", T_PYSSIZET, MATCH_OFF(pos), READONLY}, + {"endpos", T_PYSSIZET, MATCH_OFF(endpos), READONLY}, + {NULL} +}; + /* FIXME: implement setattr("string", None) as a special case (to detach the associated string, if any */ -statichere PyTypeObject Match_Type = { - PyObject_HEAD_INIT(NULL) - 0, "_" SRE_MODULE ".SRE_Match", +static PyTypeObject Match_Type = { + PyVarObject_HEAD_INIT(NULL,0) + "_" SRE_MODULE ".SRE_Match", sizeof(MatchObject), sizeof(Py_ssize_t), - (destructor)match_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)match_getattr /*tp_getattr*/ + (destructor)match_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_reserved */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + match_methods, /* tp_methods */ + match_members, /* tp_members */ + 
match_getset, /* tp_getset */ }; static PyObject* @@ -3800,34 +3806,42 @@ {NULL, NULL} }; -static PyObject* -scanner_getattr(ScannerObject* self, char* name) -{ - PyObject* res; - - res = Py_FindMethod(scanner_methods, (PyObject*) self, name); - if (res) - return res; - - PyErr_Clear(); - - /* attributes */ - if (!strcmp(name, "pattern")) { - Py_INCREF(self->pattern); - return self->pattern; - } - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; -} - -statichere PyTypeObject Scanner_Type = { - PyObject_HEAD_INIT(NULL) - 0, "_" SRE_MODULE ".SRE_Scanner", +#define SCAN_OFF(x) offsetof(ScannerObject, x) +static PyMemberDef scanner_members[] = { + {"pattern", T_OBJECT, SCAN_OFF(pattern), READONLY}, + {NULL} /* Sentinel */ +}; + +static PyTypeObject Scanner_Type = { + PyVarObject_HEAD_INIT(NULL, 0) + "_" SRE_MODULE ".SRE_Scanner", sizeof(ScannerObject), 0, - (destructor)scanner_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)scanner_getattr, /*tp_getattr*/ + (destructor)scanner_dealloc,/* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_reserved */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + scanner_methods, /* tp_methods */ + scanner_members, /* tp_members */ + 0, /* tp_getset */ }; static PyObject* @@ -3868,38 +3882,35 @@ {NULL, NULL} }; -#if PY_VERSION_HEX < 0x02030000 -DL_EXPORT(void) init_sre(void) -#else PyMODINIT_FUNC init_sre(void) -#endif { PyObject* m; PyObject* d; PyObject* x; /* Patch object types */ - Pattern_Type.ob_type = Match_Type.ob_type = - Scanner_Type.ob_type = &PyType_Type; + if (PyType_Ready(&Pattern_Type) || 
PyType_Ready(&Match_Type) || + PyType_Ready(&Scanner_Type)) + return; m = Py_InitModule("_" SRE_MODULE, _functions); if (m == NULL) return; d = PyModule_GetDict(m); - x = PyInt_FromLong(SRE_MAGIC); + x = PyLong_FromLong(SRE_MAGIC); if (x) { PyDict_SetItemString(d, "MAGIC", x); Py_DECREF(x); } - x = PyInt_FromLong(sizeof(SRE_CODE)); + x = PyLong_FromLong(sizeof(SRE_CODE)); if (x) { PyDict_SetItemString(d, "CODESIZE", x); Py_DECREF(x); } - x = PyString_FromString(copyright); + x = PyUnicode_FromString(copyright); if (x) { PyDict_SetItemString(d, "copyright", x); Py_DECREF(x); diff --git a/pypy/module/cpyext/test/array.c b/pypy/module/cpyext/test/array.c --- a/pypy/module/cpyext/test/array.c +++ b/pypy/module/cpyext/test/array.c @@ -11,13 +11,10 @@ #include #else /* !STDC_HEADERS */ #ifdef HAVE_SYS_TYPES_H -#include /* For size_t */ +#include /* For size_t */ #endif /* HAVE_SYS_TYPES_H */ #endif /* !STDC_HEADERS */ -#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) -#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) - struct arrayobject; /* Forward */ /* All possible arraydescr values are defined in the vector "descriptors" @@ -25,18 +22,22 @@ * functions aren't visible yet. 
*/ struct arraydescr { - int typecode; - int itemsize; - PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); - int (*setitem)(struct arrayobject *, Py_ssize_t, PyObject *); + Py_UNICODE typecode; + int itemsize; + PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); + int (*setitem)(struct arrayobject *, Py_ssize_t, PyObject *); + char *formats; + int is_integer_type; + int is_signed; }; typedef struct arrayobject { - PyObject_VAR_HEAD - char *ob_item; - Py_ssize_t allocated; - struct arraydescr *ob_descr; - PyObject *weakreflist; /* List of weak references */ + PyObject_VAR_HEAD + char *ob_item; + Py_ssize_t allocated; + struct arraydescr *ob_descr; + PyObject *weakreflist; /* List of weak references */ + int ob_exports; /* Number of exported buffers */ } arrayobject; static PyTypeObject Arraytype; @@ -47,49 +48,63 @@ static int array_resize(arrayobject *self, Py_ssize_t newsize) { - char *items; - size_t _new_size; + char *items; + size_t _new_size; - /* Bypass realloc() when a previous overallocation is large enough - to accommodate the newsize. If the newsize is 16 smaller than the - current size, then proceed with the realloc() to shrink the list. - */ + if (self->ob_exports > 0 && newsize != Py_SIZE(self)) { + PyErr_SetString(PyExc_BufferError, + "cannot resize an array that is exporting buffers"); + return -1; + } - if (self->allocated >= newsize && - Py_SIZE(self) < newsize + 16 && - self->ob_item != NULL) { - Py_SIZE(self) = newsize; - return 0; - } + /* Bypass realloc() when a previous overallocation is large enough + to accommodate the newsize. If the newsize is 16 smaller than the + current size, then proceed with the realloc() to shrink the array. + */ - /* This over-allocates proportional to the array size, making room - * for additional growth. The over-allocation is mild, but is - * enough to give linear-time amortized behavior over a long - * sequence of appends() in the presence of a poorly-performing - * system realloc(). 
- * The growth pattern is: 0, 4, 8, 16, 25, 34, 46, 56, 67, 79, ... - * Note, the pattern starts out the same as for lists but then - * grows at a smaller rate so that larger arrays only overallocate - * by about 1/16th -- this is done because arrays are presumed to be more - * memory critical. - */ + if (self->allocated >= newsize && + Py_SIZE(self) < newsize + 16 && + self->ob_item != NULL) { + Py_SIZE(self) = newsize; + return 0; + } - _new_size = (newsize >> 4) + (Py_SIZE(self) < 8 ? 3 : 7) + newsize; - items = self->ob_item; - /* XXX The following multiplication and division does not optimize away - like it does for lists since the size is not known at compile time */ - if (_new_size <= ((~(size_t)0) / self->ob_descr->itemsize)) - PyMem_RESIZE(items, char, (_new_size * self->ob_descr->itemsize)); - else - items = NULL; - if (items == NULL) { - PyErr_NoMemory(); - return -1; - } - self->ob_item = items; - Py_SIZE(self) = newsize; - self->allocated = _new_size; - return 0; + if (newsize == 0) { + PyMem_FREE(self->ob_item); + self->ob_item = NULL; + Py_SIZE(self) = 0; + self->allocated = 0; + return 0; + } + + /* This over-allocates proportional to the array size, making room + * for additional growth. The over-allocation is mild, but is + * enough to give linear-time amortized behavior over a long + * sequence of appends() in the presence of a poorly-performing + * system realloc(). + * The growth pattern is: 0, 4, 8, 16, 25, 34, 46, 56, 67, 79, ... + * Note, the pattern starts out the same as for lists but then + * grows at a smaller rate so that larger arrays only overallocate + * by about 1/16th -- this is done because arrays are presumed to be more + * memory critical. + */ + + _new_size = (newsize >> 4) + (Py_SIZE(self) < 8 ? 
3 : 7) + newsize; + items = self->ob_item; + /* XXX The following multiplication and division does not optimize away + like it does for lists since the size is not known at compile time */ + if (_new_size <= ((~(size_t)0) / self->ob_descr->itemsize)) + PyMem_RESIZE(items, char, (_new_size * self->ob_descr->itemsize)); + else + items = NULL; + if (items == NULL) { + PyErr_NoMemory(); + return -1; + } + self->ob_item = items; + Py_SIZE(self) = newsize; + self->allocated = _new_size; + return 0; } /**************************************************************************** @@ -105,310 +120,295 @@ ****************************************************************************/ static PyObject * -c_getitem(arrayobject *ap, Py_ssize_t i) -{ - return PyString_FromStringAndSize(&((char *)ap->ob_item)[i], 1); -} - -static int -c_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) -{ - char x; - if (!PyArg_Parse(v, "c;array item must be char", &x)) - return -1; - if (i >= 0) - ((char *)ap->ob_item)[i] = x; - return 0; -} - -static PyObject * b_getitem(arrayobject *ap, Py_ssize_t i) { - long x = ((char *)ap->ob_item)[i]; - if (x >= 128) - x -= 256; - return PyInt_FromLong(x); + long x = ((char *)ap->ob_item)[i]; + if (x >= 128) + x -= 256; + return PyLong_FromLong(x); } static int b_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - short x; - /* PyArg_Parse's 'b' formatter is for an unsigned char, therefore - must use the next size up that is signed ('h') and manually do - the overflow checking */ - if (!PyArg_Parse(v, "h;array item must be integer", &x)) - return -1; - else if (x < -128) { - PyErr_SetString(PyExc_OverflowError, - "signed char is less than minimum"); - return -1; - } - else if (x > 127) { - PyErr_SetString(PyExc_OverflowError, - "signed char is greater than maximum"); - return -1; - } - if (i >= 0) - ((char *)ap->ob_item)[i] = (char)x; - return 0; + short x; + /* PyArg_Parse's 'b' formatter is for an unsigned char, therefore + must use the next size up 
that is signed ('h') and manually do + the overflow checking */ + if (!PyArg_Parse(v, "h;array item must be integer", &x)) + return -1; + else if (x < -128) { + PyErr_SetString(PyExc_OverflowError, + "signed char is less than minimum"); + return -1; + } + else if (x > 127) { + PyErr_SetString(PyExc_OverflowError, + "signed char is greater than maximum"); + return -1; + } + if (i >= 0) + ((char *)ap->ob_item)[i] = (char)x; + return 0; } static PyObject * BB_getitem(arrayobject *ap, Py_ssize_t i) { - long x = ((unsigned char *)ap->ob_item)[i]; - return PyInt_FromLong(x); + long x = ((unsigned char *)ap->ob_item)[i]; + return PyLong_FromLong(x); } static int BB_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned char x; - /* 'B' == unsigned char, maps to PyArg_Parse's 'b' formatter */ - if (!PyArg_Parse(v, "b;array item must be integer", &x)) - return -1; - if (i >= 0) - ((char *)ap->ob_item)[i] = x; - return 0; + unsigned char x; + /* 'B' == unsigned char, maps to PyArg_Parse's 'b' formatter */ + if (!PyArg_Parse(v, "b;array item must be integer", &x)) + return -1; + if (i >= 0) + ((char *)ap->ob_item)[i] = x; + return 0; } -#ifdef Py_USING_UNICODE static PyObject * u_getitem(arrayobject *ap, Py_ssize_t i) { - return PyUnicode_FromUnicode(&((Py_UNICODE *) ap->ob_item)[i], 1); + return PyUnicode_FromUnicode(&((Py_UNICODE *) ap->ob_item)[i], 1); } static int u_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - Py_UNICODE *p; - Py_ssize_t len; + Py_UNICODE *p; + Py_ssize_t len; - if (!PyArg_Parse(v, "u#;array item must be unicode character", &p, &len)) - return -1; - if (len != 1) { - PyErr_SetString(PyExc_TypeError, - "array item must be unicode character"); - return -1; - } - if (i >= 0) - ((Py_UNICODE *)ap->ob_item)[i] = p[0]; - return 0; + if (!PyArg_Parse(v, "u#;array item must be unicode character", &p, &len)) + return -1; + if (len != 1) { + PyErr_SetString(PyExc_TypeError, + "array item must be unicode character"); + return -1; + } + if (i >= 
0) + ((Py_UNICODE *)ap->ob_item)[i] = p[0]; + return 0; } -#endif + static PyObject * h_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((short *)ap->ob_item)[i]); + return PyLong_FromLong((long) ((short *)ap->ob_item)[i]); } + static int h_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - short x; - /* 'h' == signed short, maps to PyArg_Parse's 'h' formatter */ - if (!PyArg_Parse(v, "h;array item must be integer", &x)) - return -1; - if (i >= 0) - ((short *)ap->ob_item)[i] = x; - return 0; + short x; + /* 'h' == signed short, maps to PyArg_Parse's 'h' formatter */ + if (!PyArg_Parse(v, "h;array item must be integer", &x)) + return -1; + if (i >= 0) + ((short *)ap->ob_item)[i] = x; + return 0; } static PyObject * HH_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((unsigned short *)ap->ob_item)[i]); + return PyLong_FromLong((long) ((unsigned short *)ap->ob_item)[i]); } static int HH_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - int x; - /* PyArg_Parse's 'h' formatter is for a signed short, therefore - must use the next size up and manually do the overflow checking */ - if (!PyArg_Parse(v, "i;array item must be integer", &x)) - return -1; - else if (x < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned short is less than minimum"); - return -1; - } - else if (x > USHRT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned short is greater than maximum"); - return -1; - } - if (i >= 0) - ((short *)ap->ob_item)[i] = (short)x; - return 0; + int x; + /* PyArg_Parse's 'h' formatter is for a signed short, therefore + must use the next size up and manually do the overflow checking */ + if (!PyArg_Parse(v, "i;array item must be integer", &x)) + return -1; + else if (x < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned short is less than minimum"); + return -1; + } + else if (x > USHRT_MAX) { + PyErr_SetString(PyExc_OverflowError, + "unsigned short is greater than maximum"); + return -1; + } + 
if (i >= 0) + ((short *)ap->ob_item)[i] = (short)x; + return 0; } static PyObject * i_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((int *)ap->ob_item)[i]); + return PyLong_FromLong((long) ((int *)ap->ob_item)[i]); } static int i_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - int x; - /* 'i' == signed int, maps to PyArg_Parse's 'i' formatter */ - if (!PyArg_Parse(v, "i;array item must be integer", &x)) - return -1; - if (i >= 0) - ((int *)ap->ob_item)[i] = x; - return 0; + int x; + /* 'i' == signed int, maps to PyArg_Parse's 'i' formatter */ + if (!PyArg_Parse(v, "i;array item must be integer", &x)) + return -1; + if (i >= 0) + ((int *)ap->ob_item)[i] = x; + return 0; } static PyObject * II_getitem(arrayobject *ap, Py_ssize_t i) { - return PyLong_FromUnsignedLong( - (unsigned long) ((unsigned int *)ap->ob_item)[i]); + return PyLong_FromUnsignedLong( + (unsigned long) ((unsigned int *)ap->ob_item)[i]); } static int II_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned long x; - if (PyLong_Check(v)) { - x = PyLong_AsUnsignedLong(v); - if (x == (unsigned long) -1 && PyErr_Occurred()) - return -1; - } - else { - long y; - if (!PyArg_Parse(v, "l;array item must be integer", &y)) - return -1; - if (y < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned int is less than minimum"); - return -1; - } - x = (unsigned long)y; + unsigned long x; + if (PyLong_Check(v)) { + x = PyLong_AsUnsignedLong(v); + if (x == (unsigned long) -1 && PyErr_Occurred()) + return -1; + } + else { + long y; + if (!PyArg_Parse(v, "l;array item must be integer", &y)) + return -1; + if (y < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned int is less than minimum"); + return -1; + } + x = (unsigned long)y; - } - if (x > UINT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned int is greater than maximum"); - return -1; - } + } + if (x > UINT_MAX) { + PyErr_SetString(PyExc_OverflowError, + "unsigned int is greater than maximum"); + 
return -1; + } - if (i >= 0) - ((unsigned int *)ap->ob_item)[i] = (unsigned int)x; - return 0; + if (i >= 0) + ((unsigned int *)ap->ob_item)[i] = (unsigned int)x; + return 0; } static PyObject * l_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong(((long *)ap->ob_item)[i]); From noreply at buildbot.pypy.org Wed May 2 16:36:58 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Wed, 2 May 2012 16:36:58 +0200 (CEST) Subject: [pypy-commit] pypy stdlib-unification/py3k: merge from stdlib-unification Message-ID: <20120502143658.F013182F50@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: stdlib-unification/py3k Changeset: r54873:a9fa4e709310 Date: 2012-05-02 16:35 +0200 http://bitbucket.org/pypy/pypy/changeset/a9fa4e709310/ Log: merge from stdlib-unification diff --git a/lib-python/3.2/sysconfig.py b/lib-python/3.2/sysconfig.py --- a/lib-python/3.2/sysconfig.py +++ b/lib-python/3.2/sysconfig.py @@ -623,3 +623,147 @@ if __name__ == '__main__': _main() + + # Convert the OS name to lowercase, remove '/' characters + # (to accommodate BSD/OS), and translate spaces (for "Power Macintosh") + osname = osname.lower().replace('/', '') + machine = machine.replace(' ', '_') + machine = machine.replace('/', '-') + + if osname[:5] == "linux": + # At least on Linux/Intel, 'machine' is the processor -- + # i386, etc. + # XXX what about Alpha, SPARC, etc? + return "%s-%s" % (osname, machine) + elif osname[:5] == "sunos": + if release[0] >= "5": # SunOS 5 == Solaris 2 + osname = "solaris" + release = "%d.%s" % (int(release[0]) - 3, release[2:]) + # We can't use "platform.architecture()[0]" because a + # bootstrap problem. We use a dict to get an error + # if some suspicious happens. + bitness = {2147483647:"32bit", 9223372036854775807:"64bit"} + machine += ".%s" % bitness[sys.maxsize] + # fall through to standard osname-release-machine representation + elif osname[:4] == "irix": # could be "irix64"! 
+ return "%s-%s" % (osname, release) + elif osname[:3] == "aix": + return "%s-%s.%s" % (osname, version, release) + elif osname[:6] == "cygwin": + osname = "cygwin" + rel_re = re.compile (r'[\d.]+') + m = rel_re.match(release) + if m: + release = m.group() + elif osname[:6] == "darwin": + # + # For our purposes, we'll assume that the system version from + # distutils' perspective is what MACOSX_DEPLOYMENT_TARGET is set + # to. This makes the compatibility story a bit more sane because the + # machine is going to compile and link as if it were + # MACOSX_DEPLOYMENT_TARGET. + # + cfgvars = get_config_vars() + macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET') + + if 1: + # Always calculate the release of the running machine, + # needed to determine if we can build fat binaries or not. + + macrelease = macver + # Get the system version. Reading this plist is a documented + # way to get the system version (see the documentation for + # the Gestalt Manager) + try: + f = open('/System/Library/CoreServices/SystemVersion.plist') + except IOError: + # We're on a plain darwin box, fall back to the default + # behaviour. + pass + else: + try: + m = re.search( + r'ProductUserVisibleVersion\s*' + + r'(.*?)', f.read()) + if m is not None: + macrelease = '.'.join(m.group(1).split('.')[:2]) + # else: fall back to the default behaviour + finally: + f.close() + + if not macver: + macver = macrelease + + if macver: + release = macver + osname = "macosx" + + if (macrelease + '.') >= '10.4.' and \ + '-arch' in get_config_vars().get('CFLAGS', '').strip(): + # The universal build will build fat binaries, but not on + # systems before 10.4 + # + # Try to detect 4-way universal builds, those have machine-type + # 'universal' instead of 'fat'. 
+ + machine = 'fat' + cflags = get_config_vars().get('CFLAGS') + + archs = re.findall('-arch\s+(\S+)', cflags) + archs = tuple(sorted(set(archs))) + + if len(archs) == 1: + machine = archs[0] + elif archs == ('i386', 'ppc'): + machine = 'fat' + elif archs == ('i386', 'x86_64'): + machine = 'intel' + elif archs == ('i386', 'ppc', 'x86_64'): + machine = 'fat3' + elif archs == ('ppc64', 'x86_64'): + machine = 'fat64' + elif archs == ('i386', 'ppc', 'ppc64', 'x86_64'): + machine = 'universal' + else: + raise ValueError( + "Don't know machine value for archs=%r"%(archs,)) + + elif machine == 'i386': + # On OSX the machine type returned by uname is always the + # 32-bit variant, even if the executable architecture is + # the 64-bit variant + if sys.maxsize >= 2**32: + machine = 'x86_64' + + elif machine in ('PowerPC', 'Power_Macintosh'): + # Pick a sane name for the PPC architecture. + # See 'i386' case + if sys.maxsize >= 2**32: + machine = 'ppc64' + else: + machine = 'ppc' + + return "%s-%s-%s" % (osname, release, machine) + + +def get_python_version(): + return _PY_VERSION_SHORT + +def _print_dict(title, data): + for index, (key, value) in enumerate(sorted(data.items())): + if index == 0: + print('{0}: '.format(title)) + print('\t{0} = "{1}"'.format(key, value)) + +def _main(): + """Display all information sysconfig detains.""" + print('Platform: "{0}"'.format(get_platform())) + print('Python version: "{0}"'.format(get_python_version())) + print('Current installation scheme: "{0}"'.format(_get_default_scheme())) + print('') + _print_dict('Paths', get_paths()) + print('') + _print_dict('Variables', get_config_vars()) + +if __name__ == '__main__': + _main() From noreply at buildbot.pypy.org Wed May 2 17:07:15 2012 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 2 May 2012 17:07:15 +0200 (CEST) Subject: [pypy-commit] pypy win32-cleanup2: merge from default Message-ID: <20120502150715.CFA3182F50@wyvern.cs.uni-duesseldorf.de> Author: mattip Branch: win32-cleanup2 
Changeset: r54874:f8b3c6c64ce9 Date: 2012-05-02 16:59 +0300 http://bitbucket.org/pypy/pypy/changeset/f8b3c6c64ce9/ Log: merge from default diff too long, truncating to 10000 out of 12873 lines diff --git a/pypy/doc/cppyy.rst b/pypy/doc/cppyy.rst --- a/pypy/doc/cppyy.rst +++ b/pypy/doc/cppyy.rst @@ -21,6 +21,26 @@ .. _`llvm`: http://llvm.org/ +Motivation +========== + +The cppyy module offers two unique features, which result in great +performance as well as better functionality and cross-language integration +than would otherwise be possible. +First, cppyy is written in RPython and therefore open to optimizations by the +JIT up until the actual point of call into C++. +This means that there are no conversions necessary between a garbage collected +and a reference counted environment, as is needed for the use of existing +extension modules written or generated for CPython. +It also means that if variables are already unboxed by the JIT, they can be +passed through directly to C++. +Second, Reflex (and cling far more so) adds dynamic features to C++, thus +greatly reducing impedance mismatches between the two languages. +In fact, Reflex is dynamic enough that you could write the runtime bindings +generation in python (as opposed to RPython) and this is used to create very +natural "pythonizations" of the bound code. + + Installation ============ @@ -195,10 +215,12 @@ >>>> d = cppyy.gbl.BaseFactory("name", 42, 3.14) >>>> type(d) - >>>> d.m_i - 42 - >>>> d.m_d - 3.14 + >>>> isinstance(d, cppyy.gbl.Base1) + True + >>>> isinstance(d, cppyy.gbl.Base2) + True + >>>> d.m_i, d.m_d + (42, 3.14) >>>> d.m_name == "name" True >>>> @@ -295,6 +317,9 @@ To select a specific virtual method, do like with normal python classes that override methods: select it from the class that you need, rather than calling the method on the instance. 
+ To select a specific overload, use the __dispatch__ special function, which + takes the name of the desired method and its signature (which can be + obtained from the doc string) as arguments. * **namespaces**: Are represented as python classes. Namespaces are more open-ended than classes, so sometimes initial access may diff --git a/pypy/doc/extending.rst b/pypy/doc/extending.rst --- a/pypy/doc/extending.rst +++ b/pypy/doc/extending.rst @@ -116,13 +116,21 @@ Reflex ====== -This method is only experimental for now, and is being exercised on a branch, -`reflex-support`_, so you will have to build PyPy yourself. +This method is still experimental and is being exercised on a branch, +`reflex-support`_, which adds the `cppyy`_ module. The method works by using the `Reflex package`_ to provide reflection information of the C++ code, which is then used to automatically generate -bindings at runtime, which can then be used from python. +bindings at runtime. +From a python standpoint, there is no difference between generating bindings +at runtime, or having them "statically" generated and available in scripts +or compiled into extension modules: python classes and functions are always +runtime structures, created when a script or module loads. +However, if the backend itself is capable of dynamic behavior, it is a much +better functional match to python, allowing tighter integration and more +natural language mappings. Full details are `available here`_. +.. _`cppyy`: cppyy.html .. _`reflex-support`: cppyy.html .. _`Reflex package`: http://root.cern.ch/drupal/content/reflex .. _`available here`: cppyy.html @@ -130,16 +138,33 @@ Pros ---- -If it works, it is mostly automatic, and hence easy in use. -The bindings can make use of direct pointers, in which case the calls are -very fast. 
+The cppyy module is written in RPython, which makes it possible to keep the
+code execution visible to the JIT all the way to the actual point of call into
+C++, thus allowing for a very fast interface.
+Reflex is currently in use in large software environments in High Energy
+Physics (HEP), across many different projects and packages, and its use can be
+virtually completely automated in a production environment.
+One of its uses in HEP is in providing language bindings for CPython.
+Thus, it is possible to use Reflex to have bound code work on both CPython and
+on PyPy.
+In the medium-term, Reflex will be replaced by `cling`_, which is based on
+`llvm`_.
+This will affect the backend only; the python-side interface is expected to
+remain the same, except that cling adds a lot of dynamic behavior to C++,
+enabling further language integration.
+
+.. _`cling`: http://root.cern.ch/drupal/content/cling
+.. _`llvm`: http://llvm.org/

 Cons
 ----

-C++ is a large language, and these bindings are not yet feature-complete.
-Although missing features should do no harm if you don't use them, if you do
-need a particular feature, it may be necessary to work around it in python
-or with a C++ helper function.
+C++ is a large language, and cppyy is not yet feature-complete.
+Still, the experience gained in developing the equivalent bindings for CPython
+means that adding missing features is a simple matter of engineering, not a
+question of research.
+The module is written so that currently missing features should do no harm if
+you don't use them, if you do need a particular feature, it may be necessary
+to work around it in python or with a C++ helper function.

 Although Reflex works on various platforms, the bindings with PyPy have
 only been tested on Linux.
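The documentation hunks above rest on the observation that python classes are always runtime structures, so bindings generated at runtime are indistinguishable from "statically" written ones. That point can be demonstrated in a few lines of plain Python using the three-argument form of `type()` (all names here are illustrative, not from cppyy):

```python
# A class written "statically" in source...
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

# ...and an equivalent class generated at runtime, roughly the way a
# bindings generator would build one from reflection information:
def _init(self, x, y):
    self.x, self.y = x, y

RuntimePoint = type("RuntimePoint", (object,), {"__init__": _init})

# Both are ordinary type objects, created when this module loads:
print(type(Point) is type(RuntimePoint))  # True

p = RuntimePoint(1, 2)
print((p.x, p.y))  # (1, 2)
```

Nothing about `RuntimePoint` reveals that it was assembled dynamically, which is why a runtime bindings generator can be a first-class citizen in python.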
diff --git a/pypy/jit/backend/llsupport/test/test_asmmemmgr.py b/pypy/jit/backend/llsupport/test/test_asmmemmgr.py
--- a/pypy/jit/backend/llsupport/test/test_asmmemmgr.py
+++ b/pypy/jit/backend/llsupport/test/test_asmmemmgr.py
@@ -217,7 +217,8 @@
         encoded = ''.join(writtencode).encode('hex').upper()
         ataddr = '@%x' % addr
         assert log == [('test-logname-section',
-            [('debug_print', 'CODE_DUMP', ataddr, '+0 ', encoded)])]
+            [('debug_print', 'SYS_EXECUTABLE', '??'),
+             ('debug_print', 'CODE_DUMP', ataddr, '+0 ', encoded)])]

         lltype.free(p, flavor='raw')

diff --git a/pypy/jit/metainterp/jitexc.py b/pypy/jit/metainterp/jitexc.py
--- a/pypy/jit/metainterp/jitexc.py
+++ b/pypy/jit/metainterp/jitexc.py
@@ -12,7 +12,6 @@
     """
     _go_through_llinterp_uncaught_ = True     # ugh

-
 def _get_standard_error(rtyper, Class):
     exdata = rtyper.getexceptiondata()
     clsdef = rtyper.annotator.bookkeeper.getuniqueclassdef(Class)

diff --git a/pypy/jit/metainterp/optimize.py b/pypy/jit/metainterp/optimize.py
--- a/pypy/jit/metainterp/optimize.py
+++ b/pypy/jit/metainterp/optimize.py
@@ -5,3 +5,9 @@
    """Raised when the optimize*.py detect that the loop that
    we are trying to build cannot possibly make sense as a
    long-running loop (e.g.
    it cannot run 2 complete iterations)."""
+
+    def __init__(self, msg='?'):
+        debug_start("jit-abort")
+        debug_print(msg)
+        debug_stop("jit-abort")
+        self.msg = msg

diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py
--- a/pypy/jit/metainterp/optimizeopt/__init__.py
+++ b/pypy/jit/metainterp/optimizeopt/__init__.py
@@ -49,8 +49,9 @@
         optimizations.append(OptFfiCall())

     if ('rewrite' not in enable_opts or 'virtualize' not in enable_opts
-        or 'heap' not in enable_opts or 'unroll' not in enable_opts):
-        optimizations.append(OptSimplify())
+        or 'heap' not in enable_opts or 'unroll' not in enable_opts
+        or 'pure' not in enable_opts):
+        optimizations.append(OptSimplify(unroll))

     return optimizations, unroll

diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py
--- a/pypy/jit/metainterp/optimizeopt/heap.py
+++ b/pypy/jit/metainterp/optimizeopt/heap.py
@@ -257,8 +257,8 @@
               opnum == rop.COPYSTRCONTENT or       # no effect on GC struct/array
               opnum == rop.COPYUNICODECONTENT):    # no effect on GC struct/array
             return
-        assert opnum != rop.CALL_PURE
         if (opnum == rop.CALL or
+            opnum == rop.CALL_PURE or
             opnum == rop.CALL_MAY_FORCE or
             opnum == rop.CALL_RELEASE_GIL or
             opnum == rop.CALL_ASSEMBLER):
@@ -481,7 +481,7 @@
             # already between the tracing and now.  In this case, we are
             # simply ignoring the QUASIIMMUT_FIELD hint and compiling it
             # as a regular getfield.
-            if not qmutdescr.is_still_valid():
+            if not qmutdescr.is_still_valid_for(structvalue.get_key_box()):
                 self._remove_guard_not_invalidated = True
                 return
             # record as an out-of-line guard

diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py
--- a/pypy/jit/metainterp/optimizeopt/intbounds.py
+++ b/pypy/jit/metainterp/optimizeopt/intbounds.py
@@ -191,10 +191,13 @@
         # GUARD_OVERFLOW, then the loop is invalid.
        lastop = self.last_emitted_operation
        if lastop is None:
-            raise InvalidLoop
+            raise InvalidLoop('An INT_xxx_OVF was proven not to overflow but' +
+                              'guarded with GUARD_OVERFLOW')
        opnum = lastop.getopnum()
        if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF):
-            raise InvalidLoop
+            raise InvalidLoop('An INT_xxx_OVF was proven not to overflow but' +
+                              'guarded with GUARD_OVERFLOW')
+        self.emit_operation(op)

    def optimize_INT_ADD_OVF(self, op):

diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py
--- a/pypy/jit/metainterp/optimizeopt/optimizer.py
+++ b/pypy/jit/metainterp/optimizeopt/optimizer.py
@@ -525,6 +525,7 @@

    @specialize.argtype(0)
    def _emit_operation(self, op):
+        assert op.getopnum() != rop.CALL_PURE
        for i in range(op.numargs()):
            arg = op.getarg(i)
            try:

diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py
--- a/pypy/jit/metainterp/optimizeopt/rewrite.py
+++ b/pypy/jit/metainterp/optimizeopt/rewrite.py
@@ -208,7 +208,8 @@
            box = value.box
            assert isinstance(box, Const)
            if not box.same_constant(constbox):
-                raise InvalidLoop
+                raise InvalidLoop('A GUARD_{VALUE,TRUE,FALSE} was proven to' +
+                                  'always fail')
            return
        if emit_operation:
            self.emit_operation(op)
@@ -220,7 +221,7 @@
        if value.is_null():
            return
        elif value.is_nonnull():
-            raise InvalidLoop
+            raise InvalidLoop('A GUARD_ISNULL was proven to always fail')
        self.emit_operation(op)
        value.make_constant(self.optimizer.cpu.ts.CONST_NULL)
@@ -229,7 +230,7 @@
        if value.is_nonnull():
            return
        elif value.is_null():
-            raise InvalidLoop
+            raise InvalidLoop('A GUARD_NONNULL was proven to always fail')
        self.emit_operation(op)
        value.make_nonnull(op)
@@ -278,7 +279,7 @@
        if realclassbox is not None:
            if realclassbox.same_constant(expectedclassbox):
                return
-            raise InvalidLoop
+            raise InvalidLoop('A GUARD_CLASS was proven to always fail')
        if value.last_guard:
            # there already has been a guard_nonnull or guard_class or
            # guard_nonnull_class on this value.
@@ -301,7 +302,8 @@
    def optimize_GUARD_NONNULL_CLASS(self, op):
        value = self.getvalue(op.getarg(0))
        if value.is_null():
-            raise InvalidLoop
+            raise InvalidLoop('A GUARD_NONNULL_CLASS was proven to always ' +
+                              'fail')
        self.optimize_GUARD_CLASS(op)

    def optimize_CALL_LOOPINVARIANT(self, op):

diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py
--- a/pypy/jit/metainterp/optimizeopt/simplify.py
+++ b/pypy/jit/metainterp/optimizeopt/simplify.py
@@ -4,8 +4,9 @@
 from pypy.jit.metainterp.history import TargetToken, JitCellToken

 class OptSimplify(Optimization):
-    def __init__(self):
+    def __init__(self, unroll):
         self.last_label_descr = None
+        self.unroll = unroll

     def optimize_CALL_PURE(self, op):
         args = op.getarglist()
@@ -35,24 +36,26 @@
         pass

     def optimize_LABEL(self, op):
-        descr = op.getdescr()
-        if isinstance(descr, JitCellToken):
-            return self.optimize_JUMP(op.copy_and_change(rop.JUMP))
-        self.last_label_descr = op.getdescr()
+        if not self.unroll:
+            descr = op.getdescr()
+            if isinstance(descr, JitCellToken):
+                return self.optimize_JUMP(op.copy_and_change(rop.JUMP))
+            self.last_label_descr = op.getdescr()
         self.emit_operation(op)

     def optimize_JUMP(self, op):
-        descr = op.getdescr()
-        assert isinstance(descr, JitCellToken)
-        if not descr.target_tokens:
-            assert self.last_label_descr is not None
-            target_token = self.last_label_descr
-            assert isinstance(target_token, TargetToken)
-            assert target_token.targeting_jitcell_token is descr
-            op.setdescr(self.last_label_descr)
-        else:
-            assert len(descr.target_tokens) == 1
-            op.setdescr(descr.target_tokens[0])
+        if not self.unroll:
+            descr = op.getdescr()
+            assert isinstance(descr, JitCellToken)
+            if not descr.target_tokens:
+                assert self.last_label_descr is not None
+                target_token = self.last_label_descr
+                assert isinstance(target_token, TargetToken)
+                assert target_token.targeting_jitcell_token is descr
+                op.setdescr(self.last_label_descr)
+
else: + assert len(descr.target_tokens) == 1 + op.setdescr(descr.target_tokens[0]) self.emit_operation(op) dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', diff --git a/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py @@ -0,0 +1,46 @@ +from pypy.jit.metainterp.optimizeopt.test.test_optimizeopt import OptimizeOptTest +from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin +from pypy.jit.metainterp.resoperation import rop + + +allopts = OptimizeOptTest.enable_opts.split(':') +for optnum in range(len(allopts)): + myopts = allopts[:] + del myopts[optnum] + + class TestLLtype(OptimizeOptTest, LLtypeMixin): + enable_opts = ':'.join(myopts) + + def optimize_loop(self, ops, expected, expected_preamble=None, + call_pure_results=None, expected_short=None): + loop = self.parse(ops) + if expected != "crash!": + expected = self.parse(expected) + if expected_preamble: + expected_preamble = self.parse(expected_preamble) + if expected_short: + expected_short = self.parse(expected_short) + + preamble = self.unroll_and_optimize(loop, call_pure_results) + + for op in preamble.operations + loop.operations: + assert op.getopnum() not in (rop.CALL_PURE, + rop.CALL_LOOPINVARIANT, + rop.VIRTUAL_REF_FINISH, + rop.VIRTUAL_REF, + rop.QUASIIMMUT_FIELD, + rop.MARK_OPAQUE_PTR, + rop.RECORD_KNOWN_CLASS) + + def raises(self, e, fn, *args): + try: + fn(*args) + except e: + pass + + opt = allopts[optnum] + exec "TestNo%sLLtype = TestLLtype" % (opt[0].upper() + opt[1:]) + +del TestLLtype # No need to run the last set twice +del TestNoUnrollLLtype # This case is take care of by test_optimizebasic + diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- 
a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -5090,7 +5090,6 @@ class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass - ##class TestOOtype(BaseTestOptimizeBasic, OOtypeMixin): ## def test_instanceof(self): diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -105,6 +105,9 @@ return loop + def raises(self, e, fn, *args): + py.test.raises(e, fn, *args) + class OptimizeOptTest(BaseTestWithUnroll): def setup_method(self, meth=None): @@ -2639,7 +2642,7 @@ p2 = new_with_vtable(ConstClass(node_vtable)) jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_2(self): @@ -2651,7 +2654,7 @@ escape(p2) # prevent it from staying Virtual jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_3(self): @@ -2665,7 +2668,7 @@ setfield_gc(p3, p4, descr=nextdescr) jump(p3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_merge_guard_class_guard_value(self): @@ -4411,7 +4414,7 @@ setfield_gc(p1, p3, descr=nextdescr) jump(p3) """ - py.test.raises(BogusPureField, self.optimize_loop, ops, "crash!") + self.raises(BogusPureField, self.optimize_loop, ops, "crash!") def test_dont_complains_different_field(self): ops = """ @@ -5024,7 +5027,7 @@ i2 = int_add(i0, 3) jump(i2) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_ne_const_not(self): ops = """ @@ -5074,7 +5077,7 @@ i3 = int_add(i0, 3) jump(i3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + 
self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_lshift(self): ops = """ @@ -6533,9 +6536,9 @@ def test_quasi_immut_2(self): ops = """ [] - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i1 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i1 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) escape(i1) jump() """ @@ -6585,13 +6588,13 @@ def test_call_may_force_invalidated_guards_reload(self): ops = """ [i0a, i0b] - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i1 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i1 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) call_may_force(i0b, descr=mayforcevirtdescr) - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i2 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i2 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) i3 = escape(i1) i4 = escape(i2) jump(i3, i4) @@ -7813,6 +7816,52 @@ """ self.optimize_loop(ops, expected) + def test_issue1080_infinitie_loop_virtual(self): + ops = """ + [p10] + p52 = getfield_gc(p10, descr=nextdescr) # inst_storage + p54 = getarrayitem_gc(p52, 0, descr=arraydescr) + p69 = getfield_gc_pure(p54, descr=otherdescr) # inst_w_function + + quasiimmut_field(p69, descr=quasiimmutdescr) + guard_not_invalidated() [] + p71 = getfield_gc(p69, descr=quasifielddescr) # inst_code + guard_value(p71, -4247) [] + + p106 = new_with_vtable(ConstClass(node_vtable)) + p108 = new_array(3, descr=arraydescr) + p110 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p110, ConstPtr(myptr2), descr=otherdescr) # inst_w_function + setarrayitem_gc(p108, 0, p110, descr=arraydescr) + setfield_gc(p106, p108, descr=nextdescr) # inst_storage + jump(p106) + """ + 
expected = """ + [] + p72 = getfield_gc(ConstPtr(myptr2), descr=quasifielddescr) + guard_value(p72, -4247) [] + jump() + """ + self.optimize_loop(ops, expected) + + + def test_issue1080_infinitie_loop_simple(self): + ops = """ + [p69] + quasiimmut_field(p69, descr=quasiimmutdescr) + guard_not_invalidated() [] + p71 = getfield_gc(p69, descr=quasifielddescr) # inst_code + guard_value(p71, -4247) [] + jump(ConstPtr(myptr)) + """ + expected = """ + [] + p72 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + guard_value(p72, -4247) [] + jump() + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -122,6 +122,7 @@ quasi.inst_field = -4247 quasifielddescr = cpu.fielddescrof(QUASI, 'inst_field') quasibox = BoxPtr(lltype.cast_opaque_ptr(llmemory.GCREF, quasi)) + quasiptr = quasibox.value quasiimmutdescr = QuasiImmutDescr(cpu, quasibox, quasifielddescr, cpu.fielddescrof(QUASI, 'mutate_field')) diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -315,7 +315,10 @@ try: jumpargs = virtual_state.make_inputargs(values, self.optimizer) except BadVirtualState: - raise InvalidLoop + raise InvalidLoop('The state of the optimizer at the end of ' + + 'peeled loop is inconsistent with the ' + + 'VirtualState at the begining of the peeled ' + + 'loop') jumpop.initarglist(jumpargs) # Inline the short preamble at the end of the loop @@ -325,7 +328,11 @@ for i in range(len(short_inputargs)): if short_inputargs[i] in args: if args[short_inputargs[i]] != jmp_to_short_args[i]: - raise InvalidLoop + raise InvalidLoop('The short preamble wants the ' + + 'same box passed to 
multiple of its ' + + 'inputargs, but the jump at the ' + + 'end of this bridge does not do that.') + args[short_inputargs[i]] = jmp_to_short_args[i] self.short_inliner = Inliner(short_inputargs, jmp_to_short_args) for op in self.short[1:]: @@ -378,7 +385,10 @@ #final_virtual_state.debug_print("Bad virtual state at end of loop, ", # bad) #debug_stop('jit-log-virtualstate') - raise InvalidLoop + raise InvalidLoop('The virtual state at the end of the peeled ' + + 'loop is not compatible with the virtual ' + + 'state at the start of the loop which makes ' + + 'it impossible to close the loop') #debug_stop('jit-log-virtualstate') @@ -526,8 +536,8 @@ args = jumpop.getarglist() modifier = VirtualStateAdder(self.optimizer) virtual_state = modifier.get_virtual_state(args) - #debug_start('jit-log-virtualstate') - #virtual_state.debug_print("Looking for ") + debug_start('jit-log-virtualstate') + virtual_state.debug_print("Looking for ") for target in cell_token.target_tokens: if not target.virtual_state: @@ -536,10 +546,10 @@ extra_guards = [] bad = {} - #debugmsg = 'Did not match ' + debugmsg = 'Did not match ' if target.virtual_state.generalization_of(virtual_state, bad): ok = True - #debugmsg = 'Matched ' + debugmsg = 'Matched ' else: try: cpu = self.optimizer.cpu @@ -548,13 +558,13 @@ extra_guards) ok = True - #debugmsg = 'Guarded to match ' + debugmsg = 'Guarded to match ' except InvalidLoop: pass - #target.virtual_state.debug_print(debugmsg, bad) + target.virtual_state.debug_print(debugmsg, bad) if ok: - #debug_stop('jit-log-virtualstate') + debug_stop('jit-log-virtualstate') values = [self.getvalue(arg) for arg in jumpop.getarglist()] @@ -581,7 +591,7 @@ jumpop.setdescr(cell_token.target_tokens[0]) self.optimizer.send_extra_operation(jumpop) return True - #debug_stop('jit-log-virtualstate') + debug_stop('jit-log-virtualstate') return False class ValueImporter(object): diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py 
b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -27,11 +27,15 @@ if self.generalization_of(other, renum, {}): return if renum[self.position] != other.position: - raise InvalidLoop + raise InvalidLoop('The numbering of the virtual states does not ' + + 'match. This means that two virtual fields ' + + 'have been set to the same Box in one of the ' + + 'virtual states but not in the other.') self._generate_guards(other, box, cpu, extra_guards) def _generate_guards(self, other, box, cpu, extra_guards): - raise InvalidLoop + raise InvalidLoop('Generating guards for making the VirtualStates ' + + 'at hand match have not been implemented') def enum_forced_boxes(self, boxes, value, optimizer): raise NotImplementedError @@ -346,10 +350,12 @@ def _generate_guards(self, other, box, cpu, extra_guards): if not isinstance(other, NotVirtualStateInfo): - raise InvalidLoop + raise InvalidLoop('The VirtualStates does not match as a ' + + 'virtual appears where a pointer is needed ' + + 'and it is too late to force it.') if self.lenbound or other.lenbound: - raise InvalidLoop + raise InvalidLoop('The array length bounds does not match.') if self.level == LEVEL_KNOWNCLASS and \ box.nonnull() and \ @@ -400,7 +406,8 @@ return # Remaining cases are probably not interesting - raise InvalidLoop + raise InvalidLoop('Generating guards for making the VirtualStates ' + + 'at hand match have not been implemented') if self.level == LEVEL_CONSTANT: import pdb; pdb.set_trace() raise NotImplementedError diff --git a/pypy/jit/metainterp/quasiimmut.py b/pypy/jit/metainterp/quasiimmut.py --- a/pypy/jit/metainterp/quasiimmut.py +++ b/pypy/jit/metainterp/quasiimmut.py @@ -120,8 +120,10 @@ self.fielddescr, self.structbox) return fieldbox.constbox() - def is_still_valid(self): + def is_still_valid_for(self, structconst): assert self.structbox is not None + if not 
self.structbox.constbox().same_constant(structconst): + return False cpu = self.cpu gcref = self.structbox.getref_base() qmut = get_current_qmut_instance(cpu, gcref, self.mutatefielddescr) diff --git a/pypy/jit/metainterp/test/test_quasiimmut.py b/pypy/jit/metainterp/test/test_quasiimmut.py --- a/pypy/jit/metainterp/test/test_quasiimmut.py +++ b/pypy/jit/metainterp/test/test_quasiimmut.py @@ -8,7 +8,7 @@ from pypy.jit.metainterp.quasiimmut import get_current_qmut_instance from pypy.jit.metainterp.test.support import LLJitMixin from pypy.jit.codewriter.policy import StopAtXPolicy -from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe +from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe, promote def test_get_current_qmut_instance(): @@ -506,6 +506,27 @@ "guard_not_invalidated": 2 }) + def test_issue1080(self): + myjitdriver = JitDriver(greens=[], reds=["n", "sa", "a"]) + class Foo(object): + _immutable_fields_ = ["x?"] + def __init__(self, x): + self.x = x + one, two = Foo(1), Foo(2) + def main(n): + sa = 0 + a = one + while n: + myjitdriver.jit_merge_point(n=n, sa=sa, a=a) + sa += a.x + if a.x == 1: + a = two + elif a.x == 2: + a = one + n -= 1 + return sa + res = self.meta_interp(main, [10]) + assert res == main(10) class TestLLtypeGreenFieldsTests(QuasiImmutTests, LLJitMixin): pass diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -936,12 +936,12 @@ source_dir / "pyerrors.c", source_dir / "modsupport.c", source_dir / "getargs.c", + source_dir / "abstract.c", source_dir / "stringobject.c", source_dir / "mysnprintf.c", source_dir / "pythonrun.c", source_dir / "sysmodule.c", source_dir / "bufferobject.c", - source_dir / "object.c", source_dir / "cobject.c", source_dir / "structseq.c", source_dir / "capsule.c", diff --git a/pypy/module/cpyext/complexobject.py b/pypy/module/cpyext/complexobject.py --- a/pypy/module/cpyext/complexobject.py +++ 
b/pypy/module/cpyext/complexobject.py @@ -33,6 +33,11 @@ # CPython also accepts anything return 0.0 + at cpython_api([Py_complex_ptr], PyObject) +def _PyComplex_FromCComplex(space, v): + """Create a new Python complex number object from a C Py_complex value.""" + return space.newcomplex(v.c_real, v.c_imag) + # lltype does not handle functions returning a structure. This implements a # helper function, which takes as argument a reference to the return value. @cpython_api([PyObject, Py_complex_ptr], lltype.Void) diff --git a/pypy/module/cpyext/floatobject.py b/pypy/module/cpyext/floatobject.py --- a/pypy/module/cpyext/floatobject.py +++ b/pypy/module/cpyext/floatobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.api import ( CANNOT_FAIL, cpython_api, PyObject, build_type_checkers, CONST_STRING) from pypy.interpreter.error import OperationError +from pypy.rlib.rstruct import runpack PyFloat_Check, PyFloat_CheckExact = build_type_checkers("Float") @@ -33,3 +34,19 @@ backward compatibility.""" return space.call_function(space.w_float, w_obj) + at cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0) +def _PyFloat_Unpack4(space, ptr, le): + input = rffi.charpsize2str(ptr, 4) + if rffi.cast(lltype.Signed, le): + return runpack.runpack("f", input) + + at cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0) +def _PyFloat_Unpack8(space, ptr, le): + input = rffi.charpsize2str(ptr, 8) + if rffi.cast(lltype.Signed, le): + return runpack.runpack("d", input) + diff --git a/pypy/module/cpyext/include/complexobject.h b/pypy/module/cpyext/include/complexobject.h --- a/pypy/module/cpyext/include/complexobject.h +++ b/pypy/module/cpyext/include/complexobject.h @@ -21,6 +21,8 @@ return result; } +#define PyComplex_FromCComplex(c) _PyComplex_FromCComplex(&c) + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/pyerrors.h b/pypy/module/cpyext/include/pyerrors.h --- a/pypy/module/cpyext/include/pyerrors.h +++ 
b/pypy/module/cpyext/include/pyerrors.h @@ -29,6 +29,10 @@ # define vsnprintf _vsnprintf #endif +#include +PyAPI_FUNC(int) PyOS_snprintf(char *str, size_t size, const char *format, ...); +PyAPI_FUNC(int) PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va); + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/stringobject.h b/pypy/module/cpyext/include/stringobject.h --- a/pypy/module/cpyext/include/stringobject.h +++ b/pypy/module/cpyext/include/stringobject.h @@ -7,8 +7,6 @@ extern "C" { #endif -int PyOS_snprintf(char *str, size_t size, const char *format, ...); - #define PyString_GET_SIZE(op) PyString_Size(op) #define PyString_AS_STRING(op) PyString_AsString(op) diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -40,7 +40,8 @@ when the value is NULL, instead of converting to None. */ #define T_LONGLONG 17 -#define T_ULONGLONG 18 +#define T_ULONGLONG 18 +#define T_PYSSIZET 19 /* Flags. These constants are also in structmemberdefs.py. 
*/ #define READONLY 1 diff --git a/pypy/module/cpyext/longobject.py b/pypy/module/cpyext/longobject.py --- a/pypy/module/cpyext/longobject.py +++ b/pypy/module/cpyext/longobject.py @@ -1,6 +1,7 @@ from pypy.rpython.lltypesystem import lltype, rffi -from pypy.module.cpyext.api import (cpython_api, PyObject, build_type_checkers, - CONST_STRING, ADDR, CANNOT_FAIL) +from pypy.module.cpyext.api import ( + cpython_api, PyObject, build_type_checkers, Py_ssize_t, + CONST_STRING, ADDR, CANNOT_FAIL) from pypy.objspace.std.longobject import W_LongObject from pypy.interpreter.error import OperationError from pypy.module.cpyext.intobject import PyInt_AsUnsignedLongMask @@ -15,6 +16,13 @@ """Return a new PyLongObject object from v, or NULL on failure.""" return space.newlong(val) + at cpython_api([Py_ssize_t], PyObject) +def PyLong_FromSsize_t(space, val): + """Return a new PyLongObject object from a C Py_ssize_t, or + NULL on failure. + """ + return space.newlong(val) + @cpython_api([rffi.LONGLONG], PyObject) def PyLong_FromLongLong(space, val): """Return a new PyLongObject object from a C long long, or NULL @@ -56,6 +64,14 @@ and -1 will be returned.""" return space.int_w(w_long) + at cpython_api([PyObject], Py_ssize_t, error=-1) +def PyLong_AsSsize_t(space, w_long): + """Return a C Py_ssize_t representation of the contents of pylong. If + pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised + and -1 will be returned. 
+ """ + return space.int_w(w_long) + @cpython_api([PyObject], rffi.LONGLONG, error=-1) def PyLong_AsLongLong(space, w_long): """ diff --git a/pypy/module/cpyext/src/abstract.c b/pypy/module/cpyext/src/abstract.c new file mode 100644 --- /dev/null +++ b/pypy/module/cpyext/src/abstract.c @@ -0,0 +1,269 @@ +/* Abstract Object Interface */ + +#include "Python.h" + +/* Shorthands to return certain errors */ + +static PyObject * +type_error(const char *msg, PyObject *obj) +{ + PyErr_Format(PyExc_TypeError, msg, obj->ob_type->tp_name); + return NULL; +} + +static PyObject * +null_error(void) +{ + if (!PyErr_Occurred()) + PyErr_SetString(PyExc_SystemError, + "null argument to internal routine"); + return NULL; +} + +/* Operations on any object */ + +int +PyObject_CheckReadBuffer(PyObject *obj) +{ + PyBufferProcs *pb = obj->ob_type->tp_as_buffer; + + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL || + (*pb->bf_getsegcount)(obj, NULL) != 1) + return 0; + return 1; +} + +int PyObject_AsReadBuffer(PyObject *obj, + const void **buffer, + Py_ssize_t *buffer_len) +{ + PyBufferProcs *pb; + void *pp; + Py_ssize_t len; + + if (obj == NULL || buffer == NULL || buffer_len == NULL) { + null_error(); + return -1; + } + pb = obj->ob_type->tp_as_buffer; + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL) { + PyErr_SetString(PyExc_TypeError, + "expected a readable buffer object"); + return -1; + } + if ((*pb->bf_getsegcount)(obj, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "expected a single-segment buffer object"); + return -1; + } + len = (*pb->bf_getreadbuffer)(obj, 0, &pp); + if (len < 0) + return -1; + *buffer = pp; + *buffer_len = len; + return 0; +} + +int PyObject_AsWriteBuffer(PyObject *obj, + void **buffer, + Py_ssize_t *buffer_len) +{ + PyBufferProcs *pb; + void*pp; + Py_ssize_t len; + + if (obj == NULL || buffer == NULL || buffer_len == NULL) { + null_error(); + return -1; + } + pb = 
obj->ob_type->tp_as_buffer;
+    if (pb == NULL ||
+        pb->bf_getwritebuffer == NULL ||
+        pb->bf_getsegcount == NULL) {
+        PyErr_SetString(PyExc_TypeError,
+                        "expected a writeable buffer object");
+        return -1;
+    }
+    if ((*pb->bf_getsegcount)(obj, NULL) != 1) {
+        PyErr_SetString(PyExc_TypeError,
+                        "expected a single-segment buffer object");
+        return -1;
+    }
+    len = (*pb->bf_getwritebuffer)(obj, 0, &pp);
+    if (len < 0)
+        return -1;
+    *buffer = pp;
+    *buffer_len = len;
+    return 0;
+}
+
+/* Operations on callable objects */
+
+static PyObject *
+call_function_tail(PyObject *callable, PyObject *args)
+{
+    PyObject *retval;
+
+    if (args == NULL)
+        return NULL;
+
+    if (!PyTuple_Check(args)) {
+        PyObject *a;
+
+        a = PyTuple_New(1);
+        if (a == NULL) {
+            Py_DECREF(args);
+            return NULL;
+        }
+        PyTuple_SET_ITEM(a, 0, args);
+        args = a;
+    }
+    retval = PyObject_Call(callable, args, NULL);
+
+    Py_DECREF(args);
+
+    return retval;
+}
+
+PyObject *
+PyObject_CallFunction(PyObject *callable, const char *format, ...)
+{
+    va_list va;
+    PyObject *args;
+
+    if (callable == NULL)
+        return null_error();
+
+    if (format && *format) {
+        va_start(va, format);
+        args = Py_VaBuildValue(format, va);
+        va_end(va);
+    }
+    else
+        args = PyTuple_New(0);
+
+    return call_function_tail(callable, args);
+}
+
+PyObject *
+PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...)
+{
+    va_list va;
+    PyObject *args;
+    PyObject *func = NULL;
+    PyObject *retval = NULL;
+
+    if (o == NULL || name == NULL)
+        return null_error();
+
+    func = PyObject_GetAttrString(o, name);
+    if (func == NULL) {
+        PyErr_SetString(PyExc_AttributeError, name);
+        return 0;
+    }
+
+    if (!PyCallable_Check(func)) {
+        type_error("attribute of type '%.200s' is not callable", func);
+        goto exit;
+    }
+
+    if (format && *format) {
+        va_start(va, format);
+        args = Py_VaBuildValue(format, va);
+        va_end(va);
+    }
+    else
+        args = PyTuple_New(0);
+
+    retval = call_function_tail(func, args);
+
+  exit:
+    /* args gets consumed in call_function_tail */
+    Py_XDECREF(func);
+
+    return retval;
+}
+
+static PyObject *
+objargs_mktuple(va_list va)
+{
+    int i, n = 0;
+    va_list countva;
+    PyObject *result, *tmp;
+
+#ifdef VA_LIST_IS_ARRAY
+    memcpy(countva, va, sizeof(va_list));
+#else
+#ifdef __va_copy
+    __va_copy(countva, va);
+#else
+    countva = va;
+#endif
+#endif
+
+    while (((PyObject *)va_arg(countva, PyObject *)) != NULL)
+        ++n;
+    result = PyTuple_New(n);
+    if (result != NULL && n > 0) {
+        for (i = 0; i < n; ++i) {
+            tmp = (PyObject *)va_arg(va, PyObject *);
+            Py_INCREF(tmp);
+            PyTuple_SET_ITEM(result, i, tmp);
+        }
+    }
+    return result;
+}
+
+PyObject *
+PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...)
+{
+    PyObject *args, *tmp;
+    va_list vargs;
+
+    if (callable == NULL || name == NULL)
+        return null_error();
+
+    callable = PyObject_GetAttr(callable, name);
+    if (callable == NULL)
+        return NULL;
+
+    /* count the args */
+    va_start(vargs, name);
+    args = objargs_mktuple(vargs);
+    va_end(vargs);
+    if (args == NULL) {
+        Py_DECREF(callable);
+        return NULL;
+    }
+    tmp = PyObject_Call(callable, args, NULL);
+    Py_DECREF(args);
+    Py_DECREF(callable);
+
+    return tmp;
+}
+
+PyObject *
+PyObject_CallFunctionObjArgs(PyObject *callable, ...)
+{
+    PyObject *args, *tmp;
+    va_list vargs;
+
+    if (callable == NULL)
+        return null_error();
+
+    /* count the args */
+    va_start(vargs, callable);
+    args = objargs_mktuple(vargs);
+    va_end(vargs);
+    if (args == NULL)
+        return NULL;
+    tmp = PyObject_Call(callable, args, NULL);
+    Py_DECREF(args);
+
+    return tmp;
+}
+
diff --git a/pypy/module/cpyext/src/bufferobject.c b/pypy/module/cpyext/src/bufferobject.c
--- a/pypy/module/cpyext/src/bufferobject.c
+++ b/pypy/module/cpyext/src/bufferobject.c
@@ -13,207 +13,207 @@
 static int
 get_buf(PyBufferObject *self, void **ptr, Py_ssize_t *size,
         enum buffer_t buffer_type)
 {
     if (self->b_base == NULL) {
         assert (ptr != NULL);
         *ptr = self->b_ptr;
         *size = self->b_size;
     }
     else {
         Py_ssize_t count, offset;
         readbufferproc proc = 0;
         PyBufferProcs *bp = self->b_base->ob_type->tp_as_buffer;
         if ((*bp->bf_getsegcount)(self->b_base, NULL) != 1) {
             PyErr_SetString(PyExc_TypeError,
                             "single-segment buffer object expected");
             return 0;
         }
         if ((buffer_type == READ_BUFFER) ||
             ((buffer_type == ANY_BUFFER) && self->b_readonly))
             proc = bp->bf_getreadbuffer;
         else if ((buffer_type == WRITE_BUFFER) ||
                  (buffer_type == ANY_BUFFER))
             proc = (readbufferproc)bp->bf_getwritebuffer;
         else if (buffer_type == CHAR_BUFFER) {
             if
(!PyType_HasFeature(self->ob_type,
                                 Py_TPFLAGS_HAVE_GETCHARBUFFER)) {
                 PyErr_SetString(PyExc_TypeError,
                                 "Py_TPFLAGS_HAVE_GETCHARBUFFER needed");
                 return 0;
             }
             proc = (readbufferproc)bp->bf_getcharbuffer;
         }
         if (!proc) {
             char *buffer_type_name;
             switch (buffer_type) {
             case READ_BUFFER:
                 buffer_type_name = "read";
                 break;
             case WRITE_BUFFER:
                 buffer_type_name = "write";
                 break;
             case CHAR_BUFFER:
                 buffer_type_name = "char";
                 break;
             default:
                 buffer_type_name = "no";
                 break;
             }
             PyErr_Format(PyExc_TypeError,
                          "%s buffer type not available",
                          buffer_type_name);
             return 0;
         }
         if ((count = (*proc)(self->b_base, 0, ptr)) < 0)
             return 0;
         /* apply constraints to the start/end */
         if (self->b_offset > count)
             offset = count;
         else
             offset = self->b_offset;
         *(char **)ptr = *(char **)ptr + offset;
         if (self->b_size == Py_END_OF_BUFFER)
             *size = count;
         else
             *size = self->b_size;
         if (offset + *size > count)
             *size = count - offset;
     }
     return 1;
 }

 static PyObject *
 buffer_from_memory(PyObject *base, Py_ssize_t size, Py_ssize_t offset, void *ptr,
                    int readonly)
 {
     PyBufferObject * b;

     if (size < 0 && size != Py_END_OF_BUFFER) {
         PyErr_SetString(PyExc_ValueError,
                         "size must be zero or positive");
         return NULL;
     }
     if (offset < 0) {
         PyErr_SetString(PyExc_ValueError,
                         "offset must be zero or positive");
         return NULL;
     }

     b = PyObject_NEW(PyBufferObject, &PyBuffer_Type);
     if ( b == NULL )
         return NULL;

     Py_XINCREF(base);
     b->b_base = base;
     b->b_ptr = ptr;
     b->b_size = size;
     b->b_offset = offset;
     b->b_readonly = readonly;
     b->b_hash = -1;

     return (PyObject *) b;
 }

 static PyObject *
 buffer_from_object(PyObject *base, Py_ssize_t size, Py_ssize_t offset, int readonly)
 {
     if (offset < 0) {
         PyErr_SetString(PyExc_ValueError,
                         "offset must be zero or positive");
         return NULL;
     }
     if ( PyBuffer_Check(base) && (((PyBufferObject *)base)->b_base) ) {
         /* another buffer, refer to the base object */
         PyBufferObject *b = (PyBufferObject *)base;
         if (b->b_size != Py_END_OF_BUFFER) {
             Py_ssize_t base_size = b->b_size - offset;
             if (base_size < 0)
                 base_size = 0;
             if (size == Py_END_OF_BUFFER || size > base_size)
                 size = base_size;
         }
         offset += b->b_offset;
         base = b->b_base;
     }
     return buffer_from_memory(base, size, offset, NULL, readonly);
 }

 PyObject *
 PyBuffer_FromObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size)
 {
     PyBufferProcs *pb = base->ob_type->tp_as_buffer;

     if ( pb == NULL ||
          pb->bf_getreadbuffer == NULL ||
          pb->bf_getsegcount == NULL )
     {
         PyErr_SetString(PyExc_TypeError, "buffer object expected");
         return NULL;
     }

     return buffer_from_object(base, size, offset, 1);
 }

 PyObject *
 PyBuffer_FromReadWriteObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size)
 {
     PyBufferProcs *pb = base->ob_type->tp_as_buffer;

     if ( pb == NULL ||
          pb->bf_getwritebuffer == NULL ||
          pb->bf_getsegcount == NULL )
     {
         PyErr_SetString(PyExc_TypeError, "buffer object expected");
         return NULL;
     }

     return buffer_from_object(base, size, offset, 0);
 }

 PyObject *
 PyBuffer_FromMemory(void *ptr, Py_ssize_t size)
 {
     return buffer_from_memory(NULL, size, 0, ptr, 1);
 }

 PyObject *
 PyBuffer_FromReadWriteMemory(void *ptr, Py_ssize_t size)
 {
     return buffer_from_memory(NULL, size, 0, ptr, 0);
 }

 PyObject *
 PyBuffer_New(Py_ssize_t size)
 {
     PyObject *o;
     PyBufferObject * b;

     if (size < 0) {
         PyErr_SetString(PyExc_ValueError,
                         "size must be zero or positive");
         return NULL;
     }
     if (sizeof(*b) > PY_SSIZE_T_MAX - size) {
         /* unlikely */
         return PyErr_NoMemory();
     }
     /* Inline PyObject_New */
     o = (PyObject *)PyObject_MALLOC(sizeof(*b) + size);
     if ( o == NULL )
         return PyErr_NoMemory();
     b = (PyBufferObject *) PyObject_INIT(o, &PyBuffer_Type);

     b->b_base = NULL;
     b->b_ptr = (void *)(b + 1);
     b->b_size = size;
     b->b_offset = 0;
     b->b_readonly = 0;
     b->b_hash = -1;

     return o;
 }

 /* Methods */

@@ -221,19 +221,21 @@
 static PyObject *
 buffer_new(PyTypeObject *type, PyObject *args, PyObject *kw)
 {
     PyObject *ob;
     Py_ssize_t offset = 0;
     Py_ssize_t size = Py_END_OF_BUFFER;

-    /*if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0)
-        return NULL;*/
+    /*
+     * if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0)
+     *     return NULL;
+     */

     if (!_PyArg_NoKeywords("buffer()", kw))
         return NULL;

     if (!PyArg_ParseTuple(args,
"O|nn:buffer", &ob, &offset, &size))
         return NULL;
     return PyBuffer_FromObject(ob, offset, size);
 }

 PyDoc_STRVAR(buffer_doc,

@@ -248,99 +250,99 @@
 static void
 buffer_dealloc(PyBufferObject *self)
 {
     Py_XDECREF(self->b_base);
     PyObject_DEL(self);
 }

 static int
 buffer_compare(PyBufferObject *self, PyBufferObject *other)
 {
     void *p1, *p2;
     Py_ssize_t len_self, len_other, min_len;
     int cmp;

     if (!get_buf(self, &p1, &len_self, ANY_BUFFER))
         return -1;
     if (!get_buf(other, &p2, &len_other, ANY_BUFFER))
         return -1;
     min_len = (len_self < len_other) ? len_self : len_other;
     if (min_len > 0) {
         cmp = memcmp(p1, p2, min_len);
         if (cmp != 0)
             return cmp < 0 ? -1 : 1;
     }
     return (len_self < len_other) ? -1 : (len_self > len_other) ? 1 : 0;
 }

 static PyObject *
 buffer_repr(PyBufferObject *self)
 {
     const char *status = self->b_readonly ? "read-only" : "read-write";

     if ( self->b_base == NULL )
         return PyString_FromFormat("<%s buffer ptr %p, size %zd at %p>",
                                    status,
                                    self->b_ptr,
                                    self->b_size,
                                    self);
     else
         return PyString_FromFormat(
             "<%s buffer for %p, size %zd, offset %zd at %p>",
             status,
             self->b_base,
             self->b_size,
             self->b_offset,
             self);
 }

 static long
 buffer_hash(PyBufferObject *self)
 {
     void *ptr;
     Py_ssize_t size;
     register Py_ssize_t len;
     register unsigned char *p;
     register long x;

     if ( self->b_hash != -1 )
         return self->b_hash;

     /* XXX potential bugs here, a readonly buffer does not imply that the
      * underlying memory is immutable.  b_readonly is a necessary but not
      * sufficient condition for a buffer to be hashable.  Perhaps it would
      * be better to only allow hashing if the underlying object is known to
      * be immutable (e.g. PyString_Check() is true).  Another idea would
      * be to call tp_hash on the underlying object and see if it raises
      * an error. */
     if ( !self->b_readonly )
     {
         PyErr_SetString(PyExc_TypeError,
                         "writable buffers are not hashable");
         return -1;
     }

     if (!get_buf(self, &ptr, &size, ANY_BUFFER))
         return -1;
     p = (unsigned char *) ptr;
     len = size;
     x = *p << 7;
     while (--len >= 0)
         x = (1000003*x) ^ *p++;
     x ^= size;
     if (x == -1)
         x = -2;
     self->b_hash = x;
     return x;
 }

 static PyObject *
 buffer_str(PyBufferObject *self)
 {
     void *ptr;
     Py_ssize_t size;
     if (!get_buf(self, &ptr, &size, ANY_BUFFER))
         return NULL;
     return PyString_FromStringAndSize((const char *)ptr, size);
 }

 /* Sequence methods */

@@ -348,374 +350,372 @@
 static Py_ssize_t
 buffer_length(PyBufferObject *self)
 {
     void *ptr;
     Py_ssize_t size;
     if (!get_buf(self, &ptr, &size, ANY_BUFFER))
         return -1;
     return size;
 }

 static PyObject *
 buffer_concat(PyBufferObject *self, PyObject *other)
 {
     PyBufferProcs *pb = other->ob_type->tp_as_buffer;
     void *ptr1, *ptr2;
     char *p;
     PyObject *ob;
     Py_ssize_t size, count;

     if ( pb == NULL ||
          pb->bf_getreadbuffer == NULL ||
          pb->bf_getsegcount == NULL )
     {
         PyErr_BadArgument();
         return NULL;
     }
     if ( (*pb->bf_getsegcount)(other, NULL) != 1 )
     {
         /* ### use a different exception type/message? */
         PyErr_SetString(PyExc_TypeError,
                         "single-segment buffer object expected");
         return NULL;
     }

     if (!get_buf(self, &ptr1, &size, ANY_BUFFER))
         return NULL;

     /* optimize special case */
     if ( size == 0 )
     {
         Py_INCREF(other);
         return other;
     }

     if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 )
         return NULL;

     assert(count <= PY_SIZE_MAX - size);

     ob = PyString_FromStringAndSize(NULL, size + count);
     if ( ob == NULL )
         return NULL;
     p = PyString_AS_STRING(ob);
     memcpy(p, ptr1, size);
     memcpy(p + size, ptr2, count);

     /* there is an extra byte in the string object, so this is safe */
     p[size + count] = '\0';

     return ob;
 }

 static PyObject *
 buffer_repeat(PyBufferObject *self, Py_ssize_t count)
 {
     PyObject *ob;
     register char *p;
     void *ptr;
     Py_ssize_t size;

     if ( count < 0 )
         count = 0;
     if (!get_buf(self, &ptr, &size, ANY_BUFFER))
         return NULL;
     if (count > PY_SSIZE_T_MAX / size) {
         PyErr_SetString(PyExc_MemoryError, "result too large");
         return NULL;
     }
     ob = PyString_FromStringAndSize(NULL, size * count);
     if ( ob == NULL )
         return NULL;

     p = PyString_AS_STRING(ob);
     while ( count-- )
     {
         memcpy(p, ptr, size);
         p += size;
     }

     /* there is an extra byte in the string object, so this is safe */
     *p = '\0';

     return ob;
 }

 static PyObject *
 buffer_item(PyBufferObject *self, Py_ssize_t idx)
 {
     void *ptr;
     Py_ssize_t size;
     if (!get_buf(self, &ptr, &size, ANY_BUFFER))
         return NULL;
     if ( idx < 0 || idx >= size ) {
         PyErr_SetString(PyExc_IndexError, "buffer index out of range");
         return NULL;
     }
     return PyString_FromStringAndSize((char *)ptr + idx, 1);
 }

 static PyObject *
 buffer_slice(PyBufferObject *self, Py_ssize_t left, Py_ssize_t right)
 {
     void *ptr;
     Py_ssize_t size;
     if (!get_buf(self, &ptr, &size, ANY_BUFFER))
         return NULL;
     if ( left < 0 )
         left = 0;
     if ( right < 0 )
         right = 0;
     if ( right > size )
         right = size;
     if ( right < left )
         right = left;
     return PyString_FromStringAndSize((char *)ptr + left,
                                       right - left);
 }

 static PyObject *
 buffer_subscript(PyBufferObject *self, PyObject *item)
 {
     void *p;
     Py_ssize_t size;

     if (!get_buf(self, &p, &size, ANY_BUFFER))
         return NULL;
     if (PyIndex_Check(item)) {
         Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError);
         if (i == -1 && PyErr_Occurred())
             return NULL;
         if (i < 0)
             i += size;
         return buffer_item(self, i);
     }
     else if (PySlice_Check(item)) {
         Py_ssize_t start, stop, step, slicelength, cur, i;

         if (PySlice_GetIndicesEx((PySliceObject*)item, size,
                                  &start, &stop, &step, &slicelength) < 0) {
             return NULL;
         }

         if (slicelength <= 0)
             return PyString_FromStringAndSize("", 0);
         else if (step == 1)
             return PyString_FromStringAndSize((char *)p + start,
                                               stop - start);
         else {
             PyObject *result;
             char *source_buf = (char *)p;
             char *result_buf = (char *)PyMem_Malloc(slicelength);

             if (result_buf == NULL)
                 return PyErr_NoMemory();

             for (cur = start, i = 0; i < slicelength;
                  cur += step, i++) {
                 result_buf[i] = source_buf[cur];
}
             result = PyString_FromStringAndSize(result_buf,
                                                 slicelength);
             PyMem_Free(result_buf);
             return result;
         }
     }
     else {
         PyErr_SetString(PyExc_TypeError,
                         "sequence index must be integer");
         return NULL;
     }
 }

 static int
 buffer_ass_item(PyBufferObject *self, Py_ssize_t idx, PyObject *other)
 {
     PyBufferProcs *pb;
     void *ptr1, *ptr2;
     Py_ssize_t size;
     Py_ssize_t count;

     if ( self->b_readonly ) {
         PyErr_SetString(PyExc_TypeError,
                         "buffer is read-only");
         return -1;
     }

     if (!get_buf(self, &ptr1, &size, ANY_BUFFER))
         return -1;

     if (idx < 0 || idx >= size) {
         PyErr_SetString(PyExc_IndexError,
                         "buffer assignment index out of range");
         return -1;
     }

     pb = other ? other->ob_type->tp_as_buffer : NULL;
     if ( pb == NULL ||
          pb->bf_getreadbuffer == NULL ||
          pb->bf_getsegcount == NULL )
     {
         PyErr_BadArgument();
         return -1;
     }
     if ( (*pb->bf_getsegcount)(other, NULL) != 1 )
     {
         /* ### use a different exception type/message? */
         PyErr_SetString(PyExc_TypeError,
                         "single-segment buffer object expected");
         return -1;
     }

     if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 )
         return -1;
     if ( count != 1 ) {
         PyErr_SetString(PyExc_TypeError,
                         "right operand must be a single byte");
         return -1;
     }

     ((char *)ptr1)[idx] = *(char *)ptr2;
     return 0;
 }

 static int
 buffer_ass_slice(PyBufferObject *self, Py_ssize_t left, Py_ssize_t right, PyObject *other)
 {
     PyBufferProcs *pb;
     void *ptr1, *ptr2;
     Py_ssize_t size;
     Py_ssize_t slice_len;
     Py_ssize_t count;

     if ( self->b_readonly ) {
         PyErr_SetString(PyExc_TypeError,
                         "buffer is read-only");
         return -1;
     }

     pb = other ? other->ob_type->tp_as_buffer : NULL;
     if ( pb == NULL ||
          pb->bf_getreadbuffer == NULL ||
          pb->bf_getsegcount == NULL )
     {
         PyErr_BadArgument();
         return -1;
     }
     if ( (*pb->bf_getsegcount)(other, NULL) != 1 )
     {
         /* ### use a different exception type/message? */
         PyErr_SetString(PyExc_TypeError,
                         "single-segment buffer object expected");
         return -1;
     }
     if (!get_buf(self, &ptr1, &size, ANY_BUFFER))
         return -1;
     if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 )
         return -1;

     if ( left < 0 )
         left = 0;
     else if ( left > size )
         left = size;
     if ( right < left )
         right = left;
     else if ( right > size )
         right = size;
     slice_len = right - left;

     if ( count != slice_len ) {
         PyErr_SetString(
             PyExc_TypeError,
             "right operand length must match slice length");
         return -1;
     }

     if ( slice_len )
         memcpy((char *)ptr1 + left, ptr2, slice_len);

     return 0;
 }

 static int
 buffer_ass_subscript(PyBufferObject *self, PyObject *item, PyObject *value)
 {
     PyBufferProcs *pb;
     void *ptr1, *ptr2;
     Py_ssize_t selfsize;
     Py_ssize_t othersize;

     if ( self->b_readonly ) {
         PyErr_SetString(PyExc_TypeError,
                         "buffer is read-only");
         return -1;
     }

     pb = value ? value->ob_type->tp_as_buffer : NULL;
     if ( pb == NULL ||
          pb->bf_getreadbuffer == NULL ||
          pb->bf_getsegcount == NULL )
     {
         PyErr_BadArgument();
         return -1;
     }
     if ( (*pb->bf_getsegcount)(value, NULL) != 1 )
     {
         /* ### use a different exception type/message? */
         PyErr_SetString(PyExc_TypeError,
                         "single-segment buffer object expected");
         return -1;
     }
     if (!get_buf(self, &ptr1, &selfsize, ANY_BUFFER))
         return -1;
     if (PyIndex_Check(item)) {
         Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError);
         if (i == -1 && PyErr_Occurred())
             return -1;
         if (i < 0)
             i += selfsize;
         return buffer_ass_item(self, i, value);
     }
     else if (PySlice_Check(item)) {
         Py_ssize_t start, stop, step, slicelength;

         if (PySlice_GetIndicesEx((PySliceObject *)item, selfsize,
                                  &start, &stop, &step, &slicelength) < 0)
             return -1;

         if ((othersize = (*pb->bf_getreadbuffer)(value, 0, &ptr2)) < 0)
             return -1;

         if (othersize != slicelength) {
             PyErr_SetString(
                 PyExc_TypeError,
                 "right operand length must match slice length");
             return -1;
         }

         if (slicelength == 0)
             return 0;
         else if (step == 1) {
             memcpy((char *)ptr1 + start, ptr2, slicelength);
             return 0;
         }
         else {
             Py_ssize_t cur, i;

             for (cur = start, i = 0; i < slicelength;
                  cur += step, i++) {
                 ((char *)ptr1)[cur] = ((char *)ptr2)[i];
             }

             return 0;
         }
     } else {
         PyErr_SetString(PyExc_TypeError,
                         "buffer indices must be integers");
         return -1;
     }
 }

 /* Buffer methods */

@@ -723,64 +723,64 @@
 static Py_ssize_t
 buffer_getreadbuf(PyBufferObject *self, Py_ssize_t idx, void **pp)
 {
     Py_ssize_t size;
     if ( idx != 0 ) {
         PyErr_SetString(PyExc_SystemError,
                         "accessing non-existent buffer segment");
         return -1;
     }
     if (!get_buf(self, pp, &size, READ_BUFFER))
         return -1;
     return size;
 }

 static Py_ssize_t
 buffer_getwritebuf(PyBufferObject *self, Py_ssize_t idx, void **pp)
 {
     Py_ssize_t size;

     if ( self->b_readonly )
     {
         PyErr_SetString(PyExc_TypeError, "buffer is read-only");
         return -1;
     }

     if ( idx != 0 ) {
         PyErr_SetString(PyExc_SystemError,
                         "accessing non-existent buffer segment");
         return -1;
     }
     if (!get_buf(self, pp, &size, WRITE_BUFFER))
         return -1;
     return size;
 }

 static Py_ssize_t
 buffer_getsegcount(PyBufferObject *self, Py_ssize_t *lenp)
 {
     void *ptr;
     Py_ssize_t size;
     if (!get_buf(self, &ptr,
&size, ANY_BUFFER))
         return -1;
     if (lenp)
         *lenp = size;
     return 1;
 }

 static Py_ssize_t
 buffer_getcharbuf(PyBufferObject *self, Py_ssize_t idx, const char **pp)
 {
     void *ptr;
     Py_ssize_t size;
     if ( idx != 0 ) {
         PyErr_SetString(PyExc_SystemError,
                         "accessing non-existent buffer segment");
         return -1;
     }
     if (!get_buf(self, &ptr, &size, CHAR_BUFFER))
         return -1;
     *pp = (const char *)ptr;
     return size;
 }

 void init_bufferobject(void)
@@ -789,67 +789,65 @@
 }

 static PySequenceMethods buffer_as_sequence = {
     (lenfunc)buffer_length,                 /*sq_length*/
     (binaryfunc)buffer_concat,              /*sq_concat*/
     (ssizeargfunc)buffer_repeat,            /*sq_repeat*/
     (ssizeargfunc)buffer_item,              /*sq_item*/
     (ssizessizeargfunc)buffer_slice,        /*sq_slice*/
     (ssizeobjargproc)buffer_ass_item,       /*sq_ass_item*/
     (ssizessizeobjargproc)buffer_ass_slice, /*sq_ass_slice*/
 };

 static PyMappingMethods buffer_as_mapping = {
     (lenfunc)buffer_length,
     (binaryfunc)buffer_subscript,
     (objobjargproc)buffer_ass_subscript,
 };

 static PyBufferProcs buffer_as_buffer = {
     (readbufferproc)buffer_getreadbuf,
     (writebufferproc)buffer_getwritebuf,
     (segcountproc)buffer_getsegcount,
     (charbufferproc)buffer_getcharbuf,
 };

 PyTypeObject PyBuffer_Type = {
-    PyObject_HEAD_INIT(NULL)
+    PyVarObject_HEAD_INIT(NULL, 0)
+    "buffer",
+    sizeof(PyBufferObject),
     0,
-    "buffer",
-    sizeof(PyBufferObject),
-    0,
     (destructor)buffer_dealloc,        /* tp_dealloc */
     0,                                 /* tp_print */
     0,                                 /* tp_getattr */
     0,                                 /* tp_setattr */
     (cmpfunc)buffer_compare,           /* tp_compare */
     (reprfunc)buffer_repr,             /* tp_repr */
     0,                                 /* tp_as_number */
     &buffer_as_sequence,               /* tp_as_sequence */
     &buffer_as_mapping,                /* tp_as_mapping */
     (hashfunc)buffer_hash,             /* tp_hash */
     0,                                 /* tp_call */
     (reprfunc)buffer_str,              /* tp_str */
     PyObject_GenericGetAttr,           /* tp_getattro */
     0,                                 /* tp_setattro */
     &buffer_as_buffer,                 /* tp_as_buffer */
     Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GETCHARBUFFER, /* tp_flags */
     buffer_doc,                        /* tp_doc */
     0,                                 /* tp_traverse */
     0,                                 /* tp_clear */
     0,                                 /* tp_richcompare */
     0,                                 /* tp_weaklistoffset */
     0,                                 /* tp_iter */
     0,                                 /* tp_iternext */
     0,                                 /* tp_methods */
     0,                                 /* tp_members */
     0,                                 /* tp_getset */
     0,                                 /* tp_base */
     0,                                 /* tp_dict */
     0,                                 /* tp_descr_get */
     0,                                 /* tp_descr_set */
     0,                                 /* tp_dictoffset */
     0,                                 /* tp_init */
     0,                                 /* tp_alloc */
     buffer_new,                        /* tp_new */
 };
-
diff --git a/pypy/module/cpyext/src/cobject.c b/pypy/module/cpyext/src/cobject.c
--- a/pypy/module/cpyext/src/cobject.c
+++ b/pypy/module/cpyext/src/cobject.c
@@ -50,6 +50,10 @@
 PyCObject_AsVoidPtr(PyObject *self)
 {
     if (self) {
+        if (PyCapsule_CheckExact(self)) {
+            const char *name = PyCapsule_GetName(self);
+            return (void *)PyCapsule_GetPointer(self, name);
+        }
         if (self->ob_type == &PyCObject_Type)
             return ((PyCObject *)self)->cobject;
         PyErr_SetString(PyExc_TypeError,
diff --git a/pypy/module/cpyext/src/getargs.c b/pypy/module/cpyext/src/getargs.c
--- a/pypy/module/cpyext/src/getargs.c
+++ b/pypy/module/cpyext/src/getargs.c
@@ -7,349 +7,348 @@
 #ifdef __cplusplus
 extern "C" {
 #endif

 int PyArg_Parse(PyObject *, const char *, ...);
 int PyArg_ParseTuple(PyObject *, const char *, ...);
 int PyArg_VaParse(PyObject *, const char *, va_list);
 int PyArg_ParseTupleAndKeywords(PyObject *, PyObject *,
                                 const char *, char **, ...);
 int PyArg_VaParseTupleAndKeywords(PyObject *, PyObject *,
                                   const char *, char **, va_list);

 #define FLAG_COMPAT 1
 #define FLAG_SIZE_T 2

-typedef int (*destr_t)(PyObject *, void *);
-
-
 /* Keep track of "objects" that have been allocated or initialized and
    which will need to be deallocated or cleaned up somehow if overall
    parsing fails.
-*/ -typedef struct { - void *item; - destr_t destructor; -} freelistentry_t; - -typedef struct { - int first_available; - freelistentry_t *entries; -} freelist_t; - /* Forward */ static int vgetargs1(PyObject *, const char *, va_list *, int); static void seterror(int, const char *, int *, const char *, const char *); -static char *convertitem(PyObject *, const char **, va_list *, int, int *, - char *, size_t, freelist_t *); +static char *convertitem(PyObject *, const char **, va_list *, int, int *, + char *, size_t, PyObject **); static char *converttuple(PyObject *, const char **, va_list *, int, - int *, char *, size_t, int, freelist_t *); + int *, char *, size_t, int, PyObject **); static char *convertsimple(PyObject *, const char **, va_list *, int, char *, - size_t, freelist_t *); + size_t, PyObject **); static Py_ssize_t convertbuffer(PyObject *, void **p, char **); static int getbuffer(PyObject *, Py_buffer *, char**); static int vgetargskeywords(PyObject *, PyObject *, - const char *, char **, va_list *, int); + const char *, char **, va_list *, int); static char *skipitem(const char **, va_list *, int); int PyArg_Parse(PyObject *args, const char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_COMPAT); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_COMPAT); + va_end(va); + return retval; } int _PyArg_Parse_SizeT(PyObject *args, char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_ParseTuple(PyObject *args, const char *format, ...) 
{ - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, 0); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, 0); + va_end(va); + return retval; } int _PyArg_ParseTuple_SizeT(PyObject *args, char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_SIZE_T); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_VaParse(PyObject *args, const char *format, va_list va) { - va_list lva; + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - return vgetargs1(args, format, &lva, 0); + return vgetargs1(args, format, &lva, 0); } int _PyArg_VaParse_SizeT(PyObject *args, char *format, va_list va) { - va_list lva; + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - return vgetargs1(args, format, &lva, FLAG_SIZE_T); + return vgetargs1(args, format, &lva, FLAG_SIZE_T); } /* Handle cleanup of allocated memory in case of exception */ +#define GETARGS_CAPSULE_NAME_CLEANUP_PTR "getargs.cleanup_ptr" +#define GETARGS_CAPSULE_NAME_CLEANUP_BUFFER "getargs.cleanup_buffer" + +static void +cleanup_ptr(PyObject *self) +{ + void *ptr = PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_PTR); + if (ptr) { + PyMem_FREE(ptr); + } +} + +static void +cleanup_buffer(PyObject *self) +{ + Py_buffer *ptr = (Py_buffer *)PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_BUFFER); + if (ptr) { + PyBuffer_Release(ptr); + } +} + static int -cleanup_ptr(PyObject 
*self, void *ptr) +addcleanup(void *ptr, PyObject **freelist, PyCapsule_Destructor destr) { - if (ptr) { - PyMem_FREE(ptr); + PyObject *cobj; + const char *name; + + if (!*freelist) { + *freelist = PyList_New(0); + if (!*freelist) { + destr(ptr); + return -1; + } } + + if (destr == cleanup_ptr) { + name = GETARGS_CAPSULE_NAME_CLEANUP_PTR; + } else if (destr == cleanup_buffer) { + name = GETARGS_CAPSULE_NAME_CLEANUP_BUFFER; + } else { + return -1; + } + cobj = PyCapsule_New(ptr, name, destr); + if (!cobj) { + destr(ptr); + return -1; + } + if (PyList_Append(*freelist, cobj)) { + Py_DECREF(cobj); + return -1; + } + Py_DECREF(cobj); return 0; } static int -cleanup_buffer(PyObject *self, void *ptr) +cleanreturn(int retval, PyObject *freelist) { - Py_buffer *buf = (Py_buffer *)ptr; - if (buf) { - PyBuffer_Release(buf); + if (freelist && retval != 0) { + /* We were successful, reset the destructors so that they + don't get called. */ + Py_ssize_t len = PyList_GET_SIZE(freelist), i; + for (i = 0; i < len; i++) + PyCapsule_SetDestructor(PyList_GET_ITEM(freelist, i), NULL); } - return 0; + Py_XDECREF(freelist); + return retval; } -static int -addcleanup(void *ptr, freelist_t *freelist, destr_t destructor) -{ - int index; - - index = freelist->first_available; - freelist->first_available += 1; - - freelist->entries[index].item = ptr; - freelist->entries[index].destructor = destructor; - - return 0; -} - -static int -cleanreturn(int retval, freelist_t *freelist) -{ - int index; - - if (retval == 0) { - /* A failure occurred, therefore execute all of the cleanup - functions. 
- */ - for (index = 0; index < freelist->first_available; ++index) { - freelist->entries[index].destructor(NULL, - freelist->entries[index].item); - } - } - PyMem_Free(freelist->entries); - return retval; -} static int vgetargs1(PyObject *args, const char *format, va_list *p_va, int flags) { - char msgbuf[256]; - int levels[32]; - const char *fname = NULL; - const char *message = NULL; - int min = -1; - int max = 0; - int level = 0; - int endfmt = 0; - const char *formatsave = format; - Py_ssize_t i, len; - char *msg; - freelist_t freelist = {0, NULL}; - int compat = flags & FLAG_COMPAT; + char msgbuf[256]; + int levels[32]; + const char *fname = NULL; + const char *message = NULL; + int min = -1; + int max = 0; + int level = 0; + int endfmt = 0; + const char *formatsave = format; + Py_ssize_t i, len; + char *msg; + PyObject *freelist = NULL; + int compat = flags & FLAG_COMPAT; - assert(compat || (args != (PyObject*)NULL)); - flags = flags & ~FLAG_COMPAT; + assert(compat || (args != (PyObject*)NULL)); + flags = flags & ~FLAG_COMPAT; - while (endfmt == 0) { - int c = *format++; - switch (c) { - case '(': - if (level == 0) - max++; - level++; - if (level >= 30) - Py_FatalError("too many tuple nesting levels " - "in argument format string"); - break; - case ')': - if (level == 0) - Py_FatalError("excess ')' in getargs format"); - else - level--; - break; - case '\0': - endfmt = 1; - break; - case ':': - fname = format; - endfmt = 1; - break; - case ';': - message = format; - endfmt = 1; - break; - default: - if (level == 0) { - if (c == 'O') - max++; - else if (isalpha(Py_CHARMASK(c))) { - if (c != 'e') /* skip encoded */ - max++; - } else if (c == '|') - min = max; - } - break; - } - } - - if (level != 0) - Py_FatalError(/* '(' */ "missing ')' in getargs format"); - - if (min < 0) - min = max; - - format = formatsave; - - freelist.entries = PyMem_New(freelistentry_t, max); + while (endfmt == 0) { + int c = *format++; + switch (c) { + case '(': + if (level == 0) + 
max++; + level++; + if (level >= 30) + Py_FatalError("too many tuple nesting levels " + "in argument format string"); + break; + case ')': + if (level == 0) + Py_FatalError("excess ')' in getargs format"); + else + level--; + break; + case '\0': + endfmt = 1; + break; + case ':': + fname = format; + endfmt = 1; + break; + case ';': + message = format; + endfmt = 1; + break; + default: + if (level == 0) { + if (c == 'O') + max++; + else if (isalpha(Py_CHARMASK(c))) { + if (c != 'e') /* skip encoded */ + max++; + } else if (c == '|') + min = max; + } + break; + } + } - if (compat) { - if (max == 0) { - if (args == NULL) - return cleanreturn(1, &freelist); - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.200s%s takes no arguments", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()"); - PyErr_SetString(PyExc_TypeError, msgbuf); - return cleanreturn(0, &freelist); - } - else if (min == 1 && max == 1) { - if (args == NULL) { - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.200s%s takes at least one argument", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()"); - PyErr_SetString(PyExc_TypeError, msgbuf); - return cleanreturn(0, &freelist); - } - msg = convertitem(args, &format, p_va, flags, levels, - msgbuf, sizeof(msgbuf), &freelist); - if (msg == NULL) - return cleanreturn(1, &freelist); - seterror(levels[0], msg, levels+1, fname, message); - return cleanreturn(0, &freelist); - } - else { - PyErr_SetString(PyExc_SystemError, - "old style getargs format uses new features"); - return cleanreturn(0, &freelist); - } - } - - if (!PyTuple_Check(args)) { - PyErr_SetString(PyExc_SystemError, - "new style getargs format but argument is not a tuple"); - return cleanreturn(0, &freelist); - } - - len = PyTuple_GET_SIZE(args); - - if (len < min || max < len) { - if (message == NULL) { - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.150s%s takes %s %d argument%s " - "(%ld given)", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()", - min==max ? 
"exactly" - : len < min ? "at least" : "at most", - len < min ? min : max, - (len < min ? min : max) == 1 ? "" : "s", - Py_SAFE_DOWNCAST(len, Py_ssize_t, long)); - message = msgbuf; - } - PyErr_SetString(PyExc_TypeError, message); - return cleanreturn(0, &freelist); - } - - for (i = 0; i < len; i++) { - if (*format == '|') - format++; - msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va, - flags, levels, msgbuf, - sizeof(msgbuf), &freelist); - if (msg) { - seterror(i+1, msg, levels, fname, message); - return cleanreturn(0, &freelist); - } - } + if (level != 0) + Py_FatalError(/* '(' */ "missing ')' in getargs format"); - if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) && - *format != '(' && - *format != '|' && *format != ':' && *format != ';') { - PyErr_Format(PyExc_SystemError, - "bad format string: %.200s", formatsave); - return cleanreturn(0, &freelist); - } - - return cleanreturn(1, &freelist); + if (min < 0) + min = max; + + format = formatsave; + + if (compat) { + if (max == 0) { + if (args == NULL) + return 1; + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.200s%s takes no arguments", + fname==NULL ? "function" : fname, + fname==NULL ? "" : "()"); + PyErr_SetString(PyExc_TypeError, msgbuf); + return 0; + } + else if (min == 1 && max == 1) { + if (args == NULL) { + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.200s%s takes at least one argument", + fname==NULL ? "function" : fname, + fname==NULL ? 
"" : "()"); + PyErr_SetString(PyExc_TypeError, msgbuf); + return 0; + } + msg = convertitem(args, &format, p_va, flags, levels, + msgbuf, sizeof(msgbuf), &freelist); + if (msg == NULL) + return cleanreturn(1, freelist); + seterror(levels[0], msg, levels+1, fname, message); + return cleanreturn(0, freelist); + } + else { + PyErr_SetString(PyExc_SystemError, + "old style getargs format uses new features"); + return 0; + } + } + + if (!PyTuple_Check(args)) { + PyErr_SetString(PyExc_SystemError, + "new style getargs format but argument is not a tuple"); + return 0; + } + + len = PyTuple_GET_SIZE(args); + + if (len < min || max < len) { + if (message == NULL) { + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.150s%s takes %s %d argument%s " + "(%ld given)", + fname==NULL ? "function" : fname, + fname==NULL ? "" : "()", + min==max ? "exactly" + : len < min ? "at least" : "at most", + len < min ? min : max, + (len < min ? min : max) == 1 ? "" : "s", + Py_SAFE_DOWNCAST(len, Py_ssize_t, long)); + message = msgbuf; + } + PyErr_SetString(PyExc_TypeError, message); + return 0; + } + + for (i = 0; i < len; i++) { + if (*format == '|') + format++; + msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va, + flags, levels, msgbuf, + sizeof(msgbuf), &freelist); + if (msg) { + seterror(i+1, msg, levels, fname, message); + return cleanreturn(0, freelist); + } + } + + if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) && + *format != '(' && + *format != '|' && *format != ':' && *format != ';') { + PyErr_Format(PyExc_SystemError, + "bad format string: %.200s", formatsave); + return cleanreturn(0, freelist); + } + + return cleanreturn(1, freelist); } @@ -358,37 +357,37 @@ seterror(int iarg, const char *msg, int *levels, const char *fname, const char *message) { - char buf[512]; - int i; - char *p = buf; + char buf[512]; + int i; + char *p = buf; - if (PyErr_Occurred()) - return; - else if (message == NULL) { - if (fname != NULL) { - PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname); - p
+= strlen(p); - } - if (iarg != 0) { - PyOS_snprintf(p, sizeof(buf) - (p - buf), - "argument %d", iarg); - i = 0; - p += strlen(p); - while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) { - PyOS_snprintf(p, sizeof(buf) - (p - buf), - ", item %d", levels[i]-1); - p += strlen(p); - i++; - } - } - else { - PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument"); - p += strlen(p); - } - PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg); - message = buf; - } - PyErr_SetString(PyExc_TypeError, message); + if (PyErr_Occurred()) + return; + else if (message == NULL) { + if (fname != NULL) { + PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname); + p += strlen(p); + } + if (iarg != 0) { + PyOS_snprintf(p, sizeof(buf) - (p - buf), + "argument %d", iarg); + i = 0; + p += strlen(p); + while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) { + PyOS_snprintf(p, sizeof(buf) - (p - buf), + ", item %d", levels[i]-1); + p += strlen(p); + i++; + } + } + else { + PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument"); + p += strlen(p); + } + PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg); + message = buf; + } + PyErr_SetString(PyExc_TypeError, message); } @@ -404,85 +403,84 @@ *p_va is undefined, *levels is a 0-terminated list of item numbers, *msgbuf contains an error message, whose format is: - "must be <desired type>, not <actual type>", where: - <desired type> is the name of the expected type, and - <actual type> is the name of the actual type, + "must be <desired type>, not <actual type>", where: + <desired type> is the name of the expected type, and + <actual type> is the name of the actual type, and msgbuf is returned.
*/ static char * converttuple(PyObject *arg, const char **p_format, va_list *p_va, int flags, - int *levels, char *msgbuf, size_t bufsize, int toplevel, - freelist_t *freelist) + int *levels, char *msgbuf, size_t bufsize, int toplevel, + PyObject **freelist) { - int level = 0; - int n = 0; - const char *format = *p_format; - int i; - - for (;;) { - int c = *format++; - if (c == '(') { - if (level == 0) - n++; - level++; - } - else if (c == ')') { - if (level == 0) - break; - level--; - } - else if (c == ':' || c == ';' || c == '\0') - break; - else if (level == 0 && isalpha(Py_CHARMASK(c))) - n++; - } - - if (!PySequence_Check(arg) || PyString_Check(arg)) { - levels[0] = 0; - PyOS_snprintf(msgbuf, bufsize, - toplevel ? "expected %d arguments, not %.50s" : - "must be %d-item sequence, not %.50s", - n, - arg == Py_None ? "None" : arg->ob_type->tp_name); - return msgbuf; - } - - if ((i = PySequence_Size(arg)) != n) { - levels[0] = 0; - PyOS_snprintf(msgbuf, bufsize, - toplevel ? "expected %d arguments, not %d" : - "must be sequence of length %d, not %d", - n, i); - return msgbuf; - } + int level = 0; + int n = 0; + const char *format = *p_format; + int i; - format = *p_format; - for (i = 0; i < n; i++) { - char *msg; - PyObject *item; + for (;;) { + int c = *format++; + if (c == '(') { + if (level == 0) + n++; + level++; + } + else if (c == ')') { + if (level == 0) + break; + level--; + } + else if (c == ':' || c == ';' || c == '\0') + break; + else if (level == 0 && isalpha(Py_CHARMASK(c))) + n++; + } + + if (!PySequence_Check(arg) || PyString_Check(arg)) { + levels[0] = 0; + PyOS_snprintf(msgbuf, bufsize, + toplevel ? "expected %d arguments, not %.50s" : + "must be %d-item sequence, not %.50s", + n, + arg == Py_None ? "None" : arg->ob_type->tp_name); + return msgbuf; + } + + if ((i = PySequence_Size(arg)) != n) { + levels[0] = 0; + PyOS_snprintf(msgbuf, bufsize, + toplevel ? 
"expected %d arguments, not %d" : + "must be sequence of length %d, not %d", + n, i); + return msgbuf; + } + + format = *p_format; + for (i = 0; i < n; i++) { + char *msg; + PyObject *item; item = PySequence_GetItem(arg, i); - if (item == NULL) { - PyErr_Clear(); - levels[0] = i+1; - levels[1] = 0; - strncpy(msgbuf, "is not retrievable", - bufsize); - return msgbuf; - } - PyPy_Borrow(arg, item); - msg = convertitem(item, &format, p_va, flags, levels+1, - msgbuf, bufsize, freelist); + if (item == NULL) { + PyErr_Clear(); + levels[0] = i+1; + levels[1] = 0; + strncpy(msgbuf, "is not retrievable", bufsize); + return msgbuf; + } + PyPy_Borrow(arg, item); + msg = convertitem(item, &format, p_va, flags, levels+1, + msgbuf, bufsize, freelist); /* PySequence_GetItem calls tp->sq_item, which INCREFs */ Py_XDECREF(item); - if (msg != NULL) { - levels[0] = i+1; - return msg; - } - } + if (msg != NULL) { + levels[0] = i+1; + return msg; + } + } - *p_format = format; - return NULL; + *p_format = format; + return NULL; } @@ -490,45 +488,45 @@ static char * convertitem(PyObject *arg, const char **p_format, va_list *p_va, int flags, - int *levels, char *msgbuf, size_t bufsize, freelist_t *freelist) + int *levels, char *msgbuf, size_t bufsize, PyObject **freelist) { - char *msg; - const char *format = *p_format; - - if (*format == '(' /* ')' */) { - format++; - msg = converttuple(arg, &format, p_va, flags, levels, msgbuf, - bufsize, 0, freelist); - if (msg == NULL) - format++; - } - else { - msg = convertsimple(arg, &format, p_va, flags, - msgbuf, bufsize, freelist); - if (msg != NULL) - levels[0] = 0; - } - if (msg == NULL) - *p_format = format; - return msg; + char *msg; + const char *format = *p_format; + + if (*format == '(' /* ')' */) { + format++; + msg = converttuple(arg, &format, p_va, flags, levels, msgbuf, + bufsize, 0, freelist); + if (msg == NULL) + format++; + } + else { + msg = convertsimple(arg, &format, p_va, flags, + msgbuf, bufsize, freelist); + if (msg != NULL) + 
levels[0] = 0; + } + if (msg == NULL) + *p_format = format; + return msg; } #define UNICODE_DEFAULT_ENCODING(arg) \ - _PyUnicode_AsDefaultEncodedString(arg, NULL) + _PyUnicode_AsDefaultEncodedString(arg, NULL) /* Format an error message generated by convertsimple(). */ static char * converterr(const char *expected, PyObject *arg, char *msgbuf, size_t bufsize) { - assert(expected != NULL); - assert(arg != NULL); - PyOS_snprintf(msgbuf, bufsize, - "must be %.50s, not %.50s", expected, - arg == Py_None ? "None" : arg->ob_type->tp_name); - return msgbuf; + assert(expected != NULL); + assert(arg != NULL); + PyOS_snprintf(msgbuf, bufsize, + "must be %.50s, not %.50s", expected, + arg == Py_None ? "None" : arg->ob_type->tp_name); + return msgbuf; } #define CONV_UNICODE "(unicode conversion error)" @@ -536,14 +534,28 @@ /* explicitly check for float arguments when integers are expected. For now * signal a warning. Returns true if an exception was raised. */ static int +float_argument_warning(PyObject *arg) +{ + if (PyFloat_Check(arg) && + PyErr_Warn(PyExc_DeprecationWarning, + "integer argument expected, got float" )) + return 1; + else + return 0; +} + +/* explicitly check for float arguments when integers are expected. Raises + TypeError and returns true for float arguments. */ +static int float_argument_error(PyObject *arg) { - if (PyFloat_Check(arg) && - PyErr_Warn(PyExc_DeprecationWarning, - "integer argument expected, got float" )) - return 1; - else - return 0; + if (PyFloat_Check(arg)) { + PyErr_SetString(PyExc_TypeError, + "integer argument expected, got float"); + return 1; + } + else + return 0; } /* Convert a non-tuple argument. 
Return NULL if conversion went OK, @@ -557,836 +569,839 @@ static char * convertsimple(PyObject *arg, const char **p_format, va_list *p_va, int flags, - char *msgbuf, size_t bufsize, freelist_t *freelist) + char *msgbuf, size_t bufsize, PyObject **freelist) { - /* For # codes */ -#define FETCH_SIZE int *q=NULL;Py_ssize_t *q2=NULL;\ - if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \ - else q=va_arg(*p_va, int*); -#define STORE_SIZE(s) if (flags & FLAG_SIZE_T) *q2=s; else *q=s; + /* For # codes */ +#define FETCH_SIZE int *q=NULL;Py_ssize_t *q2=NULL;\ + if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \ + else q=va_arg(*p_va, int*); +#define STORE_SIZE(s) \ + if (flags & FLAG_SIZE_T) \ + *q2=s; \ + else { \ + if (INT_MAX < s) { \ + PyErr_SetString(PyExc_OverflowError, \ + "size does not fit in an int"); \ + return converterr("", arg, msgbuf, bufsize); \ + } \ + *q=s; \ + } #define BUFFER_LEN ((flags & FLAG_SIZE_T) ? *q2:*q) - const char *format = *p_format; - char c = *format++; + const char *format = *p_format; + char c = *format++; #ifdef Py_USING_UNICODE - PyObject *uarg; -#endif - - switch (c) { - - case 'b': { /* unsigned byte -- very short int */ - char *p = va_arg(*p_va, char *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned byte integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival > UCHAR_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned byte integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else - *p = (unsigned char) ival; - break; - } - - case 'B': {/* byte sized bitfield - both signed and unsigned - values allowed */ - char *p = va_arg(*p_va, char *); - long ival; - if 
(float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsUnsignedLongMask(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = (unsigned char) ival; - break; - } - - case 'h': {/* signed short int */ - short *p = va_arg(*p_va, short *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival < SHRT_MIN) { - PyErr_SetString(PyExc_OverflowError, - "signed short integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival > SHRT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "signed short integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else - *p = (short) ival; - break; - } - - case 'H': { /* short int sized bitfield, both signed and - unsigned allowed */ - unsigned short *p = va_arg(*p_va, unsigned short *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsUnsignedLongMask(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = (unsigned short) ival; - break; - } - case 'i': {/* signed int */ - int *p = va_arg(*p_va, int *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival > INT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "signed integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival < INT_MIN) { - PyErr_SetString(PyExc_OverflowError, - "signed integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - 
else - *p = ival; - break; - } - case 'I': { /* int sized bitfield, both signed and - unsigned allowed */ - unsigned int *p = va_arg(*p_va, unsigned int *); - unsigned int ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = (unsigned int)PyInt_AsUnsignedLongMask(arg); - if (ival == (unsigned int)-1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = ival; - break; - } - case 'n': /* Py_ssize_t */ -#if SIZEOF_SIZE_T != SIZEOF_LONG - { - Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *); - Py_ssize_t ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsSsize_t(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - *p = ival; - break; - } -#endif - /* Fall through from 'n' to 'l' if Py_ssize_t is int */ - case 'l': {/* long int */ - long *p = va_arg(*p_va, long *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = ival; - break; - } - - case 'k': { /* long sized bitfield */ - unsigned long *p = va_arg(*p_va, unsigned long *); - unsigned long ival; - if (PyInt_Check(arg)) - ival = PyInt_AsUnsignedLongMask(arg); - else if (PyLong_Check(arg)) - ival = PyLong_AsUnsignedLongMask(arg); - else - return converterr("integer", arg, msgbuf, bufsize); - *p = ival; - break; - } - -#ifdef HAVE_LONG_LONG - case 'L': {/* PY_LONG_LONG */ - PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * ); - PY_LONG_LONG ival = PyLong_AsLongLong( arg ); - if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) { - return converterr("long", arg, msgbuf, bufsize); - } else { - *p = ival; - } - break; - } - - case 'K': { /* long long sized bitfield */ - unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *); - unsigned PY_LONG_LONG ival; 
-        if (PyInt_Check(arg))
-            ival = PyInt_AsUnsignedLongMask(arg);
-        else if (PyLong_Check(arg))
-            ival = PyLong_AsUnsignedLongLongMask(arg);
-        else
-            return converterr("integer", arg, msgbuf, bufsize);
-        *p = ival;
-        break;
-    }
-#endif // HAVE_LONG_LONG
-
-    case 'f': {/* float */
-        float *p = va_arg(*p_va, float *);
-        double dval = PyFloat_AsDouble(arg);
-        if (PyErr_Occurred())
-            return converterr("float", arg, msgbuf, bufsize);
-        else
-            *p = (float) dval;
-        break;
-    }
-
-    case 'd': {/* double */
-        double *p = va_arg(*p_va, double *);
-        double dval = PyFloat_AsDouble(arg);
-        if (PyErr_Occurred())
-            return converterr("float", arg, msgbuf, bufsize);
-        else
-            *p = dval;
-        break;
-    }
-
-#ifndef WITHOUT_COMPLEX
-    case 'D': {/* complex double */
-        Py_complex *p = va_arg(*p_va, Py_complex *);
-        Py_complex cval;
-        cval = PyComplex_AsCComplex(arg);
-        if (PyErr_Occurred())
-            return converterr("complex", arg, msgbuf, bufsize);
-        else
-            *p = cval;
-        break;
-    }
-#endif /* WITHOUT_COMPLEX */
-
-    case 'c': {/* char */
-        char *p = va_arg(*p_va, char *);
-        if (PyString_Check(arg) && PyString_Size(arg) == 1)
-            *p = PyString_AS_STRING(arg)[0];
-        else
-            return converterr("char", arg, msgbuf, bufsize);
-        break;
-    }
-    case 's': {/* string */
-        if (*format == '*') {
-            Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
-
-            if (PyString_Check(arg)) {
-                fflush(stdout);
-                PyBuffer_FillInfo(p, arg,
-                    PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
-                    1, 0);
-            }
-#ifdef Py_USING_UNICODE
-            else if (PyUnicode_Check(arg)) {
-#if 0
-                uarg = UNICODE_DEFAULT_ENCODING(arg);
-                if (uarg == NULL)
-                    return converterr(CONV_UNICODE,
-                        arg, msgbuf, bufsize);
-                PyBuffer_FillInfo(p, arg,
-                    PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
-                    1, 0);
-#else
-                return converterr("string or buffer", arg, msgbuf, bufsize);
-#endif
-            }
-#endif
-            else { /* any buffer-like object */
-                char *buf;
-                if (getbuffer(arg, p, &buf) < 0)
-                    return converterr(buf, arg, msgbuf, bufsize);
-            }
-            if (addcleanup(p, freelist, cleanup_buffer)) {
-                return converterr(
-                    "(cleanup problem)",
-                    arg, msgbuf, bufsize);
-            }
-            format++;
-        } else if (*format == '#') {
-            void **p = (void **)va_arg(*p_va, char **);
-            FETCH_SIZE;
-
-            if (PyString_Check(arg)) {
-                *p = PyString_AS_STRING(arg);
-                STORE_SIZE(PyString_GET_SIZE(arg));
-            }
-#ifdef Py_USING_UNICODE
-            else if (PyUnicode_Check(arg)) {
-                uarg = UNICODE_DEFAULT_ENCODING(arg);
-                if (uarg == NULL)
-                    return converterr(CONV_UNICODE,
-                        arg, msgbuf, bufsize);
-                *p = PyString_AS_STRING(uarg);
-                STORE_SIZE(PyString_GET_SIZE(uarg));
-            }
-#endif
-            else { /* any buffer-like object */
-                char *buf;
-                Py_ssize_t count = convertbuffer(arg, p, &buf);
-                if (count < 0)
-                    return converterr(buf, arg, msgbuf, bufsize);
-                STORE_SIZE(count);
-            }
-            format++;
-        } else {
-            char **p = va_arg(*p_va, char **);
-
-            if (PyString_Check(arg))
-                *p = PyString_AS_STRING(arg);
-#ifdef Py_USING_UNICODE
-            else if (PyUnicode_Check(arg)) {
-                uarg = UNICODE_DEFAULT_ENCODING(arg);
-                if (uarg == NULL)
-                    return converterr(CONV_UNICODE,
-                        arg, msgbuf, bufsize);
-                *p = PyString_AS_STRING(uarg);
-            }
-#endif
-            else
-                return converterr("string", arg, msgbuf, bufsize);
-            if ((Py_ssize_t)strlen(*p) != PyString_Size(arg))
-                return converterr("string without null bytes",
-                    arg, msgbuf, bufsize);
-        }
-        break;
-    }
-
-    case 'z': {/* string, may be NULL (None) */
-        if (*format == '*') {
-            Py_FatalError("'*' format not supported in PyArg_*\n");
-#if 0
-            Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
-
-            if (arg == Py_None)
-                PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0);
-            else if (PyString_Check(arg)) {
-                PyBuffer_FillInfo(p, arg,
-                    PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
-                    1, 0);
-            }
-#ifdef Py_USING_UNICODE
-            else if (PyUnicode_Check(arg)) {
-                uarg = UNICODE_DEFAULT_ENCODING(arg);
-                if (uarg == NULL)
-                    return converterr(CONV_UNICODE,
-                        arg, msgbuf, bufsize);
-                PyBuffer_FillInfo(p, arg,
-                    PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
-                    1, 0);
-            }
-#endif
-            else { /* any buffer-like object */
-                char *buf;
-                if (getbuffer(arg, p, &buf) < 0)
-                    return converterr(buf, arg, msgbuf, bufsize);
-            }
-            if (addcleanup(p, freelist, cleanup_buffer)) {
-                return converterr(
-                    "(cleanup problem)",
-                    arg, msgbuf, bufsize);
-            }
-            format++;
-#endif
-        } else if (*format == '#') { /* any buffer-like object */
-            void **p = (void **)va_arg(*p_va, char **);
-            FETCH_SIZE;
-
-            if (arg == Py_None) {
-                *p = 0;
-                STORE_SIZE(0);
-            }
-            else if (PyString_Check(arg)) {
-                *p = PyString_AS_STRING(arg);
-                STORE_SIZE(PyString_GET_SIZE(arg));
-            }
-#ifdef Py_USING_UNICODE
-            else if (PyUnicode_Check(arg)) {
-                uarg = UNICODE_DEFAULT_ENCODING(arg);
-                if (uarg == NULL)
-                    return converterr(CONV_UNICODE,
-                        arg, msgbuf, bufsize);
-                *p = PyString_AS_STRING(uarg);
-                STORE_SIZE(PyString_GET_SIZE(uarg));
-            }
-#endif
-            else { /* any buffer-like object */
-                char *buf;
-                Py_ssize_t count = convertbuffer(arg, p, &buf);
-                if (count < 0)
-                    return converterr(buf, arg, msgbuf, bufsize);
-                STORE_SIZE(count);
-            }
-            format++;
-        } else {
-            char **p = va_arg(*p_va, char **);
-
-            if (arg == Py_None)
-                *p = 0;
-            else if (PyString_Check(arg))
-                *p = PyString_AS_STRING(arg);
-#ifdef Py_USING_UNICODE
-            else if (PyUnicode_Check(arg)) {
-                uarg = UNICODE_DEFAULT_ENCODING(arg);
-                if (uarg == NULL)
-                    return converterr(CONV_UNICODE,
-                        arg, msgbuf, bufsize);
-                *p = PyString_AS_STRING(uarg);
-            }
-#endif
-            else
-                return converterr("string or None",
-                    arg, msgbuf, bufsize);
-            if (*format == '#') {
-                FETCH_SIZE;
-                assert(0); /* XXX redundant with if-case */
-                if (arg == Py_None)
-                    *q = 0;
-                else
-                    *q = PyString_Size(arg);
-                format++;
-            }
-            else if (*p != NULL &&
-                (Py_ssize_t)strlen(*p) != PyString_Size(arg))
-                return converterr(
-                    "string without null bytes or None",
-                    arg, msgbuf, bufsize);
-        }
-        break;
-    }
-    case 'e': {/* encoded string */
-        char **buffer;
-        const char *encoding;
-        PyObject *s;
-        Py_ssize_t size;
-        int recode_strings;
-
-        /* Get 'e' parameter: the encoding name */
-        encoding = (const char *)va_arg(*p_va, const char *);
-#ifdef Py_USING_UNICODE
-        if (encoding == NULL)
-            encoding = PyUnicode_GetDefaultEncoding();
+    PyObject *uarg;
 #endif
-        /* Get output buffer parameter:
-           's' (recode all objects via Unicode) or
-           't' (only recode non-string objects)
-        */
-        if (*format == 's')
-            recode_strings = 1;
-        else if (*format == 't')
-            recode_strings = 0;
-        else
-            return converterr(
-                "(unknown parser marker combination)",
-                arg, msgbuf, bufsize);
-        buffer = (char **)va_arg(*p_va, char **);
-        format++;
-        if (buffer == NULL)
-            return converterr("(buffer is NULL)",
-                arg, msgbuf, bufsize);
-
-        /* Encode object */
-        if (!recode_strings && PyString_Check(arg)) {
-            s = arg;
-            Py_INCREF(s);
-        }
-        else {
+    switch (c) {
+
+    case 'b': { /* unsigned byte -- very short int */
+        char *p = va_arg(*p_va, char *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival < 0) {
+            PyErr_SetString(PyExc_OverflowError,
+                "unsigned byte integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival > UCHAR_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+                "unsigned byte integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = (unsigned char) ival;
+        break;
+    }
+
+    case 'B': {/* byte sized bitfield - both signed and unsigned
+                  values allowed */
+        char *p = va_arg(*p_va, char *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsUnsignedLongMask(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = (unsigned char) ival;
+        break;
+    }
+
+    case 'h': {/* signed short int */
+        short *p = va_arg(*p_va, short *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival < SHRT_MIN) {
+            PyErr_SetString(PyExc_OverflowError,
+                "signed short integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival > SHRT_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+                "signed short integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = (short) ival;
+        break;
+    }
+
+    case 'H': { /* short int sized bitfield, both signed and
+                   unsigned allowed */
+        unsigned short *p = va_arg(*p_va, unsigned short *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsUnsignedLongMask(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = (unsigned short) ival;
+        break;
+    }
+
+    case 'i': {/* signed int */
+        int *p = va_arg(*p_va, int *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival > INT_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+                "signed integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival < INT_MIN) {
+            PyErr_SetString(PyExc_OverflowError,
+                "signed integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'I': { /* int sized bitfield, both signed and
+                   unsigned allowed */
+        unsigned int *p = va_arg(*p_va, unsigned int *);
+        unsigned int ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = (unsigned int)PyInt_AsUnsignedLongMask(arg);
+        if (ival == (unsigned int)-1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'n': /* Py_ssize_t */
+#if SIZEOF_SIZE_T != SIZEOF_LONG
+    {
+        Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *);
+        Py_ssize_t ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsSsize_t(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+#endif
+    /* Fall through from 'n' to 'l' if Py_ssize_t is int */
+    case 'l': {/* long int */
+        long *p = va_arg(*p_va, long *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'k': { /* long sized bitfield */
+        unsigned long *p = va_arg(*p_va, unsigned long *);
+        unsigned long ival;
+        if (PyInt_Check(arg))
+            ival = PyInt_AsUnsignedLongMask(arg);
+        else if (PyLong_Check(arg))
+            ival = PyLong_AsUnsignedLongMask(arg);
+        else
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+
+#ifdef HAVE_LONG_LONG
+    case 'L': {/* PY_LONG_LONG */
+        PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * );
+        PY_LONG_LONG ival;
+        if (float_argument_warning(arg))
+            return converterr("long", arg, msgbuf, bufsize);
+        ival = PyLong_AsLongLong(arg);
+        if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) {
+            return converterr("long", arg, msgbuf, bufsize);
+        } else {
+            *p = ival;
+        }
+        break;
+    }
+
+    case 'K': { /* long long sized bitfield */
+        unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *);
+        unsigned PY_LONG_LONG ival;
+        if (PyInt_Check(arg))
+            ival = PyInt_AsUnsignedLongMask(arg);
+        else if (PyLong_Check(arg))
+            ival = PyLong_AsUnsignedLongLongMask(arg);
+        else
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+#endif
+
+    case 'f': {/* float */
+        float *p = va_arg(*p_va, float *);
+        double dval = PyFloat_AsDouble(arg);
+        if (PyErr_Occurred())
+            return converterr("float", arg, msgbuf, bufsize);
+        else
+            *p = (float) dval;
+        break;
+    }
+
+    case 'd': {/* double */
+        double *p = va_arg(*p_va, double *);
+        double dval = PyFloat_AsDouble(arg);
+        if (PyErr_Occurred())
+            return converterr("float", arg, msgbuf, bufsize);
+        else
+            *p = dval;
+        break;
+    }
+
+#ifndef WITHOUT_COMPLEX
+    case 'D': {/* complex double */
+        Py_complex *p = va_arg(*p_va, Py_complex *);
+        Py_complex cval;
+        cval = PyComplex_AsCComplex(arg);
+        if (PyErr_Occurred())
+            return converterr("complex", arg, msgbuf, bufsize);
+        else
+            *p = cval;
+        break;
+    }
+#endif /* WITHOUT_COMPLEX */
+
+    case 'c': {/* char */
+        char *p = va_arg(*p_va, char *);
+        if (PyString_Check(arg) && PyString_Size(arg) == 1)
+            *p = PyString_AS_STRING(arg)[0];
+        else
+            return converterr("char", arg, msgbuf, bufsize);
+        break;
+    }
+
+    case 's': {/* string */
+        if (*format == '*') {
+            Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
+
+            if (PyString_Check(arg)) {
+                PyBuffer_FillInfo(p, arg,
+                    PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
+                    1, 0);
+            }
 #ifdef Py_USING_UNICODE
-    PyObject *u;
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                        arg, msgbuf, bufsize);
+                PyBuffer_FillInfo(p, arg,
+                    PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
+                    1, 0);
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                if (getbuffer(arg, p, &buf) < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+            }
+            if (addcleanup(p, freelist, cleanup_buffer)) {
+                return converterr(
+                    "(cleanup problem)",
+                    arg, msgbuf, bufsize);
+            }
+            format++;
+        } else if (*format == '#') {
+            void **p = (void **)va_arg(*p_va, char **);
+            FETCH_SIZE;
-
-    /* Convert object to Unicode */
-    u = PyUnicode_FromObject(arg);
-    if (u == NULL)
-        return converterr(
-            "string or unicode or text buffer",
-            arg, msgbuf, bufsize);
-
-    /* Encode object; use default error handling */
-    s = PyUnicode_AsEncodedString(u,
-        encoding,
-        NULL);
-    Py_DECREF(u);
-    if (s == NULL)
-        return converterr("(encoding failed)",
-            arg, msgbuf, bufsize);
-    if (!PyString_Check(s)) {
-        Py_DECREF(s);
-        return converterr(
-            "(encoder failed to return a string)",
-            arg, msgbuf, bufsize);
-    }
+            if (PyString_Check(arg)) {
+                *p = PyString_AS_STRING(arg);
+                STORE_SIZE(PyString_GET_SIZE(arg));
+            }
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                        arg, msgbuf, bufsize);
+                *p = PyString_AS_STRING(uarg);
+                STORE_SIZE(PyString_GET_SIZE(uarg));
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                Py_ssize_t count = convertbuffer(arg, p, &buf);
+                if (count < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+                STORE_SIZE(count);
+            }
+            format++;
+        } else {
+            char **p = va_arg(*p_va, char **);
+
+            if (PyString_Check(arg))
+                *p = PyString_AS_STRING(arg);
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                        arg, msgbuf, bufsize);
+                *p = PyString_AS_STRING(uarg);
+            }
+#endif
+            else
+                return converterr("string", arg, msgbuf, bufsize);
+            if ((Py_ssize_t)strlen(*p) != PyString_Size(arg))
+                return converterr("string without null bytes",
+                    arg, msgbuf, bufsize);
+        }
+        break;
+    }
+
+    case 'z': {/* string, may be NULL (None) */
+        if (*format == '*') {
+            Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
+
+            if (arg == Py_None)
+                PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0);
+            else if (PyString_Check(arg)) {
+                PyBuffer_FillInfo(p, arg,
+                    PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
+                    1, 0);
+            }
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                        arg, msgbuf, bufsize);
+                PyBuffer_FillInfo(p, arg,
+                    PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
+                    1, 0);
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                if (getbuffer(arg, p, &buf) < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+            }
+            if (addcleanup(p, freelist, cleanup_buffer)) {
+                return converterr(
+                    "(cleanup problem)",
+                    arg, msgbuf, bufsize);
+            }
+            format++;
+        } else if (*format == '#') { /* any buffer-like object */
+            void **p = (void **)va_arg(*p_va, char **);
+            FETCH_SIZE;
+
+            if (arg == Py_None) {
+                *p = 0;
+                STORE_SIZE(0);
+            }
+            else if (PyString_Check(arg)) {
+                *p = PyString_AS_STRING(arg);
+                STORE_SIZE(PyString_GET_SIZE(arg));
+            }
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                        arg, msgbuf, bufsize);
+                *p = PyString_AS_STRING(uarg);
+                STORE_SIZE(PyString_GET_SIZE(uarg));
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                Py_ssize_t count = convertbuffer(arg, p, &buf);
+                if (count < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+                STORE_SIZE(count);
+            }
+            format++;
+        } else {
+            char **p = va_arg(*p_va, char **);
+
+            if (arg == Py_None)
+                *p = 0;
+            else if (PyString_Check(arg))
+                *p = PyString_AS_STRING(arg);
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                        arg, msgbuf, bufsize);
+                *p = PyString_AS_STRING(uarg);
+            }
+#endif
+            else
+                return converterr("string or None",
+                    arg, msgbuf, bufsize);
+            if (*format == '#') {
+                FETCH_SIZE;
+                assert(0); /* XXX redundant with if-case */
+                if (arg == Py_None) {
+                    STORE_SIZE(0);
+                } else {
+                    STORE_SIZE(PyString_Size(arg));
+                }
+                format++;
+            }
+            else if (*p != NULL &&
+                (Py_ssize_t)strlen(*p) != PyString_Size(arg))
+                return converterr(
+                    "string without null bytes or None",
+                    arg, msgbuf, bufsize);
+        }
+        break;
+    }
+
+    case 'e': {/* encoded string */
+        char **buffer;
+        const char *encoding;
+        PyObject *s;
+        Py_ssize_t size;
+        int recode_strings;
+
+        /* Get 'e' parameter: the encoding name */
+        encoding = (const char *)va_arg(*p_va, const char *);
+#ifdef Py_USING_UNICODE
+        if (encoding == NULL)
+            encoding = PyUnicode_GetDefaultEncoding();
+#endif
+
+        /* Get output buffer parameter:
+           's' (recode all objects via Unicode) or
+           't' (only recode non-string objects)
+        */
+        if (*format == 's')
+            recode_strings = 1;
+        else if (*format == 't')
+            recode_strings = 0;
+        else
+            return converterr(
+                "(unknown parser marker combination)",
+                arg, msgbuf, bufsize);
+        buffer = (char **)va_arg(*p_va, char **);
+        format++;
+        if (buffer == NULL)
+            return converterr("(buffer is NULL)",
+                arg, msgbuf, bufsize);
+
+        /* Encode object */
+        if (!recode_strings && PyString_Check(arg)) {
+            s = arg;
+            Py_INCREF(s);
+        }
+        else {
+#ifdef Py_USING_UNICODE
+            PyObject *u;
+
+            /* Convert object to Unicode */
+            u = PyUnicode_FromObject(arg);
+            if (u == NULL)
+                return converterr(
+                    "string or unicode or text buffer",
+                    arg, msgbuf, bufsize);
+
+            /* Encode object; use default error handling */
+            s = PyUnicode_AsEncodedString(u,
+                encoding,
+                NULL);
+            Py_DECREF(u);
+            if (s == NULL)
+                return converterr("(encoding failed)",
+                    arg, msgbuf, bufsize);
+            if (!PyString_Check(s)) {
+                Py_DECREF(s);
+                return converterr(
+                    "(encoder failed to return a string)",
+                    arg, msgbuf, bufsize);
+            }
 #else
-    return converterr("string", arg, msgbuf, bufsize);
+            return converterr("string", arg, msgbuf, bufsize);
 #endif
-    }
-    size = PyString_GET_SIZE(s);
+        }
+        size = PyString_GET_SIZE(s);
 
-    /* Write output; output is guaranteed to be 0-terminated */
-    if (*format == '#') {
-        /* Using buffer length parameter '#':
-
-           - if *buffer is NULL, a new buffer of the
-             needed size is allocated and the data
-             copied into it; *buffer is updated to point
-             to the new buffer; the caller is
-             responsible for PyMem_Free()ing it after
-             usage
+        /* Write output; output is guaranteed to be 0-terminated */
+        if (*format == '#') {
+            /* Using buffer length parameter '#':
-           - if *buffer is not NULL, the data is
-             copied to *buffer; *buffer_len has to be
-             set to the size of the buffer on input;
-             buffer overflow is signalled with an error;
-             buffer has to provide enough room for the
-             encoded string plus the trailing 0-byte
-
-           - in both cases, *buffer_len is updated to
-             the size of the buffer /excluding/ the
-             trailing 0-byte
-
-        */
-        FETCH_SIZE;
+               - if *buffer is NULL, a new buffer of the
+                 needed size is allocated and the data
+                 copied into it; *buffer is updated to point
+                 to the new buffer; the caller is
+                 responsible for PyMem_Free()ing it after
+                 usage
-        format++;
-        if (q == NULL && q2 == NULL) {
-            Py_DECREF(s);
-            return converterr(
-                "(buffer_len is NULL)",
-                arg, msgbuf, bufsize);
-        }
-        if (*buffer == NULL) {
-            *buffer = PyMem_NEW(char, size + 1);
-            if (*buffer == NULL) {
-                Py_DECREF(s);
-                return converterr(
-                    "(memory error)",
-                    arg, msgbuf, bufsize);
-            }
-            if (addcleanup(*buffer, freelist, cleanup_ptr)) {
-                Py_DECREF(s);
-                return converterr(
-                    "(cleanup problem)",
-                    arg, msgbuf, bufsize);
-            }
-        } else {
-            if (size + 1 > BUFFER_LEN) {
-                Py_DECREF(s);
-                return converterr(
-                    "(buffer overflow)",
-                    arg, msgbuf, bufsize);
-            }
-        }
-        memcpy(*buffer,
-            PyString_AS_STRING(s),
-            size + 1);
-        STORE_SIZE(size);
-    } else {
-        /* Using a 0-terminated buffer:
-
-           - the encoded string has to be 0-terminated
-             for this variant to work; if it is not, an
-             error raised
+               - if *buffer is not NULL, the data is
+                 copied to *buffer; *buffer_len has to be
+                 set to the size of the buffer on input;
+                 buffer overflow is signalled with an error;
+                 buffer has to provide enough room for the
+                 encoded string plus the trailing 0-byte
-           - a new buffer of the needed size is
-             allocated and the data copied into it;
-             *buffer is updated to point to the new
-             buffer; the caller is responsible for
-             PyMem_Free()ing it after usage
+               - in both cases, *buffer_len is updated to
+                 the size of the buffer /excluding/ the
+                 trailing 0-byte
-        */
-        if ((Py_ssize_t)strlen(PyString_AS_STRING(s))
-            != size) {
-            Py_DECREF(s);
-            return converterr(
-                "encoded string without NULL bytes",
-                arg, msgbuf, bufsize);
-        }
-        *buffer = PyMem_NEW(char, size + 1);
-        if (*buffer == NULL) {
-            Py_DECREF(s);
-            return converterr("(memory error)",
-                arg, msgbuf, bufsize);
-        }
-        if (addcleanup(*buffer, freelist, cleanup_ptr)) {
-            Py_DECREF(s);
-            return converterr("(cleanup problem)",
-                arg, msgbuf, bufsize);
-        }
-        memcpy(*buffer,
-            PyString_AS_STRING(s),
-            size + 1);
-    }
-    Py_DECREF(s);
-    break;
-    }
+            */
+            FETCH_SIZE;
+
+            format++;
+            if (q == NULL && q2 == NULL) {
+                Py_DECREF(s);
+                return converterr(
+                    "(buffer_len is NULL)",
+                    arg, msgbuf, bufsize);
+            }
+            if (*buffer == NULL) {
+                *buffer = PyMem_NEW(char, size + 1);
+                if (*buffer == NULL) {
+                    Py_DECREF(s);
+                    return converterr(
+                        "(memory error)",
+                        arg, msgbuf, bufsize);
+                }
+                if (addcleanup(*buffer, freelist, cleanup_ptr)) {
+                    Py_DECREF(s);
+                    return converterr(
+                        "(cleanup problem)",
+                        arg, msgbuf, bufsize);
+                }
+            } else {
+                if (size + 1 > BUFFER_LEN) {
+                    Py_DECREF(s);
+                    return converterr(
+                        "(buffer overflow)",
+                        arg, msgbuf, bufsize);
+                }
+            }
+            memcpy(*buffer,
+                PyString_AS_STRING(s),
+                size + 1);
+            STORE_SIZE(size);
+        } else {
+            /* Using a 0-terminated buffer:
+
+               - the encoded string has to be 0-terminated
+                 for this variant to work; if it is not, an
+                 error raised
+
+               - a new buffer of the needed size is
+                 allocated and the data copied into it;
+                 *buffer is updated to point to the new
+                 buffer; the caller is responsible for
+                 PyMem_Free()ing it after usage
+
+            */
+            if ((Py_ssize_t)strlen(PyString_AS_STRING(s))
+                != size) {
+                Py_DECREF(s);
+                return converterr(
+                    "encoded string without NULL bytes",
+                    arg, msgbuf, bufsize);
+            }
+            *buffer = PyMem_NEW(char, size + 1);
+            if (*buffer == NULL) {
+                Py_DECREF(s);
+                return converterr("(memory error)",
+                    arg, msgbuf, bufsize);
+            }
+            if (addcleanup(*buffer, freelist, cleanup_ptr)) {
+                Py_DECREF(s);
+                return converterr("(cleanup problem)",
+                    arg, msgbuf, bufsize);
+            }
+            memcpy(*buffer,
+                PyString_AS_STRING(s),
+                size + 1);
+        }
+        Py_DECREF(s);
+        break;
+    }
 
 #ifdef Py_USING_UNICODE
-    case 'u': {/* raw unicode buffer (Py_UNICODE *) */
-        if (*format == '#') { /* any buffer-like object */
-            void **p = (void **)va_arg(*p_va, char **);
-            FETCH_SIZE;
-            if (PyUnicode_Check(arg)) {
-                *p = PyUnicode_AS_UNICODE(arg);
-                STORE_SIZE(PyUnicode_GET_SIZE(arg));
-            }
-            else {
-                return converterr("cannot convert raw buffers",
-                    arg, msgbuf, bufsize);
-            }
-            format++;
-        } else {
-            Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **);
-            if (PyUnicode_Check(arg))
-                *p = PyUnicode_AS_UNICODE(arg);
-            else
-                return converterr("unicode", arg, msgbuf, bufsize);
-        }
-        break;
-    }
+    case 'u': {/* raw unicode buffer (Py_UNICODE *) */
+        if (*format == '#') { /* any buffer-like object */
+            void **p = (void **)va_arg(*p_va, char **);
+            FETCH_SIZE;
+            if (PyUnicode_Check(arg)) {
+                *p = PyUnicode_AS_UNICODE(arg);
+                STORE_SIZE(PyUnicode_GET_SIZE(arg));
+            }
+            else {
+                return converterr("cannot convert raw buffers",
+                    arg, msgbuf, bufsize);
+            }
+            format++;
+        } else {
+            Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **);
+            if (PyUnicode_Check(arg))
+                *p = PyUnicode_AS_UNICODE(arg);
+            else
+                return converterr("unicode", arg, msgbuf, bufsize);
+        }
+        break;
+    }
 #endif
-    case 'S': { /* string object */
-        PyObject **p = va_arg(*p_va, PyObject **);
-        if (PyString_Check(arg))
-            *p = arg;
-        else
-            return converterr("string", arg, msgbuf, bufsize);
-        break;
-    }
-
+    case 'S': { /* string object */
+        PyObject **p = va_arg(*p_va, PyObject **);
+        if (PyString_Check(arg))
+            *p = arg;
+        else
+            return converterr("string", arg, msgbuf, bufsize);
+        break;
+    }
+
 #ifdef Py_USING_UNICODE
-    case 'U': { /* Unicode object */
-        PyObject **p = va_arg(*p_va, PyObject **);
-        if (PyUnicode_Check(arg))
-            *p = arg;
-        else
-            return converterr("unicode", arg, msgbuf, bufsize);
-        break;
-    }
+    case 'U': { /* Unicode object */
+        PyObject **p = va_arg(*p_va, PyObject **);
+        if (PyUnicode_Check(arg))
+            *p = arg;
+        else
+            return converterr("unicode", arg, msgbuf, bufsize);
+        break;
+    }
 #endif
-    case 'O': { /* object */
-        PyTypeObject *type;
-        PyObject **p;
-        if (*format == '!') {
-            type = va_arg(*p_va, PyTypeObject*);
-            p = va_arg(*p_va, PyObject **);
-            format++;
-            if (PyType_IsSubtype(arg->ob_type, type))
-                *p = arg;
-            else
-                return converterr(type->tp_name, arg, msgbuf, bufsize);
-        }
-        else if (*format == '?') {
-            inquiry pred = va_arg(*p_va, inquiry);
-            p = va_arg(*p_va, PyObject **);
-            format++;
-            if ((*pred)(arg))
-                *p = arg;
-            else
-                return converterr("(unspecified)",
-                    arg, msgbuf, bufsize);
-
-        }
-        else if (*format == '&') {
-            typedef int (*converter)(PyObject *, void *);
-            converter convert = va_arg(*p_va, converter);
-            void *addr = va_arg(*p_va, void *);
-            format++;
-            if (! (*convert)(arg, addr))
-                return converterr("(unspecified)",
-                    arg, msgbuf, bufsize);
-        }
-        else {
-            p = va_arg(*p_va, PyObject **);
-            *p = arg;
-        }
-        break;
-    }
-
-    case 'w': { /* memory buffer, read-write access */
-        Py_FatalError("'w' unsupported\n");
-#if 0
-        void **p = va_arg(*p_va, void **);
-        void *res;
-        PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
-        Py_ssize_t count;
+    case 'O': { /* object */
+        PyTypeObject *type;
+        PyObject **p;
+        if (*format == '!') {
+            type = va_arg(*p_va, PyTypeObject*);
+            p = va_arg(*p_va, PyObject **);
+            format++;
+            if (PyType_IsSubtype(arg->ob_type, type))
+                *p = arg;
+            else
+                return converterr(type->tp_name, arg, msgbuf, bufsize);
-        if (pb && pb->bf_releasebuffer && *format != '*')
-            /* Buffer must be released, yet caller does not use
-               the Py_buffer protocol. */
-            return converterr("pinned buffer", arg, msgbuf, bufsize);
+        }
+        else if (*format == '?') {
+            inquiry pred = va_arg(*p_va, inquiry);
+            p = va_arg(*p_va, PyObject **);
+            format++;
+            if ((*pred)(arg))
+                *p = arg;
+            else
+                return converterr("(unspecified)",
+                    arg, msgbuf, bufsize);
-        if (pb && pb->bf_getbuffer && *format == '*') {
-            /* Caller is interested in Py_buffer, and the object
-               supports it directly. */
-            format++;
-            if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) {
-                PyErr_Clear();
-                return converterr("read-write buffer", arg, msgbuf, bufsize);
-            }
-            if (addcleanup(p, freelist, cleanup_buffer)) {
-                return converterr(
-                    "(cleanup problem)",
-                    arg, msgbuf, bufsize);
-            }
-            if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C'))
-                return converterr("contiguous buffer", arg, msgbuf, bufsize);
-            break;
-        }
+        }
+        else if (*format == '&') {
+            typedef int (*converter)(PyObject *, void *);
+            converter convert = va_arg(*p_va, converter);
+            void *addr = va_arg(*p_va, void *);
+            format++;
+            if (! (*convert)(arg, addr))
+                return converterr("(unspecified)",
+                    arg, msgbuf, bufsize);
+        }
+        else {
+            p = va_arg(*p_va, PyObject **);
+            *p = arg;
+        }
+        break;
+    }
-        if (pb == NULL ||
-            pb->bf_getwritebuffer == NULL ||
-            pb->bf_getsegcount == NULL)
-            return converterr("read-write buffer", arg, msgbuf, bufsize);
-        if ((*pb->bf_getsegcount)(arg, NULL) != 1)
-            return converterr("single-segment read-write buffer",
-                arg, msgbuf, bufsize);
-        if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0)
-            return converterr("(unspecified)", arg, msgbuf, bufsize);
-        if (*format == '*') {
-            PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0);
-            format++;
-        }
-        else {
-            *p = res;
-            if (*format == '#') {
-                FETCH_SIZE;
-                STORE_SIZE(count);
-                format++;
-            }
-        }
-        break;
-#endif
-    }
-
-    case 't': { /* 8-bit character buffer, read-only access */
-        char **p = va_arg(*p_va, char **);
-        PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
-        Py_ssize_t count;
-#if 0
-        if (*format++ != '#')
-            return converterr(
-                "invalid use of 't' format character",
-                arg, msgbuf, bufsize);
-#endif
-        if (!PyType_HasFeature(arg->ob_type,
-            Py_TPFLAGS_HAVE_GETCHARBUFFER)
-#if 0
-            || pb == NULL || pb->bf_getcharbuffer == NULL ||
-            pb->bf_getsegcount == NULL
-#endif
-            )
-            return converterr(
-                "string or read-only character buffer",
-                arg, msgbuf, bufsize);
-#if 0
-        if (pb->bf_getsegcount(arg, NULL) != 1)
-            return converterr(
-                "string or single-segment read-only buffer",
-                arg, msgbuf, bufsize);
+    case 'w': { /* memory buffer, read-write access */
+        void **p = va_arg(*p_va, void **);
+        void *res;
+        PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
+        Py_ssize_t count;
-        if (pb->bf_releasebuffer)
-            return converterr(
-                "string or pinned buffer",
-                arg, msgbuf, bufsize);
-#endif
-        count = pb->bf_getcharbuffer(arg, 0, p);
-#if 0
-        if (count < 0)
-            return converterr("(unspecified)", arg, msgbuf, bufsize);
-#endif
-        {
-            FETCH_SIZE;
-            STORE_SIZE(count);
-            ++format;
-        }
-        break;
-    }
-    default:
-        return converterr("impossible", arg, msgbuf, bufsize);
-
-    }
-
-    *p_format = format;
-    return NULL;
+        if (pb && pb->bf_releasebuffer && *format != '*')
+            /* Buffer must be released, yet caller does not use
+               the Py_buffer protocol. */
+            return converterr("pinned buffer", arg, msgbuf, bufsize);
+
+        if (pb && pb->bf_getbuffer && *format == '*') {
+            /* Caller is interested in Py_buffer, and the object
+               supports it directly. */
+            format++;
+            if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) {
+                PyErr_Clear();
+                return converterr("read-write buffer", arg, msgbuf, bufsize);
+            }
+            if (addcleanup(p, freelist, cleanup_buffer)) {
+                return converterr(
+                    "(cleanup problem)",
+                    arg, msgbuf, bufsize);
+            }
+            if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C'))
+                return converterr("contiguous buffer", arg, msgbuf, bufsize);
+            break;
+        }
+
+        if (pb == NULL ||
+            pb->bf_getwritebuffer == NULL ||
+            pb->bf_getsegcount == NULL)
+            return converterr("read-write buffer", arg, msgbuf, bufsize);
+        if ((*pb->bf_getsegcount)(arg, NULL) != 1)
+            return converterr("single-segment read-write buffer",
+                arg, msgbuf, bufsize);
+        if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0)
+            return converterr("(unspecified)", arg, msgbuf, bufsize);
+        if (*format == '*') {
+            PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0);
+            format++;
+        }
+        else {
+            *p = res;
+            if (*format == '#') {
+                FETCH_SIZE;
+                STORE_SIZE(count);
+                format++;
+            }
+        }
+        break;
+    }
+
+    case 't': { /* 8-bit character buffer, read-only access */
+        char **p = va_arg(*p_va, char **);
+        PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
+        Py_ssize_t count;
+
+        if (*format++ != '#')
+            return converterr(
+                "invalid use of 't' format character",
+                arg, msgbuf, bufsize);
+        if (!PyType_HasFeature(arg->ob_type,
+            Py_TPFLAGS_HAVE_GETCHARBUFFER) ||
+            pb == NULL || pb->bf_getcharbuffer == NULL ||
+            pb->bf_getsegcount == NULL)
+            return converterr(
+                "string or read-only character buffer",
+                arg, msgbuf, bufsize);
+
+        if (pb->bf_getsegcount(arg, NULL) != 1)
+            return converterr(
+                "string or single-segment read-only buffer",
+                arg, msgbuf, bufsize);
+
+        if (pb->bf_releasebuffer)
+            return converterr(
+                "string or pinned buffer",
+                arg, msgbuf, bufsize);
+
+        count = pb->bf_getcharbuffer(arg, 0, p);
+        if (count < 0)
+            return converterr("(unspecified)", arg, msgbuf, bufsize);
+        {
+            FETCH_SIZE;
+            STORE_SIZE(count);
+        }
+        break;
+    }
+
+    default:
+        return converterr("impossible", arg, msgbuf, bufsize);
+
+    }
+
+    *p_format = format;
+    return NULL;
 }
 
 static Py_ssize_t
 convertbuffer(PyObject *arg, void **p, char **errmsg)
 {
-    PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
-    Py_ssize_t count;
-    if (pb == NULL ||
-        pb->bf_getreadbuffer == NULL ||
-        pb->bf_getsegcount == NULL ||
-        pb->bf_releasebuffer != NULL) {
-        *errmsg = "string or read-only buffer";
-        return -1;
-    }
-    if ((*pb->bf_getsegcount)(arg, NULL) != 1) {
-        *errmsg = "string or single-segment read-only buffer";
-        return -1;
-    }
-    if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) {
-        *errmsg = "(unspecified)";
-    }
-    return count;
+    PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
+    Py_ssize_t count;
+    if (pb == NULL ||
+        pb->bf_getreadbuffer == NULL ||
+        pb->bf_getsegcount == NULL ||
+        pb->bf_releasebuffer != NULL) {
+        *errmsg = "string or read-only buffer";
+        return -1;
+    }
+    if ((*pb->bf_getsegcount)(arg, NULL) != 1) {
+        *errmsg = "string or single-segment read-only buffer";
+        return -1;
+    }
+    if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) {
+        *errmsg = "(unspecified)";
+    }
+    return count;
 }
 
 static int
 getbuffer(PyObject *arg, Py_buffer *view, char **errmsg)
 {
-    void *buf;
-    Py_ssize_t count;
-    PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
-    if (pb == NULL) {
-        *errmsg = "string or buffer";
-        return -1;
-    }
-    if (pb->bf_getbuffer) {
-        if (pb->bf_getbuffer(arg, view, 0) < 0) {
-            *errmsg = "convertible to a buffer";
-            return -1;
-        }
-        if (!PyBuffer_IsContiguous(view, 'C')) {
-            *errmsg = "contiguous buffer";
-            return -1;
-        }
-        return 0;
-    }
+    void *buf;
+    Py_ssize_t count;
+    PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
+    if (pb == NULL) {
+        *errmsg = "string or buffer";
+        return -1;
+    }
+    if (pb->bf_getbuffer) {
+        if (pb->bf_getbuffer(arg, view, 0) < 0) {
+            *errmsg = "convertible to a buffer";
+            return -1;
+        }
+        if (!PyBuffer_IsContiguous(view, 'C')) {
+            *errmsg = "contiguous buffer";
+            return -1;
+        }
+        return 0;
+    }
 
-    count = convertbuffer(arg, &buf, errmsg);
-    if (count < 0) {
-        *errmsg = "convertible to a buffer";
-        return count;
-    }
-    PyBuffer_FillInfo(view, NULL, buf, count, 1, 0);
-    return 0;
+    count = convertbuffer(arg, &buf, errmsg);
+    if (count < 0) {
+        *errmsg = "convertible to a buffer";
+        return count;
+    }
+    PyBuffer_FillInfo(view, arg, buf, count, 1, 0);
+    return 0;
 }
 
 /* Support for keyword arguments donated by
@@ -1395,501 +1410,487 @@
 
 /* Return false (0) for error, else true. */
 int
 PyArg_ParseTupleAndKeywords(PyObject *args,
-    PyObject *keywords,
-    const char *format,
-    char **kwlist, ...)
+    PyObject *keywords,
+    const char *format,
+    char **kwlist, ...)
 {
-    int retval;
-    va_list va;
+    int retval;
+    va_list va;
 
-    if ((args == NULL || !PyTuple_Check(args)) ||
-        (keywords != NULL && !PyDict_Check(keywords)) ||
-        format == NULL ||
-        kwlist == NULL)
-    {
-        PyErr_BadInternalCall();
-        return 0;
-    }
+    if ((args == NULL || !PyTuple_Check(args)) ||
+        (keywords != NULL && !PyDict_Check(keywords)) ||
+        format == NULL ||
+        kwlist == NULL)
+    {
+        PyErr_BadInternalCall();
+        return 0;
+    }
 
-    va_start(va, kwlist);
-    retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0);
-    va_end(va);
-    return retval;
+    va_start(va, kwlist);
+    retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0);
+    va_end(va);
+    return retval;
 }
 
 int
 _PyArg_ParseTupleAndKeywords_SizeT(PyObject *args,
-    PyObject *keywords,
-    const char *format,
-    char **kwlist, ...)
+    PyObject *keywords,
+    const char *format,
+    char **kwlist, ...)
 {
-    int retval;
-    va_list va;
+    int retval;
+    va_list va;
 
-    if ((args == NULL || !PyTuple_Check(args)) ||
-        (keywords != NULL && !PyDict_Check(keywords)) ||
-        format == NULL ||
-        kwlist == NULL)
-    {
-        PyErr_BadInternalCall();
-        return 0;
-    }
+    if ((args == NULL || !PyTuple_Check(args)) ||
+        (keywords != NULL && !PyDict_Check(keywords)) ||
+        format == NULL ||
+        kwlist == NULL)
+    {
+        PyErr_BadInternalCall();
+        return 0;
+    }
 
-    va_start(va, kwlist);
-    retval = vgetargskeywords(args, keywords, format,
-        kwlist, &va, FLAG_SIZE_T);
-    va_end(va);
-    return retval;
+    va_start(va, kwlist);
+    retval = vgetargskeywords(args, keywords, format,
+        kwlist, &va, FLAG_SIZE_T);
+    va_end(va);
+    return retval;
 }
 
 int
 PyArg_VaParseTupleAndKeywords(PyObject *args, PyObject *keywords,
-    const char *format,
+    const char *format,
     char **kwlist, va_list va)
 {
-    int retval;
-    va_list lva;
+    int retval;
+    va_list lva;
 
-    if ((args == NULL || !PyTuple_Check(args)) ||
-        (keywords != NULL && !PyDict_Check(keywords)) ||
-        format == NULL ||
-        kwlist == NULL)
-    {
-        PyErr_BadInternalCall();
-        return 0;
-    }
+    if ((args == NULL || !PyTuple_Check(args)) ||
+        (keywords != NULL && !PyDict_Check(keywords)) ||
+        format == NULL ||
+        kwlist == NULL)
+    {
+        PyErr_BadInternalCall();
+        return 0;
+    }
 
 #ifdef VA_LIST_IS_ARRAY
-    memcpy(lva, va, sizeof(va_list));
+    memcpy(lva, va, sizeof(va_list));
 #else
 #ifdef __va_copy
-    __va_copy(lva, va);
+    __va_copy(lva, va);
 #else
-    lva = va;
+    lva = va;
#endif
 #endif
 
-    retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0);
-    return retval;
+    retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0);
+    return retval;
 }
 
 int
 _PyArg_VaParseTupleAndKeywords_SizeT(PyObject *args,
-    PyObject *keywords,
-    const char *format,
-    char **kwlist, va_list va)
+    PyObject *keywords,
+    const char *format,
+    char **kwlist, va_list va)
 {
-    int retval;
-    va_list lva;
+    int retval;
+    va_list lva;
 
-    if ((args == NULL || !PyTuple_Check(args)) ||
-        (keywords != NULL && !PyDict_Check(keywords)) ||
-        format == NULL ||
-        kwlist == NULL)
-    {
-        PyErr_BadInternalCall();
-        return 0;
-    }
+    if ((args == NULL || !PyTuple_Check(args)) ||
+        (keywords != NULL && !PyDict_Check(keywords)) ||
+        format == NULL ||
+        kwlist == NULL)
+    {
+        PyErr_BadInternalCall();
+        return 0;
+    }
 
 #ifdef VA_LIST_IS_ARRAY
-    memcpy(lva, va, sizeof(va_list));
+    memcpy(lva, va, sizeof(va_list));
 #else
 #ifdef __va_copy
-    __va_copy(lva, va);
+    __va_copy(lva, va);
 #else
-    lva = va;
+    lva = va;
 #endif
 #endif
 
-    retval = vgetargskeywords(args, keywords, format,
-        kwlist, &lva, FLAG_SIZE_T);
-    return retval;
+    retval = vgetargskeywords(args, keywords, format,
+        kwlist, &lva, FLAG_SIZE_T);
+    return retval;
 }
 
 #define IS_END_OF_FORMAT(c) (c == '\0' || c == ';' || c == ':')
 
 static int
 vgetargskeywords(PyObject *args, PyObject *keywords, const char *format,
-    char **kwlist, va_list *p_va, int flags)
+    char **kwlist, va_list *p_va, int flags)
 {
-    char msgbuf[512];
-    int levels[32];
-    const char *fname, *msg, *custom_msg, *keyword;
-    int min = INT_MAX;
-    int i, len, nargs, nkeywords;
-    PyObject *current_arg;
-    freelist_t freelist = {0, NULL};
+    char 
msgbuf[512]; + int levels[32]; + const char *fname, *msg, *custom_msg, *keyword; + int min = INT_MAX; + int i, len, nargs, nkeywords; + PyObject *freelist = NULL, *current_arg; + assert(args != NULL && PyTuple_Check(args)); + assert(keywords == NULL || PyDict_Check(keywords)); + assert(format != NULL); + assert(kwlist != NULL); + assert(p_va != NULL); - assert(args != NULL && PyTuple_Check(args)); - assert(keywords == NULL || PyDict_Check(keywords)); - assert(format != NULL); - assert(kwlist != NULL); - assert(p_va != NULL); + /* grab the function name or custom error msg first (mutually exclusive) */ + fname = strchr(format, ':'); + if (fname) { + fname++; + custom_msg = NULL; + } + else { + custom_msg = strchr(format,';'); + if (custom_msg) + custom_msg++; + } - /* grab the function name or custom error msg first (mutually exclusive) */ - fname = strchr(format, ':'); - if (fname) { - fname++; - custom_msg = NULL; - } - else { - custom_msg = strchr(format,';'); - if (custom_msg) - custom_msg++; - } + /* scan kwlist and get greatest possible nbr of args */ + for (len=0; kwlist[len]; len++) + continue; - /* scan kwlist and get greatest possible nbr of args */ - for (len=0; kwlist[len]; len++) - continue; + nargs = PyTuple_GET_SIZE(args); + nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords); + if (nargs + nkeywords > len) { + PyErr_Format(PyExc_TypeError, "%s%s takes at most %d " + "argument%s (%d given)", + (fname == NULL) ? "function" : fname, + (fname == NULL) ? "" : "()", + len, + (len == 1) ? 
"" : "s", + nargs + nkeywords); + return 0; + } - freelist.entries = PyMem_New(freelistentry_t, len); + /* convert tuple args and keyword args in same loop, using kwlist to drive process */ + for (i = 0; i < len; i++) { + keyword = kwlist[i]; + if (*format == '|') { + min = i; + format++; + } + if (IS_END_OF_FORMAT(*format)) { + PyErr_Format(PyExc_RuntimeError, + "More keyword list entries (%d) than " + "format specifiers (%d)", len, i); + return cleanreturn(0, freelist); + } + current_arg = NULL; + if (nkeywords) { + current_arg = PyDict_GetItemString(keywords, keyword); + } + if (current_arg) { + --nkeywords; + if (i < nargs) { + /* arg present in tuple and in dict */ + PyErr_Format(PyExc_TypeError, + "Argument given by name ('%s') " + "and position (%d)", + keyword, i+1); + return cleanreturn(0, freelist); + } + } + else if (nkeywords && PyErr_Occurred()) + return cleanreturn(0, freelist); + else if (i < nargs) + current_arg = PyTuple_GET_ITEM(args, i); - nargs = PyTuple_GET_SIZE(args); - nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords); - if (nargs + nkeywords > len) { - PyErr_Format(PyExc_TypeError, "%s%s takes at most %d " - "argument%s (%d given)", - (fname == NULL) ? "function" : fname, - (fname == NULL) ? "" : "()", - len, - (len == 1) ? 
"" : "s", - nargs + nkeywords); - return cleanreturn(0, &freelist); - } + if (current_arg) { + msg = convertitem(current_arg, &format, p_va, flags, + levels, msgbuf, sizeof(msgbuf), &freelist); + if (msg) { + seterror(i+1, msg, levels, fname, custom_msg); + return cleanreturn(0, freelist); + } + continue; + } - /* convert tuple args and keyword args in same loop, using kwlist to drive process */ - for (i = 0; i < len; i++) { - keyword = kwlist[i]; - if (*format == '|') { - min = i; - format++; - } - if (IS_END_OF_FORMAT(*format)) { - PyErr_Format(PyExc_RuntimeError, - "More keyword list entries (%d) than " - "format specifiers (%d)", len, i); - return cleanreturn(0, &freelist); - } - current_arg = NULL; - if (nkeywords) { - current_arg = PyDict_GetItemString(keywords, keyword); - } - if (current_arg) { - --nkeywords; - if (i < nargs) { - /* arg present in tuple and in dict */ - PyErr_Format(PyExc_TypeError, - "Argument given by name ('%s') " - "and position (%d)", - keyword, i+1); - return cleanreturn(0, &freelist); - } - } - else if (nkeywords && PyErr_Occurred()) - return cleanreturn(0, &freelist); - else if (i < nargs) - current_arg = PyTuple_GET_ITEM(args, i); - - if (current_arg) { - msg = convertitem(current_arg, &format, p_va, flags, - levels, msgbuf, sizeof(msgbuf), &freelist); - if (msg) { - seterror(i+1, msg, levels, fname, custom_msg); - return cleanreturn(0, &freelist); - } - continue; - } + if (i < min) { + PyErr_Format(PyExc_TypeError, "Required argument " + "'%s' (pos %d) not found", + keyword, i+1); + return cleanreturn(0, freelist); + } + /* current code reports success when all required args + * fulfilled and no keyword args left, with no further + * validation. XXX Maybe skip this in debug build ? 
+ */ + if (!nkeywords) + return cleanreturn(1, freelist); - if (i < min) { - PyErr_Format(PyExc_TypeError, "Required argument " - "'%s' (pos %d) not found", - keyword, i+1); - return cleanreturn(0, &freelist); - } - /* current code reports success when all required args - * fulfilled and no keyword args left, with no further - * validation. XXX Maybe skip this in debug build ? - */ - if (!nkeywords) - return cleanreturn(1, &freelist); + /* We are into optional args, skip thru to any remaining + * keyword args */ + msg = skipitem(&format, p_va, flags); + if (msg) { + PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg, + format); + return cleanreturn(0, freelist); + } + } - /* We are into optional args, skip thru to any remaining - * keyword args */ - msg = skipitem(&format, p_va, flags); - if (msg) { - PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg, - format); - return cleanreturn(0, &freelist); - } - } + if (!IS_END_OF_FORMAT(*format) && *format != '|') { + PyErr_Format(PyExc_RuntimeError, + "more argument specifiers than keyword list entries " + "(remaining format:'%s')", format); + return cleanreturn(0, freelist); + } - if (!IS_END_OF_FORMAT(*format) && *format != '|') { - PyErr_Format(PyExc_RuntimeError, - "more argument specifiers than keyword list entries " - "(remaining format:'%s')", format); - return cleanreturn(0, &freelist); - } + /* make sure there are no extraneous keyword arguments */ + if (nkeywords > 0) { + PyObject *key, *value; + Py_ssize_t pos = 0; + while (PyDict_Next(keywords, &pos, &key, &value)) { + int match = 0; + char *ks; + if (!PyString_Check(key)) { + PyErr_SetString(PyExc_TypeError, + "keywords must be strings"); + return cleanreturn(0, freelist); + } + ks = PyString_AsString(key); + for (i = 0; i < len; i++) { + if (!strcmp(ks, kwlist[i])) { + match = 1; + break; + } + } + if (!match) { + PyErr_Format(PyExc_TypeError, + "'%s' is an invalid keyword " + "argument for this function", + ks); + return cleanreturn(0, freelist); + } + } + } - 
/* make sure there are no extraneous keyword arguments */ - if (nkeywords > 0) { - PyObject *key, *value; - Py_ssize_t pos = 0; - while (PyDict_Next(keywords, &pos, &key, &value)) { - int match = 0; - char *ks; - if (!PyString_Check(key)) { - PyErr_SetString(PyExc_TypeError, - "keywords must be strings"); - return cleanreturn(0, &freelist); - } - ks = PyString_AsString(key); - for (i = 0; i < len; i++) { - if (!strcmp(ks, kwlist[i])) { - match = 1; - break; - } - } - if (!match) { - PyErr_Format(PyExc_TypeError, - "'%s' is an invalid keyword " - "argument for this function", - ks); - return cleanreturn(0, &freelist); - } - } - } - - return cleanreturn(1, &freelist); + return cleanreturn(1, freelist); } static char * skipitem(const char **p_format, va_list *p_va, int flags) { - const char *format = *p_format; - char c = *format++; - - switch (c) { + const char *format = *p_format; + char c = *format++; - /* simple codes - * The individual types (second arg of va_arg) are irrelevant */ + switch (c) { - case 'b': /* byte -- very short int */ - case 'B': /* byte as bitfield */ - case 'h': /* short int */ - case 'H': /* short int as bitfield */ - case 'i': /* int */ - case 'I': /* int sized bitfield */ - case 'l': /* long int */ - case 'k': /* long int sized bitfield */ + /* simple codes + * The individual types (second arg of va_arg) are irrelevant */ + + case 'b': /* byte -- very short int */ + case 'B': /* byte as bitfield */ + case 'h': /* short int */ + case 'H': /* short int as bitfield */ + case 'i': /* int */ + case 'I': /* int sized bitfield */ + case 'l': /* long int */ + case 'k': /* long int sized bitfield */ #ifdef HAVE_LONG_LONG - case 'L': /* PY_LONG_LONG */ - case 'K': /* PY_LONG_LONG sized bitfield */ + case 'L': /* PY_LONG_LONG */ + case 'K': /* PY_LONG_LONG sized bitfield */ #endif - case 'f': /* float */ - case 'd': /* double */ + case 'f': /* float */ + case 'd': /* double */ #ifndef WITHOUT_COMPLEX - case 'D': /* complex double */ + case 'D': /* 
complex double */ #endif - case 'c': /* char */ - { - (void) va_arg(*p_va, void *); - break; - } + case 'c': /* char */ + { + (void) va_arg(*p_va, void *); + break; + } - case 'n': /* Py_ssize_t */ - { - (void) va_arg(*p_va, Py_ssize_t *); - break; - } - - /* string codes */ - - case 'e': /* string with encoding */ - { - (void) va_arg(*p_va, const char *); - if (!(*format == 's' || *format == 't')) - /* after 'e', only 's' and 't' is allowed */ - goto err; - format++; - /* explicit fallthrough to string cases */ - } - - case 's': /* string */ - case 'z': /* string or None */ + case 'n': /* Py_ssize_t */ + { + (void) va_arg(*p_va, Py_ssize_t *); + break; + } + + /* string codes */ + + case 'e': /* string with encoding */ + { + (void) va_arg(*p_va, const char *); + if (!(*format == 's' || *format == 't')) + /* after 'e', only 's' and 't' is allowed */ + goto err; + format++; + /* explicit fallthrough to string cases */ + } + + case 's': /* string */ + case 'z': /* string or None */ #ifdef Py_USING_UNICODE - case 'u': /* unicode string */ + case 'u': /* unicode string */ #endif - case 't': /* buffer, read-only */ - case 'w': /* buffer, read-write */ - { - (void) va_arg(*p_va, char **); - if (*format == '#') { - if (flags & FLAG_SIZE_T) - (void) va_arg(*p_va, Py_ssize_t *); - else - (void) va_arg(*p_va, int *); - format++; - } else if ((c == 's' || c == 'z') && *format == '*') { - format++; - } - break; - } + case 't': /* buffer, read-only */ + case 'w': /* buffer, read-write */ + { + (void) va_arg(*p_va, char **); + if (*format == '#') { + if (flags & FLAG_SIZE_T) + (void) va_arg(*p_va, Py_ssize_t *); + else + (void) va_arg(*p_va, int *); + format++; + } else if ((c == 's' || c == 'z') && *format == '*') { + format++; + } + break; + } - /* object codes */ + /* object codes */ - case 'S': /* string object */ + case 'S': /* string object */ #ifdef Py_USING_UNICODE - case 'U': /* unicode string object */ + case 'U': /* unicode string object */ #endif - { - (void) 
va_arg(*p_va, PyObject **); - break; - } - - case 'O': /* object */ - { - if (*format == '!') { - format++; - (void) va_arg(*p_va, PyTypeObject*); - (void) va_arg(*p_va, PyObject **); - } -#if 0 -/* I don't know what this is for */ - else if (*format == '?') { - inquiry pred = va_arg(*p_va, inquiry); - format++; - if ((*pred)(arg)) { - (void) va_arg(*p_va, PyObject **); - } - } -#endif - else if (*format == '&') { - typedef int (*converter)(PyObject *, void *); - (void) va_arg(*p_va, converter); - (void) va_arg(*p_va, void *); - format++; - } - else { - (void) va_arg(*p_va, PyObject **); - } - break; - } + { + (void) va_arg(*p_va, PyObject **); + break; + } - case '(': /* bypass tuple, not handled at all previously */ - { - char *msg; - for (;;) { - if (*format==')') - break; - if (IS_END_OF_FORMAT(*format)) - return "Unmatched left paren in format " - "string"; - msg = skipitem(&format, p_va, flags); - if (msg) - return msg; - } - format++; - break; - } + case 'O': /* object */ + { + if (*format == '!') { + format++; + (void) va_arg(*p_va, PyTypeObject*); + (void) va_arg(*p_va, PyObject **); + } + else if (*format == '&') { + typedef int (*converter)(PyObject *, void *); + (void) va_arg(*p_va, converter); + (void) va_arg(*p_va, void *); + format++; + } + else { + (void) va_arg(*p_va, PyObject **); + } + break; + } - case ')': - return "Unmatched right paren in format string"; + case '(': /* bypass tuple, not handled at all previously */ + { + char *msg; + for (;;) { + if (*format==')') + break; + if (IS_END_OF_FORMAT(*format)) + return "Unmatched left paren in format " + "string"; + msg = skipitem(&format, p_va, flags); + if (msg) + return msg; + } + format++; + break; + } - default: + case ')': + return "Unmatched right paren in format string"; + + default: err: - return "impossible"; - - } + return "impossible"; - *p_format = format; - return NULL; + } + + *p_format = format; + return NULL; } int PyArg_UnpackTuple(PyObject *args, const char *name, Py_ssize_t 
min, Py_ssize_t max, ...) { - Py_ssize_t i, l; - PyObject **o; - va_list vargs; + Py_ssize_t i, l; + PyObject **o; + va_list vargs; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, max); + va_start(vargs, max); #else - va_start(vargs); + va_start(vargs); #endif - assert(min >= 0); - assert(min <= max); - if (!PyTuple_Check(args)) { - PyErr_SetString(PyExc_SystemError, - "PyArg_UnpackTuple() argument list is not a tuple"); - return 0; - } - l = PyTuple_GET_SIZE(args); - if (l < min) { - if (name != NULL) - PyErr_Format( - PyExc_TypeError, - "%s expected %s%zd arguments, got %zd", - name, (min == max ? "" : "at least "), min, l); - else - PyErr_Format( - PyExc_TypeError, - "unpacked tuple should have %s%zd elements," - " but has %zd", - (min == max ? "" : "at least "), min, l); - va_end(vargs); - return 0; - } - if (l > max) { - if (name != NULL) - PyErr_Format( - PyExc_TypeError, - "%s expected %s%zd arguments, got %zd", - name, (min == max ? "" : "at most "), max, l); - else - PyErr_Format( - PyExc_TypeError, - "unpacked tuple should have %s%zd elements," - " but has %zd", - (min == max ? "" : "at most "), max, l); - va_end(vargs); - return 0; - } - for (i = 0; i < l; i++) { - o = va_arg(vargs, PyObject **); - *o = PyTuple_GET_ITEM(args, i); - } - va_end(vargs); - return 1; + assert(min >= 0); + assert(min <= max); + if (!PyTuple_Check(args)) { + PyErr_SetString(PyExc_SystemError, + "PyArg_UnpackTuple() argument list is not a tuple"); + return 0; + } + l = PyTuple_GET_SIZE(args); + if (l < min) { + if (name != NULL) + PyErr_Format( + PyExc_TypeError, + "%s expected %s%zd arguments, got %zd", + name, (min == max ? "" : "at least "), min, l); + else + PyErr_Format( + PyExc_TypeError, + "unpacked tuple should have %s%zd elements," + " but has %zd", + (min == max ? "" : "at least "), min, l); + va_end(vargs); + return 0; + } + if (l > max) { + if (name != NULL) + PyErr_Format( + PyExc_TypeError, + "%s expected %s%zd arguments, got %zd", + name, (min == max ? 
"" : "at most "), max, l); + else + PyErr_Format( + PyExc_TypeError, + "unpacked tuple should have %s%zd elements," + " but has %zd", + (min == max ? "" : "at most "), max, l); + va_end(vargs); + return 0; + } + for (i = 0; i < l; i++) { + o = va_arg(vargs, PyObject **); + *o = PyTuple_GET_ITEM(args, i); + } + va_end(vargs); + return 1; } /* For type constructors that don't take keyword args * - * Sets a TypeError and returns 0 if the kwds dict is + * Sets a TypeError and returns 0 if the kwds dict is * not empty, returns 1 otherwise */ int _PyArg_NoKeywords(const char *funcname, PyObject *kw) { - if (kw == NULL) - return 1; - if (!PyDict_CheckExact(kw)) { - PyErr_BadInternalCall(); - return 0; - } - if (PyDict_Size(kw) == 0) - return 1; - - PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments", - funcname); - return 0; + if (kw == NULL) + return 1; + if (!PyDict_CheckExact(kw)) { + PyErr_BadInternalCall(); + return 0; + } + if (PyDict_Size(kw) == 0) + return 1; + + PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments", + funcname); + return 0; } #ifdef __cplusplus }; diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c --- a/pypy/module/cpyext/src/modsupport.c +++ b/pypy/module/cpyext/src/modsupport.c @@ -33,41 +33,41 @@ static int countformat(const char *format, int endchar) { - int count = 0; - int level = 0; - while (level > 0 || *format != endchar) { - switch (*format) { - case '\0': - /* Premature end */ - PyErr_SetString(PyExc_SystemError, - "unmatched paren in format"); - return -1; - case '(': - case '[': - case '{': - if (level == 0) - count++; - level++; - break; - case ')': - case ']': - case '}': - level--; - break; - case '#': - case '&': - case ',': - case ':': - case ' ': - case '\t': - break; - default: - if (level == 0) - count++; - } - format++; - } - return count; + int count = 0; + int level = 0; + while (level > 0 || *format != endchar) { + switch (*format) { + case '\0': + /* 
Premature end */ + PyErr_SetString(PyExc_SystemError, + "unmatched paren in format"); + return -1; + case '(': + case '[': + case '{': + if (level == 0) + count++; + level++; + break; + case ')': + case ']': + case '}': + level--; + break; + case '#': + case '&': + case ',': + case ':': + case ' ': + case '\t': + break; + default: + if (level == 0) + count++; + } + format++; + } + return count; } @@ -83,582 +83,435 @@ static PyObject * do_mkdict(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *d; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - if ((d = PyDict_New()) == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. */ - for (i = 0; i < n; i+= 2) { - PyObject *k, *v; - int err; - k = do_mkvalue(p_format, p_va, flags); - if (k == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - k = Py_None; - } - v = do_mkvalue(p_format, p_va, flags); - if (v == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - v = Py_None; - } - err = PyDict_SetItem(d, k, v); - Py_DECREF(k); - Py_DECREF(v); - if (err < 0 || itemfailed) { - Py_DECREF(d); - return NULL; - } - } - if (d != NULL && **p_format != endchar) { - Py_DECREF(d); - d = NULL; - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - } - else if (endchar) - ++*p_format; - return d; + PyObject *d; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + if ((d = PyDict_New()) == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. 
*/ + for (i = 0; i < n; i+= 2) { + PyObject *k, *v; + int err; + k = do_mkvalue(p_format, p_va, flags); + if (k == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + k = Py_None; + } + v = do_mkvalue(p_format, p_va, flags); + if (v == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + v = Py_None; + } + err = PyDict_SetItem(d, k, v); + Py_DECREF(k); + Py_DECREF(v); + if (err < 0 || itemfailed) { + Py_DECREF(d); + return NULL; + } + } + if (d != NULL && **p_format != endchar) { + Py_DECREF(d); + d = NULL; + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + } + else if (endchar) + ++*p_format; + return d; } static PyObject * do_mklist(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *v; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - v = PyList_New(n); - if (v == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. */ - for (i = 0; i < n; i++) { - PyObject *w = do_mkvalue(p_format, p_va, flags); - if (w == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - w = Py_None; - } - PyList_SET_ITEM(v, i, w); - } + PyObject *v; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + v = PyList_New(n); + if (v == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. 
*/ + for (i = 0; i < n; i++) { + PyObject *w = do_mkvalue(p_format, p_va, flags); + if (w == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + w = Py_None; + } + PyList_SET_ITEM(v, i, w); + } - if (itemfailed) { - /* do_mkvalue() should have already set an error */ - Py_DECREF(v); - return NULL; - } - if (**p_format != endchar) { - Py_DECREF(v); - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - return NULL; - } - if (endchar) - ++*p_format; - return v; + if (itemfailed) { + /* do_mkvalue() should have already set an error */ + Py_DECREF(v); + return NULL; + } + if (**p_format != endchar) { + Py_DECREF(v); + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + return NULL; + } + if (endchar) + ++*p_format; + return v; } #ifdef Py_USING_UNICODE static int _ustrlen(Py_UNICODE *u) { - int i = 0; - Py_UNICODE *v = u; - while (*v != 0) { i++; v++; } - return i; + int i = 0; + Py_UNICODE *v = u; + while (*v != 0) { i++; v++; } + return i; } #endif static PyObject * do_mktuple(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *v; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - if ((v = PyTuple_New(n)) == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. 
*/ - for (i = 0; i < n; i++) { - PyObject *w = do_mkvalue(p_format, p_va, flags); - if (w == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - w = Py_None; - } - PyTuple_SET_ITEM(v, i, w); - } - if (itemfailed) { - /* do_mkvalue() should have already set an error */ - Py_DECREF(v); - return NULL; - } - if (**p_format != endchar) { - Py_DECREF(v); - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - return NULL; - } - if (endchar) - ++*p_format; - return v; + PyObject *v; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + if ((v = PyTuple_New(n)) == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. */ + for (i = 0; i < n; i++) { + PyObject *w = do_mkvalue(p_format, p_va, flags); + if (w == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + w = Py_None; + } + PyTuple_SET_ITEM(v, i, w); + } + if (itemfailed) { + /* do_mkvalue() should have already set an error */ + Py_DECREF(v); + return NULL; + } + if (**p_format != endchar) { + Py_DECREF(v); + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + return NULL; + } + if (endchar) + ++*p_format; + return v; } static PyObject * do_mkvalue(const char **p_format, va_list *p_va, int flags) { - for (;;) { - switch (*(*p_format)++) { - case '(': - return do_mktuple(p_format, p_va, ')', - countformat(*p_format, ')'), flags); + for (;;) { + switch (*(*p_format)++) { + case '(': + return do_mktuple(p_format, p_va, ')', + countformat(*p_format, ')'), flags); - case '[': - return do_mklist(p_format, p_va, ']', - countformat(*p_format, ']'), flags); + case '[': + return do_mklist(p_format, p_va, ']', + countformat(*p_format, ']'), flags); - case '{': - return do_mkdict(p_format, p_va, '}', - countformat(*p_format, '}'), flags); + case '{': + return do_mkdict(p_format, p_va, '}', + countformat(*p_format, '}'), flags); - case 'b': - case 'B': - case 'h': - case 'i': - return PyInt_FromLong((long)va_arg(*p_va, int)); - - 
case 'H': - return PyInt_FromLong((long)va_arg(*p_va, unsigned int)); + case 'b': + case 'B': + case 'h': + case 'i': + return PyInt_FromLong((long)va_arg(*p_va, int)); - case 'I': - { - unsigned int n; - n = va_arg(*p_va, unsigned int); - if (n > (unsigned long)PyInt_GetMax()) - return PyLong_FromUnsignedLong((unsigned long)n); - else - return PyInt_FromLong(n); - } - - case 'n': + case 'H': + return PyInt_FromLong((long)va_arg(*p_va, unsigned int)); + + case 'I': + { + unsigned int n; + n = va_arg(*p_va, unsigned int); + if (n > (unsigned long)PyInt_GetMax()) + return PyLong_FromUnsignedLong((unsigned long)n); + else + return PyInt_FromLong(n); + } + + case 'n': #if SIZEOF_SIZE_T!=SIZEOF_LONG - return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t)); + return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t)); #endif - /* Fall through from 'n' to 'l' if Py_ssize_t is long */ - case 'l': - return PyInt_FromLong(va_arg(*p_va, long)); + /* Fall through from 'n' to 'l' if Py_ssize_t is long */ + case 'l': + return PyInt_FromLong(va_arg(*p_va, long)); - case 'k': - { - unsigned long n; - n = va_arg(*p_va, unsigned long); - if (n > (unsigned long)PyInt_GetMax()) - return PyLong_FromUnsignedLong(n); - else - return PyInt_FromLong(n); - } + case 'k': + { + unsigned long n; + n = va_arg(*p_va, unsigned long); + if (n > (unsigned long)PyInt_GetMax()) + return PyLong_FromUnsignedLong(n); + else + return PyInt_FromLong(n); + } #ifdef HAVE_LONG_LONG - case 'L': - return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG)); + case 'L': + return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG)); - case 'K': - return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG)); + case 'K': + return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG)); #endif #ifdef Py_USING_UNICODE - case 'u': - { - PyObject *v; - Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *); - Py_ssize_t n; - if (**p_format == '#') { - ++*p_format; - if 
(flags & FLAG_SIZE_T) - n = va_arg(*p_va, Py_ssize_t); - else - n = va_arg(*p_va, int); - } - else - n = -1; - if (u == NULL) { - v = Py_None; - Py_INCREF(v); - } - else { - if (n < 0) - n = _ustrlen(u); - v = PyUnicode_FromUnicode(u, n); - } - return v; - } + case 'u': + { + PyObject *v; + Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *); + Py_ssize_t n; + if (**p_format == '#') { + ++*p_format; + if (flags & FLAG_SIZE_T) + n = va_arg(*p_va, Py_ssize_t); + else + n = va_arg(*p_va, int); + } + else + n = -1; + if (u == NULL) { + v = Py_None; + Py_INCREF(v); + } + else { + if (n < 0) + n = _ustrlen(u); + v = PyUnicode_FromUnicode(u, n); + } + return v; + } #endif - case 'f': - case 'd': - return PyFloat_FromDouble( - (double)va_arg(*p_va, va_double)); + case 'f': + case 'd': + return PyFloat_FromDouble( + (double)va_arg(*p_va, va_double)); #ifndef WITHOUT_COMPLEX - case 'D': - return PyComplex_FromCComplex( - *((Py_complex *)va_arg(*p_va, Py_complex *))); + case 'D': + return PyComplex_FromCComplex( + *((Py_complex *)va_arg(*p_va, Py_complex *))); #endif /* WITHOUT_COMPLEX */ - case 'c': - { - char p[1]; - p[0] = (char)va_arg(*p_va, int); - return PyString_FromStringAndSize(p, 1); - } + case 'c': + { + char p[1]; + p[0] = (char)va_arg(*p_va, int); + return PyString_FromStringAndSize(p, 1); + } - case 's': - case 'z': - { - PyObject *v; - char *str = va_arg(*p_va, char *); - Py_ssize_t n; - if (**p_format == '#') { - ++*p_format; - if (flags & FLAG_SIZE_T) - n = va_arg(*p_va, Py_ssize_t); - else - n = va_arg(*p_va, int); - } - else - n = -1; - if (str == NULL) { - v = Py_None; - Py_INCREF(v); - } - else { - if (n < 0) { - size_t m = strlen(str); - if (m > PY_SSIZE_T_MAX) { - PyErr_SetString(PyExc_OverflowError, - "string too long for Python string"); - return NULL; - } - n = (Py_ssize_t)m; - } - v = PyString_FromStringAndSize(str, n); - } - return v; - } + case 's': + case 'z': + { + PyObject *v; + char *str = va_arg(*p_va, char *); + Py_ssize_t n; + if (**p_format == 
'#') { + ++*p_format; + if (flags & FLAG_SIZE_T) + n = va_arg(*p_va, Py_ssize_t); + else + n = va_arg(*p_va, int); + } + else + n = -1; + if (str == NULL) { + v = Py_None; + Py_INCREF(v); + } + else { + if (n < 0) { + size_t m = strlen(str); + if (m > PY_SSIZE_T_MAX) { + PyErr_SetString(PyExc_OverflowError, + "string too long for Python string"); + return NULL; + } + n = (Py_ssize_t)m; + } + v = PyString_FromStringAndSize(str, n); + } + return v; + } - case 'N': - case 'S': - case 'O': - if (**p_format == '&') { - typedef PyObject *(*converter)(void *); - converter func = va_arg(*p_va, converter); - void *arg = va_arg(*p_va, void *); - ++*p_format; - return (*func)(arg); - } - else { - PyObject *v; - v = va_arg(*p_va, PyObject *); - if (v != NULL) { - if (*(*p_format - 1) != 'N') - Py_INCREF(v); - } - else if (!PyErr_Occurred()) - /* If a NULL was passed - * because a call that should - * have constructed a value - * failed, that's OK, and we - * pass the error on; but if - * no error occurred it's not - * clear that the caller knew - * what she was doing. */ - PyErr_SetString(PyExc_SystemError, - "NULL object passed to Py_BuildValue"); - return v; - } + case 'N': + case 'S': + case 'O': + if (**p_format == '&') { + typedef PyObject *(*converter)(void *); + converter func = va_arg(*p_va, converter); + void *arg = va_arg(*p_va, void *); + ++*p_format; + return (*func)(arg); + } + else { + PyObject *v; + v = va_arg(*p_va, PyObject *); + if (v != NULL) { + if (*(*p_format - 1) != 'N') + Py_INCREF(v); + } + else if (!PyErr_Occurred()) + /* If a NULL was passed + * because a call that should + * have constructed a value + * failed, that's OK, and we + * pass the error on; but if + * no error occurred it's not + * clear that the caller knew + * what she was doing. 
*/ + PyErr_SetString(PyExc_SystemError, + "NULL object passed to Py_BuildValue"); + return v; + } - case ':': - case ',': - case ' ': - case '\t': - break; + case ':': + case ',': + case ' ': + case '\t': + break; - default: - PyErr_SetString(PyExc_SystemError, - "bad format char passed to Py_BuildValue"); - return NULL; + default: + PyErr_SetString(PyExc_SystemError, + "bad format char passed to Py_BuildValue"); + return NULL; - } - } + } + } } PyObject * Py_BuildValue(const char *format, ...) { - va_list va; - PyObject* retval; - va_start(va, format); - retval = va_build_value(format, va, 0); - va_end(va); - return retval; + va_list va; + PyObject* retval; + va_start(va, format); + retval = va_build_value(format, va, 0); + va_end(va); + return retval; } PyObject * _Py_BuildValue_SizeT(const char *format, ...) { - va_list va; - PyObject* retval; - va_start(va, format); - retval = va_build_value(format, va, FLAG_SIZE_T); - va_end(va); - return retval; + va_list va; + PyObject* retval; + va_start(va, format); + retval = va_build_value(format, va, FLAG_SIZE_T); + va_end(va); + return retval; } PyObject * Py_VaBuildValue(const char *format, va_list va) { - return va_build_value(format, va, 0); + return va_build_value(format, va, 0); } PyObject * _Py_VaBuildValue_SizeT(const char *format, va_list va) { - return va_build_value(format, va, FLAG_SIZE_T); + return va_build_value(format, va, FLAG_SIZE_T); } static PyObject * va_build_value(const char *format, va_list va, int flags) { - const char *f = format; - int n = countformat(f, '\0'); - va_list lva; + const char *f = format; + int n = countformat(f, '\0'); + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - if (n < 0) - return NULL; - if (n == 0) { - Py_INCREF(Py_None); - return Py_None; - } - if (n == 1) - return do_mkvalue(&f, &lva, flags); - 
return do_mktuple(&f, &lva, '\0', n, flags); + if (n < 0) + return NULL; + if (n == 0) { + Py_INCREF(Py_None); + return Py_None; + } + if (n == 1) + return do_mkvalue(&f, &lva, flags); + return do_mktuple(&f, &lva, '\0', n, flags); } PyObject * PyEval_CallFunction(PyObject *obj, const char *format, ...) { - va_list vargs; - PyObject *args; - PyObject *res; + va_list vargs; + PyObject *args; + PyObject *res; - va_start(vargs, format); + va_start(vargs, format); - args = Py_VaBuildValue(format, vargs); - va_end(vargs); + args = Py_VaBuildValue(format, vargs); + va_end(vargs); - if (args == NULL) - return NULL; + if (args == NULL) + return NULL; - res = PyEval_CallObject(obj, args); - Py_DECREF(args); + res = PyEval_CallObject(obj, args); + Py_DECREF(args); - return res; + return res; } PyObject * PyEval_CallMethod(PyObject *obj, const char *methodname, const char *format, ...) { - va_list vargs; - PyObject *meth; - PyObject *args; - PyObject *res; + va_list vargs; + PyObject *meth; + PyObject *args; + PyObject *res; - meth = PyObject_GetAttrString(obj, methodname); - if (meth == NULL) - return NULL; + meth = PyObject_GetAttrString(obj, methodname); + if (meth == NULL) + return NULL; - va_start(vargs, format); + va_start(vargs, format); - args = Py_VaBuildValue(format, vargs); - va_end(vargs); + args = Py_VaBuildValue(format, vargs); + va_end(vargs); - if (args == NULL) { - Py_DECREF(meth); - return NULL; - } + if (args == NULL) { + Py_DECREF(meth); + return NULL; + } - res = PyEval_CallObject(meth, args); - Py_DECREF(meth); - Py_DECREF(args); + res = PyEval_CallObject(meth, args); + Py_DECREF(meth); + Py_DECREF(args); - return res; -} - -static PyObject* -call_function_tail(PyObject *callable, PyObject *args) -{ - PyObject *retval; - - if (args == NULL) - return NULL; - - if (!PyTuple_Check(args)) { - PyObject *a; - - a = PyTuple_New(1); - if (a == NULL) { - Py_DECREF(args); - return NULL; - } - PyTuple_SET_ITEM(a, 0, args); - args = a; - } - retval = 
PyObject_Call(callable, args, NULL); - - Py_DECREF(args); - - return retval; -} - -PyObject * -PyObject_CallFunction(PyObject *callable, const char *format, ...) -{ - va_list va; - PyObject *args; - - if (format && *format) { - va_start(va, format); - args = Py_VaBuildValue(format, va); - va_end(va); - } - else - args = PyTuple_New(0); - - return call_function_tail(callable, args); -} - -PyObject * -PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) -{ - va_list va; - PyObject *args; - PyObject *func = NULL; - PyObject *retval = NULL; - - func = PyObject_GetAttrString(o, name); - if (func == NULL) { - PyErr_SetString(PyExc_AttributeError, name); - return 0; - } - - if (format && *format) { - va_start(va, format); - args = Py_VaBuildValue(format, va); - va_end(va); - } - else - args = PyTuple_New(0); - - retval = call_function_tail(func, args); - - exit: - /* args gets consumed in call_function_tail */ - Py_XDECREF(func); - - return retval; -} - -static PyObject * -objargs_mktuple(va_list va) -{ - int i, n = 0; - va_list countva; - PyObject *result, *tmp; - -#ifdef VA_LIST_IS_ARRAY - memcpy(countva, va, sizeof(va_list)); -#else -#ifdef __va_copy - __va_copy(countva, va); -#else - countva = va; -#endif -#endif - - while (((PyObject *)va_arg(countva, PyObject *)) != NULL) - ++n; - result = PyTuple_New(n); - if (result != NULL && n > 0) { - for (i = 0; i < n; ++i) { - tmp = (PyObject *)va_arg(va, PyObject *); - Py_INCREF(tmp); - PyTuple_SET_ITEM(result, i, tmp); - } - } - return result; -} - -PyObject * -PyObject_CallFunctionObjArgs(PyObject *callable, ...) -{ - PyObject *args, *tmp; - va_list vargs; - - /* count the args */ - va_start(vargs, callable); - args = objargs_mktuple(vargs); - va_end(vargs); - if (args == NULL) - return NULL; - tmp = PyObject_Call(callable, args, NULL); - Py_DECREF(args); - - return tmp; -} - -PyObject * -PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...) 
-{ - PyObject *args, *tmp; - va_list vargs; - - callable = PyObject_GetAttr(callable, name); - if (callable == NULL) - return NULL; - - /* count the args */ - va_start(vargs, name); - args = objargs_mktuple(vargs); - va_end(vargs); - if (args == NULL) { - Py_DECREF(callable); - return NULL; - } - tmp = PyObject_Call(callable, args, NULL); - Py_DECREF(args); - Py_DECREF(callable); - - return tmp; + return res; } /* returns -1 in case of error, 0 if a new key was added, 1 if the key @@ -666,67 +519,67 @@ static int _PyModule_AddObject_NoConsumeRef(PyObject *m, const char *name, PyObject *o) { - PyObject *dict, *prev; - if (!PyModule_Check(m)) { - PyErr_SetString(PyExc_TypeError, - "PyModule_AddObject() needs module as first arg"); - return -1; - } - if (!o) { - if (!PyErr_Occurred()) - PyErr_SetString(PyExc_TypeError, - "PyModule_AddObject() needs non-NULL value"); - return -1; - } + PyObject *dict, *prev; + if (!PyModule_Check(m)) { + PyErr_SetString(PyExc_TypeError, + "PyModule_AddObject() needs module as first arg"); + return -1; + } + if (!o) { + if (!PyErr_Occurred()) + PyErr_SetString(PyExc_TypeError, + "PyModule_AddObject() needs non-NULL value"); + return -1; + } - dict = PyModule_GetDict(m); - if (dict == NULL) { - /* Internal error -- modules must have a dict! */ - PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__", - PyModule_GetName(m)); - return -1; - } - prev = PyDict_GetItemString(dict, name); - if (PyDict_SetItemString(dict, name, o)) - return -1; - return prev != NULL; + dict = PyModule_GetDict(m); + if (dict == NULL) { + /* Internal error -- modules must have a dict! 
*/ + PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__", + PyModule_GetName(m)); + return -1; + } + prev = PyDict_GetItemString(dict, name); + if (PyDict_SetItemString(dict, name, o)) + return -1; + return prev != NULL; } int PyModule_AddObject(PyObject *m, const char *name, PyObject *o) { - int result = _PyModule_AddObject_NoConsumeRef(m, name, o); - /* XXX WORKAROUND for a common misusage of PyModule_AddObject: - for the common case of adding a new key, we don't consume a - reference, but instead just leak it away. The issue is that - people generally don't realize that this function consumes a - reference, because on CPython the reference is still stored - on the dictionary. */ - if (result != 0) - Py_DECREF(o); - return result < 0 ? -1 : 0; + int result = _PyModule_AddObject_NoConsumeRef(m, name, o); + /* XXX WORKAROUND for a common misusage of PyModule_AddObject: + for the common case of adding a new key, we don't consume a + reference, but instead just leak it away. The issue is that + people generally don't realize that this function consumes a + reference, because on CPython the reference is still stored + on the dictionary. */ + if (result != 0) + Py_DECREF(o); + return result < 0 ? -1 : 0; } -int +int PyModule_AddIntConstant(PyObject *m, const char *name, long value) { - int result; - PyObject *o = PyInt_FromLong(value); - if (!o) - return -1; - result = _PyModule_AddObject_NoConsumeRef(m, name, o); - Py_DECREF(o); - return result < 0 ? -1 : 0; + int result; + PyObject *o = PyInt_FromLong(value); + if (!o) + return -1; + result = _PyModule_AddObject_NoConsumeRef(m, name, o); + Py_DECREF(o); + return result < 0 ? -1 : 0; } -int +int PyModule_AddStringConstant(PyObject *m, const char *name, const char *value) { - int result; - PyObject *o = PyString_FromString(value); - if (!o) - return -1; - result = _PyModule_AddObject_NoConsumeRef(m, name, o); - Py_DECREF(o); - return result < 0 ? 
-1 : 0; + int result; + PyObject *o = PyString_FromString(value); + if (!o) + return -1; + result = _PyModule_AddObject_NoConsumeRef(m, name, o); + Py_DECREF(o); + return result < 0 ? -1 : 0; } diff --git a/pypy/module/cpyext/src/mysnprintf.c b/pypy/module/cpyext/src/mysnprintf.c --- a/pypy/module/cpyext/src/mysnprintf.c +++ b/pypy/module/cpyext/src/mysnprintf.c @@ -20,86 +20,86 @@ Return value (rv): - When 0 <= rv < size, the output conversion was unexceptional, and - rv characters were written to str (excluding a trailing \0 byte at - str[rv]). + When 0 <= rv < size, the output conversion was unexceptional, and + rv characters were written to str (excluding a trailing \0 byte at + str[rv]). - When rv >= size, output conversion was truncated, and a buffer of - size rv+1 would have been needed to avoid truncation. str[size-1] - is \0 in this case. + When rv >= size, output conversion was truncated, and a buffer of + size rv+1 would have been needed to avoid truncation. str[size-1] + is \0 in this case. - When rv < 0, "something bad happened". str[size-1] is \0 in this - case too, but the rest of str is unreliable. It could be that - an error in format codes was detected by libc, or on platforms - with a non-C99 vsnprintf simply that the buffer wasn't big enough - to avoid truncation, or on platforms without any vsnprintf that - PyMem_Malloc couldn't obtain space for a temp buffer. + When rv < 0, "something bad happened". str[size-1] is \0 in this + case too, but the rest of str is unreliable. It could be that + an error in format codes was detected by libc, or on platforms + with a non-C99 vsnprintf simply that the buffer wasn't big enough + to avoid truncation, or on platforms without any vsnprintf that + PyMem_Malloc couldn't obtain space for a temp buffer. CAUTION: Unlike C99, str != NULL and size > 0 are required. */ int +PyOS_snprintf(char *str, size_t size, const char *format, ...) 
+{ + int rc; + va_list va; + + va_start(va, format); + rc = PyOS_vsnprintf(str, size, format, va); + va_end(va); + return rc; +} + +int PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va) { - int len; /* # bytes written, excluding \0 */ + int len; /* # bytes written, excluding \0 */ #ifdef HAVE_SNPRINTF #define _PyOS_vsnprintf_EXTRA_SPACE 1 #else #define _PyOS_vsnprintf_EXTRA_SPACE 512 - char *buffer; + char *buffer; #endif - assert(str != NULL); - assert(size > 0); - assert(format != NULL); - /* We take a size_t as input but return an int. Sanity check - * our input so that it won't cause an overflow in the - * vsnprintf return value or the buffer malloc size. */ - if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) { - len = -666; - goto Done; - } + assert(str != NULL); + assert(size > 0); + assert(format != NULL); + /* We take a size_t as input but return an int. Sanity check + * our input so that it won't cause an overflow in the + * vsnprintf return value or the buffer malloc size. */ + if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) { + len = -666; + goto Done; + } #ifdef HAVE_SNPRINTF - len = vsnprintf(str, size, format, va); + len = vsnprintf(str, size, format, va); #else - /* Emulate it. */ - buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE); - if (buffer == NULL) { - len = -666; - goto Done; - } + /* Emulate it. */ + buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE); + if (buffer == NULL) { + len = -666; + goto Done; + } - len = vsprintf(buffer, format, va); - if (len < 0) - /* ignore the error */; + len = vsprintf(buffer, format, va); + if (len < 0) + /* ignore the error */; - else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE) - Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf"); + else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE) + Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf"); - else { - const size_t to_copy = (size_t)len < size ? 
- (size_t)len : size - 1; - assert(to_copy < size); - memcpy(str, buffer, to_copy); - str[to_copy] = '\0'; - } - PyMem_FREE(buffer); + else { + const size_t to_copy = (size_t)len < size ? + (size_t)len : size - 1; + assert(to_copy < size); + memcpy(str, buffer, to_copy); + str[to_copy] = '\0'; + } + PyMem_FREE(buffer); #endif Done: - if (size > 0) - str[size-1] = '\0'; - return len; + if (size > 0) + str[size-1] = '\0'; + return len; #undef _PyOS_vsnprintf_EXTRA_SPACE } - -int -PyOS_snprintf(char *str, size_t size, const char *format, ...) -{ - int rc; - va_list va; - - va_start(va, format); - rc = PyOS_vsnprintf(str, size, format, va); - va_end(va); - return rc; -} diff --git a/pypy/module/cpyext/src/object.c b/pypy/module/cpyext/src/object.c deleted file mode 100644 --- a/pypy/module/cpyext/src/object.c +++ /dev/null @@ -1,91 +0,0 @@ -// contains code from abstract.c -#include - - -static PyObject * -null_error(void) -{ - if (!PyErr_Occurred()) - PyErr_SetString(PyExc_SystemError, - "null argument to internal routine"); - return NULL; -} - -int PyObject_AsReadBuffer(PyObject *obj, - const void **buffer, - Py_ssize_t *buffer_len) -{ - PyBufferProcs *pb; - void *pp; - Py_ssize_t len; - - if (obj == NULL || buffer == NULL || buffer_len == NULL) { - null_error(); - return -1; - } - pb = obj->ob_type->tp_as_buffer; - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a readable buffer object"); - return -1; - } - if ((*pb->bf_getsegcount)(obj, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "expected a single-segment buffer object"); - return -1; - } - len = (*pb->bf_getreadbuffer)(obj, 0, &pp); - if (len < 0) - return -1; - *buffer = pp; - *buffer_len = len; - return 0; -} - -int PyObject_AsWriteBuffer(PyObject *obj, - void **buffer, - Py_ssize_t *buffer_len) -{ - PyBufferProcs *pb; - void*pp; - Py_ssize_t len; - - if (obj == NULL || buffer == NULL || buffer_len == NULL) { - 
null_error(); - return -1; - } - pb = obj->ob_type->tp_as_buffer; - if (pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a writeable buffer object"); - return -1; - } - if ((*pb->bf_getsegcount)(obj, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "expected a single-segment buffer object"); - return -1; - } - len = (*pb->bf_getwritebuffer)(obj,0,&pp); - if (len < 0) - return -1; - *buffer = pp; - *buffer_len = len; - return 0; -} - -int -PyObject_CheckReadBuffer(PyObject *obj) -{ - PyBufferProcs *pb = obj->ob_type->tp_as_buffer; - - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL || - (*pb->bf_getsegcount)(obj, NULL) != 1) - return 0; - return 1; -} diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c --- a/pypy/module/cpyext/src/pyerrors.c +++ b/pypy/module/cpyext/src/pyerrors.c @@ -4,72 +4,75 @@ PyObject * PyErr_Format(PyObject *exception, const char *format, ...) 
{ - va_list vargs; - PyObject* string; + va_list vargs; + PyObject* string; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, format); + va_start(vargs, format); #else - va_start(vargs); + va_start(vargs); #endif - string = PyString_FromFormatV(format, vargs); - PyErr_SetObject(exception, string); - Py_XDECREF(string); - va_end(vargs); - return NULL; + string = PyString_FromFormatV(format, vargs); + PyErr_SetObject(exception, string); + Py_XDECREF(string); + va_end(vargs); + return NULL; } + + PyObject * PyErr_NewException(const char *name, PyObject *base, PyObject *dict) { - char *dot; - PyObject *modulename = NULL; - PyObject *classname = NULL; - PyObject *mydict = NULL; - PyObject *bases = NULL; - PyObject *result = NULL; - dot = strrchr(name, '.'); - if (dot == NULL) { - PyErr_SetString(PyExc_SystemError, - "PyErr_NewException: name must be module.class"); - return NULL; - } - if (base == NULL) - base = PyExc_Exception; - if (dict == NULL) { - dict = mydict = PyDict_New(); - if (dict == NULL) - goto failure; - } - if (PyDict_GetItemString(dict, "__module__") == NULL) { - modulename = PyString_FromStringAndSize(name, - (Py_ssize_t)(dot-name)); - if (modulename == NULL) - goto failure; - if (PyDict_SetItemString(dict, "__module__", modulename) != 0) - goto failure; - } - if (PyTuple_Check(base)) { - bases = base; - /* INCREF as we create a new ref in the else branch */ - Py_INCREF(bases); - } else { - bases = PyTuple_Pack(1, base); - if (bases == NULL) - goto failure; - } - /* Create a real new-style class. 
*/ - result = PyObject_CallFunction((PyObject *)&PyType_Type, "sOO", - dot+1, bases, dict); + char *dot; + PyObject *modulename = NULL; + PyObject *classname = NULL; + PyObject *mydict = NULL; + PyObject *bases = NULL; + PyObject *result = NULL; + dot = strrchr(name, '.'); + if (dot == NULL) { + PyErr_SetString(PyExc_SystemError, + "PyErr_NewException: name must be module.class"); + return NULL; + } + if (base == NULL) + base = PyExc_Exception; + if (dict == NULL) { + dict = mydict = PyDict_New(); + if (dict == NULL) + goto failure; + } + if (PyDict_GetItemString(dict, "__module__") == NULL) { + modulename = PyString_FromStringAndSize(name, + (Py_ssize_t)(dot-name)); + if (modulename == NULL) + goto failure; + if (PyDict_SetItemString(dict, "__module__", modulename) != 0) + goto failure; + } + if (PyTuple_Check(base)) { + bases = base; + /* INCREF as we create a new ref in the else branch */ + Py_INCREF(bases); + } else { + bases = PyTuple_Pack(1, base); + if (bases == NULL) + goto failure; + } + /* Create a real new-style class. 
*/ + result = PyObject_CallFunction((PyObject *)&PyType_Type, "sOO", + dot+1, bases, dict); failure: - Py_XDECREF(bases); - Py_XDECREF(mydict); - Py_XDECREF(classname); - Py_XDECREF(modulename); - return result; + Py_XDECREF(bases); + Py_XDECREF(mydict); + Py_XDECREF(classname); + Py_XDECREF(modulename); + return result; } + /* Create an exception with docstring */ PyObject * PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict) diff --git a/pypy/module/cpyext/src/pysignals.c b/pypy/module/cpyext/src/pysignals.c --- a/pypy/module/cpyext/src/pysignals.c +++ b/pypy/module/cpyext/src/pysignals.c @@ -17,17 +17,34 @@ PyOS_getsig(int sig) { #ifdef SA_RESTART - /* assume sigaction exists */ - struct sigaction context; - if (sigaction(sig, NULL, &context) == -1) - return SIG_ERR; - return context.sa_handler; + /* assume sigaction exists */ + struct sigaction context; + if (sigaction(sig, NULL, &context) == -1) + return SIG_ERR; + return context.sa_handler; #else - PyOS_sighandler_t handler; - handler = signal(sig, SIG_IGN); - if (handler != SIG_ERR) - signal(sig, handler); - return handler; + PyOS_sighandler_t handler; +/* Special signal handling for the secure CRT in Visual Studio 2005 */ +#if defined(_MSC_VER) && _MSC_VER >= 1400 + switch (sig) { + /* Only these signals are valid */ + case SIGINT: + case SIGILL: + case SIGFPE: + case SIGSEGV: + case SIGTERM: + case SIGBREAK: + case SIGABRT: + break; + /* Don't call signal() with other values or it will assert */ + default: + return SIG_ERR; + } +#endif /* _MSC_VER && _MSC_VER >= 1400 */ + handler = signal(sig, SIG_IGN); + if (handler != SIG_ERR) + signal(sig, handler); + return handler; #endif } @@ -35,21 +52,21 @@ PyOS_setsig(int sig, PyOS_sighandler_t handler) { #ifdef SA_RESTART - /* assume sigaction exists */ - struct sigaction context, ocontext; - context.sa_handler = handler; - sigemptyset(&context.sa_mask); - context.sa_flags = 0; - if (sigaction(sig, &context, &ocontext) == 
-1) - return SIG_ERR; - return ocontext.sa_handler; + /* assume sigaction exists */ + struct sigaction context, ocontext; + context.sa_handler = handler; + sigemptyset(&context.sa_mask); + context.sa_flags = 0; + if (sigaction(sig, &context, &ocontext) == -1) + return SIG_ERR; + return ocontext.sa_handler; #else - PyOS_sighandler_t oldhandler; - oldhandler = signal(sig, handler); + PyOS_sighandler_t oldhandler; + oldhandler = signal(sig, handler); #ifndef MS_WINDOWS - /* should check if this exists */ - siginterrupt(sig, 1); + /* should check if this exists */ + siginterrupt(sig, 1); #endif - return oldhandler; + return oldhandler; #endif } diff --git a/pypy/module/cpyext/src/pythonrun.c b/pypy/module/cpyext/src/pythonrun.c --- a/pypy/module/cpyext/src/pythonrun.c +++ b/pypy/module/cpyext/src/pythonrun.c @@ -9,28 +9,28 @@ void Py_FatalError(const char *msg) { - fprintf(stderr, "Fatal Python error: %s\n", msg); - fflush(stderr); /* it helps in Windows debug build */ + fprintf(stderr, "Fatal Python error: %s\n", msg); + fflush(stderr); /* it helps in Windows debug build */ #ifdef MS_WINDOWS - { - size_t len = strlen(msg); - WCHAR* buffer; - size_t i; + { + size_t len = strlen(msg); + WCHAR* buffer; + size_t i; - /* Convert the message to wchar_t. This uses a simple one-to-one - conversion, assuming that the this error message actually uses ASCII - only. If this ceases to be true, we will have to convert. */ - buffer = alloca( (len+1) * (sizeof *buffer)); - for( i=0; i<=len; ++i) - buffer[i] = msg[i]; - OutputDebugStringW(L"Fatal Python error: "); - OutputDebugStringW(buffer); - OutputDebugStringW(L"\n"); - } + /* Convert the message to wchar_t. This uses a simple one-to-one + conversion, assuming that the this error message actually uses ASCII + only. If this ceases to be true, we will have to convert. 
*/ + buffer = alloca( (len+1) * (sizeof *buffer)); + for( i=0; i<=len; ++i) + buffer[i] = msg[i]; + OutputDebugStringW(L"Fatal Python error: "); + OutputDebugStringW(buffer); + OutputDebugStringW(L"\n"); + } #ifdef _DEBUG - DebugBreak(); + DebugBreak(); #endif #endif /* MS_WINDOWS */ - abort(); + abort(); } diff --git a/pypy/module/cpyext/src/stringobject.c b/pypy/module/cpyext/src/stringobject.c --- a/pypy/module/cpyext/src/stringobject.c +++ b/pypy/module/cpyext/src/stringobject.c @@ -4,246 +4,247 @@ PyObject * PyString_FromFormatV(const char *format, va_list vargs) { - va_list count; - Py_ssize_t n = 0; - const char* f; - char *s; - PyObject* string; + va_list count; + Py_ssize_t n = 0; + const char* f; + char *s; + PyObject* string; #ifdef VA_LIST_IS_ARRAY - Py_MEMCPY(count, vargs, sizeof(va_list)); + Py_MEMCPY(count, vargs, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(count, vargs); + __va_copy(count, vargs); #else - count = vargs; + count = vargs; #endif #endif - /* step 1: figure out how large a buffer we need */ - for (f = format; *f; f++) { - if (*f == '%') { + /* step 1: figure out how large a buffer we need */ + for (f = format; *f; f++) { + if (*f == '%') { #ifdef HAVE_LONG_LONG - int longlongflag = 0; + int longlongflag = 0; #endif - const char* p = f; - while (*++f && *f != '%' && !isalpha(Py_CHARMASK(*f))) - ; + const char* p = f; + while (*++f && *f != '%' && !isalpha(Py_CHARMASK(*f))) + ; - /* skip the 'l' or 'z' in {%ld, %zd, %lu, %zu} since - * they don't affect the amount of space we reserve. - */ - if (*f == 'l') { - if (f[1] == 'd' || f[1] == 'u') { - ++f; - } + /* skip the 'l' or 'z' in {%ld, %zd, %lu, %zu} since + * they don't affect the amount of space we reserve. 
+ */ + if (*f == 'l') { + if (f[1] == 'd' || f[1] == 'u') { + ++f; + } #ifdef HAVE_LONG_LONG - else if (f[1] == 'l' && - (f[2] == 'd' || f[2] == 'u')) { - longlongflag = 1; - f += 2; - } + else if (f[1] == 'l' && + (f[2] == 'd' || f[2] == 'u')) { + longlongflag = 1; + f += 2; + } #endif - } - else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { - ++f; - } + } + else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { + ++f; + } - switch (*f) { - case 'c': - (void)va_arg(count, int); - /* fall through... */ - case '%': - n++; - break; - case 'd': case 'u': case 'i': case 'x': - (void) va_arg(count, int); + switch (*f) { + case 'c': + (void)va_arg(count, int); + /* fall through... */ + case '%': + n++; + break; + case 'd': case 'u': case 'i': case 'x': + (void) va_arg(count, int); #ifdef HAVE_LONG_LONG - /* Need at most - ceil(log10(256)*SIZEOF_LONG_LONG) digits, - plus 1 for the sign. 53/22 is an upper - bound for log10(256). */ - if (longlongflag) - n += 2 + (SIZEOF_LONG_LONG*53-1) / 22; - else + /* Need at most + ceil(log10(256)*SIZEOF_LONG_LONG) digits, + plus 1 for the sign. 53/22 is an upper + bound for log10(256). */ + if (longlongflag) + n += 2 + (SIZEOF_LONG_LONG*53-1) / 22; + else #endif - /* 20 bytes is enough to hold a 64-bit - integer. Decimal takes the most - space. This isn't enough for - octal. */ - n += 20; + /* 20 bytes is enough to hold a 64-bit + integer. Decimal takes the most + space. This isn't enough for + octal. */ + n += 20; - break; - case 's': - s = va_arg(count, char*); - n += strlen(s); - break; - case 'p': - (void) va_arg(count, int); - /* maximum 64-bit pointer representation: - * 0xffffffffffffffff - * so 19 characters is enough. - * XXX I count 18 -- what's the extra for? - */ - n += 19; - break; - default: - /* if we stumble upon an unknown - formatting code, copy the rest of - the format string to the output - string. 
(we cannot just skip the - code, since there's no way to know - what's in the argument list) */ - n += strlen(p); - goto expand; - } - } else - n++; - } + break; + case 's': + s = va_arg(count, char*); + n += strlen(s); + break; + case 'p': + (void) va_arg(count, int); + /* maximum 64-bit pointer representation: + * 0xffffffffffffffff + * so 19 characters is enough. + * XXX I count 18 -- what's the extra for? + */ + n += 19; + break; + default: + /* if we stumble upon an unknown + formatting code, copy the rest of + the format string to the output + string. (we cannot just skip the + code, since there's no way to know + what's in the argument list) */ + n += strlen(p); + goto expand; + } + } else + n++; + } expand: - /* step 2: fill the buffer */ - /* Since we've analyzed how much space we need for the worst case, - use sprintf directly instead of the slower PyOS_snprintf. */ - string = PyString_FromStringAndSize(NULL, n); - if (!string) - return NULL; + /* step 2: fill the buffer */ + /* Since we've analyzed how much space we need for the worst case, + use sprintf directly instead of the slower PyOS_snprintf. */ + string = PyString_FromStringAndSize(NULL, n); + if (!string) + return NULL; - s = PyString_AsString(string); + s = PyString_AsString(string); - for (f = format; *f; f++) { - if (*f == '%') { - const char* p = f++; - Py_ssize_t i; - int longflag = 0; + for (f = format; *f; f++) { + if (*f == '%') { + const char* p = f++; + Py_ssize_t i; + int longflag = 0; #ifdef HAVE_LONG_LONG - int longlongflag = 0; + int longlongflag = 0; #endif - int size_tflag = 0; - /* parse the width.precision part (we're only - interested in the precision value, if any) */ - n = 0; - while (isdigit(Py_CHARMASK(*f))) - n = (n*10) + *f++ - '0'; - if (*f == '.') { - f++; - n = 0; - while (isdigit(Py_CHARMASK(*f))) - n = (n*10) + *f++ - '0'; - } - while (*f && *f != '%' && !isalpha(Py_CHARMASK(*f))) - f++; - /* Handle %ld, %lu, %lld and %llu. 
*/ - if (*f == 'l') { - if (f[1] == 'd' || f[1] == 'u') { - longflag = 1; - ++f; - } + int size_tflag = 0; + /* parse the width.precision part (we're only + interested in the precision value, if any) */ + n = 0; + while (isdigit(Py_CHARMASK(*f))) + n = (n*10) + *f++ - '0'; + if (*f == '.') { + f++; + n = 0; + while (isdigit(Py_CHARMASK(*f))) + n = (n*10) + *f++ - '0'; + } + while (*f && *f != '%' && !isalpha(Py_CHARMASK(*f))) + f++; + /* Handle %ld, %lu, %lld and %llu. */ + if (*f == 'l') { + if (f[1] == 'd' || f[1] == 'u') { + longflag = 1; + ++f; + } #ifdef HAVE_LONG_LONG - else if (f[1] == 'l' && - (f[2] == 'd' || f[2] == 'u')) { - longlongflag = 1; - f += 2; - } + else if (f[1] == 'l' && + (f[2] == 'd' || f[2] == 'u')) { + longlongflag = 1; + f += 2; + } #endif - } - /* handle the size_t flag. */ - else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { - size_tflag = 1; - ++f; - } + } + /* handle the size_t flag. */ + else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { + size_tflag = 1; + ++f; + } - switch (*f) { - case 'c': - *s++ = va_arg(vargs, int); - break; - case 'd': - if (longflag) - sprintf(s, "%ld", va_arg(vargs, long)); + switch (*f) { + case 'c': + *s++ = va_arg(vargs, int); + break; + case 'd': + if (longflag) + sprintf(s, "%ld", va_arg(vargs, long)); #ifdef HAVE_LONG_LONG - else if (longlongflag) - sprintf(s, "%" PY_FORMAT_LONG_LONG "d", - va_arg(vargs, PY_LONG_LONG)); + else if (longlongflag) + sprintf(s, "%" PY_FORMAT_LONG_LONG "d", + va_arg(vargs, PY_LONG_LONG)); #endif - else if (size_tflag) - sprintf(s, "%" PY_FORMAT_SIZE_T "d", - va_arg(vargs, Py_ssize_t)); - else - sprintf(s, "%d", va_arg(vargs, int)); - s += strlen(s); - break; - case 'u': - if (longflag) - sprintf(s, "%lu", - va_arg(vargs, unsigned long)); + else if (size_tflag) + sprintf(s, "%" PY_FORMAT_SIZE_T "d", + va_arg(vargs, Py_ssize_t)); + else + sprintf(s, "%d", va_arg(vargs, int)); + s += strlen(s); + break; + case 'u': + if (longflag) + sprintf(s, "%lu", + va_arg(vargs, 
unsigned long)); #ifdef HAVE_LONG_LONG - else if (longlongflag) - sprintf(s, "%" PY_FORMAT_LONG_LONG "u", - va_arg(vargs, PY_LONG_LONG)); + else if (longlongflag) + sprintf(s, "%" PY_FORMAT_LONG_LONG "u", + va_arg(vargs, PY_LONG_LONG)); #endif - else if (size_tflag) - sprintf(s, "%" PY_FORMAT_SIZE_T "u", - va_arg(vargs, size_t)); - else - sprintf(s, "%u", - va_arg(vargs, unsigned int)); - s += strlen(s); - break; - case 'i': - sprintf(s, "%i", va_arg(vargs, int)); - s += strlen(s); - break; - case 'x': - sprintf(s, "%x", va_arg(vargs, int)); - s += strlen(s); - break; - case 's': - p = va_arg(vargs, char*); - i = strlen(p); - if (n > 0 && i > n) - i = n; - Py_MEMCPY(s, p, i); - s += i; - break; - case 'p': - sprintf(s, "%p", va_arg(vargs, void*)); - /* %p is ill-defined: ensure leading 0x. */ - if (s[1] == 'X') - s[1] = 'x'; - else if (s[1] != 'x') { - memmove(s+2, s, strlen(s)+1); - s[0] = '0'; - s[1] = 'x'; - } - s += strlen(s); - break; - case '%': - *s++ = '%'; - break; - default: - strcpy(s, p); - s += strlen(s); - goto end; - } - } else - *s++ = *f; - } + else if (size_tflag) + sprintf(s, "%" PY_FORMAT_SIZE_T "u", + va_arg(vargs, size_t)); + else + sprintf(s, "%u", + va_arg(vargs, unsigned int)); + s += strlen(s); + break; + case 'i': + sprintf(s, "%i", va_arg(vargs, int)); + s += strlen(s); + break; + case 'x': + sprintf(s, "%x", va_arg(vargs, int)); + s += strlen(s); + break; + case 's': + p = va_arg(vargs, char*); + i = strlen(p); + if (n > 0 && i > n) + i = n; + Py_MEMCPY(s, p, i); + s += i; + break; + case 'p': + sprintf(s, "%p", va_arg(vargs, void*)); + /* %p is ill-defined: ensure leading 0x. 
*/ + if (s[1] == 'X') + s[1] = 'x'; + else if (s[1] != 'x') { + memmove(s+2, s, strlen(s)+1); + s[0] = '0'; + s[1] = 'x'; + } + s += strlen(s); + break; + case '%': + *s++ = '%'; + break; + default: + strcpy(s, p); + s += strlen(s); + goto end; + } + } else + *s++ = *f; + } end: - _PyString_Resize(&string, s - PyString_AS_STRING(string)); - return string; + if (_PyString_Resize(&string, s - PyString_AS_STRING(string))) + return NULL; + return string; } PyObject * PyString_FromFormat(const char *format, ...) { - PyObject* ret; - va_list vargs; + PyObject* ret; + va_list vargs; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, format); + va_start(vargs, format); #else - va_start(vargs); + va_start(vargs); #endif - ret = PyString_FromFormatV(format, vargs); - va_end(vargs); - return ret; + ret = PyString_FromFormatV(format, vargs); + va_end(vargs); + return ret; } diff --git a/pypy/module/cpyext/src/structseq.c b/pypy/module/cpyext/src/structseq.c --- a/pypy/module/cpyext/src/structseq.c +++ b/pypy/module/cpyext/src/structseq.c @@ -175,32 +175,33 @@ if (min_len != max_len) { if (len < min_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes an at least %zd-sequence (%zd-sequence given)", - type->tp_name, min_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes an at least %zd-sequence (%zd-sequence given)", + type->tp_name, min_len, len); + Py_DECREF(arg); + return NULL; } if (len > max_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes an at most %zd-sequence (%zd-sequence given)", - type->tp_name, max_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes an at most %zd-sequence (%zd-sequence given)", + type->tp_name, max_len, len); + Py_DECREF(arg); + return NULL; } } else { if (len != min_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes a %zd-sequence (%zd-sequence given)", - type->tp_name, min_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes a %zd-sequence (%zd-sequence given)", + type->tp_name, min_len, len); + 
Py_DECREF(arg); + return NULL; } } res = (PyStructSequence*) PyStructSequence_New(type); if (res == NULL) { + Py_DECREF(arg); return NULL; } for (i = 0; i < len; ++i) { diff --git a/pypy/module/cpyext/src/sysmodule.c b/pypy/module/cpyext/src/sysmodule.c --- a/pypy/module/cpyext/src/sysmodule.c +++ b/pypy/module/cpyext/src/sysmodule.c @@ -100,4 +100,3 @@ sys_write("stderr", stderr, format, va); va_end(va); } - diff --git a/pypy/module/cpyext/src/varargwrapper.c b/pypy/module/cpyext/src/varargwrapper.c --- a/pypy/module/cpyext/src/varargwrapper.c +++ b/pypy/module/cpyext/src/varargwrapper.c @@ -1,21 +1,25 @@ #include #include -PyObject * PyTuple_Pack(Py_ssize_t size, ...) +PyObject * +PyTuple_Pack(Py_ssize_t n, ...) { - va_list ap; - PyObject *cur, *tuple; - int i; + Py_ssize_t i; + PyObject *o; + PyObject *result; + va_list vargs; - tuple = PyTuple_New(size); - va_start(ap, size); - for (i = 0; i < size; i++) { - cur = va_arg(ap, PyObject*); - Py_INCREF(cur); - if (PyTuple_SetItem(tuple, i, cur) < 0) + va_start(vargs, n); + result = PyTuple_New(n); + if (result == NULL) + return NULL; + for (i = 0; i < n; i++) { + o = va_arg(vargs, PyObject *); + Py_INCREF(o); + if (PyTuple_SetItem(result, i, o) < 0) return NULL; } - va_end(ap); - return tuple; + va_end(vargs); + return result; } diff --git a/pypy/module/cpyext/structmember.py b/pypy/module/cpyext/structmember.py --- a/pypy/module/cpyext/structmember.py +++ b/pypy/module/cpyext/structmember.py @@ -10,7 +10,7 @@ PyString_FromString, PyString_FromStringAndSize) from pypy.module.cpyext.floatobject import PyFloat_AsDouble from pypy.module.cpyext.longobject import ( - PyLong_AsLongLong, PyLong_AsUnsignedLongLong) + PyLong_AsLongLong, PyLong_AsUnsignedLongLong, PyLong_AsSsize_t) from pypy.module.cpyext.typeobjectdefs import PyMemberDef from pypy.rlib.unroll import unrolling_iterable @@ -28,6 +28,7 @@ (T_DOUBLE, rffi.DOUBLE, PyFloat_AsDouble), (T_LONGLONG, rffi.LONGLONG, PyLong_AsLongLong), (T_ULONGLONG, rffi.ULONGLONG, 
PyLong_AsUnsignedLongLong), + (T_PYSSIZET, rffi.SSIZE_T, PyLong_AsSsize_t), ]) diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -18,6 +18,7 @@ T_OBJECT_EX = 16 T_LONGLONG = 17 T_ULONGLONG = 18 +T_PYSSIZET = 19 READONLY = RO = 1 READ_RESTRICTED = 2 diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1405,13 +1405,6 @@ """ raise NotImplementedError - at cpython_api([Py_ssize_t], PyObject) -def PyLong_FromSsize_t(space, v): - """Return a new PyLongObject object from a C Py_ssize_t, or - NULL on failure. - """ - raise NotImplementedError - @cpython_api([rffi.SIZE_T], PyObject) def PyLong_FromSize_t(space, v): """Return a new PyLongObject object from a C size_t, or @@ -1431,14 +1424,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([PyObject], Py_ssize_t, error=-1) -def PyLong_AsSsize_t(space, pylong): - """Return a C Py_ssize_t representation of the contents of pylong. If - pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised - and -1 will be returned. - """ - raise NotImplementedError - @cpython_api([PyObject, rffi.CCHARP], rffi.INT_real, error=-1) def PyMapping_DelItemString(space, o, key): """Remove the mapping for object key from the object o. Return -1 on @@ -1980,35 +1965,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) -def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): - """Decode length bytes from a UTF-32 encoded buffer string and return the - corresponding Unicode object. errors (if non-NULL) defines the error - handling. It defaults to "strict". 
- - If byteorder is non-NULL, the decoder starts decoding using the given byte - order: - - *byteorder == -1: little endian - *byteorder == 0: native order - *byteorder == 1: big endian - - If *byteorder is zero, and the first four bytes of the input data are a - byte order mark (BOM), the decoder switches to this byte order and the BOM is - not copied into the resulting Unicode string. If *byteorder is -1 or - 1, any byte order mark is copied to the output. - - After completion, *byteorder is set to the current byte order at the end - of input data. - - In a narrow build codepoints outside the BMP will be decoded as surrogate pairs. - - If byteorder is NULL, the codec starts in native order mode. - - Return NULL if an exception was raised by the codec. - """ - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP, Py_ssize_t], PyObject) def PyUnicode_DecodeUTF32Stateful(space, s, size, errors, byteorder, consumed): """If consumed is NULL, behave like PyUnicode_DecodeUTF32(). 
If diff --git a/pypy/module/cpyext/test/_sre.c b/pypy/module/cpyext/test/_sre.c --- a/pypy/module/cpyext/test/_sre.c +++ b/pypy/module/cpyext/test/_sre.c @@ -81,9 +81,6 @@ #define PyObject_DEL(op) PyMem_DEL((op)) #endif -#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) -#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) - /* -------------------------------------------------------------------- */ #if defined(_MSC_VER) @@ -1689,7 +1686,7 @@ if (PyUnicode_Check(string)) { /* unicode strings doesn't always support the buffer interface */ ptr = (void*) PyUnicode_AS_DATA(string); - bytes = PyUnicode_GET_DATA_SIZE(string); + /* bytes = PyUnicode_GET_DATA_SIZE(string); */ size = PyUnicode_GET_SIZE(string); charsize = sizeof(Py_UNICODE); @@ -2601,46 +2598,22 @@ {NULL, NULL} }; -static PyObject* -pattern_getattr(PatternObject* self, char* name) -{ - PyObject* res; - - res = Py_FindMethod(pattern_methods, (PyObject*) self, name); - - if (res) - return res; - - PyErr_Clear(); - - /* attributes */ - if (!strcmp(name, "pattern")) { - Py_INCREF(self->pattern); - return self->pattern; - } - - if (!strcmp(name, "flags")) - return Py_BuildValue("i", self->flags); - - if (!strcmp(name, "groups")) - return Py_BuildValue("i", self->groups); - - if (!strcmp(name, "groupindex") && self->groupindex) { - Py_INCREF(self->groupindex); - return self->groupindex; - } - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; -} +#define PAT_OFF(x) offsetof(PatternObject, x) +static PyMemberDef pattern_members[] = { + {"pattern", T_OBJECT, PAT_OFF(pattern), READONLY}, + {"flags", T_INT, PAT_OFF(flags), READONLY}, + {"groups", T_PYSSIZET, PAT_OFF(groups), READONLY}, + {"groupindex", T_OBJECT, PAT_OFF(groupindex), READONLY}, + {NULL} /* Sentinel */ +}; statichere PyTypeObject Pattern_Type = { PyObject_HEAD_INIT(NULL) 0, "_" SRE_MODULE ".SRE_Pattern", sizeof(PatternObject), sizeof(SRE_CODE), (destructor)pattern_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)pattern_getattr, 
/*tp_getattr*/ + 0, /* tp_print */ + 0, /* tp_getattrn */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ @@ -2653,12 +2626,16 @@ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ - Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ pattern_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(PatternObject, weakreflist), /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + pattern_methods, /* tp_methods */ + pattern_members, /* tp_members */ }; static int _validate(PatternObject *self); /* Forward */ @@ -2767,7 +2744,7 @@ #if defined(VVERBOSE) #define VTRACE(v) printf v #else -#define VTRACE(v) +#define VTRACE(v) do {} while(0) /* do nothing */ #endif /* Report failure */ @@ -2970,13 +2947,13 @@ <1=skip> <2=flags> <3=min> <4=max>; If SRE_INFO_PREFIX or SRE_INFO_CHARSET is in the flags, more follows. */ - SRE_CODE flags, min, max, i; + SRE_CODE flags, i; SRE_CODE *newcode; GET_SKIP; newcode = code+skip-1; GET_ARG; flags = arg; - GET_ARG; min = arg; - GET_ARG; max = arg; + GET_ARG; /* min */ + GET_ARG; /* max */ /* Check that only valid flags are present */ if ((flags & ~(SRE_INFO_PREFIX | SRE_INFO_LITERAL | @@ -2992,9 +2969,9 @@ FAIL; /* Validate the prefix */ if (flags & SRE_INFO_PREFIX) { - SRE_CODE prefix_len, prefix_skip; + SRE_CODE prefix_len; GET_ARG; prefix_len = arg; - GET_ARG; prefix_skip = arg; + GET_ARG; /* prefix skip */ /* Here comes the prefix string */ if (code+prefix_len < code || code+prefix_len > newcode) FAIL; @@ -3565,7 +3542,7 @@ #endif } -static PyMethodDef match_methods[] = { +static struct PyMethodDef match_methods[] = { {"group", (PyCFunction) match_group, METH_VARARGS}, {"start", (PyCFunction) match_start, METH_VARARGS}, {"end", (PyCFunction) match_end, METH_VARARGS}, @@ -3578,80 +3555,90 @@ {NULL, NULL} }; -static PyObject* -match_getattr(MatchObject* self, char* name) +static PyObject * +match_lastindex_get(MatchObject *self) { - 
PyObject* res; - - res = Py_FindMethod(match_methods, (PyObject*) self, name); - if (res) - return res; - - PyErr_Clear(); - - if (!strcmp(name, "lastindex")) { - if (self->lastindex >= 0) - return Py_BuildValue("i", self->lastindex); - Py_INCREF(Py_None); - return Py_None; + if (self->lastindex >= 0) + return Py_BuildValue("i", self->lastindex); + Py_INCREF(Py_None); + return Py_None; +} + +static PyObject * +match_lastgroup_get(MatchObject *self) +{ + if (self->pattern->indexgroup && self->lastindex >= 0) { + PyObject* result = PySequence_GetItem( + self->pattern->indexgroup, self->lastindex + ); + if (result) + return result; + PyErr_Clear(); } - - if (!strcmp(name, "lastgroup")) { - if (self->pattern->indexgroup && self->lastindex >= 0) { - PyObject* result = PySequence_GetItem( - self->pattern->indexgroup, self->lastindex - ); - if (result) - return result; - PyErr_Clear(); - } - Py_INCREF(Py_None); - return Py_None; - } - - if (!strcmp(name, "string")) { - if (self->string) { - Py_INCREF(self->string); - return self->string; - } else { - Py_INCREF(Py_None); - return Py_None; - } - } - - if (!strcmp(name, "regs")) { - if (self->regs) { - Py_INCREF(self->regs); - return self->regs; - } else - return match_regs(self); - } - - if (!strcmp(name, "re")) { - Py_INCREF(self->pattern); - return (PyObject*) self->pattern; - } - - if (!strcmp(name, "pos")) - return Py_BuildValue("i", self->pos); - - if (!strcmp(name, "endpos")) - return Py_BuildValue("i", self->endpos); - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; + Py_INCREF(Py_None); + return Py_None; } +static PyObject * +match_regs_get(MatchObject *self) +{ + if (self->regs) { + Py_INCREF(self->regs); + return self->regs; + } else + return match_regs(self); +} + +static PyGetSetDef match_getset[] = { + {"lastindex", (getter)match_lastindex_get, (setter)NULL}, + {"lastgroup", (getter)match_lastgroup_get, (setter)NULL}, + {"regs", (getter)match_regs_get, (setter)NULL}, + {NULL} +}; + +#define 
MATCH_OFF(x) offsetof(MatchObject, x) +static PyMemberDef match_members[] = { + {"string", T_OBJECT, MATCH_OFF(string), READONLY}, + {"re", T_OBJECT, MATCH_OFF(pattern), READONLY}, + {"pos", T_PYSSIZET, MATCH_OFF(pos), READONLY}, + {"endpos", T_PYSSIZET, MATCH_OFF(endpos), READONLY}, + {NULL} +}; + + /* FIXME: implement setattr("string", None) as a special case (to detach the associated string, if any */ -statichere PyTypeObject Match_Type = { - PyObject_HEAD_INIT(NULL) - 0, "_" SRE_MODULE ".SRE_Match", +static PyTypeObject Match_Type = { + PyVarObject_HEAD_INIT(NULL, 0) + "_" SRE_MODULE ".SRE_Match", sizeof(MatchObject), sizeof(Py_ssize_t), - (destructor)match_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)match_getattr /*tp_getattr*/ + (destructor)match_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + match_methods, /* tp_methods */ + match_members, /* tp_members */ + match_getset, /* tp_getset */ }; static PyObject* @@ -3800,34 +3787,42 @@ {NULL, NULL} }; -static PyObject* -scanner_getattr(ScannerObject* self, char* name) -{ - PyObject* res; - - res = Py_FindMethod(scanner_methods, (PyObject*) self, name); - if (res) - return res; - - PyErr_Clear(); - - /* attributes */ - if (!strcmp(name, "pattern")) { - Py_INCREF(self->pattern); - return self->pattern; - } - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; -} +#define SCAN_OFF(x) offsetof(ScannerObject, x) +static PyMemberDef scanner_members[] = { + {"pattern", T_OBJECT, SCAN_OFF(pattern), READONLY}, + {NULL} /* Sentinel */ +}; 
statichere PyTypeObject Scanner_Type = { PyObject_HEAD_INIT(NULL) 0, "_" SRE_MODULE ".SRE_Scanner", sizeof(ScannerObject), 0, (destructor)scanner_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)scanner_getattr, /*tp_getattr*/ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_reserved */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + scanner_methods, /* tp_methods */ + scanner_members, /* tp_members */ + 0, /* tp_getset */ }; static PyObject* @@ -3879,8 +3874,9 @@ PyObject* x; /* Patch object types */ - Pattern_Type.ob_type = Match_Type.ob_type = - Scanner_Type.ob_type = &PyType_Type; + if (PyType_Ready(&Pattern_Type) || PyType_Ready(&Match_Type) || + PyType_Ready(&Scanner_Type)) + return; m = Py_InitModule("_" SRE_MODULE, _functions); if (m == NULL) diff --git a/pypy/module/cpyext/test/array.c b/pypy/module/cpyext/test/array.c --- a/pypy/module/cpyext/test/array.c +++ b/pypy/module/cpyext/test/array.c @@ -11,13 +11,10 @@ #include <stddef.h> #else /* !STDC_HEADERS */ #ifdef HAVE_SYS_TYPES_H -#include <sys/types.h> /* For size_t */ +#include <sys/types.h> /* For size_t */ #endif /* HAVE_SYS_TYPES_H */ #endif /* !STDC_HEADERS */ -#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) -#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) - struct arrayobject; /* Forward */ /* All possible arraydescr values are defined in the vector "descriptors" @@ -25,18 +22,18 @@ * functions aren't visible yet.
*/ struct arraydescr { - int typecode; - int itemsize; - PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); - int (*setitem)(struct arrayobject *, Py_ssize_t, PyObject *); + int typecode; + int itemsize; + PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); + int (*setitem)(struct arrayobject *, Py_ssize_t, PyObject *); }; typedef struct arrayobject { - PyObject_VAR_HEAD - char *ob_item; - Py_ssize_t allocated; - struct arraydescr *ob_descr; - PyObject *weakreflist; /* List of weak references */ + PyObject_VAR_HEAD + char *ob_item; + Py_ssize_t allocated; + struct arraydescr *ob_descr; + PyObject *weakreflist; /* List of weak references */ } arrayobject; static PyTypeObject Arraytype; @@ -47,49 +44,49 @@ static int array_resize(arrayobject *self, Py_ssize_t newsize) { - char *items; - size_t _new_size; + char *items; + size_t _new_size; - /* Bypass realloc() when a previous overallocation is large enough - to accommodate the newsize. If the newsize is 16 smaller than the - current size, then proceed with the realloc() to shrink the list. - */ + /* Bypass realloc() when a previous overallocation is large enough + to accommodate the newsize. If the newsize is 16 smaller than the + current size, then proceed with the realloc() to shrink the list. + */ - if (self->allocated >= newsize && - Py_SIZE(self) < newsize + 16 && - self->ob_item != NULL) { - Py_SIZE(self) = newsize; - return 0; - } + if (self->allocated >= newsize && + Py_SIZE(self) < newsize + 16 && + self->ob_item != NULL) { + Py_SIZE(self) = newsize; + return 0; + } - /* This over-allocates proportional to the array size, making room - * for additional growth. The over-allocation is mild, but is - * enough to give linear-time amortized behavior over a long - * sequence of appends() in the presence of a poorly-performing - * system realloc(). - * The growth pattern is: 0, 4, 8, 16, 25, 34, 46, 56, 67, 79, ... 
- * Note, the pattern starts out the same as for lists but then - * grows at a smaller rate so that larger arrays only overallocate - * by about 1/16th -- this is done because arrays are presumed to be more - * memory critical. - */ + /* This over-allocates proportional to the array size, making room + * for additional growth. The over-allocation is mild, but is + * enough to give linear-time amortized behavior over a long + * sequence of appends() in the presence of a poorly-performing + * system realloc(). + * The growth pattern is: 0, 4, 8, 16, 25, 34, 46, 56, 67, 79, ... + * Note, the pattern starts out the same as for lists but then + * grows at a smaller rate so that larger arrays only overallocate + * by about 1/16th -- this is done because arrays are presumed to be more + * memory critical. + */ - _new_size = (newsize >> 4) + (Py_SIZE(self) < 8 ? 3 : 7) + newsize; - items = self->ob_item; - /* XXX The following multiplication and division does not optimize away - like it does for lists since the size is not known at compile time */ - if (_new_size <= ((~(size_t)0) / self->ob_descr->itemsize)) - PyMem_RESIZE(items, char, (_new_size * self->ob_descr->itemsize)); - else - items = NULL; - if (items == NULL) { - PyErr_NoMemory(); - return -1; - } - self->ob_item = items; - Py_SIZE(self) = newsize; - self->allocated = _new_size; - return 0; + _new_size = (newsize >> 4) + (Py_SIZE(self) < 8 ? 
3 : 7) + newsize; + items = self->ob_item; + /* XXX The following multiplication and division does not optimize away + like it does for lists since the size is not known at compile time */ + if (_new_size <= ((~(size_t)0) / self->ob_descr->itemsize)) + PyMem_RESIZE(items, char, (_new_size * self->ob_descr->itemsize)); + else + items = NULL; + if (items == NULL) { + PyErr_NoMemory(); + return -1; + } + self->ob_item = items; + Py_SIZE(self) = newsize; + self->allocated = _new_size; + return 0; } /**************************************************************************** @@ -107,308 +104,308 @@ static PyObject * c_getitem(arrayobject *ap, Py_ssize_t i) { - return PyString_FromStringAndSize(&((char *)ap->ob_item)[i], 1); + return PyString_FromStringAndSize(&((char *)ap->ob_item)[i], 1); } static int c_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - char x; - if (!PyArg_Parse(v, "c;array item must be char", &x)) - return -1; - if (i >= 0) - ((char *)ap->ob_item)[i] = x; - return 0; + char x; + if (!PyArg_Parse(v, "c;array item must be char", &x)) + return -1; + if (i >= 0) + ((char *)ap->ob_item)[i] = x; + return 0; } static PyObject * b_getitem(arrayobject *ap, Py_ssize_t i) { - long x = ((char *)ap->ob_item)[i]; - if (x >= 128) - x -= 256; - return PyInt_FromLong(x); + long x = ((char *)ap->ob_item)[i]; + if (x >= 128) + x -= 256; + return PyInt_FromLong(x); } static int b_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - short x; - /* PyArg_Parse's 'b' formatter is for an unsigned char, therefore - must use the next size up that is signed ('h') and manually do - the overflow checking */ - if (!PyArg_Parse(v, "h;array item must be integer", &x)) - return -1; - else if (x < -128) { - PyErr_SetString(PyExc_OverflowError, - "signed char is less than minimum"); - return -1; - } - else if (x > 127) { - PyErr_SetString(PyExc_OverflowError, - "signed char is greater than maximum"); - return -1; - } - if (i >= 0) - ((char *)ap->ob_item)[i] = (char)x; - return 
0; + short x; + /* PyArg_Parse's 'b' formatter is for an unsigned char, therefore + must use the next size up that is signed ('h') and manually do + the overflow checking */ + if (!PyArg_Parse(v, "h;array item must be integer", &x)) + return -1; + else if (x < -128) { + PyErr_SetString(PyExc_OverflowError, + "signed char is less than minimum"); + return -1; + } + else if (x > 127) { + PyErr_SetString(PyExc_OverflowError, + "signed char is greater than maximum"); + return -1; + } + if (i >= 0) + ((char *)ap->ob_item)[i] = (char)x; + return 0; } static PyObject * BB_getitem(arrayobject *ap, Py_ssize_t i) { - long x = ((unsigned char *)ap->ob_item)[i]; - return PyInt_FromLong(x); + long x = ((unsigned char *)ap->ob_item)[i]; + return PyInt_FromLong(x); } static int BB_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned char x; - /* 'B' == unsigned char, maps to PyArg_Parse's 'b' formatter */ - if (!PyArg_Parse(v, "b;array item must be integer", &x)) - return -1; - if (i >= 0) - ((char *)ap->ob_item)[i] = x; - return 0; + unsigned char x; + /* 'B' == unsigned char, maps to PyArg_Parse's 'b' formatter */ + if (!PyArg_Parse(v, "b;array item must be integer", &x)) + return -1; + if (i >= 0) + ((char *)ap->ob_item)[i] = x; + return 0; } #ifdef Py_USING_UNICODE static PyObject * u_getitem(arrayobject *ap, Py_ssize_t i) { - return PyUnicode_FromUnicode(&((Py_UNICODE *) ap->ob_item)[i], 1); + return PyUnicode_FromUnicode(&((Py_UNICODE *) ap->ob_item)[i], 1); } static int u_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - Py_UNICODE *p; - Py_ssize_t len; + Py_UNICODE *p; + Py_ssize_t len; - if (!PyArg_Parse(v, "u#;array item must be unicode character", &p, &len)) - return -1; - if (len != 1) { - PyErr_SetString(PyExc_TypeError, - "array item must be unicode character"); - return -1; - } - if (i >= 0) - ((Py_UNICODE *)ap->ob_item)[i] = p[0]; - return 0; + if (!PyArg_Parse(v, "u#;array item must be unicode character", &p, &len)) + return -1; + if (len != 1) { 
+ PyErr_SetString(PyExc_TypeError, + "array item must be unicode character"); + return -1; + } + if (i >= 0) + ((Py_UNICODE *)ap->ob_item)[i] = p[0]; + return 0; } #endif static PyObject * h_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((short *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((short *)ap->ob_item)[i]); } static int h_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - short x; - /* 'h' == signed short, maps to PyArg_Parse's 'h' formatter */ - if (!PyArg_Parse(v, "h;array item must be integer", &x)) - return -1; - if (i >= 0) - ((short *)ap->ob_item)[i] = x; - return 0; + short x; + /* 'h' == signed short, maps to PyArg_Parse's 'h' formatter */ + if (!PyArg_Parse(v, "h;array item must be integer", &x)) + return -1; + if (i >= 0) + ((short *)ap->ob_item)[i] = x; + return 0; } static PyObject * HH_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((unsigned short *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((unsigned short *)ap->ob_item)[i]); } static int HH_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - int x; - /* PyArg_Parse's 'h' formatter is for a signed short, therefore - must use the next size up and manually do the overflow checking */ - if (!PyArg_Parse(v, "i;array item must be integer", &x)) - return -1; - else if (x < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned short is less than minimum"); - return -1; - } - else if (x > USHRT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned short is greater than maximum"); - return -1; - } - if (i >= 0) - ((short *)ap->ob_item)[i] = (short)x; - return 0; + int x; + /* PyArg_Parse's 'h' formatter is for a signed short, therefore + must use the next size up and manually do the overflow checking */ + if (!PyArg_Parse(v, "i;array item must be integer", &x)) + return -1; + else if (x < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned short is less than minimum"); + return -1; + } + else if (x > USHRT_MAX) { + 
PyErr_SetString(PyExc_OverflowError, + "unsigned short is greater than maximum"); + return -1; + } + if (i >= 0) + ((short *)ap->ob_item)[i] = (short)x; + return 0; } static PyObject * i_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((int *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((int *)ap->ob_item)[i]); } static int i_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - int x; - /* 'i' == signed int, maps to PyArg_Parse's 'i' formatter */ - if (!PyArg_Parse(v, "i;array item must be integer", &x)) - return -1; - if (i >= 0) - ((int *)ap->ob_item)[i] = x; - return 0; + int x; + /* 'i' == signed int, maps to PyArg_Parse's 'i' formatter */ + if (!PyArg_Parse(v, "i;array item must be integer", &x)) + return -1; + if (i >= 0) + ((int *)ap->ob_item)[i] = x; + return 0; } static PyObject * II_getitem(arrayobject *ap, Py_ssize_t i) { - return PyLong_FromUnsignedLong( - (unsigned long) ((unsigned int *)ap->ob_item)[i]); + return PyLong_FromUnsignedLong( + (unsigned long) ((unsigned int *)ap->ob_item)[i]); } static int II_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned long x; - if (PyLong_Check(v)) { - x = PyLong_AsUnsignedLong(v); - if (x == (unsigned long) -1 && PyErr_Occurred()) - return -1; - } - else { - long y; - if (!PyArg_Parse(v, "l;array item must be integer", &y)) - return -1; - if (y < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned int is less than minimum"); - return -1; - } - x = (unsigned long)y; + unsigned long x; + if (PyLong_Check(v)) { + x = PyLong_AsUnsignedLong(v); + if (x == (unsigned long) -1 && PyErr_Occurred()) + return -1; + } + else { + long y; + if (!PyArg_Parse(v, "l;array item must be integer", &y)) + return -1; + if (y < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned int is less than minimum"); + return -1; + } + x = (unsigned long)y; - } - if (x > UINT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned int is greater than maximum"); - return -1; - } + } + if (x > 
UINT_MAX) { + PyErr_SetString(PyExc_OverflowError, + "unsigned int is greater than maximum"); + return -1; + } - if (i >= 0) - ((unsigned int *)ap->ob_item)[i] = (unsigned int)x; - return 0; + if (i >= 0) + ((unsigned int *)ap->ob_item)[i] = (unsigned int)x; + return 0; } static PyObject * l_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong(((long *)ap->ob_item)[i]); + return PyInt_FromLong(((long *)ap->ob_item)[i]); } static int l_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - long x; - if (!PyArg_Parse(v, "l;array item must be integer", &x)) - return -1; - if (i >= 0) - ((long *)ap->ob_item)[i] = x; - return 0; + long x; + if (!PyArg_Parse(v, "l;array item must be integer", &x)) + return -1; + if (i >= 0) + ((long *)ap->ob_item)[i] = x; + return 0; } static PyObject * LL_getitem(arrayobject *ap, Py_ssize_t i) { - return PyLong_FromUnsignedLong(((unsigned long *)ap->ob_item)[i]); + return PyLong_FromUnsignedLong(((unsigned long *)ap->ob_item)[i]); } static int LL_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned long x; - if (PyLong_Check(v)) { - x = PyLong_AsUnsignedLong(v); - if (x == (unsigned long) -1 && PyErr_Occurred()) - return -1; - } - else { - long y; - if (!PyArg_Parse(v, "l;array item must be integer", &y)) - return -1; - if (y < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned long is less than minimum"); - return -1; - } - x = (unsigned long)y; + unsigned long x; + if (PyLong_Check(v)) { + x = PyLong_AsUnsignedLong(v); + if (x == (unsigned long) -1 && PyErr_Occurred()) + return -1; + } + else { + long y; + if (!PyArg_Parse(v, "l;array item must be integer", &y)) + return -1; + if (y < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned long is less than minimum"); + return -1; + } + x = (unsigned long)y; - } - if (x > ULONG_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned long is greater than maximum"); - return -1; - } + } + if (x > ULONG_MAX) { + PyErr_SetString(PyExc_OverflowError, + 
"unsigned long is greater than maximum"); + return -1; + } - if (i >= 0) - ((unsigned long *)ap->ob_item)[i] = x; - return 0; + if (i >= 0) + ((unsigned long *)ap->ob_item)[i] = x; + return 0; } static PyObject * f_getitem(arrayobject *ap, Py_ssize_t i) { - return PyFloat_FromDouble((double) ((float *)ap->ob_item)[i]); + return PyFloat_FromDouble((double) ((float *)ap->ob_item)[i]); } static int f_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - float x; - if (!PyArg_Parse(v, "f;array item must be float", &x)) - return -1; - if (i >= 0) - ((float *)ap->ob_item)[i] = x; - return 0; + float x; + if (!PyArg_Parse(v, "f;array item must be float", &x)) + return -1; + if (i >= 0) + ((float *)ap->ob_item)[i] = x; + return 0; } static PyObject * d_getitem(arrayobject *ap, Py_ssize_t i) { - return PyFloat_FromDouble(((double *)ap->ob_item)[i]); + return PyFloat_FromDouble(((double *)ap->ob_item)[i]); } static int d_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - double x; - if (!PyArg_Parse(v, "d;array item must be float", &x)) - return -1; - if (i >= 0) - ((double *)ap->ob_item)[i] = x; - return 0; + double x; + if (!PyArg_Parse(v, "d;array item must be float", &x)) + return -1; + if (i >= 0) + ((double *)ap->ob_item)[i] = x; + return 0; } /* Description of types */ static struct arraydescr descriptors[] = { - {'c', sizeof(char), c_getitem, c_setitem}, - {'b', sizeof(char), b_getitem, b_setitem}, - {'B', sizeof(char), BB_getitem, BB_setitem}, + {'c', sizeof(char), c_getitem, c_setitem}, + {'b', sizeof(char), b_getitem, b_setitem}, + {'B', sizeof(char), BB_getitem, BB_setitem}, #ifdef Py_USING_UNICODE - {'u', sizeof(Py_UNICODE), u_getitem, u_setitem}, + {'u', sizeof(Py_UNICODE), u_getitem, u_setitem}, #endif - {'h', sizeof(short), h_getitem, h_setitem}, - {'H', sizeof(short), HH_getitem, HH_setitem}, - {'i', sizeof(int), i_getitem, i_setitem}, - {'I', sizeof(int), II_getitem, II_setitem}, - {'l', sizeof(long), l_getitem, l_setitem}, - {'L', sizeof(long), 
LL_getitem, LL_setitem}, - {'f', sizeof(float), f_getitem, f_setitem}, - {'d', sizeof(double), d_getitem, d_setitem}, - {'\0', 0, 0, 0} /* Sentinel */ + {'h', sizeof(short), h_getitem, h_setitem}, + {'H', sizeof(short), HH_getitem, HH_setitem}, + {'i', sizeof(int), i_getitem, i_setitem}, + {'I', sizeof(int), II_getitem, II_setitem}, + {'l', sizeof(long), l_getitem, l_setitem}, + {'L', sizeof(long), LL_getitem, LL_setitem}, + {'f', sizeof(float), f_getitem, f_setitem}, + {'d', sizeof(double), d_getitem, d_setitem}, + {'\0', 0, 0, 0} /* Sentinel */ }; /**************************************************************************** @@ -418,78 +415,78 @@ static PyObject * newarrayobject(PyTypeObject *type, Py_ssize_t size, struct arraydescr *descr) { - arrayobject *op; - size_t nbytes; + arrayobject *op; + size_t nbytes; - if (size < 0) { - PyErr_BadInternalCall(); - return NULL; - } + if (size < 0) { + PyErr_BadInternalCall(); + return NULL; + } - nbytes = size * descr->itemsize; - /* Check for overflow */ - if (nbytes / descr->itemsize != (size_t)size) { - return PyErr_NoMemory(); - } - op = (arrayobject *) type->tp_alloc(type, 0); - if (op == NULL) { - return NULL; - } - op->ob_descr = descr; - op->allocated = size; - op->weakreflist = NULL; - Py_SIZE(op) = size; - if (size <= 0) { - op->ob_item = NULL; - } - else { - op->ob_item = PyMem_NEW(char, nbytes); - if (op->ob_item == NULL) { - Py_DECREF(op); - return PyErr_NoMemory(); - } - } - return (PyObject *) op; + nbytes = size * descr->itemsize; + /* Check for overflow */ + if (nbytes / descr->itemsize != (size_t)size) { + return PyErr_NoMemory(); + } + op = (arrayobject *) type->tp_alloc(type, 0); + if (op == NULL) { + return NULL; + } + op->ob_descr = descr; + op->allocated = size; + op->weakreflist = NULL; + Py_SIZE(op) = size; + if (size <= 0) { + op->ob_item = NULL; + } + else { + op->ob_item = PyMem_NEW(char, nbytes); + if (op->ob_item == NULL) { + Py_DECREF(op); + return PyErr_NoMemory(); + } + } + return 
(PyObject *) op; } static PyObject * getarrayitem(PyObject *op, Py_ssize_t i) { - register arrayobject *ap; - assert(array_Check(op)); - ap = (arrayobject *)op; - assert(i>=0 && i<Py_SIZE(ap)); - return (*ap->ob_descr->getitem)(ap, i); + register arrayobject *ap; + assert(array_Check(op)); + ap = (arrayobject *)op; + assert(i>=0 && i<Py_SIZE(ap)); + return (*ap->ob_descr->getitem)(ap, i); } static int ins1(arrayobject *self, Py_ssize_t where, PyObject *v) { - char *items; - Py_ssize_t n = Py_SIZE(self); - if (v == NULL) { - PyErr_BadInternalCall(); - return -1; - } - if ((*self->ob_descr->setitem)(self, -1, v) < 0) - return -1; + char *items; + Py_ssize_t n = Py_SIZE(self); + if (v == NULL) { + PyErr_BadInternalCall(); + return -1; + } + if ((*self->ob_descr->setitem)(self, -1, v) < 0) + return -1; - if (array_resize(self, n+1) == -1) - return -1; - items = self->ob_item; - if (where < 0) { - where += n; - if (where < 0) - where = 0; - } - if (where > n) - where = n; - /* appends don't need to call memmove() */ - if (where != n) - memmove(items + (where+1)*self->ob_descr->itemsize, - items + where*self->ob_descr->itemsize, - (n-where)*self->ob_descr->itemsize); - return (*self->ob_descr->setitem)(self, where, v); + if (array_resize(self, n+1) == -1) + return -1; + items = self->ob_item; + if (where < 0) { + where += n; + if (where < 0) + where = 0; + } + if (where > n) + where = n; + /* appends don't need to call memmove() */ + if (where != n) + memmove(items + (where+1)*self->ob_descr->itemsize, + items + where*self->ob_descr->itemsize, + (n-where)*self->ob_descr->itemsize); + return (*self->ob_descr->setitem)(self, where, v); } /* Methods */ @@ -497,141 +494,141 @@ static void array_dealloc(arrayobject *op) { - if (op->weakreflist != NULL) - PyObject_ClearWeakRefs((PyObject *) op); - if (op->ob_item != NULL) - PyMem_DEL(op->ob_item); - Py_TYPE(op)->tp_free((PyObject *)op); + if (op->weakreflist != NULL) + PyObject_ClearWeakRefs((PyObject *) op); + if (op->ob_item != NULL) + PyMem_DEL(op->ob_item); +
Py_TYPE(op)->tp_free((PyObject *)op); } static PyObject * array_richcompare(PyObject *v, PyObject *w, int op) { - arrayobject *va, *wa; - PyObject *vi = NULL; - PyObject *wi = NULL; - Py_ssize_t i, k; - PyObject *res; + arrayobject *va, *wa; + PyObject *vi = NULL; + PyObject *wi = NULL; + Py_ssize_t i, k; + PyObject *res; - if (!array_Check(v) || !array_Check(w)) { - Py_INCREF(Py_NotImplemented); - return Py_NotImplemented; - } + if (!array_Check(v) || !array_Check(w)) { + Py_INCREF(Py_NotImplemented); + return Py_NotImplemented; + } - va = (arrayobject *)v; - wa = (arrayobject *)w; + va = (arrayobject *)v; + wa = (arrayobject *)w; - if (Py_SIZE(va) != Py_SIZE(wa) && (op == Py_EQ || op == Py_NE)) { - /* Shortcut: if the lengths differ, the arrays differ */ - if (op == Py_EQ) - res = Py_False; - else - res = Py_True; - Py_INCREF(res); - return res; - } + if (Py_SIZE(va) != Py_SIZE(wa) && (op == Py_EQ || op == Py_NE)) { + /* Shortcut: if the lengths differ, the arrays differ */ + if (op == Py_EQ) + res = Py_False; + else + res = Py_True; + Py_INCREF(res); + return res; + } - /* Search for the first index where items are different */ - k = 1; - for (i = 0; i < Py_SIZE(va) && i < Py_SIZE(wa); i++) { - vi = getarrayitem(v, i); - wi = getarrayitem(w, i); - if (vi == NULL || wi == NULL) { - Py_XDECREF(vi); - Py_XDECREF(wi); - return NULL; - } - k = PyObject_RichCompareBool(vi, wi, Py_EQ); - if (k == 0) - break; /* Keeping vi and wi alive! */ - Py_DECREF(vi); - Py_DECREF(wi); - if (k < 0) - return NULL; - } + /* Search for the first index where items are different */ + k = 1; + for (i = 0; i < Py_SIZE(va) && i < Py_SIZE(wa); i++) { + vi = getarrayitem(v, i); + wi = getarrayitem(w, i); + if (vi == NULL || wi == NULL) { + Py_XDECREF(vi); + Py_XDECREF(wi); + return NULL; + } + k = PyObject_RichCompareBool(vi, wi, Py_EQ); + if (k == 0) + break; /* Keeping vi and wi alive! 
*/ + Py_DECREF(vi); + Py_DECREF(wi); + if (k < 0) + return NULL; + } - if (k) { - /* No more items to compare -- compare sizes */ - Py_ssize_t vs = Py_SIZE(va); - Py_ssize_t ws = Py_SIZE(wa); - int cmp; - switch (op) { - case Py_LT: cmp = vs < ws; break; - case Py_LE: cmp = vs <= ws; break; - case Py_EQ: cmp = vs == ws; break; - case Py_NE: cmp = vs != ws; break; - case Py_GT: cmp = vs > ws; break; - case Py_GE: cmp = vs >= ws; break; - default: return NULL; /* cannot happen */ - } - if (cmp) - res = Py_True; - else - res = Py_False; - Py_INCREF(res); - return res; - } + if (k) { + /* No more items to compare -- compare sizes */ + Py_ssize_t vs = Py_SIZE(va); + Py_ssize_t ws = Py_SIZE(wa); + int cmp; + switch (op) { + case Py_LT: cmp = vs < ws; break; + case Py_LE: cmp = vs <= ws; break; + case Py_EQ: cmp = vs == ws; break; + case Py_NE: cmp = vs != ws; break; + case Py_GT: cmp = vs > ws; break; + case Py_GE: cmp = vs >= ws; break; + default: return NULL; /* cannot happen */ + } + if (cmp) + res = Py_True; + else + res = Py_False; + Py_INCREF(res); + return res; + } - /* We have an item that differs. First, shortcuts for EQ/NE */ - if (op == Py_EQ) { - Py_INCREF(Py_False); - res = Py_False; - } - else if (op == Py_NE) { - Py_INCREF(Py_True); - res = Py_True; - } - else { - /* Compare the final item again using the proper operator */ - res = PyObject_RichCompare(vi, wi, op); - } - Py_DECREF(vi); - Py_DECREF(wi); - return res; + /* We have an item that differs. 
First, shortcuts for EQ/NE */ + if (op == Py_EQ) { + Py_INCREF(Py_False); + res = Py_False; + } + else if (op == Py_NE) { + Py_INCREF(Py_True); + res = Py_True; + } + else { + /* Compare the final item again using the proper operator */ + res = PyObject_RichCompare(vi, wi, op); + } + Py_DECREF(vi); + Py_DECREF(wi); + return res; } static Py_ssize_t array_length(arrayobject *a) { - return Py_SIZE(a); + return Py_SIZE(a); } static PyObject * array_item(arrayobject *a, Py_ssize_t i) { - if (i < 0 || i >= Py_SIZE(a)) { - PyErr_SetString(PyExc_IndexError, "array index out of range"); - return NULL; - } - return getarrayitem((PyObject *)a, i); + if (i < 0 || i >= Py_SIZE(a)) { + PyErr_SetString(PyExc_IndexError, "array index out of range"); + return NULL; + } + return getarrayitem((PyObject *)a, i); } static PyObject * array_slice(arrayobject *a, Py_ssize_t ilow, Py_ssize_t ihigh) { - arrayobject *np; - if (ilow < 0) - ilow = 0; - else if (ilow > Py_SIZE(a)) - ilow = Py_SIZE(a); - if (ihigh < 0) - ihigh = 0; - if (ihigh < ilow) - ihigh = ilow; - else if (ihigh > Py_SIZE(a)) - ihigh = Py_SIZE(a); - np = (arrayobject *) newarrayobject(&Arraytype, ihigh - ilow, a->ob_descr); - if (np == NULL) - return NULL; - memcpy(np->ob_item, a->ob_item + ilow * a->ob_descr->itemsize, - (ihigh-ilow) * a->ob_descr->itemsize); - return (PyObject *)np; + arrayobject *np; + if (ilow < 0) + ilow = 0; + else if (ilow > Py_SIZE(a)) + ilow = Py_SIZE(a); + if (ihigh < 0) + ihigh = 0; + if (ihigh < ilow) + ihigh = ilow; + else if (ihigh > Py_SIZE(a)) + ihigh = Py_SIZE(a); + np = (arrayobject *) newarrayobject(&Arraytype, ihigh - ilow, a->ob_descr); + if (np == NULL) + return NULL; + memcpy(np->ob_item, a->ob_item + ilow * a->ob_descr->itemsize, + (ihigh-ilow) * a->ob_descr->itemsize); + return (PyObject *)np; } static PyObject * array_copy(arrayobject *a, PyObject *unused) { - return array_slice(a, 0, Py_SIZE(a)); + return array_slice(a, 0, Py_SIZE(a)); } PyDoc_STRVAR(copy_doc, @@ -642,297 
+639,297 @@ static PyObject * array_concat(arrayobject *a, PyObject *bb) { - Py_ssize_t size; - arrayobject *np; - if (!array_Check(bb)) { - PyErr_Format(PyExc_TypeError, - "can only append array (not \"%.200s\") to array", - Py_TYPE(bb)->tp_name); - return NULL; - } + Py_ssize_t size; + arrayobject *np; + if (!array_Check(bb)) { + PyErr_Format(PyExc_TypeError, + "can only append array (not \"%.200s\") to array", + Py_TYPE(bb)->tp_name); + return NULL; + } #define b ((arrayobject *)bb) - if (a->ob_descr != b->ob_descr) { - PyErr_BadArgument(); - return NULL; - } - if (Py_SIZE(a) > PY_SSIZE_T_MAX - Py_SIZE(b)) { - return PyErr_NoMemory(); - } - size = Py_SIZE(a) + Py_SIZE(b); - np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); - if (np == NULL) { - return NULL; - } - memcpy(np->ob_item, a->ob_item, Py_SIZE(a)*a->ob_descr->itemsize); - memcpy(np->ob_item + Py_SIZE(a)*a->ob_descr->itemsize, - b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); - return (PyObject *)np; + if (a->ob_descr != b->ob_descr) { + PyErr_BadArgument(); + return NULL; + } + if (Py_SIZE(a) > PY_SSIZE_T_MAX - Py_SIZE(b)) { + return PyErr_NoMemory(); + } + size = Py_SIZE(a) + Py_SIZE(b); + np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); + if (np == NULL) { + return NULL; + } + memcpy(np->ob_item, a->ob_item, Py_SIZE(a)*a->ob_descr->itemsize); + memcpy(np->ob_item + Py_SIZE(a)*a->ob_descr->itemsize, + b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); + return (PyObject *)np; #undef b } static PyObject * array_repeat(arrayobject *a, Py_ssize_t n) { - Py_ssize_t i; - Py_ssize_t size; - arrayobject *np; - char *p; - Py_ssize_t nbytes; - if (n < 0) - n = 0; - if ((Py_SIZE(a) != 0) && (n > PY_SSIZE_T_MAX / Py_SIZE(a))) { - return PyErr_NoMemory(); - } - size = Py_SIZE(a) * n; - np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); - if (np == NULL) - return NULL; - p = np->ob_item; - nbytes = Py_SIZE(a) * a->ob_descr->itemsize; - for (i = 0; i < n; i++) { - 
memcpy(p, a->ob_item, nbytes); - p += nbytes; - } - return (PyObject *) np; + Py_ssize_t i; + Py_ssize_t size; + arrayobject *np; + char *p; + Py_ssize_t nbytes; + if (n < 0) + n = 0; + if ((Py_SIZE(a) != 0) && (n > PY_SSIZE_T_MAX / Py_SIZE(a))) { + return PyErr_NoMemory(); + } + size = Py_SIZE(a) * n; + np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); + if (np == NULL) + return NULL; + p = np->ob_item; + nbytes = Py_SIZE(a) * a->ob_descr->itemsize; + for (i = 0; i < n; i++) { + memcpy(p, a->ob_item, nbytes); + p += nbytes; + } + return (PyObject *) np; } static int array_ass_slice(arrayobject *a, Py_ssize_t ilow, Py_ssize_t ihigh, PyObject *v) { - char *item; - Py_ssize_t n; /* Size of replacement array */ - Py_ssize_t d; /* Change in size */ + char *item; + Py_ssize_t n; /* Size of replacement array */ + Py_ssize_t d; /* Change in size */ #define b ((arrayobject *)v) - if (v == NULL) - n = 0; - else if (array_Check(v)) { - n = Py_SIZE(b); - if (a == b) { - /* Special case "a[i:j] = a" -- copy b first */ - int ret; - v = array_slice(b, 0, n); - if (!v) - return -1; - ret = array_ass_slice(a, ilow, ihigh, v); - Py_DECREF(v); - return ret; - } - if (b->ob_descr != a->ob_descr) { - PyErr_BadArgument(); - return -1; - } - } - else { - PyErr_Format(PyExc_TypeError, - "can only assign array (not \"%.200s\") to array slice", - Py_TYPE(v)->tp_name); - return -1; - } - if (ilow < 0) - ilow = 0; - else if (ilow > Py_SIZE(a)) - ilow = Py_SIZE(a); - if (ihigh < 0) - ihigh = 0; - if (ihigh < ilow) - ihigh = ilow; - else if (ihigh > Py_SIZE(a)) - ihigh = Py_SIZE(a); - item = a->ob_item; - d = n - (ihigh-ilow); - if (d < 0) { /* Delete -d items */ - memmove(item + (ihigh+d)*a->ob_descr->itemsize, - item + ihigh*a->ob_descr->itemsize, - (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); - Py_SIZE(a) += d; - PyMem_RESIZE(item, char, Py_SIZE(a)*a->ob_descr->itemsize); - /* Can't fail */ - a->ob_item = item; - a->allocated = Py_SIZE(a); - } - else if (d > 0) { /* Insert d 
items */ - PyMem_RESIZE(item, char, - (Py_SIZE(a) + d)*a->ob_descr->itemsize); - if (item == NULL) { - PyErr_NoMemory(); - return -1; - } - memmove(item + (ihigh+d)*a->ob_descr->itemsize, - item + ihigh*a->ob_descr->itemsize, - (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); - a->ob_item = item; - Py_SIZE(a) += d; - a->allocated = Py_SIZE(a); - } - if (n > 0) - memcpy(item + ilow*a->ob_descr->itemsize, b->ob_item, - n*b->ob_descr->itemsize); - return 0; + if (v == NULL) + n = 0; + else if (array_Check(v)) { + n = Py_SIZE(b); + if (a == b) { + /* Special case "a[i:j] = a" -- copy b first */ + int ret; + v = array_slice(b, 0, n); + if (!v) + return -1; + ret = array_ass_slice(a, ilow, ihigh, v); + Py_DECREF(v); + return ret; + } + if (b->ob_descr != a->ob_descr) { + PyErr_BadArgument(); + return -1; + } + } + else { + PyErr_Format(PyExc_TypeError, + "can only assign array (not \"%.200s\") to array slice", + Py_TYPE(v)->tp_name); + return -1; + } + if (ilow < 0) + ilow = 0; + else if (ilow > Py_SIZE(a)) + ilow = Py_SIZE(a); + if (ihigh < 0) + ihigh = 0; + if (ihigh < ilow) + ihigh = ilow; + else if (ihigh > Py_SIZE(a)) + ihigh = Py_SIZE(a); + item = a->ob_item; + d = n - (ihigh-ilow); + if (d < 0) { /* Delete -d items */ + memmove(item + (ihigh+d)*a->ob_descr->itemsize, + item + ihigh*a->ob_descr->itemsize, + (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); + Py_SIZE(a) += d; + PyMem_RESIZE(item, char, Py_SIZE(a)*a->ob_descr->itemsize); + /* Can't fail */ + a->ob_item = item; + a->allocated = Py_SIZE(a); + } + else if (d > 0) { /* Insert d items */ + PyMem_RESIZE(item, char, + (Py_SIZE(a) + d)*a->ob_descr->itemsize); + if (item == NULL) { + PyErr_NoMemory(); + return -1; + } + memmove(item + (ihigh+d)*a->ob_descr->itemsize, + item + ihigh*a->ob_descr->itemsize, + (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); + a->ob_item = item; + Py_SIZE(a) += d; + a->allocated = Py_SIZE(a); + } + if (n > 0) + memcpy(item + ilow*a->ob_descr->itemsize, b->ob_item, + n*b->ob_descr->itemsize); + 
return 0; #undef b } static int array_ass_item(arrayobject *a, Py_ssize_t i, PyObject *v) { - if (i < 0 || i >= Py_SIZE(a)) { - PyErr_SetString(PyExc_IndexError, - "array assignment index out of range"); - return -1; - } - if (v == NULL) - return array_ass_slice(a, i, i+1, v); - return (*a->ob_descr->setitem)(a, i, v); + if (i < 0 || i >= Py_SIZE(a)) { + PyErr_SetString(PyExc_IndexError, + "array assignment index out of range"); + return -1; + } + if (v == NULL) + return array_ass_slice(a, i, i+1, v); + return (*a->ob_descr->setitem)(a, i, v); From noreply at buildbot.pypy.org Wed May 2 18:37:38 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 2 May 2012 18:37:38 +0200 (CEST) Subject: [pypy-commit] pypy py3k: the re.UNICODE flags is by default now. Disable it explicitly for the test Message-ID: <20120502163738.5771082F50@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r54875:b1fd67622c9e Date: 2012-05-02 18:28 +0200 http://bitbucket.org/pypy/pypy/changeset/b1fd67622c9e/ Log: the re.UNICODE flags is by default now. 
Disable it explicitly for the test diff --git a/pypy/module/_sre/test/test_app_sre.py b/pypy/module/_sre/test/test_app_sre.py --- a/pypy/module/_sre/test/test_app_sre.py +++ b/pypy/module/_sre/test/test_app_sre.py @@ -373,12 +373,12 @@ def test_search_simple_boundaries(self): import re - UPPER_PI = u"\u03a0" + UPPER_PI = "\u03a0" assert re.search(r"bla\b", "bla") assert re.search(r"bla\b", "bla ja") - assert re.search(r"bla\b", u"bla%s" % UPPER_PI) + assert re.search(r"bla\b", "bla%s" % UPPER_PI, re.ASCII) assert not re.search(r"bla\b", "blano") - assert not re.search(r"bla\b", u"bla%s" % UPPER_PI, re.UNICODE) + assert not re.search(r"bla\b", "bla%s" % UPPER_PI, re.UNICODE) def test_search_simple_categories(self): import re From noreply at buildbot.pypy.org Wed May 2 18:37:39 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 2 May 2012 18:37:39 +0200 (CEST) Subject: [pypy-commit] pypy py3k: py3k-ify by killing the u'' string prefix Message-ID: <20120502163739.D433F82F51@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r54876:cff46a1c3088 Date: 2012-05-02 18:37 +0200 http://bitbucket.org/pypy/pypy/changeset/cff46a1c3088/ Log: py3k-ify by killing the u'' string prefix diff --git a/pypy/module/_sre/test/test_app_sre.py b/pypy/module/_sre/test/test_app_sre.py --- a/pypy/module/_sre/test/test_app_sre.py +++ b/pypy/module/_sre/test/test_app_sre.py @@ -189,13 +189,13 @@ def test_sub_unicode(self): import re - assert isinstance(re.sub(u"a", u"b", u""), str) + assert isinstance(re.sub("a", "b", ""), str) # the input is returned unmodified if no substitution is performed, # which (if interpreted literally, as CPython does) gives the # following strangeish rules: - assert isinstance(re.sub(u"a", u"b", "diwoiioamoi"), str) - assert isinstance(re.sub(u"a", u"b", b"diwoiiobmoi"), bytes) - assert isinstance(re.sub(u'x', b'y', b'x'), bytes) + assert isinstance(re.sub("a", "b", "diwoiioamoi"), str) + assert isinstance(re.sub("a", "b", 
b"diwoiiobmoi"), bytes) + assert isinstance(re.sub('x', b'y', b'x'), bytes) def test_sub_callable(self): import re @@ -382,17 +382,17 @@ def test_search_simple_categories(self): import re - LOWER_PI = u"\u03c0" - INDIAN_DIGIT = u"\u0966" - EM_SPACE = u"\u2001" + LOWER_PI = "\u03c0" + INDIAN_DIGIT = "\u0966" + EM_SPACE = "\u2001" LOWER_AE = "\xe4" assert re.search(r"bla\d\s\w", "bla3 b") - assert re.search(r"b\d", u"b%s" % INDIAN_DIGIT, re.UNICODE) - assert not re.search(r"b\D", u"b%s" % INDIAN_DIGIT, re.UNICODE) - assert re.search(r"b\s", u"b%s" % EM_SPACE, re.UNICODE) - assert not re.search(r"b\S", u"b%s" % EM_SPACE, re.UNICODE) - assert re.search(r"b\w", u"b%s" % LOWER_PI, re.UNICODE) - assert not re.search(r"b\W", u"b%s" % LOWER_PI, re.UNICODE) + assert re.search(r"b\d", "b%s" % INDIAN_DIGIT, re.UNICODE) + assert not re.search(r"b\D", "b%s" % INDIAN_DIGIT, re.UNICODE) + assert re.search(r"b\s", "b%s" % EM_SPACE, re.UNICODE) + assert not re.search(r"b\S", "b%s" % EM_SPACE, re.UNICODE) + assert re.search(r"b\w", "b%s" % LOWER_PI, re.UNICODE) + assert not re.search(r"b\W", "b%s" % LOWER_PI, re.UNICODE) assert re.search(r"b\w", "b%s" % LOWER_AE, re.UNICODE) def test_search_simple_any(self): @@ -403,38 +403,38 @@ def test_search_simple_in(self): import re - UPPER_PI = u"\u03a0" - LOWER_PI = u"\u03c0" - EM_SPACE = u"\u2001" - LINE_SEP = u"\u2028" + UPPER_PI = "\u03a0" + LOWER_PI = "\u03c0" + EM_SPACE = "\u2001" + LINE_SEP = "\u2028" assert re.search(r"b[\da-z]a", "bb1a") assert re.search(r"b[\da-z]a", "bbsa") assert not re.search(r"b[\da-z]a", "bbSa") assert re.search(r"b[^okd]a", "bsa") assert not re.search(r"b[^okd]a", "bda") - assert re.search(u"b[%s%s%s]a" % (LOWER_PI, UPPER_PI, EM_SPACE), - u"b%sa" % UPPER_PI) # bigcharset - assert re.search(u"b[%s%s%s]a" % (LOWER_PI, UPPER_PI, EM_SPACE), - u"b%sa" % EM_SPACE) - assert not re.search(u"b[%s%s%s]a" % (LOWER_PI, UPPER_PI, EM_SPACE), - u"b%sa" % LINE_SEP) + assert re.search("b[%s%s%s]a" % (LOWER_PI, UPPER_PI, 
EM_SPACE), + "b%sa" % UPPER_PI) # bigcharset + assert re.search("b[%s%s%s]a" % (LOWER_PI, UPPER_PI, EM_SPACE), + "b%sa" % EM_SPACE) + assert not re.search("b[%s%s%s]a" % (LOWER_PI, UPPER_PI, EM_SPACE), + "b%sa" % LINE_SEP) def test_search_simple_literal_ignore(self): import re - UPPER_PI = u"\u03a0" - LOWER_PI = u"\u03c0" + UPPER_PI = "\u03a0" + LOWER_PI = "\u03c0" assert re.search(r"ba", "ba", re.IGNORECASE) assert re.search(r"ba", "BA", re.IGNORECASE) - assert re.search(u"b%s" % UPPER_PI, u"B%s" % LOWER_PI, + assert re.search("b%s" % UPPER_PI, "B%s" % LOWER_PI, re.IGNORECASE | re.UNICODE) def test_search_simple_in_ignore(self): import re - UPPER_PI = u"\u03a0" - LOWER_PI = u"\u03c0" + UPPER_PI = "\u03a0" + LOWER_PI = "\u03c0" assert re.search(r"ba[A-C]", "bac", re.IGNORECASE) assert re.search(r"ba[a-c]", "baB", re.IGNORECASE) - assert re.search(u"ba[%s]" % UPPER_PI, "ba%s" % LOWER_PI, + assert re.search("ba[%s]" % UPPER_PI, "ba%s" % LOWER_PI, re.IGNORECASE | re.UNICODE) assert re.search(r"ba[^A-C]", "bar", re.IGNORECASE) assert not re.search(r"ba[^A-C]", "baA", re.IGNORECASE) @@ -496,13 +496,13 @@ def test_search_simple_groupref(self): import re - UPPER_PI = u"\u03a0" - LOWER_PI = u"\u03c0" + UPPER_PI = "\u03a0" + LOWER_PI = "\u03c0" assert re.match(r"((ab)+)c\1", "ababcabab") assert not re.match(r"((ab)+)c\1", "ababcab") assert not re.search(r"(a|(b))\2", "aa") assert re.match(r"((ab)+)c\1", "aBAbcAbaB", re.IGNORECASE) - assert re.match(r"((a.)+)c\1", u"a%sca%s" % (UPPER_PI, LOWER_PI), + assert re.match(r"((a.)+)c\1", "a%sca%s" % (UPPER_PI, LOWER_PI), re.IGNORECASE | re.UNICODE) def test_search_simple_groupref_exists(self): From noreply at buildbot.pypy.org Wed May 2 23:17:47 2012 From: noreply at buildbot.pypy.org (wlav) Date: Wed, 2 May 2012 23:17:47 +0200 (CEST) Subject: [pypy-commit] pypy reflex-support: prevent a class of CINT-specific race conditions Message-ID: <20120502211747.153EE82009@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: 
reflex-support Changeset: r54877:ff98315fab0d Date: 2012-04-30 17:30 -0700 http://bitbucket.org/pypy/pypy/changeset/ff98315fab0d/ Log: prevent a class of CINT-specific race conditions diff --git a/pypy/module/cppyy/src/cintcwrapper.cxx b/pypy/module/cppyy/src/cintcwrapper.cxx --- a/pypy/module/cppyy/src/cintcwrapper.cxx +++ b/pypy/module/cppyy/src/cintcwrapper.cxx @@ -36,6 +36,10 @@ extern "C" void G__LockCriticalSection(); extern "C" void G__UnlockCriticalSection(); +#define G__SETMEMFUNCENV (long)0x7fff0035 +#define G__NOP (long)0x7fff00ff + + /* ROOT meta internals ---------------------------------------------------- */ namespace { @@ -84,7 +88,7 @@ static GlobalVars_t g_globalvars; -/* initialization of th ROOT system (debatable ... ) ---------------------- */ +/* initialization of the ROOT system (debatable ... ) --------------------- */ namespace { class TCppyyApplication : public TApplication { @@ -288,7 +292,9 @@ G__setnull(&result); G__LockCriticalSection(); // is recursive lock - + long index = (long)&method; + G__CurrentCall(G__SETMEMFUNCENV, 0, &index); + // TODO: access to store_struct_offset won't work on Windows long store_struct_offset = G__store_struct_offset; if (self) @@ -302,6 +308,7 @@ if (G__get_return(0) > G__RETURN_NORMAL) G__security_recover(0); // 0 ensures silence + G__CurrentCall(G__NOP, 0, 0); G__UnlockCriticalSection(); return result; From noreply at buildbot.pypy.org Wed May 2 23:17:48 2012 From: noreply at buildbot.pypy.org (wlav) Date: Wed, 2 May 2012 23:17:48 +0200 (CEST) Subject: [pypy-commit] pypy reflex-support: merge default into branch Message-ID: <20120502211748.99A3182009@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r54878:3f1d1ccdb3f5 Date: 2012-05-01 11:13 -0700 http://bitbucket.org/pypy/pypy/changeset/3f1d1ccdb3f5/ Log: merge default into branch diff --git a/lib-python/modified-2.7/test/test_peepholer.py b/lib-python/modified-2.7/test/test_peepholer.py --- 
a/lib-python/modified-2.7/test/test_peepholer.py +++ b/lib-python/modified-2.7/test/test_peepholer.py @@ -145,12 +145,15 @@ def test_binary_subscr_on_unicode(self): # valid code get optimized - asm = dis_single('u"foo"[0]') - self.assertIn("(u'f')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) - asm = dis_single('u"\u0061\uffff"[1]') - self.assertIn("(u'\\uffff')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) + # XXX for now we always disable this optimization + # XXX see CPython's issue5057 + if 0: + asm = dis_single('u"foo"[0]') + self.assertIn("(u'f')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + asm = dis_single('u"\u0061\uffff"[1]') + self.assertIn("(u'\\uffff')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) # invalid code doesn't get optimized # out of range diff --git a/pypy/doc/cppyy.rst b/pypy/doc/cppyy.rst --- a/pypy/doc/cppyy.rst +++ b/pypy/doc/cppyy.rst @@ -21,6 +21,26 @@ .. _`llvm`: http://llvm.org/ +Motivation +========== + +The cppyy module offers two unique features, which result in great +performance as well as better functionality and cross-language integration +than would otherwise be possible. +First, cppyy is written in RPython and therefore open to optimizations by the +JIT up until the actual point of call into C++. +This means that there are no conversions necessary between a garbage collected +and a reference counted environment, as is needed for the use of existing +extension modules written or generated for CPython. +It also means that if variables are already unboxed by the JIT, they can be +passed through directly to C++. +Second, Reflex (and cling far more so) adds dynamic features to C++, thus +greatly reducing impedance mismatches between the two languages. +In fact, Reflex is dynamic enough that you could write the runtime bindings +generation in python (as opposed to RPython) and this is used to create very +natural "pythonizations" of the bound code. 
+ + Installation ============ @@ -195,10 +215,12 @@ >>>> d = cppyy.gbl.BaseFactory("name", 42, 3.14) >>>> type(d) - >>>> d.m_i - 42 - >>>> d.m_d - 3.14 + >>>> isinstance(d, cppyy.gbl.Base1) + True + >>>> isinstance(d, cppyy.gbl.Base2) + True + >>>> d.m_i, d.m_d + (42, 3.14) >>>> d.m_name == "name" True >>>> @@ -295,6 +317,9 @@ To select a specific virtual method, do like with normal python classes that override methods: select it from the class that you need, rather than calling the method on the instance. + To select a specific overload, use the __dispatch__ special function, which + takes the name of the desired method and its signature (which can be + obtained from the doc string) as arguments. * **namespaces**: Are represented as python classes. Namespaces are more open-ended than classes, so sometimes initial access may diff --git a/pypy/doc/extending.rst b/pypy/doc/extending.rst --- a/pypy/doc/extending.rst +++ b/pypy/doc/extending.rst @@ -116,13 +116,21 @@ Reflex ====== -This method is only experimental for now, and is being exercised on a branch, -`reflex-support`_, so you will have to build PyPy yourself. +This method is still experimental and is being exercised on a branch, +`reflex-support`_, which adds the `cppyy`_ module. The method works by using the `Reflex package`_ to provide reflection information of the C++ code, which is then used to automatically generate -bindings at runtime, which can then be used from python. +bindings at runtime. +From a python standpoint, there is no difference between generating bindings +at runtime, or having them "statically" generated and available in scripts +or compiled into extension modules: python classes and functions are always +runtime structures, created when a script or module loads. +However, if the backend itself is capable of dynamic behavior, it is a much +better functional match to python, allowing tighter integration and more +natural language mappings. Full details are `available here`_. +.. 
_`cppyy`: cppyy.html .. _`reflex-support`: cppyy.html .. _`Reflex package`: http://root.cern.ch/drupal/content/reflex .. _`available here`: cppyy.html @@ -130,16 +138,33 @@ Pros ---- -If it works, it is mostly automatic, and hence easy in use. -The bindings can make use of direct pointers, in which case the calls are -very fast. +The cppyy module is written in RPython, which makes it possible to keep the +code execution visible to the JIT all the way to the actual point of call into +C++, thus allowing for a very fast interface. +Reflex is currently in use in large software environments in High Energy +Physics (HEP), across many different projects and packages, and its use can be +virtually completely automated in a production environment. +One of its uses in HEP is in providing language bindings for CPython. +Thus, it is possible to use Reflex to have bound code work on both CPython and +on PyPy. +In the medium-term, Reflex will be replaced by `cling`_, which is based on +`llvm`_. +This will affect the backend only; the python-side interface is expected to +remain the same, except that cling adds a lot of dynamic behavior to C++, +enabling further language integration. + +.. _`cling`: http://root.cern.ch/drupal/content/cling +.. _`llvm`: http://llvm.org/ Cons ---- -C++ is a large language, and these bindings are not yet feature-complete. -Although missing features should do no harm if you don't use them, if you do -need a particular feature, it may be necessary to work around it in python -or with a C++ helper function. +C++ is a large language, and cppyy is not yet feature-complete. +Still, the experience gained in developing the equivalent bindings for CPython +means that adding missing features is a simple matter of engineering, not a +question of research. 
+The module is written so that currently missing features should do no harm if +you don't use them, if you do need a particular feature, it may be necessary +to work around it in python or with a C++ helper function. Although Reflex works on various platforms, the bindings with PyPy have only been tested on Linux. diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ -304,14 +304,19 @@ # produce compatible pycs. if (self.space.isinstance_w(w_obj, self.space.w_unicode) and self.space.isinstance_w(w_const, self.space.w_unicode)): - unistr = self.space.unicode_w(w_const) - if len(unistr) == 1: - ch = ord(unistr[0]) - else: - ch = 0 - if (ch > 0xFFFF or - (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): - return subs + #unistr = self.space.unicode_w(w_const) + #if len(unistr) == 1: + # ch = ord(unistr[0]) + #else: + # ch = 0 + #if (ch > 0xFFFF or + # (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): + # --XXX-- for now we always disable optimization of + # u'...'[constant] because the tests above are not + # enough to fix issue5057 (CPython has the same + # problem as of April 24, 2012). + # See test_const_fold_unicode_subscr + return subs return ast.Const(w_const, subs.lineno, subs.col_offset) diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -844,7 +844,8 @@ return u"abc"[0] """ counts = self.count_instructions(source) - assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + if 0: # xxx later? 
+ assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} # getitem outside of the BMP should not be optimized source = """def f(): @@ -854,12 +855,20 @@ assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, ops.RETURN_VALUE: 1} + source = """def f(): + return u"\U00012345abcdef"[3] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, + ops.RETURN_VALUE: 1} + monkeypatch.setattr(optimize, "MAXUNICODE", 0xFFFF) source = """def f(): return u"\uE01F"[0] """ counts = self.count_instructions(source) - assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + if 0: # xxx later? + assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} monkeypatch.undo() # getslice is not yet optimized. diff --git a/pypy/jit/backend/llsupport/asmmemmgr.py b/pypy/jit/backend/llsupport/asmmemmgr.py --- a/pypy/jit/backend/llsupport/asmmemmgr.py +++ b/pypy/jit/backend/llsupport/asmmemmgr.py @@ -277,6 +277,8 @@ from pypy.jit.backend.hlinfo import highleveljitinfo if highleveljitinfo.sys_executable: debug_print('SYS_EXECUTABLE', highleveljitinfo.sys_executable) + else: + debug_print('SYS_EXECUTABLE', '??') # HEX = '0123456789ABCDEF' dump = [] diff --git a/pypy/jit/backend/llsupport/test/test_asmmemmgr.py b/pypy/jit/backend/llsupport/test/test_asmmemmgr.py --- a/pypy/jit/backend/llsupport/test/test_asmmemmgr.py +++ b/pypy/jit/backend/llsupport/test/test_asmmemmgr.py @@ -217,7 +217,8 @@ encoded = ''.join(writtencode).encode('hex').upper() ataddr = '@%x' % addr assert log == [('test-logname-section', - [('debug_print', 'CODE_DUMP', ataddr, '+0 ', encoded)])] + [('debug_print', 'SYS_EXECUTABLE', '??'), + ('debug_print', 'CODE_DUMP', ataddr, '+0 ', encoded)])] lltype.free(p, flavor='raw') diff --git a/pypy/jit/metainterp/jitexc.py b/pypy/jit/metainterp/jitexc.py --- a/pypy/jit/metainterp/jitexc.py +++ b/pypy/jit/metainterp/jitexc.py @@ -12,7 +12,6 @@ """ _go_through_llinterp_uncaught_ = True # ugh - def _get_standard_error(rtyper, 
Class): exdata = rtyper.getexceptiondata() clsdef = rtyper.annotator.bookkeeper.getuniqueclassdef(Class) diff --git a/pypy/jit/metainterp/optimize.py b/pypy/jit/metainterp/optimize.py --- a/pypy/jit/metainterp/optimize.py +++ b/pypy/jit/metainterp/optimize.py @@ -5,3 +5,9 @@ """Raised when the optimize*.py detect that the loop that we are trying to build cannot possibly make sense as a long-running loop (e.g. it cannot run 2 complete iterations).""" + + def __init__(self, msg='?'): + debug_start("jit-abort") + debug_print(msg) + debug_stop("jit-abort") + self.msg = msg diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -49,7 +49,8 @@ optimizations.append(OptFfiCall()) if ('rewrite' not in enable_opts or 'virtualize' not in enable_opts - or 'heap' not in enable_opts or 'unroll' not in enable_opts): + or 'heap' not in enable_opts or 'unroll' not in enable_opts + or 'pure' not in enable_opts): optimizations.append(OptSimplify()) return optimizations, unroll diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -257,8 +257,8 @@ opnum == rop.COPYSTRCONTENT or # no effect on GC struct/array opnum == rop.COPYUNICODECONTENT): # no effect on GC struct/array return - assert opnum != rop.CALL_PURE if (opnum == rop.CALL or + opnum == rop.CALL_PURE or opnum == rop.CALL_MAY_FORCE or opnum == rop.CALL_RELEASE_GIL or opnum == rop.CALL_ASSEMBLER): @@ -481,7 +481,7 @@ # already between the tracing and now. In this case, we are # simply ignoring the QUASIIMMUT_FIELD hint and compiling it # as a regular getfield. 
- if not qmutdescr.is_still_valid(): + if not qmutdescr.is_still_valid_for(structvalue.get_key_box()): self._remove_guard_not_invalidated = True return # record as an out-of-line guard diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -191,10 +191,13 @@ # GUARD_OVERFLOW, then the loop is invalid. lastop = self.last_emitted_operation if lastop is None: - raise InvalidLoop + raise InvalidLoop('An INT_xxx_OVF was proven not to overflow but' + + 'guarded with GUARD_OVERFLOW') opnum = lastop.getopnum() if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF): - raise InvalidLoop + raise InvalidLoop('An INT_xxx_OVF was proven not to overflow but' + + 'guarded with GUARD_OVERFLOW') + self.emit_operation(op) def optimize_INT_ADD_OVF(self, op): diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -525,6 +525,7 @@ @specialize.argtype(0) def _emit_operation(self, op): + assert op.getopnum() != rop.CALL_PURE for i in range(op.numargs()): arg = op.getarg(i) try: diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -208,7 +208,8 @@ box = value.box assert isinstance(box, Const) if not box.same_constant(constbox): - raise InvalidLoop + raise InvalidLoop('A GURAD_{VALUE,TRUE,FALSE} was proven to' + + 'always fail') return if emit_operation: self.emit_operation(op) @@ -220,7 +221,7 @@ if value.is_null(): return elif value.is_nonnull(): - raise InvalidLoop + raise InvalidLoop('A GUARD_ISNULL was proven to always fail') self.emit_operation(op) value.make_constant(self.optimizer.cpu.ts.CONST_NULL) @@ -229,7 +230,7 @@ 
if value.is_nonnull(): return elif value.is_null(): - raise InvalidLoop + raise InvalidLoop('A GUARD_NONNULL was proven to always fail') self.emit_operation(op) value.make_nonnull(op) @@ -278,7 +279,7 @@ if realclassbox is not None: if realclassbox.same_constant(expectedclassbox): return - raise InvalidLoop + raise InvalidLoop('A GUARD_CLASS was proven to always fail') if value.last_guard: # there already has been a guard_nonnull or guard_class or # guard_nonnull_class on this value. @@ -301,7 +302,8 @@ def optimize_GUARD_NONNULL_CLASS(self, op): value = self.getvalue(op.getarg(0)) if value.is_null(): - raise InvalidLoop + raise InvalidLoop('A GUARD_NONNULL_CLASS was proven to always ' + + 'fail') self.optimize_GUARD_CLASS(op) def optimize_CALL_LOOPINVARIANT(self, op): diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -5090,7 +5090,6 @@ class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass - ##class TestOOtype(BaseTestOptimizeBasic, OOtypeMixin): ## def test_instanceof(self): diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -6533,9 +6533,9 @@ def test_quasi_immut_2(self): ops = """ [] - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i1 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i1 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) escape(i1) jump() """ @@ -6585,13 +6585,13 @@ def test_call_may_force_invalidated_guards_reload(self): ops = """ [i0a, i0b] - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + 
quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i1 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i1 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) call_may_force(i0b, descr=mayforcevirtdescr) - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i2 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i2 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) i3 = escape(i1) i4 = escape(i2) jump(i3, i4) @@ -7813,6 +7813,52 @@ """ self.optimize_loop(ops, expected) + def test_issue1080_infinite_loop_virtual(self): + ops = """ + [p10] + p52 = getfield_gc(p10, descr=nextdescr) # inst_storage + p54 = getarrayitem_gc(p52, 0, descr=arraydescr) + p69 = getfield_gc_pure(p54, descr=otherdescr) # inst_w_function + + quasiimmut_field(p69, descr=quasiimmutdescr) + guard_not_invalidated() [] + p71 = getfield_gc(p69, descr=quasifielddescr) # inst_code + guard_value(p71, -4247) [] + + p106 = new_with_vtable(ConstClass(node_vtable)) + p108 = new_array(3, descr=arraydescr) + p110 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p110, ConstPtr(myptr2), descr=otherdescr) # inst_w_function + setarrayitem_gc(p108, 0, p110, descr=arraydescr) + setfield_gc(p106, p108, descr=nextdescr) # inst_storage + jump(p106) + """ + expected = """ + [] + p72 = getfield_gc(ConstPtr(myptr2), descr=quasifielddescr) + guard_value(p72, -4247) [] + jump() + """ + self.optimize_loop(ops, expected) + + + def test_issue1080_infinite_loop_simple(self): + ops = """ + [p69] + quasiimmut_field(p69, descr=quasiimmutdescr) + guard_not_invalidated() [] + p71 = getfield_gc(p69, descr=quasifielddescr) # inst_code + guard_value(p71, -4247) [] + jump(ConstPtr(myptr)) + """ + expected = """ + [] + p72 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + guard_value(p72, -4247) [] + jump() + """ + self.optimize_loop(ops, expected) + class 
TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -122,6 +122,7 @@ quasi.inst_field = -4247 quasifielddescr = cpu.fielddescrof(QUASI, 'inst_field') quasibox = BoxPtr(lltype.cast_opaque_ptr(llmemory.GCREF, quasi)) + quasiptr = quasibox.value quasiimmutdescr = QuasiImmutDescr(cpu, quasibox, quasifielddescr, cpu.fielddescrof(QUASI, 'mutate_field')) diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -315,7 +315,10 @@ try: jumpargs = virtual_state.make_inputargs(values, self.optimizer) except BadVirtualState: - raise InvalidLoop + raise InvalidLoop('The state of the optimizer at the end of the ' + + 'peeled loop is inconsistent with the ' + + 'VirtualState at the beginning of the peeled ' + + 'loop') jumpop.initarglist(jumpargs) # Inline the short preamble at the end of the loop @@ -325,7 +328,11 @@ for i in range(len(short_inputargs)): if short_inputargs[i] in args: if args[short_inputargs[i]] != jmp_to_short_args[i]: - raise InvalidLoop + raise InvalidLoop('The short preamble wants the ' + + 'same box passed to multiple of its ' + + 'inputargs, but the jump at the ' + + 'end of this bridge does not do that.') + args[short_inputargs[i]] = jmp_to_short_args[i] self.short_inliner = Inliner(short_inputargs, jmp_to_short_args) for op in self.short[1:]: @@ -378,7 +385,10 @@ #final_virtual_state.debug_print("Bad virtual state at end of loop, ", # bad) #debug_stop('jit-log-virtualstate') - raise InvalidLoop + raise InvalidLoop('The virtual state at the end of the peeled ' + + 'loop is not compatible with the virtual ' + + 'state at the start of the loop which makes ' + + 'it impossible to close the loop') 
#debug_stop('jit-log-virtualstate') @@ -526,8 +536,8 @@ args = jumpop.getarglist() modifier = VirtualStateAdder(self.optimizer) virtual_state = modifier.get_virtual_state(args) - #debug_start('jit-log-virtualstate') - #virtual_state.debug_print("Looking for ") + debug_start('jit-log-virtualstate') + virtual_state.debug_print("Looking for ") for target in cell_token.target_tokens: if not target.virtual_state: @@ -536,10 +546,10 @@ extra_guards = [] bad = {} - #debugmsg = 'Did not match ' + debugmsg = 'Did not match ' if target.virtual_state.generalization_of(virtual_state, bad): ok = True - #debugmsg = 'Matched ' + debugmsg = 'Matched ' else: try: cpu = self.optimizer.cpu @@ -548,13 +558,13 @@ extra_guards) ok = True - #debugmsg = 'Guarded to match ' + debugmsg = 'Guarded to match ' except InvalidLoop: pass - #target.virtual_state.debug_print(debugmsg, bad) + target.virtual_state.debug_print(debugmsg, bad) if ok: - #debug_stop('jit-log-virtualstate') + debug_stop('jit-log-virtualstate') values = [self.getvalue(arg) for arg in jumpop.getarglist()] @@ -581,7 +591,7 @@ jumpop.setdescr(cell_token.target_tokens[0]) self.optimizer.send_extra_operation(jumpop) return True - #debug_stop('jit-log-virtualstate') + debug_stop('jit-log-virtualstate') return False class ValueImporter(object): diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -27,11 +27,15 @@ if self.generalization_of(other, renum, {}): return if renum[self.position] != other.position: - raise InvalidLoop + raise InvalidLoop('The numbering of the virtual states does not ' + + 'match. 
This means that two virtual fields ' + + 'have been set to the same Box in one of the ' + + 'virtual states but not in the other.') self._generate_guards(other, box, cpu, extra_guards) def _generate_guards(self, other, box, cpu, extra_guards): - raise InvalidLoop + raise InvalidLoop('Generating guards for making the VirtualStates ' + + 'at hand match has not been implemented') def enum_forced_boxes(self, boxes, value, optimizer): raise NotImplementedError @@ -346,10 +350,12 @@ def _generate_guards(self, other, box, cpu, extra_guards): if not isinstance(other, NotVirtualStateInfo): - raise InvalidLoop + raise InvalidLoop('The VirtualStates do not match as a ' + + 'virtual appears where a pointer is needed ' + + 'and it is too late to force it.') if self.lenbound or other.lenbound: - raise InvalidLoop + raise InvalidLoop('The array length bounds do not match.') if self.level == LEVEL_KNOWNCLASS and \ box.nonnull() and \ @@ -400,7 +406,8 @@ return # Remaining cases are probably not interesting - raise InvalidLoop + raise InvalidLoop('Generating guards for making the VirtualStates ' + + 'at hand match has not been implemented') if self.level == LEVEL_CONSTANT: import pdb; pdb.set_trace() raise NotImplementedError diff --git a/pypy/jit/metainterp/quasiimmut.py b/pypy/jit/metainterp/quasiimmut.py --- a/pypy/jit/metainterp/quasiimmut.py +++ b/pypy/jit/metainterp/quasiimmut.py @@ -120,8 +120,10 @@ self.fielddescr, self.structbox) return fieldbox.constbox() - def is_still_valid(self): + def is_still_valid_for(self, structconst): assert self.structbox is not None + if not self.structbox.constbox().same_constant(structconst): + return False cpu = self.cpu gcref = self.structbox.getref_base() qmut = get_current_qmut_instance(cpu, gcref, self.mutatefielddescr) diff --git a/pypy/jit/metainterp/test/test_quasiimmut.py b/pypy/jit/metainterp/test/test_quasiimmut.py --- a/pypy/jit/metainterp/test/test_quasiimmut.py +++ b/pypy/jit/metainterp/test/test_quasiimmut.py @@ -8,7 +8,7 
@@ from pypy.jit.metainterp.quasiimmut import get_current_qmut_instance from pypy.jit.metainterp.test.support import LLJitMixin from pypy.jit.codewriter.policy import StopAtXPolicy -from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe +from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe, promote def test_get_current_qmut_instance(): @@ -506,6 +506,27 @@ "guard_not_invalidated": 2 }) + def test_issue1080(self): + myjitdriver = JitDriver(greens=[], reds=["n", "sa", "a"]) + class Foo(object): + _immutable_fields_ = ["x?"] + def __init__(self, x): + self.x = x + one, two = Foo(1), Foo(2) + def main(n): + sa = 0 + a = one + while n: + myjitdriver.jit_merge_point(n=n, sa=sa, a=a) + sa += a.x + if a.x == 1: + a = two + elif a.x == 2: + a = one + n -= 1 + return sa + res = self.meta_interp(main, [10]) + assert res == main(10) class TestLLtypeGreenFieldsTests(QuasiImmutTests, LLJitMixin): pass diff --git a/pypy/module/_multiprocessing/test/test_connection.py b/pypy/module/_multiprocessing/test/test_connection.py --- a/pypy/module/_multiprocessing/test/test_connection.py +++ b/pypy/module/_multiprocessing/test/test_connection.py @@ -157,13 +157,15 @@ raises(IOError, _multiprocessing.Connection, -15) def test_byte_order(self): + import socket + if not 'fromfd' in dir(socket): + skip('No fromfd in socket') # The exact format of net strings (length in network byte # order) is important for interoperation with others # implementations. 
rhandle, whandle = self.make_pair() whandle.send_bytes("abc") whandle.send_bytes("defg") - import socket sock = socket.fromfd(rhandle.fileno(), socket.AF_INET, socket.SOCK_STREAM) data1 = sock.recv(7) diff --git a/pypy/module/_winreg/test/test_winreg.py b/pypy/module/_winreg/test/test_winreg.py --- a/pypy/module/_winreg/test/test_winreg.py +++ b/pypy/module/_winreg/test/test_winreg.py @@ -198,7 +198,10 @@ import nt r = ExpandEnvironmentStrings(u"%windir%\\test") assert isinstance(r, unicode) - assert r == nt.environ["WINDIR"] + "\\test" + if 'WINDIR' in nt.environ.keys(): + assert r == nt.environ["WINDIR"] + "\\test" + else: + assert r == nt.environ["windir"] + "\\test" def test_long_key(self): from _winreg import ( diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -103,8 +103,8 @@ """.split() for name in constant_names: setattr(CConfig_constants, name, rffi_platform.ConstantInteger(name)) -udir.join('pypy_decl.h').write("/* Will be filled later */") -udir.join('pypy_macros.h').write("/* Will be filled later */") +udir.join('pypy_decl.h').write("/* Will be filled later */\n") +udir.join('pypy_macros.h').write("/* Will be filled later */\n") globals().update(rffi_platform.configure(CConfig_constants)) def copy_header_files(dstdir): diff --git a/pypy/module/micronumpy/test/test_dtypes.py b/pypy/module/micronumpy/test/test_dtypes.py --- a/pypy/module/micronumpy/test/test_dtypes.py +++ b/pypy/module/micronumpy/test/test_dtypes.py @@ -185,6 +185,33 @@ assert dtype("float") is dtype(float) + def test_index_int8(self): + from _numpypy import array, int8 + + a = array(range(10), dtype=int8) + b = array([0] * 10, dtype=int8) + for idx in b: a[idx] += 1 + + def test_index_int16(self): + from _numpypy import array, int16 + + a = array(range(10), dtype=int16) + b = array([0] * 10, dtype=int16) + for idx in b: a[idx] += 1 + + def test_index_int32(self): + from _numpypy import array, int32 + + a 
= array(range(10), dtype=int32) + b = array([0] * 10, dtype=int32) + for idx in b: a[idx] += 1 + + def test_index_int64(self): + from _numpypy import array, int64 + + a = array(range(10), dtype=int64) + b = array([0] * 10, dtype=int64) + for idx in b: a[idx] += 1 class AppTestTypes(BaseNumpyAppTest): def test_abstract_types(self): diff --git a/pypy/module/pypyjit/test_pypy_c/test_00_model.py b/pypy/module/pypyjit/test_pypy_c/test_00_model.py --- a/pypy/module/pypyjit/test_pypy_c/test_00_model.py +++ b/pypy/module/pypyjit/test_pypy_c/test_00_model.py @@ -54,7 +54,8 @@ cmdline += ['--jit', ','.join(jitcmdline)] cmdline.append(str(self.filepath)) # - env={'PYPYLOG': self.log_string + ':' + str(logfile)} + env = os.environ.copy() + env['PYPYLOG'] = self.log_string + ':' + str(logfile) pipe = subprocess.Popen(cmdline, env=env, stdout=subprocess.PIPE, diff --git a/pypy/module/rctime/interp_time.py b/pypy/module/rctime/interp_time.py --- a/pypy/module/rctime/interp_time.py +++ b/pypy/module/rctime/interp_time.py @@ -572,7 +572,7 @@ if i < length and format[i] == '#': # not documented by python i += 1 - if i >= length or format[i] not in "aAbBcdfHIjmMpSUwWxXyYzZ%": + if i >= length or format[i] not in "aAbBcdHIjmMpSUwWxXyYzZ%": raise OperationError(space.w_ValueError, space.wrap("invalid format string")) i += 1 diff --git a/pypy/module/rctime/test/test_rctime.py b/pypy/module/rctime/test/test_rctime.py --- a/pypy/module/rctime/test/test_rctime.py +++ b/pypy/module/rctime/test/test_rctime.py @@ -64,6 +64,7 @@ def test_localtime(self): import time as rctime + import os raises(TypeError, rctime.localtime, "foo") rctime.localtime() rctime.localtime(None) @@ -75,6 +76,10 @@ assert 0 <= (t1 - t0) < 1.2 t = rctime.time() assert rctime.localtime(t) == rctime.localtime(t) + if os.name == 'nt': + raises(ValueError, rctime.localtime, -1) + else: + rctime.localtime(-1) def test_mktime(self): import time as rctime @@ -108,8 +113,8 @@ assert long(rctime.mktime(rctime.gmtime(t))) - 
rctime.timezone == long(t) ltime = rctime.localtime() assert rctime.mktime(tuple(ltime)) == rctime.mktime(ltime) - - assert rctime.mktime(rctime.localtime(-1)) == -1 + if os.name != 'nt': + assert rctime.mktime(rctime.localtime(-1)) == -1 def test_asctime(self): import time as rctime diff --git a/pypy/module/select/__init__.py b/pypy/module/select/__init__.py --- a/pypy/module/select/__init__.py +++ b/pypy/module/select/__init__.py @@ -2,6 +2,7 @@ from pypy.interpreter.mixedmodule import MixedModule import sys +import os class Module(MixedModule): @@ -9,11 +10,13 @@ } interpleveldefs = { - 'poll' : 'interp_select.poll', 'select': 'interp_select.select', 'error' : 'space.fromcache(interp_select.Cache).w_error' } + if os.name =='posix': + interpleveldefs['poll'] = 'interp_select.poll' + if sys.platform.startswith('linux'): interpleveldefs['epoll'] = 'interp_epoll.W_Epoll' from pypy.module.select.interp_epoll import cconfig, public_symbols diff --git a/pypy/module/select/test/test_select.py b/pypy/module/select/test/test_select.py --- a/pypy/module/select/test/test_select.py +++ b/pypy/module/select/test/test_select.py @@ -214,6 +214,8 @@ def test_poll(self): import select + if not hasattr(select, 'poll'): + skip("no select.poll() on this platform") readend, writeend = self.getpair() try: class A(object): diff --git a/pypy/objspace/flow/operation.py b/pypy/objspace/flow/operation.py --- a/pypy/objspace/flow/operation.py +++ b/pypy/objspace/flow/operation.py @@ -350,8 +350,8 @@ result = op(*args) except Exception, e: etype = e.__class__ - msg = "generated by a constant operation: %s" % ( - name) + msg = "generated by a constant operation:\n\t%s%r" % ( + name, tuple(args)) raise OperationThatShouldNotBePropagatedError( self.wrap(etype), self.wrap(msg)) else: diff --git a/pypy/rlib/debug.py b/pypy/rlib/debug.py --- a/pypy/rlib/debug.py +++ b/pypy/rlib/debug.py @@ -1,10 +1,12 @@ import sys, time from pypy.rpython.extregistry import ExtRegistryEntry +from 
pypy.rlib.objectmodel import we_are_translated from pypy.rlib.rarithmetic import is_valid_int def ll_assert(x, msg): """After translation to C, this becomes an RPyAssert.""" + assert type(x) is bool, "bad type! got %r" % (type(x),) assert x, msg class Entry(ExtRegistryEntry): @@ -21,8 +23,13 @@ hop.exception_cannot_occur() hop.genop('debug_assert', vlist) +class FatalError(Exception): + pass + def fatalerror(msg): # print the RPython traceback and abort with a fatal error + if not we_are_translated(): + raise FatalError(msg) from pypy.rpython.lltypesystem import lltype from pypy.rpython.lltypesystem.lloperation import llop llop.debug_print_traceback(lltype.Void) @@ -33,6 +40,8 @@ def fatalerror_notb(msg): # a variant of fatalerror() that doesn't print the RPython traceback + if not we_are_translated(): + raise FatalError(msg) from pypy.rpython.lltypesystem import lltype from pypy.rpython.lltypesystem.lloperation import llop llop.debug_fatalerror(lltype.Void, msg) diff --git a/pypy/rlib/rposix.py b/pypy/rlib/rposix.py --- a/pypy/rlib/rposix.py +++ b/pypy/rlib/rposix.py @@ -1,9 +1,11 @@ import os -from pypy.rpython.lltypesystem.rffi import CConstant, CExternVariable, INT +from pypy.rpython.lltypesystem.rffi import (CConstant, CExternVariable, + INT, CCHARPP) from pypy.rpython.lltypesystem import lltype, ll2ctypes, rffi from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rlib.rarithmetic import intmask from pypy.rlib.objectmodel import specialize +from pypy.rlib import jit class CConstantErrno(CConstant): # these accessors are used when calling get_errno() or set_errno() @@ -18,9 +20,69 @@ def __setitem__(self, index, value): assert index == 0 ll2ctypes.TLS.errno = value +if os.name == 'nt': + separate_module_sources =[''' + /* Lifted completely from CPython 3.3 Modules/posix_module.c */ + #include <malloc.h> /* for _msize */ + typedef struct { + intptr_t osfhnd; + char osfile; + } my_ioinfo; + extern __declspec(dllimport) char * __pioinfo[]; + #define 
IOINFO_L2E 5 + #define IOINFO_ARRAY_ELTS (1 << IOINFO_L2E) + #define IOINFO_ARRAYS 64 + #define _NHANDLE_ (IOINFO_ARRAYS * IOINFO_ARRAY_ELTS) + #define FOPEN 0x01 + #define _NO_CONSOLE_FILENO (intptr_t)-2 + /* This function emulates what the windows CRT + does to validate file handles */ + int + _PyVerify_fd(int fd) + { + const int i1 = fd >> IOINFO_L2E; + const int i2 = fd & ((1 << IOINFO_L2E) - 1); + + static size_t sizeof_ioinfo = 0; + + /* Determine the actual size of the ioinfo structure, + * as used by the CRT loaded in memory + */ + if (sizeof_ioinfo == 0 && __pioinfo[0] != NULL) { + sizeof_ioinfo = _msize(__pioinfo[0]) / IOINFO_ARRAY_ELTS; + } + if (sizeof_ioinfo == 0) { + /* This should not happen... */ + goto fail; + } + + /* See that it isn't a special CLEAR fileno */ + if (fd != _NO_CONSOLE_FILENO) { + /* Microsoft CRT would check that 0<=fd<_nhandle but we can't do that. Instead + * we check pointer validity and other info + */ + if (0 <= i1 && i1 < IOINFO_ARRAYS && __pioinfo[i1] != NULL) { + /* finally, check that the file is open */ + my_ioinfo* info = (my_ioinfo*)(__pioinfo[i1] + i2 * sizeof_ioinfo); + if (info->osfile & FOPEN) { + return 1; + } + } + } + fail: + errno = EBADF; + return 0; + } + ''',] + export_symbols = ['_PyVerify_fd'] +else: + separate_module_sources = [] + export_symbols = [] errno_eci = ExternalCompilationInfo( - includes=['errno.h'] + includes=['errno.h','stdio.h'], + separate_module_sources = separate_module_sources, + export_symbols = export_symbols, ) _get_errno, _set_errno = CExternVariable(INT, 'errno', errno_eci, @@ -35,6 +97,21 @@ def set_errno(errno): _set_errno(rffi.cast(INT, errno)) +if os.name == 'nt': + _validate_fd = rffi.llexternal( + "_PyVerify_fd", [rffi.INT], rffi.INT, + compilation_info=errno_eci, + ) + @jit.dont_look_inside + def validate_fd(fd): + if not _validate_fd(fd): + raise OSError(get_errno(), 'Bad file descriptor') +else: + def _validate_fd(fd): + return 1 + + def validate_fd(fd): + return 1 def 
closerange(fd_low, fd_high): # this behaves like os.closerange() from Python 2.6. diff --git a/pypy/rlib/test/test_rposix.py b/pypy/rlib/test/test_rposix.py --- a/pypy/rlib/test/test_rposix.py +++ b/pypy/rlib/test/test_rposix.py @@ -131,3 +131,15 @@ os.rmdir(self.ufilename) except Exception: pass + + def test_validate_fd(self): + if os.name != 'nt': + skip('relevant for windows only') + assert rposix._validate_fd(0) == 1 + fid = open(str(udir.join('validate_test.txt')), 'w') + fd = fid.fileno() + assert rposix._validate_fd(fd) == 1 + fid.close() + assert rposix._validate_fd(fd) == 0 + + diff --git a/pypy/rpython/annlowlevel.py b/pypy/rpython/annlowlevel.py --- a/pypy/rpython/annlowlevel.py +++ b/pypy/rpython/annlowlevel.py @@ -488,6 +488,8 @@ else: TO = PTR if not hasattr(object, '_carry_around_for_tests'): + if object is None: + return lltype.nullptr(PTR.TO) assert not hasattr(object, '_TYPE') object._carry_around_for_tests = True object._TYPE = TO @@ -557,6 +559,8 @@ """NOT_RPYTHON: hack. 
Reverse the hacking done in cast_object_to_ptr().""" if isinstance(lltype.typeOf(ptr), lltype.Ptr): ptr = ptr._as_obj() + if ptr is None: + return None if not isinstance(ptr, Class): raise NotImplementedError("cast_base_ptr_to_instance: casting %r to %r" % (ptr, Class)) diff --git a/pypy/rpython/lltypesystem/lltype.py b/pypy/rpython/lltypesystem/lltype.py --- a/pypy/rpython/lltypesystem/lltype.py +++ b/pypy/rpython/lltypesystem/lltype.py @@ -1167,7 +1167,7 @@ try: return self._lookup_adtmeth(field_name) except AttributeError: - raise AttributeError("%r instance has no field %r" % (self._T._name, + raise AttributeError("%r instance has no field %r" % (self._T, field_name)) def __setattr__(self, field_name, val): diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -916,7 +916,7 @@ ll_assert(not self.is_in_nursery(obj), "object in nursery after collection") # similarily, all objects should have this flag: - ll_assert(self.header(obj).tid & GCFLAG_TRACK_YOUNG_PTRS, + ll_assert(self.header(obj).tid & GCFLAG_TRACK_YOUNG_PTRS != 0, "missing GCFLAG_TRACK_YOUNG_PTRS") # the GCFLAG_VISITED should not be set between collections ll_assert(self.header(obj).tid & GCFLAG_VISITED == 0, diff --git a/pypy/rpython/memory/gc/semispace.py b/pypy/rpython/memory/gc/semispace.py --- a/pypy/rpython/memory/gc/semispace.py +++ b/pypy/rpython/memory/gc/semispace.py @@ -640,7 +640,7 @@ between collections.""" tid = self.header(obj).tid if tid & GCFLAG_EXTERNAL: - ll_assert(tid & GCFLAG_FORWARDED, "bug: external+!forwarded") + ll_assert(tid & GCFLAG_FORWARDED != 0, "bug: external+!forwarded") ll_assert(not (self.tospace <= obj < self.free), "external flag but object inside the semispaces") else: diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ 
b/pypy/rpython/memory/gctransform/framework.py @@ -8,7 +8,6 @@ from pypy.rpython.memory.gcheader import GCHeaderBuilder from pypy.rlib.rarithmetic import ovfcheck from pypy.rlib import rgc -from pypy.rlib.debug import ll_assert from pypy.rlib.objectmodel import we_are_translated from pypy.translator.backendopt import graphanalyze from pypy.translator.backendopt.support import var_needsgc diff --git a/pypy/rpython/test/test_llann.py b/pypy/rpython/test/test_llann.py --- a/pypy/rpython/test/test_llann.py +++ b/pypy/rpython/test/test_llann.py @@ -9,6 +9,7 @@ from pypy.rpython.annlowlevel import MixLevelHelperAnnotator from pypy.rpython.annlowlevel import PseudoHighLevelCallable from pypy.rpython.annlowlevel import llhelper, cast_instance_to_base_ptr +from pypy.rpython.annlowlevel import cast_base_ptr_to_instance from pypy.rpython.annlowlevel import base_ptr_lltype from pypy.rpython.llinterp import LLInterpreter from pypy.rpython.test.test_llinterp import interpret @@ -502,7 +503,10 @@ self.y = y def f(x, y): - a = A(x, y) + if x > 20: + a = None + else: + a = A(x, y) a1 = cast_instance_to_base_ptr(a) return a1 @@ -510,3 +514,30 @@ assert typeOf(res) == base_ptr_lltype() assert fishllattr(res, 'x') == 5 assert fishllattr(res, 'y') == 10 + + res = interpret(f, [25, 10]) + assert res == nullptr(base_ptr_lltype().TO) + + +def test_cast_base_ptr_to_instance(): + class A: + def __init__(self, x, y): + self.x = x + self.y = y + + def f(x, y): + if x > 20: + a = None + else: + a = A(x, y) + a1 = cast_instance_to_base_ptr(a) + b = cast_base_ptr_to_instance(A, a1) + return a is b + + assert f(5, 10) is True + assert f(25, 10) is True + + res = interpret(f, [5, 10]) + assert res is True + res = interpret(f, [25, 10]) + assert res is True diff --git a/pypy/rpython/tool/rffi_platform.py b/pypy/rpython/tool/rffi_platform.py --- a/pypy/rpython/tool/rffi_platform.py +++ b/pypy/rpython/tool/rffi_platform.py @@ -379,7 +379,7 @@ self.name = name def prepare_code(self): - yield 'if ((%s) 
< 0) {' % (self.name,) + yield 'if ((%s) <= 0) {' % (self.name,) yield ' long long x = (long long)(%s);' % (self.name,) yield ' printf("value: %lld\\n", x);' yield '} else {' @@ -401,7 +401,7 @@ def prepare_code(self): yield '#ifdef %s' % self.macro yield 'dump("defined", 1);' - yield 'if ((%s) < 0) {' % (self.macro,) + yield 'if ((%s) <= 0) {' % (self.macro,) yield ' long long x = (long long)(%s);' % (self.macro,) yield ' printf("value: %lld\\n", x);' yield '} else {' diff --git a/pypy/tool/compare_last_builds.py b/pypy/tool/compare_last_builds.py new file mode 100644 --- /dev/null +++ b/pypy/tool/compare_last_builds.py @@ -0,0 +1,122 @@ +import os +import urllib2 +import json +import sys +import md5 + +wanted = sys.argv[1:] +if not wanted: + wanted = ['default'] +base = "http://buildbot.pypy.org/json/builders/" + +cachedir = os.environ.get('PYPY_BUILDS_CACHE') +if cachedir and not os.path.exists(cachedir): + os.makedirs(cachedir) + + + +def get_json(url, cache=cachedir): + return json.loads(get_data(url, cache)) + + +def get_data(url, cache=cachedir): + url = str(url) + if cache: + digest = md5.md5() + digest.update(url) + digest = digest.hexdigest() + cachepath = os.path.join(cachedir, digest) + if os.path.exists(cachepath): + with open(cachepath) as fp: + return fp.read() + + print 'GET', url + fp = urllib2.urlopen(url) + try: + data = fp.read() + if cache: + with open(cachepath, 'wb') as cp: + cp.write(data) + return data + finally: + fp.close() + +def parse_log(log): + items = [] + for v in log.splitlines(1): + if not v[0].isspace() and v[1].isspace(): + items.append(v) + return sorted(items) #sort cause testrunner order is non-deterministic + +def gather_logdata(build): + logdata = get_data(str(build['log']) + '?as_text=1') + logdata = logdata.replace('', '') + logdata = logdata.replace('', '') + del build['log'] + build['log'] = parse_log(logdata) + + +def branch_mapping(l): + keep = 3 - len(wanted) + d = {} + for x in reversed(l): + gather_logdata(x) + if 
not x['log']: + continue + b = x['branch'] + if b not in d: + d[b] = [] + d[b].insert(0, x) + if len(d[b]) > keep: + d[b].pop() + return d + +def cleanup_build(d): + for a in 'times eta steps slave reason sourceStamp blame currentStep text'.split(): + del d[a] + + props = d.pop(u'logs') + for name, val in props: + if name == u'pytestLog': + d['log'] = val + props = d.pop(u'properties') + for name, val, _ in props: + if name == u'branch': + d['branch'] = val or 'default' + return d + +def collect_builds(d): + name = str(d['basedir']) + builds = d['cachedBuilds'] + l = [] + for build in builds: + d = get_json(base + '%s/builds/%s' % (name, build)) + cleanup_build(d) + l.append(d) + + l = [x for x in l if x['branch'] in wanted and 'log' in x] + d = branch_mapping(l) + return [x for lst in d.values() for x in lst] + + +def only_linux32(d): + return d['own-linux-x86-32'] + + +own_builds = get_json(base, cache=False)['own-linux-x86-32'] + +builds = collect_builds(own_builds) + + +builds.sort(key=lambda x: (wanted.index(x['branch']), x['number'])) +logs = [x.pop('log') for x in builds] +for b, s in zip(builds, logs): + b['resultset'] = len(s) +import pprint +pprint.pprint(builds) + +from difflib import Differ + +for x in Differ().compare(*logs): + if x[0]!=' ': + sys.stdout.write(x) diff --git a/pypy/translator/c/src/cjkcodecs/cjkcodecs.h b/pypy/translator/c/src/cjkcodecs/cjkcodecs.h --- a/pypy/translator/c/src/cjkcodecs/cjkcodecs.h +++ b/pypy/translator/c/src/cjkcodecs/cjkcodecs.h @@ -210,15 +210,15 @@ #define BEGIN_CODECS_LIST /* empty */ #define _CODEC(name) \ - static const MultibyteCodec _pypy_cjkcodec_##name; \ - const MultibyteCodec *pypy_cjkcodec_##name(void) { \ + static MultibyteCodec _pypy_cjkcodec_##name; \ + MultibyteCodec *pypy_cjkcodec_##name(void) { \ if (_pypy_cjkcodec_##name.codecinit != NULL) { \ int r = _pypy_cjkcodec_##name.codecinit(_pypy_cjkcodec_##name.config); \ assert(r == 0); \ } \ return &_pypy_cjkcodec_##name; \ } \ - static const 
MultibyteCodec _pypy_cjkcodec_##name + static MultibyteCodec _pypy_cjkcodec_##name #define _STATEFUL_METHODS(enc) \ enc##_encode, \ enc##_encode_init, \ diff --git a/pypy/translator/c/src/cjkcodecs/multibytecodec.h b/pypy/translator/c/src/cjkcodecs/multibytecodec.h --- a/pypy/translator/c/src/cjkcodecs/multibytecodec.h +++ b/pypy/translator/c/src/cjkcodecs/multibytecodec.h @@ -131,7 +131,7 @@ /* list of codecs defined in the .c files */ #define DEFINE_CODEC(name) \ - const MultibyteCodec *pypy_cjkcodec_##name(void); + MultibyteCodec *pypy_cjkcodec_##name(void); // _codecs_cn DEFINE_CODEC(gb2312) From noreply at buildbot.pypy.org Wed May 2 23:17:49 2012 From: noreply at buildbot.pypy.org (wlav) Date: Wed, 2 May 2012 23:17:49 +0200 (CEST) Subject: [pypy-commit] pypy reflex-support: make it easier to deal with reflex standalone Message-ID: <20120502211749.CD04282009@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r54879:1479a0f387b0 Date: 2012-05-02 14:17 -0700 http://bitbucket.org/pypy/pypy/changeset/1479a0f387b0/ Log: make it easier to deal with reflex standalone diff --git a/pypy/module/cppyy/capi/reflex_capi.py b/pypy/module/cppyy/capi/reflex_capi.py --- a/pypy/module/cppyy/capi/reflex_capi.py +++ b/pypy/module/cppyy/capi/reflex_capi.py @@ -11,7 +11,7 @@ if os.environ.get("ROOTSYS"): rootincpath = [os.path.join(os.environ["ROOTSYS"], "include")] - rootlibpath = [os.path.join(os.environ["ROOTSYS"], "lib")] + rootlibpath = [os.path.join(os.environ["ROOTSYS"], "lib64"), os.path.join(os.environ["ROOTSYS"], "lib")] else: rootincpath = [] rootlibpath = [] diff --git a/pypy/module/cppyy/test/Makefile b/pypy/module/cppyy/test/Makefile --- a/pypy/module/cppyy/test/Makefile +++ b/pypy/module/cppyy/test/Makefile @@ -10,7 +10,7 @@ cppflags= else genreflex=$(ROOTSYS)/bin/genreflex - cppflags=-I$(ROOTSYS)/include -L$(ROOTSYS)/lib + cppflags=-I$(ROOTSYS)/include -L$(ROOTSYS)/lib64 -L$(ROOTSYS)/lib endif PLATFORM := $(shell uname -s) From 
noreply at buildbot.pypy.org Thu May 3 10:21:26 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 3 May 2012 10:21:26 +0200 (CEST) Subject: [pypy-commit] pyrepl py3ksupport: close pyk branch Message-ID: <20120503082126.48A2082009@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: py3ksupport Changeset: r171:4e50d007915f Date: 2012-05-03 10:20 +0200 http://bitbucket.org/pypy/pyrepl/changeset/4e50d007915f/ Log: close pyk branch diff --git a/tox.ini b/tox.ini --- a/tox.ini +++ b/tox.ini @@ -1,6 +1,9 @@ [tox] envlist= py27, py32 +[pytest] +codechecks = pep8 pyflakes + [testenv] deps= pytest From noreply at buildbot.pypy.org Thu May 3 10:21:27 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 3 May 2012 10:21:27 +0200 (CEST) Subject: [pypy-commit] pyrepl default: merge py3k support branch Message-ID: <20120503082127.B9C0A8208A@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r172:a1bbaca0edcc Date: 2012-05-03 10:21 +0200 http://bitbucket.org/pypy/pyrepl/changeset/a1bbaca0edcc/ Log: merge py3k support branch diff --git a/pyrepl/_minimal_curses.py b/pyrepl/_minimal_curses.py new file mode 100644 --- /dev/null +++ b/pyrepl/_minimal_curses.py @@ -0,0 +1,69 @@ +"""Minimal '_curses' module, the low-level interface for curses module +which is not meant to be used directly. + +Based on ctypes. It's too incomplete to be really called '_curses', so +to use it, you have to import it and stick it in sys.modules['_curses'] +manually. + +Note that there is also a built-in module _minimal_curses which will +hide this one if compiled in. 
+""" + +import ctypes, ctypes.util + +class error(Exception): + pass + + +def _find_clib(): + trylibs = ['ncurses', 'curses'] + + for lib in trylibs: + path = ctypes.util.find_library(lib) + if path: + return path + raise ImportError("curses library not found") + +_clibpath = _find_clib() +clib = ctypes.cdll.LoadLibrary(_clibpath) + +clib.setupterm.argtypes = [ctypes.c_char_p, ctypes.c_int, + ctypes.POINTER(ctypes.c_int)] +clib.setupterm.restype = ctypes.c_int + +clib.tigetstr.argtypes = [ctypes.c_char_p] +clib.tigetstr.restype = ctypes.POINTER(ctypes.c_char) + +clib.tparm.argtypes = [ctypes.c_char_p] + 9 * [ctypes.c_int] +clib.tparm.restype = ctypes.c_char_p + +OK = 0 +ERR = -1 + +# ____________________________________________________________ + +try: from __pypy__ import builtinify +except ImportError: builtinify = lambda f: f + + at builtinify +def setupterm(termstr, fd): + err = ctypes.c_int(0) + result = clib.setupterm(termstr, fd, ctypes.byref(err)) + if result == ERR: + raise error("setupterm() failed (err=%d)" % err.value) + + at builtinify +def tigetstr(cap): + if not isinstance(cap, bytes): + cap = cap.encode('ascii') + result = clib.tigetstr(cap) + if ctypes.cast(result, ctypes.c_void_p).value == ERR: + return None + return ctypes.cast(result, ctypes.c_char_p).value + + at builtinify +def tparm(str, i1=0, i2=0, i3=0, i4=0, i5=0, i6=0, i7=0, i8=0, i9=0): + result = clib.tparm(str, i1, i2, i3, i4, i5, i6, i7, i8, i9) + if result is None: + raise error("tparm() returned NULL") + return result diff --git a/pyrepl/cmdrepl.py b/pyrepl/cmdrepl.py --- a/pyrepl/cmdrepl.py +++ b/pyrepl/cmdrepl.py @@ -33,7 +33,7 @@ which is in fact done by the `pythoni' script that comes with pyrepl.""" -from __future__ import nested_scopes +from __future__ import print_function from pyrepl import completing_reader as cr, reader, completer from pyrepl.completing_reader import CompletingReader as CR @@ -96,7 +96,7 @@ if intro is not None: self.intro = intro if self.intro: - print 
self.intro + print(self.intro) stop = None while not stop: if self.cmdqueue: diff --git a/pyrepl/commands.py b/pyrepl/commands.py --- a/pyrepl/commands.py +++ b/pyrepl/commands.py @@ -33,10 +33,12 @@ class Command(object): finish = 0 kills_digit_arg = 1 - def __init__(self, reader, (event_name, event)): + + def __init__(self, reader, event_name, event): self.reader = reader self.event = event self.event_name = event_name + def do(self): pass diff --git a/pyrepl/completer.py b/pyrepl/completer.py --- a/pyrepl/completer.py +++ b/pyrepl/completer.py @@ -17,7 +17,10 @@ # CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN # CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. -import __builtin__ +try: + import __builtin__ as builtins +except ImportError: + import builtins class Completer: def __init__(self, ns): @@ -38,12 +41,11 @@ """ import keyword matches = [] - n = len(text) for list in [keyword.kwlist, - __builtin__.__dict__.keys(), + builtins.__dict__.keys(), self.ns.keys()]: for word in list: - if word[:n] == text and word != "__builtins__": + if word.startswith(text) and word != "__builtins__": matches.append(word) return matches diff --git a/pyrepl/completing_reader.py b/pyrepl/completing_reader.py --- a/pyrepl/completing_reader.py +++ b/pyrepl/completing_reader.py @@ -168,7 +168,7 @@ r.insert(completions[0][len(stem):]) else: p = prefix(completions, len(stem)) - if p <> '': + if p: r.insert(p) if r.last_command_is(self.__class__): if not r.cmpltn_menu_vis: @@ -259,7 +259,7 @@ p = self.pos - 1 while p >= 0 and st.get(b[p], SW) == SW: p -= 1 - return u''.join(b[p+1:self.pos]) + return ''.join(b[p+1:self.pos]) def get_completions(self, stem): return [] diff --git a/pyrepl/console.py b/pyrepl/console.py --- a/pyrepl/console.py +++ b/pyrepl/console.py @@ -17,8 +17,9 @@ # CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN # CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. -class Event: +class Event(object): """An Event. 
`evt' is 'key' or somesuch.""" + __slots__ = 'evt', 'data', 'raw' def __init__(self, evt, data, raw=''): self.evt = evt @@ -28,7 +29,7 @@ def __repr__(self): return 'Event(%r, %r)'%(self.evt, self.data) -class Console: +class Console(object): """Attributes: screen, diff --git a/pyrepl/copy_code.py b/pyrepl/copy_code.py --- a/pyrepl/copy_code.py +++ b/pyrepl/copy_code.py @@ -17,7 +17,7 @@ # CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN # CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. -import new +from types import CodeType def copy_code_with_changes(codeobject, argcount=None, @@ -44,7 +44,7 @@ if name is None: name = codeobject.co_name if firstlineno is None: firstlineno = codeobject.co_firstlineno if lnotab is None: lnotab = codeobject.co_lnotab - return new.code(argcount, + return CodeType(argcount, nlocals, stacksize, flags, diff --git a/pyrepl/curses.py b/pyrepl/curses.py --- a/pyrepl/curses.py +++ b/pyrepl/curses.py @@ -19,21 +19,5 @@ # CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN # CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. -# Some try-import logic for two purposes: avoiding to bring in the whole -# pure Python curses package if possible; and, in _curses is not actually -# present, falling back to _minimal_curses (which is either a ctypes-based -# pure Python module or a PyPy built-in module). -try: - import _curses -except ImportError: - try: - import _minimal_curses as _curses - except ImportError: - # Who knows, maybe some environment has "curses" but not "_curses". - # If not, at least the following import gives a clean ImportError. 
- import _curses -setupterm = _curses.setupterm -tigetstr = _curses.tigetstr -tparm = _curses.tparm -error = _curses.error +from ._minimal_curses import setupterm, tigetstr, tparm, error diff --git a/pyrepl/historical_reader.py b/pyrepl/historical_reader.py --- a/pyrepl/historical_reader.py +++ b/pyrepl/historical_reader.py @@ -33,7 +33,8 @@ (r'\C-g', 'isearch-cancel'), (r'\', 'isearch-backspace')]) -del c +if 'c' in globals(): + del c ISEARCH_DIRECTION_NONE = '' ISEARCH_DIRECTION_BACKWARDS = 'r' @@ -230,7 +231,7 @@ self.dirty = 1 def get_item(self, i): - if i <> len(self.history): + if i != len(self.history): return self.transient_history.get(i, self.history[i]) else: return self.transient_history.get(i, self.get_unicode()) @@ -253,7 +254,7 @@ raise def get_prompt(self, lineno, cursor_on_line): - if cursor_on_line and self.isearch_direction <> ISEARCH_DIRECTION_NONE: + if cursor_on_line and self.isearch_direction != ISEARCH_DIRECTION_NONE: d = 'rf'[self.isearch_direction == ISEARCH_DIRECTION_FORWARDS] return "(%s-search `%s') "%(d, self.isearch_term) else: diff --git a/pyrepl/input.py b/pyrepl/input.py --- a/pyrepl/input.py +++ b/pyrepl/input.py @@ -32,8 +32,10 @@ # executive, temporary decision: [tab] and [C-i] are distinct, but # [meta-key] is identified with [esc key]. We demand that any console # class does quite a lot towards emulating a unix terminal. 
+from __future__ import print_function +from pyrepl import unicodedata_ +from collections import deque -from pyrepl import unicodedata_ class InputTranslator(object): def push(self, evt): @@ -43,7 +45,9 @@ def empty(self): pass + class KeymapTranslator(InputTranslator): + def __init__(self, keymap, verbose=0, invalid_cls=None, character_cls=None): self.verbose = verbose @@ -56,24 +60,25 @@ keyseq = tuple(parse_keys(keyspec)) d[keyseq] = command if self.verbose: - print d + print(d) self.k = self.ck = compile_keymap(d, ()) - self.results = [] + self.results = deque() self.stack = [] + def push(self, evt): if self.verbose: - print "pushed", evt.data, + print("pushed", evt.data, end='') key = evt.data d = self.k.get(key) if isinstance(d, dict): if self.verbose: - print "transition" + print("transition") self.stack.append(key) self.k = d else: if d is None: if self.verbose: - print "invalid" + print("invalid") if self.stack or len(key) > 1 or unicodedata_.category(key) == 'C': self.results.append( (self.invalid_cls, self.stack + [key])) @@ -84,14 +89,16 @@ (self.character_cls, [key])) else: if self.verbose: - print "matched", d + print("matched", d) self.results.append((d, self.stack + [key])) self.stack = [] self.k = self.ck + def get(self): if self.results: - return self.results.pop(0) + return self.results.popleft() else: return None + def empty(self): return not self.results diff --git a/pyrepl/keymap.py b/pyrepl/keymap.py --- a/pyrepl/keymap.py +++ b/pyrepl/keymap.py @@ -101,27 +101,27 @@ while not ret and s < len(key): if key[s] == '\\': c = key[s+1].lower() - if _escapes.has_key(c): + if c in _escapes: ret = _escapes[c] s += 2 elif c == "c": if key[s + 2] != '-': - raise KeySpecError, \ + raise KeySpecError( "\\C must be followed by `-' (char %d of %s)"%( - s + 2, repr(key)) + s + 2, repr(key))) if ctrl: - raise KeySpecError, "doubled \\C- (char %d of %s)"%( - s + 1, repr(key)) + raise KeySpecError("doubled \\C- (char %d of %s)"%( + s + 1, repr(key))) ctrl = 1 s 
+= 3 elif c == "m": if key[s + 2] != '-': - raise KeySpecError, \ + raise KeySpecError( "\\M must be followed by `-' (char %d of %s)"%( - s + 2, repr(key)) + s + 2, repr(key))) if meta: - raise KeySpecError, "doubled \\M- (char %d of %s)"%( - s + 1, repr(key)) + raise KeySpecError("doubled \\M- (char %d of %s)"%( + s + 1, repr(key))) meta = 1 s += 3 elif c.isdigit(): @@ -135,26 +135,26 @@ elif c == '<': t = key.find('>', s) if t == -1: - raise KeySpecError, \ + raise KeySpecError( "unterminated \\< starting at char %d of %s"%( - s + 1, repr(key)) + s + 1, repr(key))) ret = key[s+2:t].lower() if ret not in _keynames: - raise KeySpecError, \ + raise KeySpecError( "unrecognised keyname `%s' at char %d of %s"%( - ret, s + 2, repr(key)) + ret, s + 2, repr(key))) ret = _keynames[ret] s = t + 1 else: - raise KeySpecError, \ + raise KeySpecError( "unknown backslash escape %s at char %d of %s"%( - `c`, s + 2, repr(key)) + repr(c), s + 2, repr(key))) else: ret = key[s] s += 1 if ctrl: if len(ret) > 1: - raise KeySpecError, "\\C- must be followed by a character" + raise KeySpecError("\\C- must be followed by a character") ret = chr(ord(ret) & 0x1f) # curses.ascii.ctrl() if meta: ret = ['\033', ret] @@ -170,15 +170,15 @@ r.extend(k) return r -def compile_keymap(keymap, empty=''): +def compile_keymap(keymap, empty=b''): r = {} for key, value in keymap.items(): r.setdefault(key[0], {})[key[1:]] = value for key, value in r.items(): if empty in value: - if len(value) <> 1: - raise KeySpecError, \ - "key definitions for %s clash"%(value.values(),) + if len(value) != 1: + raise KeySpecError( + "key definitions for %s clash"%(value.values(),)) else: r[key] = value[empty] else: diff --git a/pyrepl/module_lister.py b/pyrepl/module_lister.py --- a/pyrepl/module_lister.py +++ b/pyrepl/module_lister.py @@ -66,5 +66,5 @@ try: mods = _packages[pack] except KeyError: - raise ImportError, "can't find \"%s\" package"%pack + raise ImportError("can't find \"%s\" package" % pack) return [mod for 
mod in mods if mod.startswith(stem)] diff --git a/pyrepl/python_reader.py b/pyrepl/python_reader.py --- a/pyrepl/python_reader.py +++ b/pyrepl/python_reader.py @@ -20,17 +20,25 @@ # CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. # one impressive collections of imports: +from __future__ import print_function +from __future__ import unicode_literals from pyrepl.completing_reader import CompletingReader from pyrepl.historical_reader import HistoricalReader from pyrepl import completing_reader, reader from pyrepl import copy_code, commands, completer from pyrepl import module_lister -import new, sys, os, re, code, traceback +import imp, sys, os, re, code, traceback import atexit, warnings try: import cPickle as pickle except ImportError: import pickle + +try: + unicode +except: + unicode = str + try: import imp imp.find_module("twisted") @@ -47,13 +55,28 @@ """this function eats warnings, if you were wondering""" pass +if sys.version_info >= (3,0): + def _reraise(cls, val, tb): + __tracebackhide__ = True + assert hasattr(val, '__traceback__') + raise val +else: + exec (""" +def _reraise(cls, val, tb): + __tracebackhide__ = True + raise cls, val, tb +""") + + + + class maybe_accept(commands.Command): def do(self): r = self.reader text = r.get_unicode() try: # ooh, look at the hack: - code = r.compiler("#coding:utf-8\n"+text.encode('utf-8')) + code = r.compiler(text) except (OverflowError, SyntaxError, ValueError): self.finish = 1 else: @@ -67,16 +90,14 @@ import_line_prog = re.compile( "^(?:import|from)\s+(?P[A-Za-z_.0-9]*)\s*$") -def mk_saver(reader): - def saver(reader=reader): - try: - file = open(os.path.expanduser("~/.pythoni.hist"), "w") - except IOError: - pass - else: - pickle.dump(reader.history, file) - file.close() - return saver +def saver(reader=reader): + try: + with open(os.path.expanduser("~/.pythoni.hist"), "wb") as fp: + fp.write(b'\n'.join(item.encode('unicode_escape') + for item in reader.history)) + except IOError as e: + print(e) + pass 
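The rewritten `saver()` above drops pickle in favour of newline-joined `unicode_escape` bytes (the matching loader in `PythonicReader` decodes each line back). A minimal sketch of that round-trip, using an in-memory blob instead of the real `~/.pythoni.hist` file:

```python
# Sketch of the history persistence scheme used above: each history item
# is escaped with 'unicode_escape' (so an embedded newline becomes the
# two-byte sequence b'\\n'), then items are joined with real newlines.
history = ['a = 3', 'print("hi")', 'multi\nline entry']

# save side: encode each entry, join with b'\n'
blob = b'\n'.join(item.encode('unicode_escape') for item in history)

# load side: split on real newlines, decode each escaped entry back
restored = [line.decode('unicode_escape') for line in blob.split(b'\n')]

assert restored == history
```

Because escaping removes raw newlines from each entry, splitting on `b'\n'` cannot cut an entry in half, which is what makes this format safe for multi-line history items.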
class PythonicReader(CompletingReader, HistoricalReader): def collect_keymap(self): @@ -97,17 +118,18 @@ else: self.compiler = compiler try: - file = open(os.path.expanduser("~/.pythoni.hist")) + file = open(os.path.expanduser("~/.pythoni.hist"), 'rb') except IOError: - pass + self.history = [] else: try: - self.history = pickle.load(file) + lines = file.readlines() + self.history = [ x.rstrip(b'\n').decode('unicode_escape') for x in lines] except: self.history = [] self.historyi = len(self.history) file.close() - atexit.register(mk_saver(self)) + atexit.register(lambda: saver(self)) for c in [maybe_accept]: self.commands[c.__name__] = c self.commands[c.__name__.replace('_', '-')] = c @@ -172,8 +194,7 @@ def execute(self, text): try: # ooh, look at the hack: - code = self.compile("# coding:utf8\n"+text.encode('utf-8'), - '', 'single') + code = self.compile(text, '', 'single') except (OverflowError, SyntaxError, ValueError): self.showsyntaxerror("") else: @@ -192,7 +213,7 @@ finally: warnings.showwarning = sv except KeyboardInterrupt: - print "KeyboardInterrupt" + print("KeyboardInterrupt") else: if l: self.execute(l) @@ -217,7 +238,7 @@ r = self.reader.handle1(block) except KeyboardInterrupt: self.restore() - print "KeyboardInterrupt" + print("KeyboardInterrupt") self.prepare() else: if self.reader.finished: @@ -253,7 +274,7 @@ if self.exc_info: type, value, tb = self.exc_info self.exc_info = None - raise type, value, tb + _reraise(type, value, tb) def tkinteract(self): """Run a Tk-aware Python interactive session. @@ -370,13 +391,13 @@ encoding = None # so you get ASCII... con = UnixConsole(0, 1, None, encoding) if print_banner: - print "Python", sys.version, "on", sys.platform - print 'Type "help", "copyright", "credits" or "license" '\ - 'for more information.' 
+ print("Python", sys.version, "on", sys.platform) + print('Type "help", "copyright", "credits" or "license" '\ + 'for more information.') sys.path.insert(0, os.getcwd()) if clear_main and __name__ != '__main__': - mainmod = new.module('__main__') + mainmod = imp.new_module('__main__') sys.modules['__main__'] = mainmod else: mainmod = sys.modules['__main__'] diff --git a/pyrepl/reader.py b/pyrepl/reader.py --- a/pyrepl/reader.py +++ b/pyrepl/reader.py @@ -19,25 +19,31 @@ # CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN # CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. -import types +from __future__ import unicode_literals from pyrepl import unicodedata_ from pyrepl import commands from pyrepl import input +try: + unicode +except NameError: + unicode = str + unichr = chr + basestring = bytes, str def _make_unctrl_map(): uc_map = {} for c in map(unichr, range(256)): - if unicodedata_.category(c)[0] <> 'C': + if unicodedata_.category(c)[0] != 'C': uc_map[c] = c for i in range(32): c = unichr(i) - uc_map[c] = u'^' + unichr(ord('A') + i - 1) - uc_map['\t'] = ' ' # display TABs as 4 characters - uc_map['\177'] = u'^?' 
+ uc_map[c] = '^' + unichr(ord('A') + i - 1) + uc_map[b'\t'] = ' ' # display TABs as 4 characters + uc_map[b'\177'] = unicode('^?') for i in range(256): c = unichr(i) - if not uc_map.has_key(c): - uc_map[c] = u'\\%03o'%i + if c not in uc_map: + uc_map[c] = unicode('\\%03o')%i return uc_map # disp_str proved to be a bottleneck for large inputs, so it's been @@ -56,7 +62,7 @@ return u[c] else: if unicodedata_.category(c).startswith('C'): - return '\u%04x'%(ord(c),) + return b'\u%04x'%(ord(c)) else: return c @@ -72,9 +78,12 @@ the list always contains 0s or 1s at present; it could conceivably go higher as and when unicode support happens.""" - s = map(uc, buffer) - return (join(s), - map(ord, join(map(lambda x:'\001'+(len(x)-1)*'\000', s)))) + s = [uc(x) for x in buffer] + b = [] #XXX: bytearray + for x in s: + b.append(1) + b.extend([0]*(len(x)-1)) + return join(s), b del _my_unctrl @@ -93,7 +102,7 @@ st[c] = SYNTAX_SYMBOL for c in [a for a in map(unichr, range(256)) if a.isalpha()]: st[c] = SYNTAX_WORD - st[u'\n'] = st[u' '] = SYNTAX_WHITESPACE + st[unicode('\n')] = st[unicode(' ')] = SYNTAX_WHITESPACE return st default_keymap = tuple( @@ -141,7 +150,7 @@ #(r'\M-\n', 'insert-nl'), ('\\\\', 'self-insert')] + \ [(c, 'self-insert') - for c in map(chr, range(32, 127)) if c <> '\\'] + \ + for c in map(chr, range(32, 127)) if c != '\\'] + \ [(c, 'self-insert') for c in map(chr, range(128, 256)) if c.isalpha()] + \ [(r'\', 'up'), @@ -160,7 +169,8 @@ # workaround ]) -del c # from the listcomps +if 'c' in globals(): # only on python 2.x + del c # from the listcomps class Reader(object): """The Reader class implements the bare bones of a command reader, @@ -281,7 +291,7 @@ p -= ll + 1 prompt, lp = self.process_prompt(prompt) l, l2 = disp_str(line) - wrapcount = (len(l) + lp) / w + wrapcount = (len(l) + lp) // w if wrapcount == 0: screen.append(prompt + l) screeninfo.append((lp, l2+[1])) @@ -337,7 +347,7 @@ st = self.syntax_table b = self.buffer p -= 1 - while p >= 0 and 
st.get(b[p], SYNTAX_WORD) <> SYNTAX_WORD: + while p >= 0 and st.get(b[p], SYNTAX_WORD) != SYNTAX_WORD: p -= 1 while p >= 0 and st.get(b[p], SYNTAX_WORD) == SYNTAX_WORD: p -= 1 @@ -353,7 +363,7 @@ p = self.pos st = self.syntax_table b = self.buffer - while p < len(b) and st.get(b[p], SYNTAX_WORD) <> SYNTAX_WORD: + while p < len(b) and st.get(b[p], SYNTAX_WORD) != SYNTAX_WORD: p += 1 while p < len(b) and st.get(b[p], SYNTAX_WORD) == SYNTAX_WORD: p += 1 @@ -369,7 +379,7 @@ p = self.pos b = self.buffer p -= 1 - while p >= 0 and b[p] <> '\n': + while p >= 0 and b[p] != '\n': p -= 1 return p + 1 @@ -381,7 +391,7 @@ if p is None: p = self.pos b = self.buffer - while p < len(b) and b[p] <> '\n': + while p < len(b) and b[p] != '\n': p += 1 return p @@ -515,11 +525,13 @@ def do_cmd(self, cmd): #print cmd - if isinstance(cmd[0], str): + if isinstance(cmd[0], basestring): #XXX: unify to text cmd = self.commands.get(cmd[0], - commands.invalid_command)(self, cmd) + commands.invalid_command)(self, *cmd) elif isinstance(cmd[0], type): cmd = cmd[0](self, cmd) + else: + return # nothing to do cmd.do() @@ -552,6 +564,8 @@ if not event: # can only happen if we're not blocking return None + translate = True + if event.evt == 'key': self.input_trans.push(event) elif event.evt == 'scroll': @@ -559,9 +573,12 @@ elif event.evt == 'resize': self.refresh() else: - pass + translate = False - cmd = self.input_trans.get() + if translate: + cmd = self.input_trans.get() + else: + cmd = event.evt, event.data if cmd is None: if block: @@ -603,11 +620,11 @@ def get_buffer(self, encoding=None): if encoding is None: encoding = self.console.encoding - return u''.join(self.buffer).encode(self.console.encoding) + return unicode('').join(self.buffer).encode(self.console.encoding) def get_unicode(self): """Return the current buffer as a unicode string.""" - return u''.join(self.buffer) + return unicode('').join(self.buffer) def test(): from pyrepl.unix_console import UnixConsole diff --git 
a/pyrepl/tests/__init__.py b/pyrepl/tests/__init__.py deleted file mode 100644 --- a/pyrepl/tests/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright 2000-2004 Michael Hudson-Doyle -# -# All Rights Reserved -# -# -# Permission to use, copy, modify, and distribute this software and -# its documentation for any purpose is hereby granted without fee, -# provided that the above copyright notice appear in all copies and -# that both that copyright notice and this permission notice appear in -# supporting documentation. -# -# THE AUTHOR MICHAEL HUDSON DISCLAIMS ALL WARRANTIES WITH REGARD TO -# THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY -# AND FITNESS, IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, -# INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER -# RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF -# CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN -# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - -# moo diff --git a/pyrepl/trace.py b/pyrepl/trace.py new file mode 100644 --- /dev/null +++ b/pyrepl/trace.py @@ -0,0 +1,17 @@ +import os + +trace_filename = os.environ.get("PYREPL_TRACE") + +if trace_filename is not None: + trace_file = open(trace_filename, 'a') +else: + trace_file = None + +def trace(line, *k, **kw): + if trace_file is None: + return + if k or kw: + line = line.format(*k, **kw) + trace_file.write(line+'\n') + trace_file.flush() + diff --git a/pyrepl/unix_console.py b/pyrepl/unix_console.py --- a/pyrepl/unix_console.py +++ b/pyrepl/unix_console.py @@ -22,14 +22,20 @@ import termios, select, os, struct, errno import signal, re, time, sys from fcntl import ioctl -from pyrepl import curses -from pyrepl.fancy_termios import tcgetattr, tcsetattr -from pyrepl.console import Console, Event -from pyrepl import unix_eventqueue +from . 
import curses +from .fancy_termios import tcgetattr, tcsetattr +from .console import Console, Event +from .unix_eventqueue import EventQueue +from .trace import trace class InvalidTerminal(RuntimeError): pass +try: + unicode +except NameError: + unicode = str + _error = (termios.error, curses.error, InvalidTerminal) # there are arguments for changing this to "refresh" @@ -41,8 +47,8 @@ def _my_getstr(cap, optional=0): r = curses.tigetstr(cap) if not optional and r is None: - raise InvalidTerminal, \ - "terminal doesn't have the required '%s' capability"%cap + raise InvalidTerminal( + "terminal doesn't have the required '%s' capability"%cap) return r # at this point, can we say: AAAAAAAAAAAAAAAAAAAAAARGH! @@ -58,7 +64,7 @@ del r, maybe_add_baudrate -delayprog = re.compile("\\$<([0-9]+)((?:/|\\*){0,2})>") +delayprog = re.compile(b"\\$<([0-9]+)((?:/|\\*){0,2})>") try: poll = select.poll @@ -93,6 +99,7 @@ else: self.output_fd = f_out.fileno() + self.pollob = poll() self.pollob.register(self.input_fd, POLLIN) curses.setupterm(term, self.output_fd) @@ -131,14 +138,14 @@ elif self._cub1 and self._cuf1: self.__move_x = self.__move_x_cub1_cuf1 else: - raise RuntimeError, "insufficient terminal (horizontal)" + raise RuntimeError("insufficient terminal (horizontal)") if self._cuu and self._cud: self.__move_y = self.__move_y_cuu_cud elif self._cuu1 and self._cud1: self.__move_y = self.__move_y_cuu1_cud1 else: - raise RuntimeError, "insufficient terminal (vertical)" + raise RuntimeError("insufficient terminal (vertical)") if self._dch1: self.dch1 = self._dch1 @@ -156,16 +163,16 @@ self.__move = self.__move_short - self.event_queue = unix_eventqueue.EventQueue(self.input_fd) - self.partial_char = '' + self.event_queue = EventQueue(self.input_fd) + self.partial_char = b'' self.cursor_visible = 1 def change_encoding(self, encoding): self.encoding = encoding - def refresh(self, screen, (cx, cy)): + def refresh(self, screen, c_xy): # this function is still too long (over 90 lines) - 
+ cx, cy = c_xy if not self.__gone_tall: while len(self.screen) < min(len(screen), self.height): self.__hide_cursor() @@ -303,6 +310,7 @@ self.__buffer.append((text, 0)) def __write_code(self, fmt, *args): + self.__buffer.append((curses.tparm(fmt, *args), 1)) def __maybe_write_code(self, fmt, *args): @@ -403,10 +411,11 @@ self.event_queue.insert(Event('resize', None)) def push_char(self, char): + trace('push char {char!r}', char=char) self.partial_char += char try: - c = unicode(self.partial_char, self.encoding) - except UnicodeError, e: + c = self.partial_char.decode(self.encoding) + except UnicodeError as e: if len(e.args) > 4 and \ e.args[4] == 'unexpected end of data': pass @@ -418,7 +427,7 @@ self.partial_char = '' sys.stderr.write('\n%s: %s\n' % (e.__class__.__name__, e)) else: - self.partial_char = '' + self.partial_char = b'' self.event_queue.push(c) def get_event(self, block=1): @@ -426,7 +435,7 @@ while 1: # All hail Unix! try: self.push_char(os.read(self.input_fd, 1)) - except (IOError, OSError), err: + except (IOError, OSError) as err: if err.errno == errno.EINTR: if not self.event_queue.empty(): return self.event_queue.get() diff --git a/pyrepl/unix_eventqueue.py b/pyrepl/unix_eventqueue.py --- a/pyrepl/unix_eventqueue.py +++ b/pyrepl/unix_eventqueue.py @@ -26,6 +26,11 @@ from pyrepl import curses from termios import tcgetattr, VERASE import os +try: + unicode +except NameError: + unicode = str + _keynames = { "delete" : "kdch1", @@ -54,7 +59,7 @@ if keycode: our_keycodes[keycode] = unicode(key) if os.isatty(fd): - our_keycodes[tcgetattr(fd)[6][VERASE]] = u'backspace' + our_keycodes[tcgetattr(fd)[6][VERASE]] = unicode('backspace') self.k = self.ck = keymap.compile_keymap(our_keycodes) self.events = [] self.buf = [] diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -39,7 +39,7 @@ license = "MIT X11 style", description = "A library for building flexible command line interfaces", platforms = ["unix", "linux"], - packages = ["pyrepl", 
"pyrepl.tests"], + packages = ["pyrepl" ], #ext_modules = [Extension("_pyrepl_utils", ["pyrepl_utilsmodule.c"])], scripts = ["pythoni", "pythoni1"], long_description = long_desc, diff --git a/testing/__init__.py b/testing/__init__.py new file mode 100644 diff --git a/pyrepl/tests/infrastructure.py b/testing/infrastructure.py rename from pyrepl/tests/infrastructure.py rename to testing/infrastructure.py --- a/pyrepl/tests/infrastructure.py +++ b/testing/infrastructure.py @@ -17,10 +17,9 @@ # CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN # CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. +from __future__ import print_function from pyrepl.reader import Reader from pyrepl.console import Console, Event -import unittest -import sys class EqualsAnything(object): def __eq__(self, other): @@ -32,51 +31,38 @@ width = 80 encoding = 'utf-8' - def __init__(self, events, testcase, verbose=False): + def __init__(self, events, verbose=False): self.events = events self.next_screen = None self.verbose = verbose - self.testcase = testcase def refresh(self, screen, xy): if self.next_screen is not None: - self.testcase.assertEqual( - screen, self.next_screen, - "[ %s != %s after %r ]"%(screen, self.next_screen, - self.last_event_name)) + assert screen == self.next_screen, "[ %s != %s after %r ]"%( + screen, self.next_screen, self.last_event_name) def get_event(self, block=1): ev, sc = self.events.pop(0) self.next_screen = sc if not isinstance(ev, tuple): - ev = (ev,) + ev = (ev, None) self.last_event_name = ev[0] if self.verbose: - print "event", ev + print("event", ev) return Event(*ev) class TestReader(Reader): + def get_prompt(self, lineno, cursor_on_line): return '' + def refresh(self): Reader.refresh(self) self.dirty = True -class ReaderTestCase(unittest.TestCase): - def run_test(self, test_spec, reader_class=TestReader): - # remember to finish your test_spec with 'accept' or similar! 
- con = TestConsole(test_spec, self) - reader = reader_class(con) - reader.readline() +def read_spec(test_spec, reader_class=TestReader): + # remember to finish your test_spec with 'accept' or similar! + con = TestConsole(test_spec, verbose=True) + reader = reader_class(con) + reader.readline() -class BasicTestRunner: - def run(self, test): - result = unittest.TestResult() - test(result) - return result - -def run_testcase(testclass): - suite = unittest.makeSuite(testclass) - runner = unittest.TextTestRunner(sys.stdout, verbosity=1) - result = runner.run(suite) - diff --git a/pyrepl/tests/basic.py b/testing/test_basic.py rename from pyrepl/tests/basic.py rename to testing/test_basic.py --- a/pyrepl/tests/basic.py +++ b/testing/test_basic.py @@ -16,100 +16,89 @@ # RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF # CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN # CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. +import pytest +from .infrastructure import read_spec, EA -from pyrepl.console import Event -from pyrepl.tests.infrastructure import ReaderTestCase, EA, run_testcase -class SimpleTestCase(ReaderTestCase): +def test_basic(): + read_spec([(('self-insert', 'a'), ['a']), + ( 'accept', ['a'])]) - def test_basic(self): - self.run_test([(('self-insert', 'a'), ['a']), - ( 'accept', ['a'])]) +def test_repeat(): + read_spec([(('digit-arg', '3'), ['']), + (('self-insert', 'a'), ['aaa']), + ( 'accept', ['aaa'])]) - def test_repeat(self): - self.run_test([(('digit-arg', '3'), ['']), - (('self-insert', 'a'), ['aaa']), - ( 'accept', ['aaa'])]) +def test_kill_line(): + read_spec([(('self-insert', 'abc'), ['abc']), + ( 'left', None), + ( 'kill-line', ['ab']), + ( 'accept', ['ab'])]) - def test_kill_line(self): - self.run_test([(('self-insert', 'abc'), ['abc']), - ( 'left', None), - ( 'kill-line', ['ab']), - ( 'accept', ['ab'])]) +def test_unix_line_discard(): + read_spec([(('self-insert', 'abc'), ['abc']), + ( 'left', None), + ( 
'unix-word-rubout', ['c']), + ( 'accept', ['c'])]) - def test_unix_line_discard(self): - self.run_test([(('self-insert', 'abc'), ['abc']), - ( 'left', None), - ( 'unix-word-rubout', ['c']), - ( 'accept', ['c'])]) +def test_kill_word(): + read_spec([(('self-insert', 'ab cd'), ['ab cd']), + ( 'beginning-of-line', ['ab cd']), + ( 'kill-word', [' cd']), + ( 'accept', [' cd'])]) - def test_kill_word(self): - self.run_test([(('self-insert', 'ab cd'), ['ab cd']), - ( 'beginning-of-line', ['ab cd']), - ( 'kill-word', [' cd']), - ( 'accept', [' cd'])]) +def test_backward_kill_word(): + read_spec([(('self-insert', 'ab cd'), ['ab cd']), + ( 'backward-kill-word', ['ab ']), + ( 'accept', ['ab '])]) - def test_backward_kill_word(self): - self.run_test([(('self-insert', 'ab cd'), ['ab cd']), - ( 'backward-kill-word', ['ab ']), - ( 'accept', ['ab '])]) +def test_yank(): + read_spec([(('self-insert', 'ab cd'), ['ab cd']), + ( 'backward-kill-word', ['ab ']), + ( 'beginning-of-line', ['ab ']), + ( 'yank', ['cdab ']), + ( 'accept', ['cdab '])]) + +def test_yank_pop(): + read_spec([(('self-insert', 'ab cd'), ['ab cd']), + ( 'backward-kill-word', ['ab ']), + ( 'left', ['ab ']), + ( 'backward-kill-word', [' ']), + ( 'yank', ['ab ']), + ( 'yank-pop', ['cd ']), + ( 'accept', ['cd '])]) - def test_yank(self): - self.run_test([(('self-insert', 'ab cd'), ['ab cd']), - ( 'backward-kill-word', ['ab ']), - ( 'beginning-of-line', ['ab ']), - ( 'yank', ['cdab ']), - ( 'accept', ['cdab '])]) - - def test_yank_pop(self): - self.run_test([(('self-insert', 'ab cd'), ['ab cd']), - ( 'backward-kill-word', ['ab ']), - ( 'left', ['ab ']), - ( 'backward-kill-word', [' ']), - ( 'yank', ['ab ']), - ( 'yank-pop', ['cd ']), - ( 'accept', ['cd '])]) +def test_interrupt(): + with pytest.raises(KeyboardInterrupt): + read_spec([( 'interrupt', [''])]) - def test_interrupt(self): - try: - self.run_test([( 'interrupt', [''])]) - except KeyboardInterrupt: - pass - else: - self.fail('KeyboardInterrupt got lost') +# 
test_suspend -- hah - # test_suspend -- hah +def test_up(): + read_spec([(('self-insert', 'ab\ncd'), ['ab', 'cd']), + ( 'up', ['ab', 'cd']), + (('self-insert', 'e'), ['abe', 'cd']), + ( 'accept', ['abe', 'cd'])]) - def test_up(self): - self.run_test([(('self-insert', 'ab\ncd'), ['ab', 'cd']), - ( 'up', ['ab', 'cd']), - (('self-insert', 'e'), ['abe', 'cd']), - ( 'accept', ['abe', 'cd'])]) +def test_down(): + read_spec([(('self-insert', 'ab\ncd'), ['ab', 'cd']), + ( 'up', ['ab', 'cd']), + (('self-insert', 'e'), ['abe', 'cd']), + ( 'down', ['abe', 'cd']), + (('self-insert', 'f'), ['abe', 'cdf']), + ( 'accept', ['abe', 'cdf'])]) - def test_down(self): - self.run_test([(('self-insert', 'ab\ncd'), ['ab', 'cd']), - ( 'up', ['ab', 'cd']), - (('self-insert', 'e'), ['abe', 'cd']), - ( 'down', ['abe', 'cd']), - (('self-insert', 'f'), ['abe', 'cdf']), - ( 'accept', ['abe', 'cdf'])]) +def test_left(): + read_spec([(('self-insert', 'ab'), ['ab']), + ( 'left', ['ab']), + (('self-insert', 'c'), ['acb']), + ( 'accept', ['acb'])]) - def test_left(self): - self.run_test([(('self-insert', 'ab'), ['ab']), - ( 'left', ['ab']), - (('self-insert', 'c'), ['acb']), - ( 'accept', ['acb'])]) +def test_right(): + read_spec([(('self-insert', 'ab'), ['ab']), + ( 'left', ['ab']), + (('self-insert', 'c'), ['acb']), + ( 'right', ['acb']), + (('self-insert', 'd'), ['acbd']), + ( 'accept', ['acbd'])]) - def test_right(self): - self.run_test([(('self-insert', 'ab'), ['ab']), - ( 'left', ['ab']), - (('self-insert', 'c'), ['acb']), - ( 'right', ['acb']), - (('self-insert', 'd'), ['acbd']), - ( 'accept', ['acbd'])]) - -def test(): - run_testcase(SimpleTestCase) - -if __name__ == '__main__': - test() diff --git a/pyrepl/tests/bugs.py b/testing/test_bugs.py rename from pyrepl/tests/bugs.py rename to testing/test_bugs.py --- a/pyrepl/tests/bugs.py +++ b/testing/test_bugs.py @@ -17,20 +17,15 @@ # CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN # CONNECTION WITH THE USE OR PERFORMANCE OF 
THIS SOFTWARE. -from pyrepl.console import Event -from pyrepl.tests.infrastructure import ReaderTestCase, EA, run_testcase +from .infrastructure import EA, read_spec # this test case should contain as-verbatim-as-possible versions of # (applicable) bug reports -class BugsTestCase(ReaderTestCase): +import pytest - def test_transpose_at_start(self): - self.run_test([( 'transpose', [EA, '']), - ( 'accept', [''])]) + at pytest.mark.xfail(reason='event missing', run=False) +def test_transpose_at_start(): + read_spec([( 'transpose', [EA, '']), + ( 'accept', [''])]) -def test(): - run_testcase(BugsTestCase) - -if __name__ == '__main__': - test() diff --git a/pyrepl/test/test_functional.py b/testing/test_functional.py rename from pyrepl/test/test_functional.py rename to testing/test_functional.py --- a/pyrepl/test/test_functional.py +++ b/testing/test_functional.py @@ -20,31 +20,22 @@ # some functional tests, to see if this is really working -import py +import pytest import sys -class TestTerminal(object): - def _spawn(self, *args, **kwds): - try: - import pexpect - except ImportError, e: - py.test.skip(str(e)) - kwds.setdefault('timeout', 10) - child = pexpect.spawn(*args, **kwds) - child.logfile = sys.stdout - return child +def pytest_funcarg__child(request): + try: + pexpect = pytest.importorskip('pexpect') + except SyntaxError: + pytest.skip('pexpect wont work on py3k') + child = pexpect.spawn(sys.executable, ['-S'], timeout=10) + child.logfile = sys.stdout + child.sendline('from pyrepl.python_reader import main') + child.sendline('main()') + return child - def spawn(self, argv=[]): - # avoid running start.py, cause it might contain - # things like readline or rlcompleter(2) included - child = self._spawn(sys.executable, ['-S'] + argv) - child.sendline('from pyrepl.python_reader import main') - child.sendline('main()') - return child +def test_basic(child): + child.sendline('a = 3') + child.sendline('a') + child.expect('3') - def test_basic(self): - child = 
self.spawn() - child.sendline('a = 3') - child.sendline('a') - child.expect('3') - diff --git a/testing/test_keymap.py b/testing/test_keymap.py new file mode 100644 --- /dev/null +++ b/testing/test_keymap.py @@ -0,0 +1,12 @@ +import pytest +from pyrepl.keymap import compile_keymap + + + at pytest.mark.skip('completely wrong') +def test_compile_keymap(): + k = compile_keymap({ + b'a': 'test', + b'bc': 'test2', + }) + + assert k == {b'a': 'test', b'b': { b'c': 'test2'}} diff --git a/pyrepl/tests/wishes.py b/testing/test_wishes.py rename from pyrepl/tests/wishes.py rename to testing/test_wishes.py --- a/pyrepl/tests/wishes.py +++ b/testing/test_wishes.py @@ -17,22 +17,15 @@ # CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN # CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. -from pyrepl.console import Event -from pyrepl.tests.infrastructure import ReaderTestCase, EA, run_testcase +from .infrastructure import EA, read_spec # this test case should contain as-verbatim-as-possible versions of # (applicable) feature requests -class WishesTestCase(ReaderTestCase): - def test_quoted_insert_repeat(self): - self.run_test([(('digit-arg', '3'), ['']), - ( 'quoted-insert', ['']), - (('self-insert', '\033'), ['^[^[^[']), - ( 'accept', None)]) +def test_quoted_insert_repeat(): + read_spec([(('digit-arg', '3'), ['']), + ( 'quoted-insert', ['']), + (('self-insert', '\033'), ['^[^[^[']), + ( 'accept', None)]) -def test(): - run_testcase(WishesTestCase) - -if __name__ == '__main__': - test() diff --git a/tox.ini b/tox.ini --- a/tox.ini +++ b/tox.ini @@ -1,6 +1,9 @@ [tox] envlist= py27, py32 +[pytest] +codechecks = pep8 pyflakes + [testenv] deps= pytest From noreply at buildbot.pypy.org Thu May 3 10:33:17 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 3 May 2012 10:33:17 +0200 (CEST) Subject: [pypy-commit] pyrepl default: kill disabled setaf writeout code Message-ID: <20120503083317.6D23482009@wyvern.cs.uni-duesseldorf.de> Author: Ronny 
Pfannschmidt Branch: Changeset: r173:54c21f79d92f Date: 2012-05-03 10:25 +0200 http://bitbucket.org/pypy/pyrepl/changeset/54c21f79d92f/ Log: kill disabled setaf writeout code diff --git a/pyrepl/unix_console.py b/pyrepl/unix_console.py --- a/pyrepl/unix_console.py +++ b/pyrepl/unix_console.py @@ -192,16 +192,6 @@ old_offset = offset = self.__offset height = self.height - if 0: - global counter - try: - counter - except NameError: - counter = 0 - self.__write_code(curses.tigetstr("setaf"), counter) - counter += 1 - if counter > 8: - counter = 0 # we make sure the cursor is on the screen, and that we're # using all of the screen if we can From noreply at buildbot.pypy.org Thu May 3 10:33:18 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 3 May 2012 10:33:18 +0200 (CEST) Subject: [pypy-commit] pyrepl default: various pep8/pyflakes fixes Message-ID: <20120503083318.89BA382009@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r174:136fccdf085e Date: 2012-05-03 10:25 +0200 http://bitbucket.org/pypy/pyrepl/changeset/136fccdf085e/ Log: various pep8/pyflakes fixes diff --git a/testing/test_basic.py b/testing/test_basic.py --- a/testing/test_basic.py +++ b/testing/test_basic.py @@ -17,7 +17,7 @@ # CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN # CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
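The commits in this batch convert pyrepl's unittest-style ReaderTestCase methods into plain pytest functions driven by a read_spec helper. The real helper (in testing/infrastructure.py) replays events through an actual Reader; the stand-in below only sketches the (event, expected_screen_lines) replay contract against a toy line buffer, so everything in it beyond the spec format itself is invented for illustration:

```python
def read_spec(spec):
    # Toy stand-in: replay (event, expected_lines) pairs against a
    # plain list-of-strings buffer instead of a real pyrepl Reader.
    lines, row, col = [''], 0, 0
    for event, expected in spec:
        name, arg = event if isinstance(event, tuple) else (event, None)
        if name == 'self-insert':
            for ch in arg:
                if ch == '\n':                     # split the current line
                    lines.insert(row + 1, lines[row][col:])
                    lines[row] = lines[row][:col]
                    row, col = row + 1, 0
                else:
                    lines[row] = lines[row][:col] + ch + lines[row][col:]
                    col += 1
        elif name == 'left':
            col = max(col - 1, 0)
        elif name == 'right':
            col = min(col + 1, len(lines[row]))
        # 'accept' (and anything unhandled) just ends the interaction
        if expected is not None:
            assert lines == expected, (lines, expected)
    return lines

# Mirrors test_left from the diff above.
read_spec([(('self-insert', 'ab'), ['ab']),
           ('left',               ['ab']),
           (('self-insert', 'c'), ['acb']),
           ('accept',             ['acb'])])
```

The point of the refactoring is visible in the shape of the spec: each step pairs one keymap event with the full expected screen state, so a failing step pinpoints exactly which event diverged.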
import pytest -from .infrastructure import read_spec, EA +from .infrastructure import read_spec def test_basic(): diff --git a/testing/test_keymap.py b/testing/test_keymap.py --- a/testing/test_keymap.py +++ b/testing/test_keymap.py @@ -9,4 +9,4 @@ b'bc': 'test2', }) - assert k == {b'a': 'test', b'b': { b'c': 'test2'}} + assert k == {b'a': 'test', b'b': {b'c': 'test2'}} diff --git a/testing/test_wishes.py b/testing/test_wishes.py --- a/testing/test_wishes.py +++ b/testing/test_wishes.py @@ -17,7 +17,7 @@ # CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN # CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. -from .infrastructure import EA, read_spec +from .infrastructure import read_spec # this test case should contain as-verbatim-as-possible versions of # (applicable) feature requests @@ -25,7 +25,7 @@ def test_quoted_insert_repeat(): read_spec([(('digit-arg', '3'), ['']), - ( 'quoted-insert', ['']), + (('quoted-insert',), ['']), (('self-insert', '\033'), ['^[^[^[']), - ( 'accept', None)]) + (('accept',), None)]) From noreply at buildbot.pypy.org Thu May 3 10:33:19 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 3 May 2012 10:33:19 +0200 (CEST) Subject: [pypy-commit] pyrepl default: fix test Message-ID: <20120503083319.A4E6882009@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r175:b9b2b1d9277a Date: 2012-05-03 10:29 +0200 http://bitbucket.org/pypy/pyrepl/changeset/b9b2b1d9277a/ Log: fix test diff --git a/testing/test_wishes.py b/testing/test_wishes.py --- a/testing/test_wishes.py +++ b/testing/test_wishes.py @@ -25,7 +25,7 @@ def test_quoted_insert_repeat(): read_spec([(('digit-arg', '3'), ['']), - (('quoted-insert',), ['']), + (('quoted-insert', None), ['']), (('self-insert', '\033'), ['^[^[^[']), - (('accept',), None)]) + (('accept', None), None)]) From noreply at buildbot.pypy.org Thu May 3 10:33:20 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 3 May 2012 
10:33:20 +0200 (CEST) Subject: [pypy-commit] pyrepl default: kill pyrepl.unicodedata_ for the stdlib one Message-ID: <20120503083320.C055182009@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r176:d5be341ae99e Date: 2012-05-03 10:32 +0200 http://bitbucket.org/pypy/pyrepl/changeset/d5be341ae99e/ Log: kill pyrepl.unicodedata_ for the stdlib one diff --git a/pyrepl/input.py b/pyrepl/input.py --- a/pyrepl/input.py +++ b/pyrepl/input.py @@ -33,7 +33,7 @@ # [meta-key] is identified with [esc key]. We demand that any console # class does quite a lot towards emulating a unix terminal. from __future__ import print_function -from pyrepl import unicodedata_ +import unicodedata from collections import deque @@ -79,7 +79,7 @@ if d is None: if self.verbose: print("invalid") - if self.stack or len(key) > 1 or unicodedata_.category(key) == 'C': + if self.stack or len(key) > 1 or unicodedata.category(key) == 'C': self.results.append( (self.invalid_cls, self.stack + [key])) else: diff --git a/pyrepl/reader.py b/pyrepl/reader.py --- a/pyrepl/reader.py +++ b/pyrepl/reader.py @@ -20,7 +20,7 @@ # CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
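The changeset above drops pyrepl's bundled ASCII-only unicodedata_ shim in favour of the standard library's unicodedata module. The only API the shim supplied was category(), returning the two-letter Unicode general-category codes that also fill the deleted lookup table; for example:

```python
import unicodedata

# Two-letter general-category codes: the first letter is the major
# class (L = letter, C = control/other, Z = separator, N = number).
print(unicodedata.category('a'))      # Ll (lowercase letter)
print(unicodedata.category('A'))      # Lu (uppercase letter)
print(unicodedata.category(' '))      # Zs (space separator)
print(unicodedata.category('\x1b'))   # Cc (control character: ESC)

# pyrepl only needs the major class: the reader.py hunk above checks
# category(c).startswith('C') to detect unprintable control characters.
key = '\x07'
assert unicodedata.category(key).startswith('C')
```

Since the shim's table only covered code points 0-254, switching to the stdlib module also makes the category test correct for non-Latin-1 input.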
from __future__ import unicode_literals -from pyrepl import unicodedata_ +import unicodedata from pyrepl import commands from pyrepl import input try: @@ -33,7 +33,7 @@ def _make_unctrl_map(): uc_map = {} for c in map(unichr, range(256)): - if unicodedata_.category(c)[0] != 'C': + if unicodedata.category(c)[0] != 'C': uc_map[c] = c for i in range(32): c = unichr(i) @@ -61,7 +61,7 @@ if c in u: return u[c] else: - if unicodedata_.category(c).startswith('C'): + if unicodedata.category(c).startswith('C'): return b'\u%04x'%(ord(c)) else: return c diff --git a/pyrepl/unicodedata_.py b/pyrepl/unicodedata_.py deleted file mode 100644 --- a/pyrepl/unicodedata_.py +++ /dev/null @@ -1,59 +0,0 @@ -try: - from unicodedata import * -except ImportError: - - def category(ch): - """ - ASCII only implementation - """ - if type(ch) is not unicode: - raise TypeError - if len(ch) != 1: - raise TypeError - return _categories.get(ord(ch), 'Co') # "Other, private use" - - _categories = { - 0: 'Cc', 1: 'Cc', 2: 'Cc', 3: 'Cc', 4: 'Cc', 5: 'Cc', - 6: 'Cc', 7: 'Cc', 8: 'Cc', 9: 'Cc', 10: 'Cc', 11: 'Cc', - 12: 'Cc', 13: 'Cc', 14: 'Cc', 15: 'Cc', 16: 'Cc', 17: 'Cc', - 18: 'Cc', 19: 'Cc', 20: 'Cc', 21: 'Cc', 22: 'Cc', 23: 'Cc', - 24: 'Cc', 25: 'Cc', 26: 'Cc', 27: 'Cc', 28: 'Cc', 29: 'Cc', - 30: 'Cc', 31: 'Cc', 32: 'Zs', 33: 'Po', 34: 'Po', 35: 'Po', - 36: 'Sc', 37: 'Po', 38: 'Po', 39: 'Po', 40: 'Ps', 41: 'Pe', - 42: 'Po', 43: 'Sm', 44: 'Po', 45: 'Pd', 46: 'Po', 47: 'Po', - 48: 'Nd', 49: 'Nd', 50: 'Nd', 51: 'Nd', 52: 'Nd', 53: 'Nd', - 54: 'Nd', 55: 'Nd', 56: 'Nd', 57: 'Nd', 58: 'Po', 59: 'Po', - 60: 'Sm', 61: 'Sm', 62: 'Sm', 63: 'Po', 64: 'Po', 65: 'Lu', - 66: 'Lu', 67: 'Lu', 68: 'Lu', 69: 'Lu', 70: 'Lu', 71: 'Lu', - 72: 'Lu', 73: 'Lu', 74: 'Lu', 75: 'Lu', 76: 'Lu', 77: 'Lu', - 78: 'Lu', 79: 'Lu', 80: 'Lu', 81: 'Lu', 82: 'Lu', 83: 'Lu', - 84: 'Lu', 85: 'Lu', 86: 'Lu', 87: 'Lu', 88: 'Lu', 89: 'Lu', - 90: 'Lu', 91: 'Ps', 92: 'Po', 93: 'Pe', 94: 'Sk', 95: 'Pc', - 96: 'Sk', 97: 'Ll', 98: 'Ll', 99: 
'Ll', 100: 'Ll', 101: 'Ll', - 102: 'Ll', 103: 'Ll', 104: 'Ll', 105: 'Ll', 106: 'Ll', 107: 'Ll', - 108: 'Ll', 109: 'Ll', 110: 'Ll', 111: 'Ll', 112: 'Ll', 113: 'Ll', - 114: 'Ll', 115: 'Ll', 116: 'Ll', 117: 'Ll', 118: 'Ll', 119: 'Ll', - 120: 'Ll', 121: 'Ll', 122: 'Ll', 123: 'Ps', 124: 'Sm', 125: 'Pe', - 126: 'Sm', 127: 'Cc', 128: 'Cc', 129: 'Cc', 130: 'Cc', 131: 'Cc', - 132: 'Cc', 133: 'Cc', 134: 'Cc', 135: 'Cc', 136: 'Cc', 137: 'Cc', - 138: 'Cc', 139: 'Cc', 140: 'Cc', 141: 'Cc', 142: 'Cc', 143: 'Cc', - 144: 'Cc', 145: 'Cc', 146: 'Cc', 147: 'Cc', 148: 'Cc', 149: 'Cc', - 150: 'Cc', 151: 'Cc', 152: 'Cc', 153: 'Cc', 154: 'Cc', 155: 'Cc', - 156: 'Cc', 157: 'Cc', 158: 'Cc', 159: 'Cc', 160: 'Zs', 161: 'Po', - 162: 'Sc', 163: 'Sc', 164: 'Sc', 165: 'Sc', 166: 'So', 167: 'So', - 168: 'Sk', 169: 'So', 170: 'Ll', 171: 'Pi', 172: 'Sm', 173: 'Cf', - 174: 'So', 175: 'Sk', 176: 'So', 177: 'Sm', 178: 'No', 179: 'No', - 180: 'Sk', 181: 'Ll', 182: 'So', 183: 'Po', 184: 'Sk', 185: 'No', - 186: 'Ll', 187: 'Pf', 188: 'No', 189: 'No', 190: 'No', 191: 'Po', - 192: 'Lu', 193: 'Lu', 194: 'Lu', 195: 'Lu', 196: 'Lu', 197: 'Lu', - 198: 'Lu', 199: 'Lu', 200: 'Lu', 201: 'Lu', 202: 'Lu', 203: 'Lu', - 204: 'Lu', 205: 'Lu', 206: 'Lu', 207: 'Lu', 208: 'Lu', 209: 'Lu', - 210: 'Lu', 211: 'Lu', 212: 'Lu', 213: 'Lu', 214: 'Lu', 215: 'Sm', - 216: 'Lu', 217: 'Lu', 218: 'Lu', 219: 'Lu', 220: 'Lu', 221: 'Lu', - 222: 'Lu', 223: 'Ll', 224: 'Ll', 225: 'Ll', 226: 'Ll', 227: 'Ll', - 228: 'Ll', 229: 'Ll', 230: 'Ll', 231: 'Ll', 232: 'Ll', 233: 'Ll', - 234: 'Ll', 235: 'Ll', 236: 'Ll', 237: 'Ll', 238: 'Ll', 239: 'Ll', - 240: 'Ll', 241: 'Ll', 242: 'Ll', 243: 'Ll', 244: 'Ll', 245: 'Ll', - 246: 'Ll', 247: 'Sm', 248: 'Ll', 249: 'Ll', 250: 'Ll', 251: 'Ll', - 252: 'Ll', 253: 'Ll', 254: 'Ll' - } From noreply at buildbot.pypy.org Thu May 3 10:42:15 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 May 2012 10:42:15 +0200 (CEST) Subject: [pypy-commit] pypy stm-thread: As per pypy-dev discussion: try a different 
model with threads Message-ID: <20120503084215.0B8C182009@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-thread Changeset: r54880:24620c58821b Date: 2012-05-02 18:56 +0200 http://bitbucket.org/pypy/pypy/changeset/24620c58821b/ Log: As per pypy-dev discussion: try a different model with threads From noreply at buildbot.pypy.org Thu May 3 10:42:16 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 May 2012 10:42:16 +0200 (CEST) Subject: [pypy-commit] pypy stm-thread: First, kill the 'transaction' module. Message-ID: <20120503084216.9F6BA82009@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-thread Changeset: r54881:117ef34b3249 Date: 2012-05-02 19:07 +0200 http://bitbucket.org/pypy/pypy/changeset/117ef34b3249/ Log: First, kill the 'transaction' module. diff --git a/lib_pypy/transaction.py b/lib_pypy/transaction.py deleted file mode 100644 --- a/lib_pypy/transaction.py +++ /dev/null @@ -1,128 +0,0 @@ -""" -Minimal example of usage: - - for i in range(10): - transaction.add(do_stuff, i) - transaction.run() - -This schedules and runs all ten do_stuff(i), each in its own transaction. -Each one can also add more transactions to run afterwards, and so on. -The call to run() returns when all transactions have completed. - -From the API point of view it is as if the do_stuff(i) were run serially -in some random order. If you use a real implementation instead of this -one (which is here for trying things out), then the transactions can -actually run in parallel on multiple cores. -""" - -import sys -import random - - -print >> sys.stderr, "warning: using lib_pypy/transaction.py, the emulator" - -_pending = {} -_in_transaction = False - - -class TransactionError(Exception): - pass - - -def set_num_threads(num): - """Set the number of threads to use. In a real implementation, - the transactions will attempt to use 'num' threads in parallel. 
- """ - - -def add(f, *args, **kwds): - """Register the call 'f(*args, **kwds)' as running a new - transaction. If we are currently running in a transaction too, the - new transaction will only start after the end of the current - transaction. Note that if the same or another transaction raises an - exception in the meantime, all pending transactions are cancelled. - """ - r = random.random() - assert r not in _pending # very bad luck if it is - _pending[r] = (f, args, kwds) - - -def add_epoll(ep, callback): - """Register the epoll object (from the 'select' module). For any - event (fd, events) detected by 'ep', a new transaction will be - started invoking 'callback(fd, events)'. Note that all fds should - be registered with the flag select.EPOLLONESHOT, and re-registered - from the callback if needed. - """ - for key, (f, args, kwds) in _pending.items(): - if getattr(f, '_reads_from_epoll_', None) is ep: - raise TransactionError("add_epoll(ep): ep is already registered") - def poll_reader(): - # assume only one epoll is added. If the _pending list is - # now empty, wait. If not, then just poll non-blockingly. - if len(_pending) == 0: - timeout = -1 - else: - timeout = 0 - got = ep.poll(timeout=timeout) - for fd, events in got: - add(callback, fd, events) - add(poll_reader) - poll_reader._reads_from_epoll_ = ep - add(poll_reader) - -def remove_epoll(ep): - """Explicitly unregister the epoll object. Note that raising an - exception in a transaction to abort run() also unregisters all epolls. - However, an epoll that becomes empty (doesn't wait on any fd) is not - automatically removed; if there is only an empty epoll left and no - further transactions, and no-one raised an exception, then it will - basically deadlock. 
- """ - for key, (f, args, kwds) in _pending.items(): - if getattr(f, '_reads_from_epoll_', None) is ep: - del _pending[key] - break - else: - raise TransactionError("remove_epoll(ep): ep is not registered") - -def run(): - """Run the pending transactions, as well as all transactions started - by them, and so on. The order is random and undeterministic. Must - be called from the main program, i.e. not from within another - transaction. If at some point all transactions are done, returns. - If a transaction raises an exception, it propagates here; in this - case all pending transactions are cancelled. - """ - global _pending, _in_transaction - if _in_transaction: - raise TransactionError("recursive invocation of transaction.run()") - pending = _pending - try: - _in_transaction = True - while pending: - _, (f, args, kwds) = pending.popitem() - f(*args, **kwds) - finally: - _in_transaction = False - pending.clear() # this is the behavior we get with interp_transaction - - -class local(object): - """Thread-local data. Behaves like a regular object, but its content - is not shared between multiple concurrently-running transactions. - It can be accessed without conflicts. - - It can be used for purely transaction-local data that needs to be - stored in a single global object. As long as the data is not needed - after the transaction, e.g. because it is removed before the - transaction ends, using a "local" instance avoids conflicts which - would otherwise systematically trigger. - - Values that remain in a "local" instance after the end of a - transaction are visible in the next transaction that happens to be - executed on the same thread. This can be used for long-living - caches that store values that are (1) not too costly to compute, - because they will end up being computed once per thread; and (2) not - too memory-hungry, because of the replicated storage. 
- """ diff --git a/pypy/module/transaction/__init__.py b/pypy/module/transaction/__init__.py deleted file mode 100644 --- a/pypy/module/transaction/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ - -from pypy.interpreter.mixedmodule import MixedModule - -class Module(MixedModule): - """Transaction module. XXX document me - """ - - interpleveldefs = { - 'set_num_threads': 'interp_transaction.set_num_threads', - 'add': 'interp_transaction.add', - 'run': 'interp_transaction.run', - #'add_epoll': 'interp_epoll.add_epoll', # xxx linux only - #'remove_epoll': 'interp_epoll.remove_epoll', # xxx linux only - 'local': 'interp_local.W_Local', - } - - appleveldefs = { - 'TransactionError': 'app_transaction.TransactionError', - } - - def __init__(self, space, *args): - "NOT_RPYTHON: patches space.threadlocals to use real threadlocals" - from pypy.module.transaction import interp_transaction - MixedModule.__init__(self, space, *args) - space.threadlocals = interp_transaction.getstate(space) - - def startup(self, space): - from pypy.module.transaction import interp_transaction - state = interp_transaction.getstate(space) - state.startup(space.wrap(self)) diff --git a/pypy/module/transaction/app_transaction.py b/pypy/module/transaction/app_transaction.py deleted file mode 100644 --- a/pypy/module/transaction/app_transaction.py +++ /dev/null @@ -1,3 +0,0 @@ - -class TransactionError(Exception): - pass diff --git a/pypy/module/transaction/interp_epoll.py b/pypy/module/transaction/interp_epoll.py deleted file mode 100644 --- a/pypy/module/transaction/interp_epoll.py +++ /dev/null @@ -1,126 +0,0 @@ - -# Linux-only - -from __future__ import with_statement -import os -from errno import EINTR -from pypy.rpython.lltypesystem import lltype, rffi -from pypy.interpreter.gateway import unwrap_spec -from pypy.interpreter.error import OperationError -from pypy.module.select import interp_epoll -from pypy.module.select.interp_epoll import W_Epoll, FD_SETSIZE -from pypy.module.select.interp_epoll 
import epoll_event -from pypy.module.transaction import interp_transaction -from pypy.rlib import rstm, rposix - - -# a _nowrapper version, to be sure that it does not allocate anything -_epoll_wait = rffi.llexternal( - "epoll_wait", - [rffi.INT, lltype.Ptr(rffi.CArray(epoll_event)), rffi.INT, rffi.INT], - rffi.INT, - compilation_info = interp_epoll.eci, - _nowrapper = True -) - - -class EPollPending(interp_transaction.AbstractPending): - maxevents = FD_SETSIZE - 1 # for now - evs = lltype.nullptr(rffi.CArray(epoll_event)) - - def __init__(self, space, epoller, w_callback): - self.space = space - self.epoller = epoller - self.w_callback = w_callback - self.evs = lltype.malloc(rffi.CArray(epoll_event), self.maxevents, - flavor='raw', add_memory_pressure=True, - track_allocation=False) - self.force_quit = False - - def __del__(self): - evs = self.evs - if evs: - self.evs = lltype.nullptr(rffi.CArray(epoll_event)) - lltype.free(evs, flavor='raw', track_allocation=False) - - def run(self): - # This code is run non-transactionally. Careful, no GC available. - ts = interp_transaction.state.transactionalstate - if ts.has_exception() or self.force_quit: - return - fd = rffi.cast(rffi.INT, self.epoller.epfd) - maxevents = rffi.cast(rffi.INT, self.maxevents) - timeout = rffi.cast(rffi.INT, 500) # for now: half a second - nfds = _epoll_wait(fd, self.evs, maxevents, timeout) - nfds = rffi.cast(lltype.Signed, nfds) - # - if nfds < 0: - errno = rposix.get_errno() - if errno == EINTR: - nfds = 0 # ignore, just wait for more later - else: - # unsure how to trigger this case - ts = interp_transaction.state.transactionalstate - ts.got_exception_errno = errno - ts.must_reraise_exception(_reraise_from_errno) - return - # We have to allocate new PendingCallback objects, but we can't - # allocate anything here because we are not running transactionally. - # Workaround for now: run a new tiny transaction just to create - # and register these PendingCallback's. 
- self.nfds = nfds - rstm.perform_transaction(EPollPending._add_real_transactions, - EPollPending, self) - # XXX could be avoided in the common case with some pool of - # PendingCallback instances - - @staticmethod - def _add_real_transactions(self, retry_counter): - evs = self.evs - for i in range(self.nfds): - event = evs[i] - fd = rffi.cast(lltype.Signed, event.c_data.c_fd) - PendingCallback(self.w_callback, fd, event.c_events).register() - # re-register myself to epoll_wait() for more - self.register() - - -class PendingCallback(interp_transaction.AbstractPending): - def __init__(self, w_callback, fd, events): - self.w_callback = w_callback - self.fd = fd - self.events = events - - def run_in_transaction(self, space): - space.call_function(self.w_callback, space.wrap(self.fd), - space.wrap(self.events)) - - -def _reraise_from_errno(transactionalstate): - space = interp_transaction.state.space - errno = transactionalstate.got_exception_errno - msg = os.strerror(errno) - w_type = space.w_IOError - w_error = space.call_function(w_type, space.wrap(errno), space.wrap(msg)) - raise OperationError(w_type, w_error) - - - at unwrap_spec(epoller=W_Epoll) -def add_epoll(space, epoller, w_callback): - state = interp_transaction.state - if epoller in state.epolls: - raise OperationError(state.w_error, - space.wrap("add_epoll(ep): ep is already registered")) - pending = EPollPending(space, epoller, w_callback) - state.epolls[epoller] = pending - pending.register() - - at unwrap_spec(epoller=W_Epoll) -def remove_epoll(space, epoller): - state = interp_transaction.state - pending = state.epolls.get(epoller, None) - if pending is None: - raise OperationError(state.w_error, - space.wrap("remove_epoll(ep): ep is not registered")) - pending.force_quit = True - del state.epolls[epoller] diff --git a/pypy/module/transaction/interp_local.py b/pypy/module/transaction/interp_local.py deleted file mode 100644 --- a/pypy/module/transaction/interp_local.py +++ /dev/null @@ -1,55 +0,0 @@ 
-from pypy.interpreter.baseobjspace import Wrappable -from pypy.interpreter.typedef import (TypeDef, interp2app, GetSetProperty, - descr_get_dict) -from pypy.module.transaction.interp_transaction import getstate - - -class W_Local(Wrappable): - """Thread-local data. Behaves like a regular object, but its content - is not shared between multiple concurrently-running transactions. - It can be accessed without conflicts. - - It can be used for purely transaction-local data. - - It can also be used for long-living caches that store values that - are (1) not too costly to compute and (2) not too memory-hungry, - because they will end up being computed and stored once per actual - thread. - """ - - def __init__(self, space): - self.state = getstate(space) - self.dicts = [] - self._update_dicts() - # unless we call transaction.set_num_threads() afterwards, this - # 'local' object is now initialized with the correct number of - # dictionaries, to avoid conflicts later if _update_dicts() is - # called in a transaction. 
- - def _update_dicts(self): - state = self.state - new = state.get_total_number_of_threads() - len(self.dicts) - if new <= 0: - return - # update the list without appending to it (to keep it non-resizable) - self.dicts = self.dicts + [state.space.newdict(instance=True) - for i in range(new)] - - def getdict(self, space): - n = self.state.get_thread_number() - try: - return self.dicts[n] - except IndexError: - self._update_dicts() - assert n < len(self.dicts) - return self.dicts[n] - -def descr_local__new__(space, w_subtype): - local = W_Local(space) - return space.wrap(local) - -W_Local.typedef = TypeDef("transaction.local", - __new__ = interp2app(descr_local__new__), - __dict__ = GetSetProperty(descr_get_dict, cls=W_Local), - ) -W_Local.typedef.acceptable_as_base_class = False diff --git a/pypy/module/transaction/interp_transaction.py b/pypy/module/transaction/interp_transaction.py deleted file mode 100644 --- a/pypy/module/transaction/interp_transaction.py +++ /dev/null @@ -1,176 +0,0 @@ -from pypy.interpreter.error import OperationError -from pypy.interpreter.gateway import unwrap_spec -from pypy.interpreter.executioncontext import ExecutionContext -from pypy.rlib import rstm - - -class State(object): - """The shared, global state. Warning, writes to it cause conflicts. 
- XXX fix me to somehow avoid conflicts at the beginning due to setvalue() - """ - - def __init__(self, space): - self.space = space - self.num_threads = rstm.NUM_THREADS_DEFAULT - self.running = False - self.w_error = None - self.threadobjs = {} # empty during translation - self.threadnums = {} # empty during translation - self.epolls = {} - self.pending_before_run = [] - - def startup(self, w_module): - if w_module is not None: # for tests - space = self.space - self.w_error = space.getattr(w_module, - space.wrap('TransactionError')) - main_ec = self.space.getexecutioncontext() # create it if needed - main_ec._transaction_pending = self.pending_before_run - - def add_thread(self, id, ec): - # register a new transaction thread - assert id not in self.threadobjs - ec._transaction_pending = [] - self.threadobjs[id] = ec - self.threadnums[id] = len(self.threadnums) - - # ---------- interface for ThreadLocals ---------- - # This works really like a thread-local, which may have slightly - # strange consequences in multiple transactions, because you don't - # know on which thread a transaction will run. The point of this is - # to let every thread get its own ExecutionContext; otherwise, they - # conflict with each other e.g. when setting the 'topframeref' - # attribute. 
- - def getvalue(self): - id = rstm.thread_id() - return self.threadobjs.get(id, None) - - def setvalue(self, value): - id = rstm.thread_id() - if id == rstm.MAIN_THREAD_ID: - assert len(self.threadobjs) == 0 - assert len(self.threadnums) == 0 - self.threadobjs[id] = value - self.threadnums[id] = 0 - else: - self.add_thread(id, value) - - def getmainthreadvalue(self): - return self.threadobjs.get(MAIN_THREAD_ID, None) - - def getallvalues(self): - return self.threadobjs - - def clear_all_values_apart_from_main(self): - for id in self.threadobjs.keys(): - if id != MAIN_THREAD_ID: - del self.threadobjs[id] - for id in self.threadnums.keys(): - if id != MAIN_THREAD_ID: - del self.threadnums[id] - self.epolls.clear() - - def get_thread_number(self): - id = rstm.thread_id() - return self.threadnums[id] - - def get_total_number_of_threads(self): - return 1 + self.num_threads - - def set_num_threads(self, num): - if self.running: - space = self.space - raise OperationError(self.w_error, - space.wrap("cannot change the number of " - "threads when transaction.run() " - "is active")) - self.num_threads = num - - -def getstate(space): - return space.fromcache(State) - - - at unwrap_spec(num=int) -def set_num_threads(space, num): - if num < 1: - num = 1 - getstate(space).set_num_threads(num) - - -class SpaceTransaction(rstm.Transaction): - - def __init__(self, space, w_callback, args): - self.space = space - self.state = getstate(space) - self.w_callback = w_callback - self.args = args - - def register(self): - """Register this SpaceTransaction instance in the pending list - belonging to the current thread. If called from the main - thread, it is the global list. If called from a transaction, - it is a thread-local list that will be merged with the global - list when the transaction is done. - NOTE: never register() the same instance multiple times. 
- """ - ec = self.state.getvalue() - assert ec is not None # must have been created first - ec._transaction_pending.append(self) - - def run(self): - #if self.retry_counter > 0: - # # retrying: will be done later, try others first - # return [self] # XXX does not work, will be retried immediately - # - ec = self.space.getexecutioncontext() # create it if needed - assert len(ec._transaction_pending) == 0 - # - if self.space.config.objspace.std.withmethodcache: - from pypy.objspace.std.typeobject import MethodCache - ec._methodcache = MethodCache(self.space) - # - self.space.call_args(self.w_callback, self.args) - # - if self.space.config.objspace.std.withmethodcache: - # remove the method cache again now, to prevent it from being - # promoted to a GLOBAL - ec._methodcache = None - # - result = ec._transaction_pending - ec._transaction_pending = [] - return result - - -class InitialTransaction(rstm.Transaction): - - def __init__(self, state): - self.state = state - - def run(self): - # initially: return the list of already-added transactions as - # the list of transactions to run next, and clear it - result = self.state.pending_before_run[:] - del self.state.pending_before_run[:] - return result - - -def add(space, w_callback, __args__): - transaction = SpaceTransaction(space, w_callback, __args__) - transaction.register() - - -def run(space): - state = getstate(space) - if state.running: - raise OperationError( - state.w_error, - space.wrap("recursive invocation of transaction.run()")) - state.running = True - try: - rstm.run_all_transactions(InitialTransaction(state), - num_threads = state.num_threads) - finally: - state.running = False - assert len(state.pending_before_run) == 0 diff --git a/pypy/module/transaction/test/__init__.py b/pypy/module/transaction/test/__init__.py deleted file mode 100644 diff --git a/pypy/module/transaction/test/test_epoll.py b/pypy/module/transaction/test/test_epoll.py deleted file mode 100644 --- 
a/pypy/module/transaction/test/test_epoll.py +++ /dev/null @@ -1,89 +0,0 @@ -import py -from pypy.conftest import gettestobjspace -py.test.skip("epoll support disabled for now") - - -class AppTestEpoll: - def setup_class(cls): - cls.space = gettestobjspace(usemodules=['transaction', 'select']) - - def test_non_transactional(self): - import select, posix as os - fd_read, fd_write = os.pipe() - epoller = select.epoll() - epoller.register(fd_read) - os.write(fd_write, 'x') - [(fd, events)] = epoller.poll() - assert fd == fd_read - assert events & select.EPOLLIN - got = os.read(fd_read, 1) - assert got == 'x' - - def test_simple(self): - import transaction, select, posix as os - - steps = [] - - fd_read, fd_write = os.pipe() - - epoller = select.epoll() - epoller.register(fd_read) - - def write_stuff(): - os.write(fd_write, 'x') - steps.append('write_stuff') - - class Done(Exception): - pass - - def callback(fd, events): - assert fd == fd_read - assert events & select.EPOLLIN - got = os.read(fd_read, 1) - assert got == 'x' - steps.append('callback') - raise Done - - transaction.add_epoll(epoller, callback) - transaction.add(write_stuff) - - assert steps == [] - raises(Done, transaction.run) - assert steps == ['write_stuff', 'callback'] - - def test_remove_closed_epoll(self): - import transaction, select, posix as os - - fd_read, fd_write = os.pipe() - - epoller = select.epoll() - epoller.register(fd_read) - - # we run it 10 times in order to get both possible orders in - # the emulator - for i in range(10): - transaction.add_epoll(epoller, lambda *args: not_actually_callable) - transaction.add(transaction.remove_epoll, epoller) - transaction.run() - # assert didn't deadlock - transaction.add(transaction.remove_epoll, epoller) - transaction.add_epoll(epoller, lambda *args: not_actually_callable) - transaction.run() - # assert didn't deadlock - - def test_errors(self): - import transaction, select - epoller = select.epoll() - callback = lambda *args: 
not_actually_callable - transaction.add_epoll(epoller, callback) - raises(transaction.TransactionError, - transaction.add_epoll, epoller, callback) - transaction.remove_epoll(epoller) - raises(transaction.TransactionError, - transaction.remove_epoll, epoller) - - -class AppTestEpollEmulator(AppTestEpoll): - def setup_class(cls): - # test for lib_pypy/transaction.py - cls.space = gettestobjspace(usemodules=['select']) diff --git a/pypy/module/transaction/test/test_interp_transaction.py b/pypy/module/transaction/test/test_interp_transaction.py deleted file mode 100644 --- a/pypy/module/transaction/test/test_interp_transaction.py +++ /dev/null @@ -1,91 +0,0 @@ -import time -from pypy.module.transaction import interp_transaction - - -class FakeSpace: - class config: - class objspace: - class std: - withmethodcache = False - def __init__(self): - self._spacecache = {} - def getexecutioncontext(self): - state = interp_transaction.getstate(self) - ec = state.getvalue() - if ec is None: - ec = FakeEC() - state.setvalue(ec) - return ec - def call_args(self, w_callback, args): - w_callback(*args) - def fromcache(self, Cls): - if Cls not in self._spacecache: - self._spacecache[Cls] = Cls(self) - return self._spacecache[Cls] - -class FakeEC: - pass - -def make_fake_space(): - space = FakeSpace() - interp_transaction.getstate(space).startup(None) - return space - - -def test_linear_list(): - space = make_fake_space() - seen = [] - # - def do(n): - seen.append(n) - if n < 200: - interp_transaction.add(space, do, (n+1,)) - # - interp_transaction.add(space, do, (0,)) - assert seen == [] - interp_transaction.run(space) - assert seen == range(201) - - -def test_tree_of_transactions(): - space = make_fake_space() - seen = [] - # - def do(level): - seen.append(level) - if level < 11: - interp_transaction.add(space, do, (level+1,)) - interp_transaction.add(space, do, (level+1,)) - # - interp_transaction.add(space, do, (0,)) - assert seen == [] - interp_transaction.run(space) - for i in 
range(12): - assert seen.count(i) == 2 ** i - assert len(seen) == 2 ** 12 - 1 - - -def test_transactional_simple(): - space = make_fake_space() - lst = [] - def f(n): - lst.append(n+0) - lst.append(n+1) - time.sleep(0.05) - lst.append(n+2) - lst.append(n+3) - lst.append(n+4) - time.sleep(0.25) - lst.append(n+5) - lst.append(n+6) - interp_transaction.add(space, f, (10,)) - interp_transaction.add(space, f, (20,)) - interp_transaction.add(space, f, (30,)) - interp_transaction.run(space) - assert len(lst) == 7 * 3 - seen = set() - for start in range(0, 21, 7): - seen.add(lst[start]) - for index in range(7): - assert lst[start + index] == lst[start] + index - assert seen == set([10, 20, 30]) diff --git a/pypy/module/transaction/test/test_local.py b/pypy/module/transaction/test/test_local.py deleted file mode 100644 --- a/pypy/module/transaction/test/test_local.py +++ /dev/null @@ -1,75 +0,0 @@ -import py -from pypy.conftest import gettestobjspace - - -class AppTestLocal: - def setup_class(cls): - cls.space = gettestobjspace(usemodules=['transaction']) - - def test_simple(self): - import transaction - x = transaction.local() - x.foo = 42 - assert x.foo == 42 - assert hasattr(x, 'foo') - assert not hasattr(x, 'bar') - assert getattr(x, 'foo', 84) == 42 - assert getattr(x, 'bar', 84) == 84 - - def test_transaction_local(self): - import transaction - transaction.set_num_threads(2) - x = transaction.local() - all_lists = [] - - def f(n): - if not hasattr(x, 'lst'): - x.lst = [] - all_lists.append(x.lst) - x.lst.append(n) - if n > 0: - transaction.add(f, n - 1) - transaction.add(f, n - 1) - transaction.add(f, 5) - transaction.run() - - assert not hasattr(x, 'lst') - assert len(all_lists) == 2 - total = all_lists[0] + all_lists[1] - assert total.count(5) == 1 - assert total.count(4) == 2 - assert total.count(3) == 4 - assert total.count(2) == 8 - assert total.count(1) == 16 - assert total.count(0) == 32 - assert len(total) == 63 - - def test_transaction_local_growing(self): - 
import transaction - transaction.set_num_threads(1) - x = transaction.local() - all_lists = [] - - def f(n): - if not hasattr(x, 'lst'): - x.lst = [] - all_lists.append(x.lst) - x.lst.append(n) - if n > 0: - transaction.add(f, n - 1) - transaction.add(f, n - 1) - transaction.add(f, 5) - - transaction.set_num_threads(2) # more than 1 specified above - transaction.run() - - assert not hasattr(x, 'lst') - assert len(all_lists) == 2 - total = all_lists[0] + all_lists[1] - assert total.count(5) == 1 - assert total.count(4) == 2 - assert total.count(3) == 4 - assert total.count(2) == 8 - assert total.count(1) == 16 - assert total.count(0) == 32 - assert len(total) == 63 diff --git a/pypy/module/transaction/test/test_transaction.py b/pypy/module/transaction/test/test_transaction.py deleted file mode 100644 --- a/pypy/module/transaction/test/test_transaction.py +++ /dev/null @@ -1,82 +0,0 @@ -import py -from pypy.conftest import gettestobjspace - - -class AppTestTransaction: - def setup_class(cls): - cls.space = gettestobjspace(usemodules=['transaction']) - - def test_set_num_threads(self): - import transaction - transaction.set_num_threads(4) - - def test_simple(self): - import transaction - lst = [] - transaction.add(lst.append, 5) - transaction.add(lst.append, 6) - transaction.add(lst.append, 7) - transaction.run() - assert sorted(lst) == [5, 6, 7] - - def test_almost_as_simple(self): - import transaction - lst = [] - def f(n): - lst.append(n+0) - lst.append(n+1) - lst.append(n+2) - lst.append(n+3) - lst.append(n+4) - lst.append(n+5) - lst.append(n+6) - transaction.add(f, 10) - transaction.add(f, 20) - transaction.add(f, 30) - transaction.run() - assert len(lst) == 7 * 3 - seen = set() - for start in range(0, 21, 7): - seen.add(lst[start]) - for index in range(7): - assert lst[start + index] == lst[start] + index - assert seen == set([10, 20, 30]) - - def test_propagate_exception(self): - import transaction, time - lst = [] - def f(n): - lst.append(n) - time.sleep(0.5) 
- raise ValueError(n) - transaction.add(f, 10) - transaction.add(f, 20) - transaction.add(f, 30) - try: - transaction.run() - assert 0, "should have raised ValueError" - except ValueError, e: - pass - assert len(lst) == 1 - assert lst[0] == e.args[0] - - def test_clear_pending_transactions(self): - import transaction - class Foo(Exception): - pass - def raiseme(): - raise Foo - for i in range(20): - transaction.add(raiseme) - try: - transaction.run() - assert 0, "should have raised Foo" - except Foo: - pass - transaction.run() # all the other 'raiseme's should have been cleared - - -class AppTestTransactionEmulator(AppTestTransaction): - def setup_class(cls): - # test for lib_pypy/transaction.py - cls.space = gettestobjspace(usemodules=[]) diff --git a/pypy/module/transaction/test/test_ztranslation.py b/pypy/module/transaction/test/test_ztranslation.py deleted file mode 100644 --- a/pypy/module/transaction/test/test_ztranslation.py +++ /dev/null @@ -1,4 +0,0 @@ -from pypy.objspace.fake.checkmodule import checkmodule - -def test_checkmodule(): - checkmodule('transaction') From noreply at buildbot.pypy.org Thu May 3 10:42:17 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 May 2012 10:42:17 +0200 (CEST) Subject: [pypy-commit] pypy stm-thread: This also makes no sense any more. Message-ID: <20120503084217.E9FCB82009@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-thread Changeset: r54882:38e65854dff3 Date: 2012-05-02 19:13 +0200 http://bitbucket.org/pypy/pypy/changeset/38e65854dff3/ Log: This also makes no sense any more. 
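The deleted app-level tests above exercise the `transaction` module's public surface: `transaction.add(f, *args)` queues a call, `transaction.run()` drains the queue (callbacks may queue more work), and an exception aborts whatever is still pending. A minimal sequential sketch of that contract (illustrative only; the real `lib_pypy/transaction.py` emulation and the STM-backed module differ in detail):

```python
# Sequential sketch of the transaction.add()/run() contract exercised by
# the deleted tests above.  Illustrative only: no threads and no STM --
# pending calls simply run one after another, and an exception discards
# whatever is still pending (as test_clear_pending_transactions expects).
class TransactionQueue:
    def __init__(self):
        self._pending = []

    def add(self, f, *args):
        """Schedule f(*args) for the next run()."""
        self._pending.append((f, args))

    def run(self):
        """Run pending calls (which may add() more) until none are left."""
        try:
            while self._pending:
                f, args = self._pending.pop(0)
                f(*args)
        except BaseException:
            del self._pending[:]  # a failing call aborts the rest
            raise

q = TransactionQueue()
lst = []
for n in (5, 6, 7):
    q.add(lst.append, n)
q.run()
print(sorted(lst))  # [5, 6, 7]
```

With real threads the execution order is unspecified, which is why the deleted tests only assert on `sorted(lst)` and on per-transaction contiguity rather than on a global order.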
diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py deleted file mode 100644 --- a/pypy/rlib/rstm.py +++ /dev/null @@ -1,265 +0,0 @@ -from pypy.rpython.lltypesystem import lltype, llmemory, rffi -from pypy.rpython.lltypesystem.lloperation import llop -from pypy.rpython.annlowlevel import llhelper, cast_instance_to_base_ptr -from pypy.rpython.annlowlevel import base_ptr_lltype, cast_base_ptr_to_instance -from pypy.rlib.objectmodel import keepalive_until_here, we_are_translated -from pypy.rlib.objectmodel import specialize -from pypy.rlib.debug import ll_assert -from pypy.rlib.nonconst import NonConstant -from pypy.translator.stm.stmgcintf import StmOperations - -from pypy.rlib.rgc import stm_is_enabled # re-exported here - - -NUM_THREADS_DEFAULT = 4 # XXX for now - -MAIN_THREAD_ID = 0 - - -class TransactionError(Exception): - pass - -class Transaction(object): - _next_transaction = None - _scheduled = False # for debugging - - def run(self): - # subclasses can access 'self.retry_counter' here - raise NotImplementedError - - -def stm_operations(): - if we_are_translated(): - return StmOperations - else: - from pypy.rlib.test.test_rstm import fake_stm_operations - return fake_stm_operations - - -def in_transaction(): - return bool(stm_operations().in_transaction()) - -def thread_id(): - return stm_operations().thread_id() - - -def run_all_transactions(initial_transaction, - num_threads = NUM_THREADS_DEFAULT): - if in_transaction(): - raise TransactionError("nested call to rstm.run_all_transactions()") - # - _transactionalstate.initialize() - # - # Tell the GC we are entering transactional mode. This makes - # sure that 'initial_transaction' is flagged as GLOBAL. - # (Actually it flags all surviving objects as GLOBAL.) - # No more GC operation afterwards! - llop.stm_enter_transactional_mode(lltype.Void) - # - # Keep alive 'initial_transaction'. In truth we would like it to - # survive a little bit longer, for the beginning of the C code in - # run_all_transactions(). 
This should be equivalent because there - # is no possibility of having a GC collection inbetween. - keepalive_until_here(initial_transaction) - # - # The following line causes the _stm_run_transaction() function to be - # generated in the C source with a specific signature, where it - # can be called by the C code. - llop.nop(lltype.Void, llhelper(StmOperations.RUN_TRANSACTION, - _stm_run_transaction)) - # - # The same about the _stm_thread_starting() and _stm_thread_stopping() - llop.nop(lltype.Void, llhelper(StmOperations.INIT_DONE, - _stm_thread_starting)) - llop.nop(lltype.Void, llhelper(StmOperations.INIT_DONE, - _stm_thread_stopping)) - # - # Tell the C code to run all transactions. - ptr = _cast_transaction_to_voidp(initial_transaction) - stm_operations().run_all_transactions(ptr, num_threads) - # - # Tell the GC we are leaving transactional mode. - llop.stm_leave_transactional_mode(lltype.Void) - # - # Hack - if not we_are_translated(): - stm_operations().leaving() - # - # If an exception was raised, re-raise it here. - _transactionalstate.close_exceptions() - - -def _cast_transaction_to_voidp(transaction): - if we_are_translated(): - ptr = cast_instance_to_base_ptr(transaction) - return rffi.cast(rffi.VOIDP, ptr) - else: - return stm_operations().cast_transaction_to_voidp(transaction) - -def _cast_voidp_to_transaction(transactionptr): - if we_are_translated(): - ptr = rffi.cast(base_ptr_lltype(), transactionptr) - return cast_base_ptr_to_instance(Transaction, ptr) - else: - return stm_operations().cast_voidp_to_transaction(transactionptr) - - -class _TransactionalState(object): - """This is the class of a global singleton, seen by every transaction. - Used for cross-transaction synchronization. Of course writing to it - will likely cause conflicts. Reserved for now for storing the - exception that must be re-raised by run_all_transactions(). 
- """ - # The logic ensures that once a transaction calls must_reraise_exception() - # and commits, all uncommitted transactions will abort (because they have - # read '_reraise_exception' when they started) and then, when they retry, - # do nothing. This makes the transaction committing an exception the last - # one to commit, and it cleanly shuts down all other pending transactions. - - def initialize(self): - self._reraise_exception = None - - def has_exception(self): - return self._reraise_exception is not None - - def must_reraise_exception(self, got_exception): - self._got_exception = got_exception - self._reraise_exception = self.reraise_exception_callback - if not we_are_translated(): - import sys; self._got_tb = sys.exc_info()[2] - - def close_exceptions(self): - if self._reraise_exception is not None: - self._reraise_exception() - - @staticmethod - def reraise_exception_callback(): - self = _transactionalstate - exc = self._got_exception - self._got_exception = None - if not we_are_translated() and hasattr(self, '_got_tb'): - raise exc.__class__, exc, self._got_tb - raise exc - -_transactionalstate = _TransactionalState() - - -def _stm_run_transaction(transactionptr, retry_counter): - # - # Tell the GC we are starting a transaction - # (at this point, we have no stack root at all; the following - # call will clear any remaining garbage from the shadowstack, - # in case of an aborted transaction) - llop.stm_start_transaction(lltype.Void) - # - # Now we can use the GC - next = None - try: - if _transactionalstate.has_exception(): - # a previously committed transaction raised: don't do anything - # more in this transaction - pass - else: - # run! - next = _run_really(transactionptr, retry_counter) - # - except Exception, e: - _transactionalstate.must_reraise_exception(e) - # - # Stop using the GC. This will make 'next' and all transactions linked - # from there GLOBAL objects. 
- llop.stm_stop_transaction(lltype.Void) - # - # Mark 'next' as kept-alive-until-here. In truth we would like to - # keep it alive after the return, for the C code. This should be - # equivalent because there is no possibility of having a GC collection - # inbetween. - keepalive_until_here(next) - return _cast_transaction_to_voidp(next) - - -def _run_really(transactionptr, retry_counter): - # Call the RPython method run() on the Transaction instance. - # This logic is in a sub-function because we want to catch - # the MemoryErrors that could occur. - transaction = _cast_voidp_to_transaction(transactionptr) - # - # Sanity-check that C code cleared '_next_transaction' first. - # Also needs a bit of nonsensical code to make sure that - # '_next_transaction' is always created as a general field. - ll_assert(transaction._next_transaction is None, - "transaction._next_transaction should be cleared by the C code") - if NonConstant(False): - transaction._next_transaction = transaction - # - transaction.retry_counter = retry_counter - transaction._scheduled = False - new_transactions = transaction.run() - return _link_new_transactions(new_transactions) -_run_really._dont_inline_ = True - -def _link_new_transactions(new_transactions): - # in order to schedule the new transactions, we have to return a - # raw pointer to the first one, with their field '_next_transaction' - # making a linked list. The C code reads directly from this - # field '_next_transaction'. 
- if new_transactions is None: - return None - n = len(new_transactions) - 1 - next = None - while n >= 0: - new_transactions[n]._next_transaction = next - next = new_transactions[n] - # - ll_assert(not next._scheduled, - "the same Transaction instance is scheduled more than once") - next._scheduled = True - # - n -= 1 - return next - - -def _stm_thread_starting(): - llop.stm_thread_starting(lltype.Void) - -def _stm_thread_stopping(): - llop.stm_thread_stopping(lltype.Void) - - -class DISABLEDThreadLocal(object): - """ - A thread-local container. Use only for one or a few static places, - e.g. the ExecutionContext in the PyPy interpreter; and store any - number of stuff on the ExecutionContext instead. The point of this - is to have proper GC support: if at the end of a transaction some - objects are only reachable via a ThreadLocal object, then these - objects don't need to be turned GLOBAL. It avoids the overhead of - STM, notably the two copies that are needed for every transaction - that changes a GLOBAL object. - """ - STMTHREADLOCAL = lltype.Struct('StmThreadLocal', - ('content', base_ptr_lltype()), - hints={'stm_thread_local': True}) - - def __init__(self, Class): - """NOT_RPYTHON: You can only have a small number of ThreadLocal() - instances built during translation.""" - self.Class = Class - self.threadlocal = lltype.malloc(self.STMTHREADLOCAL, immortal=True) - - def _freeze_(self): - return True # but the thread-local value can be read and written - - @specialize.arg(0) - def getvalue(self): - """Read the thread-local value. 
- It can be either None (the default) or an instance of self.Class.""" - ptr = self.threadlocal.content - return cast_base_ptr_to_instance(self.Class, ptr) - - @specialize.arg(0) - def setvalue(self, value): - """Write the thread-local value.""" - assert value is None or isinstance(value, self.Class) - ptr = cast_instance_to_base_ptr(value) - self.threadlocal.content = ptr diff --git a/pypy/rlib/test/test_rstm.py b/pypy/rlib/test/test_rstm.py deleted file mode 100644 --- a/pypy/rlib/test/test_rstm.py +++ /dev/null @@ -1,140 +0,0 @@ -import random -import py -from pypy.rpython.lltypesystem import lltype, rffi -from pypy.rlib import rstm - - -class FakeStmOperations: - _in_transaction = 0 - _thread_id = 0 - _mapping = {} - - def in_transaction(self): - return self._in_transaction - - def thread_id(self): - return self._thread_id - - def _add(self, transactionptr): - r = random.random() - assert r not in self._pending # very bad luck if it is - self._pending[r] = transactionptr - - def run_all_transactions(self, initial_transaction_ptr, num_threads=4): - self._pending = {} - self._add(initial_transaction_ptr) - thread_ids = [-10000 - 123 * i for i in range(num_threads)] # random - self._in_transaction = True - try: - while self._pending: - self._thread_id = thread_ids.pop(0) - thread_ids.append(self._thread_id) - r, transactionptr = self._pending.popitem() - transaction = self.cast_voidp_to_transaction(transactionptr) - transaction._next_transaction = None - nextptr = rstm._stm_run_transaction(transactionptr, 0) - next = self.cast_voidp_to_transaction(nextptr) - while next is not None: - self._add(self.cast_transaction_to_voidp(next)) - next = next._next_transaction - finally: - self._in_transaction = False - self._thread_id = 0 - del self._pending - - def cast_transaction_to_voidp(self, transaction): - if transaction is None: - return lltype.nullptr(rffi.VOIDP.TO) - assert isinstance(transaction, rstm.Transaction) - num = 10000 + len(self._mapping) - 
self._mapping[num] = transaction - return rffi.cast(rffi.VOIDP, num) - - def cast_voidp_to_transaction(self, transactionptr): - if not transactionptr: - return None - num = rffi.cast(lltype.Signed, transactionptr) - return self._mapping[num] - - def leaving(self): - self._mapping.clear() - -fake_stm_operations = FakeStmOperations() - - -def test_in_transaction(): - res = rstm.in_transaction() - assert res is False - res = rstm.thread_id() - assert res == 0 - -def test_run_all_transactions_minimal(): - seen = [] - class Empty(rstm.Transaction): - def run(self): - res = rstm.in_transaction() - seen.append(res is True) - res = rstm.thread_id() - seen.append(res != 0) - rstm.run_all_transactions(Empty()) - assert seen == [True, True] - -def test_run_all_transactions_recursive(): - seen = [] - class DoInOrder(rstm.Transaction): - def run(self): - assert self._next_transaction is None - if len(seen) < 10: - seen.append(len(seen)) - return [self] - rstm.run_all_transactions(DoInOrder()) - assert seen == range(10) - -def test_run_all_transactions_random_order(): - seen = [] - class AddToSeen(rstm.Transaction): - def run(self): - seen.append(self.value) - class DoInOrder(rstm.Transaction): - count = 0 - def run(self): - assert self._next_transaction is None - if self.count < 50: - other = AddToSeen() - other.value = self.count - self.count += 1 - return [self, other] - rstm.run_all_transactions(DoInOrder()) - assert seen != range(50) and sorted(seen) == range(50) - -def test_raise(): - class MyException(Exception): - pass - class FooBar(rstm.Transaction): - def run(self): - raise MyException - class DoInOrder(rstm.Transaction): - def run(self): - return [FooBar() for i in range(10)] - py.test.raises(MyException, rstm.run_all_transactions, DoInOrder()) - -def test_threadlocal(): - py.test.skip("disabled") - # not testing the thread-local factor, but only the general interface - class Point: - def __init__(self, x, y): - self.x = x - self.y = y - p1 = Point(10, 2) - p2 = 
Point(-1, 0) - tl = rstm.ThreadLocal(Point) - assert tl.getvalue() is None - tl.setvalue(p1) - assert tl.getvalue() is p1 - tl.setvalue(p2) - assert tl.getvalue() is p2 - tl.setvalue(None) - assert tl.getvalue() is None - -def test_stm_is_enabled(): - assert rstm.stm_is_enabled() is None # not translated From noreply at buildbot.pypy.org Thu May 3 10:42:19 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 May 2012 10:42:19 +0200 (CEST) Subject: [pypy-commit] pypy default: Kill a method not used any more, and fix the comment. Message-ID: <20120503084219.3B5F682009@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r54883:a8bbe642cb67 Date: 2012-05-03 10:41 +0200 http://bitbucket.org/pypy/pypy/changeset/a8bbe642cb67/ Log: Kill a method not used any more, and fix the comment. diff --git a/pypy/module/thread/gil.py b/pypy/module/thread/gil.py --- a/pypy/module/thread/gil.py +++ b/pypy/module/thread/gil.py @@ -5,7 +5,7 @@ # This module adds a global lock to an object space. # If multiple threads try to execute simultaneously in this space, # all but one will be blocked. The other threads get a chance to run -# from time to time, using the hook yield_thread(). +# from time to time, using the periodic action GILReleaseAction. from pypy.module.thread import ll_thread as thread from pypy.module.thread.error import wrap_thread_error @@ -51,8 +51,6 @@ self.gil_ready = False self.setup_threads(space) - def yield_thread(self): - do_yield_thread() class GILReleaseAction(PeriodicAsyncAction): """An action called every sys.checkinterval bytecodes. 
It releases diff --git a/pypy/module/thread/test/test_gil.py b/pypy/module/thread/test/test_gil.py --- a/pypy/module/thread/test/test_gil.py +++ b/pypy/module/thread/test/test_gil.py @@ -55,7 +55,7 @@ assert state.datalen3 == len(state.data) assert state.datalen4 == len(state.data) debug_print(main, i, state.datalen4) - state.threadlocals.yield_thread() + gil.do_yield_thread() assert i == j j += 1 def bootstrap(): From noreply at buildbot.pypy.org Thu May 3 10:53:38 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 3 May 2012 10:53:38 +0200 (CEST) Subject: [pypy-commit] pyrepl default: kill unused c imports and pep8 cleanup of reader Message-ID: <20120503085338.EF23982009@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r177:92c4f16f275e Date: 2012-05-03 10:45 +0200 http://bitbucket.org/pypy/pyrepl/changeset/92c4f16f275e/ Log: kill unused c imports and pep8 cleanup of reader diff --git a/pyrepl/reader.py b/pyrepl/reader.py --- a/pyrepl/reader.py +++ b/pyrepl/reader.py @@ -30,6 +30,7 @@ unichr = chr basestring = bytes, str + def _make_unctrl_map(): uc_map = {} for c in map(unichr, range(256)): @@ -38,54 +39,47 @@ for i in range(32): c = unichr(i) uc_map[c] = '^' + unichr(ord('A') + i - 1) - uc_map[b'\t'] = ' ' # display TABs as 4 characters + uc_map[b'\t'] = ' ' # display TABs as 4 characters uc_map[b'\177'] = unicode('^?') for i in range(256): c = unichr(i) if c not in uc_map: - uc_map[c] = unicode('\\%03o')%i + uc_map[c] = unicode('\\%03o') % i return uc_map -# disp_str proved to be a bottleneck for large inputs, so it's been -# rewritten in C; it's not required though. 
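The `_make_unctrl_map()` helper above builds the table used to render control characters in caret notation (`^A`, `^B`, ...), TABs as four spaces, and DEL as `^?`. A self-contained sketch of the same mapping rule and of the `(string, [int])` shape documented for `disp_str` (hypothetical stand-alone code, not pyrepl's actual API):

```python
# Stand-alone sketch of the display mapping built by _make_unctrl_map()
# above: control characters in caret notation, TAB as four spaces,
# DEL (0x7f) as '^?'.  Other characters pass through unchanged.
def unctrl(ch):
    code = ord(ch)
    if ch == '\t':
        return ' ' * 4
    if code == 0x7f:
        return '^?'
    if 0 < code < 32:
        return '^' + chr(ord('A') + code - 1)
    return ch

def display(buffer):
    """Return (printable_string, per_cell_flags), like pyrepl's disp_str:
    a 1 marks the first display cell of a source character, 0 the rest."""
    parts = [unctrl(ch) for ch in buffer]
    flags = []
    for p in parts:
        flags.append(1)
        flags.extend([0] * (len(p) - 1))
    return ''.join(parts), flags

print(display(chr(3)))  # ('^C', [1, 0])
```

The flags list is what lets the cursor-positioning code map a buffer offset back to a screen column even when one character occupies several cells.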
-try:
-    raise ImportError # currently it's borked by the unicode support
-    from _pyrepl_utils import disp_str, init_unctrl_map
+def _my_unctrl(c, u=_make_unctrl_map()):
+    if c in u:
+        return u[c]
+    else:
+        if unicodedata.category(c).startswith('C'):
+            return b'\u%04x' % ord(c)
+        else:
+            return c
 
-    init_unctrl_map(_make_unctrl_map())
-    del init_unctrl_map
-except ImportError:
-    def _my_unctrl(c, u=_make_unctrl_map()):
-        if c in u:
-            return u[c]
-        else:
-            if unicodedata.category(c).startswith('C'):
-                return b'\u%04x'%(ord(c))
-            else:
-                return c
 
+def disp_str(buffer, join=''.join, uc=_my_unctrl):
+    """ disp_str(buffer:string) -> (string, [int])
 
-    def disp_str(buffer, join=''.join, uc=_my_unctrl):
-        """ disp_str(buffer:string) -> (string, [int])
+    Return the string that should be the printed represenation of
+    |buffer| and a list detailing where the characters of |buffer|
+    get used up.  E.g.:
 
-        Return the string that should be the printed represenation of
-        |buffer| and a list detailing where the characters of |buffer|
-        get used up.  E.g.:
+    >>> disp_str(chr(3))
+    ('^C', [1, 0])
 
-        >>> disp_str(chr(3))
-        ('^C', [1, 0])
+    the list always contains 0s or 1s at present; it could conceivably
+    go higher as and when unicode support happens."""
+    # disp_str proved to be a bottleneck for large inputs,
+    # so it needs to be rewritten in C; it's not required though.
+    s = [uc(x) for x in buffer]
+    b = []  # XXX: bytearray
+    for x in s:
+        b.append(1)
+        b.extend([0] * (len(x) - 1))
+    return join(s), b
 
-        the list always contains 0s or 1s at present; it could conceivably
-        go higher as and when unicode support happens."""
-        s = [uc(x) for x in buffer]
-        b = [] #XXX: bytearray
-        for x in s:
-            b.append(1)
-            b.extend([0]*(len(x)-1))
-        return join(s), b
-
-    del _my_unctrl
+del _my_unctrl
 del _make_unctrl_map
 
@@ -95,6 +89,7 @@
   SYNTAX_WORD,
   SYNTAX_SYMBOL] = range(3)
 
+
 def make_default_syntax_table():
     # XXX perhaps should use some unicodedata here?
st = {} @@ -164,13 +159,14 @@ (r'\', 'end-of-line'), # was 'end' (r'\', 'beginning-of-line'), # was 'home' (r'\', 'help'), - (r'\EOF', 'end'), # the entries in the terminfo database for xterms - (r'\EOH', 'home'), # seem to be wrong. this is a less than ideal - # workaround + (r'\EOF', 'end'), # the entries in the terminfo database for xterms + (r'\EOH', 'home'), # seem to be wrong. this is a less than ideal + # workaround ]) if 'c' in globals(): # only on python 2.x - del c # from the listcomps + del c # from the listcomps + class Reader(object): """The Reader class implements the bare bones of a command reader, @@ -248,9 +244,9 @@ self.commands = {} self.msg = '' for v in vars(commands).values(): - if ( isinstance(v, type) - and issubclass(v, commands.Command) - and v.__name__[0].islower() ): + if (isinstance(v, type) + and issubclass(v, commands.Command) + and v.__name__[0].islower()): self.commands[v.__name__] = v self.commands[v.__name__.replace('_', '-')] = v self.syntax_table = make_default_syntax_table() @@ -294,15 +290,15 @@ wrapcount = (len(l) + lp) // w if wrapcount == 0: screen.append(prompt + l) - screeninfo.append((lp, l2+[1])) + screeninfo.append((lp, l2 + [1])) else: - screen.append(prompt + l[:w-lp] + "\\") - screeninfo.append((lp, l2[:w-lp])) - for i in range(-lp + w, -lp + wrapcount*w, w): - screen.append(l[i:i+w] + "\\") + screen.append(prompt + l[:w - lp] + "\\") + screeninfo.append((lp, l2[:w - lp])) + for i in range(-lp + w, -lp + wrapcount * w, w): + screen.append(l[i:i + w] + "\\") screeninfo.append((0, l2[i:i + w])) - screen.append(l[wrapcount*w - lp:]) - screeninfo.append((0, l2[wrapcount*w - lp:]+[1])) + screen.append(l[wrapcount * w - lp:]) + screeninfo.append((0, l2[wrapcount * w - lp:] + [1])) self.screeninfo = screeninfo self.cxy = self.pos2xy(self.pos) if self.msg and self.msg_at_bottom: @@ -330,9 +326,9 @@ if e == -1: break # Found start and end brackets, subtract from string length - l = l - (e-s+1) - out_prompt += prompt[pos:s] + 
prompt[s+1:e] - pos = e+1 + l = l - (e - s + 1) + out_prompt += prompt[pos:s] + prompt[s + 1:e] + pos = e + 1 out_prompt += prompt[pos:] return out_prompt, l @@ -408,7 +404,7 @@ """Return what should be in the left-hand margin for line `lineno'.""" if self.arg is not None and cursor_on_line: - return "(arg: %s) "%self.arg + return "(arg: %s) " % self.arg if "\n" in self.buffer: if lineno == 0: res = self.ps2 @@ -521,17 +517,18 @@ # this call sets up self.cxy, so call it first. screen = self.calc_screen() self.console.refresh(screen, self.cxy) - self.dirty = 0 # forgot this for a while (blush) + self.dirty = 0 # forgot this for a while (blush) def do_cmd(self, cmd): #print cmd - if isinstance(cmd[0], basestring): #XXX: unify to text + if isinstance(cmd[0], basestring): + #XXX: unify to text cmd = self.commands.get(cmd[0], commands.invalid_command)(self, *cmd) elif isinstance(cmd[0], type): cmd = cmd[0](self, cmd) else: - return # nothing to do + return # nothing to do cmd.do() @@ -561,7 +558,7 @@ while 1: event = self.console.get_event(block) - if not event: # can only happen if we're not blocking + if not event: # can only happen if we're not blocking return None translate = True @@ -626,6 +623,7 @@ """Return the current buffer as a unicode string.""" return unicode('').join(self.buffer) + def test(): from pyrepl.unix_console import UnixConsole reader = Reader(UnixConsole()) @@ -636,5 +634,5 @@ while reader.readline(): pass -if __name__=='__main__': +if __name__ == '__main__': test() From noreply at buildbot.pypy.org Thu May 3 10:53:40 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 3 May 2012 10:53:40 +0200 (CEST) Subject: [pypy-commit] pyrepl default: flip code in run_multiline_interactive_console to be pyflaks safe Message-ID: <20120503085340.1031882009@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r178:c99e02700ab4 Date: 2012-05-03 10:46 +0200 http://bitbucket.org/pypy/pyrepl/changeset/c99e02700ab4/ Log: flip 
code in run_multiline_interactive_console to be pyflaks safe

diff --git a/pyrepl/simple_interact.py b/pyrepl/simple_interact.py
--- a/pyrepl/simple_interact.py
+++ b/pyrepl/simple_interact.py
@@ -35,8 +35,8 @@
 def run_multiline_interactive_console(mainmodule=None):
     import code
-    if mainmodule is None:
-        import __main__ as mainmodule
+    import __main__
+    mainmodule = mainmodule or __main__
     console = code.InteractiveConsole(mainmodule.__dict__)
 
     def more_lines(unicodetext):

From noreply at buildbot.pypy.org  Thu May  3 10:53:41 2012
From: noreply at buildbot.pypy.org (RonnyPfannschmidt)
Date: Thu, 3 May 2012 10:53:41 +0200 (CEST)
Subject: [pypy-commit] pyrepl default: kill unused imports in python_reader
Message-ID: <20120503085341.608CA82009@wyvern.cs.uni-duesseldorf.de>

Author: Ronny Pfannschmidt
Branch:
Changeset: r179:6038a4d2487a
Date: 2012-05-03 10:53 +0200
http://bitbucket.org/pypy/pyrepl/changeset/6038a4d2487a/

Log: kill unused imports in python_reader

diff --git a/pyrepl/python_reader.py b/pyrepl/python_reader.py
--- a/pyrepl/python_reader.py
+++ b/pyrepl/python_reader.py
@@ -25,14 +25,10 @@
 from pyrepl.completing_reader import CompletingReader
 from pyrepl.historical_reader import HistoricalReader
 from pyrepl import completing_reader, reader
-from pyrepl import copy_code, commands, completer
+from pyrepl import commands, completer
 from pyrepl import module_lister
 import imp, sys, os, re, code, traceback
 import atexit, warnings
-try:
-    import cPickle as pickle
-except ImportError:
-    import pickle
 try:
     unicode
@@ -40,10 +36,7 @@
     unicode = str
 
 try:
-    import imp
     imp.find_module("twisted")
-    from twisted.internet import reactor
-    from twisted.internet.abstract import FileDescriptor
 except ImportError:
     default_interactmethod = "interact"
 else:
Message-ID: <20120503090344.B205A82009@wyvern.cs.uni-duesseldorf.de>

Author: Ronny Pfannschmidt
Branch:
Changeset: r180:99bdd1752f1f
Date: 2012-05-03 11:03 +0200
http://bitbucket.org/pypy/pyrepl/changeset/99bdd1752f1f/

Log: pyrepl.html to rst conversion

diff --git a/pyrepl.html b/pyrepl.rst
rename from pyrepl.html
rename to pyrepl.rst
--- a/pyrepl.html
+++ b/pyrepl.rst
@@ -1,191 +1,84 @@
-~mwh/hacks/pyrepl

-<home> -<personal> -<hacks> -<links> -<quotes> -<details> -<summaries> -

-

-~mwh/hacks: -<bytecodehacks> -<xmms-py> -<pyicqlib> -<pyrepl> -

+pyrepl +====== -
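The hunk above replaces the HTML banner with a reStructuredText title (`pyrepl` underlined with `=`). The RST convention is that the underline must be at least as long as the title text; a tiny hypothetical helper (not part of the commit) showing the shape:

```python
# Hypothetical helper illustrating the RST title convention used above:
# a section title is followed by a run of one punctuation character at
# least as long as the title itself.
def rst_title(text, char='='):
    return text + '\n' + char * len(text)

print(rst_title('pyrepl'))
# pyrepl
# ======
```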

pyrepl

- -

For ages now, I've been working on and off on a replacement for readline for use from Python. readline is undoubtedly great, but a couple of things irritate me about it. One is the inability to do -sane multi-line editing. Have you ever typed something like: -

-
->>> for i in range(10):
-...     for i in range(10):
-...         print i*j
-
-

+sane multi-line editing. Have you ever typed something like:: + + >>> for i in range(10): + ... for i in range(10): + ... print i*j + into a Python top-level? Grr, that "i" on the second line should have been a "j". Wouldn't it be nice if you could just press "up" on your keyboard and fix it? This was one of the aims I kept in mind when -writing pyrepl (or pyrl as I used to call it, but that name's taken). -

-

+writing pyrepl (or pyrl as I used to call it, but that name's +`taken `_). + Another irritation of readline is the GPL. I'm not even nearly as anti-GPL as some, but I don't want to have to GPL my program just so I can use readline. -

-

+ 0.7 adds to the version that runs an a terminal an experimental version that runs in a pygame surface. A long term goal is Mathematica-like notebooks, but that's a loong way off... -

-

+ Anyway, after many months of intermittent toil I present: -

-

-pyrepl 0.7.2 -

-

-(0.7.2 fixes a number of silly small typos and slips) -

-

-(the significant change since 0.7 is inclusion of a working setup.py...) -

-

-

-For more details on the changes since 0.6, you can read the CHANGES file. -

-

-NEWS (as of Dec 12 2002): -pyrepl now has dedicated mailing lists -where discussion about pyrepl's development will take place. -

-

-Dependencies: Python 2.1 with the termios and curses modules built (I + + +Dependencies: Python 2.7 with the termios and curses modules built (I don't really use curses, but I need the terminfo functions that live in the curses module), or pygame installed (if you want to live on the bleeding edge). -

-

+ There are probably portability gremlins in some of the ioctl using code. Fixes gratefully received! -

-

+ Features: -

-
    -
  • -sane multi-line editing -
  • -
  • -history, with incremental search -
  • -
  • -completion, including displaying of available options -
  • -
  • -a fairly large subset of the readline emacs-mode key bindings (adding -more is mostly just a matter of typing) -
  • -
  • -Deliberately liberal, Python-style license -
  • -
  • -a new python top-level that I really like; possibly my favourite -feature I've yet added is the ability to type -
    -->> from __f
    -
    -and hit TAB to get -
    -->> from __future__
    -
    -then you type " import n" and hit tab again to get: -
    -->> from __future__ import nested_scopes
    -
    -(this is very addictive!). -
  • -
  • -no global variables, so you can run two independent -readers without having their histories interfering. -
  • -
  • -An experimental version that runs in a pygame surface. -
  • x -
-

-pyrepl currently consists of four major classes: -

-
-Reader <- HistoricalReader <- CompletingReader <- PythonReader
-
-

-There's also a UnixConsole class that handles the low-level + * sane multi-line editing + * history, with incremental search + * completion, including displaying of available options + * a fairly large subset of the readline emacs-mode key bindings (adding + more is mostly just a matter of typing) + * Deliberately liberal, Python-style license + * a new python top-level that I really like; possibly my favourite + feature I've yet added is the ability to type:: + + ->> from __f + + and hit TAB to get:: + + ->> from __future__ + + then you type " import n" and hit tab again to get:: + + ->> from __future__ import nested_scopes + + (this is very addictive!). + + * no global variables, so you can run two independent + readers without having their histories interfering. + * An experimental version that runs in a pygame surface. + +pyrepl currently consists of four major classes:: + + Reader - HistoricalReader - CompletingReader - PythonReader + + +There's also a **UnixConsole** class that handles the low-level details. -

-

+ Each of these lives in it's own file, and there are a bunch of support files (including a C module that just provides a bit of a speed up - building it is strictly optional). -

-

-IMHO, the best way to get a feel for how it works is to type -

-
-$ python pythoni
-
-

+ +IMHO, the best way to get a feel for how it works is to type:: + + $ python pythoni + and just play around. If you're used to readline's emacs-mode, you should feel right at home. One point that might confuse: because the arrow keys are used to move up and down in the command currently being edited, you need to use ^P and ^N to move through the history. -

-

-<home> -<personal> -<hacks> -<links> -<quotes> -<details> -<summaries> -

-
- - Valid XHTML 1.0! - - - Valid CSS! - -Last updated: $Date: 2002/12/12 11:52:29 $. Comments to mwh at python.net. -
- -Best viewed with any browser. Except netscape 4 with javascript -on... -
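[The README text in the diff above describes pyrepl as four major classes layered by inheritance (Reader, then HistoricalReader, then CompletingReader, then PythonReader), each layer adding one capability on top of the last. A minimal sketch of that layered-reader design; the method names here are illustrative only and are not pyrepl's actual API:

```python
class Reader:
    """Base layer: owns the edit buffer."""
    def __init__(self):
        self.buffer = []

    def insert(self, text):
        self.buffer.extend(text)

    def get_buffer(self):
        return "".join(self.buffer)


class HistoricalReader(Reader):
    """Adds per-instance history; because nothing is global,
    two readers can run without their histories interfering."""
    def __init__(self):
        super().__init__()
        self.history = []

    def commit(self):
        line = self.get_buffer()
        self.history.append(line)
        self.buffer = []
        return line


class CompletingReader(HistoricalReader):
    """Adds completion against a candidate word list."""
    def __init__(self, words):
        super().__init__()
        self.words = words

    def complete(self):
        prefix = self.get_buffer()
        matches = [w for w in self.words if w.startswith(prefix)]
        if len(matches) == 1:
            # unambiguous: fill in the rest, like TAB on "from __f"
            self.buffer = list(matches[0])
        return matches


r = CompletingReader(["__future__", "__file__"])
r.insert("__f")
print(r.complete())   # both candidates share the prefix
r.insert("uture__")
print(r.commit())     # the finished line lands in r.history
```

Because all state lives on the instance, the "no global variables" feature in the list above falls out of this layering naturally.]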
- - - From noreply at buildbot.pypy.org Thu May 3 12:14:36 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 3 May 2012 12:14:36 +0200 (CEST) Subject: [pypy-commit] pyrepl default: move partial char handling to eventqueue Message-ID: <20120503101436.05B3982009@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r181:62bb9c81ed12 Date: 2012-05-03 11:17 +0200 http://bitbucket.org/pypy/pyrepl/changeset/62bb9c81ed12/ Log: move partial char handling to eventqueue diff --git a/pyrepl/unix_console.py b/pyrepl/unix_console.py --- a/pyrepl/unix_console.py +++ b/pyrepl/unix_console.py @@ -163,8 +163,7 @@ self.__move = self.__move_short - self.event_queue = EventQueue(self.input_fd) - self.partial_char = b'' + self.event_queue = EventQueue(self.input_fd, self.encoding) self.cursor_visible = 1 def change_encoding(self, encoding): @@ -402,23 +401,7 @@ def push_char(self, char): trace('push char {char!r}', char=char) - self.partial_char += char - try: - c = self.partial_char.decode(self.encoding) - except UnicodeError as e: - if len(e.args) > 4 and \ - e.args[4] == 'unexpected end of data': - pass - else: - # was: "raise". But it crashes pyrepl, and by extension the - # pypy currently running, in which we are e.g. in the middle - # of some debugging session. Argh. Instead just print an - # error message to stderr and continue running, for now. 
- self.partial_char = '' - sys.stderr.write('\n%s: %s\n' % (e.__class__.__name__, e)) - else: - self.partial_char = b'' - self.event_queue.push(c) + self.event_queue.push(char) def get_event(self, block=1): while self.event_queue.empty(): diff --git a/pyrepl/unix_eventqueue.py b/pyrepl/unix_eventqueue.py --- a/pyrepl/unix_eventqueue.py +++ b/pyrepl/unix_eventqueue.py @@ -52,7 +52,7 @@ } class EventQueue(object): - def __init__(self, fd): + def __init__(self, fd, encoding): our_keycodes = {} for key, tiname in _keynames.items(): keycode = curses.tigetstr(tiname) @@ -63,27 +63,39 @@ self.k = self.ck = keymap.compile_keymap(our_keycodes) self.events = [] self.buf = [] + self.encoding=encoding + def get(self): if self.events: return self.events.pop(0) else: return None + def empty(self): return not self.events + + def flush_buf(self): + raw = b''.join(self.buf) + self.buf = [] + return raw + def insert(self, event): self.events.append(event) + def push(self, char): if char in self.k: k = self.k[char] + self.buf.append(char) if isinstance(k, dict): - self.buf.append(char) self.k = k else: - self.events.append(Event('key', k, ''.join(self.buf) + char)) - self.buf = [] + self.events.append(Event('key', k, self.flush_buf())) self.k = self.ck elif self.buf: - self.events.extend([Event('key', c, c) for c in self.buf]) + keys = self.flush_buf() + decoded = keys.decode(self.encoding, 'ignore') # XXX surogate? 
+ #XXX: incorrect + self.events.extend(Event('key', c, c) for c in decoded) self.buf = [] self.k = self.ck self.push(char) From noreply at buildbot.pypy.org Thu May 3 12:14:37 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 3 May 2012 12:14:37 +0200 (CEST) Subject: [pypy-commit] pyrepl default: yay we work on python3 Message-ID: <20120503101437.1CEE68208A@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r182:7978b2d43a7f Date: 2012-05-03 12:14 +0200 http://bitbucket.org/pypy/pyrepl/changeset/7978b2d43a7f/ Log: yay we work on python3 diff --git a/pyrepl/keymap.py b/pyrepl/keymap.py --- a/pyrepl/keymap.py +++ b/pyrepl/keymap.py @@ -173,7 +173,11 @@ def compile_keymap(keymap, empty=b''): r = {} for key, value in keymap.items(): - r.setdefault(key[0], {})[key[1:]] = value + if isinstance(key, bytes): + first = key[:1] + else: + first = key[0] + r.setdefault(first, {})[key[1:]] = value for key, value in r.items(): if empty in value: if len(value) != 1: diff --git a/pyrepl/unix_eventqueue.py b/pyrepl/unix_eventqueue.py --- a/pyrepl/unix_eventqueue.py +++ b/pyrepl/unix_eventqueue.py @@ -24,6 +24,7 @@ from pyrepl import keymap from pyrepl.console import Event from pyrepl import curses +from .trace import trace from termios import tcgetattr, VERASE import os try: @@ -56,11 +57,13 @@ our_keycodes = {} for key, tiname in _keynames.items(): keycode = curses.tigetstr(tiname) + trace('key {key} tiname {tiname} keycode {keycode!r}', **locals()) if keycode: - our_keycodes[keycode] = unicode(key) + our_keycodes[keycode] = key if os.isatty(fd): our_keycodes[tcgetattr(fd)[6][VERASE]] = unicode('backspace') self.k = self.ck = keymap.compile_keymap(our_keycodes) + trace('keymap {k!r}', k=self.k) self.events = [] self.buf = [] self.encoding=encoding @@ -80,24 +83,27 @@ return raw def insert(self, event): + trace('added event {event}', event=event) self.events.append(event) def push(self, char): if char in self.k: k = self.k[char] + 
trace('found map {k!r}', k=k) self.buf.append(char) if isinstance(k, dict): self.k = k else: - self.events.append(Event('key', k, self.flush_buf())) + self.insert(Event('key', k, self.flush_buf())) self.k = self.ck elif self.buf: keys = self.flush_buf() decoded = keys.decode(self.encoding, 'ignore') # XXX surogate? #XXX: incorrect - self.events.extend(Event('key', c, c) for c in decoded) + for c in decoded: + self.insert(Event('key', c, c)) self.buf = [] self.k = self.ck self.push(char) else: - self.events.append(Event('key', char, char)) + self.insert(Event('key', char.decode(self.encoding), char)) diff --git a/testing/test_keymap.py b/testing/test_keymap.py --- a/testing/test_keymap.py +++ b/testing/test_keymap.py @@ -2,7 +2,6 @@ from pyrepl.keymap import compile_keymap - at pytest.mark.skip('completely wrong') def test_compile_keymap(): k = compile_keymap({ b'a': 'test', diff --git a/testing/test_unix_reader.py b/testing/test_unix_reader.py new file mode 100644 --- /dev/null +++ b/testing/test_unix_reader.py @@ -0,0 +1,9 @@ +from pyrepl.unix_eventqueue import EventQueue + +from pyrepl import curses + + + at pytest.mark.xfail(run=False, reason='wtf segfault') +def test_simple(): + q = EventQueue(0, 'utf-8') + From noreply at buildbot.pypy.org Thu May 3 12:51:46 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 3 May 2012 12:51:46 +0200 (CEST) Subject: [pypy-commit] pyrepl default: remove curses requirement, since we use ctypes Message-ID: <20120503105146.43D1E82009@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r183:49f53a8bb198 Date: 2012-05-03 12:50 +0200 http://bitbucket.org/pypy/pyrepl/changeset/49f53a8bb198/ Log: remove curses requirement, since we use ctypes diff --git a/README b/README --- a/README +++ b/README @@ -2,8 +2,7 @@ http://pyrepl.codespeak.net/ -It requires python 2.2 (or newer) with the curses and termios modules -built and features: +It requires python 2.7 (or newer) and features: * sane 
multi-line editing * history, with incremental search From noreply at buildbot.pypy.org Thu May 3 12:54:01 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 3 May 2012 12:54:01 +0200 (CEST) Subject: [pypy-commit] pyrepl default: bitbucket url in setup.py Message-ID: <20120503105401.2A9A282009@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r184:e60a72dcc3ae Date: 2012-05-03 12:53 +0200 http://bitbucket.org/pypy/pyrepl/changeset/e60a72dcc3ae/ Log: bitbucket url in setup.py diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -32,10 +32,10 @@ setup( name = "pyrepl", - version = "0.8.3", + version = "0.8.4", author = "Michael Hudson-Doyle", author_email = "micahel at gmail.com", - url = "http://codespeak.net/pyrepl/", + url = "http://bitbucket.org/pypy/pyrepl/", license = "MIT X11 style", description = "A library for building flexible command line interfaces", platforms = ["unix", "linux"], From noreply at buildbot.pypy.org Thu May 3 14:06:15 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 3 May 2012 14:06:15 +0200 (CEST) Subject: [pypy-commit] pyrepl default: prepare 0.8.4 release Message-ID: <20120503120615.E68CE82009@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r185:640cf3c5d4f5 Date: 2012-05-03 14:05 +0200 http://bitbucket.org/pypy/pyrepl/changeset/640cf3c5d4f5/ Log: prepare 0.8.4 release diff --git a/README b/README --- a/README +++ b/README @@ -43,6 +43,13 @@ emails, so don't think I'll be irritated by the banality of your comments!) + +Summary of 0.8.4: + + + python3 support + + support for more readline hooks + + backport various fixes from pypy + Summary of 0.8.3: + First release from new home on bitbucket. 
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -35,6 +35,8 @@ version = "0.8.4", author = "Michael Hudson-Doyle", author_email = "micahel at gmail.com", + maintainer="Ronny Pfannschmidt", + maintainer_email="ronny.pfannschmidt at gmx.de", url = "http://bitbucket.org/pypy/pyrepl/", license = "MIT X11 style", description = "A library for building flexible command line interfaces", From noreply at buildbot.pypy.org Thu May 3 14:32:01 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 3 May 2012 14:32:01 +0200 (CEST) Subject: [pypy-commit] pyrepl default: dup fds for UnixConsole and ad check for closed stdout to flishing, fixes #1 Message-ID: <20120503123201.58C2582009@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r186:1793b23a5a1b Date: 2012-05-03 14:31 +0200 http://bitbucket.org/pypy/pyrepl/changeset/1793b23a5a1b/ Log: dup fds for UnixConsole and ad check for closed stdout to flishing, fixes #1 diff --git a/pyrepl/python_reader.py b/pyrepl/python_reader.py --- a/pyrepl/python_reader.py +++ b/pyrepl/python_reader.py @@ -192,7 +192,8 @@ self.showsyntaxerror("") else: self.runcode(code) - sys.stdout.flush() + if sys.stdout and not sys.stdout.closed: + sys.stdout.flush() def interact(self): while 1: @@ -382,7 +383,7 @@ encoding = None else: encoding = None # so you get ASCII... 
- con = UnixConsole(0, 1, None, encoding) + con = UnixConsole(os.dup(0), os.dup(1), None, encoding) if print_banner: print("Python", sys.version, "on", sys.platform) print('Type "help", "copyright", "credits" or "license" '\ diff --git a/pyrepl/readline.py b/pyrepl/readline.py --- a/pyrepl/readline.py +++ b/pyrepl/readline.py @@ -174,13 +174,15 @@ # ____________________________________________________________ class _ReadlineWrapper(object): - f_in = 0 - f_out = 1 reader = None saved_history_length = -1 startup_hook = None config = ReadlineConfig() + def __init__(self): + self.f_in = os.dup(0) + self.f_ut = os.dup(1) + def get_reader(self): if self.reader is None: console = UnixConsole(self.f_in, self.f_out, encoding=ENCODING) From noreply at buildbot.pypy.org Thu May 3 14:32:02 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 3 May 2012 14:32:02 +0200 (CEST) Subject: [pypy-commit] pyrepl default: update changelog Message-ID: <20120503123202.844A782009@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r187:0cfad1cf00ec Date: 2012-05-03 14:31 +0200 http://bitbucket.org/pypy/pyrepl/changeset/0cfad1cf00ec/ Log: update changelog diff --git a/README b/README --- a/README +++ b/README @@ -49,6 +49,7 @@ + python3 support + support for more readline hooks + backport various fixes from pypy + + gracefully break on sys.stdout.close() Summary of 0.8.3: From noreply at buildbot.pypy.org Thu May 3 14:35:59 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 May 2012 14:35:59 +0200 (CEST) Subject: [pypy-commit] pypy stm-thread: hg merge default Message-ID: <20120503123559.8B80D82009@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-thread Changeset: r54884:8bb30f57bc59 Date: 2012-05-03 10:42 +0200 http://bitbucket.org/pypy/pypy/changeset/8bb30f57bc59/ Log: hg merge default diff too long, truncating to 10000 out of 12272 lines diff --git a/pypy/doc/cppyy.rst b/pypy/doc/cppyy.rst --- a/pypy/doc/cppyy.rst +++ 
b/pypy/doc/cppyy.rst @@ -21,6 +21,26 @@ .. _`llvm`: http://llvm.org/ +Motivation +========== + +The cppyy module offers two unique features, which result in great +performance as well as better functionality and cross-language integration +than would otherwise be possible. +First, cppyy is written in RPython and therefore open to optimizations by the +JIT up until the actual point of call into C++. +This means that there are no conversions necessary between a garbage collected +and a reference counted environment, as is needed for the use of existing +extension modules written or generated for CPython. +It also means that if variables are already unboxed by the JIT, they can be +passed through directly to C++. +Second, Reflex (and cling far more so) adds dynamic features to C++, thus +greatly reducing impedance mismatches between the two languages. +In fact, Reflex is dynamic enough that you could write the runtime bindings +generation in python (as opposed to RPython) and this is used to create very +natural "pythonizations" of the bound code. + + Installation ============ @@ -195,10 +215,12 @@ >>>> d = cppyy.gbl.BaseFactory("name", 42, 3.14) >>>> type(d) - >>>> d.m_i - 42 - >>>> d.m_d - 3.14 + >>>> isinstance(d, cppyy.gbl.Base1) + True + >>>> isinstance(d, cppyy.gbl.Base2) + True + >>>> d.m_i, d.m_d + (42, 3.14) >>>> d.m_name == "name" True >>>> @@ -295,6 +317,9 @@ To select a specific virtual method, do like with normal python classes that override methods: select it from the class that you need, rather than calling the method on the instance. + To select a specific overload, use the __dispatch__ special function, which + takes the name of the desired method and its signature (which can be + obtained from the doc string) as arguments. * **namespaces**: Are represented as python classes. 
Namespaces are more open-ended than classes, so sometimes initial access may diff --git a/pypy/doc/extending.rst b/pypy/doc/extending.rst --- a/pypy/doc/extending.rst +++ b/pypy/doc/extending.rst @@ -116,13 +116,21 @@ Reflex ====== -This method is only experimental for now, and is being exercised on a branch, -`reflex-support`_, so you will have to build PyPy yourself. +This method is still experimental and is being exercised on a branch, +`reflex-support`_, which adds the `cppyy`_ module. The method works by using the `Reflex package`_ to provide reflection information of the C++ code, which is then used to automatically generate -bindings at runtime, which can then be used from python. +bindings at runtime. +From a python standpoint, there is no difference between generating bindings +at runtime, or having them "statically" generated and available in scripts +or compiled into extension modules: python classes and functions are always +runtime structures, created when a script or module loads. +However, if the backend itself is capable of dynamic behavior, it is a much +better functional match to python, allowing tighter integration and more +natural language mappings. Full details are `available here`_. +.. _`cppyy`: cppyy.html .. _`reflex-support`: cppyy.html .. _`Reflex package`: http://root.cern.ch/drupal/content/reflex .. _`available here`: cppyy.html @@ -130,16 +138,33 @@ Pros ---- -If it works, it is mostly automatic, and hence easy in use. -The bindings can make use of direct pointers, in which case the calls are -very fast. +The cppyy module is written in RPython, which makes it possible to keep the +code execution visible to the JIT all the way to the actual point of call into +C++, thus allowing for a very fast interface. +Reflex is currently in use in large software environments in High Energy +Physics (HEP), across many different projects and packages, and its use can be +virtually completely automated in a production environment. 
+One of its uses in HEP is in providing language bindings for CPython. +Thus, it is possible to use Reflex to have bound code work on both CPython and +on PyPy. +In the medium-term, Reflex will be replaced by `cling`_, which is based on +`llvm`_. +This will affect the backend only; the python-side interface is expected to +remain the same, except that cling adds a lot of dynamic behavior to C++, +enabling further language integration. + +.. _`cling`: http://root.cern.ch/drupal/content/cling +.. _`llvm`: http://llvm.org/ Cons ---- -C++ is a large language, and these bindings are not yet feature-complete. -Although missing features should do no harm if you don't use them, if you do -need a particular feature, it may be necessary to work around it in python -or with a C++ helper function. +C++ is a large language, and cppyy is not yet feature-complete. +Still, the experience gained in developing the equivalent bindings for CPython +means that adding missing features is a simple matter of engineering, not a +question of research. +The module is written so that currently missing features should do no harm if +you don't use them, if you do need a particular feature, it may be necessary +to work around it in python or with a C++ helper function. Although Reflex works on various platforms, the bindings with PyPy have only been tested on Linux. 
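[The extending.rst text above argues that python classes are always runtime structures, so bindings generated at runtime are no different from "statically" generated ones. The builtin three-argument type() constructor makes this concrete: a class can be assembled on the fly, the way a runtime bindings generator would assemble one from reflection data. The names below are purely illustrative and not the actual cppyy machinery:

```python
def make_proxy(name, methods):
    # Build a class object at runtime instead of with a class statement;
    # the result is the same kind of object a "static" definition yields.
    return type(name, (object,), dict(methods))


Vector = make_proxy("Vector", {
    "__init__": lambda self, x, y: setattr(self, "xy", (x, y)),
    "norm2": lambda self: self.xy[0] ** 2 + self.xy[1] ** 2,
})

v = Vector(3, 4)
print(type(v).__name__)   # Vector
print(v.norm2())          # 25
```

A backend like Reflex only has to supply the reflection information; the python side can then build such proxy classes lazily, on first access.]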
diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -49,8 +49,9 @@ optimizations.append(OptFfiCall()) if ('rewrite' not in enable_opts or 'virtualize' not in enable_opts - or 'heap' not in enable_opts or 'unroll' not in enable_opts): - optimizations.append(OptSimplify()) + or 'heap' not in enable_opts or 'unroll' not in enable_opts + or 'pure' not in enable_opts): + optimizations.append(OptSimplify(unroll)) return optimizations, unroll diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -257,8 +257,8 @@ opnum == rop.COPYSTRCONTENT or # no effect on GC struct/array opnum == rop.COPYUNICODECONTENT): # no effect on GC struct/array return - assert opnum != rop.CALL_PURE if (opnum == rop.CALL or + opnum == rop.CALL_PURE or opnum == rop.CALL_MAY_FORCE or opnum == rop.CALL_RELEASE_GIL or opnum == rop.CALL_ASSEMBLER): diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -525,6 +525,7 @@ @specialize.argtype(0) def _emit_operation(self, op): + assert op.getopnum() != rop.CALL_PURE for i in range(op.numargs()): arg = op.getarg(i) try: diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -4,8 +4,9 @@ from pypy.jit.metainterp.history import TargetToken, JitCellToken class OptSimplify(Optimization): - def __init__(self): + def __init__(self, unroll): self.last_label_descr = None + self.unroll = unroll def optimize_CALL_PURE(self, op): args = op.getarglist() @@ -35,24 +36,26 @@ pass def 
optimize_LABEL(self, op): - descr = op.getdescr() - if isinstance(descr, JitCellToken): - return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) - self.last_label_descr = op.getdescr() + if not self.unroll: + descr = op.getdescr() + if isinstance(descr, JitCellToken): + return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) + self.last_label_descr = op.getdescr() self.emit_operation(op) def optimize_JUMP(self, op): - descr = op.getdescr() - assert isinstance(descr, JitCellToken) - if not descr.target_tokens: - assert self.last_label_descr is not None - target_token = self.last_label_descr - assert isinstance(target_token, TargetToken) - assert target_token.targeting_jitcell_token is descr - op.setdescr(self.last_label_descr) - else: - assert len(descr.target_tokens) == 1 - op.setdescr(descr.target_tokens[0]) + if not self.unroll: + descr = op.getdescr() + assert isinstance(descr, JitCellToken) + if not descr.target_tokens: + assert self.last_label_descr is not None + target_token = self.last_label_descr + assert isinstance(target_token, TargetToken) + assert target_token.targeting_jitcell_token is descr + op.setdescr(self.last_label_descr) + else: + assert len(descr.target_tokens) == 1 + op.setdescr(descr.target_tokens[0]) self.emit_operation(op) dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', diff --git a/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py @@ -0,0 +1,46 @@ +from pypy.jit.metainterp.optimizeopt.test.test_optimizeopt import OptimizeOptTest +from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin +from pypy.jit.metainterp.resoperation import rop + + +allopts = OptimizeOptTest.enable_opts.split(':') +for optnum in range(len(allopts)): + myopts = allopts[:] + del myopts[optnum] + + class TestLLtype(OptimizeOptTest, LLtypeMixin): + 
enable_opts = ':'.join(myopts) + + def optimize_loop(self, ops, expected, expected_preamble=None, + call_pure_results=None, expected_short=None): + loop = self.parse(ops) + if expected != "crash!": + expected = self.parse(expected) + if expected_preamble: + expected_preamble = self.parse(expected_preamble) + if expected_short: + expected_short = self.parse(expected_short) + + preamble = self.unroll_and_optimize(loop, call_pure_results) + + for op in preamble.operations + loop.operations: + assert op.getopnum() not in (rop.CALL_PURE, + rop.CALL_LOOPINVARIANT, + rop.VIRTUAL_REF_FINISH, + rop.VIRTUAL_REF, + rop.QUASIIMMUT_FIELD, + rop.MARK_OPAQUE_PTR, + rop.RECORD_KNOWN_CLASS) + + def raises(self, e, fn, *args): + try: + fn(*args) + except e: + pass + + opt = allopts[optnum] + exec "TestNo%sLLtype = TestLLtype" % (opt[0].upper() + opt[1:]) + +del TestLLtype # No need to run the last set twice +del TestNoUnrollLLtype # This case is take care of by test_optimizebasic + diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -105,6 +105,9 @@ return loop + def raises(self, e, fn, *args): + py.test.raises(e, fn, *args) + class OptimizeOptTest(BaseTestWithUnroll): def setup_method(self, meth=None): @@ -2639,7 +2642,7 @@ p2 = new_with_vtable(ConstClass(node_vtable)) jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_2(self): @@ -2651,7 +2654,7 @@ escape(p2) # prevent it from staying Virtual jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_3(self): @@ -2665,7 +2668,7 @@ setfield_gc(p3, p4, descr=nextdescr) jump(p3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + 
self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_merge_guard_class_guard_value(self): @@ -4411,7 +4414,7 @@ setfield_gc(p1, p3, descr=nextdescr) jump(p3) """ - py.test.raises(BogusPureField, self.optimize_loop, ops, "crash!") + self.raises(BogusPureField, self.optimize_loop, ops, "crash!") def test_dont_complains_different_field(self): ops = """ @@ -5024,7 +5027,7 @@ i2 = int_add(i0, 3) jump(i2) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_ne_const_not(self): ops = """ @@ -5074,7 +5077,7 @@ i3 = int_add(i0, 3) jump(i3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_lshift(self): ops = """ diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -927,12 +927,12 @@ source_dir / "pyerrors.c", source_dir / "modsupport.c", source_dir / "getargs.c", + source_dir / "abstract.c", source_dir / "stringobject.c", source_dir / "mysnprintf.c", source_dir / "pythonrun.c", source_dir / "sysmodule.c", source_dir / "bufferobject.c", - source_dir / "object.c", source_dir / "cobject.c", source_dir / "structseq.c", source_dir / "capsule.c", diff --git a/pypy/module/cpyext/complexobject.py b/pypy/module/cpyext/complexobject.py --- a/pypy/module/cpyext/complexobject.py +++ b/pypy/module/cpyext/complexobject.py @@ -33,6 +33,11 @@ # CPython also accepts anything return 0.0 + at cpython_api([Py_complex_ptr], PyObject) +def _PyComplex_FromCComplex(space, v): + """Create a new Python complex number object from a C Py_complex value.""" + return space.newcomplex(v.c_real, v.c_imag) + # lltype does not handle functions returning a structure. This implements a # helper function, which takes as argument a reference to the return value. 
@cpython_api([PyObject, Py_complex_ptr], lltype.Void) diff --git a/pypy/module/cpyext/floatobject.py b/pypy/module/cpyext/floatobject.py --- a/pypy/module/cpyext/floatobject.py +++ b/pypy/module/cpyext/floatobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.api import ( CANNOT_FAIL, cpython_api, PyObject, build_type_checkers, CONST_STRING) from pypy.interpreter.error import OperationError +from pypy.rlib.rstruct import runpack PyFloat_Check, PyFloat_CheckExact = build_type_checkers("Float") @@ -33,3 +34,19 @@ backward compatibility.""" return space.call_function(space.w_float, w_obj) + at cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0) +def _PyFloat_Unpack4(space, ptr, le): + input = rffi.charpsize2str(ptr, 4) + if rffi.cast(lltype.Signed, le): + return runpack.runpack("f", input) + + at cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0) +def _PyFloat_Unpack8(space, ptr, le): + input = rffi.charpsize2str(ptr, 8) + if rffi.cast(lltype.Signed, le): + return runpack.runpack("d", input) + diff --git a/pypy/module/cpyext/include/complexobject.h b/pypy/module/cpyext/include/complexobject.h --- a/pypy/module/cpyext/include/complexobject.h +++ b/pypy/module/cpyext/include/complexobject.h @@ -21,6 +21,8 @@ return result; } +#define PyComplex_FromCComplex(c) _PyComplex_FromCComplex(&c) + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/pyerrors.h b/pypy/module/cpyext/include/pyerrors.h --- a/pypy/module/cpyext/include/pyerrors.h +++ b/pypy/module/cpyext/include/pyerrors.h @@ -29,6 +29,10 @@ # define vsnprintf _vsnprintf #endif +#include +PyAPI_FUNC(int) PyOS_snprintf(char *str, size_t size, const char *format, ...); +PyAPI_FUNC(int) PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va); + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/stringobject.h b/pypy/module/cpyext/include/stringobject.h --- a/pypy/module/cpyext/include/stringobject.h +++ 
b/pypy/module/cpyext/include/stringobject.h @@ -7,8 +7,6 @@ extern "C" { #endif -int PyOS_snprintf(char *str, size_t size, const char *format, ...); - #define PyString_GET_SIZE(op) PyString_Size(op) #define PyString_AS_STRING(op) PyString_AsString(op) diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -40,7 +40,8 @@ when the value is NULL, instead of converting to None. */ #define T_LONGLONG 17 -#define T_ULONGLONG 18 +#define T_ULONGLONG 18 +#define T_PYSSIZET 19 /* Flags. These constants are also in structmemberdefs.py. */ #define READONLY 1 diff --git a/pypy/module/cpyext/longobject.py b/pypy/module/cpyext/longobject.py --- a/pypy/module/cpyext/longobject.py +++ b/pypy/module/cpyext/longobject.py @@ -1,6 +1,7 @@ from pypy.rpython.lltypesystem import lltype, rffi -from pypy.module.cpyext.api import (cpython_api, PyObject, build_type_checkers, - CONST_STRING, ADDR, CANNOT_FAIL) +from pypy.module.cpyext.api import ( + cpython_api, PyObject, build_type_checkers, Py_ssize_t, + CONST_STRING, ADDR, CANNOT_FAIL) from pypy.objspace.std.longobject import W_LongObject from pypy.interpreter.error import OperationError from pypy.module.cpyext.intobject import PyInt_AsUnsignedLongMask @@ -15,6 +16,13 @@ """Return a new PyLongObject object from v, or NULL on failure.""" return space.newlong(val) + at cpython_api([Py_ssize_t], PyObject) +def PyLong_FromSsize_t(space, val): + """Return a new PyLongObject object from a C Py_ssize_t, or + NULL on failure. 
+ """ + return space.newlong(val) + @cpython_api([rffi.LONGLONG], PyObject) def PyLong_FromLongLong(space, val): """Return a new PyLongObject object from a C long long, or NULL @@ -56,6 +64,14 @@ and -1 will be returned.""" return space.int_w(w_long) + at cpython_api([PyObject], Py_ssize_t, error=-1) +def PyLong_AsSsize_t(space, w_long): + """Return a C Py_ssize_t representation of the contents of pylong. If + pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised + and -1 will be returned. + """ + return space.int_w(w_long) + @cpython_api([PyObject], rffi.LONGLONG, error=-1) def PyLong_AsLongLong(space, w_long): """ diff --git a/pypy/module/cpyext/src/abstract.c b/pypy/module/cpyext/src/abstract.c new file mode 100644 --- /dev/null +++ b/pypy/module/cpyext/src/abstract.c @@ -0,0 +1,269 @@ +/* Abstract Object Interface */ + +#include "Python.h" + +/* Shorthands to return certain errors */ + +static PyObject * +type_error(const char *msg, PyObject *obj) +{ + PyErr_Format(PyExc_TypeError, msg, obj->ob_type->tp_name); + return NULL; +} + +static PyObject * +null_error(void) +{ + if (!PyErr_Occurred()) + PyErr_SetString(PyExc_SystemError, + "null argument to internal routine"); + return NULL; +} + +/* Operations on any object */ + +int +PyObject_CheckReadBuffer(PyObject *obj) +{ + PyBufferProcs *pb = obj->ob_type->tp_as_buffer; + + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL || + (*pb->bf_getsegcount)(obj, NULL) != 1) + return 0; + return 1; +} + +int PyObject_AsReadBuffer(PyObject *obj, + const void **buffer, + Py_ssize_t *buffer_len) +{ + PyBufferProcs *pb; + void *pp; + Py_ssize_t len; + + if (obj == NULL || buffer == NULL || buffer_len == NULL) { + null_error(); + return -1; + } + pb = obj->ob_type->tp_as_buffer; + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL) { + PyErr_SetString(PyExc_TypeError, + "expected a readable buffer object"); + return -1; + } + if 
((*pb->bf_getsegcount)(obj, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "expected a single-segment buffer object"); + return -1; + } + len = (*pb->bf_getreadbuffer)(obj, 0, &pp); + if (len < 0) + return -1; + *buffer = pp; + *buffer_len = len; + return 0; +} + +int PyObject_AsWriteBuffer(PyObject *obj, + void **buffer, + Py_ssize_t *buffer_len) +{ + PyBufferProcs *pb; + void*pp; + Py_ssize_t len; + + if (obj == NULL || buffer == NULL || buffer_len == NULL) { + null_error(); + return -1; + } + pb = obj->ob_type->tp_as_buffer; + if (pb == NULL || + pb->bf_getwritebuffer == NULL || + pb->bf_getsegcount == NULL) { + PyErr_SetString(PyExc_TypeError, + "expected a writeable buffer object"); + return -1; + } + if ((*pb->bf_getsegcount)(obj, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "expected a single-segment buffer object"); + return -1; + } + len = (*pb->bf_getwritebuffer)(obj,0,&pp); + if (len < 0) + return -1; + *buffer = pp; + *buffer_len = len; + return 0; +} + +/* Operations on callable objects */ + +static PyObject* +call_function_tail(PyObject *callable, PyObject *args) +{ + PyObject *retval; + + if (args == NULL) + return NULL; + + if (!PyTuple_Check(args)) { + PyObject *a; + + a = PyTuple_New(1); + if (a == NULL) { + Py_DECREF(args); + return NULL; + } + PyTuple_SET_ITEM(a, 0, args); + args = a; + } + retval = PyObject_Call(callable, args, NULL); + + Py_DECREF(args); + + return retval; +} + +PyObject * +PyObject_CallFunction(PyObject *callable, const char *format, ...) +{ + va_list va; + PyObject *args; + + if (callable == NULL) + return null_error(); + + if (format && *format) { + va_start(va, format); + args = Py_VaBuildValue(format, va); + va_end(va); + } + else + args = PyTuple_New(0); + + return call_function_tail(callable, args); +} + +PyObject * +PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) 
+{ + va_list va; + PyObject *args; + PyObject *func = NULL; + PyObject *retval = NULL; + + if (o == NULL || name == NULL) + return null_error(); + + func = PyObject_GetAttrString(o, name); + if (func == NULL) { + PyErr_SetString(PyExc_AttributeError, name); + return 0; + } + + if (!PyCallable_Check(func)) { + type_error("attribute of type '%.200s' is not callable", func); + goto exit; + } + + if (format && *format) { + va_start(va, format); + args = Py_VaBuildValue(format, va); + va_end(va); + } + else + args = PyTuple_New(0); + + retval = call_function_tail(func, args); + + exit: + /* args gets consumed in call_function_tail */ + Py_XDECREF(func); + + return retval; +} + +static PyObject * +objargs_mktuple(va_list va) +{ + int i, n = 0; + va_list countva; + PyObject *result, *tmp; + +#ifdef VA_LIST_IS_ARRAY + memcpy(countva, va, sizeof(va_list)); +#else +#ifdef __va_copy + __va_copy(countva, va); +#else + countva = va; +#endif +#endif + + while (((PyObject *)va_arg(countva, PyObject *)) != NULL) + ++n; + result = PyTuple_New(n); + if (result != NULL && n > 0) { + for (i = 0; i < n; ++i) { + tmp = (PyObject *)va_arg(va, PyObject *); + Py_INCREF(tmp); + PyTuple_SET_ITEM(result, i, tmp); + } + } + return result; +} + +PyObject * +PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...) +{ + PyObject *args, *tmp; + va_list vargs; + + if (callable == NULL || name == NULL) + return null_error(); + + callable = PyObject_GetAttr(callable, name); + if (callable == NULL) + return NULL; + + /* count the args */ + va_start(vargs, name); + args = objargs_mktuple(vargs); + va_end(vargs); + if (args == NULL) { + Py_DECREF(callable); + return NULL; + } + tmp = PyObject_Call(callable, args, NULL); + Py_DECREF(args); + Py_DECREF(callable); + + return tmp; +} + +PyObject * +PyObject_CallFunctionObjArgs(PyObject *callable, ...) 
+{ + PyObject *args, *tmp; + va_list vargs; + + if (callable == NULL) + return null_error(); + + /* count the args */ + va_start(vargs, callable); + args = objargs_mktuple(vargs); + va_end(vargs); + if (args == NULL) + return NULL; + tmp = PyObject_Call(callable, args, NULL); + Py_DECREF(args); + + return tmp; +} + diff --git a/pypy/module/cpyext/src/bufferobject.c b/pypy/module/cpyext/src/bufferobject.c --- a/pypy/module/cpyext/src/bufferobject.c +++ b/pypy/module/cpyext/src/bufferobject.c @@ -13,207 +13,207 @@ static int get_buf(PyBufferObject *self, void **ptr, Py_ssize_t *size, - enum buffer_t buffer_type) + enum buffer_t buffer_type) { - if (self->b_base == NULL) { - assert (ptr != NULL); - *ptr = self->b_ptr; - *size = self->b_size; - } - else { - Py_ssize_t count, offset; - readbufferproc proc = 0; - PyBufferProcs *bp = self->b_base->ob_type->tp_as_buffer; - if ((*bp->bf_getsegcount)(self->b_base, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return 0; - } - if ((buffer_type == READ_BUFFER) || - ((buffer_type == ANY_BUFFER) && self->b_readonly)) - proc = bp->bf_getreadbuffer; - else if ((buffer_type == WRITE_BUFFER) || - (buffer_type == ANY_BUFFER)) - proc = (readbufferproc)bp->bf_getwritebuffer; - else if (buffer_type == CHAR_BUFFER) { + if (self->b_base == NULL) { + assert (ptr != NULL); + *ptr = self->b_ptr; + *size = self->b_size; + } + else { + Py_ssize_t count, offset; + readbufferproc proc = 0; + PyBufferProcs *bp = self->b_base->ob_type->tp_as_buffer; + if ((*bp->bf_getsegcount)(self->b_base, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return 0; + } + if ((buffer_type == READ_BUFFER) || + ((buffer_type == ANY_BUFFER) && self->b_readonly)) + proc = bp->bf_getreadbuffer; + else if ((buffer_type == WRITE_BUFFER) || + (buffer_type == ANY_BUFFER)) + proc = (readbufferproc)bp->bf_getwritebuffer; + else if (buffer_type == CHAR_BUFFER) { if 
(!PyType_HasFeature(self->ob_type, - Py_TPFLAGS_HAVE_GETCHARBUFFER)) { - PyErr_SetString(PyExc_TypeError, - "Py_TPFLAGS_HAVE_GETCHARBUFFER needed"); - return 0; - } - proc = (readbufferproc)bp->bf_getcharbuffer; - } - if (!proc) { - char *buffer_type_name; - switch (buffer_type) { - case READ_BUFFER: - buffer_type_name = "read"; - break; - case WRITE_BUFFER: - buffer_type_name = "write"; - break; - case CHAR_BUFFER: - buffer_type_name = "char"; - break; - default: - buffer_type_name = "no"; - break; - } - PyErr_Format(PyExc_TypeError, - "%s buffer type not available", - buffer_type_name); - return 0; - } - if ((count = (*proc)(self->b_base, 0, ptr)) < 0) - return 0; - /* apply constraints to the start/end */ - if (self->b_offset > count) - offset = count; - else - offset = self->b_offset; - *(char **)ptr = *(char **)ptr + offset; - if (self->b_size == Py_END_OF_BUFFER) - *size = count; - else - *size = self->b_size; - if (offset + *size > count) - *size = count - offset; - } - return 1; + Py_TPFLAGS_HAVE_GETCHARBUFFER)) { + PyErr_SetString(PyExc_TypeError, + "Py_TPFLAGS_HAVE_GETCHARBUFFER needed"); + return 0; + } + proc = (readbufferproc)bp->bf_getcharbuffer; + } + if (!proc) { + char *buffer_type_name; + switch (buffer_type) { + case READ_BUFFER: + buffer_type_name = "read"; + break; + case WRITE_BUFFER: + buffer_type_name = "write"; + break; + case CHAR_BUFFER: + buffer_type_name = "char"; + break; + default: + buffer_type_name = "no"; + break; + } + PyErr_Format(PyExc_TypeError, + "%s buffer type not available", + buffer_type_name); + return 0; + } + if ((count = (*proc)(self->b_base, 0, ptr)) < 0) + return 0; + /* apply constraints to the start/end */ + if (self->b_offset > count) + offset = count; + else + offset = self->b_offset; + *(char **)ptr = *(char **)ptr + offset; + if (self->b_size == Py_END_OF_BUFFER) + *size = count; + else + *size = self->b_size; + if (offset + *size > count) + *size = count - offset; + } + return 1; } static PyObject * 
buffer_from_memory(PyObject *base, Py_ssize_t size, Py_ssize_t offset, void *ptr, - int readonly) + int readonly) { - PyBufferObject * b; + PyBufferObject * b; - if (size < 0 && size != Py_END_OF_BUFFER) { - PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); - return NULL; - } - if (offset < 0) { - PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); - return NULL; - } + if (size < 0 && size != Py_END_OF_BUFFER) { + PyErr_SetString(PyExc_ValueError, + "size must be zero or positive"); + return NULL; + } + if (offset < 0) { + PyErr_SetString(PyExc_ValueError, + "offset must be zero or positive"); + return NULL; + } - b = PyObject_NEW(PyBufferObject, &PyBuffer_Type); - if ( b == NULL ) - return NULL; + b = PyObject_NEW(PyBufferObject, &PyBuffer_Type); + if ( b == NULL ) + return NULL; - Py_XINCREF(base); - b->b_base = base; - b->b_ptr = ptr; - b->b_size = size; - b->b_offset = offset; - b->b_readonly = readonly; - b->b_hash = -1; + Py_XINCREF(base); + b->b_base = base; + b->b_ptr = ptr; + b->b_size = size; + b->b_offset = offset; + b->b_readonly = readonly; + b->b_hash = -1; - return (PyObject *) b; + return (PyObject *) b; } static PyObject * buffer_from_object(PyObject *base, Py_ssize_t size, Py_ssize_t offset, int readonly) { - if (offset < 0) { - PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); - return NULL; - } - if ( PyBuffer_Check(base) && (((PyBufferObject *)base)->b_base) ) { - /* another buffer, refer to the base object */ - PyBufferObject *b = (PyBufferObject *)base; - if (b->b_size != Py_END_OF_BUFFER) { - Py_ssize_t base_size = b->b_size - offset; - if (base_size < 0) - base_size = 0; - if (size == Py_END_OF_BUFFER || size > base_size) - size = base_size; - } - offset += b->b_offset; - base = b->b_base; - } - return buffer_from_memory(base, size, offset, NULL, readonly); + if (offset < 0) { + PyErr_SetString(PyExc_ValueError, + "offset must be zero or positive"); + return NULL; + } + if ( 
PyBuffer_Check(base) && (((PyBufferObject *)base)->b_base) ) { + /* another buffer, refer to the base object */ + PyBufferObject *b = (PyBufferObject *)base; + if (b->b_size != Py_END_OF_BUFFER) { + Py_ssize_t base_size = b->b_size - offset; + if (base_size < 0) + base_size = 0; + if (size == Py_END_OF_BUFFER || size > base_size) + size = base_size; + } + offset += b->b_offset; + base = b->b_base; + } + return buffer_from_memory(base, size, offset, NULL, readonly); } PyObject * PyBuffer_FromObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size) { - PyBufferProcs *pb = base->ob_type->tp_as_buffer; + PyBufferProcs *pb = base->ob_type->tp_as_buffer; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_SetString(PyExc_TypeError, "buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_SetString(PyExc_TypeError, "buffer object expected"); + return NULL; + } - return buffer_from_object(base, size, offset, 1); + return buffer_from_object(base, size, offset, 1); } PyObject * PyBuffer_FromReadWriteObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size) { - PyBufferProcs *pb = base->ob_type->tp_as_buffer; + PyBufferProcs *pb = base->ob_type->tp_as_buffer; - if ( pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_SetString(PyExc_TypeError, "buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getwritebuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_SetString(PyExc_TypeError, "buffer object expected"); + return NULL; + } - return buffer_from_object(base, size, offset, 0); + return buffer_from_object(base, size, offset, 0); } PyObject * PyBuffer_FromMemory(void *ptr, Py_ssize_t size) { - return buffer_from_memory(NULL, size, 0, ptr, 1); + return buffer_from_memory(NULL, size, 0, ptr, 1); } PyObject * PyBuffer_FromReadWriteMemory(void *ptr, Py_ssize_t size) { - return 
buffer_from_memory(NULL, size, 0, ptr, 0); + return buffer_from_memory(NULL, size, 0, ptr, 0); } PyObject * PyBuffer_New(Py_ssize_t size) { - PyObject *o; - PyBufferObject * b; + PyObject *o; + PyBufferObject * b; - if (size < 0) { - PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); - return NULL; - } - if (sizeof(*b) > PY_SSIZE_T_MAX - size) { - /* unlikely */ - return PyErr_NoMemory(); - } - /* Inline PyObject_New */ - o = (PyObject *)PyObject_MALLOC(sizeof(*b) + size); - if ( o == NULL ) - return PyErr_NoMemory(); - b = (PyBufferObject *) PyObject_INIT(o, &PyBuffer_Type); + if (size < 0) { + PyErr_SetString(PyExc_ValueError, + "size must be zero or positive"); + return NULL; + } + if (sizeof(*b) > PY_SSIZE_T_MAX - size) { + /* unlikely */ + return PyErr_NoMemory(); + } + /* Inline PyObject_New */ + o = (PyObject *)PyObject_MALLOC(sizeof(*b) + size); + if ( o == NULL ) + return PyErr_NoMemory(); + b = (PyBufferObject *) PyObject_INIT(o, &PyBuffer_Type); - b->b_base = NULL; - b->b_ptr = (void *)(b + 1); - b->b_size = size; - b->b_offset = 0; - b->b_readonly = 0; - b->b_hash = -1; + b->b_base = NULL; + b->b_ptr = (void *)(b + 1); + b->b_size = size; + b->b_offset = 0; + b->b_readonly = 0; + b->b_hash = -1; - return o; + return o; } /* Methods */ @@ -221,19 +221,21 @@ static PyObject * buffer_new(PyTypeObject *type, PyObject *args, PyObject *kw) { - PyObject *ob; - Py_ssize_t offset = 0; - Py_ssize_t size = Py_END_OF_BUFFER; + PyObject *ob; + Py_ssize_t offset = 0; + Py_ssize_t size = Py_END_OF_BUFFER; - /*if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) - return NULL;*/ - - if (!_PyArg_NoKeywords("buffer()", kw)) - return NULL; + /* + * if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) + * return NULL; + */ - if (!PyArg_ParseTuple(args, "O|nn:buffer", &ob, &offset, &size)) - return NULL; - return PyBuffer_FromObject(ob, offset, size); + if (!_PyArg_NoKeywords("buffer()", kw)) + return NULL; + + if (!PyArg_ParseTuple(args, 
"O|nn:buffer", &ob, &offset, &size)) + return NULL; + return PyBuffer_FromObject(ob, offset, size); } PyDoc_STRVAR(buffer_doc, @@ -248,99 +250,99 @@ static void buffer_dealloc(PyBufferObject *self) { - Py_XDECREF(self->b_base); - PyObject_DEL(self); + Py_XDECREF(self->b_base); + PyObject_DEL(self); } static int buffer_compare(PyBufferObject *self, PyBufferObject *other) { - void *p1, *p2; - Py_ssize_t len_self, len_other, min_len; - int cmp; + void *p1, *p2; + Py_ssize_t len_self, len_other, min_len; + int cmp; - if (!get_buf(self, &p1, &len_self, ANY_BUFFER)) - return -1; - if (!get_buf(other, &p2, &len_other, ANY_BUFFER)) - return -1; - min_len = (len_self < len_other) ? len_self : len_other; - if (min_len > 0) { - cmp = memcmp(p1, p2, min_len); - if (cmp != 0) - return cmp < 0 ? -1 : 1; - } - return (len_self < len_other) ? -1 : (len_self > len_other) ? 1 : 0; + if (!get_buf(self, &p1, &len_self, ANY_BUFFER)) + return -1; + if (!get_buf(other, &p2, &len_other, ANY_BUFFER)) + return -1; + min_len = (len_self < len_other) ? len_self : len_other; + if (min_len > 0) { + cmp = memcmp(p1, p2, min_len); + if (cmp != 0) + return cmp < 0 ? -1 : 1; + } + return (len_self < len_other) ? -1 : (len_self > len_other) ? 1 : 0; } static PyObject * buffer_repr(PyBufferObject *self) { - const char *status = self->b_readonly ? "read-only" : "read-write"; + const char *status = self->b_readonly ? 
"read-only" : "read-write"; if ( self->b_base == NULL ) - return PyString_FromFormat("<%s buffer ptr %p, size %zd at %p>", - status, - self->b_ptr, - self->b_size, - self); - else - return PyString_FromFormat( - "<%s buffer for %p, size %zd, offset %zd at %p>", - status, - self->b_base, - self->b_size, - self->b_offset, - self); + return PyString_FromFormat("<%s buffer ptr %p, size %zd at %p>", + status, + self->b_ptr, + self->b_size, + self); + else + return PyString_FromFormat( + "<%s buffer for %p, size %zd, offset %zd at %p>", + status, + self->b_base, + self->b_size, + self->b_offset, + self); } static long buffer_hash(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - register Py_ssize_t len; - register unsigned char *p; - register long x; + void *ptr; + Py_ssize_t size; + register Py_ssize_t len; + register unsigned char *p; + register long x; - if ( self->b_hash != -1 ) - return self->b_hash; + if ( self->b_hash != -1 ) + return self->b_hash; - /* XXX potential bugs here, a readonly buffer does not imply that the - * underlying memory is immutable. b_readonly is a necessary but not - * sufficient condition for a buffer to be hashable. Perhaps it would - * be better to only allow hashing if the underlying object is known to - * be immutable (e.g. PyString_Check() is true). Another idea would - * be to call tp_hash on the underlying object and see if it raises - * an error. */ - if ( !self->b_readonly ) - { - PyErr_SetString(PyExc_TypeError, - "writable buffers are not hashable"); - return -1; - } + /* XXX potential bugs here, a readonly buffer does not imply that the + * underlying memory is immutable. b_readonly is a necessary but not + * sufficient condition for a buffer to be hashable. Perhaps it would + * be better to only allow hashing if the underlying object is known to + * be immutable (e.g. PyString_Check() is true). Another idea would + * be to call tp_hash on the underlying object and see if it raises + * an error. 
*/ + if ( !self->b_readonly ) + { + PyErr_SetString(PyExc_TypeError, + "writable buffers are not hashable"); + return -1; + } - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - p = (unsigned char *) ptr; - len = size; - x = *p << 7; - while (--len >= 0) - x = (1000003*x) ^ *p++; - x ^= size; - if (x == -1) - x = -2; - self->b_hash = x; - return x; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return -1; + p = (unsigned char *) ptr; + len = size; + x = *p << 7; + while (--len >= 0) + x = (1000003*x) ^ *p++; + x ^= size; + if (x == -1) + x = -2; + self->b_hash = x; + return x; } static PyObject * buffer_str(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - return PyString_FromStringAndSize((const char *)ptr, size); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + return PyString_FromStringAndSize((const char *)ptr, size); } /* Sequence methods */ @@ -348,374 +350,372 @@ static Py_ssize_t buffer_length(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - return size; + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return -1; + return size; } static PyObject * buffer_concat(PyBufferObject *self, PyObject *other) { - PyBufferProcs *pb = other->ob_type->tp_as_buffer; - void *ptr1, *ptr2; - char *p; - PyObject *ob; - Py_ssize_t size, count; + PyBufferProcs *pb = other->ob_type->tp_as_buffer; + void *ptr1, *ptr2; + char *p; + PyObject *ob; + Py_ssize_t size, count; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return NULL; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? 
*/ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return NULL; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? */ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return NULL; + } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return NULL; - - /* optimize special case */ - if ( size == 0 ) - { - Py_INCREF(other); - return other; - } + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return NULL; - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return NULL; + /* optimize special case */ + if ( size == 0 ) + { + Py_INCREF(other); + return other; + } - assert(count <= PY_SIZE_MAX - size); + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return NULL; - ob = PyString_FromStringAndSize(NULL, size + count); - if ( ob == NULL ) - return NULL; - p = PyString_AS_STRING(ob); - memcpy(p, ptr1, size); - memcpy(p + size, ptr2, count); + assert(count <= PY_SIZE_MAX - size); - /* there is an extra byte in the string object, so this is safe */ - p[size + count] = '\0'; + ob = PyString_FromStringAndSize(NULL, size + count); + if ( ob == NULL ) + return NULL; + p = PyString_AS_STRING(ob); + memcpy(p, ptr1, size); + memcpy(p + size, ptr2, count); - return ob; + /* there is an extra byte in the string object, so this is safe */ + p[size + count] = '\0'; + + return ob; } static PyObject * buffer_repeat(PyBufferObject *self, Py_ssize_t count) { - PyObject *ob; - register char *p; - void *ptr; - Py_ssize_t size; + PyObject *ob; + register char *p; + void *ptr; + Py_ssize_t size; - if ( count < 0 ) - count = 0; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if (count > PY_SSIZE_T_MAX / size) { - PyErr_SetString(PyExc_MemoryError, "result too large"); - return NULL; - } - ob = 
PyString_FromStringAndSize(NULL, size * count); - if ( ob == NULL ) - return NULL; + if ( count < 0 ) + count = 0; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if (count > PY_SSIZE_T_MAX / size) { + PyErr_SetString(PyExc_MemoryError, "result too large"); + return NULL; + } + ob = PyString_FromStringAndSize(NULL, size * count); + if ( ob == NULL ) + return NULL; - p = PyString_AS_STRING(ob); - while ( count-- ) - { - memcpy(p, ptr, size); - p += size; - } + p = PyString_AS_STRING(ob); + while ( count-- ) + { + memcpy(p, ptr, size); + p += size; + } - /* there is an extra byte in the string object, so this is safe */ - *p = '\0'; + /* there is an extra byte in the string object, so this is safe */ + *p = '\0'; - return ob; + return ob; } static PyObject * buffer_item(PyBufferObject *self, Py_ssize_t idx) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if ( idx < 0 || idx >= size ) { - PyErr_SetString(PyExc_IndexError, "buffer index out of range"); - return NULL; - } - return PyString_FromStringAndSize((char *)ptr + idx, 1); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if ( idx < 0 || idx >= size ) { + PyErr_SetString(PyExc_IndexError, "buffer index out of range"); + return NULL; + } + return PyString_FromStringAndSize((char *)ptr + idx, 1); } static PyObject * buffer_slice(PyBufferObject *self, Py_ssize_t left, Py_ssize_t right) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if ( left < 0 ) - left = 0; - if ( right < 0 ) - right = 0; - if ( right > size ) - right = size; - if ( right < left ) - right = left; - return PyString_FromStringAndSize((char *)ptr + left, - right - left); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if ( left < 0 ) + left = 0; + if ( right < 0 ) + right = 0; + if ( right > size ) + right = size; + if ( right < left ) + right 
= left; + return PyString_FromStringAndSize((char *)ptr + left, + right - left); } static PyObject * buffer_subscript(PyBufferObject *self, PyObject *item) { - void *p; - Py_ssize_t size; - - if (!get_buf(self, &p, &size, ANY_BUFFER)) - return NULL; - + void *p; + Py_ssize_t size; + + if (!get_buf(self, &p, &size, ANY_BUFFER)) + return NULL; if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i == -1 && PyErr_Occurred()) - return NULL; - if (i < 0) - i += size; - return buffer_item(self, i); - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelength, cur, i; + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); + if (i == -1 && PyErr_Occurred()) + return NULL; + if (i < 0) + i += size; + return buffer_item(self, i); + } + else if (PySlice_Check(item)) { + Py_ssize_t start, stop, step, slicelength, cur, i; - if (PySlice_GetIndicesEx((PySliceObject*)item, size, - &start, &stop, &step, &slicelength) < 0) { - return NULL; - } + if (PySlice_GetIndicesEx((PySliceObject*)item, size, + &start, &stop, &step, &slicelength) < 0) { + return NULL; + } - if (slicelength <= 0) - return PyString_FromStringAndSize("", 0); - else if (step == 1) - return PyString_FromStringAndSize((char *)p + start, - stop - start); - else { - PyObject *result; - char *source_buf = (char *)p; - char *result_buf = (char *)PyMem_Malloc(slicelength); + if (slicelength <= 0) + return PyString_FromStringAndSize("", 0); + else if (step == 1) + return PyString_FromStringAndSize((char *)p + start, + stop - start); + else { + PyObject *result; + char *source_buf = (char *)p; + char *result_buf = (char *)PyMem_Malloc(slicelength); - if (result_buf == NULL) - return PyErr_NoMemory(); + if (result_buf == NULL) + return PyErr_NoMemory(); - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - result_buf[i] = source_buf[cur]; - } + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + result_buf[i] = source_buf[cur]; + 
} - result = PyString_FromStringAndSize(result_buf, - slicelength); - PyMem_Free(result_buf); - return result; - } - } - else { - PyErr_SetString(PyExc_TypeError, - "sequence index must be integer"); - return NULL; - } + result = PyString_FromStringAndSize(result_buf, + slicelength); + PyMem_Free(result_buf); + return result; + } + } + else { + PyErr_SetString(PyExc_TypeError, + "sequence index must be integer"); + return NULL; + } } static int buffer_ass_item(PyBufferObject *self, Py_ssize_t idx, PyObject *other) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t size; - Py_ssize_t count; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t size; + Py_ssize_t count; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return -1; + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return -1; - if (idx < 0 || idx >= size) { - PyErr_SetString(PyExc_IndexError, - "buffer assignment index out of range"); - return -1; - } + if (idx < 0 || idx >= size) { + PyErr_SetString(PyExc_IndexError, + "buffer assignment index out of range"); + return -1; + } - pb = other ? other->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } + pb = other ? other->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? 
*/ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return -1; - if ( count != 1 ) { - PyErr_SetString(PyExc_TypeError, - "right operand must be a single byte"); - return -1; - } + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return -1; + if ( count != 1 ) { + PyErr_SetString(PyExc_TypeError, + "right operand must be a single byte"); + return -1; + } - ((char *)ptr1)[idx] = *(char *)ptr2; - return 0; + ((char *)ptr1)[idx] = *(char *)ptr2; + return 0; } static int buffer_ass_slice(PyBufferObject *self, Py_ssize_t left, Py_ssize_t right, PyObject *other) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t size; - Py_ssize_t slice_len; - Py_ssize_t count; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t size; + Py_ssize_t slice_len; + Py_ssize_t count; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - pb = other ? other->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return -1; - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return -1; + pb = other ? other->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? 
*/ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return -1; + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return -1; - if ( left < 0 ) - left = 0; - else if ( left > size ) - left = size; - if ( right < left ) - right = left; - else if ( right > size ) - right = size; - slice_len = right - left; + if ( left < 0 ) + left = 0; + else if ( left > size ) + left = size; + if ( right < left ) + right = left; + else if ( right > size ) + right = size; + slice_len = right - left; - if ( count != slice_len ) { - PyErr_SetString( - PyExc_TypeError, - "right operand length must match slice length"); - return -1; - } + if ( count != slice_len ) { + PyErr_SetString( + PyExc_TypeError, + "right operand length must match slice length"); + return -1; + } - if ( slice_len ) - memcpy((char *)ptr1 + left, ptr2, slice_len); + if ( slice_len ) + memcpy((char *)ptr1 + left, ptr2, slice_len); - return 0; + return 0; } static int buffer_ass_subscript(PyBufferObject *self, PyObject *item, PyObject *value) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t selfsize; - Py_ssize_t othersize; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t selfsize; + Py_ssize_t othersize; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - pb = value ? value->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(value, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } - if (!get_buf(self, &ptr1, &selfsize, ANY_BUFFER)) - return -1; - + pb = value ? 
value->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(value, NULL) != 1 ) + { + /* ### use a different exception type/message? */ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } + if (!get_buf(self, &ptr1, &selfsize, ANY_BUFFER)) + return -1; if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i == -1 && PyErr_Occurred()) - return -1; - if (i < 0) - i += selfsize; - return buffer_ass_item(self, i, value); - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelength; - - if (PySlice_GetIndicesEx((PySliceObject *)item, selfsize, - &start, &stop, &step, &slicelength) < 0) - return -1; + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); + if (i == -1 && PyErr_Occurred()) + return -1; + if (i < 0) + i += selfsize; + return buffer_ass_item(self, i, value); + } + else if (PySlice_Check(item)) { + Py_ssize_t start, stop, step, slicelength; - if ((othersize = (*pb->bf_getreadbuffer)(value, 0, &ptr2)) < 0) - return -1; + if (PySlice_GetIndicesEx((PySliceObject *)item, selfsize, + &start, &stop, &step, &slicelength) < 0) + return -1; - if (othersize != slicelength) { - PyErr_SetString( - PyExc_TypeError, - "right operand length must match slice length"); - return -1; - } + if ((othersize = (*pb->bf_getreadbuffer)(value, 0, &ptr2)) < 0) + return -1; - if (slicelength == 0) - return 0; - else if (step == 1) { - memcpy((char *)ptr1 + start, ptr2, slicelength); - return 0; - } - else { - Py_ssize_t cur, i; - - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - ((char *)ptr1)[cur] = ((char *)ptr2)[i]; - } + if (othersize != slicelength) { + PyErr_SetString( + PyExc_TypeError, + "right operand length must match slice length"); + return -1; + } - return 0; - } - } else { - PyErr_SetString(PyExc_TypeError, - 
"buffer indices must be integers"); - return -1; - } + if (slicelength == 0) + return 0; + else if (step == 1) { + memcpy((char *)ptr1 + start, ptr2, slicelength); + return 0; + } + else { + Py_ssize_t cur, i; + + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + ((char *)ptr1)[cur] = ((char *)ptr2)[i]; + } + + return 0; + } + } else { + PyErr_SetString(PyExc_TypeError, + "buffer indices must be integers"); + return -1; + } } /* Buffer methods */ @@ -723,64 +723,64 @@ static Py_ssize_t buffer_getreadbuf(PyBufferObject *self, Py_ssize_t idx, void **pp) { - Py_ssize_t size; - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, pp, &size, READ_BUFFER)) - return -1; - return size; + Py_ssize_t size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, pp, &size, READ_BUFFER)) + return -1; + return size; } static Py_ssize_t buffer_getwritebuf(PyBufferObject *self, Py_ssize_t idx, void **pp) { - Py_ssize_t size; + Py_ssize_t size; - if ( self->b_readonly ) - { - PyErr_SetString(PyExc_TypeError, "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) + { + PyErr_SetString(PyExc_TypeError, "buffer is read-only"); + return -1; + } - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, pp, &size, WRITE_BUFFER)) - return -1; - return size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, pp, &size, WRITE_BUFFER)) + return -1; + return size; } static Py_ssize_t buffer_getsegcount(PyBufferObject *self, Py_ssize_t *lenp) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - if (lenp) - *lenp = size; - return 1; + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, 
&size, ANY_BUFFER)) + return -1; + if (lenp) + *lenp = size; + return 1; } static Py_ssize_t buffer_getcharbuf(PyBufferObject *self, Py_ssize_t idx, const char **pp) { - void *ptr; - Py_ssize_t size; - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, &ptr, &size, CHAR_BUFFER)) - return -1; - *pp = (const char *)ptr; - return size; + void *ptr; + Py_ssize_t size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, &ptr, &size, CHAR_BUFFER)) + return -1; + *pp = (const char *)ptr; + return size; } void init_bufferobject(void) @@ -789,67 +789,65 @@ } static PySequenceMethods buffer_as_sequence = { - (lenfunc)buffer_length, /*sq_length*/ - (binaryfunc)buffer_concat, /*sq_concat*/ - (ssizeargfunc)buffer_repeat, /*sq_repeat*/ - (ssizeargfunc)buffer_item, /*sq_item*/ - (ssizessizeargfunc)buffer_slice, /*sq_slice*/ - (ssizeobjargproc)buffer_ass_item, /*sq_ass_item*/ - (ssizessizeobjargproc)buffer_ass_slice, /*sq_ass_slice*/ + (lenfunc)buffer_length, /*sq_length*/ + (binaryfunc)buffer_concat, /*sq_concat*/ + (ssizeargfunc)buffer_repeat, /*sq_repeat*/ + (ssizeargfunc)buffer_item, /*sq_item*/ + (ssizessizeargfunc)buffer_slice, /*sq_slice*/ + (ssizeobjargproc)buffer_ass_item, /*sq_ass_item*/ + (ssizessizeobjargproc)buffer_ass_slice, /*sq_ass_slice*/ }; static PyMappingMethods buffer_as_mapping = { - (lenfunc)buffer_length, - (binaryfunc)buffer_subscript, - (objobjargproc)buffer_ass_subscript, + (lenfunc)buffer_length, + (binaryfunc)buffer_subscript, + (objobjargproc)buffer_ass_subscript, }; static PyBufferProcs buffer_as_buffer = { - (readbufferproc)buffer_getreadbuf, - (writebufferproc)buffer_getwritebuf, - (segcountproc)buffer_getsegcount, - (charbufferproc)buffer_getcharbuf, + (readbufferproc)buffer_getreadbuf, + (writebufferproc)buffer_getwritebuf, + (segcountproc)buffer_getsegcount, + 
(charbufferproc)buffer_getcharbuf, }; PyTypeObject PyBuffer_Type = { - PyObject_HEAD_INIT(NULL) + PyVarObject_HEAD_INIT(NULL, 0) + "buffer", + sizeof(PyBufferObject), 0, - "buffer", - sizeof(PyBufferObject), - 0, - (destructor)buffer_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - (cmpfunc)buffer_compare, /* tp_compare */ - (reprfunc)buffer_repr, /* tp_repr */ - 0, /* tp_as_number */ - &buffer_as_sequence, /* tp_as_sequence */ - &buffer_as_mapping, /* tp_as_mapping */ - (hashfunc)buffer_hash, /* tp_hash */ - 0, /* tp_call */ - (reprfunc)buffer_str, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - &buffer_as_buffer, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GETCHARBUFFER, /* tp_flags */ - buffer_doc, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - buffer_new, /* tp_new */ + (destructor)buffer_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + (cmpfunc)buffer_compare, /* tp_compare */ + (reprfunc)buffer_repr, /* tp_repr */ + 0, /* tp_as_number */ + &buffer_as_sequence, /* tp_as_sequence */ + &buffer_as_mapping, /* tp_as_mapping */ + (hashfunc)buffer_hash, /* tp_hash */ + 0, /* tp_call */ + (reprfunc)buffer_str, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + &buffer_as_buffer, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GETCHARBUFFER, /* tp_flags */ + buffer_doc, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* 
tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + buffer_new, /* tp_new */ }; - diff --git a/pypy/module/cpyext/src/cobject.c b/pypy/module/cpyext/src/cobject.c --- a/pypy/module/cpyext/src/cobject.c +++ b/pypy/module/cpyext/src/cobject.c @@ -50,6 +50,10 @@ PyCObject_AsVoidPtr(PyObject *self) { if (self) { + if (PyCapsule_CheckExact(self)) { + const char *name = PyCapsule_GetName(self); + return (void *)PyCapsule_GetPointer(self, name); + } if (self->ob_type == &PyCObject_Type) return ((PyCObject *)self)->cobject; PyErr_SetString(PyExc_TypeError, diff --git a/pypy/module/cpyext/src/getargs.c b/pypy/module/cpyext/src/getargs.c --- a/pypy/module/cpyext/src/getargs.c +++ b/pypy/module/cpyext/src/getargs.c @@ -7,349 +7,348 @@ #ifdef __cplusplus -extern "C" { +extern "C" { #endif - int PyArg_Parse(PyObject *, const char *, ...); int PyArg_ParseTuple(PyObject *, const char *, ...); int PyArg_VaParse(PyObject *, const char *, va_list); int PyArg_ParseTupleAndKeywords(PyObject *, PyObject *, - const char *, char **, ...); + const char *, char **, ...); int PyArg_VaParseTupleAndKeywords(PyObject *, PyObject *, - const char *, char **, va_list); + const char *, char **, va_list); #define FLAG_COMPAT 1 #define FLAG_SIZE_T 2 -typedef int (*destr_t)(PyObject *, void *); - - -/* Keep track of "objects" that have been allocated or initialized and - which will need to be deallocated or cleaned up somehow if overall - parsing fails. 
-*/ -typedef struct { - void *item; - destr_t destructor; -} freelistentry_t; - -typedef struct { - int first_available; - freelistentry_t *entries; -} freelist_t; - /* Forward */ static int vgetargs1(PyObject *, const char *, va_list *, int); static void seterror(int, const char *, int *, const char *, const char *); -static char *convertitem(PyObject *, const char **, va_list *, int, int *, - char *, size_t, freelist_t *); +static char *convertitem(PyObject *, const char **, va_list *, int, int *, + char *, size_t, PyObject **); static char *converttuple(PyObject *, const char **, va_list *, int, - int *, char *, size_t, int, freelist_t *); + int *, char *, size_t, int, PyObject **); static char *convertsimple(PyObject *, const char **, va_list *, int, char *, - size_t, freelist_t *); + size_t, PyObject **); static Py_ssize_t convertbuffer(PyObject *, void **p, char **); static int getbuffer(PyObject *, Py_buffer *, char**); static int vgetargskeywords(PyObject *, PyObject *, - const char *, char **, va_list *, int); + const char *, char **, va_list *, int); static char *skipitem(const char **, va_list *, int); int PyArg_Parse(PyObject *args, const char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_COMPAT); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_COMPAT); + va_end(va); + return retval; } int _PyArg_Parse_SizeT(PyObject *args, char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_ParseTuple(PyObject *args, const char *format, ...) 
{ - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, 0); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, 0); + va_end(va); + return retval; } int _PyArg_ParseTuple_SizeT(PyObject *args, char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_SIZE_T); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_VaParse(PyObject *args, const char *format, va_list va) { - va_list lva; + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - return vgetargs1(args, format, &lva, 0); + return vgetargs1(args, format, &lva, 0); } int _PyArg_VaParse_SizeT(PyObject *args, char *format, va_list va) { - va_list lva; + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - return vgetargs1(args, format, &lva, FLAG_SIZE_T); + return vgetargs1(args, format, &lva, FLAG_SIZE_T); } /* Handle cleanup of allocated memory in case of exception */ +#define GETARGS_CAPSULE_NAME_CLEANUP_PTR "getargs.cleanup_ptr" +#define GETARGS_CAPSULE_NAME_CLEANUP_BUFFER "getargs.cleanup_buffer" + +static void +cleanup_ptr(PyObject *self) +{ + void *ptr = PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_PTR); + if (ptr) { + PyMem_FREE(ptr); + } +} + +static void +cleanup_buffer(PyObject *self) +{ + Py_buffer *ptr = (Py_buffer *)PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_BUFFER); + if (ptr) { + PyBuffer_Release(ptr); + } +} + static int -cleanup_ptr(PyObject 
*self, void *ptr) +addcleanup(void *ptr, PyObject **freelist, PyCapsule_Destructor destr) { - if (ptr) { - PyMem_FREE(ptr); + PyObject *cobj; + const char *name; + + if (!*freelist) { + *freelist = PyList_New(0); + if (!*freelist) { + destr(ptr); + return -1; + } } + + if (destr == cleanup_ptr) { + name = GETARGS_CAPSULE_NAME_CLEANUP_PTR; + } else if (destr == cleanup_buffer) { + name = GETARGS_CAPSULE_NAME_CLEANUP_BUFFER; + } else { + return -1; + } + cobj = PyCapsule_New(ptr, name, destr); + if (!cobj) { + destr(ptr); + return -1; + } + if (PyList_Append(*freelist, cobj)) { + Py_DECREF(cobj); + return -1; + } + Py_DECREF(cobj); return 0; } static int -cleanup_buffer(PyObject *self, void *ptr) +cleanreturn(int retval, PyObject *freelist) { - Py_buffer *buf = (Py_buffer *)ptr; - if (buf) { - PyBuffer_Release(buf); + if (freelist && retval != 0) { + /* We were successful, reset the destructors so that they + don't get called. */ + Py_ssize_t len = PyList_GET_SIZE(freelist), i; + for (i = 0; i < len; i++) + PyCapsule_SetDestructor(PyList_GET_ITEM(freelist, i), NULL); } - return 0; + Py_XDECREF(freelist); + return retval; } -static int -addcleanup(void *ptr, freelist_t *freelist, destr_t destructor) -{ - int index; - - index = freelist->first_available; - freelist->first_available += 1; - - freelist->entries[index].item = ptr; - freelist->entries[index].destructor = destructor; - - return 0; -} - -static int -cleanreturn(int retval, freelist_t *freelist) -{ - int index; - - if (retval == 0) { - /* A failure occurred, therefore execute all of the cleanup - functions. 
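The new addcleanup/cleanreturn pair replaces the fixed freelist array with a list of capsules: each allocation is wrapped in a PyCapsule whose destructor frees it, and on success cleanreturn disarms every destructor with PyCapsule_SetDestructor(..., NULL) so that dropping the list does not free memory that was handed to the caller; on failure the destructors fire when the list is decref'd. A minimal Python sketch of that arm/disarm pattern (all names here are illustrative, not part of the patch):

```python
freed = []

class Capsule:
    """Stand-in for a PyCapsule: a pointer plus an optional destructor."""
    def __init__(self, ptr, destructor):
        self.ptr = ptr
        self.destructor = destructor
    def release(self):  # what DECREFing the real capsule triggers
        if self.destructor is not None:
            self.destructor(self.ptr)

def addcleanup(ptr, freelist, destructor):
    freelist.append(Capsule(ptr, destructor))

def cleanreturn(retval, freelist):
    if retval:
        # Success: disarm, like PyCapsule_SetDestructor(item, NULL)
        for cap in freelist:
            cap.destructor = None
    # Py_XDECREF(freelist): releasing the list runs any still-armed destructors.
    for cap in freelist:
        cap.release()
    return retval

# Failure path (retval == 0): both registered "allocations" get freed.
fl = []
addcleanup("buf1", fl, freed.append)
addcleanup("buf2", fl, freed.append)
assert cleanreturn(0, fl) == 0 and freed == ["buf1", "buf2"]

# Success path: destructors were disarmed, nothing is freed.
freed.clear(); fl = []
addcleanup("buf3", fl, freed.append)
assert cleanreturn(1, fl) == 1 and freed == []
```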
- */ - for (index = 0; index < freelist->first_available; ++index) { - freelist->entries[index].destructor(NULL, - freelist->entries[index].item); - } - } - PyMem_Free(freelist->entries); - return retval; -} static int vgetargs1(PyObject *args, const char *format, va_list *p_va, int flags) { - char msgbuf[256]; - int levels[32]; - const char *fname = NULL; - const char *message = NULL; - int min = -1; - int max = 0; - int level = 0; - int endfmt = 0; - const char *formatsave = format; - Py_ssize_t i, len; - char *msg; - freelist_t freelist = {0, NULL}; - int compat = flags & FLAG_COMPAT; + char msgbuf[256]; + int levels[32]; + const char *fname = NULL; + const char *message = NULL; + int min = -1; + int max = 0; + int level = 0; + int endfmt = 0; + const char *formatsave = format; + Py_ssize_t i, len; + char *msg; + PyObject *freelist = NULL; + int compat = flags & FLAG_COMPAT; - assert(compat || (args != (PyObject*)NULL)); - flags = flags & ~FLAG_COMPAT; + assert(compat || (args != (PyObject*)NULL)); + flags = flags & ~FLAG_COMPAT; - while (endfmt == 0) { - int c = *format++; - switch (c) { - case '(': - if (level == 0) - max++; - level++; - if (level >= 30) - Py_FatalError("too many tuple nesting levels " - "in argument format string"); - break; - case ')': - if (level == 0) - Py_FatalError("excess ')' in getargs format"); - else - level--; - break; - case '\0': - endfmt = 1; - break; - case ':': - fname = format; - endfmt = 1; - break; - case ';': - message = format; - endfmt = 1; - break; - default: - if (level == 0) { - if (c == 'O') - max++; - else if (isalpha(Py_CHARMASK(c))) { - if (c != 'e') /* skip encoded */ - max++; - } else if (c == '|') - min = max; - } - break; - } - } - - if (level != 0) - Py_FatalError(/* '(' */ "missing ')' in getargs format"); - - if (min < 0) - min = max; - - format = formatsave; - - freelist.entries = PyMem_New(freelistentry_t, max); + while (endfmt == 0) { + int c = *format++; + switch (c) { + case '(': + if (level == 0) + 
max++; + level++; + if (level >= 30) + Py_FatalError("too many tuple nesting levels " + "in argument format string"); + break; + case ')': + if (level == 0) + Py_FatalError("excess ')' in getargs format"); + else + level--; + break; + case '\0': + endfmt = 1; + break; + case ':': + fname = format; + endfmt = 1; + break; + case ';': + message = format; + endfmt = 1; + break; + default: + if (level == 0) { + if (c == 'O') + max++; + else if (isalpha(Py_CHARMASK(c))) { + if (c != 'e') /* skip encoded */ + max++; + } else if (c == '|') + min = max; + } + break; + } + } - if (compat) { - if (max == 0) { - if (args == NULL) - return cleanreturn(1, &freelist); - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.200s%s takes no arguments", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()"); - PyErr_SetString(PyExc_TypeError, msgbuf); - return cleanreturn(0, &freelist); - } - else if (min == 1 && max == 1) { - if (args == NULL) { - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.200s%s takes at least one argument", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()"); - PyErr_SetString(PyExc_TypeError, msgbuf); - return cleanreturn(0, &freelist); - } - msg = convertitem(args, &format, p_va, flags, levels, - msgbuf, sizeof(msgbuf), &freelist); - if (msg == NULL) - return cleanreturn(1, &freelist); - seterror(levels[0], msg, levels+1, fname, message); - return cleanreturn(0, &freelist); - } - else { - PyErr_SetString(PyExc_SystemError, - "old style getargs format uses new features"); - return cleanreturn(0, &freelist); - } - } - - if (!PyTuple_Check(args)) { - PyErr_SetString(PyExc_SystemError, - "new style getargs format but argument is not a tuple"); - return cleanreturn(0, &freelist); - } - - len = PyTuple_GET_SIZE(args); - - if (len < min || max < len) { - if (message == NULL) { - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.150s%s takes %s %d argument%s " - "(%ld given)", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()", - min==max ? 
"exactly" - : len < min ? "at least" : "at most", - len < min ? min : max, - (len < min ? min : max) == 1 ? "" : "s", - Py_SAFE_DOWNCAST(len, Py_ssize_t, long)); - message = msgbuf; - } - PyErr_SetString(PyExc_TypeError, message); - return cleanreturn(0, &freelist); - } - - for (i = 0; i < len; i++) { - if (*format == '|') - format++; - msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va, - flags, levels, msgbuf, - sizeof(msgbuf), &freelist); - if (msg) { - seterror(i+1, msg, levels, fname, message); - return cleanreturn(0, &freelist); - } - } + if (level != 0) + Py_FatalError(/* '(' */ "missing ')' in getargs format"); - if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) && - *format != '(' && - *format != '|' && *format != ':' && *format != ';') { - PyErr_Format(PyExc_SystemError, - "bad format string: %.200s", formatsave); - return cleanreturn(0, &freelist); - } - - return cleanreturn(1, &freelist); + if (min < 0) + min = max; + + format = formatsave; + + if (compat) { + if (max == 0) { + if (args == NULL) + return 1; + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.200s%s takes no arguments", + fname==NULL ? "function" : fname, + fname==NULL ? "" : "()"); + PyErr_SetString(PyExc_TypeError, msgbuf); + return 0; + } + else if (min == 1 && max == 1) { + if (args == NULL) { + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.200s%s takes at least one argument", + fname==NULL ? "function" : fname, + fname==NULL ? 
"" : "()"); + PyErr_SetString(PyExc_TypeError, msgbuf); + return 0; + } + msg = convertitem(args, &format, p_va, flags, levels, + msgbuf, sizeof(msgbuf), &freelist); + if (msg == NULL) + return cleanreturn(1, freelist); + seterror(levels[0], msg, levels+1, fname, message); + return cleanreturn(0, freelist); + } + else { + PyErr_SetString(PyExc_SystemError, + "old style getargs format uses new features"); + return 0; + } + } + + if (!PyTuple_Check(args)) { + PyErr_SetString(PyExc_SystemError, + "new style getargs format but argument is not a tuple"); + return 0; + } + + len = PyTuple_GET_SIZE(args); + + if (len < min || max < len) { + if (message == NULL) { + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.150s%s takes %s %d argument%s " + "(%ld given)", + fname==NULL ? "function" : fname, + fname==NULL ? "" : "()", + min==max ? "exactly" + : len < min ? "at least" : "at most", + len < min ? min : max, + (len < min ? min : max) == 1 ? "" : "s", + Py_SAFE_DOWNCAST(len, Py_ssize_t, long)); + message = msgbuf; + } + PyErr_SetString(PyExc_TypeError, message); + return 0; + } + + for (i = 0; i < len; i++) { + if (*format == '|') + format++; + msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va, + flags, levels, msgbuf, + sizeof(msgbuf), &freelist); + if (msg) { + seterror(i+1, msg, levels, fname, msg); + return cleanreturn(0, freelist); + } + } + + if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) && + *format != '(' && + *format != '|' && *format != ':' && *format != ';') { + PyErr_Format(PyExc_SystemError, + "bad format string: %.200s", formatsave); + return cleanreturn(0, freelist); + } + + return cleanreturn(1, freelist); } @@ -358,37 +357,37 @@ seterror(int iarg, const char *msg, int *levels, const char *fname, const char *message) { - char buf[512]; - int i; - char *p = buf; + char buf[512]; + int i; + char *p = buf; - if (PyErr_Occurred()) - return; - else if (message == NULL) { - if (fname != NULL) { - PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname); - p 
+= strlen(p); - } - if (iarg != 0) { - PyOS_snprintf(p, sizeof(buf) - (p - buf), - "argument %d", iarg); - i = 0; - p += strlen(p); - while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) { - PyOS_snprintf(p, sizeof(buf) - (p - buf), - ", item %d", levels[i]-1); - p += strlen(p); - i++; - } - } - else { - PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument"); - p += strlen(p); - } - PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg); - message = buf; - } - PyErr_SetString(PyExc_TypeError, message); + if (PyErr_Occurred()) + return; + else if (message == NULL) { + if (fname != NULL) { + PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname); + p += strlen(p); + } + if (iarg != 0) { + PyOS_snprintf(p, sizeof(buf) - (p - buf), + "argument %d", iarg); + i = 0; + p += strlen(p); + while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) { + PyOS_snprintf(p, sizeof(buf) - (p - buf), + ", item %d", levels[i]-1); + p += strlen(p); + i++; + } + } + else { + PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument"); + p += strlen(p); + } + PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg); + message = buf; + } + PyErr_SetString(PyExc_TypeError, message); } @@ -404,85 +403,84 @@ *p_va is undefined, *levels is a 0-terminated list of item numbers, *msgbuf contains an error message, whose format is: - "must be , not ", where: - is the name of the expected type, and - is the name of the actual type, + "must be , not ", where: + is the name of the expected type, and + is the name of the actual type, and msgbuf is returned. 
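seterror assembles the TypeError text incrementally into a fixed buffer: the function name, then "argument N", then one ", item M" per nested-tuple level from the 0-terminated levels array, and finally the converter's own message. A Python sketch of the same assembly (illustrative only; the real code writes into a 512-byte buffer with PyOS_snprintf):

```python
def seterror(iarg, msg, levels, fname):
    """Build the message the C seterror() would pass to PyErr_SetString."""
    parts = []
    if fname is not None:
        parts.append("%s() " % fname)
    if iarg != 0:
        parts.append("argument %d" % iarg)
        # levels is 0-terminated: one ", item N" per nested tuple level
        for level in levels:
            if level <= 0:
                break
            parts.append(", item %d" % (level - 1))
    else:
        parts.append("argument")
    parts.append(" %s" % msg)
    return "".join(parts)

assert seterror(1, "must be int, not str", [2, 0], "f") == \
    "f() argument 1, item 1 must be int, not str"
```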
*/ static char * converttuple(PyObject *arg, const char **p_format, va_list *p_va, int flags, - int *levels, char *msgbuf, size_t bufsize, int toplevel, - freelist_t *freelist) + int *levels, char *msgbuf, size_t bufsize, int toplevel, + PyObject **freelist) { - int level = 0; - int n = 0; - const char *format = *p_format; - int i; - - for (;;) { - int c = *format++; - if (c == '(') { - if (level == 0) - n++; - level++; - } - else if (c == ')') { - if (level == 0) - break; - level--; - } - else if (c == ':' || c == ';' || c == '\0') - break; - else if (level == 0 && isalpha(Py_CHARMASK(c))) - n++; - } - - if (!PySequence_Check(arg) || PyString_Check(arg)) { - levels[0] = 0; - PyOS_snprintf(msgbuf, bufsize, - toplevel ? "expected %d arguments, not %.50s" : - "must be %d-item sequence, not %.50s", - n, - arg == Py_None ? "None" : arg->ob_type->tp_name); - return msgbuf; - } - - if ((i = PySequence_Size(arg)) != n) { - levels[0] = 0; - PyOS_snprintf(msgbuf, bufsize, - toplevel ? "expected %d arguments, not %d" : - "must be sequence of length %d, not %d", - n, i); - return msgbuf; - } + int level = 0; + int n = 0; + const char *format = *p_format; + int i; - format = *p_format; - for (i = 0; i < n; i++) { - char *msg; - PyObject *item; + for (;;) { + int c = *format++; + if (c == '(') { + if (level == 0) + n++; + level++; + } + else if (c == ')') { + if (level == 0) + break; + level--; + } + else if (c == ':' || c == ';' || c == '\0') + break; + else if (level == 0 && isalpha(Py_CHARMASK(c))) + n++; + } + + if (!PySequence_Check(arg) || PyString_Check(arg)) { + levels[0] = 0; + PyOS_snprintf(msgbuf, bufsize, + toplevel ? "expected %d arguments, not %.50s" : + "must be %d-item sequence, not %.50s", + n, + arg == Py_None ? "None" : arg->ob_type->tp_name); + return msgbuf; + } + + if ((i = PySequence_Size(arg)) != n) { + levels[0] = 0; + PyOS_snprintf(msgbuf, bufsize, + toplevel ? 
"expected %d arguments, not %d" : + "must be sequence of length %d, not %d", + n, i); + return msgbuf; + } + + format = *p_format; + for (i = 0; i < n; i++) { + char *msg; + PyObject *item; item = PySequence_GetItem(arg, i); - if (item == NULL) { - PyErr_Clear(); - levels[0] = i+1; - levels[1] = 0; - strncpy(msgbuf, "is not retrievable", - bufsize); - return msgbuf; - } - PyPy_Borrow(arg, item); - msg = convertitem(item, &format, p_va, flags, levels+1, - msgbuf, bufsize, freelist); + if (item == NULL) { + PyErr_Clear(); + levels[0] = i+1; + levels[1] = 0; + strncpy(msgbuf, "is not retrievable", bufsize); + return msgbuf; + } + PyPy_Borrow(arg, item); + msg = convertitem(item, &format, p_va, flags, levels+1, + msgbuf, bufsize, freelist); /* PySequence_GetItem calls tp->sq_item, which INCREFs */ Py_XDECREF(item); - if (msg != NULL) { - levels[0] = i+1; - return msg; - } - } + if (msg != NULL) { + levels[0] = i+1; + return msg; + } + } - *p_format = format; - return NULL; + *p_format = format; + return NULL; } @@ -490,45 +488,45 @@ static char * convertitem(PyObject *arg, const char **p_format, va_list *p_va, int flags, - int *levels, char *msgbuf, size_t bufsize, freelist_t *freelist) + int *levels, char *msgbuf, size_t bufsize, PyObject **freelist) { - char *msg; - const char *format = *p_format; - - if (*format == '(' /* ')' */) { - format++; - msg = converttuple(arg, &format, p_va, flags, levels, msgbuf, - bufsize, 0, freelist); - if (msg == NULL) - format++; - } - else { - msg = convertsimple(arg, &format, p_va, flags, - msgbuf, bufsize, freelist); - if (msg != NULL) - levels[0] = 0; - } - if (msg == NULL) - *p_format = format; - return msg; + char *msg; + const char *format = *p_format; + + if (*format == '(' /* ')' */) { + format++; + msg = converttuple(arg, &format, p_va, flags, levels, msgbuf, + bufsize, 0, freelist); + if (msg == NULL) + format++; + } + else { + msg = convertsimple(arg, &format, p_va, flags, + msgbuf, bufsize, freelist); + if (msg != NULL) + 
levels[0] = 0; + } + if (msg == NULL) + *p_format = format; + return msg; } #define UNICODE_DEFAULT_ENCODING(arg) \ - _PyUnicode_AsDefaultEncodedString(arg, NULL) + _PyUnicode_AsDefaultEncodedString(arg, NULL) /* Format an error message generated by convertsimple(). */ static char * converterr(const char *expected, PyObject *arg, char *msgbuf, size_t bufsize) { - assert(expected != NULL); - assert(arg != NULL); - PyOS_snprintf(msgbuf, bufsize, - "must be %.50s, not %.50s", expected, - arg == Py_None ? "None" : arg->ob_type->tp_name); - return msgbuf; + assert(expected != NULL); + assert(arg != NULL); + PyOS_snprintf(msgbuf, bufsize, + "must be %.50s, not %.50s", expected, + arg == Py_None ? "None" : arg->ob_type->tp_name); + return msgbuf; } #define CONV_UNICODE "(unicode conversion error)" @@ -536,14 +534,28 @@ /* explicitly check for float arguments when integers are expected. For now * signal a warning. Returns true if an exception was raised. */ static int +float_argument_warning(PyObject *arg) +{ + if (PyFloat_Check(arg) && + PyErr_Warn(PyExc_DeprecationWarning, + "integer argument expected, got float" )) + return 1; + else + return 0; +} + +/* explicitly check for float arguments when integers are expected. Raises + TypeError and returns true for float arguments. */ +static int float_argument_error(PyObject *arg) { - if (PyFloat_Check(arg) && - PyErr_Warn(PyExc_DeprecationWarning, - "integer argument expected, got float" )) - return 1; - else - return 0; + if (PyFloat_Check(arg)) { + PyErr_SetString(PyExc_TypeError, + "integer argument expected, got float"); + return 1; + } + else + return 0; } /* Convert a non-tuple argument. 
Return NULL if conversion went OK, @@ -557,836 +569,839 @@ static char * convertsimple(PyObject *arg, const char **p_format, va_list *p_va, int flags, - char *msgbuf, size_t bufsize, freelist_t *freelist) + char *msgbuf, size_t bufsize, PyObject **freelist) { - /* For # codes */ -#define FETCH_SIZE int *q=NULL;Py_ssize_t *q2=NULL;\ - if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \ - else q=va_arg(*p_va, int*); -#define STORE_SIZE(s) if (flags & FLAG_SIZE_T) *q2=s; else *q=s; + /* For # codes */ +#define FETCH_SIZE int *q=NULL;Py_ssize_t *q2=NULL;\ + if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \ + else q=va_arg(*p_va, int*); +#define STORE_SIZE(s) \ + if (flags & FLAG_SIZE_T) \ + *q2=s; \ + else { \ + if (INT_MAX < s) { \ + PyErr_SetString(PyExc_OverflowError, \ + "size does not fit in an int"); \ + return converterr("", arg, msgbuf, bufsize); \ + } \ + *q=s; \ + } #define BUFFER_LEN ((flags & FLAG_SIZE_T) ? *q2:*q) - const char *format = *p_format; - char c = *format++; + const char *format = *p_format; + char c = *format++; #ifdef Py_USING_UNICODE - PyObject *uarg; -#endif - - switch (c) { - - case 'b': { /* unsigned byte -- very short int */ - char *p = va_arg(*p_va, char *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned byte integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival > UCHAR_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned byte integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else - *p = (unsigned char) ival; - break; - } - - case 'B': {/* byte sized bitfield - both signed and unsigned - values allowed */ - char *p = va_arg(*p_va, char *); - long ival; - if 
(float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsUnsignedLongMask(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = (unsigned char) ival; - break; - } - - case 'h': {/* signed short int */ - short *p = va_arg(*p_va, short *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival < SHRT_MIN) { - PyErr_SetString(PyExc_OverflowError, - "signed short integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival > SHRT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "signed short integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else - *p = (short) ival; - break; - } - - case 'H': { /* short int sized bitfield, both signed and - unsigned allowed */ - unsigned short *p = va_arg(*p_va, unsigned short *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsUnsignedLongMask(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = (unsigned short) ival; - break; - } - case 'i': {/* signed int */ - int *p = va_arg(*p_va, int *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival > INT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "signed integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival < INT_MIN) { - PyErr_SetString(PyExc_OverflowError, - "signed integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - 
-            else
-                *p = ival;
-            break;
-        }
-        case 'I': { /* int sized bitfield, both signed and
-                       unsigned allowed */
-            unsigned int *p = va_arg(*p_va, unsigned int *);
-            unsigned int ival;
-            if (float_argument_error(arg))
-                return converterr("integer", arg, msgbuf, bufsize);
-            ival = (unsigned int)PyInt_AsUnsignedLongMask(arg);
-            if (ival == (unsigned int)-1 && PyErr_Occurred())
-                return converterr("integer", arg, msgbuf, bufsize);
-            else
-                *p = ival;
-            break;
-        }
-        case 'n': /* Py_ssize_t */
-#if SIZEOF_SIZE_T != SIZEOF_LONG
-        {
-            Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *);
-            Py_ssize_t ival;
-            if (float_argument_error(arg))
-                return converterr("integer", arg, msgbuf, bufsize);
-            ival = PyInt_AsSsize_t(arg);
-            if (ival == -1 && PyErr_Occurred())
-                return converterr("integer", arg, msgbuf, bufsize);
-            *p = ival;
-            break;
-        }
-#endif
-        /* Fall through from 'n' to 'l' if Py_ssize_t is int */
-        case 'l': {/* long int */
-            long *p = va_arg(*p_va, long *);
-            long ival;
-            if (float_argument_error(arg))
-                return converterr("integer", arg, msgbuf, bufsize);
-            ival = PyInt_AsLong(arg);
-            if (ival == -1 && PyErr_Occurred())
-                return converterr("integer", arg, msgbuf, bufsize);
-            else
-                *p = ival;
-            break;
-        }
-
-        case 'k': { /* long sized bitfield */
-            unsigned long *p = va_arg(*p_va, unsigned long *);
-            unsigned long ival;
-            if (PyInt_Check(arg))
-                ival = PyInt_AsUnsignedLongMask(arg);
-            else if (PyLong_Check(arg))
-                ival = PyLong_AsUnsignedLongMask(arg);
-            else
-                return converterr("integer", arg, msgbuf, bufsize);
-            *p = ival;
-            break;
-        }
-
-#ifdef HAVE_LONG_LONG
-        case 'L': {/* PY_LONG_LONG */
-            PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * );
-            PY_LONG_LONG ival = PyLong_AsLongLong( arg );
-            if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) {
-                return converterr("long", arg, msgbuf, bufsize);
-            } else {
-                *p = ival;
-            }
-            break;
-        }
-
-        case 'K': { /* long long sized bitfield */
-            unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *);
-            unsigned PY_LONG_LONG ival;
-            if (PyInt_Check(arg))
-                ival = PyInt_AsUnsignedLongMask(arg);
-            else if (PyLong_Check(arg))
-                ival = PyLong_AsUnsignedLongLongMask(arg);
-            else
-                return converterr("integer", arg, msgbuf, bufsize);
-            *p = ival;
-            break;
-        }
-#endif // HAVE_LONG_LONG
-
-        case 'f': {/* float */
-            float *p = va_arg(*p_va, float *);
-            double dval = PyFloat_AsDouble(arg);
-            if (PyErr_Occurred())
-                return converterr("float", arg, msgbuf, bufsize);
-            else
-                *p = (float) dval;
-            break;
-        }
-
-        case 'd': {/* double */
-            double *p = va_arg(*p_va, double *);
-            double dval = PyFloat_AsDouble(arg);
-            if (PyErr_Occurred())
-                return converterr("float", arg, msgbuf, bufsize);
-            else
-                *p = dval;
-            break;
-        }
-
-#ifndef WITHOUT_COMPLEX
-        case 'D': {/* complex double */
-            Py_complex *p = va_arg(*p_va, Py_complex *);
-            Py_complex cval;
-            cval = PyComplex_AsCComplex(arg);
-            if (PyErr_Occurred())
-                return converterr("complex", arg, msgbuf, bufsize);
-            else
-                *p = cval;
-            break;
-        }
-#endif /* WITHOUT_COMPLEX */
-
-        case 'c': {/* char */
-            char *p = va_arg(*p_va, char *);
-            if (PyString_Check(arg) && PyString_Size(arg) == 1)
-                *p = PyString_AS_STRING(arg)[0];
-            else
-                return converterr("char", arg, msgbuf, bufsize);
-            break;
-        }
-        case 's': {/* string */
-            if (*format == '*') {
-                Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
-
-                if (PyString_Check(arg)) {
-                    fflush(stdout);
-                    PyBuffer_FillInfo(p, arg,
-                        PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
-                        1, 0);
-                }
-#ifdef Py_USING_UNICODE
-                else if (PyUnicode_Check(arg)) {
-#if 0
-                    uarg = UNICODE_DEFAULT_ENCODING(arg);
-                    if (uarg == NULL)
-                        return converterr(CONV_UNICODE,
-                                          arg, msgbuf, bufsize);
-                    PyBuffer_FillInfo(p, arg,
-                        PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
-                        1, 0);
-#else
-                    return converterr("string or buffer", arg, msgbuf, bufsize);
-#endif
-                }
-#endif
-                else { /* any buffer-like object */
-                    char *buf;
-                    if (getbuffer(arg, p, &buf) < 0)
-                        return converterr(buf, arg, msgbuf, bufsize);
-                }
-                if (addcleanup(p, freelist, cleanup_buffer)) {
-                    return converterr(
-                        "(cleanup problem)",
-                        arg, msgbuf, bufsize);
-                }
-                format++;
-            } else if (*format == '#') {
-                void **p = (void **)va_arg(*p_va, char **);
-                FETCH_SIZE;
-
-                if (PyString_Check(arg)) {
-                    *p = PyString_AS_STRING(arg);
-                    STORE_SIZE(PyString_GET_SIZE(arg));
-                }
-#ifdef Py_USING_UNICODE
-                else if (PyUnicode_Check(arg)) {
-                    uarg = UNICODE_DEFAULT_ENCODING(arg);
-                    if (uarg == NULL)
-                        return converterr(CONV_UNICODE,
-                                          arg, msgbuf, bufsize);
-                    *p = PyString_AS_STRING(uarg);
-                    STORE_SIZE(PyString_GET_SIZE(uarg));
-                }
-#endif
-                else { /* any buffer-like object */
-                    char *buf;
-                    Py_ssize_t count = convertbuffer(arg, p, &buf);
-                    if (count < 0)
-                        return converterr(buf, arg, msgbuf, bufsize);
-                    STORE_SIZE(count);
-                }
-                format++;
-            } else {
-                char **p = va_arg(*p_va, char **);
-
-                if (PyString_Check(arg))
-                    *p = PyString_AS_STRING(arg);
-#ifdef Py_USING_UNICODE
-                else if (PyUnicode_Check(arg)) {
-                    uarg = UNICODE_DEFAULT_ENCODING(arg);
-                    if (uarg == NULL)
-                        return converterr(CONV_UNICODE,
-                                          arg, msgbuf, bufsize);
-                    *p = PyString_AS_STRING(uarg);
-                }
-#endif
-                else
-                    return converterr("string", arg, msgbuf, bufsize);
-                if ((Py_ssize_t)strlen(*p) != PyString_Size(arg))
-                    return converterr("string without null bytes",
-                                      arg, msgbuf, bufsize);
-            }
-            break;
-        }
-
-        case 'z': {/* string, may be NULL (None) */
-            if (*format == '*') {
-                Py_FatalError("'*' format not supported in PyArg_*\n");
-#if 0
-                Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
-
-                if (arg == Py_None)
-                    PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0);
-                else if (PyString_Check(arg)) {
-                    PyBuffer_FillInfo(p, arg,
-                        PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
-                        1, 0);
-                }
-#ifdef Py_USING_UNICODE
-                else if (PyUnicode_Check(arg)) {
-                    uarg = UNICODE_DEFAULT_ENCODING(arg);
-                    if (uarg == NULL)
-                        return converterr(CONV_UNICODE,
-                                          arg, msgbuf, bufsize);
-                    PyBuffer_FillInfo(p, arg,
-                        PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
-                        1, 0);
-                }
-#endif
-                else { /* any buffer-like object */
-                    char *buf;
-                    if (getbuffer(arg, p, &buf) < 0)
-                        return converterr(buf, arg, msgbuf, bufsize);
-                }
-                if (addcleanup(p, freelist, cleanup_buffer)) {
-                    return converterr(
-                        "(cleanup problem)",
-                        arg, msgbuf, bufsize);
-                }
-                format++;
-#endif
-            } else if (*format == '#') { /* any buffer-like object */
-                void **p = (void **)va_arg(*p_va, char **);
-                FETCH_SIZE;
-
-                if (arg == Py_None) {
-                    *p = 0;
-                    STORE_SIZE(0);
-                }
-                else if (PyString_Check(arg)) {
-                    *p = PyString_AS_STRING(arg);
-                    STORE_SIZE(PyString_GET_SIZE(arg));
-                }
-#ifdef Py_USING_UNICODE
-                else if (PyUnicode_Check(arg)) {
-                    uarg = UNICODE_DEFAULT_ENCODING(arg);
-                    if (uarg == NULL)
-                        return converterr(CONV_UNICODE,
-                                          arg, msgbuf, bufsize);
-                    *p = PyString_AS_STRING(uarg);
-                    STORE_SIZE(PyString_GET_SIZE(uarg));
-                }
-#endif
-                else { /* any buffer-like object */
-                    char *buf;
-                    Py_ssize_t count = convertbuffer(arg, p, &buf);
-                    if (count < 0)
-                        return converterr(buf, arg, msgbuf, bufsize);
-                    STORE_SIZE(count);
-                }
-                format++;
-            } else {
-                char **p = va_arg(*p_va, char **);
-
-                if (arg == Py_None)
-                    *p = 0;
-                else if (PyString_Check(arg))
-                    *p = PyString_AS_STRING(arg);
-#ifdef Py_USING_UNICODE
-                else if (PyUnicode_Check(arg)) {
-                    uarg = UNICODE_DEFAULT_ENCODING(arg);
-                    if (uarg == NULL)
-                        return converterr(CONV_UNICODE,
-                                          arg, msgbuf, bufsize);
-                    *p = PyString_AS_STRING(uarg);
-                }
-#endif
-                else
-                    return converterr("string or None",
-                                      arg, msgbuf, bufsize);
-                if (*format == '#') {
-                    FETCH_SIZE;
-                    assert(0); /* XXX redundant with if-case */
-                    if (arg == Py_None)
-                        *q = 0;
-                    else
-                        *q = PyString_Size(arg);
-                    format++;
-                }
-                else if (*p != NULL &&
-                         (Py_ssize_t)strlen(*p) != PyString_Size(arg))
-                    return converterr(
-                        "string without null bytes or None",
-                        arg, msgbuf, bufsize);
-            }
-            break;
-        }
-        case 'e': {/* encoded string */
-            char **buffer;
-            const char *encoding;
-            PyObject *s;
-            Py_ssize_t size;
-            int recode_strings;
-
-            /* Get 'e' parameter: the encoding name */
-            encoding = (const char *)va_arg(*p_va, const char *);
-#ifdef Py_USING_UNICODE
-            if (encoding == NULL)
-                encoding = PyUnicode_GetDefaultEncoding();
+    PyObject *uarg;
 #endif
-            /* Get output buffer parameter:
-               's' (recode all objects via Unicode) or
-               't' (only recode non-string objects)
-            */
-            if (*format == 's')
-                recode_strings = 1;
-            else if (*format == 't')
-                recode_strings = 0;
-            else
-                return converterr(
-                    "(unknown parser marker combination)",
-                    arg, msgbuf, bufsize);
-            buffer = (char **)va_arg(*p_va, char **);
-            format++;
-            if (buffer == NULL)
-                return converterr("(buffer is NULL)",
-                                  arg, msgbuf, bufsize);
-
-            /* Encode object */
-            if (!recode_strings && PyString_Check(arg)) {
-                s = arg;
-                Py_INCREF(s);
-            }
-            else {
+    switch (c) {
+
+    case 'b': { /* unsigned byte -- very short int */
+        char *p = va_arg(*p_va, char *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival < 0) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "unsigned byte integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival > UCHAR_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "unsigned byte integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = (unsigned char) ival;
+        break;
+    }
+
+    case 'B': {/* byte sized bitfield - both signed and unsigned
+                  values allowed */
+        char *p = va_arg(*p_va, char *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsUnsignedLongMask(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = (unsigned char) ival;
+        break;
+    }
+
+    case 'h': {/* signed short int */
+        short *p = va_arg(*p_va, short *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival < SHRT_MIN) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "signed short integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival > SHRT_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "signed short integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = (short) ival;
+        break;
+    }
+
+    case 'H': { /* short int sized bitfield, both signed and
+                   unsigned allowed */
+        unsigned short *p = va_arg(*p_va, unsigned short *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsUnsignedLongMask(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = (unsigned short) ival;
+        break;
+    }
+
+    case 'i': {/* signed int */
+        int *p = va_arg(*p_va, int *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival > INT_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "signed integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival < INT_MIN) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "signed integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'I': { /* int sized bitfield, both signed and
+                   unsigned allowed */
+        unsigned int *p = va_arg(*p_va, unsigned int *);
+        unsigned int ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = (unsigned int)PyInt_AsUnsignedLongMask(arg);
+        if (ival == (unsigned int)-1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'n': /* Py_ssize_t */
+#if SIZEOF_SIZE_T != SIZEOF_LONG
+    {
+        Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *);
+        Py_ssize_t ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsSsize_t(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+#endif
+    /* Fall through from 'n' to 'l' if Py_ssize_t is int */
+    case 'l': {/* long int */
+        long *p = va_arg(*p_va, long *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'k': { /* long sized bitfield */
+        unsigned long *p = va_arg(*p_va, unsigned long *);
+        unsigned long ival;
+        if (PyInt_Check(arg))
+            ival = PyInt_AsUnsignedLongMask(arg);
+        else if (PyLong_Check(arg))
+            ival = PyLong_AsUnsignedLongMask(arg);
+        else
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+
+#ifdef HAVE_LONG_LONG
+    case 'L': {/* PY_LONG_LONG */
+        PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * );
+        PY_LONG_LONG ival;
+        if (float_argument_warning(arg))
+            return converterr("long", arg, msgbuf, bufsize);
+        ival = PyLong_AsLongLong(arg);
+        if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) {
+            return converterr("long", arg, msgbuf, bufsize);
+        } else {
+            *p = ival;
+        }
+        break;
+    }
+
+    case 'K': { /* long long sized bitfield */
+        unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *);
+        unsigned PY_LONG_LONG ival;
+        if (PyInt_Check(arg))
+            ival = PyInt_AsUnsignedLongMask(arg);
+        else if (PyLong_Check(arg))
+            ival = PyLong_AsUnsignedLongLongMask(arg);
+        else
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+#endif
+
+    case 'f': {/* float */
+        float *p = va_arg(*p_va, float *);
+        double dval = PyFloat_AsDouble(arg);
+        if (PyErr_Occurred())
+            return converterr("float", arg, msgbuf, bufsize);
+        else
+            *p = (float) dval;
+        break;
+    }
+
+    case 'd': {/* double */
+        double *p = va_arg(*p_va, double *);
+        double dval = PyFloat_AsDouble(arg);
+        if (PyErr_Occurred())
+            return converterr("float", arg, msgbuf, bufsize);
+        else
+            *p = dval;
+        break;
+    }
+
+#ifndef WITHOUT_COMPLEX
+    case 'D': {/* complex double */
+        Py_complex *p = va_arg(*p_va, Py_complex *);
+        Py_complex cval;
+        cval = PyComplex_AsCComplex(arg);
+        if (PyErr_Occurred())
+            return converterr("complex", arg, msgbuf, bufsize);
+        else
+            *p = cval;
+        break;
+    }
+#endif /* WITHOUT_COMPLEX */
+
+    case 'c': {/* char */
+        char *p = va_arg(*p_va, char *);
+        if (PyString_Check(arg) && PyString_Size(arg) == 1)
+            *p = PyString_AS_STRING(arg)[0];
+        else
+            return converterr("char", arg, msgbuf, bufsize);
+        break;
+    }
+
+    case 's': {/* string */
+        if (*format == '*') {
+            Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
+
+            if (PyString_Check(arg)) {
+                PyBuffer_FillInfo(p, arg,
+                    PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
+                    1, 0);
+            }
 #ifdef Py_USING_UNICODE
-                PyObject *u;
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                PyBuffer_FillInfo(p, arg,
+                    PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
+                    1, 0);
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                if (getbuffer(arg, p, &buf) < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+            }
+            if (addcleanup(p, freelist, cleanup_buffer)) {
+                return converterr(
+                    "(cleanup problem)",
+                    arg, msgbuf, bufsize);
+            }
+            format++;
+        } else if (*format == '#') {
+            void **p = (void **)va_arg(*p_va, char **);
+            FETCH_SIZE;
-
-                /* Convert object to Unicode */
-                u = PyUnicode_FromObject(arg);
-                if (u == NULL)
-                    return converterr(
-                        "string or unicode or text buffer",
-                        arg, msgbuf, bufsize);
-
-                /* Encode object; use default error handling */
-                s = PyUnicode_AsEncodedString(u,
-                                              encoding,
-                                              NULL);
-                Py_DECREF(u);
-                if (s == NULL)
-                    return converterr("(encoding failed)",
-                                      arg, msgbuf, bufsize);
-                if (!PyString_Check(s)) {
-                    Py_DECREF(s);
-                    return converterr(
-                        "(encoder failed to return a string)",
-                        arg, msgbuf, bufsize);
-                }
+            if (PyString_Check(arg)) {
+                *p = PyString_AS_STRING(arg);
+                STORE_SIZE(PyString_GET_SIZE(arg));
+            }
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                *p = PyString_AS_STRING(uarg);
+                STORE_SIZE(PyString_GET_SIZE(uarg));
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                Py_ssize_t count = convertbuffer(arg, p, &buf);
+                if (count < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+                STORE_SIZE(count);
+            }
+            format++;
+        } else {
+            char **p = va_arg(*p_va, char **);
+
+            if (PyString_Check(arg))
+                *p = PyString_AS_STRING(arg);
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                *p = PyString_AS_STRING(uarg);
+            }
+#endif
+            else
+                return converterr("string", arg, msgbuf, bufsize);
+            if ((Py_ssize_t)strlen(*p) != PyString_Size(arg))
+                return converterr("string without null bytes",
+                                  arg, msgbuf, bufsize);
+        }
+        break;
+    }
+
+    case 'z': {/* string, may be NULL (None) */
+        if (*format == '*') {
+            Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
+
+            if (arg == Py_None)
+                PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0);
+            else if (PyString_Check(arg)) {
+                PyBuffer_FillInfo(p, arg,
+                    PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
+                    1, 0);
+            }
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                PyBuffer_FillInfo(p, arg,
+                    PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
+                    1, 0);
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                if (getbuffer(arg, p, &buf) < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+            }
+            if (addcleanup(p, freelist, cleanup_buffer)) {
+                return converterr(
+                    "(cleanup problem)",
+                    arg, msgbuf, bufsize);
+            }
+            format++;
+        } else if (*format == '#') { /* any buffer-like object */
+            void **p = (void **)va_arg(*p_va, char **);
+            FETCH_SIZE;
+
+            if (arg == Py_None) {
+                *p = 0;
+                STORE_SIZE(0);
+            }
+            else if (PyString_Check(arg)) {
+                *p = PyString_AS_STRING(arg);
+                STORE_SIZE(PyString_GET_SIZE(arg));
+            }
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                *p = PyString_AS_STRING(uarg);
+                STORE_SIZE(PyString_GET_SIZE(uarg));
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                Py_ssize_t count = convertbuffer(arg, p, &buf);
+                if (count < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+                STORE_SIZE(count);
+            }
+            format++;
+        } else {
+            char **p = va_arg(*p_va, char **);
+
+            if (arg == Py_None)
+                *p = 0;
+            else if (PyString_Check(arg))
+                *p = PyString_AS_STRING(arg);
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                *p = PyString_AS_STRING(uarg);
+            }
+#endif
+            else
+                return converterr("string or None",
+                                  arg, msgbuf, bufsize);
+            if (*format == '#') {
+                FETCH_SIZE;
+                assert(0); /* XXX redundant with if-case */
+                if (arg == Py_None) {
+                    STORE_SIZE(0);
+                } else {
+                    STORE_SIZE(PyString_Size(arg));
+                }
+                format++;
+            }
+            else if (*p != NULL &&
+                     (Py_ssize_t)strlen(*p) != PyString_Size(arg))
+                return converterr(
+                    "string without null bytes or None",
+                    arg, msgbuf, bufsize);
+        }
+        break;
+    }
+
+    case 'e': {/* encoded string */
+        char **buffer;
+        const char *encoding;
+        PyObject *s;
+        Py_ssize_t size;
+        int recode_strings;
+
+        /* Get 'e' parameter: the encoding name */
+        encoding = (const char *)va_arg(*p_va, const char *);
+#ifdef Py_USING_UNICODE
+        if (encoding == NULL)
+            encoding = PyUnicode_GetDefaultEncoding();
+#endif
+
+        /* Get output buffer parameter:
+           's' (recode all objects via Unicode) or
+           't' (only recode non-string objects)
+        */
+        if (*format == 's')
+            recode_strings = 1;
+        else if (*format == 't')
+            recode_strings = 0;
+        else
+            return converterr(
+                "(unknown parser marker combination)",
+                arg, msgbuf, bufsize);
+        buffer = (char **)va_arg(*p_va, char **);
+        format++;
+        if (buffer == NULL)
+            return converterr("(buffer is NULL)",
+                              arg, msgbuf, bufsize);
+
+        /* Encode object */
+        if (!recode_strings && PyString_Check(arg)) {
+            s = arg;
+            Py_INCREF(s);
+        }
+        else {
+#ifdef Py_USING_UNICODE
+            PyObject *u;
+
+            /* Convert object to Unicode */
+            u = PyUnicode_FromObject(arg);
+            if (u == NULL)
+                return converterr(
+                    "string or unicode or text buffer",
+                    arg, msgbuf, bufsize);
+
+            /* Encode object; use default error handling */
+            s = PyUnicode_AsEncodedString(u,
+                                          encoding,
+                                          NULL);
+            Py_DECREF(u);
+            if (s == NULL)
+                return converterr("(encoding failed)",
+                                  arg, msgbuf, bufsize);
+            if (!PyString_Check(s)) {
+                Py_DECREF(s);
+                return converterr(
+                    "(encoder failed to return a string)",
+                    arg, msgbuf, bufsize);
+            }
 #else
-                return converterr("string", arg, msgbuf, bufsize);
+            return converterr("string", arg, msgbuf, bufsize);
 #endif
-            }
-            size = PyString_GET_SIZE(s);
+        }
+        size = PyString_GET_SIZE(s);

-            /* Write output; output is guaranteed to be 0-terminated */
-            if (*format == '#') {
-                /* Using buffer length parameter '#':
-
-                   - if *buffer is NULL, a new buffer of the
-                     needed size is allocated and the data
-                     copied into it; *buffer is updated to point
-                     to the new buffer; the caller is
-                     responsible for PyMem_Free()ing it after
-                     usage
+        /* Write output; output is guaranteed to be 0-terminated */
+        if (*format == '#') {
+            /* Using buffer length parameter '#':

-                   - if *buffer is not NULL, the data is
-                     copied to *buffer; *buffer_len has to be
-                     set to the size of the buffer on input;
-                     buffer overflow is signalled with an error;
-                     buffer has to provide enough room for the
-                     encoded string plus the trailing 0-byte
-
-                   - in both cases, *buffer_len is updated to
-                     the size of the buffer /excluding/ the
-                     trailing 0-byte
-
-                */
-                FETCH_SIZE;
+               - if *buffer is NULL, a new buffer of the
+                 needed size is allocated and the data
+                 copied into it; *buffer is updated to point
+                 to the new buffer; the caller is
+                 responsible for PyMem_Free()ing it after
+                 usage

-                format++;
-                if (q == NULL && q2 == NULL) {
-                    Py_DECREF(s);
-                    return converterr(
-                        "(buffer_len is NULL)",
-                        arg, msgbuf, bufsize);
-                }
-                if (*buffer == NULL) {
-                    *buffer = PyMem_NEW(char, size + 1);
-                    if (*buffer == NULL) {
-                        Py_DECREF(s);
-                        return converterr(
-                            "(memory error)",
-                            arg, msgbuf, bufsize);
-                    }
-                    if (addcleanup(*buffer, freelist, cleanup_ptr)) {
-                        Py_DECREF(s);
-                        return converterr(
-                            "(cleanup problem)",
-                            arg, msgbuf, bufsize);
-                    }
-                } else {
-                    if (size + 1 > BUFFER_LEN) {
-                        Py_DECREF(s);
-                        return converterr(
-                            "(buffer overflow)",
-                            arg, msgbuf, bufsize);
-                    }
-                }
-                memcpy(*buffer,
-                       PyString_AS_STRING(s),
-                       size + 1);
-                STORE_SIZE(size);
-            } else {
-                /* Using a 0-terminated buffer:
-
-                   - the encoded string has to be 0-terminated
-                     for this variant to work; if it is not, an
-                     error raised
+           - if *buffer is not NULL, the data is
+             copied to *buffer; *buffer_len has to be
+             set to the size of the buffer on input;
+             buffer overflow is signalled with an error;
+             buffer has to provide enough room for the
+             encoded string plus the trailing 0-byte

-                   - a new buffer of the needed size is
-                     allocated and the data copied into it;
-                     *buffer is updated to point to the new
-                     buffer; the caller is responsible for
-                     PyMem_Free()ing it after usage
+               - in both cases, *buffer_len is updated to
+                 the size of the buffer /excluding/ the
+                 trailing 0-byte

-                */
-                if ((Py_ssize_t)strlen(PyString_AS_STRING(s))
-                    != size) {
-                    Py_DECREF(s);
-                    return converterr(
-                        "encoded string without NULL bytes",
-                        arg, msgbuf, bufsize);
-                }
-                *buffer = PyMem_NEW(char, size + 1);
-                if (*buffer == NULL) {
-                    Py_DECREF(s);
-                    return converterr("(memory error)",
-                                      arg, msgbuf, bufsize);
-                }
-                if (addcleanup(*buffer, freelist, cleanup_ptr)) {
-                    Py_DECREF(s);
-                    return converterr("(cleanup problem)",
-                                      arg, msgbuf, bufsize);
-                }
-                memcpy(*buffer,
-                       PyString_AS_STRING(s),
-                       size + 1);
-            }
-            Py_DECREF(s);
-            break;
-        }
+            */
+            FETCH_SIZE;
+
+            format++;
+            if (q == NULL && q2 == NULL) {
+                Py_DECREF(s);
+                return converterr(
+                    "(buffer_len is NULL)",
+                    arg, msgbuf, bufsize);
+            }
+            if (*buffer == NULL) {
+                *buffer = PyMem_NEW(char, size + 1);
+                if (*buffer == NULL) {
+                    Py_DECREF(s);
+                    return converterr(
+                        "(memory error)",
+                        arg, msgbuf, bufsize);
+                }
+                if (addcleanup(*buffer, freelist, cleanup_ptr)) {
+                    Py_DECREF(s);
+                    return converterr(
+                        "(cleanup problem)",
+                        arg, msgbuf, bufsize);
+                }
+            } else {
+                if (size + 1 > BUFFER_LEN) {
+                    Py_DECREF(s);
+                    return converterr(
+                        "(buffer overflow)",
+                        arg, msgbuf, bufsize);
+                }
+            }
+            memcpy(*buffer,
+                   PyString_AS_STRING(s),
+                   size + 1);
+            STORE_SIZE(size);
+        } else {
+            /* Using a 0-terminated buffer:
+
+               - the encoded string has to be 0-terminated
+                 for this variant to work; if it is not, an
+                 error raised
+
+               - a new buffer of the needed size is
+                 allocated and the data copied into it;
+                 *buffer is updated to point to the new
+                 buffer; the caller is responsible for
+                 PyMem_Free()ing it after usage
+
+            */
+            if ((Py_ssize_t)strlen(PyString_AS_STRING(s))
+                != size) {
+                Py_DECREF(s);
+                return converterr(
+                    "encoded string without NULL bytes",
+                    arg, msgbuf, bufsize);
+            }
+            *buffer = PyMem_NEW(char, size + 1);
+            if (*buffer == NULL) {
+                Py_DECREF(s);
+                return converterr("(memory error)",
+                                  arg, msgbuf, bufsize);
+            }
+            if (addcleanup(*buffer, freelist, cleanup_ptr)) {
+                Py_DECREF(s);
+                return converterr("(cleanup problem)",
+                                  arg, msgbuf, bufsize);
+            }
+            memcpy(*buffer,
+                   PyString_AS_STRING(s),
+                   size + 1);
+        }
+        Py_DECREF(s);
+        break;
+    }

 #ifdef Py_USING_UNICODE
-        case 'u': {/* raw unicode buffer (Py_UNICODE *) */
-            if (*format == '#') { /* any buffer-like object */
-                void **p = (void **)va_arg(*p_va, char **);
-                FETCH_SIZE;
-                if (PyUnicode_Check(arg)) {
-                    *p = PyUnicode_AS_UNICODE(arg);
-                    STORE_SIZE(PyUnicode_GET_SIZE(arg));
-                }
-                else {
-                    return converterr("cannot convert raw buffers",
-                                      arg, msgbuf, bufsize);
-                }
-                format++;
-            } else {
-                Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **);
-                if (PyUnicode_Check(arg))
-                    *p = PyUnicode_AS_UNICODE(arg);
-                else
-                    return converterr("unicode", arg, msgbuf, bufsize);
-            }
-            break;
-        }
+    case 'u': {/* raw unicode buffer (Py_UNICODE *) */
+        if (*format == '#') { /* any buffer-like object */
+            void **p = (void **)va_arg(*p_va, char **);
+            FETCH_SIZE;
+            if (PyUnicode_Check(arg)) {
+                *p = PyUnicode_AS_UNICODE(arg);
+                STORE_SIZE(PyUnicode_GET_SIZE(arg));
+            }
+            else {
+                return converterr("cannot convert raw buffers",
+                                  arg, msgbuf, bufsize);
+            }
+            format++;
+        } else {
+            Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **);
+            if (PyUnicode_Check(arg))
+                *p = PyUnicode_AS_UNICODE(arg);
+            else
+                return converterr("unicode", arg, msgbuf, bufsize);
+        }
+        break;
+    }
 #endif

-        case 'S': { /* string object */
-            PyObject **p = va_arg(*p_va, PyObject **);
-            if (PyString_Check(arg))
-                *p = arg;
-            else
-                return converterr("string", arg, msgbuf, bufsize);
-            break;
-        }
-
+    case 'S': { /* string object */
+        PyObject **p = va_arg(*p_va, PyObject **);
+        if (PyString_Check(arg))
+            *p = arg;
+        else
+            return converterr("string", arg, msgbuf, bufsize);
+        break;
+    }
+
 #ifdef Py_USING_UNICODE
-        case 'U': { /* Unicode object */
-            PyObject **p = va_arg(*p_va, PyObject **);
-            if (PyUnicode_Check(arg))
-                *p = arg;
-            else
-                return converterr("unicode", arg, msgbuf, bufsize);
-            break;
-        }
+    case 'U': { /* Unicode object */
+        PyObject **p = va_arg(*p_va, PyObject **);
+        if (PyUnicode_Check(arg))
+            *p = arg;
+        else
+            return converterr("unicode", arg, msgbuf, bufsize);
+        break;
+    }
 #endif

-        case 'O': { /* object */
-            PyTypeObject *type;
-            PyObject **p;
-            if (*format == '!') {
-                type = va_arg(*p_va, PyTypeObject*);
-                p = va_arg(*p_va, PyObject **);
-                format++;
-                if (PyType_IsSubtype(arg->ob_type, type))
-                    *p = arg;
-                else
-                    return converterr(type->tp_name, arg, msgbuf, bufsize);
-            }
-            else if (*format == '?') {
-                inquiry pred = va_arg(*p_va, inquiry);
-                p = va_arg(*p_va, PyObject **);
-                format++;
-                if ((*pred)(arg))
-                    *p = arg;
-                else
-                    return converterr("(unspecified)",
-                                      arg, msgbuf, bufsize);
-
-            }
-            else if (*format == '&') {
-                typedef int (*converter)(PyObject *, void *);
-                converter convert = va_arg(*p_va, converter);
-                void *addr = va_arg(*p_va, void *);
-                format++;
-                if (! (*convert)(arg, addr))
-                    return converterr("(unspecified)",
-                                      arg, msgbuf, bufsize);
-            }
-            else {
-                p = va_arg(*p_va, PyObject **);
-                *p = arg;
-            }
-            break;
-        }
-
-        case 'w': { /* memory buffer, read-write access */
-            Py_FatalError("'w' unsupported\n");
-#if 0
-            void **p = va_arg(*p_va, void **);
-            void *res;
-            PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
-            Py_ssize_t count;
+    case 'O': { /* object */
+        PyTypeObject *type;
+        PyObject **p;
+        if (*format == '!') {
+            type = va_arg(*p_va, PyTypeObject*);
+            p = va_arg(*p_va, PyObject **);
+            format++;
+            if (PyType_IsSubtype(arg->ob_type, type))
+                *p = arg;
+            else
+                return converterr(type->tp_name, arg, msgbuf, bufsize);

-            if (pb && pb->bf_releasebuffer && *format != '*')
-                /* Buffer must be released, yet caller does not use
-                   the Py_buffer protocol. */
-                return converterr("pinned buffer", arg, msgbuf, bufsize);
+        }
+        else if (*format == '?') {
+            inquiry pred = va_arg(*p_va, inquiry);
+            p = va_arg(*p_va, PyObject **);
+            format++;
+            if ((*pred)(arg))
+                *p = arg;
+            else
+                return converterr("(unspecified)",
+                                  arg, msgbuf, bufsize);

-            if (pb && pb->bf_getbuffer && *format == '*') {
-                /* Caller is interested in Py_buffer, and the object
-                   supports it directly. */
-                format++;
-                if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) {
-                    PyErr_Clear();
-                    return converterr("read-write buffer", arg, msgbuf, bufsize);
-                }
-                if (addcleanup(p, freelist, cleanup_buffer)) {
-                    return converterr(
-                        "(cleanup problem)",
-                        arg, msgbuf, bufsize);
-                }
-                if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C'))
-                    return converterr("contiguous buffer", arg, msgbuf, bufsize);
-                break;
-            }
+        }
+        else if (*format == '&') {
+            typedef int (*converter)(PyObject *, void *);
+            converter convert = va_arg(*p_va, converter);
+            void *addr = va_arg(*p_va, void *);
+            format++;
+            if (! (*convert)(arg, addr))
+                return converterr("(unspecified)",
+                                  arg, msgbuf, bufsize);
+        }
+        else {
+            p = va_arg(*p_va, PyObject **);
+            *p = arg;
+        }
+        break;
+    }

-            if (pb == NULL ||
-                pb->bf_getwritebuffer == NULL ||
-                pb->bf_getsegcount == NULL)
-                return converterr("read-write buffer", arg, msgbuf, bufsize);
-            if ((*pb->bf_getsegcount)(arg, NULL) != 1)
-                return converterr("single-segment read-write buffer",
-                                  arg, msgbuf, bufsize);
-            if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0)
-                return converterr("(unspecified)", arg, msgbuf, bufsize);
-            if (*format == '*') {
-                PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0);
-                format++;
-            }
-            else {
-                *p = res;
-                if (*format == '#') {
-                    FETCH_SIZE;
-                    STORE_SIZE(count);
-                    format++;
-                }
-            }
-            break;
-#endif
-        }
-
-        case 't': { /* 8-bit character buffer, read-only access */
-            char **p = va_arg(*p_va, char **);
-            PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
-            Py_ssize_t count;
-#if 0
-            if (*format++ != '#')
-                return converterr(
-                    "invalid use of 't' format character",
-                    arg, msgbuf, bufsize);
-#endif
-            if (!PyType_HasFeature(arg->ob_type,
-                                   Py_TPFLAGS_HAVE_GETCHARBUFFER)
-#if 0
-                || pb == NULL || pb->bf_getcharbuffer == NULL ||
-                pb->bf_getsegcount == NULL
-#endif
-                )
-                return converterr(
-                    "string or read-only character buffer",
-                    arg, msgbuf, bufsize);
-#if 0
-            if (pb->bf_getsegcount(arg, NULL) != 1)
-                return converterr(
-                    "string or single-segment read-only buffer",
-                    arg, msgbuf, bufsize);
+    case 'w': { /* memory buffer, read-write access */
+        void **p = va_arg(*p_va, void **);
+        void *res;
+        PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
+        Py_ssize_t count;

-            if (pb->bf_releasebuffer)
-                return converterr(
-                    "string or pinned buffer",
-                    arg, msgbuf, bufsize);
-#endif
-            count = pb->bf_getcharbuffer(arg, 0, p);
-#if 0
-            if (count < 0)
-                return converterr("(unspecified)", arg, msgbuf, bufsize);
-#endif
-            {
-                FETCH_SIZE;
-                STORE_SIZE(count);
-                ++format;
-            }
-            break;
-        }
-        default:
-            return converterr("impossible", arg, msgbuf, bufsize);
-
-        }
-
-        *p_format = format;
-        return NULL;
+        if (pb && pb->bf_releasebuffer && *format != '*')
+            /* Buffer must be released, yet caller does not use
+               the Py_buffer protocol. */
+            return converterr("pinned buffer", arg, msgbuf, bufsize);
+
+        if (pb && pb->bf_getbuffer && *format == '*') {
+            /* Caller is interested in Py_buffer, and the object
+               supports it directly. */
+            format++;
+            if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) {
+                PyErr_Clear();
+                return converterr("read-write buffer", arg, msgbuf, bufsize);
+            }
+            if (addcleanup(p, freelist, cleanup_buffer)) {
+                return converterr(
+                    "(cleanup problem)",
+                    arg, msgbuf, bufsize);
+            }
+            if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C'))
+                return converterr("contiguous buffer", arg, msgbuf, bufsize);
+            break;
+        }
+
+        if (pb == NULL ||
+            pb->bf_getwritebuffer == NULL ||
+            pb->bf_getsegcount == NULL)
+            return converterr("read-write buffer", arg, msgbuf, bufsize);
+        if ((*pb->bf_getsegcount)(arg, NULL) != 1)
+            return converterr("single-segment read-write buffer",
+                              arg, msgbuf, bufsize);
+        if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0)
+            return converterr("(unspecified)", arg, msgbuf, bufsize);
+        if (*format == '*') {
+            PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0);
+            format++;
+        }
+        else {
+            *p = res;
+            if (*format == '#') {
+                FETCH_SIZE;
+                STORE_SIZE(count);
+                format++;
+            }
+        }
+        break;
+    }
+
+    case 't': { /* 8-bit character buffer, read-only access */
+        char **p = va_arg(*p_va, char **);
+        PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
+        Py_ssize_t count;
+
+        if (*format++ != '#')
+            return converterr(
+                "invalid use of 't' format character",
+                arg, msgbuf, bufsize);
+        if (!PyType_HasFeature(arg->ob_type,
+                               Py_TPFLAGS_HAVE_GETCHARBUFFER) ||
+            pb == NULL || pb->bf_getcharbuffer == NULL ||
+            pb->bf_getsegcount == NULL)
+            return converterr(
+                "string or read-only character buffer",
+                arg, msgbuf, bufsize);
+
+        if (pb->bf_getsegcount(arg, NULL) != 1)
+            return converterr(
+                "string or single-segment read-only buffer",
+                arg, msgbuf, bufsize);
+
+        if (pb->bf_releasebuffer)
+            return converterr(
+                "string or pinned buffer",
+                arg, msgbuf, bufsize);
+
+        count = pb->bf_getcharbuffer(arg, 0, p);
+        if (count < 0)
+            return converterr("(unspecified)", arg, msgbuf, bufsize);
+        {
+            FETCH_SIZE;
+            STORE_SIZE(count);
+        }
+        break;
+    }
+
+    default:
+        return converterr("impossible", arg, msgbuf, bufsize);
+
+    }
+
+    *p_format = format;
+    return NULL;
 }

 static Py_ssize_t
 convertbuffer(PyObject *arg, void **p, char **errmsg)
 {
-        PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
-        Py_ssize_t count;
-        if (pb == NULL ||
-            pb->bf_getreadbuffer == NULL ||
-            pb->bf_getsegcount == NULL ||
-            pb->bf_releasebuffer != NULL) {
-            *errmsg = "string or read-only buffer";
-            return -1;
-        }
-        if ((*pb->bf_getsegcount)(arg, NULL) != 1) {
-            *errmsg = "string or single-segment read-only buffer";
-            return -1;
-        }
-        if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) {
-            *errmsg = "(unspecified)";
-        }
-        return count;
+    PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
+    Py_ssize_t count;
+    if (pb == NULL ||
+        pb->bf_getreadbuffer == NULL ||
+        pb->bf_getsegcount == NULL ||
+        pb->bf_releasebuffer != NULL) {
+        *errmsg = "string or read-only buffer";
+        return -1;
+    }
+    if ((*pb->bf_getsegcount)(arg, NULL) != 1) {
+        *errmsg = "string or single-segment read-only buffer";
+        return -1;
+    }
+    if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) {
+        *errmsg = "(unspecified)";
+    }
+    return count;
 }

 static int
 getbuffer(PyObject *arg, Py_buffer *view, char **errmsg)
 {
-        void *buf;
-        Py_ssize_t count;
-        PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
-        if (pb == NULL) {
-            *errmsg = "string or buffer";
-            return -1;
-        }
-        if (pb->bf_getbuffer) {
-            if (pb->bf_getbuffer(arg, view, 0) < 0) {
-                *errmsg = "convertible to a buffer";
-                return -1;
-            }
-            if (!PyBuffer_IsContiguous(view, 'C')) {
-                *errmsg = "contiguous buffer";
-                return -1;
-            }
-            return 0;
-        }
+    void *buf;
+    Py_ssize_t count;
+    PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
+    if (pb == NULL) {
+        *errmsg = "string or buffer";
+        return -1;
+    }
+    if (pb->bf_getbuffer) {
+        if (pb->bf_getbuffer(arg, view, 0) < 0) {
+            *errmsg = "convertible to a buffer";
+            return -1;
+        }
+        if (!PyBuffer_IsContiguous(view, 'C')) {
+            *errmsg = "contiguous buffer";
+            return -1;
+        }
+        return 0;
+    }

-        count = convertbuffer(arg, &buf, errmsg);
-        if (count < 0) {
-            *errmsg = "convertible to a buffer";
-            return count;
-        }
-        PyBuffer_FillInfo(view, NULL, buf, count, 1, 0);
-        return 0;
+    count = convertbuffer(arg, &buf, errmsg);
+    if (count < 0) {
+        *errmsg = "convertible to a buffer";
+        return count;
+    }
+    PyBuffer_FillInfo(view, arg, buf, count, 1, 0);
+    return 0;
 }

 /* Support for keyword arguments donated by
@@ -1395,501 +1410,487 @@

 /* Return false (0) for error, else true. */
 int
 PyArg_ParseTupleAndKeywords(PyObject *args,
-                                PyObject *keywords,
-                                const char *format,
-                                char **kwlist, ...)
+                            PyObject *keywords,
+                            const char *format,
+                            char **kwlist, ...)
 {
-        int retval;
-        va_list va;
+    int retval;
+    va_list va;

-        if ((args == NULL || !PyTuple_Check(args)) ||
-            (keywords != NULL && !PyDict_Check(keywords)) ||
-            format == NULL ||
-            kwlist == NULL)
-        {
-            PyErr_BadInternalCall();
-            return 0;
-        }
+    if ((args == NULL || !PyTuple_Check(args)) ||
+        (keywords != NULL && !PyDict_Check(keywords)) ||
+        format == NULL ||
+        kwlist == NULL)
+    {
+        PyErr_BadInternalCall();
+        return 0;
+    }

-        va_start(va, kwlist);
-        retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0);
-        va_end(va);
-        return retval;
+    va_start(va, kwlist);
+    retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0);
+    va_end(va);
+    return retval;
 }

 int
 _PyArg_ParseTupleAndKeywords_SizeT(PyObject *args,
-                                PyObject *keywords,
-                                const char *format,
-                                char **kwlist, ...)
+                                   PyObject *keywords,
+                                   const char *format,
+                                   char **kwlist, ...)
 {
-        int retval;
-        va_list va;
+    int retval;
+    va_list va;

-        if ((args == NULL || !PyTuple_Check(args)) ||
-            (keywords != NULL && !PyDict_Check(keywords)) ||
-            format == NULL ||
-            kwlist == NULL)
-        {
-            PyErr_BadInternalCall();
-            return 0;
-        }
+    if ((args == NULL || !PyTuple_Check(args)) ||
+        (keywords != NULL && !PyDict_Check(keywords)) ||
+        format == NULL ||
+        kwlist == NULL)
+    {
+        PyErr_BadInternalCall();
+        return 0;
+    }

-        va_start(va, kwlist);
-        retval = vgetargskeywords(args, keywords, format,
-                                  kwlist, &va, FLAG_SIZE_T);
-        va_end(va);
-        return retval;
+    va_start(va, kwlist);
+    retval = vgetargskeywords(args, keywords, format,
+                              kwlist, &va, FLAG_SIZE_T);
+    va_end(va);
+    return retval;
 }

 int
 PyArg_VaParseTupleAndKeywords(PyObject *args,
                               PyObject *keywords,
-                              const char *format,
+                              const char *format,
                               char **kwlist, va_list va)
 {
-        int retval;
-        va_list lva;
+    int retval;
+    va_list lva;

-        if ((args == NULL || !PyTuple_Check(args)) ||
-            (keywords != NULL && !PyDict_Check(keywords)) ||
-            format == NULL ||
-            kwlist == NULL)
-        {
-            PyErr_BadInternalCall();
-            return 0;
-        }
+    if ((args == NULL || !PyTuple_Check(args)) ||
+
(keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0); - return retval; + retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0); + return retval; } int _PyArg_VaParseTupleAndKeywords_SizeT(PyObject *args, - PyObject *keywords, - const char *format, - char **kwlist, va_list va) + PyObject *keywords, + const char *format, + char **kwlist, va_list va) { - int retval; - va_list lva; + int retval; + va_list lva; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - retval = vgetargskeywords(args, keywords, format, - kwlist, &lva, FLAG_SIZE_T); - return retval; + retval = vgetargskeywords(args, keywords, format, + kwlist, &lva, FLAG_SIZE_T); + return retval; } #define IS_END_OF_FORMAT(c) (c == '\0' || c == ';' || c == ':') static int vgetargskeywords(PyObject *args, PyObject *keywords, const char *format, - char **kwlist, va_list *p_va, int flags) + char **kwlist, va_list *p_va, int flags) { - char msgbuf[512]; - int levels[32]; - const char *fname, *msg, *custom_msg, *keyword; - int min = INT_MAX; - int i, len, nargs, nkeywords; - PyObject *current_arg; - freelist_t freelist = {0, NULL}; + char 
msgbuf[512]; + int levels[32]; + const char *fname, *msg, *custom_msg, *keyword; + int min = INT_MAX; + int i, len, nargs, nkeywords; + PyObject *freelist = NULL, *current_arg; + assert(args != NULL && PyTuple_Check(args)); + assert(keywords == NULL || PyDict_Check(keywords)); + assert(format != NULL); + assert(kwlist != NULL); + assert(p_va != NULL); - assert(args != NULL && PyTuple_Check(args)); - assert(keywords == NULL || PyDict_Check(keywords)); - assert(format != NULL); - assert(kwlist != NULL); - assert(p_va != NULL); + /* grab the function name or custom error msg first (mutually exclusive) */ + fname = strchr(format, ':'); + if (fname) { + fname++; + custom_msg = NULL; + } + else { + custom_msg = strchr(format,';'); + if (custom_msg) + custom_msg++; + } - /* grab the function name or custom error msg first (mutually exclusive) */ - fname = strchr(format, ':'); - if (fname) { - fname++; - custom_msg = NULL; - } - else { - custom_msg = strchr(format,';'); - if (custom_msg) - custom_msg++; - } + /* scan kwlist and get greatest possible nbr of args */ + for (len=0; kwlist[len]; len++) + continue; - /* scan kwlist and get greatest possible nbr of args */ - for (len=0; kwlist[len]; len++) - continue; + nargs = PyTuple_GET_SIZE(args); + nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords); + if (nargs + nkeywords > len) { + PyErr_Format(PyExc_TypeError, "%s%s takes at most %d " + "argument%s (%d given)", + (fname == NULL) ? "function" : fname, + (fname == NULL) ? "" : "()", + len, + (len == 1) ? 
"" : "s", + nargs + nkeywords); + return 0; + } - freelist.entries = PyMem_New(freelistentry_t, len); + /* convert tuple args and keyword args in same loop, using kwlist to drive process */ + for (i = 0; i < len; i++) { + keyword = kwlist[i]; + if (*format == '|') { + min = i; + format++; + } + if (IS_END_OF_FORMAT(*format)) { + PyErr_Format(PyExc_RuntimeError, + "More keyword list entries (%d) than " + "format specifiers (%d)", len, i); + return cleanreturn(0, freelist); + } + current_arg = NULL; + if (nkeywords) { + current_arg = PyDict_GetItemString(keywords, keyword); + } + if (current_arg) { + --nkeywords; + if (i < nargs) { + /* arg present in tuple and in dict */ + PyErr_Format(PyExc_TypeError, + "Argument given by name ('%s') " + "and position (%d)", + keyword, i+1); + return cleanreturn(0, freelist); + } + } + else if (nkeywords && PyErr_Occurred()) + return cleanreturn(0, freelist); + else if (i < nargs) + current_arg = PyTuple_GET_ITEM(args, i); - nargs = PyTuple_GET_SIZE(args); - nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords); - if (nargs + nkeywords > len) { - PyErr_Format(PyExc_TypeError, "%s%s takes at most %d " - "argument%s (%d given)", - (fname == NULL) ? "function" : fname, - (fname == NULL) ? "" : "()", - len, - (len == 1) ? 
"" : "s", - nargs + nkeywords); - return cleanreturn(0, &freelist); - } + if (current_arg) { + msg = convertitem(current_arg, &format, p_va, flags, + levels, msgbuf, sizeof(msgbuf), &freelist); + if (msg) { + seterror(i+1, msg, levels, fname, custom_msg); + return cleanreturn(0, freelist); + } + continue; + } - /* convert tuple args and keyword args in same loop, using kwlist to drive process */ - for (i = 0; i < len; i++) { - keyword = kwlist[i]; - if (*format == '|') { - min = i; - format++; - } - if (IS_END_OF_FORMAT(*format)) { - PyErr_Format(PyExc_RuntimeError, - "More keyword list entries (%d) than " - "format specifiers (%d)", len, i); - return cleanreturn(0, &freelist); - } - current_arg = NULL; - if (nkeywords) { - current_arg = PyDict_GetItemString(keywords, keyword); - } - if (current_arg) { - --nkeywords; - if (i < nargs) { - /* arg present in tuple and in dict */ - PyErr_Format(PyExc_TypeError, - "Argument given by name ('%s') " - "and position (%d)", - keyword, i+1); - return cleanreturn(0, &freelist); - } - } - else if (nkeywords && PyErr_Occurred()) - return cleanreturn(0, &freelist); - else if (i < nargs) - current_arg = PyTuple_GET_ITEM(args, i); - - if (current_arg) { - msg = convertitem(current_arg, &format, p_va, flags, - levels, msgbuf, sizeof(msgbuf), &freelist); - if (msg) { - seterror(i+1, msg, levels, fname, custom_msg); - return cleanreturn(0, &freelist); - } - continue; - } + if (i < min) { + PyErr_Format(PyExc_TypeError, "Required argument " + "'%s' (pos %d) not found", + keyword, i+1); + return cleanreturn(0, freelist); + } + /* current code reports success when all required args + * fulfilled and no keyword args left, with no further + * validation. XXX Maybe skip this in debug build ? 
+ */ + if (!nkeywords) + return cleanreturn(1, freelist); - if (i < min) { - PyErr_Format(PyExc_TypeError, "Required argument " - "'%s' (pos %d) not found", - keyword, i+1); - return cleanreturn(0, &freelist); - } - /* current code reports success when all required args - * fulfilled and no keyword args left, with no further - * validation. XXX Maybe skip this in debug build ? - */ - if (!nkeywords) - return cleanreturn(1, &freelist); + /* We are into optional args, skip thru to any remaining + * keyword args */ + msg = skipitem(&format, p_va, flags); + if (msg) { + PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg, + format); + return cleanreturn(0, freelist); + } + } - /* We are into optional args, skip thru to any remaining - * keyword args */ - msg = skipitem(&format, p_va, flags); - if (msg) { - PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg, - format); - return cleanreturn(0, &freelist); - } - } + if (!IS_END_OF_FORMAT(*format) && *format != '|') { + PyErr_Format(PyExc_RuntimeError, + "more argument specifiers than keyword list entries " + "(remaining format:'%s')", format); + return cleanreturn(0, freelist); + } - if (!IS_END_OF_FORMAT(*format) && *format != '|') { - PyErr_Format(PyExc_RuntimeError, - "more argument specifiers than keyword list entries " - "(remaining format:'%s')", format); - return cleanreturn(0, &freelist); - } + /* make sure there are no extraneous keyword arguments */ + if (nkeywords > 0) { + PyObject *key, *value; + Py_ssize_t pos = 0; + while (PyDict_Next(keywords, &pos, &key, &value)) { + int match = 0; + char *ks; + if (!PyString_Check(key)) { + PyErr_SetString(PyExc_TypeError, + "keywords must be strings"); + return cleanreturn(0, freelist); + } + ks = PyString_AsString(key); + for (i = 0; i < len; i++) { + if (!strcmp(ks, kwlist[i])) { + match = 1; + break; + } + } + if (!match) { + PyErr_Format(PyExc_TypeError, + "'%s' is an invalid keyword " + "argument for this function", + ks); + return cleanreturn(0, freelist); + } + } + } - 
/* make sure there are no extraneous keyword arguments */ - if (nkeywords > 0) { - PyObject *key, *value; - Py_ssize_t pos = 0; - while (PyDict_Next(keywords, &pos, &key, &value)) { - int match = 0; - char *ks; - if (!PyString_Check(key)) { - PyErr_SetString(PyExc_TypeError, - "keywords must be strings"); - return cleanreturn(0, &freelist); - } - ks = PyString_AsString(key); - for (i = 0; i < len; i++) { - if (!strcmp(ks, kwlist[i])) { - match = 1; - break; - } - } - if (!match) { - PyErr_Format(PyExc_TypeError, - "'%s' is an invalid keyword " - "argument for this function", - ks); - return cleanreturn(0, &freelist); - } - } - } - - return cleanreturn(1, &freelist); + return cleanreturn(1, freelist); } static char * skipitem(const char **p_format, va_list *p_va, int flags) { - const char *format = *p_format; - char c = *format++; - - switch (c) { + const char *format = *p_format; + char c = *format++; - /* simple codes - * The individual types (second arg of va_arg) are irrelevant */ + switch (c) { - case 'b': /* byte -- very short int */ - case 'B': /* byte as bitfield */ - case 'h': /* short int */ - case 'H': /* short int as bitfield */ - case 'i': /* int */ - case 'I': /* int sized bitfield */ - case 'l': /* long int */ - case 'k': /* long int sized bitfield */ + /* simple codes + * The individual types (second arg of va_arg) are irrelevant */ + + case 'b': /* byte -- very short int */ + case 'B': /* byte as bitfield */ + case 'h': /* short int */ + case 'H': /* short int as bitfield */ + case 'i': /* int */ + case 'I': /* int sized bitfield */ + case 'l': /* long int */ + case 'k': /* long int sized bitfield */ #ifdef HAVE_LONG_LONG - case 'L': /* PY_LONG_LONG */ - case 'K': /* PY_LONG_LONG sized bitfield */ + case 'L': /* PY_LONG_LONG */ + case 'K': /* PY_LONG_LONG sized bitfield */ #endif - case 'f': /* float */ - case 'd': /* double */ + case 'f': /* float */ + case 'd': /* double */ #ifndef WITHOUT_COMPLEX - case 'D': /* complex double */ + case 'D': /* 
complex double */ #endif - case 'c': /* char */ - { - (void) va_arg(*p_va, void *); - break; - } + case 'c': /* char */ + { + (void) va_arg(*p_va, void *); + break; + } - case 'n': /* Py_ssize_t */ - { - (void) va_arg(*p_va, Py_ssize_t *); - break; - } - - /* string codes */ - - case 'e': /* string with encoding */ - { - (void) va_arg(*p_va, const char *); - if (!(*format == 's' || *format == 't')) - /* after 'e', only 's' and 't' is allowed */ - goto err; - format++; - /* explicit fallthrough to string cases */ - } - - case 's': /* string */ - case 'z': /* string or None */ + case 'n': /* Py_ssize_t */ + { + (void) va_arg(*p_va, Py_ssize_t *); + break; + } + + /* string codes */ + + case 'e': /* string with encoding */ + { + (void) va_arg(*p_va, const char *); + if (!(*format == 's' || *format == 't')) + /* after 'e', only 's' and 't' is allowed */ + goto err; + format++; + /* explicit fallthrough to string cases */ + } + + case 's': /* string */ + case 'z': /* string or None */ #ifdef Py_USING_UNICODE - case 'u': /* unicode string */ + case 'u': /* unicode string */ #endif - case 't': /* buffer, read-only */ - case 'w': /* buffer, read-write */ - { - (void) va_arg(*p_va, char **); - if (*format == '#') { - if (flags & FLAG_SIZE_T) - (void) va_arg(*p_va, Py_ssize_t *); - else - (void) va_arg(*p_va, int *); - format++; - } else if ((c == 's' || c == 'z') && *format == '*') { - format++; - } - break; - } + case 't': /* buffer, read-only */ + case 'w': /* buffer, read-write */ + { + (void) va_arg(*p_va, char **); + if (*format == '#') { + if (flags & FLAG_SIZE_T) + (void) va_arg(*p_va, Py_ssize_t *); + else + (void) va_arg(*p_va, int *); + format++; + } else if ((c == 's' || c == 'z') && *format == '*') { + format++; + } + break; + } - /* object codes */ + /* object codes */ - case 'S': /* string object */ + case 'S': /* string object */ #ifdef Py_USING_UNICODE - case 'U': /* unicode string object */ + case 'U': /* unicode string object */ #endif - { - (void) 
va_arg(*p_va, PyObject **); - break; - } - - case 'O': /* object */ - { - if (*format == '!') { - format++; - (void) va_arg(*p_va, PyTypeObject*); - (void) va_arg(*p_va, PyObject **); - } -#if 0 -/* I don't know what this is for */ - else if (*format == '?') { - inquiry pred = va_arg(*p_va, inquiry); - format++; - if ((*pred)(arg)) { - (void) va_arg(*p_va, PyObject **); - } - } -#endif - else if (*format == '&') { - typedef int (*converter)(PyObject *, void *); - (void) va_arg(*p_va, converter); - (void) va_arg(*p_va, void *); - format++; - } - else { - (void) va_arg(*p_va, PyObject **); - } - break; - } + { + (void) va_arg(*p_va, PyObject **); + break; + } - case '(': /* bypass tuple, not handled at all previously */ - { - char *msg; - for (;;) { - if (*format==')') - break; - if (IS_END_OF_FORMAT(*format)) - return "Unmatched left paren in format " - "string"; - msg = skipitem(&format, p_va, flags); - if (msg) - return msg; - } - format++; - break; - } + case 'O': /* object */ + { + if (*format == '!') { + format++; + (void) va_arg(*p_va, PyTypeObject*); + (void) va_arg(*p_va, PyObject **); + } + else if (*format == '&') { + typedef int (*converter)(PyObject *, void *); + (void) va_arg(*p_va, converter); + (void) va_arg(*p_va, void *); + format++; + } + else { + (void) va_arg(*p_va, PyObject **); + } + break; + } - case ')': - return "Unmatched right paren in format string"; + case '(': /* bypass tuple, not handled at all previously */ + { + char *msg; + for (;;) { + if (*format==')') + break; + if (IS_END_OF_FORMAT(*format)) + return "Unmatched left paren in format " + "string"; + msg = skipitem(&format, p_va, flags); + if (msg) + return msg; + } + format++; + break; + } - default: + case ')': + return "Unmatched right paren in format string"; + + default: err: - return "impossible"; - - } + return "impossible"; - *p_format = format; - return NULL; + } + + *p_format = format; + return NULL; } int PyArg_UnpackTuple(PyObject *args, const char *name, Py_ssize_t 
min, Py_ssize_t max, ...) { - Py_ssize_t i, l; - PyObject **o; - va_list vargs; + Py_ssize_t i, l; + PyObject **o; + va_list vargs; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, max); + va_start(vargs, max); #else - va_start(vargs); + va_start(vargs); #endif - assert(min >= 0); - assert(min <= max); - if (!PyTuple_Check(args)) { - PyErr_SetString(PyExc_SystemError, - "PyArg_UnpackTuple() argument list is not a tuple"); - return 0; - } - l = PyTuple_GET_SIZE(args); - if (l < min) { - if (name != NULL) - PyErr_Format( - PyExc_TypeError, - "%s expected %s%zd arguments, got %zd", - name, (min == max ? "" : "at least "), min, l); - else - PyErr_Format( - PyExc_TypeError, - "unpacked tuple should have %s%zd elements," - " but has %zd", - (min == max ? "" : "at least "), min, l); - va_end(vargs); - return 0; - } - if (l > max) { - if (name != NULL) - PyErr_Format( - PyExc_TypeError, - "%s expected %s%zd arguments, got %zd", - name, (min == max ? "" : "at most "), max, l); - else - PyErr_Format( - PyExc_TypeError, - "unpacked tuple should have %s%zd elements," - " but has %zd", - (min == max ? "" : "at most "), max, l); - va_end(vargs); - return 0; - } - for (i = 0; i < l; i++) { - o = va_arg(vargs, PyObject **); - *o = PyTuple_GET_ITEM(args, i); - } - va_end(vargs); - return 1; + assert(min >= 0); + assert(min <= max); + if (!PyTuple_Check(args)) { + PyErr_SetString(PyExc_SystemError, + "PyArg_UnpackTuple() argument list is not a tuple"); + return 0; + } + l = PyTuple_GET_SIZE(args); + if (l < min) { + if (name != NULL) + PyErr_Format( + PyExc_TypeError, + "%s expected %s%zd arguments, got %zd", + name, (min == max ? "" : "at least "), min, l); + else + PyErr_Format( + PyExc_TypeError, + "unpacked tuple should have %s%zd elements," + " but has %zd", + (min == max ? "" : "at least "), min, l); + va_end(vargs); + return 0; + } + if (l > max) { + if (name != NULL) + PyErr_Format( + PyExc_TypeError, + "%s expected %s%zd arguments, got %zd", + name, (min == max ? 
"" : "at most "), max, l); + else + PyErr_Format( + PyExc_TypeError, + "unpacked tuple should have %s%zd elements," + " but has %zd", + (min == max ? "" : "at most "), max, l); + va_end(vargs); + return 0; + } + for (i = 0; i < l; i++) { + o = va_arg(vargs, PyObject **); + *o = PyTuple_GET_ITEM(args, i); + } + va_end(vargs); + return 1; } /* For type constructors that don't take keyword args * - * Sets a TypeError and returns 0 if the kwds dict is + * Sets a TypeError and returns 0 if the kwds dict is * not empty, returns 1 otherwise */ int _PyArg_NoKeywords(const char *funcname, PyObject *kw) { - if (kw == NULL) - return 1; - if (!PyDict_CheckExact(kw)) { - PyErr_BadInternalCall(); - return 0; - } - if (PyDict_Size(kw) == 0) - return 1; - - PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments", - funcname); - return 0; + if (kw == NULL) + return 1; + if (!PyDict_CheckExact(kw)) { + PyErr_BadInternalCall(); + return 0; + } + if (PyDict_Size(kw) == 0) + return 1; + + PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments", + funcname); + return 0; } #ifdef __cplusplus }; diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c --- a/pypy/module/cpyext/src/modsupport.c +++ b/pypy/module/cpyext/src/modsupport.c @@ -33,41 +33,41 @@ static int countformat(const char *format, int endchar) { - int count = 0; - int level = 0; - while (level > 0 || *format != endchar) { - switch (*format) { - case '\0': - /* Premature end */ - PyErr_SetString(PyExc_SystemError, - "unmatched paren in format"); - return -1; - case '(': - case '[': - case '{': - if (level == 0) - count++; - level++; - break; - case ')': - case ']': - case '}': - level--; - break; - case '#': - case '&': - case ',': - case ':': - case ' ': - case '\t': - break; - default: - if (level == 0) - count++; - } - format++; - } - return count; + int count = 0; + int level = 0; + while (level > 0 || *format != endchar) { + switch (*format) { + case '\0': + /* 
Premature end */ + PyErr_SetString(PyExc_SystemError, + "unmatched paren in format"); + return -1; + case '(': + case '[': + case '{': + if (level == 0) + count++; + level++; + break; + case ')': + case ']': + case '}': + level--; + break; + case '#': + case '&': + case ',': + case ':': + case ' ': + case '\t': + break; + default: + if (level == 0) + count++; + } + format++; + } + return count; } @@ -83,582 +83,435 @@ static PyObject * do_mkdict(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *d; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - if ((d = PyDict_New()) == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. */ - for (i = 0; i < n; i+= 2) { - PyObject *k, *v; - int err; - k = do_mkvalue(p_format, p_va, flags); - if (k == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - k = Py_None; - } - v = do_mkvalue(p_format, p_va, flags); - if (v == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - v = Py_None; - } - err = PyDict_SetItem(d, k, v); - Py_DECREF(k); - Py_DECREF(v); - if (err < 0 || itemfailed) { - Py_DECREF(d); - return NULL; - } - } - if (d != NULL && **p_format != endchar) { - Py_DECREF(d); - d = NULL; - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - } - else if (endchar) - ++*p_format; - return d; + PyObject *d; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + if ((d = PyDict_New()) == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. 
*/ + for (i = 0; i < n; i+= 2) { + PyObject *k, *v; + int err; + k = do_mkvalue(p_format, p_va, flags); + if (k == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + k = Py_None; + } + v = do_mkvalue(p_format, p_va, flags); + if (v == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + v = Py_None; + } + err = PyDict_SetItem(d, k, v); + Py_DECREF(k); + Py_DECREF(v); + if (err < 0 || itemfailed) { + Py_DECREF(d); + return NULL; + } + } + if (d != NULL && **p_format != endchar) { + Py_DECREF(d); + d = NULL; + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + } + else if (endchar) + ++*p_format; + return d; } static PyObject * do_mklist(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *v; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - v = PyList_New(n); - if (v == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. */ - for (i = 0; i < n; i++) { - PyObject *w = do_mkvalue(p_format, p_va, flags); - if (w == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - w = Py_None; - } - PyList_SET_ITEM(v, i, w); - } + PyObject *v; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + v = PyList_New(n); + if (v == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. 
*/ + for (i = 0; i < n; i++) { + PyObject *w = do_mkvalue(p_format, p_va, flags); + if (w == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + w = Py_None; + } + PyList_SET_ITEM(v, i, w); + } - if (itemfailed) { - /* do_mkvalue() should have already set an error */ - Py_DECREF(v); - return NULL; - } - if (**p_format != endchar) { - Py_DECREF(v); - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - return NULL; - } - if (endchar) - ++*p_format; - return v; + if (itemfailed) { + /* do_mkvalue() should have already set an error */ + Py_DECREF(v); + return NULL; + } + if (**p_format != endchar) { + Py_DECREF(v); + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + return NULL; + } + if (endchar) + ++*p_format; + return v; } #ifdef Py_USING_UNICODE static int _ustrlen(Py_UNICODE *u) { - int i = 0; - Py_UNICODE *v = u; - while (*v != 0) { i++; v++; } - return i; + int i = 0; + Py_UNICODE *v = u; + while (*v != 0) { i++; v++; } + return i; } #endif static PyObject * do_mktuple(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *v; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - if ((v = PyTuple_New(n)) == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. 
*/ - for (i = 0; i < n; i++) { - PyObject *w = do_mkvalue(p_format, p_va, flags); - if (w == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - w = Py_None; - } - PyTuple_SET_ITEM(v, i, w); - } - if (itemfailed) { - /* do_mkvalue() should have already set an error */ - Py_DECREF(v); - return NULL; - } - if (**p_format != endchar) { - Py_DECREF(v); - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - return NULL; - } - if (endchar) - ++*p_format; - return v; + PyObject *v; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + if ((v = PyTuple_New(n)) == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. */ + for (i = 0; i < n; i++) { + PyObject *w = do_mkvalue(p_format, p_va, flags); + if (w == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + w = Py_None; + } + PyTuple_SET_ITEM(v, i, w); + } + if (itemfailed) { + /* do_mkvalue() should have already set an error */ + Py_DECREF(v); + return NULL; + } + if (**p_format != endchar) { + Py_DECREF(v); + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + return NULL; + } + if (endchar) + ++*p_format; + return v; } static PyObject * do_mkvalue(const char **p_format, va_list *p_va, int flags) { - for (;;) { - switch (*(*p_format)++) { - case '(': - return do_mktuple(p_format, p_va, ')', - countformat(*p_format, ')'), flags); + for (;;) { + switch (*(*p_format)++) { + case '(': + return do_mktuple(p_format, p_va, ')', + countformat(*p_format, ')'), flags); - case '[': - return do_mklist(p_format, p_va, ']', - countformat(*p_format, ']'), flags); + case '[': + return do_mklist(p_format, p_va, ']', + countformat(*p_format, ']'), flags); - case '{': - return do_mkdict(p_format, p_va, '}', - countformat(*p_format, '}'), flags); + case '{': + return do_mkdict(p_format, p_va, '}', + countformat(*p_format, '}'), flags); - case 'b': - case 'B': - case 'h': - case 'i': - return PyInt_FromLong((long)va_arg(*p_va, int)); - - 
case 'H': - return PyInt_FromLong((long)va_arg(*p_va, unsigned int)); + case 'b': + case 'B': + case 'h': + case 'i': + return PyInt_FromLong((long)va_arg(*p_va, int)); - case 'I': - { - unsigned int n; - n = va_arg(*p_va, unsigned int); - if (n > (unsigned long)PyInt_GetMax()) - return PyLong_FromUnsignedLong((unsigned long)n); - else - return PyInt_FromLong(n); - } - - case 'n': + case 'H': + return PyInt_FromLong((long)va_arg(*p_va, unsigned int)); + + case 'I': + { + unsigned int n; + n = va_arg(*p_va, unsigned int); + if (n > (unsigned long)PyInt_GetMax()) + return PyLong_FromUnsignedLong((unsigned long)n); + else + return PyInt_FromLong(n); + } + + case 'n': #if SIZEOF_SIZE_T!=SIZEOF_LONG - return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t)); + return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t)); #endif - /* Fall through from 'n' to 'l' if Py_ssize_t is long */ - case 'l': - return PyInt_FromLong(va_arg(*p_va, long)); + /* Fall through from 'n' to 'l' if Py_ssize_t is long */ + case 'l': + return PyInt_FromLong(va_arg(*p_va, long)); - case 'k': - { - unsigned long n; - n = va_arg(*p_va, unsigned long); - if (n > (unsigned long)PyInt_GetMax()) - return PyLong_FromUnsignedLong(n); - else - return PyInt_FromLong(n); - } + case 'k': + { + unsigned long n; + n = va_arg(*p_va, unsigned long); + if (n > (unsigned long)PyInt_GetMax()) + return PyLong_FromUnsignedLong(n); + else + return PyInt_FromLong(n); + } #ifdef HAVE_LONG_LONG - case 'L': - return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG)); + case 'L': + return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG)); - case 'K': - return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG)); + case 'K': + return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG)); #endif #ifdef Py_USING_UNICODE - case 'u': - { - PyObject *v; - Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *); - Py_ssize_t n; - if (**p_format == '#') { - ++*p_format; - if 
(flags & FLAG_SIZE_T) - n = va_arg(*p_va, Py_ssize_t); - else - n = va_arg(*p_va, int); - } - else - n = -1; - if (u == NULL) { - v = Py_None; - Py_INCREF(v); - } - else { - if (n < 0) - n = _ustrlen(u); - v = PyUnicode_FromUnicode(u, n); - } - return v; - } + case 'u': + { + PyObject *v; + Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *); + Py_ssize_t n; + if (**p_format == '#') { + ++*p_format; + if (flags & FLAG_SIZE_T) + n = va_arg(*p_va, Py_ssize_t); + else + n = va_arg(*p_va, int); + } + else + n = -1; + if (u == NULL) { + v = Py_None; + Py_INCREF(v); + } + else { + if (n < 0) + n = _ustrlen(u); + v = PyUnicode_FromUnicode(u, n); + } + return v; + } #endif - case 'f': - case 'd': - return PyFloat_FromDouble( - (double)va_arg(*p_va, va_double)); + case 'f': + case 'd': + return PyFloat_FromDouble( + (double)va_arg(*p_va, va_double)); #ifndef WITHOUT_COMPLEX - case 'D': - return PyComplex_FromCComplex( - *((Py_complex *)va_arg(*p_va, Py_complex *))); + case 'D': + return PyComplex_FromCComplex( + *((Py_complex *)va_arg(*p_va, Py_complex *))); #endif /* WITHOUT_COMPLEX */ - case 'c': - { - char p[1]; - p[0] = (char)va_arg(*p_va, int); - return PyString_FromStringAndSize(p, 1); - } + case 'c': + { + char p[1]; + p[0] = (char)va_arg(*p_va, int); + return PyString_FromStringAndSize(p, 1); + } - case 's': - case 'z': - { - PyObject *v; - char *str = va_arg(*p_va, char *); - Py_ssize_t n; - if (**p_format == '#') { - ++*p_format; - if (flags & FLAG_SIZE_T) - n = va_arg(*p_va, Py_ssize_t); - else - n = va_arg(*p_va, int); - } - else - n = -1; - if (str == NULL) { - v = Py_None; - Py_INCREF(v); - } - else { - if (n < 0) { - size_t m = strlen(str); - if (m > PY_SSIZE_T_MAX) { - PyErr_SetString(PyExc_OverflowError, - "string too long for Python string"); - return NULL; - } - n = (Py_ssize_t)m; - } - v = PyString_FromStringAndSize(str, n); - } - return v; - } + case 's': + case 'z': + { + PyObject *v; + char *str = va_arg(*p_va, char *); + Py_ssize_t n; + if (**p_format == 
'#') { + ++*p_format; + if (flags & FLAG_SIZE_T) + n = va_arg(*p_va, Py_ssize_t); + else + n = va_arg(*p_va, int); + } + else + n = -1; + if (str == NULL) { + v = Py_None; + Py_INCREF(v); + } + else { + if (n < 0) { + size_t m = strlen(str); + if (m > PY_SSIZE_T_MAX) { + PyErr_SetString(PyExc_OverflowError, + "string too long for Python string"); + return NULL; + } + n = (Py_ssize_t)m; + } + v = PyString_FromStringAndSize(str, n); + } + return v; + } - case 'N': - case 'S': - case 'O': - if (**p_format == '&') { - typedef PyObject *(*converter)(void *); - converter func = va_arg(*p_va, converter); - void *arg = va_arg(*p_va, void *); - ++*p_format; - return (*func)(arg); - } - else { - PyObject *v; - v = va_arg(*p_va, PyObject *); - if (v != NULL) { - if (*(*p_format - 1) != 'N') - Py_INCREF(v); - } - else if (!PyErr_Occurred()) - /* If a NULL was passed - * because a call that should - * have constructed a value - * failed, that's OK, and we - * pass the error on; but if - * no error occurred it's not - * clear that the caller knew - * what she was doing. */ - PyErr_SetString(PyExc_SystemError, - "NULL object passed to Py_BuildValue"); - return v; - } + case 'N': + case 'S': + case 'O': + if (**p_format == '&') { + typedef PyObject *(*converter)(void *); + converter func = va_arg(*p_va, converter); + void *arg = va_arg(*p_va, void *); + ++*p_format; + return (*func)(arg); + } + else { + PyObject *v; + v = va_arg(*p_va, PyObject *); + if (v != NULL) { + if (*(*p_format - 1) != 'N') + Py_INCREF(v); + } + else if (!PyErr_Occurred()) + /* If a NULL was passed + * because a call that should + * have constructed a value + * failed, that's OK, and we + * pass the error on; but if + * no error occurred it's not + * clear that the caller knew + * what she was doing. 
*/ + PyErr_SetString(PyExc_SystemError, + "NULL object passed to Py_BuildValue"); + return v; + } - case ':': - case ',': - case ' ': - case '\t': - break; + case ':': + case ',': + case ' ': + case '\t': + break; - default: - PyErr_SetString(PyExc_SystemError, - "bad format char passed to Py_BuildValue"); - return NULL; + default: + PyErr_SetString(PyExc_SystemError, + "bad format char passed to Py_BuildValue"); + return NULL; - } - } + } + } } PyObject * Py_BuildValue(const char *format, ...) { - va_list va; - PyObject* retval; - va_start(va, format); - retval = va_build_value(format, va, 0); - va_end(va); - return retval; + va_list va; + PyObject* retval; + va_start(va, format); + retval = va_build_value(format, va, 0); + va_end(va); + return retval; } PyObject * _Py_BuildValue_SizeT(const char *format, ...) { - va_list va; - PyObject* retval; - va_start(va, format); - retval = va_build_value(format, va, FLAG_SIZE_T); - va_end(va); - return retval; + va_list va; + PyObject* retval; + va_start(va, format); + retval = va_build_value(format, va, FLAG_SIZE_T); + va_end(va); + return retval; } PyObject * Py_VaBuildValue(const char *format, va_list va) { - return va_build_value(format, va, 0); + return va_build_value(format, va, 0); } PyObject * _Py_VaBuildValue_SizeT(const char *format, va_list va) { - return va_build_value(format, va, FLAG_SIZE_T); + return va_build_value(format, va, FLAG_SIZE_T); } static PyObject * va_build_value(const char *format, va_list va, int flags) { - const char *f = format; - int n = countformat(f, '\0'); - va_list lva; + const char *f = format; + int n = countformat(f, '\0'); + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - if (n < 0) - return NULL; - if (n == 0) { - Py_INCREF(Py_None); - return Py_None; - } - if (n == 1) - return do_mkvalue(&f, &lva, flags); - 
return do_mktuple(&f, &lva, '\0', n, flags); + if (n < 0) + return NULL; + if (n == 0) { + Py_INCREF(Py_None); + return Py_None; + } + if (n == 1) + return do_mkvalue(&f, &lva, flags); + return do_mktuple(&f, &lva, '\0', n, flags); } PyObject * PyEval_CallFunction(PyObject *obj, const char *format, ...) { - va_list vargs; - PyObject *args; - PyObject *res; + va_list vargs; + PyObject *args; + PyObject *res; - va_start(vargs, format); + va_start(vargs, format); - args = Py_VaBuildValue(format, vargs); - va_end(vargs); + args = Py_VaBuildValue(format, vargs); + va_end(vargs); - if (args == NULL) - return NULL; + if (args == NULL) + return NULL; - res = PyEval_CallObject(obj, args); - Py_DECREF(args); + res = PyEval_CallObject(obj, args); + Py_DECREF(args); - return res; + return res; } PyObject * PyEval_CallMethod(PyObject *obj, const char *methodname, const char *format, ...) { - va_list vargs; - PyObject *meth; - PyObject *args; - PyObject *res; + va_list vargs; + PyObject *meth; + PyObject *args; + PyObject *res; - meth = PyObject_GetAttrString(obj, methodname); - if (meth == NULL) - return NULL; + meth = PyObject_GetAttrString(obj, methodname); + if (meth == NULL) + return NULL; - va_start(vargs, format); + va_start(vargs, format); - args = Py_VaBuildValue(format, vargs); - va_end(vargs); + args = Py_VaBuildValue(format, vargs); + va_end(vargs); - if (args == NULL) { - Py_DECREF(meth); - return NULL; - } + if (args == NULL) { + Py_DECREF(meth); + return NULL; + } - res = PyEval_CallObject(meth, args); - Py_DECREF(meth); - Py_DECREF(args); + res = PyEval_CallObject(meth, args); + Py_DECREF(meth); + Py_DECREF(args); - return res; -} - -static PyObject* -call_function_tail(PyObject *callable, PyObject *args) -{ - PyObject *retval; - - if (args == NULL) - return NULL; - - if (!PyTuple_Check(args)) { - PyObject *a; - - a = PyTuple_New(1); - if (a == NULL) { - Py_DECREF(args); - return NULL; - } - PyTuple_SET_ITEM(a, 0, args); - args = a; - } - retval = 
PyObject_Call(callable, args, NULL); - - Py_DECREF(args); - - return retval; -} - -PyObject * -PyObject_CallFunction(PyObject *callable, const char *format, ...) -{ - va_list va; - PyObject *args; - - if (format && *format) { - va_start(va, format); - args = Py_VaBuildValue(format, va); - va_end(va); - } - else - args = PyTuple_New(0); - - return call_function_tail(callable, args); -} - -PyObject * -PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) -{ - va_list va; - PyObject *args; - PyObject *func = NULL; - PyObject *retval = NULL; - - func = PyObject_GetAttrString(o, name); - if (func == NULL) { - PyErr_SetString(PyExc_AttributeError, name); - return 0; - } - - if (format && *format) { - va_start(va, format); - args = Py_VaBuildValue(format, va); - va_end(va); - } - else - args = PyTuple_New(0); - - retval = call_function_tail(func, args); - - exit: - /* args gets consumed in call_function_tail */ - Py_XDECREF(func); - - return retval; -} - -static PyObject * -objargs_mktuple(va_list va) -{ - int i, n = 0; - va_list countva; - PyObject *result, *tmp; - -#ifdef VA_LIST_IS_ARRAY - memcpy(countva, va, sizeof(va_list)); -#else -#ifdef __va_copy - __va_copy(countva, va); -#else - countva = va; -#endif -#endif - - while (((PyObject *)va_arg(countva, PyObject *)) != NULL) - ++n; - result = PyTuple_New(n); - if (result != NULL && n > 0) { - for (i = 0; i < n; ++i) { - tmp = (PyObject *)va_arg(va, PyObject *); - Py_INCREF(tmp); - PyTuple_SET_ITEM(result, i, tmp); - } - } - return result; -} - -PyObject * -PyObject_CallFunctionObjArgs(PyObject *callable, ...) -{ - PyObject *args, *tmp; - va_list vargs; - - /* count the args */ - va_start(vargs, callable); - args = objargs_mktuple(vargs); - va_end(vargs); - if (args == NULL) - return NULL; - tmp = PyObject_Call(callable, args, NULL); - Py_DECREF(args); - - return tmp; -} - -PyObject * -PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...) 
-{ - PyObject *args, *tmp; - va_list vargs; - - callable = PyObject_GetAttr(callable, name); - if (callable == NULL) - return NULL; - - /* count the args */ - va_start(vargs, name); - args = objargs_mktuple(vargs); - va_end(vargs); - if (args == NULL) { - Py_DECREF(callable); - return NULL; - } - tmp = PyObject_Call(callable, args, NULL); - Py_DECREF(args); - Py_DECREF(callable); - - return tmp; + return res; } /* returns -1 in case of error, 0 if a new key was added, 1 if the key @@ -666,67 +519,67 @@ static int _PyModule_AddObject_NoConsumeRef(PyObject *m, const char *name, PyObject *o) { - PyObject *dict, *prev; - if (!PyModule_Check(m)) { - PyErr_SetString(PyExc_TypeError, - "PyModule_AddObject() needs module as first arg"); - return -1; - } - if (!o) { - if (!PyErr_Occurred()) - PyErr_SetString(PyExc_TypeError, - "PyModule_AddObject() needs non-NULL value"); - return -1; - } + PyObject *dict, *prev; + if (!PyModule_Check(m)) { + PyErr_SetString(PyExc_TypeError, + "PyModule_AddObject() needs module as first arg"); + return -1; + } + if (!o) { + if (!PyErr_Occurred()) + PyErr_SetString(PyExc_TypeError, + "PyModule_AddObject() needs non-NULL value"); + return -1; + } - dict = PyModule_GetDict(m); - if (dict == NULL) { - /* Internal error -- modules must have a dict! */ - PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__", - PyModule_GetName(m)); - return -1; - } - prev = PyDict_GetItemString(dict, name); - if (PyDict_SetItemString(dict, name, o)) - return -1; - return prev != NULL; + dict = PyModule_GetDict(m); + if (dict == NULL) { + /* Internal error -- modules must have a dict! 
*/ + PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__", + PyModule_GetName(m)); + return -1; + } + prev = PyDict_GetItemString(dict, name); + if (PyDict_SetItemString(dict, name, o)) + return -1; + return prev != NULL; } int PyModule_AddObject(PyObject *m, const char *name, PyObject *o) { - int result = _PyModule_AddObject_NoConsumeRef(m, name, o); - /* XXX WORKAROUND for a common misusage of PyModule_AddObject: - for the common case of adding a new key, we don't consume a - reference, but instead just leak it away. The issue is that - people generally don't realize that this function consumes a - reference, because on CPython the reference is still stored - on the dictionary. */ - if (result != 0) - Py_DECREF(o); - return result < 0 ? -1 : 0; + int result = _PyModule_AddObject_NoConsumeRef(m, name, o); + /* XXX WORKAROUND for a common misusage of PyModule_AddObject: + for the common case of adding a new key, we don't consume a + reference, but instead just leak it away. The issue is that + people generally don't realize that this function consumes a + reference, because on CPython the reference is still stored + on the dictionary. */ + if (result != 0) + Py_DECREF(o); + return result < 0 ? -1 : 0; } -int +int PyModule_AddIntConstant(PyObject *m, const char *name, long value) { - int result; - PyObject *o = PyInt_FromLong(value); - if (!o) - return -1; - result = _PyModule_AddObject_NoConsumeRef(m, name, o); - Py_DECREF(o); - return result < 0 ? -1 : 0; + int result; + PyObject *o = PyInt_FromLong(value); + if (!o) + return -1; + result = _PyModule_AddObject_NoConsumeRef(m, name, o); + Py_DECREF(o); + return result < 0 ? -1 : 0; } -int +int PyModule_AddStringConstant(PyObject *m, const char *name, const char *value) { - int result; - PyObject *o = PyString_FromString(value); - if (!o) - return -1; - result = _PyModule_AddObject_NoConsumeRef(m, name, o); - Py_DECREF(o); - return result < 0 ? 
-1 : 0; + int result; + PyObject *o = PyString_FromString(value); + if (!o) + return -1; + result = _PyModule_AddObject_NoConsumeRef(m, name, o); + Py_DECREF(o); + return result < 0 ? -1 : 0; } diff --git a/pypy/module/cpyext/src/mysnprintf.c b/pypy/module/cpyext/src/mysnprintf.c --- a/pypy/module/cpyext/src/mysnprintf.c +++ b/pypy/module/cpyext/src/mysnprintf.c @@ -20,86 +20,86 @@ Return value (rv): - When 0 <= rv < size, the output conversion was unexceptional, and - rv characters were written to str (excluding a trailing \0 byte at - str[rv]). + When 0 <= rv < size, the output conversion was unexceptional, and + rv characters were written to str (excluding a trailing \0 byte at + str[rv]). - When rv >= size, output conversion was truncated, and a buffer of - size rv+1 would have been needed to avoid truncation. str[size-1] - is \0 in this case. + When rv >= size, output conversion was truncated, and a buffer of + size rv+1 would have been needed to avoid truncation. str[size-1] + is \0 in this case. - When rv < 0, "something bad happened". str[size-1] is \0 in this - case too, but the rest of str is unreliable. It could be that - an error in format codes was detected by libc, or on platforms - with a non-C99 vsnprintf simply that the buffer wasn't big enough - to avoid truncation, or on platforms without any vsnprintf that - PyMem_Malloc couldn't obtain space for a temp buffer. + When rv < 0, "something bad happened". str[size-1] is \0 in this + case too, but the rest of str is unreliable. It could be that + an error in format codes was detected by libc, or on platforms + with a non-C99 vsnprintf simply that the buffer wasn't big enough + to avoid truncation, or on platforms without any vsnprintf that + PyMem_Malloc couldn't obtain space for a temp buffer. CAUTION: Unlike C99, str != NULL and size > 0 are required. */ int +PyOS_snprintf(char *str, size_t size, const char *format, ...) 
+{ + int rc; + va_list va; + + va_start(va, format); + rc = PyOS_vsnprintf(str, size, format, va); + va_end(va); + return rc; +} + +int PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va) { - int len; /* # bytes written, excluding \0 */ + int len; /* # bytes written, excluding \0 */ #ifdef HAVE_SNPRINTF #define _PyOS_vsnprintf_EXTRA_SPACE 1 #else #define _PyOS_vsnprintf_EXTRA_SPACE 512 - char *buffer; + char *buffer; #endif - assert(str != NULL); - assert(size > 0); - assert(format != NULL); - /* We take a size_t as input but return an int. Sanity check - * our input so that it won't cause an overflow in the - * vsnprintf return value or the buffer malloc size. */ - if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) { - len = -666; - goto Done; - } + assert(str != NULL); + assert(size > 0); + assert(format != NULL); + /* We take a size_t as input but return an int. Sanity check + * our input so that it won't cause an overflow in the + * vsnprintf return value or the buffer malloc size. */ + if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) { + len = -666; + goto Done; + } #ifdef HAVE_SNPRINTF - len = vsnprintf(str, size, format, va); + len = vsnprintf(str, size, format, va); #else - /* Emulate it. */ - buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE); - if (buffer == NULL) { - len = -666; - goto Done; - } + /* Emulate it. */ + buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE); + if (buffer == NULL) { + len = -666; + goto Done; + } - len = vsprintf(buffer, format, va); - if (len < 0) - /* ignore the error */; + len = vsprintf(buffer, format, va); + if (len < 0) + /* ignore the error */; - else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE) - Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf"); + else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE) + Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf"); - else { - const size_t to_copy = (size_t)len < size ? 
- (size_t)len : size - 1; - assert(to_copy < size); - memcpy(str, buffer, to_copy); - str[to_copy] = '\0'; - } - PyMem_FREE(buffer); + else { + const size_t to_copy = (size_t)len < size ? + (size_t)len : size - 1; + assert(to_copy < size); + memcpy(str, buffer, to_copy); + str[to_copy] = '\0'; + } + PyMem_FREE(buffer); #endif Done: - if (size > 0) - str[size-1] = '\0'; - return len; + if (size > 0) + str[size-1] = '\0'; + return len; #undef _PyOS_vsnprintf_EXTRA_SPACE } - -int -PyOS_snprintf(char *str, size_t size, const char *format, ...) -{ - int rc; - va_list va; - - va_start(va, format); - rc = PyOS_vsnprintf(str, size, format, va); - va_end(va); - return rc; -} diff --git a/pypy/module/cpyext/src/object.c b/pypy/module/cpyext/src/object.c deleted file mode 100644 --- a/pypy/module/cpyext/src/object.c +++ /dev/null @@ -1,91 +0,0 @@ -// contains code from abstract.c -#include - - -static PyObject * -null_error(void) -{ - if (!PyErr_Occurred()) - PyErr_SetString(PyExc_SystemError, - "null argument to internal routine"); - return NULL; -} - -int PyObject_AsReadBuffer(PyObject *obj, - const void **buffer, - Py_ssize_t *buffer_len) -{ - PyBufferProcs *pb; - void *pp; - Py_ssize_t len; - - if (obj == NULL || buffer == NULL || buffer_len == NULL) { - null_error(); - return -1; - } - pb = obj->ob_type->tp_as_buffer; - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a readable buffer object"); - return -1; - } - if ((*pb->bf_getsegcount)(obj, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "expected a single-segment buffer object"); - return -1; - } - len = (*pb->bf_getreadbuffer)(obj, 0, &pp); - if (len < 0) - return -1; - *buffer = pp; - *buffer_len = len; - return 0; -} - -int PyObject_AsWriteBuffer(PyObject *obj, - void **buffer, - Py_ssize_t *buffer_len) -{ - PyBufferProcs *pb; - void*pp; - Py_ssize_t len; - - if (obj == NULL || buffer == NULL || buffer_len == NULL) { - 
null_error(); - return -1; - } - pb = obj->ob_type->tp_as_buffer; - if (pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a writeable buffer object"); - return -1; - } - if ((*pb->bf_getsegcount)(obj, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "expected a single-segment buffer object"); - return -1; - } - len = (*pb->bf_getwritebuffer)(obj,0,&pp); - if (len < 0) - return -1; - *buffer = pp; - *buffer_len = len; - return 0; -} - -int -PyObject_CheckReadBuffer(PyObject *obj) -{ - PyBufferProcs *pb = obj->ob_type->tp_as_buffer; - - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL || - (*pb->bf_getsegcount)(obj, NULL) != 1) - return 0; - return 1; -} diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c --- a/pypy/module/cpyext/src/pyerrors.c +++ b/pypy/module/cpyext/src/pyerrors.c @@ -4,72 +4,75 @@ PyObject * PyErr_Format(PyObject *exception, const char *format, ...) 
{ - va_list vargs; - PyObject* string; + va_list vargs; + PyObject* string; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, format); + va_start(vargs, format); #else - va_start(vargs); + va_start(vargs); #endif - string = PyString_FromFormatV(format, vargs); - PyErr_SetObject(exception, string); - Py_XDECREF(string); - va_end(vargs); - return NULL; + string = PyString_FromFormatV(format, vargs); + PyErr_SetObject(exception, string); + Py_XDECREF(string); + va_end(vargs); + return NULL; } + + PyObject * PyErr_NewException(const char *name, PyObject *base, PyObject *dict) { - char *dot; - PyObject *modulename = NULL; - PyObject *classname = NULL; - PyObject *mydict = NULL; - PyObject *bases = NULL; - PyObject *result = NULL; - dot = strrchr(name, '.'); - if (dot == NULL) { - PyErr_SetString(PyExc_SystemError, - "PyErr_NewException: name must be module.class"); - return NULL; - } - if (base == NULL) - base = PyExc_Exception; - if (dict == NULL) { - dict = mydict = PyDict_New(); - if (dict == NULL) - goto failure; - } - if (PyDict_GetItemString(dict, "__module__") == NULL) { - modulename = PyString_FromStringAndSize(name, - (Py_ssize_t)(dot-name)); - if (modulename == NULL) - goto failure; - if (PyDict_SetItemString(dict, "__module__", modulename) != 0) - goto failure; - } - if (PyTuple_Check(base)) { - bases = base; - /* INCREF as we create a new ref in the else branch */ - Py_INCREF(bases); - } else { - bases = PyTuple_Pack(1, base); - if (bases == NULL) - goto failure; - } - /* Create a real new-style class. 
*/ - result = PyObject_CallFunction((PyObject *)&PyType_Type, "sOO", - dot+1, bases, dict); + char *dot; + PyObject *modulename = NULL; + PyObject *classname = NULL; + PyObject *mydict = NULL; + PyObject *bases = NULL; + PyObject *result = NULL; + dot = strrchr(name, '.'); + if (dot == NULL) { + PyErr_SetString(PyExc_SystemError, + "PyErr_NewException: name must be module.class"); + return NULL; + } + if (base == NULL) + base = PyExc_Exception; + if (dict == NULL) { + dict = mydict = PyDict_New(); + if (dict == NULL) + goto failure; + } + if (PyDict_GetItemString(dict, "__module__") == NULL) { + modulename = PyString_FromStringAndSize(name, + (Py_ssize_t)(dot-name)); + if (modulename == NULL) + goto failure; + if (PyDict_SetItemString(dict, "__module__", modulename) != 0) + goto failure; + } + if (PyTuple_Check(base)) { + bases = base; + /* INCREF as we create a new ref in the else branch */ + Py_INCREF(bases); + } else { + bases = PyTuple_Pack(1, base); + if (bases == NULL) + goto failure; + } + /* Create a real new-style class. 
*/ + result = PyObject_CallFunction((PyObject *)&PyType_Type, "sOO", + dot+1, bases, dict); failure: - Py_XDECREF(bases); - Py_XDECREF(mydict); - Py_XDECREF(classname); - Py_XDECREF(modulename); - return result; + Py_XDECREF(bases); + Py_XDECREF(mydict); + Py_XDECREF(classname); + Py_XDECREF(modulename); + return result; } + /* Create an exception with docstring */ PyObject * PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict) diff --git a/pypy/module/cpyext/src/pysignals.c b/pypy/module/cpyext/src/pysignals.c --- a/pypy/module/cpyext/src/pysignals.c +++ b/pypy/module/cpyext/src/pysignals.c @@ -17,17 +17,34 @@ PyOS_getsig(int sig) { #ifdef SA_RESTART - /* assume sigaction exists */ - struct sigaction context; - if (sigaction(sig, NULL, &context) == -1) - return SIG_ERR; - return context.sa_handler; + /* assume sigaction exists */ + struct sigaction context; + if (sigaction(sig, NULL, &context) == -1) + return SIG_ERR; + return context.sa_handler; #else - PyOS_sighandler_t handler; - handler = signal(sig, SIG_IGN); - if (handler != SIG_ERR) - signal(sig, handler); - return handler; + PyOS_sighandler_t handler; +/* Special signal handling for the secure CRT in Visual Studio 2005 */ +#if defined(_MSC_VER) && _MSC_VER >= 1400 + switch (sig) { + /* Only these signals are valid */ + case SIGINT: + case SIGILL: + case SIGFPE: + case SIGSEGV: + case SIGTERM: + case SIGBREAK: + case SIGABRT: + break; + /* Don't call signal() with other values or it will assert */ + default: + return SIG_ERR; + } +#endif /* _MSC_VER && _MSC_VER >= 1400 */ + handler = signal(sig, SIG_IGN); + if (handler != SIG_ERR) + signal(sig, handler); + return handler; #endif } @@ -35,21 +52,21 @@ PyOS_setsig(int sig, PyOS_sighandler_t handler) { #ifdef SA_RESTART - /* assume sigaction exists */ - struct sigaction context, ocontext; - context.sa_handler = handler; - sigemptyset(&context.sa_mask); - context.sa_flags = 0; - if (sigaction(sig, &context, &ocontext) == 
-1) - return SIG_ERR; - return ocontext.sa_handler; + /* assume sigaction exists */ + struct sigaction context, ocontext; + context.sa_handler = handler; + sigemptyset(&context.sa_mask); + context.sa_flags = 0; + if (sigaction(sig, &context, &ocontext) == -1) + return SIG_ERR; + return ocontext.sa_handler; #else - PyOS_sighandler_t oldhandler; - oldhandler = signal(sig, handler); + PyOS_sighandler_t oldhandler; + oldhandler = signal(sig, handler); #ifndef MS_WINDOWS - /* should check if this exists */ - siginterrupt(sig, 1); + /* should check if this exists */ + siginterrupt(sig, 1); #endif - return oldhandler; + return oldhandler; #endif } diff --git a/pypy/module/cpyext/src/pythonrun.c b/pypy/module/cpyext/src/pythonrun.c --- a/pypy/module/cpyext/src/pythonrun.c +++ b/pypy/module/cpyext/src/pythonrun.c @@ -9,28 +9,28 @@ void Py_FatalError(const char *msg) { - fprintf(stderr, "Fatal Python error: %s\n", msg); - fflush(stderr); /* it helps in Windows debug build */ + fprintf(stderr, "Fatal Python error: %s\n", msg); + fflush(stderr); /* it helps in Windows debug build */ #ifdef MS_WINDOWS - { - size_t len = strlen(msg); - WCHAR* buffer; - size_t i; + { + size_t len = strlen(msg); + WCHAR* buffer; + size_t i; - /* Convert the message to wchar_t. This uses a simple one-to-one - conversion, assuming that this error message actually uses ASCII - only. If this ceases to be true, we will have to convert. */ - buffer = alloca( (len+1) * (sizeof *buffer)); - for( i=0; i<=len; ++i) - buffer[i] = msg[i]; - OutputDebugStringW(L"Fatal Python error: "); - OutputDebugStringW(buffer); - OutputDebugStringW(L"\n"); - } + /* Convert the message to wchar_t. This uses a simple one-to-one + conversion, assuming that this error message actually uses ASCII + only. If this ceases to be true, we will have to convert. 
*/ + buffer = alloca( (len+1) * (sizeof *buffer)); + for( i=0; i<=len; ++i) + buffer[i] = msg[i]; + OutputDebugStringW(L"Fatal Python error: "); + OutputDebugStringW(buffer); + OutputDebugStringW(L"\n"); + } #ifdef _DEBUG - DebugBreak(); + DebugBreak(); #endif #endif /* MS_WINDOWS */ - abort(); + abort(); } diff --git a/pypy/module/cpyext/src/stringobject.c b/pypy/module/cpyext/src/stringobject.c --- a/pypy/module/cpyext/src/stringobject.c +++ b/pypy/module/cpyext/src/stringobject.c @@ -4,246 +4,247 @@ PyObject * PyString_FromFormatV(const char *format, va_list vargs) { - va_list count; - Py_ssize_t n = 0; - const char* f; - char *s; - PyObject* string; + va_list count; + Py_ssize_t n = 0; + const char* f; + char *s; + PyObject* string; #ifdef VA_LIST_IS_ARRAY - Py_MEMCPY(count, vargs, sizeof(va_list)); + Py_MEMCPY(count, vargs, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(count, vargs); + __va_copy(count, vargs); #else - count = vargs; + count = vargs; #endif #endif - /* step 1: figure out how large a buffer we need */ - for (f = format; *f; f++) { - if (*f == '%') { + /* step 1: figure out how large a buffer we need */ + for (f = format; *f; f++) { + if (*f == '%') { #ifdef HAVE_LONG_LONG - int longlongflag = 0; + int longlongflag = 0; #endif - const char* p = f; - while (*++f && *f != '%' && !isalpha(Py_CHARMASK(*f))) - ; + const char* p = f; + while (*++f && *f != '%' && !isalpha(Py_CHARMASK(*f))) + ; - /* skip the 'l' or 'z' in {%ld, %zd, %lu, %zu} since - * they don't affect the amount of space we reserve. - */ - if (*f == 'l') { - if (f[1] == 'd' || f[1] == 'u') { - ++f; - } + /* skip the 'l' or 'z' in {%ld, %zd, %lu, %zu} since + * they don't affect the amount of space we reserve. 
+ */ + if (*f == 'l') { + if (f[1] == 'd' || f[1] == 'u') { + ++f; + } #ifdef HAVE_LONG_LONG - else if (f[1] == 'l' && - (f[2] == 'd' || f[2] == 'u')) { - longlongflag = 1; - f += 2; - } + else if (f[1] == 'l' && + (f[2] == 'd' || f[2] == 'u')) { + longlongflag = 1; + f += 2; + } #endif - } - else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { - ++f; - } + } + else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { + ++f; + } - switch (*f) { - case 'c': - (void)va_arg(count, int); - /* fall through... */ - case '%': - n++; - break; - case 'd': case 'u': case 'i': case 'x': - (void) va_arg(count, int); + switch (*f) { + case 'c': + (void)va_arg(count, int); + /* fall through... */ + case '%': + n++; + break; + case 'd': case 'u': case 'i': case 'x': + (void) va_arg(count, int); #ifdef HAVE_LONG_LONG - /* Need at most - ceil(log10(256)*SIZEOF_LONG_LONG) digits, - plus 1 for the sign. 53/22 is an upper - bound for log10(256). */ - if (longlongflag) - n += 2 + (SIZEOF_LONG_LONG*53-1) / 22; - else + /* Need at most + ceil(log10(256)*SIZEOF_LONG_LONG) digits, + plus 1 for the sign. 53/22 is an upper + bound for log10(256). */ + if (longlongflag) + n += 2 + (SIZEOF_LONG_LONG*53-1) / 22; + else #endif - /* 20 bytes is enough to hold a 64-bit - integer. Decimal takes the most - space. This isn't enough for - octal. */ - n += 20; + /* 20 bytes is enough to hold a 64-bit + integer. Decimal takes the most + space. This isn't enough for + octal. */ + n += 20; - break; - case 's': - s = va_arg(count, char*); - n += strlen(s); - break; - case 'p': - (void) va_arg(count, int); - /* maximum 64-bit pointer representation: - * 0xffffffffffffffff - * so 19 characters is enough. - * XXX I count 18 -- what's the extra for? - */ - n += 19; - break; - default: - /* if we stumble upon an unknown - formatting code, copy the rest of - the format string to the output - string. 
(we cannot just skip the - code, since there's no way to know - what's in the argument list) */ - n += strlen(p); - goto expand; - } - } else - n++; - } + break; + case 's': + s = va_arg(count, char*); + n += strlen(s); + break; + case 'p': + (void) va_arg(count, int); + /* maximum 64-bit pointer representation: + * 0xffffffffffffffff + * so 19 characters is enough. + * XXX I count 18 -- what's the extra for? + */ + n += 19; + break; + default: + /* if we stumble upon an unknown + formatting code, copy the rest of + the format string to the output + string. (we cannot just skip the + code, since there's no way to know + what's in the argument list) */ + n += strlen(p); + goto expand; + } + } else + n++; + } expand: - /* step 2: fill the buffer */ - /* Since we've analyzed how much space we need for the worst case, - use sprintf directly instead of the slower PyOS_snprintf. */ - string = PyString_FromStringAndSize(NULL, n); - if (!string) - return NULL; + /* step 2: fill the buffer */ + /* Since we've analyzed how much space we need for the worst case, + use sprintf directly instead of the slower PyOS_snprintf. */ + string = PyString_FromStringAndSize(NULL, n); + if (!string) + return NULL; - s = PyString_AsString(string); + s = PyString_AsString(string); - for (f = format; *f; f++) { - if (*f == '%') { - const char* p = f++; - Py_ssize_t i; - int longflag = 0; + for (f = format; *f; f++) { + if (*f == '%') { + const char* p = f++; + Py_ssize_t i; + int longflag = 0; #ifdef HAVE_LONG_LONG - int longlongflag = 0; + int longlongflag = 0; #endif - int size_tflag = 0; - /* parse the width.precision part (we're only - interested in the precision value, if any) */ - n = 0; - while (isdigit(Py_CHARMASK(*f))) - n = (n*10) + *f++ - '0'; - if (*f == '.') { - f++; - n = 0; - while (isdigit(Py_CHARMASK(*f))) - n = (n*10) + *f++ - '0'; - } - while (*f && *f != '%' && !isalpha(Py_CHARMASK(*f))) - f++; - /* Handle %ld, %lu, %lld and %llu. 
*/ - if (*f == 'l') { - if (f[1] == 'd' || f[1] == 'u') { - longflag = 1; - ++f; - } + int size_tflag = 0; + /* parse the width.precision part (we're only + interested in the precision value, if any) */ + n = 0; + while (isdigit(Py_CHARMASK(*f))) + n = (n*10) + *f++ - '0'; + if (*f == '.') { + f++; + n = 0; + while (isdigit(Py_CHARMASK(*f))) + n = (n*10) + *f++ - '0'; + } + while (*f && *f != '%' && !isalpha(Py_CHARMASK(*f))) + f++; + /* Handle %ld, %lu, %lld and %llu. */ + if (*f == 'l') { + if (f[1] == 'd' || f[1] == 'u') { + longflag = 1; + ++f; + } #ifdef HAVE_LONG_LONG - else if (f[1] == 'l' && - (f[2] == 'd' || f[2] == 'u')) { - longlongflag = 1; - f += 2; - } + else if (f[1] == 'l' && + (f[2] == 'd' || f[2] == 'u')) { + longlongflag = 1; + f += 2; + } #endif - } - /* handle the size_t flag. */ - else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { - size_tflag = 1; - ++f; - } + } + /* handle the size_t flag. */ + else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { + size_tflag = 1; + ++f; + } - switch (*f) { - case 'c': - *s++ = va_arg(vargs, int); - break; - case 'd': - if (longflag) - sprintf(s, "%ld", va_arg(vargs, long)); + switch (*f) { + case 'c': + *s++ = va_arg(vargs, int); + break; + case 'd': + if (longflag) + sprintf(s, "%ld", va_arg(vargs, long)); #ifdef HAVE_LONG_LONG - else if (longlongflag) - sprintf(s, "%" PY_FORMAT_LONG_LONG "d", - va_arg(vargs, PY_LONG_LONG)); + else if (longlongflag) + sprintf(s, "%" PY_FORMAT_LONG_LONG "d", + va_arg(vargs, PY_LONG_LONG)); #endif - else if (size_tflag) - sprintf(s, "%" PY_FORMAT_SIZE_T "d", - va_arg(vargs, Py_ssize_t)); - else - sprintf(s, "%d", va_arg(vargs, int)); - s += strlen(s); - break; - case 'u': - if (longflag) - sprintf(s, "%lu", - va_arg(vargs, unsigned long)); + else if (size_tflag) + sprintf(s, "%" PY_FORMAT_SIZE_T "d", + va_arg(vargs, Py_ssize_t)); + else + sprintf(s, "%d", va_arg(vargs, int)); + s += strlen(s); + break; + case 'u': + if (longflag) + sprintf(s, "%lu", + va_arg(vargs, 
unsigned long)); #ifdef HAVE_LONG_LONG - else if (longlongflag) - sprintf(s, "%" PY_FORMAT_LONG_LONG "u", - va_arg(vargs, PY_LONG_LONG)); + else if (longlongflag) + sprintf(s, "%" PY_FORMAT_LONG_LONG "u", + va_arg(vargs, PY_LONG_LONG)); #endif - else if (size_tflag) - sprintf(s, "%" PY_FORMAT_SIZE_T "u", - va_arg(vargs, size_t)); - else - sprintf(s, "%u", - va_arg(vargs, unsigned int)); - s += strlen(s); - break; - case 'i': - sprintf(s, "%i", va_arg(vargs, int)); - s += strlen(s); - break; - case 'x': - sprintf(s, "%x", va_arg(vargs, int)); - s += strlen(s); - break; - case 's': - p = va_arg(vargs, char*); - i = strlen(p); - if (n > 0 && i > n) - i = n; - Py_MEMCPY(s, p, i); - s += i; - break; - case 'p': - sprintf(s, "%p", va_arg(vargs, void*)); - /* %p is ill-defined: ensure leading 0x. */ - if (s[1] == 'X') - s[1] = 'x'; - else if (s[1] != 'x') { - memmove(s+2, s, strlen(s)+1); - s[0] = '0'; - s[1] = 'x'; - } - s += strlen(s); - break; - case '%': - *s++ = '%'; - break; - default: - strcpy(s, p); - s += strlen(s); - goto end; - } - } else - *s++ = *f; - } + else if (size_tflag) + sprintf(s, "%" PY_FORMAT_SIZE_T "u", + va_arg(vargs, size_t)); + else + sprintf(s, "%u", + va_arg(vargs, unsigned int)); + s += strlen(s); + break; + case 'i': + sprintf(s, "%i", va_arg(vargs, int)); + s += strlen(s); + break; + case 'x': + sprintf(s, "%x", va_arg(vargs, int)); + s += strlen(s); + break; + case 's': + p = va_arg(vargs, char*); + i = strlen(p); + if (n > 0 && i > n) + i = n; + Py_MEMCPY(s, p, i); + s += i; + break; + case 'p': + sprintf(s, "%p", va_arg(vargs, void*)); + /* %p is ill-defined: ensure leading 0x. 
*/ + if (s[1] == 'X') + s[1] = 'x'; + else if (s[1] != 'x') { + memmove(s+2, s, strlen(s)+1); + s[0] = '0'; + s[1] = 'x'; + } + s += strlen(s); + break; + case '%': + *s++ = '%'; + break; + default: + strcpy(s, p); + s += strlen(s); + goto end; + } + } else + *s++ = *f; + } end: - _PyString_Resize(&string, s - PyString_AS_STRING(string)); - return string; + if (_PyString_Resize(&string, s - PyString_AS_STRING(string))) + return NULL; + return string; } PyObject * PyString_FromFormat(const char *format, ...) { - PyObject* ret; - va_list vargs; + PyObject* ret; + va_list vargs; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, format); + va_start(vargs, format); #else - va_start(vargs); + va_start(vargs); #endif - ret = PyString_FromFormatV(format, vargs); - va_end(vargs); - return ret; + ret = PyString_FromFormatV(format, vargs); + va_end(vargs); + return ret; } diff --git a/pypy/module/cpyext/src/structseq.c b/pypy/module/cpyext/src/structseq.c --- a/pypy/module/cpyext/src/structseq.c +++ b/pypy/module/cpyext/src/structseq.c @@ -175,32 +175,33 @@ if (min_len != max_len) { if (len < min_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes an at least %zd-sequence (%zd-sequence given)", - type->tp_name, min_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes an at least %zd-sequence (%zd-sequence given)", + type->tp_name, min_len, len); + Py_DECREF(arg); + return NULL; } if (len > max_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes an at most %zd-sequence (%zd-sequence given)", - type->tp_name, max_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes an at most %zd-sequence (%zd-sequence given)", + type->tp_name, max_len, len); + Py_DECREF(arg); + return NULL; } } else { if (len != min_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes a %zd-sequence (%zd-sequence given)", - type->tp_name, min_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes a %zd-sequence (%zd-sequence given)", + type->tp_name, min_len, len); + 
Py_DECREF(arg); + return NULL; } } res = (PyStructSequence*) PyStructSequence_New(type); if (res == NULL) { + Py_DECREF(arg); return NULL; } for (i = 0; i < len; ++i) { diff --git a/pypy/module/cpyext/src/sysmodule.c b/pypy/module/cpyext/src/sysmodule.c --- a/pypy/module/cpyext/src/sysmodule.c +++ b/pypy/module/cpyext/src/sysmodule.c @@ -100,4 +100,3 @@ sys_write("stderr", stderr, format, va); va_end(va); } - diff --git a/pypy/module/cpyext/src/varargwrapper.c b/pypy/module/cpyext/src/varargwrapper.c --- a/pypy/module/cpyext/src/varargwrapper.c +++ b/pypy/module/cpyext/src/varargwrapper.c @@ -1,21 +1,25 @@ #include <Python.h> #include <stdarg.h> -PyObject * PyTuple_Pack(Py_ssize_t size, ...) +PyObject * +PyTuple_Pack(Py_ssize_t n, ...) { - va_list ap; - PyObject *cur, *tuple; - int i; + Py_ssize_t i; + PyObject *o; + PyObject *result; + va_list vargs; - tuple = PyTuple_New(size); - va_start(ap, size); - for (i = 0; i < size; i++) { - cur = va_arg(ap, PyObject*); - Py_INCREF(cur); - if (PyTuple_SetItem(tuple, i, cur) < 0) + va_start(vargs, n); + result = PyTuple_New(n); + if (result == NULL) + return NULL; + for (i = 0; i < n; i++) { + o = va_arg(vargs, PyObject *); + Py_INCREF(o); + if (PyTuple_SetItem(result, i, o) < 0) return NULL; } - va_end(ap); - return tuple; + va_end(vargs); + return result; } diff --git a/pypy/module/cpyext/structmember.py b/pypy/module/cpyext/structmember.py --- a/pypy/module/cpyext/structmember.py +++ b/pypy/module/cpyext/structmember.py @@ -10,7 +10,7 @@ PyString_FromString, PyString_FromStringAndSize) from pypy.module.cpyext.floatobject import PyFloat_AsDouble from pypy.module.cpyext.longobject import ( - PyLong_AsLongLong, PyLong_AsUnsignedLongLong) + PyLong_AsLongLong, PyLong_AsUnsignedLongLong, PyLong_AsSsize_t) from pypy.module.cpyext.typeobjectdefs import PyMemberDef from pypy.rlib.unroll import unrolling_iterable @@ -28,6 +28,7 @@ (T_DOUBLE, rffi.DOUBLE, PyFloat_AsDouble), (T_LONGLONG, rffi.LONGLONG, PyLong_AsLongLong), (T_ULONGLONG, rffi.ULONGLONG, 
PyLong_AsUnsignedLongLong), + (T_PYSSIZET, rffi.SSIZE_T, PyLong_AsSsize_t), ]) diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -18,6 +18,7 @@ T_OBJECT_EX = 16 T_LONGLONG = 17 T_ULONGLONG = 18 +T_PYSSIZET = 19 READONLY = RO = 1 READ_RESTRICTED = 2 diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1405,13 +1405,6 @@ """ raise NotImplementedError -@cpython_api([Py_ssize_t], PyObject) -def PyLong_FromSsize_t(space, v): - """Return a new PyLongObject object from a C Py_ssize_t, or - NULL on failure. - """ - raise NotImplementedError - @cpython_api([rffi.SIZE_T], PyObject) def PyLong_FromSize_t(space, v): """Return a new PyLongObject object from a C size_t, or @@ -1431,14 +1424,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError -@cpython_api([PyObject], Py_ssize_t, error=-1) -def PyLong_AsSsize_t(space, pylong): - """Return a C Py_ssize_t representation of the contents of pylong. If - pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised - and -1 will be returned. - """ - raise NotImplementedError - @cpython_api([PyObject, rffi.CCHARP], rffi.INT_real, error=-1) def PyMapping_DelItemString(space, o, key): """Remove the mapping for object key from the object o. Return -1 on @@ -1980,35 +1965,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError -@cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) -def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): - """Decode length bytes from a UTF-32 encoded buffer string and return the - corresponding Unicode object. errors (if non-NULL) defines the error - handling. It defaults to "strict". 
- - If byteorder is non-NULL, the decoder starts decoding using the given byte - order: - - *byteorder == -1: little endian - *byteorder == 0: native order - *byteorder == 1: big endian - - If *byteorder is zero, and the first four bytes of the input data are a - byte order mark (BOM), the decoder switches to this byte order and the BOM is - not copied into the resulting Unicode string. If *byteorder is -1 or - 1, any byte order mark is copied to the output. - - After completion, *byteorder is set to the current byte order at the end - of input data. - - In a narrow build codepoints outside the BMP will be decoded as surrogate pairs. - - If byteorder is NULL, the codec starts in native order mode. - - Return NULL if an exception was raised by the codec. - """ - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP, Py_ssize_t], PyObject) def PyUnicode_DecodeUTF32Stateful(space, s, size, errors, byteorder, consumed): """If consumed is NULL, behave like PyUnicode_DecodeUTF32(). 
If diff --git a/pypy/module/cpyext/test/_sre.c b/pypy/module/cpyext/test/_sre.c --- a/pypy/module/cpyext/test/_sre.c +++ b/pypy/module/cpyext/test/_sre.c @@ -81,9 +81,6 @@ #define PyObject_DEL(op) PyMem_DEL((op)) #endif -#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) -#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) - /* -------------------------------------------------------------------- */ #if defined(_MSC_VER) @@ -1689,7 +1686,7 @@ if (PyUnicode_Check(string)) { /* unicode strings doesn't always support the buffer interface */ ptr = (void*) PyUnicode_AS_DATA(string); - bytes = PyUnicode_GET_DATA_SIZE(string); + /* bytes = PyUnicode_GET_DATA_SIZE(string); */ size = PyUnicode_GET_SIZE(string); charsize = sizeof(Py_UNICODE); @@ -2601,46 +2598,22 @@ {NULL, NULL} }; -static PyObject* -pattern_getattr(PatternObject* self, char* name) -{ - PyObject* res; - - res = Py_FindMethod(pattern_methods, (PyObject*) self, name); - - if (res) - return res; - - PyErr_Clear(); - - /* attributes */ - if (!strcmp(name, "pattern")) { - Py_INCREF(self->pattern); - return self->pattern; - } - - if (!strcmp(name, "flags")) - return Py_BuildValue("i", self->flags); - - if (!strcmp(name, "groups")) - return Py_BuildValue("i", self->groups); - - if (!strcmp(name, "groupindex") && self->groupindex) { - Py_INCREF(self->groupindex); - return self->groupindex; - } - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; -} +#define PAT_OFF(x) offsetof(PatternObject, x) +static PyMemberDef pattern_members[] = { + {"pattern", T_OBJECT, PAT_OFF(pattern), READONLY}, + {"flags", T_INT, PAT_OFF(flags), READONLY}, + {"groups", T_PYSSIZET, PAT_OFF(groups), READONLY}, + {"groupindex", T_OBJECT, PAT_OFF(groupindex), READONLY}, + {NULL} /* Sentinel */ +}; statichere PyTypeObject Pattern_Type = { PyObject_HEAD_INIT(NULL) 0, "_" SRE_MODULE ".SRE_Pattern", sizeof(PatternObject), sizeof(SRE_CODE), (destructor)pattern_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)pattern_getattr, 
/*tp_getattr*/ + 0, /* tp_print */ + 0, /* tp_getattr */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ @@ -2653,12 +2626,16 @@ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ - Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ pattern_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(PatternObject, weakreflist), /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + pattern_methods, /* tp_methods */ + pattern_members, /* tp_members */ }; static int _validate(PatternObject *self); /* Forward */ @@ -2767,7 +2744,7 @@ #if defined(VVERBOSE) #define VTRACE(v) printf v #else -#define VTRACE(v) +#define VTRACE(v) do {} while(0) /* do nothing */ #endif /* Report failure */ @@ -2970,13 +2947,13 @@ <1=skip> <2=flags> <3=min> <4=max>; If SRE_INFO_PREFIX or SRE_INFO_CHARSET is in the flags, more follows. */ - SRE_CODE flags, min, max, i; + SRE_CODE flags, i; SRE_CODE *newcode; GET_SKIP; newcode = code+skip-1; GET_ARG; flags = arg; - GET_ARG; min = arg; - GET_ARG; max = arg; + GET_ARG; /* min */ + GET_ARG; /* max */ /* Check that only valid flags are present */ if ((flags & ~(SRE_INFO_PREFIX | SRE_INFO_LITERAL | @@ -2992,9 +2969,9 @@ FAIL; /* Validate the prefix */ if (flags & SRE_INFO_PREFIX) { - SRE_CODE prefix_len, prefix_skip; + SRE_CODE prefix_len; GET_ARG; prefix_len = arg; - GET_ARG; prefix_skip = arg; + GET_ARG; /* prefix skip */ /* Here comes the prefix string */ if (code+prefix_len < code || code+prefix_len > newcode) FAIL; @@ -3565,7 +3542,7 @@ #endif } -static PyMethodDef match_methods[] = { +static struct PyMethodDef match_methods[] = { {"group", (PyCFunction) match_group, METH_VARARGS}, {"start", (PyCFunction) match_start, METH_VARARGS}, {"end", (PyCFunction) match_end, METH_VARARGS}, @@ -3578,80 +3555,90 @@ {NULL, NULL} }; -static PyObject* -match_getattr(MatchObject* self, char* name) +static PyObject * +match_lastindex_get(MatchObject *self) { - 
PyObject* res; - - res = Py_FindMethod(match_methods, (PyObject*) self, name); - if (res) - return res; - - PyErr_Clear(); - - if (!strcmp(name, "lastindex")) { - if (self->lastindex >= 0) - return Py_BuildValue("i", self->lastindex); - Py_INCREF(Py_None); - return Py_None; + if (self->lastindex >= 0) + return Py_BuildValue("i", self->lastindex); + Py_INCREF(Py_None); + return Py_None; +} + +static PyObject * +match_lastgroup_get(MatchObject *self) +{ + if (self->pattern->indexgroup && self->lastindex >= 0) { + PyObject* result = PySequence_GetItem( + self->pattern->indexgroup, self->lastindex + ); + if (result) + return result; + PyErr_Clear(); } - - if (!strcmp(name, "lastgroup")) { - if (self->pattern->indexgroup && self->lastindex >= 0) { - PyObject* result = PySequence_GetItem( - self->pattern->indexgroup, self->lastindex - ); - if (result) - return result; - PyErr_Clear(); - } - Py_INCREF(Py_None); - return Py_None; - } - - if (!strcmp(name, "string")) { - if (self->string) { - Py_INCREF(self->string); - return self->string; - } else { - Py_INCREF(Py_None); - return Py_None; - } - } - - if (!strcmp(name, "regs")) { - if (self->regs) { - Py_INCREF(self->regs); - return self->regs; - } else - return match_regs(self); - } - - if (!strcmp(name, "re")) { - Py_INCREF(self->pattern); - return (PyObject*) self->pattern; - } - - if (!strcmp(name, "pos")) - return Py_BuildValue("i", self->pos); - - if (!strcmp(name, "endpos")) - return Py_BuildValue("i", self->endpos); - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; + Py_INCREF(Py_None); + return Py_None; } +static PyObject * +match_regs_get(MatchObject *self) +{ + if (self->regs) { + Py_INCREF(self->regs); + return self->regs; + } else + return match_regs(self); +} + +static PyGetSetDef match_getset[] = { + {"lastindex", (getter)match_lastindex_get, (setter)NULL}, + {"lastgroup", (getter)match_lastgroup_get, (setter)NULL}, + {"regs", (getter)match_regs_get, (setter)NULL}, + {NULL} +}; + +#define 
MATCH_OFF(x) offsetof(MatchObject, x) +static PyMemberDef match_members[] = { + {"string", T_OBJECT, MATCH_OFF(string), READONLY}, + {"re", T_OBJECT, MATCH_OFF(pattern), READONLY}, + {"pos", T_PYSSIZET, MATCH_OFF(pos), READONLY}, + {"endpos", T_PYSSIZET, MATCH_OFF(endpos), READONLY}, + {NULL} +}; + + /* FIXME: implement setattr("string", None) as a special case (to detach the associated string, if any */ -statichere PyTypeObject Match_Type = { - PyObject_HEAD_INIT(NULL) - 0, "_" SRE_MODULE ".SRE_Match", +static PyTypeObject Match_Type = { + PyVarObject_HEAD_INIT(NULL, 0) + "_" SRE_MODULE ".SRE_Match", sizeof(MatchObject), sizeof(Py_ssize_t), - (destructor)match_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)match_getattr /*tp_getattr*/ + (destructor)match_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + match_methods, /* tp_methods */ + match_members, /* tp_members */ + match_getset, /* tp_getset */ }; static PyObject* @@ -3800,34 +3787,42 @@ {NULL, NULL} }; -static PyObject* -scanner_getattr(ScannerObject* self, char* name) -{ - PyObject* res; - - res = Py_FindMethod(scanner_methods, (PyObject*) self, name); - if (res) - return res; - - PyErr_Clear(); - - /* attributes */ - if (!strcmp(name, "pattern")) { - Py_INCREF(self->pattern); - return self->pattern; - } - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; -} +#define SCAN_OFF(x) offsetof(ScannerObject, x) +static PyMemberDef scanner_members[] = { + {"pattern", T_OBJECT, SCAN_OFF(pattern), READONLY}, + {NULL} /* Sentinel */ +}; 
statichere PyTypeObject Scanner_Type = { PyObject_HEAD_INIT(NULL) 0, "_" SRE_MODULE ".SRE_Scanner", sizeof(ScannerObject), 0, (destructor)scanner_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)scanner_getattr, /*tp_getattr*/ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_reserved */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + scanner_methods, /* tp_methods */ + scanner_members, /* tp_members */ + 0, /* tp_getset */ }; static PyObject* @@ -3879,8 +3874,9 @@ PyObject* x; /* Patch object types */ - Pattern_Type.ob_type = Match_Type.ob_type = - Scanner_Type.ob_type = &PyType_Type; + if (PyType_Ready(&Pattern_Type) || PyType_Ready(&Match_Type) || + PyType_Ready(&Scanner_Type)) + return; m = Py_InitModule("_" SRE_MODULE, _functions); if (m == NULL) diff --git a/pypy/module/cpyext/test/array.c b/pypy/module/cpyext/test/array.c --- a/pypy/module/cpyext/test/array.c +++ b/pypy/module/cpyext/test/array.c @@ -11,13 +11,10 @@ #include <stddef.h> #else /* !STDC_HEADERS */ #ifdef HAVE_SYS_TYPES_H -#include <sys/types.h> /* For size_t */ +#include <sys/types.h> /* For size_t */ #endif /* HAVE_SYS_TYPES_H */ #endif /* !STDC_HEADERS */ -#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) -#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) - struct arrayobject; /* Forward */ /* All possible arraydescr values are defined in the vector "descriptors" @@ -25,18 +22,18 @@ * functions aren't visible yet. 
*/ struct arraydescr { - int typecode; - int itemsize; - PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); - int (*setitem)(struct arrayobject *, Py_ssize_t, PyObject *); + int typecode; + int itemsize; + PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); + int (*setitem)(struct arrayobject *, Py_ssize_t, PyObject *); }; typedef struct arrayobject { - PyObject_VAR_HEAD - char *ob_item; - Py_ssize_t allocated; - struct arraydescr *ob_descr; - PyObject *weakreflist; /* List of weak references */ + PyObject_VAR_HEAD + char *ob_item; + Py_ssize_t allocated; + struct arraydescr *ob_descr; + PyObject *weakreflist; /* List of weak references */ } arrayobject; static PyTypeObject Arraytype; @@ -47,49 +44,49 @@ static int array_resize(arrayobject *self, Py_ssize_t newsize) { - char *items; - size_t _new_size; + char *items; + size_t _new_size; - /* Bypass realloc() when a previous overallocation is large enough - to accommodate the newsize. If the newsize is 16 smaller than the - current size, then proceed with the realloc() to shrink the list. - */ + /* Bypass realloc() when a previous overallocation is large enough + to accommodate the newsize. If the newsize is 16 smaller than the + current size, then proceed with the realloc() to shrink the list. + */ - if (self->allocated >= newsize && - Py_SIZE(self) < newsize + 16 && - self->ob_item != NULL) { - Py_SIZE(self) = newsize; - return 0; - } + if (self->allocated >= newsize && + Py_SIZE(self) < newsize + 16 && + self->ob_item != NULL) { + Py_SIZE(self) = newsize; + return 0; + } - /* This over-allocates proportional to the array size, making room - * for additional growth. The over-allocation is mild, but is - * enough to give linear-time amortized behavior over a long - * sequence of appends() in the presence of a poorly-performing - * system realloc(). - * The growth pattern is: 0, 4, 8, 16, 25, 34, 46, 56, 67, 79, ... 
- * Note, the pattern starts out the same as for lists but then - * grows at a smaller rate so that larger arrays only overallocate - * by about 1/16th -- this is done because arrays are presumed to be more - * memory critical. - */ + /* This over-allocates proportional to the array size, making room + * for additional growth. The over-allocation is mild, but is + * enough to give linear-time amortized behavior over a long + * sequence of appends() in the presence of a poorly-performing + * system realloc(). + * The growth pattern is: 0, 4, 8, 16, 25, 34, 46, 56, 67, 79, ... + * Note, the pattern starts out the same as for lists but then + * grows at a smaller rate so that larger arrays only overallocate + * by about 1/16th -- this is done because arrays are presumed to be more + * memory critical. + */ - _new_size = (newsize >> 4) + (Py_SIZE(self) < 8 ? 3 : 7) + newsize; - items = self->ob_item; - /* XXX The following multiplication and division does not optimize away - like it does for lists since the size is not known at compile time */ - if (_new_size <= ((~(size_t)0) / self->ob_descr->itemsize)) - PyMem_RESIZE(items, char, (_new_size * self->ob_descr->itemsize)); - else - items = NULL; - if (items == NULL) { - PyErr_NoMemory(); - return -1; - } - self->ob_item = items; - Py_SIZE(self) = newsize; - self->allocated = _new_size; - return 0; + _new_size = (newsize >> 4) + (Py_SIZE(self) < 8 ? 
3 : 7) + newsize; + items = self->ob_item; + /* XXX The following multiplication and division does not optimize away + like it does for lists since the size is not known at compile time */ + if (_new_size <= ((~(size_t)0) / self->ob_descr->itemsize)) + PyMem_RESIZE(items, char, (_new_size * self->ob_descr->itemsize)); + else + items = NULL; + if (items == NULL) { + PyErr_NoMemory(); + return -1; + } + self->ob_item = items; + Py_SIZE(self) = newsize; + self->allocated = _new_size; + return 0; } /**************************************************************************** @@ -107,308 +104,308 @@ static PyObject * c_getitem(arrayobject *ap, Py_ssize_t i) { - return PyString_FromStringAndSize(&((char *)ap->ob_item)[i], 1); + return PyString_FromStringAndSize(&((char *)ap->ob_item)[i], 1); } static int c_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - char x; - if (!PyArg_Parse(v, "c;array item must be char", &x)) - return -1; - if (i >= 0) - ((char *)ap->ob_item)[i] = x; - return 0; + char x; + if (!PyArg_Parse(v, "c;array item must be char", &x)) + return -1; + if (i >= 0) + ((char *)ap->ob_item)[i] = x; + return 0; } static PyObject * b_getitem(arrayobject *ap, Py_ssize_t i) { - long x = ((char *)ap->ob_item)[i]; - if (x >= 128) - x -= 256; - return PyInt_FromLong(x); + long x = ((char *)ap->ob_item)[i]; + if (x >= 128) + x -= 256; + return PyInt_FromLong(x); } static int b_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - short x; - /* PyArg_Parse's 'b' formatter is for an unsigned char, therefore - must use the next size up that is signed ('h') and manually do - the overflow checking */ - if (!PyArg_Parse(v, "h;array item must be integer", &x)) - return -1; - else if (x < -128) { - PyErr_SetString(PyExc_OverflowError, - "signed char is less than minimum"); - return -1; - } - else if (x > 127) { - PyErr_SetString(PyExc_OverflowError, - "signed char is greater than maximum"); - return -1; - } - if (i >= 0) - ((char *)ap->ob_item)[i] = (char)x; - return 
0; + short x; + /* PyArg_Parse's 'b' formatter is for an unsigned char, therefore + must use the next size up that is signed ('h') and manually do + the overflow checking */ + if (!PyArg_Parse(v, "h;array item must be integer", &x)) + return -1; + else if (x < -128) { + PyErr_SetString(PyExc_OverflowError, + "signed char is less than minimum"); + return -1; + } + else if (x > 127) { + PyErr_SetString(PyExc_OverflowError, + "signed char is greater than maximum"); + return -1; + } + if (i >= 0) + ((char *)ap->ob_item)[i] = (char)x; + return 0; } static PyObject * BB_getitem(arrayobject *ap, Py_ssize_t i) { - long x = ((unsigned char *)ap->ob_item)[i]; - return PyInt_FromLong(x); + long x = ((unsigned char *)ap->ob_item)[i]; + return PyInt_FromLong(x); } static int BB_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned char x; - /* 'B' == unsigned char, maps to PyArg_Parse's 'b' formatter */ - if (!PyArg_Parse(v, "b;array item must be integer", &x)) - return -1; - if (i >= 0) - ((char *)ap->ob_item)[i] = x; - return 0; + unsigned char x; + /* 'B' == unsigned char, maps to PyArg_Parse's 'b' formatter */ + if (!PyArg_Parse(v, "b;array item must be integer", &x)) + return -1; + if (i >= 0) + ((char *)ap->ob_item)[i] = x; + return 0; } #ifdef Py_USING_UNICODE static PyObject * u_getitem(arrayobject *ap, Py_ssize_t i) { - return PyUnicode_FromUnicode(&((Py_UNICODE *) ap->ob_item)[i], 1); + return PyUnicode_FromUnicode(&((Py_UNICODE *) ap->ob_item)[i], 1); } static int u_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - Py_UNICODE *p; - Py_ssize_t len; + Py_UNICODE *p; + Py_ssize_t len; - if (!PyArg_Parse(v, "u#;array item must be unicode character", &p, &len)) - return -1; - if (len != 1) { - PyErr_SetString(PyExc_TypeError, - "array item must be unicode character"); - return -1; - } - if (i >= 0) - ((Py_UNICODE *)ap->ob_item)[i] = p[0]; - return 0; + if (!PyArg_Parse(v, "u#;array item must be unicode character", &p, &len)) + return -1; + if (len != 1) { 
+ PyErr_SetString(PyExc_TypeError, + "array item must be unicode character"); + return -1; + } + if (i >= 0) + ((Py_UNICODE *)ap->ob_item)[i] = p[0]; + return 0; } #endif static PyObject * h_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((short *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((short *)ap->ob_item)[i]); } static int h_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - short x; - /* 'h' == signed short, maps to PyArg_Parse's 'h' formatter */ - if (!PyArg_Parse(v, "h;array item must be integer", &x)) - return -1; - if (i >= 0) - ((short *)ap->ob_item)[i] = x; - return 0; + short x; + /* 'h' == signed short, maps to PyArg_Parse's 'h' formatter */ + if (!PyArg_Parse(v, "h;array item must be integer", &x)) + return -1; + if (i >= 0) + ((short *)ap->ob_item)[i] = x; + return 0; } static PyObject * HH_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((unsigned short *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((unsigned short *)ap->ob_item)[i]); } static int HH_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - int x; - /* PyArg_Parse's 'h' formatter is for a signed short, therefore - must use the next size up and manually do the overflow checking */ - if (!PyArg_Parse(v, "i;array item must be integer", &x)) - return -1; - else if (x < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned short is less than minimum"); - return -1; - } - else if (x > USHRT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned short is greater than maximum"); - return -1; - } - if (i >= 0) - ((short *)ap->ob_item)[i] = (short)x; - return 0; + int x; + /* PyArg_Parse's 'h' formatter is for a signed short, therefore + must use the next size up and manually do the overflow checking */ + if (!PyArg_Parse(v, "i;array item must be integer", &x)) + return -1; + else if (x < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned short is less than minimum"); + return -1; + } + else if (x > USHRT_MAX) { + 
PyErr_SetString(PyExc_OverflowError, + "unsigned short is greater than maximum"); + return -1; + } + if (i >= 0) + ((short *)ap->ob_item)[i] = (short)x; + return 0; } static PyObject * i_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((int *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((int *)ap->ob_item)[i]); } static int i_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - int x; - /* 'i' == signed int, maps to PyArg_Parse's 'i' formatter */ - if (!PyArg_Parse(v, "i;array item must be integer", &x)) - return -1; - if (i >= 0) - ((int *)ap->ob_item)[i] = x; - return 0; + int x; + /* 'i' == signed int, maps to PyArg_Parse's 'i' formatter */ + if (!PyArg_Parse(v, "i;array item must be integer", &x)) + return -1; + if (i >= 0) + ((int *)ap->ob_item)[i] = x; + return 0; } static PyObject * II_getitem(arrayobject *ap, Py_ssize_t i) { - return PyLong_FromUnsignedLong( - (unsigned long) ((unsigned int *)ap->ob_item)[i]); + return PyLong_FromUnsignedLong( + (unsigned long) ((unsigned int *)ap->ob_item)[i]); } static int II_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned long x; - if (PyLong_Check(v)) { - x = PyLong_AsUnsignedLong(v); - if (x == (unsigned long) -1 && PyErr_Occurred()) - return -1; - } - else { - long y; - if (!PyArg_Parse(v, "l;array item must be integer", &y)) - return -1; - if (y < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned int is less than minimum"); - return -1; - } - x = (unsigned long)y; + unsigned long x; + if (PyLong_Check(v)) { + x = PyLong_AsUnsignedLong(v); + if (x == (unsigned long) -1 && PyErr_Occurred()) + return -1; + } + else { + long y; + if (!PyArg_Parse(v, "l;array item must be integer", &y)) + return -1; + if (y < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned int is less than minimum"); + return -1; + } + x = (unsigned long)y; - } - if (x > UINT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned int is greater than maximum"); - return -1; - } + } + if (x > 
UINT_MAX) { + PyErr_SetString(PyExc_OverflowError, + "unsigned int is greater than maximum"); + return -1; + } - if (i >= 0) - ((unsigned int *)ap->ob_item)[i] = (unsigned int)x; - return 0; + if (i >= 0) + ((unsigned int *)ap->ob_item)[i] = (unsigned int)x; + return 0; } static PyObject * l_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong(((long *)ap->ob_item)[i]); + return PyInt_FromLong(((long *)ap->ob_item)[i]); } static int l_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - long x; - if (!PyArg_Parse(v, "l;array item must be integer", &x)) - return -1; - if (i >= 0) - ((long *)ap->ob_item)[i] = x; - return 0; + long x; + if (!PyArg_Parse(v, "l;array item must be integer", &x)) + return -1; + if (i >= 0) + ((long *)ap->ob_item)[i] = x; + return 0; } static PyObject * LL_getitem(arrayobject *ap, Py_ssize_t i) { - return PyLong_FromUnsignedLong(((unsigned long *)ap->ob_item)[i]); + return PyLong_FromUnsignedLong(((unsigned long *)ap->ob_item)[i]); } static int LL_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned long x; - if (PyLong_Check(v)) { - x = PyLong_AsUnsignedLong(v); - if (x == (unsigned long) -1 && PyErr_Occurred()) - return -1; - } - else { - long y; - if (!PyArg_Parse(v, "l;array item must be integer", &y)) - return -1; - if (y < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned long is less than minimum"); - return -1; - } - x = (unsigned long)y; + unsigned long x; + if (PyLong_Check(v)) { + x = PyLong_AsUnsignedLong(v); + if (x == (unsigned long) -1 && PyErr_Occurred()) + return -1; + } + else { + long y; + if (!PyArg_Parse(v, "l;array item must be integer", &y)) + return -1; + if (y < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned long is less than minimum"); + return -1; + } + x = (unsigned long)y; - } - if (x > ULONG_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned long is greater than maximum"); - return -1; - } + } + if (x > ULONG_MAX) { + PyErr_SetString(PyExc_OverflowError, + 
"unsigned long is greater than maximum"); + return -1; + } - if (i >= 0) - ((unsigned long *)ap->ob_item)[i] = x; - return 0; + if (i >= 0) + ((unsigned long *)ap->ob_item)[i] = x; + return 0; } static PyObject * f_getitem(arrayobject *ap, Py_ssize_t i) { - return PyFloat_FromDouble((double) ((float *)ap->ob_item)[i]); + return PyFloat_FromDouble((double) ((float *)ap->ob_item)[i]); } static int f_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - float x; - if (!PyArg_Parse(v, "f;array item must be float", &x)) - return -1; - if (i >= 0) - ((float *)ap->ob_item)[i] = x; - return 0; + float x; + if (!PyArg_Parse(v, "f;array item must be float", &x)) + return -1; + if (i >= 0) + ((float *)ap->ob_item)[i] = x; + return 0; } static PyObject * d_getitem(arrayobject *ap, Py_ssize_t i) { - return PyFloat_FromDouble(((double *)ap->ob_item)[i]); + return PyFloat_FromDouble(((double *)ap->ob_item)[i]); } static int d_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - double x; - if (!PyArg_Parse(v, "d;array item must be float", &x)) - return -1; - if (i >= 0) - ((double *)ap->ob_item)[i] = x; - return 0; + double x; + if (!PyArg_Parse(v, "d;array item must be float", &x)) + return -1; + if (i >= 0) + ((double *)ap->ob_item)[i] = x; + return 0; } /* Description of types */ static struct arraydescr descriptors[] = { - {'c', sizeof(char), c_getitem, c_setitem}, - {'b', sizeof(char), b_getitem, b_setitem}, - {'B', sizeof(char), BB_getitem, BB_setitem}, + {'c', sizeof(char), c_getitem, c_setitem}, + {'b', sizeof(char), b_getitem, b_setitem}, + {'B', sizeof(char), BB_getitem, BB_setitem}, #ifdef Py_USING_UNICODE - {'u', sizeof(Py_UNICODE), u_getitem, u_setitem}, + {'u', sizeof(Py_UNICODE), u_getitem, u_setitem}, #endif - {'h', sizeof(short), h_getitem, h_setitem}, - {'H', sizeof(short), HH_getitem, HH_setitem}, - {'i', sizeof(int), i_getitem, i_setitem}, - {'I', sizeof(int), II_getitem, II_setitem}, - {'l', sizeof(long), l_getitem, l_setitem}, - {'L', sizeof(long), 
LL_getitem, LL_setitem}, - {'f', sizeof(float), f_getitem, f_setitem}, - {'d', sizeof(double), d_getitem, d_setitem}, - {'\0', 0, 0, 0} /* Sentinel */ + {'h', sizeof(short), h_getitem, h_setitem}, + {'H', sizeof(short), HH_getitem, HH_setitem}, + {'i', sizeof(int), i_getitem, i_setitem}, + {'I', sizeof(int), II_getitem, II_setitem}, + {'l', sizeof(long), l_getitem, l_setitem}, + {'L', sizeof(long), LL_getitem, LL_setitem}, + {'f', sizeof(float), f_getitem, f_setitem}, + {'d', sizeof(double), d_getitem, d_setitem}, + {'\0', 0, 0, 0} /* Sentinel */ }; /**************************************************************************** @@ -418,78 +415,78 @@ static PyObject * newarrayobject(PyTypeObject *type, Py_ssize_t size, struct arraydescr *descr) { - arrayobject *op; - size_t nbytes; + arrayobject *op; + size_t nbytes; - if (size < 0) { - PyErr_BadInternalCall(); - return NULL; - } + if (size < 0) { + PyErr_BadInternalCall(); + return NULL; + } - nbytes = size * descr->itemsize; - /* Check for overflow */ - if (nbytes / descr->itemsize != (size_t)size) { - return PyErr_NoMemory(); - } - op = (arrayobject *) type->tp_alloc(type, 0); - if (op == NULL) { - return NULL; - } - op->ob_descr = descr; - op->allocated = size; - op->weakreflist = NULL; - Py_SIZE(op) = size; - if (size <= 0) { - op->ob_item = NULL; - } - else { - op->ob_item = PyMem_NEW(char, nbytes); - if (op->ob_item == NULL) { - Py_DECREF(op); - return PyErr_NoMemory(); - } - } - return (PyObject *) op; + nbytes = size * descr->itemsize; + /* Check for overflow */ + if (nbytes / descr->itemsize != (size_t)size) { + return PyErr_NoMemory(); + } + op = (arrayobject *) type->tp_alloc(type, 0); + if (op == NULL) { + return NULL; + } + op->ob_descr = descr; + op->allocated = size; + op->weakreflist = NULL; + Py_SIZE(op) = size; + if (size <= 0) { + op->ob_item = NULL; + } + else { + op->ob_item = PyMem_NEW(char, nbytes); + if (op->ob_item == NULL) { + Py_DECREF(op); + return PyErr_NoMemory(); + } + } + return 
(PyObject *) op; } static PyObject * getarrayitem(PyObject *op, Py_ssize_t i) { - register arrayobject *ap; - assert(array_Check(op)); - ap = (arrayobject *)op; - assert(i>=0 && iob_descr->getitem)(ap, i); + register arrayobject *ap; + assert(array_Check(op)); + ap = (arrayobject *)op; + assert(i>=0 && iob_descr->getitem)(ap, i); } static int ins1(arrayobject *self, Py_ssize_t where, PyObject *v) { - char *items; - Py_ssize_t n = Py_SIZE(self); - if (v == NULL) { - PyErr_BadInternalCall(); - return -1; - } - if ((*self->ob_descr->setitem)(self, -1, v) < 0) - return -1; + char *items; + Py_ssize_t n = Py_SIZE(self); + if (v == NULL) { + PyErr_BadInternalCall(); + return -1; + } + if ((*self->ob_descr->setitem)(self, -1, v) < 0) + return -1; - if (array_resize(self, n+1) == -1) - return -1; - items = self->ob_item; - if (where < 0) { - where += n; - if (where < 0) - where = 0; - } - if (where > n) - where = n; - /* appends don't need to call memmove() */ - if (where != n) - memmove(items + (where+1)*self->ob_descr->itemsize, - items + where*self->ob_descr->itemsize, - (n-where)*self->ob_descr->itemsize); - return (*self->ob_descr->setitem)(self, where, v); + if (array_resize(self, n+1) == -1) + return -1; + items = self->ob_item; + if (where < 0) { + where += n; + if (where < 0) + where = 0; + } + if (where > n) + where = n; + /* appends don't need to call memmove() */ + if (where != n) + memmove(items + (where+1)*self->ob_descr->itemsize, + items + where*self->ob_descr->itemsize, + (n-where)*self->ob_descr->itemsize); + return (*self->ob_descr->setitem)(self, where, v); } /* Methods */ @@ -497,141 +494,141 @@ static void array_dealloc(arrayobject *op) { - if (op->weakreflist != NULL) - PyObject_ClearWeakRefs((PyObject *) op); - if (op->ob_item != NULL) - PyMem_DEL(op->ob_item); - Py_TYPE(op)->tp_free((PyObject *)op); + if (op->weakreflist != NULL) + PyObject_ClearWeakRefs((PyObject *) op); + if (op->ob_item != NULL) + PyMem_DEL(op->ob_item); + 
Py_TYPE(op)->tp_free((PyObject *)op); } static PyObject * array_richcompare(PyObject *v, PyObject *w, int op) { - arrayobject *va, *wa; - PyObject *vi = NULL; - PyObject *wi = NULL; - Py_ssize_t i, k; - PyObject *res; + arrayobject *va, *wa; + PyObject *vi = NULL; + PyObject *wi = NULL; + Py_ssize_t i, k; + PyObject *res; - if (!array_Check(v) || !array_Check(w)) { - Py_INCREF(Py_NotImplemented); - return Py_NotImplemented; - } + if (!array_Check(v) || !array_Check(w)) { + Py_INCREF(Py_NotImplemented); + return Py_NotImplemented; + } - va = (arrayobject *)v; - wa = (arrayobject *)w; + va = (arrayobject *)v; + wa = (arrayobject *)w; - if (Py_SIZE(va) != Py_SIZE(wa) && (op == Py_EQ || op == Py_NE)) { - /* Shortcut: if the lengths differ, the arrays differ */ - if (op == Py_EQ) - res = Py_False; - else - res = Py_True; - Py_INCREF(res); - return res; - } + if (Py_SIZE(va) != Py_SIZE(wa) && (op == Py_EQ || op == Py_NE)) { + /* Shortcut: if the lengths differ, the arrays differ */ + if (op == Py_EQ) + res = Py_False; + else + res = Py_True; + Py_INCREF(res); + return res; + } - /* Search for the first index where items are different */ - k = 1; - for (i = 0; i < Py_SIZE(va) && i < Py_SIZE(wa); i++) { - vi = getarrayitem(v, i); - wi = getarrayitem(w, i); - if (vi == NULL || wi == NULL) { - Py_XDECREF(vi); - Py_XDECREF(wi); - return NULL; - } - k = PyObject_RichCompareBool(vi, wi, Py_EQ); - if (k == 0) - break; /* Keeping vi and wi alive! */ - Py_DECREF(vi); - Py_DECREF(wi); - if (k < 0) - return NULL; - } + /* Search for the first index where items are different */ + k = 1; + for (i = 0; i < Py_SIZE(va) && i < Py_SIZE(wa); i++) { + vi = getarrayitem(v, i); + wi = getarrayitem(w, i); + if (vi == NULL || wi == NULL) { + Py_XDECREF(vi); + Py_XDECREF(wi); + return NULL; + } + k = PyObject_RichCompareBool(vi, wi, Py_EQ); + if (k == 0) + break; /* Keeping vi and wi alive! 
*/ + Py_DECREF(vi); + Py_DECREF(wi); + if (k < 0) + return NULL; + } - if (k) { - /* No more items to compare -- compare sizes */ - Py_ssize_t vs = Py_SIZE(va); - Py_ssize_t ws = Py_SIZE(wa); - int cmp; - switch (op) { - case Py_LT: cmp = vs < ws; break; - case Py_LE: cmp = vs <= ws; break; - case Py_EQ: cmp = vs == ws; break; - case Py_NE: cmp = vs != ws; break; - case Py_GT: cmp = vs > ws; break; - case Py_GE: cmp = vs >= ws; break; - default: return NULL; /* cannot happen */ - } - if (cmp) - res = Py_True; - else - res = Py_False; - Py_INCREF(res); - return res; - } + if (k) { + /* No more items to compare -- compare sizes */ + Py_ssize_t vs = Py_SIZE(va); + Py_ssize_t ws = Py_SIZE(wa); + int cmp; + switch (op) { + case Py_LT: cmp = vs < ws; break; + case Py_LE: cmp = vs <= ws; break; + case Py_EQ: cmp = vs == ws; break; + case Py_NE: cmp = vs != ws; break; + case Py_GT: cmp = vs > ws; break; + case Py_GE: cmp = vs >= ws; break; + default: return NULL; /* cannot happen */ + } + if (cmp) + res = Py_True; + else + res = Py_False; + Py_INCREF(res); + return res; + } - /* We have an item that differs. First, shortcuts for EQ/NE */ - if (op == Py_EQ) { - Py_INCREF(Py_False); - res = Py_False; - } - else if (op == Py_NE) { - Py_INCREF(Py_True); - res = Py_True; - } - else { - /* Compare the final item again using the proper operator */ - res = PyObject_RichCompare(vi, wi, op); - } - Py_DECREF(vi); - Py_DECREF(wi); - return res; + /* We have an item that differs. 
First, shortcuts for EQ/NE */ + if (op == Py_EQ) { + Py_INCREF(Py_False); + res = Py_False; + } + else if (op == Py_NE) { + Py_INCREF(Py_True); + res = Py_True; + } + else { + /* Compare the final item again using the proper operator */ + res = PyObject_RichCompare(vi, wi, op); + } + Py_DECREF(vi); + Py_DECREF(wi); + return res; } static Py_ssize_t array_length(arrayobject *a) { - return Py_SIZE(a); + return Py_SIZE(a); } static PyObject * array_item(arrayobject *a, Py_ssize_t i) { - if (i < 0 || i >= Py_SIZE(a)) { - PyErr_SetString(PyExc_IndexError, "array index out of range"); - return NULL; - } - return getarrayitem((PyObject *)a, i); + if (i < 0 || i >= Py_SIZE(a)) { + PyErr_SetString(PyExc_IndexError, "array index out of range"); + return NULL; + } + return getarrayitem((PyObject *)a, i); } static PyObject * array_slice(arrayobject *a, Py_ssize_t ilow, Py_ssize_t ihigh) { - arrayobject *np; - if (ilow < 0) - ilow = 0; - else if (ilow > Py_SIZE(a)) - ilow = Py_SIZE(a); - if (ihigh < 0) - ihigh = 0; - if (ihigh < ilow) - ihigh = ilow; - else if (ihigh > Py_SIZE(a)) - ihigh = Py_SIZE(a); - np = (arrayobject *) newarrayobject(&Arraytype, ihigh - ilow, a->ob_descr); - if (np == NULL) - return NULL; - memcpy(np->ob_item, a->ob_item + ilow * a->ob_descr->itemsize, - (ihigh-ilow) * a->ob_descr->itemsize); - return (PyObject *)np; + arrayobject *np; + if (ilow < 0) + ilow = 0; + else if (ilow > Py_SIZE(a)) + ilow = Py_SIZE(a); + if (ihigh < 0) + ihigh = 0; + if (ihigh < ilow) + ihigh = ilow; + else if (ihigh > Py_SIZE(a)) + ihigh = Py_SIZE(a); + np = (arrayobject *) newarrayobject(&Arraytype, ihigh - ilow, a->ob_descr); + if (np == NULL) + return NULL; + memcpy(np->ob_item, a->ob_item + ilow * a->ob_descr->itemsize, + (ihigh-ilow) * a->ob_descr->itemsize); + return (PyObject *)np; } static PyObject * array_copy(arrayobject *a, PyObject *unused) { - return array_slice(a, 0, Py_SIZE(a)); + return array_slice(a, 0, Py_SIZE(a)); } PyDoc_STRVAR(copy_doc, @@ -642,297 
+639,297 @@ static PyObject * array_concat(arrayobject *a, PyObject *bb) { - Py_ssize_t size; - arrayobject *np; - if (!array_Check(bb)) { - PyErr_Format(PyExc_TypeError, - "can only append array (not \"%.200s\") to array", - Py_TYPE(bb)->tp_name); - return NULL; - } + Py_ssize_t size; + arrayobject *np; + if (!array_Check(bb)) { + PyErr_Format(PyExc_TypeError, + "can only append array (not \"%.200s\") to array", + Py_TYPE(bb)->tp_name); + return NULL; + } #define b ((arrayobject *)bb) - if (a->ob_descr != b->ob_descr) { - PyErr_BadArgument(); - return NULL; - } - if (Py_SIZE(a) > PY_SSIZE_T_MAX - Py_SIZE(b)) { - return PyErr_NoMemory(); - } - size = Py_SIZE(a) + Py_SIZE(b); - np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); - if (np == NULL) { - return NULL; - } - memcpy(np->ob_item, a->ob_item, Py_SIZE(a)*a->ob_descr->itemsize); - memcpy(np->ob_item + Py_SIZE(a)*a->ob_descr->itemsize, - b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); - return (PyObject *)np; + if (a->ob_descr != b->ob_descr) { + PyErr_BadArgument(); + return NULL; + } + if (Py_SIZE(a) > PY_SSIZE_T_MAX - Py_SIZE(b)) { + return PyErr_NoMemory(); + } + size = Py_SIZE(a) + Py_SIZE(b); + np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); + if (np == NULL) { + return NULL; + } + memcpy(np->ob_item, a->ob_item, Py_SIZE(a)*a->ob_descr->itemsize); + memcpy(np->ob_item + Py_SIZE(a)*a->ob_descr->itemsize, + b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); + return (PyObject *)np; #undef b } static PyObject * array_repeat(arrayobject *a, Py_ssize_t n) { - Py_ssize_t i; - Py_ssize_t size; - arrayobject *np; - char *p; - Py_ssize_t nbytes; - if (n < 0) - n = 0; - if ((Py_SIZE(a) != 0) && (n > PY_SSIZE_T_MAX / Py_SIZE(a))) { - return PyErr_NoMemory(); - } - size = Py_SIZE(a) * n; - np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); - if (np == NULL) - return NULL; - p = np->ob_item; - nbytes = Py_SIZE(a) * a->ob_descr->itemsize; - for (i = 0; i < n; i++) { - 
memcpy(p, a->ob_item, nbytes); - p += nbytes; - } - return (PyObject *) np; + Py_ssize_t i; + Py_ssize_t size; + arrayobject *np; + char *p; + Py_ssize_t nbytes; + if (n < 0) + n = 0; + if ((Py_SIZE(a) != 0) && (n > PY_SSIZE_T_MAX / Py_SIZE(a))) { + return PyErr_NoMemory(); + } + size = Py_SIZE(a) * n; + np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); + if (np == NULL) + return NULL; + p = np->ob_item; + nbytes = Py_SIZE(a) * a->ob_descr->itemsize; + for (i = 0; i < n; i++) { + memcpy(p, a->ob_item, nbytes); + p += nbytes; + } + return (PyObject *) np; } static int array_ass_slice(arrayobject *a, Py_ssize_t ilow, Py_ssize_t ihigh, PyObject *v) { - char *item; - Py_ssize_t n; /* Size of replacement array */ - Py_ssize_t d; /* Change in size */ + char *item; + Py_ssize_t n; /* Size of replacement array */ + Py_ssize_t d; /* Change in size */ #define b ((arrayobject *)v) - if (v == NULL) - n = 0; - else if (array_Check(v)) { - n = Py_SIZE(b); - if (a == b) { - /* Special case "a[i:j] = a" -- copy b first */ - int ret; - v = array_slice(b, 0, n); - if (!v) - return -1; - ret = array_ass_slice(a, ilow, ihigh, v); - Py_DECREF(v); - return ret; - } - if (b->ob_descr != a->ob_descr) { - PyErr_BadArgument(); - return -1; - } - } - else { - PyErr_Format(PyExc_TypeError, - "can only assign array (not \"%.200s\") to array slice", - Py_TYPE(v)->tp_name); - return -1; - } - if (ilow < 0) - ilow = 0; - else if (ilow > Py_SIZE(a)) - ilow = Py_SIZE(a); - if (ihigh < 0) - ihigh = 0; - if (ihigh < ilow) - ihigh = ilow; - else if (ihigh > Py_SIZE(a)) - ihigh = Py_SIZE(a); - item = a->ob_item; - d = n - (ihigh-ilow); - if (d < 0) { /* Delete -d items */ - memmove(item + (ihigh+d)*a->ob_descr->itemsize, - item + ihigh*a->ob_descr->itemsize, - (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); - Py_SIZE(a) += d; - PyMem_RESIZE(item, char, Py_SIZE(a)*a->ob_descr->itemsize); - /* Can't fail */ - a->ob_item = item; - a->allocated = Py_SIZE(a); - } - else if (d > 0) { /* Insert d 
items */ - PyMem_RESIZE(item, char, - (Py_SIZE(a) + d)*a->ob_descr->itemsize); - if (item == NULL) { - PyErr_NoMemory(); - return -1; - } - memmove(item + (ihigh+d)*a->ob_descr->itemsize, - item + ihigh*a->ob_descr->itemsize, - (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); - a->ob_item = item; - Py_SIZE(a) += d; - a->allocated = Py_SIZE(a); - } - if (n > 0) - memcpy(item + ilow*a->ob_descr->itemsize, b->ob_item, - n*b->ob_descr->itemsize); - return 0; + if (v == NULL) + n = 0; + else if (array_Check(v)) { + n = Py_SIZE(b); + if (a == b) { + /* Special case "a[i:j] = a" -- copy b first */ + int ret; + v = array_slice(b, 0, n); + if (!v) + return -1; + ret = array_ass_slice(a, ilow, ihigh, v); + Py_DECREF(v); + return ret; + } + if (b->ob_descr != a->ob_descr) { + PyErr_BadArgument(); + return -1; + } + } + else { + PyErr_Format(PyExc_TypeError, + "can only assign array (not \"%.200s\") to array slice", + Py_TYPE(v)->tp_name); + return -1; + } + if (ilow < 0) + ilow = 0; + else if (ilow > Py_SIZE(a)) + ilow = Py_SIZE(a); + if (ihigh < 0) + ihigh = 0; + if (ihigh < ilow) + ihigh = ilow; + else if (ihigh > Py_SIZE(a)) + ihigh = Py_SIZE(a); + item = a->ob_item; + d = n - (ihigh-ilow); + if (d < 0) { /* Delete -d items */ + memmove(item + (ihigh+d)*a->ob_descr->itemsize, + item + ihigh*a->ob_descr->itemsize, + (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); + Py_SIZE(a) += d; + PyMem_RESIZE(item, char, Py_SIZE(a)*a->ob_descr->itemsize); + /* Can't fail */ + a->ob_item = item; + a->allocated = Py_SIZE(a); + } + else if (d > 0) { /* Insert d items */ + PyMem_RESIZE(item, char, + (Py_SIZE(a) + d)*a->ob_descr->itemsize); + if (item == NULL) { + PyErr_NoMemory(); + return -1; + } + memmove(item + (ihigh+d)*a->ob_descr->itemsize, + item + ihigh*a->ob_descr->itemsize, + (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); + a->ob_item = item; + Py_SIZE(a) += d; + a->allocated = Py_SIZE(a); + } + if (n > 0) + memcpy(item + ilow*a->ob_descr->itemsize, b->ob_item, + n*b->ob_descr->itemsize); + 
return 0; #undef b } static int array_ass_item(arrayobject *a, Py_ssize_t i, PyObject *v) { - if (i < 0 || i >= Py_SIZE(a)) { - PyErr_SetString(PyExc_IndexError, - "array assignment index out of range"); - return -1; - } - if (v == NULL) - return array_ass_slice(a, i, i+1, v); - return (*a->ob_descr->setitem)(a, i, v); + if (i < 0 || i >= Py_SIZE(a)) { + PyErr_SetString(PyExc_IndexError, + "array assignment index out of range"); + return -1; + } + if (v == NULL) + return array_ass_slice(a, i, i+1, v); + return (*a->ob_descr->setitem)(a, i, v); } static int setarrayitem(PyObject *a, Py_ssize_t i, PyObject *v) { - assert(array_Check(a)); - return array_ass_item((arrayobject *)a, i, v); + assert(array_Check(a)); + return array_ass_item((arrayobject *)a, i, v); } static int array_iter_extend(arrayobject *self, PyObject *bb) { - PyObject *it, *v; + PyObject *it, *v; - it = PyObject_GetIter(bb); - if (it == NULL) - return -1; + it = PyObject_GetIter(bb); + if (it == NULL) + return -1; - while ((v = PyIter_Next(it)) != NULL) { - if (ins1(self, (int) Py_SIZE(self), v) != 0) { - Py_DECREF(v); - Py_DECREF(it); - return -1; - } - Py_DECREF(v); - } - Py_DECREF(it); - if (PyErr_Occurred()) - return -1; - return 0; + while ((v = PyIter_Next(it)) != NULL) { + if (ins1(self, Py_SIZE(self), v) != 0) { + Py_DECREF(v); + Py_DECREF(it); + return -1; + } + Py_DECREF(v); + } + Py_DECREF(it); + if (PyErr_Occurred()) + return -1; + return 0; } static int array_do_extend(arrayobject *self, PyObject *bb) { - Py_ssize_t size; - char *old_item; + Py_ssize_t size; + char *old_item; - if (!array_Check(bb)) - return array_iter_extend(self, bb); + if (!array_Check(bb)) + return array_iter_extend(self, bb); #define b ((arrayobject *)bb) - if (self->ob_descr != b->ob_descr) { - PyErr_SetString(PyExc_TypeError, - "can only extend with array of same kind"); - return -1; - } - if ((Py_SIZE(self) > PY_SSIZE_T_MAX - Py_SIZE(b)) || - ((Py_SIZE(self) + Py_SIZE(b)) > PY_SSIZE_T_MAX / 
self->ob_descr->itemsize)) { - PyErr_NoMemory(); - return -1; - } - size = Py_SIZE(self) + Py_SIZE(b); - old_item = self->ob_item; - PyMem_RESIZE(self->ob_item, char, size*self->ob_descr->itemsize); - if (self->ob_item == NULL) { - self->ob_item = old_item; - PyErr_NoMemory(); - return -1; - } - memcpy(self->ob_item + Py_SIZE(self)*self->ob_descr->itemsize, - b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); - Py_SIZE(self) = size; - self->allocated = size; + if (self->ob_descr != b->ob_descr) { + PyErr_SetString(PyExc_TypeError, + "can only extend with array of same kind"); + return -1; + } + if ((Py_SIZE(self) > PY_SSIZE_T_MAX - Py_SIZE(b)) || + ((Py_SIZE(self) + Py_SIZE(b)) > PY_SSIZE_T_MAX / self->ob_descr->itemsize)) { + PyErr_NoMemory(); + return -1; + } + size = Py_SIZE(self) + Py_SIZE(b); + old_item = self->ob_item; + PyMem_RESIZE(self->ob_item, char, size*self->ob_descr->itemsize); + if (self->ob_item == NULL) { + self->ob_item = old_item; + PyErr_NoMemory(); + return -1; + } + memcpy(self->ob_item + Py_SIZE(self)*self->ob_descr->itemsize, + b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); + Py_SIZE(self) = size; + self->allocated = size; - return 0; + return 0; #undef b } static PyObject * array_inplace_concat(arrayobject *self, PyObject *bb) { - if (!array_Check(bb)) { - PyErr_Format(PyExc_TypeError, - "can only extend array with array (not \"%.200s\")", - Py_TYPE(bb)->tp_name); - return NULL; - } - if (array_do_extend(self, bb) == -1) - return NULL; - Py_INCREF(self); - return (PyObject *)self; + if (!array_Check(bb)) { + PyErr_Format(PyExc_TypeError, + "can only extend array with array (not \"%.200s\")", + Py_TYPE(bb)->tp_name); + return NULL; + } + if (array_do_extend(self, bb) == -1) + return NULL; + Py_INCREF(self); + return (PyObject *)self; } static PyObject * array_inplace_repeat(arrayobject *self, Py_ssize_t n) { - char *items, *p; - Py_ssize_t size, i; + char *items, *p; + Py_ssize_t size, i; - if (Py_SIZE(self) > 0) { - if (n < 0) - n = 0; - items 
= self->ob_item; - if ((self->ob_descr->itemsize != 0) && - (Py_SIZE(self) > PY_SSIZE_T_MAX / self->ob_descr->itemsize)) { - return PyErr_NoMemory(); - } - size = Py_SIZE(self) * self->ob_descr->itemsize; - if (n == 0) { - PyMem_FREE(items); - self->ob_item = NULL; - Py_SIZE(self) = 0; - self->allocated = 0; - } - else { - if (size > PY_SSIZE_T_MAX / n) { - return PyErr_NoMemory(); - } - PyMem_RESIZE(items, char, n * size); - if (items == NULL) - return PyErr_NoMemory(); - p = items; - for (i = 1; i < n; i++) { - p += size; - memcpy(p, items, size); - } - self->ob_item = items; - Py_SIZE(self) *= n; - self->allocated = Py_SIZE(self); - } - } - Py_INCREF(self); - return (PyObject *)self; + if (Py_SIZE(self) > 0) { + if (n < 0) + n = 0; + items = self->ob_item; + if ((self->ob_descr->itemsize != 0) && + (Py_SIZE(self) > PY_SSIZE_T_MAX / self->ob_descr->itemsize)) { + return PyErr_NoMemory(); + } + size = Py_SIZE(self) * self->ob_descr->itemsize; + if (n == 0) { + PyMem_FREE(items); + self->ob_item = NULL; + Py_SIZE(self) = 0; + self->allocated = 0; + } + else { + if (size > PY_SSIZE_T_MAX / n) { + return PyErr_NoMemory(); + } + PyMem_RESIZE(items, char, n * size); + if (items == NULL) + return PyErr_NoMemory(); + p = items; + for (i = 1; i < n; i++) { + p += size; + memcpy(p, items, size); + } + self->ob_item = items; + Py_SIZE(self) *= n; + self->allocated = Py_SIZE(self); + } + } + Py_INCREF(self); + return (PyObject *)self; } static PyObject * ins(arrayobject *self, Py_ssize_t where, PyObject *v) { - if (ins1(self, where, v) != 0) - return NULL; - Py_INCREF(Py_None); - return Py_None; + if (ins1(self, where, v) != 0) + return NULL; + Py_INCREF(Py_None); + return Py_None; } static PyObject * array_count(arrayobject *self, PyObject *v) { - Py_ssize_t count = 0; - Py_ssize_t i; + Py_ssize_t count = 0; + Py_ssize_t i; - for (i = 0; i < Py_SIZE(self); i++) { - PyObject *selfi = getarrayitem((PyObject *)self, i); - int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); - 
Py_DECREF(selfi); - if (cmp > 0) - count++; - else if (cmp < 0) - return NULL; - } - return PyInt_FromSsize_t(count); + for (i = 0; i < Py_SIZE(self); i++) { + PyObject *selfi = getarrayitem((PyObject *)self, i); + int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); + Py_DECREF(selfi); + if (cmp > 0) + count++; + else if (cmp < 0) + return NULL; + } + return PyInt_FromSsize_t(count); } PyDoc_STRVAR(count_doc, @@ -943,20 +940,20 @@ static PyObject * array_index(arrayobject *self, PyObject *v) { - Py_ssize_t i; + Py_ssize_t i; - for (i = 0; i < Py_SIZE(self); i++) { - PyObject *selfi = getarrayitem((PyObject *)self, i); - int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); - Py_DECREF(selfi); - if (cmp > 0) { - return PyInt_FromLong((long)i); - } - else if (cmp < 0) - return NULL; - } - PyErr_SetString(PyExc_ValueError, "array.index(x): x not in list"); - return NULL; + for (i = 0; i < Py_SIZE(self); i++) { + PyObject *selfi = getarrayitem((PyObject *)self, i); + int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); + Py_DECREF(selfi); + if (cmp > 0) { + return PyInt_FromLong((long)i); + } + else if (cmp < 0) + return NULL; + } + PyErr_SetString(PyExc_ValueError, "array.index(x): x not in list"); + return NULL; } PyDoc_STRVAR(index_doc, @@ -967,38 +964,38 @@ static int array_contains(arrayobject *self, PyObject *v) { - Py_ssize_t i; - int cmp; + Py_ssize_t i; + int cmp; - for (i = 0, cmp = 0 ; cmp == 0 && i < Py_SIZE(self); i++) { - PyObject *selfi = getarrayitem((PyObject *)self, i); - cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); - Py_DECREF(selfi); - } - return cmp; + for (i = 0, cmp = 0 ; cmp == 0 && i < Py_SIZE(self); i++) { + PyObject *selfi = getarrayitem((PyObject *)self, i); + cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); + Py_DECREF(selfi); + } + return cmp; } static PyObject * array_remove(arrayobject *self, PyObject *v) { - int i; + int i; - for (i = 0; i < Py_SIZE(self); i++) { - PyObject *selfi = getarrayitem((PyObject *)self,i); - int cmp = 
PyObject_RichCompareBool(selfi, v, Py_EQ); - Py_DECREF(selfi); - if (cmp > 0) { - if (array_ass_slice(self, i, i+1, - (PyObject *)NULL) != 0) - return NULL; - Py_INCREF(Py_None); - return Py_None; - } - else if (cmp < 0) - return NULL; - } - PyErr_SetString(PyExc_ValueError, "array.remove(x): x not in list"); - return NULL; + for (i = 0; i < Py_SIZE(self); i++) { + PyObject *selfi = getarrayitem((PyObject *)self,i); + int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); + Py_DECREF(selfi); + if (cmp > 0) { + if (array_ass_slice(self, i, i+1, + (PyObject *)NULL) != 0) + return NULL; + Py_INCREF(Py_None); + return Py_None; + } + else if (cmp < 0) + return NULL; + } + PyErr_SetString(PyExc_ValueError, "array.remove(x): x not in list"); + return NULL; } PyDoc_STRVAR(remove_doc, @@ -1009,27 +1006,27 @@ static PyObject * array_pop(arrayobject *self, PyObject *args) { - Py_ssize_t i = -1; - PyObject *v; - if (!PyArg_ParseTuple(args, "|n:pop", &i)) - return NULL; - if (Py_SIZE(self) == 0) { - /* Special-case most common failure cause */ - PyErr_SetString(PyExc_IndexError, "pop from empty array"); - return NULL; - } - if (i < 0) - i += Py_SIZE(self); - if (i < 0 || i >= Py_SIZE(self)) { - PyErr_SetString(PyExc_IndexError, "pop index out of range"); - return NULL; - } - v = getarrayitem((PyObject *)self,i); - if (array_ass_slice(self, i, i+1, (PyObject *)NULL) != 0) { - Py_DECREF(v); - return NULL; - } - return v; + Py_ssize_t i = -1; + PyObject *v; + if (!PyArg_ParseTuple(args, "|n:pop", &i)) + return NULL; + if (Py_SIZE(self) == 0) { + /* Special-case most common failure cause */ + PyErr_SetString(PyExc_IndexError, "pop from empty array"); + return NULL; + } + if (i < 0) + i += Py_SIZE(self); + if (i < 0 || i >= Py_SIZE(self)) { + PyErr_SetString(PyExc_IndexError, "pop index out of range"); + return NULL; + } + v = getarrayitem((PyObject *)self,i); + if (array_ass_slice(self, i, i+1, (PyObject *)NULL) != 0) { + Py_DECREF(v); + return NULL; + } + return v; } 
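The reindented C above implements the observable Python-level semantics of the `array` module: `array_pop` normalizes a negative index before bounds-checking, and the `II_setitem`/`LL_setitem` paths reject out-of-range values for unsigned typecodes with `OverflowError`. A quick sketch of that behavior from the Python side:

```python
from array import array

a = array('I', [1, 2, 3])   # 'I' -> unsigned int, handled by II_getitem/II_setitem
last = a.pop(-1)            # negative index is normalized first, as in array_pop

try:
    a[0] = -1               # rejected by the "less than minimum" overflow check
    overflowed = False
except OverflowError:
    overflowed = True
```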
PyDoc_STRVAR(pop_doc, @@ -1040,10 +1037,10 @@ static PyObject * array_extend(arrayobject *self, PyObject *bb) From noreply at buildbot.pypy.org Thu May 3 14:36:00 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 May 2012 14:36:00 +0200 (CEST) Subject: [pypy-commit] pypy stm-thread: Kill two more files. Message-ID: <20120503123600.E89EC82009@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-thread Changeset: r54885:9f30fa8b60be Date: 2012-05-03 14:33 +0200 http://bitbucket.org/pypy/pypy/changeset/9f30fa8b60be/ Log: Kill two more files. diff --git a/pypy/translator/stm/src_stm/fifo.c b/pypy/translator/stm/src_stm/fifo.c deleted file mode 100644 --- a/pypy/translator/stm/src_stm/fifo.c +++ /dev/null @@ -1,67 +0,0 @@ -/* -*- c-basic-offset: 2 -*- */ - - -/* xxx Direct access to this field. Relies on genc producing always the - same names, but that should be ok. */ -#define NEXT(item) (((struct pypy_pypy_rlib_rstm_Transaction0 *)(item)) \ - ->t_inst__next_transaction) - - -typedef struct { - void *first; - void *last; -} stm_fifo_t; - - -static void fifo_init(stm_fifo_t *fifo) -{ - fifo->first = NULL; - fifo->last = NULL; -} - -static void *fifo_next(void *item) -{ - return NEXT(item); -} - -static void fifo_append(stm_fifo_t *fifo, void *newitem) -{ - NEXT(newitem) = NULL; - if (fifo->last == NULL) - fifo->first = newitem; - else - NEXT(fifo->last) = newitem; - fifo->last = newitem; -} - -static bool_t fifo_is_empty(stm_fifo_t *fifo) -{ - assert((fifo->first == NULL) == (fifo->last == NULL)); - return (fifo->first == NULL); -} - -static void *fifo_popleft(stm_fifo_t *fifo) -{ - void *item = fifo->first; - fifo->first = NEXT(item); - if (fifo->first == NULL) - fifo->last = NULL; - NEXT(item) = NULL; /* ensure the NEXT is cleared, - to avoid spurious keepalives */ - return item; -} - -static void fifo_extend(stm_fifo_t *fifo, void *newitems) -{ - if (fifo->last == NULL) - fifo->first = newitems; - else - NEXT(fifo->last) = newitems; - - while 
(NEXT(newitems) != NULL) - newitems = NEXT(newitems); - - fifo->last = newitems; -} - -#undef NEXT diff --git a/pypy/translator/stm/src_stm/rpyintf.c b/pypy/translator/stm/src_stm/rpyintf.c deleted file mode 100644 --- a/pypy/translator/stm/src_stm/rpyintf.c +++ /dev/null @@ -1,154 +0,0 @@ -/* -*- c-basic-offset: 2 -*- */ - -#include "src_stm/fifo.c" - - -/* this mutex is used to ensure non-conflicting accesses to global - data in run_thread(). */ -static pthread_mutex_t mutex_global = PTHREAD_MUTEX_INITIALIZER; - -/* this lock is acquired if and only if there are no tasks pending, - i.e. the fifo stm_g_pending is empty. */ -static pthread_mutex_t mutex_no_tasks_pending = PTHREAD_MUTEX_INITIALIZER; - -/* some global data put there by run_all_transactions(). */ -static stm_fifo_t stm_g_pending; -static int stm_g_num_threads, stm_g_num_waiting_threads, stm_g_finished; - - -static void* perform_transaction(void *transaction) -{ - void *new_transaction_list; - jmp_buf _jmpbuf; - long counter; - volatile long v_counter = 0; - - setjmp(_jmpbuf); - - begin_transaction(&_jmpbuf); - - counter = v_counter; - v_counter = counter + 1; - - new_transaction_list = pypy_g__stm_run_transaction(transaction, counter); - - commit_transaction(); - - return new_transaction_list; -} - -static void add_list(void *new_transaction_list) -{ - bool_t was_empty; - - if (new_transaction_list == NULL) - return; - - was_empty = fifo_is_empty(&stm_g_pending); - fifo_extend(&stm_g_pending, new_transaction_list); - if (was_empty) - pthread_mutex_unlock(&mutex_no_tasks_pending); -} - - -/* the main function running a thread */ -static void *run_thread(void *ignored) -{ - pthread_mutex_lock(&mutex_global); - pypy_g__stm_thread_starting(); - - while (1) - { - if (fifo_is_empty(&stm_g_pending)) - { - stm_g_num_waiting_threads += 1; - if (stm_g_num_waiting_threads == stm_g_num_threads) - { - stm_g_finished = 1; - pthread_mutex_unlock(&mutex_no_tasks_pending); - } - pthread_mutex_unlock(&mutex_global); - 
- pthread_mutex_lock(&mutex_no_tasks_pending); - pthread_mutex_unlock(&mutex_no_tasks_pending); - - pthread_mutex_lock(&mutex_global); - stm_g_num_waiting_threads -= 1; - if (stm_g_finished) - break; - } - else - { - void *new_transaction_list; - void *transaction = fifo_popleft(&stm_g_pending); - if (fifo_is_empty(&stm_g_pending)) - pthread_mutex_lock(&mutex_no_tasks_pending); - pthread_mutex_unlock(&mutex_global); - - while (1) - { - new_transaction_list = perform_transaction(transaction); - - /* for now, always break out of this loop, - unless 'new_transaction_list' contains precisely one item */ - if (new_transaction_list == NULL) - break; - if (fifo_next(new_transaction_list) != NULL) - break; - - transaction = new_transaction_list; /* single element */ - } - - pthread_mutex_lock(&mutex_global); - add_list(new_transaction_list); - } - } - - pypy_g__stm_thread_stopping(); - pthread_mutex_unlock(&mutex_global); - return NULL; -} - -void stm_run_all_transactions(void *initial_transaction, - long num_threads) -{ - long i; - pthread_t *th = malloc(num_threads * sizeof(pthread_t*)); - if (th == NULL) - { - /* XXX turn into a nice exception */ - fprintf(stderr, "out of memory: too many threads?\n"); - exit(1); - } - - fifo_init(&stm_g_pending); - fifo_append(&stm_g_pending, initial_transaction); - stm_g_num_threads = (int)num_threads; - stm_g_num_waiting_threads = 0; - stm_g_finished = 0; - - for (i=0; i Author: Armin Rigo Branch: stm-thread Changeset: r54886:086878de9950 Date: 2012-05-03 14:34 +0200 http://bitbucket.org/pypy/pypy/changeset/086878de9950/ Log: Add a demo transaction target using the GIL. 
diff --git a/pypy/translator/stm/test/targetdemo2.py b/pypy/translator/stm/test/targetdemo2.py new file mode 100644 --- /dev/null +++ b/pypy/translator/stm/test/targetdemo2.py @@ -0,0 +1,176 @@ +import time +from pypy.module.thread import ll_thread, gil +from pypy.rlib.objectmodel import invoke_around_extcall, we_are_translated + + +class Node: + def __init__(self, value): + self.value = value + self.next = None + +class Global: + NUM_THREADS = 4 + LENGTH = 5000 + USE_MEMORY = False + anchor = Node(-1) +glob = Global() + +def add_at_end_of_chained_list(node, value, threadindex): + x = Node(value) + while node.next: + node = node.next + if glob.USE_MEMORY: + x = Node(value) + if not we_are_translated(): + print threadindex + time.sleep(0.01) + newnode = x + assert node.next is None + node.next = newnode + +def check_chained_list(node): + seen = [0] * (glob.LENGTH+1) + seen[-1] = glob.NUM_THREADS + errors = glob.LENGTH + while node is not None: + value = node.value + #print value + if not (0 <= value < glob.LENGTH): + print "node.value out of bounds:", value + raise AssertionError + seen[value] += 1 + if seen[value] > seen[value-1]: + errors = min(errors, value) + node = node.next + if errors < glob.LENGTH: + value = errors + print "seen[%d] = %d, seen[%d] = %d" % (value-1, seen[value-1], + value, seen[value]) + raise AssertionError + + if seen[glob.LENGTH-1] != glob.NUM_THREADS: + print "seen[LENGTH-1] != NUM_THREADS" + raise AssertionError + print "check ok!" 
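The invariant that `check_chained_list` enforces above is that, while scanning the chain, no value may have been appended more often than its predecessor, and every value `0..LENGTH-1` must end up appearing exactly `NUM_THREADS` times. A standalone sketch of that check with small parameters (the helper names `build` and `check` are local to this example, not part of the target):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def build(values):
    # Chain the values in order behind a dummy anchor node.
    head = Node(-1)
    tail = head
    for v in values:
        tail.next = Node(v)
        tail = tail.next
    return head.next

def check(node, length, num_threads):
    seen = [0] * (length + 1)
    seen[-1] = num_threads      # same sentinel trick as check_chained_list
    while node is not None:
        seen[node.value] += 1
        # A value seen more often than its predecessor means an append
        # was lost or reordered across a transaction boundary.
        if seen[node.value] > seen[node.value - 1]:
            return False
        node = node.next
    return seen[length - 1] == num_threads

# Two "threads" each appended 0, 1, 2 in order; any interleaving that
# preserves per-thread order satisfies the invariant.
ok = check(build([0, 0, 1, 1, 2, 2]), 3, 2)
bad = check(build([0, 1, 1, 2, 2]), 3, 2)   # one append of 0 went missing
```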
+ + +class ThreadRunner(object): + def __init__(self, i): + self.index = i + self.finished_lock = ll_thread.allocate_lock() + self.finished_lock.acquire(True) + + def run(self): + try: + self.really_run() + finally: + self.finished_lock.release() + + def really_run(self): + for value in range(glob.LENGTH): + add_at_end_of_chained_list(glob.anchor, value, self.index) + gil.do_yield_thread() + +# ____________________________________________________________ +# bah, we are really missing an RPython interface to threads + +class Bootstrapper(object): + # The following lock is held whenever the fields + # 'bootstrapper.w_callable' and 'bootstrapper.args' are in use. + lock = None + args = None + + @staticmethod + def setup(): + if bootstrapper.lock is None: + bootstrapper.lock = ll_thread.allocate_lock() + + @staticmethod + def reinit(): + bootstrapper.lock = None + bootstrapper.args = None + + def _freeze_(self): + self.reinit() + return False + + @staticmethod + def bootstrap(): + # Note that when this runs, we already hold the GIL. This is ensured + # by rffi's callback mecanism: we are a callback for the + # c_thread_start() external function. + ll_thread.gc_thread_start() + args = bootstrapper.args + bootstrapper.release() + # run! + try: + args.run() + finally: + ll_thread.gc_thread_die() + + @staticmethod + def acquire(args): + # If the previous thread didn't start yet, wait until it does. + # Note that bootstrapper.lock must be a regular lock, not a NOAUTO + # lock, because the GIL must be released while we wait. + bootstrapper.lock.acquire(True) + bootstrapper.args = args + + @staticmethod + def release(): + # clean up 'bootstrapper' to make it ready for the next + # start_new_thread() and release the lock to tell that there + # isn't any bootstrapping thread left. 
+ bootstrapper.args = None + bootstrapper.lock.release() + +bootstrapper = Bootstrapper() + +def setup_threads(): + #space.threadlocals.setup_threads(space) + bootstrapper.setup() + invoke_around_extcall(gil.before_external_call, gil.after_external_call) + +def start_thread(args): + bootstrapper.acquire(args) + try: + ll_thread.gc_thread_prepare() # (this has no effect any more) + ident = ll_thread.start_new_thread(bootstrapper.bootstrap, ()) + except Exception, e: + bootstrapper.release() # normally called by the new thread + raise + return ident + +# __________ Entry point __________ + +def entry_point(argv): + print "hello 2nd world" + if len(argv) > 1: + glob.NUM_THREADS = int(argv[1]) + if len(argv) > 2: + glob.LENGTH = int(argv[2]) + if len(argv) > 3: + glob.USE_MEMORY = bool(int(argv[3])) + # + setup_threads() + # + locks = [] + for i in range(glob.NUM_THREADS): + threadrunner = ThreadRunner(i) + start_thread(threadrunner) + locks.append(threadrunner.finished_lock) + for lock in locks: + lock.acquire(True) + # + check_chained_list(glob.anchor.next) + # + return 0 + +# _____ Define and setup target ___ + +def target(*args): + return entry_point, None + +if __name__ == '__main__': + import sys + entry_point(sys.argv) From noreply at buildbot.pypy.org Thu May 3 17:09:22 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 3 May 2012 17:09:22 +0200 (CEST) Subject: [pypy-commit] pypy default: Remove the distinction between handle() and really_handle(), Message-ID: <20120503150922.2D52182009@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r54887:b7558f5630d6 Date: 2012-05-03 17:08 +0200 http://bitbucket.org/pypy/pypy/changeset/b7558f5630d6/ Log: Remove the distinction between handle() and really_handle(), which was there with the comment "JIT hack" dating back to the old JIT in 2007. 
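The `Bootstrapper` in the targetdemo2 diff above hands its `args` object to a newly spawned thread through a single shared slot, serialized by a lock that the parent acquires and the child releases once it has copied the arguments out. The handshake can be sketched with the plain `threading` module; this is a simplified, hypothetical version (the RPython original must additionally manage the GIL and the GC thread-start/thread-die hooks):

```python
import threading

class Bootstrapper(object):
    # Single shared hand-off slot: the lock is held from acquire() until the
    # child thread has copied 'args' out, so concurrent start_thread() calls
    # are serialized.
    lock = threading.Lock()
    args = None

    @staticmethod
    def acquire(args):
        Bootstrapper.lock.acquire()
        Bootstrapper.args = args

    @staticmethod
    def release():
        Bootstrapper.args = None
        Bootstrapper.lock.release()

    @staticmethod
    def bootstrap():
        args = Bootstrapper.args
        Bootstrapper.release()      # slot is free: the next start may proceed
        args.run()

def start_thread(args):
    # 'args' is any object with a run() method, like ThreadRunner above.
    Bootstrapper.acquire(args)
    try:
        thread = threading.Thread(target=Bootstrapper.bootstrap)
        thread.start()
    except Exception:
        Bootstrapper.release()      # normally released by the new thread
        raise
    return thread
```

The try/except mirrors the original: if spawning the thread fails, the parent must release the slot itself, since no child exists to do it.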
diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -1290,10 +1290,6 @@ w(self.valuestackdepth)]) def handle(self, frame, unroller): - next_instr = self.really_handle(frame, unroller) # JIT hack - return r_uint(next_instr) - - def really_handle(self, frame, unroller): """ Purely abstract method """ raise NotImplementedError @@ -1305,17 +1301,17 @@ _opname = 'SETUP_LOOP' handling_mask = SBreakLoop.kind | SContinueLoop.kind - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): if isinstance(unroller, SContinueLoop): # re-push the loop block without cleaning up the value stack, # and jump to the beginning of the loop, stored in the # exception's argument frame.append_block(self) - return unroller.jump_to + return r_uint(unroller.jump_to) else: # jump to the end of the loop self.cleanupstack(frame) - return self.handlerposition + return r_uint(self.handlerposition) class ExceptBlock(FrameBlock): @@ -1325,7 +1321,7 @@ _opname = 'SETUP_EXCEPT' handling_mask = SApplicationException.kind - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): # push the exception to the value stack for inspection by the # exception handler (the code after the except:) self.cleanupstack(frame) @@ -1340,7 +1336,7 @@ frame.pushvalue(operationerr.get_w_value(frame.space)) frame.pushvalue(operationerr.w_type) frame.last_exception = operationerr - return self.handlerposition # jump to the handler + return r_uint(self.handlerposition) # jump to the handler class FinallyBlock(FrameBlock): @@ -1361,7 +1357,7 @@ frame.pushvalue(frame.space.w_None) frame.pushvalue(frame.space.w_None) - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): # any abnormal reason for unrolling a finally: triggers the end of # the block unrolling and the entering the finally: handler. # see comments in cleanup(). 
@@ -1369,18 +1365,18 @@ frame.pushvalue(frame.space.wrap(unroller)) frame.pushvalue(frame.space.w_None) frame.pushvalue(frame.space.w_None) - return self.handlerposition # jump to the handler + return r_uint(self.handlerposition) # jump to the handler class WithBlock(FinallyBlock): _immutable_ = True - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): if (frame.space.full_exceptions and isinstance(unroller, SApplicationException)): unroller.operr.normalize_exception(frame.space) - return FinallyBlock.really_handle(self, frame, unroller) + return FinallyBlock.handle(self, frame, unroller) block_classes = {'SETUP_LOOP': LoopBlock, 'SETUP_EXCEPT': ExceptBlock, From noreply at buildbot.pypy.org Thu May 3 18:10:41 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 3 May 2012 18:10:41 +0200 (CEST) Subject: [pypy-commit] pyrepl default: use reverse=True instead of negative comparator Message-ID: <20120503161041.E25C282009@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r188:a6cab4ec445c Date: 2012-05-03 17:50 +0200 http://bitbucket.org/pypy/pyrepl/changeset/a6cab4ec445c/ Log: use reverse=True instead of negative comparator diff --git a/pyrepl/module_lister.py b/pyrepl/module_lister.py --- a/pyrepl/module_lister.py +++ b/pyrepl/module_lister.py @@ -45,13 +45,7 @@ def _make_module_list(): import imp suffs = [x[0] for x in imp.get_suffixes() if x[0] != '.pyc'] - def compare(x, y): - c = -cmp(len(x), len(y)) - if c: - return c - else: - return -cmp(x, y) - suffs.sort(compare) + suffs.sort(reverse=True) _packages[''] = list(sys.builtin_module_names) for dir in sys.path: if dir == '': From noreply at buildbot.pypy.org Thu May 3 18:10:43 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 3 May 2012 18:10:43 +0200 (CEST) Subject: [pypy-commit] pyrepl default: split keymap creation from EventQueue Message-ID: <20120503161043.0CD788208A@wyvern.cs.uni-duesseldorf.de> Author: Ronny 
Pfannschmidt Branch: Changeset: r189:3ca3f2b26ee3 Date: 2012-05-03 17:58 +0200 http://bitbucket.org/pypy/pyrepl/changeset/3ca3f2b26ee3/ Log: split keymap creation from EventQueue diff --git a/pyrepl/unix_eventqueue.py b/pyrepl/unix_eventqueue.py --- a/pyrepl/unix_eventqueue.py +++ b/pyrepl/unix_eventqueue.py @@ -52,18 +52,29 @@ "up" : "kcuu1", } -class EventQueue(object): - def __init__(self, fd, encoding): - our_keycodes = {} - for key, tiname in _keynames.items(): - keycode = curses.tigetstr(tiname) - trace('key {key} tiname {tiname} keycode {keycode!r}', **locals()) - if keycode: - our_keycodes[keycode] = key - if os.isatty(fd): - our_keycodes[tcgetattr(fd)[6][VERASE]] = unicode('backspace') - self.k = self.ck = keymap.compile_keymap(our_keycodes) - trace('keymap {k!r}', k=self.k) +def general_keycodes(): + keycodes = {} + for key, tiname in _keynames.items(): + keycode = curses.tigetstr(tiname) + trace('key {key} tiname {tiname} keycode {keycode!r}', **locals()) + if keycode: + keycodes[keycode] = key + return keycodes + + + +def EventQueue(fd, encoding): + keycodes = general_keycodes() + if os.isatty(fd): + backspace = tcgetattr(fd)[6][VERASE] + keycodes[backspace] = unicode('backspace') + k = keymap.compile_keymap(keycodes) + trace('keymap {k!r}', k=k) + return EncodedQueue(k, encoding) + +class EncodedQueue(object): + def __init__(self, keymap, encoding): + self.k = self.ck = keymap self.events = [] self.buf = [] self.encoding=encoding diff --git a/testing/test_unix_reader.py b/testing/test_unix_reader.py --- a/testing/test_unix_reader.py +++ b/testing/test_unix_reader.py @@ -1,9 +1,8 @@ -from pyrepl.unix_eventqueue import EventQueue +from pyrepl.unix_eventqueue import EncodedQueue from pyrepl import curses - at pytest.mark.xfail(run=False, reason='wtf segfault') def test_simple(): - q = EventQueue(0, 'utf-8') + q = EncodedQueue({}, 'utf-8') From noreply at buildbot.pypy.org Thu May 3 18:10:44 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: 
Thu, 3 May 2012 18:10:44 +0200 (CEST) Subject: [pypy-commit] pyrepl default: refactor unix eventqueue decoding Message-ID: <20120503161044.3A3C082009@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r190:b2fe479ec33c Date: 2012-05-03 18:10 +0200 http://bitbucket.org/pypy/pyrepl/changeset/b2fe479ec33c/ Log: refactor unix eventqueue decoding diff --git a/pyrepl/unix_eventqueue.py b/pyrepl/unix_eventqueue.py --- a/pyrepl/unix_eventqueue.py +++ b/pyrepl/unix_eventqueue.py @@ -76,7 +76,7 @@ def __init__(self, keymap, encoding): self.k = self.ck = keymap self.events = [] - self.buf = [] + self.buf = bytearray() self.encoding=encoding def get(self): @@ -89,32 +89,33 @@ return not self.events def flush_buf(self): - raw = b''.join(self.buf) - self.buf = [] - return raw + old = self.buf + self.buf = bytearray() + return old def insert(self, event): trace('added event {event}', event=event) self.events.append(event) def push(self, char): + self.buf.append(char) if char in self.k: + if self.k is self.ck: + #sanity check, buffer is empty when a special key comes + assert len(self.buf) == 1 k = self.k[char] trace('found map {k!r}', k=k) - self.buf.append(char) if isinstance(k, dict): self.k = k else: self.insert(Event('key', k, self.flush_buf())) self.k = self.ck - elif self.buf: - keys = self.flush_buf() - decoded = keys.decode(self.encoding, 'ignore') # XXX surogate? 
- #XXX: incorrect - for c in decoded: - self.insert(Event('key', c, c)) - self.buf = [] + + else: + try: + decoded = bytes(self.buf).decode(self.encoding) + except: + return + + self.insert(Event('key', decoded, self.flush_buf())) self.k = self.ck - self.push(char) - else: - self.insert(Event('key', char.decode(self.encoding), char)) diff --git a/testing/test_unix_reader.py b/testing/test_unix_reader.py --- a/testing/test_unix_reader.py +++ b/testing/test_unix_reader.py @@ -1,3 +1,4 @@ +from __future__ import unicode_literals from pyrepl.unix_eventqueue import EncodedQueue from pyrepl import curses @@ -6,3 +7,11 @@ def test_simple(): q = EncodedQueue({}, 'utf-8') + a = u'\u1234' + b = a.encode('utf-8') + for c in b: + q.push(c) + + event = q.get() + assert q.get() is None + From noreply at buildbot.pypy.org Fri May 4 10:31:11 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 4 May 2012 10:31:11 +0200 (CEST) Subject: [pypy-commit] pypy default: Remove the duplicate entries (the ones which are not in alphabetical Message-ID: <20120504083111.84CB89B6038@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r54888:ef610cf966f1 Date: 2012-05-04 10:30 +0200 http://bitbucket.org/pypy/pypy/changeset/ef610cf966f1/ Log: Remove the duplicate entries (the ones which are not in alphabetical order, my fault). 
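The reworked `EncodedQueue.push()` in the pyrepl diff above accumulates raw bytes in a `bytearray` and retries the decode until a complete multi-byte sequence has arrived. The core idea in isolation looks like this (a sketch only, not the pyrepl API -- the real `push()` also consults the keymap first):

```python
def feed(buf, byte, encoding='utf-8'):
    # Append one raw byte to 'buf' (a bytearray); return the decoded text
    # once the buffer forms a complete character, or None while the
    # multi-byte sequence is still partial.
    buf.append(byte)
    try:
        decoded = bytes(buf).decode(encoding)
    except UnicodeDecodeError:
        # Either an incomplete sequence (keep buffering, like the bare
        # 'return' in the code above) or truly invalid input, which a real
        # implementation would eventually have to flush.
        return None
    del buf[:]
    return decoded
```

The standard library's `codecs.getincrementaldecoder` provides the same buffering behaviour with proper handling of invalid sequences, which is the natural next step for a production version.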
diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1652,8 +1652,6 @@ 'UnicodeTranslateError', 'ValueError', 'ZeroDivisionError', - 'UnicodeEncodeError', - 'UnicodeDecodeError', ] if sys.platform.startswith("win"): From noreply at buildbot.pypy.org Fri May 4 11:10:13 2012 From: noreply at buildbot.pypy.org (mattip) Date: Fri, 4 May 2012 11:10:13 +0200 (CEST) Subject: [pypy-commit] pypy win32-stdlib: merge from default Message-ID: <20120504091013.A77E59B6038@wyvern.cs.uni-duesseldorf.de> Author: Matti Picus Branch: win32-stdlib Changeset: r54889:0518a9199086 Date: 2012-05-04 09:31 +0300 http://bitbucket.org/pypy/pypy/changeset/0518a9199086/ Log: merge from default diff too long, truncating to 10000 out of 14119 lines diff --git a/lib-python/modified-2.7/test/test_peepholer.py b/lib-python/modified-2.7/test/test_peepholer.py --- a/lib-python/modified-2.7/test/test_peepholer.py +++ b/lib-python/modified-2.7/test/test_peepholer.py @@ -145,12 +145,15 @@ def test_binary_subscr_on_unicode(self): # valid code get optimized - asm = dis_single('u"foo"[0]') - self.assertIn("(u'f')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) - asm = dis_single('u"\u0061\uffff"[1]') - self.assertIn("(u'\\uffff')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) + # XXX for now we always disable this optimization + # XXX see CPython's issue5057 + if 0: + asm = dis_single('u"foo"[0]') + self.assertIn("(u'f')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + asm = dis_single('u"\u0061\uffff"[1]') + self.assertIn("(u'\\uffff')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) # invalid code doesn't get optimized # out of range diff --git a/pypy/doc/cppyy.rst b/pypy/doc/cppyy.rst --- a/pypy/doc/cppyy.rst +++ b/pypy/doc/cppyy.rst @@ -21,6 +21,26 @@ .. 
_`llvm`: http://llvm.org/ +Motivation +========== + +The cppyy module offers two unique features, which result in great +performance as well as better functionality and cross-language integration +than would otherwise be possible. +First, cppyy is written in RPython and therefore open to optimizations by the +JIT up until the actual point of call into C++. +This means that there are no conversions necessary between a garbage collected +and a reference counted environment, as is needed for the use of existing +extension modules written or generated for CPython. +It also means that if variables are already unboxed by the JIT, they can be +passed through directly to C++. +Second, Reflex (and cling far more so) adds dynamic features to C++, thus +greatly reducing impedance mismatches between the two languages. +In fact, Reflex is dynamic enough that you could write the runtime bindings +generation in python (as opposed to RPython) and this is used to create very +natural "pythonizations" of the bound code. + + Installation ============ @@ -80,7 +100,7 @@ void SetMyInt(int i) { m_myint = i; } public: - int m_myint; + int m_myint; }; Then, generate the bindings using ``genreflex`` (part of ROOT), and compile the @@ -111,6 +131,123 @@ That's all there is to it! 
+Advanced example +================ +The following snippet of C++ is very contrived, to allow showing that such +pathological code can be handled and to show how certain features play out in +practice:: + + $ cat MyAdvanced.h + #include + + class Base1 { + public: + Base1(int i) : m_i(i) {} + virtual ~Base1() {} + int m_i; + }; + + class Base2 { + public: + Base2(double d) : m_d(d) {} + virtual ~Base2() {} + double m_d; + }; + + class C; + + class Derived : public virtual Base1, public virtual Base2 { + public: + Derived(const std::string& name, int i, double d) : Base1(i), Base2(d), m_name(name) {} + virtual C* gimeC() { return (C*)0; } + std::string m_name; + }; + + Base1* BaseFactory(const std::string& name, int i, double d) { + return new Derived(name, i, d); + } + +This code is still only in a header file, with all functions inline, for +convenience of the example. +If the implementations live in a separate source file or shared library, the +only change needed is to link those in when building the reflection library. + +If you were to run ``genreflex`` like above in the basic example, you will +find that not all classes of interest will be reflected, nor will be the +global factory function. +In particular, ``std::string`` will be missing, since it is not defined in +this header file, but in a header file that is included. +In practical terms, general classes such as ``std::string`` should live in a +core reflection set, but for the moment assume we want to have it in the +reflection library that we are building for this example. + +The ``genreflex`` script can be steered using a so-called `selection file`_, +which is a simple XML file specifying, either explicitly or by using a +pattern, which classes, variables, namespaces, etc. to select from the given +header file. +With the aid of a selection file, a large project can be easily managed: +simply ``#include`` all relevant headers into a single header file that is +handed to ``genreflex``. 
+Then, apply a selection file to pick up all the relevant classes. +For our purposes, the following rather straightforward selection will do +(the name ``lcgdict`` for the root is historical, but required):: + + $ cat MyAdvanced.xml + + + + + + + +.. _`selection file`: http://root.cern.ch/drupal/content/generating-reflex-dictionaries + +Now the reflection info can be generated and compiled:: + + $ genreflex MyAdvanced.h --selection=MyAdvanced.xml + $ g++ -fPIC -rdynamic -O2 -shared -I$ROOTSYS/include MyAdvanced_rflx.cpp -o libAdvExDict.so + +and subsequently be used from PyPy:: + + >>>> import cppyy + >>>> cppyy.load_reflection_info("libAdvExDict.so") + + >>>> d = cppyy.gbl.BaseFactory("name", 42, 3.14) + >>>> type(d) + + >>>> isinstance(d, cppyy.gbl.Base1) + True + >>>> isinstance(d, cppyy.gbl.Base2) + True + >>>> d.m_i, d.m_d + (42, 3.14) + >>>> d.m_name == "name" + True + >>>> + +Again, that's all there is to it! + +A couple of things to note, though. +If you look back at the C++ definition of the ``BaseFactory`` function, +you will see that it declares the return type to be a ``Base1``, yet the +bindings return an object of the actual type ``Derived``? +This choice is made for a couple of reasons. +First, it makes method dispatching easier: if bound objects are always their +most derived type, then it is easy to calculate any offsets, if necessary. +Second, it makes memory management easier: the combination of the type and +the memory address uniquely identifies an object. +That way, it can be recycled and object identity can be maintained if it is +entered as a function argument into C++ and comes back to PyPy as a return +value. +Last, but not least, casting is decidedly unpythonistic. +By always providing the most derived type known, casting becomes unnecessary. +For example, the data member of ``Base2`` is simply directly available. +Note also that the unreflected ``gimeC`` method of ``Derived`` does not +preclude its use. 
+It is only the ``gimeC`` method that is unusable as long as class ``C`` is +unknown to the system. + + Features ======== @@ -160,6 +297,8 @@ * **doc strings**: The doc string of a method or function contains the C++ arguments and return types of all overloads of that name, as applicable. +* **enums**: Are translated as ints with no further checking. + * **functions**: Work as expected and live in their appropriate namespace (which can be the global one, ``cppyy.gbl``). @@ -178,6 +317,9 @@ To select a specific virtual method, do like with normal python classes that override methods: select it from the class that you need, rather than calling the method on the instance. + To select a specific overload, use the __dispatch__ special function, which + takes the name of the desired method and its signature (which can be + obtained from the doc string) as arguments. * **namespaces**: Are represented as python classes. Namespaces are more open-ended than classes, so sometimes initial access may @@ -236,6 +378,9 @@ using classes that themselves are templates (etc.) in the arguments. All classes must already exist in the loaded reflection info. +* **typedefs**: Are simple python references to the actual classes to which + they refer. + * **unary operators**: Are supported if a python equivalent exists, and if the operator is defined in the C++ class. @@ -253,6 +398,107 @@ Only that one specific method can not be used. +Templates +========= + +A bit of special care needs to be taken for the use of templates. +For a templated class to be completely available, it must be guaranteed that +said class is fully instantiated, and hence all executable C++ code is +generated and compiled in. +The easiest way to fulfill that guarantee is by explicit instantiation in the +header file that is handed to ``genreflex``. 
+The following example should make that clear:: + + $ cat MyTemplate.h + #include + + class MyClass { + public: + MyClass(int i = -99) : m_i(i) {} + MyClass(const MyClass& s) : m_i(s.m_i) {} + MyClass& operator=(const MyClass& s) { m_i = s.m_i; return *this; } + ~MyClass() {} + int m_i; + }; + + template class std::vector; + +If you know for certain that all symbols will be linked in from other sources, +you can also declare the explicit template instantiation ``extern``. + +Unfortunately, this is not enough for gcc. +The iterators, if they are going to be used, need to be instantiated as well, +as do the comparison operators on those iterators, as these live in an +internal namespace, rather than in the iterator classes. +One way to handle this, is to deal with this once in a macro, then reuse that +macro for all ``vector`` classes. +Thus, the header above needs this, instead of just the explicit instantiation +of the ``vector``:: + + #define STLTYPES_EXPLICIT_INSTANTIATION_DECL(STLTYPE, TTYPE) \ + template class std::STLTYPE< TTYPE >; \ + template class __gnu_cxx::__normal_iterator >; \ + template class __gnu_cxx::__normal_iterator >;\ + namespace __gnu_cxx { \ + template bool operator==(const std::STLTYPE< TTYPE >::iterator&, \ + const std::STLTYPE< TTYPE >::iterator&); \ + template bool operator!=(const std::STLTYPE< TTYPE >::iterator&, \ + const std::STLTYPE< TTYPE >::iterator&); \ + } + + STLTYPES_EXPLICIT_INSTANTIATION_DECL(vector, MyClass) + +Then, still for gcc, the selection file needs to contain the full hierarchy as +well as the global overloads for comparisons for the iterators:: + + $ cat MyTemplate.xml + + + + + + + + + + + + + +Run the normal ``genreflex`` and compilation steps:: + + $ genreflex MyTemplate.h --selection=MyTemplate.xm + $ g++ -fPIC -rdynamic -O2 -shared -I$ROOTSYS/include MyTemplate_rflx.cpp -o libTemplateDict.so + +Note: this is a dirty corner that clearly could do with some automation, +even if the macro already helps. 
+Such automation is planned. +In fact, in the cling world, the backend can perform the template +instantations and generate the reflection info on the fly, and none of the +above will any longer be necessary. + +Subsequent use should be as expected. +Note the meta-class style of "instantiating" the template:: + + >>>> import cppyy + >>>> cppyy.load_reflection_info("libTemplateDict.so") + >>>> std = cppyy.gbl.std + >>>> MyClass = cppyy.gbl.MyClass + >>>> v = std.vector(MyClass)() + >>>> v += [MyClass(1), MyClass(2), MyClass(3)] + >>>> for m in v: + .... print m.m_i, + .... + 1 2 3 + >>>> + +Other templates work similarly. +The arguments to the template instantiation can either be a string with the +full list of arguments, or the explicit classes. +The latter makes for easier code writing if the classes passed to the +instantiation are themselves templates. + + The fast lane ============= diff --git a/pypy/doc/extending.rst b/pypy/doc/extending.rst --- a/pypy/doc/extending.rst +++ b/pypy/doc/extending.rst @@ -116,13 +116,21 @@ Reflex ====== -This method is only experimental for now, and is being exercised on a branch, -`reflex-support`_, so you will have to build PyPy yourself. +This method is still experimental and is being exercised on a branch, +`reflex-support`_, which adds the `cppyy`_ module. The method works by using the `Reflex package`_ to provide reflection information of the C++ code, which is then used to automatically generate -bindings at runtime, which can then be used from python. +bindings at runtime. +From a python standpoint, there is no difference between generating bindings +at runtime, or having them "statically" generated and available in scripts +or compiled into extension modules: python classes and functions are always +runtime structures, created when a script or module loads. 
+However, if the backend itself is capable of dynamic behavior, it is a much +better functional match to python, allowing tighter integration and more +natural language mappings. Full details are `available here`_. +.. _`cppyy`: cppyy.html .. _`reflex-support`: cppyy.html .. _`Reflex package`: http://root.cern.ch/drupal/content/reflex .. _`available here`: cppyy.html @@ -130,16 +138,33 @@ Pros ---- -If it works, it is mostly automatic, and hence easy in use. -The bindings can make use of direct pointers, in which case the calls are -very fast. +The cppyy module is written in RPython, which makes it possible to keep the +code execution visible to the JIT all the way to the actual point of call into +C++, thus allowing for a very fast interface. +Reflex is currently in use in large software environments in High Energy +Physics (HEP), across many different projects and packages, and its use can be +virtually completely automated in a production environment. +One of its uses in HEP is in providing language bindings for CPython. +Thus, it is possible to use Reflex to have bound code work on both CPython and +on PyPy. +In the medium-term, Reflex will be replaced by `cling`_, which is based on +`llvm`_. +This will affect the backend only; the python-side interface is expected to +remain the same, except that cling adds a lot of dynamic behavior to C++, +enabling further language integration. + +.. _`cling`: http://root.cern.ch/drupal/content/cling +.. _`llvm`: http://llvm.org/ Cons ---- -C++ is a large language, and these bindings are not yet feature-complete. -Although missing features should do no harm if you don't use them, if you do -need a particular feature, it may be necessary to work around it in python -or with a C++ helper function. +C++ is a large language, and cppyy is not yet feature-complete. 
+Still, the experience gained in developing the equivalent bindings for CPython +means that adding missing features is a simple matter of engineering, not a +question of research. +The module is written so that currently missing features should do no harm if +you don't use them, if you do need a particular feature, it may be necessary +to work around it in python or with a C++ helper function. Although Reflex works on various platforms, the bindings with PyPy have only been tested on Linux. diff --git a/pypy/doc/windows.rst b/pypy/doc/windows.rst --- a/pypy/doc/windows.rst +++ b/pypy/doc/windows.rst @@ -24,7 +24,8 @@ translation. Failing that, they will pick the most recent Visual Studio compiler they can find. In addition, the target architecture (32 bits, 64 bits) is automatically selected. A 32 bit build can only be built -using a 32 bit Python and vice versa. +using a 32 bit Python and vice versa. By default pypy is built using the +Multi-threaded DLL (/MD) runtime environment. **Note:** PyPy is currently not supported for 64 bit Windows, and translation will fail in this case. @@ -102,10 +103,12 @@ Download the source code of expat on sourceforge: http://sourceforge.net/projects/expat/ and extract it in the base -directory. Then open the project file ``expat.dsw`` with Visual +directory. Version 2.1.0 is known to pass tests. Then open the project +file ``expat.dsw`` with Visual Studio; follow the instruction for converting the project files, -switch to the "Release" configuration, and build the solution (the -``expat`` project is actually enough for pypy). +switch to the "Release" configuration, reconfigure the runtime for +Multi-threaded DLL (/MD) and build the solution (the ``expat`` project +is actually enough for pypy). Then, copy the file ``win32\bin\release\libexpat.dll`` somewhere in your PATH. 
diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ -304,14 +304,19 @@ # produce compatible pycs. if (self.space.isinstance_w(w_obj, self.space.w_unicode) and self.space.isinstance_w(w_const, self.space.w_unicode)): - unistr = self.space.unicode_w(w_const) - if len(unistr) == 1: - ch = ord(unistr[0]) - else: - ch = 0 - if (ch > 0xFFFF or - (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): - return subs + #unistr = self.space.unicode_w(w_const) + #if len(unistr) == 1: + # ch = ord(unistr[0]) + #else: + # ch = 0 + #if (ch > 0xFFFF or + # (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): + # --XXX-- for now we always disable optimization of + # u'...'[constant] because the tests above are not + # enough to fix issue5057 (CPython has the same + # problem as of April 24, 2012). + # See test_const_fold_unicode_subscr + return subs return ast.Const(w_const, subs.lineno, subs.col_offset) diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -844,7 +844,8 @@ return u"abc"[0] """ counts = self.count_instructions(source) - assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + if 0: # xxx later? 
+ assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} # getitem outside of the BMP should not be optimized source = """def f(): @@ -854,12 +855,20 @@ assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, ops.RETURN_VALUE: 1} + source = """def f(): + return u"\U00012345abcdef"[3] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, + ops.RETURN_VALUE: 1} + monkeypatch.setattr(optimize, "MAXUNICODE", 0xFFFF) source = """def f(): return u"\uE01F"[0] """ counts = self.count_instructions(source) - assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + if 0: # xxx later? + assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} monkeypatch.undo() # getslice is not yet optimized. diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -1290,10 +1290,6 @@ w(self.valuestackdepth)]) def handle(self, frame, unroller): - next_instr = self.really_handle(frame, unroller) # JIT hack - return r_uint(next_instr) - - def really_handle(self, frame, unroller): """ Purely abstract method """ raise NotImplementedError @@ -1305,17 +1301,17 @@ _opname = 'SETUP_LOOP' handling_mask = SBreakLoop.kind | SContinueLoop.kind - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): if isinstance(unroller, SContinueLoop): # re-push the loop block without cleaning up the value stack, # and jump to the beginning of the loop, stored in the # exception's argument frame.append_block(self) - return unroller.jump_to + return r_uint(unroller.jump_to) else: # jump to the end of the loop self.cleanupstack(frame) - return self.handlerposition + return r_uint(self.handlerposition) class ExceptBlock(FrameBlock): @@ -1325,7 +1321,7 @@ _opname = 'SETUP_EXCEPT' handling_mask = SApplicationException.kind - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): # push the exception to the value stack for 
inspection by the # exception handler (the code after the except:) self.cleanupstack(frame) @@ -1340,7 +1336,7 @@ frame.pushvalue(operationerr.get_w_value(frame.space)) frame.pushvalue(operationerr.w_type) frame.last_exception = operationerr - return self.handlerposition # jump to the handler + return r_uint(self.handlerposition) # jump to the handler class FinallyBlock(FrameBlock): @@ -1361,7 +1357,7 @@ frame.pushvalue(frame.space.w_None) frame.pushvalue(frame.space.w_None) - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): # any abnormal reason for unrolling a finally: triggers the end of # the block unrolling and the entering the finally: handler. # see comments in cleanup(). @@ -1369,18 +1365,18 @@ frame.pushvalue(frame.space.wrap(unroller)) frame.pushvalue(frame.space.w_None) frame.pushvalue(frame.space.w_None) - return self.handlerposition # jump to the handler + return r_uint(self.handlerposition) # jump to the handler class WithBlock(FinallyBlock): _immutable_ = True - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): if (frame.space.full_exceptions and isinstance(unroller, SApplicationException)): unroller.operr.normalize_exception(frame.space) - return FinallyBlock.really_handle(self, frame, unroller) + return FinallyBlock.handle(self, frame, unroller) block_classes = {'SETUP_LOOP': LoopBlock, 'SETUP_EXCEPT': ExceptBlock, diff --git a/pypy/jit/backend/llsupport/asmmemmgr.py b/pypy/jit/backend/llsupport/asmmemmgr.py --- a/pypy/jit/backend/llsupport/asmmemmgr.py +++ b/pypy/jit/backend/llsupport/asmmemmgr.py @@ -277,6 +277,8 @@ from pypy.jit.backend.hlinfo import highleveljitinfo if highleveljitinfo.sys_executable: debug_print('SYS_EXECUTABLE', highleveljitinfo.sys_executable) + else: + debug_print('SYS_EXECUTABLE', '??') # HEX = '0123456789ABCDEF' dump = [] diff --git a/pypy/jit/backend/llsupport/test/test_asmmemmgr.py b/pypy/jit/backend/llsupport/test/test_asmmemmgr.py --- 
a/pypy/jit/backend/llsupport/test/test_asmmemmgr.py +++ b/pypy/jit/backend/llsupport/test/test_asmmemmgr.py @@ -217,7 +217,8 @@ encoded = ''.join(writtencode).encode('hex').upper() ataddr = '@%x' % addr assert log == [('test-logname-section', - [('debug_print', 'CODE_DUMP', ataddr, '+0 ', encoded)])] + [('debug_print', 'SYS_EXECUTABLE', '??'), + ('debug_print', 'CODE_DUMP', ataddr, '+0 ', encoded)])] lltype.free(p, flavor='raw') diff --git a/pypy/jit/metainterp/heapcache.py b/pypy/jit/metainterp/heapcache.py --- a/pypy/jit/metainterp/heapcache.py +++ b/pypy/jit/metainterp/heapcache.py @@ -20,6 +20,7 @@ self.dependencies = {} # contains frame boxes that are not virtualizables self.nonstandard_virtualizables = {} + # heap cache # maps descrs to {from_box, to_box} dicts self.heap_cache = {} @@ -29,6 +30,26 @@ # cache the length of arrays self.length_cache = {} + # replace_box is called surprisingly often, therefore it's not efficient + # to go over all the dicts and fix them. + # instead, these two dicts are kept, and a replace_box adds an entry to + # each of them. + # every time one of the dicts heap_cache, heap_array_cache, length_cache + # is accessed, suitable indirections need to be performed + + # this looks all very subtle, but in practice the patterns of + # replacements should not be that complex. Usually a box is replaced by + # a const, once. Also, if something goes wrong, the effect is that less + # caching than possible is done, which is not a huge problem. 
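The comment block above describes the scheme this changeset introduces: instead of walking every cache dict on `replace_box`, the heapcache keeps two indirection dicts and translates boxes on the way into and out of the caches. A minimal standalone sketch of that idea (plain Python with a hypothetical `Cache` class and string "boxes", not the actual RPython `HeapCache`):

```python
class Cache:
    """Sketch of the two-dict indirection scheme described above.

    Boxes are translated through input_indirections on the way into a
    cache dict and through output_indirections on the way out, so
    replace_box never has to rewrite the cache dicts themselves.
    """
    def __init__(self):
        self.heap_cache = {}           # (descr, box) -> fieldbox
        self.input_indirections = {}   # replacement box -> original box
        self.output_indirections = {}  # original box -> replacement box

    def _in(self, box):
        return self.input_indirections.get(box, box)

    def _out(self, box):
        return self.output_indirections.get(box, box)

    def setfield(self, box, descr, fieldbox):
        self.heap_cache[(descr, self._in(box))] = self._in(fieldbox)

    def getfield(self, box, descr):
        # returns None on a cache miss, like the real getfield
        return self._out(self.heap_cache.get((descr, self._in(box))))

    def replace_box(self, oldbox, newbox):
        # compose with any replacements already recorded, so chained
        # replacements (box1 -> box4 -> box5) still resolve correctly
        self.input_indirections[self._out(newbox)] = self._in(oldbox)
        self.output_indirections[self._in(oldbox)] = self._out(newbox)

c = Cache()
c.setfield("box1", "descr", "box2")
c.replace_box("box1", "box4")
assert c.getfield("box4", "descr") == "box2"
c.replace_box("box2", "box5")
assert c.getfield("box4", "descr") == "box5"
```

As the comment notes, the worst failure mode of a stale indirection is merely a cache miss, i.e. less caching than possible, not wrong results.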
+ self.input_indirections = {} + self.output_indirections = {} + + def _input_indirection(self, box): + return self.input_indirections.get(box, box) + + def _output_indirection(self, box): + return self.output_indirections.get(box, box) + def invalidate_caches(self, opnum, descr, argboxes): self.mark_escaped(opnum, argboxes) self.clear_caches(opnum, descr, argboxes) @@ -132,14 +153,16 @@ self.arraylen_now_known(box, lengthbox) def getfield(self, box, descr): + box = self._input_indirection(box) d = self.heap_cache.get(descr, None) if d: tobox = d.get(box, None) - if tobox: - return tobox + return self._output_indirection(tobox) return None def getfield_now_known(self, box, descr, fieldbox): + box = self._input_indirection(box) + fieldbox = self._input_indirection(fieldbox) self.heap_cache.setdefault(descr, {})[box] = fieldbox def setfield(self, box, descr, fieldbox): @@ -148,6 +171,8 @@ self.heap_cache[descr] = new_d def _do_write_with_aliasing(self, d, box, fieldbox): + box = self._input_indirection(box) + fieldbox = self._input_indirection(fieldbox) # slightly subtle logic here # a write to an arbitrary box, all other boxes can alias this one if not d or box not in self.new_boxes: @@ -166,6 +191,7 @@ return new_d def getarrayitem(self, box, descr, indexbox): + box = self._input_indirection(box) if not isinstance(indexbox, ConstInt): return index = indexbox.getint() @@ -173,9 +199,11 @@ if cache: indexcache = cache.get(index, None) if indexcache is not None: - return indexcache.get(box, None) + return self._output_indirection(indexcache.get(box, None)) def getarrayitem_now_known(self, box, descr, indexbox, valuebox): + box = self._input_indirection(box) + valuebox = self._input_indirection(valuebox) if not isinstance(indexbox, ConstInt): return index = indexbox.getint() @@ -198,25 +226,13 @@ cache[index] = self._do_write_with_aliasing(indexcache, box, valuebox) def arraylen(self, box): - return self.length_cache.get(box, None) + box = self._input_indirection(box) 
+ return self._output_indirection(self.length_cache.get(box, None)) def arraylen_now_known(self, box, lengthbox): - self.length_cache[box] = lengthbox - - def _replace_box(self, d, oldbox, newbox): - new_d = {} - for frombox, tobox in d.iteritems(): - if frombox is oldbox: - frombox = newbox - if tobox is oldbox: - tobox = newbox - new_d[frombox] = tobox - return new_d + box = self._input_indirection(box) + self.length_cache[box] = self._input_indirection(lengthbox) def replace_box(self, oldbox, newbox): - for descr, d in self.heap_cache.iteritems(): - self.heap_cache[descr] = self._replace_box(d, oldbox, newbox) - for descr, d in self.heap_array_cache.iteritems(): - for index, cache in d.iteritems(): - d[index] = self._replace_box(cache, oldbox, newbox) - self.length_cache = self._replace_box(self.length_cache, oldbox, newbox) + self.input_indirections[self._output_indirection(newbox)] = self._input_indirection(oldbox) + self.output_indirections[self._input_indirection(oldbox)] = self._output_indirection(newbox) diff --git a/pypy/jit/metainterp/jitexc.py b/pypy/jit/metainterp/jitexc.py --- a/pypy/jit/metainterp/jitexc.py +++ b/pypy/jit/metainterp/jitexc.py @@ -12,7 +12,6 @@ """ _go_through_llinterp_uncaught_ = True # ugh - def _get_standard_error(rtyper, Class): exdata = rtyper.getexceptiondata() clsdef = rtyper.annotator.bookkeeper.getuniqueclassdef(Class) diff --git a/pypy/jit/metainterp/optimize.py b/pypy/jit/metainterp/optimize.py --- a/pypy/jit/metainterp/optimize.py +++ b/pypy/jit/metainterp/optimize.py @@ -5,3 +5,9 @@ """Raised when the optimize*.py detect that the loop that we are trying to build cannot possibly make sense as a long-running loop (e.g. 
it cannot run 2 complete iterations)."""
+
+    def __init__(self, msg='?'):
+        debug_start("jit-abort")
+        debug_print(msg)
+        debug_stop("jit-abort")
+        self.msg = msg
diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py
--- a/pypy/jit/metainterp/optimizeopt/__init__.py
+++ b/pypy/jit/metainterp/optimizeopt/__init__.py
@@ -49,8 +49,9 @@
         optimizations.append(OptFfiCall())

     if ('rewrite' not in enable_opts or 'virtualize' not in enable_opts
-        or 'heap' not in enable_opts or 'unroll' not in enable_opts):
-        optimizations.append(OptSimplify())
+        or 'heap' not in enable_opts or 'unroll' not in enable_opts
+        or 'pure' not in enable_opts):
+        optimizations.append(OptSimplify(unroll))

     return optimizations, unroll
diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py
--- a/pypy/jit/metainterp/optimizeopt/heap.py
+++ b/pypy/jit/metainterp/optimizeopt/heap.py
@@ -257,8 +257,8 @@
             opnum == rop.COPYSTRCONTENT or     # no effect on GC struct/array
             opnum == rop.COPYUNICODECONTENT):  # no effect on GC struct/array
             return
-        assert opnum != rop.CALL_PURE
         if (opnum == rop.CALL or
+            opnum == rop.CALL_PURE or
             opnum == rop.CALL_MAY_FORCE or
             opnum == rop.CALL_RELEASE_GIL or
             opnum == rop.CALL_ASSEMBLER):
@@ -481,7 +481,7 @@
             # already between the tracing and now.  In this case, we are
             # simply ignoring the QUASIIMMUT_FIELD hint and compiling it
             # as a regular getfield.
-            if not qmutdescr.is_still_valid():
+            if not qmutdescr.is_still_valid_for(structvalue.get_key_box()):
                 self._remove_guard_not_invalidated = True
                 return
             # record as an out-of-line guard
diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py
--- a/pypy/jit/metainterp/optimizeopt/intbounds.py
+++ b/pypy/jit/metainterp/optimizeopt/intbounds.py
@@ -191,10 +191,13 @@
         # GUARD_OVERFLOW, then the loop is invalid.
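The optimize.py change above makes `InvalidLoop` record a human-readable reason and log it in its constructor, so the reason reaches the `jit-abort` debug log even when the exception is later caught and swallowed (as the unrolling code does). The pattern can be sketched in plain Python, with PyPy's `debug_start`/`debug_print`/`debug_stop` stubbed out by a list:

```python
log = []  # stand-in for PyPy's jit-abort debug log sections

def debug_start(section): log.append(("start", section))
def debug_print(msg):     log.append(("print", msg))
def debug_stop(section):  log.append(("stop", section))

class InvalidLoop(Exception):
    """Raised when the optimizer proves a loop cannot make sense.

    Logging in __init__ (rather than at the raise or catch site) means
    the reason is recorded exactly once, as soon as it is known, no
    matter which of the many callers raises or catches it.
    """
    def __init__(self, msg='?'):
        debug_start("jit-abort")
        debug_print(msg)
        debug_stop("jit-abort")
        self.msg = msg

try:
    raise InvalidLoop('A GUARD_ISNULL was proven to always fail')
except InvalidLoop:
    pass  # swallowed, yet the reason is already in the log
assert ("print", 'A GUARD_ISNULL was proven to always fail') in log
```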
lastop = self.last_emitted_operation if lastop is None: - raise InvalidLoop + raise InvalidLoop('An INT_xxx_OVF was proven not to overflow but' + + 'guarded with GUARD_OVERFLOW') opnum = lastop.getopnum() if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF): - raise InvalidLoop + raise InvalidLoop('An INT_xxx_OVF was proven not to overflow but' + + 'guarded with GUARD_OVERFLOW') + self.emit_operation(op) def optimize_INT_ADD_OVF(self, op): diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -525,6 +525,7 @@ @specialize.argtype(0) def _emit_operation(self, op): + assert op.getopnum() != rop.CALL_PURE for i in range(op.numargs()): arg = op.getarg(i) try: diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -208,7 +208,8 @@ box = value.box assert isinstance(box, Const) if not box.same_constant(constbox): - raise InvalidLoop + raise InvalidLoop('A GURAD_{VALUE,TRUE,FALSE} was proven to' + + 'always fail') return if emit_operation: self.emit_operation(op) @@ -220,7 +221,7 @@ if value.is_null(): return elif value.is_nonnull(): - raise InvalidLoop + raise InvalidLoop('A GUARD_ISNULL was proven to always fail') self.emit_operation(op) value.make_constant(self.optimizer.cpu.ts.CONST_NULL) @@ -229,7 +230,7 @@ if value.is_nonnull(): return elif value.is_null(): - raise InvalidLoop + raise InvalidLoop('A GUARD_NONNULL was proven to always fail') self.emit_operation(op) value.make_nonnull(op) @@ -278,7 +279,7 @@ if realclassbox is not None: if realclassbox.same_constant(expectedclassbox): return - raise InvalidLoop + raise InvalidLoop('A GUARD_CLASS was proven to always fail') if value.last_guard: # there already has been a guard_nonnull or guard_class or # 
guard_nonnull_class on this value. @@ -301,7 +302,8 @@ def optimize_GUARD_NONNULL_CLASS(self, op): value = self.getvalue(op.getarg(0)) if value.is_null(): - raise InvalidLoop + raise InvalidLoop('A GUARD_NONNULL_CLASS was proven to always ' + + 'fail') self.optimize_GUARD_CLASS(op) def optimize_CALL_LOOPINVARIANT(self, op): diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -4,8 +4,9 @@ from pypy.jit.metainterp.history import TargetToken, JitCellToken class OptSimplify(Optimization): - def __init__(self): + def __init__(self, unroll): self.last_label_descr = None + self.unroll = unroll def optimize_CALL_PURE(self, op): args = op.getarglist() @@ -35,24 +36,26 @@ pass def optimize_LABEL(self, op): - descr = op.getdescr() - if isinstance(descr, JitCellToken): - return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) - self.last_label_descr = op.getdescr() + if not self.unroll: + descr = op.getdescr() + if isinstance(descr, JitCellToken): + return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) + self.last_label_descr = op.getdescr() self.emit_operation(op) def optimize_JUMP(self, op): - descr = op.getdescr() - assert isinstance(descr, JitCellToken) - if not descr.target_tokens: - assert self.last_label_descr is not None - target_token = self.last_label_descr - assert isinstance(target_token, TargetToken) - assert target_token.targeting_jitcell_token is descr - op.setdescr(self.last_label_descr) - else: - assert len(descr.target_tokens) == 1 - op.setdescr(descr.target_tokens[0]) + if not self.unroll: + descr = op.getdescr() + assert isinstance(descr, JitCellToken) + if not descr.target_tokens: + assert self.last_label_descr is not None + target_token = self.last_label_descr + assert isinstance(target_token, TargetToken) + assert target_token.targeting_jitcell_token is descr + op.setdescr(self.last_label_descr) + 
else: + assert len(descr.target_tokens) == 1 + op.setdescr(descr.target_tokens[0]) self.emit_operation(op) dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', diff --git a/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py @@ -0,0 +1,46 @@ +from pypy.jit.metainterp.optimizeopt.test.test_optimizeopt import OptimizeOptTest +from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin +from pypy.jit.metainterp.resoperation import rop + + +allopts = OptimizeOptTest.enable_opts.split(':') +for optnum in range(len(allopts)): + myopts = allopts[:] + del myopts[optnum] + + class TestLLtype(OptimizeOptTest, LLtypeMixin): + enable_opts = ':'.join(myopts) + + def optimize_loop(self, ops, expected, expected_preamble=None, + call_pure_results=None, expected_short=None): + loop = self.parse(ops) + if expected != "crash!": + expected = self.parse(expected) + if expected_preamble: + expected_preamble = self.parse(expected_preamble) + if expected_short: + expected_short = self.parse(expected_short) + + preamble = self.unroll_and_optimize(loop, call_pure_results) + + for op in preamble.operations + loop.operations: + assert op.getopnum() not in (rop.CALL_PURE, + rop.CALL_LOOPINVARIANT, + rop.VIRTUAL_REF_FINISH, + rop.VIRTUAL_REF, + rop.QUASIIMMUT_FIELD, + rop.MARK_OPAQUE_PTR, + rop.RECORD_KNOWN_CLASS) + + def raises(self, e, fn, *args): + try: + fn(*args) + except e: + pass + + opt = allopts[optnum] + exec "TestNo%sLLtype = TestLLtype" % (opt[0].upper() + opt[1:]) + +del TestLLtype # No need to run the last set twice +del TestNoUnrollLLtype # This case is take care of by test_optimizebasic + diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- 
a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -7,7 +7,7 @@ import pypy.jit.metainterp.optimizeopt.optimizer as optimizeopt import pypy.jit.metainterp.optimizeopt.virtualize as virtualize from pypy.jit.metainterp.optimize import InvalidLoop -from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt +from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt, get_const_ptr_for_string from pypy.jit.metainterp import executor, compile, resume, history from pypy.jit.metainterp.resoperation import rop, opname, ResOperation from pypy.rlib.rarithmetic import LONG_BIT @@ -5067,11 +5067,29 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_call_pure_vstring_const(self): + ops = """ + [] + p0 = newstr(3) + strsetitem(p0, 0, 97) + strsetitem(p0, 1, 98) + strsetitem(p0, 2, 99) + i0 = call_pure(123, p0, descr=nonwritedescr) + finish(i0) + """ + expected = """ + [] + finish(5) + """ + call_pure_results = { + (ConstInt(123), get_const_ptr_for_string("abc"),): ConstInt(5), + } + self.optimize_loop(ops, expected, call_pure_results) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass - ##class TestOOtype(BaseTestOptimizeBasic, OOtypeMixin): ## def test_instanceof(self): diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -105,6 +105,9 @@ return loop + def raises(self, e, fn, *args): + py.test.raises(e, fn, *args) + class OptimizeOptTest(BaseTestWithUnroll): def setup_method(self, meth=None): @@ -2639,7 +2642,7 @@ p2 = new_with_vtable(ConstClass(node_vtable)) jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_2(self): @@ -2651,7 +2654,7 @@ escape(p2) # prevent it 
from staying Virtual jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_3(self): @@ -2665,7 +2668,7 @@ setfield_gc(p3, p4, descr=nextdescr) jump(p3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_merge_guard_class_guard_value(self): @@ -4411,7 +4414,7 @@ setfield_gc(p1, p3, descr=nextdescr) jump(p3) """ - py.test.raises(BogusPureField, self.optimize_loop, ops, "crash!") + self.raises(BogusPureField, self.optimize_loop, ops, "crash!") def test_dont_complains_different_field(self): ops = """ @@ -5024,7 +5027,7 @@ i2 = int_add(i0, 3) jump(i2) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_ne_const_not(self): ops = """ @@ -5074,7 +5077,7 @@ i3 = int_add(i0, 3) jump(i3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_lshift(self): ops = """ @@ -6533,9 +6536,9 @@ def test_quasi_immut_2(self): ops = """ [] - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i1 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i1 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) escape(i1) jump() """ @@ -6585,13 +6588,13 @@ def test_call_may_force_invalidated_guards_reload(self): ops = """ [i0a, i0b] - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i1 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i1 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) call_may_force(i0b, descr=mayforcevirtdescr) - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) 
guard_not_invalidated() [] - i2 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i2 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) i3 = escape(i1) i4 = escape(i2) jump(i3, i4) @@ -7813,6 +7816,52 @@ """ self.optimize_loop(ops, expected) + def test_issue1080_infinitie_loop_virtual(self): + ops = """ + [p10] + p52 = getfield_gc(p10, descr=nextdescr) # inst_storage + p54 = getarrayitem_gc(p52, 0, descr=arraydescr) + p69 = getfield_gc_pure(p54, descr=otherdescr) # inst_w_function + + quasiimmut_field(p69, descr=quasiimmutdescr) + guard_not_invalidated() [] + p71 = getfield_gc(p69, descr=quasifielddescr) # inst_code + guard_value(p71, -4247) [] + + p106 = new_with_vtable(ConstClass(node_vtable)) + p108 = new_array(3, descr=arraydescr) + p110 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p110, ConstPtr(myptr2), descr=otherdescr) # inst_w_function + setarrayitem_gc(p108, 0, p110, descr=arraydescr) + setfield_gc(p106, p108, descr=nextdescr) # inst_storage + jump(p106) + """ + expected = """ + [] + p72 = getfield_gc(ConstPtr(myptr2), descr=quasifielddescr) + guard_value(p72, -4247) [] + jump() + """ + self.optimize_loop(ops, expected) + + + def test_issue1080_infinitie_loop_simple(self): + ops = """ + [p69] + quasiimmut_field(p69, descr=quasiimmutdescr) + guard_not_invalidated() [] + p71 = getfield_gc(p69, descr=quasifielddescr) # inst_code + guard_value(p71, -4247) [] + jump(ConstPtr(myptr)) + """ + expected = """ + [] + p72 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + guard_value(p72, -4247) [] + jump() + """ + self.optimize_loop(ops, expected) + class TestLLtype(OptimizeOptTest, LLtypeMixin): pass diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py --- a/pypy/jit/metainterp/optimizeopt/test/test_util.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py @@ -122,6 +122,7 @@ quasi.inst_field = -4247 quasifielddescr = cpu.fielddescrof(QUASI, 'inst_field') quasibox = 
BoxPtr(lltype.cast_opaque_ptr(llmemory.GCREF, quasi)) + quasiptr = quasibox.value quasiimmutdescr = QuasiImmutDescr(cpu, quasibox, quasifielddescr, cpu.fielddescrof(QUASI, 'mutate_field')) diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -315,7 +315,10 @@ try: jumpargs = virtual_state.make_inputargs(values, self.optimizer) except BadVirtualState: - raise InvalidLoop + raise InvalidLoop('The state of the optimizer at the end of ' + + 'peeled loop is inconsistent with the ' + + 'VirtualState at the begining of the peeled ' + + 'loop') jumpop.initarglist(jumpargs) # Inline the short preamble at the end of the loop @@ -325,7 +328,11 @@ for i in range(len(short_inputargs)): if short_inputargs[i] in args: if args[short_inputargs[i]] != jmp_to_short_args[i]: - raise InvalidLoop + raise InvalidLoop('The short preamble wants the ' + + 'same box passed to multiple of its ' + + 'inputargs, but the jump at the ' + + 'end of this bridge does not do that.') + args[short_inputargs[i]] = jmp_to_short_args[i] self.short_inliner = Inliner(short_inputargs, jmp_to_short_args) for op in self.short[1:]: @@ -378,7 +385,10 @@ #final_virtual_state.debug_print("Bad virtual state at end of loop, ", # bad) #debug_stop('jit-log-virtualstate') - raise InvalidLoop + raise InvalidLoop('The virtual state at the end of the peeled ' + + 'loop is not compatible with the virtual ' + + 'state at the start of the loop which makes ' + + 'it impossible to close the loop') #debug_stop('jit-log-virtualstate') @@ -526,8 +536,8 @@ args = jumpop.getarglist() modifier = VirtualStateAdder(self.optimizer) virtual_state = modifier.get_virtual_state(args) - #debug_start('jit-log-virtualstate') - #virtual_state.debug_print("Looking for ") + debug_start('jit-log-virtualstate') + virtual_state.debug_print("Looking for ") for target in cell_token.target_tokens: if not 
target.virtual_state: @@ -536,10 +546,10 @@ extra_guards = [] bad = {} - #debugmsg = 'Did not match ' + debugmsg = 'Did not match ' if target.virtual_state.generalization_of(virtual_state, bad): ok = True - #debugmsg = 'Matched ' + debugmsg = 'Matched ' else: try: cpu = self.optimizer.cpu @@ -548,13 +558,13 @@ extra_guards) ok = True - #debugmsg = 'Guarded to match ' + debugmsg = 'Guarded to match ' except InvalidLoop: pass - #target.virtual_state.debug_print(debugmsg, bad) + target.virtual_state.debug_print(debugmsg, bad) if ok: - #debug_stop('jit-log-virtualstate') + debug_stop('jit-log-virtualstate') values = [self.getvalue(arg) for arg in jumpop.getarglist()] @@ -581,7 +591,7 @@ jumpop.setdescr(cell_token.target_tokens[0]) self.optimizer.send_extra_operation(jumpop) return True - #debug_stop('jit-log-virtualstate') + debug_stop('jit-log-virtualstate') return False class ValueImporter(object): diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py --- a/pypy/jit/metainterp/optimizeopt/virtualstate.py +++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py @@ -27,11 +27,15 @@ if self.generalization_of(other, renum, {}): return if renum[self.position] != other.position: - raise InvalidLoop + raise InvalidLoop('The numbering of the virtual states does not ' + + 'match. 
This means that two virtual fields ' + + 'have been set to the same Box in one of the ' + + 'virtual states but not in the other.') self._generate_guards(other, box, cpu, extra_guards) def _generate_guards(self, other, box, cpu, extra_guards): - raise InvalidLoop + raise InvalidLoop('Generating guards for making the VirtualStates ' + + 'at hand match have not been implemented') def enum_forced_boxes(self, boxes, value, optimizer): raise NotImplementedError @@ -346,10 +350,12 @@ def _generate_guards(self, other, box, cpu, extra_guards): if not isinstance(other, NotVirtualStateInfo): - raise InvalidLoop + raise InvalidLoop('The VirtualStates does not match as a ' + + 'virtual appears where a pointer is needed ' + + 'and it is too late to force it.') if self.lenbound or other.lenbound: - raise InvalidLoop + raise InvalidLoop('The array length bounds does not match.') if self.level == LEVEL_KNOWNCLASS and \ box.nonnull() and \ @@ -400,7 +406,8 @@ return # Remaining cases are probably not interesting - raise InvalidLoop + raise InvalidLoop('Generating guards for making the VirtualStates ' + + 'at hand match have not been implemented') if self.level == LEVEL_CONSTANT: import pdb; pdb.set_trace() raise NotImplementedError diff --git a/pypy/jit/metainterp/quasiimmut.py b/pypy/jit/metainterp/quasiimmut.py --- a/pypy/jit/metainterp/quasiimmut.py +++ b/pypy/jit/metainterp/quasiimmut.py @@ -120,8 +120,10 @@ self.fielddescr, self.structbox) return fieldbox.constbox() - def is_still_valid(self): + def is_still_valid_for(self, structconst): assert self.structbox is not None + if not self.structbox.constbox().same_constant(structconst): + return False cpu = self.cpu gcref = self.structbox.getref_base() qmut = get_current_qmut_instance(cpu, gcref, self.mutatefielddescr) diff --git a/pypy/jit/metainterp/test/test_heapcache.py b/pypy/jit/metainterp/test/test_heapcache.py --- a/pypy/jit/metainterp/test/test_heapcache.py +++ b/pypy/jit/metainterp/test/test_heapcache.py @@ -2,12 +2,14 
@@ from pypy.jit.metainterp.resoperation import rop from pypy.jit.metainterp.history import ConstInt -box1 = object() -box2 = object() -box3 = object() -box4 = object() +box1 = "box1" +box2 = "box2" +box3 = "box3" +box4 = "box4" +box5 = "box5" lengthbox1 = object() lengthbox2 = object() +lengthbox3 = object() descr1 = object() descr2 = object() descr3 = object() @@ -276,11 +278,43 @@ h.setfield(box1, descr2, box3) h.setfield(box2, descr3, box3) h.replace_box(box1, box4) - assert h.getfield(box1, descr1) is None - assert h.getfield(box1, descr2) is None assert h.getfield(box4, descr1) is box2 assert h.getfield(box4, descr2) is box3 assert h.getfield(box2, descr3) is box3 + h.setfield(box4, descr1, box3) + assert h.getfield(box4, descr1) is box3 + + h = HeapCache() + h.setfield(box1, descr1, box2) + h.setfield(box1, descr2, box3) + h.setfield(box2, descr3, box3) + h.replace_box(box3, box4) + assert h.getfield(box1, descr1) is box2 + assert h.getfield(box1, descr2) is box4 + assert h.getfield(box2, descr3) is box4 + + def test_replace_box_twice(self): + h = HeapCache() + h.setfield(box1, descr1, box2) + h.setfield(box1, descr2, box3) + h.setfield(box2, descr3, box3) + h.replace_box(box1, box4) + h.replace_box(box4, box5) + assert h.getfield(box5, descr1) is box2 + assert h.getfield(box5, descr2) is box3 + assert h.getfield(box2, descr3) is box3 + h.setfield(box5, descr1, box3) + assert h.getfield(box4, descr1) is box3 + + h = HeapCache() + h.setfield(box1, descr1, box2) + h.setfield(box1, descr2, box3) + h.setfield(box2, descr3, box3) + h.replace_box(box3, box4) + h.replace_box(box4, box5) + assert h.getfield(box1, descr1) is box2 + assert h.getfield(box1, descr2) is box5 + assert h.getfield(box2, descr3) is box5 def test_replace_box_array(self): h = HeapCache() @@ -291,9 +325,6 @@ h.setarrayitem(box3, descr2, index2, box1) h.setarrayitem(box2, descr3, index2, box3) h.replace_box(box1, box4) - assert h.getarrayitem(box1, descr1, index1) is None - assert 
h.getarrayitem(box1, descr2, index1) is None - assert h.arraylen(box1) is None assert h.arraylen(box4) is lengthbox1 assert h.getarrayitem(box4, descr1, index1) is box2 assert h.getarrayitem(box4, descr2, index1) is box3 @@ -304,6 +335,27 @@ h.replace_box(lengthbox1, lengthbox2) assert h.arraylen(box4) is lengthbox2 + def test_replace_box_array_twice(self): + h = HeapCache() + h.setarrayitem(box1, descr1, index1, box2) + h.setarrayitem(box1, descr2, index1, box3) + h.arraylen_now_known(box1, lengthbox1) + h.setarrayitem(box2, descr1, index2, box1) + h.setarrayitem(box3, descr2, index2, box1) + h.setarrayitem(box2, descr3, index2, box3) + h.replace_box(box1, box4) + h.replace_box(box4, box5) + assert h.arraylen(box4) is lengthbox1 + assert h.getarrayitem(box5, descr1, index1) is box2 + assert h.getarrayitem(box5, descr2, index1) is box3 + assert h.getarrayitem(box2, descr1, index2) is box5 + assert h.getarrayitem(box3, descr2, index2) is box5 + assert h.getarrayitem(box2, descr3, index2) is box3 + + h.replace_box(lengthbox1, lengthbox2) + h.replace_box(lengthbox2, lengthbox3) + assert h.arraylen(box4) is lengthbox3 + def test_ll_arraycopy(self): h = HeapCache() h.new_array(box1, lengthbox1) diff --git a/pypy/jit/metainterp/test/test_quasiimmut.py b/pypy/jit/metainterp/test/test_quasiimmut.py --- a/pypy/jit/metainterp/test/test_quasiimmut.py +++ b/pypy/jit/metainterp/test/test_quasiimmut.py @@ -8,7 +8,7 @@ from pypy.jit.metainterp.quasiimmut import get_current_qmut_instance from pypy.jit.metainterp.test.support import LLJitMixin from pypy.jit.codewriter.policy import StopAtXPolicy -from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe +from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe, promote def test_get_current_qmut_instance(): @@ -506,6 +506,27 @@ "guard_not_invalidated": 2 }) + def test_issue1080(self): + myjitdriver = JitDriver(greens=[], reds=["n", "sa", "a"]) + class Foo(object): + _immutable_fields_ = ["x?"] + def __init__(self, 
x): + self.x = x + one, two = Foo(1), Foo(2) + def main(n): + sa = 0 + a = one + while n: + myjitdriver.jit_merge_point(n=n, sa=sa, a=a) + sa += a.x + if a.x == 1: + a = two + elif a.x == 2: + a = one + n -= 1 + return sa + res = self.meta_interp(main, [10]) + assert res == main(10) class TestLLtypeGreenFieldsTests(QuasiImmutTests, LLJitMixin): pass diff --git a/pypy/module/_multiprocessing/test/test_connection.py b/pypy/module/_multiprocessing/test/test_connection.py --- a/pypy/module/_multiprocessing/test/test_connection.py +++ b/pypy/module/_multiprocessing/test/test_connection.py @@ -157,13 +157,15 @@ raises(IOError, _multiprocessing.Connection, -15) def test_byte_order(self): + import socket + if not 'fromfd' in dir(socket): + skip('No fromfd in socket') # The exact format of net strings (length in network byte # order) is important for interoperation with others # implementations. rhandle, whandle = self.make_pair() whandle.send_bytes("abc") whandle.send_bytes("defg") - import socket sock = socket.fromfd(rhandle.fileno(), socket.AF_INET, socket.SOCK_STREAM) data1 = sock.recv(7) diff --git a/pypy/module/_winreg/test/test_winreg.py b/pypy/module/_winreg/test/test_winreg.py --- a/pypy/module/_winreg/test/test_winreg.py +++ b/pypy/module/_winreg/test/test_winreg.py @@ -198,7 +198,10 @@ import nt r = ExpandEnvironmentStrings(u"%windir%\\test") assert isinstance(r, unicode) - assert r == nt.environ["WINDIR"] + "\\test" + if 'WINDIR' in nt.environ.keys(): + assert r == nt.environ["WINDIR"] + "\\test" + else: + assert r == nt.environ["windir"] + "\\test" def test_long_key(self): from _winreg import ( diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -103,8 +103,8 @@ """.split() for name in constant_names: setattr(CConfig_constants, name, rffi_platform.ConstantInteger(name)) -udir.join('pypy_decl.h').write("/* Will be filled later */") -udir.join('pypy_macros.h').write("/* Will be 
filled later */") +udir.join('pypy_decl.h').write("/* Will be filled later */\n") +udir.join('pypy_macros.h').write("/* Will be filled later */\n") globals().update(rffi_platform.configure(CConfig_constants)) def copy_header_files(dstdir): @@ -927,12 +927,12 @@ source_dir / "pyerrors.c", source_dir / "modsupport.c", source_dir / "getargs.c", + source_dir / "abstract.c", source_dir / "stringobject.c", source_dir / "mysnprintf.c", source_dir / "pythonrun.c", source_dir / "sysmodule.c", source_dir / "bufferobject.c", - source_dir / "object.c", source_dir / "cobject.c", source_dir / "structseq.c", source_dir / "capsule.c", diff --git a/pypy/module/cpyext/complexobject.py b/pypy/module/cpyext/complexobject.py --- a/pypy/module/cpyext/complexobject.py +++ b/pypy/module/cpyext/complexobject.py @@ -33,6 +33,11 @@ # CPython also accepts anything return 0.0 +@cpython_api([Py_complex_ptr], PyObject) +def _PyComplex_FromCComplex(space, v): + """Create a new Python complex number object from a C Py_complex value.""" + return space.newcomplex(v.c_real, v.c_imag) + # lltype does not handle functions returning a structure. This implements a # helper function, which takes as argument a reference to the return value. 
@cpython_api([PyObject, Py_complex_ptr], lltype.Void) diff --git a/pypy/module/cpyext/floatobject.py b/pypy/module/cpyext/floatobject.py --- a/pypy/module/cpyext/floatobject.py +++ b/pypy/module/cpyext/floatobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.api import ( CANNOT_FAIL, cpython_api, PyObject, build_type_checkers, CONST_STRING) from pypy.interpreter.error import OperationError +from pypy.rlib.rstruct import runpack PyFloat_Check, PyFloat_CheckExact = build_type_checkers("Float") @@ -33,3 +34,19 @@ backward compatibility.""" return space.call_function(space.w_float, w_obj) +@cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0) +def _PyFloat_Unpack4(space, ptr, le): + input = rffi.charpsize2str(ptr, 4) + if rffi.cast(lltype.Signed, le): + return runpack.runpack("f", input) + +@cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0) +def _PyFloat_Unpack8(space, ptr, le): + input = rffi.charpsize2str(ptr, 8) + if rffi.cast(lltype.Signed, le): + return runpack.runpack("d", input) + diff --git a/pypy/module/cpyext/include/complexobject.h b/pypy/module/cpyext/include/complexobject.h --- a/pypy/module/cpyext/include/complexobject.h +++ b/pypy/module/cpyext/include/complexobject.h @@ -21,6 +21,8 @@ return result; } +#define PyComplex_FromCComplex(c) _PyComplex_FromCComplex(&c) + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h --- a/pypy/module/cpyext/include/object.h +++ b/pypy/module/cpyext/include/object.h @@ -38,10 +38,19 @@ PyObject_VAR_HEAD } PyVarObject; +#ifndef PYPY_DEBUG_REFCOUNT #define Py_INCREF(ob) (Py_IncRef((PyObject *)ob)) #define Py_DECREF(ob) (Py_DecRef((PyObject *)ob)) #define Py_XINCREF(ob) (Py_IncRef((PyObject *)ob)) #define Py_XDECREF(ob) (Py_DecRef((PyObject *)ob)) +#else +#define Py_INCREF(ob) (((PyObject *)ob)->ob_refcnt++) +#define Py_DECREF(ob) ((((PyObject *)ob)->ob_refcnt > 1) ? 
\ + ((PyObject *)ob)->ob_refcnt-- : (Py_DecRef((PyObject *)ob))) + +#define Py_XINCREF(op) do { if ((op) == NULL) ; else Py_INCREF(op); } while (0) +#define Py_XDECREF(op) do { if ((op) == NULL) ; else Py_DECREF(op); } while (0) +#endif #define Py_CLEAR(op) \ do { \ diff --git a/pypy/module/cpyext/include/pyerrors.h b/pypy/module/cpyext/include/pyerrors.h --- a/pypy/module/cpyext/include/pyerrors.h +++ b/pypy/module/cpyext/include/pyerrors.h @@ -29,6 +29,10 @@ # define vsnprintf _vsnprintf #endif +#include <stdarg.h> +PyAPI_FUNC(int) PyOS_snprintf(char *str, size_t size, const char *format, ...); +PyAPI_FUNC(int) PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va); + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/stringobject.h b/pypy/module/cpyext/include/stringobject.h --- a/pypy/module/cpyext/include/stringobject.h +++ b/pypy/module/cpyext/include/stringobject.h @@ -7,8 +7,6 @@ extern "C" { #endif -int PyOS_snprintf(char *str, size_t size, const char *format, ...); - #define PyString_GET_SIZE(op) PyString_Size(op) #define PyString_AS_STRING(op) PyString_AsString(op) diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -40,7 +40,8 @@ when the value is NULL, instead of converting to None. */ #define T_LONGLONG 17 -#define T_ULONGLONG 18 +#define T_ULONGLONG 18 +#define T_PYSSIZET 19 /* Flags. These constants are also in structmemberdefs.py. 
*/ #define READONLY 1 diff --git a/pypy/module/cpyext/longobject.py b/pypy/module/cpyext/longobject.py --- a/pypy/module/cpyext/longobject.py +++ b/pypy/module/cpyext/longobject.py @@ -1,6 +1,7 @@ from pypy.rpython.lltypesystem import lltype, rffi -from pypy.module.cpyext.api import (cpython_api, PyObject, build_type_checkers, - CONST_STRING, ADDR, CANNOT_FAIL) +from pypy.module.cpyext.api import ( + cpython_api, PyObject, build_type_checkers, Py_ssize_t, + CONST_STRING, ADDR, CANNOT_FAIL) from pypy.objspace.std.longobject import W_LongObject from pypy.interpreter.error import OperationError from pypy.module.cpyext.intobject import PyInt_AsUnsignedLongMask @@ -15,6 +16,13 @@ """Return a new PyLongObject object from v, or NULL on failure.""" return space.newlong(val) +@cpython_api([Py_ssize_t], PyObject) +def PyLong_FromSsize_t(space, val): + """Return a new PyLongObject object from a C Py_ssize_t, or + NULL on failure. + """ + return space.newlong(val) + @cpython_api([rffi.LONGLONG], PyObject) def PyLong_FromLongLong(space, val): """Return a new PyLongObject object from a C long long, or NULL @@ -56,6 +64,14 @@ and -1 will be returned.""" return space.int_w(w_long) +@cpython_api([PyObject], Py_ssize_t, error=-1) +def PyLong_AsSsize_t(space, w_long): + """Return a C Py_ssize_t representation of the contents of pylong. If + pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised + and -1 will be returned. 
+ """ + return space.int_w(w_long) + @cpython_api([PyObject], rffi.LONGLONG, error=-1) def PyLong_AsLongLong(space, w_long): """ diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py --- a/pypy/module/cpyext/slotdefs.py +++ b/pypy/module/cpyext/slotdefs.py @@ -65,6 +65,7 @@ check_num_args(space, w_args, 0) args_w = space.fixedview(w_args) res = generic_cpy_call(space, func_inquiry, w_self) + res = rffi.cast(lltype.Signed, res) if res == -1: space.fromcache(State).check_and_raise_exception() return space.wrap(bool(res)) diff --git a/pypy/module/cpyext/src/abstract.c b/pypy/module/cpyext/src/abstract.c new file mode 100644 --- /dev/null +++ b/pypy/module/cpyext/src/abstract.c @@ -0,0 +1,269 @@ +/* Abstract Object Interface */ + +#include "Python.h" + +/* Shorthands to return certain errors */ + +static PyObject * +type_error(const char *msg, PyObject *obj) +{ + PyErr_Format(PyExc_TypeError, msg, obj->ob_type->tp_name); + return NULL; +} + +static PyObject * +null_error(void) +{ + if (!PyErr_Occurred()) + PyErr_SetString(PyExc_SystemError, + "null argument to internal routine"); + return NULL; +} + +/* Operations on any object */ + +int +PyObject_CheckReadBuffer(PyObject *obj) +{ + PyBufferProcs *pb = obj->ob_type->tp_as_buffer; + + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL || + (*pb->bf_getsegcount)(obj, NULL) != 1) + return 0; + return 1; +} + +int PyObject_AsReadBuffer(PyObject *obj, + const void **buffer, + Py_ssize_t *buffer_len) +{ + PyBufferProcs *pb; + void *pp; + Py_ssize_t len; + + if (obj == NULL || buffer == NULL || buffer_len == NULL) { + null_error(); + return -1; + } + pb = obj->ob_type->tp_as_buffer; + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL) { + PyErr_SetString(PyExc_TypeError, + "expected a readable buffer object"); + return -1; + } + if ((*pb->bf_getsegcount)(obj, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "expected a single-segment buffer 
object"); + return -1; + } + len = (*pb->bf_getreadbuffer)(obj, 0, &pp); + if (len < 0) + return -1; + *buffer = pp; + *buffer_len = len; + return 0; +} + +int PyObject_AsWriteBuffer(PyObject *obj, + void **buffer, + Py_ssize_t *buffer_len) +{ + PyBufferProcs *pb; + void*pp; + Py_ssize_t len; + + if (obj == NULL || buffer == NULL || buffer_len == NULL) { + null_error(); + return -1; + } + pb = obj->ob_type->tp_as_buffer; + if (pb == NULL || + pb->bf_getwritebuffer == NULL || + pb->bf_getsegcount == NULL) { + PyErr_SetString(PyExc_TypeError, + "expected a writeable buffer object"); + return -1; + } + if ((*pb->bf_getsegcount)(obj, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "expected a single-segment buffer object"); + return -1; + } + len = (*pb->bf_getwritebuffer)(obj,0,&pp); + if (len < 0) + return -1; + *buffer = pp; + *buffer_len = len; + return 0; +} + +/* Operations on callable objects */ + +static PyObject* +call_function_tail(PyObject *callable, PyObject *args) +{ + PyObject *retval; + + if (args == NULL) + return NULL; + + if (!PyTuple_Check(args)) { + PyObject *a; + + a = PyTuple_New(1); + if (a == NULL) { + Py_DECREF(args); + return NULL; + } + PyTuple_SET_ITEM(a, 0, args); + args = a; + } + retval = PyObject_Call(callable, args, NULL); + + Py_DECREF(args); + + return retval; +} + +PyObject * +PyObject_CallFunction(PyObject *callable, const char *format, ...) +{ + va_list va; + PyObject *args; + + if (callable == NULL) + return null_error(); + + if (format && *format) { + va_start(va, format); + args = Py_VaBuildValue(format, va); + va_end(va); + } + else + args = PyTuple_New(0); + + return call_function_tail(callable, args); +} + +PyObject * +PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) 
+{ + va_list va; + PyObject *args; + PyObject *func = NULL; + PyObject *retval = NULL; + + if (o == NULL || name == NULL) + return null_error(); + + func = PyObject_GetAttrString(o, name); + if (func == NULL) { + PyErr_SetString(PyExc_AttributeError, name); + return 0; + } + + if (!PyCallable_Check(func)) { + type_error("attribute of type '%.200s' is not callable", func); + goto exit; + } + + if (format && *format) { + va_start(va, format); + args = Py_VaBuildValue(format, va); + va_end(va); + } + else + args = PyTuple_New(0); + + retval = call_function_tail(func, args); + + exit: + /* args gets consumed in call_function_tail */ + Py_XDECREF(func); + + return retval; +} + +static PyObject * +objargs_mktuple(va_list va) +{ + int i, n = 0; + va_list countva; + PyObject *result, *tmp; + +#ifdef VA_LIST_IS_ARRAY + memcpy(countva, va, sizeof(va_list)); +#else +#ifdef __va_copy + __va_copy(countva, va); +#else + countva = va; +#endif +#endif + + while (((PyObject *)va_arg(countva, PyObject *)) != NULL) + ++n; + result = PyTuple_New(n); + if (result != NULL && n > 0) { + for (i = 0; i < n; ++i) { + tmp = (PyObject *)va_arg(va, PyObject *); + Py_INCREF(tmp); + PyTuple_SET_ITEM(result, i, tmp); + } + } + return result; +} + +PyObject * +PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...) +{ + PyObject *args, *tmp; + va_list vargs; + + if (callable == NULL || name == NULL) + return null_error(); + + callable = PyObject_GetAttr(callable, name); + if (callable == NULL) + return NULL; + + /* count the args */ + va_start(vargs, name); + args = objargs_mktuple(vargs); + va_end(vargs); + if (args == NULL) { + Py_DECREF(callable); + return NULL; + } + tmp = PyObject_Call(callable, args, NULL); + Py_DECREF(args); + Py_DECREF(callable); + + return tmp; +} + +PyObject * +PyObject_CallFunctionObjArgs(PyObject *callable, ...) 
+{ + PyObject *args, *tmp; + va_list vargs; + + if (callable == NULL) + return null_error(); + + /* count the args */ + va_start(vargs, callable); + args = objargs_mktuple(vargs); + va_end(vargs); + if (args == NULL) + return NULL; + tmp = PyObject_Call(callable, args, NULL); + Py_DECREF(args); + + return tmp; +} + diff --git a/pypy/module/cpyext/src/bufferobject.c b/pypy/module/cpyext/src/bufferobject.c --- a/pypy/module/cpyext/src/bufferobject.c +++ b/pypy/module/cpyext/src/bufferobject.c @@ -13,207 +13,207 @@ static int get_buf(PyBufferObject *self, void **ptr, Py_ssize_t *size, - enum buffer_t buffer_type) + enum buffer_t buffer_type) { - if (self->b_base == NULL) { - assert (ptr != NULL); - *ptr = self->b_ptr; - *size = self->b_size; - } - else { - Py_ssize_t count, offset; - readbufferproc proc = 0; - PyBufferProcs *bp = self->b_base->ob_type->tp_as_buffer; - if ((*bp->bf_getsegcount)(self->b_base, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return 0; - } - if ((buffer_type == READ_BUFFER) || - ((buffer_type == ANY_BUFFER) && self->b_readonly)) - proc = bp->bf_getreadbuffer; - else if ((buffer_type == WRITE_BUFFER) || - (buffer_type == ANY_BUFFER)) - proc = (readbufferproc)bp->bf_getwritebuffer; - else if (buffer_type == CHAR_BUFFER) { + if (self->b_base == NULL) { + assert (ptr != NULL); + *ptr = self->b_ptr; + *size = self->b_size; + } + else { + Py_ssize_t count, offset; + readbufferproc proc = 0; + PyBufferProcs *bp = self->b_base->ob_type->tp_as_buffer; + if ((*bp->bf_getsegcount)(self->b_base, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return 0; + } + if ((buffer_type == READ_BUFFER) || + ((buffer_type == ANY_BUFFER) && self->b_readonly)) + proc = bp->bf_getreadbuffer; + else if ((buffer_type == WRITE_BUFFER) || + (buffer_type == ANY_BUFFER)) + proc = (readbufferproc)bp->bf_getwritebuffer; + else if (buffer_type == CHAR_BUFFER) { if 
(!PyType_HasFeature(self->ob_type, - Py_TPFLAGS_HAVE_GETCHARBUFFER)) { - PyErr_SetString(PyExc_TypeError, - "Py_TPFLAGS_HAVE_GETCHARBUFFER needed"); - return 0; - } - proc = (readbufferproc)bp->bf_getcharbuffer; - } - if (!proc) { - char *buffer_type_name; - switch (buffer_type) { - case READ_BUFFER: - buffer_type_name = "read"; - break; - case WRITE_BUFFER: - buffer_type_name = "write"; - break; - case CHAR_BUFFER: - buffer_type_name = "char"; - break; - default: - buffer_type_name = "no"; - break; - } - PyErr_Format(PyExc_TypeError, - "%s buffer type not available", - buffer_type_name); - return 0; - } - if ((count = (*proc)(self->b_base, 0, ptr)) < 0) - return 0; - /* apply constraints to the start/end */ - if (self->b_offset > count) - offset = count; - else - offset = self->b_offset; - *(char **)ptr = *(char **)ptr + offset; - if (self->b_size == Py_END_OF_BUFFER) - *size = count; - else - *size = self->b_size; - if (offset + *size > count) - *size = count - offset; - } - return 1; + Py_TPFLAGS_HAVE_GETCHARBUFFER)) { + PyErr_SetString(PyExc_TypeError, + "Py_TPFLAGS_HAVE_GETCHARBUFFER needed"); + return 0; + } + proc = (readbufferproc)bp->bf_getcharbuffer; + } + if (!proc) { + char *buffer_type_name; + switch (buffer_type) { + case READ_BUFFER: + buffer_type_name = "read"; + break; + case WRITE_BUFFER: + buffer_type_name = "write"; + break; + case CHAR_BUFFER: + buffer_type_name = "char"; + break; + default: + buffer_type_name = "no"; + break; + } + PyErr_Format(PyExc_TypeError, + "%s buffer type not available", + buffer_type_name); + return 0; + } + if ((count = (*proc)(self->b_base, 0, ptr)) < 0) + return 0; + /* apply constraints to the start/end */ + if (self->b_offset > count) + offset = count; + else + offset = self->b_offset; + *(char **)ptr = *(char **)ptr + offset; + if (self->b_size == Py_END_OF_BUFFER) + *size = count; + else + *size = self->b_size; + if (offset + *size > count) + *size = count - offset; + } + return 1; } static PyObject * 
buffer_from_memory(PyObject *base, Py_ssize_t size, Py_ssize_t offset, void *ptr, - int readonly) + int readonly) { - PyBufferObject * b; + PyBufferObject * b; - if (size < 0 && size != Py_END_OF_BUFFER) { - PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); - return NULL; - } - if (offset < 0) { - PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); - return NULL; - } + if (size < 0 && size != Py_END_OF_BUFFER) { + PyErr_SetString(PyExc_ValueError, + "size must be zero or positive"); + return NULL; + } + if (offset < 0) { + PyErr_SetString(PyExc_ValueError, + "offset must be zero or positive"); + return NULL; + } - b = PyObject_NEW(PyBufferObject, &PyBuffer_Type); - if ( b == NULL ) - return NULL; + b = PyObject_NEW(PyBufferObject, &PyBuffer_Type); + if ( b == NULL ) + return NULL; - Py_XINCREF(base); - b->b_base = base; - b->b_ptr = ptr; - b->b_size = size; - b->b_offset = offset; - b->b_readonly = readonly; - b->b_hash = -1; + Py_XINCREF(base); + b->b_base = base; + b->b_ptr = ptr; + b->b_size = size; + b->b_offset = offset; + b->b_readonly = readonly; + b->b_hash = -1; - return (PyObject *) b; + return (PyObject *) b; } static PyObject * buffer_from_object(PyObject *base, Py_ssize_t size, Py_ssize_t offset, int readonly) { - if (offset < 0) { - PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); - return NULL; - } - if ( PyBuffer_Check(base) && (((PyBufferObject *)base)->b_base) ) { - /* another buffer, refer to the base object */ - PyBufferObject *b = (PyBufferObject *)base; - if (b->b_size != Py_END_OF_BUFFER) { - Py_ssize_t base_size = b->b_size - offset; - if (base_size < 0) - base_size = 0; - if (size == Py_END_OF_BUFFER || size > base_size) - size = base_size; - } - offset += b->b_offset; - base = b->b_base; - } - return buffer_from_memory(base, size, offset, NULL, readonly); + if (offset < 0) { + PyErr_SetString(PyExc_ValueError, + "offset must be zero or positive"); + return NULL; + } + if ( 
PyBuffer_Check(base) && (((PyBufferObject *)base)->b_base) ) { + /* another buffer, refer to the base object */ + PyBufferObject *b = (PyBufferObject *)base; + if (b->b_size != Py_END_OF_BUFFER) { + Py_ssize_t base_size = b->b_size - offset; + if (base_size < 0) + base_size = 0; + if (size == Py_END_OF_BUFFER || size > base_size) + size = base_size; + } + offset += b->b_offset; + base = b->b_base; + } + return buffer_from_memory(base, size, offset, NULL, readonly); } PyObject * PyBuffer_FromObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size) { - PyBufferProcs *pb = base->ob_type->tp_as_buffer; + PyBufferProcs *pb = base->ob_type->tp_as_buffer; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_SetString(PyExc_TypeError, "buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_SetString(PyExc_TypeError, "buffer object expected"); + return NULL; + } - return buffer_from_object(base, size, offset, 1); + return buffer_from_object(base, size, offset, 1); } PyObject * PyBuffer_FromReadWriteObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size) { - PyBufferProcs *pb = base->ob_type->tp_as_buffer; + PyBufferProcs *pb = base->ob_type->tp_as_buffer; - if ( pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_SetString(PyExc_TypeError, "buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getwritebuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_SetString(PyExc_TypeError, "buffer object expected"); + return NULL; + } - return buffer_from_object(base, size, offset, 0); + return buffer_from_object(base, size, offset, 0); } PyObject * PyBuffer_FromMemory(void *ptr, Py_ssize_t size) { - return buffer_from_memory(NULL, size, 0, ptr, 1); + return buffer_from_memory(NULL, size, 0, ptr, 1); } PyObject * PyBuffer_FromReadWriteMemory(void *ptr, Py_ssize_t size) { - return 
buffer_from_memory(NULL, size, 0, ptr, 0); + return buffer_from_memory(NULL, size, 0, ptr, 0); } PyObject * PyBuffer_New(Py_ssize_t size) { - PyObject *o; - PyBufferObject * b; + PyObject *o; + PyBufferObject * b; - if (size < 0) { - PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); - return NULL; - } - if (sizeof(*b) > PY_SSIZE_T_MAX - size) { - /* unlikely */ - return PyErr_NoMemory(); - } - /* Inline PyObject_New */ - o = (PyObject *)PyObject_MALLOC(sizeof(*b) + size); - if ( o == NULL ) - return PyErr_NoMemory(); - b = (PyBufferObject *) PyObject_INIT(o, &PyBuffer_Type); + if (size < 0) { + PyErr_SetString(PyExc_ValueError, + "size must be zero or positive"); + return NULL; + } + if (sizeof(*b) > PY_SSIZE_T_MAX - size) { + /* unlikely */ + return PyErr_NoMemory(); + } + /* Inline PyObject_New */ + o = (PyObject *)PyObject_MALLOC(sizeof(*b) + size); + if ( o == NULL ) + return PyErr_NoMemory(); + b = (PyBufferObject *) PyObject_INIT(o, &PyBuffer_Type); - b->b_base = NULL; - b->b_ptr = (void *)(b + 1); - b->b_size = size; - b->b_offset = 0; - b->b_readonly = 0; - b->b_hash = -1; + b->b_base = NULL; + b->b_ptr = (void *)(b + 1); + b->b_size = size; + b->b_offset = 0; + b->b_readonly = 0; + b->b_hash = -1; - return o; + return o; } /* Methods */ @@ -221,19 +221,21 @@ static PyObject * buffer_new(PyTypeObject *type, PyObject *args, PyObject *kw) { - PyObject *ob; - Py_ssize_t offset = 0; - Py_ssize_t size = Py_END_OF_BUFFER; + PyObject *ob; + Py_ssize_t offset = 0; + Py_ssize_t size = Py_END_OF_BUFFER; - /*if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) - return NULL;*/ - - if (!_PyArg_NoKeywords("buffer()", kw)) - return NULL; + /* + * if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) + * return NULL; + */ - if (!PyArg_ParseTuple(args, "O|nn:buffer", &ob, &offset, &size)) - return NULL; - return PyBuffer_FromObject(ob, offset, size); + if (!_PyArg_NoKeywords("buffer()", kw)) + return NULL; + + if (!PyArg_ParseTuple(args, 
"O|nn:buffer", &ob, &offset, &size)) + return NULL; + return PyBuffer_FromObject(ob, offset, size); } PyDoc_STRVAR(buffer_doc, @@ -248,99 +250,99 @@ static void buffer_dealloc(PyBufferObject *self) { - Py_XDECREF(self->b_base); - PyObject_DEL(self); + Py_XDECREF(self->b_base); + PyObject_DEL(self); } static int buffer_compare(PyBufferObject *self, PyBufferObject *other) { - void *p1, *p2; - Py_ssize_t len_self, len_other, min_len; - int cmp; + void *p1, *p2; + Py_ssize_t len_self, len_other, min_len; + int cmp; - if (!get_buf(self, &p1, &len_self, ANY_BUFFER)) - return -1; - if (!get_buf(other, &p2, &len_other, ANY_BUFFER)) - return -1; - min_len = (len_self < len_other) ? len_self : len_other; - if (min_len > 0) { - cmp = memcmp(p1, p2, min_len); - if (cmp != 0) - return cmp < 0 ? -1 : 1; - } - return (len_self < len_other) ? -1 : (len_self > len_other) ? 1 : 0; + if (!get_buf(self, &p1, &len_self, ANY_BUFFER)) + return -1; + if (!get_buf(other, &p2, &len_other, ANY_BUFFER)) + return -1; + min_len = (len_self < len_other) ? len_self : len_other; + if (min_len > 0) { + cmp = memcmp(p1, p2, min_len); + if (cmp != 0) + return cmp < 0 ? -1 : 1; + } + return (len_self < len_other) ? -1 : (len_self > len_other) ? 1 : 0; } static PyObject * buffer_repr(PyBufferObject *self) { - const char *status = self->b_readonly ? "read-only" : "read-write"; + const char *status = self->b_readonly ? 
"read-only" : "read-write"; if ( self->b_base == NULL ) - return PyString_FromFormat("<%s buffer ptr %p, size %zd at %p>", - status, - self->b_ptr, - self->b_size, - self); - else - return PyString_FromFormat( - "<%s buffer for %p, size %zd, offset %zd at %p>", - status, - self->b_base, - self->b_size, - self->b_offset, - self); + return PyString_FromFormat("<%s buffer ptr %p, size %zd at %p>", + status, + self->b_ptr, + self->b_size, + self); + else + return PyString_FromFormat( + "<%s buffer for %p, size %zd, offset %zd at %p>", + status, + self->b_base, + self->b_size, + self->b_offset, + self); } static long buffer_hash(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - register Py_ssize_t len; - register unsigned char *p; - register long x; + void *ptr; + Py_ssize_t size; + register Py_ssize_t len; + register unsigned char *p; + register long x; - if ( self->b_hash != -1 ) - return self->b_hash; + if ( self->b_hash != -1 ) + return self->b_hash; - /* XXX potential bugs here, a readonly buffer does not imply that the - * underlying memory is immutable. b_readonly is a necessary but not - * sufficient condition for a buffer to be hashable. Perhaps it would - * be better to only allow hashing if the underlying object is known to - * be immutable (e.g. PyString_Check() is true). Another idea would - * be to call tp_hash on the underlying object and see if it raises - * an error. */ - if ( !self->b_readonly ) - { - PyErr_SetString(PyExc_TypeError, - "writable buffers are not hashable"); - return -1; - } + /* XXX potential bugs here, a readonly buffer does not imply that the + * underlying memory is immutable. b_readonly is a necessary but not + * sufficient condition for a buffer to be hashable. Perhaps it would + * be better to only allow hashing if the underlying object is known to + * be immutable (e.g. PyString_Check() is true). Another idea would + * be to call tp_hash on the underlying object and see if it raises + * an error. 
*/ + if ( !self->b_readonly ) + { + PyErr_SetString(PyExc_TypeError, + "writable buffers are not hashable"); + return -1; + } - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - p = (unsigned char *) ptr; - len = size; - x = *p << 7; - while (--len >= 0) - x = (1000003*x) ^ *p++; - x ^= size; - if (x == -1) - x = -2; - self->b_hash = x; - return x; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return -1; + p = (unsigned char *) ptr; + len = size; + x = *p << 7; + while (--len >= 0) + x = (1000003*x) ^ *p++; + x ^= size; + if (x == -1) + x = -2; + self->b_hash = x; + return x; } static PyObject * buffer_str(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - return PyString_FromStringAndSize((const char *)ptr, size); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + return PyString_FromStringAndSize((const char *)ptr, size); } /* Sequence methods */ @@ -348,374 +350,372 @@ static Py_ssize_t buffer_length(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - return size; + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return -1; + return size; } static PyObject * buffer_concat(PyBufferObject *self, PyObject *other) { - PyBufferProcs *pb = other->ob_type->tp_as_buffer; - void *ptr1, *ptr2; - char *p; - PyObject *ob; - Py_ssize_t size, count; + PyBufferProcs *pb = other->ob_type->tp_as_buffer; + void *ptr1, *ptr2; + char *p; + PyObject *ob; + Py_ssize_t size, count; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return NULL; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? 
*/ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return NULL; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? */ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return NULL; + } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return NULL; - - /* optimize special case */ - if ( size == 0 ) - { - Py_INCREF(other); - return other; - } + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return NULL; - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return NULL; + /* optimize special case */ + if ( size == 0 ) + { + Py_INCREF(other); + return other; + } - assert(count <= PY_SIZE_MAX - size); + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return NULL; - ob = PyString_FromStringAndSize(NULL, size + count); - if ( ob == NULL ) - return NULL; - p = PyString_AS_STRING(ob); - memcpy(p, ptr1, size); - memcpy(p + size, ptr2, count); + assert(count <= PY_SIZE_MAX - size); - /* there is an extra byte in the string object, so this is safe */ - p[size + count] = '\0'; + ob = PyString_FromStringAndSize(NULL, size + count); + if ( ob == NULL ) + return NULL; + p = PyString_AS_STRING(ob); + memcpy(p, ptr1, size); + memcpy(p + size, ptr2, count); - return ob; + /* there is an extra byte in the string object, so this is safe */ + p[size + count] = '\0'; + + return ob; } static PyObject * buffer_repeat(PyBufferObject *self, Py_ssize_t count) { - PyObject *ob; - register char *p; - void *ptr; - Py_ssize_t size; + PyObject *ob; + register char *p; + void *ptr; + Py_ssize_t size; - if ( count < 0 ) - count = 0; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if (count > PY_SSIZE_T_MAX / size) { - PyErr_SetString(PyExc_MemoryError, "result too large"); - return NULL; - } - ob = 
PyString_FromStringAndSize(NULL, size * count); - if ( ob == NULL ) - return NULL; + if ( count < 0 ) + count = 0; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if (count > PY_SSIZE_T_MAX / size) { + PyErr_SetString(PyExc_MemoryError, "result too large"); + return NULL; + } + ob = PyString_FromStringAndSize(NULL, size * count); + if ( ob == NULL ) + return NULL; - p = PyString_AS_STRING(ob); - while ( count-- ) - { - memcpy(p, ptr, size); - p += size; - } + p = PyString_AS_STRING(ob); + while ( count-- ) + { + memcpy(p, ptr, size); + p += size; + } - /* there is an extra byte in the string object, so this is safe */ - *p = '\0'; + /* there is an extra byte in the string object, so this is safe */ + *p = '\0'; - return ob; + return ob; } static PyObject * buffer_item(PyBufferObject *self, Py_ssize_t idx) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if ( idx < 0 || idx >= size ) { - PyErr_SetString(PyExc_IndexError, "buffer index out of range"); - return NULL; - } - return PyString_FromStringAndSize((char *)ptr + idx, 1); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if ( idx < 0 || idx >= size ) { + PyErr_SetString(PyExc_IndexError, "buffer index out of range"); + return NULL; + } + return PyString_FromStringAndSize((char *)ptr + idx, 1); } static PyObject * buffer_slice(PyBufferObject *self, Py_ssize_t left, Py_ssize_t right) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if ( left < 0 ) - left = 0; - if ( right < 0 ) - right = 0; - if ( right > size ) - right = size; - if ( right < left ) - right = left; - return PyString_FromStringAndSize((char *)ptr + left, - right - left); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if ( left < 0 ) + left = 0; + if ( right < 0 ) + right = 0; + if ( right > size ) + right = size; + if ( right < left ) + right 
= left; + return PyString_FromStringAndSize((char *)ptr + left, + right - left); } static PyObject * buffer_subscript(PyBufferObject *self, PyObject *item) { - void *p; - Py_ssize_t size; - - if (!get_buf(self, &p, &size, ANY_BUFFER)) - return NULL; - + void *p; + Py_ssize_t size; + + if (!get_buf(self, &p, &size, ANY_BUFFER)) + return NULL; if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i == -1 && PyErr_Occurred()) - return NULL; - if (i < 0) - i += size; - return buffer_item(self, i); - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelength, cur, i; + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); + if (i == -1 && PyErr_Occurred()) + return NULL; + if (i < 0) + i += size; + return buffer_item(self, i); + } + else if (PySlice_Check(item)) { + Py_ssize_t start, stop, step, slicelength, cur, i; - if (PySlice_GetIndicesEx((PySliceObject*)item, size, - &start, &stop, &step, &slicelength) < 0) { - return NULL; - } + if (PySlice_GetIndicesEx((PySliceObject*)item, size, + &start, &stop, &step, &slicelength) < 0) { + return NULL; + } - if (slicelength <= 0) - return PyString_FromStringAndSize("", 0); - else if (step == 1) - return PyString_FromStringAndSize((char *)p + start, - stop - start); - else { - PyObject *result; - char *source_buf = (char *)p; - char *result_buf = (char *)PyMem_Malloc(slicelength); + if (slicelength <= 0) + return PyString_FromStringAndSize("", 0); + else if (step == 1) + return PyString_FromStringAndSize((char *)p + start, + stop - start); + else { + PyObject *result; + char *source_buf = (char *)p; + char *result_buf = (char *)PyMem_Malloc(slicelength); - if (result_buf == NULL) - return PyErr_NoMemory(); + if (result_buf == NULL) + return PyErr_NoMemory(); - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - result_buf[i] = source_buf[cur]; - } + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + result_buf[i] = source_buf[cur]; + 
} - result = PyString_FromStringAndSize(result_buf, - slicelength); - PyMem_Free(result_buf); - return result; - } - } - else { - PyErr_SetString(PyExc_TypeError, - "sequence index must be integer"); - return NULL; - } + result = PyString_FromStringAndSize(result_buf, + slicelength); + PyMem_Free(result_buf); + return result; + } + } + else { + PyErr_SetString(PyExc_TypeError, + "sequence index must be integer"); + return NULL; + } } static int buffer_ass_item(PyBufferObject *self, Py_ssize_t idx, PyObject *other) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t size; - Py_ssize_t count; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t size; + Py_ssize_t count; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return -1; + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return -1; - if (idx < 0 || idx >= size) { - PyErr_SetString(PyExc_IndexError, - "buffer assignment index out of range"); - return -1; - } + if (idx < 0 || idx >= size) { + PyErr_SetString(PyExc_IndexError, + "buffer assignment index out of range"); + return -1; + } - pb = other ? other->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } + pb = other ? other->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? 
*/ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return -1; - if ( count != 1 ) { - PyErr_SetString(PyExc_TypeError, - "right operand must be a single byte"); - return -1; - } + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return -1; + if ( count != 1 ) { + PyErr_SetString(PyExc_TypeError, + "right operand must be a single byte"); + return -1; + } - ((char *)ptr1)[idx] = *(char *)ptr2; - return 0; + ((char *)ptr1)[idx] = *(char *)ptr2; + return 0; } static int buffer_ass_slice(PyBufferObject *self, Py_ssize_t left, Py_ssize_t right, PyObject *other) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t size; - Py_ssize_t slice_len; - Py_ssize_t count; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t size; + Py_ssize_t slice_len; + Py_ssize_t count; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - pb = other ? other->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return -1; - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return -1; + pb = other ? other->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? 
*/ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return -1; + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return -1; - if ( left < 0 ) - left = 0; - else if ( left > size ) - left = size; - if ( right < left ) - right = left; - else if ( right > size ) - right = size; - slice_len = right - left; + if ( left < 0 ) + left = 0; + else if ( left > size ) + left = size; + if ( right < left ) + right = left; + else if ( right > size ) + right = size; + slice_len = right - left; - if ( count != slice_len ) { - PyErr_SetString( - PyExc_TypeError, - "right operand length must match slice length"); - return -1; - } + if ( count != slice_len ) { + PyErr_SetString( + PyExc_TypeError, + "right operand length must match slice length"); + return -1; + } - if ( slice_len ) - memcpy((char *)ptr1 + left, ptr2, slice_len); + if ( slice_len ) + memcpy((char *)ptr1 + left, ptr2, slice_len); - return 0; + return 0; } static int buffer_ass_subscript(PyBufferObject *self, PyObject *item, PyObject *value) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t selfsize; - Py_ssize_t othersize; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t selfsize; + Py_ssize_t othersize; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - pb = value ? value->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(value, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } - if (!get_buf(self, &ptr1, &selfsize, ANY_BUFFER)) - return -1; - + pb = value ? 
value->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(value, NULL) != 1 ) + { + /* ### use a different exception type/message? */ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } + if (!get_buf(self, &ptr1, &selfsize, ANY_BUFFER)) + return -1; if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i == -1 && PyErr_Occurred()) - return -1; - if (i < 0) - i += selfsize; - return buffer_ass_item(self, i, value); - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelength; - - if (PySlice_GetIndicesEx((PySliceObject *)item, selfsize, - &start, &stop, &step, &slicelength) < 0) - return -1; + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); + if (i == -1 && PyErr_Occurred()) + return -1; + if (i < 0) + i += selfsize; + return buffer_ass_item(self, i, value); + } + else if (PySlice_Check(item)) { + Py_ssize_t start, stop, step, slicelength; - if ((othersize = (*pb->bf_getreadbuffer)(value, 0, &ptr2)) < 0) - return -1; + if (PySlice_GetIndicesEx((PySliceObject *)item, selfsize, + &start, &stop, &step, &slicelength) < 0) + return -1; - if (othersize != slicelength) { - PyErr_SetString( - PyExc_TypeError, - "right operand length must match slice length"); - return -1; - } + if ((othersize = (*pb->bf_getreadbuffer)(value, 0, &ptr2)) < 0) + return -1; - if (slicelength == 0) - return 0; - else if (step == 1) { - memcpy((char *)ptr1 + start, ptr2, slicelength); - return 0; - } - else { - Py_ssize_t cur, i; - - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - ((char *)ptr1)[cur] = ((char *)ptr2)[i]; - } + if (othersize != slicelength) { + PyErr_SetString( + PyExc_TypeError, + "right operand length must match slice length"); + return -1; + } - return 0; - } - } else { - PyErr_SetString(PyExc_TypeError, - 
"buffer indices must be integers"); - return -1; - } + if (slicelength == 0) + return 0; + else if (step == 1) { + memcpy((char *)ptr1 + start, ptr2, slicelength); + return 0; + } + else { + Py_ssize_t cur, i; + + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + ((char *)ptr1)[cur] = ((char *)ptr2)[i]; + } + + return 0; + } + } else { + PyErr_SetString(PyExc_TypeError, + "buffer indices must be integers"); + return -1; + } } /* Buffer methods */ @@ -723,64 +723,64 @@ static Py_ssize_t buffer_getreadbuf(PyBufferObject *self, Py_ssize_t idx, void **pp) { - Py_ssize_t size; - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, pp, &size, READ_BUFFER)) - return -1; - return size; + Py_ssize_t size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, pp, &size, READ_BUFFER)) + return -1; + return size; } static Py_ssize_t buffer_getwritebuf(PyBufferObject *self, Py_ssize_t idx, void **pp) { - Py_ssize_t size; + Py_ssize_t size; - if ( self->b_readonly ) - { - PyErr_SetString(PyExc_TypeError, "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) + { + PyErr_SetString(PyExc_TypeError, "buffer is read-only"); + return -1; + } - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, pp, &size, WRITE_BUFFER)) - return -1; - return size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, pp, &size, WRITE_BUFFER)) + return -1; + return size; } static Py_ssize_t buffer_getsegcount(PyBufferObject *self, Py_ssize_t *lenp) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - if (lenp) - *lenp = size; - return 1; + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, 
&size, ANY_BUFFER)) + return -1; + if (lenp) + *lenp = size; + return 1; } static Py_ssize_t buffer_getcharbuf(PyBufferObject *self, Py_ssize_t idx, const char **pp) { - void *ptr; - Py_ssize_t size; - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, &ptr, &size, CHAR_BUFFER)) - return -1; - *pp = (const char *)ptr; - return size; + void *ptr; + Py_ssize_t size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, &ptr, &size, CHAR_BUFFER)) + return -1; + *pp = (const char *)ptr; + return size; } void init_bufferobject(void) @@ -789,67 +789,65 @@ } static PySequenceMethods buffer_as_sequence = { - (lenfunc)buffer_length, /*sq_length*/ - (binaryfunc)buffer_concat, /*sq_concat*/ - (ssizeargfunc)buffer_repeat, /*sq_repeat*/ - (ssizeargfunc)buffer_item, /*sq_item*/ - (ssizessizeargfunc)buffer_slice, /*sq_slice*/ - (ssizeobjargproc)buffer_ass_item, /*sq_ass_item*/ - (ssizessizeobjargproc)buffer_ass_slice, /*sq_ass_slice*/ + (lenfunc)buffer_length, /*sq_length*/ + (binaryfunc)buffer_concat, /*sq_concat*/ + (ssizeargfunc)buffer_repeat, /*sq_repeat*/ + (ssizeargfunc)buffer_item, /*sq_item*/ + (ssizessizeargfunc)buffer_slice, /*sq_slice*/ + (ssizeobjargproc)buffer_ass_item, /*sq_ass_item*/ + (ssizessizeobjargproc)buffer_ass_slice, /*sq_ass_slice*/ }; static PyMappingMethods buffer_as_mapping = { - (lenfunc)buffer_length, - (binaryfunc)buffer_subscript, - (objobjargproc)buffer_ass_subscript, + (lenfunc)buffer_length, + (binaryfunc)buffer_subscript, + (objobjargproc)buffer_ass_subscript, }; static PyBufferProcs buffer_as_buffer = { - (readbufferproc)buffer_getreadbuf, - (writebufferproc)buffer_getwritebuf, - (segcountproc)buffer_getsegcount, - (charbufferproc)buffer_getcharbuf, + (readbufferproc)buffer_getreadbuf, + (writebufferproc)buffer_getwritebuf, + (segcountproc)buffer_getsegcount, + 
(charbufferproc)buffer_getcharbuf, }; PyTypeObject PyBuffer_Type = { - PyObject_HEAD_INIT(NULL) + PyVarObject_HEAD_INIT(NULL, 0) + "buffer", + sizeof(PyBufferObject), 0, - "buffer", - sizeof(PyBufferObject), - 0, - (destructor)buffer_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - (cmpfunc)buffer_compare, /* tp_compare */ - (reprfunc)buffer_repr, /* tp_repr */ - 0, /* tp_as_number */ - &buffer_as_sequence, /* tp_as_sequence */ - &buffer_as_mapping, /* tp_as_mapping */ - (hashfunc)buffer_hash, /* tp_hash */ - 0, /* tp_call */ - (reprfunc)buffer_str, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - &buffer_as_buffer, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GETCHARBUFFER, /* tp_flags */ - buffer_doc, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - buffer_new, /* tp_new */ + (destructor)buffer_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + (cmpfunc)buffer_compare, /* tp_compare */ + (reprfunc)buffer_repr, /* tp_repr */ + 0, /* tp_as_number */ + &buffer_as_sequence, /* tp_as_sequence */ + &buffer_as_mapping, /* tp_as_mapping */ + (hashfunc)buffer_hash, /* tp_hash */ + 0, /* tp_call */ + (reprfunc)buffer_str, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + &buffer_as_buffer, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GETCHARBUFFER, /* tp_flags */ + buffer_doc, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* 
tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + buffer_new, /* tp_new */ }; - diff --git a/pypy/module/cpyext/src/cobject.c b/pypy/module/cpyext/src/cobject.c --- a/pypy/module/cpyext/src/cobject.c +++ b/pypy/module/cpyext/src/cobject.c @@ -50,6 +50,10 @@ PyCObject_AsVoidPtr(PyObject *self) { if (self) { + if (PyCapsule_CheckExact(self)) { + const char *name = PyCapsule_GetName(self); + return (void *)PyCapsule_GetPointer(self, name); + } if (self->ob_type == &PyCObject_Type) return ((PyCObject *)self)->cobject; PyErr_SetString(PyExc_TypeError, diff --git a/pypy/module/cpyext/src/getargs.c b/pypy/module/cpyext/src/getargs.c --- a/pypy/module/cpyext/src/getargs.c +++ b/pypy/module/cpyext/src/getargs.c @@ -7,349 +7,348 @@ #ifdef __cplusplus -extern "C" { +extern "C" { #endif - int PyArg_Parse(PyObject *, const char *, ...); int PyArg_ParseTuple(PyObject *, const char *, ...); int PyArg_VaParse(PyObject *, const char *, va_list); int PyArg_ParseTupleAndKeywords(PyObject *, PyObject *, - const char *, char **, ...); + const char *, char **, ...); int PyArg_VaParseTupleAndKeywords(PyObject *, PyObject *, - const char *, char **, va_list); + const char *, char **, va_list); #define FLAG_COMPAT 1 #define FLAG_SIZE_T 2 -typedef int (*destr_t)(PyObject *, void *); - - -/* Keep track of "objects" that have been allocated or initialized and - which will need to be deallocated or cleaned up somehow if overall - parsing fails. 
-*/ -typedef struct { - void *item; - destr_t destructor; -} freelistentry_t; - -typedef struct { - int first_available; - freelistentry_t *entries; -} freelist_t; - /* Forward */ static int vgetargs1(PyObject *, const char *, va_list *, int); static void seterror(int, const char *, int *, const char *, const char *); -static char *convertitem(PyObject *, const char **, va_list *, int, int *, - char *, size_t, freelist_t *); +static char *convertitem(PyObject *, const char **, va_list *, int, int *, + char *, size_t, PyObject **); static char *converttuple(PyObject *, const char **, va_list *, int, - int *, char *, size_t, int, freelist_t *); + int *, char *, size_t, int, PyObject **); static char *convertsimple(PyObject *, const char **, va_list *, int, char *, - size_t, freelist_t *); + size_t, PyObject **); static Py_ssize_t convertbuffer(PyObject *, void **p, char **); static int getbuffer(PyObject *, Py_buffer *, char**); static int vgetargskeywords(PyObject *, PyObject *, - const char *, char **, va_list *, int); + const char *, char **, va_list *, int); static char *skipitem(const char **, va_list *, int); int PyArg_Parse(PyObject *args, const char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_COMPAT); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_COMPAT); + va_end(va); + return retval; } int _PyArg_Parse_SizeT(PyObject *args, char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_ParseTuple(PyObject *args, const char *format, ...) 
{ - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, 0); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, 0); + va_end(va); + return retval; } int _PyArg_ParseTuple_SizeT(PyObject *args, char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_SIZE_T); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_VaParse(PyObject *args, const char *format, va_list va) { - va_list lva; + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - return vgetargs1(args, format, &lva, 0); + return vgetargs1(args, format, &lva, 0); } int _PyArg_VaParse_SizeT(PyObject *args, char *format, va_list va) { - va_list lva; + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - return vgetargs1(args, format, &lva, FLAG_SIZE_T); + return vgetargs1(args, format, &lva, FLAG_SIZE_T); } /* Handle cleanup of allocated memory in case of exception */ +#define GETARGS_CAPSULE_NAME_CLEANUP_PTR "getargs.cleanup_ptr" +#define GETARGS_CAPSULE_NAME_CLEANUP_BUFFER "getargs.cleanup_buffer" + +static void +cleanup_ptr(PyObject *self) +{ + void *ptr = PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_PTR); + if (ptr) { + PyMem_FREE(ptr); + } +} + +static void +cleanup_buffer(PyObject *self) +{ + Py_buffer *ptr = (Py_buffer *)PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_BUFFER); + if (ptr) { + PyBuffer_Release(ptr); + } +} + static int -cleanup_ptr(PyObject 
*self, void *ptr) +addcleanup(void *ptr, PyObject **freelist, PyCapsule_Destructor destr) { - if (ptr) { - PyMem_FREE(ptr); + PyObject *cobj; + const char *name; + + if (!*freelist) { + *freelist = PyList_New(0); + if (!*freelist) { + destr(ptr); + return -1; + } } + + if (destr == cleanup_ptr) { + name = GETARGS_CAPSULE_NAME_CLEANUP_PTR; + } else if (destr == cleanup_buffer) { + name = GETARGS_CAPSULE_NAME_CLEANUP_BUFFER; + } else { + return -1; + } + cobj = PyCapsule_New(ptr, name, destr); + if (!cobj) { + destr(ptr); + return -1; + } + if (PyList_Append(*freelist, cobj)) { + Py_DECREF(cobj); + return -1; + } + Py_DECREF(cobj); return 0; } static int -cleanup_buffer(PyObject *self, void *ptr) +cleanreturn(int retval, PyObject *freelist) { - Py_buffer *buf = (Py_buffer *)ptr; - if (buf) { - PyBuffer_Release(buf); + if (freelist && retval != 0) { + /* We were successful, reset the destructors so that they + don't get called. */ + Py_ssize_t len = PyList_GET_SIZE(freelist), i; + for (i = 0; i < len; i++) + PyCapsule_SetDestructor(PyList_GET_ITEM(freelist, i), NULL); } - return 0; + Py_XDECREF(freelist); + return retval; } -static int -addcleanup(void *ptr, freelist_t *freelist, destr_t destructor) -{ - int index; - - index = freelist->first_available; - freelist->first_available += 1; - - freelist->entries[index].item = ptr; - freelist->entries[index].destructor = destructor; - - return 0; -} - -static int -cleanreturn(int retval, freelist_t *freelist) -{ - int index; - - if (retval == 0) { - /* A failure occurred, therefore execute all of the cleanup - functions. 
- */ - for (index = 0; index < freelist->first_available; ++index) { - freelist->entries[index].destructor(NULL, - freelist->entries[index].item); - } - } - PyMem_Free(freelist->entries); - return retval; -} static int vgetargs1(PyObject *args, const char *format, va_list *p_va, int flags) { - char msgbuf[256]; - int levels[32]; - const char *fname = NULL; - const char *message = NULL; - int min = -1; - int max = 0; - int level = 0; - int endfmt = 0; - const char *formatsave = format; - Py_ssize_t i, len; - char *msg; - freelist_t freelist = {0, NULL}; - int compat = flags & FLAG_COMPAT; + char msgbuf[256]; + int levels[32]; + const char *fname = NULL; + const char *message = NULL; + int min = -1; + int max = 0; + int level = 0; + int endfmt = 0; + const char *formatsave = format; + Py_ssize_t i, len; + char *msg; + PyObject *freelist = NULL; + int compat = flags & FLAG_COMPAT; - assert(compat || (args != (PyObject*)NULL)); - flags = flags & ~FLAG_COMPAT; + assert(compat || (args != (PyObject*)NULL)); + flags = flags & ~FLAG_COMPAT; - while (endfmt == 0) { - int c = *format++; - switch (c) { - case '(': - if (level == 0) - max++; - level++; - if (level >= 30) - Py_FatalError("too many tuple nesting levels " - "in argument format string"); - break; - case ')': - if (level == 0) - Py_FatalError("excess ')' in getargs format"); - else - level--; - break; - case '\0': - endfmt = 1; - break; - case ':': - fname = format; - endfmt = 1; - break; - case ';': - message = format; - endfmt = 1; - break; - default: - if (level == 0) { - if (c == 'O') - max++; - else if (isalpha(Py_CHARMASK(c))) { - if (c != 'e') /* skip encoded */ - max++; - } else if (c == '|') - min = max; - } - break; - } - } - - if (level != 0) - Py_FatalError(/* '(' */ "missing ')' in getargs format"); - - if (min < 0) - min = max; - - format = formatsave; - - freelist.entries = PyMem_New(freelistentry_t, max); + while (endfmt == 0) { + int c = *format++; + switch (c) { + case '(': + if (level == 0) + 
max++; + level++; + if (level >= 30) + Py_FatalError("too many tuple nesting levels " + "in argument format string"); + break; + case ')': + if (level == 0) + Py_FatalError("excess ')' in getargs format"); + else + level--; + break; + case '\0': + endfmt = 1; + break; + case ':': + fname = format; + endfmt = 1; + break; + case ';': + message = format; + endfmt = 1; + break; + default: + if (level == 0) { + if (c == 'O') + max++; + else if (isalpha(Py_CHARMASK(c))) { + if (c != 'e') /* skip encoded */ + max++; + } else if (c == '|') + min = max; + } + break; + } + } - if (compat) { - if (max == 0) { - if (args == NULL) - return cleanreturn(1, &freelist); - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.200s%s takes no arguments", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()"); - PyErr_SetString(PyExc_TypeError, msgbuf); - return cleanreturn(0, &freelist); - } - else if (min == 1 && max == 1) { - if (args == NULL) { - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.200s%s takes at least one argument", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()"); - PyErr_SetString(PyExc_TypeError, msgbuf); - return cleanreturn(0, &freelist); - } - msg = convertitem(args, &format, p_va, flags, levels, - msgbuf, sizeof(msgbuf), &freelist); - if (msg == NULL) - return cleanreturn(1, &freelist); - seterror(levels[0], msg, levels+1, fname, message); - return cleanreturn(0, &freelist); - } - else { - PyErr_SetString(PyExc_SystemError, - "old style getargs format uses new features"); - return cleanreturn(0, &freelist); - } - } - - if (!PyTuple_Check(args)) { - PyErr_SetString(PyExc_SystemError, - "new style getargs format but argument is not a tuple"); - return cleanreturn(0, &freelist); - } - - len = PyTuple_GET_SIZE(args); - - if (len < min || max < len) { - if (message == NULL) { - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.150s%s takes %s %d argument%s " - "(%ld given)", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()", - min==max ? 
"exactly" - : len < min ? "at least" : "at most", - len < min ? min : max, - (len < min ? min : max) == 1 ? "" : "s", - Py_SAFE_DOWNCAST(len, Py_ssize_t, long)); - message = msgbuf; - } - PyErr_SetString(PyExc_TypeError, message); - return cleanreturn(0, &freelist); - } - - for (i = 0; i < len; i++) { - if (*format == '|') - format++; - msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va, - flags, levels, msgbuf, - sizeof(msgbuf), &freelist); - if (msg) { - seterror(i+1, msg, levels, fname, message); - return cleanreturn(0, &freelist); - } - } + if (level != 0) + Py_FatalError(/* '(' */ "missing ')' in getargs format"); - if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) && - *format != '(' && - *format != '|' && *format != ':' && *format != ';') { - PyErr_Format(PyExc_SystemError, - "bad format string: %.200s", formatsave); - return cleanreturn(0, &freelist); - } - - return cleanreturn(1, &freelist); + if (min < 0) + min = max; + + format = formatsave; + + if (compat) { + if (max == 0) { + if (args == NULL) + return 1; + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.200s%s takes no arguments", + fname==NULL ? "function" : fname, + fname==NULL ? "" : "()"); + PyErr_SetString(PyExc_TypeError, msgbuf); + return 0; + } + else if (min == 1 && max == 1) { + if (args == NULL) { + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.200s%s takes at least one argument", + fname==NULL ? "function" : fname, + fname==NULL ? 
"" : "()"); + PyErr_SetString(PyExc_TypeError, msgbuf); + return 0; + } + msg = convertitem(args, &format, p_va, flags, levels, + msgbuf, sizeof(msgbuf), &freelist); + if (msg == NULL) + return cleanreturn(1, freelist); + seterror(levels[0], msg, levels+1, fname, message); + return cleanreturn(0, freelist); + } + else { + PyErr_SetString(PyExc_SystemError, + "old style getargs format uses new features"); + return 0; + } + } + + if (!PyTuple_Check(args)) { + PyErr_SetString(PyExc_SystemError, + "new style getargs format but argument is not a tuple"); + return 0; + } + + len = PyTuple_GET_SIZE(args); + + if (len < min || max < len) { + if (message == NULL) { + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.150s%s takes %s %d argument%s " + "(%ld given)", + fname==NULL ? "function" : fname, + fname==NULL ? "" : "()", + min==max ? "exactly" + : len < min ? "at least" : "at most", + len < min ? min : max, + (len < min ? min : max) == 1 ? "" : "s", + Py_SAFE_DOWNCAST(len, Py_ssize_t, long)); + message = msgbuf; + } + PyErr_SetString(PyExc_TypeError, message); + return 0; + } + + for (i = 0; i < len; i++) { + if (*format == '|') + format++; + msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va, + flags, levels, msgbuf, + sizeof(msgbuf), &freelist); + if (msg) { + seterror(i+1, msg, levels, fname, msg); + return cleanreturn(0, freelist); + } + } + + if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) && + *format != '(' && + *format != '|' && *format != ':' && *format != ';') { + PyErr_Format(PyExc_SystemError, + "bad format string: %.200s", formatsave); + return cleanreturn(0, freelist); + } + + return cleanreturn(1, freelist); } @@ -358,37 +357,37 @@ seterror(int iarg, const char *msg, int *levels, const char *fname, const char *message) { - char buf[512]; - int i; - char *p = buf; + char buf[512]; + int i; + char *p = buf; - if (PyErr_Occurred()) - return; - else if (message == NULL) { - if (fname != NULL) { - PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname); - p 
+= strlen(p); - } - if (iarg != 0) { - PyOS_snprintf(p, sizeof(buf) - (p - buf), - "argument %d", iarg); - i = 0; - p += strlen(p); - while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) { - PyOS_snprintf(p, sizeof(buf) - (p - buf), - ", item %d", levels[i]-1); - p += strlen(p); - i++; - } - } - else { - PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument"); - p += strlen(p); - } - PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg); - message = buf; - } - PyErr_SetString(PyExc_TypeError, message); + if (PyErr_Occurred()) + return; + else if (message == NULL) { + if (fname != NULL) { + PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname); + p += strlen(p); + } + if (iarg != 0) { + PyOS_snprintf(p, sizeof(buf) - (p - buf), + "argument %d", iarg); + i = 0; + p += strlen(p); + while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) { + PyOS_snprintf(p, sizeof(buf) - (p - buf), + ", item %d", levels[i]-1); + p += strlen(p); + i++; + } + } + else { + PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument"); + p += strlen(p); + } + PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg); + message = buf; + } + PyErr_SetString(PyExc_TypeError, message); } @@ -404,85 +403,84 @@ *p_va is undefined, *levels is a 0-terminated list of item numbers, *msgbuf contains an error message, whose format is: - "must be , not ", where: - is the name of the expected type, and - is the name of the actual type, + "must be , not ", where: + is the name of the expected type, and + is the name of the actual type, and msgbuf is returned. 
*/ static char * converttuple(PyObject *arg, const char **p_format, va_list *p_va, int flags, - int *levels, char *msgbuf, size_t bufsize, int toplevel, - freelist_t *freelist) + int *levels, char *msgbuf, size_t bufsize, int toplevel, + PyObject **freelist) { - int level = 0; - int n = 0; - const char *format = *p_format; - int i; - - for (;;) { - int c = *format++; - if (c == '(') { - if (level == 0) - n++; - level++; - } - else if (c == ')') { - if (level == 0) - break; - level--; - } - else if (c == ':' || c == ';' || c == '\0') - break; - else if (level == 0 && isalpha(Py_CHARMASK(c))) - n++; - } - - if (!PySequence_Check(arg) || PyString_Check(arg)) { - levels[0] = 0; - PyOS_snprintf(msgbuf, bufsize, - toplevel ? "expected %d arguments, not %.50s" : - "must be %d-item sequence, not %.50s", - n, - arg == Py_None ? "None" : arg->ob_type->tp_name); - return msgbuf; - } - - if ((i = PySequence_Size(arg)) != n) { - levels[0] = 0; - PyOS_snprintf(msgbuf, bufsize, - toplevel ? "expected %d arguments, not %d" : - "must be sequence of length %d, not %d", - n, i); - return msgbuf; - } + int level = 0; + int n = 0; + const char *format = *p_format; + int i; - format = *p_format; - for (i = 0; i < n; i++) { - char *msg; - PyObject *item; + for (;;) { + int c = *format++; + if (c == '(') { + if (level == 0) + n++; + level++; + } + else if (c == ')') { + if (level == 0) + break; + level--; + } + else if (c == ':' || c == ';' || c == '\0') + break; + else if (level == 0 && isalpha(Py_CHARMASK(c))) + n++; + } + + if (!PySequence_Check(arg) || PyString_Check(arg)) { + levels[0] = 0; + PyOS_snprintf(msgbuf, bufsize, + toplevel ? "expected %d arguments, not %.50s" : + "must be %d-item sequence, not %.50s", + n, + arg == Py_None ? "None" : arg->ob_type->tp_name); + return msgbuf; + } + + if ((i = PySequence_Size(arg)) != n) { + levels[0] = 0; + PyOS_snprintf(msgbuf, bufsize, + toplevel ? 
"expected %d arguments, not %d" : + "must be sequence of length %d, not %d", + n, i); + return msgbuf; + } + + format = *p_format; + for (i = 0; i < n; i++) { + char *msg; + PyObject *item; item = PySequence_GetItem(arg, i); - if (item == NULL) { - PyErr_Clear(); - levels[0] = i+1; - levels[1] = 0; - strncpy(msgbuf, "is not retrievable", - bufsize); - return msgbuf; - } - PyPy_Borrow(arg, item); - msg = convertitem(item, &format, p_va, flags, levels+1, - msgbuf, bufsize, freelist); + if (item == NULL) { + PyErr_Clear(); + levels[0] = i+1; + levels[1] = 0; + strncpy(msgbuf, "is not retrievable", bufsize); + return msgbuf; + } + PyPy_Borrow(arg, item); + msg = convertitem(item, &format, p_va, flags, levels+1, + msgbuf, bufsize, freelist); /* PySequence_GetItem calls tp->sq_item, which INCREFs */ Py_XDECREF(item); - if (msg != NULL) { - levels[0] = i+1; - return msg; - } - } + if (msg != NULL) { + levels[0] = i+1; + return msg; + } + } - *p_format = format; - return NULL; + *p_format = format; + return NULL; } @@ -490,45 +488,45 @@ static char * convertitem(PyObject *arg, const char **p_format, va_list *p_va, int flags, - int *levels, char *msgbuf, size_t bufsize, freelist_t *freelist) + int *levels, char *msgbuf, size_t bufsize, PyObject **freelist) { - char *msg; - const char *format = *p_format; - - if (*format == '(' /* ')' */) { - format++; - msg = converttuple(arg, &format, p_va, flags, levels, msgbuf, - bufsize, 0, freelist); - if (msg == NULL) - format++; - } - else { - msg = convertsimple(arg, &format, p_va, flags, - msgbuf, bufsize, freelist); - if (msg != NULL) - levels[0] = 0; - } - if (msg == NULL) - *p_format = format; - return msg; + char *msg; + const char *format = *p_format; + + if (*format == '(' /* ')' */) { + format++; + msg = converttuple(arg, &format, p_va, flags, levels, msgbuf, + bufsize, 0, freelist); + if (msg == NULL) + format++; + } + else { + msg = convertsimple(arg, &format, p_va, flags, + msgbuf, bufsize, freelist); + if (msg != NULL) + 
levels[0] = 0; + } + if (msg == NULL) + *p_format = format; + return msg; } #define UNICODE_DEFAULT_ENCODING(arg) \ - _PyUnicode_AsDefaultEncodedString(arg, NULL) + _PyUnicode_AsDefaultEncodedString(arg, NULL) /* Format an error message generated by convertsimple(). */ static char * converterr(const char *expected, PyObject *arg, char *msgbuf, size_t bufsize) { - assert(expected != NULL); - assert(arg != NULL); - PyOS_snprintf(msgbuf, bufsize, - "must be %.50s, not %.50s", expected, - arg == Py_None ? "None" : arg->ob_type->tp_name); - return msgbuf; + assert(expected != NULL); + assert(arg != NULL); + PyOS_snprintf(msgbuf, bufsize, + "must be %.50s, not %.50s", expected, + arg == Py_None ? "None" : arg->ob_type->tp_name); + return msgbuf; } #define CONV_UNICODE "(unicode conversion error)" @@ -536,14 +534,28 @@ /* explicitly check for float arguments when integers are expected. For now * signal a warning. Returns true if an exception was raised. */ static int +float_argument_warning(PyObject *arg) +{ + if (PyFloat_Check(arg) && + PyErr_Warn(PyExc_DeprecationWarning, + "integer argument expected, got float" )) + return 1; + else + return 0; +} + +/* explicitly check for float arguments when integers are expected. Raises + TypeError and returns true for float arguments. */ +static int float_argument_error(PyObject *arg) { - if (PyFloat_Check(arg) && - PyErr_Warn(PyExc_DeprecationWarning, - "integer argument expected, got float" )) - return 1; - else - return 0; + if (PyFloat_Check(arg)) { + PyErr_SetString(PyExc_TypeError, + "integer argument expected, got float"); + return 1; + } + else + return 0; } /* Convert a non-tuple argument. 
Return NULL if conversion went OK, @@ -557,836 +569,839 @@ static char * convertsimple(PyObject *arg, const char **p_format, va_list *p_va, int flags, - char *msgbuf, size_t bufsize, freelist_t *freelist) + char *msgbuf, size_t bufsize, PyObject **freelist) { - /* For # codes */ -#define FETCH_SIZE int *q=NULL;Py_ssize_t *q2=NULL;\ - if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \ - else q=va_arg(*p_va, int*); -#define STORE_SIZE(s) if (flags & FLAG_SIZE_T) *q2=s; else *q=s; + /* For # codes */ +#define FETCH_SIZE int *q=NULL;Py_ssize_t *q2=NULL;\ + if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \ + else q=va_arg(*p_va, int*); +#define STORE_SIZE(s) \ + if (flags & FLAG_SIZE_T) \ + *q2=s; \ + else { \ + if (INT_MAX < s) { \ + PyErr_SetString(PyExc_OverflowError, \ + "size does not fit in an int"); \ + return converterr("", arg, msgbuf, bufsize); \ + } \ + *q=s; \ + } #define BUFFER_LEN ((flags & FLAG_SIZE_T) ? *q2:*q) - const char *format = *p_format; - char c = *format++; + const char *format = *p_format; + char c = *format++; #ifdef Py_USING_UNICODE - PyObject *uarg; -#endif - - switch (c) { - - case 'b': { /* unsigned byte -- very short int */ - char *p = va_arg(*p_va, char *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned byte integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival > UCHAR_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned byte integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else - *p = (unsigned char) ival; - break; - } - - case 'B': {/* byte sized bitfield - both signed and unsigned - values allowed */ - char *p = va_arg(*p_va, char *); - long ival; - if 
(float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsUnsignedLongMask(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = (unsigned char) ival; - break; - } - - case 'h': {/* signed short int */ - short *p = va_arg(*p_va, short *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival < SHRT_MIN) { - PyErr_SetString(PyExc_OverflowError, - "signed short integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival > SHRT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "signed short integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else - *p = (short) ival; - break; - } - - case 'H': { /* short int sized bitfield, both signed and - unsigned allowed */ - unsigned short *p = va_arg(*p_va, unsigned short *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsUnsignedLongMask(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = (unsigned short) ival; - break; - } - case 'i': {/* signed int */ - int *p = va_arg(*p_va, int *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival > INT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "signed integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival < INT_MIN) { - PyErr_SetString(PyExc_OverflowError, - "signed integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - 
else - *p = ival; - break; - } - case 'I': { /* int sized bitfield, both signed and - unsigned allowed */ - unsigned int *p = va_arg(*p_va, unsigned int *); - unsigned int ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = (unsigned int)PyInt_AsUnsignedLongMask(arg); - if (ival == (unsigned int)-1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = ival; - break; - } - case 'n': /* Py_ssize_t */ -#if SIZEOF_SIZE_T != SIZEOF_LONG - { - Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *); - Py_ssize_t ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsSsize_t(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - *p = ival; - break; - } -#endif - /* Fall through from 'n' to 'l' if Py_ssize_t is int */ - case 'l': {/* long int */ - long *p = va_arg(*p_va, long *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = ival; - break; - } - - case 'k': { /* long sized bitfield */ - unsigned long *p = va_arg(*p_va, unsigned long *); - unsigned long ival; - if (PyInt_Check(arg)) - ival = PyInt_AsUnsignedLongMask(arg); - else if (PyLong_Check(arg)) - ival = PyLong_AsUnsignedLongMask(arg); - else - return converterr("integer", arg, msgbuf, bufsize); - *p = ival; - break; - } - -#ifdef HAVE_LONG_LONG - case 'L': {/* PY_LONG_LONG */ - PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * ); - PY_LONG_LONG ival = PyLong_AsLongLong( arg ); - if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) { - return converterr("long", arg, msgbuf, bufsize); - } else { - *p = ival; - } - break; - } - - case 'K': { /* long long sized bitfield */ - unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *); - unsigned PY_LONG_LONG ival; 
- if (PyInt_Check(arg)) - ival = PyInt_AsUnsignedLongMask(arg); - else if (PyLong_Check(arg)) - ival = PyLong_AsUnsignedLongLongMask(arg); - else - return converterr("integer", arg, msgbuf, bufsize); - *p = ival; - break; - } -#endif // HAVE_LONG_LONG - - case 'f': {/* float */ - float *p = va_arg(*p_va, float *); - double dval = PyFloat_AsDouble(arg); - if (PyErr_Occurred()) - return converterr("float", arg, msgbuf, bufsize); - else - *p = (float) dval; - break; - } - - case 'd': {/* double */ - double *p = va_arg(*p_va, double *); - double dval = PyFloat_AsDouble(arg); - if (PyErr_Occurred()) - return converterr("float", arg, msgbuf, bufsize); - else - *p = dval; - break; - } - -#ifndef WITHOUT_COMPLEX - case 'D': {/* complex double */ - Py_complex *p = va_arg(*p_va, Py_complex *); - Py_complex cval; - cval = PyComplex_AsCComplex(arg); - if (PyErr_Occurred()) - return converterr("complex", arg, msgbuf, bufsize); - else - *p = cval; - break; - } -#endif /* WITHOUT_COMPLEX */ - - case 'c': {/* char */ - char *p = va_arg(*p_va, char *); - if (PyString_Check(arg) && PyString_Size(arg) == 1) - *p = PyString_AS_STRING(arg)[0]; - else - return converterr("char", arg, msgbuf, bufsize); - break; - } - case 's': {/* string */ - if (*format == '*') { - Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *); - - if (PyString_Check(arg)) { - fflush(stdout); - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(arg), PyString_GET_SIZE(arg), - 1, 0); - } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { -#if 0 - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg), - 1, 0); -#else - return converterr("string or buffer", arg, msgbuf, bufsize); -#endif - } -#endif - else { /* any buffer-like object */ - char *buf; - if (getbuffer(arg, p, &buf) < 0) - return converterr(buf, arg, msgbuf, bufsize); - } - if (addcleanup(p, freelist, 
cleanup_buffer)) { - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - format++; - } else if (*format == '#') { - void **p = (void **)va_arg(*p_va, char **); - FETCH_SIZE; - - if (PyString_Check(arg)) { - *p = PyString_AS_STRING(arg); - STORE_SIZE(PyString_GET_SIZE(arg)); - } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - STORE_SIZE(PyString_GET_SIZE(uarg)); - } -#endif - else { /* any buffer-like object */ - char *buf; - Py_ssize_t count = convertbuffer(arg, p, &buf); - if (count < 0) - return converterr(buf, arg, msgbuf, bufsize); - STORE_SIZE(count); - } - format++; - } else { - char **p = va_arg(*p_va, char **); - - if (PyString_Check(arg)) - *p = PyString_AS_STRING(arg); -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - } -#endif - else - return converterr("string", arg, msgbuf, bufsize); - if ((Py_ssize_t)strlen(*p) != PyString_Size(arg)) - return converterr("string without null bytes", - arg, msgbuf, bufsize); - } - break; - } - - case 'z': {/* string, may be NULL (None) */ - if (*format == '*') { - Py_FatalError("'*' format not supported in PyArg_*\n"); -#if 0 - Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *); - - if (arg == Py_None) - PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0); - else if (PyString_Check(arg)) { - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(arg), PyString_GET_SIZE(arg), - 1, 0); - } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - PyBuffer_FillInfo(p, arg, - PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg), - 1, 0); - } -#endif - else { /* any 
buffer-like object */ - char *buf; - if (getbuffer(arg, p, &buf) < 0) - return converterr(buf, arg, msgbuf, bufsize); - } - if (addcleanup(p, freelist, cleanup_buffer)) { - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - format++; -#endif - } else if (*format == '#') { /* any buffer-like object */ - void **p = (void **)va_arg(*p_va, char **); - FETCH_SIZE; - - if (arg == Py_None) { - *p = 0; - STORE_SIZE(0); - } - else if (PyString_Check(arg)) { - *p = PyString_AS_STRING(arg); - STORE_SIZE(PyString_GET_SIZE(arg)); - } -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - STORE_SIZE(PyString_GET_SIZE(uarg)); - } -#endif - else { /* any buffer-like object */ - char *buf; - Py_ssize_t count = convertbuffer(arg, p, &buf); - if (count < 0) - return converterr(buf, arg, msgbuf, bufsize); - STORE_SIZE(count); - } - format++; - } else { - char **p = va_arg(*p_va, char **); - - if (arg == Py_None) - *p = 0; - else if (PyString_Check(arg)) - *p = PyString_AS_STRING(arg); -#ifdef Py_USING_UNICODE - else if (PyUnicode_Check(arg)) { - uarg = UNICODE_DEFAULT_ENCODING(arg); - if (uarg == NULL) - return converterr(CONV_UNICODE, - arg, msgbuf, bufsize); - *p = PyString_AS_STRING(uarg); - } -#endif - else - return converterr("string or None", - arg, msgbuf, bufsize); - if (*format == '#') { - FETCH_SIZE; - assert(0); /* XXX redundant with if-case */ - if (arg == Py_None) - *q = 0; - else - *q = PyString_Size(arg); - format++; - } - else if (*p != NULL && - (Py_ssize_t)strlen(*p) != PyString_Size(arg)) - return converterr( - "string without null bytes or None", - arg, msgbuf, bufsize); - } - break; - } - case 'e': {/* encoded string */ - char **buffer; - const char *encoding; - PyObject *s; - Py_ssize_t size; - int recode_strings; - - /* Get 'e' parameter: the encoding name */ - encoding = (const char 
*)va_arg(*p_va, const char *); -#ifdef Py_USING_UNICODE - if (encoding == NULL) - encoding = PyUnicode_GetDefaultEncoding(); + PyObject *uarg; #endif - /* Get output buffer parameter: - 's' (recode all objects via Unicode) or - 't' (only recode non-string objects) - */ - if (*format == 's') - recode_strings = 1; - else if (*format == 't') - recode_strings = 0; - else - return converterr( - "(unknown parser marker combination)", - arg, msgbuf, bufsize); - buffer = (char **)va_arg(*p_va, char **); - format++; - if (buffer == NULL) - return converterr("(buffer is NULL)", - arg, msgbuf, bufsize); - - /* Encode object */ - if (!recode_strings && PyString_Check(arg)) { - s = arg; - Py_INCREF(s); - } - else { + switch (c) { + + case 'b': { /* unsigned byte -- very short int */ + char *p = va_arg(*p_va, char *); + long ival; + if (float_argument_error(arg)) + return converterr("integer", arg, msgbuf, bufsize); + ival = PyInt_AsLong(arg); + if (ival == -1 && PyErr_Occurred()) + return converterr("integer", arg, msgbuf, bufsize); + else if (ival < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned byte integer is less than minimum"); + return converterr("integer", arg, msgbuf, bufsize); + } + else if (ival > UCHAR_MAX) { + PyErr_SetString(PyExc_OverflowError, + "unsigned byte integer is greater than maximum"); + return converterr("integer", arg, msgbuf, bufsize); + } + else + *p = (unsigned char) ival; + break; + } + + case 'B': {/* byte sized bitfield - both signed and unsigned + values allowed */ + char *p = va_arg(*p_va, char *); + long ival; + if (float_argument_error(arg)) + return converterr("integer", arg, msgbuf, bufsize); + ival = PyInt_AsUnsignedLongMask(arg); + if (ival == -1 && PyErr_Occurred()) + return converterr("integer", arg, msgbuf, bufsize); + else + *p = (unsigned char) ival; + break; + } + + case 'h': {/* signed short int */ + short *p = va_arg(*p_va, short *); + long ival; + if (float_argument_error(arg)) + return converterr("integer", arg, msgbuf, 
bufsize); + ival = PyInt_AsLong(arg); + if (ival == -1 && PyErr_Occurred()) + return converterr("integer", arg, msgbuf, bufsize); + else if (ival < SHRT_MIN) { + PyErr_SetString(PyExc_OverflowError, + "signed short integer is less than minimum"); + return converterr("integer", arg, msgbuf, bufsize); + } + else if (ival > SHRT_MAX) { + PyErr_SetString(PyExc_OverflowError, + "signed short integer is greater than maximum"); + return converterr("integer", arg, msgbuf, bufsize); + } + else + *p = (short) ival; + break; + } + + case 'H': { /* short int sized bitfield, both signed and + unsigned allowed */ + unsigned short *p = va_arg(*p_va, unsigned short *); + long ival; + if (float_argument_error(arg)) + return converterr("integer", arg, msgbuf, bufsize); + ival = PyInt_AsUnsignedLongMask(arg); + if (ival == -1 && PyErr_Occurred()) + return converterr("integer", arg, msgbuf, bufsize); + else + *p = (unsigned short) ival; + break; + } + + case 'i': {/* signed int */ + int *p = va_arg(*p_va, int *); + long ival; + if (float_argument_error(arg)) + return converterr("integer", arg, msgbuf, bufsize); + ival = PyInt_AsLong(arg); + if (ival == -1 && PyErr_Occurred()) + return converterr("integer", arg, msgbuf, bufsize); + else if (ival > INT_MAX) { + PyErr_SetString(PyExc_OverflowError, + "signed integer is greater than maximum"); + return converterr("integer", arg, msgbuf, bufsize); + } + else if (ival < INT_MIN) { + PyErr_SetString(PyExc_OverflowError, + "signed integer is less than minimum"); + return converterr("integer", arg, msgbuf, bufsize); + } + else + *p = ival; + break; + } + + case 'I': { /* int sized bitfield, both signed and + unsigned allowed */ + unsigned int *p = va_arg(*p_va, unsigned int *); + unsigned int ival; + if (float_argument_error(arg)) + return converterr("integer", arg, msgbuf, bufsize); + ival = (unsigned int)PyInt_AsUnsignedLongMask(arg); + if (ival == (unsigned int)-1 && PyErr_Occurred()) + return converterr("integer", arg, msgbuf, bufsize); + 
else + *p = ival; + break; + } + + case 'n': /* Py_ssize_t */ +#if SIZEOF_SIZE_T != SIZEOF_LONG + { + Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *); + Py_ssize_t ival; + if (float_argument_error(arg)) + return converterr("integer", arg, msgbuf, bufsize); + ival = PyInt_AsSsize_t(arg); + if (ival == -1 && PyErr_Occurred()) + return converterr("integer", arg, msgbuf, bufsize); + *p = ival; + break; + } +#endif + /* Fall through from 'n' to 'l' if Py_ssize_t is int */ + case 'l': {/* long int */ + long *p = va_arg(*p_va, long *); + long ival; + if (float_argument_error(arg)) + return converterr("integer", arg, msgbuf, bufsize); + ival = PyInt_AsLong(arg); + if (ival == -1 && PyErr_Occurred()) + return converterr("integer", arg, msgbuf, bufsize); + else + *p = ival; + break; + } + + case 'k': { /* long sized bitfield */ + unsigned long *p = va_arg(*p_va, unsigned long *); + unsigned long ival; + if (PyInt_Check(arg)) + ival = PyInt_AsUnsignedLongMask(arg); + else if (PyLong_Check(arg)) + ival = PyLong_AsUnsignedLongMask(arg); + else + return converterr("integer", arg, msgbuf, bufsize); + *p = ival; + break; + } + +#ifdef HAVE_LONG_LONG + case 'L': {/* PY_LONG_LONG */ + PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * ); + PY_LONG_LONG ival; + if (float_argument_warning(arg)) + return converterr("long", arg, msgbuf, bufsize); + ival = PyLong_AsLongLong(arg); + if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) { + return converterr("long", arg, msgbuf, bufsize); + } else { + *p = ival; + } + break; + } + + case 'K': { /* long long sized bitfield */ + unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *); + unsigned PY_LONG_LONG ival; + if (PyInt_Check(arg)) + ival = PyInt_AsUnsignedLongMask(arg); + else if (PyLong_Check(arg)) + ival = PyLong_AsUnsignedLongLongMask(arg); + else + return converterr("integer", arg, msgbuf, bufsize); + *p = ival; + break; + } +#endif + + case 'f': {/* float */ + float *p = va_arg(*p_va, float *); + double dval = 
PyFloat_AsDouble(arg); + if (PyErr_Occurred()) + return converterr("float", arg, msgbuf, bufsize); + else + *p = (float) dval; + break; + } + + case 'd': {/* double */ + double *p = va_arg(*p_va, double *); + double dval = PyFloat_AsDouble(arg); + if (PyErr_Occurred()) + return converterr("float", arg, msgbuf, bufsize); + else + *p = dval; + break; + } + +#ifndef WITHOUT_COMPLEX + case 'D': {/* complex double */ + Py_complex *p = va_arg(*p_va, Py_complex *); + Py_complex cval; + cval = PyComplex_AsCComplex(arg); + if (PyErr_Occurred()) + return converterr("complex", arg, msgbuf, bufsize); + else + *p = cval; + break; + } +#endif /* WITHOUT_COMPLEX */ + + case 'c': {/* char */ + char *p = va_arg(*p_va, char *); + if (PyString_Check(arg) && PyString_Size(arg) == 1) + *p = PyString_AS_STRING(arg)[0]; + else + return converterr("char", arg, msgbuf, bufsize); + break; + } + + case 's': {/* string */ + if (*format == '*') { + Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *); + + if (PyString_Check(arg)) { + PyBuffer_FillInfo(p, arg, + PyString_AS_STRING(arg), PyString_GET_SIZE(arg), + 1, 0); + } #ifdef Py_USING_UNICODE - PyObject *u; + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + PyBuffer_FillInfo(p, arg, + PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg), + 1, 0); + } +#endif + else { /* any buffer-like object */ + char *buf; + if (getbuffer(arg, p, &buf) < 0) + return converterr(buf, arg, msgbuf, bufsize); + } + if (addcleanup(p, freelist, cleanup_buffer)) { + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + format++; + } else if (*format == '#') { + void **p = (void **)va_arg(*p_va, char **); + FETCH_SIZE; - /* Convert object to Unicode */ - u = PyUnicode_FromObject(arg); - if (u == NULL) - return converterr( - "string or unicode or text buffer", - arg, msgbuf, bufsize); - - /* Encode object; use default error handling */ - s = 
PyUnicode_AsEncodedString(u, - encoding, - NULL); - Py_DECREF(u); - if (s == NULL) - return converterr("(encoding failed)", - arg, msgbuf, bufsize); - if (!PyString_Check(s)) { - Py_DECREF(s); - return converterr( - "(encoder failed to return a string)", - arg, msgbuf, bufsize); - } + if (PyString_Check(arg)) { + *p = PyString_AS_STRING(arg); + STORE_SIZE(PyString_GET_SIZE(arg)); + } +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyString_AS_STRING(uarg); + STORE_SIZE(PyString_GET_SIZE(uarg)); + } +#endif + else { /* any buffer-like object */ + char *buf; + Py_ssize_t count = convertbuffer(arg, p, &buf); + if (count < 0) + return converterr(buf, arg, msgbuf, bufsize); + STORE_SIZE(count); + } + format++; + } else { + char **p = va_arg(*p_va, char **); + + if (PyString_Check(arg)) + *p = PyString_AS_STRING(arg); +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyString_AS_STRING(uarg); + } +#endif + else + return converterr("string", arg, msgbuf, bufsize); + if ((Py_ssize_t)strlen(*p) != PyString_Size(arg)) + return converterr("string without null bytes", + arg, msgbuf, bufsize); + } + break; + } + + case 'z': {/* string, may be NULL (None) */ + if (*format == '*') { + Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *); + + if (arg == Py_None) + PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0); + else if (PyString_Check(arg)) { + PyBuffer_FillInfo(p, arg, + PyString_AS_STRING(arg), PyString_GET_SIZE(arg), + 1, 0); + } +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + PyBuffer_FillInfo(p, arg, + PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg), + 1, 0); + } +#endif + 
else { /* any buffer-like object */ + char *buf; + if (getbuffer(arg, p, &buf) < 0) + return converterr(buf, arg, msgbuf, bufsize); + } + if (addcleanup(p, freelist, cleanup_buffer)) { + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + format++; + } else if (*format == '#') { /* any buffer-like object */ + void **p = (void **)va_arg(*p_va, char **); + FETCH_SIZE; + + if (arg == Py_None) { + *p = 0; + STORE_SIZE(0); + } + else if (PyString_Check(arg)) { + *p = PyString_AS_STRING(arg); + STORE_SIZE(PyString_GET_SIZE(arg)); + } +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyString_AS_STRING(uarg); + STORE_SIZE(PyString_GET_SIZE(uarg)); + } +#endif + else { /* any buffer-like object */ + char *buf; + Py_ssize_t count = convertbuffer(arg, p, &buf); + if (count < 0) + return converterr(buf, arg, msgbuf, bufsize); + STORE_SIZE(count); + } + format++; + } else { + char **p = va_arg(*p_va, char **); + + if (arg == Py_None) + *p = 0; + else if (PyString_Check(arg)) + *p = PyString_AS_STRING(arg); +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyString_AS_STRING(uarg); + } +#endif + else + return converterr("string or None", + arg, msgbuf, bufsize); + if (*format == '#') { + FETCH_SIZE; + assert(0); /* XXX redundant with if-case */ + if (arg == Py_None) { + STORE_SIZE(0); + } else { + STORE_SIZE(PyString_Size(arg)); + } + format++; + } + else if (*p != NULL && + (Py_ssize_t)strlen(*p) != PyString_Size(arg)) + return converterr( + "string without null bytes or None", + arg, msgbuf, bufsize); + } + break; + } + + case 'e': {/* encoded string */ + char **buffer; + const char *encoding; + PyObject *s; + Py_ssize_t size; + int recode_strings; + + /* Get 'e' parameter: the encoding 
name */ + encoding = (const char *)va_arg(*p_va, const char *); +#ifdef Py_USING_UNICODE + if (encoding == NULL) + encoding = PyUnicode_GetDefaultEncoding(); +#endif + + /* Get output buffer parameter: + 's' (recode all objects via Unicode) or + 't' (only recode non-string objects) + */ + if (*format == 's') + recode_strings = 1; + else if (*format == 't') + recode_strings = 0; + else + return converterr( + "(unknown parser marker combination)", + arg, msgbuf, bufsize); + buffer = (char **)va_arg(*p_va, char **); + format++; + if (buffer == NULL) + return converterr("(buffer is NULL)", + arg, msgbuf, bufsize); + + /* Encode object */ + if (!recode_strings && PyString_Check(arg)) { + s = arg; + Py_INCREF(s); + } + else { +#ifdef Py_USING_UNICODE + PyObject *u; + + /* Convert object to Unicode */ + u = PyUnicode_FromObject(arg); + if (u == NULL) + return converterr( + "string or unicode or text buffer", + arg, msgbuf, bufsize); + + /* Encode object; use default error handling */ + s = PyUnicode_AsEncodedString(u, + encoding, + NULL); + Py_DECREF(u); + if (s == NULL) + return converterr("(encoding failed)", + arg, msgbuf, bufsize); + if (!PyString_Check(s)) { + Py_DECREF(s); + return converterr( + "(encoder failed to return a string)", + arg, msgbuf, bufsize); + } #else - return converterr("string", arg, msgbuf, bufsize); + return converterr("string", arg, msgbuf, bufsize); #endif - } - size = PyString_GET_SIZE(s); + } + size = PyString_GET_SIZE(s); - /* Write output; output is guaranteed to be 0-terminated */ - if (*format == '#') { - /* Using buffer length parameter '#': - - - if *buffer is NULL, a new buffer of the - needed size is allocated and the data - copied into it; *buffer is updated to point - to the new buffer; the caller is - responsible for PyMem_Free()ing it after - usage + /* Write output; output is guaranteed to be 0-terminated */ + if (*format == '#') { + /* Using buffer length parameter '#': - - if *buffer is not NULL, the data is - copied to 
*buffer; *buffer_len has to be - set to the size of the buffer on input; - buffer overflow is signalled with an error; - buffer has to provide enough room for the - encoded string plus the trailing 0-byte - - - in both cases, *buffer_len is updated to - the size of the buffer /excluding/ the - trailing 0-byte - - */ - FETCH_SIZE; + - if *buffer is NULL, a new buffer of the + needed size is allocated and the data + copied into it; *buffer is updated to point + to the new buffer; the caller is + responsible for PyMem_Free()ing it after + usage - format++; - if (q == NULL && q2 == NULL) { - Py_DECREF(s); - return converterr( - "(buffer_len is NULL)", - arg, msgbuf, bufsize); - } - if (*buffer == NULL) { - *buffer = PyMem_NEW(char, size + 1); - if (*buffer == NULL) { - Py_DECREF(s); - return converterr( - "(memory error)", - arg, msgbuf, bufsize); - } - if (addcleanup(*buffer, freelist, cleanup_ptr)) { - Py_DECREF(s); - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - } else { - if (size + 1 > BUFFER_LEN) { - Py_DECREF(s); - return converterr( - "(buffer overflow)", - arg, msgbuf, bufsize); - } - } - memcpy(*buffer, - PyString_AS_STRING(s), - size + 1); - STORE_SIZE(size); - } else { - /* Using a 0-terminated buffer: - - - the encoded string has to be 0-terminated - for this variant to work; if it is not, an - error raised + - if *buffer is not NULL, the data is + copied to *buffer; *buffer_len has to be + set to the size of the buffer on input; + buffer overflow is signalled with an error; + buffer has to provide enough room for the + encoded string plus the trailing 0-byte - - a new buffer of the needed size is - allocated and the data copied into it; - *buffer is updated to point to the new - buffer; the caller is responsible for - PyMem_Free()ing it after usage + - in both cases, *buffer_len is updated to + the size of the buffer /excluding/ the + trailing 0-byte - */ - if ((Py_ssize_t)strlen(PyString_AS_STRING(s)) - != size) { - 
Py_DECREF(s); - return converterr( - "encoded string without NULL bytes", - arg, msgbuf, bufsize); - } - *buffer = PyMem_NEW(char, size + 1); - if (*buffer == NULL) { - Py_DECREF(s); - return converterr("(memory error)", - arg, msgbuf, bufsize); - } - if (addcleanup(*buffer, freelist, cleanup_ptr)) { - Py_DECREF(s); - return converterr("(cleanup problem)", - arg, msgbuf, bufsize); - } - memcpy(*buffer, - PyString_AS_STRING(s), - size + 1); - } - Py_DECREF(s); - break; - } + */ + FETCH_SIZE; + + format++; + if (q == NULL && q2 == NULL) { + Py_DECREF(s); + return converterr( + "(buffer_len is NULL)", + arg, msgbuf, bufsize); + } + if (*buffer == NULL) { + *buffer = PyMem_NEW(char, size + 1); + if (*buffer == NULL) { + Py_DECREF(s); + return converterr( + "(memory error)", + arg, msgbuf, bufsize); + } + if (addcleanup(*buffer, freelist, cleanup_ptr)) { + Py_DECREF(s); + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + } else { + if (size + 1 > BUFFER_LEN) { + Py_DECREF(s); + return converterr( + "(buffer overflow)", + arg, msgbuf, bufsize); + } + } + memcpy(*buffer, + PyString_AS_STRING(s), + size + 1); + STORE_SIZE(size); + } else { + /* Using a 0-terminated buffer: + + - the encoded string has to be 0-terminated + for this variant to work; if it is not, an + error raised + + - a new buffer of the needed size is + allocated and the data copied into it; + *buffer is updated to point to the new + buffer; the caller is responsible for + PyMem_Free()ing it after usage + + */ + if ((Py_ssize_t)strlen(PyString_AS_STRING(s)) + != size) { + Py_DECREF(s); + return converterr( + "encoded string without NULL bytes", + arg, msgbuf, bufsize); + } + *buffer = PyMem_NEW(char, size + 1); + if (*buffer == NULL) { + Py_DECREF(s); + return converterr("(memory error)", + arg, msgbuf, bufsize); + } + if (addcleanup(*buffer, freelist, cleanup_ptr)) { + Py_DECREF(s); + return converterr("(cleanup problem)", + arg, msgbuf, bufsize); + } + memcpy(*buffer, + 
PyString_AS_STRING(s), + size + 1); + } + Py_DECREF(s); + break; + } #ifdef Py_USING_UNICODE - case 'u': {/* raw unicode buffer (Py_UNICODE *) */ - if (*format == '#') { /* any buffer-like object */ - void **p = (void **)va_arg(*p_va, char **); - FETCH_SIZE; - if (PyUnicode_Check(arg)) { - *p = PyUnicode_AS_UNICODE(arg); - STORE_SIZE(PyUnicode_GET_SIZE(arg)); - } - else { - return converterr("cannot convert raw buffers", - arg, msgbuf, bufsize); - } - format++; - } else { - Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **); - if (PyUnicode_Check(arg)) - *p = PyUnicode_AS_UNICODE(arg); - else - return converterr("unicode", arg, msgbuf, bufsize); - } - break; - } + case 'u': {/* raw unicode buffer (Py_UNICODE *) */ + if (*format == '#') { /* any buffer-like object */ + void **p = (void **)va_arg(*p_va, char **); + FETCH_SIZE; + if (PyUnicode_Check(arg)) { + *p = PyUnicode_AS_UNICODE(arg); + STORE_SIZE(PyUnicode_GET_SIZE(arg)); + } + else { + return converterr("cannot convert raw buffers", + arg, msgbuf, bufsize); + } + format++; + } else { + Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **); + if (PyUnicode_Check(arg)) + *p = PyUnicode_AS_UNICODE(arg); + else + return converterr("unicode", arg, msgbuf, bufsize); + } + break; + } #endif - case 'S': { /* string object */ - PyObject **p = va_arg(*p_va, PyObject **); - if (PyString_Check(arg)) - *p = arg; - else - return converterr("string", arg, msgbuf, bufsize); - break; - } - + case 'S': { /* string object */ + PyObject **p = va_arg(*p_va, PyObject **); + if (PyString_Check(arg)) + *p = arg; + else + return converterr("string", arg, msgbuf, bufsize); + break; + } + #ifdef Py_USING_UNICODE - case 'U': { /* Unicode object */ - PyObject **p = va_arg(*p_va, PyObject **); - if (PyUnicode_Check(arg)) - *p = arg; - else - return converterr("unicode", arg, msgbuf, bufsize); - break; - } + case 'U': { /* Unicode object */ + PyObject **p = va_arg(*p_va, PyObject **); + if (PyUnicode_Check(arg)) + *p = arg; + else + return 
converterr("unicode", arg, msgbuf, bufsize); + break; + } #endif - case 'O': { /* object */ - PyTypeObject *type; - PyObject **p; - if (*format == '!') { - type = va_arg(*p_va, PyTypeObject*); - p = va_arg(*p_va, PyObject **); - format++; - if (PyType_IsSubtype(arg->ob_type, type)) - *p = arg; - else - return converterr(type->tp_name, arg, msgbuf, bufsize); - } - else if (*format == '?') { - inquiry pred = va_arg(*p_va, inquiry); - p = va_arg(*p_va, PyObject **); - format++; - if ((*pred)(arg)) - *p = arg; - else - return converterr("(unspecified)", - arg, msgbuf, bufsize); - - } - else if (*format == '&') { - typedef int (*converter)(PyObject *, void *); - converter convert = va_arg(*p_va, converter); - void *addr = va_arg(*p_va, void *); - format++; - if (! (*convert)(arg, addr)) - return converterr("(unspecified)", - arg, msgbuf, bufsize); - } - else { - p = va_arg(*p_va, PyObject **); - *p = arg; - } - break; - } - - case 'w': { /* memory buffer, read-write access */ - Py_FatalError("'w' unsupported\n"); -#if 0 - void **p = va_arg(*p_va, void **); - void *res; - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - Py_ssize_t count; + case 'O': { /* object */ + PyTypeObject *type; + PyObject **p; + if (*format == '!') { + type = va_arg(*p_va, PyTypeObject*); + p = va_arg(*p_va, PyObject **); + format++; + if (PyType_IsSubtype(arg->ob_type, type)) + *p = arg; + else + return converterr(type->tp_name, arg, msgbuf, bufsize); - if (pb && pb->bf_releasebuffer && *format != '*') - /* Buffer must be released, yet caller does not use - the Py_buffer protocol. */ - return converterr("pinned buffer", arg, msgbuf, bufsize); + } + else if (*format == '?') { + inquiry pred = va_arg(*p_va, inquiry); + p = va_arg(*p_va, PyObject **); + format++; + if ((*pred)(arg)) + *p = arg; + else + return converterr("(unspecified)", + arg, msgbuf, bufsize); - if (pb && pb->bf_getbuffer && *format == '*') { - /* Caller is interested in Py_buffer, and the object - supports it directly. 
*/ - format++; - if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) { - PyErr_Clear(); - return converterr("read-write buffer", arg, msgbuf, bufsize); - } - if (addcleanup(p, freelist, cleanup_buffer)) { - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C')) - return converterr("contiguous buffer", arg, msgbuf, bufsize); - break; - } + } + else if (*format == '&') { + typedef int (*converter)(PyObject *, void *); + converter convert = va_arg(*p_va, converter); + void *addr = va_arg(*p_va, void *); + format++; + if (! (*convert)(arg, addr)) + return converterr("(unspecified)", + arg, msgbuf, bufsize); + } + else { + p = va_arg(*p_va, PyObject **); + *p = arg; + } + break; + } - if (pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL) - return converterr("read-write buffer", arg, msgbuf, bufsize); - if ((*pb->bf_getsegcount)(arg, NULL) != 1) - return converterr("single-segment read-write buffer", - arg, msgbuf, bufsize); - if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0) - return converterr("(unspecified)", arg, msgbuf, bufsize); - if (*format == '*') { - PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0); - format++; - } - else { - *p = res; - if (*format == '#') { - FETCH_SIZE; - STORE_SIZE(count); - format++; - } - } - break; -#endif - } - - case 't': { /* 8-bit character buffer, read-only access */ - char **p = va_arg(*p_va, char **); - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - Py_ssize_t count; -#if 0 - if (*format++ != '#') - return converterr( - "invalid use of 't' format character", - arg, msgbuf, bufsize); -#endif - if (!PyType_HasFeature(arg->ob_type, - Py_TPFLAGS_HAVE_GETCHARBUFFER) -#if 0 - || pb == NULL || pb->bf_getcharbuffer == NULL || - pb->bf_getsegcount == NULL -#endif - ) - return converterr( - "string or read-only character buffer", - arg, msgbuf, bufsize); -#if 0 - if (pb->bf_getsegcount(arg, NULL) != 1) - return converterr( 
- "string or single-segment read-only buffer", - arg, msgbuf, bufsize); + case 'w': { /* memory buffer, read-write access */ + void **p = va_arg(*p_va, void **); + void *res; + PyBufferProcs *pb = arg->ob_type->tp_as_buffer; + Py_ssize_t count; - if (pb->bf_releasebuffer) - return converterr( - "string or pinned buffer", - arg, msgbuf, bufsize); -#endif - count = pb->bf_getcharbuffer(arg, 0, p); -#if 0 - if (count < 0) - return converterr("(unspecified)", arg, msgbuf, bufsize); -#endif - { - FETCH_SIZE; - STORE_SIZE(count); - ++format; - } - break; - } - default: - return converterr("impossible", arg, msgbuf, bufsize); - - } - - *p_format = format; - return NULL; + if (pb && pb->bf_releasebuffer && *format != '*') + /* Buffer must be released, yet caller does not use + the Py_buffer protocol. */ + return converterr("pinned buffer", arg, msgbuf, bufsize); + + if (pb && pb->bf_getbuffer && *format == '*') { + /* Caller is interested in Py_buffer, and the object + supports it directly. */ + format++; + if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) { + PyErr_Clear(); + return converterr("read-write buffer", arg, msgbuf, bufsize); + } + if (addcleanup(p, freelist, cleanup_buffer)) { + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C')) + return converterr("contiguous buffer", arg, msgbuf, bufsize); + break; + } + + if (pb == NULL || + pb->bf_getwritebuffer == NULL || + pb->bf_getsegcount == NULL) + return converterr("read-write buffer", arg, msgbuf, bufsize); + if ((*pb->bf_getsegcount)(arg, NULL) != 1) + return converterr("single-segment read-write buffer", + arg, msgbuf, bufsize); + if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0) + return converterr("(unspecified)", arg, msgbuf, bufsize); + if (*format == '*') { + PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0); + format++; + } + else { + *p = res; + if (*format == '#') { + FETCH_SIZE; + STORE_SIZE(count); + format++; + 
} + } + break; + } + + case 't': { /* 8-bit character buffer, read-only access */ + char **p = va_arg(*p_va, char **); + PyBufferProcs *pb = arg->ob_type->tp_as_buffer; + Py_ssize_t count; + + if (*format++ != '#') + return converterr( + "invalid use of 't' format character", + arg, msgbuf, bufsize); + if (!PyType_HasFeature(arg->ob_type, + Py_TPFLAGS_HAVE_GETCHARBUFFER) || + pb == NULL || pb->bf_getcharbuffer == NULL || + pb->bf_getsegcount == NULL) + return converterr( + "string or read-only character buffer", + arg, msgbuf, bufsize); + + if (pb->bf_getsegcount(arg, NULL) != 1) + return converterr( + "string or single-segment read-only buffer", + arg, msgbuf, bufsize); + + if (pb->bf_releasebuffer) + return converterr( + "string or pinned buffer", + arg, msgbuf, bufsize); + + count = pb->bf_getcharbuffer(arg, 0, p); + if (count < 0) + return converterr("(unspecified)", arg, msgbuf, bufsize); + { + FETCH_SIZE; + STORE_SIZE(count); + } + break; + } + + default: + return converterr("impossible", arg, msgbuf, bufsize); + + } + + *p_format = format; + return NULL; } static Py_ssize_t convertbuffer(PyObject *arg, void **p, char **errmsg) { - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - Py_ssize_t count; - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL || - pb->bf_releasebuffer != NULL) { - *errmsg = "string or read-only buffer"; - return -1; - } - if ((*pb->bf_getsegcount)(arg, NULL) != 1) { - *errmsg = "string or single-segment read-only buffer"; - return -1; - } - if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) { - *errmsg = "(unspecified)"; - } - return count; + PyBufferProcs *pb = arg->ob_type->tp_as_buffer; + Py_ssize_t count; + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL || + pb->bf_releasebuffer != NULL) { + *errmsg = "string or read-only buffer"; + return -1; + } + if ((*pb->bf_getsegcount)(arg, NULL) != 1) { + *errmsg = "string or single-segment read-only buffer"; + return -1; + 
} + if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) { + *errmsg = "(unspecified)"; + } + return count; } static int getbuffer(PyObject *arg, Py_buffer *view, char **errmsg) { - void *buf; - Py_ssize_t count; - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - if (pb == NULL) { - *errmsg = "string or buffer"; - return -1; - } - if (pb->bf_getbuffer) { - if (pb->bf_getbuffer(arg, view, 0) < 0) { - *errmsg = "convertible to a buffer"; - return -1; - } - if (!PyBuffer_IsContiguous(view, 'C')) { - *errmsg = "contiguous buffer"; - return -1; - } - return 0; - } + void *buf; + Py_ssize_t count; + PyBufferProcs *pb = arg->ob_type->tp_as_buffer; + if (pb == NULL) { + *errmsg = "string or buffer"; + return -1; + } + if (pb->bf_getbuffer) { + if (pb->bf_getbuffer(arg, view, 0) < 0) { + *errmsg = "convertible to a buffer"; + return -1; + } + if (!PyBuffer_IsContiguous(view, 'C')) { + *errmsg = "contiguous buffer"; + return -1; + } + return 0; + } - count = convertbuffer(arg, &buf, errmsg); - if (count < 0) { - *errmsg = "convertible to a buffer"; - return count; - } - PyBuffer_FillInfo(view, NULL, buf, count, 1, 0); - return 0; + count = convertbuffer(arg, &buf, errmsg); + if (count < 0) { + *errmsg = "convertible to a buffer"; + return count; + } + PyBuffer_FillInfo(view, arg, buf, count, 1, 0); + return 0; } /* Support for keyword arguments donated by @@ -1395,501 +1410,487 @@ /* Return false (0) for error, else true. */ int PyArg_ParseTupleAndKeywords(PyObject *args, - PyObject *keywords, - const char *format, - char **kwlist, ...) + PyObject *keywords, + const char *format, + char **kwlist, ...) 
{ - int retval; - va_list va; + int retval; + va_list va; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } - va_start(va, kwlist); - retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0); - va_end(va); - return retval; + va_start(va, kwlist); + retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0); + va_end(va); + return retval; } int _PyArg_ParseTupleAndKeywords_SizeT(PyObject *args, - PyObject *keywords, - const char *format, - char **kwlist, ...) + PyObject *keywords, + const char *format, + char **kwlist, ...) { - int retval; - va_list va; + int retval; + va_list va; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } - va_start(va, kwlist); - retval = vgetargskeywords(args, keywords, format, - kwlist, &va, FLAG_SIZE_T); - va_end(va); - return retval; + va_start(va, kwlist); + retval = vgetargskeywords(args, keywords, format, + kwlist, &va, FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_VaParseTupleAndKeywords(PyObject *args, PyObject *keywords, - const char *format, + const char *format, char **kwlist, va_list va) { - int retval; - va_list lva; + int retval; + va_list lva; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + 
(keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0); - return retval; + retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0); + return retval; } int _PyArg_VaParseTupleAndKeywords_SizeT(PyObject *args, - PyObject *keywords, - const char *format, - char **kwlist, va_list va) + PyObject *keywords, + const char *format, + char **kwlist, va_list va) { - int retval; - va_list lva; + int retval; + va_list lva; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - retval = vgetargskeywords(args, keywords, format, - kwlist, &lva, FLAG_SIZE_T); - return retval; + retval = vgetargskeywords(args, keywords, format, + kwlist, &lva, FLAG_SIZE_T); + return retval; } #define IS_END_OF_FORMAT(c) (c == '\0' || c == ';' || c == ':') static int vgetargskeywords(PyObject *args, PyObject *keywords, const char *format, - char **kwlist, va_list *p_va, int flags) + char **kwlist, va_list *p_va, int flags) { - char msgbuf[512]; - int levels[32]; - const char *fname, *msg, *custom_msg, *keyword; - int min = INT_MAX; - int i, len, nargs, nkeywords; - PyObject *current_arg; - freelist_t freelist = {0, NULL}; + char 
msgbuf[512]; + int levels[32]; + const char *fname, *msg, *custom_msg, *keyword; + int min = INT_MAX; + int i, len, nargs, nkeywords; + PyObject *freelist = NULL, *current_arg; + assert(args != NULL && PyTuple_Check(args)); + assert(keywords == NULL || PyDict_Check(keywords)); + assert(format != NULL); + assert(kwlist != NULL); + assert(p_va != NULL); - assert(args != NULL && PyTuple_Check(args)); - assert(keywords == NULL || PyDict_Check(keywords)); - assert(format != NULL); - assert(kwlist != NULL); - assert(p_va != NULL); + /* grab the function name or custom error msg first (mutually exclusive) */ + fname = strchr(format, ':'); + if (fname) { + fname++; + custom_msg = NULL; + } + else { + custom_msg = strchr(format,';'); + if (custom_msg) + custom_msg++; + } - /* grab the function name or custom error msg first (mutually exclusive) */ - fname = strchr(format, ':'); - if (fname) { - fname++; - custom_msg = NULL; - } - else { - custom_msg = strchr(format,';'); - if (custom_msg) - custom_msg++; - } + /* scan kwlist and get greatest possible nbr of args */ + for (len=0; kwlist[len]; len++) + continue; - /* scan kwlist and get greatest possible nbr of args */ - for (len=0; kwlist[len]; len++) - continue; + nargs = PyTuple_GET_SIZE(args); + nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords); + if (nargs + nkeywords > len) { + PyErr_Format(PyExc_TypeError, "%s%s takes at most %d " + "argument%s (%d given)", + (fname == NULL) ? "function" : fname, + (fname == NULL) ? "" : "()", + len, + (len == 1) ? 
"" : "s", + nargs + nkeywords); + return 0; + } - freelist.entries = PyMem_New(freelistentry_t, len); + /* convert tuple args and keyword args in same loop, using kwlist to drive process */ + for (i = 0; i < len; i++) { + keyword = kwlist[i]; + if (*format == '|') { + min = i; + format++; + } + if (IS_END_OF_FORMAT(*format)) { + PyErr_Format(PyExc_RuntimeError, + "More keyword list entries (%d) than " + "format specifiers (%d)", len, i); + return cleanreturn(0, freelist); + } + current_arg = NULL; + if (nkeywords) { + current_arg = PyDict_GetItemString(keywords, keyword); + } + if (current_arg) { + --nkeywords; + if (i < nargs) { + /* arg present in tuple and in dict */ + PyErr_Format(PyExc_TypeError, + "Argument given by name ('%s') " + "and position (%d)", + keyword, i+1); + return cleanreturn(0, freelist); + } + } + else if (nkeywords && PyErr_Occurred()) + return cleanreturn(0, freelist); + else if (i < nargs) + current_arg = PyTuple_GET_ITEM(args, i); - nargs = PyTuple_GET_SIZE(args); - nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords); - if (nargs + nkeywords > len) { - PyErr_Format(PyExc_TypeError, "%s%s takes at most %d " - "argument%s (%d given)", - (fname == NULL) ? "function" : fname, - (fname == NULL) ? "" : "()", - len, - (len == 1) ? 
"" : "s", - nargs + nkeywords); - return cleanreturn(0, &freelist); - } + if (current_arg) { + msg = convertitem(current_arg, &format, p_va, flags, + levels, msgbuf, sizeof(msgbuf), &freelist); + if (msg) { + seterror(i+1, msg, levels, fname, custom_msg); + return cleanreturn(0, freelist); + } + continue; + } - /* convert tuple args and keyword args in same loop, using kwlist to drive process */ - for (i = 0; i < len; i++) { - keyword = kwlist[i]; - if (*format == '|') { - min = i; - format++; - } - if (IS_END_OF_FORMAT(*format)) { - PyErr_Format(PyExc_RuntimeError, - "More keyword list entries (%d) than " - "format specifiers (%d)", len, i); - return cleanreturn(0, &freelist); - } - current_arg = NULL; - if (nkeywords) { - current_arg = PyDict_GetItemString(keywords, keyword); - } - if (current_arg) { - --nkeywords; - if (i < nargs) { - /* arg present in tuple and in dict */ - PyErr_Format(PyExc_TypeError, - "Argument given by name ('%s') " - "and position (%d)", - keyword, i+1); - return cleanreturn(0, &freelist); - } - } - else if (nkeywords && PyErr_Occurred()) - return cleanreturn(0, &freelist); - else if (i < nargs) - current_arg = PyTuple_GET_ITEM(args, i); - - if (current_arg) { - msg = convertitem(current_arg, &format, p_va, flags, - levels, msgbuf, sizeof(msgbuf), &freelist); - if (msg) { - seterror(i+1, msg, levels, fname, custom_msg); - return cleanreturn(0, &freelist); - } - continue; - } + if (i < min) { + PyErr_Format(PyExc_TypeError, "Required argument " + "'%s' (pos %d) not found", + keyword, i+1); + return cleanreturn(0, freelist); + } + /* current code reports success when all required args + * fulfilled and no keyword args left, with no further + * validation. XXX Maybe skip this in debug build ? 
+ */ + if (!nkeywords) + return cleanreturn(1, freelist); - if (i < min) { - PyErr_Format(PyExc_TypeError, "Required argument " - "'%s' (pos %d) not found", - keyword, i+1); - return cleanreturn(0, &freelist); - } - /* current code reports success when all required args - * fulfilled and no keyword args left, with no further - * validation. XXX Maybe skip this in debug build ? - */ - if (!nkeywords) - return cleanreturn(1, &freelist); + /* We are into optional args, skip thru to any remaining + * keyword args */ + msg = skipitem(&format, p_va, flags); + if (msg) { + PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg, + format); + return cleanreturn(0, freelist); + } + } - /* We are into optional args, skip thru to any remaining - * keyword args */ - msg = skipitem(&format, p_va, flags); - if (msg) { - PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg, - format); - return cleanreturn(0, &freelist); - } - } + if (!IS_END_OF_FORMAT(*format) && *format != '|') { + PyErr_Format(PyExc_RuntimeError, + "more argument specifiers than keyword list entries " + "(remaining format:'%s')", format); + return cleanreturn(0, freelist); + } - if (!IS_END_OF_FORMAT(*format) && *format != '|') { - PyErr_Format(PyExc_RuntimeError, - "more argument specifiers than keyword list entries " - "(remaining format:'%s')", format); - return cleanreturn(0, &freelist); - } + /* make sure there are no extraneous keyword arguments */ + if (nkeywords > 0) { + PyObject *key, *value; + Py_ssize_t pos = 0; + while (PyDict_Next(keywords, &pos, &key, &value)) { + int match = 0; + char *ks; + if (!PyString_Check(key)) { + PyErr_SetString(PyExc_TypeError, + "keywords must be strings"); + return cleanreturn(0, freelist); + } + ks = PyString_AsString(key); + for (i = 0; i < len; i++) { + if (!strcmp(ks, kwlist[i])) { + match = 1; + break; + } + } + if (!match) { + PyErr_Format(PyExc_TypeError, + "'%s' is an invalid keyword " + "argument for this function", + ks); + return cleanreturn(0, freelist); + } + } + } - 
/* make sure there are no extraneous keyword arguments */ - if (nkeywords > 0) { - PyObject *key, *value; - Py_ssize_t pos = 0; - while (PyDict_Next(keywords, &pos, &key, &value)) { - int match = 0; - char *ks; - if (!PyString_Check(key)) { - PyErr_SetString(PyExc_TypeError, - "keywords must be strings"); - return cleanreturn(0, &freelist); - } - ks = PyString_AsString(key); - for (i = 0; i < len; i++) { - if (!strcmp(ks, kwlist[i])) { - match = 1; - break; - } - } - if (!match) { - PyErr_Format(PyExc_TypeError, - "'%s' is an invalid keyword " - "argument for this function", - ks); - return cleanreturn(0, &freelist); - } - } - } - - return cleanreturn(1, &freelist); + return cleanreturn(1, freelist); } static char * skipitem(const char **p_format, va_list *p_va, int flags) { - const char *format = *p_format; - char c = *format++; - - switch (c) { + const char *format = *p_format; + char c = *format++; - /* simple codes - * The individual types (second arg of va_arg) are irrelevant */ + switch (c) { - case 'b': /* byte -- very short int */ - case 'B': /* byte as bitfield */ - case 'h': /* short int */ - case 'H': /* short int as bitfield */ - case 'i': /* int */ - case 'I': /* int sized bitfield */ - case 'l': /* long int */ - case 'k': /* long int sized bitfield */ + /* simple codes + * The individual types (second arg of va_arg) are irrelevant */ + + case 'b': /* byte -- very short int */ + case 'B': /* byte as bitfield */ + case 'h': /* short int */ + case 'H': /* short int as bitfield */ + case 'i': /* int */ + case 'I': /* int sized bitfield */ + case 'l': /* long int */ + case 'k': /* long int sized bitfield */ #ifdef HAVE_LONG_LONG - case 'L': /* PY_LONG_LONG */ - case 'K': /* PY_LONG_LONG sized bitfield */ + case 'L': /* PY_LONG_LONG */ + case 'K': /* PY_LONG_LONG sized bitfield */ #endif - case 'f': /* float */ - case 'd': /* double */ + case 'f': /* float */ + case 'd': /* double */ #ifndef WITHOUT_COMPLEX - case 'D': /* complex double */ + case 'D': /* 
complex double */ #endif - case 'c': /* char */ - { - (void) va_arg(*p_va, void *); - break; - } + case 'c': /* char */ + { + (void) va_arg(*p_va, void *); + break; + } - case 'n': /* Py_ssize_t */ - { - (void) va_arg(*p_va, Py_ssize_t *); - break; - } - - /* string codes */ - - case 'e': /* string with encoding */ - { - (void) va_arg(*p_va, const char *); - if (!(*format == 's' || *format == 't')) - /* after 'e', only 's' and 't' is allowed */ - goto err; - format++; - /* explicit fallthrough to string cases */ - } - - case 's': /* string */ - case 'z': /* string or None */ + case 'n': /* Py_ssize_t */ + { + (void) va_arg(*p_va, Py_ssize_t *); + break; + } + + /* string codes */ + + case 'e': /* string with encoding */ + { + (void) va_arg(*p_va, const char *); + if (!(*format == 's' || *format == 't')) + /* after 'e', only 's' and 't' is allowed */ + goto err; + format++; + /* explicit fallthrough to string cases */ + } + + case 's': /* string */ + case 'z': /* string or None */ #ifdef Py_USING_UNICODE - case 'u': /* unicode string */ + case 'u': /* unicode string */ #endif - case 't': /* buffer, read-only */ - case 'w': /* buffer, read-write */ - { - (void) va_arg(*p_va, char **); - if (*format == '#') { - if (flags & FLAG_SIZE_T) - (void) va_arg(*p_va, Py_ssize_t *); - else - (void) va_arg(*p_va, int *); - format++; - } else if ((c == 's' || c == 'z') && *format == '*') { - format++; - } - break; - } + case 't': /* buffer, read-only */ + case 'w': /* buffer, read-write */ + { + (void) va_arg(*p_va, char **); + if (*format == '#') { + if (flags & FLAG_SIZE_T) + (void) va_arg(*p_va, Py_ssize_t *); + else + (void) va_arg(*p_va, int *); + format++; + } else if ((c == 's' || c == 'z') && *format == '*') { + format++; + } + break; + } - /* object codes */ + /* object codes */ - case 'S': /* string object */ + case 'S': /* string object */ #ifdef Py_USING_UNICODE - case 'U': /* unicode string object */ + case 'U': /* unicode string object */ #endif - { - (void) 
va_arg(*p_va, PyObject **); - break; - } - - case 'O': /* object */ - { - if (*format == '!') { - format++; - (void) va_arg(*p_va, PyTypeObject*); - (void) va_arg(*p_va, PyObject **); - } -#if 0 -/* I don't know what this is for */ - else if (*format == '?') { - inquiry pred = va_arg(*p_va, inquiry); - format++; - if ((*pred)(arg)) { - (void) va_arg(*p_va, PyObject **); - } - } -#endif - else if (*format == '&') { - typedef int (*converter)(PyObject *, void *); - (void) va_arg(*p_va, converter); - (void) va_arg(*p_va, void *); - format++; - } - else { - (void) va_arg(*p_va, PyObject **); - } - break; - } + { + (void) va_arg(*p_va, PyObject **); + break; + } - case '(': /* bypass tuple, not handled at all previously */ - { - char *msg; - for (;;) { - if (*format==')') - break; - if (IS_END_OF_FORMAT(*format)) - return "Unmatched left paren in format " - "string"; - msg = skipitem(&format, p_va, flags); - if (msg) - return msg; - } - format++; - break; - } + case 'O': /* object */ + { + if (*format == '!') { + format++; + (void) va_arg(*p_va, PyTypeObject*); + (void) va_arg(*p_va, PyObject **); + } + else if (*format == '&') { + typedef int (*converter)(PyObject *, void *); + (void) va_arg(*p_va, converter); + (void) va_arg(*p_va, void *); + format++; + } + else { + (void) va_arg(*p_va, PyObject **); + } + break; + } - case ')': - return "Unmatched right paren in format string"; + case '(': /* bypass tuple, not handled at all previously */ + { + char *msg; + for (;;) { + if (*format==')') + break; + if (IS_END_OF_FORMAT(*format)) + return "Unmatched left paren in format " + "string"; + msg = skipitem(&format, p_va, flags); + if (msg) + return msg; + } + format++; + break; + } - default: + case ')': + return "Unmatched right paren in format string"; + + default: err: - return "impossible"; - - } + return "impossible"; - *p_format = format; - return NULL; + } + + *p_format = format; + return NULL; } int PyArg_UnpackTuple(PyObject *args, const char *name, Py_ssize_t 
min, Py_ssize_t max, ...) { - Py_ssize_t i, l; - PyObject **o; - va_list vargs; + Py_ssize_t i, l; + PyObject **o; + va_list vargs; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, max); + va_start(vargs, max); #else - va_start(vargs); + va_start(vargs); #endif - assert(min >= 0); - assert(min <= max); - if (!PyTuple_Check(args)) { - PyErr_SetString(PyExc_SystemError, - "PyArg_UnpackTuple() argument list is not a tuple"); - return 0; - } - l = PyTuple_GET_SIZE(args); - if (l < min) { - if (name != NULL) - PyErr_Format( - PyExc_TypeError, - "%s expected %s%zd arguments, got %zd", - name, (min == max ? "" : "at least "), min, l); - else - PyErr_Format( - PyExc_TypeError, - "unpacked tuple should have %s%zd elements," - " but has %zd", - (min == max ? "" : "at least "), min, l); - va_end(vargs); - return 0; - } - if (l > max) { - if (name != NULL) - PyErr_Format( - PyExc_TypeError, - "%s expected %s%zd arguments, got %zd", - name, (min == max ? "" : "at most "), max, l); - else - PyErr_Format( - PyExc_TypeError, - "unpacked tuple should have %s%zd elements," - " but has %zd", - (min == max ? "" : "at most "), max, l); - va_end(vargs); - return 0; - } - for (i = 0; i < l; i++) { - o = va_arg(vargs, PyObject **); - *o = PyTuple_GET_ITEM(args, i); - } - va_end(vargs); - return 1; + assert(min >= 0); + assert(min <= max); + if (!PyTuple_Check(args)) { + PyErr_SetString(PyExc_SystemError, + "PyArg_UnpackTuple() argument list is not a tuple"); + return 0; + } + l = PyTuple_GET_SIZE(args); + if (l < min) { + if (name != NULL) + PyErr_Format( + PyExc_TypeError, + "%s expected %s%zd arguments, got %zd", + name, (min == max ? "" : "at least "), min, l); + else + PyErr_Format( + PyExc_TypeError, + "unpacked tuple should have %s%zd elements," + " but has %zd", + (min == max ? "" : "at least "), min, l); + va_end(vargs); + return 0; + } + if (l > max) { + if (name != NULL) + PyErr_Format( + PyExc_TypeError, + "%s expected %s%zd arguments, got %zd", + name, (min == max ? 
"" : "at most "), max, l); + else + PyErr_Format( + PyExc_TypeError, + "unpacked tuple should have %s%zd elements," + " but has %zd", + (min == max ? "" : "at most "), max, l); + va_end(vargs); + return 0; + } + for (i = 0; i < l; i++) { + o = va_arg(vargs, PyObject **); + *o = PyTuple_GET_ITEM(args, i); + } + va_end(vargs); + return 1; } /* For type constructors that don't take keyword args * - * Sets a TypeError and returns 0 if the kwds dict is + * Sets a TypeError and returns 0 if the kwds dict is * not empty, returns 1 otherwise */ int _PyArg_NoKeywords(const char *funcname, PyObject *kw) { - if (kw == NULL) - return 1; - if (!PyDict_CheckExact(kw)) { - PyErr_BadInternalCall(); - return 0; - } - if (PyDict_Size(kw) == 0) - return 1; - - PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments", - funcname); - return 0; + if (kw == NULL) + return 1; + if (!PyDict_CheckExact(kw)) { + PyErr_BadInternalCall(); + return 0; + } + if (PyDict_Size(kw) == 0) + return 1; + + PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments", + funcname); + return 0; } #ifdef __cplusplus }; diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c --- a/pypy/module/cpyext/src/modsupport.c +++ b/pypy/module/cpyext/src/modsupport.c @@ -33,41 +33,41 @@ static int countformat(const char *format, int endchar) { - int count = 0; - int level = 0; - while (level > 0 || *format != endchar) { - switch (*format) { - case '\0': - /* Premature end */ - PyErr_SetString(PyExc_SystemError, - "unmatched paren in format"); - return -1; - case '(': - case '[': - case '{': - if (level == 0) - count++; - level++; - break; - case ')': - case ']': - case '}': - level--; - break; - case '#': - case '&': - case ',': - case ':': - case ' ': - case '\t': - break; - default: - if (level == 0) - count++; - } - format++; - } - return count; + int count = 0; + int level = 0; + while (level > 0 || *format != endchar) { + switch (*format) { + case '\0': + /* 
Premature end */
+            PyErr_SetString(PyExc_SystemError,
+                            "unmatched paren in format");
+            return -1;
+        case '(':
+        case '[':
+        case '{':
+            if (level == 0)
+                count++;
+            level++;
+            break;
+        case ')':
+        case ']':
+        case '}':
+            level--;
+            break;
+        case '#':
+        case '&':
+        case ',':
+        case ':':
+        case ' ':
+        case '\t':
+            break;
+        default:
+            if (level == 0)
+                count++;
+        }
+        format++;
+    }
+    return count;
 }
@@ -83,582 +83,435 @@
 static PyObject *
 do_mkdict(const char **p_format, va_list *p_va, int endchar, int n, int flags)
 {
-	PyObject *d;
-	int i;
-	int itemfailed = 0;
-	if (n < 0)
-		return NULL;
-	if ((d = PyDict_New()) == NULL)
-		return NULL;
-	/* Note that we can't bail immediately on error as this will leak
-	   refcounts on any 'N' arguments. */
-	for (i = 0; i < n; i+= 2) {
-		PyObject *k, *v;
-		int err;
-		k = do_mkvalue(p_format, p_va, flags);
-		if (k == NULL) {
-			itemfailed = 1;
-			Py_INCREF(Py_None);
-			k = Py_None;
-		}
-		v = do_mkvalue(p_format, p_va, flags);
-		if (v == NULL) {
-			itemfailed = 1;
-			Py_INCREF(Py_None);
-			v = Py_None;
-		}
-		err = PyDict_SetItem(d, k, v);
-		Py_DECREF(k);
-		Py_DECREF(v);
-		if (err < 0 || itemfailed) {
-			Py_DECREF(d);
-			return NULL;
-		}
-	}
-	if (d != NULL && **p_format != endchar) {
-		Py_DECREF(d);
-		d = NULL;
-		PyErr_SetString(PyExc_SystemError,
-				"Unmatched paren in format");
-	}
-	else if (endchar)
-		++*p_format;
-	return d;
+    PyObject *d;
+    int i;
+    int itemfailed = 0;
+    if (n < 0)
+        return NULL;
+    if ((d = PyDict_New()) == NULL)
+        return NULL;
+    /* Note that we can't bail immediately on error as this will leak
+       refcounts on any 'N' arguments. */
+    for (i = 0; i < n; i+= 2) {
+        PyObject *k, *v;
+        int err;
+        k = do_mkvalue(p_format, p_va, flags);
+        if (k == NULL) {
+            itemfailed = 1;
+            Py_INCREF(Py_None);
+            k = Py_None;
+        }
+        v = do_mkvalue(p_format, p_va, flags);
+        if (v == NULL) {
+            itemfailed = 1;
+            Py_INCREF(Py_None);
+            v = Py_None;
+        }
+        err = PyDict_SetItem(d, k, v);
+        Py_DECREF(k);
+        Py_DECREF(v);
+        if (err < 0 || itemfailed) {
+            Py_DECREF(d);
+            return NULL;
+        }
+    }
+    if (d != NULL && **p_format != endchar) {
+        Py_DECREF(d);
+        d = NULL;
+        PyErr_SetString(PyExc_SystemError,
+                        "Unmatched paren in format");
+    }
+    else if (endchar)
+        ++*p_format;
+    return d;
 }
 
 static PyObject *
 do_mklist(const char **p_format, va_list *p_va, int endchar, int n, int flags)
 {
-	PyObject *v;
-	int i;
-	int itemfailed = 0;
-	if (n < 0)
-		return NULL;
-	v = PyList_New(n);
-	if (v == NULL)
-		return NULL;
-	/* Note that we can't bail immediately on error as this will leak
-	   refcounts on any 'N' arguments. */
-	for (i = 0; i < n; i++) {
-		PyObject *w = do_mkvalue(p_format, p_va, flags);
-		if (w == NULL) {
-			itemfailed = 1;
-			Py_INCREF(Py_None);
-			w = Py_None;
-		}
-		PyList_SET_ITEM(v, i, w);
-	}
+    PyObject *v;
+    int i;
+    int itemfailed = 0;
+    if (n < 0)
+        return NULL;
+    v = PyList_New(n);
+    if (v == NULL)
+        return NULL;
+    /* Note that we can't bail immediately on error as this will leak
+       refcounts on any 'N' arguments. */
+    for (i = 0; i < n; i++) {
+        PyObject *w = do_mkvalue(p_format, p_va, flags);
+        if (w == NULL) {
+            itemfailed = 1;
+            Py_INCREF(Py_None);
+            w = Py_None;
+        }
+        PyList_SET_ITEM(v, i, w);
+    }
 
-	if (itemfailed) {
-		/* do_mkvalue() should have already set an error */
-		Py_DECREF(v);
-		return NULL;
-	}
-	if (**p_format != endchar) {
-		Py_DECREF(v);
-		PyErr_SetString(PyExc_SystemError,
-				"Unmatched paren in format");
-		return NULL;
-	}
-	if (endchar)
-		++*p_format;
-	return v;
+    if (itemfailed) {
+        /* do_mkvalue() should have already set an error */
+        Py_DECREF(v);
+        return NULL;
+    }
+    if (**p_format != endchar) {
+        Py_DECREF(v);
+        PyErr_SetString(PyExc_SystemError,
+                        "Unmatched paren in format");
+        return NULL;
+    }
+    if (endchar)
+        ++*p_format;
+    return v;
 }
 
 #ifdef Py_USING_UNICODE
 static int
 _ustrlen(Py_UNICODE *u)
 {
-	int i = 0;
-	Py_UNICODE *v = u;
-	while (*v != 0) { i++; v++; }
-	return i;
+    int i = 0;
+    Py_UNICODE *v = u;
+    while (*v != 0) { i++; v++; }
+    return i;
 }
 #endif
 
 static PyObject *
 do_mktuple(const char **p_format, va_list *p_va, int endchar, int n, int flags)
 {
-	PyObject *v;
-	int i;
-	int itemfailed = 0;
-	if (n < 0)
-		return NULL;
-	if ((v = PyTuple_New(n)) == NULL)
-		return NULL;
-	/* Note that we can't bail immediately on error as this will leak
-	   refcounts on any 'N' arguments. */
-	for (i = 0; i < n; i++) {
-		PyObject *w = do_mkvalue(p_format, p_va, flags);
-		if (w == NULL) {
-			itemfailed = 1;
-			Py_INCREF(Py_None);
-			w = Py_None;
-		}
-		PyTuple_SET_ITEM(v, i, w);
-	}
-	if (itemfailed) {
-		/* do_mkvalue() should have already set an error */
-		Py_DECREF(v);
-		return NULL;
-	}
-	if (**p_format != endchar) {
-		Py_DECREF(v);
-		PyErr_SetString(PyExc_SystemError,
-				"Unmatched paren in format");
-		return NULL;
-	}
-	if (endchar)
-		++*p_format;
-	return v;
+    PyObject *v;
+    int i;
+    int itemfailed = 0;
+    if (n < 0)
+        return NULL;
+    if ((v = PyTuple_New(n)) == NULL)
+        return NULL;
+    /* Note that we can't bail immediately on error as this will leak
+       refcounts on any 'N' arguments. */
+    for (i = 0; i < n; i++) {
+        PyObject *w = do_mkvalue(p_format, p_va, flags);
+        if (w == NULL) {
+            itemfailed = 1;
+            Py_INCREF(Py_None);
+            w = Py_None;
+        }
+        PyTuple_SET_ITEM(v, i, w);
+    }
+    if (itemfailed) {
+        /* do_mkvalue() should have already set an error */
+        Py_DECREF(v);
+        return NULL;
+    }
+    if (**p_format != endchar) {
+        Py_DECREF(v);
+        PyErr_SetString(PyExc_SystemError,
+                        "Unmatched paren in format");
+        return NULL;
+    }
+    if (endchar)
+        ++*p_format;
+    return v;
 }
 
 static PyObject *
 do_mkvalue(const char **p_format, va_list *p_va, int flags)
 {
-	for (;;) {
-		switch (*(*p_format)++) {
-		case '(':
-			return do_mktuple(p_format, p_va, ')',
-					  countformat(*p_format, ')'), flags);
+    for (;;) {
+        switch (*(*p_format)++) {
+        case '(':
+            return do_mktuple(p_format, p_va, ')',
+                              countformat(*p_format, ')'), flags);
 
-		case '[':
-			return do_mklist(p_format, p_va, ']',
-					 countformat(*p_format, ']'), flags);
+        case '[':
+            return do_mklist(p_format, p_va, ']',
+                             countformat(*p_format, ']'), flags);
 
-		case '{':
-			return do_mkdict(p_format, p_va, '}',
-					 countformat(*p_format, '}'), flags);
+        case '{':
+            return do_mkdict(p_format, p_va, '}',
+                             countformat(*p_format, '}'), flags);
 
-		case 'b':
-		case 'B':
-		case 'h':
-		case 'i':
-			return PyInt_FromLong((long)va_arg(*p_va, int));
-
-
case 'H': - return PyInt_FromLong((long)va_arg(*p_va, unsigned int)); + case 'b': + case 'B': + case 'h': + case 'i': + return PyInt_FromLong((long)va_arg(*p_va, int)); - case 'I': - { - unsigned int n; - n = va_arg(*p_va, unsigned int); - if (n > (unsigned long)PyInt_GetMax()) - return PyLong_FromUnsignedLong((unsigned long)n); - else - return PyInt_FromLong(n); - } - - case 'n': + case 'H': + return PyInt_FromLong((long)va_arg(*p_va, unsigned int)); + + case 'I': + { + unsigned int n; + n = va_arg(*p_va, unsigned int); + if (n > (unsigned long)PyInt_GetMax()) + return PyLong_FromUnsignedLong((unsigned long)n); + else + return PyInt_FromLong(n); + } + + case 'n': #if SIZEOF_SIZE_T!=SIZEOF_LONG - return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t)); + return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t)); #endif - /* Fall through from 'n' to 'l' if Py_ssize_t is long */ - case 'l': - return PyInt_FromLong(va_arg(*p_va, long)); + /* Fall through from 'n' to 'l' if Py_ssize_t is long */ + case 'l': + return PyInt_FromLong(va_arg(*p_va, long)); - case 'k': - { - unsigned long n; - n = va_arg(*p_va, unsigned long); - if (n > (unsigned long)PyInt_GetMax()) - return PyLong_FromUnsignedLong(n); - else - return PyInt_FromLong(n); - } + case 'k': + { + unsigned long n; + n = va_arg(*p_va, unsigned long); + if (n > (unsigned long)PyInt_GetMax()) + return PyLong_FromUnsignedLong(n); + else + return PyInt_FromLong(n); + } #ifdef HAVE_LONG_LONG - case 'L': - return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG)); + case 'L': + return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG)); - case 'K': - return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG)); + case 'K': + return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG)); #endif #ifdef Py_USING_UNICODE - case 'u': - { - PyObject *v; - Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *); - Py_ssize_t n; - if (**p_format == '#') { - ++*p_format; - if 
(flags & FLAG_SIZE_T) - n = va_arg(*p_va, Py_ssize_t); - else - n = va_arg(*p_va, int); - } - else - n = -1; - if (u == NULL) { - v = Py_None; - Py_INCREF(v); - } - else { - if (n < 0) - n = _ustrlen(u); - v = PyUnicode_FromUnicode(u, n); - } - return v; - } + case 'u': + { + PyObject *v; + Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *); + Py_ssize_t n; + if (**p_format == '#') { + ++*p_format; + if (flags & FLAG_SIZE_T) + n = va_arg(*p_va, Py_ssize_t); + else + n = va_arg(*p_va, int); + } + else + n = -1; + if (u == NULL) { + v = Py_None; + Py_INCREF(v); + } + else { + if (n < 0) + n = _ustrlen(u); + v = PyUnicode_FromUnicode(u, n); + } + return v; + } #endif - case 'f': - case 'd': - return PyFloat_FromDouble( - (double)va_arg(*p_va, va_double)); + case 'f': + case 'd': + return PyFloat_FromDouble( + (double)va_arg(*p_va, va_double)); #ifndef WITHOUT_COMPLEX - case 'D': - return PyComplex_FromCComplex( - *((Py_complex *)va_arg(*p_va, Py_complex *))); + case 'D': + return PyComplex_FromCComplex( + *((Py_complex *)va_arg(*p_va, Py_complex *))); #endif /* WITHOUT_COMPLEX */ - case 'c': - { - char p[1]; - p[0] = (char)va_arg(*p_va, int); - return PyString_FromStringAndSize(p, 1); - } + case 'c': + { + char p[1]; + p[0] = (char)va_arg(*p_va, int); + return PyString_FromStringAndSize(p, 1); + } - case 's': - case 'z': - { - PyObject *v; - char *str = va_arg(*p_va, char *); - Py_ssize_t n; - if (**p_format == '#') { - ++*p_format; - if (flags & FLAG_SIZE_T) - n = va_arg(*p_va, Py_ssize_t); - else - n = va_arg(*p_va, int); - } - else - n = -1; - if (str == NULL) { - v = Py_None; - Py_INCREF(v); - } - else { - if (n < 0) { - size_t m = strlen(str); - if (m > PY_SSIZE_T_MAX) { - PyErr_SetString(PyExc_OverflowError, - "string too long for Python string"); - return NULL; - } - n = (Py_ssize_t)m; - } - v = PyString_FromStringAndSize(str, n); - } - return v; - } + case 's': + case 'z': + { + PyObject *v; + char *str = va_arg(*p_va, char *); + Py_ssize_t n; + if (**p_format == 
'#') { + ++*p_format; + if (flags & FLAG_SIZE_T) + n = va_arg(*p_va, Py_ssize_t); + else + n = va_arg(*p_va, int); + } + else + n = -1; + if (str == NULL) { + v = Py_None; + Py_INCREF(v); + } + else { + if (n < 0) { + size_t m = strlen(str); + if (m > PY_SSIZE_T_MAX) { + PyErr_SetString(PyExc_OverflowError, + "string too long for Python string"); + return NULL; + } + n = (Py_ssize_t)m; + } + v = PyString_FromStringAndSize(str, n); + } + return v; + } - case 'N': - case 'S': - case 'O': - if (**p_format == '&') { - typedef PyObject *(*converter)(void *); - converter func = va_arg(*p_va, converter); - void *arg = va_arg(*p_va, void *); - ++*p_format; - return (*func)(arg); - } - else { - PyObject *v; - v = va_arg(*p_va, PyObject *); - if (v != NULL) { - if (*(*p_format - 1) != 'N') - Py_INCREF(v); - } - else if (!PyErr_Occurred()) - /* If a NULL was passed - * because a call that should - * have constructed a value - * failed, that's OK, and we - * pass the error on; but if - * no error occurred it's not - * clear that the caller knew - * what she was doing. */ - PyErr_SetString(PyExc_SystemError, - "NULL object passed to Py_BuildValue"); - return v; - } + case 'N': + case 'S': + case 'O': + if (**p_format == '&') { + typedef PyObject *(*converter)(void *); + converter func = va_arg(*p_va, converter); + void *arg = va_arg(*p_va, void *); + ++*p_format; + return (*func)(arg); + } + else { + PyObject *v; + v = va_arg(*p_va, PyObject *); + if (v != NULL) { + if (*(*p_format - 1) != 'N') + Py_INCREF(v); + } + else if (!PyErr_Occurred()) + /* If a NULL was passed + * because a call that should + * have constructed a value + * failed, that's OK, and we + * pass the error on; but if + * no error occurred it's not + * clear that the caller knew + * what she was doing. 
*/ + PyErr_SetString(PyExc_SystemError, + "NULL object passed to Py_BuildValue"); + return v; + } - case ':': - case ',': - case ' ': - case '\t': - break; + case ':': + case ',': + case ' ': + case '\t': + break; - default: - PyErr_SetString(PyExc_SystemError, - "bad format char passed to Py_BuildValue"); - return NULL; + default: + PyErr_SetString(PyExc_SystemError, + "bad format char passed to Py_BuildValue"); + return NULL; - } - } + } + } } PyObject * Py_BuildValue(const char *format, ...) { - va_list va; - PyObject* retval; - va_start(va, format); - retval = va_build_value(format, va, 0); - va_end(va); - return retval; + va_list va; + PyObject* retval; + va_start(va, format); + retval = va_build_value(format, va, 0); + va_end(va); + return retval; } PyObject * _Py_BuildValue_SizeT(const char *format, ...) { - va_list va; - PyObject* retval; - va_start(va, format); - retval = va_build_value(format, va, FLAG_SIZE_T); - va_end(va); - return retval; + va_list va; + PyObject* retval; + va_start(va, format); + retval = va_build_value(format, va, FLAG_SIZE_T); + va_end(va); + return retval; } PyObject * Py_VaBuildValue(const char *format, va_list va) { - return va_build_value(format, va, 0); + return va_build_value(format, va, 0); } PyObject * _Py_VaBuildValue_SizeT(const char *format, va_list va) { - return va_build_value(format, va, FLAG_SIZE_T); + return va_build_value(format, va, FLAG_SIZE_T); } static PyObject * va_build_value(const char *format, va_list va, int flags) { - const char *f = format; - int n = countformat(f, '\0'); - va_list lva; + const char *f = format; + int n = countformat(f, '\0'); + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - if (n < 0) - return NULL; - if (n == 0) { - Py_INCREF(Py_None); - return Py_None; - } - if (n == 1) - return do_mkvalue(&f, &lva, flags); - 
return do_mktuple(&f, &lva, '\0', n, flags); + if (n < 0) + return NULL; + if (n == 0) { + Py_INCREF(Py_None); + return Py_None; + } + if (n == 1) + return do_mkvalue(&f, &lva, flags); + return do_mktuple(&f, &lva, '\0', n, flags); } PyObject * PyEval_CallFunction(PyObject *obj, const char *format, ...) { - va_list vargs; - PyObject *args; - PyObject *res; + va_list vargs; + PyObject *args; + PyObject *res; - va_start(vargs, format); + va_start(vargs, format); - args = Py_VaBuildValue(format, vargs); - va_end(vargs); + args = Py_VaBuildValue(format, vargs); + va_end(vargs); - if (args == NULL) - return NULL; + if (args == NULL) + return NULL; - res = PyEval_CallObject(obj, args); - Py_DECREF(args); + res = PyEval_CallObject(obj, args); + Py_DECREF(args); - return res; + return res; } PyObject * PyEval_CallMethod(PyObject *obj, const char *methodname, const char *format, ...) { - va_list vargs; - PyObject *meth; - PyObject *args; - PyObject *res; + va_list vargs; + PyObject *meth; + PyObject *args; + PyObject *res; - meth = PyObject_GetAttrString(obj, methodname); - if (meth == NULL) - return NULL; + meth = PyObject_GetAttrString(obj, methodname); + if (meth == NULL) + return NULL; - va_start(vargs, format); + va_start(vargs, format); - args = Py_VaBuildValue(format, vargs); - va_end(vargs); + args = Py_VaBuildValue(format, vargs); + va_end(vargs); - if (args == NULL) { - Py_DECREF(meth); - return NULL; - } + if (args == NULL) { + Py_DECREF(meth); + return NULL; + } - res = PyEval_CallObject(meth, args); - Py_DECREF(meth); - Py_DECREF(args); + res = PyEval_CallObject(meth, args); + Py_DECREF(meth); + Py_DECREF(args); - return res; -} - -static PyObject* -call_function_tail(PyObject *callable, PyObject *args) -{ - PyObject *retval; - - if (args == NULL) - return NULL; - - if (!PyTuple_Check(args)) { - PyObject *a; - - a = PyTuple_New(1); - if (a == NULL) { - Py_DECREF(args); - return NULL; - } - PyTuple_SET_ITEM(a, 0, args); - args = a; - } - retval = 
PyObject_Call(callable, args, NULL); - - Py_DECREF(args); - - return retval; -} - -PyObject * -PyObject_CallFunction(PyObject *callable, const char *format, ...) -{ - va_list va; - PyObject *args; - - if (format && *format) { - va_start(va, format); - args = Py_VaBuildValue(format, va); - va_end(va); - } - else - args = PyTuple_New(0); - - return call_function_tail(callable, args); -} - -PyObject * -PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) -{ - va_list va; - PyObject *args; - PyObject *func = NULL; - PyObject *retval = NULL; - - func = PyObject_GetAttrString(o, name); - if (func == NULL) { - PyErr_SetString(PyExc_AttributeError, name); - return 0; - } - - if (format && *format) { - va_start(va, format); - args = Py_VaBuildValue(format, va); - va_end(va); - } - else - args = PyTuple_New(0); - - retval = call_function_tail(func, args); - - exit: - /* args gets consumed in call_function_tail */ - Py_XDECREF(func); - - return retval; -} - -static PyObject * -objargs_mktuple(va_list va) -{ - int i, n = 0; - va_list countva; - PyObject *result, *tmp; - -#ifdef VA_LIST_IS_ARRAY - memcpy(countva, va, sizeof(va_list)); -#else -#ifdef __va_copy - __va_copy(countva, va); -#else - countva = va; -#endif -#endif - - while (((PyObject *)va_arg(countva, PyObject *)) != NULL) - ++n; - result = PyTuple_New(n); - if (result != NULL && n > 0) { - for (i = 0; i < n; ++i) { - tmp = (PyObject *)va_arg(va, PyObject *); - Py_INCREF(tmp); - PyTuple_SET_ITEM(result, i, tmp); - } - } - return result; -} - -PyObject * -PyObject_CallFunctionObjArgs(PyObject *callable, ...) -{ - PyObject *args, *tmp; - va_list vargs; - - /* count the args */ - va_start(vargs, callable); - args = objargs_mktuple(vargs); - va_end(vargs); - if (args == NULL) - return NULL; - tmp = PyObject_Call(callable, args, NULL); - Py_DECREF(args); - - return tmp; -} - -PyObject * -PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...) 
-{ - PyObject *args, *tmp; - va_list vargs; - - callable = PyObject_GetAttr(callable, name); - if (callable == NULL) - return NULL; - - /* count the args */ - va_start(vargs, name); - args = objargs_mktuple(vargs); - va_end(vargs); - if (args == NULL) { - Py_DECREF(callable); - return NULL; - } - tmp = PyObject_Call(callable, args, NULL); - Py_DECREF(args); - Py_DECREF(callable); - - return tmp; + return res; } /* returns -1 in case of error, 0 if a new key was added, 1 if the key @@ -666,67 +519,67 @@ static int _PyModule_AddObject_NoConsumeRef(PyObject *m, const char *name, PyObject *o) { - PyObject *dict, *prev; - if (!PyModule_Check(m)) { - PyErr_SetString(PyExc_TypeError, - "PyModule_AddObject() needs module as first arg"); - return -1; - } - if (!o) { - if (!PyErr_Occurred()) - PyErr_SetString(PyExc_TypeError, - "PyModule_AddObject() needs non-NULL value"); - return -1; - } + PyObject *dict, *prev; + if (!PyModule_Check(m)) { + PyErr_SetString(PyExc_TypeError, + "PyModule_AddObject() needs module as first arg"); + return -1; + } + if (!o) { + if (!PyErr_Occurred()) + PyErr_SetString(PyExc_TypeError, + "PyModule_AddObject() needs non-NULL value"); + return -1; + } - dict = PyModule_GetDict(m); - if (dict == NULL) { - /* Internal error -- modules must have a dict! */ - PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__", - PyModule_GetName(m)); - return -1; - } - prev = PyDict_GetItemString(dict, name); - if (PyDict_SetItemString(dict, name, o)) - return -1; - return prev != NULL; + dict = PyModule_GetDict(m); + if (dict == NULL) { + /* Internal error -- modules must have a dict! 
*/ + PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__", + PyModule_GetName(m)); + return -1; + } + prev = PyDict_GetItemString(dict, name); + if (PyDict_SetItemString(dict, name, o)) + return -1; + return prev != NULL; } int PyModule_AddObject(PyObject *m, const char *name, PyObject *o) { - int result = _PyModule_AddObject_NoConsumeRef(m, name, o); - /* XXX WORKAROUND for a common misusage of PyModule_AddObject: - for the common case of adding a new key, we don't consume a - reference, but instead just leak it away. The issue is that - people generally don't realize that this function consumes a - reference, because on CPython the reference is still stored - on the dictionary. */ - if (result != 0) - Py_DECREF(o); - return result < 0 ? -1 : 0; + int result = _PyModule_AddObject_NoConsumeRef(m, name, o); + /* XXX WORKAROUND for a common misusage of PyModule_AddObject: + for the common case of adding a new key, we don't consume a + reference, but instead just leak it away. The issue is that + people generally don't realize that this function consumes a + reference, because on CPython the reference is still stored + on the dictionary. */ + if (result != 0) + Py_DECREF(o); + return result < 0 ? -1 : 0; } -int +int PyModule_AddIntConstant(PyObject *m, const char *name, long value) { - int result; - PyObject *o = PyInt_FromLong(value); - if (!o) - return -1; - result = _PyModule_AddObject_NoConsumeRef(m, name, o); - Py_DECREF(o); - return result < 0 ? -1 : 0; + int result; + PyObject *o = PyInt_FromLong(value); + if (!o) + return -1; + result = _PyModule_AddObject_NoConsumeRef(m, name, o); + Py_DECREF(o); + return result < 0 ? -1 : 0; } -int +int PyModule_AddStringConstant(PyObject *m, const char *name, const char *value) { - int result; - PyObject *o = PyString_FromString(value); - if (!o) - return -1; - result = _PyModule_AddObject_NoConsumeRef(m, name, o); - Py_DECREF(o); - return result < 0 ? 
-1 : 0; + int result; + PyObject *o = PyString_FromString(value); + if (!o) + return -1; + result = _PyModule_AddObject_NoConsumeRef(m, name, o); + Py_DECREF(o); + return result < 0 ? -1 : 0; } diff --git a/pypy/module/cpyext/src/mysnprintf.c b/pypy/module/cpyext/src/mysnprintf.c --- a/pypy/module/cpyext/src/mysnprintf.c +++ b/pypy/module/cpyext/src/mysnprintf.c @@ -20,86 +20,86 @@ Return value (rv): - When 0 <= rv < size, the output conversion was unexceptional, and - rv characters were written to str (excluding a trailing \0 byte at - str[rv]). + When 0 <= rv < size, the output conversion was unexceptional, and + rv characters were written to str (excluding a trailing \0 byte at + str[rv]). - When rv >= size, output conversion was truncated, and a buffer of - size rv+1 would have been needed to avoid truncation. str[size-1] - is \0 in this case. + When rv >= size, output conversion was truncated, and a buffer of + size rv+1 would have been needed to avoid truncation. str[size-1] + is \0 in this case. - When rv < 0, "something bad happened". str[size-1] is \0 in this - case too, but the rest of str is unreliable. It could be that - an error in format codes was detected by libc, or on platforms - with a non-C99 vsnprintf simply that the buffer wasn't big enough - to avoid truncation, or on platforms without any vsnprintf that - PyMem_Malloc couldn't obtain space for a temp buffer. + When rv < 0, "something bad happened". str[size-1] is \0 in this + case too, but the rest of str is unreliable. It could be that + an error in format codes was detected by libc, or on platforms + with a non-C99 vsnprintf simply that the buffer wasn't big enough + to avoid truncation, or on platforms without any vsnprintf that + PyMem_Malloc couldn't obtain space for a temp buffer. CAUTION: Unlike C99, str != NULL and size > 0 are required. */ int +PyOS_snprintf(char *str, size_t size, const char *format, ...) 
+{ + int rc; + va_list va; + + va_start(va, format); + rc = PyOS_vsnprintf(str, size, format, va); + va_end(va); + return rc; +} + +int PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va) { - int len; /* # bytes written, excluding \0 */ + int len; /* # bytes written, excluding \0 */ #ifdef HAVE_SNPRINTF #define _PyOS_vsnprintf_EXTRA_SPACE 1 #else #define _PyOS_vsnprintf_EXTRA_SPACE 512 - char *buffer; + char *buffer; #endif - assert(str != NULL); - assert(size > 0); - assert(format != NULL); - /* We take a size_t as input but return an int. Sanity check - * our input so that it won't cause an overflow in the - * vsnprintf return value or the buffer malloc size. */ - if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) { - len = -666; - goto Done; - } + assert(str != NULL); + assert(size > 0); + assert(format != NULL); + /* We take a size_t as input but return an int. Sanity check + * our input so that it won't cause an overflow in the + * vsnprintf return value or the buffer malloc size. */ + if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) { + len = -666; + goto Done; + } #ifdef HAVE_SNPRINTF - len = vsnprintf(str, size, format, va); + len = vsnprintf(str, size, format, va); #else - /* Emulate it. */ - buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE); - if (buffer == NULL) { - len = -666; - goto Done; - } + /* Emulate it. */ + buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE); + if (buffer == NULL) { + len = -666; + goto Done; + } - len = vsprintf(buffer, format, va); - if (len < 0) - /* ignore the error */; + len = vsprintf(buffer, format, va); + if (len < 0) + /* ignore the error */; - else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE) - Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf"); + else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE) + Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf"); - else { - const size_t to_copy = (size_t)len < size ? 
- (size_t)len : size - 1; - assert(to_copy < size); - memcpy(str, buffer, to_copy); - str[to_copy] = '\0'; - } - PyMem_FREE(buffer); + else { + const size_t to_copy = (size_t)len < size ? + (size_t)len : size - 1; + assert(to_copy < size); + memcpy(str, buffer, to_copy); + str[to_copy] = '\0'; + } + PyMem_FREE(buffer); #endif Done: - if (size > 0) - str[size-1] = '\0'; - return len; + if (size > 0) + str[size-1] = '\0'; + return len; #undef _PyOS_vsnprintf_EXTRA_SPACE } - -int -PyOS_snprintf(char *str, size_t size, const char *format, ...) -{ - int rc; - va_list va; - - va_start(va, format); - rc = PyOS_vsnprintf(str, size, format, va); - va_end(va); - return rc; -} diff --git a/pypy/module/cpyext/src/object.c b/pypy/module/cpyext/src/object.c deleted file mode 100644 --- a/pypy/module/cpyext/src/object.c +++ /dev/null @@ -1,91 +0,0 @@ -// contains code from abstract.c -#include - - -static PyObject * -null_error(void) -{ - if (!PyErr_Occurred()) - PyErr_SetString(PyExc_SystemError, - "null argument to internal routine"); - return NULL; -} - -int PyObject_AsReadBuffer(PyObject *obj, - const void **buffer, - Py_ssize_t *buffer_len) -{ - PyBufferProcs *pb; - void *pp; - Py_ssize_t len; - - if (obj == NULL || buffer == NULL || buffer_len == NULL) { - null_error(); - return -1; - } - pb = obj->ob_type->tp_as_buffer; - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a readable buffer object"); - return -1; - } - if ((*pb->bf_getsegcount)(obj, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "expected a single-segment buffer object"); - return -1; - } - len = (*pb->bf_getreadbuffer)(obj, 0, &pp); - if (len < 0) - return -1; - *buffer = pp; - *buffer_len = len; - return 0; -} - -int PyObject_AsWriteBuffer(PyObject *obj, - void **buffer, - Py_ssize_t *buffer_len) -{ - PyBufferProcs *pb; - void*pp; - Py_ssize_t len; - - if (obj == NULL || buffer == NULL || buffer_len == NULL) { - 
null_error();
-        return -1;
-    }
-    pb = obj->ob_type->tp_as_buffer;
-    if (pb == NULL ||
-        pb->bf_getwritebuffer == NULL ||
-        pb->bf_getsegcount == NULL) {
-        PyErr_SetString(PyExc_TypeError,
-                        "expected a writeable buffer object");
-        return -1;
-    }
-    if ((*pb->bf_getsegcount)(obj, NULL) != 1) {
-        PyErr_SetString(PyExc_TypeError,
-                        "expected a single-segment buffer object");
-        return -1;
-    }
-    len = (*pb->bf_getwritebuffer)(obj,0,&pp);
-    if (len < 0)
-        return -1;
-    *buffer = pp;
-    *buffer_len = len;
-    return 0;
-}
-
-int
-PyObject_CheckReadBuffer(PyObject *obj)
-{
-    PyBufferProcs *pb = obj->ob_type->tp_as_buffer;
-
-    if (pb == NULL ||
-        pb->bf_getreadbuffer == NULL ||
-        pb->bf_getsegcount == NULL ||
-        (*pb->bf_getsegcount)(obj, NULL) != 1)
-        return 0;
-    return 1;
-}
diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c
--- a/pypy/module/cpyext/src/pyerrors.c
+++ b/pypy/module/cpyext/src/pyerrors.c
@@ -4,72 +4,75 @@
 PyObject *
 PyErr_Format(PyObject *exception, const char *format, ...)
{ - va_list vargs; - PyObject* string; + va_list vargs; + PyObject* string; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, format); + va_start(vargs, format); #else - va_start(vargs); + va_start(vargs); #endif - string = PyString_FromFormatV(format, vargs); - PyErr_SetObject(exception, string); - Py_XDECREF(string); - va_end(vargs); - return NULL; + string = PyString_FromFormatV(format, vargs); + PyErr_SetObject(exception, string); + Py_XDECREF(string); + va_end(vargs); + return NULL; } + + PyObject * PyErr_NewException(const char *name, PyObject *base, PyObject *dict) { - char *dot; - PyObject *modulename = NULL; - PyObject *classname = NULL; - PyObject *mydict = NULL; - PyObject *bases = NULL; - PyObject *result = NULL; - dot = strrchr(name, '.'); - if (dot == NULL) { - PyErr_SetString(PyExc_SystemError, - "PyErr_NewException: name must be module.class"); - return NULL; - } - if (base == NULL) - base = PyExc_Exception; - if (dict == NULL) { - dict = mydict = PyDict_New(); - if (dict == NULL) - goto failure; - } - if (PyDict_GetItemString(dict, "__module__") == NULL) { - modulename = PyString_FromStringAndSize(name, - (Py_ssize_t)(dot-name)); - if (modulename == NULL) - goto failure; - if (PyDict_SetItemString(dict, "__module__", modulename) != 0) - goto failure; - } - if (PyTuple_Check(base)) { - bases = base; - /* INCREF as we create a new ref in the else branch */ - Py_INCREF(bases); - } else { - bases = PyTuple_Pack(1, base); - if (bases == NULL) - goto failure; - } - /* Create a real new-style class. 
*/ - result = PyObject_CallFunction((PyObject *)&PyType_Type, "sOO", - dot+1, bases, dict); + char *dot; + PyObject *modulename = NULL; + PyObject *classname = NULL; + PyObject *mydict = NULL; + PyObject *bases = NULL; + PyObject *result = NULL; + dot = strrchr(name, '.'); + if (dot == NULL) { + PyErr_SetString(PyExc_SystemError, + "PyErr_NewException: name must be module.class"); + return NULL; + } + if (base == NULL) + base = PyExc_Exception; + if (dict == NULL) { + dict = mydict = PyDict_New(); + if (dict == NULL) + goto failure; + } + if (PyDict_GetItemString(dict, "__module__") == NULL) { + modulename = PyString_FromStringAndSize(name, + (Py_ssize_t)(dot-name)); + if (modulename == NULL) + goto failure; + if (PyDict_SetItemString(dict, "__module__", modulename) != 0) + goto failure; + } + if (PyTuple_Check(base)) { + bases = base; + /* INCREF as we create a new ref in the else branch */ + Py_INCREF(bases); + } else { + bases = PyTuple_Pack(1, base); + if (bases == NULL) + goto failure; + } + /* Create a real new-style class. 
*/ + result = PyObject_CallFunction((PyObject *)&PyType_Type, "sOO", + dot+1, bases, dict); failure: - Py_XDECREF(bases); - Py_XDECREF(mydict); - Py_XDECREF(classname); - Py_XDECREF(modulename); - return result; + Py_XDECREF(bases); + Py_XDECREF(mydict); + Py_XDECREF(classname); + Py_XDECREF(modulename); + return result; } + /* Create an exception with docstring */ PyObject * PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict) diff --git a/pypy/module/cpyext/src/pysignals.c b/pypy/module/cpyext/src/pysignals.c --- a/pypy/module/cpyext/src/pysignals.c +++ b/pypy/module/cpyext/src/pysignals.c @@ -17,17 +17,34 @@ PyOS_getsig(int sig) { #ifdef SA_RESTART - /* assume sigaction exists */ - struct sigaction context; - if (sigaction(sig, NULL, &context) == -1) - return SIG_ERR; - return context.sa_handler; + /* assume sigaction exists */ + struct sigaction context; + if (sigaction(sig, NULL, &context) == -1) + return SIG_ERR; + return context.sa_handler; #else - PyOS_sighandler_t handler; - handler = signal(sig, SIG_IGN); - if (handler != SIG_ERR) - signal(sig, handler); - return handler; + PyOS_sighandler_t handler; +/* Special signal handling for the secure CRT in Visual Studio 2005 */ +#if defined(_MSC_VER) && _MSC_VER >= 1400 + switch (sig) { + /* Only these signals are valid */ + case SIGINT: + case SIGILL: + case SIGFPE: + case SIGSEGV: + case SIGTERM: + case SIGBREAK: + case SIGABRT: + break; + /* Don't call signal() with other values or it will assert */ + default: + return SIG_ERR; + } +#endif /* _MSC_VER && _MSC_VER >= 1400 */ + handler = signal(sig, SIG_IGN); + if (handler != SIG_ERR) + signal(sig, handler); + return handler; #endif } @@ -35,21 +52,21 @@ PyOS_setsig(int sig, PyOS_sighandler_t handler) { #ifdef SA_RESTART - /* assume sigaction exists */ - struct sigaction context, ocontext; - context.sa_handler = handler; - sigemptyset(&context.sa_mask); - context.sa_flags = 0; - if (sigaction(sig, &context, &ocontext) == 
-1) - return SIG_ERR; - return ocontext.sa_handler; + /* assume sigaction exists */ + struct sigaction context, ocontext; + context.sa_handler = handler; + sigemptyset(&context.sa_mask); + context.sa_flags = 0; + if (sigaction(sig, &context, &ocontext) == -1) + return SIG_ERR; + return ocontext.sa_handler; #else - PyOS_sighandler_t oldhandler; - oldhandler = signal(sig, handler); + PyOS_sighandler_t oldhandler; + oldhandler = signal(sig, handler); #ifndef MS_WINDOWS - /* should check if this exists */ - siginterrupt(sig, 1); + /* should check if this exists */ + siginterrupt(sig, 1); #endif - return oldhandler; + return oldhandler; #endif } diff --git a/pypy/module/cpyext/src/pythonrun.c b/pypy/module/cpyext/src/pythonrun.c --- a/pypy/module/cpyext/src/pythonrun.c +++ b/pypy/module/cpyext/src/pythonrun.c @@ -9,28 +9,28 @@ void Py_FatalError(const char *msg) { - fprintf(stderr, "Fatal Python error: %s\n", msg); - fflush(stderr); /* it helps in Windows debug build */ + fprintf(stderr, "Fatal Python error: %s\n", msg); + fflush(stderr); /* it helps in Windows debug build */ #ifdef MS_WINDOWS - { - size_t len = strlen(msg); - WCHAR* buffer; - size_t i; + { + size_t len = strlen(msg); + WCHAR* buffer; + size_t i; - /* Convert the message to wchar_t. This uses a simple one-to-one - conversion, assuming that the this error message actually uses ASCII - only. If this ceases to be true, we will have to convert. */ - buffer = alloca( (len+1) * (sizeof *buffer)); - for( i=0; i<=len; ++i) - buffer[i] = msg[i]; - OutputDebugStringW(L"Fatal Python error: "); - OutputDebugStringW(buffer); - OutputDebugStringW(L"\n"); - } + /* Convert the message to wchar_t. This uses a simple one-to-one + conversion, assuming that the this error message actually uses ASCII + only. If this ceases to be true, we will have to convert. 
*/
+        buffer = alloca( (len+1) * (sizeof *buffer));
+        for( i=0; i<=len; ++i)
+            buffer[i] = msg[i];
+        OutputDebugStringW(L"Fatal Python error: ");
+        OutputDebugStringW(buffer);
+        OutputDebugStringW(L"\n");
+    }
 #ifdef _DEBUG
-	DebugBreak();
+    DebugBreak();
 #endif
 #endif /* MS_WINDOWS */
-	abort();
+    abort();
 }
diff --git a/pypy/module/cpyext/src/stringobject.c b/pypy/module/cpyext/src/stringobject.c
--- a/pypy/module/cpyext/src/stringobject.c
+++ b/pypy/module/cpyext/src/stringobject.c
@@ -4,246 +4,247 @@
 PyObject *
 PyString_FromFormatV(const char *format, va_list vargs)
 {
-	va_list count;
-	Py_ssize_t n = 0;
-	const char* f;
-	char *s;
-	PyObject* string;
+    va_list count;
+    Py_ssize_t n = 0;
+    const char* f;
+    char *s;
+    PyObject* string;
 #ifdef VA_LIST_IS_ARRAY
-	Py_MEMCPY(count, vargs, sizeof(va_list));
+    Py_MEMCPY(count, vargs, sizeof(va_list));
 #else
 #ifdef __va_copy
-	__va_copy(count, vargs);
+    __va_copy(count, vargs);
 #else
-	count = vargs;
+    count = vargs;
 #endif
 #endif
-	/* step 1: figure out how large a buffer we need */
-	for (f = format; *f; f++) {
-		if (*f == '%') {
+    /* step 1: figure out how large a buffer we need */
+    for (f = format; *f; f++) {
+        if (*f == '%') {
 #ifdef HAVE_LONG_LONG
-			int longlongflag = 0;
+            int longlongflag = 0;
 #endif
-			const char* p = f;
-			while (*++f && *f != '%' && !isalpha(Py_CHARMASK(*f)))
-				;
+            const char* p = f;
+            while (*++f && *f != '%' && !isalpha(Py_CHARMASK(*f)))
+                ;
 
-			/* skip the 'l' or 'z' in {%ld, %zd, %lu, %zu} since
-			 * they don't affect the amount of space we reserve.
+ */ + if (*f == 'l') { + if (f[1] == 'd' || f[1] == 'u') { + ++f; + } #ifdef HAVE_LONG_LONG - else if (f[1] == 'l' && - (f[2] == 'd' || f[2] == 'u')) { - longlongflag = 1; - f += 2; - } + else if (f[1] == 'l' && + (f[2] == 'd' || f[2] == 'u')) { + longlongflag = 1; + f += 2; + } #endif - } - else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { - ++f; - } + } + else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { + ++f; + } - switch (*f) { - case 'c': - (void)va_arg(count, int); - /* fall through... */ - case '%': - n++; - break; - case 'd': case 'u': case 'i': case 'x': - (void) va_arg(count, int); + switch (*f) { + case 'c': + (void)va_arg(count, int); + /* fall through... */ + case '%': + n++; + break; + case 'd': case 'u': case 'i': case 'x': + (void) va_arg(count, int); #ifdef HAVE_LONG_LONG - /* Need at most - ceil(log10(256)*SIZEOF_LONG_LONG) digits, - plus 1 for the sign. 53/22 is an upper - bound for log10(256). */ - if (longlongflag) - n += 2 + (SIZEOF_LONG_LONG*53-1) / 22; - else + /* Need at most + ceil(log10(256)*SIZEOF_LONG_LONG) digits, + plus 1 for the sign. 53/22 is an upper + bound for log10(256). */ + if (longlongflag) + n += 2 + (SIZEOF_LONG_LONG*53-1) / 22; + else #endif - /* 20 bytes is enough to hold a 64-bit - integer. Decimal takes the most - space. This isn't enough for - octal. */ - n += 20; + /* 20 bytes is enough to hold a 64-bit + integer. Decimal takes the most + space. This isn't enough for + octal. */ + n += 20; - break; - case 's': - s = va_arg(count, char*); - n += strlen(s); - break; - case 'p': - (void) va_arg(count, int); - /* maximum 64-bit pointer representation: - * 0xffffffffffffffff - * so 19 characters is enough. - * XXX I count 18 -- what's the extra for? - */ - n += 19; - break; - default: - /* if we stumble upon an unknown - formatting code, copy the rest of - the format string to the output - string. 
(we cannot just skip the - code, since there's no way to know - what's in the argument list) */ - n += strlen(p); - goto expand; - } - } else - n++; - } + break; + case 's': + s = va_arg(count, char*); + n += strlen(s); + break; + case 'p': + (void) va_arg(count, int); + /* maximum 64-bit pointer representation: + * 0xffffffffffffffff + * so 19 characters is enough. + * XXX I count 18 -- what's the extra for? + */ + n += 19; + break; + default: + /* if we stumble upon an unknown + formatting code, copy the rest of + the format string to the output + string. (we cannot just skip the + code, since there's no way to know + what's in the argument list) */ + n += strlen(p); + goto expand; + } + } else + n++; + } expand: - /* step 2: fill the buffer */ - /* Since we've analyzed how much space we need for the worst case, - use sprintf directly instead of the slower PyOS_snprintf. */ - string = PyString_FromStringAndSize(NULL, n); - if (!string) - return NULL; + /* step 2: fill the buffer */ + /* Since we've analyzed how much space we need for the worst case, + use sprintf directly instead of the slower PyOS_snprintf. */ + string = PyString_FromStringAndSize(NULL, n); + if (!string) + return NULL; - s = PyString_AsString(string); + s = PyString_AsString(string); - for (f = format; *f; f++) { - if (*f == '%') { - const char* p = f++; - Py_ssize_t i; - int longflag = 0; + for (f = format; *f; f++) { + if (*f == '%') { + const char* p = f++; + Py_ssize_t i; + int longflag = 0; #ifdef HAVE_LONG_LONG - int longlongflag = 0; + int longlongflag = 0; #endif - int size_tflag = 0; - /* parse the width.precision part (we're only - interested in the precision value, if any) */ - n = 0; - while (isdigit(Py_CHARMASK(*f))) - n = (n*10) + *f++ - '0'; - if (*f == '.') { - f++; - n = 0; - while (isdigit(Py_CHARMASK(*f))) - n = (n*10) + *f++ - '0'; - } - while (*f && *f != '%' && !isalpha(Py_CHARMASK(*f))) - f++; - /* Handle %ld, %lu, %lld and %llu. 
*/ - if (*f == 'l') { - if (f[1] == 'd' || f[1] == 'u') { - longflag = 1; - ++f; - } + int size_tflag = 0; + /* parse the width.precision part (we're only + interested in the precision value, if any) */ + n = 0; + while (isdigit(Py_CHARMASK(*f))) + n = (n*10) + *f++ - '0'; + if (*f == '.') { + f++; + n = 0; + while (isdigit(Py_CHARMASK(*f))) + n = (n*10) + *f++ - '0'; + } + while (*f && *f != '%' && !isalpha(Py_CHARMASK(*f))) + f++; + /* Handle %ld, %lu, %lld and %llu. */ + if (*f == 'l') { + if (f[1] == 'd' || f[1] == 'u') { + longflag = 1; + ++f; + } #ifdef HAVE_LONG_LONG - else if (f[1] == 'l' && - (f[2] == 'd' || f[2] == 'u')) { - longlongflag = 1; - f += 2; - } + else if (f[1] == 'l' && + (f[2] == 'd' || f[2] == 'u')) { + longlongflag = 1; + f += 2; + } #endif - } - /* handle the size_t flag. */ - else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { - size_tflag = 1; - ++f; - } + } + /* handle the size_t flag. */ + else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { + size_tflag = 1; + ++f; + } - switch (*f) { - case 'c': - *s++ = va_arg(vargs, int); - break; - case 'd': - if (longflag) - sprintf(s, "%ld", va_arg(vargs, long)); + switch (*f) { + case 'c': + *s++ = va_arg(vargs, int); + break; + case 'd': + if (longflag) + sprintf(s, "%ld", va_arg(vargs, long)); #ifdef HAVE_LONG_LONG - else if (longlongflag) - sprintf(s, "%" PY_FORMAT_LONG_LONG "d", - va_arg(vargs, PY_LONG_LONG)); + else if (longlongflag) + sprintf(s, "%" PY_FORMAT_LONG_LONG "d", + va_arg(vargs, PY_LONG_LONG)); #endif - else if (size_tflag) - sprintf(s, "%" PY_FORMAT_SIZE_T "d", - va_arg(vargs, Py_ssize_t)); - else - sprintf(s, "%d", va_arg(vargs, int)); - s += strlen(s); - break; - case 'u': - if (longflag) - sprintf(s, "%lu", - va_arg(vargs, unsigned long)); + else if (size_tflag) + sprintf(s, "%" PY_FORMAT_SIZE_T "d", + va_arg(vargs, Py_ssize_t)); + else + sprintf(s, "%d", va_arg(vargs, int)); + s += strlen(s); + break; + case 'u': + if (longflag) + sprintf(s, "%lu", + va_arg(vargs, 
unsigned long)); #ifdef HAVE_LONG_LONG - else if (longlongflag) - sprintf(s, "%" PY_FORMAT_LONG_LONG "u", - va_arg(vargs, PY_LONG_LONG)); + else if (longlongflag) + sprintf(s, "%" PY_FORMAT_LONG_LONG "u", + va_arg(vargs, PY_LONG_LONG)); #endif - else if (size_tflag) - sprintf(s, "%" PY_FORMAT_SIZE_T "u", - va_arg(vargs, size_t)); - else - sprintf(s, "%u", - va_arg(vargs, unsigned int)); - s += strlen(s); - break; - case 'i': - sprintf(s, "%i", va_arg(vargs, int)); - s += strlen(s); - break; - case 'x': - sprintf(s, "%x", va_arg(vargs, int)); - s += strlen(s); - break; - case 's': - p = va_arg(vargs, char*); - i = strlen(p); - if (n > 0 && i > n) - i = n; - Py_MEMCPY(s, p, i); - s += i; - break; - case 'p': - sprintf(s, "%p", va_arg(vargs, void*)); - /* %p is ill-defined: ensure leading 0x. */ - if (s[1] == 'X') - s[1] = 'x'; - else if (s[1] != 'x') { - memmove(s+2, s, strlen(s)+1); - s[0] = '0'; - s[1] = 'x'; - } - s += strlen(s); - break; - case '%': - *s++ = '%'; - break; - default: - strcpy(s, p); - s += strlen(s); - goto end; - } - } else - *s++ = *f; - } + else if (size_tflag) + sprintf(s, "%" PY_FORMAT_SIZE_T "u", + va_arg(vargs, size_t)); + else + sprintf(s, "%u", + va_arg(vargs, unsigned int)); + s += strlen(s); + break; + case 'i': + sprintf(s, "%i", va_arg(vargs, int)); + s += strlen(s); + break; + case 'x': + sprintf(s, "%x", va_arg(vargs, int)); + s += strlen(s); + break; + case 's': + p = va_arg(vargs, char*); + i = strlen(p); + if (n > 0 && i > n) + i = n; + Py_MEMCPY(s, p, i); + s += i; + break; + case 'p': + sprintf(s, "%p", va_arg(vargs, void*)); + /* %p is ill-defined: ensure leading 0x. 
*/ + if (s[1] == 'X') + s[1] = 'x'; + else if (s[1] != 'x') { + memmove(s+2, s, strlen(s)+1); + s[0] = '0'; + s[1] = 'x'; + } + s += strlen(s); + break; + case '%': + *s++ = '%'; + break; + default: + strcpy(s, p); + s += strlen(s); + goto end; + } + } else + *s++ = *f; + } end: - _PyString_Resize(&string, s - PyString_AS_STRING(string)); - return string; + if (_PyString_Resize(&string, s - PyString_AS_STRING(string))) + return NULL; + return string; } PyObject * PyString_FromFormat(const char *format, ...) { - PyObject* ret; - va_list vargs; + PyObject* ret; + va_list vargs; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, format); + va_start(vargs, format); #else - va_start(vargs); + va_start(vargs); #endif - ret = PyString_FromFormatV(format, vargs); - va_end(vargs); - return ret; + ret = PyString_FromFormatV(format, vargs); + va_end(vargs); + return ret; } diff --git a/pypy/module/cpyext/src/structseq.c b/pypy/module/cpyext/src/structseq.c --- a/pypy/module/cpyext/src/structseq.c +++ b/pypy/module/cpyext/src/structseq.c @@ -175,32 +175,33 @@ if (min_len != max_len) { if (len < min_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes an at least %zd-sequence (%zd-sequence given)", - type->tp_name, min_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes an at least %zd-sequence (%zd-sequence given)", + type->tp_name, min_len, len); + Py_DECREF(arg); + return NULL; } if (len > max_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes an at most %zd-sequence (%zd-sequence given)", - type->tp_name, max_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes an at most %zd-sequence (%zd-sequence given)", + type->tp_name, max_len, len); + Py_DECREF(arg); + return NULL; } } else { if (len != min_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes a %zd-sequence (%zd-sequence given)", - type->tp_name, min_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes a %zd-sequence (%zd-sequence given)", + type->tp_name, min_len, len); + 
Py_DECREF(arg); + return NULL; } } res = (PyStructSequence*) PyStructSequence_New(type); if (res == NULL) { + Py_DECREF(arg); return NULL; } for (i = 0; i < len; ++i) { diff --git a/pypy/module/cpyext/src/sysmodule.c b/pypy/module/cpyext/src/sysmodule.c --- a/pypy/module/cpyext/src/sysmodule.c +++ b/pypy/module/cpyext/src/sysmodule.c @@ -100,4 +100,3 @@ sys_write("stderr", stderr, format, va); va_end(va); } - diff --git a/pypy/module/cpyext/src/varargwrapper.c b/pypy/module/cpyext/src/varargwrapper.c --- a/pypy/module/cpyext/src/varargwrapper.c +++ b/pypy/module/cpyext/src/varargwrapper.c @@ -1,21 +1,25 @@ #include #include -PyObject * PyTuple_Pack(Py_ssize_t size, ...) +PyObject * +PyTuple_Pack(Py_ssize_t n, ...) { - va_list ap; - PyObject *cur, *tuple; - int i; + Py_ssize_t i; + PyObject *o; + PyObject *result; + va_list vargs; - tuple = PyTuple_New(size); - va_start(ap, size); - for (i = 0; i < size; i++) { - cur = va_arg(ap, PyObject*); - Py_INCREF(cur); - if (PyTuple_SetItem(tuple, i, cur) < 0) + va_start(vargs, n); + result = PyTuple_New(n); + if (result == NULL) + return NULL; + for (i = 0; i < n; i++) { + o = va_arg(vargs, PyObject *); + Py_INCREF(o); + if (PyTuple_SetItem(result, i, o) < 0) return NULL; } - va_end(ap); - return tuple; + va_end(vargs); + return result; } diff --git a/pypy/module/cpyext/structmember.py b/pypy/module/cpyext/structmember.py --- a/pypy/module/cpyext/structmember.py +++ b/pypy/module/cpyext/structmember.py @@ -10,7 +10,7 @@ PyString_FromString, PyString_FromStringAndSize) from pypy.module.cpyext.floatobject import PyFloat_AsDouble from pypy.module.cpyext.longobject import ( - PyLong_AsLongLong, PyLong_AsUnsignedLongLong) + PyLong_AsLongLong, PyLong_AsUnsignedLongLong, PyLong_AsSsize_t) from pypy.module.cpyext.typeobjectdefs import PyMemberDef from pypy.rlib.unroll import unrolling_iterable @@ -28,6 +28,7 @@ (T_DOUBLE, rffi.DOUBLE, PyFloat_AsDouble), (T_LONGLONG, rffi.LONGLONG, PyLong_AsLongLong), (T_ULONGLONG, rffi.ULONGLONG, 
PyLong_AsUnsignedLongLong), + (T_PYSSIZET, rffi.SSIZE_T, PyLong_AsSsize_t), ]) diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -18,6 +18,7 @@ T_OBJECT_EX = 16 T_LONGLONG = 17 T_ULONGLONG = 18 +T_PYSSIZET = 19 READONLY = RO = 1 READ_RESTRICTED = 2 diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1405,13 +1405,6 @@ """ raise NotImplementedError - at cpython_api([Py_ssize_t], PyObject) -def PyLong_FromSsize_t(space, v): - """Return a new PyLongObject object from a C Py_ssize_t, or - NULL on failure. - """ - raise NotImplementedError - @cpython_api([rffi.SIZE_T], PyObject) def PyLong_FromSize_t(space, v): """Return a new PyLongObject object from a C size_t, or @@ -1431,14 +1424,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([PyObject], Py_ssize_t, error=-1) -def PyLong_AsSsize_t(space, pylong): - """Return a C Py_ssize_t representation of the contents of pylong. If - pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised - and -1 will be returned. - """ - raise NotImplementedError - @cpython_api([PyObject, rffi.CCHARP], rffi.INT_real, error=-1) def PyMapping_DelItemString(space, o, key): """Remove the mapping for object key from the object o. Return -1 on @@ -1980,35 +1965,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) -def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): - """Decode length bytes from a UTF-32 encoded buffer string and return the - corresponding Unicode object. errors (if non-NULL) defines the error - handling. It defaults to "strict". 
- - If byteorder is non-NULL, the decoder starts decoding using the given byte - order: - - *byteorder == -1: little endian - *byteorder == 0: native order - *byteorder == 1: big endian - - If *byteorder is zero, and the first four bytes of the input data are a - byte order mark (BOM), the decoder switches to this byte order and the BOM is - not copied into the resulting Unicode string. If *byteorder is -1 or - 1, any byte order mark is copied to the output. - - After completion, *byteorder is set to the current byte order at the end - of input data. - - In a narrow build codepoints outside the BMP will be decoded as surrogate pairs. - - If byteorder is NULL, the codec starts in native order mode. - - Return NULL if an exception was raised by the codec. - """ - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP, Py_ssize_t], PyObject) def PyUnicode_DecodeUTF32Stateful(space, s, size, errors, byteorder, consumed): """If consumed is NULL, behave like PyUnicode_DecodeUTF32(). 
If diff --git a/pypy/module/cpyext/test/_sre.c b/pypy/module/cpyext/test/_sre.c --- a/pypy/module/cpyext/test/_sre.c +++ b/pypy/module/cpyext/test/_sre.c @@ -81,9 +81,6 @@ #define PyObject_DEL(op) PyMem_DEL((op)) #endif -#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) -#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) - /* -------------------------------------------------------------------- */ #if defined(_MSC_VER) @@ -1689,7 +1686,7 @@ if (PyUnicode_Check(string)) { /* unicode strings doesn't always support the buffer interface */ ptr = (void*) PyUnicode_AS_DATA(string); - bytes = PyUnicode_GET_DATA_SIZE(string); + /* bytes = PyUnicode_GET_DATA_SIZE(string); */ size = PyUnicode_GET_SIZE(string); charsize = sizeof(Py_UNICODE); @@ -2601,46 +2598,22 @@ {NULL, NULL} }; -static PyObject* -pattern_getattr(PatternObject* self, char* name) -{ - PyObject* res; - - res = Py_FindMethod(pattern_methods, (PyObject*) self, name); - - if (res) - return res; - - PyErr_Clear(); - - /* attributes */ - if (!strcmp(name, "pattern")) { - Py_INCREF(self->pattern); - return self->pattern; - } - - if (!strcmp(name, "flags")) - return Py_BuildValue("i", self->flags); - - if (!strcmp(name, "groups")) - return Py_BuildValue("i", self->groups); - - if (!strcmp(name, "groupindex") && self->groupindex) { - Py_INCREF(self->groupindex); - return self->groupindex; - } - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; -} +#define PAT_OFF(x) offsetof(PatternObject, x) +static PyMemberDef pattern_members[] = { + {"pattern", T_OBJECT, PAT_OFF(pattern), READONLY}, + {"flags", T_INT, PAT_OFF(flags), READONLY}, + {"groups", T_PYSSIZET, PAT_OFF(groups), READONLY}, + {"groupindex", T_OBJECT, PAT_OFF(groupindex), READONLY}, + {NULL} /* Sentinel */ +}; statichere PyTypeObject Pattern_Type = { PyObject_HEAD_INIT(NULL) 0, "_" SRE_MODULE ".SRE_Pattern", sizeof(PatternObject), sizeof(SRE_CODE), (destructor)pattern_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)pattern_getattr, 
/*tp_getattr*/ + 0, /* tp_print */ + 0, /* tp_getattrn */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ @@ -2653,12 +2626,16 @@ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ - Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ pattern_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(PatternObject, weakreflist), /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + pattern_methods, /* tp_methods */ + pattern_members, /* tp_members */ }; static int _validate(PatternObject *self); /* Forward */ @@ -2767,7 +2744,7 @@ #if defined(VVERBOSE) #define VTRACE(v) printf v #else -#define VTRACE(v) +#define VTRACE(v) do {} while(0) /* do nothing */ #endif /* Report failure */ @@ -2970,13 +2947,13 @@ <1=skip> <2=flags> <3=min> <4=max>; If SRE_INFO_PREFIX or SRE_INFO_CHARSET is in the flags, more follows. */ - SRE_CODE flags, min, max, i; + SRE_CODE flags, i; SRE_CODE *newcode; GET_SKIP; newcode = code+skip-1; GET_ARG; flags = arg; - GET_ARG; min = arg; - GET_ARG; max = arg; + GET_ARG; /* min */ + GET_ARG; /* max */ /* Check that only valid flags are present */ if ((flags & ~(SRE_INFO_PREFIX | SRE_INFO_LITERAL | @@ -2992,9 +2969,9 @@ FAIL; /* Validate the prefix */ if (flags & SRE_INFO_PREFIX) { - SRE_CODE prefix_len, prefix_skip; + SRE_CODE prefix_len; GET_ARG; prefix_len = arg; - GET_ARG; prefix_skip = arg; + GET_ARG; /* prefix skip */ /* Here comes the prefix string */ if (code+prefix_len < code || code+prefix_len > newcode) FAIL; @@ -3565,7 +3542,7 @@ #endif } -static PyMethodDef match_methods[] = { +static struct PyMethodDef match_methods[] = { {"group", (PyCFunction) match_group, METH_VARARGS}, {"start", (PyCFunction) match_start, METH_VARARGS}, {"end", (PyCFunction) match_end, METH_VARARGS}, @@ -3578,80 +3555,90 @@ {NULL, NULL} }; -static PyObject* -match_getattr(MatchObject* self, char* name) +static PyObject * +match_lastindex_get(MatchObject *self) { - 
PyObject* res; - - res = Py_FindMethod(match_methods, (PyObject*) self, name); - if (res) - return res; - - PyErr_Clear(); - - if (!strcmp(name, "lastindex")) { - if (self->lastindex >= 0) - return Py_BuildValue("i", self->lastindex); - Py_INCREF(Py_None); - return Py_None; + if (self->lastindex >= 0) + return Py_BuildValue("i", self->lastindex); + Py_INCREF(Py_None); + return Py_None; +} + +static PyObject * +match_lastgroup_get(MatchObject *self) +{ + if (self->pattern->indexgroup && self->lastindex >= 0) { + PyObject* result = PySequence_GetItem( + self->pattern->indexgroup, self->lastindex + ); + if (result) + return result; + PyErr_Clear(); } - - if (!strcmp(name, "lastgroup")) { - if (self->pattern->indexgroup && self->lastindex >= 0) { - PyObject* result = PySequence_GetItem( - self->pattern->indexgroup, self->lastindex - ); - if (result) - return result; - PyErr_Clear(); - } - Py_INCREF(Py_None); - return Py_None; - } - - if (!strcmp(name, "string")) { - if (self->string) { - Py_INCREF(self->string); - return self->string; - } else { - Py_INCREF(Py_None); - return Py_None; - } - } - - if (!strcmp(name, "regs")) { - if (self->regs) { - Py_INCREF(self->regs); - return self->regs; - } else - return match_regs(self); - } - - if (!strcmp(name, "re")) { - Py_INCREF(self->pattern); - return (PyObject*) self->pattern; - } - - if (!strcmp(name, "pos")) - return Py_BuildValue("i", self->pos); - - if (!strcmp(name, "endpos")) - return Py_BuildValue("i", self->endpos); - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; + Py_INCREF(Py_None); + return Py_None; } +static PyObject * +match_regs_get(MatchObject *self) +{ + if (self->regs) { + Py_INCREF(self->regs); + return self->regs; + } else + return match_regs(self); +} + +static PyGetSetDef match_getset[] = { + {"lastindex", (getter)match_lastindex_get, (setter)NULL}, + {"lastgroup", (getter)match_lastgroup_get, (setter)NULL}, + {"regs", (getter)match_regs_get, (setter)NULL}, + {NULL} +}; + +#define 
MATCH_OFF(x) offsetof(MatchObject, x) +static PyMemberDef match_members[] = { + {"string", T_OBJECT, MATCH_OFF(string), READONLY}, + {"re", T_OBJECT, MATCH_OFF(pattern), READONLY}, + {"pos", T_PYSSIZET, MATCH_OFF(pos), READONLY}, + {"endpos", T_PYSSIZET, MATCH_OFF(endpos), READONLY}, + {NULL} +}; + + /* FIXME: implement setattr("string", None) as a special case (to detach the associated string, if any */ -statichere PyTypeObject Match_Type = { - PyObject_HEAD_INIT(NULL) - 0, "_" SRE_MODULE ".SRE_Match", +static PyTypeObject Match_Type = { + PyVarObject_HEAD_INIT(NULL, 0) + "_" SRE_MODULE ".SRE_Match", sizeof(MatchObject), sizeof(Py_ssize_t), - (destructor)match_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)match_getattr /*tp_getattr*/ + (destructor)match_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + match_methods, /* tp_methods */ + match_members, /* tp_members */ + match_getset, /* tp_getset */ }; static PyObject* @@ -3800,34 +3787,42 @@ {NULL, NULL} }; -static PyObject* -scanner_getattr(ScannerObject* self, char* name) -{ - PyObject* res; - - res = Py_FindMethod(scanner_methods, (PyObject*) self, name); - if (res) - return res; - - PyErr_Clear(); - - /* attributes */ - if (!strcmp(name, "pattern")) { - Py_INCREF(self->pattern); - return self->pattern; - } - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; -} +#define SCAN_OFF(x) offsetof(ScannerObject, x) +static PyMemberDef scanner_members[] = { + {"pattern", T_OBJECT, SCAN_OFF(pattern), READONLY}, + {NULL} /* Sentinel */ +}; 
statichere PyTypeObject Scanner_Type = { PyObject_HEAD_INIT(NULL) 0, "_" SRE_MODULE ".SRE_Scanner", sizeof(ScannerObject), 0, (destructor)scanner_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)scanner_getattr, /*tp_getattr*/ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_reserved */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + scanner_methods, /* tp_methods */ + scanner_members, /* tp_members */ + 0, /* tp_getset */ }; static PyObject* @@ -3879,8 +3874,9 @@ PyObject* x; /* Patch object types */ - Pattern_Type.ob_type = Match_Type.ob_type = - Scanner_Type.ob_type = &PyType_Type; + if (PyType_Ready(&Pattern_Type) || PyType_Ready(&Match_Type) || + PyType_Ready(&Scanner_Type)) + return; m = Py_InitModule("_" SRE_MODULE, _functions); if (m == NULL) diff --git a/pypy/module/cpyext/test/array.c b/pypy/module/cpyext/test/array.c --- a/pypy/module/cpyext/test/array.c +++ b/pypy/module/cpyext/test/array.c @@ -11,13 +11,10 @@ #include #else /* !STDC_HEADERS */ #ifdef HAVE_SYS_TYPES_H -#include /* For size_t */ +#include /* For size_t */ #endif /* HAVE_SYS_TYPES_H */ #endif /* !STDC_HEADERS */ -#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) -#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) - struct arrayobject; /* Forward */ /* All possible arraydescr values are defined in the vector "descriptors" @@ -25,18 +22,18 @@ * functions aren't visible yet. 
*/ struct arraydescr { - int typecode; - int itemsize; - PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); - int (*setitem)(struct arrayobject *, Py_ssize_t, PyObject *); + int typecode; + int itemsize; + PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); + int (*setitem)(struct arrayobject *, Py_ssize_t, PyObject *); }; typedef struct arrayobject { - PyObject_VAR_HEAD - char *ob_item; - Py_ssize_t allocated; - struct arraydescr *ob_descr; - PyObject *weakreflist; /* List of weak references */ + PyObject_VAR_HEAD + char *ob_item; + Py_ssize_t allocated; + struct arraydescr *ob_descr; + PyObject *weakreflist; /* List of weak references */ } arrayobject; static PyTypeObject Arraytype; @@ -47,49 +44,49 @@ static int array_resize(arrayobject *self, Py_ssize_t newsize) { - char *items; - size_t _new_size; + char *items; + size_t _new_size; - /* Bypass realloc() when a previous overallocation is large enough - to accommodate the newsize. If the newsize is 16 smaller than the - current size, then proceed with the realloc() to shrink the list. - */ + /* Bypass realloc() when a previous overallocation is large enough + to accommodate the newsize. If the newsize is 16 smaller than the + current size, then proceed with the realloc() to shrink the list. + */ - if (self->allocated >= newsize && - Py_SIZE(self) < newsize + 16 && - self->ob_item != NULL) { - Py_SIZE(self) = newsize; - return 0; - } + if (self->allocated >= newsize && + Py_SIZE(self) < newsize + 16 && + self->ob_item != NULL) { + Py_SIZE(self) = newsize; + return 0; + } - /* This over-allocates proportional to the array size, making room - * for additional growth. The over-allocation is mild, but is - * enough to give linear-time amortized behavior over a long - * sequence of appends() in the presence of a poorly-performing - * system realloc(). - * The growth pattern is: 0, 4, 8, 16, 25, 34, 46, 56, 67, 79, ... 
- * Note, the pattern starts out the same as for lists but then - * grows at a smaller rate so that larger arrays only overallocate - * by about 1/16th -- this is done because arrays are presumed to be more - * memory critical. - */ + /* This over-allocates proportional to the array size, making room + * for additional growth. The over-allocation is mild, but is + * enough to give linear-time amortized behavior over a long + * sequence of appends() in the presence of a poorly-performing + * system realloc(). + * The growth pattern is: 0, 4, 8, 16, 25, 34, 46, 56, 67, 79, ... + * Note, the pattern starts out the same as for lists but then + * grows at a smaller rate so that larger arrays only overallocate + * by about 1/16th -- this is done because arrays are presumed to be more + * memory critical. + */ - _new_size = (newsize >> 4) + (Py_SIZE(self) < 8 ? 3 : 7) + newsize; - items = self->ob_item; - /* XXX The following multiplication and division does not optimize away - like it does for lists since the size is not known at compile time */ - if (_new_size <= ((~(size_t)0) / self->ob_descr->itemsize)) - PyMem_RESIZE(items, char, (_new_size * self->ob_descr->itemsize)); - else - items = NULL; - if (items == NULL) { - PyErr_NoMemory(); - return -1; - } - self->ob_item = items; - Py_SIZE(self) = newsize; - self->allocated = _new_size; - return 0; + _new_size = (newsize >> 4) + (Py_SIZE(self) < 8 ? 
3 : 7) + newsize; + items = self->ob_item; + /* XXX The following multiplication and division does not optimize away + like it does for lists since the size is not known at compile time */ + if (_new_size <= ((~(size_t)0) / self->ob_descr->itemsize)) + PyMem_RESIZE(items, char, (_new_size * self->ob_descr->itemsize)); + else + items = NULL; + if (items == NULL) { + PyErr_NoMemory(); + return -1; + } + self->ob_item = items; + Py_SIZE(self) = newsize; + self->allocated = _new_size; + return 0; } /**************************************************************************** @@ -107,308 +104,308 @@ static PyObject * c_getitem(arrayobject *ap, Py_ssize_t i) { - return PyString_FromStringAndSize(&((char *)ap->ob_item)[i], 1); + return PyString_FromStringAndSize(&((char *)ap->ob_item)[i], 1); } static int c_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - char x; - if (!PyArg_Parse(v, "c;array item must be char", &x)) - return -1; - if (i >= 0) - ((char *)ap->ob_item)[i] = x; - return 0; + char x; + if (!PyArg_Parse(v, "c;array item must be char", &x)) + return -1; + if (i >= 0) + ((char *)ap->ob_item)[i] = x; + return 0; } static PyObject * b_getitem(arrayobject *ap, Py_ssize_t i) { - long x = ((char *)ap->ob_item)[i]; - if (x >= 128) - x -= 256; - return PyInt_FromLong(x); + long x = ((char *)ap->ob_item)[i]; + if (x >= 128) + x -= 256; + return PyInt_FromLong(x); } static int b_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - short x; - /* PyArg_Parse's 'b' formatter is for an unsigned char, therefore - must use the next size up that is signed ('h') and manually do - the overflow checking */ - if (!PyArg_Parse(v, "h;array item must be integer", &x)) - return -1; - else if (x < -128) { - PyErr_SetString(PyExc_OverflowError, - "signed char is less than minimum"); - return -1; - } - else if (x > 127) { - PyErr_SetString(PyExc_OverflowError, - "signed char is greater than maximum"); - return -1; - } - if (i >= 0) - ((char *)ap->ob_item)[i] = (char)x; - return 
0; + short x; + /* PyArg_Parse's 'b' formatter is for an unsigned char, therefore + must use the next size up that is signed ('h') and manually do + the overflow checking */ + if (!PyArg_Parse(v, "h;array item must be integer", &x)) + return -1; + else if (x < -128) { + PyErr_SetString(PyExc_OverflowError, + "signed char is less than minimum"); + return -1; + } + else if (x > 127) { + PyErr_SetString(PyExc_OverflowError, + "signed char is greater than maximum"); + return -1; + } + if (i >= 0) + ((char *)ap->ob_item)[i] = (char)x; + return 0; } static PyObject * BB_getitem(arrayobject *ap, Py_ssize_t i) { - long x = ((unsigned char *)ap->ob_item)[i]; - return PyInt_FromLong(x); + long x = ((unsigned char *)ap->ob_item)[i]; + return PyInt_FromLong(x); } static int BB_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned char x; - /* 'B' == unsigned char, maps to PyArg_Parse's 'b' formatter */ - if (!PyArg_Parse(v, "b;array item must be integer", &x)) - return -1; - if (i >= 0) - ((char *)ap->ob_item)[i] = x; - return 0; + unsigned char x; + /* 'B' == unsigned char, maps to PyArg_Parse's 'b' formatter */ + if (!PyArg_Parse(v, "b;array item must be integer", &x)) + return -1; + if (i >= 0) + ((char *)ap->ob_item)[i] = x; + return 0; } #ifdef Py_USING_UNICODE static PyObject * u_getitem(arrayobject *ap, Py_ssize_t i) { - return PyUnicode_FromUnicode(&((Py_UNICODE *) ap->ob_item)[i], 1); + return PyUnicode_FromUnicode(&((Py_UNICODE *) ap->ob_item)[i], 1); } static int u_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - Py_UNICODE *p; - Py_ssize_t len; + Py_UNICODE *p; + Py_ssize_t len; - if (!PyArg_Parse(v, "u#;array item must be unicode character", &p, &len)) - return -1; - if (len != 1) { - PyErr_SetString(PyExc_TypeError, - "array item must be unicode character"); - return -1; - } - if (i >= 0) - ((Py_UNICODE *)ap->ob_item)[i] = p[0]; - return 0; + if (!PyArg_Parse(v, "u#;array item must be unicode character", &p, &len)) + return -1; + if (len != 1) { 
+ PyErr_SetString(PyExc_TypeError, + "array item must be unicode character"); + return -1; + } + if (i >= 0) + ((Py_UNICODE *)ap->ob_item)[i] = p[0]; + return 0; } #endif static PyObject * h_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((short *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((short *)ap->ob_item)[i]); } static int h_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - short x; - /* 'h' == signed short, maps to PyArg_Parse's 'h' formatter */ - if (!PyArg_Parse(v, "h;array item must be integer", &x)) - return -1; - if (i >= 0) - ((short *)ap->ob_item)[i] = x; - return 0; + short x; + /* 'h' == signed short, maps to PyArg_Parse's 'h' formatter */ + if (!PyArg_Parse(v, "h;array item must be integer", &x)) + return -1; + if (i >= 0) + ((short *)ap->ob_item)[i] = x; + return 0; } static PyObject * HH_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((unsigned short *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((unsigned short *)ap->ob_item)[i]); } static int HH_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - int x; - /* PyArg_Parse's 'h' formatter is for a signed short, therefore - must use the next size up and manually do the overflow checking */ - if (!PyArg_Parse(v, "i;array item must be integer", &x)) - return -1; - else if (x < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned short is less than minimum"); - return -1; - } - else if (x > USHRT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned short is greater than maximum"); - return -1; - } - if (i >= 0) - ((short *)ap->ob_item)[i] = (short)x; - return 0; + int x; + /* PyArg_Parse's 'h' formatter is for a signed short, therefore + must use the next size up and manually do the overflow checking */ + if (!PyArg_Parse(v, "i;array item must be integer", &x)) + return -1; + else if (x < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned short is less than minimum"); + return -1; + } + else if (x > USHRT_MAX) { + 
PyErr_SetString(PyExc_OverflowError, + "unsigned short is greater than maximum"); + return -1; + } + if (i >= 0) + ((short *)ap->ob_item)[i] = (short)x; + return 0; } static PyObject * i_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((int *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((int *)ap->ob_item)[i]); } static int i_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - int x; - /* 'i' == signed int, maps to PyArg_Parse's 'i' formatter */ - if (!PyArg_Parse(v, "i;array item must be integer", &x)) - return -1; - if (i >= 0) - ((int *)ap->ob_item)[i] = x; - return 0; + int x; + /* 'i' == signed int, maps to PyArg_Parse's 'i' formatter */ + if (!PyArg_Parse(v, "i;array item must be integer", &x)) + return -1; + if (i >= 0) + ((int *)ap->ob_item)[i] = x; + return 0; } static PyObject * II_getitem(arrayobject *ap, Py_ssize_t i) { - return PyLong_FromUnsignedLong( - (unsigned long) ((unsigned int *)ap->ob_item)[i]); + return PyLong_FromUnsignedLong( + (unsigned long) ((unsigned int *)ap->ob_item)[i]); } static int II_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned long x; - if (PyLong_Check(v)) { - x = PyLong_AsUnsignedLong(v); - if (x == (unsigned long) -1 && PyErr_Occurred()) - return -1; - } - else { - long y; - if (!PyArg_Parse(v, "l;array item must be integer", &y)) - return -1; - if (y < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned int is less than minimum"); - return -1; - } - x = (unsigned long)y; + unsigned long x; + if (PyLong_Check(v)) { + x = PyLong_AsUnsignedLong(v); + if (x == (unsigned long) -1 && PyErr_Occurred()) + return -1; + } + else { + long y; + if (!PyArg_Parse(v, "l;array item must be integer", &y)) + return -1; + if (y < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned int is less than minimum"); + return -1; + } + x = (unsigned long)y; - } - if (x > UINT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned int is greater than maximum"); - return -1; - } + } + if (x > 
UINT_MAX) { + PyErr_SetString(PyExc_OverflowError, + "unsigned int is greater than maximum"); + return -1; + } - if (i >= 0) - ((unsigned int *)ap->ob_item)[i] = (unsigned int)x; - return 0; + if (i >= 0) + ((unsigned int *)ap->ob_item)[i] = (unsigned int)x; + return 0; } static PyObject * l_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong(((long *)ap->ob_item)[i]); + return PyInt_FromLong(((long *)ap->ob_item)[i]); } static int l_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - long x; - if (!PyArg_Parse(v, "l;array item must be integer", &x)) - return -1; - if (i >= 0) - ((long *)ap->ob_item)[i] = x; - return 0; + long x; + if (!PyArg_Parse(v, "l;array item must be integer", &x)) + return -1; + if (i >= 0) + ((long *)ap->ob_item)[i] = x; + return 0; From noreply at buildbot.pypy.org Fri May 4 17:40:03 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 4 May 2012 17:40:03 +0200 (CEST) Subject: [pypy-commit] pyrepl py3k-readline: a branch where to try to make pyrepl.readline compatible with python3 Message-ID: <20120504154003.781B99B6038@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k-readline Changeset: r191:d1f24ac576da Date: 2012-05-03 21:31 +0200 http://bitbucket.org/pypy/pyrepl/changeset/d1f24ac576da/ Log: a branch where to try to make pyrepl.readline compatible with python3 diff --git a/pyrepl/readline.py b/pyrepl/readline.py --- a/pyrepl/readline.py +++ b/pyrepl/readline.py @@ -419,9 +419,14 @@ else: # this is not really what readline.c does. 
Better than nothing I guess - import __builtin__ - _old_raw_input = __builtin__.raw_input - __builtin__.raw_input = _wrapper.raw_input + if sys.version_info < (3,): + import __builtin__ + _old_raw_input = __builtin__.raw_input + __builtin__.raw_input = _wrapper.raw_input + else: + import builtins + _old_raw_input = builtins.input + builtins.input = _wrapper.raw_input _old_raw_input = None _setup() From noreply at buildbot.pypy.org Fri May 4 17:40:04 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 4 May 2012 17:40:04 +0200 (CEST) Subject: [pypy-commit] pyrepl py3k-readline: .keys() is no longer a list Message-ID: <20120504154004.94A419B603A@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k-readline Changeset: r192:08d1c1afa597 Date: 2012-05-03 21:32 +0200 http://bitbucket.org/pypy/pyrepl/changeset/08d1c1afa597/ Log: .keys() is no longer a list diff --git a/pyrepl/readline.py b/pyrepl/readline.py --- a/pyrepl/readline.py +++ b/pyrepl/readline.py @@ -226,7 +226,7 @@ self.config.completer_delims = dict.fromkeys(string) def get_completer_delims(self): - chars = self.config.completer_delims.keys() + chars = list(self.config.completer_delims.keys()) chars.sort() return ''.join(chars) From noreply at buildbot.pypy.org Fri May 4 17:40:05 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 4 May 2012 17:40:05 +0200 (CEST) Subject: [pypy-commit] pyrepl py3k-readline: bytearray does not support bytes char in py3k. And no need for the utf-8 hack Message-ID: <20120504154005.ABC369B603C@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k-readline Changeset: r193:9334769ae08f Date: 2012-05-04 17:22 +0200 http://bitbucket.org/pypy/pyrepl/changeset/9334769ae08f/ Log: bytearray does not support bytes char in py3k. 
And no need for the utf-8 hack diff --git a/pyrepl/simple_interact.py b/pyrepl/simple_interact.py --- a/pyrepl/simple_interact.py +++ b/pyrepl/simple_interact.py @@ -33,6 +33,7 @@ return False return True + def run_multiline_interactive_console(mainmodule=None): import code import __main__ @@ -41,7 +42,10 @@ def more_lines(unicodetext): # ooh, look at the hack: - src = "#coding:utf-8\n"+unicodetext.encode('utf-8') + if sys.version_info < (3,): + src = "#coding:utf-8\n"+unicodetext.encode('utf-8') + else: + src = unicodetext try: code = console.compile(src, '', 'single') except (OverflowError, SyntaxError, ValueError): diff --git a/pyrepl/unix_eventqueue.py b/pyrepl/unix_eventqueue.py --- a/pyrepl/unix_eventqueue.py +++ b/pyrepl/unix_eventqueue.py @@ -98,7 +98,7 @@ self.events.append(event) def push(self, char): - self.buf.append(char) + self.buf.append(ord(char)) if char in self.k: if self.k is self.ck: #sanity check, buffer is empty when a special key comes From noreply at buildbot.pypy.org Fri May 4 17:40:06 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Fri, 4 May 2012 17:40:06 +0200 (CEST) Subject: [pypy-commit] pyrepl py3k-readline: fix reading/writing the pyrepl.readline history file, w.r.t. to unicode vs bytes Message-ID: <20120504154006.C70429B603D@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k-readline Changeset: r194:8d689f601f1a Date: 2012-05-04 17:35 +0200 http://bitbucket.org/pypy/pyrepl/changeset/8d689f601f1a/ Log: fix reading/writing the pyrepl.readline history file, w.r.t. 
to unicode vs bytes diff --git a/pyrepl/readline.py b/pyrepl/readline.py --- a/pyrepl/readline.py +++ b/pyrepl/readline.py @@ -32,6 +32,10 @@ from pyrepl.completing_reader import CompletingReader from pyrepl.unix_console import UnixConsole, _error +try: + unicode +except NameError: + unicode = str ENCODING = sys.getfilesystemencoding() or 'latin1' # XXX review @@ -232,6 +236,8 @@ def _histline(self, line): line = line.rstrip('\n') + if isinstance(line, unicode): + return line # on py3k try: return unicode(line, ENCODING) except UnicodeDecodeError: # bah, silently fall back... @@ -271,7 +277,9 @@ history = self.get_reader().get_trimmed_history(maxlength) f = open(os.path.expanduser(filename), 'w') for entry in history: - if isinstance(entry, unicode): + # if we are on py3k, we don't need to encode strings before + # writing it to a file + if isinstance(entry, unicode) and sys.version_info < (3,): try: entry = entry.encode(ENCODING) except UnicodeEncodeError: # bah, silently fall back... From noreply at buildbot.pypy.org Fri May 4 18:59:20 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 4 May 2012 18:59:20 +0200 (CEST) Subject: [pypy-commit] pypy stm-thread: Intermediate check-in Message-ID: <20120504165920.0AD5C9B6038@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-thread Changeset: r54890:1fdaa02047f0 Date: 2012-05-04 18:00 +0200 http://bitbucket.org/pypy/pypy/changeset/1fdaa02047f0/ Log: Intermediate check-in diff --git a/pypy/config/translationoption.py b/pypy/config/translationoption.py --- a/pypy/config/translationoption.py +++ b/pypy/config/translationoption.py @@ -108,6 +108,7 @@ BoolOption("stm", "enable use of Software Transactional Memory", default=False, cmdline="--stm", requires=[("translation.gc", "stmgc"), + ("translation.thread", True), ("translation.continuation", False), # XXX for now ]), BoolOption("sandbox", "Produce a fully-sandboxed executable", diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py new file mode 100644 --- 
/dev/null +++ b/pypy/rlib/rstm.py @@ -0,0 +1,17 @@ +from pypy.translator.stm import stmgcintf + + +def before_external_call(): + stmgcintf.StmOperations.before_external_call() +before_external_call._gctransformer_hint_cannot_collect_ = True +before_external_call._dont_reach_me_in_del_ = True + +def after_external_call(): + stmgcintf.StmOperations.after_external_call() +after_external_call._gctransformer_hint_cannot_collect_ = True +after_external_call._dont_reach_me_in_del_ = True + +def do_yield_thread(): + stmgcintf.StmOperations.do_yield_thread() +do_yield_thread._gctransformer_hint_close_stack_ = True +do_yield_thread._dont_reach_me_in_del_ = True diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -403,14 +403,10 @@ 'stm_getarrayitem': LLOp(sideeffects=False, canrun=True), 'stm_getinteriorfield': LLOp(sideeffects=False, canrun=True), 'stm_become_inevitable': LLOp(), - 'stm_enter_transactional_mode': LLOp(canrun=True, canmallocgc=True), - 'stm_leave_transactional_mode': LLOp(canrun=True, canmallocgc=True), 'stm_writebarrier': LLOp(), 'stm_normalize_global': LLOp(), 'stm_start_transaction': LLOp(canrun=True, canmallocgc=True), 'stm_stop_transaction': LLOp(canrun=True, canmallocgc=True), - 'stm_thread_starting': LLOp(), - 'stm_thread_stopping': LLOp(), # __________ address operations __________ diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py --- a/pypy/rpython/memory/gc/stmgc.py +++ b/pypy/rpython/memory/gc/stmgc.py @@ -24,7 +24,8 @@ # - Each object lives either in the shared area, or in a thread-local # nursery. 
The shared area contains: # - the prebuilt objects -# - the small objects allocated via minimarkpage.py +# - the small objects allocated via minimarkpage.py (XXX so far, +# just with malloc) # - the non-small raw-malloced objects # # - The GLOBAL objects are all located in the shared area. @@ -38,27 +39,25 @@ # is not actually generational (slow when running long transactions # or before running transactions at all). # -# - A few details are different depending on the running mode: -# either "transactional" or "non-transactional". The transactional -# mode is where we have multiple threads, in a transaction.run() -# call. The non-transactional mode has got only the main thread. +# - So far, the GC is always running in "transactional" mode. Later, +# it would be possible to speed it up in case there is only one +# (non-blocked) thread. # # GC Flags on objects: # # - GCFLAG_GLOBAL: identifies GLOBAL objects. All prebuilt objects # start as GLOBAL; conversely, all freshly allocated objects start -# as LOCAL. But they may switch between the two; see below. -# All objects that are or have been GLOBAL are immortal for now +# as LOCAL, and become GLOBAL if they survive an end-of-transaction. +# All objects that are GLOBAL are immortal for now # (global_collect() will be done later). # # - GCFLAG_WAS_COPIED: means that the object is either a LOCAL COPY -# or, if GLOBAL, then it has or had at least one LOCAL COPY. Used -# in transactional mode only; see below. +# or, if GLOBAL, then it has or had at least one LOCAL COPY. See +# below. # # - GCFLAG_VISITED: used during collections to flag objects found to be # surviving. Between collections, it must be set on the LOCAL COPY -# objects or the ones from 'mt_global_turned_local' (see below), and -# only on them. +# objects, and only on them. # # - GCFLAG_HAS_SHADOW: set on nursery objects whose id() or identityhash() # was taken. 
Means that we already have a corresponding object allocated @@ -69,51 +68,34 @@ # When the mutator (= the program outside the GC) wants to write to an # object, stm_writebarrier() does something special on GLOBAL objects: # -# - In non-transactional mode, the write barrier turns the object LOCAL -# and add it in the list 'main_thread_tls.mt_global_turned_local'. -# This list contains all previously-GLOBAL objects that have been -# modified. [XXX:TODO]Objects turned LOCAL are changed back to GLOBAL and -# removed from 'mt_global_turned_local' by the next collection, -# unless they are also found in the stack (the reason being that if -# they are in the stack and stm_writebarrier() has already been -# called, then it might not be called a second time if they are -# changed again after collection). -# -# - In transactional mode, the write barrier creates a LOCAL COPY of -# the object and returns it (or, if already created by the same -# transaction, finds it again). The list of LOCAL COPY objects has -# a role similar to 'mt_global_turned_local', but is maintained by C -# code (see tldict_lookup()). +# - In transactional mode (always for now), the write barrier creates +# a LOCAL COPY of the object and returns it (or, if already created by +# the same transaction, finds it again). The mapping from GLOBAL to +# LOCAL COPY objects is maintained by C code (see tldict_lookup()). # # Invariant: between two transactions, all objects visible from the current # thread are always GLOBAL. In particular: # # - The LOCAL objects of a thread are not visible at all from other threads. -# This means that in transactional mode there is *no* pointer from a -# GLOBAL object directly to a LOCAL object. -# -# - At the end of enter_transactional_mode(), and at the beginning of -# leave_transactional_mode(), *all* objects everywhere are GLOBAL. +# This means that there is *no* pointer from a GLOBAL object directly to +# a LOCAL object. 
At most, there can be pointers from a GLOBAL object to +# another GLOBAL object that itself has a LOCAL COPY --- or, of course, +# pointers from a LOCAL object to anything. # # Collection: for now we have only local_collection(), which ignores all # GLOBAL objects. # -# - In non-transactional mode, we use 'mt_global_turned_local' as a list -# of roots, together with the stack. By construction, all objects that -# are still GLOBAL can be ignored, because they cannot point to a LOCAL -# object (except to a 'mt_global_turned_local' object). -# -# - In transactional mode, we similarly use the list maintained by C code +# - To find the roots, we take the list (maintained by the C code) # of the LOCAL COPY objects of the current transaction, together with -# the stack. Again, GLOBAL objects can be ignored because they have no -# pointer to any LOCAL object at all in that mode. +# the stack. GLOBAL objects can be ignored because they have no +# pointer to any LOCAL object at all. # # - A special case is the end-of-transaction collection, done by the same # local_collection() with a twist: all pointers to a LOCAL COPY object -# are replaced with copies to the corresponding GLOBAL original. When +# are replaced with pointers to the corresponding GLOBAL original. When # it is done, we mark all surviving LOCAL objects as GLOBAL too, and we # are back to the situation where this thread sees only GLOBAL objects. -# What we leave to the C code to do "as a finishing touch" is to copy +# What we leave to the C code to do "as the finishing touch" is to copy # transactionally the content of the LOCAL COPY objects back over the # GLOBAL originals; before this is done, the transaction can be aborted # at any point with no visible side-effect on any object that other @@ -129,11 +111,8 @@ # - if GCFLAG_HAS_SHADOW, to the shadow object outside the nursery. # (It is not used on other nursery objects before collection.) 
# -# - it contains the 'next' object of the 'mt_global_turned_local' list. -# # - it contains the 'next' object of the 'sharedarea_tls.chained_list' -# list, which describes all LOCAL objects malloced outside the nursery -# (excluding the ones that were GLOBAL at some point). +# list, which describes all LOCAL objects malloced outside the nursery. # # - for nursery objects, during collection, if they are copied outside # the nursery, they grow GCFLAG_VISITED and their 'version' points @@ -199,7 +178,6 @@ self.stm_operations = stm_operations self.nursery_size = nursery_size self.sharedarea = stmshared.StmGCSharedArea(self) - self.transactional_mode = False # def _stm_getsize(obj): # indirection to hide 'self' return self.get_size(obj) @@ -224,16 +202,17 @@ # self.sharedarea.setup() # - from pypy.rpython.memory.gc.stmtls import StmGCTLS - self.main_thread_tls = StmGCTLS(self, in_main_thread=True) - self.main_thread_tls.start_transaction() + self.setup_thread() def setup_thread(self): from pypy.rpython.memory.gc.stmtls import StmGCTLS - StmGCTLS(self, in_main_thread=False) + StmGCTLS(self).start_transaction() def teardown_thread(self): - self.get_tls().teardown_thread() + self.stm_operations.try_inevitable() + stmtls = self.get_tls() + stmtls.stop_transaction() + stmtls.delete() @always_inline def get_tls(self): @@ -241,16 +220,6 @@ tls = self.stm_operations.get_tls() return StmGCTLS.cast_address_to_tls_object(tls) - def enter_transactional_mode(self): - ll_assert(not self.transactional_mode, "already in transactional mode") - self.main_thread_tls.enter_transactional_mode() - self.transactional_mode = True - - def leave_transactional_mode(self): - ll_assert(self.transactional_mode, "already in non-transactional mode") - self.transactional_mode = False - self.main_thread_tls.leave_transactional_mode() - # ---------- def malloc_fixedsize_clear(self, typeid, size, @@ -259,11 +228,7 @@ contains_weakptr=False): #assert not needs_finalizer, "XXX" --- finalizer is just 
ignored # - # Check the mode: either in a transactional thread, or in - # the main thread. For now we do the same thing in both - # modes, but set different flags. - # - # Get the memory from the nursery. + # Get the memory from the thread-local nursery. size_gc_header = self.gcheaderbuilder.size_gc_header totalsize = size_gc_header + size tls = self.get_tls() @@ -380,15 +345,7 @@ # @dont_inline def _stm_write_barrier_global(obj): - tls = self.get_tls() - if tls is self.main_thread_tls: - # not in a transaction: the main thread writes to a global obj. - # In this case we turn the object local. - tls.main_thread_writes_to_global_obj(obj) - return obj - # - # else, in a transaction: we find or make a local copy of the - # global object + # find or make a local copy of the global object hdr = self.header(obj) if hdr.tid & GCFLAG_WAS_COPIED == 0: # @@ -410,10 +367,13 @@ # Here, we need to really make a local copy size = self.get_size(obj) totalsize = self.gcheaderbuilder.size_gc_header + size + tls = self.get_tls() try: localobj = tls.malloc_local_copy(totalsize) except MemoryError: - # XXX + # should not really let the exception propagate. + # XXX do something slightly better, like abort the transaction + # and raise a MemoryError when retrying fatalerror("MemoryError in _stm_write_barrier_global -- sorry") return llmemory.NULL # @@ -446,8 +406,8 @@ comparison with another pointer. If 'obj' is the local version of an existing global object, then returns the global object. Don't use for e.g. hashing, because if 'obj' - is a purely local object, it just returns 'obj' --- which - will change at the next commit. + is a local object, it just returns 'obj' --- even for nursery + objects, which move at the next local collection. """ if not obj: return obj @@ -472,7 +432,7 @@ # # The object is still in the nursery of the current TLS. # (It cannot be in the nursery of a different thread, because - # such objects are not visible to different threads at all.) 
+ # such an object would not be visible to this thread at all.) # ll_assert(hdr.tid & GCFLAG_WAS_COPIED == 0, "id: WAS_COPIED?") # diff --git a/pypy/rpython/memory/gc/stmtls.py b/pypy/rpython/memory/gc/stmtls.py --- a/pypy/rpython/memory/gc/stmtls.py +++ b/pypy/rpython/memory/gc/stmtls.py @@ -22,9 +22,8 @@ nontranslated_dict = {} - def __init__(self, gc, in_main_thread): + def __init__(self, gc): self.gc = gc - self.in_main_thread = in_main_thread self.stm_operations = self.gc.stm_operations self.null_address_dict = self.gc.null_address_dict self.AddressStack = self.gc.AddressStack @@ -48,14 +47,10 @@ # --- the LOCAL objects which are weakrefs. They are also listed # in the appropriate place, like sharedarea_tls, if needed. self.local_weakrefs = self.AddressStack() - # --- main thread only: this is the list of GLOBAL objects that - # have been turned into LOCAL objects - if in_main_thread: - self.mt_global_turned_local = NULL # self._register_with_C_code() - def teardown_thread(self): + def delete(self): self._cleanup_state() self._unregister_with_C_code() self.local_weakrefs.delete() @@ -81,7 +76,7 @@ n = 10000 + len(StmGCTLS.nontranslated_dict) tlsaddr = rffi.cast(llmemory.Address, n) StmGCTLS.nontranslated_dict[n] = self - self.stm_operations.set_tls(tlsaddr, int(self.in_main_thread)) + self.stm_operations.set_tls(tlsaddr) def _unregister_with_C_code(self): ll_assert(self.gc.get_tls() is self, @@ -106,74 +101,12 @@ # ------------------------------------------------------------ - def enter_transactional_mode(self): - """Called on the main thread, just before spawning the other - threads.""" - self.stop_transaction() - # - # We must also mark the following objects as GLOBAL again - obj = self.mt_global_turned_local - self.mt_global_turned_local = NULL - self.mt_save_prebuilt_turned_local = self.AddressStack() - while obj: - hdr = self.gc.header(obj) - if hdr.tid & GCFLAG_PREBUILT: - self.mt_save_prebuilt_turned_local.append(obj) - obj = hdr.version - 
ll_assert(hdr.tid & GCFLAG_GLOBAL == 0, "already GLOBAL [2]") - ll_assert(hdr.tid & GCFLAG_VISITED != 0, "missing VISITED [2]") - hdr.tid += GCFLAG_GLOBAL - GCFLAG_VISITED - self._clear_version_for_global_object(hdr) - if not we_are_translated(): - del self.mt_global_turned_local # don't use any more - # - if self.gc.DEBUG: - self.check_all_global_objects() - - def leave_transactional_mode(self): - """Restart using the main thread for mallocs.""" - if not we_are_translated(): - for key, value in StmGCTLS.nontranslated_dict.items(): - if value is not self: - del StmGCTLS.nontranslated_dict[key] - self.start_transaction() - # - if self.gc.DEBUG: - self.check_all_global_objects() - # - # Do something special here after we restarted the execution - # in the main thread. At this point, *all* objects are GLOBAL. - # The write_barrier will ensure that any write makes the written-to - # objects LOCAL again. However, it is possible that the write - # barrier was called before the enter/leave_transactional_mode() - # and will not be called again before writing. But such objects - # are right now directly in the stack. So to fix this issue, we - # conservatively mark as local all objects directly from the stack. - # XXX TODO: do the same thing after each local_collection() by - # the main thread. - self.mt_global_turned_local = NULL - self.gc.root_walker.walk_current_stack_roots( - StmGCTLS._remark_object_as_local, self) - # Messy, because prebuilt objects may be Constants in the flow - # graphs and so don't appear in the stack, so need a special case. - # We save and restore which *prebuilt* objects were originally - # in mt_global_turned_local. (Note that we can't simply save - # and restore mt_global_turned_local for *all* objects, because - # that would not be enough: the stack typically contains also many - # fresh objects that used to be local in enter_transactional_mode().) 
- while self.mt_save_prebuilt_turned_local.non_empty(): - obj = self.mt_save_prebuilt_turned_local.pop() - hdr = self.gc.header(obj) - if hdr.tid & GCFLAG_GLOBAL: - self.main_thread_writes_to_global_obj(obj) - self.mt_save_prebuilt_turned_local.delete() - def start_transaction(self): """Start a transaction: performs any pending cleanups, and set up a fresh state for allocating. Called at the start of - each transaction, and at the start of the main thread.""" - # Note that the calls to enter() and - # end_of_transaction_collection() are not balanced: if a + each transaction, including at the start of a thread.""" + # Note that the calls to start_transaction() and + # stop_transaction() are not balanced: if a # transaction is aborted, the latter might never be called. # Be ready here to clean up any state. self._cleanup_state() @@ -186,6 +119,10 @@ llarena.arena_reset(self.nursery_start, clear_size, 2) self.nursery_free = self.nursery_start self.nursery_top = self.nursery_start + self.nursery_size + # At this point, all visible objects are GLOBAL, but newly + # malloced objects will be LOCAL. + if self.gc.DEBUG: + self.check_all_global_objects() def stop_transaction(self): """Stop a transaction: do a local collection to empty the @@ -193,7 +130,7 @@ then mark all these objects as global.""" self.local_collection(end_of_transaction=1) if not self.local_nursery_is_empty(): - self.local_collection(end_of_transaction=2) + self.local_collection(end_of_transaction=1, run_finalizers=False) self._promote_locals_to_globals() self._disable_mallocs() @@ -204,15 +141,14 @@ # ------------------------------------------------------------ - def local_collection(self, end_of_transaction=0): + def local_collection(self, end_of_transaction=0, run_finalizers=True): """Do a local collection. 
This should be equivalent to a minor collection only, but the GC is not generational so far, so it is for now the same as a full collection --- but only on LOCAL objects, not touching the GLOBAL objects. More precisely, this - finds all YOUNG LOCAL objects, move them out of the nursery if - necessary, and make them OLD LOCAL objects. This starts from - the roots from the stack. The flag GCFLAG_WAS_COPIED is kept - and the C tree is updated if the local young objects move. + finds all LOCAL objects alive, moving them if necessary out of the + nursery. This starts from the roots from the stack and the LOCAL + COPY objects. """ # debug_start("gc-local") @@ -243,10 +179,7 @@ # # Also find the roots that are the local copy of GCFLAG_WAS_COPIED # objects. - if not self.in_main_thread: - self.collect_roots_from_tldict() - else: - self.collect_from_mt_global_turned_local() + self.collect_roots_from_tldict() # # Now repeatedly follow objects until 'pending' is empty. self.collect_flush_pending() @@ -264,7 +197,7 @@ # don't have GCFLAG_VISITED. As the newly allocated nursery # objects don't have it either, at the start of the next # collection, the only LOCAL objects that have it are the ones - # in 'mt_global_turned_local' or the C tldict with GCFLAG_WAS_COPIED. + # in the C tldict, together with GCFLAG_WAS_COPIED. # # All live nursery objects are out, and the rest dies. Fill # the whole nursery with zero and reset the current nursery pointer. 
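The GLOBAL/LOCAL-COPY write-barrier scheme that the stmgc.py comments above describe can be sketched as a toy model in plain Python. This is an illustrative sketch only: the class and field names (`Transaction`, `tldict`, the flag constants) mirror the design described in the comments, not the real RPython implementation, which keeps the tldict in C code and works on raw GC headers.

```python
GCFLAG_GLOBAL = 1
GCFLAG_WAS_COPIED = 2

class Obj:
    # A stand-in for a GC-managed object: a tid word plus some fields.
    def __init__(self, tid=GCFLAG_GLOBAL, **fields):
        self.tid = tid
        self.fields = dict(fields)

class Transaction:
    def __init__(self):
        # Maps id(global original) -> (global original, LOCAL COPY),
        # playing the role of the C-level tldict_lookup() table.
        self.tldict = {}

    def write_barrier(self, obj):
        if not (obj.tid & GCFLAG_GLOBAL):
            return obj            # already LOCAL: write in place
        entry = self.tldict.get(id(obj))
        if entry is None:         # first write: create the LOCAL COPY
            local = Obj(GCFLAG_WAS_COPIED, **obj.fields)
            obj.tid |= GCFLAG_WAS_COPIED
            entry = (obj, local)
            self.tldict[id(obj)] = entry
        return entry[1]           # all writes go to the LOCAL COPY

    def commit(self):
        # The "finishing touch": copy each LOCAL COPY's content back
        # over its GLOBAL original; before this point an abort has no
        # visible effect on objects other threads can see.
        for glob, local in self.tldict.values():
            glob.fields.update(local.fields)
        self.tldict.clear()

glob = Obj(x=1)                   # a GLOBAL object
tx = Transaction()
local = tx.write_barrier(glob)
local.fields['x'] = 42
assert glob.fields['x'] == 1      # original untouched until commit
assert tx.write_barrier(glob) is local  # same copy found again
tx.commit()
assert glob.fields['x'] == 42
```

The two asserts before `commit()` show the key property the comments rely on: between transactions all visible objects stay GLOBAL, and repeated write barriers within one transaction keep returning the same LOCAL COPY.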
@@ -292,13 +225,15 @@ fatalerror("malloc in a non-main thread but outside a transaction") if llmemory.raw_malloc_usage(size) > self.nursery_size // 8 * 7: fatalerror("object too large to ever fit in the nursery") - while True: - self.local_collection() + self.local_collection() + free = self.nursery_free + top = self.nursery_top + if (top - free) < llmemory.raw_malloc_usage(size): + # try again + self.local_collection(run_finalizers=False) + ll_assert(self.local_nursery_is_empty(), "nursery must be empty [0]") free = self.nursery_free - top = self.nursery_top - if (top - free) < llmemory.raw_malloc_usage(size): - continue # try again - return free + return free def is_in_nursery(self, addr): ll_assert(llmemory.cast_adr_to_int(addr) & 1 == 0, @@ -306,7 +241,7 @@ return self.nursery_start <= addr < self.nursery_top def malloc_local_copy(self, totalsize): - """Allocate an object that will be used as a LOCAL copy of + """Allocate an object that will be used as a LOCAL COPY of some GLOBAL object.""" localobj = self.sharedarea_tls.malloc_object(totalsize) self.copied_local_objects.append(localobj) @@ -315,25 +250,6 @@ def fresh_new_weakref(self, obj): self.local_weakrefs.append(obj) - def _remark_object_as_local(self, root): - obj = root.address[0] - hdr = self.gc.header(obj) - if hdr.tid & GCFLAG_GLOBAL: - self.main_thread_writes_to_global_obj(obj) - - def main_thread_writes_to_global_obj(self, obj): - hdr = self.gc.header(obj) - # XXX should we also remove GCFLAG_WAS_COPIED here if it is set? 
- ll_assert(hdr.tid & GCFLAG_VISITED == 0, - "write in main thread: unexpected GCFLAG_VISITED") - # remove GCFLAG_GLOBAL and (if it was there) GCFLAG_WAS_COPIED, - # and add GCFLAG_VISITED - hdr.tid &= ~(GCFLAG_GLOBAL | GCFLAG_WAS_COPIED) - hdr.tid |= GCFLAG_VISITED - # add the object into this linked list - hdr.version = self.mt_global_turned_local - self.mt_global_turned_local = obj - # ------------------------------------------------------------ def _promote_locals_to_globals(self): @@ -392,9 +308,10 @@ def trace_and_drag_out_of_nursery(self, obj): # This is called to fix the references inside 'obj', to ensure that # they are global. If necessary, the referenced objects are copied - # into the global area first. This is called on the LOCAL copy of + # out of the nursery first. This is called on the LOCAL copy of # the roots, and on the freshly OLD copy of all other reached LOCAL - # objects. + # objects. This only looks inside 'obj': it does not depend on or + # touch the flags of 'obj'. self.gc.trace(obj, self._trace_drag_out, None) def _trace_drag_out1(self, root): @@ -454,6 +371,8 @@ # # If 'obj' was already forwarded, change it to its forwarding address. # If 'obj' has already a shadow but isn't forwarded so far, use it. + # The common case is the "else" part, so we use only one test to + # know if we are in the common case or not. if hdr.tid & (GCFLAG_VISITED | GCFLAG_HAS_SHADOW): # if hdr.tid & GCFLAG_VISITED: @@ -552,19 +471,6 @@ # self.trace_and_drag_out_of_nursery(localobj) - def collect_from_mt_global_turned_local(self): - # NB. 
all objects in the 'mt_global_turned_local' list are - # currently immortal (because they were once GLOBAL) - obj = self.mt_global_turned_local - while obj: - hdr = self.gc.header(obj) - ll_assert(hdr.tid & GCFLAG_GLOBAL == 0, - "unexpected GCFLAG_GLOBAL in mt_global_turned_local") - ll_assert(hdr.tid & GCFLAG_VISITED != 0, - "missing GCFLAG_VISITED in mt_global_turned_local") - self.trace_and_drag_out_of_nursery(obj) - obj = hdr.version - def collect_flush_pending(self): # Follow the objects in the 'pending' stack and move the # young objects they point to out of the nursery. diff --git a/pypy/translator/stm/stmgcintf.py b/pypy/translator/stm/stmgcintf.py --- a/pypy/translator/stm/stmgcintf.py +++ b/pypy/translator/stm/stmgcintf.py @@ -47,9 +47,12 @@ # C part of the implementation of the pypy.rlib.rstm module in_transaction = smexternal('stm_in_transaction', [], lltype.Signed) - run_all_transactions = smexternal('stm_run_all_transactions', - [rffi.VOIDP, lltype.Signed], - lltype.Void) + before_external_call = smexternal('stm_before_external_call', + [], lltype.Void) + after_external_call = smexternal('stm_after_external_call', + [], lltype.Void) + do_yield_thread = smexternal('stm_do_yield_thread', + [], lltype.Void) # for the GC: store and read a thread-local-storage field, as well # as initialize and shut down the internal thread_descriptor diff --git a/pypy/translator/stm/test/targetdemo2.py b/pypy/translator/stm/test/targetdemo2.py --- a/pypy/translator/stm/test/targetdemo2.py +++ b/pypy/translator/stm/test/targetdemo2.py @@ -1,5 +1,6 @@ import time -from pypy.module.thread import ll_thread, gil +from pypy.module.thread import ll_thread +from pypy.rlib import rstm from pypy.rlib.objectmodel import invoke_around_extcall, we_are_translated @@ -69,7 +70,7 @@ def really_run(self): for value in range(glob.LENGTH): add_at_end_of_chained_list(glob.anchor, value, self.index) - gil.do_yield_thread() + rstm.do_yield_thread() # 
____________________________________________________________ # bah, we are really missing an RPython interface to threads @@ -129,7 +130,7 @@ def setup_threads(): #space.threadlocals.setup_threads(space) bootstrapper.setup() - invoke_around_extcall(gil.before_external_call, gil.after_external_call) + invoke_around_extcall(rstm.before_external_call, rstm.after_external_call) def start_thread(args): bootstrapper.acquire(args) From noreply at buildbot.pypy.org Fri May 4 20:37:18 2012 From: noreply at buildbot.pypy.org (wlav) Date: Fri, 4 May 2012 20:37:18 +0200 (CEST) Subject: [pypy-commit] pypy default: document use/install of standalone Reflex Message-ID: <20120504183718.986699B6038@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: Changeset: r54891:5e8d21a87161 Date: 2012-05-04 11:37 -0700 http://bitbucket.org/pypy/pypy/changeset/5e8d21a87161/ Log: document use/install of standalone Reflex diff --git a/pypy/doc/cppyy.rst b/pypy/doc/cppyy.rst --- a/pypy/doc/cppyy.rst +++ b/pypy/doc/cppyy.rst @@ -51,8 +51,15 @@ `Download`_ a binary or install from `source`_. Some Linux and Mac systems may have ROOT provided in the list of scientific software of their packager. -A current, standalone version of Reflex should be provided at some point, -once the dependencies and general packaging have been thought out. +If, however, you prefer a standalone version of Reflex, the best is to get +this `recent snapshot`_, and install like so:: + + $ tar jxf reflex-2012-05-02.tar.bz2 + $ cd reflex-2012-05-02 + $ build/autogen + $ ./configure + $ make && make install + Also, make sure you have a version of `gccxml`_ installed, which is most easily provided by the packager of your system. If you read up on gccxml, you'll probably notice that it is no longer being @@ -61,12 +68,13 @@ .. _`Download`: http://root.cern.ch/drupal/content/downloading-root .. _`source`: http://root.cern.ch/drupal/content/installing-root-source +.. 
_`recent snapshot`: http://cern.ch/wlav/reflex-2012-05-02.tar.bz2 .. _`gccxml`: http://www.gccxml.org Next, get the `PyPy sources`_, select the reflex-support branch, and build pypy-c. For the build to succeed, the ``$ROOTSYS`` environment variable must point to -the location of your ROOT installation:: +the location of your ROOT (or standalone Reflex) installation:: $ hg clone https://bitbucket.org/pypy/pypy $ cd pypy From noreply at buildbot.pypy.org Fri May 4 21:07:21 2012 From: noreply at buildbot.pypy.org (wlav) Date: Fri, 4 May 2012 21:07:21 +0200 (CEST) Subject: [pypy-commit] pypy reflex-support: long long and unsigned long long converters and executors Message-ID: <20120504190721.2578D9B6038@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r54892:a0f4fef869dd Date: 2012-05-04 11:17 -0700 http://bitbucket.org/pypy/pypy/changeset/a0f4fef869dd/ Log: long long and unsigned long long converters and executors diff --git a/pypy/module/cppyy/capi/__init__.py b/pypy/module/cppyy/capi/__init__.py --- a/pypy/module/cppyy/capi/__init__.py +++ b/pypy/module/cppyy/capi/__init__.py @@ -113,6 +113,11 @@ [C_METHOD, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.LONG, threadsafe=threadsafe, compilation_info=backend.eci) +c_call_ll = rffi.llexternal( + "cppyy_call_ll", + [C_METHOD, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.LONGLONG, + threadsafe=threadsafe, + compilation_info=backend.eci) c_call_f = rffi.llexternal( "cppyy_call_f", [C_METHOD, C_OBJECT, rffi.INT, rffi.VOIDP], rffi.DOUBLE, diff --git a/pypy/module/cppyy/converter.py b/pypy/module/cppyy/converter.py --- a/pypy/module/cppyy/converter.py +++ b/pypy/module/cppyy/converter.py @@ -366,6 +366,29 @@ ba = rffi.cast(rffi.CCHARP, address) ba[capi.c_function_arg_typeoffset()] = self.typecode +class LongLongConverter(IntTypeConverterMixin, TypeConverter): + _immutable_ = True + libffitype = libffi.types.slong + c_type = rffi.LONGLONG + c_ptrtype = rffi.LONGLONGP + + def __init__(self, space, 
default): + self.default = rffi.cast(self.c_type, capi.c_strtoll(default)) + + def _unwrap_object(self, space, w_obj): + return space.r_longlong_w(w_obj) + +class ConstLongLongRefConverter(ConstRefNumericTypeConverterMixin, LongLongConverter): + _immutable_ = True + libffitype = libffi.types.pointer + typecode = 'r' + + def convert_argument(self, space, w_obj, address): + x = rffi.cast(self.c_ptrtype, address) + x[0] = self._unwrap_object(space, w_obj) + ba = rffi.cast(rffi.CCHARP, address) + ba[capi.c_function_arg_typeoffset()] = self.typecode + class UnsignedLongConverter(IntTypeConverterMixin, TypeConverter): _immutable_ = True libffitype = libffi.types.ulong @@ -382,6 +405,23 @@ _immutable_ = True libffitype = libffi.types.pointer +class UnsignedLongLongConverter(IntTypeConverterMixin, TypeConverter): + _immutable_ = True + libffitype = libffi.types.ulong + c_type = rffi.ULONGLONG + c_ptrtype = rffi.ULONGLONGP + + def __init__(self, space, default): + self.default = rffi.cast(self.c_type, capi.c_strtoull(default)) + + def _unwrap_object(self, space, w_obj): + return space.r_ulonglong_w(w_obj) + +class ConstUnsignedLongLongRefConverter(ConstRefNumericTypeConverterMixin, UnsignedLongLongConverter): + _immutable_ = True + libffitype = libffi.types.pointer + + class FloatConverter(FloatTypeConverterMixin, TypeConverter): _immutable_ = True libffitype = libffi.types.float @@ -748,6 +788,14 @@ _converters["const unsigned long int&"] = ConstUnsignedLongRefConverter _converters["unsigned long"] = _converters["unsigned long int"] _converters["const unsigned long&"] = _converters["const unsigned long int&"] +_converters["long long int"] = LongLongConverter +_converters["const long long int&"] = ConstLongLongRefConverter +_converters["long long"] = _converters["long long int"] +_converters["const long long&"] = _converters["const long long int&"] +_converters["unsigned long long int"] = UnsignedLongLongConverter +_converters["const unsigned long long int&"] = 
ConstUnsignedLongLongRefConverter +_converters["unsigned long long"] = _converters["unsigned long long int"] +_converters["const unsigned long long&"] = _converters["const unsigned long long int&"] _converters["float"] = FloatConverter _converters["const float&"] = ConstFloatRefConverter _converters["double"] = DoubleConverter diff --git a/pypy/module/cppyy/executor.py b/pypy/module/cppyy/executor.py --- a/pypy/module/cppyy/executor.py +++ b/pypy/module/cppyy/executor.py @@ -147,6 +147,32 @@ result = libffifunc.call(argchain, rffi.ULONG) return space.wrap(result) +class LongLongExecutor(FunctionExecutor): + _immutable_ = True + libffitype = libffi.types.sint64 + + def _wrap_result(self, space, result): + return space.wrap(result) + + def execute(self, space, cppmethod, cppthis, num_args, args): + result = capi.c_call_ll(cppmethod, cppthis, num_args, args) + return self._wrap_result(space, result) + + def execute_libffi(self, space, libffifunc, argchain): + result = libffifunc.call(argchain, rffi.LONGLONG) + return space.wrap(result) + +class UnsignedLongLongExecutor(LongLongExecutor): + _immutable_ = True + libffitype = libffi.types.uint64 + + def _wrap_result(self, space, result): + return space.wrap(rffi.cast(rffi.ULONGLONG, result)) + + def execute_libffi(self, space, libffifunc, argchain): + result = libffifunc.call(argchain, rffi.ULONGLONG) + return space.wrap(result) + class ConstIntRefExecutor(FunctionExecutor): _immutable_ = True libffitype = libffi.types.pointer @@ -396,6 +422,10 @@ _executors["unsigned long"] = _executors["unsigned long int"] _executors["unsigned long int*"] = UnsignedLongPtrExecutor _executors["unsigned long*"] = _executors["unsigned long int*"] +_executors["long long int"] = LongLongExecutor +_executors["long long"] = _executors["long long int"] +_executors["unsigned long long int"] = UnsignedLongLongExecutor +_executors["unsigned long long"] = _executors["unsigned long long int"] _executors["float"] = FloatExecutor _executors["float*"] 
= FloatPtrExecutor _executors["double"] = DoubleExecutor diff --git a/pypy/module/cppyy/include/capi.h b/pypy/module/cppyy/include/capi.h --- a/pypy/module/cppyy/include/capi.h +++ b/pypy/module/cppyy/include/capi.h @@ -31,6 +31,7 @@ short cppyy_call_h(cppyy_method_t method, cppyy_object_t self, int nargs, void* args); int cppyy_call_i(cppyy_method_t method, cppyy_object_t self, int nargs, void* args); long cppyy_call_l(cppyy_method_t method, cppyy_object_t self, int nargs, void* args); + long long cppyy_call_ll(cppyy_method_t method, cppyy_object_t self, int nargs, void* args); double cppyy_call_f(cppyy_method_t method, cppyy_object_t self, int nargs, void* args); double cppyy_call_d(cppyy_method_t method, cppyy_object_t self, int nargs, void* args); diff --git a/pypy/module/cppyy/src/reflexcwrapper.cxx b/pypy/module/cppyy/src/reflexcwrapper.cxx --- a/pypy/module/cppyy/src/reflexcwrapper.cxx +++ b/pypy/module/cppyy/src/reflexcwrapper.cxx @@ -136,6 +136,10 @@ return cppyy_call_T(method, self, nargs, args); } +long long cppyy_call_ll(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + return cppyy_call_T(method, self, nargs, args); +} + double cppyy_call_f(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { return cppyy_call_T(method, self, nargs, args); } diff --git a/pypy/module/cppyy/test/datatypes.cxx b/pypy/module/cppyy/test/datatypes.cxx --- a/pypy/module/cppyy/test/datatypes.cxx +++ b/pypy/module/cppyy/test/datatypes.cxx @@ -13,8 +13,10 @@ m_uint = 22u; m_long = -33l; m_ulong = 33ul; - m_float = -44.f; - m_double = -55.; + m_llong = -44ll; + m_ullong = 55ull; + m_float = -66.f; + m_double = -77.; m_enum = kNothing; m_short_array2 = new short[N]; @@ -86,6 +88,8 @@ unsigned int cppyy_test_data::get_uint() { return m_uint; } long cppyy_test_data::get_long() { return m_long; } unsigned long cppyy_test_data::get_ulong() { return m_ulong; } +long long cppyy_test_data::get_llong() { return m_llong; } +unsigned long long 
cppyy_test_data::get_ullong() { return m_ullong; } float cppyy_test_data::get_float() { return m_float; } double cppyy_test_data::get_double() { return m_double; } cppyy_test_data::what cppyy_test_data::get_enum() { return m_enum; } @@ -129,22 +133,28 @@ void cppyy_test_data::set_long_c(const long& l) { m_long = l; } void cppyy_test_data::set_ulong(unsigned long ul) { m_ulong = ul; } void cppyy_test_data::set_ulong_c(const unsigned long& ul) { m_ulong = ul; } +void cppyy_test_data::set_llong(long long ll) { m_llong = ll; } +void cppyy_test_data::set_llong_c(const long long& ll) { m_llong = ll; } +void cppyy_test_data::set_ullong(unsigned long long ull) { m_ullong = ull; } +void cppyy_test_data::set_ullong_c(const unsigned long long& ull) { m_ullong = ull; } void cppyy_test_data::set_float(float f) { m_float = f; } void cppyy_test_data::set_float_c(const float& f) { m_float = f; } void cppyy_test_data::set_double(double d) { m_double = d; } void cppyy_test_data::set_double_c(const double& d) { m_double = d; } void cppyy_test_data::set_enum(what w) { m_enum = w; } -char cppyy_test_data::s_char = 's'; -unsigned char cppyy_test_data::s_uchar = 'u'; -short cppyy_test_data::s_short = -101; -unsigned short cppyy_test_data::s_ushort = 255u; -int cppyy_test_data::s_int = -202; -unsigned int cppyy_test_data::s_uint = 202u; -long cppyy_test_data::s_long = -303l; -unsigned long cppyy_test_data::s_ulong = 303ul; -float cppyy_test_data::s_float = -404.f; -double cppyy_test_data::s_double = -505.; +char cppyy_test_data::s_char = 's'; +unsigned char cppyy_test_data::s_uchar = 'u'; +short cppyy_test_data::s_short = -101; +unsigned short cppyy_test_data::s_ushort = 255u; +int cppyy_test_data::s_int = -202; +unsigned int cppyy_test_data::s_uint = 202u; +long cppyy_test_data::s_long = -303l; +unsigned long cppyy_test_data::s_ulong = 303ul; +long long cppyy_test_data::s_llong = -404ll; +unsigned long long cppyy_test_data::s_ullong = 505ull; +float cppyy_test_data::s_float = -606.f; 
+double cppyy_test_data::s_double = -707.; cppyy_test_data::what cppyy_test_data::s_enum = cppyy_test_data::kNothing; diff --git a/pypy/module/cppyy/test/datatypes.h b/pypy/module/cppyy/test/datatypes.h --- a/pypy/module/cppyy/test/datatypes.h +++ b/pypy/module/cppyy/test/datatypes.h @@ -21,18 +21,20 @@ void destroy_arrays(); // getters - bool get_bool(); - char get_char(); - unsigned char get_uchar(); - short get_short(); - unsigned short get_ushort(); - int get_int(); - unsigned int get_uint(); - long get_long(); - unsigned long get_ulong(); - float get_float(); - double get_double(); - what get_enum(); + bool get_bool(); + char get_char(); + unsigned char get_uchar(); + short get_short(); + unsigned short get_ushort(); + int get_int(); + unsigned int get_uint(); + long get_long(); + unsigned long get_ulong(); + long long get_llong(); + unsigned long long get_ullong(); + float get_float(); + double get_double(); + what get_enum(); short* get_short_array(); short* get_short_array2(); @@ -71,8 +73,12 @@ void set_uint_c(const unsigned int& ui); void set_long(long l); void set_long_c(const long& l); + void set_llong(long long ll); + void set_llong_c(const long long& ll); void set_ulong(unsigned long ul); void set_ulong_c(const unsigned long& ul); + void set_ullong(unsigned long long ll); + void set_ullong_c(const unsigned long long& ll); void set_float(float f); void set_float_c(const float& f); void set_double(double d); @@ -81,18 +87,20 @@ public: // basic types - bool m_bool; - char m_char; - unsigned char m_uchar; - short m_short; - unsigned short m_ushort; - int m_int; - unsigned int m_uint; - long m_long; - unsigned long m_ulong; - float m_float; - double m_double; - what m_enum; + bool m_bool; + char m_char; + unsigned char m_uchar; + short m_short; + unsigned short m_ushort; + int m_int; + unsigned int m_uint; + long m_long; + unsigned long m_ulong; + long long m_llong; + unsigned long long m_ullong; + float m_float; + double m_double; + what m_enum; // array 
types short m_short_array[N]; @@ -118,17 +126,19 @@ cppyy_test_pod* m_ppod; public: - static char s_char; - static unsigned char s_uchar; - static short s_short; - static unsigned short s_ushort; - static int s_int; - static unsigned int s_uint; - static long s_long; - static unsigned long s_ulong; - static float s_float; - static double s_double; - static what s_enum; + static char s_char; + static unsigned char s_uchar; + static short s_short; + static unsigned short s_ushort; + static int s_int; + static unsigned int s_uint; + static long s_long; + static unsigned long s_ulong; + static long long s_llong; + static unsigned long long s_ullong; + static float s_float; + static double s_double; + static what s_enum; private: bool m_owns_arrays; diff --git a/pypy/module/cppyy/test/test_datatypes.py b/pypy/module/cppyy/test/test_datatypes.py --- a/pypy/module/cppyy/test/test_datatypes.py +++ b/pypy/module/cppyy/test/test_datatypes.py @@ -53,10 +53,12 @@ assert c.m_uint == 22 assert c.m_long == -33 assert c.m_ulong == 33 + assert c.m_llong == -44 + assert c.m_ullong == 55 # reading floating point types - assert round(c.m_float + 44., 5) == 0 - assert round(c.m_double + 55., 8) == 0 + assert round(c.m_float + 66., 5) == 0 + assert round(c.m_double + 77., 8) == 0 # reading of array types for i in range(self.N): @@ -146,7 +148,7 @@ # TODO: raises(TypeError, 'c.set_uchar(-1)') # integer types - names = ['short', 'ushort', 'int', 'uint', 'long', 'ulong'] + names = ['short', 'ushort', 'int', 'uint', 'long', 'ulong', 'llong', 'ullong'] for i in range(len(names)): exec 'c.m_%s = %d' % (names[i],i) assert eval('c.get_%s()' % names[i]) == i @@ -175,6 +177,7 @@ c.destroy_arrays() # integer arrays + names = ['short', 'ushort', 'int', 'uint', 'long', 'ulong'] import array a = range(self.N) atypes = ['h', 'H', 'i', 'I', 'l', 'L' ] @@ -232,12 +235,16 @@ assert c.s_long == -303L assert c.s_ulong == 303L assert cppyy_test_data.s_ulong == 303L + assert cppyy_test_data.s_llong == -404L +
assert c.s_llong == -404L + assert c.s_ullong == 505L + assert cppyy_test_data.s_ullong == 505L # floating point types - assert round(cppyy_test_data.s_float + 404., 5) == 0 - assert round(c.s_float + 404., 5) == 0 - assert round(cppyy_test_data.s_double + 505., 8) == 0 - assert round(c.s_double + 505., 8) == 0 + assert round(cppyy_test_data.s_float + 606., 5) == 0 + assert round(c.s_float + 606., 5) == 0 + assert round(cppyy_test_data.s_double + 707., 8) == 0 + assert round(c.s_double + 707., 8) == 0 c.destruct() diff --git a/pypy/module/cppyy/test/test_zjit.py b/pypy/module/cppyy/test/test_zjit.py --- a/pypy/module/cppyy/test/test_zjit.py +++ b/pypy/module/cppyy/test/test_zjit.py @@ -137,6 +137,8 @@ return obj c_int_w = int_w + r_longlong_w = int_w + r_ulonglong_w = uint_w def isinstance_w(self, w_obj, w_type): assert isinstance(w_obj, FakeBase) From noreply at buildbot.pypy.org Fri May 4 21:07:22 2012 From: noreply at buildbot.pypy.org (wlav) Date: Fri, 4 May 2012 21:07:22 +0200 (CEST) Subject: [pypy-commit] pypy reflex-support: likewise (u)llong support for CINT backend Message-ID: <20120504190722.B2B0A9B6038@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r54893:19cddebe0b37 Date: 2012-05-04 11:20 -0700 http://bitbucket.org/pypy/pypy/changeset/19cddebe0b37/ Log: likewise (u)llong support for CINT backend diff --git a/pypy/module/cppyy/src/cintcwrapper.cxx b/pypy/module/cppyy/src/cintcwrapper.cxx --- a/pypy/module/cppyy/src/cintcwrapper.cxx +++ b/pypy/module/cppyy/src/cintcwrapper.cxx @@ -343,6 +343,11 @@ return G__int(result); } +long long cppyy_call_ll(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { + G__value result = cppyy_call_T(method, self, nargs, args); + return G__Longlong(result); +} + double cppyy_call_f(cppyy_method_t method, cppyy_object_t self, int nargs, void* args) { G__value result = cppyy_call_T(method, self, nargs, args); return G__double(result); From noreply at buildbot.pypy.org Fri 
May 4 21:07:24 2012 From: noreply at buildbot.pypy.org (wlav) Date: Fri, 4 May 2012 21:07:24 +0200 (CEST) Subject: [pypy-commit] pypy reflex-support: merge default into branch Message-ID: <20120504190724.756999B6038@wyvern.cs.uni-duesseldorf.de> Author: Wim Lavrijsen Branch: reflex-support Changeset: r54894:00da721fb4f6 Date: 2012-05-04 12:06 -0700 http://bitbucket.org/pypy/pypy/changeset/00da721fb4f6/ Log: merge default into branch diff too long, truncating to 10000 out of 12144 lines diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1652,8 +1652,6 @@ 'UnicodeTranslateError', 'ValueError', 'ZeroDivisionError', - 'UnicodeEncodeError', - 'UnicodeDecodeError', ] if sys.platform.startswith("win"): diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -1290,10 +1290,6 @@ w(self.valuestackdepth)]) def handle(self, frame, unroller): - next_instr = self.really_handle(frame, unroller) # JIT hack - return r_uint(next_instr) - - def really_handle(self, frame, unroller): """ Purely abstract method """ raise NotImplementedError @@ -1305,17 +1301,17 @@ _opname = 'SETUP_LOOP' handling_mask = SBreakLoop.kind | SContinueLoop.kind - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): if isinstance(unroller, SContinueLoop): # re-push the loop block without cleaning up the value stack, # and jump to the beginning of the loop, stored in the # exception's argument frame.append_block(self) - return unroller.jump_to + return r_uint(unroller.jump_to) else: # jump to the end of the loop self.cleanupstack(frame) - return self.handlerposition + return r_uint(self.handlerposition) class ExceptBlock(FrameBlock): @@ -1325,7 +1321,7 @@ _opname = 'SETUP_EXCEPT' handling_mask = SApplicationException.kind - def really_handle(self, frame, unroller): + def handle(self, 
frame, unroller): # push the exception to the value stack for inspection by the # exception handler (the code after the except:) self.cleanupstack(frame) @@ -1340,7 +1336,7 @@ frame.pushvalue(operationerr.get_w_value(frame.space)) frame.pushvalue(operationerr.w_type) frame.last_exception = operationerr - return self.handlerposition # jump to the handler + return r_uint(self.handlerposition) # jump to the handler class FinallyBlock(FrameBlock): @@ -1361,7 +1357,7 @@ frame.pushvalue(frame.space.w_None) frame.pushvalue(frame.space.w_None) - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): # any abnormal reason for unrolling a finally: triggers the end of # the block unrolling and the entering the finally: handler. # see comments in cleanup(). @@ -1369,18 +1365,18 @@ frame.pushvalue(frame.space.wrap(unroller)) frame.pushvalue(frame.space.w_None) frame.pushvalue(frame.space.w_None) - return self.handlerposition # jump to the handler + return r_uint(self.handlerposition) # jump to the handler class WithBlock(FinallyBlock): _immutable_ = True - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): if (frame.space.full_exceptions and isinstance(unroller, SApplicationException)): unroller.operr.normalize_exception(frame.space) - return FinallyBlock.really_handle(self, frame, unroller) + return FinallyBlock.handle(self, frame, unroller) block_classes = {'SETUP_LOOP': LoopBlock, 'SETUP_EXCEPT': ExceptBlock, diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -51,7 +51,7 @@ if ('rewrite' not in enable_opts or 'virtualize' not in enable_opts or 'heap' not in enable_opts or 'unroll' not in enable_opts or 'pure' not in enable_opts): - optimizations.append(OptSimplify()) + optimizations.append(OptSimplify(unroll)) return optimizations, unroll diff --git 
a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -4,8 +4,9 @@ from pypy.jit.metainterp.history import TargetToken, JitCellToken class OptSimplify(Optimization): - def __init__(self): + def __init__(self, unroll): self.last_label_descr = None + self.unroll = unroll def optimize_CALL_PURE(self, op): args = op.getarglist() @@ -35,24 +36,26 @@ pass def optimize_LABEL(self, op): - descr = op.getdescr() - if isinstance(descr, JitCellToken): - return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) - self.last_label_descr = op.getdescr() + if not self.unroll: + descr = op.getdescr() + if isinstance(descr, JitCellToken): + return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) + self.last_label_descr = op.getdescr() self.emit_operation(op) def optimize_JUMP(self, op): - descr = op.getdescr() - assert isinstance(descr, JitCellToken) - if not descr.target_tokens: - assert self.last_label_descr is not None - target_token = self.last_label_descr - assert isinstance(target_token, TargetToken) - assert target_token.targeting_jitcell_token is descr - op.setdescr(self.last_label_descr) - else: - assert len(descr.target_tokens) == 1 - op.setdescr(descr.target_tokens[0]) + if not self.unroll: + descr = op.getdescr() + assert isinstance(descr, JitCellToken) + if not descr.target_tokens: + assert self.last_label_descr is not None + target_token = self.last_label_descr + assert isinstance(target_token, TargetToken) + assert target_token.targeting_jitcell_token is descr + op.setdescr(self.last_label_descr) + else: + assert len(descr.target_tokens) == 1 + op.setdescr(descr.target_tokens[0]) self.emit_operation(op) dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', diff --git a/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py new file mode 100644 
--- /dev/null +++ b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py @@ -0,0 +1,46 @@ +from pypy.jit.metainterp.optimizeopt.test.test_optimizeopt import OptimizeOptTest +from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin +from pypy.jit.metainterp.resoperation import rop + + +allopts = OptimizeOptTest.enable_opts.split(':') +for optnum in range(len(allopts)): + myopts = allopts[:] + del myopts[optnum] + + class TestLLtype(OptimizeOptTest, LLtypeMixin): + enable_opts = ':'.join(myopts) + + def optimize_loop(self, ops, expected, expected_preamble=None, + call_pure_results=None, expected_short=None): + loop = self.parse(ops) + if expected != "crash!": + expected = self.parse(expected) + if expected_preamble: + expected_preamble = self.parse(expected_preamble) + if expected_short: + expected_short = self.parse(expected_short) + + preamble = self.unroll_and_optimize(loop, call_pure_results) + + for op in preamble.operations + loop.operations: + assert op.getopnum() not in (rop.CALL_PURE, + rop.CALL_LOOPINVARIANT, + rop.VIRTUAL_REF_FINISH, + rop.VIRTUAL_REF, + rop.QUASIIMMUT_FIELD, + rop.MARK_OPAQUE_PTR, + rop.RECORD_KNOWN_CLASS) + + def raises(self, e, fn, *args): + try: + fn(*args) + except e: + pass + + opt = allopts[optnum] + exec "TestNo%sLLtype = TestLLtype" % (opt[0].upper() + opt[1:]) + +del TestLLtype # No need to run the last set twice +del TestNoUnrollLLtype # This case is take care of by test_optimizebasic + diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -105,6 +105,9 @@ return loop + def raises(self, e, fn, *args): + py.test.raises(e, fn, *args) + class OptimizeOptTest(BaseTestWithUnroll): def setup_method(self, meth=None): @@ -2639,7 +2642,7 @@ p2 = new_with_vtable(ConstClass(node_vtable)) jump(p2) """ - 
py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_2(self): @@ -2651,7 +2654,7 @@ escape(p2) # prevent it from staying Virtual jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_3(self): @@ -2665,7 +2668,7 @@ setfield_gc(p3, p4, descr=nextdescr) jump(p3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_merge_guard_class_guard_value(self): @@ -4411,7 +4414,7 @@ setfield_gc(p1, p3, descr=nextdescr) jump(p3) """ - py.test.raises(BogusPureField, self.optimize_loop, ops, "crash!") + self.raises(BogusPureField, self.optimize_loop, ops, "crash!") def test_dont_complains_different_field(self): ops = """ @@ -5024,7 +5027,7 @@ i2 = int_add(i0, 3) jump(i2) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_ne_const_not(self): ops = """ @@ -5074,7 +5077,7 @@ i3 = int_add(i0, 3) jump(i3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_lshift(self): ops = """ diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -927,12 +927,12 @@ source_dir / "pyerrors.c", source_dir / "modsupport.c", source_dir / "getargs.c", + source_dir / "abstract.c", source_dir / "stringobject.c", source_dir / "mysnprintf.c", source_dir / "pythonrun.c", source_dir / "sysmodule.c", source_dir / "bufferobject.c", - source_dir / "object.c", source_dir / "cobject.c", source_dir / "structseq.c", source_dir / "capsule.c", diff --git a/pypy/module/cpyext/complexobject.py b/pypy/module/cpyext/complexobject.py --- a/pypy/module/cpyext/complexobject.py +++ b/pypy/module/cpyext/complexobject.py @@ -33,6 +33,11 @@ # CPython 
also accepts anything return 0.0 + at cpython_api([Py_complex_ptr], PyObject) +def _PyComplex_FromCComplex(space, v): + """Create a new Python complex number object from a C Py_complex value.""" + return space.newcomplex(v.c_real, v.c_imag) + # lltype does not handle functions returning a structure. This implements a # helper function, which takes as argument a reference to the return value. @cpython_api([PyObject, Py_complex_ptr], lltype.Void) diff --git a/pypy/module/cpyext/floatobject.py b/pypy/module/cpyext/floatobject.py --- a/pypy/module/cpyext/floatobject.py +++ b/pypy/module/cpyext/floatobject.py @@ -2,6 +2,7 @@ from pypy.module.cpyext.api import ( CANNOT_FAIL, cpython_api, PyObject, build_type_checkers, CONST_STRING) from pypy.interpreter.error import OperationError +from pypy.rlib.rstruct import runpack PyFloat_Check, PyFloat_CheckExact = build_type_checkers("Float") @@ -33,3 +34,19 @@ backward compatibility.""" return space.call_function(space.w_float, w_obj) + at cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0) +def _PyFloat_Unpack4(space, ptr, le): + input = rffi.charpsize2str(ptr, 4) + if rffi.cast(lltype.Signed, le): + return runpack.runpack("f", input) + + at cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0) +def _PyFloat_Unpack8(space, ptr, le): + input = rffi.charpsize2str(ptr, 8) + if rffi.cast(lltype.Signed, le): + return runpack.runpack("d", input) + diff --git a/pypy/module/cpyext/include/complexobject.h b/pypy/module/cpyext/include/complexobject.h --- a/pypy/module/cpyext/include/complexobject.h +++ b/pypy/module/cpyext/include/complexobject.h @@ -21,6 +21,8 @@ return result; } +#define PyComplex_FromCComplex(c) _PyComplex_FromCComplex(&c) + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/pyerrors.h b/pypy/module/cpyext/include/pyerrors.h --- a/pypy/module/cpyext/include/pyerrors.h +++ b/pypy/module/cpyext/include/pyerrors.h @@ -29,6 +29,10 @@ # define vsnprintf _vsnprintf #endif 
+#include +PyAPI_FUNC(int) PyOS_snprintf(char *str, size_t size, const char *format, ...); +PyAPI_FUNC(int) PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va); + #ifdef __cplusplus } #endif diff --git a/pypy/module/cpyext/include/stringobject.h b/pypy/module/cpyext/include/stringobject.h --- a/pypy/module/cpyext/include/stringobject.h +++ b/pypy/module/cpyext/include/stringobject.h @@ -7,8 +7,6 @@ extern "C" { #endif -int PyOS_snprintf(char *str, size_t size, const char *format, ...); - #define PyString_GET_SIZE(op) PyString_Size(op) #define PyString_AS_STRING(op) PyString_AsString(op) diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h --- a/pypy/module/cpyext/include/structmember.h +++ b/pypy/module/cpyext/include/structmember.h @@ -40,7 +40,8 @@ when the value is NULL, instead of converting to None. */ #define T_LONGLONG 17 -#define T_ULONGLONG 18 +#define T_ULONGLONG 18 +#define T_PYSSIZET 19 /* Flags. These constants are also in structmemberdefs.py. */ #define READONLY 1 diff --git a/pypy/module/cpyext/longobject.py b/pypy/module/cpyext/longobject.py --- a/pypy/module/cpyext/longobject.py +++ b/pypy/module/cpyext/longobject.py @@ -1,6 +1,7 @@ from pypy.rpython.lltypesystem import lltype, rffi -from pypy.module.cpyext.api import (cpython_api, PyObject, build_type_checkers, - CONST_STRING, ADDR, CANNOT_FAIL) +from pypy.module.cpyext.api import ( + cpython_api, PyObject, build_type_checkers, Py_ssize_t, + CONST_STRING, ADDR, CANNOT_FAIL) from pypy.objspace.std.longobject import W_LongObject from pypy.interpreter.error import OperationError from pypy.module.cpyext.intobject import PyInt_AsUnsignedLongMask @@ -15,6 +16,13 @@ """Return a new PyLongObject object from v, or NULL on failure.""" return space.newlong(val) + at cpython_api([Py_ssize_t], PyObject) +def PyLong_FromSsize_t(space, val): + """Return a new PyLongObject object from a C Py_ssize_t, or + NULL on failure. 
+ """ + return space.newlong(val) + @cpython_api([rffi.LONGLONG], PyObject) def PyLong_FromLongLong(space, val): """Return a new PyLongObject object from a C long long, or NULL @@ -56,6 +64,14 @@ and -1 will be returned.""" return space.int_w(w_long) + at cpython_api([PyObject], Py_ssize_t, error=-1) +def PyLong_AsSsize_t(space, w_long): + """Return a C Py_ssize_t representation of the contents of pylong. If + pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised + and -1 will be returned. + """ + return space.int_w(w_long) + @cpython_api([PyObject], rffi.LONGLONG, error=-1) def PyLong_AsLongLong(space, w_long): """ diff --git a/pypy/module/cpyext/src/abstract.c b/pypy/module/cpyext/src/abstract.c new file mode 100644 --- /dev/null +++ b/pypy/module/cpyext/src/abstract.c @@ -0,0 +1,269 @@ +/* Abstract Object Interface */ + +#include "Python.h" + +/* Shorthands to return certain errors */ + +static PyObject * +type_error(const char *msg, PyObject *obj) +{ + PyErr_Format(PyExc_TypeError, msg, obj->ob_type->tp_name); + return NULL; +} + +static PyObject * +null_error(void) +{ + if (!PyErr_Occurred()) + PyErr_SetString(PyExc_SystemError, + "null argument to internal routine"); + return NULL; +} + +/* Operations on any object */ + +int +PyObject_CheckReadBuffer(PyObject *obj) +{ + PyBufferProcs *pb = obj->ob_type->tp_as_buffer; + + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL || + (*pb->bf_getsegcount)(obj, NULL) != 1) + return 0; + return 1; +} + +int PyObject_AsReadBuffer(PyObject *obj, + const void **buffer, + Py_ssize_t *buffer_len) +{ + PyBufferProcs *pb; + void *pp; + Py_ssize_t len; + + if (obj == NULL || buffer == NULL || buffer_len == NULL) { + null_error(); + return -1; + } + pb = obj->ob_type->tp_as_buffer; + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL) { + PyErr_SetString(PyExc_TypeError, + "expected a readable buffer object"); + return -1; + } + if 
((*pb->bf_getsegcount)(obj, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "expected a single-segment buffer object"); + return -1; + } + len = (*pb->bf_getreadbuffer)(obj, 0, &pp); + if (len < 0) + return -1; + *buffer = pp; + *buffer_len = len; + return 0; +} + +int PyObject_AsWriteBuffer(PyObject *obj, + void **buffer, + Py_ssize_t *buffer_len) +{ + PyBufferProcs *pb; + void*pp; + Py_ssize_t len; + + if (obj == NULL || buffer == NULL || buffer_len == NULL) { + null_error(); + return -1; + } + pb = obj->ob_type->tp_as_buffer; + if (pb == NULL || + pb->bf_getwritebuffer == NULL || + pb->bf_getsegcount == NULL) { + PyErr_SetString(PyExc_TypeError, + "expected a writeable buffer object"); + return -1; + } + if ((*pb->bf_getsegcount)(obj, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "expected a single-segment buffer object"); + return -1; + } + len = (*pb->bf_getwritebuffer)(obj,0,&pp); + if (len < 0) + return -1; + *buffer = pp; + *buffer_len = len; + return 0; +} + +/* Operations on callable objects */ + +static PyObject* +call_function_tail(PyObject *callable, PyObject *args) +{ + PyObject *retval; + + if (args == NULL) + return NULL; + + if (!PyTuple_Check(args)) { + PyObject *a; + + a = PyTuple_New(1); + if (a == NULL) { + Py_DECREF(args); + return NULL; + } + PyTuple_SET_ITEM(a, 0, args); + args = a; + } + retval = PyObject_Call(callable, args, NULL); + + Py_DECREF(args); + + return retval; +} + +PyObject * +PyObject_CallFunction(PyObject *callable, const char *format, ...) +{ + va_list va; + PyObject *args; + + if (callable == NULL) + return null_error(); + + if (format && *format) { + va_start(va, format); + args = Py_VaBuildValue(format, va); + va_end(va); + } + else + args = PyTuple_New(0); + + return call_function_tail(callable, args); +} + +PyObject * +PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) 
+{ + va_list va; + PyObject *args; + PyObject *func = NULL; + PyObject *retval = NULL; + + if (o == NULL || name == NULL) + return null_error(); + + func = PyObject_GetAttrString(o, name); + if (func == NULL) { + PyErr_SetString(PyExc_AttributeError, name); + return 0; + } + + if (!PyCallable_Check(func)) { + type_error("attribute of type '%.200s' is not callable", func); + goto exit; + } + + if (format && *format) { + va_start(va, format); + args = Py_VaBuildValue(format, va); + va_end(va); + } + else + args = PyTuple_New(0); + + retval = call_function_tail(func, args); + + exit: + /* args gets consumed in call_function_tail */ + Py_XDECREF(func); + + return retval; +} + +static PyObject * +objargs_mktuple(va_list va) +{ + int i, n = 0; + va_list countva; + PyObject *result, *tmp; + +#ifdef VA_LIST_IS_ARRAY + memcpy(countva, va, sizeof(va_list)); +#else +#ifdef __va_copy + __va_copy(countva, va); +#else + countva = va; +#endif +#endif + + while (((PyObject *)va_arg(countva, PyObject *)) != NULL) + ++n; + result = PyTuple_New(n); + if (result != NULL && n > 0) { + for (i = 0; i < n; ++i) { + tmp = (PyObject *)va_arg(va, PyObject *); + Py_INCREF(tmp); + PyTuple_SET_ITEM(result, i, tmp); + } + } + return result; +} + +PyObject * +PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...) +{ + PyObject *args, *tmp; + va_list vargs; + + if (callable == NULL || name == NULL) + return null_error(); + + callable = PyObject_GetAttr(callable, name); + if (callable == NULL) + return NULL; + + /* count the args */ + va_start(vargs, name); + args = objargs_mktuple(vargs); + va_end(vargs); + if (args == NULL) { + Py_DECREF(callable); + return NULL; + } + tmp = PyObject_Call(callable, args, NULL); + Py_DECREF(args); + Py_DECREF(callable); + + return tmp; +} + +PyObject * +PyObject_CallFunctionObjArgs(PyObject *callable, ...) 
+{ + PyObject *args, *tmp; + va_list vargs; + + if (callable == NULL) + return null_error(); + + /* count the args */ + va_start(vargs, callable); + args = objargs_mktuple(vargs); + va_end(vargs); + if (args == NULL) + return NULL; + tmp = PyObject_Call(callable, args, NULL); + Py_DECREF(args); + + return tmp; +} + diff --git a/pypy/module/cpyext/src/bufferobject.c b/pypy/module/cpyext/src/bufferobject.c --- a/pypy/module/cpyext/src/bufferobject.c +++ b/pypy/module/cpyext/src/bufferobject.c @@ -13,207 +13,207 @@ static int get_buf(PyBufferObject *self, void **ptr, Py_ssize_t *size, - enum buffer_t buffer_type) + enum buffer_t buffer_type) { - if (self->b_base == NULL) { - assert (ptr != NULL); - *ptr = self->b_ptr; - *size = self->b_size; - } - else { - Py_ssize_t count, offset; - readbufferproc proc = 0; - PyBufferProcs *bp = self->b_base->ob_type->tp_as_buffer; - if ((*bp->bf_getsegcount)(self->b_base, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return 0; - } - if ((buffer_type == READ_BUFFER) || - ((buffer_type == ANY_BUFFER) && self->b_readonly)) - proc = bp->bf_getreadbuffer; - else if ((buffer_type == WRITE_BUFFER) || - (buffer_type == ANY_BUFFER)) - proc = (readbufferproc)bp->bf_getwritebuffer; - else if (buffer_type == CHAR_BUFFER) { + if (self->b_base == NULL) { + assert (ptr != NULL); + *ptr = self->b_ptr; + *size = self->b_size; + } + else { + Py_ssize_t count, offset; + readbufferproc proc = 0; + PyBufferProcs *bp = self->b_base->ob_type->tp_as_buffer; + if ((*bp->bf_getsegcount)(self->b_base, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return 0; + } + if ((buffer_type == READ_BUFFER) || + ((buffer_type == ANY_BUFFER) && self->b_readonly)) + proc = bp->bf_getreadbuffer; + else if ((buffer_type == WRITE_BUFFER) || + (buffer_type == ANY_BUFFER)) + proc = (readbufferproc)bp->bf_getwritebuffer; + else if (buffer_type == CHAR_BUFFER) { if 
(!PyType_HasFeature(self->ob_type, - Py_TPFLAGS_HAVE_GETCHARBUFFER)) { - PyErr_SetString(PyExc_TypeError, - "Py_TPFLAGS_HAVE_GETCHARBUFFER needed"); - return 0; - } - proc = (readbufferproc)bp->bf_getcharbuffer; - } - if (!proc) { - char *buffer_type_name; - switch (buffer_type) { - case READ_BUFFER: - buffer_type_name = "read"; - break; - case WRITE_BUFFER: - buffer_type_name = "write"; - break; - case CHAR_BUFFER: - buffer_type_name = "char"; - break; - default: - buffer_type_name = "no"; - break; - } - PyErr_Format(PyExc_TypeError, - "%s buffer type not available", - buffer_type_name); - return 0; - } - if ((count = (*proc)(self->b_base, 0, ptr)) < 0) - return 0; - /* apply constraints to the start/end */ - if (self->b_offset > count) - offset = count; - else - offset = self->b_offset; - *(char **)ptr = *(char **)ptr + offset; - if (self->b_size == Py_END_OF_BUFFER) - *size = count; - else - *size = self->b_size; - if (offset + *size > count) - *size = count - offset; - } - return 1; + Py_TPFLAGS_HAVE_GETCHARBUFFER)) { + PyErr_SetString(PyExc_TypeError, + "Py_TPFLAGS_HAVE_GETCHARBUFFER needed"); + return 0; + } + proc = (readbufferproc)bp->bf_getcharbuffer; + } + if (!proc) { + char *buffer_type_name; + switch (buffer_type) { + case READ_BUFFER: + buffer_type_name = "read"; + break; + case WRITE_BUFFER: + buffer_type_name = "write"; + break; + case CHAR_BUFFER: + buffer_type_name = "char"; + break; + default: + buffer_type_name = "no"; + break; + } + PyErr_Format(PyExc_TypeError, + "%s buffer type not available", + buffer_type_name); + return 0; + } + if ((count = (*proc)(self->b_base, 0, ptr)) < 0) + return 0; + /* apply constraints to the start/end */ + if (self->b_offset > count) + offset = count; + else + offset = self->b_offset; + *(char **)ptr = *(char **)ptr + offset; + if (self->b_size == Py_END_OF_BUFFER) + *size = count; + else + *size = self->b_size; + if (offset + *size > count) + *size = count - offset; + } + return 1; } static PyObject * 
buffer_from_memory(PyObject *base, Py_ssize_t size, Py_ssize_t offset, void *ptr, - int readonly) + int readonly) { - PyBufferObject * b; + PyBufferObject * b; - if (size < 0 && size != Py_END_OF_BUFFER) { - PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); - return NULL; - } - if (offset < 0) { - PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); - return NULL; - } + if (size < 0 && size != Py_END_OF_BUFFER) { + PyErr_SetString(PyExc_ValueError, + "size must be zero or positive"); + return NULL; + } + if (offset < 0) { + PyErr_SetString(PyExc_ValueError, + "offset must be zero or positive"); + return NULL; + } - b = PyObject_NEW(PyBufferObject, &PyBuffer_Type); - if ( b == NULL ) - return NULL; + b = PyObject_NEW(PyBufferObject, &PyBuffer_Type); + if ( b == NULL ) + return NULL; - Py_XINCREF(base); - b->b_base = base; - b->b_ptr = ptr; - b->b_size = size; - b->b_offset = offset; - b->b_readonly = readonly; - b->b_hash = -1; + Py_XINCREF(base); + b->b_base = base; + b->b_ptr = ptr; + b->b_size = size; + b->b_offset = offset; + b->b_readonly = readonly; + b->b_hash = -1; - return (PyObject *) b; + return (PyObject *) b; } static PyObject * buffer_from_object(PyObject *base, Py_ssize_t size, Py_ssize_t offset, int readonly) { - if (offset < 0) { - PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); - return NULL; - } - if ( PyBuffer_Check(base) && (((PyBufferObject *)base)->b_base) ) { - /* another buffer, refer to the base object */ - PyBufferObject *b = (PyBufferObject *)base; - if (b->b_size != Py_END_OF_BUFFER) { - Py_ssize_t base_size = b->b_size - offset; - if (base_size < 0) - base_size = 0; - if (size == Py_END_OF_BUFFER || size > base_size) - size = base_size; - } - offset += b->b_offset; - base = b->b_base; - } - return buffer_from_memory(base, size, offset, NULL, readonly); + if (offset < 0) { + PyErr_SetString(PyExc_ValueError, + "offset must be zero or positive"); + return NULL; + } + if ( 
PyBuffer_Check(base) && (((PyBufferObject *)base)->b_base) ) { + /* another buffer, refer to the base object */ + PyBufferObject *b = (PyBufferObject *)base; + if (b->b_size != Py_END_OF_BUFFER) { + Py_ssize_t base_size = b->b_size - offset; + if (base_size < 0) + base_size = 0; + if (size == Py_END_OF_BUFFER || size > base_size) + size = base_size; + } + offset += b->b_offset; + base = b->b_base; + } + return buffer_from_memory(base, size, offset, NULL, readonly); } PyObject * PyBuffer_FromObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size) { - PyBufferProcs *pb = base->ob_type->tp_as_buffer; + PyBufferProcs *pb = base->ob_type->tp_as_buffer; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_SetString(PyExc_TypeError, "buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_SetString(PyExc_TypeError, "buffer object expected"); + return NULL; + } - return buffer_from_object(base, size, offset, 1); + return buffer_from_object(base, size, offset, 1); } PyObject * PyBuffer_FromReadWriteObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size) { - PyBufferProcs *pb = base->ob_type->tp_as_buffer; + PyBufferProcs *pb = base->ob_type->tp_as_buffer; - if ( pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_SetString(PyExc_TypeError, "buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getwritebuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_SetString(PyExc_TypeError, "buffer object expected"); + return NULL; + } - return buffer_from_object(base, size, offset, 0); + return buffer_from_object(base, size, offset, 0); } PyObject * PyBuffer_FromMemory(void *ptr, Py_ssize_t size) { - return buffer_from_memory(NULL, size, 0, ptr, 1); + return buffer_from_memory(NULL, size, 0, ptr, 1); } PyObject * PyBuffer_FromReadWriteMemory(void *ptr, Py_ssize_t size) { - return 
buffer_from_memory(NULL, size, 0, ptr, 0); + return buffer_from_memory(NULL, size, 0, ptr, 0); } PyObject * PyBuffer_New(Py_ssize_t size) { - PyObject *o; - PyBufferObject * b; + PyObject *o; + PyBufferObject * b; - if (size < 0) { - PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); - return NULL; - } - if (sizeof(*b) > PY_SSIZE_T_MAX - size) { - /* unlikely */ - return PyErr_NoMemory(); - } - /* Inline PyObject_New */ - o = (PyObject *)PyObject_MALLOC(sizeof(*b) + size); - if ( o == NULL ) - return PyErr_NoMemory(); - b = (PyBufferObject *) PyObject_INIT(o, &PyBuffer_Type); + if (size < 0) { + PyErr_SetString(PyExc_ValueError, + "size must be zero or positive"); + return NULL; + } + if (sizeof(*b) > PY_SSIZE_T_MAX - size) { + /* unlikely */ + return PyErr_NoMemory(); + } + /* Inline PyObject_New */ + o = (PyObject *)PyObject_MALLOC(sizeof(*b) + size); + if ( o == NULL ) + return PyErr_NoMemory(); + b = (PyBufferObject *) PyObject_INIT(o, &PyBuffer_Type); - b->b_base = NULL; - b->b_ptr = (void *)(b + 1); - b->b_size = size; - b->b_offset = 0; - b->b_readonly = 0; - b->b_hash = -1; + b->b_base = NULL; + b->b_ptr = (void *)(b + 1); + b->b_size = size; + b->b_offset = 0; + b->b_readonly = 0; + b->b_hash = -1; - return o; + return o; } /* Methods */ @@ -221,19 +221,21 @@ static PyObject * buffer_new(PyTypeObject *type, PyObject *args, PyObject *kw) { - PyObject *ob; - Py_ssize_t offset = 0; - Py_ssize_t size = Py_END_OF_BUFFER; + PyObject *ob; + Py_ssize_t offset = 0; + Py_ssize_t size = Py_END_OF_BUFFER; - /*if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) - return NULL;*/ - - if (!_PyArg_NoKeywords("buffer()", kw)) - return NULL; + /* + * if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) + * return NULL; + */ - if (!PyArg_ParseTuple(args, "O|nn:buffer", &ob, &offset, &size)) - return NULL; - return PyBuffer_FromObject(ob, offset, size); + if (!_PyArg_NoKeywords("buffer()", kw)) + return NULL; + + if (!PyArg_ParseTuple(args, 
"O|nn:buffer", &ob, &offset, &size)) + return NULL; + return PyBuffer_FromObject(ob, offset, size); } PyDoc_STRVAR(buffer_doc, @@ -248,99 +250,99 @@ static void buffer_dealloc(PyBufferObject *self) { - Py_XDECREF(self->b_base); - PyObject_DEL(self); + Py_XDECREF(self->b_base); + PyObject_DEL(self); } static int buffer_compare(PyBufferObject *self, PyBufferObject *other) { - void *p1, *p2; - Py_ssize_t len_self, len_other, min_len; - int cmp; + void *p1, *p2; + Py_ssize_t len_self, len_other, min_len; + int cmp; - if (!get_buf(self, &p1, &len_self, ANY_BUFFER)) - return -1; - if (!get_buf(other, &p2, &len_other, ANY_BUFFER)) - return -1; - min_len = (len_self < len_other) ? len_self : len_other; - if (min_len > 0) { - cmp = memcmp(p1, p2, min_len); - if (cmp != 0) - return cmp < 0 ? -1 : 1; - } - return (len_self < len_other) ? -1 : (len_self > len_other) ? 1 : 0; + if (!get_buf(self, &p1, &len_self, ANY_BUFFER)) + return -1; + if (!get_buf(other, &p2, &len_other, ANY_BUFFER)) + return -1; + min_len = (len_self < len_other) ? len_self : len_other; + if (min_len > 0) { + cmp = memcmp(p1, p2, min_len); + if (cmp != 0) + return cmp < 0 ? -1 : 1; + } + return (len_self < len_other) ? -1 : (len_self > len_other) ? 1 : 0; } static PyObject * buffer_repr(PyBufferObject *self) { - const char *status = self->b_readonly ? "read-only" : "read-write"; + const char *status = self->b_readonly ? 
"read-only" : "read-write"; if ( self->b_base == NULL ) - return PyString_FromFormat("<%s buffer ptr %p, size %zd at %p>", - status, - self->b_ptr, - self->b_size, - self); - else - return PyString_FromFormat( - "<%s buffer for %p, size %zd, offset %zd at %p>", - status, - self->b_base, - self->b_size, - self->b_offset, - self); + return PyString_FromFormat("<%s buffer ptr %p, size %zd at %p>", + status, + self->b_ptr, + self->b_size, + self); + else + return PyString_FromFormat( + "<%s buffer for %p, size %zd, offset %zd at %p>", + status, + self->b_base, + self->b_size, + self->b_offset, + self); } static long buffer_hash(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - register Py_ssize_t len; - register unsigned char *p; - register long x; + void *ptr; + Py_ssize_t size; + register Py_ssize_t len; + register unsigned char *p; + register long x; - if ( self->b_hash != -1 ) - return self->b_hash; + if ( self->b_hash != -1 ) + return self->b_hash; - /* XXX potential bugs here, a readonly buffer does not imply that the - * underlying memory is immutable. b_readonly is a necessary but not - * sufficient condition for a buffer to be hashable. Perhaps it would - * be better to only allow hashing if the underlying object is known to - * be immutable (e.g. PyString_Check() is true). Another idea would - * be to call tp_hash on the underlying object and see if it raises - * an error. */ - if ( !self->b_readonly ) - { - PyErr_SetString(PyExc_TypeError, - "writable buffers are not hashable"); - return -1; - } + /* XXX potential bugs here, a readonly buffer does not imply that the + * underlying memory is immutable. b_readonly is a necessary but not + * sufficient condition for a buffer to be hashable. Perhaps it would + * be better to only allow hashing if the underlying object is known to + * be immutable (e.g. PyString_Check() is true). Another idea would + * be to call tp_hash on the underlying object and see if it raises + * an error. 
*/ + if ( !self->b_readonly ) + { + PyErr_SetString(PyExc_TypeError, + "writable buffers are not hashable"); + return -1; + } - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - p = (unsigned char *) ptr; - len = size; - x = *p << 7; - while (--len >= 0) - x = (1000003*x) ^ *p++; - x ^= size; - if (x == -1) - x = -2; - self->b_hash = x; - return x; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return -1; + p = (unsigned char *) ptr; + len = size; + x = *p << 7; + while (--len >= 0) + x = (1000003*x) ^ *p++; + x ^= size; + if (x == -1) + x = -2; + self->b_hash = x; + return x; } static PyObject * buffer_str(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - return PyString_FromStringAndSize((const char *)ptr, size); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + return PyString_FromStringAndSize((const char *)ptr, size); } /* Sequence methods */ @@ -348,374 +350,372 @@ static Py_ssize_t buffer_length(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - return size; + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return -1; + return size; } static PyObject * buffer_concat(PyBufferObject *self, PyObject *other) { - PyBufferProcs *pb = other->ob_type->tp_as_buffer; - void *ptr1, *ptr2; - char *p; - PyObject *ob; - Py_ssize_t size, count; + PyBufferProcs *pb = other->ob_type->tp_as_buffer; + void *ptr1, *ptr2; + char *p; + PyObject *ob; + Py_ssize_t size, count; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return NULL; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? 
*/ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return NULL; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? */ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return NULL; + } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return NULL; - - /* optimize special case */ - if ( size == 0 ) - { - Py_INCREF(other); - return other; - } + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return NULL; - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return NULL; + /* optimize special case */ + if ( size == 0 ) + { + Py_INCREF(other); + return other; + } - assert(count <= PY_SIZE_MAX - size); + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return NULL; - ob = PyString_FromStringAndSize(NULL, size + count); - if ( ob == NULL ) - return NULL; - p = PyString_AS_STRING(ob); - memcpy(p, ptr1, size); - memcpy(p + size, ptr2, count); + assert(count <= PY_SIZE_MAX - size); - /* there is an extra byte in the string object, so this is safe */ - p[size + count] = '\0'; + ob = PyString_FromStringAndSize(NULL, size + count); + if ( ob == NULL ) + return NULL; + p = PyString_AS_STRING(ob); + memcpy(p, ptr1, size); + memcpy(p + size, ptr2, count); - return ob; + /* there is an extra byte in the string object, so this is safe */ + p[size + count] = '\0'; + + return ob; } static PyObject * buffer_repeat(PyBufferObject *self, Py_ssize_t count) { - PyObject *ob; - register char *p; - void *ptr; - Py_ssize_t size; + PyObject *ob; + register char *p; + void *ptr; + Py_ssize_t size; - if ( count < 0 ) - count = 0; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if (count > PY_SSIZE_T_MAX / size) { - PyErr_SetString(PyExc_MemoryError, "result too large"); - return NULL; - } - ob = 
PyString_FromStringAndSize(NULL, size * count); - if ( ob == NULL ) - return NULL; + if ( count < 0 ) + count = 0; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if (count > PY_SSIZE_T_MAX / size) { + PyErr_SetString(PyExc_MemoryError, "result too large"); + return NULL; + } + ob = PyString_FromStringAndSize(NULL, size * count); + if ( ob == NULL ) + return NULL; - p = PyString_AS_STRING(ob); - while ( count-- ) - { - memcpy(p, ptr, size); - p += size; - } + p = PyString_AS_STRING(ob); + while ( count-- ) + { + memcpy(p, ptr, size); + p += size; + } - /* there is an extra byte in the string object, so this is safe */ - *p = '\0'; + /* there is an extra byte in the string object, so this is safe */ + *p = '\0'; - return ob; + return ob; } static PyObject * buffer_item(PyBufferObject *self, Py_ssize_t idx) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if ( idx < 0 || idx >= size ) { - PyErr_SetString(PyExc_IndexError, "buffer index out of range"); - return NULL; - } - return PyString_FromStringAndSize((char *)ptr + idx, 1); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if ( idx < 0 || idx >= size ) { + PyErr_SetString(PyExc_IndexError, "buffer index out of range"); + return NULL; + } + return PyString_FromStringAndSize((char *)ptr + idx, 1); } static PyObject * buffer_slice(PyBufferObject *self, Py_ssize_t left, Py_ssize_t right) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if ( left < 0 ) - left = 0; - if ( right < 0 ) - right = 0; - if ( right > size ) - right = size; - if ( right < left ) - right = left; - return PyString_FromStringAndSize((char *)ptr + left, - right - left); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if ( left < 0 ) + left = 0; + if ( right < 0 ) + right = 0; + if ( right > size ) + right = size; + if ( right < left ) + right 
= left; + return PyString_FromStringAndSize((char *)ptr + left, + right - left); } static PyObject * buffer_subscript(PyBufferObject *self, PyObject *item) { - void *p; - Py_ssize_t size; - - if (!get_buf(self, &p, &size, ANY_BUFFER)) - return NULL; - + void *p; + Py_ssize_t size; + + if (!get_buf(self, &p, &size, ANY_BUFFER)) + return NULL; if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i == -1 && PyErr_Occurred()) - return NULL; - if (i < 0) - i += size; - return buffer_item(self, i); - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelength, cur, i; + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); + if (i == -1 && PyErr_Occurred()) + return NULL; + if (i < 0) + i += size; + return buffer_item(self, i); + } + else if (PySlice_Check(item)) { + Py_ssize_t start, stop, step, slicelength, cur, i; - if (PySlice_GetIndicesEx((PySliceObject*)item, size, - &start, &stop, &step, &slicelength) < 0) { - return NULL; - } + if (PySlice_GetIndicesEx((PySliceObject*)item, size, + &start, &stop, &step, &slicelength) < 0) { + return NULL; + } - if (slicelength <= 0) - return PyString_FromStringAndSize("", 0); - else if (step == 1) - return PyString_FromStringAndSize((char *)p + start, - stop - start); - else { - PyObject *result; - char *source_buf = (char *)p; - char *result_buf = (char *)PyMem_Malloc(slicelength); + if (slicelength <= 0) + return PyString_FromStringAndSize("", 0); + else if (step == 1) + return PyString_FromStringAndSize((char *)p + start, + stop - start); + else { + PyObject *result; + char *source_buf = (char *)p; + char *result_buf = (char *)PyMem_Malloc(slicelength); - if (result_buf == NULL) - return PyErr_NoMemory(); + if (result_buf == NULL) + return PyErr_NoMemory(); - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - result_buf[i] = source_buf[cur]; - } + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + result_buf[i] = source_buf[cur]; + 
} - result = PyString_FromStringAndSize(result_buf, - slicelength); - PyMem_Free(result_buf); - return result; - } - } - else { - PyErr_SetString(PyExc_TypeError, - "sequence index must be integer"); - return NULL; - } + result = PyString_FromStringAndSize(result_buf, + slicelength); + PyMem_Free(result_buf); + return result; + } + } + else { + PyErr_SetString(PyExc_TypeError, + "sequence index must be integer"); + return NULL; + } } static int buffer_ass_item(PyBufferObject *self, Py_ssize_t idx, PyObject *other) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t size; - Py_ssize_t count; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t size; + Py_ssize_t count; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return -1; + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return -1; - if (idx < 0 || idx >= size) { - PyErr_SetString(PyExc_IndexError, - "buffer assignment index out of range"); - return -1; - } + if (idx < 0 || idx >= size) { + PyErr_SetString(PyExc_IndexError, + "buffer assignment index out of range"); + return -1; + } - pb = other ? other->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } + pb = other ? other->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? 
*/ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return -1; - if ( count != 1 ) { - PyErr_SetString(PyExc_TypeError, - "right operand must be a single byte"); - return -1; - } + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return -1; + if ( count != 1 ) { + PyErr_SetString(PyExc_TypeError, + "right operand must be a single byte"); + return -1; + } - ((char *)ptr1)[idx] = *(char *)ptr2; - return 0; + ((char *)ptr1)[idx] = *(char *)ptr2; + return 0; } static int buffer_ass_slice(PyBufferObject *self, Py_ssize_t left, Py_ssize_t right, PyObject *other) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t size; - Py_ssize_t slice_len; - Py_ssize_t count; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t size; + Py_ssize_t slice_len; + Py_ssize_t count; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - pb = other ? other->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return -1; - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return -1; + pb = other ? other->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? 
*/ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return -1; + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return -1; - if ( left < 0 ) - left = 0; - else if ( left > size ) - left = size; - if ( right < left ) - right = left; - else if ( right > size ) - right = size; - slice_len = right - left; + if ( left < 0 ) + left = 0; + else if ( left > size ) + left = size; + if ( right < left ) + right = left; + else if ( right > size ) + right = size; + slice_len = right - left; - if ( count != slice_len ) { - PyErr_SetString( - PyExc_TypeError, - "right operand length must match slice length"); - return -1; - } + if ( count != slice_len ) { + PyErr_SetString( + PyExc_TypeError, + "right operand length must match slice length"); + return -1; + } - if ( slice_len ) - memcpy((char *)ptr1 + left, ptr2, slice_len); + if ( slice_len ) + memcpy((char *)ptr1 + left, ptr2, slice_len); - return 0; + return 0; } static int buffer_ass_subscript(PyBufferObject *self, PyObject *item, PyObject *value) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t selfsize; - Py_ssize_t othersize; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t selfsize; + Py_ssize_t othersize; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - pb = value ? value->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(value, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } - if (!get_buf(self, &ptr1, &selfsize, ANY_BUFFER)) - return -1; - + pb = value ? 
value->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(value, NULL) != 1 ) + { + /* ### use a different exception type/message? */ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } + if (!get_buf(self, &ptr1, &selfsize, ANY_BUFFER)) + return -1; if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i == -1 && PyErr_Occurred()) - return -1; - if (i < 0) - i += selfsize; - return buffer_ass_item(self, i, value); - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelength; - - if (PySlice_GetIndicesEx((PySliceObject *)item, selfsize, - &start, &stop, &step, &slicelength) < 0) - return -1; + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); + if (i == -1 && PyErr_Occurred()) + return -1; + if (i < 0) + i += selfsize; + return buffer_ass_item(self, i, value); + } + else if (PySlice_Check(item)) { + Py_ssize_t start, stop, step, slicelength; - if ((othersize = (*pb->bf_getreadbuffer)(value, 0, &ptr2)) < 0) - return -1; + if (PySlice_GetIndicesEx((PySliceObject *)item, selfsize, + &start, &stop, &step, &slicelength) < 0) + return -1; - if (othersize != slicelength) { - PyErr_SetString( - PyExc_TypeError, - "right operand length must match slice length"); - return -1; - } + if ((othersize = (*pb->bf_getreadbuffer)(value, 0, &ptr2)) < 0) + return -1; - if (slicelength == 0) - return 0; - else if (step == 1) { - memcpy((char *)ptr1 + start, ptr2, slicelength); - return 0; - } - else { - Py_ssize_t cur, i; - - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - ((char *)ptr1)[cur] = ((char *)ptr2)[i]; - } + if (othersize != slicelength) { + PyErr_SetString( + PyExc_TypeError, + "right operand length must match slice length"); + return -1; + } - return 0; - } - } else { - PyErr_SetString(PyExc_TypeError, - 
"buffer indices must be integers"); - return -1; - } + if (slicelength == 0) + return 0; + else if (step == 1) { + memcpy((char *)ptr1 + start, ptr2, slicelength); + return 0; + } + else { + Py_ssize_t cur, i; + + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + ((char *)ptr1)[cur] = ((char *)ptr2)[i]; + } + + return 0; + } + } else { + PyErr_SetString(PyExc_TypeError, + "buffer indices must be integers"); + return -1; + } } /* Buffer methods */ @@ -723,64 +723,64 @@ static Py_ssize_t buffer_getreadbuf(PyBufferObject *self, Py_ssize_t idx, void **pp) { - Py_ssize_t size; - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, pp, &size, READ_BUFFER)) - return -1; - return size; + Py_ssize_t size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, pp, &size, READ_BUFFER)) + return -1; + return size; } static Py_ssize_t buffer_getwritebuf(PyBufferObject *self, Py_ssize_t idx, void **pp) { - Py_ssize_t size; + Py_ssize_t size; - if ( self->b_readonly ) - { - PyErr_SetString(PyExc_TypeError, "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) + { + PyErr_SetString(PyExc_TypeError, "buffer is read-only"); + return -1; + } - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, pp, &size, WRITE_BUFFER)) - return -1; - return size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, pp, &size, WRITE_BUFFER)) + return -1; + return size; } static Py_ssize_t buffer_getsegcount(PyBufferObject *self, Py_ssize_t *lenp) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - if (lenp) - *lenp = size; - return 1; + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, 
&size, ANY_BUFFER)) + return -1; + if (lenp) + *lenp = size; + return 1; } static Py_ssize_t buffer_getcharbuf(PyBufferObject *self, Py_ssize_t idx, const char **pp) { - void *ptr; - Py_ssize_t size; - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, &ptr, &size, CHAR_BUFFER)) - return -1; - *pp = (const char *)ptr; - return size; + void *ptr; + Py_ssize_t size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, &ptr, &size, CHAR_BUFFER)) + return -1; + *pp = (const char *)ptr; + return size; } void init_bufferobject(void) @@ -789,67 +789,65 @@ } static PySequenceMethods buffer_as_sequence = { - (lenfunc)buffer_length, /*sq_length*/ - (binaryfunc)buffer_concat, /*sq_concat*/ - (ssizeargfunc)buffer_repeat, /*sq_repeat*/ - (ssizeargfunc)buffer_item, /*sq_item*/ - (ssizessizeargfunc)buffer_slice, /*sq_slice*/ - (ssizeobjargproc)buffer_ass_item, /*sq_ass_item*/ - (ssizessizeobjargproc)buffer_ass_slice, /*sq_ass_slice*/ + (lenfunc)buffer_length, /*sq_length*/ + (binaryfunc)buffer_concat, /*sq_concat*/ + (ssizeargfunc)buffer_repeat, /*sq_repeat*/ + (ssizeargfunc)buffer_item, /*sq_item*/ + (ssizessizeargfunc)buffer_slice, /*sq_slice*/ + (ssizeobjargproc)buffer_ass_item, /*sq_ass_item*/ + (ssizessizeobjargproc)buffer_ass_slice, /*sq_ass_slice*/ }; static PyMappingMethods buffer_as_mapping = { - (lenfunc)buffer_length, - (binaryfunc)buffer_subscript, - (objobjargproc)buffer_ass_subscript, + (lenfunc)buffer_length, + (binaryfunc)buffer_subscript, + (objobjargproc)buffer_ass_subscript, }; static PyBufferProcs buffer_as_buffer = { - (readbufferproc)buffer_getreadbuf, - (writebufferproc)buffer_getwritebuf, - (segcountproc)buffer_getsegcount, - (charbufferproc)buffer_getcharbuf, + (readbufferproc)buffer_getreadbuf, + (writebufferproc)buffer_getwritebuf, + (segcountproc)buffer_getsegcount, + 
(charbufferproc)buffer_getcharbuf, }; PyTypeObject PyBuffer_Type = { - PyObject_HEAD_INIT(NULL) + PyVarObject_HEAD_INIT(NULL, 0) + "buffer", + sizeof(PyBufferObject), 0, - "buffer", - sizeof(PyBufferObject), - 0, - (destructor)buffer_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - (cmpfunc)buffer_compare, /* tp_compare */ - (reprfunc)buffer_repr, /* tp_repr */ - 0, /* tp_as_number */ - &buffer_as_sequence, /* tp_as_sequence */ - &buffer_as_mapping, /* tp_as_mapping */ - (hashfunc)buffer_hash, /* tp_hash */ - 0, /* tp_call */ - (reprfunc)buffer_str, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - &buffer_as_buffer, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GETCHARBUFFER, /* tp_flags */ - buffer_doc, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - buffer_new, /* tp_new */ + (destructor)buffer_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + (cmpfunc)buffer_compare, /* tp_compare */ + (reprfunc)buffer_repr, /* tp_repr */ + 0, /* tp_as_number */ + &buffer_as_sequence, /* tp_as_sequence */ + &buffer_as_mapping, /* tp_as_mapping */ + (hashfunc)buffer_hash, /* tp_hash */ + 0, /* tp_call */ + (reprfunc)buffer_str, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + &buffer_as_buffer, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GETCHARBUFFER, /* tp_flags */ + buffer_doc, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* 
tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + buffer_new, /* tp_new */ }; - diff --git a/pypy/module/cpyext/src/cobject.c b/pypy/module/cpyext/src/cobject.c --- a/pypy/module/cpyext/src/cobject.c +++ b/pypy/module/cpyext/src/cobject.c @@ -50,6 +50,10 @@ PyCObject_AsVoidPtr(PyObject *self) { if (self) { + if (PyCapsule_CheckExact(self)) { + const char *name = PyCapsule_GetName(self); + return (void *)PyCapsule_GetPointer(self, name); + } if (self->ob_type == &PyCObject_Type) return ((PyCObject *)self)->cobject; PyErr_SetString(PyExc_TypeError, diff --git a/pypy/module/cpyext/src/getargs.c b/pypy/module/cpyext/src/getargs.c --- a/pypy/module/cpyext/src/getargs.c +++ b/pypy/module/cpyext/src/getargs.c @@ -7,349 +7,348 @@ #ifdef __cplusplus -extern "C" { +extern "C" { #endif - int PyArg_Parse(PyObject *, const char *, ...); int PyArg_ParseTuple(PyObject *, const char *, ...); int PyArg_VaParse(PyObject *, const char *, va_list); int PyArg_ParseTupleAndKeywords(PyObject *, PyObject *, - const char *, char **, ...); + const char *, char **, ...); int PyArg_VaParseTupleAndKeywords(PyObject *, PyObject *, - const char *, char **, va_list); + const char *, char **, va_list); #define FLAG_COMPAT 1 #define FLAG_SIZE_T 2 -typedef int (*destr_t)(PyObject *, void *); - - -/* Keep track of "objects" that have been allocated or initialized and - which will need to be deallocated or cleaned up somehow if overall - parsing fails. 
-*/ -typedef struct { - void *item; - destr_t destructor; -} freelistentry_t; - -typedef struct { - int first_available; - freelistentry_t *entries; -} freelist_t; - /* Forward */ static int vgetargs1(PyObject *, const char *, va_list *, int); static void seterror(int, const char *, int *, const char *, const char *); -static char *convertitem(PyObject *, const char **, va_list *, int, int *, - char *, size_t, freelist_t *); +static char *convertitem(PyObject *, const char **, va_list *, int, int *, + char *, size_t, PyObject **); static char *converttuple(PyObject *, const char **, va_list *, int, - int *, char *, size_t, int, freelist_t *); + int *, char *, size_t, int, PyObject **); static char *convertsimple(PyObject *, const char **, va_list *, int, char *, - size_t, freelist_t *); + size_t, PyObject **); static Py_ssize_t convertbuffer(PyObject *, void **p, char **); static int getbuffer(PyObject *, Py_buffer *, char**); static int vgetargskeywords(PyObject *, PyObject *, - const char *, char **, va_list *, int); + const char *, char **, va_list *, int); static char *skipitem(const char **, va_list *, int); int PyArg_Parse(PyObject *args, const char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_COMPAT); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_COMPAT); + va_end(va); + return retval; } int _PyArg_Parse_SizeT(PyObject *args, char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_ParseTuple(PyObject *args, const char *format, ...) 
{ - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, 0); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, 0); + va_end(va); + return retval; } int _PyArg_ParseTuple_SizeT(PyObject *args, char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_SIZE_T); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_VaParse(PyObject *args, const char *format, va_list va) { - va_list lva; + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - return vgetargs1(args, format, &lva, 0); + return vgetargs1(args, format, &lva, 0); } int _PyArg_VaParse_SizeT(PyObject *args, char *format, va_list va) { - va_list lva; + va_list lva; #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - return vgetargs1(args, format, &lva, FLAG_SIZE_T); + return vgetargs1(args, format, &lva, FLAG_SIZE_T); } /* Handle cleanup of allocated memory in case of exception */ +#define GETARGS_CAPSULE_NAME_CLEANUP_PTR "getargs.cleanup_ptr" +#define GETARGS_CAPSULE_NAME_CLEANUP_BUFFER "getargs.cleanup_buffer" + +static void +cleanup_ptr(PyObject *self) +{ + void *ptr = PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_PTR); + if (ptr) { + PyMem_FREE(ptr); + } +} + +static void +cleanup_buffer(PyObject *self) +{ + Py_buffer *ptr = (Py_buffer *)PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_BUFFER); + if (ptr) { + PyBuffer_Release(ptr); + } +} + static int -cleanup_ptr(PyObject 
*self, void *ptr) +addcleanup(void *ptr, PyObject **freelist, PyCapsule_Destructor destr) { - if (ptr) { - PyMem_FREE(ptr); + PyObject *cobj; + const char *name; + + if (!*freelist) { + *freelist = PyList_New(0); + if (!*freelist) { + destr(ptr); + return -1; + } } + + if (destr == cleanup_ptr) { + name = GETARGS_CAPSULE_NAME_CLEANUP_PTR; + } else if (destr == cleanup_buffer) { + name = GETARGS_CAPSULE_NAME_CLEANUP_BUFFER; + } else { + return -1; + } + cobj = PyCapsule_New(ptr, name, destr); + if (!cobj) { + destr(ptr); + return -1; + } + if (PyList_Append(*freelist, cobj)) { + Py_DECREF(cobj); + return -1; + } + Py_DECREF(cobj); return 0; } static int -cleanup_buffer(PyObject *self, void *ptr) +cleanreturn(int retval, PyObject *freelist) { - Py_buffer *buf = (Py_buffer *)ptr; - if (buf) { - PyBuffer_Release(buf); + if (freelist && retval != 0) { + /* We were successful, reset the destructors so that they + don't get called. */ + Py_ssize_t len = PyList_GET_SIZE(freelist), i; + for (i = 0; i < len; i++) + PyCapsule_SetDestructor(PyList_GET_ITEM(freelist, i), NULL); } - return 0; + Py_XDECREF(freelist); + return retval; } -static int -addcleanup(void *ptr, freelist_t *freelist, destr_t destructor) -{ - int index; - - index = freelist->first_available; - freelist->first_available += 1; - - freelist->entries[index].item = ptr; - freelist->entries[index].destructor = destructor; - - return 0; -} - -static int -cleanreturn(int retval, freelist_t *freelist) -{ - int index; - - if (retval == 0) { - /* A failure occurred, therefore execute all of the cleanup - functions. 
- */ - for (index = 0; index < freelist->first_available; ++index) { - freelist->entries[index].destructor(NULL, - freelist->entries[index].item); - } - } - PyMem_Free(freelist->entries); - return retval; -} static int vgetargs1(PyObject *args, const char *format, va_list *p_va, int flags) { - char msgbuf[256]; - int levels[32]; - const char *fname = NULL; - const char *message = NULL; - int min = -1; - int max = 0; - int level = 0; - int endfmt = 0; - const char *formatsave = format; - Py_ssize_t i, len; - char *msg; - freelist_t freelist = {0, NULL}; - int compat = flags & FLAG_COMPAT; + char msgbuf[256]; + int levels[32]; + const char *fname = NULL; + const char *message = NULL; + int min = -1; + int max = 0; + int level = 0; + int endfmt = 0; + const char *formatsave = format; + Py_ssize_t i, len; + char *msg; + PyObject *freelist = NULL; + int compat = flags & FLAG_COMPAT; - assert(compat || (args != (PyObject*)NULL)); - flags = flags & ~FLAG_COMPAT; + assert(compat || (args != (PyObject*)NULL)); + flags = flags & ~FLAG_COMPAT; - while (endfmt == 0) { - int c = *format++; - switch (c) { - case '(': - if (level == 0) - max++; - level++; - if (level >= 30) - Py_FatalError("too many tuple nesting levels " - "in argument format string"); - break; - case ')': - if (level == 0) - Py_FatalError("excess ')' in getargs format"); - else - level--; - break; - case '\0': - endfmt = 1; - break; - case ':': - fname = format; - endfmt = 1; - break; - case ';': - message = format; - endfmt = 1; - break; - default: - if (level == 0) { - if (c == 'O') - max++; - else if (isalpha(Py_CHARMASK(c))) { - if (c != 'e') /* skip encoded */ - max++; - } else if (c == '|') - min = max; - } - break; - } - } - - if (level != 0) - Py_FatalError(/* '(' */ "missing ')' in getargs format"); - - if (min < 0) - min = max; - - format = formatsave; - - freelist.entries = PyMem_New(freelistentry_t, max); + while (endfmt == 0) { + int c = *format++; + switch (c) { + case '(': + if (level == 0) + 
max++; + level++; + if (level >= 30) + Py_FatalError("too many tuple nesting levels " + "in argument format string"); + break; + case ')': + if (level == 0) + Py_FatalError("excess ')' in getargs format"); + else + level--; + break; + case '\0': + endfmt = 1; + break; + case ':': + fname = format; + endfmt = 1; + break; + case ';': + message = format; + endfmt = 1; + break; + default: + if (level == 0) { + if (c == 'O') + max++; + else if (isalpha(Py_CHARMASK(c))) { + if (c != 'e') /* skip encoded */ + max++; + } else if (c == '|') + min = max; + } + break; + } + } - if (compat) { - if (max == 0) { - if (args == NULL) - return cleanreturn(1, &freelist); - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.200s%s takes no arguments", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()"); - PyErr_SetString(PyExc_TypeError, msgbuf); - return cleanreturn(0, &freelist); - } - else if (min == 1 && max == 1) { - if (args == NULL) { - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.200s%s takes at least one argument", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()"); - PyErr_SetString(PyExc_TypeError, msgbuf); - return cleanreturn(0, &freelist); - } - msg = convertitem(args, &format, p_va, flags, levels, - msgbuf, sizeof(msgbuf), &freelist); - if (msg == NULL) - return cleanreturn(1, &freelist); - seterror(levels[0], msg, levels+1, fname, message); - return cleanreturn(0, &freelist); - } - else { - PyErr_SetString(PyExc_SystemError, - "old style getargs format uses new features"); - return cleanreturn(0, &freelist); - } - } - - if (!PyTuple_Check(args)) { - PyErr_SetString(PyExc_SystemError, - "new style getargs format but argument is not a tuple"); - return cleanreturn(0, &freelist); - } - - len = PyTuple_GET_SIZE(args); - - if (len < min || max < len) { - if (message == NULL) { - PyOS_snprintf(msgbuf, sizeof(msgbuf), - "%.150s%s takes %s %d argument%s " - "(%ld given)", - fname==NULL ? "function" : fname, - fname==NULL ? "" : "()", - min==max ? 
"exactly" - : len < min ? "at least" : "at most", - len < min ? min : max, - (len < min ? min : max) == 1 ? "" : "s", - Py_SAFE_DOWNCAST(len, Py_ssize_t, long)); - message = msgbuf; - } - PyErr_SetString(PyExc_TypeError, message); - return cleanreturn(0, &freelist); - } - - for (i = 0; i < len; i++) { - if (*format == '|') - format++; - msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va, - flags, levels, msgbuf, - sizeof(msgbuf), &freelist); - if (msg) { - seterror(i+1, msg, levels, fname, message); - return cleanreturn(0, &freelist); - } - } + if (level != 0) + Py_FatalError(/* '(' */ "missing ')' in getargs format"); - if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) && - *format != '(' && - *format != '|' && *format != ':' && *format != ';') { - PyErr_Format(PyExc_SystemError, - "bad format string: %.200s", formatsave); - return cleanreturn(0, &freelist); - } - - return cleanreturn(1, &freelist); + if (min < 0) + min = max; + + format = formatsave; + + if (compat) { + if (max == 0) { + if (args == NULL) + return 1; + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.200s%s takes no arguments", + fname==NULL ? "function" : fname, + fname==NULL ? "" : "()"); + PyErr_SetString(PyExc_TypeError, msgbuf); + return 0; + } + else if (min == 1 && max == 1) { + if (args == NULL) { + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.200s%s takes at least one argument", + fname==NULL ? "function" : fname, + fname==NULL ? 
"" : "()"); + PyErr_SetString(PyExc_TypeError, msgbuf); + return 0; + } + msg = convertitem(args, &format, p_va, flags, levels, + msgbuf, sizeof(msgbuf), &freelist); + if (msg == NULL) + return cleanreturn(1, freelist); + seterror(levels[0], msg, levels+1, fname, message); + return cleanreturn(0, freelist); + } + else { + PyErr_SetString(PyExc_SystemError, + "old style getargs format uses new features"); + return 0; + } + } + + if (!PyTuple_Check(args)) { + PyErr_SetString(PyExc_SystemError, + "new style getargs format but argument is not a tuple"); + return 0; + } + + len = PyTuple_GET_SIZE(args); + + if (len < min || max < len) { + if (message == NULL) { + PyOS_snprintf(msgbuf, sizeof(msgbuf), + "%.150s%s takes %s %d argument%s " + "(%ld given)", + fname==NULL ? "function" : fname, + fname==NULL ? "" : "()", + min==max ? "exactly" + : len < min ? "at least" : "at most", + len < min ? min : max, + (len < min ? min : max) == 1 ? "" : "s", + Py_SAFE_DOWNCAST(len, Py_ssize_t, long)); + message = msgbuf; + } + PyErr_SetString(PyExc_TypeError, message); + return 0; + } + + for (i = 0; i < len; i++) { + if (*format == '|') + format++; + msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va, + flags, levels, msgbuf, + sizeof(msgbuf), &freelist); + if (msg) { + seterror(i+1, msg, levels, fname, msg); + return cleanreturn(0, freelist); + } + } + + if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) && + *format != '(' && + *format != '|' && *format != ':' && *format != ';') { + PyErr_Format(PyExc_SystemError, + "bad format string: %.200s", formatsave); + return cleanreturn(0, freelist); + } + + return cleanreturn(1, freelist); } @@ -358,37 +357,37 @@ seterror(int iarg, const char *msg, int *levels, const char *fname, const char *message) { - char buf[512]; - int i; - char *p = buf; + char buf[512]; + int i; + char *p = buf; - if (PyErr_Occurred()) - return; - else if (message == NULL) { - if (fname != NULL) { - PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname); - p 
+= strlen(p); - } - if (iarg != 0) { - PyOS_snprintf(p, sizeof(buf) - (p - buf), - "argument %d", iarg); - i = 0; - p += strlen(p); - while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) { - PyOS_snprintf(p, sizeof(buf) - (p - buf), - ", item %d", levels[i]-1); - p += strlen(p); - i++; - } - } - else { - PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument"); - p += strlen(p); - } - PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg); - message = buf; - } - PyErr_SetString(PyExc_TypeError, message); + if (PyErr_Occurred()) + return; + else if (message == NULL) { + if (fname != NULL) { + PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname); + p += strlen(p); + } + if (iarg != 0) { + PyOS_snprintf(p, sizeof(buf) - (p - buf), + "argument %d", iarg); + i = 0; + p += strlen(p); + while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) { + PyOS_snprintf(p, sizeof(buf) - (p - buf), + ", item %d", levels[i]-1); + p += strlen(p); + i++; + } + } + else { + PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument"); + p += strlen(p); + } + PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg); + message = buf; + } + PyErr_SetString(PyExc_TypeError, message); } @@ -404,85 +403,84 @@ *p_va is undefined, *levels is a 0-terminated list of item numbers, *msgbuf contains an error message, whose format is: - "must be , not ", where: - is the name of the expected type, and - is the name of the actual type, + "must be , not ", where: + is the name of the expected type, and + is the name of the actual type, and msgbuf is returned. 
*/ static char * converttuple(PyObject *arg, const char **p_format, va_list *p_va, int flags, - int *levels, char *msgbuf, size_t bufsize, int toplevel, - freelist_t *freelist) + int *levels, char *msgbuf, size_t bufsize, int toplevel, + PyObject **freelist) { - int level = 0; - int n = 0; - const char *format = *p_format; - int i; - - for (;;) { - int c = *format++; - if (c == '(') { - if (level == 0) - n++; - level++; - } - else if (c == ')') { - if (level == 0) - break; - level--; - } - else if (c == ':' || c == ';' || c == '\0') - break; - else if (level == 0 && isalpha(Py_CHARMASK(c))) - n++; - } - - if (!PySequence_Check(arg) || PyString_Check(arg)) { - levels[0] = 0; - PyOS_snprintf(msgbuf, bufsize, - toplevel ? "expected %d arguments, not %.50s" : - "must be %d-item sequence, not %.50s", - n, - arg == Py_None ? "None" : arg->ob_type->tp_name); - return msgbuf; - } - - if ((i = PySequence_Size(arg)) != n) { - levels[0] = 0; - PyOS_snprintf(msgbuf, bufsize, - toplevel ? "expected %d arguments, not %d" : - "must be sequence of length %d, not %d", - n, i); - return msgbuf; - } + int level = 0; + int n = 0; + const char *format = *p_format; + int i; - format = *p_format; - for (i = 0; i < n; i++) { - char *msg; - PyObject *item; + for (;;) { + int c = *format++; + if (c == '(') { + if (level == 0) + n++; + level++; + } + else if (c == ')') { + if (level == 0) + break; + level--; + } + else if (c == ':' || c == ';' || c == '\0') + break; + else if (level == 0 && isalpha(Py_CHARMASK(c))) + n++; + } + + if (!PySequence_Check(arg) || PyString_Check(arg)) { + levels[0] = 0; + PyOS_snprintf(msgbuf, bufsize, + toplevel ? "expected %d arguments, not %.50s" : + "must be %d-item sequence, not %.50s", + n, + arg == Py_None ? "None" : arg->ob_type->tp_name); + return msgbuf; + } + + if ((i = PySequence_Size(arg)) != n) { + levels[0] = 0; + PyOS_snprintf(msgbuf, bufsize, + toplevel ? 
"expected %d arguments, not %d" : + "must be sequence of length %d, not %d", + n, i); + return msgbuf; + } + + format = *p_format; + for (i = 0; i < n; i++) { + char *msg; + PyObject *item; item = PySequence_GetItem(arg, i); - if (item == NULL) { - PyErr_Clear(); - levels[0] = i+1; - levels[1] = 0; - strncpy(msgbuf, "is not retrievable", - bufsize); - return msgbuf; - } - PyPy_Borrow(arg, item); - msg = convertitem(item, &format, p_va, flags, levels+1, - msgbuf, bufsize, freelist); + if (item == NULL) { + PyErr_Clear(); + levels[0] = i+1; + levels[1] = 0; + strncpy(msgbuf, "is not retrievable", bufsize); + return msgbuf; + } + PyPy_Borrow(arg, item); + msg = convertitem(item, &format, p_va, flags, levels+1, + msgbuf, bufsize, freelist); /* PySequence_GetItem calls tp->sq_item, which INCREFs */ Py_XDECREF(item); - if (msg != NULL) { - levels[0] = i+1; - return msg; - } - } + if (msg != NULL) { + levels[0] = i+1; + return msg; + } + } - *p_format = format; - return NULL; + *p_format = format; + return NULL; } @@ -490,45 +488,45 @@ static char * convertitem(PyObject *arg, const char **p_format, va_list *p_va, int flags, - int *levels, char *msgbuf, size_t bufsize, freelist_t *freelist) + int *levels, char *msgbuf, size_t bufsize, PyObject **freelist) { - char *msg; - const char *format = *p_format; - - if (*format == '(' /* ')' */) { - format++; - msg = converttuple(arg, &format, p_va, flags, levels, msgbuf, - bufsize, 0, freelist); - if (msg == NULL) - format++; - } - else { - msg = convertsimple(arg, &format, p_va, flags, - msgbuf, bufsize, freelist); - if (msg != NULL) - levels[0] = 0; - } - if (msg == NULL) - *p_format = format; - return msg; + char *msg; + const char *format = *p_format; + + if (*format == '(' /* ')' */) { + format++; + msg = converttuple(arg, &format, p_va, flags, levels, msgbuf, + bufsize, 0, freelist); + if (msg == NULL) + format++; + } + else { + msg = convertsimple(arg, &format, p_va, flags, + msgbuf, bufsize, freelist); + if (msg != NULL) + 
levels[0] = 0; + } + if (msg == NULL) + *p_format = format; + return msg; } #define UNICODE_DEFAULT_ENCODING(arg) \ - _PyUnicode_AsDefaultEncodedString(arg, NULL) + _PyUnicode_AsDefaultEncodedString(arg, NULL) /* Format an error message generated by convertsimple(). */ static char * converterr(const char *expected, PyObject *arg, char *msgbuf, size_t bufsize) { - assert(expected != NULL); - assert(arg != NULL); - PyOS_snprintf(msgbuf, bufsize, - "must be %.50s, not %.50s", expected, - arg == Py_None ? "None" : arg->ob_type->tp_name); - return msgbuf; + assert(expected != NULL); + assert(arg != NULL); + PyOS_snprintf(msgbuf, bufsize, + "must be %.50s, not %.50s", expected, + arg == Py_None ? "None" : arg->ob_type->tp_name); + return msgbuf; } #define CONV_UNICODE "(unicode conversion error)" @@ -536,14 +534,28 @@ /* explicitly check for float arguments when integers are expected. For now * signal a warning. Returns true if an exception was raised. */ static int +float_argument_warning(PyObject *arg) +{ + if (PyFloat_Check(arg) && + PyErr_Warn(PyExc_DeprecationWarning, + "integer argument expected, got float" )) + return 1; + else + return 0; +} + +/* explicitly check for float arguments when integers are expected. Raises + TypeError and returns true for float arguments. */ +static int float_argument_error(PyObject *arg) { - if (PyFloat_Check(arg) && - PyErr_Warn(PyExc_DeprecationWarning, - "integer argument expected, got float" )) - return 1; - else - return 0; + if (PyFloat_Check(arg)) { + PyErr_SetString(PyExc_TypeError, + "integer argument expected, got float"); + return 1; + } + else + return 0; } /* Convert a non-tuple argument. 
Return NULL if conversion went OK, @@ -557,836 +569,839 @@ static char * convertsimple(PyObject *arg, const char **p_format, va_list *p_va, int flags, - char *msgbuf, size_t bufsize, freelist_t *freelist) + char *msgbuf, size_t bufsize, PyObject **freelist) { - /* For # codes */ -#define FETCH_SIZE int *q=NULL;Py_ssize_t *q2=NULL;\ - if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \ - else q=va_arg(*p_va, int*); -#define STORE_SIZE(s) if (flags & FLAG_SIZE_T) *q2=s; else *q=s; + /* For # codes */ +#define FETCH_SIZE int *q=NULL;Py_ssize_t *q2=NULL;\ + if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \ + else q=va_arg(*p_va, int*); +#define STORE_SIZE(s) \ + if (flags & FLAG_SIZE_T) \ + *q2=s; \ + else { \ + if (INT_MAX < s) { \ + PyErr_SetString(PyExc_OverflowError, \ + "size does not fit in an int"); \ + return converterr("", arg, msgbuf, bufsize); \ + } \ + *q=s; \ + } #define BUFFER_LEN ((flags & FLAG_SIZE_T) ? *q2:*q) - const char *format = *p_format; - char c = *format++; + const char *format = *p_format; + char c = *format++; #ifdef Py_USING_UNICODE - PyObject *uarg; -#endif - - switch (c) { - - case 'b': { /* unsigned byte -- very short int */ - char *p = va_arg(*p_va, char *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned byte integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival > UCHAR_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned byte integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else - *p = (unsigned char) ival; - break; - } - - case 'B': {/* byte sized bitfield - both signed and unsigned - values allowed */ - char *p = va_arg(*p_va, char *); - long ival; - if 
(float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsUnsignedLongMask(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = (unsigned char) ival; - break; - } - - case 'h': {/* signed short int */ - short *p = va_arg(*p_va, short *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival < SHRT_MIN) { - PyErr_SetString(PyExc_OverflowError, - "signed short integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival > SHRT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "signed short integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else - *p = (short) ival; - break; - } - - case 'H': { /* short int sized bitfield, both signed and - unsigned allowed */ - unsigned short *p = va_arg(*p_va, unsigned short *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsUnsignedLongMask(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = (unsigned short) ival; - break; - } - case 'i': {/* signed int */ - int *p = va_arg(*p_va, int *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else if (ival > INT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "signed integer is greater than maximum"); - return converterr("integer", arg, msgbuf, bufsize); - } - else if (ival < INT_MIN) { - PyErr_SetString(PyExc_OverflowError, - "signed integer is less than minimum"); - return converterr("integer", arg, msgbuf, bufsize); - } - 
else - *p = ival; - break; - } - case 'I': { /* int sized bitfield, both signed and - unsigned allowed */ - unsigned int *p = va_arg(*p_va, unsigned int *); - unsigned int ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = (unsigned int)PyInt_AsUnsignedLongMask(arg); - if (ival == (unsigned int)-1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = ival; - break; - } - case 'n': /* Py_ssize_t */ -#if SIZEOF_SIZE_T != SIZEOF_LONG - { - Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *); - Py_ssize_t ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsSsize_t(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - *p = ival; - break; - } -#endif - /* Fall through from 'n' to 'l' if Py_ssize_t is int */ - case 'l': {/* long int */ - long *p = va_arg(*p_va, long *); - long ival; - if (float_argument_error(arg)) - return converterr("integer", arg, msgbuf, bufsize); - ival = PyInt_AsLong(arg); - if (ival == -1 && PyErr_Occurred()) - return converterr("integer", arg, msgbuf, bufsize); - else - *p = ival; - break; - } - - case 'k': { /* long sized bitfield */ - unsigned long *p = va_arg(*p_va, unsigned long *); - unsigned long ival; - if (PyInt_Check(arg)) - ival = PyInt_AsUnsignedLongMask(arg); - else if (PyLong_Check(arg)) - ival = PyLong_AsUnsignedLongMask(arg); - else - return converterr("integer", arg, msgbuf, bufsize); - *p = ival; - break; - } - -#ifdef HAVE_LONG_LONG - case 'L': {/* PY_LONG_LONG */ - PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * ); - PY_LONG_LONG ival = PyLong_AsLongLong( arg ); - if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) { - return converterr("long", arg, msgbuf, bufsize); - } else { - *p = ival; - } - break; - } - - case 'K': { /* long long sized bitfield */ - unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *); - unsigned PY_LONG_LONG ival; 
-            if (PyInt_Check(arg))
-                ival = PyInt_AsUnsignedLongMask(arg);
-            else if (PyLong_Check(arg))
-                ival = PyLong_AsUnsignedLongLongMask(arg);
-            else
-                return converterr("integer", arg, msgbuf, bufsize);
-            *p = ival;
-            break;
-        }
-#endif // HAVE_LONG_LONG
-
-        case 'f': {/* float */
-            float *p = va_arg(*p_va, float *);
-            double dval = PyFloat_AsDouble(arg);
-            if (PyErr_Occurred())
-                return converterr("float", arg, msgbuf, bufsize);
-            else
-                *p = (float) dval;
-            break;
-        }
-
-        case 'd': {/* double */
-            double *p = va_arg(*p_va, double *);
-            double dval = PyFloat_AsDouble(arg);
-            if (PyErr_Occurred())
-                return converterr("float", arg, msgbuf, bufsize);
-            else
-                *p = dval;
-            break;
-        }
-
-#ifndef WITHOUT_COMPLEX
-        case 'D': {/* complex double */
-            Py_complex *p = va_arg(*p_va, Py_complex *);
-            Py_complex cval;
-            cval = PyComplex_AsCComplex(arg);
-            if (PyErr_Occurred())
-                return converterr("complex", arg, msgbuf, bufsize);
-            else
-                *p = cval;
-            break;
-        }
-#endif /* WITHOUT_COMPLEX */
-
-        case 'c': {/* char */
-            char *p = va_arg(*p_va, char *);
-            if (PyString_Check(arg) && PyString_Size(arg) == 1)
-                *p = PyString_AS_STRING(arg)[0];
-            else
-                return converterr("char", arg, msgbuf, bufsize);
-            break;
-        }
-        case 's': {/* string */
-            if (*format == '*') {
-                Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
-
-                if (PyString_Check(arg)) {
-                    fflush(stdout);
-                    PyBuffer_FillInfo(p, arg,
-                        PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
-                        1, 0);
-                }
-#ifdef Py_USING_UNICODE
-                else if (PyUnicode_Check(arg)) {
-#if 0
-                    uarg = UNICODE_DEFAULT_ENCODING(arg);
-                    if (uarg == NULL)
-                        return converterr(CONV_UNICODE,
-                                          arg, msgbuf, bufsize);
-                    PyBuffer_FillInfo(p, arg,
-                        PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
-                        1, 0);
-#else
-                    return converterr("string or buffer", arg, msgbuf, bufsize);
-#endif
-                }
-#endif
-                else { /* any buffer-like object */
-                    char *buf;
-                    if (getbuffer(arg, p, &buf) < 0)
-                        return converterr(buf, arg, msgbuf, bufsize);
-                }
-                if (addcleanup(p, freelist, cleanup_buffer)) {
-                    return converterr(
-                        "(cleanup problem)",
-                        arg, msgbuf, bufsize);
-                }
-                format++;
-            } else if (*format == '#') {
-                void **p = (void **)va_arg(*p_va, char **);
-                FETCH_SIZE;
-
-                if (PyString_Check(arg)) {
-                    *p = PyString_AS_STRING(arg);
-                    STORE_SIZE(PyString_GET_SIZE(arg));
-                }
-#ifdef Py_USING_UNICODE
-                else if (PyUnicode_Check(arg)) {
-                    uarg = UNICODE_DEFAULT_ENCODING(arg);
-                    if (uarg == NULL)
-                        return converterr(CONV_UNICODE,
-                                          arg, msgbuf, bufsize);
-                    *p = PyString_AS_STRING(uarg);
-                    STORE_SIZE(PyString_GET_SIZE(uarg));
-                }
-#endif
-                else { /* any buffer-like object */
-                    char *buf;
-                    Py_ssize_t count = convertbuffer(arg, p, &buf);
-                    if (count < 0)
-                        return converterr(buf, arg, msgbuf, bufsize);
-                    STORE_SIZE(count);
-                }
-                format++;
-            } else {
-                char **p = va_arg(*p_va, char **);
-
-                if (PyString_Check(arg))
-                    *p = PyString_AS_STRING(arg);
-#ifdef Py_USING_UNICODE
-                else if (PyUnicode_Check(arg)) {
-                    uarg = UNICODE_DEFAULT_ENCODING(arg);
-                    if (uarg == NULL)
-                        return converterr(CONV_UNICODE,
-                                          arg, msgbuf, bufsize);
-                    *p = PyString_AS_STRING(uarg);
-                }
-#endif
-                else
-                    return converterr("string", arg, msgbuf, bufsize);
-                if ((Py_ssize_t)strlen(*p) != PyString_Size(arg))
-                    return converterr("string without null bytes",
-                                      arg, msgbuf, bufsize);
-            }
-            break;
-        }
-
-        case 'z': {/* string, may be NULL (None) */
-            if (*format == '*') {
-                Py_FatalError("'*' format not supported in PyArg_*\n");
-#if 0
-                Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
-
-                if (arg == Py_None)
-                    PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0);
-                else if (PyString_Check(arg)) {
-                    PyBuffer_FillInfo(p, arg,
-                        PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
-                        1, 0);
-                }
-#ifdef Py_USING_UNICODE
-                else if (PyUnicode_Check(arg)) {
-                    uarg = UNICODE_DEFAULT_ENCODING(arg);
-                    if (uarg == NULL)
-                        return converterr(CONV_UNICODE,
-                                          arg, msgbuf, bufsize);
-                    PyBuffer_FillInfo(p, arg,
-                        PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
-                        1, 0);
-                }
-#endif
-                else { /* any buffer-like object */
-                    char *buf;
-                    if (getbuffer(arg, p, &buf) < 0)
-                        return converterr(buf, arg, msgbuf, bufsize);
-                }
-                if (addcleanup(p, freelist, cleanup_buffer)) {
-                    return converterr(
-                        "(cleanup problem)",
-                        arg, msgbuf, bufsize);
-                }
-                format++;
-#endif
-            } else if (*format == '#') { /* any buffer-like object */
-                void **p = (void **)va_arg(*p_va, char **);
-                FETCH_SIZE;
-
-                if (arg == Py_None) {
-                    *p = 0;
-                    STORE_SIZE(0);
-                }
-                else if (PyString_Check(arg)) {
-                    *p = PyString_AS_STRING(arg);
-                    STORE_SIZE(PyString_GET_SIZE(arg));
-                }
-#ifdef Py_USING_UNICODE
-                else if (PyUnicode_Check(arg)) {
-                    uarg = UNICODE_DEFAULT_ENCODING(arg);
-                    if (uarg == NULL)
-                        return converterr(CONV_UNICODE,
-                                          arg, msgbuf, bufsize);
-                    *p = PyString_AS_STRING(uarg);
-                    STORE_SIZE(PyString_GET_SIZE(uarg));
-                }
-#endif
-                else { /* any buffer-like object */
-                    char *buf;
-                    Py_ssize_t count = convertbuffer(arg, p, &buf);
-                    if (count < 0)
-                        return converterr(buf, arg, msgbuf, bufsize);
-                    STORE_SIZE(count);
-                }
-                format++;
-            } else {
-                char **p = va_arg(*p_va, char **);
-
-                if (arg == Py_None)
-                    *p = 0;
-                else if (PyString_Check(arg))
-                    *p = PyString_AS_STRING(arg);
-#ifdef Py_USING_UNICODE
-                else if (PyUnicode_Check(arg)) {
-                    uarg = UNICODE_DEFAULT_ENCODING(arg);
-                    if (uarg == NULL)
-                        return converterr(CONV_UNICODE,
-                                          arg, msgbuf, bufsize);
-                    *p = PyString_AS_STRING(uarg);
-                }
-#endif
-                else
-                    return converterr("string or None",
-                                      arg, msgbuf, bufsize);
-                if (*format == '#') {
-                    FETCH_SIZE;
-                    assert(0); /* XXX redundant with if-case */
-                    if (arg == Py_None)
-                        *q = 0;
-                    else
-                        *q = PyString_Size(arg);
-                    format++;
-                }
-                else if (*p != NULL &&
-                         (Py_ssize_t)strlen(*p) != PyString_Size(arg))
-                    return converterr(
-                        "string without null bytes or None",
-                        arg, msgbuf, bufsize);
-            }
-            break;
-        }
-        case 'e': {/* encoded string */
-            char **buffer;
-            const char *encoding;
-            PyObject *s;
-            Py_ssize_t size;
-            int recode_strings;
-
-            /* Get 'e' parameter: the encoding name */
-            encoding = (const char *)va_arg(*p_va, const char *);
-#ifdef Py_USING_UNICODE
-            if (encoding == NULL)
-                encoding = PyUnicode_GetDefaultEncoding();
+    PyObject *uarg;
 #endif
-            /* Get output buffer parameter:
-               's' (recode all objects via Unicode) or
-               't' (only recode non-string objects)
-            */
-            if (*format == 's')
-                recode_strings = 1;
-            else if (*format == 't')
-                recode_strings = 0;
-            else
-                return converterr(
-                    "(unknown parser marker combination)",
-                    arg, msgbuf, bufsize);
-            buffer = (char **)va_arg(*p_va, char **);
-            format++;
-            if (buffer == NULL)
-                return converterr("(buffer is NULL)",
-                                  arg, msgbuf, bufsize);
-
-            /* Encode object */
-            if (!recode_strings && PyString_Check(arg)) {
-                s = arg;
-                Py_INCREF(s);
-            }
-            else {
+    switch (c) {
+
+    case 'b': { /* unsigned byte -- very short int */
+        char *p = va_arg(*p_va, char *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival < 0) {
+            PyErr_SetString(PyExc_OverflowError,
+            "unsigned byte integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival > UCHAR_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+            "unsigned byte integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = (unsigned char) ival;
+        break;
+    }
+
+    case 'B': {/* byte sized bitfield - both signed and unsigned
+                  values allowed */
+        char *p = va_arg(*p_va, char *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsUnsignedLongMask(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = (unsigned char) ival;
+        break;
+    }
+
+    case 'h': {/* signed short int */
+        short *p = va_arg(*p_va, short *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival < SHRT_MIN) {
+            PyErr_SetString(PyExc_OverflowError,
+            "signed short integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival > SHRT_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+            "signed short integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = (short) ival;
+        break;
+    }
+
+    case 'H': { /* short int sized bitfield, both signed and
+                   unsigned allowed */
+        unsigned short *p = va_arg(*p_va, unsigned short *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsUnsignedLongMask(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = (unsigned short) ival;
+        break;
+    }
+
+    case 'i': {/* signed int */
+        int *p = va_arg(*p_va, int *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival > INT_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+            "signed integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival < INT_MIN) {
+            PyErr_SetString(PyExc_OverflowError,
+            "signed integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'I': { /* int sized bitfield, both signed and
+                   unsigned allowed */
+        unsigned int *p = va_arg(*p_va, unsigned int *);
+        unsigned int ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = (unsigned int)PyInt_AsUnsignedLongMask(arg);
+        if (ival == (unsigned int)-1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'n': /* Py_ssize_t */
+#if SIZEOF_SIZE_T != SIZEOF_LONG
+    {
+        Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *);
+        Py_ssize_t ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsSsize_t(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+#endif
+    /* Fall through from 'n' to 'l' if Py_ssize_t is int */
+    case 'l': {/* long int */
+        long *p = va_arg(*p_va, long *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'k': { /* long sized bitfield */
+        unsigned long *p = va_arg(*p_va, unsigned long *);
+        unsigned long ival;
+        if (PyInt_Check(arg))
+            ival = PyInt_AsUnsignedLongMask(arg);
+        else if (PyLong_Check(arg))
+            ival = PyLong_AsUnsignedLongMask(arg);
+        else
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+
+#ifdef HAVE_LONG_LONG
+    case 'L': {/* PY_LONG_LONG */
+        PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * );
+        PY_LONG_LONG ival;
+        if (float_argument_warning(arg))
+            return converterr("long", arg, msgbuf, bufsize);
+        ival = PyLong_AsLongLong(arg);
+        if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) {
+            return converterr("long", arg, msgbuf, bufsize);
+        } else {
+            *p = ival;
+        }
+        break;
+    }
+
+    case 'K': { /* long long sized bitfield */
+        unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *);
+        unsigned PY_LONG_LONG ival;
+        if (PyInt_Check(arg))
+            ival = PyInt_AsUnsignedLongMask(arg);
+        else if (PyLong_Check(arg))
+            ival = PyLong_AsUnsignedLongLongMask(arg);
+        else
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+#endif
+
+    case 'f': {/* float */
+        float *p = va_arg(*p_va, float *);
+        double dval = PyFloat_AsDouble(arg);
+        if (PyErr_Occurred())
+            return converterr("float", arg, msgbuf, bufsize);
+        else
+            *p = (float) dval;
+        break;
+    }
+
+    case 'd': {/* double */
+        double *p = va_arg(*p_va, double *);
+        double dval = PyFloat_AsDouble(arg);
+        if (PyErr_Occurred())
+            return converterr("float", arg, msgbuf, bufsize);
+        else
+            *p = dval;
+        break;
+    }
+
+#ifndef WITHOUT_COMPLEX
+    case 'D': {/* complex double */
+        Py_complex *p = va_arg(*p_va, Py_complex *);
+        Py_complex cval;
+        cval = PyComplex_AsCComplex(arg);
+        if (PyErr_Occurred())
+            return converterr("complex", arg, msgbuf, bufsize);
+        else
+            *p = cval;
+        break;
+    }
+#endif /* WITHOUT_COMPLEX */
+
+    case 'c': {/* char */
+        char *p = va_arg(*p_va, char *);
+        if (PyString_Check(arg) && PyString_Size(arg) == 1)
+            *p = PyString_AS_STRING(arg)[0];
+        else
+            return converterr("char", arg, msgbuf, bufsize);
+        break;
+    }
+
+    case 's': {/* string */
+        if (*format == '*') {
+            Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
+
+            if (PyString_Check(arg)) {
+                PyBuffer_FillInfo(p, arg,
+                    PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
+                    1, 0);
+            }
 #ifdef Py_USING_UNICODE
-                PyObject *u;
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                PyBuffer_FillInfo(p, arg,
+                    PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
+                    1, 0);
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                if (getbuffer(arg, p, &buf) < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+            }
+            if (addcleanup(p, freelist, cleanup_buffer)) {
+                return converterr(
+                    "(cleanup problem)",
+                    arg, msgbuf, bufsize);
+            }
+            format++;
+        } else if (*format == '#') {
+            void **p = (void **)va_arg(*p_va, char **);
+            FETCH_SIZE;
 
-                /* Convert object to Unicode */
-                u = PyUnicode_FromObject(arg);
-                if (u == NULL)
-                    return converterr(
-                        "string or unicode or text buffer",
-                        arg, msgbuf, bufsize);
-
-                /* Encode object; use default error handling */
-                s = PyUnicode_AsEncodedString(u,
-                                              encoding,
-                                              NULL);
-                Py_DECREF(u);
-                if (s == NULL)
-                    return converterr("(encoding failed)",
-                                      arg, msgbuf, bufsize);
-                if (!PyString_Check(s)) {
-                    Py_DECREF(s);
-                    return converterr(
-                        "(encoder failed to return a string)",
-                        arg, msgbuf, bufsize);
-                }
+            if (PyString_Check(arg)) {
+                *p = PyString_AS_STRING(arg);
+                STORE_SIZE(PyString_GET_SIZE(arg));
+            }
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                *p = PyString_AS_STRING(uarg);
+                STORE_SIZE(PyString_GET_SIZE(uarg));
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                Py_ssize_t count = convertbuffer(arg, p, &buf);
+                if (count < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+                STORE_SIZE(count);
+            }
+            format++;
+        } else {
+            char **p = va_arg(*p_va, char **);
+
+            if (PyString_Check(arg))
+                *p = PyString_AS_STRING(arg);
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                *p = PyString_AS_STRING(uarg);
+            }
+#endif
+            else
+                return converterr("string", arg, msgbuf, bufsize);
+            if ((Py_ssize_t)strlen(*p) != PyString_Size(arg))
+                return converterr("string without null bytes",
+                                  arg, msgbuf, bufsize);
+        }
+        break;
+    }
+
+    case 'z': {/* string, may be NULL (None) */
+        if (*format == '*') {
+            Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
+
+            if (arg == Py_None)
+                PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0);
+            else if (PyString_Check(arg)) {
+                PyBuffer_FillInfo(p, arg,
+                    PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
+                    1, 0);
+            }
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                PyBuffer_FillInfo(p, arg,
+                    PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
+                    1, 0);
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                if (getbuffer(arg, p, &buf) < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+            }
+            if (addcleanup(p, freelist, cleanup_buffer)) {
+                return converterr(
+                    "(cleanup problem)",
+                    arg, msgbuf, bufsize);
+            }
+            format++;
+        } else if (*format == '#') { /* any buffer-like object */
+            void **p = (void **)va_arg(*p_va, char **);
+            FETCH_SIZE;
+
+            if (arg == Py_None) {
+                *p = 0;
+                STORE_SIZE(0);
+            }
+            else if (PyString_Check(arg)) {
+                *p = PyString_AS_STRING(arg);
+                STORE_SIZE(PyString_GET_SIZE(arg));
+            }
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                *p = PyString_AS_STRING(uarg);
+                STORE_SIZE(PyString_GET_SIZE(uarg));
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                Py_ssize_t count = convertbuffer(arg, p, &buf);
+                if (count < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+                STORE_SIZE(count);
+            }
+            format++;
+        } else {
+            char **p = va_arg(*p_va, char **);
+
+            if (arg == Py_None)
+                *p = 0;
+            else if (PyString_Check(arg))
+                *p = PyString_AS_STRING(arg);
+#ifdef Py_USING_UNICODE
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                *p = PyString_AS_STRING(uarg);
+            }
+#endif
+            else
+                return converterr("string or None",
+                                  arg, msgbuf, bufsize);
+            if (*format == '#') {
+                FETCH_SIZE;
+                assert(0); /* XXX redundant with if-case */
+                if (arg == Py_None) {
+                    STORE_SIZE(0);
+                } else {
+                    STORE_SIZE(PyString_Size(arg));
+                }
+                format++;
+            }
+            else if (*p != NULL &&
+                     (Py_ssize_t)strlen(*p) != PyString_Size(arg))
+                return converterr(
+                    "string without null bytes or None",
+                    arg, msgbuf, bufsize);
+        }
+        break;
+    }
+
+    case 'e': {/* encoded string */
+        char **buffer;
+        const char *encoding;
+        PyObject *s;
+        Py_ssize_t size;
+        int recode_strings;
+
+        /* Get 'e' parameter: the encoding name */
+        encoding = (const char *)va_arg(*p_va, const char *);
+#ifdef Py_USING_UNICODE
+        if (encoding == NULL)
+            encoding = PyUnicode_GetDefaultEncoding();
+#endif
+
+        /* Get output buffer parameter:
+           's' (recode all objects via Unicode) or
+           't' (only recode non-string objects)
+        */
+        if (*format == 's')
+            recode_strings = 1;
+        else if (*format == 't')
+            recode_strings = 0;
+        else
+            return converterr(
+                "(unknown parser marker combination)",
+                arg, msgbuf, bufsize);
+        buffer = (char **)va_arg(*p_va, char **);
+        format++;
+        if (buffer == NULL)
+            return converterr("(buffer is NULL)",
+                              arg, msgbuf, bufsize);
+
+        /* Encode object */
+        if (!recode_strings && PyString_Check(arg)) {
+            s = arg;
+            Py_INCREF(s);
+        }
+        else {
+#ifdef Py_USING_UNICODE
+            PyObject *u;
+
+            /* Convert object to Unicode */
+            u = PyUnicode_FromObject(arg);
+            if (u == NULL)
+                return converterr(
+                    "string or unicode or text buffer",
+                    arg, msgbuf, bufsize);
+
+            /* Encode object; use default error handling */
+            s = PyUnicode_AsEncodedString(u,
+                                          encoding,
+                                          NULL);
+            Py_DECREF(u);
+            if (s == NULL)
+                return converterr("(encoding failed)",
+                                  arg, msgbuf, bufsize);
+            if (!PyString_Check(s)) {
+                Py_DECREF(s);
+                return converterr(
+                    "(encoder failed to return a string)",
+                    arg, msgbuf, bufsize);
+            }
 #else
-                return converterr("string", arg, msgbuf, bufsize);
+            return converterr("string", arg, msgbuf, bufsize);
 #endif
-            }
-            size = PyString_GET_SIZE(s);
+        }
+        size = PyString_GET_SIZE(s);
 
-            /* Write output; output is guaranteed to be 0-terminated */
-            if (*format == '#') {
-                /* Using buffer length parameter '#':
-
-                   - if *buffer is NULL, a new buffer of the
-                   needed size is allocated and the data
-                   copied into it; *buffer is updated to point
-                   to the new buffer; the caller is
-                   responsible for PyMem_Free()ing it after
-                   usage
+        /* Write output; output is guaranteed to be 0-terminated */
+        if (*format == '#') {
+            /* Using buffer length parameter '#':
 
-                   - if *buffer is not NULL, the data is
-                   copied to *buffer; *buffer_len has to be
-                   set to the size of the buffer on input;
-                   buffer overflow is signalled with an error;
-                   buffer has to provide enough room for the
-                   encoded string plus the trailing 0-byte
-
-                   - in both cases, *buffer_len is updated to
-                   the size of the buffer /excluding/ the
-                   trailing 0-byte
-
-                */
-                FETCH_SIZE;
+               - if *buffer is NULL, a new buffer of the
+               needed size is allocated and the data
+               copied into it; *buffer is updated to point
+               to the new buffer; the caller is
+               responsible for PyMem_Free()ing it after
+               usage
 
-                format++;
-                if (q == NULL && q2 == NULL) {
-                    Py_DECREF(s);
-                    return converterr(
-                        "(buffer_len is NULL)",
-                        arg, msgbuf, bufsize);
-                }
-                if (*buffer == NULL) {
-                    *buffer = PyMem_NEW(char, size + 1);
-                    if (*buffer == NULL) {
-                        Py_DECREF(s);
-                        return converterr(
-                            "(memory error)",
-                            arg, msgbuf, bufsize);
-                    }
-                    if (addcleanup(*buffer, freelist, cleanup_ptr)) {
-                        Py_DECREF(s);
-                        return converterr(
-                            "(cleanup problem)",
-                            arg, msgbuf, bufsize);
-                    }
-                } else {
-                    if (size + 1 > BUFFER_LEN) {
-                        Py_DECREF(s);
-                        return converterr(
-                            "(buffer overflow)",
-                            arg, msgbuf, bufsize);
-                    }
-                }
-                memcpy(*buffer,
-                       PyString_AS_STRING(s),
-                       size + 1);
-                STORE_SIZE(size);
-            } else {
-                /* Using a 0-terminated buffer:
-
-                   - the encoded string has to be 0-terminated
-                   for this variant to work; if it is not, an
-                   error raised
+               - if *buffer is not NULL, the data is
+               copied to *buffer; *buffer_len has to be
+               set to the size of the buffer on input;
+               buffer overflow is signalled with an error;
+               buffer has to provide enough room for the
+               encoded string plus the trailing 0-byte
 
-                   - a new buffer of the needed size is
-                   allocated and the data copied into it;
-                   *buffer is updated to point to the new
-                   buffer; the caller is responsible for
-                   PyMem_Free()ing it after usage
+               - in both cases, *buffer_len is updated to
+               the size of the buffer /excluding/ the
+               trailing 0-byte
 
-                */
-                if ((Py_ssize_t)strlen(PyString_AS_STRING(s))
-                                                != size) {
-                    Py_DECREF(s);
-                    return converterr(
-                        "encoded string without NULL bytes",
-                        arg, msgbuf, bufsize);
-                }
-                *buffer = PyMem_NEW(char, size + 1);
-                if (*buffer == NULL) {
-                    Py_DECREF(s);
-                    return converterr("(memory error)",
-                                      arg, msgbuf, bufsize);
-                }
-                if (addcleanup(*buffer, freelist, cleanup_ptr)) {
-                    Py_DECREF(s);
-                    return converterr("(cleanup problem)",
-                                      arg, msgbuf, bufsize);
-                }
-                memcpy(*buffer,
-                       PyString_AS_STRING(s),
-                       size + 1);
-            }
-            Py_DECREF(s);
-            break;
-        }
+            */
+            FETCH_SIZE;
+
+            format++;
+            if (q == NULL && q2 == NULL) {
+                Py_DECREF(s);
+                return converterr(
+                    "(buffer_len is NULL)",
+                    arg, msgbuf, bufsize);
+            }
+            if (*buffer == NULL) {
+                *buffer = PyMem_NEW(char, size + 1);
+                if (*buffer == NULL) {
+                    Py_DECREF(s);
+                    return converterr(
+                        "(memory error)",
+                        arg, msgbuf, bufsize);
+                }
+                if (addcleanup(*buffer, freelist, cleanup_ptr)) {
+                    Py_DECREF(s);
+                    return converterr(
+                        "(cleanup problem)",
+                        arg, msgbuf, bufsize);
+                }
+            } else {
+                if (size + 1 > BUFFER_LEN) {
+                    Py_DECREF(s);
+                    return converterr(
+                        "(buffer overflow)",
+                        arg, msgbuf, bufsize);
+                }
+            }
+            memcpy(*buffer,
+                   PyString_AS_STRING(s),
+                   size + 1);
+            STORE_SIZE(size);
+        } else {
+            /* Using a 0-terminated buffer:
+
+               - the encoded string has to be 0-terminated
+               for this variant to work; if it is not, an
+               error raised
+
+               - a new buffer of the needed size is
+               allocated and the data copied into it;
+               *buffer is updated to point to the new
+               buffer; the caller is responsible for
+               PyMem_Free()ing it after usage
+
+            */
+            if ((Py_ssize_t)strlen(PyString_AS_STRING(s))
+                                            != size) {
+                Py_DECREF(s);
+                return converterr(
+                    "encoded string without NULL bytes",
+                    arg, msgbuf, bufsize);
+            }
+            *buffer = PyMem_NEW(char, size + 1);
+            if (*buffer == NULL) {
+                Py_DECREF(s);
+                return converterr("(memory error)",
+                                  arg, msgbuf, bufsize);
+            }
+            if (addcleanup(*buffer, freelist, cleanup_ptr)) {
+                Py_DECREF(s);
+                return converterr("(cleanup problem)",
+                                  arg, msgbuf, bufsize);
+            }
+            memcpy(*buffer,
+                   PyString_AS_STRING(s),
+                   size + 1);
+        }
+        Py_DECREF(s);
+        break;
+    }
 
 #ifdef Py_USING_UNICODE
-        case 'u': {/* raw unicode buffer (Py_UNICODE *) */
-            if (*format == '#') { /* any buffer-like object */
-                void **p = (void **)va_arg(*p_va, char **);
-                FETCH_SIZE;
-                if (PyUnicode_Check(arg)) {
-                    *p = PyUnicode_AS_UNICODE(arg);
-                    STORE_SIZE(PyUnicode_GET_SIZE(arg));
-                }
-                else {
-                    return converterr("cannot convert raw buffers",
-                                      arg, msgbuf, bufsize);
-                }
-                format++;
-            } else {
-                Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **);
-                if (PyUnicode_Check(arg))
-                    *p = PyUnicode_AS_UNICODE(arg);
-                else
-                    return converterr("unicode", arg, msgbuf, bufsize);
-            }
-            break;
-        }
+    case 'u': {/* raw unicode buffer (Py_UNICODE *) */
+        if (*format == '#') { /* any buffer-like object */
+            void **p = (void **)va_arg(*p_va, char **);
+            FETCH_SIZE;
+            if (PyUnicode_Check(arg)) {
+                *p = PyUnicode_AS_UNICODE(arg);
+                STORE_SIZE(PyUnicode_GET_SIZE(arg));
+            }
+            else {
+                return converterr("cannot convert raw buffers",
+                                  arg, msgbuf, bufsize);
+            }
+            format++;
+        } else {
+            Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **);
+            if (PyUnicode_Check(arg))
+                *p = PyUnicode_AS_UNICODE(arg);
+            else
+                return converterr("unicode", arg, msgbuf, bufsize);
+        }
+        break;
+    }
 #endif
 
-        case 'S': { /* string object */
-            PyObject **p = va_arg(*p_va, PyObject **);
-            if (PyString_Check(arg))
-                *p = arg;
-            else
-                return converterr("string", arg, msgbuf, bufsize);
-            break;
-        }
-
+    case 'S': { /* string object */
+        PyObject **p = va_arg(*p_va, PyObject **);
+        if (PyString_Check(arg))
+            *p = arg;
+        else
+            return converterr("string", arg, msgbuf, bufsize);
+        break;
+    }
+
 #ifdef Py_USING_UNICODE
-        case 'U': { /* Unicode object */
-            PyObject **p = va_arg(*p_va, PyObject **);
-            if (PyUnicode_Check(arg))
-                *p = arg;
-            else
-                return converterr("unicode", arg, msgbuf, bufsize);
-            break;
-        }
+    case 'U': { /* Unicode object */
+        PyObject **p = va_arg(*p_va, PyObject **);
+        if (PyUnicode_Check(arg))
+            *p = arg;
+        else
+            return converterr("unicode", arg, msgbuf, bufsize);
+        break;
+    }
 #endif
 
-        case 'O': { /* object */
-            PyTypeObject *type;
-            PyObject **p;
-            if (*format == '!') {
-                type = va_arg(*p_va, PyTypeObject*);
-                p = va_arg(*p_va, PyObject **);
-                format++;
-                if (PyType_IsSubtype(arg->ob_type, type))
-                    *p = arg;
-                else
-                    return converterr(type->tp_name, arg, msgbuf, bufsize);
-            }
-            else if (*format == '?') {
-                inquiry pred = va_arg(*p_va, inquiry);
-                p = va_arg(*p_va, PyObject **);
-                format++;
-                if ((*pred)(arg))
-                    *p = arg;
-                else
-                    return converterr("(unspecified)",
-                                      arg, msgbuf, bufsize);
-
-            }
-            else if (*format == '&') {
-                typedef int (*converter)(PyObject *, void *);
-                converter convert = va_arg(*p_va, converter);
-                void *addr = va_arg(*p_va, void *);
-                format++;
-                if (! (*convert)(arg, addr))
-                    return converterr("(unspecified)",
-                                      arg, msgbuf, bufsize);
-            }
-            else {
-                p = va_arg(*p_va, PyObject **);
-                *p = arg;
-            }
-            break;
-        }
-
-        case 'w': { /* memory buffer, read-write access */
-            Py_FatalError("'w' unsupported\n");
-#if 0
-            void **p = va_arg(*p_va, void **);
-            void *res;
-            PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
-            Py_ssize_t count;
+    case 'O': { /* object */
+        PyTypeObject *type;
+        PyObject **p;
+        if (*format == '!') {
+            type = va_arg(*p_va, PyTypeObject*);
+            p = va_arg(*p_va, PyObject **);
+            format++;
+            if (PyType_IsSubtype(arg->ob_type, type))
+                *p = arg;
+            else
+                return converterr(type->tp_name, arg, msgbuf, bufsize);
 
-            if (pb && pb->bf_releasebuffer && *format != '*')
-                /* Buffer must be released, yet caller does not use
-                   the Py_buffer protocol. */
-                return converterr("pinned buffer", arg, msgbuf, bufsize);
+        }
+        else if (*format == '?') {
+            inquiry pred = va_arg(*p_va, inquiry);
+            p = va_arg(*p_va, PyObject **);
+            format++;
+            if ((*pred)(arg))
+                *p = arg;
+            else
+                return converterr("(unspecified)",
+                                  arg, msgbuf, bufsize);
 
-            if (pb && pb->bf_getbuffer && *format == '*') {
-                /* Caller is interested in Py_buffer, and the object
-                   supports it directly. */
-                format++;
-                if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) {
-                    PyErr_Clear();
-                    return converterr("read-write buffer", arg, msgbuf, bufsize);
-                }
-                if (addcleanup(p, freelist, cleanup_buffer)) {
-                    return converterr(
-                        "(cleanup problem)",
-                        arg, msgbuf, bufsize);
-                }
-                if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C'))
-                    return converterr("contiguous buffer", arg, msgbuf, bufsize);
-                break;
-            }
+        }
+        else if (*format == '&') {
+            typedef int (*converter)(PyObject *, void *);
+            converter convert = va_arg(*p_va, converter);
+            void *addr = va_arg(*p_va, void *);
+            format++;
+            if (! (*convert)(arg, addr))
+                return converterr("(unspecified)",
+                                  arg, msgbuf, bufsize);
+        }
+        else {
+            p = va_arg(*p_va, PyObject **);
+            *p = arg;
+        }
+        break;
+    }
 
-            if (pb == NULL ||
-                pb->bf_getwritebuffer == NULL ||
-                pb->bf_getsegcount == NULL)
-                return converterr("read-write buffer", arg, msgbuf, bufsize);
-            if ((*pb->bf_getsegcount)(arg, NULL) != 1)
-                return converterr("single-segment read-write buffer",
-                                  arg, msgbuf, bufsize);
-            if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0)
-                return converterr("(unspecified)", arg, msgbuf, bufsize);
-            if (*format == '*') {
-                PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0);
-                format++;
-            }
-            else {
-                *p = res;
-                if (*format == '#') {
-                    FETCH_SIZE;
-                    STORE_SIZE(count);
-                    format++;
-                }
-            }
-            break;
-#endif
-        }
-
-        case 't': { /* 8-bit character buffer, read-only access */
-            char **p = va_arg(*p_va, char **);
-            PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
-            Py_ssize_t count;
-#if 0
-            if (*format++ != '#')
-                return converterr(
-                    "invalid use of 't' format character",
-                    arg, msgbuf, bufsize);
-#endif
-            if (!PyType_HasFeature(arg->ob_type,
-                                   Py_TPFLAGS_HAVE_GETCHARBUFFER)
-#if 0
-                || pb == NULL || pb->bf_getcharbuffer == NULL ||
-                pb->bf_getsegcount == NULL
-#endif
-                )
-                return converterr(
-                    "string or read-only character buffer",
-                    arg, msgbuf, bufsize);
-#if 0
-            if (pb->bf_getsegcount(arg, NULL) != 1)
-                return converterr(
-                    "string or single-segment read-only buffer",
-                    arg, msgbuf, bufsize);
+    case 'w': { /* memory buffer, read-write access */
+        void **p = va_arg(*p_va, void **);
+        void *res;
+        PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
+        Py_ssize_t count;
 
-            if (pb->bf_releasebuffer)
-                return converterr(
-                    "string or pinned buffer",
-                    arg, msgbuf, bufsize);
-#endif
-            count = pb->bf_getcharbuffer(arg, 0, p);
-#if 0
-            if (count < 0)
-                return converterr("(unspecified)", arg, msgbuf, bufsize);
-#endif
-            {
-                FETCH_SIZE;
-                STORE_SIZE(count);
-                ++format;
-            }
-            break;
-        }
-        default:
-            return converterr("impossible", arg, msgbuf, bufsize);
-
-        }
-
-        *p_format = format;
-        return NULL;
+        if (pb && pb->bf_releasebuffer && *format != '*')
+            /* Buffer must be released, yet caller does not use
+               the Py_buffer protocol. */
+            return converterr("pinned buffer", arg, msgbuf, bufsize);
+
+        if (pb && pb->bf_getbuffer && *format == '*') {
+            /* Caller is interested in Py_buffer, and the object
+               supports it directly. */
+            format++;
+            if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) {
+                PyErr_Clear();
+                return converterr("read-write buffer", arg, msgbuf, bufsize);
+            }
+            if (addcleanup(p, freelist, cleanup_buffer)) {
+                return converterr(
+                    "(cleanup problem)",
+                    arg, msgbuf, bufsize);
+            }
+            if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C'))
+                return converterr("contiguous buffer", arg, msgbuf, bufsize);
+            break;
+        }
+
+        if (pb == NULL ||
+            pb->bf_getwritebuffer == NULL ||
+            pb->bf_getsegcount == NULL)
+            return converterr("read-write buffer", arg, msgbuf, bufsize);
+        if ((*pb->bf_getsegcount)(arg, NULL) != 1)
+            return converterr("single-segment read-write buffer",
+                              arg, msgbuf, bufsize);
+        if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0)
+            return converterr("(unspecified)", arg, msgbuf, bufsize);
+        if (*format == '*') {
+            PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0);
+            format++;
+        }
+        else {
+            *p = res;
+            if (*format == '#') {
+                FETCH_SIZE;
+                STORE_SIZE(count);
+                format++;
+            }
+        }
+        break;
+    }
+
+    case 't': { /* 8-bit character buffer, read-only access */
+        char **p = va_arg(*p_va, char **);
+        PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
+        Py_ssize_t count;
+
+        if (*format++ != '#')
+            return converterr(
+                "invalid use of 't' format character",
+                arg, msgbuf, bufsize);
+        if (!PyType_HasFeature(arg->ob_type,
+                               Py_TPFLAGS_HAVE_GETCHARBUFFER) ||
+            pb == NULL || pb->bf_getcharbuffer == NULL ||
+            pb->bf_getsegcount == NULL)
+            return converterr(
+                "string or read-only character buffer",
+                arg, msgbuf, bufsize);
+
+        if (pb->bf_getsegcount(arg, NULL) != 1)
+            return converterr(
+                "string or single-segment read-only buffer",
+                arg, msgbuf, bufsize);
+
+        if (pb->bf_releasebuffer)
+            return converterr(
+                "string or pinned buffer",
+                arg, msgbuf, bufsize);
+
+        count = pb->bf_getcharbuffer(arg, 0, p);
+        if (count < 0)
+            return converterr("(unspecified)", arg, msgbuf, bufsize);
+        {
+            FETCH_SIZE;
+            STORE_SIZE(count);
+        }
+        break;
+    }
+
+    default:
+        return converterr("impossible", arg, msgbuf, bufsize);
+
+    }
+
+    *p_format = format;
+    return NULL;
 }
 
 static Py_ssize_t
 convertbuffer(PyObject *arg, void **p, char **errmsg)
 {
-    PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
-    Py_ssize_t count;
-    if (pb == NULL ||
-        pb->bf_getreadbuffer == NULL ||
-        pb->bf_getsegcount == NULL ||
-        pb->bf_releasebuffer != NULL) {
-        *errmsg = "string or read-only buffer";
-        return -1;
-    }
-    if ((*pb->bf_getsegcount)(arg, NULL) != 1) {
-        *errmsg = "string or single-segment read-only buffer";
-        return -1;
-    }
-    if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) {
-        *errmsg = "(unspecified)";
-    }
-    return count;
+    PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
+    Py_ssize_t count;
+    if (pb == NULL ||
+        pb->bf_getreadbuffer == NULL ||
+        pb->bf_getsegcount == NULL ||
+        pb->bf_releasebuffer != NULL) {
+        *errmsg = "string or read-only buffer";
+        return -1;
+    }
+    if ((*pb->bf_getsegcount)(arg, NULL) != 1) {
+        *errmsg = "string or single-segment read-only buffer";
+        return -1;
+    }
+    if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) {
+        *errmsg = "(unspecified)";
+    }
+    return count;
 }
 
 static int
 getbuffer(PyObject *arg, Py_buffer *view, char **errmsg)
 {
-    void *buf;
-    Py_ssize_t count;
-    PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
-    if (pb == NULL) {
-        *errmsg = "string or buffer";
-        return -1;
-    }
-    if (pb->bf_getbuffer) {
-        if (pb->bf_getbuffer(arg, view, 0) < 0) {
-            *errmsg = "convertible to a buffer";
-            return -1;
-        }
-        if (!PyBuffer_IsContiguous(view, 'C')) {
-            *errmsg = "contiguous buffer";
-            return -1;
-        }
-        return 0;
-    }
+    void *buf;
+    Py_ssize_t count;
+    PyBufferProcs *pb = arg->ob_type->tp_as_buffer;
+    if (pb == NULL) {
+        *errmsg = "string or buffer";
+        return -1;
+    }
+    if (pb->bf_getbuffer) {
+        if (pb->bf_getbuffer(arg, view, 0) < 0) {
+            *errmsg = "convertible to a buffer";
+            return -1;
+        }
+        if (!PyBuffer_IsContiguous(view, 'C')) {
+            *errmsg = "contiguous buffer";
+            return -1;
+        }
+        return 0;
+    }
 
-    count = convertbuffer(arg, &buf, errmsg);
-    if (count < 0) {
-        *errmsg = "convertible to a buffer";
-        return count;
-    }
-    PyBuffer_FillInfo(view, NULL, buf, count, 1, 0);
-    return 0;
+    count = convertbuffer(arg, &buf, errmsg);
+    if (count < 0) {
+        *errmsg = "convertible to a buffer";
+        return count;
+    }
+    PyBuffer_FillInfo(view, arg, buf, count, 1, 0);
+    return 0;
 }
 
 /* Support for keyword arguments donated by
@@ -1395,501 +1410,487 @@
 /* Return false (0) for error, else true. */
 int
 PyArg_ParseTupleAndKeywords(PyObject *args,
-                            PyObject *keywords,
-                            const char *format,
-                            char **kwlist, ...)
+                            PyObject *keywords,
+                            const char *format,
+                            char **kwlist, ...)
{ - int retval; - va_list va; + int retval; + va_list va; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } - va_start(va, kwlist); - retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0); - va_end(va); - return retval; + va_start(va, kwlist); + retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0); + va_end(va); + return retval; } int _PyArg_ParseTupleAndKeywords_SizeT(PyObject *args, - PyObject *keywords, - const char *format, - char **kwlist, ...) + PyObject *keywords, + const char *format, + char **kwlist, ...) { - int retval; - va_list va; + int retval; + va_list va; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } - va_start(va, kwlist); - retval = vgetargskeywords(args, keywords, format, - kwlist, &va, FLAG_SIZE_T); - va_end(va); - return retval; + va_start(va, kwlist); + retval = vgetargskeywords(args, keywords, format, + kwlist, &va, FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_VaParseTupleAndKeywords(PyObject *args, PyObject *keywords, - const char *format, + const char *format, char **kwlist, va_list va) { - int retval; - va_list lva; + int retval; + va_list lva; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + 
(keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0); - return retval; + retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0); + return retval; } int _PyArg_VaParseTupleAndKeywords_SizeT(PyObject *args, - PyObject *keywords, - const char *format, - char **kwlist, va_list va) + PyObject *keywords, + const char *format, + char **kwlist, va_list va) { - int retval; - va_list lva; + int retval; + va_list lva; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - retval = vgetargskeywords(args, keywords, format, - kwlist, &lva, FLAG_SIZE_T); - return retval; + retval = vgetargskeywords(args, keywords, format, + kwlist, &lva, FLAG_SIZE_T); + return retval; } #define IS_END_OF_FORMAT(c) (c == '\0' || c == ';' || c == ':') static int vgetargskeywords(PyObject *args, PyObject *keywords, const char *format, - char **kwlist, va_list *p_va, int flags) + char **kwlist, va_list *p_va, int flags) { - char msgbuf[512]; - int levels[32]; - const char *fname, *msg, *custom_msg, *keyword; - int min = INT_MAX; - int i, len, nargs, nkeywords; - PyObject *current_arg; - freelist_t freelist = {0, NULL}; + char 
msgbuf[512]; + int levels[32]; + const char *fname, *msg, *custom_msg, *keyword; + int min = INT_MAX; + int i, len, nargs, nkeywords; + PyObject *freelist = NULL, *current_arg; + assert(args != NULL && PyTuple_Check(args)); + assert(keywords == NULL || PyDict_Check(keywords)); + assert(format != NULL); + assert(kwlist != NULL); + assert(p_va != NULL); - assert(args != NULL && PyTuple_Check(args)); - assert(keywords == NULL || PyDict_Check(keywords)); - assert(format != NULL); - assert(kwlist != NULL); - assert(p_va != NULL); + /* grab the function name or custom error msg first (mutually exclusive) */ + fname = strchr(format, ':'); + if (fname) { + fname++; + custom_msg = NULL; + } + else { + custom_msg = strchr(format,';'); + if (custom_msg) + custom_msg++; + } - /* grab the function name or custom error msg first (mutually exclusive) */ - fname = strchr(format, ':'); - if (fname) { - fname++; - custom_msg = NULL; - } - else { - custom_msg = strchr(format,';'); - if (custom_msg) - custom_msg++; - } + /* scan kwlist and get greatest possible nbr of args */ + for (len=0; kwlist[len]; len++) + continue; - /* scan kwlist and get greatest possible nbr of args */ - for (len=0; kwlist[len]; len++) - continue; + nargs = PyTuple_GET_SIZE(args); + nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords); + if (nargs + nkeywords > len) { + PyErr_Format(PyExc_TypeError, "%s%s takes at most %d " + "argument%s (%d given)", + (fname == NULL) ? "function" : fname, + (fname == NULL) ? "" : "()", + len, + (len == 1) ? 
"" : "s", + nargs + nkeywords); + return 0; + } - freelist.entries = PyMem_New(freelistentry_t, len); + /* convert tuple args and keyword args in same loop, using kwlist to drive process */ + for (i = 0; i < len; i++) { + keyword = kwlist[i]; + if (*format == '|') { + min = i; + format++; + } + if (IS_END_OF_FORMAT(*format)) { + PyErr_Format(PyExc_RuntimeError, + "More keyword list entries (%d) than " + "format specifiers (%d)", len, i); + return cleanreturn(0, freelist); + } + current_arg = NULL; + if (nkeywords) { + current_arg = PyDict_GetItemString(keywords, keyword); + } + if (current_arg) { + --nkeywords; + if (i < nargs) { + /* arg present in tuple and in dict */ + PyErr_Format(PyExc_TypeError, + "Argument given by name ('%s') " + "and position (%d)", + keyword, i+1); + return cleanreturn(0, freelist); + } + } + else if (nkeywords && PyErr_Occurred()) + return cleanreturn(0, freelist); + else if (i < nargs) + current_arg = PyTuple_GET_ITEM(args, i); - nargs = PyTuple_GET_SIZE(args); - nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords); - if (nargs + nkeywords > len) { - PyErr_Format(PyExc_TypeError, "%s%s takes at most %d " - "argument%s (%d given)", - (fname == NULL) ? "function" : fname, - (fname == NULL) ? "" : "()", - len, - (len == 1) ? 
"" : "s", - nargs + nkeywords); - return cleanreturn(0, &freelist); - } + if (current_arg) { + msg = convertitem(current_arg, &format, p_va, flags, + levels, msgbuf, sizeof(msgbuf), &freelist); + if (msg) { + seterror(i+1, msg, levels, fname, custom_msg); + return cleanreturn(0, freelist); + } + continue; + } - /* convert tuple args and keyword args in same loop, using kwlist to drive process */ - for (i = 0; i < len; i++) { - keyword = kwlist[i]; - if (*format == '|') { - min = i; - format++; - } - if (IS_END_OF_FORMAT(*format)) { - PyErr_Format(PyExc_RuntimeError, - "More keyword list entries (%d) than " - "format specifiers (%d)", len, i); - return cleanreturn(0, &freelist); - } - current_arg = NULL; - if (nkeywords) { - current_arg = PyDict_GetItemString(keywords, keyword); - } - if (current_arg) { - --nkeywords; - if (i < nargs) { - /* arg present in tuple and in dict */ - PyErr_Format(PyExc_TypeError, - "Argument given by name ('%s') " - "and position (%d)", - keyword, i+1); - return cleanreturn(0, &freelist); - } - } - else if (nkeywords && PyErr_Occurred()) - return cleanreturn(0, &freelist); - else if (i < nargs) - current_arg = PyTuple_GET_ITEM(args, i); - - if (current_arg) { - msg = convertitem(current_arg, &format, p_va, flags, - levels, msgbuf, sizeof(msgbuf), &freelist); - if (msg) { - seterror(i+1, msg, levels, fname, custom_msg); - return cleanreturn(0, &freelist); - } - continue; - } + if (i < min) { + PyErr_Format(PyExc_TypeError, "Required argument " + "'%s' (pos %d) not found", + keyword, i+1); + return cleanreturn(0, freelist); + } + /* current code reports success when all required args + * fulfilled and no keyword args left, with no further + * validation. XXX Maybe skip this in debug build ? 
+ */ + if (!nkeywords) + return cleanreturn(1, freelist); - if (i < min) { - PyErr_Format(PyExc_TypeError, "Required argument " - "'%s' (pos %d) not found", - keyword, i+1); - return cleanreturn(0, &freelist); - } - /* current code reports success when all required args - * fulfilled and no keyword args left, with no further - * validation. XXX Maybe skip this in debug build ? - */ - if (!nkeywords) - return cleanreturn(1, &freelist); + /* We are into optional args, skip thru to any remaining + * keyword args */ + msg = skipitem(&format, p_va, flags); + if (msg) { + PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg, + format); + return cleanreturn(0, freelist); + } + } - /* We are into optional args, skip thru to any remaining - * keyword args */ - msg = skipitem(&format, p_va, flags); - if (msg) { - PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg, - format); - return cleanreturn(0, &freelist); - } - } + if (!IS_END_OF_FORMAT(*format) && *format != '|') { + PyErr_Format(PyExc_RuntimeError, + "more argument specifiers than keyword list entries " + "(remaining format:'%s')", format); + return cleanreturn(0, freelist); + } - if (!IS_END_OF_FORMAT(*format) && *format != '|') { - PyErr_Format(PyExc_RuntimeError, - "more argument specifiers than keyword list entries " - "(remaining format:'%s')", format); - return cleanreturn(0, &freelist); - } + /* make sure there are no extraneous keyword arguments */ + if (nkeywords > 0) { + PyObject *key, *value; + Py_ssize_t pos = 0; + while (PyDict_Next(keywords, &pos, &key, &value)) { + int match = 0; + char *ks; + if (!PyString_Check(key)) { + PyErr_SetString(PyExc_TypeError, + "keywords must be strings"); + return cleanreturn(0, freelist); + } + ks = PyString_AsString(key); + for (i = 0; i < len; i++) { + if (!strcmp(ks, kwlist[i])) { + match = 1; + break; + } + } + if (!match) { + PyErr_Format(PyExc_TypeError, + "'%s' is an invalid keyword " + "argument for this function", + ks); + return cleanreturn(0, freelist); + } + } + } - 
/* make sure there are no extraneous keyword arguments */ - if (nkeywords > 0) { - PyObject *key, *value; - Py_ssize_t pos = 0; - while (PyDict_Next(keywords, &pos, &key, &value)) { - int match = 0; - char *ks; - if (!PyString_Check(key)) { - PyErr_SetString(PyExc_TypeError, - "keywords must be strings"); - return cleanreturn(0, &freelist); - } - ks = PyString_AsString(key); - for (i = 0; i < len; i++) { - if (!strcmp(ks, kwlist[i])) { - match = 1; - break; - } - } - if (!match) { - PyErr_Format(PyExc_TypeError, - "'%s' is an invalid keyword " - "argument for this function", - ks); - return cleanreturn(0, &freelist); - } - } - } - - return cleanreturn(1, &freelist); + return cleanreturn(1, freelist); } static char * skipitem(const char **p_format, va_list *p_va, int flags) { - const char *format = *p_format; - char c = *format++; - - switch (c) { + const char *format = *p_format; + char c = *format++; - /* simple codes - * The individual types (second arg of va_arg) are irrelevant */ + switch (c) { - case 'b': /* byte -- very short int */ - case 'B': /* byte as bitfield */ - case 'h': /* short int */ - case 'H': /* short int as bitfield */ - case 'i': /* int */ - case 'I': /* int sized bitfield */ - case 'l': /* long int */ - case 'k': /* long int sized bitfield */ + /* simple codes + * The individual types (second arg of va_arg) are irrelevant */ + + case 'b': /* byte -- very short int */ + case 'B': /* byte as bitfield */ + case 'h': /* short int */ + case 'H': /* short int as bitfield */ + case 'i': /* int */ + case 'I': /* int sized bitfield */ + case 'l': /* long int */ + case 'k': /* long int sized bitfield */ #ifdef HAVE_LONG_LONG - case 'L': /* PY_LONG_LONG */ - case 'K': /* PY_LONG_LONG sized bitfield */ + case 'L': /* PY_LONG_LONG */ + case 'K': /* PY_LONG_LONG sized bitfield */ #endif - case 'f': /* float */ - case 'd': /* double */ + case 'f': /* float */ + case 'd': /* double */ #ifndef WITHOUT_COMPLEX - case 'D': /* complex double */ + case 'D': /* 
complex double */ #endif - case 'c': /* char */ - { - (void) va_arg(*p_va, void *); - break; - } + case 'c': /* char */ + { + (void) va_arg(*p_va, void *); + break; + } - case 'n': /* Py_ssize_t */ - { - (void) va_arg(*p_va, Py_ssize_t *); - break; - } - - /* string codes */ - - case 'e': /* string with encoding */ - { - (void) va_arg(*p_va, const char *); - if (!(*format == 's' || *format == 't')) - /* after 'e', only 's' and 't' is allowed */ - goto err; - format++; - /* explicit fallthrough to string cases */ - } - - case 's': /* string */ - case 'z': /* string or None */ + case 'n': /* Py_ssize_t */ + { + (void) va_arg(*p_va, Py_ssize_t *); + break; + } + + /* string codes */ + + case 'e': /* string with encoding */ + { + (void) va_arg(*p_va, const char *); + if (!(*format == 's' || *format == 't')) + /* after 'e', only 's' and 't' is allowed */ + goto err; + format++; + /* explicit fallthrough to string cases */ + } + + case 's': /* string */ + case 'z': /* string or None */ #ifdef Py_USING_UNICODE - case 'u': /* unicode string */ + case 'u': /* unicode string */ #endif - case 't': /* buffer, read-only */ - case 'w': /* buffer, read-write */ - { - (void) va_arg(*p_va, char **); - if (*format == '#') { - if (flags & FLAG_SIZE_T) - (void) va_arg(*p_va, Py_ssize_t *); - else - (void) va_arg(*p_va, int *); - format++; - } else if ((c == 's' || c == 'z') && *format == '*') { - format++; - } - break; - } + case 't': /* buffer, read-only */ + case 'w': /* buffer, read-write */ + { + (void) va_arg(*p_va, char **); + if (*format == '#') { + if (flags & FLAG_SIZE_T) + (void) va_arg(*p_va, Py_ssize_t *); + else + (void) va_arg(*p_va, int *); + format++; + } else if ((c == 's' || c == 'z') && *format == '*') { + format++; + } + break; + } - /* object codes */ + /* object codes */ - case 'S': /* string object */ + case 'S': /* string object */ #ifdef Py_USING_UNICODE - case 'U': /* unicode string object */ + case 'U': /* unicode string object */ #endif - { - (void) 
va_arg(*p_va, PyObject **); - break; - } - - case 'O': /* object */ - { - if (*format == '!') { - format++; - (void) va_arg(*p_va, PyTypeObject*); - (void) va_arg(*p_va, PyObject **); - } -#if 0 -/* I don't know what this is for */ - else if (*format == '?') { - inquiry pred = va_arg(*p_va, inquiry); - format++; - if ((*pred)(arg)) { - (void) va_arg(*p_va, PyObject **); - } - } -#endif - else if (*format == '&') { - typedef int (*converter)(PyObject *, void *); - (void) va_arg(*p_va, converter); - (void) va_arg(*p_va, void *); - format++; - } - else { - (void) va_arg(*p_va, PyObject **); - } - break; - } + { + (void) va_arg(*p_va, PyObject **); + break; + } - case '(': /* bypass tuple, not handled at all previously */ - { - char *msg; - for (;;) { - if (*format==')') - break; - if (IS_END_OF_FORMAT(*format)) - return "Unmatched left paren in format " - "string"; - msg = skipitem(&format, p_va, flags); - if (msg) - return msg; - } - format++; - break; - } + case 'O': /* object */ + { + if (*format == '!') { + format++; + (void) va_arg(*p_va, PyTypeObject*); + (void) va_arg(*p_va, PyObject **); + } + else if (*format == '&') { + typedef int (*converter)(PyObject *, void *); + (void) va_arg(*p_va, converter); + (void) va_arg(*p_va, void *); + format++; + } + else { + (void) va_arg(*p_va, PyObject **); + } + break; + } - case ')': - return "Unmatched right paren in format string"; + case '(': /* bypass tuple, not handled at all previously */ + { + char *msg; + for (;;) { + if (*format==')') + break; + if (IS_END_OF_FORMAT(*format)) + return "Unmatched left paren in format " + "string"; + msg = skipitem(&format, p_va, flags); + if (msg) + return msg; + } + format++; + break; + } - default: + case ')': + return "Unmatched right paren in format string"; + + default: err: - return "impossible"; - - } + return "impossible"; - *p_format = format; - return NULL; + } + + *p_format = format; + return NULL; } int PyArg_UnpackTuple(PyObject *args, const char *name, Py_ssize_t 
min, Py_ssize_t max, ...) { - Py_ssize_t i, l; - PyObject **o; - va_list vargs; + Py_ssize_t i, l; + PyObject **o; + va_list vargs; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, max); + va_start(vargs, max); #else - va_start(vargs); + va_start(vargs); #endif - assert(min >= 0); - assert(min <= max); - if (!PyTuple_Check(args)) { - PyErr_SetString(PyExc_SystemError, - "PyArg_UnpackTuple() argument list is not a tuple"); - return 0; - } - l = PyTuple_GET_SIZE(args); - if (l < min) { - if (name != NULL) - PyErr_Format( - PyExc_TypeError, - "%s expected %s%zd arguments, got %zd", - name, (min == max ? "" : "at least "), min, l); - else - PyErr_Format( - PyExc_TypeError, - "unpacked tuple should have %s%zd elements," - " but has %zd", - (min == max ? "" : "at least "), min, l); - va_end(vargs); - return 0; - } - if (l > max) { - if (name != NULL) - PyErr_Format( - PyExc_TypeError, - "%s expected %s%zd arguments, got %zd", - name, (min == max ? "" : "at most "), max, l); - else - PyErr_Format( - PyExc_TypeError, - "unpacked tuple should have %s%zd elements," - " but has %zd", - (min == max ? "" : "at most "), max, l); - va_end(vargs); - return 0; - } - for (i = 0; i < l; i++) { - o = va_arg(vargs, PyObject **); - *o = PyTuple_GET_ITEM(args, i); - } - va_end(vargs); - return 1; + assert(min >= 0); + assert(min <= max); + if (!PyTuple_Check(args)) { + PyErr_SetString(PyExc_SystemError, + "PyArg_UnpackTuple() argument list is not a tuple"); + return 0; + } + l = PyTuple_GET_SIZE(args); + if (l < min) { + if (name != NULL) + PyErr_Format( + PyExc_TypeError, + "%s expected %s%zd arguments, got %zd", + name, (min == max ? "" : "at least "), min, l); + else + PyErr_Format( + PyExc_TypeError, + "unpacked tuple should have %s%zd elements," + " but has %zd", + (min == max ? "" : "at least "), min, l); + va_end(vargs); + return 0; + } + if (l > max) { + if (name != NULL) + PyErr_Format( + PyExc_TypeError, + "%s expected %s%zd arguments, got %zd", + name, (min == max ? 
"" : "at most "), max, l); + else + PyErr_Format( + PyExc_TypeError, + "unpacked tuple should have %s%zd elements," + " but has %zd", + (min == max ? "" : "at most "), max, l); + va_end(vargs); + return 0; + } + for (i = 0; i < l; i++) { + o = va_arg(vargs, PyObject **); + *o = PyTuple_GET_ITEM(args, i); + } + va_end(vargs); + return 1; } /* For type constructors that don't take keyword args * - * Sets a TypeError and returns 0 if the kwds dict is + * Sets a TypeError and returns 0 if the kwds dict is * not empty, returns 1 otherwise */ int _PyArg_NoKeywords(const char *funcname, PyObject *kw) { - if (kw == NULL) - return 1; - if (!PyDict_CheckExact(kw)) { - PyErr_BadInternalCall(); - return 0; - } - if (PyDict_Size(kw) == 0) - return 1; - - PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments", - funcname); - return 0; + if (kw == NULL) + return 1; + if (!PyDict_CheckExact(kw)) { + PyErr_BadInternalCall(); + return 0; + } + if (PyDict_Size(kw) == 0) + return 1; + + PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments", + funcname); + return 0; } #ifdef __cplusplus }; diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c --- a/pypy/module/cpyext/src/modsupport.c +++ b/pypy/module/cpyext/src/modsupport.c @@ -33,41 +33,41 @@ static int countformat(const char *format, int endchar) { - int count = 0; - int level = 0; - while (level > 0 || *format != endchar) { - switch (*format) { - case '\0': - /* Premature end */ - PyErr_SetString(PyExc_SystemError, - "unmatched paren in format"); - return -1; - case '(': - case '[': - case '{': - if (level == 0) - count++; - level++; - break; - case ')': - case ']': - case '}': - level--; - break; - case '#': - case '&': - case ',': - case ':': - case ' ': - case '\t': - break; - default: - if (level == 0) - count++; - } - format++; - } - return count; + int count = 0; + int level = 0; + while (level > 0 || *format != endchar) { + switch (*format) { + case '\0': + /* 
Premature end */ + PyErr_SetString(PyExc_SystemError, + "unmatched paren in format"); + return -1; + case '(': + case '[': + case '{': + if (level == 0) + count++; + level++; + break; + case ')': + case ']': + case '}': + level--; + break; + case '#': + case '&': + case ',': + case ':': + case ' ': + case '\t': + break; + default: + if (level == 0) + count++; + } + format++; + } + return count; } @@ -83,582 +83,435 @@ static PyObject * do_mkdict(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *d; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - if ((d = PyDict_New()) == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. */ - for (i = 0; i < n; i+= 2) { - PyObject *k, *v; - int err; - k = do_mkvalue(p_format, p_va, flags); - if (k == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - k = Py_None; - } - v = do_mkvalue(p_format, p_va, flags); - if (v == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - v = Py_None; - } - err = PyDict_SetItem(d, k, v); - Py_DECREF(k); - Py_DECREF(v); - if (err < 0 || itemfailed) { - Py_DECREF(d); - return NULL; - } - } - if (d != NULL && **p_format != endchar) { - Py_DECREF(d); - d = NULL; - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - } - else if (endchar) - ++*p_format; - return d; + PyObject *d; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + if ((d = PyDict_New()) == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. 
*/ + for (i = 0; i < n; i+= 2) { + PyObject *k, *v; + int err; + k = do_mkvalue(p_format, p_va, flags); + if (k == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + k = Py_None; + } + v = do_mkvalue(p_format, p_va, flags); + if (v == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + v = Py_None; + } + err = PyDict_SetItem(d, k, v); + Py_DECREF(k); + Py_DECREF(v); + if (err < 0 || itemfailed) { + Py_DECREF(d); + return NULL; + } + } + if (d != NULL && **p_format != endchar) { + Py_DECREF(d); + d = NULL; + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + } + else if (endchar) + ++*p_format; + return d; } static PyObject * do_mklist(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *v; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - v = PyList_New(n); - if (v == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. */ - for (i = 0; i < n; i++) { - PyObject *w = do_mkvalue(p_format, p_va, flags); - if (w == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - w = Py_None; - } - PyList_SET_ITEM(v, i, w); - } + PyObject *v; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + v = PyList_New(n); + if (v == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. 
*/ + for (i = 0; i < n; i++) { + PyObject *w = do_mkvalue(p_format, p_va, flags); + if (w == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + w = Py_None; + } + PyList_SET_ITEM(v, i, w); + } - if (itemfailed) { - /* do_mkvalue() should have already set an error */ - Py_DECREF(v); - return NULL; - } - if (**p_format != endchar) { - Py_DECREF(v); - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - return NULL; - } - if (endchar) - ++*p_format; - return v; + if (itemfailed) { + /* do_mkvalue() should have already set an error */ + Py_DECREF(v); + return NULL; + } + if (**p_format != endchar) { + Py_DECREF(v); + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + return NULL; + } + if (endchar) + ++*p_format; + return v; } #ifdef Py_USING_UNICODE static int _ustrlen(Py_UNICODE *u) { - int i = 0; - Py_UNICODE *v = u; - while (*v != 0) { i++; v++; } - return i; + int i = 0; + Py_UNICODE *v = u; + while (*v != 0) { i++; v++; } + return i; } #endif static PyObject * do_mktuple(const char **p_format, va_list *p_va, int endchar, int n, int flags) { - PyObject *v; - int i; - int itemfailed = 0; - if (n < 0) - return NULL; - if ((v = PyTuple_New(n)) == NULL) - return NULL; - /* Note that we can't bail immediately on error as this will leak - refcounts on any 'N' arguments. 
*/ - for (i = 0; i < n; i++) { - PyObject *w = do_mkvalue(p_format, p_va, flags); - if (w == NULL) { - itemfailed = 1; - Py_INCREF(Py_None); - w = Py_None; - } - PyTuple_SET_ITEM(v, i, w); - } - if (itemfailed) { - /* do_mkvalue() should have already set an error */ - Py_DECREF(v); - return NULL; - } - if (**p_format != endchar) { - Py_DECREF(v); - PyErr_SetString(PyExc_SystemError, - "Unmatched paren in format"); - return NULL; - } - if (endchar) - ++*p_format; - return v; + PyObject *v; + int i; + int itemfailed = 0; + if (n < 0) + return NULL; + if ((v = PyTuple_New(n)) == NULL) + return NULL; + /* Note that we can't bail immediately on error as this will leak + refcounts on any 'N' arguments. */ + for (i = 0; i < n; i++) { + PyObject *w = do_mkvalue(p_format, p_va, flags); + if (w == NULL) { + itemfailed = 1; + Py_INCREF(Py_None); + w = Py_None; + } + PyTuple_SET_ITEM(v, i, w); + } + if (itemfailed) { + /* do_mkvalue() should have already set an error */ + Py_DECREF(v); + return NULL; + } + if (**p_format != endchar) { + Py_DECREF(v); + PyErr_SetString(PyExc_SystemError, + "Unmatched paren in format"); + return NULL; + } + if (endchar) + ++*p_format; + return v; } static PyObject * do_mkvalue(const char **p_format, va_list *p_va, int flags) { - for (;;) { - switch (*(*p_format)++) { - case '(': - return do_mktuple(p_format, p_va, ')', - countformat(*p_format, ')'), flags); + for (;;) { + switch (*(*p_format)++) { + case '(': + return do_mktuple(p_format, p_va, ')', + countformat(*p_format, ')'), flags); - case '[': - return do_mklist(p_format, p_va, ']', - countformat(*p_format, ']'), flags); + case '[': + return do_mklist(p_format, p_va, ']', + countformat(*p_format, ']'), flags); - case '{': - return do_mkdict(p_format, p_va, '}', - countformat(*p_format, '}'), flags); + case '{': + return do_mkdict(p_format, p_va, '}', + countformat(*p_format, '}'), flags); - case 'b': - case 'B': - case 'h': - case 'i': - return PyInt_FromLong((long)va_arg(*p_va, int)); - - 
case 'H':
-                return PyInt_FromLong((long)va_arg(*p_va, unsigned int));
+        case 'b':
+        case 'B':
+        case 'h':
+        case 'i':
+            return PyInt_FromLong((long)va_arg(*p_va, int));
-        case 'I':
-        {
-                unsigned int n;
-                n = va_arg(*p_va, unsigned int);
-                if (n > (unsigned long)PyInt_GetMax())
-                        return PyLong_FromUnsignedLong((unsigned long)n);
-                else
-                        return PyInt_FromLong(n);
-        }
-
-        case 'n':
+        case 'H':
+            return PyInt_FromLong((long)va_arg(*p_va, unsigned int));
+
+        case 'I':
+        {
+            unsigned int n;
+            n = va_arg(*p_va, unsigned int);
+            if (n > (unsigned long)PyInt_GetMax())
+                return PyLong_FromUnsignedLong((unsigned long)n);
+            else
+                return PyInt_FromLong(n);
+        }
+
+        case 'n':
 #if SIZEOF_SIZE_T!=SIZEOF_LONG
-                return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t));
+            return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t));
 #endif
-                /* Fall through from 'n' to 'l' if Py_ssize_t is long */
-        case 'l':
-                return PyInt_FromLong(va_arg(*p_va, long));
+            /* Fall through from 'n' to 'l' if Py_ssize_t is long */
+        case 'l':
+            return PyInt_FromLong(va_arg(*p_va, long));

-        case 'k':
-        {
-                unsigned long n;
-                n = va_arg(*p_va, unsigned long);
-                if (n > (unsigned long)PyInt_GetMax())
-                        return PyLong_FromUnsignedLong(n);
-                else
-                        return PyInt_FromLong(n);
-        }
+        case 'k':
+        {
+            unsigned long n;
+            n = va_arg(*p_va, unsigned long);
+            if (n > (unsigned long)PyInt_GetMax())
+                return PyLong_FromUnsignedLong(n);
+            else
+                return PyInt_FromLong(n);
+        }

 #ifdef HAVE_LONG_LONG
-        case 'L':
-                return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG));
+        case 'L':
+            return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG));

-        case 'K':
-                return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG));
+        case 'K':
+            return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG));
 #endif
 #ifdef Py_USING_UNICODE
-        case 'u':
-        {
-                PyObject *v;
-                Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *);
-                Py_ssize_t n;
-                if (**p_format == '#') {
-                        ++*p_format;
-                        if (flags & FLAG_SIZE_T)
-                                n = va_arg(*p_va, Py_ssize_t);
-                        else
-                                n = va_arg(*p_va, int);
-                }
-                else
-                        n = -1;
-                if (u == NULL) {
-                        v = Py_None;
-                        Py_INCREF(v);
-                }
-                else {
-                        if (n < 0)
-                                n = _ustrlen(u);
-                        v = PyUnicode_FromUnicode(u, n);
-                }
-                return v;
-        }
+        case 'u':
+        {
+            PyObject *v;
+            Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *);
+            Py_ssize_t n;
+            if (**p_format == '#') {
+                ++*p_format;
+                if (flags & FLAG_SIZE_T)
+                    n = va_arg(*p_va, Py_ssize_t);
+                else
+                    n = va_arg(*p_va, int);
+            }
+            else
+                n = -1;
+            if (u == NULL) {
+                v = Py_None;
+                Py_INCREF(v);
+            }
+            else {
+                if (n < 0)
+                    n = _ustrlen(u);
+                v = PyUnicode_FromUnicode(u, n);
+            }
+            return v;
+        }
 #endif
-        case 'f':
-        case 'd':
-                return PyFloat_FromDouble(
-                        (double)va_arg(*p_va, va_double));
+        case 'f':
+        case 'd':
+            return PyFloat_FromDouble(
+                (double)va_arg(*p_va, va_double));

 #ifndef WITHOUT_COMPLEX
-        case 'D':
-                return PyComplex_FromCComplex(
-                        *((Py_complex *)va_arg(*p_va, Py_complex *)));
+        case 'D':
+            return PyComplex_FromCComplex(
+                *((Py_complex *)va_arg(*p_va, Py_complex *)));
 #endif /* WITHOUT_COMPLEX */

-        case 'c':
-        {
-                char p[1];
-                p[0] = (char)va_arg(*p_va, int);
-                return PyString_FromStringAndSize(p, 1);
-        }
+        case 'c':
+        {
+            char p[1];
+            p[0] = (char)va_arg(*p_va, int);
+            return PyString_FromStringAndSize(p, 1);
+        }

-        case 's':
-        case 'z':
-        {
-                PyObject *v;
-                char *str = va_arg(*p_va, char *);
-                Py_ssize_t n;
-                if (**p_format == '#') {
-                        ++*p_format;
-                        if (flags & FLAG_SIZE_T)
-                                n = va_arg(*p_va, Py_ssize_t);
-                        else
-                                n = va_arg(*p_va, int);
-                }
-                else
-                        n = -1;
-                if (str == NULL) {
-                        v = Py_None;
-                        Py_INCREF(v);
-                }
-                else {
-                        if (n < 0) {
-                                size_t m = strlen(str);
-                                if (m > PY_SSIZE_T_MAX) {
-                                        PyErr_SetString(PyExc_OverflowError,
-                                                "string too long for Python string");
-                                        return NULL;
-                                }
-                                n = (Py_ssize_t)m;
-                        }
-                        v = PyString_FromStringAndSize(str, n);
-                }
-                return v;
-        }
+        case 's':
+        case 'z':
+        {
+            PyObject *v;
+            char *str = va_arg(*p_va, char *);
+            Py_ssize_t n;
+            if (**p_format == '#') {
+                ++*p_format;
+                if (flags & FLAG_SIZE_T)
+                    n = va_arg(*p_va, Py_ssize_t);
+                else
+                    n = va_arg(*p_va, int);
+            }
+            else
+                n = -1;
+            if (str == NULL) {
+                v = Py_None;
+                Py_INCREF(v);
+            }
+            else {
+                if (n < 0) {
+                    size_t m = strlen(str);
+                    if (m > PY_SSIZE_T_MAX) {
+                        PyErr_SetString(PyExc_OverflowError,
+                            "string too long for Python string");
+                        return NULL;
+                    }
+                    n = (Py_ssize_t)m;
+                }
+                v = PyString_FromStringAndSize(str, n);
+            }
+            return v;
+        }

-        case 'N':
-        case 'S':
-        case 'O':
-                if (**p_format == '&') {
-                        typedef PyObject *(*converter)(void *);
-                        converter func = va_arg(*p_va, converter);
-                        void *arg = va_arg(*p_va, void *);
-                        ++*p_format;
-                        return (*func)(arg);
-                }
-                else {
-                        PyObject *v;
-                        v = va_arg(*p_va, PyObject *);
-                        if (v != NULL) {
-                                if (*(*p_format - 1) != 'N')
-                                        Py_INCREF(v);
-                        }
-                        else if (!PyErr_Occurred())
-                                /* If a NULL was passed
-                                 * because a call that should
-                                 * have constructed a value
-                                 * failed, that's OK, and we
-                                 * pass the error on; but if
-                                 * no error occurred it's not
-                                 * clear that the caller knew
-                                 * what she was doing. */
-                                PyErr_SetString(PyExc_SystemError,
-                                        "NULL object passed to Py_BuildValue");
-                        return v;
-                }
+        case 'N':
+        case 'S':
+        case 'O':
+            if (**p_format == '&') {
+                typedef PyObject *(*converter)(void *);
+                converter func = va_arg(*p_va, converter);
+                void *arg = va_arg(*p_va, void *);
+                ++*p_format;
+                return (*func)(arg);
+            }
+            else {
+                PyObject *v;
+                v = va_arg(*p_va, PyObject *);
+                if (v != NULL) {
+                    if (*(*p_format - 1) != 'N')
+                        Py_INCREF(v);
+                }
+                else if (!PyErr_Occurred())
+                    /* If a NULL was passed
+                     * because a call that should
+                     * have constructed a value
+                     * failed, that's OK, and we
+                     * pass the error on; but if
+                     * no error occurred it's not
+                     * clear that the caller knew
+                     * what she was doing. */
+                    PyErr_SetString(PyExc_SystemError,
+                        "NULL object passed to Py_BuildValue");
+                return v;
+            }

-        case ':':
-        case ',':
-        case ' ':
-        case '\t':
-                break;
+        case ':':
+        case ',':
+        case ' ':
+        case '\t':
+            break;

-        default:
-                PyErr_SetString(PyExc_SystemError,
-                        "bad format char passed to Py_BuildValue");
-                return NULL;
+        default:
+            PyErr_SetString(PyExc_SystemError,
+                "bad format char passed to Py_BuildValue");
+            return NULL;

-        }
-        }
+        }
+    }
 }

 PyObject *
 Py_BuildValue(const char *format, ...)
 {
-        va_list va;
-        PyObject* retval;
-        va_start(va, format);
-        retval = va_build_value(format, va, 0);
-        va_end(va);
-        return retval;
+    va_list va;
+    PyObject* retval;
+    va_start(va, format);
+    retval = va_build_value(format, va, 0);
+    va_end(va);
+    return retval;
 }

 PyObject *
 _Py_BuildValue_SizeT(const char *format, ...)
 {
-        va_list va;
-        PyObject* retval;
-        va_start(va, format);
-        retval = va_build_value(format, va, FLAG_SIZE_T);
-        va_end(va);
-        return retval;
+    va_list va;
+    PyObject* retval;
+    va_start(va, format);
+    retval = va_build_value(format, va, FLAG_SIZE_T);
+    va_end(va);
+    return retval;
 }

 PyObject *
 Py_VaBuildValue(const char *format, va_list va)
 {
-        return va_build_value(format, va, 0);
+    return va_build_value(format, va, 0);
 }

 PyObject *
 _Py_VaBuildValue_SizeT(const char *format, va_list va)
 {
-        return va_build_value(format, va, FLAG_SIZE_T);
+    return va_build_value(format, va, FLAG_SIZE_T);
 }

 static PyObject *
 va_build_value(const char *format, va_list va, int flags)
 {
-        const char *f = format;
-        int n = countformat(f, '\0');
-        va_list lva;
+    const char *f = format;
+    int n = countformat(f, '\0');
+    va_list lva;

 #ifdef VA_LIST_IS_ARRAY
-        memcpy(lva, va, sizeof(va_list));
+    memcpy(lva, va, sizeof(va_list));
 #else
 #ifdef __va_copy
-        __va_copy(lva, va);
+    __va_copy(lva, va);
 #else
-        lva = va;
+    lva = va;
 #endif
 #endif

-        if (n < 0)
-                return NULL;
-        if (n == 0) {
-                Py_INCREF(Py_None);
-                return Py_None;
-        }
-        if (n == 1)
-                return do_mkvalue(&f, &lva, flags);
-        return do_mktuple(&f, &lva, '\0', n, flags);
+    if (n < 0)
+        return NULL;
+    if (n == 0) {
+        Py_INCREF(Py_None);
+        return Py_None;
+    }
+    if (n == 1)
+        return do_mkvalue(&f, &lva, flags);
+    return do_mktuple(&f, &lva, '\0', n, flags);
 }

 PyObject *
 PyEval_CallFunction(PyObject *obj, const char *format, ...)
 {
-        va_list vargs;
-        PyObject *args;
-        PyObject *res;
+    va_list vargs;
+    PyObject *args;
+    PyObject *res;

-        va_start(vargs, format);
+    va_start(vargs, format);

-        args = Py_VaBuildValue(format, vargs);
-        va_end(vargs);
+    args = Py_VaBuildValue(format, vargs);
+    va_end(vargs);

-        if (args == NULL)
-                return NULL;
+    if (args == NULL)
+        return NULL;

-        res = PyEval_CallObject(obj, args);
-        Py_DECREF(args);
+    res = PyEval_CallObject(obj, args);
+    Py_DECREF(args);

-        return res;
+    return res;
 }

 PyObject *
 PyEval_CallMethod(PyObject *obj, const char *methodname, const char *format, ...)
 {
-        va_list vargs;
-        PyObject *meth;
-        PyObject *args;
-        PyObject *res;
+    va_list vargs;
+    PyObject *meth;
+    PyObject *args;
+    PyObject *res;

-        meth = PyObject_GetAttrString(obj, methodname);
-        if (meth == NULL)
-                return NULL;
+    meth = PyObject_GetAttrString(obj, methodname);
+    if (meth == NULL)
+        return NULL;

-        va_start(vargs, format);
+    va_start(vargs, format);

-        args = Py_VaBuildValue(format, vargs);
-        va_end(vargs);
+    args = Py_VaBuildValue(format, vargs);
+    va_end(vargs);

-        if (args == NULL) {
-                Py_DECREF(meth);
-                return NULL;
-        }
+    if (args == NULL) {
+        Py_DECREF(meth);
+        return NULL;
+    }

-        res = PyEval_CallObject(meth, args);
-        Py_DECREF(meth);
-        Py_DECREF(args);
+    res = PyEval_CallObject(meth, args);
+    Py_DECREF(meth);
+    Py_DECREF(args);

-        return res;
-}
-
-static PyObject*
-call_function_tail(PyObject *callable, PyObject *args)
-{
-        PyObject *retval;
-
-        if (args == NULL)
-                return NULL;
-
-        if (!PyTuple_Check(args)) {
-                PyObject *a;
-
-                a = PyTuple_New(1);
-                if (a == NULL) {
-                        Py_DECREF(args);
-                        return NULL;
-                }
-                PyTuple_SET_ITEM(a, 0, args);
-                args = a;
-        }
-        retval = PyObject_Call(callable, args, NULL);
-
-        Py_DECREF(args);
-
-        return retval;
-}
-
-PyObject *
-PyObject_CallFunction(PyObject *callable, const char *format, ...)
-{
-        va_list va;
-        PyObject *args;
-
-        if (format && *format) {
-                va_start(va, format);
-                args = Py_VaBuildValue(format, va);
-                va_end(va);
-        }
-        else
-                args = PyTuple_New(0);
-
-        return call_function_tail(callable, args);
-}
-
-PyObject *
-PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...)
-{
-        va_list va;
-        PyObject *args;
-        PyObject *func = NULL;
-        PyObject *retval = NULL;
-
-        func = PyObject_GetAttrString(o, name);
-        if (func == NULL) {
-                PyErr_SetString(PyExc_AttributeError, name);
-                return 0;
-        }
-
-        if (format && *format) {
-                va_start(va, format);
-                args = Py_VaBuildValue(format, va);
-                va_end(va);
-        }
-        else
-                args = PyTuple_New(0);
-
-        retval = call_function_tail(func, args);
-
-  exit:
-        /* args gets consumed in call_function_tail */
-        Py_XDECREF(func);
-
-        return retval;
-}
-
-static PyObject *
-objargs_mktuple(va_list va)
-{
-        int i, n = 0;
-        va_list countva;
-        PyObject *result, *tmp;
-
-#ifdef VA_LIST_IS_ARRAY
-        memcpy(countva, va, sizeof(va_list));
-#else
-#ifdef __va_copy
-        __va_copy(countva, va);
-#else
-        countva = va;
-#endif
-#endif
-
-        while (((PyObject *)va_arg(countva, PyObject *)) != NULL)
-                ++n;
-        result = PyTuple_New(n);
-        if (result != NULL && n > 0) {
-                for (i = 0; i < n; ++i) {
-                        tmp = (PyObject *)va_arg(va, PyObject *);
-                        Py_INCREF(tmp);
-                        PyTuple_SET_ITEM(result, i, tmp);
-                }
-        }
-        return result;
-}
-
-PyObject *
-PyObject_CallFunctionObjArgs(PyObject *callable, ...)
-{
-        PyObject *args, *tmp;
-        va_list vargs;
-
-        /* count the args */
-        va_start(vargs, callable);
-        args = objargs_mktuple(vargs);
-        va_end(vargs);
-        if (args == NULL)
-                return NULL;
-        tmp = PyObject_Call(callable, args, NULL);
-        Py_DECREF(args);
-
-        return tmp;
-}
-
-PyObject *
-PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...)
-{
-        PyObject *args, *tmp;
-        va_list vargs;
-
-        callable = PyObject_GetAttr(callable, name);
-        if (callable == NULL)
-                return NULL;
-
-        /* count the args */
-        va_start(vargs, name);
-        args = objargs_mktuple(vargs);
-        va_end(vargs);
-        if (args == NULL) {
-                Py_DECREF(callable);
-                return NULL;
-        }
-        tmp = PyObject_Call(callable, args, NULL);
-        Py_DECREF(args);
-        Py_DECREF(callable);
-
-        return tmp;
+    return res;
 }

 /* returns -1 in case of error, 0 if a new key was added, 1 if the key
@@ -666,67 +519,67 @@
 static int
 _PyModule_AddObject_NoConsumeRef(PyObject *m, const char *name, PyObject *o)
 {
-        PyObject *dict, *prev;
-        if (!PyModule_Check(m)) {
-                PyErr_SetString(PyExc_TypeError,
-                        "PyModule_AddObject() needs module as first arg");
-                return -1;
-        }
-        if (!o) {
-                if (!PyErr_Occurred())
-                        PyErr_SetString(PyExc_TypeError,
-                                "PyModule_AddObject() needs non-NULL value");
-                return -1;
-        }
+    PyObject *dict, *prev;
+    if (!PyModule_Check(m)) {
+        PyErr_SetString(PyExc_TypeError,
+            "PyModule_AddObject() needs module as first arg");
+        return -1;
+    }
+    if (!o) {
+        if (!PyErr_Occurred())
+            PyErr_SetString(PyExc_TypeError,
+                "PyModule_AddObject() needs non-NULL value");
+        return -1;
+    }

-        dict = PyModule_GetDict(m);
-        if (dict == NULL) {
-                /* Internal error -- modules must have a dict! */
-                PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__",
-                        PyModule_GetName(m));
-                return -1;
-        }
-        prev = PyDict_GetItemString(dict, name);
-        if (PyDict_SetItemString(dict, name, o))
-                return -1;
-        return prev != NULL;
+    dict = PyModule_GetDict(m);
+    if (dict == NULL) {
+        /* Internal error -- modules must have a dict! */
+        PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__",
+            PyModule_GetName(m));
+        return -1;
+    }
+    prev = PyDict_GetItemString(dict, name);
+    if (PyDict_SetItemString(dict, name, o))
+        return -1;
+    return prev != NULL;
 }

 int
 PyModule_AddObject(PyObject *m, const char *name, PyObject *o)
 {
-        int result = _PyModule_AddObject_NoConsumeRef(m, name, o);
-        /* XXX WORKAROUND for a common misusage of PyModule_AddObject:
-           for the common case of adding a new key, we don't consume a
-           reference, but instead just leak it away.  The issue is that
-           people generally don't realize that this function consumes a
-           reference, because on CPython the reference is still stored
-           on the dictionary. */
-        if (result != 0)
-                Py_DECREF(o);
-        return result < 0 ? -1 : 0;
+    int result = _PyModule_AddObject_NoConsumeRef(m, name, o);
+    /* XXX WORKAROUND for a common misusage of PyModule_AddObject:
+       for the common case of adding a new key, we don't consume a
+       reference, but instead just leak it away.  The issue is that
+       people generally don't realize that this function consumes a
+       reference, because on CPython the reference is still stored
+       on the dictionary. */
+    if (result != 0)
+        Py_DECREF(o);
+    return result < 0 ? -1 : 0;
 }

-int
+int
 PyModule_AddIntConstant(PyObject *m, const char *name, long value)
 {
-        int result;
-        PyObject *o = PyInt_FromLong(value);
-        if (!o)
-                return -1;
-        result = _PyModule_AddObject_NoConsumeRef(m, name, o);
-        Py_DECREF(o);
-        return result < 0 ? -1 : 0;
+    int result;
+    PyObject *o = PyInt_FromLong(value);
+    if (!o)
+        return -1;
+    result = _PyModule_AddObject_NoConsumeRef(m, name, o);
+    Py_DECREF(o);
+    return result < 0 ? -1 : 0;
 }

-int
+int
 PyModule_AddStringConstant(PyObject *m, const char *name, const char *value)
 {
-        int result;
-        PyObject *o = PyString_FromString(value);
-        if (!o)
-                return -1;
-        result = _PyModule_AddObject_NoConsumeRef(m, name, o);
-        Py_DECREF(o);
-        return result < 0 ? -1 : 0;
+    int result;
+    PyObject *o = PyString_FromString(value);
+    if (!o)
+        return -1;
+    result = _PyModule_AddObject_NoConsumeRef(m, name, o);
+    Py_DECREF(o);
+    return result < 0 ? -1 : 0;
 }
diff --git a/pypy/module/cpyext/src/mysnprintf.c b/pypy/module/cpyext/src/mysnprintf.c
--- a/pypy/module/cpyext/src/mysnprintf.c
+++ b/pypy/module/cpyext/src/mysnprintf.c
@@ -20,86 +20,86 @@

    Return value (rv):

-        When 0 <= rv < size, the output conversion was unexceptional, and
-        rv characters were written to str (excluding a trailing \0 byte at
-        str[rv]).
+    When 0 <= rv < size, the output conversion was unexceptional, and
+    rv characters were written to str (excluding a trailing \0 byte at
+    str[rv]).

-        When rv >= size, output conversion was truncated, and a buffer of
-        size rv+1 would have been needed to avoid truncation.  str[size-1]
-        is \0 in this case.
+    When rv >= size, output conversion was truncated, and a buffer of
+    size rv+1 would have been needed to avoid truncation.  str[size-1]
+    is \0 in this case.

-        When rv < 0, "something bad happened".  str[size-1] is \0 in this
-        case too, but the rest of str is unreliable.  It could be that
-        an error in format codes was detected by libc, or on platforms
-        with a non-C99 vsnprintf simply that the buffer wasn't big enough
-        to avoid truncation, or on platforms without any vsnprintf that
-        PyMem_Malloc couldn't obtain space for a temp buffer.
+    When rv < 0, "something bad happened".  str[size-1] is \0 in this
+    case too, but the rest of str is unreliable.  It could be that
+    an error in format codes was detected by libc, or on platforms
+    with a non-C99 vsnprintf simply that the buffer wasn't big enough
+    to avoid truncation, or on platforms without any vsnprintf that
+    PyMem_Malloc couldn't obtain space for a temp buffer.

    CAUTION: Unlike C99, str != NULL and size > 0 are required.
 */

 int
+PyOS_snprintf(char *str, size_t size, const char *format, ...)
+{
+    int rc;
+    va_list va;
+
+    va_start(va, format);
+    rc = PyOS_vsnprintf(str, size, format, va);
+    va_end(va);
+    return rc;
+}
+
+int
 PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va)
 {
-        int len;  /* # bytes written, excluding \0 */
+    int len;  /* # bytes written, excluding \0 */
 #ifdef HAVE_SNPRINTF
 #define _PyOS_vsnprintf_EXTRA_SPACE 1
 #else
 #define _PyOS_vsnprintf_EXTRA_SPACE 512
-        char *buffer;
+    char *buffer;
 #endif
-        assert(str != NULL);
-        assert(size > 0);
-        assert(format != NULL);
-        /* We take a size_t as input but return an int.  Sanity check
-         * our input so that it won't cause an overflow in the
-         * vsnprintf return value or the buffer malloc size.  */
-        if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) {
-                len = -666;
-                goto Done;
-        }
+    assert(str != NULL);
+    assert(size > 0);
+    assert(format != NULL);
+    /* We take a size_t as input but return an int.  Sanity check
+     * our input so that it won't cause an overflow in the
+     * vsnprintf return value or the buffer malloc size.  */
+    if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) {
+        len = -666;
+        goto Done;
+    }

 #ifdef HAVE_SNPRINTF
-        len = vsnprintf(str, size, format, va);
+    len = vsnprintf(str, size, format, va);
 #else
-        /* Emulate it. */
-        buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE);
-        if (buffer == NULL) {
-                len = -666;
-                goto Done;
-        }
+    /* Emulate it. */
+    buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE);
+    if (buffer == NULL) {
+        len = -666;
+        goto Done;
+    }

-        len = vsprintf(buffer, format, va);
-        if (len < 0)
-                /* ignore the error */;
+    len = vsprintf(buffer, format, va);
+    if (len < 0)
+        /* ignore the error */;

-        else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE)
-                Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf");
+    else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE)
+        Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf");

-        else {
-                const size_t to_copy = (size_t)len < size ?
-                        (size_t)len : size - 1;
-                assert(to_copy < size);
-                memcpy(str, buffer, to_copy);
-                str[to_copy] = '\0';
-        }
-        PyMem_FREE(buffer);
+    else {
+        const size_t to_copy = (size_t)len < size ?
+            (size_t)len : size - 1;
+        assert(to_copy < size);
+        memcpy(str, buffer, to_copy);
+        str[to_copy] = '\0';
+    }
+    PyMem_FREE(buffer);
 #endif

 Done:
-        if (size > 0)
-                str[size-1] = '\0';
-        return len;
+    if (size > 0)
+        str[size-1] = '\0';
+    return len;
 #undef _PyOS_vsnprintf_EXTRA_SPACE
 }
-
-int
-PyOS_snprintf(char *str, size_t size, const char *format, ...)
-{
-        int rc;
-        va_list va;
-
-        va_start(va, format);
-        rc = PyOS_vsnprintf(str, size, format, va);
-        va_end(va);
-        return rc;
-}
diff --git a/pypy/module/cpyext/src/object.c b/pypy/module/cpyext/src/object.c
deleted file mode 100644
--- a/pypy/module/cpyext/src/object.c
+++ /dev/null
@@ -1,91 +0,0 @@
-// contains code from abstract.c
-#include
-
-
-static PyObject *
-null_error(void)
-{
-        if (!PyErr_Occurred())
-                PyErr_SetString(PyExc_SystemError,
-                        "null argument to internal routine");
-        return NULL;
-}
-
-int PyObject_AsReadBuffer(PyObject *obj,
-                          const void **buffer,
-                          Py_ssize_t *buffer_len)
-{
-        PyBufferProcs *pb;
-        void *pp;
-        Py_ssize_t len;
-
-        if (obj == NULL || buffer == NULL || buffer_len == NULL) {
-                null_error();
-                return -1;
-        }
-        pb = obj->ob_type->tp_as_buffer;
-        if (pb == NULL ||
-            pb->bf_getreadbuffer == NULL ||
-            pb->bf_getsegcount == NULL) {
-                PyErr_SetString(PyExc_TypeError,
-                        "expected a readable buffer object");
-                return -1;
-        }
-        if ((*pb->bf_getsegcount)(obj, NULL) != 1) {
-                PyErr_SetString(PyExc_TypeError,
-                        "expected a single-segment buffer object");
-                return -1;
-        }
-        len = (*pb->bf_getreadbuffer)(obj, 0, &pp);
-        if (len < 0)
-                return -1;
-        *buffer = pp;
-        *buffer_len = len;
-        return 0;
-}
-
-int PyObject_AsWriteBuffer(PyObject *obj,
-                           void **buffer,
-                           Py_ssize_t *buffer_len)
-{
-        PyBufferProcs *pb;
-        void*pp;
-        Py_ssize_t len;
-
-        if (obj == NULL || buffer == NULL || buffer_len == NULL) {
-                null_error();
-                return -1;
-        }
-        pb = obj->ob_type->tp_as_buffer;
-        if (pb == NULL ||
-            pb->bf_getwritebuffer == NULL ||
-            pb->bf_getsegcount == NULL) {
-                PyErr_SetString(PyExc_TypeError,
-                        "expected a writeable buffer object");
-                return -1;
-        }
-        if ((*pb->bf_getsegcount)(obj, NULL) != 1) {
-                PyErr_SetString(PyExc_TypeError,
-                        "expected a single-segment buffer object");
-                return -1;
-        }
-        len = (*pb->bf_getwritebuffer)(obj,0,&pp);
-        if (len < 0)
-                return -1;
-        *buffer = pp;
-        *buffer_len = len;
-        return 0;
-}
-
-int
-PyObject_CheckReadBuffer(PyObject *obj)
-{
-        PyBufferProcs *pb = obj->ob_type->tp_as_buffer;
-
-        if (pb == NULL ||
-            pb->bf_getreadbuffer == NULL ||
-            pb->bf_getsegcount == NULL ||
-            (*pb->bf_getsegcount)(obj, NULL) != 1)
-                return 0;
-        return 1;
-}
diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c
--- a/pypy/module/cpyext/src/pyerrors.c
+++ b/pypy/module/cpyext/src/pyerrors.c
@@ -4,72 +4,75 @@
 PyObject *
 PyErr_Format(PyObject *exception, const char *format, ...)
 {
-        va_list vargs;
-        PyObject* string;
+    va_list vargs;
+    PyObject* string;

 #ifdef HAVE_STDARG_PROTOTYPES
-        va_start(vargs, format);
+    va_start(vargs, format);
 #else
-        va_start(vargs);
+    va_start(vargs);
 #endif

-        string = PyString_FromFormatV(format, vargs);
-        PyErr_SetObject(exception, string);
-        Py_XDECREF(string);
-        va_end(vargs);
-        return NULL;
+    string = PyString_FromFormatV(format, vargs);
+    PyErr_SetObject(exception, string);
+    Py_XDECREF(string);
+    va_end(vargs);
+    return NULL;
 }
+
+
 PyObject *
 PyErr_NewException(const char *name, PyObject *base, PyObject *dict)
 {
-        char *dot;
-        PyObject *modulename = NULL;
-        PyObject *classname = NULL;
-        PyObject *mydict = NULL;
-        PyObject *bases = NULL;
-        PyObject *result = NULL;
-        dot = strrchr(name, '.');
-        if (dot == NULL) {
-                PyErr_SetString(PyExc_SystemError,
-                        "PyErr_NewException: name must be module.class");
-                return NULL;
-        }
-        if (base == NULL)
-                base = PyExc_Exception;
-        if (dict == NULL) {
-                dict = mydict = PyDict_New();
-                if (dict == NULL)
-                        goto failure;
-        }
-        if (PyDict_GetItemString(dict, "__module__") == NULL) {
-                modulename = PyString_FromStringAndSize(name,
-                        (Py_ssize_t)(dot-name));
-                if (modulename == NULL)
-                        goto failure;
-                if (PyDict_SetItemString(dict, "__module__", modulename) != 0)
-                        goto failure;
-        }
-        if (PyTuple_Check(base)) {
-                bases = base;
-                /* INCREF as we create a new ref in the else branch */
-                Py_INCREF(bases);
-        } else {
-                bases = PyTuple_Pack(1, base);
-                if (bases == NULL)
-                        goto failure;
-        }
-        /* Create a real new-style class. */
-        result = PyObject_CallFunction((PyObject *)&PyType_Type, "sOO",
-                dot+1, bases, dict);
+    char *dot;
+    PyObject *modulename = NULL;
+    PyObject *classname = NULL;
+    PyObject *mydict = NULL;
+    PyObject *bases = NULL;
+    PyObject *result = NULL;
+    dot = strrchr(name, '.');
+    if (dot == NULL) {
+        PyErr_SetString(PyExc_SystemError,
+            "PyErr_NewException: name must be module.class");
+        return NULL;
+    }
+    if (base == NULL)
+        base = PyExc_Exception;
+    if (dict == NULL) {
+        dict = mydict = PyDict_New();
+        if (dict == NULL)
+            goto failure;
+    }
+    if (PyDict_GetItemString(dict, "__module__") == NULL) {
+        modulename = PyString_FromStringAndSize(name,
+            (Py_ssize_t)(dot-name));
+        if (modulename == NULL)
+            goto failure;
+        if (PyDict_SetItemString(dict, "__module__", modulename) != 0)
+            goto failure;
+    }
+    if (PyTuple_Check(base)) {
+        bases = base;
+        /* INCREF as we create a new ref in the else branch */
+        Py_INCREF(bases);
+    } else {
+        bases = PyTuple_Pack(1, base);
+        if (bases == NULL)
+            goto failure;
+    }
+    /* Create a real new-style class. */
+    result = PyObject_CallFunction((PyObject *)&PyType_Type, "sOO",
+        dot+1, bases, dict);
 failure:
-        Py_XDECREF(bases);
-        Py_XDECREF(mydict);
-        Py_XDECREF(classname);
-        Py_XDECREF(modulename);
-        return result;
+    Py_XDECREF(bases);
+    Py_XDECREF(mydict);
+    Py_XDECREF(classname);
+    Py_XDECREF(modulename);
+    return result;
 }
+
 /* Create an exception with docstring */
 PyObject *
 PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict)
diff --git a/pypy/module/cpyext/src/pysignals.c b/pypy/module/cpyext/src/pysignals.c
--- a/pypy/module/cpyext/src/pysignals.c
+++ b/pypy/module/cpyext/src/pysignals.c
@@ -17,17 +17,34 @@
 PyOS_getsig(int sig)
 {
 #ifdef SA_RESTART
-        /* assume sigaction exists */
-        struct sigaction context;
-        if (sigaction(sig, NULL, &context) == -1)
-                return SIG_ERR;
-        return context.sa_handler;
+    /* assume sigaction exists */
+    struct sigaction context;
+    if (sigaction(sig, NULL, &context) == -1)
+        return SIG_ERR;
+    return context.sa_handler;
 #else
-        PyOS_sighandler_t handler;
-        handler = signal(sig, SIG_IGN);
-        if (handler != SIG_ERR)
-                signal(sig, handler);
-        return handler;
+    PyOS_sighandler_t handler;
+/* Special signal handling for the secure CRT in Visual Studio 2005 */
+#if defined(_MSC_VER) && _MSC_VER >= 1400
+    switch (sig) {
+    /* Only these signals are valid */
+    case SIGINT:
+    case SIGILL:
+    case SIGFPE:
+    case SIGSEGV:
+    case SIGTERM:
+    case SIGBREAK:
+    case SIGABRT:
+        break;
+    /* Don't call signal() with other values or it will assert */
+    default:
+        return SIG_ERR;
+    }
+#endif /* _MSC_VER && _MSC_VER >= 1400 */
+    handler = signal(sig, SIG_IGN);
+    if (handler != SIG_ERR)
+        signal(sig, handler);
+    return handler;
 #endif
 }

@@ -35,21 +52,21 @@
 PyOS_setsig(int sig, PyOS_sighandler_t handler)
 {
 #ifdef SA_RESTART
-        /* assume sigaction exists */
-        struct sigaction context, ocontext;
-        context.sa_handler = handler;
-        sigemptyset(&context.sa_mask);
-        context.sa_flags = 0;
-        if (sigaction(sig, &context, &ocontext) == -1)
-                return SIG_ERR;
-        return ocontext.sa_handler;
+    /* assume sigaction exists */
+    struct sigaction context, ocontext;
+    context.sa_handler = handler;
+    sigemptyset(&context.sa_mask);
+    context.sa_flags = 0;
+    if (sigaction(sig, &context, &ocontext) == -1)
+        return SIG_ERR;
+    return ocontext.sa_handler;
 #else
-        PyOS_sighandler_t oldhandler;
-        oldhandler = signal(sig, handler);
+    PyOS_sighandler_t oldhandler;
+    oldhandler = signal(sig, handler);
 #ifndef MS_WINDOWS
-        /* should check if this exists */
-        siginterrupt(sig, 1);
+    /* should check if this exists */
+    siginterrupt(sig, 1);
 #endif
-        return oldhandler;
+    return oldhandler;
 #endif
 }
diff --git a/pypy/module/cpyext/src/pythonrun.c b/pypy/module/cpyext/src/pythonrun.c
--- a/pypy/module/cpyext/src/pythonrun.c
+++ b/pypy/module/cpyext/src/pythonrun.c
@@ -9,28 +9,28 @@
 void
 Py_FatalError(const char *msg)
 {
-        fprintf(stderr, "Fatal Python error: %s\n", msg);
-        fflush(stderr); /* it helps in Windows debug build */
+    fprintf(stderr, "Fatal Python error: %s\n", msg);
+    fflush(stderr); /* it helps in Windows debug build */

 #ifdef MS_WINDOWS
-        {
-                size_t len = strlen(msg);
-                WCHAR* buffer;
-                size_t i;
+    {
+        size_t len = strlen(msg);
+        WCHAR* buffer;
+        size_t i;

-                /* Convert the message to wchar_t. This uses a simple one-to-one
-                   conversion, assuming that the this error message actually uses ASCII
-                   only. If this ceases to be true, we will have to convert. */
-                buffer = alloca( (len+1) * (sizeof *buffer));
-                for( i=0; i<=len; ++i)
-                        buffer[i] = msg[i];
-                OutputDebugStringW(L"Fatal Python error: ");
-                OutputDebugStringW(buffer);
-                OutputDebugStringW(L"\n");
-        }
+        /* Convert the message to wchar_t. This uses a simple one-to-one
+           conversion, assuming that the this error message actually uses ASCII
+           only. If this ceases to be true, we will have to convert. */
+        buffer = alloca( (len+1) * (sizeof *buffer));
+        for( i=0; i<=len; ++i)
+            buffer[i] = msg[i];
+        OutputDebugStringW(L"Fatal Python error: ");
+        OutputDebugStringW(buffer);
+        OutputDebugStringW(L"\n");
+    }
 #ifdef _DEBUG
-        DebugBreak();
+    DebugBreak();
 #endif
 #endif /* MS_WINDOWS */
-        abort();
+    abort();
 }
diff --git a/pypy/module/cpyext/src/stringobject.c b/pypy/module/cpyext/src/stringobject.c
--- a/pypy/module/cpyext/src/stringobject.c
+++ b/pypy/module/cpyext/src/stringobject.c
@@ -4,246 +4,247 @@
 PyObject *
 PyString_FromFormatV(const char *format, va_list vargs)
 {
-        va_list count;
-        Py_ssize_t n = 0;
-        const char* f;
-        char *s;
-        PyObject* string;
+    va_list count;
+    Py_ssize_t n = 0;
+    const char* f;
+    char *s;
+    PyObject* string;

 #ifdef VA_LIST_IS_ARRAY
-        Py_MEMCPY(count, vargs, sizeof(va_list));
+    Py_MEMCPY(count, vargs, sizeof(va_list));
 #else
 #ifdef __va_copy
-        __va_copy(count, vargs);
+    __va_copy(count, vargs);
 #else
-        count = vargs;
+    count = vargs;
 #endif
 #endif

-        /* step 1: figure out how large a buffer we need */
-        for (f = format; *f; f++) {
-                if (*f == '%') {
+    /* step 1: figure out how large a buffer we need */
+    for (f = format; *f; f++) {
+        if (*f == '%') {
 #ifdef HAVE_LONG_LONG
-                        int longlongflag = 0;
+            int longlongflag = 0;
 #endif
-                        const char* p = f;
-                        while (*++f && *f != '%' && !isalpha(Py_CHARMASK(*f)))
-                                ;
+            const char* p = f;
+            while (*++f && *f != '%' && !isalpha(Py_CHARMASK(*f)))
+                ;

-                        /* skip the 'l' or 'z' in {%ld, %zd, %lu, %zu} since
-                         * they don't affect the amount of space we reserve.
-                         */
-                        if (*f == 'l') {
-                                if (f[1] == 'd' || f[1] == 'u') {
-                                        ++f;
-                                }
+            /* skip the 'l' or 'z' in {%ld, %zd, %lu, %zu} since
+             * they don't affect the amount of space we reserve.
+             */
+            if (*f == 'l') {
+                if (f[1] == 'd' || f[1] == 'u') {
+                    ++f;
+                }
 #ifdef HAVE_LONG_LONG
-                                else if (f[1] == 'l' &&
-                                         (f[2] == 'd' || f[2] == 'u')) {
-                                        longlongflag = 1;
-                                        f += 2;
-                                }
+                else if (f[1] == 'l' &&
+                         (f[2] == 'd' || f[2] == 'u')) {
+                    longlongflag = 1;
+                    f += 2;
+                }
 #endif
-                        }
-                        else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) {
-                                ++f;
-                        }
+            }
+            else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) {
+                ++f;
+            }

-                        switch (*f) {
-                        case 'c':
-                                (void)va_arg(count, int);
-                                /* fall through... */
-                        case '%':
-                                n++;
-                                break;
-                        case 'd': case 'u': case 'i': case 'x':
-                                (void) va_arg(count, int);
+            switch (*f) {
+            case 'c':
+                (void)va_arg(count, int);
+                /* fall through... */
+            case '%':
+                n++;
+                break;
+            case 'd': case 'u': case 'i': case 'x':
+                (void) va_arg(count, int);
 #ifdef HAVE_LONG_LONG
-                                /* Need at most
-                                   ceil(log10(256)*SIZEOF_LONG_LONG) digits,
-                                   plus 1 for the sign.  53/22 is an upper
-                                   bound for log10(256). */
-                                if (longlongflag)
-                                        n += 2 + (SIZEOF_LONG_LONG*53-1) / 22;
-                                else
+                /* Need at most
+                   ceil(log10(256)*SIZEOF_LONG_LONG) digits,
+                   plus 1 for the sign.  53/22 is an upper
+                   bound for log10(256). */
+                if (longlongflag)
+                    n += 2 + (SIZEOF_LONG_LONG*53-1) / 22;
+                else
 #endif
-                                /* 20 bytes is enough to hold a 64-bit
-                                   integer.  Decimal takes the most
-                                   space.  This isn't enough for
-                                   octal. */
-                                n += 20;
+                /* 20 bytes is enough to hold a 64-bit
+                   integer.  Decimal takes the most
+                   space.  This isn't enough for
+                   octal. */
+                n += 20;

-                                break;
-                        case 's':
-                                s = va_arg(count, char*);
-                                n += strlen(s);
-                                break;
-                        case 'p':
-                                (void) va_arg(count, int);
-                                /* maximum 64-bit pointer representation:
-                                 * 0xffffffffffffffff
-                                 * so 19 characters is enough.
-                                 * XXX I count 18 -- what's the extra for?
-                                 */
-                                n += 19;
-                                break;
-                        default:
-                                /* if we stumble upon an unknown
-                                   formatting code, copy the rest of
-                                   the format string to the output
-                                   string. (we cannot just skip the
-                                   code, since there's no way to know
-                                   what's in the argument list) */
-                                n += strlen(p);
-                                goto expand;
-                        }
-                } else
-                        n++;
-        }
+                break;
+            case 's':
+                s = va_arg(count, char*);
+                n += strlen(s);
+                break;
+            case 'p':
+                (void) va_arg(count, int);
+                /* maximum 64-bit pointer representation:
+                 * 0xffffffffffffffff
+                 * so 19 characters is enough.
+                 * XXX I count 18 -- what's the extra for?
+                 */
+                n += 19;
+                break;
+            default:
+                /* if we stumble upon an unknown
+                   formatting code, copy the rest of
+                   the format string to the output
+                   string. (we cannot just skip the
+                   code, since there's no way to know
+                   what's in the argument list) */
+                n += strlen(p);
+                goto expand;
+            }
+        } else
+            n++;
+    }
 expand:
-        /* step 2: fill the buffer */
-        /* Since we've analyzed how much space we need for the worst case,
-           use sprintf directly instead of the slower PyOS_snprintf. */
-        string = PyString_FromStringAndSize(NULL, n);
-        if (!string)
-                return NULL;
+    /* step 2: fill the buffer */
+    /* Since we've analyzed how much space we need for the worst case,
+       use sprintf directly instead of the slower PyOS_snprintf. */
+    string = PyString_FromStringAndSize(NULL, n);
+    if (!string)
+        return NULL;

-        s = PyString_AsString(string);
+    s = PyString_AsString(string);

-        for (f = format; *f; f++) {
-                if (*f == '%') {
-                        const char* p = f++;
-                        Py_ssize_t i;
-                        int longflag = 0;
+    for (f = format; *f; f++) {
+        if (*f == '%') {
+            const char* p = f++;
+            Py_ssize_t i;
+            int longflag = 0;
 #ifdef HAVE_LONG_LONG
-                        int longlongflag = 0;
+            int longlongflag = 0;
 #endif
-                        int size_tflag = 0;
-                        /* parse the width.precision part (we're only
-                           interested in the precision value, if any) */
-                        n = 0;
-                        while (isdigit(Py_CHARMASK(*f)))
-                                n = (n*10) + *f++ - '0';
-                        if (*f == '.') {
-                                f++;
-                                n = 0;
-                                while (isdigit(Py_CHARMASK(*f)))
-                                        n = (n*10) + *f++ - '0';
-                        }
-                        while (*f && *f != '%' && !isalpha(Py_CHARMASK(*f)))
-                                f++;
-                        /* Handle %ld, %lu, %lld and %llu. */
-                        if (*f == 'l') {
-                                if (f[1] == 'd' || f[1] == 'u') {
-                                        longflag = 1;
-                                        ++f;
-                                }
+            int size_tflag = 0;
+            /* parse the width.precision part (we're only
+               interested in the precision value, if any) */
+            n = 0;
+            while (isdigit(Py_CHARMASK(*f)))
+                n = (n*10) + *f++ - '0';
+            if (*f == '.') {
+                f++;
+                n = 0;
+                while (isdigit(Py_CHARMASK(*f)))
+                    n = (n*10) + *f++ - '0';
+            }
+            while (*f && *f != '%' && !isalpha(Py_CHARMASK(*f)))
+                f++;
+            /* Handle %ld, %lu, %lld and %llu. */
+            if (*f == 'l') {
+                if (f[1] == 'd' || f[1] == 'u') {
+                    longflag = 1;
+                    ++f;
+                }
 #ifdef HAVE_LONG_LONG
-                                else if (f[1] == 'l' &&
-                                         (f[2] == 'd' || f[2] == 'u')) {
-                                        longlongflag = 1;
-                                        f += 2;
-                                }
+                else if (f[1] == 'l' &&
+                         (f[2] == 'd' || f[2] == 'u')) {
+                    longlongflag = 1;
+                    f += 2;
+                }
 #endif
-                        }
-                        /* handle the size_t flag. */
-                        else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) {
-                                size_tflag = 1;
-                                ++f;
-                        }
+            }
+            /* handle the size_t flag. */
+            else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) {
+                size_tflag = 1;
+                ++f;
+            }

-                        switch (*f) {
-                        case 'c':
-                                *s++ = va_arg(vargs, int);
-                                break;
-                        case 'd':
-                                if (longflag)
-                                        sprintf(s, "%ld", va_arg(vargs, long));
+            switch (*f) {
+            case 'c':
+                *s++ = va_arg(vargs, int);
+                break;
+            case 'd':
+                if (longflag)
+                    sprintf(s, "%ld", va_arg(vargs, long));
 #ifdef HAVE_LONG_LONG
-                                else if (longlongflag)
-                                        sprintf(s, "%" PY_FORMAT_LONG_LONG "d",
-                                                va_arg(vargs, PY_LONG_LONG));
+                else if (longlongflag)
+                    sprintf(s, "%" PY_FORMAT_LONG_LONG "d",
+                        va_arg(vargs, PY_LONG_LONG));
 #endif
-                                else if (size_tflag)
-                                        sprintf(s, "%" PY_FORMAT_SIZE_T "d",
-                                                va_arg(vargs, Py_ssize_t));
-                                else
-                                        sprintf(s, "%d", va_arg(vargs, int));
-                                s += strlen(s);
-                                break;
-                        case 'u':
-                                if (longflag)
-                                        sprintf(s, "%lu",
-                                                va_arg(vargs, unsigned long));
+                else if (size_tflag)
+                    sprintf(s, "%" PY_FORMAT_SIZE_T "d",
+                        va_arg(vargs, Py_ssize_t));
+                else
+                    sprintf(s, "%d", va_arg(vargs, int));
+                s += strlen(s);
+                break;
+            case 'u':
+                if (longflag)
+                    sprintf(s, "%lu",
+                        va_arg(vargs, unsigned long));
 #ifdef HAVE_LONG_LONG
-                                else if (longlongflag)
-                                        sprintf(s, "%" PY_FORMAT_LONG_LONG "u",
-                                                va_arg(vargs, PY_LONG_LONG));
+                else if (longlongflag)
+                    sprintf(s, "%" PY_FORMAT_LONG_LONG "u",
+                        va_arg(vargs, PY_LONG_LONG));
 #endif
-                                else if (size_tflag)
-                                        sprintf(s, "%" PY_FORMAT_SIZE_T "u",
-                                                va_arg(vargs, size_t));
-                                else
-                                        sprintf(s, "%u",
-                                                va_arg(vargs, unsigned int));
-                                s += strlen(s);
-                                break;
-                        case 'i':
-                                sprintf(s, "%i", va_arg(vargs, int));
-                                s += strlen(s);
-                                break;
-                        case 'x':
-                                sprintf(s, "%x", va_arg(vargs, int));
-                                s += strlen(s);
-                                break;
-                        case 's':
-                                p = va_arg(vargs, char*);
-                                i = strlen(p);
-                                if (n > 0 && i > n)
-                                        i = n;
-                                Py_MEMCPY(s, p, i);
-                                s += i;
-                                break;
-                        case 'p':
-                                sprintf(s, "%p", va_arg(vargs, void*));
-                                /* %p is ill-defined:  ensure leading 0x. */
-                                if (s[1] == 'X')
-                                        s[1] = 'x';
-                                else if (s[1] != 'x') {
-                                        memmove(s+2, s, strlen(s)+1);
-                                        s[0] = '0';
-                                        s[1] = 'x';
-                                }
-                                s += strlen(s);
-                                break;
-                        case '%':
-                                *s++ = '%';
-                                break;
-                        default:
-                                strcpy(s, p);
-                                s += strlen(s);
-                                goto end;
-                        }
-                } else
-                        *s++ = *f;
-        }
+                else if (size_tflag)
+                    sprintf(s, "%" PY_FORMAT_SIZE_T "u",
+                        va_arg(vargs, size_t));
+                else
+                    sprintf(s, "%u",
+                        va_arg(vargs, unsigned int));
+                s += strlen(s);
+                break;
+            case 'i':
+                sprintf(s, "%i", va_arg(vargs, int));
+                s += strlen(s);
+                break;
+            case 'x':
+                sprintf(s, "%x", va_arg(vargs, int));
+                s += strlen(s);
+                break;
+            case 's':
+                p = va_arg(vargs, char*);
+                i = strlen(p);
+                if (n > 0 && i > n)
+                    i = n;
+                Py_MEMCPY(s, p, i);
+                s += i;
+                break;
+            case 'p':
+                sprintf(s, "%p", va_arg(vargs, void*));
+                /* %p is ill-defined:  ensure leading 0x. */
+                if (s[1] == 'X')
+                    s[1] = 'x';
+                else if (s[1] != 'x') {
+                    memmove(s+2, s, strlen(s)+1);
+                    s[0] = '0';
+                    s[1] = 'x';
+                }
+                s += strlen(s);
+                break;
+            case '%':
+                *s++ = '%';
+                break;
+            default:
+                strcpy(s, p);
+                s += strlen(s);
+                goto end;
+            }
+        } else
+            *s++ = *f;
+    }
 end:
-        _PyString_Resize(&string, s - PyString_AS_STRING(string));
-        return string;
+    if (_PyString_Resize(&string, s - PyString_AS_STRING(string)))
+        return NULL;
+    return string;
 }

 PyObject *
 PyString_FromFormat(const char *format, ...)
 {
-        PyObject* ret;
-        va_list vargs;
+    PyObject* ret;
+    va_list vargs;

 #ifdef HAVE_STDARG_PROTOTYPES
-        va_start(vargs, format);
+    va_start(vargs, format);
 #else
-        va_start(vargs);
+    va_start(vargs);
 #endif
-        ret = PyString_FromFormatV(format, vargs);
-        va_end(vargs);
-        return ret;
+    ret = PyString_FromFormatV(format, vargs);
+    va_end(vargs);
+    return ret;
 }
diff --git a/pypy/module/cpyext/src/structseq.c b/pypy/module/cpyext/src/structseq.c
--- a/pypy/module/cpyext/src/structseq.c
+++ b/pypy/module/cpyext/src/structseq.c
@@ -175,32 +175,33 @@
     if (min_len != max_len) {
         if (len < min_len) {
             PyErr_Format(PyExc_TypeError,
-                "%.500s() takes an at least %zd-sequence (%zd-sequence given)",
-                type->tp_name, min_len, len);
-            Py_DECREF(arg);
-            return NULL;
+                "%.500s() takes an at least %zd-sequence (%zd-sequence given)",
+                type->tp_name, min_len, len);
+            Py_DECREF(arg);
+            return NULL;
         }

         if (len > max_len) {
             PyErr_Format(PyExc_TypeError,
-                "%.500s() takes an at most %zd-sequence (%zd-sequence given)",
-                type->tp_name, max_len, len);
-            Py_DECREF(arg);
-            return NULL;
+                "%.500s() takes an at most %zd-sequence (%zd-sequence given)",
+                type->tp_name, max_len, len);
+            Py_DECREF(arg);
+            return NULL;
         }
     }
     else {
         if (len != min_len) {
             PyErr_Format(PyExc_TypeError,
-                "%.500s() takes a %zd-sequence (%zd-sequence given)",
-                type->tp_name, min_len, len);
-            Py_DECREF(arg);
-            return NULL;
+                "%.500s() takes a %zd-sequence (%zd-sequence given)",
+                type->tp_name, min_len, len);
+
Py_DECREF(arg); + return NULL; } } res = (PyStructSequence*) PyStructSequence_New(type); if (res == NULL) { + Py_DECREF(arg); return NULL; } for (i = 0; i < len; ++i) { diff --git a/pypy/module/cpyext/src/sysmodule.c b/pypy/module/cpyext/src/sysmodule.c --- a/pypy/module/cpyext/src/sysmodule.c +++ b/pypy/module/cpyext/src/sysmodule.c @@ -100,4 +100,3 @@ sys_write("stderr", stderr, format, va); va_end(va); } - diff --git a/pypy/module/cpyext/src/varargwrapper.c b/pypy/module/cpyext/src/varargwrapper.c --- a/pypy/module/cpyext/src/varargwrapper.c +++ b/pypy/module/cpyext/src/varargwrapper.c @@ -1,21 +1,25 @@ #include <Python.h> #include <stdarg.h> -PyObject * PyTuple_Pack(Py_ssize_t size, ...) +PyObject * +PyTuple_Pack(Py_ssize_t n, ...) { - va_list ap; - PyObject *cur, *tuple; - int i; + Py_ssize_t i; + PyObject *o; + PyObject *result; + va_list vargs; - tuple = PyTuple_New(size); - va_start(ap, size); - for (i = 0; i < size; i++) { - cur = va_arg(ap, PyObject*); - Py_INCREF(cur); - if (PyTuple_SetItem(tuple, i, cur) < 0) + va_start(vargs, n); + result = PyTuple_New(n); + if (result == NULL) + return NULL; + for (i = 0; i < n; i++) { + o = va_arg(vargs, PyObject *); + Py_INCREF(o); + if (PyTuple_SetItem(result, i, o) < 0) return NULL; } - va_end(ap); - return tuple; + va_end(vargs); + return result; } diff --git a/pypy/module/cpyext/structmember.py b/pypy/module/cpyext/structmember.py --- a/pypy/module/cpyext/structmember.py +++ b/pypy/module/cpyext/structmember.py @@ -10,7 +10,7 @@ PyString_FromString, PyString_FromStringAndSize) from pypy.module.cpyext.floatobject import PyFloat_AsDouble from pypy.module.cpyext.longobject import ( - PyLong_AsLongLong, PyLong_AsUnsignedLongLong) + PyLong_AsLongLong, PyLong_AsUnsignedLongLong, PyLong_AsSsize_t) from pypy.module.cpyext.typeobjectdefs import PyMemberDef from pypy.rlib.unroll import unrolling_iterable @@ -28,6 +28,7 @@ (T_DOUBLE, rffi.DOUBLE, PyFloat_AsDouble), (T_LONGLONG, rffi.LONGLONG, PyLong_AsLongLong), (T_ULONGLONG, rffi.ULONGLONG,
PyLong_AsUnsignedLongLong), + (T_PYSSIZET, rffi.SSIZE_T, PyLong_AsSsize_t), ]) diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -18,6 +18,7 @@ T_OBJECT_EX = 16 T_LONGLONG = 17 T_ULONGLONG = 18 +T_PYSSIZET = 19 READONLY = RO = 1 READ_RESTRICTED = 2 diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1405,13 +1405,6 @@ """ raise NotImplementedError - at cpython_api([Py_ssize_t], PyObject) -def PyLong_FromSsize_t(space, v): - """Return a new PyLongObject object from a C Py_ssize_t, or - NULL on failure. - """ - raise NotImplementedError - @cpython_api([rffi.SIZE_T], PyObject) def PyLong_FromSize_t(space, v): """Return a new PyLongObject object from a C size_t, or @@ -1431,14 +1424,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([PyObject], Py_ssize_t, error=-1) -def PyLong_AsSsize_t(space, pylong): - """Return a C Py_ssize_t representation of the contents of pylong. If - pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised - and -1 will be returned. - """ - raise NotImplementedError - @cpython_api([PyObject, rffi.CCHARP], rffi.INT_real, error=-1) def PyMapping_DelItemString(space, o, key): """Remove the mapping for object key from the object o. Return -1 on @@ -1980,35 +1965,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) -def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): - """Decode length bytes from a UTF-32 encoded buffer string and return the - corresponding Unicode object. errors (if non-NULL) defines the error - handling. It defaults to "strict". 
- - If byteorder is non-NULL, the decoder starts decoding using the given byte - order: - - *byteorder == -1: little endian - *byteorder == 0: native order - *byteorder == 1: big endian - - If *byteorder is zero, and the first four bytes of the input data are a - byte order mark (BOM), the decoder switches to this byte order and the BOM is - not copied into the resulting Unicode string. If *byteorder is -1 or - 1, any byte order mark is copied to the output. - - After completion, *byteorder is set to the current byte order at the end - of input data. - - In a narrow build codepoints outside the BMP will be decoded as surrogate pairs. - - If byteorder is NULL, the codec starts in native order mode. - - Return NULL if an exception was raised by the codec. - """ - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP, Py_ssize_t], PyObject) def PyUnicode_DecodeUTF32Stateful(space, s, size, errors, byteorder, consumed): """If consumed is NULL, behave like PyUnicode_DecodeUTF32(). 
If diff --git a/pypy/module/cpyext/test/_sre.c b/pypy/module/cpyext/test/_sre.c --- a/pypy/module/cpyext/test/_sre.c +++ b/pypy/module/cpyext/test/_sre.c @@ -81,9 +81,6 @@ #define PyObject_DEL(op) PyMem_DEL((op)) #endif -#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) -#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) - /* -------------------------------------------------------------------- */ #if defined(_MSC_VER) @@ -1689,7 +1686,7 @@ if (PyUnicode_Check(string)) { /* unicode strings doesn't always support the buffer interface */ ptr = (void*) PyUnicode_AS_DATA(string); - bytes = PyUnicode_GET_DATA_SIZE(string); + /* bytes = PyUnicode_GET_DATA_SIZE(string); */ size = PyUnicode_GET_SIZE(string); charsize = sizeof(Py_UNICODE); @@ -2601,46 +2598,22 @@ {NULL, NULL} }; -static PyObject* -pattern_getattr(PatternObject* self, char* name) -{ - PyObject* res; - - res = Py_FindMethod(pattern_methods, (PyObject*) self, name); - - if (res) - return res; - - PyErr_Clear(); - - /* attributes */ - if (!strcmp(name, "pattern")) { - Py_INCREF(self->pattern); - return self->pattern; - } - - if (!strcmp(name, "flags")) - return Py_BuildValue("i", self->flags); - - if (!strcmp(name, "groups")) - return Py_BuildValue("i", self->groups); - - if (!strcmp(name, "groupindex") && self->groupindex) { - Py_INCREF(self->groupindex); - return self->groupindex; - } - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; -} +#define PAT_OFF(x) offsetof(PatternObject, x) +static PyMemberDef pattern_members[] = { + {"pattern", T_OBJECT, PAT_OFF(pattern), READONLY}, + {"flags", T_INT, PAT_OFF(flags), READONLY}, + {"groups", T_PYSSIZET, PAT_OFF(groups), READONLY}, + {"groupindex", T_OBJECT, PAT_OFF(groupindex), READONLY}, + {NULL} /* Sentinel */ +}; statichere PyTypeObject Pattern_Type = { PyObject_HEAD_INIT(NULL) 0, "_" SRE_MODULE ".SRE_Pattern", sizeof(PatternObject), sizeof(SRE_CODE), (destructor)pattern_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)pattern_getattr, 
/*tp_getattr*/ + 0, /* tp_print */ + 0, /* tp_getattr */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ @@ -2653,12 +2626,16 @@ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ - Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ pattern_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(PatternObject, weakreflist), /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + pattern_methods, /* tp_methods */ + pattern_members, /* tp_members */ }; static int _validate(PatternObject *self); /* Forward */ @@ -2767,7 +2744,7 @@ #if defined(VVERBOSE) #define VTRACE(v) printf v #else -#define VTRACE(v) +#define VTRACE(v) do {} while(0) /* do nothing */ #endif /* Report failure */ @@ -2970,13 +2947,13 @@ <1=skip> <2=flags> <3=min> <4=max>; If SRE_INFO_PREFIX or SRE_INFO_CHARSET is in the flags, more follows. */ - SRE_CODE flags, min, max, i; + SRE_CODE flags, i; SRE_CODE *newcode; GET_SKIP; newcode = code+skip-1; GET_ARG; flags = arg; - GET_ARG; min = arg; - GET_ARG; max = arg; + GET_ARG; /* min */ + GET_ARG; /* max */ /* Check that only valid flags are present */ if ((flags & ~(SRE_INFO_PREFIX | SRE_INFO_LITERAL | @@ -2992,9 +2969,9 @@ FAIL; /* Validate the prefix */ if (flags & SRE_INFO_PREFIX) { - SRE_CODE prefix_len, prefix_skip; + SRE_CODE prefix_len; GET_ARG; prefix_len = arg; - GET_ARG; prefix_skip = arg; + GET_ARG; /* prefix skip */ /* Here comes the prefix string */ if (code+prefix_len < code || code+prefix_len > newcode) FAIL; @@ -3565,7 +3542,7 @@ #endif } -static PyMethodDef match_methods[] = { +static struct PyMethodDef match_methods[] = { {"group", (PyCFunction) match_group, METH_VARARGS}, {"start", (PyCFunction) match_start, METH_VARARGS}, {"end", (PyCFunction) match_end, METH_VARARGS}, @@ -3578,80 +3555,90 @@ {NULL, NULL} }; -static PyObject* -match_getattr(MatchObject* self, char* name) +static PyObject * +match_lastindex_get(MatchObject *self) { -
PyObject* res; - - res = Py_FindMethod(match_methods, (PyObject*) self, name); - if (res) - return res; - - PyErr_Clear(); - - if (!strcmp(name, "lastindex")) { - if (self->lastindex >= 0) - return Py_BuildValue("i", self->lastindex); - Py_INCREF(Py_None); - return Py_None; + if (self->lastindex >= 0) + return Py_BuildValue("i", self->lastindex); + Py_INCREF(Py_None); + return Py_None; +} + +static PyObject * +match_lastgroup_get(MatchObject *self) +{ + if (self->pattern->indexgroup && self->lastindex >= 0) { + PyObject* result = PySequence_GetItem( + self->pattern->indexgroup, self->lastindex + ); + if (result) + return result; + PyErr_Clear(); } - - if (!strcmp(name, "lastgroup")) { - if (self->pattern->indexgroup && self->lastindex >= 0) { - PyObject* result = PySequence_GetItem( - self->pattern->indexgroup, self->lastindex - ); - if (result) - return result; - PyErr_Clear(); - } - Py_INCREF(Py_None); - return Py_None; - } - - if (!strcmp(name, "string")) { - if (self->string) { - Py_INCREF(self->string); - return self->string; - } else { - Py_INCREF(Py_None); - return Py_None; - } - } - - if (!strcmp(name, "regs")) { - if (self->regs) { - Py_INCREF(self->regs); - return self->regs; - } else - return match_regs(self); - } - - if (!strcmp(name, "re")) { - Py_INCREF(self->pattern); - return (PyObject*) self->pattern; - } - - if (!strcmp(name, "pos")) - return Py_BuildValue("i", self->pos); - - if (!strcmp(name, "endpos")) - return Py_BuildValue("i", self->endpos); - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; + Py_INCREF(Py_None); + return Py_None; } +static PyObject * +match_regs_get(MatchObject *self) +{ + if (self->regs) { + Py_INCREF(self->regs); + return self->regs; + } else + return match_regs(self); +} + +static PyGetSetDef match_getset[] = { + {"lastindex", (getter)match_lastindex_get, (setter)NULL}, + {"lastgroup", (getter)match_lastgroup_get, (setter)NULL}, + {"regs", (getter)match_regs_get, (setter)NULL}, + {NULL} +}; + +#define 
MATCH_OFF(x) offsetof(MatchObject, x) +static PyMemberDef match_members[] = { + {"string", T_OBJECT, MATCH_OFF(string), READONLY}, + {"re", T_OBJECT, MATCH_OFF(pattern), READONLY}, + {"pos", T_PYSSIZET, MATCH_OFF(pos), READONLY}, + {"endpos", T_PYSSIZET, MATCH_OFF(endpos), READONLY}, + {NULL} +}; + + /* FIXME: implement setattr("string", None) as a special case (to detach the associated string, if any */ -statichere PyTypeObject Match_Type = { - PyObject_HEAD_INIT(NULL) - 0, "_" SRE_MODULE ".SRE_Match", +static PyTypeObject Match_Type = { + PyVarObject_HEAD_INIT(NULL, 0) + "_" SRE_MODULE ".SRE_Match", sizeof(MatchObject), sizeof(Py_ssize_t), - (destructor)match_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)match_getattr /*tp_getattr*/ + (destructor)match_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + match_methods, /* tp_methods */ + match_members, /* tp_members */ + match_getset, /* tp_getset */ }; static PyObject* @@ -3800,34 +3787,42 @@ {NULL, NULL} }; -static PyObject* -scanner_getattr(ScannerObject* self, char* name) -{ - PyObject* res; - - res = Py_FindMethod(scanner_methods, (PyObject*) self, name); - if (res) - return res; - - PyErr_Clear(); - - /* attributes */ - if (!strcmp(name, "pattern")) { - Py_INCREF(self->pattern); - return self->pattern; - } - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; -} +#define SCAN_OFF(x) offsetof(ScannerObject, x) +static PyMemberDef scanner_members[] = { + {"pattern", T_OBJECT, SCAN_OFF(pattern), READONLY}, + {NULL} /* Sentinel */ +}; 
statichere PyTypeObject Scanner_Type = { PyObject_HEAD_INIT(NULL) 0, "_" SRE_MODULE ".SRE_Scanner", sizeof(ScannerObject), 0, (destructor)scanner_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)scanner_getattr, /*tp_getattr*/ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_reserved */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + scanner_methods, /* tp_methods */ + scanner_members, /* tp_members */ + 0, /* tp_getset */ }; static PyObject* @@ -3879,8 +3874,9 @@ PyObject* x; /* Patch object types */ - Pattern_Type.ob_type = Match_Type.ob_type = - Scanner_Type.ob_type = &PyType_Type; + if (PyType_Ready(&Pattern_Type) || PyType_Ready(&Match_Type) || + PyType_Ready(&Scanner_Type)) + return; m = Py_InitModule("_" SRE_MODULE, _functions); if (m == NULL) diff --git a/pypy/module/cpyext/test/array.c b/pypy/module/cpyext/test/array.c --- a/pypy/module/cpyext/test/array.c +++ b/pypy/module/cpyext/test/array.c @@ -11,13 +11,10 @@ #include <stddef.h> #else /* !STDC_HEADERS */ #ifdef HAVE_SYS_TYPES_H -#include <sys/types.h> /* For size_t */ +#include <sys/types.h> /* For size_t */ #endif /* HAVE_SYS_TYPES_H */ #endif /* !STDC_HEADERS */ -#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) -#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) - struct arrayobject; /* Forward */ /* All possible arraydescr values are defined in the vector "descriptors" @@ -25,18 +22,18 @@ * functions aren't visible yet.
*/ struct arraydescr { - int typecode; - int itemsize; - PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); - int (*setitem)(struct arrayobject *, Py_ssize_t, PyObject *); + int typecode; + int itemsize; + PyObject * (*getitem)(struct arrayobject *, Py_ssize_t); + int (*setitem)(struct arrayobject *, Py_ssize_t, PyObject *); }; typedef struct arrayobject { - PyObject_VAR_HEAD - char *ob_item; - Py_ssize_t allocated; - struct arraydescr *ob_descr; - PyObject *weakreflist; /* List of weak references */ + PyObject_VAR_HEAD + char *ob_item; + Py_ssize_t allocated; + struct arraydescr *ob_descr; + PyObject *weakreflist; /* List of weak references */ } arrayobject; static PyTypeObject Arraytype; @@ -47,49 +44,49 @@ static int array_resize(arrayobject *self, Py_ssize_t newsize) { - char *items; - size_t _new_size; + char *items; + size_t _new_size; - /* Bypass realloc() when a previous overallocation is large enough - to accommodate the newsize. If the newsize is 16 smaller than the - current size, then proceed with the realloc() to shrink the list. - */ + /* Bypass realloc() when a previous overallocation is large enough + to accommodate the newsize. If the newsize is 16 smaller than the + current size, then proceed with the realloc() to shrink the list. + */ - if (self->allocated >= newsize && - Py_SIZE(self) < newsize + 16 && - self->ob_item != NULL) { - Py_SIZE(self) = newsize; - return 0; - } + if (self->allocated >= newsize && + Py_SIZE(self) < newsize + 16 && + self->ob_item != NULL) { + Py_SIZE(self) = newsize; + return 0; + } - /* This over-allocates proportional to the array size, making room - * for additional growth. The over-allocation is mild, but is - * enough to give linear-time amortized behavior over a long - * sequence of appends() in the presence of a poorly-performing - * system realloc(). - * The growth pattern is: 0, 4, 8, 16, 25, 34, 46, 56, 67, 79, ... 
- * Note, the pattern starts out the same as for lists but then - * grows at a smaller rate so that larger arrays only overallocate - * by about 1/16th -- this is done because arrays are presumed to be more - * memory critical. - */ + /* This over-allocates proportional to the array size, making room + * for additional growth. The over-allocation is mild, but is + * enough to give linear-time amortized behavior over a long + * sequence of appends() in the presence of a poorly-performing + * system realloc(). + * The growth pattern is: 0, 4, 8, 16, 25, 34, 46, 56, 67, 79, ... + * Note, the pattern starts out the same as for lists but then + * grows at a smaller rate so that larger arrays only overallocate + * by about 1/16th -- this is done because arrays are presumed to be more + * memory critical. + */ - _new_size = (newsize >> 4) + (Py_SIZE(self) < 8 ? 3 : 7) + newsize; - items = self->ob_item; - /* XXX The following multiplication and division does not optimize away - like it does for lists since the size is not known at compile time */ - if (_new_size <= ((~(size_t)0) / self->ob_descr->itemsize)) - PyMem_RESIZE(items, char, (_new_size * self->ob_descr->itemsize)); - else - items = NULL; - if (items == NULL) { - PyErr_NoMemory(); - return -1; - } - self->ob_item = items; - Py_SIZE(self) = newsize; - self->allocated = _new_size; - return 0; + _new_size = (newsize >> 4) + (Py_SIZE(self) < 8 ? 
3 : 7) + newsize; + items = self->ob_item; + /* XXX The following multiplication and division does not optimize away + like it does for lists since the size is not known at compile time */ + if (_new_size <= ((~(size_t)0) / self->ob_descr->itemsize)) + PyMem_RESIZE(items, char, (_new_size * self->ob_descr->itemsize)); + else + items = NULL; + if (items == NULL) { + PyErr_NoMemory(); + return -1; + } + self->ob_item = items; + Py_SIZE(self) = newsize; + self->allocated = _new_size; + return 0; } /**************************************************************************** @@ -107,308 +104,308 @@ static PyObject * c_getitem(arrayobject *ap, Py_ssize_t i) { - return PyString_FromStringAndSize(&((char *)ap->ob_item)[i], 1); + return PyString_FromStringAndSize(&((char *)ap->ob_item)[i], 1); } static int c_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - char x; - if (!PyArg_Parse(v, "c;array item must be char", &x)) - return -1; - if (i >= 0) - ((char *)ap->ob_item)[i] = x; - return 0; + char x; + if (!PyArg_Parse(v, "c;array item must be char", &x)) + return -1; + if (i >= 0) + ((char *)ap->ob_item)[i] = x; + return 0; } static PyObject * b_getitem(arrayobject *ap, Py_ssize_t i) { - long x = ((char *)ap->ob_item)[i]; - if (x >= 128) - x -= 256; - return PyInt_FromLong(x); + long x = ((char *)ap->ob_item)[i]; + if (x >= 128) + x -= 256; + return PyInt_FromLong(x); } static int b_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - short x; - /* PyArg_Parse's 'b' formatter is for an unsigned char, therefore - must use the next size up that is signed ('h') and manually do - the overflow checking */ - if (!PyArg_Parse(v, "h;array item must be integer", &x)) - return -1; - else if (x < -128) { - PyErr_SetString(PyExc_OverflowError, - "signed char is less than minimum"); - return -1; - } - else if (x > 127) { - PyErr_SetString(PyExc_OverflowError, - "signed char is greater than maximum"); - return -1; - } - if (i >= 0) - ((char *)ap->ob_item)[i] = (char)x; - return 
0; + short x; + /* PyArg_Parse's 'b' formatter is for an unsigned char, therefore + must use the next size up that is signed ('h') and manually do + the overflow checking */ + if (!PyArg_Parse(v, "h;array item must be integer", &x)) + return -1; + else if (x < -128) { + PyErr_SetString(PyExc_OverflowError, + "signed char is less than minimum"); + return -1; + } + else if (x > 127) { + PyErr_SetString(PyExc_OverflowError, + "signed char is greater than maximum"); + return -1; + } + if (i >= 0) + ((char *)ap->ob_item)[i] = (char)x; + return 0; } static PyObject * BB_getitem(arrayobject *ap, Py_ssize_t i) { - long x = ((unsigned char *)ap->ob_item)[i]; - return PyInt_FromLong(x); + long x = ((unsigned char *)ap->ob_item)[i]; + return PyInt_FromLong(x); } static int BB_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned char x; - /* 'B' == unsigned char, maps to PyArg_Parse's 'b' formatter */ - if (!PyArg_Parse(v, "b;array item must be integer", &x)) - return -1; - if (i >= 0) - ((char *)ap->ob_item)[i] = x; - return 0; + unsigned char x; + /* 'B' == unsigned char, maps to PyArg_Parse's 'b' formatter */ + if (!PyArg_Parse(v, "b;array item must be integer", &x)) + return -1; + if (i >= 0) + ((char *)ap->ob_item)[i] = x; + return 0; } #ifdef Py_USING_UNICODE static PyObject * u_getitem(arrayobject *ap, Py_ssize_t i) { - return PyUnicode_FromUnicode(&((Py_UNICODE *) ap->ob_item)[i], 1); + return PyUnicode_FromUnicode(&((Py_UNICODE *) ap->ob_item)[i], 1); } static int u_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - Py_UNICODE *p; - Py_ssize_t len; + Py_UNICODE *p; + Py_ssize_t len; - if (!PyArg_Parse(v, "u#;array item must be unicode character", &p, &len)) - return -1; - if (len != 1) { - PyErr_SetString(PyExc_TypeError, - "array item must be unicode character"); - return -1; - } - if (i >= 0) - ((Py_UNICODE *)ap->ob_item)[i] = p[0]; - return 0; + if (!PyArg_Parse(v, "u#;array item must be unicode character", &p, &len)) + return -1; + if (len != 1) { 
+ PyErr_SetString(PyExc_TypeError, + "array item must be unicode character"); + return -1; + } + if (i >= 0) + ((Py_UNICODE *)ap->ob_item)[i] = p[0]; + return 0; } #endif static PyObject * h_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((short *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((short *)ap->ob_item)[i]); } static int h_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - short x; - /* 'h' == signed short, maps to PyArg_Parse's 'h' formatter */ - if (!PyArg_Parse(v, "h;array item must be integer", &x)) - return -1; - if (i >= 0) - ((short *)ap->ob_item)[i] = x; - return 0; + short x; + /* 'h' == signed short, maps to PyArg_Parse's 'h' formatter */ + if (!PyArg_Parse(v, "h;array item must be integer", &x)) + return -1; + if (i >= 0) + ((short *)ap->ob_item)[i] = x; + return 0; } static PyObject * HH_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((unsigned short *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((unsigned short *)ap->ob_item)[i]); } static int HH_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - int x; - /* PyArg_Parse's 'h' formatter is for a signed short, therefore - must use the next size up and manually do the overflow checking */ - if (!PyArg_Parse(v, "i;array item must be integer", &x)) - return -1; - else if (x < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned short is less than minimum"); - return -1; - } - else if (x > USHRT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned short is greater than maximum"); - return -1; - } - if (i >= 0) - ((short *)ap->ob_item)[i] = (short)x; - return 0; + int x; + /* PyArg_Parse's 'h' formatter is for a signed short, therefore + must use the next size up and manually do the overflow checking */ + if (!PyArg_Parse(v, "i;array item must be integer", &x)) + return -1; + else if (x < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned short is less than minimum"); + return -1; + } + else if (x > USHRT_MAX) { + 
PyErr_SetString(PyExc_OverflowError, + "unsigned short is greater than maximum"); + return -1; + } + if (i >= 0) + ((short *)ap->ob_item)[i] = (short)x; + return 0; } static PyObject * i_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong((long) ((int *)ap->ob_item)[i]); + return PyInt_FromLong((long) ((int *)ap->ob_item)[i]); } static int i_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - int x; - /* 'i' == signed int, maps to PyArg_Parse's 'i' formatter */ - if (!PyArg_Parse(v, "i;array item must be integer", &x)) - return -1; - if (i >= 0) - ((int *)ap->ob_item)[i] = x; - return 0; + int x; + /* 'i' == signed int, maps to PyArg_Parse's 'i' formatter */ + if (!PyArg_Parse(v, "i;array item must be integer", &x)) + return -1; + if (i >= 0) + ((int *)ap->ob_item)[i] = x; + return 0; } static PyObject * II_getitem(arrayobject *ap, Py_ssize_t i) { - return PyLong_FromUnsignedLong( - (unsigned long) ((unsigned int *)ap->ob_item)[i]); + return PyLong_FromUnsignedLong( + (unsigned long) ((unsigned int *)ap->ob_item)[i]); } static int II_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned long x; - if (PyLong_Check(v)) { - x = PyLong_AsUnsignedLong(v); - if (x == (unsigned long) -1 && PyErr_Occurred()) - return -1; - } - else { - long y; - if (!PyArg_Parse(v, "l;array item must be integer", &y)) - return -1; - if (y < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned int is less than minimum"); - return -1; - } - x = (unsigned long)y; + unsigned long x; + if (PyLong_Check(v)) { + x = PyLong_AsUnsignedLong(v); + if (x == (unsigned long) -1 && PyErr_Occurred()) + return -1; + } + else { + long y; + if (!PyArg_Parse(v, "l;array item must be integer", &y)) + return -1; + if (y < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned int is less than minimum"); + return -1; + } + x = (unsigned long)y; - } - if (x > UINT_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned int is greater than maximum"); - return -1; - } + } + if (x > 
UINT_MAX) { + PyErr_SetString(PyExc_OverflowError, + "unsigned int is greater than maximum"); + return -1; + } - if (i >= 0) - ((unsigned int *)ap->ob_item)[i] = (unsigned int)x; - return 0; + if (i >= 0) + ((unsigned int *)ap->ob_item)[i] = (unsigned int)x; + return 0; } static PyObject * l_getitem(arrayobject *ap, Py_ssize_t i) { - return PyInt_FromLong(((long *)ap->ob_item)[i]); + return PyInt_FromLong(((long *)ap->ob_item)[i]); } static int l_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - long x; - if (!PyArg_Parse(v, "l;array item must be integer", &x)) - return -1; - if (i >= 0) - ((long *)ap->ob_item)[i] = x; - return 0; + long x; + if (!PyArg_Parse(v, "l;array item must be integer", &x)) + return -1; + if (i >= 0) + ((long *)ap->ob_item)[i] = x; + return 0; } static PyObject * LL_getitem(arrayobject *ap, Py_ssize_t i) { - return PyLong_FromUnsignedLong(((unsigned long *)ap->ob_item)[i]); + return PyLong_FromUnsignedLong(((unsigned long *)ap->ob_item)[i]); } static int LL_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - unsigned long x; - if (PyLong_Check(v)) { - x = PyLong_AsUnsignedLong(v); - if (x == (unsigned long) -1 && PyErr_Occurred()) - return -1; - } - else { - long y; - if (!PyArg_Parse(v, "l;array item must be integer", &y)) - return -1; - if (y < 0) { - PyErr_SetString(PyExc_OverflowError, - "unsigned long is less than minimum"); - return -1; - } - x = (unsigned long)y; + unsigned long x; + if (PyLong_Check(v)) { + x = PyLong_AsUnsignedLong(v); + if (x == (unsigned long) -1 && PyErr_Occurred()) + return -1; + } + else { + long y; + if (!PyArg_Parse(v, "l;array item must be integer", &y)) + return -1; + if (y < 0) { + PyErr_SetString(PyExc_OverflowError, + "unsigned long is less than minimum"); + return -1; + } + x = (unsigned long)y; - } - if (x > ULONG_MAX) { - PyErr_SetString(PyExc_OverflowError, - "unsigned long is greater than maximum"); - return -1; - } + } + if (x > ULONG_MAX) { + PyErr_SetString(PyExc_OverflowError, + 
"unsigned long is greater than maximum"); + return -1; + } - if (i >= 0) - ((unsigned long *)ap->ob_item)[i] = x; - return 0; + if (i >= 0) + ((unsigned long *)ap->ob_item)[i] = x; + return 0; } static PyObject * f_getitem(arrayobject *ap, Py_ssize_t i) { - return PyFloat_FromDouble((double) ((float *)ap->ob_item)[i]); + return PyFloat_FromDouble((double) ((float *)ap->ob_item)[i]); } static int f_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - float x; - if (!PyArg_Parse(v, "f;array item must be float", &x)) - return -1; - if (i >= 0) - ((float *)ap->ob_item)[i] = x; - return 0; + float x; + if (!PyArg_Parse(v, "f;array item must be float", &x)) + return -1; + if (i >= 0) + ((float *)ap->ob_item)[i] = x; + return 0; } static PyObject * d_getitem(arrayobject *ap, Py_ssize_t i) { - return PyFloat_FromDouble(((double *)ap->ob_item)[i]); + return PyFloat_FromDouble(((double *)ap->ob_item)[i]); } static int d_setitem(arrayobject *ap, Py_ssize_t i, PyObject *v) { - double x; - if (!PyArg_Parse(v, "d;array item must be float", &x)) - return -1; - if (i >= 0) - ((double *)ap->ob_item)[i] = x; - return 0; + double x; + if (!PyArg_Parse(v, "d;array item must be float", &x)) + return -1; + if (i >= 0) + ((double *)ap->ob_item)[i] = x; + return 0; } /* Description of types */ static struct arraydescr descriptors[] = { - {'c', sizeof(char), c_getitem, c_setitem}, - {'b', sizeof(char), b_getitem, b_setitem}, - {'B', sizeof(char), BB_getitem, BB_setitem}, + {'c', sizeof(char), c_getitem, c_setitem}, + {'b', sizeof(char), b_getitem, b_setitem}, + {'B', sizeof(char), BB_getitem, BB_setitem}, #ifdef Py_USING_UNICODE - {'u', sizeof(Py_UNICODE), u_getitem, u_setitem}, + {'u', sizeof(Py_UNICODE), u_getitem, u_setitem}, #endif - {'h', sizeof(short), h_getitem, h_setitem}, - {'H', sizeof(short), HH_getitem, HH_setitem}, - {'i', sizeof(int), i_getitem, i_setitem}, - {'I', sizeof(int), II_getitem, II_setitem}, - {'l', sizeof(long), l_getitem, l_setitem}, - {'L', sizeof(long), 
LL_getitem, LL_setitem}, - {'f', sizeof(float), f_getitem, f_setitem}, - {'d', sizeof(double), d_getitem, d_setitem}, - {'\0', 0, 0, 0} /* Sentinel */ + {'h', sizeof(short), h_getitem, h_setitem}, + {'H', sizeof(short), HH_getitem, HH_setitem}, + {'i', sizeof(int), i_getitem, i_setitem}, + {'I', sizeof(int), II_getitem, II_setitem}, + {'l', sizeof(long), l_getitem, l_setitem}, + {'L', sizeof(long), LL_getitem, LL_setitem}, + {'f', sizeof(float), f_getitem, f_setitem}, + {'d', sizeof(double), d_getitem, d_setitem}, + {'\0', 0, 0, 0} /* Sentinel */ }; /**************************************************************************** @@ -418,78 +415,78 @@ static PyObject * newarrayobject(PyTypeObject *type, Py_ssize_t size, struct arraydescr *descr) { - arrayobject *op; - size_t nbytes; + arrayobject *op; + size_t nbytes; - if (size < 0) { - PyErr_BadInternalCall(); - return NULL; - } + if (size < 0) { + PyErr_BadInternalCall(); + return NULL; + } - nbytes = size * descr->itemsize; - /* Check for overflow */ - if (nbytes / descr->itemsize != (size_t)size) { - return PyErr_NoMemory(); - } - op = (arrayobject *) type->tp_alloc(type, 0); - if (op == NULL) { - return NULL; - } - op->ob_descr = descr; - op->allocated = size; - op->weakreflist = NULL; - Py_SIZE(op) = size; - if (size <= 0) { - op->ob_item = NULL; - } - else { - op->ob_item = PyMem_NEW(char, nbytes); - if (op->ob_item == NULL) { - Py_DECREF(op); - return PyErr_NoMemory(); - } - } - return (PyObject *) op; + nbytes = size * descr->itemsize; + /* Check for overflow */ + if (nbytes / descr->itemsize != (size_t)size) { + return PyErr_NoMemory(); + } + op = (arrayobject *) type->tp_alloc(type, 0); + if (op == NULL) { + return NULL; + } + op->ob_descr = descr; + op->allocated = size; + op->weakreflist = NULL; + Py_SIZE(op) = size; + if (size <= 0) { + op->ob_item = NULL; + } + else { + op->ob_item = PyMem_NEW(char, nbytes); + if (op->ob_item == NULL) { + Py_DECREF(op); + return PyErr_NoMemory(); + } + } + return 
(PyObject *) op; } static PyObject * getarrayitem(PyObject *op, Py_ssize_t i) { - register arrayobject *ap; - assert(array_Check(op)); - ap = (arrayobject *)op; - assert(i>=0 && iob_descr->getitem)(ap, i); + register arrayobject *ap; + assert(array_Check(op)); + ap = (arrayobject *)op; + assert(i>=0 && iob_descr->getitem)(ap, i); } static int ins1(arrayobject *self, Py_ssize_t where, PyObject *v) { - char *items; - Py_ssize_t n = Py_SIZE(self); - if (v == NULL) { - PyErr_BadInternalCall(); - return -1; - } - if ((*self->ob_descr->setitem)(self, -1, v) < 0) - return -1; + char *items; + Py_ssize_t n = Py_SIZE(self); + if (v == NULL) { + PyErr_BadInternalCall(); + return -1; + } + if ((*self->ob_descr->setitem)(self, -1, v) < 0) + return -1; - if (array_resize(self, n+1) == -1) - return -1; - items = self->ob_item; - if (where < 0) { - where += n; - if (where < 0) - where = 0; - } - if (where > n) - where = n; - /* appends don't need to call memmove() */ - if (where != n) - memmove(items + (where+1)*self->ob_descr->itemsize, - items + where*self->ob_descr->itemsize, - (n-where)*self->ob_descr->itemsize); - return (*self->ob_descr->setitem)(self, where, v); + if (array_resize(self, n+1) == -1) + return -1; + items = self->ob_item; + if (where < 0) { + where += n; + if (where < 0) + where = 0; + } + if (where > n) + where = n; + /* appends don't need to call memmove() */ + if (where != n) + memmove(items + (where+1)*self->ob_descr->itemsize, + items + where*self->ob_descr->itemsize, + (n-where)*self->ob_descr->itemsize); + return (*self->ob_descr->setitem)(self, where, v); } /* Methods */ @@ -497,141 +494,141 @@ static void array_dealloc(arrayobject *op) { - if (op->weakreflist != NULL) - PyObject_ClearWeakRefs((PyObject *) op); - if (op->ob_item != NULL) - PyMem_DEL(op->ob_item); - Py_TYPE(op)->tp_free((PyObject *)op); + if (op->weakreflist != NULL) + PyObject_ClearWeakRefs((PyObject *) op); + if (op->ob_item != NULL) + PyMem_DEL(op->ob_item); + 
Py_TYPE(op)->tp_free((PyObject *)op); } static PyObject * array_richcompare(PyObject *v, PyObject *w, int op) { - arrayobject *va, *wa; - PyObject *vi = NULL; - PyObject *wi = NULL; - Py_ssize_t i, k; - PyObject *res; + arrayobject *va, *wa; + PyObject *vi = NULL; + PyObject *wi = NULL; + Py_ssize_t i, k; + PyObject *res; - if (!array_Check(v) || !array_Check(w)) { - Py_INCREF(Py_NotImplemented); - return Py_NotImplemented; - } + if (!array_Check(v) || !array_Check(w)) { + Py_INCREF(Py_NotImplemented); + return Py_NotImplemented; + } - va = (arrayobject *)v; - wa = (arrayobject *)w; + va = (arrayobject *)v; + wa = (arrayobject *)w; - if (Py_SIZE(va) != Py_SIZE(wa) && (op == Py_EQ || op == Py_NE)) { - /* Shortcut: if the lengths differ, the arrays differ */ - if (op == Py_EQ) - res = Py_False; - else - res = Py_True; - Py_INCREF(res); - return res; - } + if (Py_SIZE(va) != Py_SIZE(wa) && (op == Py_EQ || op == Py_NE)) { + /* Shortcut: if the lengths differ, the arrays differ */ + if (op == Py_EQ) + res = Py_False; + else + res = Py_True; + Py_INCREF(res); + return res; + } - /* Search for the first index where items are different */ - k = 1; - for (i = 0; i < Py_SIZE(va) && i < Py_SIZE(wa); i++) { - vi = getarrayitem(v, i); - wi = getarrayitem(w, i); - if (vi == NULL || wi == NULL) { - Py_XDECREF(vi); - Py_XDECREF(wi); - return NULL; - } - k = PyObject_RichCompareBool(vi, wi, Py_EQ); - if (k == 0) - break; /* Keeping vi and wi alive! */ - Py_DECREF(vi); - Py_DECREF(wi); - if (k < 0) - return NULL; - } + /* Search for the first index where items are different */ + k = 1; + for (i = 0; i < Py_SIZE(va) && i < Py_SIZE(wa); i++) { + vi = getarrayitem(v, i); + wi = getarrayitem(w, i); + if (vi == NULL || wi == NULL) { + Py_XDECREF(vi); + Py_XDECREF(wi); + return NULL; + } + k = PyObject_RichCompareBool(vi, wi, Py_EQ); + if (k == 0) + break; /* Keeping vi and wi alive! 
*/ + Py_DECREF(vi); + Py_DECREF(wi); + if (k < 0) + return NULL; + } - if (k) { - /* No more items to compare -- compare sizes */ - Py_ssize_t vs = Py_SIZE(va); - Py_ssize_t ws = Py_SIZE(wa); - int cmp; - switch (op) { - case Py_LT: cmp = vs < ws; break; - case Py_LE: cmp = vs <= ws; break; - case Py_EQ: cmp = vs == ws; break; - case Py_NE: cmp = vs != ws; break; - case Py_GT: cmp = vs > ws; break; - case Py_GE: cmp = vs >= ws; break; - default: return NULL; /* cannot happen */ - } - if (cmp) - res = Py_True; - else - res = Py_False; - Py_INCREF(res); - return res; - } + if (k) { + /* No more items to compare -- compare sizes */ + Py_ssize_t vs = Py_SIZE(va); + Py_ssize_t ws = Py_SIZE(wa); + int cmp; + switch (op) { + case Py_LT: cmp = vs < ws; break; + case Py_LE: cmp = vs <= ws; break; + case Py_EQ: cmp = vs == ws; break; + case Py_NE: cmp = vs != ws; break; + case Py_GT: cmp = vs > ws; break; + case Py_GE: cmp = vs >= ws; break; + default: return NULL; /* cannot happen */ + } + if (cmp) + res = Py_True; + else + res = Py_False; + Py_INCREF(res); + return res; + } - /* We have an item that differs. First, shortcuts for EQ/NE */ - if (op == Py_EQ) { - Py_INCREF(Py_False); - res = Py_False; - } - else if (op == Py_NE) { - Py_INCREF(Py_True); - res = Py_True; - } - else { - /* Compare the final item again using the proper operator */ - res = PyObject_RichCompare(vi, wi, op); - } - Py_DECREF(vi); - Py_DECREF(wi); - return res; + /* We have an item that differs. 
First, shortcuts for EQ/NE */ + if (op == Py_EQ) { + Py_INCREF(Py_False); + res = Py_False; + } + else if (op == Py_NE) { + Py_INCREF(Py_True); + res = Py_True; + } + else { + /* Compare the final item again using the proper operator */ + res = PyObject_RichCompare(vi, wi, op); + } + Py_DECREF(vi); + Py_DECREF(wi); + return res; } static Py_ssize_t array_length(arrayobject *a) { - return Py_SIZE(a); + return Py_SIZE(a); } static PyObject * array_item(arrayobject *a, Py_ssize_t i) { - if (i < 0 || i >= Py_SIZE(a)) { - PyErr_SetString(PyExc_IndexError, "array index out of range"); - return NULL; - } - return getarrayitem((PyObject *)a, i); + if (i < 0 || i >= Py_SIZE(a)) { + PyErr_SetString(PyExc_IndexError, "array index out of range"); + return NULL; + } + return getarrayitem((PyObject *)a, i); } static PyObject * array_slice(arrayobject *a, Py_ssize_t ilow, Py_ssize_t ihigh) { - arrayobject *np; - if (ilow < 0) - ilow = 0; - else if (ilow > Py_SIZE(a)) - ilow = Py_SIZE(a); - if (ihigh < 0) - ihigh = 0; - if (ihigh < ilow) - ihigh = ilow; - else if (ihigh > Py_SIZE(a)) - ihigh = Py_SIZE(a); - np = (arrayobject *) newarrayobject(&Arraytype, ihigh - ilow, a->ob_descr); - if (np == NULL) - return NULL; - memcpy(np->ob_item, a->ob_item + ilow * a->ob_descr->itemsize, - (ihigh-ilow) * a->ob_descr->itemsize); - return (PyObject *)np; + arrayobject *np; + if (ilow < 0) + ilow = 0; + else if (ilow > Py_SIZE(a)) + ilow = Py_SIZE(a); + if (ihigh < 0) + ihigh = 0; + if (ihigh < ilow) + ihigh = ilow; + else if (ihigh > Py_SIZE(a)) + ihigh = Py_SIZE(a); + np = (arrayobject *) newarrayobject(&Arraytype, ihigh - ilow, a->ob_descr); + if (np == NULL) + return NULL; + memcpy(np->ob_item, a->ob_item + ilow * a->ob_descr->itemsize, + (ihigh-ilow) * a->ob_descr->itemsize); + return (PyObject *)np; } static PyObject * array_copy(arrayobject *a, PyObject *unused) { - return array_slice(a, 0, Py_SIZE(a)); + return array_slice(a, 0, Py_SIZE(a)); } PyDoc_STRVAR(copy_doc, @@ -642,297 
+639,297 @@ static PyObject * array_concat(arrayobject *a, PyObject *bb) { - Py_ssize_t size; - arrayobject *np; - if (!array_Check(bb)) { - PyErr_Format(PyExc_TypeError, - "can only append array (not \"%.200s\") to array", - Py_TYPE(bb)->tp_name); - return NULL; - } + Py_ssize_t size; + arrayobject *np; + if (!array_Check(bb)) { + PyErr_Format(PyExc_TypeError, + "can only append array (not \"%.200s\") to array", + Py_TYPE(bb)->tp_name); + return NULL; + } #define b ((arrayobject *)bb) - if (a->ob_descr != b->ob_descr) { - PyErr_BadArgument(); - return NULL; - } - if (Py_SIZE(a) > PY_SSIZE_T_MAX - Py_SIZE(b)) { - return PyErr_NoMemory(); - } - size = Py_SIZE(a) + Py_SIZE(b); - np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); - if (np == NULL) { - return NULL; - } - memcpy(np->ob_item, a->ob_item, Py_SIZE(a)*a->ob_descr->itemsize); - memcpy(np->ob_item + Py_SIZE(a)*a->ob_descr->itemsize, - b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); - return (PyObject *)np; + if (a->ob_descr != b->ob_descr) { + PyErr_BadArgument(); + return NULL; + } + if (Py_SIZE(a) > PY_SSIZE_T_MAX - Py_SIZE(b)) { + return PyErr_NoMemory(); + } + size = Py_SIZE(a) + Py_SIZE(b); + np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); + if (np == NULL) { + return NULL; + } + memcpy(np->ob_item, a->ob_item, Py_SIZE(a)*a->ob_descr->itemsize); + memcpy(np->ob_item + Py_SIZE(a)*a->ob_descr->itemsize, + b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); + return (PyObject *)np; #undef b } static PyObject * array_repeat(arrayobject *a, Py_ssize_t n) { - Py_ssize_t i; - Py_ssize_t size; - arrayobject *np; - char *p; - Py_ssize_t nbytes; - if (n < 0) - n = 0; - if ((Py_SIZE(a) != 0) && (n > PY_SSIZE_T_MAX / Py_SIZE(a))) { - return PyErr_NoMemory(); - } - size = Py_SIZE(a) * n; - np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); - if (np == NULL) - return NULL; - p = np->ob_item; - nbytes = Py_SIZE(a) * a->ob_descr->itemsize; - for (i = 0; i < n; i++) { - 
memcpy(p, a->ob_item, nbytes); - p += nbytes; - } - return (PyObject *) np; + Py_ssize_t i; + Py_ssize_t size; + arrayobject *np; + char *p; + Py_ssize_t nbytes; + if (n < 0) + n = 0; + if ((Py_SIZE(a) != 0) && (n > PY_SSIZE_T_MAX / Py_SIZE(a))) { + return PyErr_NoMemory(); + } + size = Py_SIZE(a) * n; + np = (arrayobject *) newarrayobject(&Arraytype, size, a->ob_descr); + if (np == NULL) + return NULL; + p = np->ob_item; + nbytes = Py_SIZE(a) * a->ob_descr->itemsize; + for (i = 0; i < n; i++) { + memcpy(p, a->ob_item, nbytes); + p += nbytes; + } + return (PyObject *) np; } static int array_ass_slice(arrayobject *a, Py_ssize_t ilow, Py_ssize_t ihigh, PyObject *v) { - char *item; - Py_ssize_t n; /* Size of replacement array */ - Py_ssize_t d; /* Change in size */ + char *item; + Py_ssize_t n; /* Size of replacement array */ + Py_ssize_t d; /* Change in size */ #define b ((arrayobject *)v) - if (v == NULL) - n = 0; - else if (array_Check(v)) { - n = Py_SIZE(b); - if (a == b) { - /* Special case "a[i:j] = a" -- copy b first */ - int ret; - v = array_slice(b, 0, n); - if (!v) - return -1; - ret = array_ass_slice(a, ilow, ihigh, v); - Py_DECREF(v); - return ret; - } - if (b->ob_descr != a->ob_descr) { - PyErr_BadArgument(); - return -1; - } - } - else { - PyErr_Format(PyExc_TypeError, - "can only assign array (not \"%.200s\") to array slice", - Py_TYPE(v)->tp_name); - return -1; - } - if (ilow < 0) - ilow = 0; - else if (ilow > Py_SIZE(a)) - ilow = Py_SIZE(a); - if (ihigh < 0) - ihigh = 0; - if (ihigh < ilow) - ihigh = ilow; - else if (ihigh > Py_SIZE(a)) - ihigh = Py_SIZE(a); - item = a->ob_item; - d = n - (ihigh-ilow); - if (d < 0) { /* Delete -d items */ - memmove(item + (ihigh+d)*a->ob_descr->itemsize, - item + ihigh*a->ob_descr->itemsize, - (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); - Py_SIZE(a) += d; - PyMem_RESIZE(item, char, Py_SIZE(a)*a->ob_descr->itemsize); - /* Can't fail */ - a->ob_item = item; - a->allocated = Py_SIZE(a); - } - else if (d > 0) { /* Insert d 
items */ - PyMem_RESIZE(item, char, - (Py_SIZE(a) + d)*a->ob_descr->itemsize); - if (item == NULL) { - PyErr_NoMemory(); - return -1; - } - memmove(item + (ihigh+d)*a->ob_descr->itemsize, - item + ihigh*a->ob_descr->itemsize, - (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); - a->ob_item = item; - Py_SIZE(a) += d; - a->allocated = Py_SIZE(a); - } - if (n > 0) - memcpy(item + ilow*a->ob_descr->itemsize, b->ob_item, - n*b->ob_descr->itemsize); - return 0; + if (v == NULL) + n = 0; + else if (array_Check(v)) { + n = Py_SIZE(b); + if (a == b) { + /* Special case "a[i:j] = a" -- copy b first */ + int ret; + v = array_slice(b, 0, n); + if (!v) + return -1; + ret = array_ass_slice(a, ilow, ihigh, v); + Py_DECREF(v); + return ret; + } + if (b->ob_descr != a->ob_descr) { + PyErr_BadArgument(); + return -1; + } + } + else { + PyErr_Format(PyExc_TypeError, + "can only assign array (not \"%.200s\") to array slice", + Py_TYPE(v)->tp_name); + return -1; + } + if (ilow < 0) + ilow = 0; + else if (ilow > Py_SIZE(a)) + ilow = Py_SIZE(a); + if (ihigh < 0) + ihigh = 0; + if (ihigh < ilow) + ihigh = ilow; + else if (ihigh > Py_SIZE(a)) + ihigh = Py_SIZE(a); + item = a->ob_item; + d = n - (ihigh-ilow); + if (d < 0) { /* Delete -d items */ + memmove(item + (ihigh+d)*a->ob_descr->itemsize, + item + ihigh*a->ob_descr->itemsize, + (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); + Py_SIZE(a) += d; + PyMem_RESIZE(item, char, Py_SIZE(a)*a->ob_descr->itemsize); + /* Can't fail */ + a->ob_item = item; + a->allocated = Py_SIZE(a); + } + else if (d > 0) { /* Insert d items */ + PyMem_RESIZE(item, char, + (Py_SIZE(a) + d)*a->ob_descr->itemsize); + if (item == NULL) { + PyErr_NoMemory(); + return -1; + } + memmove(item + (ihigh+d)*a->ob_descr->itemsize, + item + ihigh*a->ob_descr->itemsize, + (Py_SIZE(a)-ihigh)*a->ob_descr->itemsize); + a->ob_item = item; + Py_SIZE(a) += d; + a->allocated = Py_SIZE(a); + } + if (n > 0) + memcpy(item + ilow*a->ob_descr->itemsize, b->ob_item, + n*b->ob_descr->itemsize); + 
return 0; #undef b } static int array_ass_item(arrayobject *a, Py_ssize_t i, PyObject *v) { - if (i < 0 || i >= Py_SIZE(a)) { - PyErr_SetString(PyExc_IndexError, - "array assignment index out of range"); - return -1; - } - if (v == NULL) - return array_ass_slice(a, i, i+1, v); - return (*a->ob_descr->setitem)(a, i, v); + if (i < 0 || i >= Py_SIZE(a)) { + PyErr_SetString(PyExc_IndexError, + "array assignment index out of range"); + return -1; + } + if (v == NULL) + return array_ass_slice(a, i, i+1, v); + return (*a->ob_descr->setitem)(a, i, v); } static int setarrayitem(PyObject *a, Py_ssize_t i, PyObject *v) { - assert(array_Check(a)); - return array_ass_item((arrayobject *)a, i, v); + assert(array_Check(a)); + return array_ass_item((arrayobject *)a, i, v); } static int array_iter_extend(arrayobject *self, PyObject *bb) { - PyObject *it, *v; + PyObject *it, *v; - it = PyObject_GetIter(bb); - if (it == NULL) - return -1; + it = PyObject_GetIter(bb); + if (it == NULL) + return -1; - while ((v = PyIter_Next(it)) != NULL) { - if (ins1(self, (int) Py_SIZE(self), v) != 0) { - Py_DECREF(v); - Py_DECREF(it); - return -1; - } - Py_DECREF(v); - } - Py_DECREF(it); - if (PyErr_Occurred()) - return -1; - return 0; + while ((v = PyIter_Next(it)) != NULL) { + if (ins1(self, Py_SIZE(self), v) != 0) { + Py_DECREF(v); + Py_DECREF(it); + return -1; + } + Py_DECREF(v); + } + Py_DECREF(it); + if (PyErr_Occurred()) + return -1; + return 0; } static int array_do_extend(arrayobject *self, PyObject *bb) { - Py_ssize_t size; - char *old_item; + Py_ssize_t size; + char *old_item; - if (!array_Check(bb)) - return array_iter_extend(self, bb); + if (!array_Check(bb)) + return array_iter_extend(self, bb); #define b ((arrayobject *)bb) - if (self->ob_descr != b->ob_descr) { - PyErr_SetString(PyExc_TypeError, - "can only extend with array of same kind"); - return -1; - } - if ((Py_SIZE(self) > PY_SSIZE_T_MAX - Py_SIZE(b)) || - ((Py_SIZE(self) + Py_SIZE(b)) > PY_SSIZE_T_MAX / 
self->ob_descr->itemsize)) { - PyErr_NoMemory(); - return -1; - } - size = Py_SIZE(self) + Py_SIZE(b); - old_item = self->ob_item; - PyMem_RESIZE(self->ob_item, char, size*self->ob_descr->itemsize); - if (self->ob_item == NULL) { - self->ob_item = old_item; - PyErr_NoMemory(); - return -1; - } - memcpy(self->ob_item + Py_SIZE(self)*self->ob_descr->itemsize, - b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); - Py_SIZE(self) = size; - self->allocated = size; + if (self->ob_descr != b->ob_descr) { + PyErr_SetString(PyExc_TypeError, + "can only extend with array of same kind"); + return -1; + } + if ((Py_SIZE(self) > PY_SSIZE_T_MAX - Py_SIZE(b)) || + ((Py_SIZE(self) + Py_SIZE(b)) > PY_SSIZE_T_MAX / self->ob_descr->itemsize)) { + PyErr_NoMemory(); + return -1; + } + size = Py_SIZE(self) + Py_SIZE(b); + old_item = self->ob_item; + PyMem_RESIZE(self->ob_item, char, size*self->ob_descr->itemsize); + if (self->ob_item == NULL) { + self->ob_item = old_item; + PyErr_NoMemory(); + return -1; + } + memcpy(self->ob_item + Py_SIZE(self)*self->ob_descr->itemsize, + b->ob_item, Py_SIZE(b)*b->ob_descr->itemsize); + Py_SIZE(self) = size; + self->allocated = size; - return 0; + return 0; #undef b } static PyObject * array_inplace_concat(arrayobject *self, PyObject *bb) { - if (!array_Check(bb)) { - PyErr_Format(PyExc_TypeError, - "can only extend array with array (not \"%.200s\")", - Py_TYPE(bb)->tp_name); - return NULL; - } - if (array_do_extend(self, bb) == -1) - return NULL; - Py_INCREF(self); - return (PyObject *)self; + if (!array_Check(bb)) { + PyErr_Format(PyExc_TypeError, + "can only extend array with array (not \"%.200s\")", + Py_TYPE(bb)->tp_name); + return NULL; + } + if (array_do_extend(self, bb) == -1) + return NULL; + Py_INCREF(self); + return (PyObject *)self; } static PyObject * array_inplace_repeat(arrayobject *self, Py_ssize_t n) { - char *items, *p; - Py_ssize_t size, i; + char *items, *p; + Py_ssize_t size, i; - if (Py_SIZE(self) > 0) { - if (n < 0) - n = 0; - items 
= self->ob_item; - if ((self->ob_descr->itemsize != 0) && - (Py_SIZE(self) > PY_SSIZE_T_MAX / self->ob_descr->itemsize)) { - return PyErr_NoMemory(); - } - size = Py_SIZE(self) * self->ob_descr->itemsize; - if (n == 0) { - PyMem_FREE(items); - self->ob_item = NULL; - Py_SIZE(self) = 0; - self->allocated = 0; - } - else { - if (size > PY_SSIZE_T_MAX / n) { - return PyErr_NoMemory(); - } - PyMem_RESIZE(items, char, n * size); - if (items == NULL) - return PyErr_NoMemory(); - p = items; - for (i = 1; i < n; i++) { - p += size; - memcpy(p, items, size); - } - self->ob_item = items; - Py_SIZE(self) *= n; - self->allocated = Py_SIZE(self); - } - } - Py_INCREF(self); - return (PyObject *)self; + if (Py_SIZE(self) > 0) { + if (n < 0) + n = 0; + items = self->ob_item; + if ((self->ob_descr->itemsize != 0) && + (Py_SIZE(self) > PY_SSIZE_T_MAX / self->ob_descr->itemsize)) { + return PyErr_NoMemory(); + } + size = Py_SIZE(self) * self->ob_descr->itemsize; + if (n == 0) { + PyMem_FREE(items); + self->ob_item = NULL; + Py_SIZE(self) = 0; + self->allocated = 0; + } + else { + if (size > PY_SSIZE_T_MAX / n) { + return PyErr_NoMemory(); + } + PyMem_RESIZE(items, char, n * size); + if (items == NULL) + return PyErr_NoMemory(); + p = items; + for (i = 1; i < n; i++) { + p += size; + memcpy(p, items, size); + } + self->ob_item = items; + Py_SIZE(self) *= n; + self->allocated = Py_SIZE(self); + } + } + Py_INCREF(self); + return (PyObject *)self; } static PyObject * ins(arrayobject *self, Py_ssize_t where, PyObject *v) { - if (ins1(self, where, v) != 0) - return NULL; - Py_INCREF(Py_None); - return Py_None; + if (ins1(self, where, v) != 0) + return NULL; + Py_INCREF(Py_None); + return Py_None; } static PyObject * array_count(arrayobject *self, PyObject *v) { - Py_ssize_t count = 0; - Py_ssize_t i; + Py_ssize_t count = 0; + Py_ssize_t i; - for (i = 0; i < Py_SIZE(self); i++) { - PyObject *selfi = getarrayitem((PyObject *)self, i); - int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); - 
Py_DECREF(selfi); - if (cmp > 0) - count++; - else if (cmp < 0) - return NULL; - } - return PyInt_FromSsize_t(count); + for (i = 0; i < Py_SIZE(self); i++) { + PyObject *selfi = getarrayitem((PyObject *)self, i); + int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); + Py_DECREF(selfi); + if (cmp > 0) + count++; + else if (cmp < 0) + return NULL; + } + return PyInt_FromSsize_t(count); } PyDoc_STRVAR(count_doc, @@ -943,20 +940,20 @@ static PyObject * array_index(arrayobject *self, PyObject *v) { - Py_ssize_t i; + Py_ssize_t i; - for (i = 0; i < Py_SIZE(self); i++) { - PyObject *selfi = getarrayitem((PyObject *)self, i); - int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); - Py_DECREF(selfi); - if (cmp > 0) { - return PyInt_FromLong((long)i); - } - else if (cmp < 0) - return NULL; - } - PyErr_SetString(PyExc_ValueError, "array.index(x): x not in list"); - return NULL; + for (i = 0; i < Py_SIZE(self); i++) { + PyObject *selfi = getarrayitem((PyObject *)self, i); + int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); + Py_DECREF(selfi); + if (cmp > 0) { + return PyInt_FromLong((long)i); + } + else if (cmp < 0) + return NULL; + } + PyErr_SetString(PyExc_ValueError, "array.index(x): x not in list"); + return NULL; } PyDoc_STRVAR(index_doc, @@ -967,38 +964,38 @@ static int array_contains(arrayobject *self, PyObject *v) { - Py_ssize_t i; - int cmp; + Py_ssize_t i; + int cmp; - for (i = 0, cmp = 0 ; cmp == 0 && i < Py_SIZE(self); i++) { - PyObject *selfi = getarrayitem((PyObject *)self, i); - cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); - Py_DECREF(selfi); - } - return cmp; + for (i = 0, cmp = 0 ; cmp == 0 && i < Py_SIZE(self); i++) { + PyObject *selfi = getarrayitem((PyObject *)self, i); + cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); + Py_DECREF(selfi); + } + return cmp; } static PyObject * array_remove(arrayobject *self, PyObject *v) { - int i; + int i; - for (i = 0; i < Py_SIZE(self); i++) { - PyObject *selfi = getarrayitem((PyObject *)self,i); - int cmp = 
PyObject_RichCompareBool(selfi, v, Py_EQ); - Py_DECREF(selfi); - if (cmp > 0) { - if (array_ass_slice(self, i, i+1, - (PyObject *)NULL) != 0) - return NULL; - Py_INCREF(Py_None); - return Py_None; - } - else if (cmp < 0) - return NULL; - } - PyErr_SetString(PyExc_ValueError, "array.remove(x): x not in list"); - return NULL; + for (i = 0; i < Py_SIZE(self); i++) { + PyObject *selfi = getarrayitem((PyObject *)self,i); + int cmp = PyObject_RichCompareBool(selfi, v, Py_EQ); + Py_DECREF(selfi); + if (cmp > 0) { + if (array_ass_slice(self, i, i+1, + (PyObject *)NULL) != 0) + return NULL; + Py_INCREF(Py_None); + return Py_None; + } + else if (cmp < 0) + return NULL; + } + PyErr_SetString(PyExc_ValueError, "array.remove(x): x not in list"); + return NULL; } PyDoc_STRVAR(remove_doc, @@ -1009,27 +1006,27 @@ static PyObject * array_pop(arrayobject *self, PyObject *args) { - Py_ssize_t i = -1; - PyObject *v; - if (!PyArg_ParseTuple(args, "|n:pop", &i)) - return NULL; - if (Py_SIZE(self) == 0) { - /* Special-case most common failure cause */ - PyErr_SetString(PyExc_IndexError, "pop from empty array"); - return NULL; - } - if (i < 0) - i += Py_SIZE(self); - if (i < 0 || i >= Py_SIZE(self)) { - PyErr_SetString(PyExc_IndexError, "pop index out of range"); - return NULL; - } - v = getarrayitem((PyObject *)self,i); - if (array_ass_slice(self, i, i+1, (PyObject *)NULL) != 0) { - Py_DECREF(v); - return NULL; - } - return v; + Py_ssize_t i = -1; + PyObject *v; + if (!PyArg_ParseTuple(args, "|n:pop", &i)) + return NULL; + if (Py_SIZE(self) == 0) { + /* Special-case most common failure cause */ + PyErr_SetString(PyExc_IndexError, "pop from empty array"); + return NULL; + } + if (i < 0) + i += Py_SIZE(self); + if (i < 0 || i >= Py_SIZE(self)) { + PyErr_SetString(PyExc_IndexError, "pop index out of range"); + return NULL; + } + v = getarrayitem((PyObject *)self,i); + if (array_ass_slice(self, i, i+1, (PyObject *)NULL) != 0) { + Py_DECREF(v); + return NULL; + } + return v; } 
PyDoc_STRVAR(pop_doc, @@ -1040,10 +1037,10 @@ static PyObject * array_extend(arrayobject *self, PyObject *bb) { - if (array_do_extend(self, bb) == -1) - return NULL; - Py_INCREF(Py_None); - return Py_None; + if (array_do_extend(self, bb) == -1) + return NULL; + Py_INCREF(Py_None); + return Py_None; } PyDoc_STRVAR(extend_doc, @@ -1054,11 +1051,11 @@ static PyObject * array_insert(arrayobject *self, PyObject *args) { - Py_ssize_t i; - PyObject *v; - if (!PyArg_ParseTuple(args, "nO:insert", &i, &v)) - return NULL; - return ins(self, i, v); + Py_ssize_t i; + PyObject *v; + if (!PyArg_ParseTuple(args, "nO:insert", &i, &v)) + return NULL; + return ins(self, i, v); } PyDoc_STRVAR(insert_doc, @@ -1070,15 +1067,15 @@ static PyObject * array_buffer_info(arrayobject *self, PyObject *unused) { - PyObject* retval = NULL; - retval = PyTuple_New(2); - if (!retval) - return NULL; + PyObject* retval = NULL; + retval = PyTuple_New(2); + if (!retval) + return NULL; - PyTuple_SET_ITEM(retval, 0, PyLong_FromVoidPtr(self->ob_item)); - PyTuple_SET_ITEM(retval, 1, PyInt_FromLong((long)(Py_SIZE(self)))); + PyTuple_SET_ITEM(retval, 0, PyLong_FromVoidPtr(self->ob_item)); + PyTuple_SET_ITEM(retval, 1, PyInt_FromLong((long)(Py_SIZE(self)))); - return retval; + return retval; } PyDoc_STRVAR(buffer_info_doc, @@ -1093,7 +1090,7 @@ static PyObject * array_append(arrayobject *self, PyObject *v) { - return ins(self, (int) Py_SIZE(self), v); From noreply at buildbot.pypy.org Sat May 5 09:29:57 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Sat, 5 May 2012 09:29:57 +0200 (CEST) Subject: [pypy-commit] pypy default: If optimization steps are disabled we might not be able to optimize out the entire short preamble from the end of its peeled loop.
In that case, the ops that remain might force the short preamble to be extended with additional ops Message-ID: <20120505072957.8D3B48208B@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r54895:a99ae0b598c9 Date: 2012-05-05 09:29 +0200 http://bitbucket.org/pypy/pypy/changeset/a99ae0b598c9/ Log: If optimization steps are disabled we might not be able to optimize out the entire short preamble from the end of its peeled loop. In that case, the ops that remain might force the short preamble to be extended with additional ops diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -335,9 +335,13 @@ args[short_inputargs[i]] = jmp_to_short_args[i] self.short_inliner = Inliner(short_inputargs, jmp_to_short_args) - for op in self.short[1:]: + i = 1 + while i < len(self.short): + # Note that self.short might be extended during this loop + op = self.short[i] newop = self.short_inliner.inline_op(op) self.optimizer.send_extra_operation(newop) + i += 1 # Import boxes produced in the preamble but used in the loop newoperations = self.optimizer.get_newoperations() From noreply at buildbot.pypy.org Sat May 5 15:31:42 2012 From: noreply at buildbot.pypy.org (timo_jbo) Date: Sat, 5 May 2012 15:31:42 +0200 (CEST) Subject: [pypy-commit] pypy numpypy-issue1137: pep8, swallow exceptions in __int__ and __index__ like numpy and better error Message-ID: <20120505133142.54E8E8208B@wyvern.cs.uni-duesseldorf.de> Author: Timo Paulssen Branch: numpypy-issue1137 Changeset: r54896:7ab172b5fbf2 Date: 2012-05-05 15:30 +0200 http://bitbucket.org/pypy/pypy/changeset/7ab172b5fbf2/ Log: pep8, swallow exceptions in __int__ and __index__ like numpy and better error when the index doesn't supply a working __len__, either.
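[Editor's note: the pattern in the log above — replacing a bare ``except:`` with a catch of one specific exception type — can be sketched outside the interpreter. The names below (``OperationError``, ``as_index``) are stand-ins for illustration, not PyPy's actual classes; the point is the control flow: only the expected exception is swallowed, everything else propagates.]

```python
class OperationError(Exception):
    """Stand-in for the interpreter-level exception caught in the diff."""

def as_index(obj):
    """Convert obj to an int index, swallowing only OperationError."""
    try:
        return int(obj.__index__())
    except OperationError:
        pass  # expected failure: fall through to the next conversion
    try:
        return int(obj)
    except OperationError:
        pass
    # Neither conversion worked: report it the way numpy does.
    raise IndexError("index must be either an int or a sequence.")
```

In PyPy itself, app-level exceptions reach this code wrapped in ``OperationError``, which is why catching that single type suffices there; in this plain-Python sketch the stand-in class plays the same role, and any unrelated error (say, a ``NameError`` from a typo) is no longer silently masked.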
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -354,19 +354,27 @@ try: value = space.int_w(space.index(w_idx)) return True - except: pass + except OperationError: + pass try: value = space.int_w(w_idx) return True - except: pass + except OperationError: + pass if space.isinstance_w(w_idx, space.w_slice): return False elif (space.isinstance_w(w_idx, space.w_slice) or space.isinstance_w(w_idx, space.w_int)): return False - lgt = space.len_w(w_idx) + + try: + lgt = space.len_w(w_idx) + except OperationError: + raise OperationError(space.w_IndexError, + space.wrap("index must be either an int or a sequence.")) + if lgt > shape_len: raise OperationError(space.w_IndexError, space.wrap("invalid index")) @@ -1045,13 +1053,15 @@ try: idx = space.int_w(space.index(w_idx)) is_valid = True - except: pass + except OperationError: + pass if not is_valid: try: idx = space.int_w(w_idx) is_valid = True - except: pass + except OperationError: + pass if is_valid: if idx < 0: diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -424,6 +424,23 @@ assert a[1] == 100 + def test_access_swallow_exception(self): + class ErrorIndex(object): + def __index__(self): + return 1 / 0 + + class ErrorInt(object): + def __int__(self): + return 1 / 0 + + # numpy will swallow errors in __int__ and __index__ and + # just raise IndexError. 
+
+        from _numpypy import arange
+        a = arange(10)
+        raises(IndexError, "a[ErrorIndex()] == 0")
+        raises(IndexError, "a[ErrorInt()] == 0")
+
     def test_setslice_array(self):
         from _numpypy import array
         a = array(range(5))

From noreply at buildbot.pypy.org  Sat May  5 16:12:57 2012
From: noreply at buildbot.pypy.org (timo_jbo)
Date: Sat, 5 May 2012 16:12:57 +0200 (CEST)
Subject: [pypy-commit] pypy numpypy-issue1137: fixed the FakeSpace just enough to work again.
Message-ID: <20120505141257.71F5B8208B@wyvern.cs.uni-duesseldorf.de>

Author: Timo Paulssen
Branch: numpypy-issue1137
Changeset: r54897:657b043bac5a
Date: 2012-05-05 16:11 +0200
http://bitbucket.org/pypy/pypy/changeset/657b043bac5a/

Log:	fixed the FakeSpace just enough to work again.

diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py
--- a/pypy/module/micronumpy/compile.py
+++ b/pypy/module/micronumpy/compile.py
@@ -6,6 +6,7 @@
 import re
 
 from pypy.interpreter.baseobjspace import InternalSpaceCache, W_Root
+from pypy.interpreter.error import OperationError
 from pypy.module.micronumpy import interp_boxes
 from pypy.module.micronumpy.interp_dtype import get_dtype_cache
 from pypy.module.micronumpy.interp_numarray import (Scalar, BaseArray,
@@ -39,11 +40,11 @@
 THREE_ARG_FUNCTIONS = ['where']
 
 class FakeSpace(object):
-    w_ValueError = None
-    w_TypeError = None
-    w_IndexError = None
-    w_OverflowError = None
-    w_NotImplementedError = None
+    w_ValueError = "ValueError"
+    w_TypeError = "TypeError"
+    w_IndexError = "IndexError"
+    w_OverflowError = "OverflowError"
+    w_NotImplementedError = "NotImplementedError"
     w_None = None
 
     w_bool = "bool"
@@ -126,6 +127,8 @@
             return w_obj.intval
         elif isinstance(w_obj, FloatObject):
             return int(w_obj.floatval)
+        elif isinstance(w_obj, SliceObject):
+            raise OperationError(self.w_TypeError, self.wrap("slice."))
         raise NotImplementedError
 
     def index(self, w_obj):

From noreply at buildbot.pypy.org  Sat May  5 16:12:58 2012
From: noreply at buildbot.pypy.org (timo_jbo)
Date: Sat, 5 May 2012 16:12:58 +0200 (CEST)
Subject: [pypy-commit] pypy default: Merge numpypy-issue1137
Message-ID: <20120505141258.E16698208B@wyvern.cs.uni-duesseldorf.de>

Author: Timo Paulssen
Branch: 
Changeset: r54898:629cfca82920
Date: 2012-05-05 16:12 +0200
http://bitbucket.org/pypy/pypy/changeset/629cfca82920/

Log:	Merge numpypy-issue1137

diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py
--- a/pypy/module/micronumpy/compile.py
+++ b/pypy/module/micronumpy/compile.py
@@ -6,6 +6,7 @@
 import re
 
 from pypy.interpreter.baseobjspace import InternalSpaceCache, W_Root
+from pypy.interpreter.error import OperationError
 from pypy.module.micronumpy import interp_boxes
 from pypy.module.micronumpy.interp_dtype import get_dtype_cache
 from pypy.module.micronumpy.interp_numarray import (Scalar, BaseArray,
@@ -39,11 +40,11 @@
 THREE_ARG_FUNCTIONS = ['where']
 
 class FakeSpace(object):
-    w_ValueError = None
-    w_TypeError = None
-    w_IndexError = None
-    w_OverflowError = None
-    w_NotImplementedError = None
+    w_ValueError = "ValueError"
+    w_TypeError = "TypeError"
+    w_IndexError = "IndexError"
+    w_OverflowError = "OverflowError"
+    w_NotImplementedError = "NotImplementedError"
     w_None = None
 
     w_bool = "bool"
@@ -126,8 +127,13 @@
             return w_obj.intval
         elif isinstance(w_obj, FloatObject):
             return int(w_obj.floatval)
+        elif isinstance(w_obj, SliceObject):
+            raise OperationError(self.w_TypeError, self.wrap("slice."))
         raise NotImplementedError
 
+    def index(self, w_obj):
+        return self.wrap(self.int_w(w_obj))
+
     def str_w(self, w_obj):
         if isinstance(w_obj, StringObject):
             return w_obj.v
diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py
--- a/pypy/module/micronumpy/interp_numarray.py
+++ b/pypy/module/micronumpy/interp_numarray.py
@@ -350,12 +350,31 @@
         if shape_len == 1:
             if space.isinstance_w(w_idx, space.w_int):
                 return True
+
+            try:
+                value = space.int_w(space.index(w_idx))
+                return True
+            except OperationError:
+                pass
+
+            try:
+                value = space.int_w(w_idx)
+                return True
+            except OperationError:
+                pass
+
             if space.isinstance_w(w_idx, space.w_slice):
                 return False
         elif (space.isinstance_w(w_idx, space.w_slice) or
               space.isinstance_w(w_idx, space.w_int)):
             return False
-        lgt = space.len_w(w_idx)
+
+        try:
+            lgt = space.len_w(w_idx)
+        except OperationError:
+            raise OperationError(space.w_IndexError,
+                space.wrap("index must be either an int or a sequence."))
+
         if lgt > shape_len:
             raise OperationError(space.w_IndexError,
                 space.wrap("invalid index"))
@@ -1030,8 +1049,21 @@
 
     @jit.unroll_safe
     def _index_of_single_item(self, space, w_idx):
-        if space.isinstance_w(w_idx, space.w_int):
-            idx = space.int_w(w_idx)
+        is_valid = False
+        try:
+            idx = space.int_w(space.index(w_idx))
+            is_valid = True
+        except OperationError:
+            pass
+
+        if not is_valid:
+            try:
+                idx = space.int_w(w_idx)
+                is_valid = True
+            except OperationError:
+                pass
+
+        if is_valid:
             if idx < 0:
                 idx = self.shape[0] + idx
             if idx < 0 or idx >= self.shape[0]:
diff --git a/pypy/module/micronumpy/test/test_base.py b/pypy/module/micronumpy/test/test_base.py
--- a/pypy/module/micronumpy/test/test_base.py
+++ b/pypy/module/micronumpy/test/test_base.py
@@ -10,6 +10,7 @@
 import sys
 
 class BaseNumpyAppTest(object):
+    @classmethod
     def setup_class(cls):
         if option.runappdirect:
             if '__pypy__' not in sys.builtin_module_names:
diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py
--- a/pypy/module/micronumpy/test/test_numarray.py
+++ b/pypy/module/micronumpy/test/test_numarray.py
@@ -8,7 +8,6 @@
 from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement
 from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest
 
-
 class MockDtype(object):
     class itemtype(object):
         @staticmethod
@@ -195,6 +194,36 @@
         assert _to_coords(13, 'F') == [1, 0, 2]
 
 class AppTestNumArray(BaseNumpyAppTest):
+    def w_CustomIndexObject(self, index):
+        class CustomIndexObject(object):
+            def __init__(self, index):
+                self.index = index
+            def __index__(self):
+                return self.index
+
+        return CustomIndexObject(index)
+
+    def w_CustomIndexIntObject(self, index, value):
+        class CustomIndexIntObject(object):
+            def __init__(self, index, value):
+                self.index = index
+                self.value = value
+            def __index__(self):
+                return self.index
+            def __int__(self):
+                return self.value
+
+        return CustomIndexIntObject(index, value)
+
+    def w_CustomIntObject(self, value):
+        class CustomIntObject(object):
+            def __init__(self, value):
+                self.value = value
+            def __index__(self):
+                return self.value
+
+        return CustomIntObject(value)
+
     def test_ndarray(self):
         from _numpypy import ndarray, array, dtype
@@ -329,6 +358,28 @@
         assert a[1, 3] == 8
         assert a.T[1, 2] == 11
 
+    def test_getitem_obj_index(self):
+        from _numpypy import arange
+
+        a = arange(10)
+
+        assert a[self.CustomIndexObject(1)] == 1
+
+    def test_getitem_obj_prefer_index_to_int(self):
+        from _numpypy import arange
+
+        a = arange(10)
+
+        assert a[self.CustomIndexIntObject(0, 1)] == 0
+
+    def test_getitem_obj_int(self):
+        from _numpypy import arange
+
+        a = arange(10)
+
+        assert a[self.CustomIntObject(1)] == 1
+
     def test_setitem(self):
         from _numpypy import array
         a = array(range(5))
@@ -348,6 +399,48 @@
         for i in xrange(5):
             assert a[i] == i
 
+    def test_setitem_obj_index(self):
+        from _numpypy import arange
+
+        a = arange(10)
+
+        a[self.CustomIndexObject(1)] = 100
+        assert a[1] == 100
+
+    def test_setitem_obj_prefer_index_to_int(self):
+        from _numpypy import arange
+
+        a = arange(10)
+
+        a[self.CustomIndexIntObject(0, 1)] = 100
+        assert a[0] == 100
+
+    def test_setitem_obj_int(self):
+        from _numpypy import arange
+
+        a = arange(10)
+
+        a[self.CustomIntObject(1)] = 100
+
+        assert a[1] == 100
+
+    def test_access_swallow_exception(self):
+        class ErrorIndex(object):
+            def __index__(self):
+                return 1 / 0
+
+        class ErrorInt(object):
+            def __int__(self):
+                return 1 / 0
+
+        # numpy will swallow errors in __int__ and __index__ and
+        # just raise IndexError.
+
+        from _numpypy import arange
+        a = arange(10)
+        raises(IndexError, "a[ErrorIndex()] == 0")
+        raises(IndexError, "a[ErrorInt()] == 0")
+
     def test_setslice_array(self):
         from _numpypy import array
         a = array(range(5))

From noreply at buildbot.pypy.org  Sat May  5 17:21:54 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 5 May 2012 17:21:54 +0200 (CEST)
Subject: [pypy-commit] pypy stm-thread: fix fix fix
Message-ID: <20120505152154.4F5B28208B@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: stm-thread
Changeset: r54899:2f769f82169c
Date: 2012-05-04 21:40 +0200
http://bitbucket.org/pypy/pypy/changeset/2f769f82169c/

Log:	fix fix fix

diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py
--- a/pypy/rlib/rstm.py
+++ b/pypy/rlib/rstm.py
@@ -2,12 +2,12 @@
 
 
 def before_external_call():
-    stmgcintf.StmOperations.before_external_call()
+    stmgcintf.StmOperations.commit_transaction()
 before_external_call._gctransformer_hint_cannot_collect_ = True
 before_external_call._dont_reach_me_in_del_ = True
 
 def after_external_call():
-    stmgcintf.StmOperations.after_external_call()
+    stmgcintf.StmOperations.begin_inevitable_transaction()
 after_external_call._gctransformer_hint_cannot_collect_ = True
 after_external_call._dont_reach_me_in_del_ = True
diff --git a/pypy/rpython/memory/gc/stmtls.py b/pypy/rpython/memory/gc/stmtls.py
--- a/pypy/rpython/memory/gc/stmtls.py
+++ b/pypy/rpython/memory/gc/stmtls.py
@@ -337,8 +337,6 @@
             # detect_flag_combination is GCFLAG_WAS_COPIED|GCFLAG_VISITED.
             # This case is to force pointers to the LOCAL copy to be
            # replaced with pointers to the GLOBAL copy.
-            ll_assert(not self.in_main_thread,
-                      "unexpected flag combination in the main thread")
             return 2
 
         if can_be_in_nursery and self.is_in_nursery(obj):
diff --git a/pypy/rpython/memory/gctransform/stmframework.py b/pypy/rpython/memory/gctransform/stmframework.py
--- a/pypy/rpython/memory/gctransform/stmframework.py
+++ b/pypy/rpython/memory/gctransform/stmframework.py
@@ -12,45 +12,34 @@
 class StmFrameworkGCTransformer(FrameworkGCTransformer):
 
     def _declare_functions(self, GCClass, getfn, s_gc, *args):
+        gc = self.gcdata.gc
         #
-        def thread_starting(gc):
+        def gc_thread_start():
             self.root_walker.allocate_shadow_stack()
             gc.setup_thread()
         #
-        def thread_stopping(gc):
+        def gc_thread_die():
             gc.teardown_thread()
             self.root_walker.free_shadow_stack()
         #
-        def start_transaction(gc):
-            self.root_walker.start_transaction()
-            gc.start_transaction()
+        #def start_transaction(gc):
+        #    self.root_walker.start_transaction()
+        #    gc.start_transaction()
         #
         super(StmFrameworkGCTransformer, self)._declare_functions(
             GCClass, getfn, s_gc, *args)
-        self.thread_starting_ptr = getfn(
-            thread_starting,
-            [s_gc], annmodel.s_None)
-        self.thread_stopping_ptr = getfn(
-            thread_stopping,
-            [s_gc], annmodel.s_None)
+        self.thread_start_ptr = getfn(
+            gc_thread_start,
+            [], annmodel.s_None)
+        self.thread_die_ptr = getfn(
+            gc_thread_die,
+            [], annmodel.s_None)
         self.stm_writebarrier_ptr = getfn(
-            self.gcdata.gc.stm_writebarrier,
+            gc.stm_writebarrier,
             [annmodel.SomeAddress()], annmodel.SomeAddress())
         self.stm_normalize_global_ptr = getfn(
-            self.gcdata.gc.stm_normalize_global,
+            gc.stm_normalize_global,
             [annmodel.SomeAddress()], annmodel.SomeAddress())
-        self.stm_enter_transactional_mode_ptr = getfn(
-            self.gcdata.gc.enter_transactional_mode.im_func,
-            [s_gc], annmodel.s_None)
-        self.stm_leave_transactional_mode_ptr = getfn(
-            self.gcdata.gc.leave_transactional_mode.im_func,
-            [s_gc], annmodel.s_None)
-        self.stm_start_ptr = getfn(
-            start_transaction,
-            [s_gc], annmodel.s_None)
-        self.stm_stop_ptr = getfn(
-            self.gcdata.gc.stop_transaction.im_func,
-            [s_gc], annmodel.s_None)
 
     def build_root_walker(self):
         return StmShadowStackRootWalker(self)
@@ -159,6 +148,11 @@
         if rsd is not None:
             self.root_stack_depth = rsd
 
+    def need_thread_support(self, gctransformer, getfn):
+        # we always have thread support, and it is handled
+        # in _declare_functions() already
+        pass
+
     def setup_root_walker(self):
         self.allocate_shadow_stack()
         self.gcdata.main_thread_stack_base = self.stackgcdata.root_stack_base
diff --git a/pypy/translator/stm/src_stm/core.c b/pypy/translator/stm/src_stm/core.c
--- a/pypy/translator/stm/src_stm/core.c
+++ b/pypy/translator/stm/src_stm/core.c
@@ -404,10 +404,6 @@
       owner_version_t ovt;                                      \
                                                                 \
       assert(sizeof(TYPE) == SIZE);                             \
-      /* XXX try to remove this check from the main path: */    \
-      /* d is NULL when running in the main thread */           \
-      if (d == NULL)                                            \
-        return *(TYPE *)(((char *)addr) + offset);              \
                                                                 \
       if ((GETTID(o) & GCFLAG_WAS_COPIED) != 0)                 \
         {                                                       \
@@ -439,8 +435,6 @@
   volatile orec_t *o = get_orec(src);
   owner_version_t ovt;
 
-  assert(d != NULL);
-
   /* don't copy the header */
   src = ((char *)src) + sizeof(orec_t);
   dst = ((char *)dst) + sizeof(orec_t);
@@ -449,14 +443,10 @@
   STM_DO_READ(memcpy(dst, src, size));
 }
 
-static void descriptor_init(long in_main_thread)
+static void descriptor_init(void)
 {
   assert(thread_descriptor == NULL);
-  if (in_main_thread)
-    {
-      /* the main thread doesn't have a thread_descriptor at all */
-    }
-  else
+  if (1)
     {
       struct tx_descriptor *d = malloc(sizeof(struct tx_descriptor));
       memset(d, 0, sizeof(struct tx_descriptor));
@@ -485,8 +475,7 @@
 static void descriptor_done(void)
 {
   struct tx_descriptor *d = thread_descriptor;
-  if (d == NULL)
-    return;
+  assert(d != NULL);
 
   thread_descriptor = NULL;
 
@@ -527,13 +516,32 @@
 static void begin_transaction(jmp_buf* buf)
 {
   struct tx_descriptor *d = thread_descriptor;
-  /* you need to call descriptor_init() before calling this */
-  assert(d != NULL);
   d->setjmp_buf = buf;
   d->start_time = (/*d->last_known_global_timestamp*/ global_timestamp) & ~1;
 }
 
-static void commit_transaction(void)
+void stm_begin_inevitable_transaction(void)
+{
+  /* Equivalent to begin_transaction(); stm_try_inevitable();
+     except more efficient */
+  struct tx_descriptor *d = thread_descriptor;
+  d->setjmp_buf = NULL;
+
+  while (1)
+    {
+      mutex_lock();
+      unsigned long curtime = get_global_timestamp(d) & ~1;
+      if (change_global_timestamp(d, curtime, curtime + 1))
+        {
+          d->start_time = curtime;
+          break;
+        }
+      mutex_unlock();
+      tx_spinloop(6);
+    }
+}
+
+void stm_commit_transaction(void)
 {
   struct tx_descriptor *d = thread_descriptor;
   assert(d != NULL);
@@ -604,8 +612,6 @@
      by another thread.  We set the lowest bit in global_timestamp
      to 1. */
   struct tx_descriptor *d = thread_descriptor;
-  if (d == NULL)
-    return;   /* I am running in the main thread */
   if (is_inevitable(d))
     return;   /* I am already inevitable */
 
@@ -632,13 +638,14 @@
              in try_inevitable() very soon)?  unclear.  For now
              let's try to spinloop, after the waiting done by
              acquiring the mutex */
-          mutex_unlock();
-          tx_spinloop(6);
-          continue;
         }
-      if (change_global_timestamp(d, curtime, curtime + 1))
-        break;
+      else
+        {
+          if (change_global_timestamp(d, curtime, curtime + 1))
+            break;
+        }
       mutex_unlock();
+      tx_spinloop(6);
     }
   d->setjmp_buf = NULL;   /* inevitable from now on */
 #ifdef RPY_STM_DEBUG_PRINT
@@ -656,16 +663,14 @@
 long stm_thread_id(void)
 {
   struct tx_descriptor *d = thread_descriptor;
-  if (d == NULL)
-    return 0;   /* no thread_descriptor: it's the main thread */
   return d->my_lock_word;
 }
 
 static __thread void *rpython_tls_object;
 
-void stm_set_tls(void *newtls, long in_main_thread)
+void stm_set_tls(void *newtls)
 {
-  descriptor_init(in_main_thread);
+  descriptor_init();
   rpython_tls_object = newtls;
 }
 
@@ -709,12 +714,6 @@
     } REDOLOG_LOOP_END;
 }
 
-long stm_in_transaction(void)
-{
-  struct tx_descriptor *d = thread_descriptor;
-  return d != NULL;
-}
-
 #undef GETVERSION
 #undef GETVERSIONREF
 #undef SETVERSION
diff --git a/pypy/translator/stm/src_stm/et.c b/pypy/translator/stm/src_stm/et.c
--- a/pypy/translator/stm/src_stm/et.c
+++ b/pypy/translator/stm/src_stm/et.c
@@ -54,6 +54,5 @@
 
 #include "src_stm/lists.c"
 #include "src_stm/core.c"
-#include "src_stm/rpyintf.c"
 
 /************************************************************/
diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h
--- a/pypy/translator/stm/src_stm/et.h
+++ b/pypy/translator/stm/src_stm/et.h
@@ -12,10 +12,7 @@
 
 /* see comments in ../stmgcintf.py */
 
-long stm_in_transaction(void);
-void stm_run_all_transactions(void*, long);
-
-void stm_set_tls(void *, long);
+void stm_set_tls(void *);
 void *stm_get_tls(void);
 void stm_del_tls(void);
 long stm_thread_id(void);
@@ -24,6 +21,9 @@
 void stm_tldict_add(void *, void *);
 void stm_tldict_enum(void);
 
+void stm_begin_inevitable_transaction(void);
+void stm_commit_transaction(void);
+
 /* these functions are declared by generated C code from pypy.rlib.rstm
    and from the GC (see llop.nop(...)) */
 extern void pypy_g__stm_thread_starting(void);
diff --git a/pypy/translator/stm/stmgcintf.py b/pypy/translator/stm/stmgcintf.py
--- a/pypy/translator/stm/stmgcintf.py
+++ b/pypy/translator/stm/stmgcintf.py
@@ -47,17 +47,17 @@
 
     # C part of the implementation of the pypy.rlib.rstm module
     in_transaction = smexternal('stm_in_transaction', [], lltype.Signed)
-    before_external_call = smexternal('stm_before_external_call',
-                                      [], lltype.Void)
-    after_external_call = smexternal('stm_after_external_call',
-                                     [], lltype.Void)
+    is_inevitable = smexternal('stm_is_inevitable', [], lltype.Signed)
+    begin_inevitable_transaction = smexternal(
+        'stm_begin_inevitable_transaction', [], lltype.Void)
+    commit_transaction = smexternal(
+        'stm_commit_transaction', [], lltype.Void)
     do_yield_thread = smexternal('stm_do_yield_thread', [], lltype.Void)
 
     # for the GC: store and read a thread-local-storage field, as well
     # as initialize and shut down the internal thread_descriptor
-    set_tls = smexternal('stm_set_tls', [llmemory.Address, lltype.Signed],
-                         lltype.Void)
+    set_tls = smexternal('stm_set_tls', [llmemory.Address], lltype.Void)
     get_tls = smexternal('stm_get_tls', [], llmemory.Address)
     del_tls = smexternal('stm_del_tls', [], lltype.Void)
diff --git a/pypy/translator/stm/test/targetdemo2.py b/pypy/translator/stm/test/targetdemo2.py
--- a/pypy/translator/stm/test/targetdemo2.py
+++ b/pypy/translator/stm/test/targetdemo2.py
@@ -63,15 +63,12 @@
 
     def run(self):
         try:
-            self.really_run()
+            for value in range(glob.LENGTH):
+                add_at_end_of_chained_list(glob.anchor, value, self.index)
+                #rstm.do_yield_thread()
         finally:
             self.finished_lock.release()
 
-    def really_run(self):
-        for value in range(glob.LENGTH):
-            add_at_end_of_chained_list(glob.anchor, value, self.index)
-            rstm.do_yield_thread()
-
 # ____________________________________________________________
 # bah, we are really missing an RPython interface to threads

From noreply at buildbot.pypy.org  Sat May  5 17:21:55 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 5 May 2012 17:21:55 +0200 (CEST)
Subject: [pypy-commit] pypy stm-thread: In-progress
Message-ID: <20120505152155.A8326820B3@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: stm-thread
Changeset: r54900:ef0813096a29
Date: 2012-05-05 17:21 +0200
http://bitbucket.org/pypy/pypy/changeset/ef0813096a29/

Log:	In-progress

diff --git a/pypy/rlib/objectmodel.py b/pypy/rlib/objectmodel.py
--- a/pypy/rlib/objectmodel.py
+++ b/pypy/rlib/objectmodel.py
@@ -476,7 +476,8 @@
 def hlinvoke(repr, llcallable, *args):
     raise TypeError, "hlinvoke is meant to be rtyped and not called direclty"
 
-def invoke_around_extcall(before, after):
+def invoke_around_extcall(before, after,
+                          enter_callback=None, leave_callback=None):
     """Call before() before any external function call, and after() after.
     At the moment only one pair before()/after() can be registered at a time.
     """
@@ -490,6 +491,13 @@
     from pypy.rpython.annlowlevel import llhelper
     llhelper(rffi.AroundFnPtr, before)
     llhelper(rffi.AroundFnPtr, after)
+    # do the same thing about enter/leave_callback
+    if enter_callback is not None:
+        rffi.aroundstate.enter_callback = enter_callback
+        llhelper(rffi.EnterCallbackFnPtr, enter_callback)
+    if leave_callback is not None:
+        rffi.aroundstate.leave_callback = leave_callback
+        llhelper(rffi.LeaveCallbackFnPtr, leave_callback)
 
 def is_in_callback():
     from pypy.rpython.lltypesystem import rffi
diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py
--- a/pypy/rlib/rstm.py
+++ b/pypy/rlib/rstm.py
@@ -1,4 +1,5 @@
 from pypy.translator.stm import stmgcintf
+from pypy.rlib.debug import ll_assert
 
 
 def before_external_call():
@@ -11,6 +12,20 @@
 after_external_call._gctransformer_hint_cannot_collect_ = True
 after_external_call._dont_reach_me_in_del_ = True
 
+def enter_callback_call():
+    new_thread = stmgcintf.StmOperations.descriptor_init()
+    stmgcintf.StmOperations.begin_inevitable_transaction()
+    return new_thread
+enter_callback_call._gctransformer_hint_cannot_collect_ = True
+enter_callback_call._dont_reach_me_in_del_ = True
+
+def leave_callback_call(token):
+    stmgcintf.StmOperations.commit_transaction()
+    if token == 1:
+        stmgcintf.StmOperations.descriptor_done()
+leave_callback_call._gctransformer_hint_cannot_collect_ = True
+leave_callback_call._dont_reach_me_in_del_ = True
+
 def do_yield_thread():
     stmgcintf.StmOperations.do_yield_thread()
 do_yield_thread._gctransformer_hint_close_stack_ = True
diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py
--- a/pypy/rpython/llinterp.py
+++ b/pypy/rpython/llinterp.py
@@ -964,8 +964,6 @@
     op_stm_writebarrier = _stm_not_implemented
     op_stm_normalize_global = _stm_not_implemented
    op_stm_become_inevitable = _stm_not_implemented
-    op_stm_thread_starting = _stm_not_implemented
-    op_stm_thread_stopping = _stm_not_implemented
 
     # operations on pyobjects!
     for opname in lloperation.opimpls.keys():
diff --git a/pypy/rpython/lltypesystem/opimpl.py b/pypy/rpython/lltypesystem/opimpl.py
--- a/pypy/rpython/lltypesystem/opimpl.py
+++ b/pypy/rpython/lltypesystem/opimpl.py
@@ -629,12 +629,6 @@
 def op_stm_stop_transaction():
     pass
 
-def op_stm_enter_transactional_mode():
-    pass
-
-def op_stm_leave_transactional_mode():
-    pass
-
 def op_nop(x):
     pass
diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py
--- a/pypy/rpython/lltypesystem/rffi.py
+++ b/pypy/rpython/lltypesystem/rffi.py
@@ -281,10 +281,12 @@
     source = py.code.Source(r"""
         def wrapper(%s):    # no *args - no GIL for mallocing the tuple
             llop.gc_stack_bottom(lltype.Void)   # marker for trackgcroot.py
+            token = 0
             if aroundstate is not None:
-                after = aroundstate.after
-                if after:
-                    after()
+                if aroundstate.enter_callback is not None:
+                    token = aroundstate.enter_callback()
+                elif aroundstate.after is not None:
+                    aroundstate.after()
             # from now on we hold the GIL
             stackcounter.stacks_counter += 1
             try:
@@ -299,9 +301,10 @@
                 result = errorcode
             stackcounter.stacks_counter -= 1
             if aroundstate is not None:
-                before = aroundstate.before
-                if before:
-                    before()
+                if aroundstate.leave_callback is not None:
+                    aroundstate.leave_callback(token)
+                elif aroundstate.before is not None:
+                    aroundstate.before()
             # here we don't hold the GIL any more.  As in the wrapper() produced
             # by llexternal, it is essential that no exception checking occurs
            # after the call to before().
@@ -317,11 +320,15 @@
 _make_wrapper_for._annspecialcase_ = 'specialize:memo'
 
 AroundFnPtr = lltype.Ptr(lltype.FuncType([], lltype.Void))
+EnterCallbackFnPtr = lltype.Ptr(lltype.FuncType([], lltype.Signed))
+LeaveCallbackFnPtr = lltype.Ptr(lltype.FuncType([lltype.Signed], lltype.Void))
 
 class AroundState:
     _alloc_flavor_ = "raw"
     def _freeze_(self):
         self.before = None        # or a regular RPython function
         self.after = None         # or a regular RPython function
+        self.enter_callback = None
+        self.leave_callback = None
         return False
 aroundstate = AroundState()
 aroundstate._freeze_()
diff --git a/pypy/rpython/memory/gc/stmgc.py b/pypy/rpython/memory/gc/stmgc.py
--- a/pypy/rpython/memory/gc/stmgc.py
+++ b/pypy/rpython/memory/gc/stmgc.py
@@ -124,7 +124,6 @@
 GCFLAG_VISITED = first_gcflag << 2
 GCFLAG_HAS_SHADOW = first_gcflag << 3
 GCFLAG_FIXED_HASH = first_gcflag << 4
-GCFLAG_PREBUILT = first_gcflag << 5
 
 
 def always_inline(fn):
@@ -202,16 +201,30 @@
         #
         self.sharedarea.setup()
         #
+        self.stm_operations.descriptor_init()
+        self.stm_operations.begin_inevitable_transaction()
         self.setup_thread()
 
     def setup_thread(self):
+        """Build the StmGCTLS object and start a transaction at the level
+        of the GC.  The C-level transaction should already be started."""
+        ll_assert(self.stm_operations.in_transaction(),
+                  "setup_thread: not in a transaction")
        from pypy.rpython.memory.gc.stmtls import StmGCTLS
-        StmGCTLS(self).start_transaction()
+        stmtls = StmGCTLS(self)
+        stmtls.start_transaction()
 
     def teardown_thread(self):
-        self.stm_operations.try_inevitable()
+        """Stop the current transaction, commit it at the level of
+        C code, and tear down the StmGCTLS object.  For symmetry, this
+        ensures that the level of C has another (empty) transaction
+        started."""
+        ll_assert(bool(self.stm_operations.in_transaction()),
+                  "teardown_thread: not in a transaction")
         stmtls = self.get_tls()
         stmtls.stop_transaction()
+        self.stm_operations.commit_transaction()
+        self.stm_operations.begin_inevitable_transaction()
         stmtls.delete()
 
     @always_inline
@@ -287,7 +300,7 @@
         hdr.tid = self.combine(typeid16, flags)
 
     def init_gc_object_immortal(self, addr, typeid16, flags=0):
-        flags |= GCFLAG_GLOBAL | GCFLAG_PREBUILT
+        flags |= GCFLAG_GLOBAL
         self.init_gc_object(addr, typeid16, flags)
 
     # ----------
diff --git a/pypy/rpython/memory/gc/stmtls.py b/pypy/rpython/memory/gc/stmtls.py
--- a/pypy/rpython/memory/gc/stmtls.py
+++ b/pypy/rpython/memory/gc/stmtls.py
@@ -11,7 +11,6 @@
 from pypy.rpython.memory.gc.stmgc import always_inline, dont_inline
 from pypy.rpython.memory.gc.stmgc import GCFLAG_GLOBAL, GCFLAG_VISITED
 from pypy.rpython.memory.gc.stmgc import GCFLAG_WAS_COPIED, GCFLAG_HAS_SHADOW
-from pypy.rpython.memory.gc.stmgc import GCFLAG_PREBUILT
 
 
 class StmGCTLS(object):
diff --git a/pypy/rpython/memory/gctransform/stmframework.py b/pypy/rpython/memory/gctransform/stmframework.py
--- a/pypy/rpython/memory/gctransform/stmframework.py
+++ b/pypy/rpython/memory/gctransform/stmframework.py
@@ -12,33 +12,13 @@
 class StmFrameworkGCTransformer(FrameworkGCTransformer):
 
     def _declare_functions(self, GCClass, getfn, s_gc, *args):
-        gc = self.gcdata.gc
-        #
-        def gc_thread_start():
-            self.root_walker.allocate_shadow_stack()
-            gc.setup_thread()
-        #
-        def gc_thread_die():
-            gc.teardown_thread()
-            self.root_walker.free_shadow_stack()
-        #
-        #def start_transaction(gc):
-        #    self.root_walker.start_transaction()
-        #    gc.start_transaction()
-        #
         super(StmFrameworkGCTransformer, self)._declare_functions(
             GCClass, getfn, s_gc, *args)
-        self.thread_start_ptr = getfn(
-            gc_thread_start,
-            [], annmodel.s_None)
-        self.thread_die_ptr = getfn(
-            gc_thread_die,
-            [], annmodel.s_None)
         self.stm_writebarrier_ptr = getfn(
-            gc.stm_writebarrier,
+            self.gcdata.gc.stm_writebarrier,
             [annmodel.SomeAddress()], annmodel.SomeAddress())
         self.stm_normalize_global_ptr = getfn(
-            gc.stm_normalize_global,
+            self.gcdata.gc.stm_normalize_global,
             [annmodel.SomeAddress()], annmodel.SomeAddress())
 
     def build_root_walker(self):
@@ -55,24 +35,6 @@
                              resulttype=llmemory.Address)
         hop.genop('adr_add', [v_gcdata_adr, c_ofs],
                   resultvar=op.result)
 
-    def gct_stm_thread_starting(self, hop):
-        hop.genop("direct_call", [self.thread_starting_ptr, self.c_const_gc])
-
-    def gct_stm_thread_stopping(self, hop):
-        hop.genop("direct_call", [self.thread_stopping_ptr, self.c_const_gc])
-
-    def gct_stm_enter_transactional_mode(self, hop):
-        livevars = self.push_roots(hop)
-        hop.genop("direct_call", [self.stm_enter_transactional_mode_ptr,
-                                  self.c_const_gc])
-        self.pop_roots(hop, livevars)
-
-    def gct_stm_leave_transactional_mode(self, hop):
-        livevars = self.push_roots(hop)
-        hop.genop("direct_call", [self.stm_leave_transactional_mode_ptr,
-                                  self.c_const_gc])
-        self.pop_roots(hop, livevars)
-
     def gct_stm_writebarrier(self, hop):
         op = hop.spaceop
         v_adr = hop.genop('cast_ptr_to_adr',
@@ -149,9 +111,22 @@
             self.root_stack_depth = rsd
 
     def need_thread_support(self, gctransformer, getfn):
-        # we always have thread support, and it is handled
-        # in _declare_functions() already
-        pass
+        gc = gctransformer.gcdata.gc
+        #
+        def gc_thread_start():
+            self.allocate_shadow_stack()
+            gc.setup_thread()
+        #
+        def gc_thread_die():
+            gc.teardown_thread()
+            self.free_shadow_stack()
+        #
+        self.thread_start_ptr = getfn(
+            gc_thread_start,
+            [], annmodel.s_None)
+        self.thread_die_ptr = getfn(
+            gc_thread_die,
+            [], annmodel.s_None)
 
     def setup_root_walker(self):
         self.allocate_shadow_stack()
diff --git a/pypy/translator/stm/src_stm/core.c b/pypy/translator/stm/src_stm/core.c
--- a/pypy/translator/stm/src_stm/core.c
+++ b/pypy/translator/stm/src_stm/core.c
@@ -12,6 +12,7 @@
   /*unsigned long last_known_global_timestamp;*/
   owner_version_t my_lock_word;
   struct OrecList reads;
+  int active;    /* 0 = inactive, 1 = regular, 2 = inevitable */
   unsigned num_commits;
  unsigned num_aborts[ABORT_REASONS];
  unsigned num_spinloops[SPINLOOP_REASONS];
@@ -61,6 +62,7 @@
   unsigned int c;
   int i;
   struct tx_descriptor *d = thread_descriptor;
+  assert(d->active);
   d->num_spinloops[num]++;
 
   //printf("tx_spinloop(%d)\n", num);
@@ -80,7 +82,10 @@
 
 static _Bool is_inevitable(struct tx_descriptor *d)
 {
-  return d->setjmp_buf == NULL;
+  /* Assert that we are running a transaction.
+     Returns True if this transaction is inevitable. */
+  assert(d->active == 1 + !d->setjmp_buf);
+  return d->active == 2;
 }
 
 /*** run the redo log to commit a transaction, and release the locks */
@@ -180,6 +185,7 @@
 {
   d->reads.size = 0;
   redolog_clear(&d->redolog);
+  d->active = 0;
 }
 
 static void tx_cleanup(struct tx_descriptor *d)
@@ -201,7 +207,7 @@
 static void tx_abort(int reason)
 {
   struct tx_descriptor *d = thread_descriptor;
-  assert(!is_inevitable(d));
+  assert(d->active == 1);
   d->num_aborts[reason]++;
 #ifdef RPY_STM_DEBUG_PRINT
   PYPY_DEBUG_START("stm-abort");
@@ -220,7 +226,7 @@
 {
   int i;
   owner_version_t ovt;
-  assert(!is_inevitable(d));
+  assert(d->active == 1);
   for (i=0; i<d->reads.size; i++)
     {
     retry:
@@ -251,7 +257,7 @@
 {
   int i;
   owner_version_t ovt;
-  assert(!is_inevitable(d));
+  assert(d->active == 1);
   for (i=0; i<d->reads.size; i++)
     {
       ovt = GETVERSION(d->reads.items[i]);      // read this orec
@@ -443,10 +449,9 @@
   STM_DO_READ(memcpy(dst, src, size));
 }
 
-static void descriptor_init(void)
+long stm_descriptor_init(void)
 {
-  assert(thread_descriptor == NULL);
-  if (1)
+  if (thread_descriptor == NULL)
     {
       struct tx_descriptor *d = malloc(sizeof(struct tx_descriptor));
       memset(d, 0, sizeof(struct tx_descriptor));
@@ -469,13 +474,17 @@
               (long)pthread_self(), (long)d->my_lock_word);
       PYPY_DEBUG_STOP("stm-init");
 #endif
+      return 1;
     }
+  else
+    return 0;   /* already initialized */
 }
 
-static void descriptor_done(void)
+void stm_descriptor_done(void)
 {
   struct tx_descriptor *d = thread_descriptor;
   assert(d != NULL);
+  assert(d->active == 0);
 
   thread_descriptor = NULL;
 
@@ -516,6 +525,8 @@
 static void begin_transaction(jmp_buf* buf)
 {
   struct tx_descriptor *d = thread_descriptor;
+  assert(d->active == 0);
+  d->active = 1;
   d->setjmp_buf = buf;
   d->start_time = (/*d->last_known_global_timestamp*/ global_timestamp) & ~1;
 }
@@ -525,6 +536,8 @@
   /* Equivalent to begin_transaction(); stm_try_inevitable();
      except more efficient */
   struct tx_descriptor *d = thread_descriptor;
+  assert(d->active == 0);
+  d->active = 2;
   d->setjmp_buf = NULL;
 
   while (1)
@@ -612,8 +625,10 @@
      by another thread.  We set the lowest bit in global_timestamp
      to 1. */
   struct tx_descriptor *d = thread_descriptor;
-  if (is_inevitable(d))
-    return;   /* I am already inevitable */
+  if (d == NULL || d->active != 1)
+    return;   /* I am already inevitable, or not in a transaction at all
+                 (XXX statically we should know when we're outside
+                 a transaction) */
 
 #ifdef RPY_STM_DEBUG_PRINT
   PYPY_DEBUG_START("stm-inevitable");
@@ -670,7 +685,6 @@
 
 void stm_set_tls(void *newtls)
 {
-  descriptor_init();
   rpython_tls_object = newtls;
 }
 
@@ -681,7 +695,7 @@
 
 void stm_del_tls(void)
 {
-  descriptor_done();
+  rpython_tls_object = NULL;
 }
 
 void *stm_tldict_lookup(void *key)
@@ -714,6 +728,18 @@
     } REDOLOG_LOOP_END;
 }
 
+long stm_in_transaction(void)
+{
+  struct tx_descriptor *d = thread_descriptor;
+  return d->active;
+}
+
+long stm_is_inevitable(void)
+{
+  struct tx_descriptor *d = thread_descriptor;
+  return is_inevitable(d);
+}
+
 #undef GETVERSION
 #undef GETVERSIONREF
 #undef SETVERSION
diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h
--- a/pypy/translator/stm/src_stm/et.h
+++ b/pypy/translator/stm/src_stm/et.h
@@ -21,9 +21,15 @@
 void stm_tldict_add(void *, void *);
 void stm_tldict_enum(void);
 
+long stm_descriptor_init(void);
+void stm_descriptor_done(void);
+
 void stm_begin_inevitable_transaction(void);
 void stm_commit_transaction(void);
 
+long stm_in_transaction(void);
+long stm_is_inevitable(void);
+
 /* these functions are declared by generated C code from pypy.rlib.rstm
    and from the GC (see llop.nop(...)) */
 extern void pypy_g__stm_thread_starting(void);
diff --git a/pypy/translator/stm/stmgcintf.py b/pypy/translator/stm/stmgcintf.py
--- a/pypy/translator/stm/stmgcintf.py
+++ b/pypy/translator/stm/stmgcintf.py
@@ -48,6 +48,8 @@
     # C part of the implementation of the pypy.rlib.rstm module
     in_transaction = smexternal('stm_in_transaction', [], lltype.Signed)
     is_inevitable = smexternal('stm_is_inevitable', [], lltype.Signed)
+    descriptor_init = smexternal('stm_descriptor_init', [], lltype.Signed)
+    descriptor_done = smexternal('stm_descriptor_done', [], lltype.Void)
     begin_inevitable_transaction = smexternal(
         'stm_begin_inevitable_transaction', [], lltype.Void)
     commit_transaction = smexternal(
diff --git a/pypy/translator/stm/test/targetdemo2.py b/pypy/translator/stm/test/targetdemo2.py
--- a/pypy/translator/stm/test/targetdemo2.py
+++ b/pypy/translator/stm/test/targetdemo2.py
@@ -127,7 +127,8 @@
 def setup_threads():
     #space.threadlocals.setup_threads(space)
     bootstrapper.setup()
-    invoke_around_extcall(rstm.before_external_call, rstm.after_external_call)
+    invoke_around_extcall(rstm.before_external_call, rstm.after_external_call,
+                          rstm.enter_callback_call, rstm.leave_callback_call)
 
 def start_thread(args):
     bootstrapper.acquire(args)
diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py
--- a/pypy/translator/stm/transform.py
+++ b/pypy/translator/stm/transform.py
@@ -330,3 +330,6 @@
         for link in block.exits:
             link.args = [renames.get(v, v) for v in link.args]
         block.operations = newoperations
+
+# XXX must repeat stm_writebarrier after doing something that can
+# go to the next transaction

From noreply at buildbot.pypy.org  Sat May  5 20:49:59 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Sat, 5 May 2012 20:49:59 +0200 (CEST)
Subject: [pypy-commit] pypy stm-thread: Implement the TransactionBreakAnalyzer, needed in this version of STM.
Message-ID: <20120505184959.65A038208B@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: stm-thread
Changeset: r54901:85ba9ec36d90
Date: 2012-05-05 20:20 +0200
http://bitbucket.org/pypy/pypy/changeset/85ba9ec36d90/

Log:	Implement the TransactionBreakAnalyzer, needed in this version of
	STM. A variable pointing to a local object can again point to a
	global object later, and thus will need another stm_writebarrier.

diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py
--- a/pypy/rlib/rstm.py
+++ b/pypy/rlib/rstm.py
@@ -6,11 +6,13 @@
     stmgcintf.StmOperations.commit_transaction()
 before_external_call._gctransformer_hint_cannot_collect_ = True
 before_external_call._dont_reach_me_in_del_ = True
+before_external_call._transaction_break_ = True
 
 def after_external_call():
     stmgcintf.StmOperations.begin_inevitable_transaction()
 after_external_call._gctransformer_hint_cannot_collect_ = True
 after_external_call._dont_reach_me_in_del_ = True
+after_external_call._transaction_break_ = True
 
 def enter_callback_call():
     new_thread = stmgcintf.StmOperations.descriptor_init()
@@ -18,6 +20,7 @@
     return new_thread
 enter_callback_call._gctransformer_hint_cannot_collect_ = True
 enter_callback_call._dont_reach_me_in_del_ = True
+enter_callback_call._transaction_break_ = True
 
 def leave_callback_call(token):
     stmgcintf.StmOperations.commit_transaction()
@@ -25,8 +28,10 @@
         stmgcintf.StmOperations.descriptor_done()
 leave_callback_call._gctransformer_hint_cannot_collect_ = True
 leave_callback_call._dont_reach_me_in_del_ = True
+leave_callback_call._transaction_break_ = True
 
 def do_yield_thread():
     stmgcintf.StmOperations.do_yield_thread()
 do_yield_thread._gctransformer_hint_close_stack_ = True
 do_yield_thread._dont_reach_me_in_del_ = True
+do_yield_thread._transaction_break_ = True
diff --git a/pypy/translator/stm/gcsource.py b/pypy/translator/stm/gcsource.py
--- a/pypy/translator/stm/gcsource.py
+++ b/pypy/translator/stm/gcsource.py @@ -1,6 +1,7 @@ from pypy.objspace.flow.model import Variable from pypy.rpython.lltypesystem import lltype, rclass from pypy.translator.simplify import get_graph +from pypy.translator.backendopt import graphanalyze COPIES_POINTER = set([ @@ -110,6 +111,43 @@ return resultlist +class TransactionBreakAnalyzer(graphanalyze.BoolGraphAnalyzer): + """This analyzer looks for function calls that may ultimately + cause a transaction break (end of previous transaction, start + of next one).""" + + def analyze_direct_call(self, graph, seen=None): + try: + func = graph.func + except AttributeError: + pass + else: + if getattr(func, '_transaction_break_', False): + return True + return graphanalyze.GraphAnalyzer.analyze_direct_call(self, graph, + seen) + + def analyze_simple_operation(self, op, graphinfo): + return op.opname in ('stm_start_transaction', + 'stm_stop_transaction') + + +def enum_transactionbroken_vars(translator): + transactionbreak_analyzer = TransactionBreakAnalyzer(translator) + transactionbreak_analyzer.analyze_all() + for graph in translator.graphs: + for block in graph.iterblocks(): + livevars = set() + for link in block.exits: + livevars |= set(link.args) - set(link.getextravars()) + for op in block.operations[::-1]: + livevars.discard(op.result) + if transactionbreak_analyzer.analyze(op): + for v in livevars: + yield v + livevars.update(op.args) + + class GcSource(object): """Works like a dict {gcptr-var: set-of-sources}. 
A source is a Constant, or a SpaceOperation that creates the value, or a string @@ -120,6 +158,8 @@ self._backmapping = {} for v1, v2 in enum_gc_dependencies(translator): self._backmapping.setdefault(v2, []).append(v1) + for v2 in enum_transactionbroken_vars(translator): + self._backmapping.setdefault(v2, []).append('transactionbreak') def __getitem__(self, variable): result = set() diff --git a/pypy/translator/stm/test/test_gcsource.py b/pypy/translator/stm/test/test_gcsource.py --- a/pypy/translator/stm/test/test_gcsource.py +++ b/pypy/translator/stm/test/test_gcsource.py @@ -149,3 +149,26 @@ s = gsrc[v_result] assert len(s) == 1 assert list(s)[0].opname == 'hint' + +def test_transactionbroken(): + def break_transaction(): + pass + break_transaction._transaction_break_ = True + # + def main(n): + x = X(n) + break_transaction() + return x + gsrc = gcsource(main, [int]) + v_result = gsrc.translator.graphs[0].getreturnvar() + s = gsrc[v_result] + assert 'transactionbreak' in s + # + def main(n): + break_transaction() + x = X(n) + return x + gsrc = gcsource(main, [int]) + v_result = gsrc.translator.graphs[0].getreturnvar() + s = gsrc[v_result] + assert 'transactionbreak' not in s diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -330,6 +330,3 @@ for link in block.exits: link.args = [renames.get(v, v) for v in link.args] block.operations = newoperations - -# XXX must repeat stm_writebarrier after doing something that can -# go to the next transaction From noreply at buildbot.pypy.org Sat May 5 20:50:00 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 May 2012 20:50:00 +0200 (CEST) Subject: [pypy-commit] pypy stm-thread: Analyze statically which calls can eventually cause transaction breaks. 
Message-ID: <20120505185000.9D23C820B3@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-thread Changeset: r54902:d6e6921e2043 Date: 2012-05-05 20:49 +0200 http://bitbucket.org/pypy/pypy/changeset/d6e6921e2043/ Log: Analyze statically which calls can eventually cause transaction breaks. For any such call, the variables around the call can switch back from LOCAL to GLOBAL. diff --git a/pypy/translator/stm/gcsource.py b/pypy/translator/stm/gcsource.py --- a/pypy/translator/stm/gcsource.py +++ b/pypy/translator/stm/gcsource.py @@ -1,6 +1,7 @@ from pypy.objspace.flow.model import Variable from pypy.rpython.lltypesystem import lltype, rclass from pypy.translator.simplify import get_graph +from pypy.translator.unsimplify import split_block from pypy.translator.backendopt import graphanalyze @@ -132,20 +133,36 @@ 'stm_stop_transaction') -def enum_transactionbroken_vars(translator): - transactionbreak_analyzer = TransactionBreakAnalyzer(translator) - transactionbreak_analyzer.analyze_all() +def enum_transactionbroken_vars(translator, transactionbreak_analyzer): + if transactionbreak_analyzer is None: + return # for tests only for graph in translator.graphs: for block in graph.iterblocks(): - livevars = set() + if not block.operations: + continue + for op in block.operations[:-1]: + assert not transactionbreak_analyzer.analyze(op) + op = block.operations[-1] + if not transactionbreak_analyzer.analyze(op): + continue + # This block ends in a transaction breaking operation. So + # any variable passed from this block to a next one (with + # the exception of the variable freshly returned by the + # last operation) must be assumed to be potentially global. 
for link in block.exits: - livevars |= set(link.args) - set(link.getextravars()) - for op in block.operations[::-1]: - livevars.discard(op.result) - if transactionbreak_analyzer.analyze(op): - for v in livevars: - yield v - livevars.update(op.args) + for v1, v2 in zip(link.args, link.target.inputargs): + if v1 is not op.result: + yield v2 + +def break_blocks_after_transaction_breaker(translator, graph, + transactionbreak_analyzer): + """Split blocks so that they end immediately after any operation + that may cause a transaction break.""" + for block in list(graph.iterblocks()): + for i in range(len(block.operations)-2, -1, -1): + op = block.operations[i] + if transactionbreak_analyzer.analyze(op): + split_block(translator.annotator, block, i + 1) class GcSource(object): @@ -153,12 +170,13 @@ Constant, or a SpaceOperation that creates the value, or a string which describes a special case.""" - def __init__(self, translator): + def __init__(self, translator, transactionbreak_analyzer=None): self.translator = translator self._backmapping = {} for v1, v2 in enum_gc_dependencies(translator): self._backmapping.setdefault(v2, []).append(v1) - for v2 in enum_transactionbroken_vars(translator): + for v2 in enum_transactionbroken_vars(translator, + transactionbreak_analyzer): self._backmapping.setdefault(v2, []).append('transactionbreak') def __getitem__(self, variable): diff --git a/pypy/translator/stm/test/test_gcsource.py b/pypy/translator/stm/test/test_gcsource.py --- a/pypy/translator/stm/test/test_gcsource.py +++ b/pypy/translator/stm/test/test_gcsource.py @@ -1,5 +1,7 @@ from pypy.translator.translator import TranslationContext from pypy.translator.stm.gcsource import GcSource +from pypy.translator.stm.gcsource import TransactionBreakAnalyzer +from pypy.translator.stm.gcsource import break_blocks_after_transaction_breaker from pypy.objspace.flow.model import SpaceOperation, Constant from pypy.rpython.lltypesystem import lltype from pypy.rlib.jit import hint @@ -10,11 
+12,19 @@ self.n = n -def gcsource(func, sig): +def gcsource(func, sig, transactionbreak=False): t = TranslationContext() t.buildannotator().build_types(func, sig) t.buildrtyper().specialize() - gsrc = GcSource(t) + if transactionbreak: + transactionbreak_analyzer = TransactionBreakAnalyzer(t) + transactionbreak_analyzer.analyze_all() + for graph in t.graphs: + break_blocks_after_transaction_breaker( + t, graph, transactionbreak_analyzer) + else: + transactionbreak_analyzer = None + gsrc = GcSource(t, transactionbreak_analyzer) return gsrc def test_simple(): @@ -159,7 +169,7 @@ x = X(n) break_transaction() return x - gsrc = gcsource(main, [int]) + gsrc = gcsource(main, [int], transactionbreak=True) v_result = gsrc.translator.graphs[0].getreturnvar() s = gsrc[v_result] assert 'transactionbreak' in s @@ -168,7 +178,27 @@ break_transaction() x = X(n) return x - gsrc = gcsource(main, [int]) + gsrc = gcsource(main, [int], transactionbreak=True) v_result = gsrc.translator.graphs[0].getreturnvar() s = gsrc[v_result] assert 'transactionbreak' not in s + # + def main(n): + x = X(n) + break_transaction() + y = X(n) # extra operation in the same block + return x + gsrc = gcsource(main, [int], transactionbreak=True) + v_result = gsrc.translator.graphs[0].getreturnvar() + s = gsrc[v_result] + assert 'transactionbreak' in s + # + def g(n): + break_transaction() + return X(n) + def main(n): + return g(n) + gsrc = gcsource(main, [int], transactionbreak=True) + v_result = gsrc.translator.graphs[0].getreturnvar() + s = gsrc[v_result] + assert 'transactionbreak' not in s diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -3,6 +3,7 @@ from pypy.annotation import model as annmodel from pypy.translator.unsimplify import varoftype, copyvar from pypy.translator.stm.localtracker import StmLocalTracker +from pypy.translator.stm import gcsource from pypy.rpython.lltypesystem import 
lltype, lloperation from pypy.rpython import rclass @@ -39,12 +40,22 @@ def transform(self): assert not hasattr(self.translator, 'stm_transformation_applied') self.start_log() - for graph in self.translator.graphs: + t = self.translator + transactionbreak_analyzer = gcsource.TransactionBreakAnalyzer(t) + transactionbreak_analyzer.analyze_all() + # + for graph in t.graphs: + gcsource.break_blocks_after_transaction_breaker( + t, graph, self.transactionbreak_analyzer) + # + for graph in t.graphs: pre_insert_stm_writebarrier(graph) - self.localtracker = StmLocalTracker(self.translator) - for graph in self.translator.graphs: + # + self.localtracker = StmLocalTracker(t, transactionbreak_analyzer) + for graph in t.graphs: self.transform_graph(graph) self.localtracker = None + # self.translator.stm_transformation_applied = True self.print_logs() @@ -253,7 +264,6 @@ # one variable on which we do 'stm_writebarrier', but there are # also other variables that contain the same pointer, e.g. casted # to a different precise type. 
- from pypy.translator.stm.gcsource import COPIES_POINTER, _is_gc # def emit(op): for v1 in op.args: @@ -277,9 +287,9 @@ copies = {} wants_a_writebarrier = {} for op in block.operations: - if op.opname in COPIES_POINTER: + if op.opname in gcsource.COPIES_POINTER: assert len(op.args) == 1 - if _is_gc(op.result) and _is_gc(op.args[0]): + if gcsource._is_gc(op.result) and gcsource._is_gc(op.args[0]): copies[op.result] = op elif (op.opname in ('getfield', 'getarrayitem', 'getinteriorfield') and From noreply at buildbot.pypy.org Sat May 5 20:52:32 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sat, 5 May 2012 20:52:32 +0200 (CEST) Subject: [pypy-commit] pypy stm-thread: Typos Message-ID: <20120505185232.A56648208B@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-thread Changeset: r54903:b49b172d57c4 Date: 2012-05-05 20:52 +0200 http://bitbucket.org/pypy/pypy/changeset/b49b172d57c4/ Log: Typos diff --git a/pypy/translator/stm/localtracker.py b/pypy/translator/stm/localtracker.py --- a/pypy/translator/stm/localtracker.py +++ b/pypy/translator/stm/localtracker.py @@ -15,9 +15,9 @@ of the stmgc: a pointer is 'local' if it goes to the thread-local memory, and 'global' if it points to the shared read-only memory area.""" - def __init__(self, translator): + def __init__(self, translator, transactionbreak_analyzer): self.translator = translator - self.gsrc = GcSource(translator) + self.gsrc = GcSource(translator, transactionbreak_analyzer) def is_local(self, variable): try: diff --git a/pypy/translator/stm/transform.py b/pypy/translator/stm/transform.py --- a/pypy/translator/stm/transform.py +++ b/pypy/translator/stm/transform.py @@ -46,7 +46,7 @@ # for graph in t.graphs: gcsource.break_blocks_after_transaction_breaker( - t, graph, self.transactionbreak_analyzer) + t, graph, transactionbreak_analyzer) # for graph in t.graphs: pre_insert_stm_writebarrier(graph) From noreply at buildbot.pypy.org Sat May 5 21:33:06 2012 From: noreply at buildbot.pypy.org (arigo) 
Date: Sat, 5 May 2012 21:33:06 +0200 (CEST) Subject: [pypy-commit] pypy stm-thread: Fix fix fix. Good, now targetdemo2 passes (but still with only Message-ID: <20120505193306.DB46E8208B@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-thread Changeset: r54904:4bc269d8fe3a Date: 2012-05-05 21:32 +0200 http://bitbucket.org/pypy/pypy/changeset/4bc269d8fe3a/ Log: Fix fix fix. Good, now targetdemo2 passes (but still with only inevitable transactions, which is of course still pointless) diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py --- a/pypy/rlib/rstm.py +++ b/pypy/rlib/rstm.py @@ -1,32 +1,38 @@ from pypy.translator.stm import stmgcintf from pypy.rlib.debug import ll_assert +from pypy.rpython.lltypesystem import lltype +from pypy.rpython.lltypesystem.lloperation import llop def before_external_call(): + llop.stm_stop_transaction(lltype.Void) stmgcintf.StmOperations.commit_transaction() -before_external_call._gctransformer_hint_cannot_collect_ = True before_external_call._dont_reach_me_in_del_ = True before_external_call._transaction_break_ = True def after_external_call(): stmgcintf.StmOperations.begin_inevitable_transaction() -after_external_call._gctransformer_hint_cannot_collect_ = True + llop.stm_start_transaction(lltype.Void) after_external_call._dont_reach_me_in_del_ = True after_external_call._transaction_break_ = True def enter_callback_call(): - new_thread = stmgcintf.StmOperations.descriptor_init() + token = stmgcintf.StmOperations.descriptor_init() stmgcintf.StmOperations.begin_inevitable_transaction() - return new_thread -enter_callback_call._gctransformer_hint_cannot_collect_ = True + if token != 1: + llop.stm_start_transaction(lltype.Void) + #else: the StmGCTLS is not built yet. 
leave it to gc_thread_start() + return token enter_callback_call._dont_reach_me_in_del_ = True enter_callback_call._transaction_break_ = True def leave_callback_call(token): + if token != 1: + llop.stm_stop_transaction(lltype.Void) + #else: the StmGCTLS is already destroyed, done by gc_thread_die() stmgcintf.StmOperations.commit_transaction() if token == 1: stmgcintf.StmOperations.descriptor_done() -leave_callback_call._gctransformer_hint_cannot_collect_ = True leave_callback_call._dont_reach_me_in_del_ = True leave_callback_call._transaction_break_ = True diff --git a/pypy/rpython/memory/gctransform/stmframework.py b/pypy/rpython/memory/gctransform/stmframework.py --- a/pypy/rpython/memory/gctransform/stmframework.py +++ b/pypy/rpython/memory/gctransform/stmframework.py @@ -14,6 +14,12 @@ def _declare_functions(self, GCClass, getfn, s_gc, *args): super(StmFrameworkGCTransformer, self)._declare_functions( GCClass, getfn, s_gc, *args) + self.stm_start_ptr = getfn( + self.gcdata.gc.start_transaction.im_func, + [s_gc], annmodel.s_None) + self.stm_stop_ptr = getfn( + self.gcdata.gc.stop_transaction.im_func, + [s_gc], annmodel.s_None) self.stm_writebarrier_ptr = getfn( self.gcdata.gc.stm_writebarrier, [annmodel.SomeAddress()], annmodel.SomeAddress()) @@ -148,6 +154,7 @@ def start_transaction(self): # When a transaction is aborted, it leaves behind its shadow # stack content. We have to clear it here. 
+ XXX stackgcdata = self.stackgcdata stackgcdata.root_stack_top = stackgcdata.root_stack_base From noreply at buildbot.pypy.org Sun May 6 10:45:21 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 6 May 2012 10:45:21 +0200 (CEST) Subject: [pypy-commit] pypy default: Optimize out SAME_AS even if OptRewrite is disabled, to prevent unrolling from crashing in that case Message-ID: <20120506084521.352D482E46@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r54905:8f06426c3392 Date: 2012-05-06 10:16 +0200 http://bitbucket.org/pypy/pypy/changeset/8f06426c3392/ Log: Optimize out SAME_AS even if OptRewrite is disabled, to prevent unrolling from crashing in that case diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -652,7 +652,11 @@ arrayvalue.make_len_gt(MODE_UNICODE, op.getdescr(), indexvalue.box.getint()) self.optimize_default(op) - + # These are typically removed already by OptRewrite, but it can be + # disabled, and unrolling emits some SAME_AS ops to set up the + # optimizer state. These need to always be optimized out. 
+ def optimize_SAME_AS(self, op): + self.make_equal_to(op.result, self.getvalue(op.getarg(0))) dispatch_opt = make_dispatcher_method(Optimizer, 'optimize_', From noreply at buildbot.pypy.org Sun May 6 10:45:23 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Sun, 6 May 2012 10:45:23 +0200 (CEST) Subject: [pypy-commit] pypy default: merge Message-ID: <20120506084523.31E9582F4E@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r54906:bc24e3ff27a9 Date: 2012-05-06 10:44 +0200 http://bitbucket.org/pypy/pypy/changeset/bc24e3ff27a9/ Log: merge diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -6,6 +6,7 @@ import re from pypy.interpreter.baseobjspace import InternalSpaceCache, W_Root +from pypy.interpreter.error import OperationError from pypy.module.micronumpy import interp_boxes from pypy.module.micronumpy.interp_dtype import get_dtype_cache from pypy.module.micronumpy.interp_numarray import (Scalar, BaseArray, @@ -39,11 +40,11 @@ THREE_ARG_FUNCTIONS = ['where'] class FakeSpace(object): - w_ValueError = None - w_TypeError = None - w_IndexError = None - w_OverflowError = None - w_NotImplementedError = None + w_ValueError = "ValueError" + w_TypeError = "TypeError" + w_IndexError = "IndexError" + w_OverflowError = "OverflowError" + w_NotImplementedError = "NotImplementedError" w_None = None w_bool = "bool" @@ -126,8 +127,13 @@ return w_obj.intval elif isinstance(w_obj, FloatObject): return int(w_obj.floatval) + elif isinstance(w_obj, SliceObject): + raise OperationError(self.w_TypeError, self.wrap("slice.")) raise NotImplementedError + def index(self, w_obj): + return self.wrap(self.int_w(w_obj)) + def str_w(self, w_obj): if isinstance(w_obj, StringObject): return w_obj.v diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ 
b/pypy/module/micronumpy/interp_numarray.py @@ -350,12 +350,31 @@ if shape_len == 1: if space.isinstance_w(w_idx, space.w_int): return True + + try: + value = space.int_w(space.index(w_idx)) + return True + except OperationError: + pass + + try: + value = space.int_w(w_idx) + return True + except OperationError: + pass + if space.isinstance_w(w_idx, space.w_slice): return False elif (space.isinstance_w(w_idx, space.w_slice) or space.isinstance_w(w_idx, space.w_int)): return False - lgt = space.len_w(w_idx) + + try: + lgt = space.len_w(w_idx) + except OperationError: + raise OperationError(space.w_IndexError, + space.wrap("index must be either an int or a sequence.")) + if lgt > shape_len: raise OperationError(space.w_IndexError, space.wrap("invalid index")) @@ -1030,8 +1049,21 @@ @jit.unroll_safe def _index_of_single_item(self, space, w_idx): - if space.isinstance_w(w_idx, space.w_int): - idx = space.int_w(w_idx) + is_valid = False + try: + idx = space.int_w(space.index(w_idx)) + is_valid = True + except OperationError: + pass + + if not is_valid: + try: + idx = space.int_w(w_idx) + is_valid = True + except OperationError: + pass + + if is_valid: if idx < 0: idx = self.shape[0] + idx if idx < 0 or idx >= self.shape[0]: diff --git a/pypy/module/micronumpy/test/test_base.py b/pypy/module/micronumpy/test/test_base.py --- a/pypy/module/micronumpy/test/test_base.py +++ b/pypy/module/micronumpy/test/test_base.py @@ -10,6 +10,7 @@ import sys class BaseNumpyAppTest(object): + @classmethod def setup_class(cls): if option.runappdirect: if '__pypy__' not in sys.builtin_module_names: diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -8,7 +8,6 @@ from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest - class MockDtype(object): 
class itemtype(object): @staticmethod @@ -195,6 +194,36 @@ assert _to_coords(13, 'F') == [1, 0, 2] class AppTestNumArray(BaseNumpyAppTest): + def w_CustomIndexObject(self, index): + class CustomIndexObject(object): + def __init__(self, index): + self.index = index + def __index__(self): + return self.index + + return CustomIndexObject(index) + + def w_CustomIndexIntObject(self, index, value): + class CustomIndexIntObject(object): + def __init__(self, index, value): + self.index = index + self.value = value + def __index__(self): + return self.index + def __int__(self): + return self.value + + return CustomIndexIntObject(index, value) + + def w_CustomIntObject(self, value): + class CustomIntObject(object): + def __init__(self, value): + self.value = value + def __index__(self): + return self.value + + return CustomIntObject(value) + def test_ndarray(self): from _numpypy import ndarray, array, dtype @@ -329,6 +358,28 @@ assert a[1, 3] == 8 assert a.T[1, 2] == 11 + def test_getitem_obj_index(self): + from _numpypy import arange + + a = arange(10) + + assert a[self.CustomIndexObject(1)] == 1 + + def test_getitem_obj_prefer_index_to_int(self): + from _numpypy import arange + + a = arange(10) + + + assert a[self.CustomIndexIntObject(0, 1)] == 0 + + def test_getitem_obj_int(self): + from _numpypy import arange + + a = arange(10) + + assert a[self.CustomIntObject(1)] == 1 + def test_setitem(self): from _numpypy import array a = array(range(5)) @@ -348,6 +399,48 @@ for i in xrange(5): assert a[i] == i + def test_setitem_obj_index(self): + from _numpypy import arange + + a = arange(10) + + a[self.CustomIndexObject(1)] = 100 + assert a[1] == 100 + + def test_setitem_obj_prefer_index_to_int(self): + from _numpypy import arange + + a = arange(10) + + a[self.CustomIndexIntObject(0, 1)] = 100 + assert a[0] == 100 + + def test_setitem_obj_int(self): + from _numpypy import arange + + a = arange(10) + + a[self.CustomIntObject(1)] = 100 + + assert a[1] == 100 + + def 
test_access_swallow_exception(self): + class ErrorIndex(object): + def __index__(self): + return 1 / 0 + + class ErrorInt(object): + def __int__(self): + return 1 / 0 + + # numpy will swallow errors in __int__ and __index__ and + # just raise IndexError. + + from _numpypy import arange + a = arange(10) + raises(IndexError, "a[ErrorIndex()] == 0") + raises(IndexError, "a[ErrorInt()] == 0") + def test_setslice_array(self): from _numpypy import array a = array(range(5)) From noreply at buildbot.pypy.org Sun May 6 14:51:17 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Sun, 6 May 2012 14:51:17 +0200 (CEST) Subject: [pypy-commit] pypy stdlib-unification: merge from default Message-ID: <20120506125117.C047082E46@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: stdlib-unification Changeset: r54907:b4c7d0e64cc7 Date: 2012-05-06 14:44 +0200 http://bitbucket.org/pypy/pypy/changeset/b4c7d0e64cc7/ Log: merge from default diff --git a/pypy/doc/cppyy.rst b/pypy/doc/cppyy.rst --- a/pypy/doc/cppyy.rst +++ b/pypy/doc/cppyy.rst @@ -51,8 +51,15 @@ `Download`_ a binary or install from `source`_. Some Linux and Mac systems may have ROOT provided in the list of scientific software of their packager. -A current, standalone version of Reflex should be provided at some point, -once the dependencies and general packaging have been thought out. +If, however, you prefer a standalone version of Reflex, the best is to get +this `recent snapshot`_, and install like so:: + + $ tar jxf reflex-2012-05-02.tar.bz2 + $ cd reflex-2012-05-02 + $ build/autogen + $ ./configure + $ make && make install + Also, make sure you have a version of `gccxml`_ installed, which is most easily provided by the packager of your system. If you read up on gccxml, you'll probably notice that it is no longer being @@ -61,12 +68,13 @@ .. _`Download`: http://root.cern.ch/drupal/content/downloading-root .. _`source`: http://root.cern.ch/drupal/content/installing-root-source +.. 
_`recent snapshot`: http://cern.ch/wlav/reflex-2012-05-02.tar.bz2 .. _`gccxml`: http://www.gccxml.org Next, get the `PyPy sources`_, select the reflex-support branch, and build pypy-c. For the build to succeed, the ``$ROOTSYS`` environment variable must point to -the location of your ROOT installation:: +the location of your ROOT (or standalone Reflex) installation:: $ hg clone https://bitbucket.org/pypy/pypy $ cd pypy diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1652,8 +1652,6 @@ 'UnicodeTranslateError', 'ValueError', 'ZeroDivisionError', - 'UnicodeEncodeError', - 'UnicodeDecodeError', ] if sys.platform.startswith("win"): diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -1290,10 +1290,6 @@ w(self.valuestackdepth)]) def handle(self, frame, unroller): - next_instr = self.really_handle(frame, unroller) # JIT hack - return r_uint(next_instr) - - def really_handle(self, frame, unroller): """ Purely abstract method """ raise NotImplementedError @@ -1305,17 +1301,17 @@ _opname = 'SETUP_LOOP' handling_mask = SBreakLoop.kind | SContinueLoop.kind - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): if isinstance(unroller, SContinueLoop): # re-push the loop block without cleaning up the value stack, # and jump to the beginning of the loop, stored in the # exception's argument frame.append_block(self) - return unroller.jump_to + return r_uint(unroller.jump_to) else: # jump to the end of the loop self.cleanupstack(frame) - return self.handlerposition + return r_uint(self.handlerposition) class ExceptBlock(FrameBlock): @@ -1325,7 +1321,7 @@ _opname = 'SETUP_EXCEPT' handling_mask = SApplicationException.kind - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): # push the exception to the value stack for 
inspection by the # exception handler (the code after the except:) self.cleanupstack(frame) @@ -1340,7 +1336,7 @@ frame.pushvalue(operationerr.get_w_value(frame.space)) frame.pushvalue(operationerr.w_type) frame.last_exception = operationerr - return self.handlerposition # jump to the handler + return r_uint(self.handlerposition) # jump to the handler class FinallyBlock(FrameBlock): @@ -1361,7 +1357,7 @@ frame.pushvalue(frame.space.w_None) frame.pushvalue(frame.space.w_None) - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): # any abnormal reason for unrolling a finally: triggers the end of # the block unrolling and the entering the finally: handler. # see comments in cleanup(). @@ -1369,18 +1365,18 @@ frame.pushvalue(frame.space.wrap(unroller)) frame.pushvalue(frame.space.w_None) frame.pushvalue(frame.space.w_None) - return self.handlerposition # jump to the handler + return r_uint(self.handlerposition) # jump to the handler class WithBlock(FinallyBlock): _immutable_ = True - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): if (frame.space.full_exceptions and isinstance(unroller, SApplicationException)): unroller.operr.normalize_exception(frame.space) - return FinallyBlock.really_handle(self, frame, unroller) + return FinallyBlock.handle(self, frame, unroller) block_classes = {'SETUP_LOOP': LoopBlock, 'SETUP_EXCEPT': ExceptBlock, diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -652,7 +652,11 @@ arrayvalue.make_len_gt(MODE_UNICODE, op.getdescr(), indexvalue.box.getint()) self.optimize_default(op) - + # These are typically removed already by OptRewrite, but it can be + # disabled, and unrolling emits some SAME_AS ops to set up the + # optimizer state. These need to always be optimized out. 
+ def optimize_SAME_AS(self, op): + self.make_equal_to(op.result, self.getvalue(op.getarg(0))) dispatch_opt = make_dispatcher_method(Optimizer, 'optimize_', diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py --- a/pypy/jit/metainterp/optimizeopt/unroll.py +++ b/pypy/jit/metainterp/optimizeopt/unroll.py @@ -335,9 +335,13 @@ args[short_inputargs[i]] = jmp_to_short_args[i] self.short_inliner = Inliner(short_inputargs, jmp_to_short_args) - for op in self.short[1:]: + i = 1 + while i < len(self.short): + # Note that self.short might be extended during this loop + op = self.short[i] newop = self.short_inliner.inline_op(op) self.optimizer.send_extra_operation(newop) + i += 1 # Import boxes produced in the preamble but used in the loop newoperations = self.optimizer.get_newoperations() diff --git a/pypy/module/micronumpy/compile.py b/pypy/module/micronumpy/compile.py --- a/pypy/module/micronumpy/compile.py +++ b/pypy/module/micronumpy/compile.py @@ -6,6 +6,7 @@ import re from pypy.interpreter.baseobjspace import InternalSpaceCache, W_Root +from pypy.interpreter.error import OperationError from pypy.module.micronumpy import interp_boxes from pypy.module.micronumpy.interp_dtype import get_dtype_cache from pypy.module.micronumpy.interp_numarray import (Scalar, BaseArray, @@ -39,11 +40,11 @@ THREE_ARG_FUNCTIONS = ['where'] class FakeSpace(object): - w_ValueError = None - w_TypeError = None - w_IndexError = None - w_OverflowError = None - w_NotImplementedError = None + w_ValueError = "ValueError" + w_TypeError = "TypeError" + w_IndexError = "IndexError" + w_OverflowError = "OverflowError" + w_NotImplementedError = "NotImplementedError" w_None = None w_bool = "bool" @@ -126,8 +127,13 @@ return w_obj.intval elif isinstance(w_obj, FloatObject): return int(w_obj.floatval) + elif isinstance(w_obj, SliceObject): + raise OperationError(self.w_TypeError, self.wrap("slice.")) raise NotImplementedError + def index(self, w_obj): + return 
self.wrap(self.int_w(w_obj)) + def str_w(self, w_obj): if isinstance(w_obj, StringObject): return w_obj.v diff --git a/pypy/module/micronumpy/interp_numarray.py b/pypy/module/micronumpy/interp_numarray.py --- a/pypy/module/micronumpy/interp_numarray.py +++ b/pypy/module/micronumpy/interp_numarray.py @@ -350,12 +350,31 @@ if shape_len == 1: if space.isinstance_w(w_idx, space.w_int): return True + + try: + value = space.int_w(space.index(w_idx)) + return True + except OperationError: + pass + + try: + value = space.int_w(w_idx) + return True + except OperationError: + pass + if space.isinstance_w(w_idx, space.w_slice): return False elif (space.isinstance_w(w_idx, space.w_slice) or space.isinstance_w(w_idx, space.w_int)): return False - lgt = space.len_w(w_idx) + + try: + lgt = space.len_w(w_idx) + except OperationError: + raise OperationError(space.w_IndexError, + space.wrap("index must be either an int or a sequence.")) + if lgt > shape_len: raise OperationError(space.w_IndexError, space.wrap("invalid index")) @@ -1030,8 +1049,21 @@ @jit.unroll_safe def _index_of_single_item(self, space, w_idx): - if space.isinstance_w(w_idx, space.w_int): - idx = space.int_w(w_idx) + is_valid = False + try: + idx = space.int_w(space.index(w_idx)) + is_valid = True + except OperationError: + pass + + if not is_valid: + try: + idx = space.int_w(w_idx) + is_valid = True + except OperationError: + pass + + if is_valid: if idx < 0: idx = self.shape[0] + idx if idx < 0 or idx >= self.shape[0]: diff --git a/pypy/module/micronumpy/test/test_base.py b/pypy/module/micronumpy/test/test_base.py --- a/pypy/module/micronumpy/test/test_base.py +++ b/pypy/module/micronumpy/test/test_base.py @@ -10,6 +10,7 @@ import sys class BaseNumpyAppTest(object): + @classmethod def setup_class(cls): if option.runappdirect: if '__pypy__' not in sys.builtin_module_names: diff --git a/pypy/module/micronumpy/test/test_numarray.py b/pypy/module/micronumpy/test/test_numarray.py --- 
a/pypy/module/micronumpy/test/test_numarray.py +++ b/pypy/module/micronumpy/test/test_numarray.py @@ -8,7 +8,6 @@ from pypy.module.micronumpy.interp_numarray import W_NDimArray, shape_agreement from pypy.module.micronumpy.test.test_base import BaseNumpyAppTest - class MockDtype(object): class itemtype(object): @staticmethod @@ -195,6 +194,36 @@ assert _to_coords(13, 'F') == [1, 0, 2] class AppTestNumArray(BaseNumpyAppTest): + def w_CustomIndexObject(self, index): + class CustomIndexObject(object): + def __init__(self, index): + self.index = index + def __index__(self): + return self.index + + return CustomIndexObject(index) + + def w_CustomIndexIntObject(self, index, value): + class CustomIndexIntObject(object): + def __init__(self, index, value): + self.index = index + self.value = value + def __index__(self): + return self.index + def __int__(self): + return self.value + + return CustomIndexIntObject(index, value) + + def w_CustomIntObject(self, value): + class CustomIntObject(object): + def __init__(self, value): + self.value = value + def __index__(self): + return self.value + + return CustomIntObject(value) + def test_ndarray(self): from _numpypy import ndarray, array, dtype @@ -329,6 +358,28 @@ assert a[1, 3] == 8 assert a.T[1, 2] == 11 + def test_getitem_obj_index(self): + from _numpypy import arange + + a = arange(10) + + assert a[self.CustomIndexObject(1)] == 1 + + def test_getitem_obj_prefer_index_to_int(self): + from _numpypy import arange + + a = arange(10) + + + assert a[self.CustomIndexIntObject(0, 1)] == 0 + + def test_getitem_obj_int(self): + from _numpypy import arange + + a = arange(10) + + assert a[self.CustomIntObject(1)] == 1 + def test_setitem(self): from _numpypy import array a = array(range(5)) @@ -348,6 +399,48 @@ for i in xrange(5): assert a[i] == i + def test_setitem_obj_index(self): + from _numpypy import arange + + a = arange(10) + + a[self.CustomIndexObject(1)] = 100 + assert a[1] == 100 + + def 
test_setitem_obj_prefer_index_to_int(self): + from _numpypy import arange + + a = arange(10) + + a[self.CustomIndexIntObject(0, 1)] = 100 + assert a[0] == 100 + + def test_setitem_obj_int(self): + from _numpypy import arange + + a = arange(10) + + a[self.CustomIntObject(1)] = 100 + + assert a[1] == 100 + + def test_access_swallow_exception(self): + class ErrorIndex(object): + def __index__(self): + return 1 / 0 + + class ErrorInt(object): + def __int__(self): + return 1 / 0 + + # numpy will swallow errors in __int__ and __index__ and + # just raise IndexError. + + from _numpypy import arange + a = arange(10) + raises(IndexError, "a[ErrorIndex()] == 0") + raises(IndexError, "a[ErrorInt()] == 0") + def test_setslice_array(self): from _numpypy import array a = array(range(5)) diff --git a/pypy/module/thread/gil.py b/pypy/module/thread/gil.py --- a/pypy/module/thread/gil.py +++ b/pypy/module/thread/gil.py @@ -5,7 +5,7 @@ # This module adds a global lock to an object space. # If multiple threads try to execute simultaneously in this space, # all but one will be blocked. The other threads get a chance to run -# from time to time, using the hook yield_thread(). +# from time to time, using the periodic action GILReleaseAction. from pypy.module.thread import ll_thread as thread from pypy.module.thread.error import wrap_thread_error @@ -51,8 +51,6 @@ self.gil_ready = False self.setup_threads(space) - def yield_thread(self): - do_yield_thread() class GILReleaseAction(PeriodicAsyncAction): """An action called every sys.checkinterval bytecodes. 
It releases diff --git a/pypy/module/thread/test/test_gil.py b/pypy/module/thread/test/test_gil.py --- a/pypy/module/thread/test/test_gil.py +++ b/pypy/module/thread/test/test_gil.py @@ -55,7 +55,7 @@ assert state.datalen3 == len(state.data) assert state.datalen4 == len(state.data) debug_print(main, i, state.datalen4) - state.threadlocals.yield_thread() + gil.do_yield_thread() assert i == j j += 1 def bootstrap(): From noreply at buildbot.pypy.org Sun May 6 14:51:19 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Sun, 6 May 2012 14:51:19 +0200 (CEST) Subject: [pypy-commit] pypy stdlib-unification/py3k: merge from py3k Message-ID: <20120506125119.2110482E46@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: stdlib-unification/py3k Changeset: r54908:143ff76797dd Date: 2012-05-06 14:50 +0200 http://bitbucket.org/pypy/pypy/changeset/143ff76797dd/ Log: merge from py3k diff --git a/pypy/module/_sre/test/test_app_sre.py b/pypy/module/_sre/test/test_app_sre.py --- a/pypy/module/_sre/test/test_app_sre.py +++ b/pypy/module/_sre/test/test_app_sre.py @@ -189,13 +189,13 @@ def test_sub_unicode(self): import re - assert isinstance(re.sub(u"a", u"b", u""), str) + assert isinstance(re.sub("a", "b", ""), str) # the input is returned unmodified if no substitution is performed, # which (if interpreted literally, as CPython does) gives the # following strangeish rules: - assert isinstance(re.sub(u"a", u"b", "diwoiioamoi"), str) - assert isinstance(re.sub(u"a", u"b", b"diwoiiobmoi"), bytes) - assert isinstance(re.sub(u'x', b'y', b'x'), bytes) + assert isinstance(re.sub("a", "b", "diwoiioamoi"), str) + assert isinstance(re.sub("a", "b", b"diwoiiobmoi"), bytes) + assert isinstance(re.sub('x', b'y', b'x'), bytes) def test_sub_callable(self): import re @@ -327,17 +327,17 @@ def test_getlower_no_flags(self): UPPER_AE = "\xc4" s.assert_lower_equal([("a", "a"), ("A", "a"), (UPPER_AE, UPPER_AE), - (u"\u00c4", u"\u00c4"), (u"\u4444", u"\u4444")], 0) + ("\u00c4", 
"\u00c4"), ("\u4444", "\u4444")], 0) def test_getlower_locale(self): import locale, sre_constants UPPER_AE = "\xc4" LOWER_AE = "\xe4" - UPPER_PI = u"\u03a0" + UPPER_PI = "\u03a0" try: locale.setlocale(locale.LC_ALL, "de_DE") s.assert_lower_equal([("a", "a"), ("A", "a"), (UPPER_AE, LOWER_AE), - (u"\u00c4", u"\u00e4"), (UPPER_PI, UPPER_PI)], + ("\u00c4", "\u00e4"), (UPPER_PI, UPPER_PI)], sre_constants.SRE_FLAG_LOCALE) except locale.Error: # skip test @@ -347,11 +347,11 @@ import sre_constants UPPER_AE = "\xc4" LOWER_AE = "\xe4" - UPPER_PI = u"\u03a0" - LOWER_PI = u"\u03c0" + UPPER_PI = "\u03a0" + LOWER_PI = "\u03c0" s.assert_lower_equal([("a", "a"), ("A", "a"), (UPPER_AE, LOWER_AE), - (u"\u00c4", u"\u00e4"), (UPPER_PI, LOWER_PI), - (u"\u4444", u"\u4444")], sre_constants.SRE_FLAG_UNICODE) + ("\u00c4", "\u00e4"), (UPPER_PI, LOWER_PI), + ("\u4444", "\u4444")], sre_constants.SRE_FLAG_UNICODE) class AppTestSimpleSearches: @@ -373,26 +373,26 @@ def test_search_simple_boundaries(self): import re - UPPER_PI = u"\u03a0" + UPPER_PI = "\u03a0" assert re.search(r"bla\b", "bla") assert re.search(r"bla\b", "bla ja") - assert re.search(r"bla\b", u"bla%s" % UPPER_PI) + assert re.search(r"bla\b", "bla%s" % UPPER_PI, re.ASCII) assert not re.search(r"bla\b", "blano") - assert not re.search(r"bla\b", u"bla%s" % UPPER_PI, re.UNICODE) + assert not re.search(r"bla\b", "bla%s" % UPPER_PI, re.UNICODE) def test_search_simple_categories(self): import re - LOWER_PI = u"\u03c0" - INDIAN_DIGIT = u"\u0966" - EM_SPACE = u"\u2001" + LOWER_PI = "\u03c0" + INDIAN_DIGIT = "\u0966" + EM_SPACE = "\u2001" LOWER_AE = "\xe4" assert re.search(r"bla\d\s\w", "bla3 b") - assert re.search(r"b\d", u"b%s" % INDIAN_DIGIT, re.UNICODE) - assert not re.search(r"b\D", u"b%s" % INDIAN_DIGIT, re.UNICODE) - assert re.search(r"b\s", u"b%s" % EM_SPACE, re.UNICODE) - assert not re.search(r"b\S", u"b%s" % EM_SPACE, re.UNICODE) - assert re.search(r"b\w", u"b%s" % LOWER_PI, re.UNICODE) - assert not re.search(r"b\W", u"b%s" % 
LOWER_PI, re.UNICODE) + assert re.search(r"b\d", "b%s" % INDIAN_DIGIT, re.UNICODE) + assert not re.search(r"b\D", "b%s" % INDIAN_DIGIT, re.UNICODE) + assert re.search(r"b\s", "b%s" % EM_SPACE, re.UNICODE) + assert not re.search(r"b\S", "b%s" % EM_SPACE, re.UNICODE) + assert re.search(r"b\w", "b%s" % LOWER_PI, re.UNICODE) + assert not re.search(r"b\W", "b%s" % LOWER_PI, re.UNICODE) assert re.search(r"b\w", "b%s" % LOWER_AE, re.UNICODE) def test_search_simple_any(self): @@ -403,38 +403,38 @@ def test_search_simple_in(self): import re - UPPER_PI = u"\u03a0" - LOWER_PI = u"\u03c0" - EM_SPACE = u"\u2001" - LINE_SEP = u"\u2028" + UPPER_PI = "\u03a0" + LOWER_PI = "\u03c0" + EM_SPACE = "\u2001" + LINE_SEP = "\u2028" assert re.search(r"b[\da-z]a", "bb1a") assert re.search(r"b[\da-z]a", "bbsa") assert not re.search(r"b[\da-z]a", "bbSa") assert re.search(r"b[^okd]a", "bsa") assert not re.search(r"b[^okd]a", "bda") - assert re.search(u"b[%s%s%s]a" % (LOWER_PI, UPPER_PI, EM_SPACE), - u"b%sa" % UPPER_PI) # bigcharset - assert re.search(u"b[%s%s%s]a" % (LOWER_PI, UPPER_PI, EM_SPACE), - u"b%sa" % EM_SPACE) - assert not re.search(u"b[%s%s%s]a" % (LOWER_PI, UPPER_PI, EM_SPACE), - u"b%sa" % LINE_SEP) + assert re.search("b[%s%s%s]a" % (LOWER_PI, UPPER_PI, EM_SPACE), + "b%sa" % UPPER_PI) # bigcharset + assert re.search("b[%s%s%s]a" % (LOWER_PI, UPPER_PI, EM_SPACE), + "b%sa" % EM_SPACE) + assert not re.search("b[%s%s%s]a" % (LOWER_PI, UPPER_PI, EM_SPACE), + "b%sa" % LINE_SEP) def test_search_simple_literal_ignore(self): import re - UPPER_PI = u"\u03a0" - LOWER_PI = u"\u03c0" + UPPER_PI = "\u03a0" + LOWER_PI = "\u03c0" assert re.search(r"ba", "ba", re.IGNORECASE) assert re.search(r"ba", "BA", re.IGNORECASE) - assert re.search(u"b%s" % UPPER_PI, u"B%s" % LOWER_PI, + assert re.search("b%s" % UPPER_PI, "B%s" % LOWER_PI, re.IGNORECASE | re.UNICODE) def test_search_simple_in_ignore(self): import re - UPPER_PI = u"\u03a0" - LOWER_PI = u"\u03c0" + UPPER_PI = "\u03a0" + LOWER_PI = "\u03c0" 
assert re.search(r"ba[A-C]", "bac", re.IGNORECASE) assert re.search(r"ba[a-c]", "baB", re.IGNORECASE) - assert re.search(u"ba[%s]" % UPPER_PI, "ba%s" % LOWER_PI, + assert re.search("ba[%s]" % UPPER_PI, "ba%s" % LOWER_PI, re.IGNORECASE | re.UNICODE) assert re.search(r"ba[^A-C]", "bar", re.IGNORECASE) assert not re.search(r"ba[^A-C]", "baA", re.IGNORECASE) @@ -496,13 +496,13 @@ def test_search_simple_groupref(self): import re - UPPER_PI = u"\u03a0" - LOWER_PI = u"\u03c0" + UPPER_PI = "\u03a0" + LOWER_PI = "\u03c0" assert re.match(r"((ab)+)c\1", "ababcabab") assert not re.match(r"((ab)+)c\1", "ababcab") assert not re.search(r"(a|(b))\2", "aa") assert re.match(r"((ab)+)c\1", "aBAbcAbaB", re.IGNORECASE) - assert re.match(r"((a.)+)c\1", u"a%sca%s" % (UPPER_PI, LOWER_PI), + assert re.match(r"((a.)+)c\1", "a%sca%s" % (UPPER_PI, LOWER_PI), re.IGNORECASE | re.UNICODE) def test_search_simple_groupref_exists(self): @@ -666,15 +666,15 @@ skip("locale error") def test_at_uni_boundary(self): - UPPER_PI = u"\u03a0" - LOWER_PI = u"\u03c0" + UPPER_PI = "\u03a0" + LOWER_PI = "\u03c0" opcodes = s.encode_literal("bl") + [s.OPCODES["any"], s.OPCODES["at"], s.ATCODES["at_uni_boundary"], s.OPCODES["success"]] - s.assert_match(opcodes, ["bla ha", u"bl%s ja" % UPPER_PI]) - s.assert_no_match(opcodes, [u"bla%s" % LOWER_PI]) + s.assert_match(opcodes, ["bla ha", "bl%s ja" % UPPER_PI]) + s.assert_no_match(opcodes, ["bla%s" % LOWER_PI]) opcodes = s.encode_literal("bl") + [s.OPCODES["any"], s.OPCODES["at"], s.ATCODES["at_uni_non_boundary"], s.OPCODES["success"]] - s.assert_match(opcodes, ["blaha", u"bl%sja" % UPPER_PI]) + s.assert_match(opcodes, ["blaha", "bl%sja" % UPPER_PI]) def test_category_loc_word(self): import locale @@ -685,11 +685,11 @@ opcodes2 = s.encode_literal("b") \ + [s.OPCODES["category"], s.CHCODES["category_loc_not_word"], s.OPCODES["success"]] s.assert_no_match(opcodes1, "b\xFC") - s.assert_no_match(opcodes1, u"b\u00FC") + s.assert_no_match(opcodes1, "b\u00FC") 
s.assert_match(opcodes2, "b\xFC") locale.setlocale(locale.LC_ALL, "de_DE") s.assert_match(opcodes1, "b\xFC") - s.assert_no_match(opcodes1, u"b\u00FC") + s.assert_no_match(opcodes1, "b\u00FC") s.assert_no_match(opcodes2, "b\xFC") s.void_locale() except locale.Error: @@ -777,10 +777,10 @@ s.assert_no_match(opcodes, ["bb", "bu"]) def test_not_literal_ignore(self): - UPPER_PI = u"\u03a0" + UPPER_PI = "\u03a0" opcodes = s.encode_literal("b") \ + [s.OPCODES["not_literal_ignore"], ord("a"), s.OPCODES["success"]] - s.assert_match(opcodes, ["bb", "bu", u"b%s" % UPPER_PI]) + s.assert_match(opcodes, ["bb", "bu", "b%s" % UPPER_PI]) s.assert_no_match(opcodes, ["ba", "bA"]) def test_in_ignore(self): From noreply at buildbot.pypy.org Sun May 6 15:16:13 2012 From: noreply at buildbot.pypy.org (arigo) Date: Sun, 6 May 2012 15:16:13 +0200 (CEST) Subject: [pypy-commit] pypy stm-thread: Fish from the history stm_perform_transaction(). Message-ID: <20120506131613.47E7982E46@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-thread Changeset: r54909:61d19c0aebe3 Date: 2012-05-06 15:15 +0200 http://bitbucket.org/pypy/pypy/changeset/61d19c0aebe3/ Log: Fish from the history stm_perform_transaction(). Now, it is the only way to start a non-inevitable transaction. 
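The retry pattern that `stm_perform_transaction` implements below with `setjmp`/`longjmp` can be sketched in plain Python. This is an illustrative model only, not the PyPy API: the `perform_transaction` helper and the `_Conflict` exception are hypothetical names standing in for the C-level machinery. Each attempt passes an incrementing retry counter to the callback, and an abort restarts the callback from the top:

```python
class _Conflict(Exception):
    """Illustrative stand-in for an STM conflict detected at commit time."""

def perform_transaction(callback, arg):
    # Sketch of the C loop: setjmp() marks the restart point, an aborted
    # transaction longjmp()s back to it, and v_counter counts the attempts.
    counter = 0
    while True:
        try:
            callback(arg, counter)   # corresponds to callback(arg, retry_counter)
            return counter           # commit succeeded
        except _Conflict:
            counter += 1             # transaction aborted: retry from the top

# Toy callback that "conflicts" on its first two attempts.
def flaky(arg, retry_counter):
    if retry_counter < 2:
        raise _Conflict
    arg.append(retry_counter)

log = []
retries = perform_transaction(flaky, log)
```

In the real C code the restart is a `longjmp` back to the `setjmp` point rather than an exception, and it is the commit itself that detects the conflict; the sketch only shows why the callback must be written to tolerate re-execution.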
diff --git a/pypy/rlib/rstm.py b/pypy/rlib/rstm.py --- a/pypy/rlib/rstm.py +++ b/pypy/rlib/rstm.py @@ -1,8 +1,12 @@ +import threading from pypy.translator.stm import stmgcintf from pypy.rlib.debug import ll_assert -from pypy.rpython.lltypesystem import lltype +from pypy.rlib.objectmodel import keepalive_until_here, specialize +from pypy.rpython.lltypesystem import lltype, llmemory, rffi, rclass from pypy.rpython.lltypesystem.lloperation import llop - +from pypy.rpython.annlowlevel import (cast_base_ptr_to_instance, + cast_instance_to_base_ptr, + llhelper) def before_external_call(): llop.stm_stop_transaction(lltype.Void) @@ -36,8 +40,31 @@ leave_callback_call._dont_reach_me_in_del_ = True leave_callback_call._transaction_break_ = True -def do_yield_thread(): - stmgcintf.StmOperations.do_yield_thread() -do_yield_thread._gctransformer_hint_close_stack_ = True -do_yield_thread._dont_reach_me_in_del_ = True -do_yield_thread._transaction_break_ = True +# ____________________________________________________________ + + at specialize.memo() +def _get_stm_callback(func, argcls): + def _stm_callback(llarg, retry_counter): + llop.stm_start_transaction(lltype.Void) + llarg = rffi.cast(rclass.OBJECTPTR, llarg) + arg = cast_base_ptr_to_instance(argcls, llarg) + try: + func(arg, retry_counter) + finally: + llop.stm_stop_transaction(lltype.Void) + return _stm_callback + + at specialize.arg(0, 1) +def perform_transaction(func, argcls, arg): + ll_assert(arg is None or isinstance(arg, argcls), + "perform_transaction: wrong class") + before_external_call() + llarg = cast_instance_to_base_ptr(arg) + llarg = rffi.cast(rffi.VOIDP, llarg) + adr_of_top = llop.gc_adr_of_root_stack_top(llmemory.Address) + # + callback = _get_stm_callback(func, argcls) + llcallback = llhelper(stmgcintf.StmOperations.CALLBACK_TX, callback) + stmgcintf.StmOperations.perform_transaction(llcallback, llarg, adr_of_top) + after_external_call() + keepalive_until_here(arg) diff --git 
a/pypy/translator/stm/src_stm/core.c b/pypy/translator/stm/src_stm/core.c --- a/pypy/translator/stm/src_stm/core.c +++ b/pypy/translator/stm/src_stm/core.c @@ -188,18 +188,15 @@ d->active = 0; } -static void tx_cleanup(struct tx_descriptor *d) +static void tx_restart(struct tx_descriptor *d) { // release the locks and restore version numbers releaseAndRevertLocks(d); + // notifies the CPU that we're potentially in a spin loop + tx_spinloop(0); // reset all lists common_cleanup(d); -} - -static void tx_restart(struct tx_descriptor *d) -{ - tx_cleanup(d); - tx_spinloop(0); + // jump back to the setjmp_buf (this call does not return) longjmp(*d->setjmp_buf, 1); } @@ -542,22 +539,27 @@ while (1) { - mutex_lock(); - unsigned long curtime = get_global_timestamp(d) & ~1; - if (change_global_timestamp(d, curtime, curtime + 1)) + unsigned long curtime = get_global_timestamp(d); + if (curtime & 1) { - d->start_time = curtime; - break; + /* already an inevitable transaction: wait */ + tx_spinloop(6); + mutex_lock(); + mutex_unlock(); } - mutex_unlock(); - tx_spinloop(6); + else + if (change_global_timestamp(d, curtime, curtime + 1)) + { + d->start_time = curtime; + break; + } } } void stm_commit_transaction(void) { struct tx_descriptor *d = thread_descriptor; - assert(d != NULL); + assert(d->active != 0); // if I don't have writes, I'm committed if (!redolog_any_entry(&d->redolog)) @@ -740,6 +742,27 @@ return is_inevitable(d); } +void stm_perform_transaction(void(*callback)(void*, long), void *arg, + void *save_and_restore) +{ + jmp_buf _jmpbuf; + long volatile v_counter = 0; + long counter; + void *volatile saved_value; + struct tx_descriptor *d = thread_descriptor; + assert(d->active == 0); + saved_value = *(void**)save_and_restore; + /***/ + setjmp(_jmpbuf); + /***/ + *(void**)save_and_restore = saved_value; + begin_transaction(&_jmpbuf); + counter = v_counter; + v_counter = counter + 1; + callback(arg, counter); + stm_commit_transaction(); +} + #undef GETVERSION #undef 
GETVERSIONREF #undef SETVERSION diff --git a/pypy/translator/stm/src_stm/et.h b/pypy/translator/stm/src_stm/et.h --- a/pypy/translator/stm/src_stm/et.h +++ b/pypy/translator/stm/src_stm/et.h @@ -30,6 +30,8 @@ long stm_in_transaction(void); long stm_is_inevitable(void); +void stm_perform_transaction(void(*)(void*, long), void*, void*); + /* these functions are declared by generated C code from pypy.rlib.rstm and from the GC (see llop.nop(...)) */ extern void pypy_g__stm_thread_starting(void); diff --git a/pypy/translator/stm/stmgcintf.py b/pypy/translator/stm/stmgcintf.py --- a/pypy/translator/stm/stmgcintf.py +++ b/pypy/translator/stm/stmgcintf.py @@ -35,8 +35,8 @@ '4f': rffi.FLOAT} INIT_DONE = lltype.Ptr(lltype.FuncType([], lltype.Void)) - RUN_TRANSACTION = lltype.Ptr(lltype.FuncType([rffi.VOIDP, lltype.Signed], - rffi.VOIDP)) + CALLBACK_TX = lltype.Ptr(lltype.FuncType([rffi.VOIDP, lltype.Signed], + lltype.Void)) GETSIZE = lltype.Ptr(lltype.FuncType([llmemory.Address], lltype.Signed)) CALLBACK_ENUM = lltype.Ptr(lltype.FuncType([llmemory.Address]*3, @@ -54,8 +54,9 @@ 'stm_begin_inevitable_transaction', [], lltype.Void) commit_transaction = smexternal( 'stm_commit_transaction', [], lltype.Void) - do_yield_thread = smexternal('stm_do_yield_thread', - [], lltype.Void) + perform_transaction = smexternal('stm_perform_transaction', + [CALLBACK_TX, rffi.VOIDP, llmemory.Address], + lltype.Void) # for the GC: store and read a thread-local-storage field, as well # as initialize and shut down the internal thread_descriptor diff --git a/pypy/translator/stm/test/targetdemo2.py b/pypy/translator/stm/test/targetdemo2.py --- a/pypy/translator/stm/test/targetdemo2.py +++ b/pypy/translator/stm/test/targetdemo2.py @@ -64,11 +64,15 @@ def run(self): try: for value in range(glob.LENGTH): - add_at_end_of_chained_list(glob.anchor, value, self.index) - #rstm.do_yield_thread() + self.value = value + rstm.perform_transaction(ThreadRunner.run_really, + ThreadRunner, self) finally: 
self.finished_lock.release() + def run_really(self, retry_counter): + add_at_end_of_chained_list(glob.anchor, self.value, self.index) + # ____________________________________________________________ # bah, we are really missing an RPython interface to threads From noreply at buildbot.pypy.org Sun May 6 18:32:46 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Sun, 6 May 2012 18:32:46 +0200 (CEST) Subject: [pypy-commit] pypy stdlib-unification: close before merge Message-ID: <20120506163246.4085F82E46@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: stdlib-unification Changeset: r54910:d4ebad455229 Date: 2012-05-06 18:29 +0200 http://bitbucket.org/pypy/pypy/changeset/d4ebad455229/ Log: close before merge From noreply at buildbot.pypy.org Sun May 6 18:33:14 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Sun, 6 May 2012 18:33:14 +0200 (CEST) Subject: [pypy-commit] pypy default: merge stdlib-unification Message-ID: <20120506163314.183F082E46@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r54911:6029ea369eb9 Date: 2012-05-06 18:30 +0200 http://bitbucket.org/pypy/pypy/changeset/6029ea369eb9/ Log: merge stdlib-unification diff too long, truncating to 10000 out of 744970 lines diff --git a/lib-python/2.7/UserDict.py b/lib-python/2.7/UserDict.py --- a/lib-python/2.7/UserDict.py +++ b/lib-python/2.7/UserDict.py @@ -80,8 +80,12 @@ def __iter__(self): return iter(self.data) -import _abcoll -_abcoll.MutableMapping.register(IterableUserDict) +try: + import _abcoll +except ImportError: + pass # e.g. 
no '_weakref' module on this pypy +else: + _abcoll.MutableMapping.register(IterableUserDict) class DictMixin: diff --git a/lib-python/2.7/_threading_local.py b/lib-python/2.7/_threading_local.py --- a/lib-python/2.7/_threading_local.py +++ b/lib-python/2.7/_threading_local.py @@ -155,7 +155,7 @@ object.__setattr__(self, '_local__args', (args, kw)) object.__setattr__(self, '_local__lock', RLock()) - if (args or kw) and (cls.__init__ is object.__init__): + if (args or kw) and (cls.__init__ == object.__init__): raise TypeError("Initialization arguments are not supported") # We need to create the thread dict in anticipation of diff --git a/lib-python/2.7/ctypes/__init__.py b/lib-python/2.7/ctypes/__init__.py --- a/lib-python/2.7/ctypes/__init__.py +++ b/lib-python/2.7/ctypes/__init__.py @@ -7,6 +7,7 @@ __version__ = "1.1.0" +import _ffi from _ctypes import Union, Structure, Array from _ctypes import _Pointer from _ctypes import CFuncPtr as _CFuncPtr @@ -350,16 +351,17 @@ self._FuncPtr = _FuncPtr if handle is None: - self._handle = _dlopen(self._name, mode) + self._handle = _ffi.CDLL(name, mode) else: self._handle = handle def __repr__(self): - return "<%s '%s', handle %x at %x>" % \ + return "<%s '%s', handle %r at %x>" % \ (self.__class__.__name__, self._name, - (self._handle & (_sys.maxint*2 + 1)), + (self._handle), id(self) & (_sys.maxint*2 + 1)) + def __getattr__(self, name): if name.startswith('__') and name.endswith('__'): raise AttributeError(name) @@ -487,9 +489,12 @@ _flags_ = _FUNCFLAG_CDECL | _FUNCFLAG_PYTHONAPI return CFunctionType -_cast = PYFUNCTYPE(py_object, c_void_p, py_object, py_object)(_cast_addr) def cast(obj, typ): - return _cast(obj, obj, typ) + try: + c_void_p.from_param(obj) + except TypeError, e: + raise ArgumentError(str(e)) + return _cast_addr(obj, obj, typ) _string_at = PYFUNCTYPE(py_object, c_void_p, c_int)(_string_at_addr) def string_at(ptr, size=-1): diff --git a/lib-python/2.7/ctypes/test/__init__.py 
b/lib-python/2.7/ctypes/test/__init__.py --- a/lib-python/2.7/ctypes/test/__init__.py +++ b/lib-python/2.7/ctypes/test/__init__.py @@ -206,3 +206,16 @@ result = unittest.TestResult() test(result) return result + +def xfail(method): + """ + Poor's man xfail: remove it when all the failures have been fixed + """ + def new_method(self, *args, **kwds): + try: + method(self, *args, **kwds) + except: + pass + else: + self.assertTrue(False, "DID NOT RAISE") + return new_method diff --git a/lib-python/2.7/ctypes/test/test_arrays.py b/lib-python/2.7/ctypes/test/test_arrays.py --- a/lib-python/2.7/ctypes/test/test_arrays.py +++ b/lib-python/2.7/ctypes/test/test_arrays.py @@ -1,12 +1,23 @@ import unittest from ctypes import * +from test.test_support import impl_detail formats = "bBhHiIlLqQfd" +# c_longdouble commented out for PyPy, look at the commend in test_longdouble formats = c_byte, c_ubyte, c_short, c_ushort, c_int, c_uint, \ - c_long, c_ulonglong, c_float, c_double, c_longdouble + c_long, c_ulonglong, c_float, c_double #, c_longdouble class ArrayTestCase(unittest.TestCase): + + @impl_detail('long double not supported by PyPy', pypy=False) + def test_longdouble(self): + """ + This test is empty. It's just here to remind that we commented out + c_longdouble in "formats". If pypy will ever supports c_longdouble, we + should kill this test and uncomment c_longdouble inside formats. + """ + def test_simple(self): # create classes holding simple numeric types, and check # various properties. diff --git a/lib-python/2.7/ctypes/test/test_bitfields.py b/lib-python/2.7/ctypes/test/test_bitfields.py --- a/lib-python/2.7/ctypes/test/test_bitfields.py +++ b/lib-python/2.7/ctypes/test/test_bitfields.py @@ -115,17 +115,21 @@ def test_nonint_types(self): # bit fields are not allowed on non-integer types. 
result = self.fail_fields(("a", c_char_p, 1)) - self.assertEqual(result, (TypeError, 'bit fields not allowed for type c_char_p')) + self.assertEqual(result[0], TypeError) + self.assertIn('bit fields not allowed for type', result[1]) result = self.fail_fields(("a", c_void_p, 1)) - self.assertEqual(result, (TypeError, 'bit fields not allowed for type c_void_p')) + self.assertEqual(result[0], TypeError) + self.assertIn('bit fields not allowed for type', result[1]) if c_int != c_long: result = self.fail_fields(("a", POINTER(c_int), 1)) - self.assertEqual(result, (TypeError, 'bit fields not allowed for type LP_c_int')) + self.assertEqual(result[0], TypeError) + self.assertIn('bit fields not allowed for type', result[1]) result = self.fail_fields(("a", c_char, 1)) - self.assertEqual(result, (TypeError, 'bit fields not allowed for type c_char')) + self.assertEqual(result[0], TypeError) + self.assertIn('bit fields not allowed for type', result[1]) try: c_wchar @@ -133,13 +137,15 @@ pass else: result = self.fail_fields(("a", c_wchar, 1)) - self.assertEqual(result, (TypeError, 'bit fields not allowed for type c_wchar')) + self.assertEqual(result[0], TypeError) + self.assertIn('bit fields not allowed for type', result[1]) class Dummy(Structure): _fields_ = [] result = self.fail_fields(("a", Dummy, 1)) - self.assertEqual(result, (TypeError, 'bit fields not allowed for type Dummy')) + self.assertEqual(result[0], TypeError) + self.assertIn('bit fields not allowed for type', result[1]) def test_single_bitfield_size(self): for c_typ in int_types: diff --git a/lib-python/2.7/ctypes/test/test_byteswap.py b/lib-python/2.7/ctypes/test/test_byteswap.py --- a/lib-python/2.7/ctypes/test/test_byteswap.py +++ b/lib-python/2.7/ctypes/test/test_byteswap.py @@ -2,6 +2,7 @@ from binascii import hexlify from ctypes import * +from ctypes.test import xfail def bin(s): return hexlify(memoryview(s)).upper() @@ -21,6 +22,7 @@ setattr(bits, "i%s" % i, 1) dump(bits) + @xfail def 
test_endian_short(self): if sys.byteorder == "little": self.assertTrue(c_short.__ctype_le__ is c_short) @@ -48,6 +50,7 @@ self.assertEqual(bin(s), "3412") self.assertEqual(s.value, 0x1234) + @xfail def test_endian_int(self): if sys.byteorder == "little": self.assertTrue(c_int.__ctype_le__ is c_int) @@ -76,6 +79,7 @@ self.assertEqual(bin(s), "78563412") self.assertEqual(s.value, 0x12345678) + @xfail def test_endian_longlong(self): if sys.byteorder == "little": self.assertTrue(c_longlong.__ctype_le__ is c_longlong) @@ -104,6 +108,7 @@ self.assertEqual(bin(s), "EFCDAB9078563412") self.assertEqual(s.value, 0x1234567890ABCDEF) + @xfail def test_endian_float(self): if sys.byteorder == "little": self.assertTrue(c_float.__ctype_le__ is c_float) @@ -122,6 +127,7 @@ self.assertAlmostEqual(s.value, math.pi, 6) self.assertEqual(bin(struct.pack(">f", math.pi)), bin(s)) + @xfail def test_endian_double(self): if sys.byteorder == "little": self.assertTrue(c_double.__ctype_le__ is c_double) @@ -149,6 +155,7 @@ self.assertTrue(c_char.__ctype_le__ is c_char) self.assertTrue(c_char.__ctype_be__ is c_char) + @xfail def test_struct_fields_1(self): if sys.byteorder == "little": base = BigEndianStructure @@ -198,6 +205,7 @@ pass self.assertRaises(TypeError, setattr, S, "_fields_", [("s", T)]) + @xfail def test_struct_fields_2(self): # standard packing in struct uses no alignment. # So, we have to align using pad bytes. 
@@ -221,6 +229,7 @@
         s2 = struct.pack(fmt, 0x12, 0x1234, 0x12345678, 3.14)
         self.assertEqual(bin(s1), bin(s2))
 
+    @xfail
     def test_unaligned_nonnative_struct_fields(self):
         if sys.byteorder == "little":
             base = BigEndianStructure
diff --git a/lib-python/2.7/ctypes/test/test_callbacks.py b/lib-python/2.7/ctypes/test/test_callbacks.py
--- a/lib-python/2.7/ctypes/test/test_callbacks.py
+++ b/lib-python/2.7/ctypes/test/test_callbacks.py
@@ -1,5 +1,6 @@
 import unittest
 from ctypes import *
+from ctypes.test import xfail
 import _ctypes_test
 
 class Callbacks(unittest.TestCase):
@@ -98,6 +99,7 @@
 ##        self.check_type(c_char_p, "abc")
 ##        self.check_type(c_char_p, "def")
 
+    @xfail
     def test_pyobject(self):
         o = ()
         from sys import getrefcount as grc
diff --git a/lib-python/2.7/ctypes/test/test_cfuncs.py b/lib-python/2.7/ctypes/test/test_cfuncs.py
--- a/lib-python/2.7/ctypes/test/test_cfuncs.py
+++ b/lib-python/2.7/ctypes/test/test_cfuncs.py
@@ -3,8 +3,8 @@
 
 import unittest
 from ctypes import *
-
 import _ctypes_test
+from test.test_support import impl_detail
 
 class CFunctions(unittest.TestCase):
     _dll = CDLL(_ctypes_test.__file__)
@@ -158,12 +158,14 @@
         self.assertEqual(self._dll.tf_bd(0, 42.), 14.)
         self.assertEqual(self.S(), 42)
 
+    @impl_detail('long double not supported by PyPy', pypy=False)
     def test_longdouble(self):
         self._dll.tf_D.restype = c_longdouble
         self._dll.tf_D.argtypes = (c_longdouble,)
         self.assertEqual(self._dll.tf_D(42.), 14.)
         self.assertEqual(self.S(), 42)
-
+
+    @impl_detail('long double not supported by PyPy', pypy=False)
     def test_longdouble_plus(self):
         self._dll.tf_bD.restype = c_longdouble
         self._dll.tf_bD.argtypes = (c_byte, c_longdouble)
diff --git a/lib-python/2.7/ctypes/test/test_delattr.py b/lib-python/2.7/ctypes/test/test_delattr.py
--- a/lib-python/2.7/ctypes/test/test_delattr.py
+++ b/lib-python/2.7/ctypes/test/test_delattr.py
@@ -6,15 +6,15 @@
 
 class TestCase(unittest.TestCase):
     def test_simple(self):
-        self.assertRaises(TypeError,
+        self.assertRaises((TypeError, AttributeError),
                           delattr, c_int(42), "value")
 
     def test_chararray(self):
-        self.assertRaises(TypeError,
+        self.assertRaises((TypeError, AttributeError),
                           delattr, (c_char * 5)(), "value")
 
     def test_struct(self):
-        self.assertRaises(TypeError,
+        self.assertRaises((TypeError, AttributeError),
                           delattr, X(), "foo")
 
 if __name__ == "__main__":
diff --git a/lib-python/2.7/ctypes/test/test_frombuffer.py b/lib-python/2.7/ctypes/test/test_frombuffer.py
--- a/lib-python/2.7/ctypes/test/test_frombuffer.py
+++ b/lib-python/2.7/ctypes/test/test_frombuffer.py
@@ -2,6 +2,7 @@
 import array
 import gc
 import unittest
+from ctypes.test import xfail
 
 class X(Structure):
     _fields_ = [("c_int", c_int)]
@@ -10,6 +11,7 @@
         self._init_called = True
 
 class Test(unittest.TestCase):
+    @xfail
     def test_fom_buffer(self):
         a = array.array("i", range(16))
         x = (c_int * 16).from_buffer(a)
@@ -35,6 +37,7 @@
         self.assertRaises(TypeError,
                           (c_char * 16).from_buffer, "a" * 16)
 
+    @xfail
     def test_fom_buffer_with_offset(self):
         a = array.array("i", range(16))
         x = (c_int * 15).from_buffer(a, sizeof(c_int))
@@ -43,6 +46,7 @@
         self.assertRaises(ValueError, lambda: (c_int * 16).from_buffer(a, sizeof(c_int)))
         self.assertRaises(ValueError, lambda: (c_int * 1).from_buffer(a, 16 * sizeof(c_int)))
 
+    @xfail
     def test_from_buffer_copy(self):
         a = array.array("i", range(16))
         x = (c_int * 16).from_buffer_copy(a)
@@ -67,6 +71,7 @@
         x = (c_char * 16).from_buffer_copy("a" * 16)
         self.assertEqual(x[:], "a" * 16)
 
+    @xfail
     def test_fom_buffer_copy_with_offset(self):
         a = array.array("i", range(16))
         x = (c_int * 15).from_buffer_copy(a, sizeof(c_int))
diff --git a/lib-python/2.7/ctypes/test/test_functions.py b/lib-python/2.7/ctypes/test/test_functions.py
--- a/lib-python/2.7/ctypes/test/test_functions.py
+++ b/lib-python/2.7/ctypes/test/test_functions.py
@@ -7,6 +7,8 @@
 
 from ctypes import *
 import sys, unittest
+from ctypes.test import xfail
+from test.test_support import impl_detail
 
 try:
     WINFUNCTYPE
@@ -143,6 +145,7 @@
         self.assertEqual(result, -21)
         self.assertEqual(type(result), float)
 
+    @impl_detail('long double not supported by PyPy', pypy=False)
     def test_longdoubleresult(self):
         f = dll._testfunc_D_bhilfD
         f.argtypes = [c_byte, c_short, c_int, c_long, c_float, c_longdouble]
@@ -393,6 +396,7 @@
         self.assertEqual((s8i.a, s8i.b, s8i.c, s8i.d, s8i.e, s8i.f, s8i.g, s8i.h),
                          (9*2, 8*3, 7*4, 6*5, 5*6, 4*7, 3*8, 2*9))
 
+    @xfail
    def test_sf1651235(self):
        # see http://www.python.org/sf/1651235
 
diff --git a/lib-python/2.7/ctypes/test/test_internals.py b/lib-python/2.7/ctypes/test/test_internals.py
--- a/lib-python/2.7/ctypes/test/test_internals.py
+++ b/lib-python/2.7/ctypes/test/test_internals.py
@@ -33,7 +33,13 @@
         refcnt = grc(s)
         cs = c_char_p(s)
         self.assertEqual(refcnt + 1, grc(s))
-        self.assertSame(cs._objects, s)
+        try:
+            # Moving gcs need to allocate a nonmoving buffer
+            cs._objects._obj
+        except AttributeError:
+            self.assertSame(cs._objects, s)
+        else:
+            self.assertSame(cs._objects._obj, s)
 
     def test_simple_struct(self):
         class X(Structure):
diff --git a/lib-python/2.7/ctypes/test/test_libc.py b/lib-python/2.7/ctypes/test/test_libc.py
--- a/lib-python/2.7/ctypes/test/test_libc.py
+++ b/lib-python/2.7/ctypes/test/test_libc.py
@@ -25,5 +25,14 @@
         lib.my_qsort(chars, len(chars)-1, sizeof(c_char), comparefunc(sort))
         self.assertEqual(chars.raw, " ,,aaaadmmmnpppsss\x00")
 
+    def SKIPPED_test_no_more_xfail(self):
+        # We decided to not explicitly support the whole ctypes-2.7
+        # and instead go for a case-by-case, demand-driven approach.
+        # So this test is skipped instead of failing.
+        import socket
+        import ctypes.test
+        self.assertTrue(not hasattr(ctypes.test, 'xfail'),
+                        "You should incrementally grep for '@xfail' and remove them, they are real failures")
+
 if __name__ == "__main__":
     unittest.main()
diff --git a/lib-python/2.7/ctypes/test/test_loading.py b/lib-python/2.7/ctypes/test/test_loading.py
--- a/lib-python/2.7/ctypes/test/test_loading.py
+++ b/lib-python/2.7/ctypes/test/test_loading.py
@@ -2,7 +2,7 @@
 import sys, unittest
 import os
 from ctypes.util import find_library
-from ctypes.test import is_resource_enabled
+from ctypes.test import is_resource_enabled, xfail
 
 libc_name = None
 if os.name == "nt":
@@ -75,6 +75,7 @@
         self.assertRaises(AttributeError, dll.__getitem__, 1234)
 
     if os.name == "nt":
+        @xfail
         def test_1703286_A(self):
             from _ctypes import LoadLibrary, FreeLibrary
             # On winXP 64-bit, advapi32 loads at an address that does
@@ -85,6 +86,7 @@
             handle = LoadLibrary("advapi32")
             FreeLibrary(handle)
 
+        @xfail
         def test_1703286_B(self):
             # Since on winXP 64-bit advapi32 loads like described
             # above, the (arbitrarily selected) CloseEventLog function
diff --git a/lib-python/2.7/ctypes/test/test_macholib.py b/lib-python/2.7/ctypes/test/test_macholib.py
--- a/lib-python/2.7/ctypes/test/test_macholib.py
+++ b/lib-python/2.7/ctypes/test/test_macholib.py
@@ -52,7 +52,6 @@
                          '/usr/lib/libSystem.B.dylib')
 
         result = find_lib('z')
-        self.assertTrue(result.startswith('/usr/lib/libz.1'))
         self.assertTrue(result.endswith('.dylib'))
 
         self.assertEqual(find_lib('IOKit'),
diff --git a/lib-python/2.7/ctypes/test/test_numbers.py b/lib-python/2.7/ctypes/test/test_numbers.py
--- a/lib-python/2.7/ctypes/test/test_numbers.py
+++ b/lib-python/2.7/ctypes/test/test_numbers.py
@@ -1,6 +1,7 @@
 from ctypes import *
 import unittest
 import struct
+from ctypes.test import xfail
 
 def valid_ranges(*types):
     # given a sequence of numeric types, collect their _type_
@@ -89,12 +90,14 @@
 ##        self.assertRaises(ValueError, t, l-1)
 ##        self.assertRaises(ValueError, t, h+1)
 
+    @xfail
     def test_from_param(self):
         # the from_param class method attribute always
         # returns PyCArgObject instances
         for t in signed_types + unsigned_types + float_types:
             self.assertEqual(ArgType, type(t.from_param(0)))
 
+    @xfail
     def test_byref(self):
         # calling byref returns also a PyCArgObject instance
         for t in signed_types + unsigned_types + float_types + bool_types:
@@ -102,6 +105,7 @@
             self.assertEqual(ArgType, type(parm))
 
 
+    @xfail
     def test_floats(self):
         # c_float and c_double can be created from
         # Python int, long and float
@@ -115,6 +119,7 @@
             self.assertEqual(t(2L).value, 2.0)
             self.assertEqual(t(f).value, 2.0)
 
+    @xfail
     def test_integers(self):
         class FloatLike(object):
             def __float__(self):
diff --git a/lib-python/2.7/ctypes/test/test_objects.py b/lib-python/2.7/ctypes/test/test_objects.py
--- a/lib-python/2.7/ctypes/test/test_objects.py
+++ b/lib-python/2.7/ctypes/test/test_objects.py
@@ -22,7 +22,7 @@
 >>> array[4] = 'foo bar'
 >>> array._objects
-{'4': 'foo bar'}
+{'4': }
 >>> array[4]
 'foo bar'
 >>>
@@ -47,9 +47,9 @@
 >>> x.array[0] = 'spam spam spam'
 >>> x._objects
-{'0:2': 'spam spam spam'}
+{'0:2': }
 >>> x.array._b_base_._objects
-{'0:2': 'spam spam spam'}
+{'0:2': }
 >>>
 '''
diff --git a/lib-python/2.7/ctypes/test/test_parameters.py b/lib-python/2.7/ctypes/test/test_parameters.py
--- a/lib-python/2.7/ctypes/test/test_parameters.py
+++ b/lib-python/2.7/ctypes/test/test_parameters.py
@@ -1,5 +1,7 @@
 import unittest, sys
 
+from ctypes.test import xfail
+
 class SimpleTypesTestCase(unittest.TestCase):
 
     def setUp(self):
@@ -49,6 +51,7 @@
         self.assertEqual(CWCHARP.from_param("abc"), "abcabcabc")
 
     # XXX Replace by c_char_p tests
+    @xfail
     def test_cstrings(self):
         from ctypes import c_char_p, byref
 
@@ -86,7 +89,10 @@
             pa = c_wchar_p.from_param(c_wchar_p(u"123"))
             self.assertEqual(type(pa), c_wchar_p)
+    if sys.platform == "win32":
+        test_cw_strings = xfail(test_cw_strings)
 
+    @xfail
     def test_int_pointers(self):
         from ctypes import c_short, c_uint, c_int, c_long, POINTER, pointer
         LPINT = POINTER(c_int)
diff --git a/lib-python/2.7/ctypes/test/test_pep3118.py b/lib-python/2.7/ctypes/test/test_pep3118.py
--- a/lib-python/2.7/ctypes/test/test_pep3118.py
+++ b/lib-python/2.7/ctypes/test/test_pep3118.py
@@ -1,6 +1,7 @@
 import unittest
 from ctypes import *
 import re, sys
+from ctypes.test import xfail
 
 if sys.byteorder == "little":
     THIS_ENDIAN = "<"
@@ -19,6 +20,7 @@
 
 class Test(unittest.TestCase):
 
+    @xfail
     def test_native_types(self):
         for tp, fmt, shape, itemtp in native_types:
             ob = tp()
@@ -46,6 +48,7 @@
                 print(tp)
                 raise
 
+    @xfail
     def test_endian_types(self):
         for tp, fmt, shape, itemtp in endian_types:
             ob = tp()
diff --git a/lib-python/2.7/ctypes/test/test_pickling.py b/lib-python/2.7/ctypes/test/test_pickling.py
--- a/lib-python/2.7/ctypes/test/test_pickling.py
+++ b/lib-python/2.7/ctypes/test/test_pickling.py
@@ -3,6 +3,7 @@
 from ctypes import *
 import _ctypes_test
 dll = CDLL(_ctypes_test.__file__)
+from ctypes.test import xfail
 
 class X(Structure):
     _fields_ = [("a", c_int), ("b", c_double)]
@@ -21,6 +22,7 @@
     def loads(self, item):
         return pickle.loads(item)
 
+    @xfail
     def test_simple(self):
         for src in [
             c_int(42),
@@ -31,6 +33,7 @@
             self.assertEqual(memoryview(src).tobytes(),
                              memoryview(dst).tobytes())
 
+    @xfail
     def test_struct(self):
         X.init_called = 0
 
@@ -49,6 +52,7 @@
         self.assertEqual(memoryview(y).tobytes(),
                          memoryview(x).tobytes())
 
+    @xfail
     def test_unpickable(self):
         # ctypes objects that are pointers or contain pointers are
         # unpickable.
@@ -66,6 +70,7 @@
             ]:
             self.assertRaises(ValueError, lambda: self.dumps(item))
 
+    @xfail
     def test_wchar(self):
         pickle.dumps(c_char("x"))
         # Issue 5049
diff --git a/lib-python/2.7/ctypes/test/test_python_api.py b/lib-python/2.7/ctypes/test/test_python_api.py
--- a/lib-python/2.7/ctypes/test/test_python_api.py
+++ b/lib-python/2.7/ctypes/test/test_python_api.py
@@ -1,6 +1,6 @@
 from ctypes import *
 import unittest, sys
-from ctypes.test import is_resource_enabled
+from ctypes.test import is_resource_enabled, xfail
 
 ################################################################
 # This section should be moved into ctypes\__init__.py, when it's ready.
@@ -17,6 +17,7 @@
 
 class PythonAPITestCase(unittest.TestCase):
 
+    @xfail
     def test_PyString_FromStringAndSize(self):
         PyString_FromStringAndSize = pythonapi.PyString_FromStringAndSize
 
@@ -25,6 +26,7 @@
 
         self.assertEqual(PyString_FromStringAndSize("abcdefghi", 3), "abc")
 
+    @xfail
     def test_PyString_FromString(self):
         pythonapi.PyString_FromString.restype = py_object
         pythonapi.PyString_FromString.argtypes = (c_char_p,)
@@ -56,6 +58,7 @@
         del res
         self.assertEqual(grc(42), ref42)
 
+    @xfail
     def test_PyObj_FromPtr(self):
         s = "abc def ghi jkl"
         ref = grc(s)
@@ -81,6 +84,7 @@
         # not enough arguments
         self.assertRaises(TypeError, PyOS_snprintf, buf)
 
+    @xfail
     def test_pyobject_repr(self):
         self.assertEqual(repr(py_object()), "py_object()")
         self.assertEqual(repr(py_object(42)), "py_object(42)")
diff --git a/lib-python/2.7/ctypes/test/test_refcounts.py b/lib-python/2.7/ctypes/test/test_refcounts.py
--- a/lib-python/2.7/ctypes/test/test_refcounts.py
+++ b/lib-python/2.7/ctypes/test/test_refcounts.py
@@ -90,6 +90,7 @@
             return a * b * 2
         f = proto(func)
 
+        gc.collect()
         a = sys.getrefcount(ctypes.c_int)
         f(1, 2)
         self.assertEqual(sys.getrefcount(ctypes.c_int), a)
diff --git a/lib-python/2.7/ctypes/test/test_stringptr.py b/lib-python/2.7/ctypes/test/test_stringptr.py
--- a/lib-python/2.7/ctypes/test/test_stringptr.py
+++ b/lib-python/2.7/ctypes/test/test_stringptr.py
@@ -2,11 +2,13 @@
 
 from ctypes import *
 import _ctypes_test
+from ctypes.test import xfail
 
 lib = CDLL(_ctypes_test.__file__)
 
 class StringPtrTestCase(unittest.TestCase):
 
+    @xfail
     def test__POINTER_c_char(self):
         class X(Structure):
             _fields_ = [("str", POINTER(c_char))]
@@ -27,6 +29,7 @@
 
         self.assertRaises(TypeError, setattr, x, "str", "Hello, World")
 
+    @xfail
     def test__c_char_p(self):
         class X(Structure):
             _fields_ = [("str", c_char_p)]
diff --git a/lib-python/2.7/ctypes/test/test_strings.py b/lib-python/2.7/ctypes/test/test_strings.py
--- a/lib-python/2.7/ctypes/test/test_strings.py
+++ b/lib-python/2.7/ctypes/test/test_strings.py
@@ -31,8 +31,9 @@
         buf.value = "Hello, World"
         self.assertEqual(buf.value, "Hello, World")
 
-        self.assertRaises(TypeError, setattr, buf, "value", memoryview("Hello, World"))
-        self.assertRaises(TypeError, setattr, buf, "value", memoryview("abc"))
+        if test_support.check_impl_detail():
+            self.assertRaises(TypeError, setattr, buf, "value", memoryview("Hello, World"))
+            self.assertRaises(TypeError, setattr, buf, "value", memoryview("abc"))
         self.assertRaises(ValueError, setattr, buf, "raw", memoryview("x" * 100))
 
     def test_c_buffer_raw(self, memoryview=memoryview):
@@ -40,7 +41,8 @@
         buf.raw = memoryview("Hello, World")
         self.assertEqual(buf.value, "Hello, World")
 
-        self.assertRaises(TypeError, setattr, buf, "value", memoryview("abc"))
+        if test_support.check_impl_detail():
+            self.assertRaises(TypeError, setattr, buf, "value", memoryview("abc"))
         self.assertRaises(ValueError, setattr, buf, "raw", memoryview("x" * 100))
 
     def test_c_buffer_deprecated(self):
diff --git a/lib-python/2.7/ctypes/test/test_structures.py b/lib-python/2.7/ctypes/test/test_structures.py
--- a/lib-python/2.7/ctypes/test/test_structures.py
+++ b/lib-python/2.7/ctypes/test/test_structures.py
@@ -194,8 +194,8 @@
         self.assertEqual(X.b.offset, min(8, longlong_align))
 
 
-        d = {"_fields_": [("a", "b"),
-                          ("b", "q")],
+        d = {"_fields_": [("a", c_byte),
+                          ("b", c_longlong)],
             "_pack_": -1}
        self.assertRaises(ValueError, type(Structure), "X", (Structure,), d)
diff --git a/lib-python/2.7/ctypes/test/test_varsize_struct.py b/lib-python/2.7/ctypes/test/test_varsize_struct.py
--- a/lib-python/2.7/ctypes/test/test_varsize_struct.py
+++ b/lib-python/2.7/ctypes/test/test_varsize_struct.py
@@ -1,7 +1,9 @@
 from ctypes import *
 import unittest
+from ctypes.test import xfail
 
 class VarSizeTest(unittest.TestCase):
+    @xfail
     def test_resize(self):
         class X(Structure):
             _fields_ = [("item", c_int),
diff --git a/lib-python/2.7/ctypes/util.py b/lib-python/2.7/ctypes/util.py
--- a/lib-python/2.7/ctypes/util.py
+++ b/lib-python/2.7/ctypes/util.py
@@ -72,8 +72,8 @@
                 return name
 
 if os.name == "posix" and sys.platform == "darwin":
-    from ctypes.macholib.dyld import dyld_find as _dyld_find
     def find_library(name):
+        from ctypes.macholib.dyld import dyld_find as _dyld_find
         possible = ['lib%s.dylib' % name,
                     '%s.dylib' % name,
                     '%s.framework/%s' % (name, name)]
diff --git a/lib-python/2.7/distutils/command/bdist_wininst.py b/lib-python/2.7/distutils/command/bdist_wininst.py
--- a/lib-python/2.7/distutils/command/bdist_wininst.py
+++ b/lib-python/2.7/distutils/command/bdist_wininst.py
@@ -298,7 +298,8 @@
                 bitmaplen,        # number of bytes in bitmap
                 )
         file.write(header)
-        file.write(open(arcname, "rb").read())
+        with open(arcname, "rb") as arcfile:
+            file.write(arcfile.read())
 
     # create_exe()
 
diff --git a/lib-python/2.7/distutils/command/build_ext.py b/lib-python/2.7/distutils/command/build_ext.py
--- a/lib-python/2.7/distutils/command/build_ext.py
+++ b/lib-python/2.7/distutils/command/build_ext.py
@@ -184,7 +184,7 @@
             # the 'libs' directory is for binary installs - we assume that
            # must be the *native* platform.  But we don't really support
            # cross-compiling via a binary install anyway, so we let it go.
-            self.library_dirs.append(os.path.join(sys.exec_prefix, 'libs'))
+            self.library_dirs.append(os.path.join(sys.exec_prefix, 'include'))
             if self.debug:
                 self.build_temp = os.path.join(self.build_temp, "Debug")
             else:
@@ -192,8 +192,13 @@
             # Append the source distribution include and library directories,
             # this allows distutils on windows to work in the source tree
-            self.include_dirs.append(os.path.join(sys.exec_prefix, 'PC'))
-            if MSVC_VERSION == 9:
+            if 0:
+                # pypy has no PC directory
+                self.include_dirs.append(os.path.join(sys.exec_prefix, 'PC'))
+            if 1:
+                # pypy has no PCBuild directory
+                pass
+            elif MSVC_VERSION == 9:
                 # Use the .lib files for the correct architecture
                 if self.plat_name == 'win32':
                     suffix = ''
@@ -695,24 +700,14 @@
         shared extension.  On most platforms, this is just 'ext.libraries';
         on Windows and OS/2, we add the Python library (eg. python20.dll).
         """
-        # The python library is always needed on Windows.  For MSVC, this
-        # is redundant, since the library is mentioned in a pragma in
-        # pyconfig.h that MSVC groks.  The other Windows compilers all seem
-        # to need it mentioned explicitly, though, so that's what we do.
-        # Append '_d' to the python import library on debug builds.
+        # The python library is always needed on Windows.
         if sys.platform == "win32":
-            from distutils.msvccompiler import MSVCCompiler
-            if not isinstance(self.compiler, MSVCCompiler):
-                template = "python%d%d"
-                if self.debug:
-                    template = template + '_d'
-                pythonlib = (template %
-                       (sys.hexversion >> 24, (sys.hexversion >> 16) & 0xff))
-                # don't extend ext.libraries, it may be shared with other
-                # extensions, it is a reference to the original list
-                return ext.libraries + [pythonlib]
-            else:
-                return ext.libraries
+            template = "python%d%d"
+            pythonlib = (template %
+                   (sys.hexversion >> 24, (sys.hexversion >> 16) & 0xff))
+            # don't extend ext.libraries, it may be shared with other
+            # extensions, it is a reference to the original list
+            return ext.libraries + [pythonlib]
         elif sys.platform == "os2emx":
             # EMX/GCC requires the python library explicitly, and I
             # believe VACPP does as well (though not confirmed) - AIM Apr01
diff --git a/lib-python/2.7/distutils/command/install.py b/lib-python/2.7/distutils/command/install.py
--- a/lib-python/2.7/distutils/command/install.py
+++ b/lib-python/2.7/distutils/command/install.py
@@ -83,6 +83,13 @@
         'scripts': '$userbase/bin',
         'data'   : '$userbase',
         },
+    'pypy': {
+        'purelib': '$base/site-packages',
+        'platlib': '$base/site-packages',
+        'headers': '$base/include',
+        'scripts': '$base/bin',
+        'data'   : '$base',
+        },
     }
 
 # The keys to an installation scheme; if any new types of files are to be
@@ -467,6 +474,8 @@
 
     def select_scheme (self, name):
         # it's the caller's problem if they supply a bad name!
+        if hasattr(sys, 'pypy_version_info'):
+            name = 'pypy'
         scheme = INSTALL_SCHEMES[name]
         for key in SCHEME_KEYS:
             attrname = 'install_' + key
diff --git a/lib-python/2.7/distutils/cygwinccompiler.py b/lib-python/2.7/distutils/cygwinccompiler.py
--- a/lib-python/2.7/distutils/cygwinccompiler.py
+++ b/lib-python/2.7/distutils/cygwinccompiler.py
@@ -75,6 +75,9 @@
     elif msc_ver == '1500':
         # VS2008 / MSVC 9.0
         return ['msvcr90']
+    elif msc_ver == '1600':
+        # VS2010 / MSVC 10.0
+        return ['msvcr100']
     else:
         raise ValueError("Unknown MS Compiler version %s " % msc_ver)
 
diff --git a/lib-python/2.7/distutils/msvc9compiler.py b/lib-python/2.7/distutils/msvc9compiler.py
--- a/lib-python/2.7/distutils/msvc9compiler.py
+++ b/lib-python/2.7/distutils/msvc9compiler.py
@@ -648,6 +648,7 @@
             temp_manifest = os.path.join(
                     build_temp,
                     os.path.basename(output_filename) + ".manifest")
+            ld_args.append('/MANIFEST')
             ld_args.append('/MANIFESTFILE:' + temp_manifest)
 
         if extra_preargs:
diff --git a/lib-python/2.7/distutils/spawn.py b/lib-python/2.7/distutils/spawn.py
--- a/lib-python/2.7/distutils/spawn.py
+++ b/lib-python/2.7/distutils/spawn.py
@@ -58,7 +58,6 @@
 
 def _spawn_nt(cmd, search_path=1, verbose=0, dry_run=0):
     executable = cmd[0]
-    cmd = _nt_quote_args(cmd)
     if search_path:
         # either we find one or it stays the same
         executable = find_executable(executable) or executable
@@ -66,7 +65,8 @@
     if not dry_run:
         # spawn for NT requires a full path to the .exe
         try:
-            rc = os.spawnv(os.P_WAIT, executable, cmd)
+            import subprocess
+            rc = subprocess.call(cmd)
         except OSError, exc:
             # this seems to happen when the command isn't found
             raise DistutilsExecError, \
diff --git a/lib-python/2.7/distutils/sysconfig.py b/lib-python/2.7/distutils/sysconfig.py
--- a/lib-python/2.7/distutils/sysconfig.py
+++ b/lib-python/2.7/distutils/sysconfig.py
@@ -9,563 +9,21 @@
 Email:
 """
 
-__revision__ = "$Id$"
+__revision__ = "$Id: sysconfig.py 85358 2010-10-10 09:54:59Z antoine.pitrou $"
 
-import os
-import re
-import string
 import sys
-from distutils.errors import DistutilsPlatformError
 
-# These are needed in a couple of spots, so just compute them once.
-PREFIX = os.path.normpath(sys.prefix)
-EXEC_PREFIX = os.path.normpath(sys.exec_prefix)
+# The content of this file is redirected from
+# sysconfig_cpython or sysconfig_pypy.
 
-# Path to the base directory of the project. On Windows the binary may
-# live in project/PCBuild9.  If we're dealing with an x64 Windows build,
-# it'll live in project/PCbuild/amd64.
-project_base = os.path.dirname(os.path.abspath(sys.executable))
-if os.name == "nt" and "pcbuild" in project_base[-8:].lower():
-    project_base = os.path.abspath(os.path.join(project_base, os.path.pardir))
-# PC/VS7.1
-if os.name == "nt" and "\\pc\\v" in project_base[-10:].lower():
-    project_base = os.path.abspath(os.path.join(project_base, os.path.pardir,
-                                                os.path.pardir))
-# PC/AMD64
-if os.name == "nt" and "\\pcbuild\\amd64" in project_base[-14:].lower():
-    project_base = os.path.abspath(os.path.join(project_base, os.path.pardir,
-                                                os.path.pardir))
+if '__pypy__' in sys.builtin_module_names:
+    from distutils.sysconfig_pypy import *
+    from distutils.sysconfig_pypy import _config_vars # needed by setuptools
+    from distutils.sysconfig_pypy import _variable_rx # read_setup_file()
+else:
+    from distutils.sysconfig_cpython import *
+    from distutils.sysconfig_cpython import _config_vars # needed by setuptools
+    from distutils.sysconfig_cpython import _variable_rx # read_setup_file()
 
-# python_build: (Boolean) if true, we're either building Python or
-# building an extension with an un-installed Python, so we use
-# different (hard-wired) directories.
-# Setup.local is available for Makefile builds including VPATH builds,
-# Setup.dist is available on Windows
-def _python_build():
-    for fn in ("Setup.dist", "Setup.local"):
-        if os.path.isfile(os.path.join(project_base, "Modules", fn)):
-            return True
-    return False
-python_build = _python_build()
-
-def get_python_version():
-    """Return a string containing the major and minor Python version,
-    leaving off the patchlevel.  Sample return values could be '1.5'
-    or '2.2'.
-    """
-    return sys.version[:3]
-
-
-def get_python_inc(plat_specific=0, prefix=None):
-    """Return the directory containing installed Python header files.
-
-    If 'plat_specific' is false (the default), this is the path to the
-    non-platform-specific header files, i.e. Python.h and so on;
-    otherwise, this is the path to platform-specific header files
-    (namely pyconfig.h).
-
-    If 'prefix' is supplied, use it instead of sys.prefix or
-    sys.exec_prefix -- i.e., ignore 'plat_specific'.
-    """
-    if prefix is None:
-        prefix = plat_specific and EXEC_PREFIX or PREFIX
-
-    if os.name == "posix":
-        if python_build:
-            buildir = os.path.dirname(sys.executable)
-            if plat_specific:
-                # python.h is located in the buildir
-                inc_dir = buildir
-            else:
-                # the source dir is relative to the buildir
-                srcdir = os.path.abspath(os.path.join(buildir,
-                                         get_config_var('srcdir')))
-                # Include is located in the srcdir
-                inc_dir = os.path.join(srcdir, "Include")
-            return inc_dir
-        return os.path.join(prefix, "include", "python" + get_python_version())
-    elif os.name == "nt":
-        return os.path.join(prefix, "include")
-    elif os.name == "os2":
-        return os.path.join(prefix, "Include")
-    else:
-        raise DistutilsPlatformError(
-            "I don't know where Python installs its C header files "
-            "on platform '%s'" % os.name)
-
-
-def get_python_lib(plat_specific=0, standard_lib=0, prefix=None):
-    """Return the directory containing the Python library (standard or
-    site additions).
-
-    If 'plat_specific' is true, return the directory containing
-    platform-specific modules, i.e. any module from a non-pure-Python
-    module distribution; otherwise, return the platform-shared library
-    directory.  If 'standard_lib' is true, return the directory
-    containing standard Python library modules; otherwise, return the
-    directory for site-specific modules.
-
-    If 'prefix' is supplied, use it instead of sys.prefix or
-    sys.exec_prefix -- i.e., ignore 'plat_specific'.
-    """
-    if prefix is None:
-        prefix = plat_specific and EXEC_PREFIX or PREFIX
-
-    if os.name == "posix":
-        libpython = os.path.join(prefix,
-                                 "lib", "python" + get_python_version())
-        if standard_lib:
-            return libpython
-        else:
-            return os.path.join(libpython, "site-packages")
-
-    elif os.name == "nt":
-        if standard_lib:
-            return os.path.join(prefix, "Lib")
-        else:
-            if get_python_version() < "2.2":
-                return prefix
-            else:
-                return os.path.join(prefix, "Lib", "site-packages")
-
-    elif os.name == "os2":
-        if standard_lib:
-            return os.path.join(prefix, "Lib")
-        else:
-            return os.path.join(prefix, "Lib", "site-packages")
-
-    else:
-        raise DistutilsPlatformError(
-            "I don't know where Python installs its library "
-            "on platform '%s'" % os.name)
-
-
-def customize_compiler(compiler):
-    """Do any platform-specific customization of a CCompiler instance.
-
-    Mainly needed on Unix, so we can plug in the information that
-    varies across Unices and is stored in Python's Makefile.
-    """
-    if compiler.compiler_type == "unix":
-        (cc, cxx, opt, cflags, ccshared, ldshared, so_ext) = \
-            get_config_vars('CC', 'CXX', 'OPT', 'CFLAGS',
-                            'CCSHARED', 'LDSHARED', 'SO')
-
-        if 'CC' in os.environ:
-            cc = os.environ['CC']
-        if 'CXX' in os.environ:
-            cxx = os.environ['CXX']
-        if 'LDSHARED' in os.environ:
-            ldshared = os.environ['LDSHARED']
-        if 'CPP' in os.environ:
-            cpp = os.environ['CPP']
-        else:
-            cpp = cc + " -E"           # not always
-        if 'LDFLAGS' in os.environ:
-            ldshared = ldshared + ' ' + os.environ['LDFLAGS']
-        if 'CFLAGS' in os.environ:
-            cflags = opt + ' ' + os.environ['CFLAGS']
-            ldshared = ldshared + ' ' + os.environ['CFLAGS']
-        if 'CPPFLAGS' in os.environ:
-            cpp = cpp + ' ' + os.environ['CPPFLAGS']
-            cflags = cflags + ' ' + os.environ['CPPFLAGS']
-            ldshared = ldshared + ' ' + os.environ['CPPFLAGS']
-
-        cc_cmd = cc + ' ' + cflags
-        compiler.set_executables(
-            preprocessor=cpp,
-            compiler=cc_cmd,
-            compiler_so=cc_cmd + ' ' + ccshared,
-            compiler_cxx=cxx,
-            linker_so=ldshared,
-            linker_exe=cc)
-
-        compiler.shared_lib_extension = so_ext
-
-
-def get_config_h_filename():
-    """Return full pathname of installed pyconfig.h file."""
-    if python_build:
-        if os.name == "nt":
-            inc_dir = os.path.join(project_base, "PC")
-        else:
-            inc_dir = project_base
-    else:
-        inc_dir = get_python_inc(plat_specific=1)
-    if get_python_version() < '2.2':
-        config_h = 'config.h'
-    else:
-        # The name of the config.h file changed in 2.2
-        config_h = 'pyconfig.h'
-    return os.path.join(inc_dir, config_h)
-
-
-def get_makefile_filename():
-    """Return full pathname of installed Makefile from the Python build."""
-    if python_build:
-        return os.path.join(os.path.dirname(sys.executable), "Makefile")
-    lib_dir = get_python_lib(plat_specific=1, standard_lib=1)
-    return os.path.join(lib_dir, "config", "Makefile")
-
-
-def parse_config_h(fp, g=None):
-    """Parse a config.h-style file.
-
-    A dictionary containing name/value pairs is returned.  If an
-    optional dictionary is passed in as the second argument, it is
-    used instead of a new dictionary.
-    """
-    if g is None:
-        g = {}
-    define_rx = re.compile("#define ([A-Z][A-Za-z0-9_]+) (.*)\n")
-    undef_rx = re.compile("/[*] #undef ([A-Z][A-Za-z0-9_]+) [*]/\n")
-    #
-    while 1:
-        line = fp.readline()
-        if not line:
-            break
-        m = define_rx.match(line)
-        if m:
-            n, v = m.group(1, 2)
-            try: v = int(v)
-            except ValueError: pass
-            g[n] = v
-        else:
-            m = undef_rx.match(line)
-            if m:
-                g[m.group(1)] = 0
-    return g
-
-
-# Regexes needed for parsing Makefile (and similar syntaxes,
-# like old-style Setup files).
-_variable_rx = re.compile("([a-zA-Z][a-zA-Z0-9_]+)\s*=\s*(.*)")
-_findvar1_rx = re.compile(r"\$\(([A-Za-z][A-Za-z0-9_]*)\)")
-_findvar2_rx = re.compile(r"\${([A-Za-z][A-Za-z0-9_]*)}")
-
-def parse_makefile(fn, g=None):
-    """Parse a Makefile-style file.
-
-    A dictionary containing name/value pairs is returned.  If an
-    optional dictionary is passed in as the second argument, it is
-    used instead of a new dictionary.
-    """
-    from distutils.text_file import TextFile
-    fp = TextFile(fn, strip_comments=1, skip_blanks=1, join_lines=1)
-
-    if g is None:
-        g = {}
-    done = {}
-    notdone = {}
-
-    while 1:
-        line = fp.readline()
-        if line is None:                # eof
-            break
-        m = _variable_rx.match(line)
-        if m:
-            n, v = m.group(1, 2)
-            v = v.strip()
-            # `$$' is a literal `$' in make
-            tmpv = v.replace('$$', '')
-
-            if "$" in tmpv:
-                notdone[n] = v
-            else:
-                try:
-                    v = int(v)
-                except ValueError:
-                    # insert literal `$'
-                    done[n] = v.replace('$$', '$')
-                else:
-                    done[n] = v
-
-    # do variable interpolation here
-    while notdone:
-        for name in notdone.keys():
-            value = notdone[name]
-            m = _findvar1_rx.search(value) or _findvar2_rx.search(value)
-            if m:
-                n = m.group(1)
-                found = True
-                if n in done:
-                    item = str(done[n])
-                elif n in notdone:
-                    # get it on a subsequent round
-                    found = False
-                elif n in os.environ:
-                    # do it like make: fall back to environment
-                    item = os.environ[n]
-                else:
-                    done[n] = item = ""
-                if found:
-                    after = value[m.end():]
-                    value = value[:m.start()] + item + after
-                    if "$" in after:
-                        notdone[name] = value
-                    else:
-                        try: value = int(value)
-                        except ValueError:
-                            done[name] = value.strip()
-                        else:
-                            done[name] = value
-                        del notdone[name]
-            else:
-                # bogus variable reference; just drop it since we can't deal
-                del notdone[name]
-
-    fp.close()
-
-    # strip spurious spaces
-    for k, v in done.items():
-        if isinstance(v, str):
-            done[k] = v.strip()
-
-    # save the results in the global dictionary
-    g.update(done)
-    return g
-
-
-def expand_makefile_vars(s, vars):
-    """Expand Makefile-style variables -- "${foo}" or "$(foo)" -- in
-    'string' according to 'vars' (a dictionary mapping variable names to
-    values).  Variables not present in 'vars' are silently expanded to the
-    empty string.  The variable values in 'vars' should not contain further
-    variable expansions; if 'vars' is the output of 'parse_makefile()',
-    you're fine.  Returns a variable-expanded version of 's'.
-    """
-
-    # This algorithm does multiple expansion, so if vars['foo'] contains
-    # "${bar}", it will expand ${foo} to ${bar}, and then expand
-    # ${bar}... and so forth.  This is fine as long as 'vars' comes from
-    # 'parse_makefile()', which takes care of such expansions eagerly,
-    # according to make's variable expansion semantics.
-
-    while 1:
-        m = _findvar1_rx.search(s) or _findvar2_rx.search(s)
-        if m:
-            (beg, end) = m.span()
-            s = s[0:beg] + vars.get(m.group(1)) + s[end:]
-        else:
-            break
-    return s
-
-
-_config_vars = None
-
-def _init_posix():
-    """Initialize the module as appropriate for POSIX systems."""
-    g = {}
-    # load the installed Makefile:
-    try:
-        filename = get_makefile_filename()
-        parse_makefile(filename, g)
-    except IOError, msg:
-        my_msg = "invalid Python installation: unable to open %s" % filename
-        if hasattr(msg, "strerror"):
-            my_msg = my_msg + " (%s)" % msg.strerror
-
-        raise DistutilsPlatformError(my_msg)
-
-    # load the installed pyconfig.h:
-    try:
-        filename = get_config_h_filename()
-        parse_config_h(file(filename), g)
-    except IOError, msg:
-        my_msg = "invalid Python installation: unable to open %s" % filename
-        if hasattr(msg, "strerror"):
-            my_msg = my_msg + " (%s)" % msg.strerror
-
-        raise DistutilsPlatformError(my_msg)
-
-    # On MacOSX we need to check the setting of the environment variable
-    # MACOSX_DEPLOYMENT_TARGET: configure bases some choices on it so
-    # it needs to be compatible.
-    # If it isn't set we set it to the configure-time value
-    if sys.platform == 'darwin' and 'MACOSX_DEPLOYMENT_TARGET' in g:
-        cfg_target = g['MACOSX_DEPLOYMENT_TARGET']
-        cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '')
-        if cur_target == '':
-            cur_target = cfg_target
-            os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target
-        elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')):
-            my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure'
-                      % (cur_target, cfg_target))
-            raise DistutilsPlatformError(my_msg)
-
-    # On AIX, there are wrong paths to the linker scripts in the Makefile
-    # -- these paths are relative to the Python source, but when installed
-    # the scripts are in another directory.
-    if python_build:
-        g['LDSHARED'] = g['BLDSHARED']
-
-    elif get_python_version() < '2.1':
-        # The following two branches are for 1.5.2 compatibility.
-        if sys.platform == 'aix4':          # what about AIX 3.x ?
-            # Linker script is in the config directory, not in Modules as the
-            # Makefile says.
-            python_lib = get_python_lib(standard_lib=1)
-            ld_so_aix = os.path.join(python_lib, 'config', 'ld_so_aix')
-            python_exp = os.path.join(python_lib, 'config', 'python.exp')
-
-            g['LDSHARED'] = "%s %s -bI:%s" % (ld_so_aix, g['CC'], python_exp)
-
-        elif sys.platform == 'beos':
-            # Linker script is in the config directory.  In the Makefile it is
-            # relative to the srcdir, which after installation no longer makes
-            # sense.
-            python_lib = get_python_lib(standard_lib=1)
-            linkerscript_path = string.split(g['LDSHARED'])[0]
-            linkerscript_name = os.path.basename(linkerscript_path)
-            linkerscript = os.path.join(python_lib, 'config',
-                                        linkerscript_name)
-
-            # XXX this isn't the right place to do this: adding the Python
-            # library to the link, if needed, should be in the "build_ext"
-            # command.  (It's also needed for non-MS compilers on Windows, and
-            # it's taken care of for them by the 'build_ext.get_libraries()'
-            # method.)
-            g['LDSHARED'] = ("%s -L%s/lib -lpython%s" %
-                             (linkerscript, PREFIX, get_python_version()))
-
-    global _config_vars
-    _config_vars = g
-
-
-def _init_nt():
-    """Initialize the module as appropriate for NT"""
-    g = {}
-    # set basic install directories
-    g['LIBDEST'] = get_python_lib(plat_specific=0, standard_lib=1)
-    g['BINLIBDEST'] = get_python_lib(plat_specific=1, standard_lib=1)
-
-    # XXX hmmm.. a normal install puts include files here
-    g['INCLUDEPY'] = get_python_inc(plat_specific=0)
-
-    g['SO'] = '.pyd'
-    g['EXE'] = ".exe"
-    g['VERSION'] = get_python_version().replace(".", "")
-    g['BINDIR'] = os.path.dirname(os.path.abspath(sys.executable))
-
-    global _config_vars
-    _config_vars = g
-
-
-def _init_os2():
-    """Initialize the module as appropriate for OS/2"""
-    g = {}
-    # set basic install directories
-    g['LIBDEST'] = get_python_lib(plat_specific=0, standard_lib=1)
-    g['BINLIBDEST'] = get_python_lib(plat_specific=1, standard_lib=1)
-
-    # XXX hmmm.. a normal install puts include files here
-    g['INCLUDEPY'] = get_python_inc(plat_specific=0)
-
-    g['SO'] = '.pyd'
-    g['EXE'] = ".exe"
-
-    global _config_vars
-    _config_vars = g
-
-
-def get_config_vars(*args):
-    """With no arguments, return a dictionary of all configuration
-    variables relevant for the current platform.  Generally this includes
-    everything needed to build extensions and install both pure modules and
-    extensions.  On Unix, this means every variable defined in Python's
-    installed Makefile; on Windows and Mac OS it's a much smaller set.
-
-    With arguments, return a list of values that result from looking up
-    each argument in the configuration variable dictionary.
-    """
-    global _config_vars
-    if _config_vars is None:
-        func = globals().get("_init_" + os.name)
-        if func:
-            func()
-        else:
-            _config_vars = {}
-
-        # Normalized versions of prefix and exec_prefix are handy to have;
-        # in fact, these are the standard versions used most places in the
-        # Distutils.
-        _config_vars['prefix'] = PREFIX
-        _config_vars['exec_prefix'] = EXEC_PREFIX
-
-        if sys.platform == 'darwin':
-            kernel_version = os.uname()[2] # Kernel version (8.4.3)
-            major_version = int(kernel_version.split('.')[0])
-
-            if major_version < 8:
-                # On Mac OS X before 10.4, check if -arch and -isysroot
-                # are in CFLAGS or LDFLAGS and remove them if they are.
-                # This is needed when building extensions on a 10.3 system
-                # using a universal build of python.
-                for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED',
-                        # a number of derived variables. These need to be
-                        # patched up as well.
-                        'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'):
-                    flags = _config_vars[key]
-                    flags = re.sub('-arch\s+\w+\s', ' ', flags)
-                    flags = re.sub('-isysroot [^ \t]*', ' ', flags)
-                    _config_vars[key] = flags
-
-            else:
-
-                # Allow the user to override the architecture flags using
-                # an environment variable.
-                # NOTE: This name was introduced by Apple in OSX 10.5 and
-                # is used by several scripting languages distributed with
-                # that OS release.
-
-                if 'ARCHFLAGS' in os.environ:
-                    arch = os.environ['ARCHFLAGS']
-                    for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED',
-                            # a number of derived variables. These need to be
-                            # patched up as well.
-                            'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'):
-
-                        flags = _config_vars[key]
-                        flags = re.sub('-arch\s+\w+\s', ' ', flags)
-                        flags = flags + ' ' + arch
-                        _config_vars[key] = flags
-
-                # If we're on OSX 10.5 or later and the user tries to
-                # compiles an extension using an SDK that is not present
-                # on the current machine it is better to not use an SDK
-                # than to fail.
-                #
-                # The major usecase for this is users using a Python.org
-                # binary installer  on OSX 10.6: that installer uses
-                # the 10.4u SDK, but that SDK is not installed by default
-                # when you install Xcode.
-                #
-                m = re.search('-isysroot\s+(\S+)', _config_vars['CFLAGS'])
-                if m is not None:
-                    sdk = m.group(1)
-                    if not os.path.exists(sdk):
-                        for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED',
-                                # a number of derived variables. These need to be
-                                # patched up as well.
-                                'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'):
-
-                            flags = _config_vars[key]
-                            flags = re.sub('-isysroot\s+\S+(\s|$)', ' ', flags)
-                            _config_vars[key] = flags
-
-    if args:
-        vals = []
-        for name in args:
-            vals.append(_config_vars.get(name))
-        return vals
-    else:
-        return _config_vars
-
-def get_config_var(name):
-    """Return the value of a single variable using the dictionary
-    returned by 'get_config_vars()'. Equivalent to
-    get_config_vars().get(name)
-    """
-    return get_config_vars().get(name)
diff --git a/lib-python/modified-2.7/distutils/sysconfig_cpython.py b/lib-python/2.7/distutils/sysconfig_cpython.py
rename from lib-python/modified-2.7/distutils/sysconfig_cpython.py
rename to lib-python/2.7/distutils/sysconfig_cpython.py
diff --git a/lib-python/modified-2.7/distutils/sysconfig_pypy.py b/lib-python/2.7/distutils/sysconfig_pypy.py
rename from lib-python/modified-2.7/distutils/sysconfig_pypy.py
rename to lib-python/2.7/distutils/sysconfig_pypy.py
diff --git a/lib-python/2.7/distutils/tests/test_build_ext.py b/lib-python/2.7/distutils/tests/test_build_ext.py
--- a/lib-python/2.7/distutils/tests/test_build_ext.py
+++ b/lib-python/2.7/distutils/tests/test_build_ext.py
@@ -293,7 +293,7 @@
         finally:
             os.chdir(old_wd)
         self.assertTrue(os.path.exists(so_file))
-        self.assertEqual(os.path.splitext(so_file)[-1],
+        self.assertEqual(so_file[so_file.index(os.path.extsep):],
                          sysconfig.get_config_var('SO'))
         so_dir = os.path.dirname(so_file)
         self.assertEqual(so_dir, other_tmp_dir)
@@ -302,7 +302,7 @@
         cmd.run()
         so_file = cmd.get_outputs()[0]
         self.assertTrue(os.path.exists(so_file))
-        self.assertEqual(os.path.splitext(so_file)[-1],
+        self.assertEqual(so_file[so_file.index(os.path.extsep):],
                          sysconfig.get_config_var('SO'))
        so_dir =
os.path.dirname(so_file) self.assertEqual(so_dir, cmd.build_lib) diff --git a/lib-python/2.7/distutils/tests/test_install.py b/lib-python/2.7/distutils/tests/test_install.py --- a/lib-python/2.7/distutils/tests/test_install.py +++ b/lib-python/2.7/distutils/tests/test_install.py @@ -2,6 +2,7 @@ import os import unittest +from test import test_support from test.test_support import run_unittest @@ -40,14 +41,15 @@ expected = os.path.normpath(expected) self.assertEqual(got, expected) - libdir = os.path.join(destination, "lib", "python") - check_path(cmd.install_lib, libdir) - check_path(cmd.install_platlib, libdir) - check_path(cmd.install_purelib, libdir) - check_path(cmd.install_headers, - os.path.join(destination, "include", "python", "foopkg")) - check_path(cmd.install_scripts, os.path.join(destination, "bin")) - check_path(cmd.install_data, destination) + if test_support.check_impl_detail(): + libdir = os.path.join(destination, "lib", "python") + check_path(cmd.install_lib, libdir) + check_path(cmd.install_platlib, libdir) + check_path(cmd.install_purelib, libdir) + check_path(cmd.install_headers, + os.path.join(destination, "include", "python", "foopkg")) + check_path(cmd.install_scripts, os.path.join(destination, "bin")) + check_path(cmd.install_data, destination) def test_suite(): diff --git a/lib-python/2.7/distutils/unixccompiler.py b/lib-python/2.7/distutils/unixccompiler.py --- a/lib-python/2.7/distutils/unixccompiler.py +++ b/lib-python/2.7/distutils/unixccompiler.py @@ -125,7 +125,22 @@ } if sys.platform[:6] == "darwin": + import platform + if platform.machine() == 'i386': + if platform.architecture()[0] == '32bit': + arch = 'i386' + else: + arch = 'x86_64' + else: + # just a guess + arch = platform.machine() executables['ranlib'] = ["ranlib"] + executables['linker_so'] += ['-undefined', 'dynamic_lookup'] + + for k, v in executables.iteritems(): + if v and v[0] == 'cc': + v += ['-arch', arch] + # Needed for the filename generation methods provided by the 
base # class, CCompiler. NB. whoever instantiates/uses a particular @@ -309,7 +324,7 @@ # On OSX users can specify an alternate SDK using # '-isysroot', calculate the SDK root if it is specified # (and use it further on) - cflags = sysconfig.get_config_var('CFLAGS') + cflags = sysconfig.get_config_var('CFLAGS') or '' m = re.search(r'-isysroot\s+(\S+)', cflags) if m is None: sysroot = '/' diff --git a/lib-python/2.7/heapq.py b/lib-python/2.7/heapq.py --- a/lib-python/2.7/heapq.py +++ b/lib-python/2.7/heapq.py @@ -193,6 +193,8 @@ Equivalent to: sorted(iterable, reverse=True)[:n] """ + if n < 0: # for consistency with the c impl + return [] it = iter(iterable) result = list(islice(it, n)) if not result: @@ -209,6 +211,8 @@ Equivalent to: sorted(iterable)[:n] """ + if n < 0: # for consistency with the c impl + return [] if hasattr(iterable, '__len__') and n * 10 <= len(iterable): # For smaller values of n, the bisect method is faster than a minheap. # It is also memory efficient, consuming only n elements of space. diff --git a/lib-python/2.7/httplib.py b/lib-python/2.7/httplib.py --- a/lib-python/2.7/httplib.py +++ b/lib-python/2.7/httplib.py @@ -1024,7 +1024,11 @@ kwds["buffering"] = True; response = self.response_class(*args, **kwds) - response.begin() + try: + response.begin() + except: + response.close() + raise assert response.will_close != _UNKNOWN self.__state = _CS_IDLE diff --git a/lib-python/2.7/idlelib/Delegator.py b/lib-python/2.7/idlelib/Delegator.py --- a/lib-python/2.7/idlelib/Delegator.py +++ b/lib-python/2.7/idlelib/Delegator.py @@ -12,6 +12,14 @@ self.__cache[name] = attr return attr + def __nonzero__(self): + # this is needed for PyPy: else, if self.delegate is None, the + # __getattr__ above picks NoneType.__nonzero__, which returns + # False. Thus, bool(Delegator()) is False as well, but it's not what + # we want. 
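The `__nonzero__` added to `Delegator` above pins the truth value of a delegating object so it cannot fall through to the delegate (on PyPy it would otherwise find `NoneType.__nonzero__` via `__getattr__`). A minimal sketch of the pattern, using the Python 3 spelling `__bool__` and a hypothetical `Forwarder` class:

```python
class Forwarder(object):
    """Forward unknown attribute lookups to a wrapped delegate."""
    def __init__(self, delegate=None):
        self.delegate = delegate

    def __getattr__(self, name):
        # Only called when normal lookup fails; may hit a None delegate.
        return getattr(self.delegate, name)

    def __bool__(self):            # __nonzero__ in Python 2
        # Pin the truth value so it never depends on the delegate.
        return True

f = Forwarder("abc")
assert f.upper() == "ABC"     # delegated to the wrapped string
assert bool(Forwarder())      # true even with no delegate
```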
On CPython, bool(Delegator()) is True because NoneType + # does not have __nonzero__ + return True + def resetcache(self): for key in self.__cache.keys(): try: diff --git a/lib-python/2.7/inspect.py b/lib-python/2.7/inspect.py --- a/lib-python/2.7/inspect.py +++ b/lib-python/2.7/inspect.py @@ -746,8 +746,15 @@ 'varargs' and 'varkw' are the names of the * and ** arguments or None.""" if not iscode(co): - raise TypeError('{!r} is not a code object'.format(co)) + if hasattr(len, 'func_code') and type(co) is type(len.func_code): + # PyPy extension: built-in function objects have a func_code too. + # There is no co_code on it, but co_argcount and co_varnames and + # co_flags are present. + pass + else: + raise TypeError('{!r} is not a code object'.format(co)) + code = getattr(co, 'co_code', '') nargs = co.co_argcount names = co.co_varnames args = list(names[:nargs]) @@ -757,12 +764,12 @@ for i in range(nargs): if args[i][:1] in ('', '.'): stack, remain, count = [], [], [] - while step < len(co.co_code): - op = ord(co.co_code[step]) + while step < len(code): + op = ord(code[step]) step = step + 1 if op >= dis.HAVE_ARGUMENT: opname = dis.opname[op] - value = ord(co.co_code[step]) + ord(co.co_code[step+1])*256 + value = ord(code[step]) + ord(code[step+1])*256 step = step + 2 if opname in ('UNPACK_TUPLE', 'UNPACK_SEQUENCE'): remain.append(value) @@ -809,7 +816,9 @@ if ismethod(func): func = func.im_func - if not isfunction(func): + if not (isfunction(func) or + isbuiltin(func) and hasattr(func, 'func_code')): + # PyPy extension: this works for built-in functions too raise TypeError('{!r} is not a Python function'.format(func)) args, varargs, varkw = getargs(func.func_code) return ArgSpec(args, varargs, varkw, func.func_defaults) @@ -949,7 +958,7 @@ raise TypeError('%s() takes exactly 0 arguments ' '(%d given)' % (f_name, num_total)) else: - raise TypeError('%s() takes no arguments (%d given)' % + raise TypeError('%s() takes no argument (%d given)' % (f_name, num_total)) for 
arg in args: if isinstance(arg, str) and arg in named: diff --git a/lib-python/2.7/json/encoder.py b/lib-python/2.7/json/encoder.py --- a/lib-python/2.7/json/encoder.py +++ b/lib-python/2.7/json/encoder.py @@ -2,14 +2,7 @@ """ import re -try: - from _json import encode_basestring_ascii as c_encode_basestring_ascii -except ImportError: - c_encode_basestring_ascii = None -try: - from _json import make_encoder as c_make_encoder -except ImportError: - c_make_encoder = None +from __pypy__.builders import StringBuilder, UnicodeBuilder ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]') ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])') @@ -24,23 +17,22 @@ '\t': '\\t', } for i in range(0x20): - ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i)) - #ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) + ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) # Assume this produces an infinity on all machines (probably not guaranteed) INFINITY = float('1e66666') FLOAT_REPR = repr -def encode_basestring(s): +def raw_encode_basestring(s): """Return a JSON representation of a Python string """ def replace(match): return ESCAPE_DCT[match.group(0)] - return '"' + ESCAPE.sub(replace, s) + '"' + return ESCAPE.sub(replace, s) +encode_basestring = lambda s: '"' + raw_encode_basestring(s) + '"' - -def py_encode_basestring_ascii(s): +def raw_encode_basestring_ascii(s): """Return an ASCII-only JSON representation of a Python string """ @@ -53,21 +45,19 @@ except KeyError: n = ord(s) if n < 0x10000: - return '\\u{0:04x}'.format(n) - #return '\\u%04x' % (n,) + return '\\u%04x' % (n,) else: # surrogate pair n -= 0x10000 s1 = 0xd800 | ((n >> 10) & 0x3ff) s2 = 0xdc00 | (n & 0x3ff) - return '\\u{0:04x}\\u{1:04x}'.format(s1, s2) - #return '\\u%04x\\u%04x' % (s1, s2) - return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"' + return '\\u%04x\\u%04x' % (s1, s2) + if ESCAPE_ASCII.search(s): + return str(ESCAPE_ASCII.sub(replace, s)) + return s +encode_basestring_ascii = lambda s: '"' + raw_encode_basestring_ascii(s) 
+ '"' -encode_basestring_ascii = ( - c_encode_basestring_ascii or py_encode_basestring_ascii) - class JSONEncoder(object): """Extensible JSON encoder for Python data structures. @@ -147,6 +137,17 @@ self.skipkeys = skipkeys self.ensure_ascii = ensure_ascii + if ensure_ascii: + self.encoder = raw_encode_basestring_ascii + else: + self.encoder = raw_encode_basestring + if encoding != 'utf-8': + orig_encoder = self.encoder + def encoder(o): + if isinstance(o, str): + o = o.decode(encoding) + return orig_encoder(o) + self.encoder = encoder self.check_circular = check_circular self.allow_nan = allow_nan self.sort_keys = sort_keys @@ -184,24 +185,126 @@ '{"foo": ["bar", "baz"]}' """ - # This is for extremely simple cases and benchmarks. + if self.check_circular: + markers = {} + else: + markers = None + if self.ensure_ascii: + builder = StringBuilder() + else: + builder = UnicodeBuilder() + self._encode(o, markers, builder, 0) + return builder.build() + + def _emit_indent(self, builder, _current_indent_level): + if self.indent is not None: + _current_indent_level += 1 + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent + builder.append(newline_indent) + else: + separator = self.item_separator + return separator, _current_indent_level + + def _emit_unindent(self, builder, _current_indent_level): + if self.indent is not None: + builder.append('\n') + builder.append(' ' * (self.indent * (_current_indent_level - 1))) + + def _encode(self, o, markers, builder, _current_indent_level): if isinstance(o, basestring): - if isinstance(o, str): - _encoding = self.encoding - if (_encoding is not None - and not (_encoding == 'utf-8')): - o = o.decode(_encoding) - if self.ensure_ascii: - return encode_basestring_ascii(o) + builder.append('"') + builder.append(self.encoder(o)) + builder.append('"') + elif o is None: + builder.append('null') + elif o is True: + builder.append('true') + elif o is False: + 
builder.append('false') + elif isinstance(o, (int, long)): + builder.append(str(o)) + elif isinstance(o, float): + builder.append(self._floatstr(o)) + elif isinstance(o, (list, tuple)): + if not o: + builder.append('[]') + return + self._encode_list(o, markers, builder, _current_indent_level) + elif isinstance(o, dict): + if not o: + builder.append('{}') + return + self._encode_dict(o, markers, builder, _current_indent_level) + else: + self._mark_markers(markers, o) + res = self.default(o) + self._encode(res, markers, builder, _current_indent_level) + self._remove_markers(markers, o) + return res + + def _encode_list(self, l, markers, builder, _current_indent_level): + self._mark_markers(markers, l) + builder.append('[') + first = True + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + for elem in l: + if first: + first = False else: - return encode_basestring(o) - # This doesn't pass the iterator directly to ''.join() because the - # exceptions aren't as detailed. The list call should be roughly - # equivalent to the PySequence_Fast that ''.join() would do. 
- chunks = self.iterencode(o, _one_shot=True) - if not isinstance(chunks, (list, tuple)): - chunks = list(chunks) - return ''.join(chunks) + builder.append(separator) + self._encode(elem, markers, builder, _current_indent_level) + del elem # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append(']') + self._remove_markers(markers, l) + + def _encode_dict(self, d, markers, builder, _current_indent_level): + self._mark_markers(markers, d) + first = True + builder.append('{') + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + if self.sort_keys: + items = sorted(d.items(), key=lambda kv: kv[0]) + else: + items = d.iteritems() + + for key, v in items: + if first: + first = False + else: + builder.append(separator) + if isinstance(key, basestring): + pass + # JavaScript is weakly typed for these, so it makes sense to + # also allow them. Many encoders seem to do something like this. + elif isinstance(key, float): + key = self._floatstr(key) + elif key is True: + key = 'true' + elif key is False: + key = 'false' + elif key is None: + key = 'null' + elif isinstance(key, (int, long)): + key = str(key) + elif self.skipkeys: + continue + else: + raise TypeError("key " + repr(key) + " is not a string") + builder.append('"') + builder.append(self.encoder(key)) + builder.append('"') + builder.append(self.key_separator) + self._encode(v, markers, builder, _current_indent_level) + del key + del v # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append('}') + self._remove_markers(markers, d) def iterencode(self, o, _one_shot=False): """Encode the given object and yield each string @@ -217,86 +320,54 @@ markers = {} else: markers = None - if self.ensure_ascii: - _encoder = encode_basestring_ascii + return self._iterencode(o, markers, 0) + + def _floatstr(self, o): + # Check for specials. 
Note that this type of test is processor + # and/or platform-specific, so do tests which don't depend on the + # internals. + + if o != o: + text = 'NaN' + elif o == INFINITY: + text = 'Infinity' + elif o == -INFINITY: + text = '-Infinity' else: - _encoder = encode_basestring - if self.encoding != 'utf-8': - def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding): - if isinstance(o, str): - o = o.decode(_encoding) - return _orig_encoder(o) + return FLOAT_REPR(o) - def floatstr(o, allow_nan=self.allow_nan, - _repr=FLOAT_REPR, _inf=INFINITY, _neginf=-INFINITY): - # Check for specials. Note that this type of test is processor - # and/or platform-specific, so do tests which don't depend on the - # internals. + if not self.allow_nan: + raise ValueError( + "Out of range float values are not JSON compliant: " + + repr(o)) - if o != o: - text = 'NaN' - elif o == _inf: - text = 'Infinity' - elif o == _neginf: - text = '-Infinity' - else: - return _repr(o) + return text - if not allow_nan: - raise ValueError( - "Out of range float values are not JSON compliant: " + - repr(o)) + def _mark_markers(self, markers, o): + if markers is not None: + if id(o) in markers: + raise ValueError("Circular reference detected") + markers[id(o)] = None - return text + def _remove_markers(self, markers, o): + if markers is not None: + del markers[id(o)] - - if (_one_shot and c_make_encoder is not None - and self.indent is None and not self.sort_keys): - _iterencode = c_make_encoder( - markers, self.default, _encoder, self.indent, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, self.allow_nan) - else: - _iterencode = _make_iterencode( - markers, self.default, _encoder, self.indent, floatstr, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, _one_shot) - return _iterencode(o, 0) - -def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, - _key_separator, _item_separator, _sort_keys, _skipkeys, _one_shot, - ## HACK: 
hand-optimized bytecode; turn globals into locals - ValueError=ValueError, - basestring=basestring, - dict=dict, - float=float, - id=id, - int=int, - isinstance=isinstance, - list=list, - long=long, - str=str, - tuple=tuple, - ): - - def _iterencode_list(lst, _current_indent_level): + def _iterencode_list(self, lst, markers, _current_indent_level): if not lst: yield '[]' return - if markers is not None: - markerid = id(lst) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = lst + self._mark_markers(markers, lst) buf = '[' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent buf += newline_indent else: newline_indent = None - separator = _item_separator + separator = self.item_separator first = True for value in lst: if first: @@ -304,7 +375,7 @@ else: buf = separator if isinstance(value, basestring): - yield buf + _encoder(value) + yield buf + '"' + self.encoder(value) + '"' elif value is None: yield buf + 'null' elif value is True: @@ -314,44 +385,43 @@ elif isinstance(value, (int, long)): yield buf + str(value) elif isinstance(value, float): - yield buf + _floatstr(value) + yield buf + self._floatstr(value) else: yield buf if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: 
_current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield ']' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, lst) - def _iterencode_dict(dct, _current_indent_level): + def _iterencode_dict(self, dct, markers, _current_indent_level): if not dct: yield '{}' return - if markers is not None: - markerid = id(dct) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = dct + self._mark_markers(markers, dct) yield '{' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - item_separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + item_separator = self.item_separator + newline_indent yield newline_indent else: newline_indent = None - item_separator = _item_separator + item_separator = self.item_separator first = True - if _sort_keys: + if self.sort_keys: items = sorted(dct.items(), key=lambda kv: kv[0]) else: items = dct.iteritems() @@ -361,7 +431,7 @@ # JavaScript is weakly typed for these, so it makes sense to # also allow them. Many encoders seem to do something like this. 
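The key-handling branches above coerce non-string dict keys to their JSON spellings; this mirrors the observable behavior of the stdlib `json` module:

```python
import json

# Non-string keys are coerced to their JSON spellings, matching the
# float/bool/None/int branches in the encoder above.
assert json.dumps({True: 1}) == '{"true": 1}'
assert json.dumps({None: 1}) == '{"null": 1}'
assert json.dumps({3: 1}) == '{"3": 1}'
# skipkeys=True silently drops keys of unsupported types.
assert json.dumps({(1, 2): 1}, skipkeys=True) == '{}'
```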
elif isinstance(key, float): - key = _floatstr(key) + key = self._floatstr(key) elif key is True: key = 'true' elif key is False: @@ -370,7 +440,7 @@ key = 'null' elif isinstance(key, (int, long)): key = str(key) - elif _skipkeys: + elif self.skipkeys: continue else: raise TypeError("key " + repr(key) + " is not a string") @@ -378,10 +448,10 @@ first = False else: yield item_separator - yield _encoder(key) - yield _key_separator + yield '"' + self.encoder(key) + '"' + yield self.key_separator if isinstance(value, basestring): - yield _encoder(value) + yield '"' + self.encoder(value) + '"' elif value is None: yield 'null' elif value is True: @@ -391,26 +461,28 @@ elif isinstance(value, (int, long)): yield str(value) elif isinstance(value, float): - yield _floatstr(value) + yield self._floatstr(value) else: if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield '}' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, dct) - def _iterencode(o, _current_indent_level): + def _iterencode(self, o, markers, _current_indent_level): if isinstance(o, basestring): - yield _encoder(o) + yield '"' + self.encoder(o) + '"' elif o is None: yield 'null' elif o is True: @@ -420,23 +492,19 @@ elif isinstance(o, (int, long)): yield str(o) elif isinstance(o, float): - yield _floatstr(o) + yield self._floatstr(o) elif isinstance(o, (list, tuple)): - 
for chunk in _iterencode_list(o, _current_indent_level): + for chunk in self._iterencode_list(o, markers, + _current_indent_level): yield chunk elif isinstance(o, dict): - for chunk in _iterencode_dict(o, _current_indent_level): + for chunk in self._iterencode_dict(o, markers, + _current_indent_level): yield chunk else: - if markers is not None: - markerid = id(o) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = o - o = _default(o) - for chunk in _iterencode(o, _current_indent_level): + self._mark_markers(markers, o) + obj = self.default(o) + for chunk in self._iterencode(obj, markers, + _current_indent_level): yield chunk - if markers is not None: - del markers[markerid] - - return _iterencode + self._remove_markers(markers, o) diff --git a/lib-python/2.7/json/tests/test_unicode.py b/lib-python/2.7/json/tests/test_unicode.py --- a/lib-python/2.7/json/tests/test_unicode.py +++ b/lib-python/2.7/json/tests/test_unicode.py @@ -80,6 +80,12 @@ # Issue 10038. 
self.assertEqual(type(self.loads('"foo"')), unicode) + def test_encode_not_utf_8(self): + self.assertEqual(self.dumps('\xb1\xe6', encoding='iso8859-2'), + '"\\u0105\\u0107"') + self.assertEqual(self.dumps(['\xb1\xe6'], encoding='iso8859-2'), + '["\\u0105\\u0107"]') + class TestPyUnicode(TestUnicode, PyTest): pass class TestCUnicode(TestUnicode, CTest): pass diff --git a/lib-python/2.7/multiprocessing/forking.py b/lib-python/2.7/multiprocessing/forking.py --- a/lib-python/2.7/multiprocessing/forking.py +++ b/lib-python/2.7/multiprocessing/forking.py @@ -73,15 +73,12 @@ return getattr, (m.im_self, m.im_func.func_name) ForkingPickler.register(type(ForkingPickler.save), _reduce_method) -def _reduce_method_descriptor(m): - return getattr, (m.__objclass__, m.__name__) -ForkingPickler.register(type(list.append), _reduce_method_descriptor) -ForkingPickler.register(type(int.__add__), _reduce_method_descriptor) - -#def _reduce_builtin_function_or_method(m): -# return getattr, (m.__self__, m.__name__) -#ForkingPickler.register(type(list().append), _reduce_builtin_function_or_method) -#ForkingPickler.register(type(int().__add__), _reduce_builtin_function_or_method) +if type(list.append) is not type(ForkingPickler.save): + # Some python implementations have unbound methods even for builtin types + def _reduce_method_descriptor(m): + return getattr, (m.__objclass__, m.__name__) + ForkingPickler.register(type(list.append), _reduce_method_descriptor) + ForkingPickler.register(type(int.__add__), _reduce_method_descriptor) try: from functools import partial diff --git a/lib-python/2.7/opcode.py b/lib-python/2.7/opcode.py --- a/lib-python/2.7/opcode.py +++ b/lib-python/2.7/opcode.py @@ -1,4 +1,3 @@ - """ opcode module - potentially shared between dis and other modules which operate on bytecodes (e.g. peephole optimizers). 
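The `test_encode_not_utf_8` case added above fixes the expected ISO 8859-2 escapes; those values can be checked directly (Python 3 spelling, decoding explicit bytes):

```python
import json

# ISO 8859-2 maps 0xb1 -> U+0105 ('ą') and 0xe6 -> U+0107 ('ć'),
# which the ensure_ascii encoder then writes as \u escapes.
s = b'\xb1\xe6'.decode('iso8859-2')
assert s == '\u0105\u0107'
assert json.dumps(s) == '"\\u0105\\u0107"'
assert json.dumps([s]) == '["\\u0105\\u0107"]'
```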
@@ -189,4 +188,10 @@ def_op('SET_ADD', 146) def_op('MAP_ADD', 147) +# pypy modification, experimental bytecode +def_op('LOOKUP_METHOD', 201) # Index in name list +hasname.append(201) +def_op('CALL_METHOD', 202) # #args not including 'self' +def_op('BUILD_LIST_FROM_ARG', 203) + del def_op, name_op, jrel_op, jabs_op diff --git a/lib-python/2.7/pickle.py b/lib-python/2.7/pickle.py --- a/lib-python/2.7/pickle.py +++ b/lib-python/2.7/pickle.py @@ -168,7 +168,7 @@ # Pickling machinery -class Pickler: +class Pickler(object): def __init__(self, file, protocol=None): """This takes a file-like object for writing a pickle data stream. @@ -638,6 +638,10 @@ # else tmp is empty, and we're done def save_dict(self, obj): + modict_saver = self._pickle_moduledict(obj) + if modict_saver is not None: + return self.save_reduce(*modict_saver) + write = self.write if self.bin: @@ -687,6 +691,29 @@ write(SETITEM) # else tmp is empty, and we're done + def _pickle_moduledict(self, obj): + # save module dictionary as "getattr(module, '__dict__')" + + # build index of module dictionaries + try: + modict = self.module_dict_ids + except AttributeError: + modict = {} + from sys import modules + for mod in modules.values(): + if isinstance(mod, ModuleType): + modict[id(mod.__dict__)] = mod + self.module_dict_ids = modict + + thisid = id(obj) + try: + themodule = modict[thisid] + except KeyError: + return None + from __builtin__ import getattr + return getattr, (themodule, '__dict__') + + def save_inst(self, obj): cls = obj.__class__ @@ -727,6 +754,29 @@ dispatch[InstanceType] = save_inst + def save_function(self, obj): + try: + return self.save_global(obj) + except PicklingError, e: + pass + # Check copy_reg.dispatch_table + reduce = dispatch_table.get(type(obj)) + if reduce: + rv = reduce(obj) + else: + # Check for a __reduce_ex__ method, fall back to __reduce__ + reduce = getattr(obj, "__reduce_ex__", None) + if reduce: + rv = reduce(self.proto) + else: + reduce = getattr(obj, "__reduce__", 
None) + if reduce: + rv = reduce() + else: + raise e + return self.save_reduce(obj=obj, *rv) + dispatch[FunctionType] = save_function + def save_global(self, obj, name=None, pack=struct.pack): write = self.write memo = self.memo @@ -768,7 +818,6 @@ self.memoize(obj) dispatch[ClassType] = save_global - dispatch[FunctionType] = save_global dispatch[BuiltinFunctionType] = save_global dispatch[TypeType] = save_global @@ -824,7 +873,7 @@ # Unpickling machinery -class Unpickler: +class Unpickler(object): def __init__(self, file): """This takes a file-like object for reading a pickle data stream. diff --git a/lib-python/2.7/pkgutil.py b/lib-python/2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: diff --git a/lib-python/2.7/pprint.py b/lib-python/2.7/pprint.py --- a/lib-python/2.7/pprint.py +++ b/lib-python/2.7/pprint.py @@ -144,7 +144,7 @@ return r = getattr(typ, "__repr__", None) - if issubclass(typ, dict) and r is dict.__repr__: + if issubclass(typ, dict) and r == dict.__repr__: write('{') if self._indent_per_level > 1: write((self._indent_per_level - 1) * ' ') @@ -173,10 +173,10 @@ write('}') return - if ((issubclass(typ, list) and r is list.__repr__) or - (issubclass(typ, tuple) and r is tuple.__repr__) or - (issubclass(typ, set) and r is set.__repr__) or - (issubclass(typ, frozenset) and r is frozenset.__repr__) + if ((issubclass(typ, list) and r == list.__repr__) or + (issubclass(typ, tuple) and r == tuple.__repr__) or + (issubclass(typ, set) and r == set.__repr__) or + (issubclass(typ, frozenset) and r == frozenset.__repr__) ): length = _len(object) if issubclass(typ, list): @@ -266,7 +266,7 @@ return ("%s%s%s" % (closure, sio.getvalue(), closure)), True, False r = getattr(typ, "__repr__", None) - if issubclass(typ, dict) and r is 
dict.__repr__: + if issubclass(typ, dict) and r == dict.__repr__: if not object: return "{}", True, False objid = _id(object) @@ -291,8 +291,8 @@ del context[objid] return "{%s}" % _commajoin(components), readable, recursive - if (issubclass(typ, list) and r is list.__repr__) or \ - (issubclass(typ, tuple) and r is tuple.__repr__): + if (issubclass(typ, list) and r == list.__repr__) or \ + (issubclass(typ, tuple) and r == tuple.__repr__): if issubclass(typ, list): if not object: return "[]", True, False diff --git a/lib-python/2.7/pydoc.py b/lib-python/2.7/pydoc.py --- a/lib-python/2.7/pydoc.py +++ b/lib-python/2.7/pydoc.py @@ -623,7 +623,9 @@ head, '#ffffff', '#7799ee', '<a href=".">index</a><br>' + filelink + docloc) - modules = inspect.getmembers(object, inspect.ismodule) + def isnonbuiltinmodule(obj): + return inspect.ismodule(obj) and obj is not __builtin__ + modules = inspect.getmembers(object, isnonbuiltinmodule) classes, cdict = [], {} for key, value in inspect.getmembers(object, inspect.isclass): diff --git a/lib-python/2.7/random.py b/lib-python/2.7/random.py
--- a/lib-python/2.7/random.py +++ b/lib-python/2.7/random.py @@ -41,7 +41,6 @@ from __future__ import division from warnings import warn as _warn -from types import MethodType as _MethodType, BuiltinMethodType as _BuiltinMethodType from math import log as _log, exp as _exp, pi as _pi, e as _e, ceil as _ceil from math import sqrt as _sqrt, acos as _acos, cos as _cos, sin as _sin from os import urandom as _urandom @@ -240,8 +239,7 @@ return self.randrange(a, b+1) - def _randbelow(self, n, _log=_log, int=int, _maxwidth=1L< n-1 > 2**(k-2) r = getrandbits(k) while r >= n: diff --git a/lib-python/2.7/site.py b/lib-python/2.7/site.py --- a/lib-python/2.7/site.py +++ b/lib-python/2.7/site.py @@ -75,7 +75,6 @@ USER_SITE = None USER_BASE = None - def makepath(*paths): dir = os.path.join(*paths) try: @@ -91,7 +90,10 @@ if hasattr(m, '__loader__'): continue # don't mess with a PEP 302-supplied __file__ try: - m.__file__ = os.path.abspath(m.__file__) + prev = m.__file__ + new = os.path.abspath(m.__file__) + if prev != new: + m.__file__ = new except (AttributeError, OSError): pass @@ -289,6 +291,7 @@ will find its `site-packages` subdirectory depending on the system environment, and will return a list of full paths.
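The `getsitepackages` hunk above branches on `'__pypy__' in sys.builtin_module_names`; the same test works from ordinary application code (a sketch; `running_on_pypy` is an illustrative name):

```python
import sys

# PyPy always exposes a __pypy__ built-in module and a
# sys.pypy_version_info attribute; CPython has neither.
running_on_pypy = '__pypy__' in sys.builtin_module_names
assert running_on_pypy == hasattr(sys, 'pypy_version_info')
```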
""" + is_pypy = '__pypy__' in sys.builtin_module_names sitepackages = [] seen = set() @@ -299,6 +302,10 @@ if sys.platform in ('os2emx', 'riscos'): sitepackages.append(os.path.join(prefix, "Lib", "site-packages")) + elif is_pypy: + from distutils.sysconfig import get_python_lib + sitedir = get_python_lib(standard_lib=False, prefix=prefix) + sitepackages.append(sitedir) elif os.sep == '/': sitepackages.append(os.path.join(prefix, "lib", "python" + sys.version[:3], @@ -435,22 +442,33 @@ if key == 'q': break +##def setcopyright(): +## """Set 'copyright' and 'credits' in __builtin__""" +## __builtin__.copyright = _Printer("copyright", sys.copyright) +## if sys.platform[:4] == 'java': +## __builtin__.credits = _Printer( +## "credits", +## "Jython is maintained by the Jython developers (www.jython.org).") +## else: +## __builtin__.credits = _Printer("credits", """\ +## Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands +## for supporting Python development. See www.python.org for more information.""") +## here = os.path.dirname(os.__file__) +## __builtin__.license = _Printer( +## "license", "See http://www.python.org/%.3s/license.html" % sys.version, +## ["LICENSE.txt", "LICENSE"], +## [os.path.join(here, os.pardir), here, os.curdir]) + def setcopyright(): - """Set 'copyright' and 'credits' in __builtin__""" + # XXX this is the PyPy-specific version. Should be unified with the above. __builtin__.copyright = _Printer("copyright", sys.copyright) - if sys.platform[:4] == 'java': - __builtin__.credits = _Printer( - "credits", - "Jython is maintained by the Jython developers (www.jython.org).") - else: - __builtin__.credits = _Printer("credits", """\ - Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands - for supporting Python development. 
See www.python.org for more information.""") - here = os.path.dirname(os.__file__) + __builtin__.credits = _Printer( + "credits", + "PyPy is maintained by the PyPy developers: http://pypy.org/") __builtin__.license = _Printer( - "license", "See http://www.python.org/%.3s/license.html" % sys.version, - ["LICENSE.txt", "LICENSE"], - [os.path.join(here, os.pardir), here, os.curdir]) + "license", + "See https://bitbucket.org/pypy/pypy/src/default/LICENSE") + class _Helper(object): @@ -476,7 +494,7 @@ if sys.platform == 'win32': import locale, codecs enc = locale.getdefaultlocale()[1] - if enc.startswith('cp'): # "cp***" ? + if enc is not None and enc.startswith('cp'): # "cp***" ? try: codecs.lookup(enc) except LookupError: @@ -532,9 +550,18 @@ "'import usercustomize' failed; use -v for traceback" +def import_builtin_stuff(): + """PyPy specific: pre-import a few built-in modules, because + some programs actually rely on them to be in sys.modules :-(""" + import exceptions + if 'zipimport' in sys.builtin_module_names: + import zipimport + + def main(): global ENABLE_USER_SITE + import_builtin_stuff() abs__file__() known_paths = removeduppaths() if (os.name == "posix" and sys.path and diff --git a/lib-python/2.7/socket.py b/lib-python/2.7/socket.py --- a/lib-python/2.7/socket.py +++ b/lib-python/2.7/socket.py @@ -46,8 +46,6 @@ import _socket from _socket import * -from functools import partial -from types import MethodType try: import _ssl @@ -159,11 +157,6 @@ if sys.platform == "riscos": _socketmethods = _socketmethods + ('sleeptaskw',) -# All the method names that must be delegated to either the real socket -# object or the _closedsocket object. 
-_delegate_methods = ("recv", "recvfrom", "recv_into", "recvfrom_into", - "send", "sendto") - class _closedsocket(object): __slots__ = [] def _dummy(*args): @@ -180,22 +173,43 @@ __doc__ = _realsocket.__doc__ - __slots__ = ["_sock", "__weakref__"] + list(_delegate_methods) - def __init__(self, family=AF_INET, type=SOCK_STREAM, proto=0, _sock=None): if _sock is None: _sock = _realsocket(family, type, proto) self._sock = _sock - for method in _delegate_methods: - setattr(self, method, getattr(_sock, method)) + self._io_refs = 0 + self._closed = False - def close(self, _closedsocket=_closedsocket, - _delegate_methods=_delegate_methods, setattr=setattr): + def send(self, data, flags=0): + return self._sock.send(data, flags=flags) + send.__doc__ = _realsocket.send.__doc__ + + def recv(self, buffersize, flags=0): + return self._sock.recv(buffersize, flags=flags) + recv.__doc__ = _realsocket.recv.__doc__ + + def recv_into(self, buffer, nbytes=0, flags=0): + return self._sock.recv_into(buffer, nbytes=nbytes, flags=flags) + recv_into.__doc__ = _realsocket.recv_into.__doc__ + + def recvfrom(self, buffersize, flags=0): + return self._sock.recvfrom(buffersize, flags=flags) + recvfrom.__doc__ = _realsocket.recvfrom.__doc__ + + def recvfrom_into(self, buffer, nbytes=0, flags=0): + return self._sock.recvfrom_into(buffer, nbytes=nbytes, flags=flags) + recvfrom_into.__doc__ = _realsocket.recvfrom_into.__doc__ + + def sendto(self, data, param2, param3=None): + if param3 is None: + return self._sock.sendto(data, param2) + else: + return self._sock.sendto(data, param2, param3) + sendto.__doc__ = _realsocket.sendto.__doc__ + + def close(self): # This function should not reference any globals. See issue #808164. self._sock = _closedsocket() - dummy = self._sock._dummy - for method in _delegate_methods: - setattr(self, method, dummy) close.__doc__ = _realsocket.close.__doc__ def accept(self): @@ -214,21 +228,49 @@ Return a regular file object corresponding to the socket. 
The mode and bufsize arguments are as for the built-in open() function.""" - return _fileobject(self._sock, mode, bufsize) + self._io_refs += 1 + return _fileobject(self, mode, bufsize) + + def _decref_socketios(self): + if self._io_refs > 0: + self._io_refs -= 1 + if self._closed: + self.close() + + def _real_close(self): + # This function should not reference any globals. See issue #808164. + self._sock.close() + + def close(self): + # This function should not reference any globals. See issue #808164. + self._closed = True + if self._io_refs <= 0: + self._real_close() family = property(lambda self: self._sock.family, doc="the socket family") type = property(lambda self: self._sock.type, doc="the socket type") proto = property(lambda self: self._sock.proto, doc="the socket protocol") -def meth(name,self,*args): - return getattr(self._sock,name)(*args) + # Delegate many calls to the raw socket object. + _s = ("def %(name)s(self, %(args)s): return self._sock.%(name)s(%(args)s)\n\n" + "%(name)s.__doc__ = _realsocket.%(name)s.__doc__\n") + for _m in _socketmethods: + # yupi! 
we're on pypy, all code objects have this interface + argcount = getattr(_realsocket, _m).im_func.func_code.co_argcount - 1 + exec _s % {'name': _m, 'args': ', '.join('arg%d' % i for i in range(argcount))} + del _m, _s, argcount -for _m in _socketmethods: - p = partial(meth,_m) - p.__name__ = _m - p.__doc__ = getattr(_realsocket,_m).__doc__ - m = MethodType(p,None,_socketobject) - setattr(_socketobject,_m,m) + # Delegation methods with default arguments, that the code above + # cannot handle correctly + def sendall(self, data, flags=0): + self._sock.sendall(data, flags) + sendall.__doc__ = _realsocket.sendall.__doc__ + + def getsockopt(self, level, optname, buflen=None): + if buflen is None: + return self._sock.getsockopt(level, optname) + return self._sock.getsockopt(level, optname, buflen) + getsockopt.__doc__ = _realsocket.getsockopt.__doc__ socket = SocketType = _socketobject @@ -278,8 +320,11 @@ if self._sock: self.flush() finally: - if self._close: - self._sock.close() + if self._sock: + if self._close: + self._sock.close() + else: + self._sock._decref_socketios() self._sock = None def __del__(self): diff --git a/lib-python/2.7/sqlite3/test/dbapi.py b/lib-python/2.7/sqlite3/test/dbapi.py --- a/lib-python/2.7/sqlite3/test/dbapi.py +++ b/lib-python/2.7/sqlite3/test/dbapi.py @@ -1,4 +1,4 @@ -#-*- coding: ISO-8859-1 -*- +#-*- coding: iso-8859-1 -*- # pysqlite2/test/dbapi.py: tests for DB-API compliance # # Copyright (C) 2004-2010 Gerhard H�ring @@ -332,6 +332,9 @@ def __init__(self): self.value = 5 + def __iter__(self): + return self + def next(self): if self.value == 10: raise StopIteration @@ -826,7 +829,7 @@ con = sqlite.connect(":memory:") con.close() try: - con() + con("select 1") self.fail("Should have raised a ProgrammingError") except sqlite.ProgrammingError: pass diff --git a/lib-python/2.7/sqlite3/test/regression.py b/lib-python/2.7/sqlite3/test/regression.py --- a/lib-python/2.7/sqlite3/test/regression.py +++ b/lib-python/2.7/sqlite3/test/regression.py 
@@ -264,6 +264,28 @@ """ self.assertRaises(sqlite.Warning, self.con, 1) + def CheckUpdateDescriptionNone(self): + """ + Call Cursor.update with an UPDATE query and check that it sets the + cursor's description to be None. + """ + cur = self.con.cursor() + cur.execute("CREATE TABLE foo (id INTEGER)") + cur.execute("UPDATE foo SET id = 3 WHERE id = 1") + self.assertEqual(cur.description, None) + + def CheckStatementCache(self): + cur = self.con.cursor() + cur.execute("CREATE TABLE foo (id INTEGER)") + values = [(i,) for i in xrange(5)] + cur.executemany("INSERT INTO foo (id) VALUES (?)", values) + + cur.execute("SELECT id FROM foo") + self.assertEqual(list(cur), values) + self.con.commit() + cur.execute("SELECT id FROM foo") + self.assertEqual(list(cur), values) + def suite(): regression_suite = unittest.makeSuite(RegressionTests, "Check") return unittest.TestSuite((regression_suite,)) diff --git a/lib-python/2.7/sqlite3/test/userfunctions.py b/lib-python/2.7/sqlite3/test/userfunctions.py --- a/lib-python/2.7/sqlite3/test/userfunctions.py +++ b/lib-python/2.7/sqlite3/test/userfunctions.py @@ -275,12 +275,14 @@ pass def CheckAggrNoStep(self): + # XXX it's better to raise OperationalError in order to stop + # the query earlier. 
cur = self.con.cursor() try: cur.execute("select nostep(t) from test") - self.fail("should have raised an AttributeError") - except AttributeError, e: - self.assertEqual(e.args[0], "AggrNoStep instance has no attribute 'step'") + self.fail("should have raised an OperationalError") + except sqlite.OperationalError, e: + self.assertEqual(e.args[0], "user-defined aggregate's 'step' method raised error") def CheckAggrNoFinalize(self): cur = self.con.cursor() diff --git a/lib-python/2.7/ssl.py b/lib-python/2.7/ssl.py --- a/lib-python/2.7/ssl.py +++ b/lib-python/2.7/ssl.py @@ -86,7 +86,7 @@ else: _PROTOCOL_NAMES[PROTOCOL_SSLv2] = "SSLv2" -from socket import socket, _fileobject, _delegate_methods, error as socket_error +from socket import socket, _fileobject, error as socket_error from socket import getnameinfo as _getnameinfo import base64 # for DER-to-PEM translation import errno @@ -103,14 +103,6 @@ do_handshake_on_connect=True, suppress_ragged_eofs=True, ciphers=None): socket.__init__(self, _sock=sock._sock) - # The initializer for socket overrides the methods send(), recv(), etc. - # in the instancce, which we don't need -- but we want to provide the - # methods defined in SSLSocket. 
- for attr in _delegate_methods: - try: - delattr(self, attr) - except AttributeError: - pass if certfile and not keyfile: keyfile = certfile diff --git a/lib-python/2.7/subprocess.py b/lib-python/2.7/subprocess.py --- a/lib-python/2.7/subprocess.py +++ b/lib-python/2.7/subprocess.py @@ -803,7 +803,7 @@ elif stderr == PIPE: errread, errwrite = _subprocess.CreatePipe(None, 0) elif stderr == STDOUT: - errwrite = c2pwrite + errwrite = c2pwrite.handle # pass id to not close it elif isinstance(stderr, int): errwrite = msvcrt.get_osfhandle(stderr) else: @@ -818,9 +818,13 @@ def _make_inheritable(self, handle): """Return a duplicate of handle, which is inheritable""" - return _subprocess.DuplicateHandle(_subprocess.GetCurrentProcess(), + dupl = _subprocess.DuplicateHandle(_subprocess.GetCurrentProcess(), handle, _subprocess.GetCurrentProcess(), 0, 1, _subprocess.DUPLICATE_SAME_ACCESS) + # If the initial handle was obtained with CreatePipe, close it. + if not isinstance(handle, int): + handle.Close() + return dupl def _find_w9xpopen(self): diff --git a/lib-python/2.7/sysconfig.py b/lib-python/2.7/sysconfig.py --- a/lib-python/2.7/sysconfig.py +++ b/lib-python/2.7/sysconfig.py @@ -26,6 +26,16 @@ 'scripts': '{base}/bin', 'data' : '{base}', }, + 'pypy': { + 'stdlib': '{base}/lib-python', + 'platstdlib': '{base}/lib-python', + 'purelib': '{base}/lib-python', + 'platlib': '{base}/lib-python', + 'include': '{base}/include', + 'platinclude': '{base}/include', + 'scripts': '{base}/bin', + 'data' : '{base}', + }, 'nt': { 'stdlib': '{base}/Lib', 'platstdlib': '{base}/Lib', @@ -158,7 +168,9 @@ return res def _get_default_scheme(): - if os.name == 'posix': + if '__pypy__' in sys.builtin_module_names: + return 'pypy' + elif os.name == 'posix': # the default scheme for posix is posix_prefix return 'posix_prefix' return os.name @@ -182,126 +194,9 @@ return env_base if env_base else joinuser("~", ".local") -def _parse_makefile(filename, vars=None): - """Parse a Makefile-style file. 
- - A dictionary containing name/value pairs is returned. If an - optional dictionary is passed in as the second argument, it is - used instead of a new dictionary. - """ - import re - # Regexes needed for parsing Makefile (and similar syntaxes, - # like old-style Setup files). - _variable_rx = re.compile("([a-zA-Z][a-zA-Z0-9_]+)\s*=\s*(.*)") - _findvar1_rx = re.compile(r"\$\(([A-Za-z][A-Za-z0-9_]*)\)") - _findvar2_rx = re.compile(r"\${([A-Za-z][A-Za-z0-9_]*)}") - - if vars is None: - vars = {} - done = {} - notdone = {} - - with open(filename) as f: - lines = f.readlines() - - for line in lines: - if line.startswith('#') or line.strip() == '': - continue - m = _variable_rx.match(line) - if m: - n, v = m.group(1, 2) - v = v.strip() - # `$$' is a literal `$' in make - tmpv = v.replace('$$', '') - - if "$" in tmpv: - notdone[n] = v - else: - try: - v = int(v) - except ValueError: - # insert literal `$' - done[n] = v.replace('$$', '$') - else: - done[n] = v - - # do variable interpolation here - while notdone: - for name in notdone.keys(): - value = notdone[name] - m = _findvar1_rx.search(value) or _findvar2_rx.search(value) - if m: - n = m.group(1) - found = True - if n in done: - item = str(done[n]) - elif n in notdone: - # get it on a subsequent round - found = False - elif n in os.environ: - # do it like make: fall back to environment - item = os.environ[n] - else: - done[n] = item = "" - if found: - after = value[m.end():] - value = value[:m.start()] + item + after - if "$" in after: - notdone[name] = value - else: - try: value = int(value) - except ValueError: - done[name] = value.strip() - else: - done[name] = value - del notdone[name] - else: - # bogus variable reference; just drop it since we can't deal - del notdone[name] - # strip spurious spaces - for k, v in done.items(): - if isinstance(v, str): - done[k] = v.strip() - - # save the results in the global dictionary - vars.update(done) - return vars - - -def _get_makefile_filename(): - if _PYTHON_BUILD: - 
return os.path.join(_PROJECT_BASE, "Makefile") - return os.path.join(get_path('platstdlib'), "config", "Makefile") - - def _init_posix(vars): """Initialize the module as appropriate for POSIX systems.""" - # load the installed Makefile: - makefile = _get_makefile_filename() - try: - _parse_makefile(makefile, vars) - except IOError, e: - msg = "invalid Python installation: unable to open %s" % makefile - if hasattr(e, "strerror"): - msg = msg + " (%s)" % e.strerror - raise IOError(msg) - - # load the installed pyconfig.h: - config_h = get_config_h_filename() - try: - with open(config_h) as f: - parse_config_h(f, vars) - except IOError, e: - msg = "invalid Python installation: unable to open %s" % config_h - if hasattr(e, "strerror"): - msg = msg + " (%s)" % e.strerror - raise IOError(msg) - - # On AIX, there are wrong paths to the linker scripts in the Makefile - # -- these paths are relative to the Python source, but when installed - # the scripts are in another directory. - if _PYTHON_BUILD: - vars['LDSHARED'] = vars['BLDSHARED'] + return def _init_non_posix(vars): """Initialize the module as appropriate for NT""" @@ -474,10 +369,11 @@ # patched up as well. 
'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): - flags = _CONFIG_VARS[key] - flags = re.sub('-arch\s+\w+\s', ' ', flags) - flags = flags + ' ' + arch - _CONFIG_VARS[key] = flags + if key in _CONFIG_VARS: + flags = _CONFIG_VARS[key] + flags = re.sub('-arch\s+\w+\s', ' ', flags) + flags = flags + ' ' + arch + _CONFIG_VARS[key] = flags # If we're on OSX 10.5 or later and the user tries to # compiles an extension using an SDK that is not present diff --git a/lib-python/2.7/tarfile.py b/lib-python/2.7/tarfile.py --- a/lib-python/2.7/tarfile.py +++ b/lib-python/2.7/tarfile.py @@ -1716,9 +1716,6 @@ except (ImportError, AttributeError): raise CompressionError("gzip module is not available") - if fileobj is None: - fileobj = bltn_open(name, mode + "b") - try: t = cls.taropen(name, mode, gzip.GzipFile(name, mode, compresslevel, fileobj), diff --git a/lib-python/2.7/test/list_tests.py b/lib-python/2.7/test/list_tests.py --- a/lib-python/2.7/test/list_tests.py +++ b/lib-python/2.7/test/list_tests.py @@ -45,8 +45,12 @@ self.assertEqual(str(a2), "[0, 1, 2, [...], 3]") self.assertEqual(repr(a2), "[0, 1, 2, [...], 3]") + if test_support.check_impl_detail(): + depth = sys.getrecursionlimit() + 100 + else: + depth = 1000 * 1000 # should be enough to exhaust the stack l0 = [] - for i in xrange(sys.getrecursionlimit() + 100): + for i in xrange(depth): l0 = [l0] self.assertRaises(RuntimeError, repr, l0) @@ -472,7 +476,11 @@ u += "eggs" self.assertEqual(u, self.type2test("spameggs")) - self.assertRaises(TypeError, u.__iadd__, None) + def f_iadd(u, x): + u += x + return u + + self.assertRaises(TypeError, f_iadd, u, None) def test_imul(self): u = self.type2test([0, 1]) diff --git a/lib-python/2.7/test/mapping_tests.py b/lib-python/2.7/test/mapping_tests.py --- a/lib-python/2.7/test/mapping_tests.py +++ b/lib-python/2.7/test/mapping_tests.py @@ -531,7 +531,10 @@ self.assertEqual(va, int(ka)) kb, vb = tb = b.popitem() self.assertEqual(vb, int(kb)) - self.assertTrue(not(copymode < 0 and ta != tb)) + 
if copymode < 0 and test_support.check_impl_detail(): + # popitem() is not guaranteed to be deterministic on + # all implementations + self.assertEqual(ta, tb) self.assertTrue(not a) self.assertTrue(not b) diff --git a/lib-python/2.7/test/pickletester.py b/lib-python/2.7/test/pickletester.py --- a/lib-python/2.7/test/pickletester.py +++ b/lib-python/2.7/test/pickletester.py @@ -6,7 +6,7 @@ import pickletools import copy_reg -from test.test_support import TestFailed, have_unicode, TESTFN +from test.test_support import TestFailed, have_unicode, TESTFN, impl_detail # Tests that try a number of pickle protocols should have a # for proto in protocols: @@ -949,6 +949,7 @@ "Failed protocol %d: %r != %r" % (proto, obj, loaded)) + @impl_detail("pypy does not store attribute names", pypy=False) def test_attribute_name_interning(self): # Test that attribute names of pickled objects are interned when # unpickling. @@ -1091,6 +1092,7 @@ s = StringIO.StringIO("X''.") self.assertRaises(EOFError, self.module.load, s) + @impl_detail("no full restricted mode in pypy", pypy=False) def test_restricted(self): # issue7128: cPickle failed in restricted mode builtins = {self.module.__name__: self.module, diff --git a/lib-python/2.7/test/regrtest.py b/lib-python/2.7/test/regrtest.py --- a/lib-python/2.7/test/regrtest.py +++ b/lib-python/2.7/test/regrtest.py @@ -1388,7 +1388,26 @@ test_zipimport test_zlib """, - 'openbsd3': + 'openbsd4': + """ + test_ascii_formatd + test_bsddb + test_bsddb3 + test_ctypes + test_dl + test_epoll + test_gdbm + test_locale + test_normalization + test_ossaudiodev + test_pep277 + test_tcl + test_tk + test_ttk_guionly + test_ttk_textonly + test_multiprocessing + """, + 'openbsd5': """ test_ascii_formatd test_bsddb @@ -1503,13 +1522,7 @@ return self.expected if __name__ == '__main__': - # findtestdir() gets the dirname out of __file__, so we have to make it - # absolute before changing the working directory. 
- # For example __file__ may be relative when running trace or profile. - # See issue #9323. - __file__ = os.path.abspath(__file__) - - # sanity check + # Simplification for findtestdir(). assert __file__ == os.path.abspath(sys.argv[0]) # When tests are run from the Python build directory, it is best practice diff --git a/lib-python/2.7/test/seq_tests.py b/lib-python/2.7/test/seq_tests.py --- a/lib-python/2.7/test/seq_tests.py +++ b/lib-python/2.7/test/seq_tests.py @@ -307,12 +307,18 @@ def test_bigrepeat(self): import sys - if sys.maxint <= 2147483647: - x = self.type2test([0]) - x *= 2**16 - self.assertRaises(MemoryError, x.__mul__, 2**16) - if hasattr(x, '__imul__'): - self.assertRaises(MemoryError, x.__imul__, 2**16) + # we chose an N such as 2**16 * N does not fit into a cpu word + if sys.maxint == 2147483647: + # 32 bit system + N = 2**16 + else: + # 64 bit system + N = 2**48 + x = self.type2test([0]) + x *= 2**16 + self.assertRaises(MemoryError, x.__mul__, N) + if hasattr(x, '__imul__'): + self.assertRaises(MemoryError, x.__imul__, N) def test_subscript(self): a = self.type2test([10, 11]) diff --git a/lib-python/2.7/test/string_tests.py b/lib-python/2.7/test/string_tests.py --- a/lib-python/2.7/test/string_tests.py +++ b/lib-python/2.7/test/string_tests.py @@ -1024,7 +1024,10 @@ self.checkequal('abc', 'abc', '__mul__', 1) self.checkequal('abcabcabc', 'abc', '__mul__', 3) self.checkraises(TypeError, 'abc', '__mul__') - self.checkraises(TypeError, 'abc', '__mul__', '') + class Mul(object): + def mul(self, a, b): + return a * b + self.checkraises(TypeError, Mul(), 'mul', 'abc', '') # XXX: on a 64-bit system, this doesn't raise an overflow error, # but either raises a MemoryError, or succeeds (if you have 54TiB) #self.checkraises(OverflowError, 10000*'abc', '__mul__', 2000000000) diff --git a/lib-python/2.7/test/test_abstract_numbers.py b/lib-python/2.7/test/test_abstract_numbers.py --- a/lib-python/2.7/test/test_abstract_numbers.py +++ 
b/lib-python/2.7/test/test_abstract_numbers.py @@ -40,7 +40,8 @@ c1, c2 = complex(3, 2), complex(4,1) # XXX: This is not ideal, but see the comment in math_trunc(). - self.assertRaises(AttributeError, math.trunc, c1) + # Modified to suit PyPy, which gives TypeError in all cases + self.assertRaises((AttributeError, TypeError), math.trunc, c1) self.assertRaises(TypeError, float, c1) self.assertRaises(TypeError, int, c1) diff --git a/lib-python/2.7/test/test_aifc.py b/lib-python/2.7/test/test_aifc.py --- a/lib-python/2.7/test/test_aifc.py +++ b/lib-python/2.7/test/test_aifc.py @@ -1,4 +1,4 @@ -from test.test_support import findfile, run_unittest, TESTFN +from test.test_support import findfile, run_unittest, TESTFN, impl_detail import unittest import os @@ -68,6 +68,7 @@ self.assertEqual(f.getparams(), fout.getparams()) self.assertEqual(f.readframes(5), fout.readframes(5)) + @impl_detail("PyPy has no audioop module yet", pypy=False) def test_compress(self): f = self.f = aifc.open(self.sndfilepath) fout = self.fout = aifc.open(TESTFN, 'wb') diff --git a/lib-python/2.7/test/test_array.py b/lib-python/2.7/test/test_array.py --- a/lib-python/2.7/test/test_array.py +++ b/lib-python/2.7/test/test_array.py @@ -295,9 +295,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, a.__add__, b) - - self.assertRaises(TypeError, a.__add__, "bad") + with self.assertRaises(TypeError): + a + b + with self.assertRaises(TypeError): + a + 'bad' def test_iadd(self): a = array.array(self.typecode, self.example[::-1]) @@ -316,9 +317,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, a.__add__, b) - - self.assertRaises(TypeError, a.__iadd__, "bad") + with self.assertRaises(TypeError): + a += b + with self.assertRaises(TypeError): + a += 'bad' def test_mul(self): a = 5*array.array(self.typecode, self.example) @@ -345,7 +347,8 @@ array.array(self.typecode) ) - self.assertRaises(TypeError, a.__mul__, "bad") + with self.assertRaises(TypeError): + a * 
'bad' def test_imul(self): a = array.array(self.typecode, self.example) @@ -374,7 +377,8 @@ a *= -1 self.assertEqual(a, array.array(self.typecode)) - self.assertRaises(TypeError, a.__imul__, "bad") + with self.assertRaises(TypeError): + a *= 'bad' def test_getitem(self): a = array.array(self.typecode, self.example) @@ -769,6 +773,7 @@ p = proxy(s) self.assertEqual(p.tostring(), s.tostring()) s = None + test_support.gc_collect() self.assertRaises(ReferenceError, len, p) def test_bug_782369(self): diff --git a/lib-python/2.7/test/test_ascii_formatd.py b/lib-python/2.7/test/test_ascii_formatd.py --- a/lib-python/2.7/test/test_ascii_formatd.py +++ b/lib-python/2.7/test/test_ascii_formatd.py @@ -4,6 +4,10 @@ import unittest from test.test_support import check_warnings, run_unittest, import_module +from test.test_support import check_impl_detail + +if not check_impl_detail(cpython=True): + raise unittest.SkipTest("this test is only for CPython") # Skip tests if _ctypes module does not exist import_module('_ctypes') diff --git a/lib-python/2.7/test/test_ast.py b/lib-python/2.7/test/test_ast.py --- a/lib-python/2.7/test/test_ast.py +++ b/lib-python/2.7/test/test_ast.py @@ -20,10 +20,24 @@ # These tests are compiled through "exec" # There should be at least one test per statement exec_tests = [ + # None + "None", # FunctionDef "def f(): pass", + # FunctionDef with arg + "def f(a): pass", + # FunctionDef with arg and default value + "def f(a=0): pass", + # FunctionDef with varargs + "def f(*args): pass", + # FunctionDef with kwargs + "def f(**kwargs): pass", + # FunctionDef with all kind of args + "def f(a, b=1, c=None, d=[], e={}, *args, **kwargs): pass", # ClassDef "class C:pass", + # ClassDef, new style class + "class C(object): pass", # Return "def f():return 1", # Delete @@ -68,6 +82,27 @@ "for a,b in c: pass", "[(a,b) for a,b in c]", "((a,b) for a,b in c)", + "((a,b) for (a,b) in c)", + # Multiline generator expression + """( + ( + Aa + , + Bb + ) + for + Aa + , + Bb in
Cc + )""", + # dictcomp + "{a : b for w in x for m in p if g}", + # dictcomp with naked tuple + "{a : b for v,w in x}", + # setcomp + "{r for l in x if g}", + # setcomp with naked tuple + "{r for l,m in x}", ] # These are compiled through "single" @@ -80,6 +115,8 @@ # These are compiled through "eval" # It should test all expressions eval_tests = [ + # None + "None", # BoolOp "a and b", # BinOp @@ -90,6 +127,16 @@ "lambda:None", # Dict "{ 1:2 }", + # Empty dict + "{}", + # Set + "{None,}", + # Multiline dict + """{ + 1 + : + 2 + }""", # ListComp "[a for b in c if d]", # GeneratorExp @@ -114,8 +161,14 @@ "v", # List "[1,2,3]", + # Empty list + "[]", # Tuple "1,2,3", + # Tuple + "(1,2,3)", + # Empty tuple + "()", # Combination "a.b.c.d(a.b[1:2])", @@ -141,6 +194,35 @@ elif value is not None: self._assertTrueorder(value, parent_pos) + def test_AST_objects(self): + if test_support.check_impl_detail(): + # PyPy also provides a __dict__ to the ast.AST base class. + + x = ast.AST() + try: + x.foobar = 21 + except AttributeError, e: + self.assertEquals(e.args[0], + "'_ast.AST' object has no attribute 'foobar'") + else: + self.assert_(False) + + try: + ast.AST(lineno=2) + except AttributeError, e: + self.assertEquals(e.args[0], + "'_ast.AST' object has no attribute 'lineno'") + else: + self.assert_(False) + + try: + ast.AST(2) + except TypeError, e: + self.assertEquals(e.args[0], + "_ast.AST constructor takes 0 positional arguments") + else: + self.assert_(False) + def test_snippets(self): for input, output, kind in ((exec_tests, exec_results, "exec"), (single_tests, single_results, "single"), @@ -169,6 +251,114 @@ self.assertTrue(issubclass(ast.comprehension, ast.AST)) self.assertTrue(issubclass(ast.Gt, ast.AST)) + def test_field_attr_existence(self): + for name, item in ast.__dict__.iteritems(): + if isinstance(item, type) and name != 'AST' and name[0].isupper(): # XXX: pypy does not allow abstract ast class instanciation + x = item() + if isinstance(x, ast.AST): + 
self.assertEquals(type(x._fields), tuple) + + def test_arguments(self): + x = ast.arguments() + self.assertEquals(x._fields, ('args', 'vararg', 'kwarg', 'defaults')) + try: + x.vararg + except AttributeError, e: + self.assertEquals(e.args[0], + "'arguments' object has no attribute 'vararg'") + else: + self.assert_(False) + x = ast.arguments(1, 2, 3, 4) + self.assertEquals(x.vararg, 2) + + def test_field_attr_writable(self): + x = ast.Num() + # We can assign to _fields + x._fields = 666 + self.assertEquals(x._fields, 666) + + def test_classattrs(self): + x = ast.Num() + self.assertEquals(x._fields, ('n',)) + try: + x.n + except AttributeError, e: + self.assertEquals(e.args[0], + "'Num' object has no attribute 'n'") + else: + self.assert_(False) + + x = ast.Num(42) + self.assertEquals(x.n, 42) + try: + x.lineno + except AttributeError, e: + self.assertEquals(e.args[0], + "'Num' object has no attribute 'lineno'") + else: + self.assert_(False) + + y = ast.Num() + x.lineno = y + self.assertEquals(x.lineno, y) + + try: + x.foobar + except AttributeError, e: + self.assertEquals(e.args[0], + "'Num' object has no attribute 'foobar'") + else: + self.assert_(False) + + x = ast.Num(lineno=2) + self.assertEquals(x.lineno, 2) + + x = ast.Num(42, lineno=0) + self.assertEquals(x.lineno, 0) + self.assertEquals(x._fields, ('n',)) + self.assertEquals(x.n, 42) + + self.assertRaises(TypeError, ast.Num, 1, 2) + self.assertRaises(TypeError, ast.Num, 1, 2, lineno=0) + + def test_module(self): + body = [ast.Num(42)] + x = ast.Module(body) + self.assertEquals(x.body, body) + + def test_nodeclass(self): + x = ast.BinOp() + self.assertEquals(x._fields, ('left', 'op', 'right')) + + # Zero arguments constructor explicitly allowed + x = ast.BinOp() + # Random attribute allowed too + x.foobarbaz = 5 + self.assertEquals(x.foobarbaz, 5) + + n1 = ast.Num(1) + n3 = ast.Num(3) + addop = ast.Add() + x = ast.BinOp(n1, addop, n3) + self.assertEquals(x.left, n1) + self.assertEquals(x.op, addop) + 
self.assertEquals(x.right, n3) + + x = ast.BinOp(1, 2, 3) + self.assertEquals(x.left, 1) + self.assertEquals(x.op, 2) + self.assertEquals(x.right, 3) + + x = ast.BinOp(1, 2, 3, lineno=0) + self.assertEquals(x.lineno, 0) + + def test_nodeclasses(self): + x = ast.BinOp(1, 2, 3, lineno=0) + self.assertEquals(x.left, 1) + self.assertEquals(x.op, 2) + self.assertEquals(x.right, 3) + self.assertEquals(x.lineno, 0) + def test_nodeclasses(self): x = ast.BinOp(1, 2, 3, lineno=0) self.assertEqual(x.left, 1) @@ -178,6 +368,12 @@ # node raises exception when not given enough arguments self.assertRaises(TypeError, ast.BinOp, 1, 2) + # node raises exception when given too many arguments + self.assertRaises(TypeError, ast.BinOp, 1, 2, 3, 4) + # node raises exception when not given enough arguments + self.assertRaises(TypeError, ast.BinOp, 1, 2, lineno=0) + # node raises exception when given too many arguments + self.assertRaises(TypeError, ast.BinOp, 1, 2, 3, 4, lineno=0) # can set attributes through kwargs too x = ast.BinOp(left=1, op=2, right=3, lineno=0) @@ -186,8 +382,14 @@ self.assertEqual(x.right, 3) self.assertEqual(x.lineno, 0) + # Random kwargs also allowed + x = ast.BinOp(1, 2, 3, foobarbaz=42) + self.assertEquals(x.foobarbaz, 42) + + def test_no_fields(self): # this used to fail because Sub._fields was None x = ast.Sub() + self.assertEquals(x._fields, ()) def test_pickling(self): import pickle @@ -330,8 +532,15 @@ #### EVERYTHING BELOW IS GENERATED ##### exec_results = [ +('Module', [('Expr', (1, 0), ('Name', (1, 0), 'None', ('Load',)))]), ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], None, None, []), [('Pass', (1, 9))], [])]), +('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [('Name', (1, 6), 'a', ('Param',))], None, None, []), [('Pass', (1, 10))], [])]), +('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [('Name', (1, 6), 'a', ('Param',))], None, None, [('Num', (1, 8), 0)]), [('Pass', (1, 12))], [])]), +('Module', [('FunctionDef', (1, 0), 
'f', ('arguments', [], 'args', None, []), [('Pass', (1, 14))], [])]), +('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], None, 'kwargs', []), [('Pass', (1, 17))], [])]), +('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [('Name', (1, 6), 'a', ('Param',)), ('Name', (1, 9), 'b', ('Param',)), ('Name', (1, 14), 'c', ('Param',)), ('Name', (1, 22), 'd', ('Param',)), ('Name', (1, 28), 'e', ('Param',))], 'args', 'kwargs', [('Num', (1, 11), 1), ('Name', (1, 16), 'None', ('Load',)), ('List', (1, 24), [], ('Load',)), ('Dict', (1, 30), [], [])]), [('Pass', (1, 52))], [])]), ('Module', [('ClassDef', (1, 0), 'C', [], [('Pass', (1, 8))], [])]), +('Module', [('ClassDef', (1, 0), 'C', [('Name', (1, 8), 'object', ('Load',))], [('Pass', (1, 17))], [])]), ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], None, None, []), [('Return', (1, 8), ('Num', (1, 15), 1))], [])]), ('Module', [('Delete', (1, 0), [('Name', (1, 4), 'v', ('Del',))])]), ('Module', [('Assign', (1, 0), [('Name', (1, 0), 'v', ('Store',))], ('Num', (1, 4), 1))]), @@ -355,16 +564,26 @@ ('Module', [('For', (1, 0), ('Tuple', (1, 4), [('Name', (1, 4), 'a', ('Store',)), ('Name', (1, 6), 'b', ('Store',))], ('Store',)), ('Name', (1, 11), 'c', ('Load',)), [('Pass', (1, 14))], [])]), ('Module', [('Expr', (1, 0), ('ListComp', (1, 1), ('Tuple', (1, 2), [('Name', (1, 2), 'a', ('Load',)), ('Name', (1, 4), 'b', ('Load',))], ('Load',)), [('comprehension', ('Tuple', (1, 11), [('Name', (1, 11), 'a', ('Store',)), ('Name', (1, 13), 'b', ('Store',))], ('Store',)), ('Name', (1, 18), 'c', ('Load',)), [])]))]), ('Module', [('Expr', (1, 0), ('GeneratorExp', (1, 1), ('Tuple', (1, 2), [('Name', (1, 2), 'a', ('Load',)), ('Name', (1, 4), 'b', ('Load',))], ('Load',)), [('comprehension', ('Tuple', (1, 11), [('Name', (1, 11), 'a', ('Store',)), ('Name', (1, 13), 'b', ('Store',))], ('Store',)), ('Name', (1, 18), 'c', ('Load',)), [])]))]), +('Module', [('Expr', (1, 0), ('GeneratorExp', (1, 1), ('Tuple', (1, 2), [('Name', (1, 2), 
'a', ('Load',)), ('Name', (1, 4), 'b', ('Load',))], ('Load',)), [('comprehension', ('Tuple', (1, 12), [('Name', (1, 12), 'a', ('Store',)), ('Name', (1, 14), 'b', ('Store',))], ('Store',)), ('Name', (1, 20), 'c', ('Load',)), [])]))]), +('Module', [('Expr', (1, 0), ('GeneratorExp', (2, 4), ('Tuple', (3, 4), [('Name', (3, 4), 'Aa', ('Load',)), ('Name', (5, 7), 'Bb', ('Load',))], ('Load',)), [('comprehension', ('Tuple', (8, 4), [('Name', (8, 4), 'Aa', ('Store',)), ('Name', (10, 4), 'Bb', ('Store',))], ('Store',)), ('Name', (10, 10), 'Cc', ('Load',)), [])]))]), +('Module', [('Expr', (1, 0), ('DictComp', (1, 1), ('Name', (1, 1), 'a', ('Load',)), ('Name', (1, 5), 'b', ('Load',)), [('comprehension', ('Name', (1, 11), 'w', ('Store',)), ('Name', (1, 16), 'x', ('Load',)), []), ('comprehension', ('Name', (1, 22), 'm', ('Store',)), ('Name', (1, 27), 'p', ('Load',)), [('Name', (1, 32), 'g', ('Load',))])]))]), +('Module', [('Expr', (1, 0), ('DictComp', (1, 1), ('Name', (1, 1), 'a', ('Load',)), ('Name', (1, 5), 'b', ('Load',)), [('comprehension', ('Tuple', (1, 11), [('Name', (1, 11), 'v', ('Store',)), ('Name', (1, 13), 'w', ('Store',))], ('Store',)), ('Name', (1, 18), 'x', ('Load',)), [])]))]), +('Module', [('Expr', (1, 0), ('SetComp', (1, 1), ('Name', (1, 1), 'r', ('Load',)), [('comprehension', ('Name', (1, 7), 'l', ('Store',)), ('Name', (1, 12), 'x', ('Load',)), [('Name', (1, 17), 'g', ('Load',))])]))]), +('Module', [('Expr', (1, 0), ('SetComp', (1, 1), ('Name', (1, 1), 'r', ('Load',)), [('comprehension', ('Tuple', (1, 7), [('Name', (1, 7), 'l', ('Store',)), ('Name', (1, 9), 'm', ('Store',))], ('Store',)), ('Name', (1, 14), 'x', ('Load',)), [])]))]), ] single_results = [ ('Interactive', [('Expr', (1, 0), ('BinOp', (1, 0), ('Num', (1, 0), 1), ('Add',), ('Num', (1, 2), 2)))]), ] eval_results = [ +('Expression', ('Name', (1, 0), 'None', ('Load',))), ('Expression', ('BoolOp', (1, 0), ('And',), [('Name', (1, 0), 'a', ('Load',)), ('Name', (1, 6), 'b', ('Load',))])), ('Expression', 
('BinOp', (1, 0), ('Name', (1, 0), 'a', ('Load',)), ('Add',), ('Name', (1, 4), 'b', ('Load',)))), ('Expression', ('UnaryOp', (1, 0), ('Not',), ('Name', (1, 4), 'v', ('Load',)))), ('Expression', ('Lambda', (1, 0), ('arguments', [], None, None, []), ('Name', (1, 7), 'None', ('Load',)))), ('Expression', ('Dict', (1, 0), [('Num', (1, 2), 1)], [('Num', (1, 4), 2)])), +('Expression', ('Dict', (1, 0), [], [])), +('Expression', ('Set', (1, 0), [('Name', (1, 1), 'None', ('Load',))])), +('Expression', ('Dict', (1, 0), [('Num', (2, 6), 1)], [('Num', (4, 10), 2)])), ('Expression', ('ListComp', (1, 1), ('Name', (1, 1), 'a', ('Load',)), [('comprehension', ('Name', (1, 7), 'b', ('Store',)), ('Name', (1, 12), 'c', ('Load',)), [('Name', (1, 17), 'd', ('Load',))])])), ('Expression', ('GeneratorExp', (1, 1), ('Name', (1, 1), 'a', ('Load',)), [('comprehension', ('Name', (1, 7), 'b', ('Store',)), ('Name', (1, 12), 'c', ('Load',)), [('Name', (1, 17), 'd', ('Load',))])])), ('Expression', ('Compare', (1, 0), ('Num', (1, 0), 1), [('Lt',), ('Lt',)], [('Num', (1, 4), 2), ('Num', (1, 8), 3)])), @@ -376,7 +595,10 @@ ('Expression', ('Subscript', (1, 0), ('Name', (1, 0), 'a', ('Load',)), ('Slice', ('Name', (1, 2), 'b', ('Load',)), ('Name', (1, 4), 'c', ('Load',)), None), ('Load',))), ('Expression', ('Name', (1, 0), 'v', ('Load',))), ('Expression', ('List', (1, 0), [('Num', (1, 1), 1), ('Num', (1, 3), 2), ('Num', (1, 5), 3)], ('Load',))), +('Expression', ('List', (1, 0), [], ('Load',))), ('Expression', ('Tuple', (1, 0), [('Num', (1, 0), 1), ('Num', (1, 2), 2), ('Num', (1, 4), 3)], ('Load',))), +('Expression', ('Tuple', (1, 1), [('Num', (1, 1), 1), ('Num', (1, 3), 2), ('Num', (1, 5), 3)], ('Load',))), +('Expression', ('Tuple', (1, 0), [], ('Load',))), ('Expression', ('Call', (1, 0), ('Attribute', (1, 0), ('Attribute', (1, 0), ('Attribute', (1, 0), ('Name', (1, 0), 'a', ('Load',)), 'b', ('Load',)), 'c', ('Load',)), 'd', ('Load',)), [('Subscript', (1, 8), ('Attribute', (1, 8), ('Name', (1, 8), 'a', 
('Load',)), 'b', ('Load',)), ('Slice', ('Num', (1, 12), 1), ('Num', (1, 14), 2), None), ('Load',))], [], None, None)), ] main() diff --git a/lib-python/2.7/test/test_builtin.py b/lib-python/2.7/test/test_builtin.py --- a/lib-python/2.7/test/test_builtin.py +++ b/lib-python/2.7/test/test_builtin.py @@ -3,7 +3,8 @@ import platform import unittest from test.test_support import fcmp, have_unicode, TESTFN, unlink, \ - run_unittest, check_py3k_warnings + run_unittest, check_py3k_warnings, \ + check_impl_detail import warnings from operator import neg @@ -247,12 +248,14 @@ self.assertRaises(TypeError, compile) self.assertRaises(ValueError, compile, 'print 42\n', '', 'badmode') self.assertRaises(ValueError, compile, 'print 42\n', '', 'single', 0xff) - self.assertRaises(TypeError, compile, chr(0), 'f', 'exec') + if check_impl_detail(cpython=True): + self.assertRaises(TypeError, compile, chr(0), 'f', 'exec') self.assertRaises(TypeError, compile, 'pass', '?', 'exec', mode='eval', source='0', filename='tmp') if have_unicode: compile(unicode('print u"\xc3\xa5"\n', 'utf8'), '', 'exec') - self.assertRaises(TypeError, compile, unichr(0), 'f', 'exec') + if check_impl_detail(cpython=True): + self.assertRaises(TypeError, compile, unichr(0), 'f', 'exec') self.assertRaises(ValueError, compile, unicode('a = 1'), 'f', 'bad') @@ -395,12 +398,16 @@ self.assertEqual(eval('dir()', g, m), list('xyz')) self.assertEqual(eval('globals()', g, m), g) self.assertEqual(eval('locals()', g, m), m) - self.assertRaises(TypeError, eval, 'a', m) + # on top of CPython, the first dictionary (the globals) has to + # be a real dict. This is not the case on top of PyPy. 
+ if check_impl_detail(pypy=False): + self.assertRaises(TypeError, eval, 'a', m) + class A: "Non-mapping" pass m = A() - self.assertRaises(TypeError, eval, 'a', g, m) + self.assertRaises((TypeError, AttributeError), eval, 'a', g, m) # Verify that dict subclasses work as well class D(dict): @@ -491,9 +498,10 @@ execfile(TESTFN, globals, locals) self.assertEqual(locals['z'], 2) + self.assertRaises(TypeError, execfile, TESTFN, {}, ()) unlink(TESTFN) self.assertRaises(TypeError, execfile) - self.assertRaises(TypeError, execfile, TESTFN, {}, ()) + self.assertRaises((TypeError, IOError), execfile, TESTFN, {}, ()) import os self.assertRaises(IOError, execfile, os.curdir) self.assertRaises(IOError, execfile, "I_dont_exist") @@ -1108,7 +1116,8 @@ def __cmp__(self, other): raise RuntimeError __hash__ = None # Invalid cmp makes this unhashable - self.assertRaises(RuntimeError, range, a, a + 1, badzero(1)) + if check_impl_detail(cpython=True): + self.assertRaises(RuntimeError, range, a, a + 1, badzero(1)) # Reject floats. self.assertRaises(TypeError, range, 1., 1., 1.) diff --git a/lib-python/2.7/test/test_bytes.py b/lib-python/2.7/test/test_bytes.py --- a/lib-python/2.7/test/test_bytes.py +++ b/lib-python/2.7/test/test_bytes.py @@ -694,6 +694,7 @@ self.assertEqual(b, b1) self.assertTrue(b is b1) + @test.test_support.impl_detail("undocumented bytes.__alloc__()") def test_alloc(self): b = bytearray() alloc = b.__alloc__() @@ -821,6 +822,8 @@ self.assertEqual(b, b"") self.assertEqual(c, b"") + @test.test_support.impl_detail( + "resizing semantics of CPython rely on refcounting") def test_resize_forbidden(self): # #4509: can't resize a bytearray when there are buffer exports, even # if it wouldn't reallocate the underlying buffer. 
@@ -853,6 +856,26 @@ self.assertRaises(BufferError, delslice) self.assertEqual(b, orig) + @test.test_support.impl_detail("resizing semantics", cpython=False) + def test_resize_forbidden_non_cpython(self): + # on non-CPython implementations, we cannot prevent changes to + # bytearrays just because there are buffers around. Instead, + # we get (on PyPy) a buffer that follows the changes and resizes. + b = bytearray(range(10)) + for v in [memoryview(b), buffer(b)]: + b[5] = 99 + self.assertIn(v[5], (99, chr(99))) + b[5] = 100 + b += b + b += b + b += b + self.assertEquals(len(v), 80) + self.assertIn(v[5], (100, chr(100))) + self.assertIn(v[79], (9, chr(9))) + del b[10:] + self.assertRaises(IndexError, lambda: v[10]) + self.assertEquals(len(v), 10) + def test_empty_bytearray(self): # Issue #7561: operations on empty bytearrays could crash in many # situations, due to a fragile implementation of the diff --git a/lib-python/2.7/test/test_bz2.py b/lib-python/2.7/test/test_bz2.py --- a/lib-python/2.7/test/test_bz2.py +++ b/lib-python/2.7/test/test_bz2.py @@ -50,6 +50,7 @@ self.filename = TESTFN def tearDown(self): + test_support.gc_collect() if os.path.isfile(self.filename): os.unlink(self.filename) @@ -246,6 +247,8 @@ for i in xrange(10000): o = BZ2File(self.filename) del o + if i % 100 == 0: + test_support.gc_collect() def testOpenNonexistent(self): # "Test opening a nonexistent file" @@ -310,6 +313,7 @@ for t in threads: t.join() + @test_support.impl_detail() def testMixedIterationReads(self): # Issue #8397: mixed iteration and reads should be forbidden. 
with bz2.BZ2File(self.filename, 'wb') as f: diff --git a/lib-python/2.7/test/test_cmd_line_script.py b/lib-python/2.7/test/test_cmd_line_script.py --- a/lib-python/2.7/test/test_cmd_line_script.py +++ b/lib-python/2.7/test/test_cmd_line_script.py @@ -112,6 +112,8 @@ self._check_script(script_dir, script_name, script_dir, '') def test_directory_compiled(self): + if test.test_support.check_impl_detail(pypy=True): + raise unittest.SkipTest("pypy won't load lone .pyc files") with temp_dir() as script_dir: script_name = _make_test_script(script_dir, '__main__') compiled_name = compile_script(script_name) @@ -173,6 +175,8 @@ script_name, 'test_pkg') def test_package_compiled(self): + if test.test_support.check_impl_detail(pypy=True): + raise unittest.SkipTest("pypy won't load lone .pyc files") with temp_dir() as script_dir: pkg_dir = os.path.join(script_dir, 'test_pkg') make_pkg(pkg_dir) diff --git a/lib-python/2.7/test/test_code.py b/lib-python/2.7/test/test_code.py --- a/lib-python/2.7/test/test_code.py +++ b/lib-python/2.7/test/test_code.py @@ -82,7 +82,7 @@ import unittest import weakref -import _testcapi +from test import test_support def consts(t): @@ -104,7 +104,9 @@ class CodeTest(unittest.TestCase): + @test_support.impl_detail("test for PyCode_NewEmpty") def test_newempty(self): + import _testcapi co = _testcapi.code_newempty("filename", "funcname", 15) self.assertEqual(co.co_filename, "filename") self.assertEqual(co.co_name, "funcname") @@ -132,6 +134,7 @@ coderef = weakref.ref(f.__code__, callback) self.assertTrue(bool(coderef())) del f + test_support.gc_collect() self.assertFalse(bool(coderef())) self.assertTrue(self.called) diff --git a/lib-python/2.7/test/test_codeop.py b/lib-python/2.7/test/test_codeop.py --- a/lib-python/2.7/test/test_codeop.py +++ b/lib-python/2.7/test/test_codeop.py @@ -3,7 +3,7 @@ Nick Mathewson """ import unittest -from test.test_support import run_unittest, is_jython +from test.test_support import run_unittest, is_jython, 
check_impl_detail from codeop import compile_command, PyCF_DONT_IMPLY_DEDENT @@ -270,7 +270,9 @@ ai("a = 'a\\\n") ai("a = 1","eval") - ai("a = (","eval") + if check_impl_detail(): # on PyPy it asks for more data, which is not + ai("a = (","eval") # completely correct but hard to fix and + # really a detail (in my opinion ) ai("]","eval") ai("())","eval") ai("[}","eval") diff --git a/lib-python/2.7/test/test_coercion.py b/lib-python/2.7/test/test_coercion.py --- a/lib-python/2.7/test/test_coercion.py +++ b/lib-python/2.7/test/test_coercion.py @@ -1,6 +1,7 @@ import copy import unittest -from test.test_support import run_unittest, TestFailed, check_warnings +from test.test_support import ( + run_unittest, TestFailed, check_warnings, check_impl_detail) # Fake a number that implements numeric methods through __coerce__ @@ -306,12 +307,18 @@ self.assertNotEqual(cmp(u'fish', evil_coercer), 0) self.assertNotEqual(cmp(slice(1), evil_coercer), 0) # ...but that this still works - class WackyComparer(object): - def __cmp__(slf, other): - self.assertTrue(other == 42, 'expected evil_coercer, got %r' % other) - return 0 - __hash__ = None # Invalid cmp makes this unhashable - self.assertEqual(cmp(WackyComparer(), evil_coercer), 0) + if check_impl_detail(): + # NB. I (arigo) would consider the following as implementation- + # specific. For example, in CPython, if we replace 42 with 42.0 + # both below and in CoerceTo() above, then the test fails. This + # hints that the behavior is really dependent on some obscure + # internal details. 
+ class WackyComparer(object): + def __cmp__(slf, other): + self.assertTrue(other == 42, 'expected evil_coercer, got %r' % other) + return 0 + __hash__ = None # Invalid cmp makes this unhashable + self.assertEqual(cmp(WackyComparer(), evil_coercer), 0) # ...and classic classes too, since that code path is a little different class ClassicWackyComparer: def __cmp__(slf, other): diff --git a/lib-python/2.7/test/test_compile.py b/lib-python/2.7/test/test_compile.py --- a/lib-python/2.7/test/test_compile.py +++ b/lib-python/2.7/test/test_compile.py @@ -3,6 +3,7 @@ import _ast from test import test_support import textwrap +from test.test_support import check_impl_detail class TestSpecifics(unittest.TestCase): @@ -90,12 +91,13 @@ self.assertEqual(m.results, ('z', g)) exec 'z = locals()' in g, m self.assertEqual(m.results, ('z', m)) - try: - exec 'z = b' in m - except TypeError: - pass - else: - self.fail('Did not validate globals as a real dict') + if check_impl_detail(): + try: + exec 'z = b' in m + except TypeError: + pass + else: + self.fail('Did not validate globals as a real dict') class A: "Non-mapping" diff --git a/lib-python/2.7/test/test_copy.py b/lib-python/2.7/test/test_copy.py --- a/lib-python/2.7/test/test_copy.py +++ b/lib-python/2.7/test/test_copy.py @@ -637,6 +637,7 @@ self.assertEqual(v[c], d) self.assertEqual(len(v), 2) del c, d + test_support.gc_collect() self.assertEqual(len(v), 1) x, y = C(), C() # The underlying containers are decoupled @@ -666,6 +667,7 @@ self.assertEqual(v[a].i, b.i) self.assertEqual(v[c].i, d.i) del c + test_support.gc_collect() self.assertEqual(len(v), 1) def test_deepcopy_weakvaluedict(self): @@ -689,6 +691,7 @@ self.assertTrue(t is d) del x, y, z, t del d + test_support.gc_collect() self.assertEqual(len(v), 1) def test_deepcopy_bound_method(self): diff --git a/lib-python/2.7/test/test_cpickle.py b/lib-python/2.7/test/test_cpickle.py --- a/lib-python/2.7/test/test_cpickle.py +++ b/lib-python/2.7/test/test_cpickle.py @@ -61,27 
+61,27 @@ error = cPickle.BadPickleGet def test_recursive_list(self): - self.assertRaises(ValueError, + self.assertRaises((ValueError, RuntimeError), AbstractPickleTests.test_recursive_list, self) def test_recursive_tuple(self): - self.assertRaises(ValueError, + self.assertRaises((ValueError, RuntimeError), AbstractPickleTests.test_recursive_tuple, self) def test_recursive_inst(self): - self.assertRaises(ValueError, + self.assertRaises((ValueError, RuntimeError), AbstractPickleTests.test_recursive_inst, self) def test_recursive_dict(self): - self.assertRaises(ValueError, + self.assertRaises((ValueError, RuntimeError), AbstractPickleTests.test_recursive_dict, self) def test_recursive_multi(self): - self.assertRaises(ValueError, + self.assertRaises((ValueError, RuntimeError), AbstractPickleTests.test_recursive_multi, self) diff --git a/lib-python/2.7/test/test_csv.py b/lib-python/2.7/test/test_csv.py --- a/lib-python/2.7/test/test_csv.py +++ b/lib-python/2.7/test/test_csv.py @@ -54,8 +54,10 @@ self.assertEqual(obj.dialect.skipinitialspace, False) self.assertEqual(obj.dialect.strict, False) # Try deleting or changing attributes (they are read-only) - self.assertRaises(TypeError, delattr, obj.dialect, 'delimiter') - self.assertRaises(TypeError, setattr, obj.dialect, 'delimiter', ':') + self.assertRaises((TypeError, AttributeError), delattr, obj.dialect, + 'delimiter') + self.assertRaises((TypeError, AttributeError), setattr, obj.dialect, + 'delimiter', ':') self.assertRaises(AttributeError, delattr, obj.dialect, 'quoting') self.assertRaises(AttributeError, setattr, obj.dialect, 'quoting', None) diff --git a/lib-python/2.7/test/test_deque.py b/lib-python/2.7/test/test_deque.py --- a/lib-python/2.7/test/test_deque.py +++ b/lib-python/2.7/test/test_deque.py @@ -109,7 +109,7 @@ self.assertEqual(deque('abc', maxlen=4).maxlen, 4) self.assertEqual(deque('abc', maxlen=2).maxlen, 2) self.assertEqual(deque('abc', maxlen=0).maxlen, 0) - with self.assertRaises(AttributeError): + 
with self.assertRaises((AttributeError, TypeError)): d = deque('abc') d.maxlen = 10 @@ -352,7 +352,10 @@ for match in (True, False): d = deque(['ab']) d.extend([MutateCmp(d, match), 'c']) - self.assertRaises(IndexError, d.remove, 'c') + # On CPython we get IndexError: deque mutated during remove(). + # Why is it an IndexError during remove() only??? + # On PyPy it is a RuntimeError, as in the other operations. + self.assertRaises((IndexError, RuntimeError), d.remove, 'c') self.assertEqual(d, deque()) def test_repr(self): @@ -514,7 +517,7 @@ container = reversed(deque([obj, 1])) obj.x = iter(container) del obj, container - gc.collect() + test_support.gc_collect() self.assertTrue(ref() is None, "Cycle was not collected") class TestVariousIteratorArgs(unittest.TestCase): @@ -630,6 +633,7 @@ p = weakref.proxy(d) self.assertEqual(str(p), str(d)) d = None + test_support.gc_collect() self.assertRaises(ReferenceError, str, p) def test_strange_subclass(self): diff --git a/lib-python/2.7/test/test_descr.py b/lib-python/2.7/test/test_descr.py --- a/lib-python/2.7/test/test_descr.py +++ b/lib-python/2.7/test/test_descr.py @@ -2,6 +2,7 @@ import sys import types import unittest +import popen2 # trigger early the warning from popen2.py from copy import deepcopy from test import test_support @@ -1128,7 +1129,7 @@ # Test lookup leaks [SF bug 572567] import gc - if hasattr(gc, 'get_objects'): + if test_support.check_impl_detail(): class G(object): def __cmp__(self, other): return 0 @@ -1741,6 +1742,10 @@ raise MyException for name, runner, meth_impl, ok, env in specials: + if name == '__length_hint__' or name == '__sizeof__': + if not test_support.check_impl_detail(): + continue + class X(Checker): pass for attr, obj in env.iteritems(): @@ -1980,7 +1985,9 @@ except TypeError, msg: self.assertTrue(str(msg).find("weak reference") >= 0) else: - self.fail("weakref.ref(no) should be illegal") + if test_support.check_impl_detail(pypy=False): + self.fail("weakref.ref(no) should be 
illegal") + #else: pypy supports taking weakrefs to some more objects class Weak(object): __slots__ = ['foo', '__weakref__'] yes = Weak() @@ -3092,7 +3099,16 @@ class R(J): __slots__ = ["__dict__", "__weakref__"] - for cls, cls2 in ((G, H), (G, I), (I, H), (Q, R), (R, Q)): + if test_support.check_impl_detail(pypy=False): + lst = ((G, H), (G, I), (I, H), (Q, R), (R, Q)) + else: + # Not supported in pypy: changing the __class__ of an object + # to another __class__ that just happens to have the same slots. + # If needed, we can add the feature, but what we'll likely do + # then is to allow mostly any __class__ assignment, even if the + # classes have different __slots__, because it's easier. + lst = ((Q, R), (R, Q)) + for cls, cls2 in lst: x = cls() x.a = 1 x.__class__ = cls2 @@ -3175,7 +3191,8 @@ except TypeError: pass else: - self.fail("%r's __dict__ can be modified" % cls) + if test_support.check_impl_detail(pypy=False): + self.fail("%r's __dict__ can be modified" % cls) # Modules also disallow __dict__ assignment class Module1(types.ModuleType, Base): @@ -4383,13 +4400,10 @@ self.assertTrue(l.__add__ != [5].__add__) self.assertTrue(l.__add__ != l.__mul__) self.assertTrue(l.__add__.__name__ == '__add__') - if hasattr(l.__add__, '__self__'): - # CPython - self.assertTrue(l.__add__.__self__ is l) + self.assertTrue(l.__add__.__self__ is l) + if hasattr(l.__add__, '__objclass__'): # CPython self.assertTrue(l.__add__.__objclass__ is list) - else: - # Python implementations where [].__add__ is a normal bound method - self.assertTrue(l.__add__.im_self is l) + else: # PyPy self.assertTrue(l.__add__.im_class is list) self.assertEqual(l.__add__.__doc__, list.__add__.__doc__) try: @@ -4578,8 +4592,12 @@ str.split(fake_str) # call a slot wrapper descriptor - with self.assertRaises(TypeError): - str.__add__(fake_str, "abc") + try: + r = str.__add__(fake_str, "abc") + except TypeError: + pass + else: + self.assertEqual(r, NotImplemented) class
DictProxyTests(unittest.TestCase): diff --git a/lib-python/2.7/test/test_descrtut.py b/lib-python/2.7/test/test_descrtut.py --- a/lib-python/2.7/test/test_descrtut.py +++ b/lib-python/2.7/test/test_descrtut.py @@ -172,46 +172,12 @@ AttributeError: 'list' object has no attribute '__methods__' >>> -Instead, you can get the same information from the list type: +Instead, you can get the same information from the list type +(the following example filters out the numerous method names +starting with '_'): - >>> pprint.pprint(dir(list)) # like list.__dict__.keys(), but sorted - ['__add__', - '__class__', - '__contains__', - '__delattr__', - '__delitem__', - '__delslice__', - '__doc__', - '__eq__', - '__format__', - '__ge__', - '__getattribute__', - '__getitem__', - '__getslice__', - '__gt__', - '__hash__', - '__iadd__', - '__imul__', - '__init__', - '__iter__', - '__le__', - '__len__', - '__lt__', - '__mul__', - '__ne__', - '__new__', - '__reduce__', - '__reduce_ex__', - '__repr__', - '__reversed__', - '__rmul__', - '__setattr__', - '__setitem__', - '__setslice__', - '__sizeof__', - '__str__', - '__subclasshook__', - 'append', + >>> pprint.pprint([name for name in dir(list) if not name.startswith('_')]) + ['append', 'count', 'extend', 'index', diff --git a/lib-python/2.7/test/test_dict.py b/lib-python/2.7/test/test_dict.py --- a/lib-python/2.7/test/test_dict.py +++ b/lib-python/2.7/test/test_dict.py @@ -319,7 +319,8 @@ self.assertEqual(va, int(ka)) kb, vb = tb = b.popitem() self.assertEqual(vb, int(kb)) - self.assertFalse(copymode < 0 and ta != tb) + if test_support.check_impl_detail(): + self.assertFalse(copymode < 0 and ta != tb) self.assertFalse(a) self.assertFalse(b) diff --git a/lib-python/2.7/test/test_dis.py b/lib-python/2.7/test/test_dis.py --- a/lib-python/2.7/test/test_dis.py +++ b/lib-python/2.7/test/test_dis.py @@ -56,8 +56,8 @@ %-4d 0 LOAD_CONST 1 (0) 3 POP_JUMP_IF_TRUE 38 6 LOAD_GLOBAL 0 (AssertionError) - 9 BUILD_LIST 0 - 12 LOAD_FAST 0 (x) + 9 LOAD_FAST 0 
(x) + 12 BUILD_LIST_FROM_ARG 0 15 GET_ITER >> 16 FOR_ITER 12 (to 31) 19 STORE_FAST 1 (s) diff --git a/lib-python/2.7/test/test_doctest.py b/lib-python/2.7/test/test_doctest.py --- a/lib-python/2.7/test/test_doctest.py +++ b/lib-python/2.7/test/test_doctest.py @@ -782,7 +782,7 @@ ... >>> x = 12 ... >>> print x//0 ... Traceback (most recent call last): - ... ZeroDivisionError: integer division or modulo by zero + ... ZeroDivisionError: integer division by zero ... ''' >>> test = doctest.DocTestFinder().find(f)[0] >>> doctest.DocTestRunner(verbose=False).run(test) @@ -799,7 +799,7 @@ ... >>> print 'pre-exception output', x//0 ... pre-exception output ... Traceback (most recent call last): - ... ZeroDivisionError: integer division or modulo by zero + ... ZeroDivisionError: integer division by zero ... ''' >>> test = doctest.DocTestFinder().find(f)[0] >>> doctest.DocTestRunner(verbose=False).run(test) @@ -810,7 +810,7 @@ print 'pre-exception output', x//0 Exception raised: ... - ZeroDivisionError: integer division or modulo by zero + ZeroDivisionError: integer division by zero TestResults(failed=1, attempted=2) Exception messages may contain newlines: @@ -978,7 +978,7 @@ Exception raised: Traceback (most recent call last): ... - ZeroDivisionError: integer division or modulo by zero + ZeroDivisionError: integer division by zero TestResults(failed=1, attempted=1) """ def displayhook(): r""" @@ -1924,7 +1924,7 @@ > (1)() -> calls_set_trace() (Pdb) print foo - *** NameError: name 'foo' is not defined + *** NameError: global name 'foo' is not defined (Pdb) continue TestResults(failed=0, attempted=2) """ @@ -2229,7 +2229,7 @@ favorite_color Exception raised: ... - NameError: name 'favorite_color' is not defined + NameError: global name 'favorite_color' is not defined @@ -2289,7 +2289,7 @@ favorite_color Exception raised: ... 
- NameError: name 'favorite_color' is not defined + NameError: global name 'favorite_color' is not defined ********************************************************************** 1 items had failures: 1 of 2 in test_doctest.txt @@ -2382,7 +2382,7 @@ favorite_color Exception raised: ... - NameError: name 'favorite_color' is not defined + NameError: global name 'favorite_color' is not defined TestResults(failed=1, attempted=2) >>> doctest.master = None # Reset master. diff --git a/lib-python/2.7/test/test_dumbdbm.py b/lib-python/2.7/test/test_dumbdbm.py --- a/lib-python/2.7/test/test_dumbdbm.py +++ b/lib-python/2.7/test/test_dumbdbm.py @@ -107,9 +107,11 @@ f.close() # Mangle the file by adding \r before each newline - data = open(_fname + '.dir').read() + with open(_fname + '.dir') as f: + data = f.read() data = data.replace('\n', '\r\n') - open(_fname + '.dir', 'wb').write(data) + with open(_fname + '.dir', 'wb') as f: + f.write(data) f = dumbdbm.open(_fname) self.assertEqual(f['1'], 'hello') diff --git a/lib-python/2.7/test/test_extcall.py b/lib-python/2.7/test/test_extcall.py --- a/lib-python/2.7/test/test_extcall.py +++ b/lib-python/2.7/test/test_extcall.py @@ -90,19 +90,19 @@ >>> class Nothing: pass ... - >>> g(*Nothing()) + >>> g(*Nothing()) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: g() argument after * must be a sequence, not instance + TypeError: ...argument after * must be a sequence, not instance >>> class Nothing: ... def __len__(self): return 5 ... - >>> g(*Nothing()) + >>> g(*Nothing()) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: g() argument after * must be a sequence, not instance + TypeError: ...argument after * must be a sequence, not instance >>> class Nothing(): ... def __len__(self): return 5 @@ -154,52 +154,50 @@ ... TypeError: g() got multiple values for keyword argument 'x' - >>> f(**{1:2}) + >>> f(**{1:2}) #doctest: +ELLIPSIS Traceback (most recent call last): ... 
- TypeError: f() keywords must be strings + TypeError: ...keywords must be strings >>> h(**{'e': 2}) Traceback (most recent call last): ... TypeError: h() got an unexpected keyword argument 'e' - >>> h(*h) + >>> h(*h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: h() argument after * must be a sequence, not function + TypeError: ...argument after * must be a sequence, not function - >>> dir(*h) + >>> dir(*h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: dir() argument after * must be a sequence, not function + TypeError: ...argument after * must be a sequence, not function - >>> None(*h) + >>> None(*h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: NoneType object argument after * must be a sequence, \ -not function + TypeError: ...argument after * must be a sequence, not function - >>> h(**h) + >>> h(**h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: h() argument after ** must be a mapping, not function + TypeError: ...argument after ** must be a mapping, not function - >>> dir(**h) + >>> dir(**h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: dir() argument after ** must be a mapping, not function + TypeError: ...argument after ** must be a mapping, not function - >>> None(**h) + >>> None(**h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: NoneType object argument after ** must be a mapping, \ -not function + TypeError: ...argument after ** must be a mapping, not function - >>> dir(b=1, **{'b': 1}) + >>> dir(b=1, **{'b': 1}) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: dir() got multiple values for keyword argument 'b' + TypeError: ...got multiple values for keyword argument 'b' Another helper function @@ -247,10 +245,10 @@ ... False True - >>> id(1, **{'foo': 1}) + >>> id(1, **{'foo': 1}) #doctest: +ELLIPSIS Traceback (most recent call last): ... 
- TypeError: id() takes no keyword arguments + TypeError: id() ... keyword argument... A corner case of keyword dictionary items being deleted during the function call setup. See . diff --git a/lib-python/2.7/test/test_fcntl.py b/lib-python/2.7/test/test_fcntl.py --- a/lib-python/2.7/test/test_fcntl.py +++ b/lib-python/2.7/test/test_fcntl.py @@ -32,7 +32,7 @@ 'freebsd2', 'freebsd3', 'freebsd4', 'freebsd5', 'freebsd6', 'freebsd7', 'freebsd8', 'bsdos2', 'bsdos3', 'bsdos4', - 'openbsd', 'openbsd2', 'openbsd3', 'openbsd4'): + 'openbsd', 'openbsd2', 'openbsd3', 'openbsd4', 'openbsd5'): if struct.calcsize('l') == 8: off_t = 'l' pid_t = 'i' diff --git a/lib-python/2.7/test/test_file.py b/lib-python/2.7/test/test_file.py --- a/lib-python/2.7/test/test_file.py +++ b/lib-python/2.7/test/test_file.py @@ -12,7 +12,7 @@ import io import _pyio as pyio -from test.test_support import TESTFN, run_unittest +from test.test_support import TESTFN, run_unittest, gc_collect from UserList import UserList class AutoFileTests(unittest.TestCase): @@ -33,6 +33,7 @@ self.assertEqual(self.f.tell(), p.tell()) self.f.close() self.f = None + gc_collect() self.assertRaises(ReferenceError, getattr, p, 'tell') def testAttributes(self): @@ -157,7 +158,12 @@ def testStdin(self): # This causes the interpreter to exit on OSF1 v5.1. if sys.platform != 'osf1V5': - self.assertRaises((IOError, ValueError), sys.stdin.seek, -1) + if sys.stdin.isatty(): + self.assertRaises((IOError, ValueError), sys.stdin.seek, -1) + else: + print(( + ' Skipping sys.stdin.seek(-1): stdin is not a tty.' + ' Test manually.'), file=sys.__stdout__) else: print(( ' Skipping sys.stdin.seek(-1), it may crash the interpreter.' 
diff --git a/lib-python/2.7/test/test_file2k.py b/lib-python/2.7/test/test_file2k.py --- a/lib-python/2.7/test/test_file2k.py +++ b/lib-python/2.7/test/test_file2k.py @@ -11,7 +11,7 @@ threading = None from test import test_support -from test.test_support import TESTFN, run_unittest +from test.test_support import TESTFN, run_unittest, gc_collect from UserList import UserList class AutoFileTests(unittest.TestCase): @@ -32,6 +32,7 @@ self.assertEqual(self.f.tell(), p.tell()) self.f.close() self.f = None + gc_collect() self.assertRaises(ReferenceError, getattr, p, 'tell') def testAttributes(self): @@ -116,8 +117,12 @@ for methodname in methods: method = getattr(self.f, methodname) + args = {'readinto': (bytearray(''),), + 'seek': (0,), + 'write': ('',), + }.get(methodname, ()) # should raise on closed file - self.assertRaises(ValueError, method) + self.assertRaises(ValueError, method, *args) with test_support.check_py3k_warnings(): for methodname in deprecated_methods: method = getattr(self.f, methodname) @@ -216,7 +221,12 @@ def testStdin(self): # This causes the interpreter to exit on OSF1 v5.1. if sys.platform != 'osf1V5': - self.assertRaises(IOError, sys.stdin.seek, -1) + if sys.stdin.isatty(): + self.assertRaises(IOError, sys.stdin.seek, -1) + else: + print >>sys.__stdout__, ( + ' Skipping sys.stdin.seek(-1): stdin is not a tty.' + ' Test manually.') else: print >>sys.__stdout__, ( ' Skipping sys.stdin.seek(-1), it may crash the interpreter.' @@ -336,8 +346,9 @@ except ValueError: pass else: - self.fail("%s%r after next() didn't raise ValueError" % - (methodname, args)) + if test_support.check_impl_detail(): + self.fail("%s%r after next() didn't raise ValueError" % + (methodname, args)) f.close() # Test to see if harmless (by accident) mixing of read* and @@ -388,6 +399,7 @@ if lines != testlines: self.fail("readlines() after next() with empty buffer " "failed.
Got %r, expected %r" % (line, testline)) + f.close() # Reading after iteration hit EOF shouldn't hurt either f = open(TESTFN) try: @@ -438,6 +450,9 @@ self.close_count = 0 self.close_success_count = 0 self.use_buffering = False + # to prevent running out of file descriptors on PyPy, + # we only keep the 50 most recent files open + self.all_files = [None] * 50 def tearDown(self): if self.f: @@ -453,9 +468,14 @@ def _create_file(self): if self.use_buffering: - self.f = open(self.filename, "w+", buffering=1024*16) + f = open(self.filename, "w+", buffering=1024*16) else: - self.f = open(self.filename, "w+") + f = open(self.filename, "w+") + self.f = f + self.all_files.append(f) + oldf = self.all_files.pop(0) + if oldf is not None: + oldf.close() def _close_file(self): with self._count_lock: @@ -496,7 +516,6 @@ def _test_close_open_io(self, io_func, nb_workers=5): def worker(): - self._create_file() funcs = itertools.cycle(( lambda: io_func(), lambda: self._close_and_reopen_file(), @@ -508,7 +527,11 @@ f() except (IOError, ValueError): pass + self._create_file() self._run_workers(worker, nb_workers) + # make sure that all files can be closed now + del self.all_files + gc_collect() if test_support.verbose: # Useful verbose statistics when tuning this test to take # less time to run but still ensuring that its still useful. 
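The all_files ring added to setUp above bounds the number of simultaneously open descriptors: without reference counting, abandoned file objects are only closed at the next collection, so the test keeps a fixed window of recent files open and explicitly closes the oldest. A rough standalone sketch of the same idea (FileWindow and its size are invented for illustration):

```python
import os
import tempfile

class FileWindow(object):
    """Keep at most `size` files open; track() closes the oldest as new ones arrive."""
    def __init__(self, size):
        # pre-fill with None, mirroring `self.all_files = [None] * 50` in the diff
        self._files = [None] * size

    def track(self, f):
        self._files.append(f)
        oldest = self._files.pop(0)
        if oldest is not None:
            oldest.close()
        return f

window = FileWindow(size=2)
paths, files = [], []
for _ in range(3):
    fd, path = tempfile.mkstemp()
    os.close(fd)
    paths.append(path)
    files.append(window.track(open(path, 'w+')))

statuses = [f.closed for f in files]
print(statuses)  # only the oldest of the three files has been closed

# clean up
for f in files:
    if not f.closed:
        f.close()
for path in paths:
    os.remove(path)
```

With a window of 2, tracking a third file forces the first one closed, so the test never accumulates more than the window's worth of open descriptors.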
diff --git a/lib-python/2.7/test/test_fileio.py b/lib-python/2.7/test/test_fileio.py --- a/lib-python/2.7/test/test_fileio.py +++ b/lib-python/2.7/test/test_fileio.py @@ -12,6 +12,7 @@ from test.test_support import TESTFN, check_warnings, run_unittest, make_bad_fd from test.test_support import py3k_bytes as bytes +from test.test_support import gc_collect from test.script_helper import run_python from _io import FileIO as _FileIO @@ -34,6 +35,7 @@ self.assertEqual(self.f.tell(), p.tell()) self.f.close() self.f = None + gc_collect() self.assertRaises(ReferenceError, getattr, p, 'tell') def testSeekTell(self): @@ -104,8 +106,8 @@ self.assertTrue(f.closed) def testMethods(self): - methods = ['fileno', 'isatty', 'read', 'readinto', - 'seek', 'tell', 'truncate', 'write', 'seekable', + methods = ['fileno', 'isatty', 'read', + 'tell', 'truncate', 'seekable', 'readable', 'writable'] if sys.platform.startswith('atheos'): methods.remove('truncate') @@ -117,6 +119,10 @@ method = getattr(self.f, methodname) # should raise on closed file self.assertRaises(ValueError, method) + # methods with one argument + self.assertRaises(ValueError, self.f.readinto, 0) + self.assertRaises(ValueError, self.f.write, 0) + self.assertRaises(ValueError, self.f.seek, 0) def testOpendir(self): # Issue 3703: opening a directory should fill the errno diff --git a/lib-python/2.7/test/test_format.py b/lib-python/2.7/test/test_format.py --- a/lib-python/2.7/test/test_format.py +++ b/lib-python/2.7/test/test_format.py @@ -242,7 +242,7 @@ try: testformat(formatstr, args) except exception, exc: - if str(exc) == excmsg: + if str(exc) == excmsg or not test_support.check_impl_detail(): if verbose: print "yes" else: @@ -272,13 +272,16 @@ test_exc(u'no format', u'1', TypeError, "not all arguments converted during string formatting") - class Foobar(long): - def __oct__(self): - # Returning a non-string should not blow up. 
- return self + 1 - - test_exc('%o', Foobar(), TypeError, - "expected string or Unicode object, long found") + if test_support.check_impl_detail(): + # __oct__() is called if Foobar inherits from 'long', but + # not, say, 'object' or 'int' or 'str'. This seems strange + # enough to consider it a complete implementation detail. + class Foobar(long): + def __oct__(self): + # Returning a non-string should not blow up. + return self + 1 + test_exc('%o', Foobar(), TypeError, + "expected string or Unicode object, long found") if maxsize == 2**31-1: # crashes 2.2.1 and earlier: diff --git a/lib-python/2.7/test/test_funcattrs.py b/lib-python/2.7/test/test_funcattrs.py --- a/lib-python/2.7/test/test_funcattrs.py +++ b/lib-python/2.7/test/test_funcattrs.py @@ -14,6 +14,8 @@ self.b = b def cannot_set_attr(self, obj, name, value, exceptions): + if not test_support.check_impl_detail(): + exceptions = (TypeError, AttributeError) # Helper method for other tests. try: setattr(obj, name, value) @@ -286,13 +288,13 @@ def test_delete_func_dict(self): try: del self.b.__dict__ - except TypeError: + except (AttributeError, TypeError): pass else: self.fail("deleting function dictionary should raise TypeError") try: del self.b.func_dict - except TypeError: + except (AttributeError, TypeError): pass else: self.fail("deleting function dictionary should raise TypeError") diff --git a/lib-python/2.7/test/test_functools.py b/lib-python/2.7/test/test_functools.py --- a/lib-python/2.7/test/test_functools.py +++ b/lib-python/2.7/test/test_functools.py @@ -45,6 +45,8 @@ # attributes should not be writable if not isinstance(self.thetype, type): return + if not test_support.check_impl_detail(): + return self.assertRaises(TypeError, setattr, p, 'func', map) self.assertRaises(TypeError, setattr, p, 'args', (1, 2)) self.assertRaises(TypeError, setattr, p, 'keywords', dict(a=1, b=2)) @@ -136,6 +138,7 @@ p = proxy(f) self.assertEqual(f.func, p.func) f = None + test_support.gc_collect() 
self.assertRaises(ReferenceError, getattr, p, 'func') def test_with_bound_and_unbound_methods(self): @@ -172,7 +175,7 @@ updated=functools.WRAPPER_UPDATES): # Check attributes were assigned for name in assigned: - self.assertTrue(getattr(wrapper, name) is getattr(wrapped, name)) + self.assertTrue(getattr(wrapper, name) == getattr(wrapped, name), name) # Check attributes were updated for name in updated: wrapper_attr = getattr(wrapper, name) diff --git a/lib-python/2.7/test/test_generators.py b/lib-python/2.7/test/test_generators.py --- a/lib-python/2.7/test/test_generators.py +++ b/lib-python/2.7/test/test_generators.py @@ -190,7 +190,7 @@ File "", line 1, in ? File "", line 2, in g File "", line 2, in f - ZeroDivisionError: integer division or modulo by zero + ZeroDivisionError: integer division by zero >>> k.next() # and the generator cannot be resumed Traceback (most recent call last): File "", line 1, in ? @@ -733,14 +733,16 @@ ... yield 1 Traceback (most recent call last): .. -SyntaxError: 'return' with argument inside generator (, line 3) + File "", line 3 +SyntaxError: 'return' with argument inside generator >>> def f(): ... yield 1 ... return 22 Traceback (most recent call last): .. -SyntaxError: 'return' with argument inside generator (, line 3) + File "", line 3 +SyntaxError: 'return' with argument inside generator "return None" is not the same as "return" in a generator: @@ -749,7 +751,8 @@ ... return None Traceback (most recent call last): .. -SyntaxError: 'return' with argument inside generator (, line 3) + File "", line 3 +SyntaxError: 'return' with argument inside generator These are fine: @@ -878,7 +881,9 @@ ... if 0: ... yield 2 # because it's a generator (line 10) Traceback (most recent call last): -SyntaxError: 'return' with argument inside generator (, line 10) + ... 
+ File "", line 10 +SyntaxError: 'return' with argument inside generator This one caused a crash (see SF bug 567538): @@ -1496,6 +1501,10 @@ """ coroutine_tests = """\ +A helper function to call gc.collect() without printing +>>> import gc +>>> def gc_collect(): gc.collect() + Sending a value into a started generator: >>> def f(): @@ -1570,13 +1579,14 @@ >>> def f(): return lambda x=(yield): 1 Traceback (most recent call last): ... -SyntaxError: 'return' with argument inside generator (, line 1) + File "", line 1 +SyntaxError: 'return' with argument inside generator >>> def f(): x = yield = y Traceback (most recent call last): ... File "", line 1 -SyntaxError: assignment to yield expression not possible +SyntaxError: can't assign to yield expression >>> def f(): (yield bar) = y Traceback (most recent call last): @@ -1665,7 +1675,7 @@ >>> f().throw("abc") # throw on just-opened generator Traceback (most recent call last): ... -TypeError: exceptions must be classes, or instances, not str +TypeError: exceptions must be old-style classes or derived from BaseException, not str Now let's try closing a generator: @@ -1697,7 +1707,7 @@ >>> g = f() >>> g.next() ->>> del g +>>> del g; gc_collect() exiting >>> class context(object): @@ -1708,7 +1718,7 @@ ... yield >>> g = f() >>> g.next() ->>> del g +>>> del g; gc_collect() exiting @@ -1721,7 +1731,7 @@ >>> g = f() >>> g.next() ->>> del g +>>> del g; gc_collect() finally @@ -1747,6 +1757,7 @@ >>> g = f() >>> g.next() >>> del g +>>> gc_collect() >>> sys.stderr.getvalue().startswith( ... "Exception RuntimeError: 'generator ignored GeneratorExit' in " ... ) @@ -1812,6 +1823,9 @@ references. We add it to the standard suite so the routine refleak-tests would trigger if it starts being uncleanable again. +>>> import gc +>>> def gc_collect(): gc.collect() + >>> import itertools >>> def leak(): ... class gen: @@ -1863,9 +1877,10 @@ ... ... l = Leaker() ... del l +... gc_collect() ... err = sys.stderr.getvalue().strip() ... 
err.startswith( -... "Exception RuntimeError: RuntimeError() in <" +... "Exception RuntimeError: RuntimeError() in " ... ) ... err.endswith("> ignored") ... len(err.splitlines()) diff --git a/lib-python/2.7/test/test_genexps.py b/lib-python/2.7/test/test_genexps.py --- a/lib-python/2.7/test/test_genexps.py +++ b/lib-python/2.7/test/test_genexps.py @@ -128,8 +128,9 @@ Verify re-use of tuples (a side benefit of using genexps over listcomps) + >>> from test.test_support import check_impl_detail >>> tupleids = map(id, ((i,i) for i in xrange(10))) - >>> int(max(tupleids) - min(tupleids)) + >>> int(max(tupleids) - min(tupleids)) if check_impl_detail() else 0 0 Verify that syntax error's are raised for genexps used as lvalues @@ -198,13 +199,13 @@ >>> g = (10 // i for i in (5, 0, 2)) >>> g.next() 2 - >>> g.next() + >>> g.next() # doctest: +ELLIPSIS Traceback (most recent call last): File "", line 1, in -toplevel- g.next() File "", line 1, in g = (10 // i for i in (5, 0, 2)) - ZeroDivisionError: integer division or modulo by zero + ZeroDivisionError: integer division...by zero >>> g.next() Traceback (most recent call last): File "", line 1, in -toplevel- diff --git a/lib-python/2.7/test/test_heapq.py b/lib-python/2.7/test/test_heapq.py --- a/lib-python/2.7/test/test_heapq.py +++ b/lib-python/2.7/test/test_heapq.py @@ -215,6 +215,11 @@ class TestHeapPython(TestHeap): module = py_heapq + def test_islice_protection(self): + m = self.module + self.assertFalse(m.nsmallest(-1, [1])) + self.assertFalse(m.nlargest(-1, [1])) + @skipUnless(c_heapq, 'requires _heapq') class TestHeapC(TestHeap): diff --git a/lib-python/2.7/test/test_import.py b/lib-python/2.7/test/test_import.py --- a/lib-python/2.7/test/test_import.py +++ b/lib-python/2.7/test/test_import.py @@ -7,7 +7,8 @@ import sys import unittest from test.test_support import (unlink, TESTFN, unload, run_unittest, rmtree, - is_jython, check_warnings, EnvironmentVarGuard) + is_jython, check_warnings, EnvironmentVarGuard, + 
impl_detail, check_impl_detail) import textwrap from test import script_helper @@ -69,7 +70,8 @@ self.assertEqual(mod.b, b, "module loaded (%s) but contents invalid" % mod) finally: - unlink(source) + if check_impl_detail(pypy=False): + unlink(source) try: imp.reload(mod) @@ -149,13 +151,16 @@ # Compile & remove .py file, we only need .pyc (or .pyo). with open(filename, 'r') as f: py_compile.compile(filename) - unlink(filename) + if check_impl_detail(pypy=False): + # pypy refuses to import a .pyc if the .py does not exist + unlink(filename) # Need to be able to load from current dir. sys.path.append('') # This used to crash. exec 'import ' + module + reload(longlist) # Cleanup. del sys.path[-1] @@ -326,6 +331,7 @@ self.assertEqual(mod.code_filename, self.file_name) self.assertEqual(mod.func_filename, self.file_name) + @impl_detail("pypy refuses to import without a .py source", pypy=False) def test_module_without_source(self): target = "another_module.py" py_compile.compile(self.file_name, dfile=target) diff --git a/lib-python/2.7/test/test_inspect.py b/lib-python/2.7/test/test_inspect.py --- a/lib-python/2.7/test/test_inspect.py +++ b/lib-python/2.7/test/test_inspect.py @@ -4,11 +4,11 @@ import unittest import inspect import linecache -import datetime from UserList import UserList from UserDict import UserDict from test.test_support import run_unittest, check_py3k_warnings +from test.test_support import check_impl_detail with check_py3k_warnings( ("tuple parameter unpacking has been removed", SyntaxWarning), @@ -74,7 +74,8 @@ def test_excluding_predicates(self): self.istest(inspect.isbuiltin, 'sys.exit') - self.istest(inspect.isbuiltin, '[].append') + if check_impl_detail(): + self.istest(inspect.isbuiltin, '[].append') self.istest(inspect.iscode, 'mod.spam.func_code') self.istest(inspect.isframe, 'tb.tb_frame') self.istest(inspect.isfunction, 'mod.spam') @@ -92,9 +93,9 @@ else: self.assertFalse(inspect.isgetsetdescriptor(type(tb.tb_frame).f_locals)) if 
hasattr(types, 'MemberDescriptorType'): - self.istest(inspect.ismemberdescriptor, 'datetime.timedelta.days') + self.istest(inspect.ismemberdescriptor, 'type(lambda: None).func_globals') else: - self.assertFalse(inspect.ismemberdescriptor(datetime.timedelta.days)) + self.assertFalse(inspect.ismemberdescriptor(type(lambda: None).func_globals)) def test_isroutine(self): self.assertTrue(inspect.isroutine(mod.spam)) @@ -567,7 +568,8 @@ else: self.fail('Exception not raised') self.assertIs(type(ex1), type(ex2)) - self.assertEqual(str(ex1), str(ex2)) + if check_impl_detail(): + self.assertEqual(str(ex1), str(ex2)) def makeCallable(self, signature): """Create a function that returns its locals(), excluding the diff --git a/lib-python/2.7/test/test_int.py b/lib-python/2.7/test/test_int.py --- a/lib-python/2.7/test/test_int.py +++ b/lib-python/2.7/test/test_int.py @@ -1,7 +1,7 @@ import sys import unittest -from test.test_support import run_unittest, have_unicode +from test.test_support import run_unittest, have_unicode, check_impl_detail import math L = [ @@ -392,9 +392,10 @@ try: int(TruncReturnsNonIntegral()) except TypeError as e: - self.assertEqual(str(e), - "__trunc__ returned non-Integral" - " (type NonIntegral)") + if check_impl_detail(cpython=True): + self.assertEqual(str(e), + "__trunc__ returned non-Integral" + " (type NonIntegral)") else: self.fail("Failed to raise TypeError with %s" % ((base, trunc_result_base),)) diff --git a/lib-python/2.7/test/test_io.py b/lib-python/2.7/test/test_io.py --- a/lib-python/2.7/test/test_io.py +++ b/lib-python/2.7/test/test_io.py @@ -2561,6 +2561,31 @@ """Check that a partial write, when it gets interrupted, properly invokes the signal handler, and bubbles up the exception raised in the latter.""" + + # XXX This test has three flaws that appear when objects are + # XXX not reference counted. 
+ + # - if wio.write() happens to trigger a garbage collection, + the signal exception may be raised when some __del__ + method is running; it will not reach the assertRaises() + call. + + # - more subtle, if the wio object is not destroyed at once + and survives this function, the next opened file is likely + to have the same fileno (since the file descriptor was + actively closed). When wio.__del__ is finally called, it + will close the other's test file... To trigger this with + CPython, try adding "global wio" in this function. + + # - This happens only for streams created by the _pyio module, + because a wio.close() that fails still considers that the + file needs to be closed again. You can try adding an + "assert wio.closed" at the end of the function. + + # Fortunately, a little gc.collect() seems to be enough to + work around all these issues. + support.gc_collect() + read_results = [] def _read(): s = os.read(r, 1) diff --git a/lib-python/2.7/test/test_isinstance.py b/lib-python/2.7/test/test_isinstance.py --- a/lib-python/2.7/test/test_isinstance.py +++ b/lib-python/2.7/test/test_isinstance.py @@ -260,7 +260,18 @@ # Make sure that calling isinstance with a deeply nested tuple for its # argument will raise RuntimeError eventually.
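Many hunks in this patch gate CPython-specific assertions behind test_support.check_impl_detail() and the impl_detail decorator. A simplified, assumption-laden sketch of how such a guard behaves (this is not the real test_support code; it just approximates the documented cases, using platform.python_implementation() to detect the VM):

```python
import platform

def check_impl_detail(**guards):
    """Sketch only: approximates test_support.check_impl_detail.

    check_impl_detail()             -> True only on CPython
    check_impl_detail(cpython=True) -> True only on CPython
    check_impl_detail(pypy=False)   -> False on PyPy, True everywhere else
    """
    if not guards:
        guards = {'cpython': True}
    # all supplied guard values carry the same truth value; implementations
    # not named in the call get the complement as their default
    default = not all(guards.values())
    impl = platform.python_implementation().lower()  # 'cpython', 'pypy', ...
    return guards.get(impl, default)

print(check_impl_detail(), check_impl_detail(pypy=False))
```

Run under CPython, both calls return True; run under PyPy, the first would be False (not a CPython implementation detail the test should rely on) and the second False (explicitly excluded).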
tuple_arg = (compare_to,) - for cnt in xrange(sys.getrecursionlimit()+5): + + + if test_support.check_impl_detail(cpython=True): + RECURSION_LIMIT = sys.getrecursionlimit() + else: + # on non-CPython implementations, the maximum + # actual recursion limit might be higher, but + # probably not higher than 99999 + # + RECURSION_LIMIT = 99999 + + for cnt in xrange(RECURSION_LIMIT+5): tuple_arg = (tuple_arg,) fxn(arg, tuple_arg) diff --git a/lib-python/2.7/test/test_itertools.py b/lib-python/2.7/test/test_itertools.py --- a/lib-python/2.7/test/test_itertools.py +++ b/lib-python/2.7/test/test_itertools.py @@ -137,6 +137,8 @@ self.assertEqual(result, list(combinations2(values, r))) # matches second pure python version self.assertEqual(result, list(combinations3(values, r))) # matches second pure python version + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_combinations_tuple_reuse(self): # Test implementation detail: tuple re-use self.assertEqual(len(set(map(id, combinations('abcde', 3)))), 1) self.assertNotEqual(len(set(map(id, list(combinations('abcde', 3))))), 1) @@ -207,7 +209,10 @@ self.assertEqual(result, list(cwr1(values, r))) # matches first pure python version self.assertEqual(result, list(cwr2(values, r))) # matches second pure python version + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_combinations_with_replacement_tuple_reuse(self): # Test implementation detail: tuple re-use + cwr = combinations_with_replacement self.assertEqual(len(set(map(id, cwr('abcde', 3)))), 1) self.assertNotEqual(len(set(map(id, list(cwr('abcde', 3))))), 1) @@ -271,6 +276,8 @@ self.assertEqual(result, list(permutations(values, None))) # test r as None self.assertEqual(result, list(permutations(values))) # test default r + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_permutations_tuple_reuse(self): # Test implementation detail: tuple re-use self.assertEqual(len(set(map(id, permutations('abcde', 
3)))), 1) self.assertNotEqual(len(set(map(id, list(permutations('abcde', 3))))), 1) @@ -526,6 +533,9 @@ self.assertEqual(list(izip()), zip()) self.assertRaises(TypeError, izip, 3) self.assertRaises(TypeError, izip, range(3), 3) + + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_izip_tuple_reuse(self): # Check tuple re-use (implementation detail) self.assertEqual([tuple(list(pair)) for pair in izip('abc', 'def')], zip('abc', 'def')) @@ -575,6 +585,8 @@ else: self.fail('Did not raise Type in: ' + stmt) + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_iziplongest_tuple_reuse(self): # Check tuple re-use (implementation detail) self.assertEqual([tuple(list(pair)) for pair in izip_longest('abc', 'def')], zip('abc', 'def')) @@ -683,6 +695,8 @@ args = map(iter, args) self.assertEqual(len(list(product(*args))), expected_len) + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_product_tuple_reuse(self): # Test implementation detail: tuple re-use self.assertEqual(len(set(map(id, product('abc', 'def')))), 1) self.assertNotEqual(len(set(map(id, list(product('abc', 'def'))))), 1) @@ -771,11 +785,11 @@ self.assertRaises(ValueError, islice, xrange(10), 1, -5, -1) self.assertRaises(ValueError, islice, xrange(10), 1, 10, -1) self.assertRaises(ValueError, islice, xrange(10), 1, 10, 0) - self.assertRaises(ValueError, islice, xrange(10), 'a') - self.assertRaises(ValueError, islice, xrange(10), 'a', 1) - self.assertRaises(ValueError, islice, xrange(10), 1, 'a') - self.assertRaises(ValueError, islice, xrange(10), 'a', 1, 1) - self.assertRaises(ValueError, islice, xrange(10), 1, 'a', 1) + self.assertRaises((ValueError, TypeError), islice, xrange(10), 'a') + self.assertRaises((ValueError, TypeError), islice, xrange(10), 'a', 1) + self.assertRaises((ValueError, TypeError), islice, xrange(10), 1, 'a') + self.assertRaises((ValueError, TypeError), islice, xrange(10), 'a', 1, 1) + self.assertRaises((ValueError, 
TypeError), islice, xrange(10), 1, 'a', 1) self.assertEqual(len(list(islice(count(), 1, 10, maxsize))), 1) # Issue #10323: Less islice in a predictable state @@ -855,9 +869,17 @@ self.assertRaises(TypeError, tee, [1,2], 3, 'x') # tee object should be instantiable - a, b = tee('abc') - c = type(a)('def') - self.assertEqual(list(c), list('def')) + if test_support.check_impl_detail(): + # XXX I (arigo) would argue that 'type(a)(iterable)' has + # ill-defined semantics: it always returns a fresh tee object, + # but depending on whether 'iterable' is itself a tee object + # or not, it is ok or not to continue using 'iterable' after + # the call. I cannot imagine why 'type(a)(non_tee_object)' + # would be useful, as 'iter(non_tee_object)' is equivalent + # as far as I can see. + a, b = tee('abc') + c = type(a)('def') + self.assertEqual(list(c), list('def')) # test long-lagged and multi-way split a, b, c = tee(xrange(2000), 3) @@ -895,6 +917,7 @@ p = proxy(a) self.assertEqual(getattr(p, '__class__'), type(b)) del a + test_support.gc_collect() self.assertRaises(ReferenceError, getattr, p, '__class__') def test_StopIteration(self): @@ -1317,6 +1340,7 @@ class LengthTransparency(unittest.TestCase): + @test_support.impl_detail("__length_hint__() API is undocumented") def test_repeat(self): from test.test_iterlen import len self.assertEqual(len(repeat(None, 50)), 50) diff --git a/lib-python/2.7/test/test_linecache.py b/lib-python/2.7/test/test_linecache.py --- a/lib-python/2.7/test/test_linecache.py +++ b/lib-python/2.7/test/test_linecache.py @@ -54,13 +54,13 @@ # Check whether lines correspond to those from file iteration for entry in TESTS: - filename = os.path.join(TEST_PATH, entry) + '.py' + filename = support.findfile( entry + '.py') for index, line in enumerate(open(filename)): self.assertEqual(line, getline(filename, index + 1)) # Check module loading for entry in MODULES: - filename = os.path.join(MODULE_PATH, entry) + '.py' + filename = support.findfile( entry + '.py')
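The tuple-reuse tests split out above check a CPython-only optimization: itertools iterators may recycle their result tuple when the consumer keeps no reference to it, whereas materializing the iterator with list() keeps every tuple alive and the ids stay distinct. The effect can be observed like this (nothing below asserts the exact reuse count, since that is precisely the implementation detail the patch stops relying on):

```python
import itertools

# id() of each yielded tuple, with and without keeping the tuples alive.
# On CPython the first set may collapse to a single id because the result
# tuple is reused; other implementations are free not to do this.
reused = len(set(map(id, itertools.combinations('abcde', 3))))
kept = len(set(map(id, list(itertools.combinations('abcde', 3)))))
print(reused <= kept, kept)  # kept is C(5, 3) = 10 distinct tuples
```

This is exactly why the patch moves these assertions into separate test methods decorated with impl_detail("tuple reuse is specific to CPython").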
for index, line in enumerate(open(filename)): self.assertEqual(line, getline(filename, index + 1)) @@ -78,7 +78,7 @@ def test_clearcache(self): cached = [] for entry in TESTS: - filename = os.path.join(TEST_PATH, entry) + '.py' + filename = support.findfile( entry + '.py') cached.append(filename) linecache.getline(filename, 1) diff --git a/lib-python/2.7/test/test_list.py b/lib-python/2.7/test/test_list.py --- a/lib-python/2.7/test/test_list.py +++ b/lib-python/2.7/test/test_list.py @@ -15,6 +15,10 @@ self.assertEqual(list(''), []) self.assertEqual(list('spam'), ['s', 'p', 'a', 'm']) + # the following test also works with pypy, but eats all your address + # space's RAM before raising and takes too long. + @test_support.impl_detail("eats all your RAM before working", pypy=False) + def test_segfault_1(self): if sys.maxsize == 0x7fffffff: # This test can currently only work on 32-bit machines. # XXX If/when PySequence_Length() returns a ssize_t, it should be @@ -32,6 +36,7 @@ # http://sources.redhat.com/ml/newlib/2002/msg00369.html self.assertRaises(MemoryError, list, xrange(sys.maxint // 2)) + def test_segfault_2(self): # This code used to segfault in Py2.4a3 x = [] x.extend(-y for y in x) diff --git a/lib-python/2.7/test/test_long.py b/lib-python/2.7/test/test_long.py --- a/lib-python/2.7/test/test_long.py +++ b/lib-python/2.7/test/test_long.py @@ -530,9 +530,10 @@ try: long(TruncReturnsNonIntegral()) except TypeError as e: - self.assertEqual(str(e), - "__trunc__ returned non-Integral" - " (type NonIntegral)") + if test_support.check_impl_detail(cpython=True): + self.assertEqual(str(e), + "__trunc__ returned non-Integral" + " (type NonIntegral)") else: self.fail("Failed to raise TypeError with %s" % ((base, trunc_result_base),)) diff --git a/lib-python/2.7/test/test_marshal.py b/lib-python/2.7/test/test_marshal.py --- a/lib-python/2.7/test/test_marshal.py +++ b/lib-python/2.7/test/test_marshal.py @@ -7,20 +7,31 @@ import unittest import os -class 
IntTestCase(unittest.TestCase): +class HelperMixin: + def helper(self, sample, *extra, **kwargs): + expected = kwargs.get('expected', sample) + new = marshal.loads(marshal.dumps(sample, *extra)) + self.assertEqual(expected, new) + self.assertEqual(type(expected), type(new)) + try: + with open(test_support.TESTFN, "wb") as f: + marshal.dump(sample, f, *extra) + with open(test_support.TESTFN, "rb") as f: + new = marshal.load(f) + self.assertEqual(expected, new) + self.assertEqual(type(expected), type(new)) + finally: + test_support.unlink(test_support.TESTFN) + + +class IntTestCase(unittest.TestCase, HelperMixin): def test_ints(self): # Test the full range of Python ints. n = sys.maxint while n: for expected in (-n, n): - s = marshal.dumps(expected) - got = marshal.loads(s) - self.assertEqual(expected, got) - marshal.dump(expected, file(test_support.TESTFN, "wb")) - got = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(expected, got) + self.helper(expected) n = n >> 1 - os.unlink(test_support.TESTFN) def test_int64(self): # Simulate int marshaling on a 64-bit box. 
This is most interesting if @@ -48,28 +59,16 @@ def test_bool(self): for b in (True, False): - new = marshal.loads(marshal.dumps(b)) - self.assertEqual(b, new) - self.assertEqual(type(b), type(new)) - marshal.dump(b, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(b, new) - self.assertEqual(type(b), type(new)) + self.helper(b) -class FloatTestCase(unittest.TestCase): +class FloatTestCase(unittest.TestCase, HelperMixin): def test_floats(self): # Test a few floats small = 1e-25 n = sys.maxint * 3.7e250 while n > small: for expected in (-n, n): - f = float(expected) - s = marshal.dumps(f) - got = marshal.loads(s) - self.assertEqual(f, got) - marshal.dump(f, file(test_support.TESTFN, "wb")) - got = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(f, got) + self.helper(expected) n /= 123.4567 f = 0.0 @@ -85,59 +84,25 @@ while n < small: for expected in (-n, n): f = float(expected) + self.helper(f) + self.helper(f, 1) + n *= 123.4567 - s = marshal.dumps(f) - got = marshal.loads(s) - self.assertEqual(f, got) - - s = marshal.dumps(f, 1) - got = marshal.loads(s) - self.assertEqual(f, got) - - marshal.dump(f, file(test_support.TESTFN, "wb")) - got = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(f, got) - - marshal.dump(f, file(test_support.TESTFN, "wb"), 1) - got = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(f, got) - n *= 123.4567 - os.unlink(test_support.TESTFN) - -class StringTestCase(unittest.TestCase): +class StringTestCase(unittest.TestCase, HelperMixin): def test_unicode(self): for s in [u"", u"Andr� Previn", u"abc", u" "*10000]: - new = marshal.loads(marshal.dumps(s)) - self.assertEqual(s, new) - self.assertEqual(type(s), type(new)) - marshal.dump(s, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(s, new) - self.assertEqual(type(s), type(new)) - os.unlink(test_support.TESTFN) + self.helper(s) def 
test_string(self): for s in ["", "Andr� Previn", "abc", " "*10000]: - new = marshal.loads(marshal.dumps(s)) - self.assertEqual(s, new) - self.assertEqual(type(s), type(new)) - marshal.dump(s, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(s, new) - self.assertEqual(type(s), type(new)) - os.unlink(test_support.TESTFN) + self.helper(s) def test_buffer(self): for s in ["", "Andr� Previn", "abc", " "*10000]: with test_support.check_py3k_warnings(("buffer.. not supported", DeprecationWarning)): b = buffer(s) - new = marshal.loads(marshal.dumps(b)) - self.assertEqual(s, new) - marshal.dump(b, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(s, new) - os.unlink(test_support.TESTFN) + self.helper(b, expected=s) class ExceptionTestCase(unittest.TestCase): def test_exceptions(self): @@ -150,7 +115,7 @@ new = marshal.loads(marshal.dumps(co)) self.assertEqual(co, new) -class ContainerTestCase(unittest.TestCase): +class ContainerTestCase(unittest.TestCase, HelperMixin): d = {'astring': 'foo at bar.baz.spam', 'afloat': 7283.43, 'anint': 2**20, @@ -161,42 +126,20 @@ 'aunicode': u"Andr� Previn" } def test_dict(self): - new = marshal.loads(marshal.dumps(self.d)) - self.assertEqual(self.d, new) - marshal.dump(self.d, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(self.d, new) - os.unlink(test_support.TESTFN) + self.helper(self.d) def test_list(self): lst = self.d.items() - new = marshal.loads(marshal.dumps(lst)) - self.assertEqual(lst, new) - marshal.dump(lst, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(lst, new) - os.unlink(test_support.TESTFN) + self.helper(lst) def test_tuple(self): t = tuple(self.d.keys()) - new = marshal.loads(marshal.dumps(t)) - self.assertEqual(t, new) - marshal.dump(t, file(test_support.TESTFN, "wb")) - new = 
marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(t, new) - os.unlink(test_support.TESTFN) + self.helper(t) def test_sets(self): for constructor in (set, frozenset): t = constructor(self.d.keys()) - new = marshal.loads(marshal.dumps(t)) - self.assertEqual(t, new) - self.assertTrue(isinstance(new, constructor)) - self.assertNotEqual(id(t), id(new)) - marshal.dump(t, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(t, new) - os.unlink(test_support.TESTFN) + self.helper(t) class BugsTestCase(unittest.TestCase): def test_bug_5888452(self): @@ -226,6 +169,7 @@ s = 'c' + ('X' * 4*4) + '{' * 2**20 self.assertRaises(ValueError, marshal.loads, s) + @test_support.impl_detail('specific recursion check') def test_recursion_limit(self): # Create a deeply nested structure. head = last = [] diff --git a/lib-python/2.7/test/test_memoryio.py b/lib-python/2.7/test/test_memoryio.py --- a/lib-python/2.7/test/test_memoryio.py +++ b/lib-python/2.7/test/test_memoryio.py @@ -617,7 +617,7 @@ state = memio.__getstate__() self.assertEqual(len(state), 3) bytearray(state[0]) # Check if state[0] supports the buffer interface. 
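The test_marshal rewrite above funnels every case through a single HelperMixin.helper instead of repeating the dumps/loads and dump/load boilerplate (which also leaked open file objects via `file(...)` on a non-refcounting VM). A standalone sketch of that roundtrip pattern (the roundtrip name and the use of tempfile instead of TESTFN are my own):

```python
import marshal
import os
import tempfile

def roundtrip(sample, expected=None):
    # Push a value through both the bytes API (dumps/loads) and the file
    # API (dump/load), checking value and type each way; 'expected' covers
    # cases where the decoded type legitimately differs (e.g. buffer -> str).
    if expected is None:
        expected = sample
    new = marshal.loads(marshal.dumps(sample))
    assert expected == new and type(expected) is type(new)
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, 'wb') as f:
            marshal.dump(sample, f)
        with open(path, 'rb') as f:
            new = marshal.load(f)
        assert expected == new and type(expected) is type(new)
    finally:
        os.remove(path)
    return new

print(roundtrip((1, 2.5, 'spam')))
```

Using `with` blocks (rather than bare `file(...)` calls) guarantees the temporary file is closed deterministically on any implementation, which is the point of the refactoring.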
- self.assertIsInstance(state[1], int) + self.assertIsInstance(state[1], (int, long)) self.assertTrue(isinstance(state[2], dict) or state[2] is None) memio.close() self.assertRaises(ValueError, memio.__getstate__) diff --git a/lib-python/2.7/test/test_memoryview.py b/lib-python/2.7/test/test_memoryview.py --- a/lib-python/2.7/test/test_memoryview.py +++ b/lib-python/2.7/test/test_memoryview.py @@ -26,7 +26,8 @@ def check_getitem_with_type(self, tp): item = self.getitem_type b = tp(self._source) - oldrefcount = sys.getrefcount(b) + if hasattr(sys, 'getrefcount'): + oldrefcount = sys.getrefcount(b) m = self._view(b) self.assertEqual(m[0], item(b"a")) self.assertIsInstance(m[0], bytes) @@ -43,7 +44,8 @@ self.assertRaises(TypeError, lambda: m[0.0]) self.assertRaises(TypeError, lambda: m["a"]) m = None - self.assertEqual(sys.getrefcount(b), oldrefcount) + if hasattr(sys, 'getrefcount'): + self.assertEqual(sys.getrefcount(b), oldrefcount) def test_getitem(self): for tp in self._types: @@ -65,7 +67,8 @@ if not self.ro_type: return b = self.ro_type(self._source) - oldrefcount = sys.getrefcount(b) + if hasattr(sys, 'getrefcount'): + oldrefcount = sys.getrefcount(b) m = self._view(b) def setitem(value): m[0] = value @@ -73,14 +76,16 @@ self.assertRaises(TypeError, setitem, 65) self.assertRaises(TypeError, setitem, memoryview(b"a")) m = None - self.assertEqual(sys.getrefcount(b), oldrefcount) + if hasattr(sys, 'getrefcount'): + self.assertEqual(sys.getrefcount(b), oldrefcount) def test_setitem_writable(self): if not self.rw_type: return tp = self.rw_type b = self.rw_type(self._source) - oldrefcount = sys.getrefcount(b) + if hasattr(sys, 'getrefcount'): + oldrefcount = sys.getrefcount(b) m = self._view(b) m[0] = tp(b"0") self._check_contents(tp, b, b"0bcdef") @@ -110,13 +115,14 @@ self.assertRaises(TypeError, setitem, (0,), b"a") self.assertRaises(TypeError, setitem, "a", b"a") # Trying to resize the memory object - self.assertRaises(ValueError, setitem, 0, b"") - 
self.assertRaises(ValueError, setitem, 0, b"ab") + self.assertRaises((ValueError, TypeError), setitem, 0, b"") + self.assertRaises((ValueError, TypeError), setitem, 0, b"ab") self.assertRaises(ValueError, setitem, slice(1,1), b"a") self.assertRaises(ValueError, setitem, slice(0,2), b"a") m = None - self.assertEqual(sys.getrefcount(b), oldrefcount) + if hasattr(sys, 'getrefcount'): + self.assertEqual(sys.getrefcount(b), oldrefcount) def test_delitem(self): for tp in self._types: @@ -292,6 +298,7 @@ def _check_contents(self, tp, obj, contents): self.assertEqual(obj[1:7], tp(contents)) + @unittest.skipUnless(hasattr(sys, 'getrefcount'), "Reference counting") def test_refs(self): for tp in self._types: m = memoryview(tp(self._source)) diff --git a/lib-python/2.7/test/test_mmap.py b/lib-python/2.7/test/test_mmap.py --- a/lib-python/2.7/test/test_mmap.py +++ b/lib-python/2.7/test/test_mmap.py @@ -119,7 +119,8 @@ def test_access_parameter(self): # Test for "access" keyword parameter mapsize = 10 - open(TESTFN, "wb").write("a"*mapsize) + with open(TESTFN, "wb") as f: + f.write("a"*mapsize) f = open(TESTFN, "rb") m = mmap.mmap(f.fileno(), mapsize, access=mmap.ACCESS_READ) self.assertEqual(m[:], 'a'*mapsize, "Readonly memory map data incorrect.") @@ -168,9 +169,11 @@ else: self.fail("Able to resize readonly memory map") f.close() + m.close() del m, f - self.assertEqual(open(TESTFN, "rb").read(), 'a'*mapsize, - "Readonly memory map data file was modified") + with open(TESTFN, "rb") as f: + self.assertEqual(f.read(), 'a'*mapsize, + "Readonly memory map data file was modified") # Opening mmap with size too big import sys @@ -220,11 +223,13 @@ self.assertEqual(m[:], 'd' * mapsize, "Copy-on-write memory map data not written correctly.") m.flush() - self.assertEqual(open(TESTFN, "rb").read(), 'c'*mapsize, - "Copy-on-write test data file should not be modified.") + f.close() + with open(TESTFN, "rb") as f: + self.assertEqual(f.read(), 'c'*mapsize, + "Copy-on-write test data file 
should not be modified.") # Ensuring copy-on-write maps cannot be resized self.assertRaises(TypeError, m.resize, 2*mapsize) - f.close() + m.close() del m, f # Ensuring invalid access parameter raises exception @@ -287,6 +292,7 @@ self.assertEqual(m.find('one', 1), 8) self.assertEqual(m.find('one', 1, -1), 8) self.assertEqual(m.find('one', 1, -2), -1) + m.close() def test_rfind(self): @@ -305,6 +311,7 @@ self.assertEqual(m.rfind('one', 0, -2), 0) self.assertEqual(m.rfind('one', 1, -1), 8) self.assertEqual(m.rfind('one', 1, -2), -1) + m.close() def test_double_close(self): @@ -533,7 +540,8 @@ if not hasattr(mmap, 'PROT_READ'): return mapsize = 10 - open(TESTFN, "wb").write("a"*mapsize) + with open(TESTFN, "wb") as f: + f.write("a"*mapsize) f = open(TESTFN, "rb") m = mmap.mmap(f.fileno(), mapsize, prot=mmap.PROT_READ) self.assertRaises(TypeError, m.write, "foo") @@ -545,7 +553,8 @@ def test_io_methods(self): data = "0123456789" - open(TESTFN, "wb").write("x"*len(data)) + with open(TESTFN, "wb") as f: + f.write("x"*len(data)) f = open(TESTFN, "r+b") m = mmap.mmap(f.fileno(), len(data)) f.close() @@ -574,6 +583,7 @@ self.assertEqual(m[:], "012bar6789") m.seek(8) self.assertRaises(ValueError, m.write, "bar") + m.close() if os.name == 'nt': def test_tagname(self): @@ -611,7 +621,8 @@ m.close() # Should not crash (Issue 5385) - open(TESTFN, "wb").write("x"*10) + with open(TESTFN, "wb") as f: + f.write("x"*10) f = open(TESTFN, "r+b") m = mmap.mmap(f.fileno(), 0) f.close() diff --git a/lib-python/2.7/test/test_module.py b/lib-python/2.7/test/test_module.py --- a/lib-python/2.7/test/test_module.py +++ b/lib-python/2.7/test/test_module.py @@ -1,6 +1,6 @@ # Test the module type import unittest -from test.test_support import run_unittest, gc_collect +from test.test_support import run_unittest, gc_collect, check_impl_detail import sys ModuleType = type(sys) @@ -10,8 +10,10 @@ # An uninitialized module has no __dict__ or __name__, # and __doc__ is None foo = 
ModuleType.__new__(ModuleType) - self.assertTrue(foo.__dict__ is None) - self.assertRaises(SystemError, dir, foo) + self.assertFalse(foo.__dict__) + if check_impl_detail(): + self.assertTrue(foo.__dict__ is None) + self.assertRaises(SystemError, dir, foo) try: s = foo.__name__ self.fail("__name__ = %s" % repr(s)) diff --git a/lib-python/2.7/test/test_multibytecodec.py b/lib-python/2.7/test/test_multibytecodec.py --- a/lib-python/2.7/test/test_multibytecodec.py +++ b/lib-python/2.7/test/test_multibytecodec.py @@ -42,7 +42,7 @@ dec = codecs.getdecoder('euc-kr') myreplace = lambda exc: (u'', sys.maxint+1) codecs.register_error('test.cjktest', myreplace) - self.assertRaises(IndexError, dec, + self.assertRaises((IndexError, OverflowError), dec, 'apple\x92ham\x93spam', 'test.cjktest') def test_codingspec(self): @@ -148,7 +148,8 @@ class Test_StreamReader(unittest.TestCase): def test_bug1728403(self): try: - open(TESTFN, 'w').write('\xa1') + with open(TESTFN, 'w') as f: + f.write('\xa1') f = codecs.open(TESTFN, encoding='cp949') self.assertRaises(UnicodeDecodeError, f.read, 2) finally: diff --git a/lib-python/2.7/test/test_multibytecodec_support.py b/lib-python/2.7/test/test_multibytecodec_support.py --- a/lib-python/2.7/test/test_multibytecodec_support.py +++ b/lib-python/2.7/test/test_multibytecodec_support.py @@ -110,8 +110,8 @@ def myreplace(exc): return (u'x', sys.maxint + 1) codecs.register_error("test.cjktest", myreplace) - self.assertRaises(IndexError, self.encode, self.unmappedunicode, - 'test.cjktest') + self.assertRaises((IndexError, OverflowError), self.encode, + self.unmappedunicode, 'test.cjktest') def test_callback_None_index(self): def myreplace(exc): @@ -330,7 +330,7 @@ repr(csetch), repr(unich), exc.reason)) def load_teststring(name): - dir = os.path.join(os.path.dirname(__file__), 'cjkencodings') + dir = test_support.findfile('cjkencodings') with open(os.path.join(dir, name + '.txt'), 'rb') as f: encoded = f.read() with open(os.path.join(dir, name + 
'-utf8.txt'), 'rb') as f: diff --git a/lib-python/2.7/test/test_multiprocessing.py b/lib-python/2.7/test/test_multiprocessing.py --- a/lib-python/2.7/test/test_multiprocessing.py +++ b/lib-python/2.7/test/test_multiprocessing.py @@ -1316,6 +1316,7 @@ queue = manager.get_queue() self.assertEqual(queue.get(), 'hello world') del queue + test_support.gc_collect() manager.shutdown() manager = QueueManager( address=addr, authkey=authkey, serializer=SERIALIZER) @@ -1605,6 +1606,10 @@ if len(blocks) > maxblocks: i = random.randrange(maxblocks) del blocks[i] + # XXX There should be a better way to release resources for a + # single block + if i % maxblocks == 0: + import gc; gc.collect() # get the heap object heap = multiprocessing.heap.BufferWrapper._heap @@ -1704,6 +1709,7 @@ a = Foo() util.Finalize(a, conn.send, args=('a',)) del a # triggers callback for a + test_support.gc_collect() b = Foo() close_b = util.Finalize(b, conn.send, args=('b',)) diff --git a/lib-python/2.7/test/test_mutants.py b/lib-python/2.7/test/test_mutants.py --- a/lib-python/2.7/test/test_mutants.py +++ b/lib-python/2.7/test/test_mutants.py @@ -1,4 +1,4 @@ -from test.test_support import verbose, TESTFN +from test.test_support import verbose, TESTFN, check_impl_detail import random import os @@ -137,10 +137,16 @@ while dict1 and len(dict1) == len(dict2): if verbose: print ".", - if random.random() < 0.5: - c = cmp(dict1, dict2) - else: - c = dict1 == dict2 + try: + if random.random() < 0.5: + c = cmp(dict1, dict2) + else: + c = dict1 == dict2 + except RuntimeError: + # CPython never raises RuntimeError here, but other implementations + # might, and it's fine. 
+ if check_impl_detail(cpython=True): + raise if verbose: print diff --git a/lib-python/2.7/test/test_optparse.py b/lib-python/2.7/test/test_optparse.py --- a/lib-python/2.7/test/test_optparse.py +++ b/lib-python/2.7/test/test_optparse.py @@ -383,6 +383,7 @@ self.assertRaises(self.parser.remove_option, ('foo',), None, ValueError, "no such option 'foo'") + @test_support.impl_detail("sys.getrefcount") def test_refleak(self): # If an OptionParser is carrying around a reference to a large # object, various cycles can prevent it from being GC'd in diff --git a/lib-python/2.7/test/test_peepholer.py b/lib-python/2.7/test/test_peepholer.py --- a/lib-python/2.7/test/test_peepholer.py +++ b/lib-python/2.7/test/test_peepholer.py @@ -41,7 +41,7 @@ def test_none_as_constant(self): # LOAD_GLOBAL None --> LOAD_CONST None def f(x): - None + y = None return x asm = disassemble(f) for elem in ('LOAD_GLOBAL',): @@ -67,10 +67,13 @@ self.assertIn(elem, asm) def test_pack_unpack(self): + # On PyPy, "a, b = ..." is even more optimized, by removing + # the ROT_TWO. But the ROT_TWO is not removed if assigning + # to more complex expressions, so check that. for line, elem in ( ('a, = a,', 'LOAD_CONST',), - ('a, b = a, b', 'ROT_TWO',), - ('a, b, c = a, b, c', 'ROT_THREE',), + ('a[1], b = a, b', 'ROT_TWO',), + ('a, b[2], c = a, b, c', 'ROT_THREE',), ): asm = dis_single(line) self.assertIn(elem, asm) @@ -78,6 +81,8 @@ self.assertNotIn('UNPACK_TUPLE', asm) def test_folding_of_tuples_of_constants(self): + # On CPython, "a,b,c=1,2,3" turns into "a,b,c=" + # but on PyPy, it turns into "a=1;b=2;c=3". 
for line, elem in ( ('a = 1,2,3', '((1, 2, 3))'), ('("a","b","c")', "(('a', 'b', 'c'))"), @@ -86,7 +91,8 @@ ('((1, 2), 3, 4)', '(((1, 2), 3, 4))'), ): asm = dis_single(line) - self.assertIn(elem, asm) + self.assert_(elem in asm or ( + line == 'a,b,c = 1,2,3' and 'UNPACK_TUPLE' not in asm)) self.assertNotIn('BUILD_TUPLE', asm) # Bug 1053819: Tuple of constants misidentified when presented with: @@ -139,12 +145,15 @@ def test_binary_subscr_on_unicode(self): # valid code get optimized - asm = dis_single('u"foo"[0]') - self.assertIn("(u'f')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) - asm = dis_single('u"\u0061\uffff"[1]') - self.assertIn("(u'\\uffff')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) + # XXX for now we always disable this optimization + # XXX see CPython's issue5057 + if 0: + asm = dis_single('u"foo"[0]') + self.assertIn("(u'f')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + asm = dis_single('u"\u0061\uffff"[1]') + self.assertIn("(u'\\uffff')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) # invalid code doesn't get optimized # out of range diff --git a/lib-python/2.7/test/test_pprint.py b/lib-python/2.7/test/test_pprint.py --- a/lib-python/2.7/test/test_pprint.py +++ b/lib-python/2.7/test/test_pprint.py @@ -233,7 +233,16 @@ frozenset([0, 2]), frozenset([0, 1])])}""" cube = test.test_set.cube(3) - self.assertEqual(pprint.pformat(cube), cube_repr_tgt) + # XXX issues of dictionary order, and for the case below, + # order of items in the frozenset([...]) representation. + # Whether we get precisely cube_repr_tgt or not is open + # to implementation-dependent choices (this test probably + # fails horribly in CPython if we tweak the dict order too). 
+ got = pprint.pformat(cube) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cube_repr_tgt) + else: + self.assertEqual(eval(got), cube) cubo_repr_tgt = """\ {frozenset([frozenset([0, 2]), frozenset([0])]): frozenset([frozenset([frozenset([0, 2]), @@ -393,7 +402,11 @@ 2])])])}""" cubo = test.test_set.linegraph(cube) - self.assertEqual(pprint.pformat(cubo), cubo_repr_tgt) + got = pprint.pformat(cubo) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cubo_repr_tgt) + else: + self.assertEqual(eval(got), cubo) def test_depth(self): nested_tuple = (1, (2, (3, (4, (5, 6))))) diff --git a/lib-python/2.7/test/test_pydoc.py b/lib-python/2.7/test/test_pydoc.py --- a/lib-python/2.7/test/test_pydoc.py +++ b/lib-python/2.7/test/test_pydoc.py @@ -267,8 +267,8 @@ testpairs = ( ('i_am_not_here', 'i_am_not_here'), ('test.i_am_not_here_either', 'i_am_not_here_either'), - ('test.i_am_not_here.neither_am_i', 'i_am_not_here.neither_am_i'), - ('i_am_not_here.{}'.format(modname), 'i_am_not_here.{}'.format(modname)), + ('test.i_am_not_here.neither_am_i', 'i_am_not_here'), + ('i_am_not_here.{}'.format(modname), 'i_am_not_here'), ('test.{}'.format(modname), modname), ) @@ -292,8 +292,8 @@ result = run_pydoc(modname) finally: forget(modname) - expected = badimport_pattern % (modname, expectedinmsg) - self.assertEqual(expected, result) + expected = badimport_pattern % (modname, '(.+\\.)?' 
+ expectedinmsg + '(\\..+)?$') + self.assertTrue(re.match(expected, result)) def test_input_strip(self): missing_module = " test.i_am_not_here " diff --git a/lib-python/2.7/test/test_pyexpat.py b/lib-python/2.7/test/test_pyexpat.py --- a/lib-python/2.7/test/test_pyexpat.py +++ b/lib-python/2.7/test/test_pyexpat.py @@ -570,6 +570,9 @@ self.assertEqual(self.n, 4) class MalformedInputText(unittest.TestCase): + # CPython seems to ship its own version of expat, they fixed it on this commit : + # http://svn.python.org/view?revision=74429&view=revision + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test1(self): xml = "\0\r\n" parser = expat.ParserCreate() @@ -579,6 +582,7 @@ except expat.ExpatError as e: self.assertEqual(str(e), 'unclosed token: line 2, column 0') + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test2(self): xml = "\r\n" parser = expat.ParserCreate() diff --git a/lib-python/2.7/test/test_repr.py b/lib-python/2.7/test/test_repr.py --- a/lib-python/2.7/test/test_repr.py +++ b/lib-python/2.7/test/test_repr.py @@ -9,6 +9,7 @@ import unittest from test.test_support import run_unittest, check_py3k_warnings +from test.test_support import check_impl_detail from repr import repr as r # Don't shadow builtin repr from repr import Repr @@ -145,8 +146,11 @@ # Functions eq(repr(hash), '') # Methods - self.assertTrue(repr(''.split).startswith( - '") def test_xrange(self): eq = self.assertEqual @@ -185,7 +189,10 @@ def test_descriptors(self): eq = self.assertEqual # method descriptors - eq(repr(dict.items), "") + if check_impl_detail(cpython=True): + eq(repr(dict.items), "") + elif check_impl_detail(pypy=True): + eq(repr(dict.items), "") # XXX member descriptors # XXX attribute descriptors # XXX slot descriptors @@ -247,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from 
areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_runpy.py b/lib-python/2.7/test/test_runpy.py --- a/lib-python/2.7/test/test_runpy.py +++ b/lib-python/2.7/test/test_runpy.py @@ -5,10 +5,15 @@ import sys import re import tempfile -from test.test_support import verbose, run_unittest, forget +from test.test_support import verbose, run_unittest, forget, check_impl_detail from test.script_helper import (temp_dir, make_script, compile_script, make_pkg, make_zip_script, make_zip_pkg) +if check_impl_detail(pypy=True): + no_lone_pyc_file = True +else: + no_lone_pyc_file = False + from runpy import _run_code, _run_module_code, run_module, run_path # Note: This module can't safely test _run_module_as_main as it @@ -168,13 +173,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 
1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + d2 = run_module(mod_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -190,13 +196,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", pkg_name - d2 = run_module(pkg_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", pkg_name + d2 = run_module(pkg_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, pkg_name) if verbose: print "Package executed successfully" @@ -244,15 +251,17 @@ self.assertIn("sibling", d1) self.assertIn("nephew", d1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name, run_name=run_name) # Read from bytecode - self.assertIn("__package__", d2) - self.assertTrue(d2["__package__"] == pkg_name) - self.assertIn("sibling", d2) - self.assertIn("nephew", d2) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + # Read from bytecode + d2 = run_module(mod_name, run_name=run_name) + self.assertIn("__package__", d2) + 
self.assertTrue(d2["__package__"] == pkg_name) + self.assertIn("sibling", d2) + self.assertIn("nephew", d2) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -345,6 +354,8 @@ script_dir, '') def test_directory_compiled(self): + if no_lone_pyc_file: + return with temp_dir() as script_dir: mod_name = '__main__' script_name = self._make_test_script(script_dir, mod_name) diff --git a/lib-python/2.7/test/test_scope.py b/lib-python/2.7/test/test_scope.py --- a/lib-python/2.7/test/test_scope.py +++ b/lib-python/2.7/test/test_scope.py @@ -1,6 +1,6 @@ import unittest from test.test_support import check_syntax_error, check_py3k_warnings, \ - check_warnings, run_unittest + check_warnings, run_unittest, gc_collect class ScopeTests(unittest.TestCase): @@ -432,6 +432,7 @@ for i in range(100): f1() + gc_collect() self.assertEqual(Foo.count, 0) diff --git a/lib-python/2.7/test/test_set.py b/lib-python/2.7/test/test_set.py --- a/lib-python/2.7/test/test_set.py +++ b/lib-python/2.7/test/test_set.py @@ -309,6 +309,7 @@ fo.close() test_support.unlink(test_support.TESTFN) + @test_support.impl_detail(pypy=False) def test_do_not_rehash_dict_keys(self): n = 10 d = dict.fromkeys(map(HashCountingInt, xrange(n))) @@ -559,6 +560,7 @@ p = weakref.proxy(s) self.assertEqual(str(p), str(s)) s = None + test_support.gc_collect() self.assertRaises(ReferenceError, str, p) # C API test only available in a debug build @@ -590,6 +592,7 @@ s.__init__(self.otherword) self.assertEqual(s, set(self.word)) + @test_support.impl_detail() def test_singleton_empty_frozenset(self): f = frozenset() efs = [frozenset(), frozenset([]), frozenset(()), frozenset(''), @@ -770,9 +773,10 @@ for v in self.set: self.assertIn(v, self.values) setiter = iter(self.set) - # note: __length_hint__ is an internal undocumented API, - # don't rely on it in your own programs - self.assertEqual(setiter.__length_hint__(), 
len(self.set))
+        if test_support.check_impl_detail():
+            # note: __length_hint__ is an internal undocumented API,
+            # don't rely on it in your own programs
+            self.assertEqual(setiter.__length_hint__(), len(self.set))

     def test_pickling(self):
         p = pickle.dumps(self.set)
@@ -1564,7 +1568,7 @@
         for meth in (s.union, s.intersection, s.difference, s.symmetric_difference, s.isdisjoint):
             for g in (G, I, Ig, L, R):
                 expected = meth(data)
-                actual = meth(G(data))
+                actual = meth(g(data))
                 if isinstance(expected, bool):
                     self.assertEqual(actual, expected)
                 else:
diff --git a/lib-python/2.7/test/test_sets.py b/lib-python/2.7/test/test_sets.py
--- a/lib-python/2.7/test/test_sets.py
+++ b/lib-python/2.7/test/test_sets.py
@@ -686,7 +686,9 @@
         set_list = sorted(self.set)
         self.assertEqual(len(dup_list), len(set_list))
         for i, el in enumerate(dup_list):
-            self.assertIs(el, set_list[i])
+            # Object identity is not guaranteed for immutable objects, so we
+            # can't use assertIs here.
+            self.assertEqual(el, set_list[i])

     def test_deep_copy(self):
         dup = copy.deepcopy(self.set)
diff --git a/lib-python/2.7/test/test_site.py b/lib-python/2.7/test/test_site.py
--- a/lib-python/2.7/test/test_site.py
+++ b/lib-python/2.7/test/test_site.py
@@ -226,6 +226,10 @@
             self.assertEqual(len(dirs), 1)
             wanted = os.path.join('xoxo', 'Lib', 'site-packages')
             self.assertEqual(dirs[0], wanted)
+        elif '__pypy__' in sys.builtin_module_names:
+            self.assertEquals(len(dirs), 1)
+            wanted = os.path.join('xoxo', 'site-packages')
+            self.assertEquals(dirs[0], wanted)
         elif os.sep == '/':
             self.assertEqual(len(dirs), 2)
             wanted = os.path.join('xoxo', 'lib', 'python' + sys.version[:3],
diff --git a/lib-python/2.7/test/test_socket.py b/lib-python/2.7/test/test_socket.py
--- a/lib-python/2.7/test/test_socket.py
+++ b/lib-python/2.7/test/test_socket.py
@@ -252,6 +252,7 @@
         self.assertEqual(p.fileno(), s.fileno())
         s.close()
         s = None
+        test_support.gc_collect()
         try:
             p.fileno()
         except ReferenceError:
@@ -285,32 +286,34 @@
             s.sendto(u'\u2620',
sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None) - self.assertIn('not NoneType', str(cm.exception)) + self.assertIn('NoneType', str(cm.exception)) # 3 args with self.assertRaises(UnicodeEncodeError): s.sendto(u'\u2620', 0, sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, 0, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, None) - self.assertIn('not NoneType', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('not NoneType', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 'bar', sockname) - self.assertIn('an integer is required', str(cm.exception)) + self.assertIn('integer', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None, None) - self.assertIn('an integer is required', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('an integer is required', str(cm.exception)) # wrong number of args with self.assertRaises(TypeError) as cm: s.sendto('foo') - self.assertIn('(1 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, sockname, 4) - self.assertIn('(4 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) def testCrucialConstants(self): @@ -385,10 +388,10 @@ socket.htonl(k) socket.htons(k) for k in bad_values: - self.assertRaises(OverflowError, socket.ntohl, k) - self.assertRaises(OverflowError, socket.ntohs, k) - self.assertRaises(OverflowError, socket.htonl, k) - self.assertRaises(OverflowError, socket.htons, k) + self.assertRaises((OverflowError, ValueError), socket.ntohl, k) + self.assertRaises((OverflowError, ValueError), socket.ntohs, k) + 
self.assertRaises((OverflowError, ValueError), socket.htonl, k)
+            self.assertRaises((OverflowError, ValueError), socket.htons, k)

     def testGetServBy(self):
         eq = self.assertEqual
@@ -428,8 +431,8 @@
         if udpport is not None:
             eq(socket.getservbyport(udpport, 'udp'), service)
         # Make sure getservbyport does not accept out of range ports.
-        self.assertRaises(OverflowError, socket.getservbyport, -1)
-        self.assertRaises(OverflowError, socket.getservbyport, 65536)
+        self.assertRaises((OverflowError, ValueError), socket.getservbyport, -1)
+        self.assertRaises((OverflowError, ValueError), socket.getservbyport, 65536)

     def testDefaultTimeout(self):
         # Testing default timeout
@@ -608,8 +611,8 @@
         neg_port = port - 65536
         sock = socket.socket()
         try:
-            self.assertRaises(OverflowError, sock.bind, (host, big_port))
-            self.assertRaises(OverflowError, sock.bind, (host, neg_port))
+            self.assertRaises((OverflowError, ValueError), sock.bind, (host, big_port))
+            self.assertRaises((OverflowError, ValueError), sock.bind, (host, neg_port))
             sock.bind((host, port))
         finally:
             sock.close()
@@ -1309,6 +1312,7 @@
     closed = False
     def flush(self): pass
     def close(self): self.closed = True
+    def _decref_socketios(self): pass

     # must not close unless we request it: the original use of _fileobject
     # by module socket requires that the underlying socket not be closed until
diff --git a/lib-python/2.7/test/test_sort.py b/lib-python/2.7/test/test_sort.py
--- a/lib-python/2.7/test/test_sort.py
+++ b/lib-python/2.7/test/test_sort.py
@@ -140,7 +140,10 @@
             return random.random() < 0.5

         L = [C() for i in range(50)]
-        self.assertRaises(ValueError, L.sort)
+        try:
+            L.sort()
+        except ValueError:
+            pass

     def test_cmpNone(self):
         # Testing None as a comparison function.
@@ -150,8 +153,10 @@
         L.sort(None)
         self.assertEqual(L, range(50))

+    @test_support.impl_detail(pypy=False)
     def test_undetected_mutation(self):
         # Python 2.4a1 did not always detect mutation
+        # Neither does PyPy...
memorywaster = [] for i in range(20): def mutating_cmp(x, y): @@ -226,7 +231,10 @@ def __del__(self): del data[:] data[:] = range(20) - self.assertRaises(ValueError, data.sort, key=SortKiller) + try: + data.sort(key=SortKiller) + except ValueError: + pass def test_key_with_mutating_del_and_exception(self): data = range(10) diff --git a/lib-python/2.7/test/test_ssl.py b/lib-python/2.7/test/test_ssl.py --- a/lib-python/2.7/test/test_ssl.py +++ b/lib-python/2.7/test/test_ssl.py @@ -881,6 +881,8 @@ c = socket.socket() c.connect((HOST, port)) listener_gone.wait() + # XXX why is it necessary? + test_support.gc_collect() try: ssl_sock = ssl.wrap_socket(c) except IOError: @@ -1330,10 +1332,8 @@ def test_main(verbose=False): global CERTFILE, SVN_PYTHON_ORG_ROOT_CERT - CERTFILE = os.path.join(os.path.dirname(__file__) or os.curdir, - "keycert.pem") - SVN_PYTHON_ORG_ROOT_CERT = os.path.join( - os.path.dirname(__file__) or os.curdir, + CERTFILE = test_support.findfile("keycert.pem") + SVN_PYTHON_ORG_ROOT_CERT = test_support.findfile( "https_svn_python_org_root.pem") if (not os.path.exists(CERTFILE) or diff --git a/lib-python/2.7/test/test_str.py b/lib-python/2.7/test/test_str.py --- a/lib-python/2.7/test/test_str.py +++ b/lib-python/2.7/test/test_str.py @@ -422,10 +422,11 @@ for meth in ('foo'.startswith, 'foo'.endswith): with self.assertRaises(TypeError) as cm: meth(['f']) - exc = str(cm.exception) - self.assertIn('unicode', exc) - self.assertIn('str', exc) - self.assertIn('tuple', exc) + if test_support.check_impl_detail(): + exc = str(cm.exception) + self.assertIn('unicode', exc) + self.assertIn('str', exc) + self.assertIn('tuple', exc) def test_main(): test_support.run_unittest(StrTest) diff --git a/lib-python/2.7/test/test_struct.py b/lib-python/2.7/test/test_struct.py --- a/lib-python/2.7/test/test_struct.py +++ b/lib-python/2.7/test/test_struct.py @@ -535,7 +535,8 @@ @unittest.skipUnless(IS32BIT, "Specific to 32bit machines") def test_crasher(self): - 
self.assertRaises(MemoryError, struct.pack, "357913941c", "a") + self.assertRaises((MemoryError, struct.error), struct.pack, + "357913941c", "a") def test_count_overflow(self): hugecount = '{}b'.format(sys.maxsize+1) diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/lib-python/2.7/test/test_support.py b/lib-python/2.7/test/test_support.py --- a/lib-python/2.7/test/test_support.py +++ b/lib-python/2.7/test/test_support.py @@ -431,16 +431,20 @@ rmtree(name) -def findfile(file, here=__file__, subdir=None): +def findfile(file, here=None, subdir=None): """Try to find a file on sys.path and the working directory. 
If it is not found the argument passed to the function is returned (this does not necessarily signal failure; could still be the legitimate path).""" + import test if os.path.isabs(file): return file if subdir is not None: file = os.path.join(subdir, file) path = sys.path - path = [os.path.dirname(here)] + path + if here is None: + path = test.__path__ + path + else: + path = [os.path.dirname(here)] + path for dn in path: fn = os.path.join(dn, file) if os.path.exists(fn): return fn @@ -1050,15 +1054,33 @@ guards, default = _parse_guards(guards) return guards.get(platform.python_implementation().lower(), default) +# ---------------------------------- +# PyPy extension: you can run:: +# python ..../test_foo.py --pdb +# to get a pdb prompt in case of exceptions +ResultClass = unittest.TextTestRunner.resultclass + +class TestResultWithPdb(ResultClass): + + def addError(self, testcase, exc_info): + ResultClass.addError(self, testcase, exc_info) + if '--pdb' in sys.argv: + import pdb, traceback + traceback.print_tb(exc_info[2]) + pdb.post_mortem(exc_info[2]) + +# ---------------------------------- def _run_suite(suite): """Run tests from a unittest.TestSuite-derived class.""" if verbose: - runner = unittest.TextTestRunner(sys.stdout, verbosity=2) + runner = unittest.TextTestRunner(sys.stdout, verbosity=2, + resultclass=TestResultWithPdb) else: runner = BasicTestRunner() + result = runner.run(suite) if not result.wasSuccessful(): if len(result.errors) == 1 and not result.failures: @@ -1071,6 +1093,34 @@ err += "; run in verbose mode for details" raise TestFailed(err) +# ---------------------------------- +# PyPy extension: you can run:: +# python ..../test_foo.py --filter bar +# to run only the test cases whose name contains bar + +def filter_maybe(suite): + try: + i = sys.argv.index('--filter') + filter = sys.argv[i+1] + except (ValueError, IndexError): + return suite + tests = [] + for test in linearize_suite(suite): + if filter in test._testMethodName: + 
tests.append(test) + return unittest.TestSuite(tests) + +def linearize_suite(suite_or_test): + try: + it = iter(suite_or_test) + except TypeError: + yield suite_or_test + return + for subsuite in it: + for item in linearize_suite(subsuite): + yield item + +# ---------------------------------- def run_unittest(*classes): """Run tests from unittest.TestCase-derived classes.""" @@ -1086,6 +1136,7 @@ suite.addTest(cls) else: suite.addTest(unittest.makeSuite(cls)) + suite = filter_maybe(suite) _run_suite(suite) diff --git a/lib-python/2.7/test/test_syntax.py b/lib-python/2.7/test/test_syntax.py --- a/lib-python/2.7/test/test_syntax.py +++ b/lib-python/2.7/test/test_syntax.py @@ -5,7 +5,8 @@ >>> def f(x): ... global x Traceback (most recent call last): -SyntaxError: name 'x' is local and global (, line 1) + File "", line 1 +SyntaxError: name 'x' is local and global The tests are all raise SyntaxErrors. They were created by checking each C call that raises SyntaxError. There are several modules that @@ -375,7 +376,7 @@ In 2.5 there was a missing exception and an assert was triggered in a debug build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514 - >>> while 1: + >>> while 1: # doctest:+SKIP ... while 2: ... while 3: ... while 4: diff --git a/lib-python/2.7/test/test_sys.py b/lib-python/2.7/test/test_sys.py --- a/lib-python/2.7/test/test_sys.py +++ b/lib-python/2.7/test/test_sys.py @@ -264,6 +264,7 @@ self.assertEqual(sys.getdlopenflags(), oldflags+1) sys.setdlopenflags(oldflags) + @test.test_support.impl_detail("reference counting") def test_refcount(self): # n here must be a global in order for this test to pass while # tracing with a python function. Tracing calls PyFrame_FastToLocals @@ -287,7 +288,7 @@ is sys._getframe().f_code ) - # sys._current_frames() is a CPython-only gimmick. 
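The `filter_maybe`/`linearize_suite` helpers added to `test_support` above can be exercised in isolation. The sketch below is not part of the patch: `filter_suite` is a simplified stand-in for the patch's `filter_maybe()` (taking the substring as an explicit argument instead of reading `--filter` from `sys.argv`), and the `Demo` class is invented for illustration:

```python
import unittest

def linearize_suite(suite_or_test):
    """Recursively yield the individual tests inside a (nested) TestSuite."""
    try:
        it = iter(suite_or_test)
    except TypeError:
        # A bare TestCase is not iterable: yield it directly.
        yield suite_or_test
        return
    for subsuite in it:
        for item in linearize_suite(subsuite):
            yield item

def filter_suite(suite, substring):
    # Simplified stand-in for the patch's filter_maybe(), which reads the
    # substring from sys.argv after '--filter'; here it is a parameter.
    tests = [t for t in linearize_suite(suite)
             if substring in t._testMethodName]
    return unittest.TestSuite(tests)

class Demo(unittest.TestCase):       # hypothetical test class
    def test_foo(self):
        pass
    def test_bar(self):
        pass
    def test_barbaz(self):
        pass

nested = unittest.TestSuite([
    unittest.TestSuite([Demo('test_foo'), Demo('test_bar')]),
    Demo('test_barbaz'),
])
filtered = filter_suite(nested, 'bar')
print(filtered.countTestCases())  # 2
```

Flattening first and filtering on `_testMethodName` is what lets a single `--filter bar` pick matching tests out of arbitrarily nested suites.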
+ @test.test_support.impl_detail("current_frames") def test_current_frames(self): have_threads = True try: @@ -383,7 +384,10 @@ self.assertEqual(len(sys.float_info), 11) self.assertEqual(sys.float_info.radix, 2) self.assertEqual(len(sys.long_info), 2) - self.assertTrue(sys.long_info.bits_per_digit % 5 == 0) + if test.test_support.check_impl_detail(cpython=True): + self.assertTrue(sys.long_info.bits_per_digit % 5 == 0) + else: + self.assertTrue(sys.long_info.bits_per_digit >= 1) self.assertTrue(sys.long_info.sizeof_digit >= 1) self.assertEqual(type(sys.long_info.bits_per_digit), int) self.assertEqual(type(sys.long_info.sizeof_digit), int) @@ -432,6 +436,7 @@ self.assertEqual(type(getattr(sys.flags, attr)), int, attr) self.assertTrue(repr(sys.flags)) + @test.test_support.impl_detail("sys._clear_type_cache") def test_clear_type_cache(self): sys._clear_type_cache() @@ -473,6 +478,7 @@ p.wait() self.assertIn(executable, ["''", repr(sys.executable)]) + at unittest.skipUnless(test.test_support.check_impl_detail(), "sys.getsizeof()") class SizeofTest(unittest.TestCase): TPFLAGS_HAVE_GC = 1<<14 diff --git a/lib-python/2.7/test/test_sys_settrace.py b/lib-python/2.7/test/test_sys_settrace.py --- a/lib-python/2.7/test/test_sys_settrace.py +++ b/lib-python/2.7/test/test_sys_settrace.py @@ -213,12 +213,16 @@ "finally" def generator_example(): # any() will leave the generator before its end - x = any(generator_function()) + x = any(generator_function()); gc.collect() # the following lines were not traced for x in range(10): y = x +# On CPython, when the generator is decref'ed to zero, we see the trace +# for the "finally:" portion. On PyPy, we don't see it before the next +# garbage collection. That's why we put gc.collect() on the same line above. 
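The comment just added above pins down the portability issue behind most of the changes in this patch: CPython finalizes an object as soon as its reference count drops to zero, while PyPy finalizes it only when a garbage collection actually runs. A minimal standalone illustration (not part of the patch), using a weakref to observe when the object dies:

```python
import gc
import weakref

class C(object):
    pass

obj = C()
ref = weakref.ref(obj)
del obj
# On CPython the weakref is already cleared at this point (refcounting);
# on PyPy the referent survives until the next collection.  Tests that
# must pass on both implementations therefore collect explicitly --
# this is the behavior that test_support.gc_collect() wraps.
gc.collect()
print(ref() is None)  # True
```

This is exactly why the patch sprinkles `gc_collect()` after `del` statements throughout the test suite.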
+ generator_example.events = ([(0, 'call'), (2, 'line'), (-6, 'call'), @@ -282,11 +286,11 @@ self.compare_events(func.func_code.co_firstlineno, tracer.events, func.events) - def set_and_retrieve_none(self): + def test_set_and_retrieve_none(self): sys.settrace(None) assert sys.gettrace() is None - def set_and_retrieve_func(self): + def test_set_and_retrieve_func(self): def fn(*args): pass @@ -323,17 +327,24 @@ self.run_test(tighterloop_example) def test_13_genexp(self): - self.run_test(generator_example) - # issue1265: if the trace function contains a generator, - # and if the traced function contains another generator - # that is not completely exhausted, the trace stopped. - # Worse: the 'finally' clause was not invoked. - tracer = Tracer() - sys.settrace(tracer.traceWithGenexp) - generator_example() - sys.settrace(None) - self.compare_events(generator_example.__code__.co_firstlineno, - tracer.events, generator_example.events) + if self.using_gc: + test_support.gc_collect() + gc.enable() + try: + self.run_test(generator_example) + # issue1265: if the trace function contains a generator, + # and if the traced function contains another generator + # that is not completely exhausted, the trace stopped. + # Worse: the 'finally' clause was not invoked. 
+ tracer = Tracer() + sys.settrace(tracer.traceWithGenexp) + generator_example() + sys.settrace(None) + self.compare_events(generator_example.__code__.co_firstlineno, + tracer.events, generator_example.events) + finally: + if self.using_gc: + gc.disable() def test_14_onliner_if(self): def onliners(): diff --git a/lib-python/2.7/test/test_sysconfig.py b/lib-python/2.7/test/test_sysconfig.py --- a/lib-python/2.7/test/test_sysconfig.py +++ b/lib-python/2.7/test/test_sysconfig.py @@ -209,13 +209,22 @@ self.assertEqual(get_platform(), 'macosx-10.4-fat64') - for arch in ('ppc', 'i386', 'x86_64', 'ppc64'): + for arch in ('ppc', 'i386', 'ppc64', 'x86_64'): get_config_vars()['CFLAGS'] = ('-arch %s -isysroot ' '/Developer/SDKs/MacOSX10.4u.sdk ' '-fno-strict-aliasing -fno-common ' '-dynamic -DNDEBUG -g -O3'%(arch,)) self.assertEqual(get_platform(), 'macosx-10.4-%s'%(arch,)) + + # macosx with ARCHFLAGS set and empty _CONFIG_VARS + os.environ['ARCHFLAGS'] = '-arch i386' + sysconfig._CONFIG_VARS = None + + # this will attempt to recreate the _CONFIG_VARS based on environment + # variables; used to check a problem with the PyPy's _init_posix + # implementation; see: issue 705 + get_config_vars() # linux debian sarge os.name = 'posix' @@ -235,7 +244,7 @@ def test_get_scheme_names(self): wanted = ('nt', 'nt_user', 'os2', 'os2_home', 'osx_framework_user', - 'posix_home', 'posix_prefix', 'posix_user') + 'posix_home', 'posix_prefix', 'posix_user', 'pypy') self.assertEqual(get_scheme_names(), wanted) def test_symlink(self): diff --git a/lib-python/2.7/test/test_tarfile.py b/lib-python/2.7/test/test_tarfile.py --- a/lib-python/2.7/test/test_tarfile.py +++ b/lib-python/2.7/test/test_tarfile.py @@ -169,6 +169,7 @@ except tarfile.ReadError: self.fail("tarfile.open() failed on empty archive") self.assertListEqual(tar.getmembers(), []) + tar.close() def test_null_tarfile(self): # Test for issue6123: Allow opening empty archives. 
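Many of the `test_tarfile` hunks in this patch do nothing but add a `tar.close()`, or replace one-liners like `data = open(fn, "rb").read()` with an explicit `f.close()`. The reason is the same GC difference: without reference counting, an unreferenced file object keeps its OS descriptor open until some later collection. A hedged standalone sketch of the portable pattern (the temporary file is created here purely for illustration):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"hello")

# Fragile on PyPy: data = open(path, "rb").read() -- the file object is
# only closed whenever the GC gets around to it.  Portable form, in the
# try/finally style the patch uses (it targets Python 2 code that
# predates widespread with-statement use):
f = open(path, "rb")
try:
    data = f.read()
finally:
    f.close()
print(data)  # b'hello'
os.remove(path)
```

The `with` statement is the idiomatic modern equivalent; the patch keeps the explicit `close()` calls to minimize the diff against upstream CPython tests.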
@@ -207,16 +208,21 @@ fobj = open(self.tarname, "rb") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, os.path.abspath(fobj.name)) + tar.close() def test_no_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) self.assertRaises(AttributeError, getattr, fobj, "name") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, None) def test_empty_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) fobj.name = "" tar = tarfile.open(fileobj=fobj, mode=self.mode) @@ -515,6 +521,7 @@ self.tar = tarfile.open(self.tarname, mode=self.mode, encoding="iso8859-1") tarinfo = self.tar.getmember("pax/umlauts-ÄÖÜäöüß") self._test_member(tarinfo, size=7011, chksum=md5_regtype) + self.tar.close() class LongnameTest(ReadTest): @@ -675,6 +682,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.rmdir(path) @@ -692,6 +700,7 @@ tar.gettarinfo(target) tarinfo = tar.gettarinfo(link) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(target) os.remove(link) @@ -704,6 +713,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(path) @@ -722,6 +732,7 @@ tar.add(dstname) os.chdir(cwd) self.assertTrue(tar.getnames() == [], "added the archive to itself") + tar.close() def test_exclude(self): tempdir = os.path.join(TEMPDIR, "exclude") @@ -742,6 +753,7 @@ tar = tarfile.open(tmpname, "r") self.assertEqual(len(tar.getmembers()), 1) self.assertEqual(tar.getnames()[0], "empty_dir") + tar.close() finally: shutil.rmtree(tempdir) @@ -947,7 +959,9 @@ fobj.close() elif self.mode.endswith("bz2"): dec = bz2.BZ2Decompressor() - data = open(tmpname, 
"rb") + data = f.read() + f.close() data = dec.decompress(data) self.assertTrue(len(dec.unused_data) == 0, "found trailing data") @@ -1026,6 +1040,7 @@ "unable to read longname member") self.assertEqual(tarinfo.linkname, member.linkname, "unable to read longname member") + tar.close() def test_longname_1023(self): self._test(("longnam/" * 127) + "longnam") @@ -1118,6 +1133,7 @@ else: n = tar.getmembers()[0].name self.assertTrue(name == n, "PAX longname creation failed") + tar.close() def test_pax_global_header(self): pax_headers = { @@ -1146,6 +1162,7 @@ tarfile.PAX_NUMBER_FIELDS[key](val) except (TypeError, ValueError): self.fail("unable to convert pax header field") + tar.close() def test_pax_extended_header(self): # The fields from the pax header have priority over the @@ -1165,6 +1182,7 @@ self.assertEqual(t.pax_headers, pax_headers) self.assertEqual(t.name, "foo") self.assertEqual(t.uid, 123) + tar.close() class UstarUnicodeTest(unittest.TestCase): @@ -1208,6 +1226,7 @@ tarinfo.name = "foo" tarinfo.uname = u"äöü" self.assertRaises(UnicodeError, tar.addfile, tarinfo) + tar.close() def test_unicode_argument(self): tar = tarfile.open(tarname, "r", encoding="iso8859-1", errors="strict") @@ -1262,6 +1281,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="ascii", errors=handler) self.assertEqual(tar.getnames()[0], name) + tar.close() self.assertRaises(UnicodeError, tarfile.open, tmpname, encoding="ascii", errors="strict") @@ -1274,6 +1294,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="iso8859-1", errors="utf-8") self.assertEqual(tar.getnames()[0], "äöü/" + u"€".encode("utf8")) + tar.close() class AppendTest(unittest.TestCase): @@ -1301,6 +1322,7 @@ def _test(self, names=["bar"], fileobj=None): tar = tarfile.open(self.tarname, fileobj=fileobj) self.assertEqual(tar.getnames(), names) + tar.close() def test_non_existing(self): self._add_testfile() @@ -1319,7 +1341,9 @@ def test_fileobj(self): self._create_testtar() - data = 
open(self.tarname).read() + f = open(self.tarname) + data = f.read() + f.close() fobj = StringIO.StringIO(data) self._add_testfile(fobj) fobj.seek(0) @@ -1345,7 +1369,9 @@ # Append mode is supposed to fail if the tarfile to append to # does not end with a zero block. def _test_error(self, data): - open(self.tarname, "wb").write(data) + f = open(self.tarname, "wb") + f.write(data) + f.close() self.assertRaises(tarfile.ReadError, self._add_testfile) def test_null(self): diff --git a/lib-python/2.7/test/test_tempfile.py b/lib-python/2.7/test/test_tempfile.py --- a/lib-python/2.7/test/test_tempfile.py +++ b/lib-python/2.7/test/test_tempfile.py @@ -23,8 +23,8 @@ # TEST_FILES may need to be tweaked for systems depending on the maximum # number of files that can be opened at one time (see ulimit -n) -if sys.platform in ('openbsd3', 'openbsd4'): - TEST_FILES = 48 +if sys.platform.startswith("openbsd"): + TEST_FILES = 64 # ulimit -n defaults to 128 for normal users else: TEST_FILES = 100 @@ -244,6 +244,7 @@ dir = tempfile.mkdtemp() try: self.do_create(dir=dir).write("blat") + test_support.gc_collect() finally: os.rmdir(dir) @@ -528,12 +529,15 @@ self.do_create(suf="b") self.do_create(pre="a", suf="b") self.do_create(pre="aa", suf=".txt") + test_support.gc_collect() def test_many(self): # mktemp can choose many usable file names (stochastic) extant = range(TEST_FILES) for i in extant: extant[i] = self.do_create(pre="aa") + del extant + test_support.gc_collect() ## def test_warning(self): ## # mktemp issues a warning when used diff --git a/lib-python/2.7/test/test_thread.py b/lib-python/2.7/test/test_thread.py --- a/lib-python/2.7/test/test_thread.py +++ b/lib-python/2.7/test/test_thread.py @@ -128,6 +128,7 @@ del task while not done: time.sleep(0.01) + test_support.gc_collect() self.assertEqual(thread._count(), orig) diff --git a/lib-python/2.7/test/test_threading.py b/lib-python/2.7/test/test_threading.py --- a/lib-python/2.7/test/test_threading.py +++ 
b/lib-python/2.7/test/test_threading.py @@ -161,6 +161,7 @@ # PyThreadState_SetAsyncExc() is a CPython-only gimmick, not (currently) # exposed at the Python level. This test relies on ctypes to get at it. + @test.test_support.cpython_only def test_PyThreadState_SetAsyncExc(self): try: import ctypes @@ -266,6 +267,7 @@ finally: threading._start_new_thread = _start_new_thread + @test.test_support.cpython_only def test_finalize_runnning_thread(self): # Issue 1402: the PyGILState_Ensure / _Release functions may be called # very late on python exit: on deallocation of a running thread for @@ -383,6 +385,7 @@ finally: sys.setcheckinterval(old_interval) + @test.test_support.cpython_only def test_no_refcycle_through_target(self): class RunSelfFunction(object): def __init__(self, should_raise): @@ -425,6 +428,9 @@ def joiningfunc(mainthread): mainthread.join() print 'end of thread' + # stdout is fully buffered because not a tty, we have to flush + # before exit. + sys.stdout.flush() \n""" + script p = subprocess.Popen([sys.executable, "-c", script], stdout=subprocess.PIPE) diff --git a/lib-python/2.7/test/test_threading_local.py b/lib-python/2.7/test/test_threading_local.py --- a/lib-python/2.7/test/test_threading_local.py +++ b/lib-python/2.7/test/test_threading_local.py @@ -173,8 +173,9 @@ obj = cls() obj.x = 5 self.assertEqual(obj.__dict__, {'x': 5}) - with self.assertRaises(AttributeError): - obj.__dict__ = {} + if test_support.check_impl_detail(): + with self.assertRaises(AttributeError): + obj.__dict__ = {} with self.assertRaises(AttributeError): del obj.__dict__ diff --git a/lib-python/2.7/test/test_traceback.py b/lib-python/2.7/test/test_traceback.py --- a/lib-python/2.7/test/test_traceback.py +++ b/lib-python/2.7/test/test_traceback.py @@ -5,7 +5,8 @@ import sys import unittest from imp import reload -from test.test_support import run_unittest, is_jython, Error +from test.test_support import run_unittest, Error +from test.test_support import impl_detail, 
check_impl_detail import traceback @@ -49,10 +50,8 @@ self.assertTrue(err[2].count('\n') == 1) # and no additional newline self.assertTrue(err[1].find("+") == err[2].find("^")) # in the right place + @impl_detail("other implementations may add a caret (why shouldn't they?)") def test_nocaret(self): - if is_jython: - # jython adds a caret in this case (why shouldn't it?) - return err = self.get_exception_format(self.syntax_error_without_caret, SyntaxError) self.assertTrue(len(err) == 3) @@ -63,8 +62,11 @@ IndentationError) self.assertTrue(len(err) == 4) self.assertTrue(err[1].strip() == "print 2") - self.assertIn("^", err[2]) - self.assertTrue(err[1].find("2") == err[2].find("^")) + if check_impl_detail(): + # on CPython, there is a "^" at the end of the line + # on PyPy, there is a "^" too, but at the start, more logically + self.assertIn("^", err[2]) + self.assertTrue(err[1].find("2") == err[2].find("^")) def test_bug737473(self): import os, tempfile, time @@ -74,7 +76,8 @@ try: sys.path.insert(0, testdir) testfile = os.path.join(testdir, 'test_bug737473.py') - print >> open(testfile, 'w'), """ + with open(testfile, 'w') as f: + print >> f, """ def test(): raise ValueError""" @@ -96,7 +99,8 @@ # three seconds are needed for this test to pass reliably :-( time.sleep(4) - print >> open(testfile, 'w'), """ + with open(testfile, 'w') as f: + print >> f, """ def test(): raise NotImplementedError""" reload(test_bug737473) diff --git a/lib-python/2.7/test/test_types.py b/lib-python/2.7/test/test_types.py --- a/lib-python/2.7/test/test_types.py +++ b/lib-python/2.7/test/test_types.py @@ -1,7 +1,8 @@ # Python test set -- part 6, built-in types from test.test_support import run_unittest, have_unicode, run_with_locale, \ - check_py3k_warnings + check_py3k_warnings, \ + impl_detail, check_impl_detail import unittest import sys import locale @@ -289,9 +290,14 @@ # array.array() returns an object that does not implement a char buffer, # something which int() uses for 
conversion. import array - try: int(buffer(array.array('c'))) + try: int(buffer(array.array('c', '5'))) except TypeError: pass - else: self.fail("char buffer (at C level) not working") + else: + if check_impl_detail(): + self.fail("char buffer (at C level) not working") + #else: + # it works on PyPy, which does not have the distinction + # between char buffer and binary buffer. XXX fine enough? def test_int__format__(self): def test(i, format_spec, result): @@ -741,6 +747,7 @@ for code in 'xXobns': self.assertRaises(ValueError, format, 0, ',' + code) + @impl_detail("the types' internal size attributes are CPython-only") def test_internal_sizes(self): self.assertGreater(object.__basicsize__, 0) self.assertGreater(tuple.__itemsize__, 0) diff --git a/lib-python/2.7/test/test_unicode.py b/lib-python/2.7/test/test_unicode.py --- a/lib-python/2.7/test/test_unicode.py +++ b/lib-python/2.7/test/test_unicode.py @@ -448,10 +448,11 @@ meth('\xff') with self.assertRaises(TypeError) as cm: meth(['f']) - exc = str(cm.exception) - self.assertIn('unicode', exc) - self.assertIn('str', exc) - self.assertIn('tuple', exc) + if test_support.check_impl_detail(): + exc = str(cm.exception) + self.assertIn('unicode', exc) + self.assertIn('str', exc) + self.assertIn('tuple', exc) @test_support.run_with_locale('LC_ALL', 'de_DE', 'fr_FR') def test_format_float(self): @@ -1062,7 +1063,8 @@ # to take a 64-bit long, this test should apply to all platforms. if sys.maxint > (1 << 32) or struct.calcsize('P') != 4: return - self.assertRaises(OverflowError, u't\tt\t'.expandtabs, sys.maxint) + self.assertRaises((OverflowError, MemoryError), + u't\tt\t'.expandtabs, sys.maxint) def test__format__(self): def test(value, format, expected): diff --git a/lib-python/2.7/test/test_unicodedata.py b/lib-python/2.7/test/test_unicodedata.py --- a/lib-python/2.7/test/test_unicodedata.py +++ b/lib-python/2.7/test/test_unicodedata.py @@ -233,10 +233,12 @@ # been loaded in this process. 
popen = subprocess.Popen(args, stderr=subprocess.PIPE) popen.wait() - self.assertEqual(popen.returncode, 1) - error = "SyntaxError: (unicode error) \N escapes not supported " \ - "(can't load unicodedata module)" - self.assertIn(error, popen.stderr.read()) + self.assertIn(popen.returncode, [0, 1]) # at least it did not segfault + if test.test_support.check_impl_detail(): + self.assertEqual(popen.returncode, 1) + error = "SyntaxError: (unicode error) \N escapes not supported " \ + "(can't load unicodedata module)" + self.assertIn(error, popen.stderr.read()) def test_decimal_numeric_consistent(self): # Test that decimal and numeric are consistent, diff --git a/lib-python/2.7/test/test_unpack.py b/lib-python/2.7/test/test_unpack.py --- a/lib-python/2.7/test/test_unpack.py +++ b/lib-python/2.7/test/test_unpack.py @@ -62,14 +62,14 @@ >>> a, b = t Traceback (most recent call last): ... - ValueError: too many values to unpack + ValueError: expected length 2, got 3 Unpacking tuple of wrong size >>> a, b = l Traceback (most recent call last): ... 
- ValueError: too many values to unpack + ValueError: expected length 2, got 3 Unpacking sequence too short diff --git a/lib-python/2.7/test/test_urllib2.py b/lib-python/2.7/test/test_urllib2.py --- a/lib-python/2.7/test/test_urllib2.py +++ b/lib-python/2.7/test/test_urllib2.py @@ -307,6 +307,9 @@ def getresponse(self): return MockHTTPResponse(MockFile(), {}, 200, "OK") + def close(self): + pass + class MockHandler: # useful for testing handler machinery # see add_ordered_mock_handlers() docstring diff --git a/lib-python/2.7/test/test_warnings.py b/lib-python/2.7/test/test_warnings.py --- a/lib-python/2.7/test/test_warnings.py +++ b/lib-python/2.7/test/test_warnings.py @@ -355,7 +355,8 @@ # test_support.import_fresh_module utility function def test_accelerated(self): self.assertFalse(original_warnings is self.module) - self.assertFalse(hasattr(self.module.warn, 'func_code')) + self.assertFalse(hasattr(self.module.warn, 'func_code') and + hasattr(self.module.warn.func_code, 'co_filename')) class PyWarnTests(BaseTest, WarnTests): module = py_warnings @@ -364,7 +365,8 @@ # test_support.import_fresh_module utility function def test_pure_python(self): self.assertFalse(original_warnings is self.module) - self.assertTrue(hasattr(self.module.warn, 'func_code')) + self.assertTrue(hasattr(self.module.warn, 'func_code') and + hasattr(self.module.warn.func_code, 'co_filename')) class WCmdLineTests(unittest.TestCase): diff --git a/lib-python/2.7/test/test_weakref.py b/lib-python/2.7/test/test_weakref.py --- a/lib-python/2.7/test/test_weakref.py +++ b/lib-python/2.7/test/test_weakref.py @@ -1,4 +1,3 @@ -import gc import sys import unittest import UserList @@ -6,6 +5,7 @@ import operator from test import test_support +from test.test_support import gc_collect # Used in ReferencesTestCase.test_ref_created_during_del() . 
ref_from_del = None @@ -70,6 +70,7 @@ ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del o + gc_collect() self.assertTrue(ref1() is None, "expected reference to be invalidated") self.assertTrue(ref2() is None, @@ -101,13 +102,16 @@ ref1 = weakref.proxy(o, self.callback) ref2 = weakref.proxy(o, self.callback) del o + gc_collect() def check(proxy): proxy.bar self.assertRaises(weakref.ReferenceError, check, ref1) self.assertRaises(weakref.ReferenceError, check, ref2) - self.assertRaises(weakref.ReferenceError, bool, weakref.proxy(C())) + ref3 = weakref.proxy(C()) + gc_collect() + self.assertRaises(weakref.ReferenceError, bool, ref3) self.assertTrue(self.cbcalled == 2) def check_basic_ref(self, factory): @@ -124,6 +128,7 @@ o = factory() ref = weakref.ref(o, self.callback) del o + gc_collect() self.assertTrue(self.cbcalled == 1, "callback did not properly set 'cbcalled'") self.assertTrue(ref() is None, @@ -148,6 +153,7 @@ self.assertTrue(weakref.getweakrefcount(o) == 2, "wrong weak ref count for object") del proxy + gc_collect() self.assertTrue(weakref.getweakrefcount(o) == 1, "wrong weak ref count for object after deleting proxy") @@ -325,6 +331,7 @@ "got wrong number of weak reference objects") del ref1, ref2, proxy1, proxy2 + gc_collect() self.assertTrue(weakref.getweakrefcount(o) == 0, "weak reference objects not unlinked from" " referent when discarded.") @@ -338,6 +345,7 @@ ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del ref1 + gc_collect() self.assertTrue(weakref.getweakrefs(o) == [ref2], "list of refs does not match") @@ -345,10 +353,12 @@ ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del ref2 + gc_collect() self.assertTrue(weakref.getweakrefs(o) == [ref1], "list of refs does not match") del ref1 + gc_collect() self.assertTrue(weakref.getweakrefs(o) == [], "list of refs not cleared") @@ -400,13 +410,11 @@ # when the second attempt to remove the instance from the "list # of all 
objects" occurs. - import gc - class C(object): pass c = C() - wr = weakref.ref(c, lambda ignore: gc.collect()) + wr = weakref.ref(c, lambda ignore: gc_collect()) del c # There endeth the first part. It gets worse. @@ -414,7 +422,7 @@ c1 = C() c1.i = C() - wr = weakref.ref(c1.i, lambda ignore: gc.collect()) + wr = weakref.ref(c1.i, lambda ignore: gc_collect()) c2 = C() c2.c1 = c1 @@ -430,8 +438,6 @@ del c2 def test_callback_in_cycle_1(self): - import gc - class J(object): pass @@ -467,11 +473,9 @@ # search II.__mro__, but that's NULL. The result was a segfault in # a release build, and an assert failure in a debug build. del I, J, II - gc.collect() + gc_collect() def test_callback_in_cycle_2(self): - import gc - # This is just like test_callback_in_cycle_1, except that II is an # old-style class. The symptom is different then: an instance of an # old-style class looks in its own __dict__ first. 'J' happens to @@ -496,11 +500,9 @@ I.wr = weakref.ref(J, I.acallback) del I, J, II - gc.collect() + gc_collect() def test_callback_in_cycle_3(self): - import gc - # This one broke the first patch that fixed the last two. In this # case, the objects reachable from the callback aren't also reachable # from the object (c1) *triggering* the callback: you can get to @@ -520,11 +522,9 @@ c2.wr = weakref.ref(c1, c2.cb) del c1, c2 - gc.collect() + gc_collect() def test_callback_in_cycle_4(self): - import gc - # Like test_callback_in_cycle_3, except c2 and c1 have different # classes. c2's class (C) isn't reachable from c1 then, so protecting # objects reachable from the dying object (c1) isn't enough to stop @@ -548,11 +548,9 @@ c2.wr = weakref.ref(c1, c2.cb) del c1, c2, C, D - gc.collect() + gc_collect() def test_callback_in_cycle_resurrection(self): - import gc - # Do something nasty in a weakref callback: resurrect objects # from dead cycles. 
For this to be attempted, the weakref and # its callback must also be part of the cyclic trash (else the @@ -583,7 +581,7 @@ del c1, c2, C # make them all trash self.assertEqual(alist, []) # del isn't enough to reclaim anything - gc.collect() + gc_collect() # c1.wr and c2.wr were part of the cyclic trash, so should have # been cleared without their callbacks executing. OTOH, the weakref # to C is bound to a function local (wr), and wasn't trash, so that @@ -593,12 +591,10 @@ self.assertEqual(wr(), None) del alist[:] - gc.collect() + gc_collect() self.assertEqual(alist, []) def test_callbacks_on_callback(self): - import gc - # Set up weakref callbacks *on* weakref callbacks. alist = [] def safe_callback(ignore): @@ -626,12 +622,12 @@ del callback, c, d, C self.assertEqual(alist, []) # del isn't enough to clean up cycles - gc.collect() + gc_collect() self.assertEqual(alist, ["safe_callback called"]) self.assertEqual(external_wr(), None) del alist[:] - gc.collect() + gc_collect() self.assertEqual(alist, []) def test_gc_during_ref_creation(self): @@ -641,9 +637,11 @@ self.check_gc_during_creation(weakref.proxy) def check_gc_during_creation(self, makeref): - thresholds = gc.get_threshold() - gc.set_threshold(1, 1, 1) - gc.collect() + if test_support.check_impl_detail(): + import gc + thresholds = gc.get_threshold() + gc.set_threshold(1, 1, 1) + gc_collect() class A: pass @@ -663,7 +661,8 @@ weakref.ref(referenced, callback) finally: - gc.set_threshold(*thresholds) + if test_support.check_impl_detail(): + gc.set_threshold(*thresholds) def test_ref_created_during_del(self): # Bug #1377858 @@ -683,7 +682,7 @@ r = weakref.ref(Exception) self.assertRaises(TypeError, r.__init__, 0, 0, 0, 0, 0) # No exception should be raised here - gc.collect() + gc_collect() def test_classes(self): # Check that both old-style classes and new-style classes @@ -696,12 +695,12 @@ weakref.ref(int) a = weakref.ref(A, l.append) A = None - gc.collect() + gc_collect() self.assertEqual(a(), None) 
self.assertEqual(l, [a]) b = weakref.ref(B, l.append) B = None - gc.collect() + gc_collect() self.assertEqual(b(), None) self.assertEqual(l, [a, b]) @@ -722,6 +721,7 @@ self.assertTrue(mr.called) self.assertEqual(mr.value, 24) del o + gc_collect() self.assertTrue(mr() is None) self.assertTrue(mr.called) @@ -738,9 +738,11 @@ self.assertEqual(weakref.getweakrefcount(o), 3) refs = weakref.getweakrefs(o) self.assertEqual(len(refs), 3) - self.assertTrue(r2 is refs[0]) - self.assertIn(r1, refs[1:]) - self.assertIn(r3, refs[1:]) + assert set(refs) == set((r1, r2, r3)) + if test_support.check_impl_detail(): + self.assertTrue(r2 is refs[0]) + self.assertIn(r1, refs[1:]) + self.assertIn(r3, refs[1:]) def test_subclass_refs_dont_conflate_callbacks(self): class MyRef(weakref.ref): @@ -839,15 +841,18 @@ del items1, items2 self.assertTrue(len(dict) == self.COUNT) del objects[0] + gc_collect() self.assertTrue(len(dict) == (self.COUNT - 1), "deleting object did not cause dictionary update") del objects, o + gc_collect() self.assertTrue(len(dict) == 0, "deleting the values did not clear the dictionary") # regression on SF bug #447152: dict = weakref.WeakValueDictionary() self.assertRaises(KeyError, dict.__getitem__, 1) dict[2] = C() + gc_collect() self.assertRaises(KeyError, dict.__getitem__, 2) def test_weak_keys(self): @@ -868,9 +873,11 @@ del items1, items2 self.assertTrue(len(dict) == self.COUNT) del objects[0] + gc_collect() self.assertTrue(len(dict) == (self.COUNT - 1), "deleting object did not cause dictionary update") del objects, o + gc_collect() self.assertTrue(len(dict) == 0, "deleting the keys did not clear the dictionary") o = Object(42) @@ -986,13 +993,13 @@ self.assertTrue(len(weakdict) == 2) k, v = weakdict.popitem() self.assertTrue(len(weakdict) == 1) - if k is key1: + if k == key1: self.assertTrue(v is value1) else: self.assertTrue(v is value2) k, v = weakdict.popitem() self.assertTrue(len(weakdict) == 0) - if k is key1: + if k == key1: self.assertTrue(v is 
value1) else: self.assertTrue(v is value2) @@ -1137,6 +1144,7 @@ for o in objs: count += 1 del d[o] + gc_collect() self.assertEqual(len(d), 0) self.assertEqual(count, 2) @@ -1177,6 +1185,7 @@ >>> o is o2 True >>> del o, o2 +>>> gc_collect() >>> print r() None @@ -1229,6 +1238,7 @@ >>> id2obj(a_id) is a True >>> del a +>>> gc_collect() >>> try: ... id2obj(a_id) ... except KeyError: diff --git a/lib-python/2.7/test/test_weakset.py b/lib-python/2.7/test/test_weakset.py --- a/lib-python/2.7/test/test_weakset.py +++ b/lib-python/2.7/test/test_weakset.py @@ -57,6 +57,7 @@ self.assertEqual(len(self.s), len(self.d)) self.assertEqual(len(self.fs), 1) del self.obj + test_support.gc_collect() self.assertEqual(len(self.fs), 0) def test_contains(self): @@ -66,6 +67,7 @@ self.assertNotIn(1, self.s) self.assertIn(self.obj, self.fs) del self.obj + test_support.gc_collect() self.assertNotIn(SomeClass('F'), self.fs) def test_union(self): @@ -204,6 +206,7 @@ self.assertEqual(self.s, dup) self.assertRaises(TypeError, self.s.add, []) self.fs.add(Foo()) + test_support.gc_collect() self.assertTrue(len(self.fs) == 1) self.fs.add(self.obj) self.assertTrue(len(self.fs) == 1) @@ -330,10 +333,11 @@ next(it) # Trigger internal iteration # Destroy an item del items[-1] - gc.collect() # just in case + test_support.gc_collect() # We have removed either the first consumed items, or another one self.assertIn(len(list(it)), [len(items), len(items) - 1]) del it + test_support.gc_collect() # The removal has been committed self.assertEqual(len(s), len(items)) diff --git a/lib-python/2.7/test/test_xml_etree.py b/lib-python/2.7/test/test_xml_etree.py --- a/lib-python/2.7/test/test_xml_etree.py +++ b/lib-python/2.7/test/test_xml_etree.py @@ -1633,10 +1633,10 @@ Check reference leak. >>> xmltoolkit63() - >>> count = sys.getrefcount(None) + >>> count = sys.getrefcount(None) #doctest: +SKIP >>> for i in range(1000): ... 
xmltoolkit63() - >>> sys.getrefcount(None) - count + >>> sys.getrefcount(None) - count #doctest: +SKIP 0 """ diff --git a/lib-python/2.7/test/test_xmlrpc.py b/lib-python/2.7/test/test_xmlrpc.py --- a/lib-python/2.7/test/test_xmlrpc.py +++ b/lib-python/2.7/test/test_xmlrpc.py @@ -308,7 +308,7 @@ global ADDR, PORT, URL ADDR, PORT = serv.socket.getsockname() #connect to IP address directly. This avoids socket.create_connection() - #trying to connect to "localhost" using all address families, which + #trying to connect to to "localhost" using all address families, which #causes slowdown e.g. on vista which supports AF_INET6. The server listens #on AF_INET only. URL = "http://%s:%d"%(ADDR, PORT) @@ -367,7 +367,7 @@ global ADDR, PORT, URL ADDR, PORT = serv.socket.getsockname() #connect to IP address directly. This avoids socket.create_connection() - #trying to connect to "localhost" using all address families, which + #trying to connect to to "localhost" using all address families, which #causes slowdown e.g. on vista which supports AF_INET6. The server listens #on AF_INET only. URL = "http://%s:%d"%(ADDR, PORT) @@ -435,6 +435,7 @@ def tearDown(self): # wait on the server thread to terminate + test_support.gc_collect() # to close the active connections self.evt.wait(10) # disable traceback reporting @@ -472,9 +473,6 @@ # protocol error; provide additional information in test output self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) - def test_unicode_host(self): - server = xmlrpclib.ServerProxy(u"http://%s:%d/RPC2"%(ADDR, PORT)) - self.assertEqual(server.add("a", u"\xe9"), u"a\xe9") # [ch] The test 404 is causing lots of false alarms. def XXXtest_404(self): @@ -589,12 +587,6 @@ # This avoids waiting for the socket timeout. self.test_simple1() - def test_partial_post(self): - # Check that a partial POST doesn't make the server loop: issue #14001. 
- conn = httplib.HTTPConnection(ADDR, PORT) - conn.request('POST', '/RPC2 HTTP/1.0\r\nContent-Length: 100\r\n\r\nbye') - conn.close() - class MultiPathServerTestCase(BaseServerTestCase): threadFunc = staticmethod(http_multi_server) request_count = 2 diff --git a/lib-python/2.7/test/test_zlib.py b/lib-python/2.7/test/test_zlib.py --- a/lib-python/2.7/test/test_zlib.py +++ b/lib-python/2.7/test/test_zlib.py @@ -1,6 +1,7 @@ import unittest from test.test_support import TESTFN, run_unittest, import_module, unlink, requires import binascii +import os import random from test.test_support import precisionbigmemtest, _1G, _4G import sys @@ -99,14 +100,7 @@ class BaseCompressTestCase(object): def check_big_compress_buffer(self, size, compress_func): - _1M = 1024 * 1024 - fmt = "%%0%dx" % (2 * _1M) - # Generate 10MB worth of random, and expand it by repeating it. - # The assumption is that zlib's memory is not big enough to exploit - # such spread out redundancy. - data = ''.join([binascii.a2b_hex(fmt % random.getrandbits(8 * _1M)) - for i in range(10)]) - data = data * (size // len(data) + 1) + data = os.urandom(size) try: compress_func(data) finally: diff --git a/lib-python/2.7/trace.py b/lib-python/2.7/trace.py --- a/lib-python/2.7/trace.py +++ b/lib-python/2.7/trace.py @@ -559,6 +559,10 @@ if len(funcs) == 1: dicts = [d for d in gc.get_referrers(funcs[0]) if isinstance(d, dict)] + if len(dicts) == 0: + # PyPy may store functions directly on the class + # (more exactly: the container is not a Python object) + dicts = funcs if len(dicts) == 1: classes = [c for c in gc.get_referrers(dicts[0]) if hasattr(c, "__bases__")] diff --git a/lib-python/2.7/urllib2.py b/lib-python/2.7/urllib2.py --- a/lib-python/2.7/urllib2.py +++ b/lib-python/2.7/urllib2.py @@ -1171,6 +1171,7 @@ except TypeError: #buffering kw not supported r = h.getresponse() except socket.error, err: # XXX what error? 
+ h.close() raise URLError(err) # Pick apart the HTTPResponse object to get the addinfourl diff --git a/lib-python/2.7/uuid.py b/lib-python/2.7/uuid.py --- a/lib-python/2.7/uuid.py +++ b/lib-python/2.7/uuid.py @@ -406,8 +406,12 @@ continue if hasattr(lib, 'uuid_generate_random'): _uuid_generate_random = lib.uuid_generate_random + _uuid_generate_random.argtypes = [ctypes.c_char * 16] + _uuid_generate_random.restype = None if hasattr(lib, 'uuid_generate_time'): _uuid_generate_time = lib.uuid_generate_time + _uuid_generate_time.argtypes = [ctypes.c_char * 16] + _uuid_generate_time.restype = None # The uuid_generate_* functions are broken on MacOS X 10.5, as noted # in issue #8621 the function generates the same sequence of values @@ -436,6 +440,9 @@ lib = None _UuidCreate = getattr(lib, 'UuidCreateSequential', getattr(lib, 'UuidCreate', None)) + if _UuidCreate is not None: + _UuidCreate.argtypes = [ctypes.c_char * 16] + _UuidCreate.restype = ctypes.c_int except: pass diff --git a/lib-python/3.2/__future__.py b/lib-python/3.2/__future__.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/__future__.py @@ -0,0 +1,134 @@ +"""Record of phased-in incompatible language changes. + +Each line is of the form: + + FeatureName = "_Feature(" OptionalRelease "," MandatoryRelease "," + CompilerFlag ")" + +where, normally, OptionalRelease < MandatoryRelease, and both are 5-tuples +of the same form as sys.version_info: + + (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int + PY_MINOR_VERSION, # the 1; an int + PY_MICRO_VERSION, # the 0; an int + PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string + PY_RELEASE_SERIAL # the 3; an int + ) + +OptionalRelease records the first release in which + + from __future__ import FeatureName + +was accepted. + +In the case of MandatoryReleases that have not yet occurred, +MandatoryRelease predicts the release in which the feature will become part +of the language. 
+ +Else MandatoryRelease records when the feature became part of the language; +in releases at or after that, modules no longer need + + from __future__ import FeatureName + +to use the feature in question, but may continue to use such imports. + +MandatoryRelease may also be None, meaning that a planned feature got +dropped. + +Instances of class _Feature have two corresponding methods, +.getOptionalRelease() and .getMandatoryRelease(). + +CompilerFlag is the (bitfield) flag that should be passed in the fourth +argument to the builtin function compile() to enable the feature in +dynamically compiled code. This flag is stored in the .compiler_flag +attribute on _Future instances. These values must match the appropriate +#defines of CO_xxx flags in Include/compile.h. + +No feature line is ever to be deleted from this file. +""" + +all_feature_names = [ + "nested_scopes", + "generators", + "division", + "absolute_import", + "with_statement", + "print_function", + "unicode_literals", + "barry_as_FLUFL", +] + +__all__ = ["all_feature_names"] + all_feature_names + +# The CO_xxx symbols are defined here under the same names used by +# compile.h, so that an editor search will find them here. However, +# they're not exported in __all__, because they don't really belong to +# this module. 
+CO_NESTED = 0x0010 # nested_scopes +CO_GENERATOR_ALLOWED = 0 # generators (obsolete, was 0x1000) +CO_FUTURE_DIVISION = 0x2000 # division +CO_FUTURE_ABSOLUTE_IMPORT = 0x4000 # perform absolute imports by default +CO_FUTURE_WITH_STATEMENT = 0x8000 # with statement +CO_FUTURE_PRINT_FUNCTION = 0x10000 # print function +CO_FUTURE_UNICODE_LITERALS = 0x20000 # unicode string literals +CO_FUTURE_BARRY_AS_BDFL = 0x40000 + +class _Feature: + def __init__(self, optionalRelease, mandatoryRelease, compiler_flag): + self.optional = optionalRelease + self.mandatory = mandatoryRelease + self.compiler_flag = compiler_flag + + def getOptionalRelease(self): + """Return first release in which this feature was recognized. + + This is a 5-tuple, of the same form as sys.version_info. + """ + + return self.optional + + def getMandatoryRelease(self): + """Return release in which this feature will become mandatory. + + This is a 5-tuple, of the same form as sys.version_info, or, if + the feature was dropped, is None. 
+ """ + + return self.mandatory + + def __repr__(self): + return "_Feature" + repr((self.optional, + self.mandatory, + self.compiler_flag)) + +nested_scopes = _Feature((2, 1, 0, "beta", 1), + (2, 2, 0, "alpha", 0), + CO_NESTED) + +generators = _Feature((2, 2, 0, "alpha", 1), + (2, 3, 0, "final", 0), + CO_GENERATOR_ALLOWED) + +division = _Feature((2, 2, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_DIVISION) + +absolute_import = _Feature((2, 5, 0, "alpha", 1), + (2, 7, 0, "alpha", 0), + CO_FUTURE_ABSOLUTE_IMPORT) + +with_statement = _Feature((2, 5, 0, "alpha", 1), + (2, 6, 0, "alpha", 0), + CO_FUTURE_WITH_STATEMENT) + +print_function = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_PRINT_FUNCTION) + +unicode_literals = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_UNICODE_LITERALS) + +barry_as_FLUFL = _Feature((3, 1, 0, "alpha", 2), + (3, 9, 0, "alpha", 0), + CO_FUTURE_BARRY_AS_BDFL) diff --git a/lib-python/3.2/__phello__.foo.py b/lib-python/3.2/__phello__.foo.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/__phello__.foo.py @@ -0,0 +1,1 @@ +# This file exists as a helper for the test.test_frozen module. diff --git a/lib-python/3.2/_abcoll.py b/lib-python/3.2/_abcoll.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_abcoll.py @@ -0,0 +1,623 @@ +# Copyright 2007 Google, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +"""Abstract Base Classes (ABCs) for collections, according to PEP 3119. + +DON'T USE THIS MODULE DIRECTLY! The classes here should be imported +via collections; they are defined here only to alleviate certain +bootstrapping issues. Unit tests are in test_collections. 
+""" + +from abc import ABCMeta, abstractmethod +import sys + +__all__ = ["Hashable", "Iterable", "Iterator", + "Sized", "Container", "Callable", + "Set", "MutableSet", + "Mapping", "MutableMapping", + "MappingView", "KeysView", "ItemsView", "ValuesView", + "Sequence", "MutableSequence", + "ByteString", + ] + + +### collection related types which are not exposed through builtin ### +## iterators ## +bytes_iterator = type(iter(b'')) +bytearray_iterator = type(iter(bytearray())) +#callable_iterator = ??? +dict_keyiterator = type(iter({}.keys())) +dict_valueiterator = type(iter({}.values())) +dict_itemiterator = type(iter({}.items())) +list_iterator = type(iter([])) +list_reverseiterator = type(iter(reversed([]))) +range_iterator = type(iter(range(0))) +set_iterator = type(iter(set())) +str_iterator = type(iter("")) +tuple_iterator = type(iter(())) +zip_iterator = type(iter(zip())) +## views ## +dict_keys = type({}.keys()) +dict_values = type({}.values()) +dict_items = type({}.items()) +## misc ## +dict_proxy = type(type.__dict__) + + +### ONE-TRICK PONIES ### + +class Hashable(metaclass=ABCMeta): + + @abstractmethod + def __hash__(self): + return 0 + + @classmethod + def __subclasshook__(cls, C): + if cls is Hashable: + for B in C.__mro__: + if "__hash__" in B.__dict__: + if B.__dict__["__hash__"]: + return True + break + return NotImplemented + + +class Iterable(metaclass=ABCMeta): + + @abstractmethod + def __iter__(self): + while False: + yield None + + @classmethod + def __subclasshook__(cls, C): + if cls is Iterable: + if any("__iter__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Iterator(Iterable): + + @abstractmethod + def __next__(self): + raise StopIteration + + def __iter__(self): + return self + + @classmethod + def __subclasshook__(cls, C): + if cls is Iterator: + if (any("__next__" in B.__dict__ for B in C.__mro__) and + any("__iter__" in B.__dict__ for B in C.__mro__)): + return True + return NotImplemented + 
+Iterator.register(bytes_iterator) +Iterator.register(bytearray_iterator) +#Iterator.register(callable_iterator) +Iterator.register(dict_keyiterator) +Iterator.register(dict_valueiterator) +Iterator.register(dict_itemiterator) +Iterator.register(list_iterator) +Iterator.register(list_reverseiterator) +Iterator.register(range_iterator) +Iterator.register(set_iterator) +Iterator.register(str_iterator) +Iterator.register(tuple_iterator) +Iterator.register(zip_iterator) + +class Sized(metaclass=ABCMeta): + + @abstractmethod + def __len__(self): + return 0 + + @classmethod + def __subclasshook__(cls, C): + if cls is Sized: + if any("__len__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Container(metaclass=ABCMeta): + + @abstractmethod + def __contains__(self, x): + return False + + @classmethod + def __subclasshook__(cls, C): + if cls is Container: + if any("__contains__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Callable(metaclass=ABCMeta): + + @abstractmethod + def __call__(self, *args, **kwds): + return False + + @classmethod + def __subclasshook__(cls, C): + if cls is Callable: + if any("__call__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +### SETS ### + + +class Set(Sized, Iterable, Container): + + """A set is a finite, iterable container. + + This class provides concrete generic implementations of all + methods except for __contains__, __iter__ and __len__. + + To override the comparisons (presumably for speed, as the + semantics are fixed), all you have to do is redefine __le__ and + then the other operations will automatically follow suit. 
+ """ + + def __le__(self, other): + if not isinstance(other, Set): + return NotImplemented + if len(self) > len(other): + return False + for elem in self: + if elem not in other: + return False + return True + + def __lt__(self, other): + if not isinstance(other, Set): + return NotImplemented + return len(self) < len(other) and self.__le__(other) + + def __gt__(self, other): + if not isinstance(other, Set): + return NotImplemented + return other < self + + def __ge__(self, other): + if not isinstance(other, Set): + return NotImplemented + return other <= self + + def __eq__(self, other): + if not isinstance(other, Set): + return NotImplemented + return len(self) == len(other) and self.__le__(other) + + def __ne__(self, other): + return not (self == other) + + @classmethod + def _from_iterable(cls, it): + '''Construct an instance of the class from any iterable input. + + Must override this method if the class constructor signature + does not accept an iterable for an input. + ''' + return cls(it) + + def __and__(self, other): + if not isinstance(other, Iterable): + return NotImplemented + return self._from_iterable(value for value in other if value in self) + + def isdisjoint(self, other): + for value in other: + if value in self: + return False + return True + + def __or__(self, other): + if not isinstance(other, Iterable): + return NotImplemented + chain = (e for s in (self, other) for e in s) + return self._from_iterable(chain) + + def __sub__(self, other): + if not isinstance(other, Set): + if not isinstance(other, Iterable): + return NotImplemented + other = self._from_iterable(other) + return self._from_iterable(value for value in self + if value not in other) + + def __xor__(self, other): + if not isinstance(other, Set): + if not isinstance(other, Iterable): + return NotImplemented + other = self._from_iterable(other) + return (self - other) | (other - self) + + def _hash(self): + """Compute the hash value of a set. 
+ + Note that we don't define __hash__: not all sets are hashable. + But if you define a hashable set type, its __hash__ should + call this function. + + This must be compatible __eq__. + + All sets ought to compare equal if they contain the same + elements, regardless of how they are implemented, and + regardless of the order of the elements; so there's not much + freedom for __eq__ or __hash__. We match the algorithm used + by the built-in frozenset type. + """ + MAX = sys.maxsize + MASK = 2 * MAX + 1 + n = len(self) + h = 1927868237 * (n + 1) + h &= MASK + for x in self: + hx = hash(x) + h ^= (hx ^ (hx << 16) ^ 89869747) * 3644798167 + h &= MASK + h = h * 69069 + 907133923 + h &= MASK + if h > MAX: + h -= MASK + 1 + if h == -1: + h = 590923713 + return h + +Set.register(frozenset) + + +class MutableSet(Set): + + @abstractmethod + def add(self, value): + """Add an element.""" + raise NotImplementedError + + @abstractmethod + def discard(self, value): + """Remove an element. Do not raise an exception if absent.""" + raise NotImplementedError + + def remove(self, value): + """Remove an element. If not a member, raise a KeyError.""" + if value not in self: + raise KeyError(value) + self.discard(value) + + def pop(self): + """Return the popped value. Raise KeyError if empty.""" + it = iter(self) + try: + value = next(it) + except StopIteration: + raise KeyError + self.discard(value) + return value + + def clear(self): + """This is slow (creates N new iterators!) 
but effective.""" + try: + while True: + self.pop() + except KeyError: + pass + + def __ior__(self, it): + for value in it: + self.add(value) + return self + + def __iand__(self, it): + for value in (self - it): + self.discard(value) + return self + + def __ixor__(self, it): + if it is self: + self.clear() + else: + if not isinstance(it, Set): + it = self._from_iterable(it) + for value in it: + if value in self: + self.discard(value) + else: + self.add(value) + return self + + def __isub__(self, it): + if it is self: + self.clear() + else: + for value in it: + self.discard(value) + return self + +MutableSet.register(set) + + +### MAPPINGS ### + + +class Mapping(Sized, Iterable, Container): + + @abstractmethod + def __getitem__(self, key): + raise KeyError + + def get(self, key, default=None): + try: + return self[key] + except KeyError: + return default + + def __contains__(self, key): + try: + self[key] + except KeyError: + return False + else: + return True + + def keys(self): + return KeysView(self) + + def items(self): + return ItemsView(self) + + def values(self): + return ValuesView(self) + + def __eq__(self, other): + if not isinstance(other, Mapping): + return NotImplemented + return dict(self.items()) == dict(other.items()) + + def __ne__(self, other): + return not (self == other) + + +class MappingView(Sized): + + def __init__(self, mapping): + self._mapping = mapping + + def __len__(self): + return len(self._mapping) + + def __repr__(self): + return '{0.__class__.__name__}({0._mapping!r})'.format(self) + + +class KeysView(MappingView, Set): + + @classmethod + def _from_iterable(self, it): + return set(it) + + def __contains__(self, key): + return key in self._mapping + + def __iter__(self): + for key in self._mapping: + yield key + +KeysView.register(dict_keys) + + +class ItemsView(MappingView, Set): + + @classmethod + def _from_iterable(self, it): + return set(it) + + def __contains__(self, item): + key, value = item + try: + v = self._mapping[key] + 
except KeyError: + return False + else: + return v == value + + def __iter__(self): + for key in self._mapping: + yield (key, self._mapping[key]) + +ItemsView.register(dict_items) + + +class ValuesView(MappingView): + + def __contains__(self, value): + for key in self._mapping: + if value == self._mapping[key]: + return True + return False + + def __iter__(self): + for key in self._mapping: + yield self._mapping[key] + +ValuesView.register(dict_values) + + +class MutableMapping(Mapping): + + @abstractmethod + def __setitem__(self, key, value): + raise KeyError + + @abstractmethod + def __delitem__(self, key): + raise KeyError + + __marker = object() + + def pop(self, key, default=__marker): + try: + value = self[key] + except KeyError: + if default is self.__marker: + raise + return default + else: + del self[key] + return value + + def popitem(self): + try: + key = next(iter(self)) + except StopIteration: + raise KeyError + value = self[key] + del self[key] + return key, value + + def clear(self): + try: + while True: + self.popitem() + except KeyError: + pass + + def update(*args, **kwds): + if len(args) > 2: + raise TypeError("update() takes at most 2 positional " + "arguments ({} given)".format(len(args))) + elif not args: + raise TypeError("update() takes at least 1 argument (0 given)") + self = args[0] + other = args[1] if len(args) >= 2 else () + + if isinstance(other, Mapping): + for key in other: + self[key] = other[key] + elif hasattr(other, "keys"): + for key in other.keys(): + self[key] = other[key] + else: + for key, value in other: + self[key] = value + for key, value in kwds.items(): + self[key] = value + + def setdefault(self, key, default=None): + try: + return self[key] + except KeyError: + self[key] = default + return default + +MutableMapping.register(dict) + + +### SEQUENCES ### + + +class Sequence(Sized, Iterable, Container): + + """All the operations on a read-only sequence. 
+ + Concrete subclasses must override __new__ or __init__, + __getitem__, and __len__. + """ + + @abstractmethod + def __getitem__(self, index): + raise IndexError + + def __iter__(self): + i = 0 + try: + while True: + v = self[i] + yield v + i += 1 + except IndexError: + return + + def __contains__(self, value): + for v in self: + if v == value: + return True + return False + + def __reversed__(self): + for i in reversed(range(len(self))): + yield self[i] + + def index(self, value): + for i, v in enumerate(self): + if v == value: + return i + raise ValueError + + def count(self, value): + return sum(1 for v in self if v == value) + +Sequence.register(tuple) +Sequence.register(str) +Sequence.register(range) + + +class ByteString(Sequence): + + """This unifies bytes and bytearray. + + XXX Should add all their methods. + """ + +ByteString.register(bytes) +ByteString.register(bytearray) + + +class MutableSequence(Sequence): + + @abstractmethod + def __setitem__(self, index, value): + raise IndexError + + @abstractmethod + def __delitem__(self, index): + raise IndexError + + @abstractmethod + def insert(self, index, value): + raise IndexError + + def append(self, value): + self.insert(len(self), value) + + def reverse(self): + n = len(self) + for i in range(n//2): + self[i], self[n-i-1] = self[n-i-1], self[i] + + def extend(self, values): + for v in values: + self.append(v) + + def pop(self, index=-1): + v = self[index] + del self[index] + return v + + def remove(self, value): + del self[self.index(value)] + + def __iadd__(self, values): + self.extend(values) + return self + +MutableSequence.register(list) +MutableSequence.register(bytearray) # Multiply inheriting, see ByteString diff --git a/lib-python/3.2/_compat_pickle.py b/lib-python/3.2/_compat_pickle.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_compat_pickle.py @@ -0,0 +1,81 @@ +# This module is used to map the old Python 2 names to the new names used in +# Python 3 for the pickle module. 
This needed to make pickle streams +# generated with Python 2 loadable by Python 3. + +# This is a copy of lib2to3.fixes.fix_imports.MAPPING. We cannot import +# lib2to3 and use the mapping defined there, because lib2to3 uses pickle. +# Thus, this could cause the module to be imported recursively. +IMPORT_MAPPING = { + 'StringIO': 'io', + 'cStringIO': 'io', + 'cPickle': 'pickle', + '__builtin__' : 'builtins', + 'copy_reg': 'copyreg', + 'Queue': 'queue', + 'SocketServer': 'socketserver', + 'ConfigParser': 'configparser', + 'repr': 'reprlib', + 'FileDialog': 'tkinter.filedialog', + 'tkFileDialog': 'tkinter.filedialog', + 'SimpleDialog': 'tkinter.simpledialog', + 'tkSimpleDialog': 'tkinter.simpledialog', + 'tkColorChooser': 'tkinter.colorchooser', + 'tkCommonDialog': 'tkinter.commondialog', + 'Dialog': 'tkinter.dialog', + 'Tkdnd': 'tkinter.dnd', + 'tkFont': 'tkinter.font', + 'tkMessageBox': 'tkinter.messagebox', + 'ScrolledText': 'tkinter.scrolledtext', + 'Tkconstants': 'tkinter.constants', + 'Tix': 'tkinter.tix', + 'ttk': 'tkinter.ttk', + 'Tkinter': 'tkinter', + 'markupbase': '_markupbase', + '_winreg': 'winreg', + 'thread': '_thread', + 'dummy_thread': '_dummy_thread', + 'dbhash': 'dbm.bsd', + 'dumbdbm': 'dbm.dumb', + 'dbm': 'dbm.ndbm', + 'gdbm': 'dbm.gnu', + 'xmlrpclib': 'xmlrpc.client', + 'DocXMLRPCServer': 'xmlrpc.server', + 'SimpleXMLRPCServer': 'xmlrpc.server', + 'httplib': 'http.client', + 'htmlentitydefs' : 'html.entities', + 'HTMLParser' : 'html.parser', + 'Cookie': 'http.cookies', + 'cookielib': 'http.cookiejar', + 'BaseHTTPServer': 'http.server', + 'SimpleHTTPServer': 'http.server', + 'CGIHTTPServer': 'http.server', + 'test.test_support': 'test.support', + 'commands': 'subprocess', + 'UserString' : 'collections', + 'UserList' : 'collections', + 'urlparse' : 'urllib.parse', + 'robotparser' : 'urllib.robotparser', + 'whichdb': 'dbm', + 'anydbm': 'dbm' +} + + +# This contains rename rules that are easy to handle. We ignore the more +# complex stuff (e.g. 
mapping the names in the urllib and types modules). +# These rules should be run before import names are fixed. +NAME_MAPPING = { + ('__builtin__', 'xrange'): ('builtins', 'range'), + ('__builtin__', 'reduce'): ('functools', 'reduce'), + ('__builtin__', 'intern'): ('sys', 'intern'), + ('__builtin__', 'unichr'): ('builtins', 'chr'), + ('__builtin__', 'basestring'): ('builtins', 'str'), + ('__builtin__', 'long'): ('builtins', 'int'), + ('itertools', 'izip'): ('builtins', 'zip'), + ('itertools', 'imap'): ('builtins', 'map'), + ('itertools', 'ifilter'): ('builtins', 'filter'), + ('itertools', 'ifilterfalse'): ('itertools', 'filterfalse'), +} + +# Same, but for 3.x to 2.x +REVERSE_IMPORT_MAPPING = dict((v, k) for (k, v) in IMPORT_MAPPING.items()) +REVERSE_NAME_MAPPING = dict((v, k) for (k, v) in NAME_MAPPING.items()) diff --git a/lib-python/3.2/_dummy_thread.py b/lib-python/3.2/_dummy_thread.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_dummy_thread.py @@ -0,0 +1,155 @@ +"""Drop-in replacement for the thread module. + +Meant to be used as a brain-dead substitute so that threaded code does +not need to be rewritten for when the thread module is not present. + +Suggested usage is:: + + try: + import _thread + except ImportError: + import _dummy_thread as _thread + +""" +# Exports only things specified by thread documentation; +# skipping obsolete synonyms allocate(), start_new(), exit_thread(). +__all__ = ['error', 'start_new_thread', 'exit', 'get_ident', 'allocate_lock', + 'interrupt_main', 'LockType'] + +# A dummy value +TIMEOUT_MAX = 2**31 + +# NOTE: this module can be imported early in the extension building process, +# and so top level imports of other modules should be avoided. Instead, all +# imports are done when needed on a function-by-function basis. Since threads +# are disabled, the import lock should not be an issue anyway (??). 
+ +class error(Exception): + """Dummy implementation of _thread.error.""" + + def __init__(self, *args): + self.args = args + +def start_new_thread(function, args, kwargs={}): + """Dummy implementation of _thread.start_new_thread(). + + Compatibility is maintained by making sure that ``args`` is a + tuple and ``kwargs`` is a dictionary. If an exception is raised + and it is SystemExit (which can be done by _thread.exit()) it is + caught and nothing is done; all other exceptions are printed out + by using traceback.print_exc(). + + If the executed function calls interrupt_main the KeyboardInterrupt will be + raised when the function returns. + + """ + if type(args) != type(tuple()): + raise TypeError("2nd arg must be a tuple") + if type(kwargs) != type(dict()): + raise TypeError("3rd arg must be a dict") + global _main + _main = False + try: + function(*args, **kwargs) + except SystemExit: + pass + except: + import traceback + traceback.print_exc() + _main = True + global _interrupt + if _interrupt: + _interrupt = False + raise KeyboardInterrupt + +def exit(): + """Dummy implementation of _thread.exit().""" + raise SystemExit + +def get_ident(): + """Dummy implementation of _thread.get_ident(). + + Since this module should only be used when _threadmodule is not + available, it is safe to assume that the current process is the + only thread. Thus a constant can be safely returned. + """ + return -1 + +def allocate_lock(): + """Dummy implementation of _thread.allocate_lock().""" + return LockType() + +def stack_size(size=None): + """Dummy implementation of _thread.stack_size().""" + if size is not None: + raise error("setting thread stack size not supported") + return 0 + +class LockType(object): + """Class implementing dummy implementation of _thread.LockType. + + Compatibility is maintained by maintaining self.locked_status + which is a boolean that stores the state of the lock. 
Pickling of + the lock, though, should not be done since if the _thread module is + then used with an unpickled ``lock()`` from here problems could + occur from this class not having atomic methods. + + """ + + def __init__(self): + self.locked_status = False + + def acquire(self, waitflag=None, timeout=-1): + """Dummy implementation of acquire(). + + For blocking calls, self.locked_status is automatically set to + True and returned appropriately based on value of + ``waitflag``. If it is non-blocking, then the value is + actually checked and not set if it is already acquired. This + is all done so that threading.Condition's assert statements + aren't triggered and throw a little fit. + + """ + if waitflag is None or waitflag: + self.locked_status = True + return True + else: + if not self.locked_status: + self.locked_status = True + return True + else: + if timeout > 0: + import time + time.sleep(timeout) + return False + + __enter__ = acquire + + def __exit__(self, typ, val, tb): + self.release() + + def release(self): + """Release the dummy lock.""" + # XXX Perhaps shouldn't actually bother to test? Could lead + # to problems for complex, threaded code. + if not self.locked_status: + raise error + self.locked_status = False + return True + + def locked(self): + return self.locked_status + +# Used to signal that interrupt_main was called in a "thread" +_interrupt = False +# True when not executing in a "thread" +_main = True + +def interrupt_main(): + """Set _interrupt flag to True to have start_new_thread raise + KeyboardInterrupt upon exiting.""" + if _main: + raise KeyboardInterrupt + else: + global _interrupt + _interrupt = True diff --git a/lib-python/3.2/_markupbase.py b/lib-python/3.2/_markupbase.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_markupbase.py @@ -0,0 +1,395 @@ +"""Shared support for scanning document type declarations in HTML and XHTML. + +This module is used as a foundation for the html.parser module. 
It has no +documented public API and should not be used directly. + +""" + +import re + +_declname_match = re.compile(r'[a-zA-Z][-_.a-zA-Z0-9]*\s*').match +_declstringlit_match = re.compile(r'(\'[^\']*\'|"[^"]*")\s*').match +_commentclose = re.compile(r'--\s*>') +_markedsectionclose = re.compile(r']\s*]\s*>') + +# An analysis of the MS-Word extensions is available at +# http://www.planetpublish.com/xmlarena/xap/Thursday/WordtoXML.pdf + +_msmarkedsectionclose = re.compile(r']\s*>') + +del re + + +class ParserBase: + """Parser base class which provides some common support methods used + by the SGML/HTML and XHTML parsers.""" + + def __init__(self): + if self.__class__ is ParserBase: + raise RuntimeError( + "_markupbase.ParserBase must be subclassed") + + def error(self, message): + raise NotImplementedError( + "subclasses of ParserBase must override error()") + + def reset(self): + self.lineno = 1 + self.offset = 0 + + def getpos(self): + """Return current line number and offset.""" + return self.lineno, self.offset + + # Internal -- update line number and offset. This should be + # called for each piece of data exactly once, in order -- in other + # words the concatenation of all the input strings to this + # function should be exactly the entire input. + def updatepos(self, i, j): + if i >= j: + return j + rawdata = self.rawdata + nlines = rawdata.count("\n", i, j) + if nlines: + self.lineno = self.lineno + nlines + pos = rawdata.rindex("\n", i, j) # Should not fail + self.offset = j-(pos+1) + else: + self.offset = self.offset + j-i + return j + + _decl_otherchars = '' + + # Internal -- parse declaration (for use by subclasses). + def parse_declaration(self, i): + # This is some sort of declaration; in "HTML as + # deployed," this should only be the document type + # declaration (""). 
+ # ISO 8879:1986, however, has more complex + # declaration syntax for elements in <!...>, including: + # --comment-- + # [marked section] + # name in the following list: ENTITY, DOCTYPE, ELEMENT, + # ATTLIST, NOTATION, SHORTREF, USEMAP, + # LINKTYPE, LINK, IDLINK, USELINK, SYSTEM + rawdata = self.rawdata + j = i + 2 + assert rawdata[i:j] == "<!", "unexpected call to parse_declaration" + if rawdata[j:j+1] == ">": + # the empty comment <!> + return j + 1 + if rawdata[j:j+1] in ("-", ""): + # Start of comment followed by buffer boundary, + # or just a buffer boundary. + return -1 + # A simple, practical version could look like: ((name|stringlit) S*) + '>' + n = len(rawdata) + if rawdata[j:j+2] == '--': #comment + # Locate --.*-- as the body of the comment + return self.parse_comment(i) + elif rawdata[j] == '[': #marked section + # Locate [statusWord [...arbitrary SGML...]] as the body of the marked section + # Where statusWord is one of TEMP, CDATA, IGNORE, INCLUDE, RCDATA + # Note that this is extended by Microsoft Office "Save as Web" function + # to include [if...] and [endif]. + return self.parse_marked_section(i) + else: #all other declaration elements + decltype, j = self._scan_name(j, i) + if j < 0: + return j + if decltype == "doctype": + self._decl_otherchars = '' + while j < n: + c = rawdata[j] + if c == ">": + # end of declaration syntax + data = rawdata[i+2:j] + if decltype == "doctype": + self.handle_decl(data) + else: + # According to the HTML5 specs sections "8.2.4.44 Bogus + # comment state" and "8.2.4.45 Markup declaration open + # state", a comment token should be emitted. + # Calling unknown_decl provides more flexibility though. 
+ self.unknown_decl(data) + return j + 1 + if c in "\"'": + m = _declstringlit_match(rawdata, j) + if not m: + return -1 # incomplete + j = m.end() + elif c in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ": + name, j = self._scan_name(j, i) + elif c in self._decl_otherchars: + j = j + 1 + elif c == "[": + # this could be handled in a separate doctype parser + if decltype == "doctype": + j = self._parse_doctype_subset(j + 1, i) + elif decltype in {"attlist", "linktype", "link", "element"}: + # must tolerate []'d groups in a content model in an element declaration + # also in data attribute specifications of attlist declaration + # also link type declaration subsets in linktype declarations + # also link attribute specification lists in link declarations + self.error("unsupported '[' char in %s declaration" % decltype) + else: + self.error("unexpected '[' char in declaration") + else: + self.error( + "unexpected %r char in declaration" % rawdata[j]) + if j < 0: + return j + return -1 # incomplete + + # Internal -- parse a marked section + # Override this to handle MS-word extension syntax content + def parse_marked_section(self, i, report=1): + rawdata= self.rawdata + assert rawdata[i:i+3] == ' ending + match= _markedsectionclose.search(rawdata, i+3) + elif sectName in {"if", "else", "endif"}: + # look for MS Office ]> ending + match= _msmarkedsectionclose.search(rawdata, i+3) + else: + self.error('unknown status keyword %r in marked section' % rawdata[i+3:j]) + if not match: + return -1 + if report: + j = match.start(0) + self.unknown_decl(rawdata[i+3: j]) + return match.end(0) + + # Internal -- parse comment, return length or -1 if not terminated + def parse_comment(self, i, report=1): + rawdata = self.rawdata + if rawdata[i:i+4] != ' LOAD_CONST None def f(x): - None + y = None return x asm = disassemble(f) for elem in ('LOAD_GLOBAL',): @@ -67,10 +67,13 @@ self.assertIn(elem, asm) def test_pack_unpack(self): + # On PyPy, "a, b = ..." 
is even more optimized, by removing + # the ROT_TWO. But the ROT_TWO is not removed if assigning + # to more complex expressions, so check that. for line, elem in ( ('a, = a,', 'LOAD_CONST',), - ('a, b = a, b', 'ROT_TWO',), - ('a, b, c = a, b, c', 'ROT_THREE',), + ('a[1], b = a, b', 'ROT_TWO',), + ('a, b[2], c = a, b, c', 'ROT_THREE',), ): asm = dis_single(line) self.assertIn(elem, asm) @@ -78,6 +81,8 @@ self.assertNotIn('UNPACK_TUPLE', asm) def test_folding_of_tuples_of_constants(self): + # On CPython, "a,b,c=1,2,3" turns into "a,b,c=" + # but on PyPy, it turns into "a=1;b=2;c=3". for line, elem in ( ('a = 1,2,3', '((1, 2, 3))'), ('("a","b","c")', "(('a', 'b', 'c'))"), @@ -86,7 +91,8 @@ ('((1, 2), 3, 4)', '(((1, 2), 3, 4))'), ): asm = dis_single(line) - self.assertIn(elem, asm) + self.assert_(elem in asm or ( + line == 'a,b,c = 1,2,3' and 'UNPACK_TUPLE' not in asm)) self.assertNotIn('BUILD_TUPLE', asm) # Bug 1053819: Tuple of constants misidentified when presented with: @@ -139,12 +145,15 @@ def test_binary_subscr_on_unicode(self): # valid code get optimized - asm = dis_single('u"foo"[0]') - self.assertIn("(u'f')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) - asm = dis_single('u"\u0061\uffff"[1]') - self.assertIn("(u'\\uffff')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) + # XXX for now we always disable this optimization + # XXX see CPython's issue5057 + if 0: + asm = dis_single('u"foo"[0]') + self.assertIn("(u'f')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + asm = dis_single('u"\u0061\uffff"[1]') + self.assertIn("(u'\\uffff')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) # invalid code doesn't get optimized # out of range diff --git a/lib-python/2.7/test/test_pprint.py b/lib-python/2.7/test/test_pprint.py --- a/lib-python/2.7/test/test_pprint.py +++ b/lib-python/2.7/test/test_pprint.py @@ -233,7 +233,16 @@ frozenset([0, 2]), frozenset([0, 1])])}""" cube = test.test_set.cube(3) - self.assertEqual(pprint.pformat(cube), cube_repr_tgt) + # XXX issues of 
dictionary order, and for the case below, + # order of items in the frozenset([...]) representation. + # Whether we get precisely cube_repr_tgt or not is open + # to implementation-dependent choices (this test probably + # fails horribly in CPython if we tweak the dict order too). + got = pprint.pformat(cube) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cube_repr_tgt) + else: + self.assertEqual(eval(got), cube) cubo_repr_tgt = """\ {frozenset([frozenset([0, 2]), frozenset([0])]): frozenset([frozenset([frozenset([0, 2]), @@ -393,7 +402,11 @@ 2])])])}""" cubo = test.test_set.linegraph(cube) - self.assertEqual(pprint.pformat(cubo), cubo_repr_tgt) + got = pprint.pformat(cubo) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cubo_repr_tgt) + else: + self.assertEqual(eval(got), cubo) def test_depth(self): nested_tuple = (1, (2, (3, (4, (5, 6))))) diff --git a/lib-python/2.7/test/test_pydoc.py b/lib-python/2.7/test/test_pydoc.py --- a/lib-python/2.7/test/test_pydoc.py +++ b/lib-python/2.7/test/test_pydoc.py @@ -267,8 +267,8 @@ testpairs = ( ('i_am_not_here', 'i_am_not_here'), ('test.i_am_not_here_either', 'i_am_not_here_either'), - ('test.i_am_not_here.neither_am_i', 'i_am_not_here.neither_am_i'), - ('i_am_not_here.{}'.format(modname), 'i_am_not_here.{}'.format(modname)), + ('test.i_am_not_here.neither_am_i', 'i_am_not_here'), + ('i_am_not_here.{}'.format(modname), 'i_am_not_here'), ('test.{}'.format(modname), modname), ) @@ -292,8 +292,8 @@ result = run_pydoc(modname) finally: forget(modname) - expected = badimport_pattern % (modname, expectedinmsg) - self.assertEqual(expected, result) + expected = badimport_pattern % (modname, '(.+\\.)?' 
+ expectedinmsg + '(\\..+)?$') + self.assertTrue(re.match(expected, result)) def test_input_strip(self): missing_module = " test.i_am_not_here " diff --git a/lib-python/2.7/test/test_pyexpat.py b/lib-python/2.7/test/test_pyexpat.py --- a/lib-python/2.7/test/test_pyexpat.py +++ b/lib-python/2.7/test/test_pyexpat.py @@ -570,6 +570,9 @@ self.assertEqual(self.n, 4) class MalformedInputText(unittest.TestCase): + # CPython seems to ship its own version of expat, they fixed it on this commit : + # http://svn.python.org/view?revision=74429&view=revision + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test1(self): xml = "\0\r\n" parser = expat.ParserCreate() @@ -579,6 +582,7 @@ except expat.ExpatError as e: self.assertEqual(str(e), 'unclosed token: line 2, column 0') + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test2(self): xml = "\r\n" parser = expat.ParserCreate() diff --git a/lib-python/2.7/test/test_repr.py b/lib-python/2.7/test/test_repr.py --- a/lib-python/2.7/test/test_repr.py +++ b/lib-python/2.7/test/test_repr.py @@ -9,6 +9,7 @@ import unittest from test.test_support import run_unittest, check_py3k_warnings +from test.test_support import check_impl_detail from repr import repr as r # Don't shadow builtin repr from repr import Repr @@ -145,8 +146,11 @@ # Functions eq(repr(hash), '') # Methods - self.assertTrue(repr(''.split).startswith( - '") def test_xrange(self): eq = self.assertEqual @@ -185,7 +189,10 @@ def test_descriptors(self): eq = self.assertEqual # method descriptors - eq(repr(dict.items), "") + if check_impl_detail(cpython=True): + eq(repr(dict.items), "") + elif check_impl_detail(pypy=True): + eq(repr(dict.items), "") # XXX member descriptors # XXX attribute descriptors # XXX slot descriptors @@ -247,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from 
areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_runpy.py b/lib-python/2.7/test/test_runpy.py --- a/lib-python/2.7/test/test_runpy.py +++ b/lib-python/2.7/test/test_runpy.py @@ -5,10 +5,15 @@ import sys import re import tempfile -from test.test_support import verbose, run_unittest, forget +from test.test_support import verbose, run_unittest, forget, check_impl_detail from test.script_helper import (temp_dir, make_script, compile_script, make_pkg, make_zip_script, make_zip_pkg) +if check_impl_detail(pypy=True): + no_lone_pyc_file = True +else: + no_lone_pyc_file = False + from runpy import _run_code, _run_module_code, run_module, run_path # Note: This module can't safely test _run_module_as_main as it @@ -168,13 +173,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 
1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + d2 = run_module(mod_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -190,13 +196,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", pkg_name - d2 = run_module(pkg_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", pkg_name + d2 = run_module(pkg_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, pkg_name) if verbose: print "Package executed successfully" @@ -244,15 +251,17 @@ self.assertIn("sibling", d1) self.assertIn("nephew", d1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name, run_name=run_name) # Read from bytecode - self.assertIn("__package__", d2) - self.assertTrue(d2["__package__"] == pkg_name) - self.assertIn("sibling", d2) - self.assertIn("nephew", d2) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + # Read from bytecode + d2 = run_module(mod_name, run_name=run_name) + self.assertIn("__package__", d2) + 
self.assertTrue(d2["__package__"] == pkg_name) + self.assertIn("sibling", d2) + self.assertIn("nephew", d2) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -345,6 +354,8 @@ script_dir, '') def test_directory_compiled(self): + if no_lone_pyc_file: + return with temp_dir() as script_dir: mod_name = '__main__' script_name = self._make_test_script(script_dir, mod_name) diff --git a/lib-python/2.7/test/test_scope.py b/lib-python/2.7/test/test_scope.py --- a/lib-python/2.7/test/test_scope.py +++ b/lib-python/2.7/test/test_scope.py @@ -1,6 +1,6 @@ import unittest from test.test_support import check_syntax_error, check_py3k_warnings, \ - check_warnings, run_unittest + check_warnings, run_unittest, gc_collect class ScopeTests(unittest.TestCase): @@ -432,6 +432,7 @@ for i in range(100): f1() + gc_collect() self.assertEqual(Foo.count, 0) diff --git a/lib-python/2.7/test/test_set.py b/lib-python/2.7/test/test_set.py --- a/lib-python/2.7/test/test_set.py +++ b/lib-python/2.7/test/test_set.py @@ -309,6 +309,7 @@ fo.close() test_support.unlink(test_support.TESTFN) + @test_support.impl_detail(pypy=False) def test_do_not_rehash_dict_keys(self): n = 10 d = dict.fromkeys(map(HashCountingInt, xrange(n))) @@ -559,6 +560,7 @@ p = weakref.proxy(s) self.assertEqual(str(p), str(s)) s = None + test_support.gc_collect() self.assertRaises(ReferenceError, str, p) # C API test only available in a debug build @@ -590,6 +592,7 @@ s.__init__(self.otherword) self.assertEqual(s, set(self.word)) + @test_support.impl_detail() def test_singleton_empty_frozenset(self): f = frozenset() efs = [frozenset(), frozenset([]), frozenset(()), frozenset(''), @@ -770,9 +773,10 @@ for v in self.set: self.assertIn(v, self.values) setiter = iter(self.set) - # note: __length_hint__ is an internal undocumented API, - # don't rely on it in your own programs - self.assertEqual(setiter.__length_hint__(), 
len(self.set)) + if test_support.check_impl_detail(): + # note: __length_hint__ is an internal undocumented API, + # don't rely on it in your own programs + self.assertEqual(setiter.__length_hint__(), len(self.set)) def test_pickling(self): p = pickle.dumps(self.set) @@ -1564,7 +1568,7 @@ for meth in (s.union, s.intersection, s.difference, s.symmetric_difference, s.isdisjoint): for g in (G, I, Ig, L, R): expected = meth(data) - actual = meth(G(data)) + actual = meth(g(data)) if isinstance(expected, bool): self.assertEqual(actual, expected) else: diff --git a/lib-python/2.7/test/test_sets.py b/lib-python/2.7/test/test_sets.py --- a/lib-python/2.7/test/test_sets.py +++ b/lib-python/2.7/test/test_sets.py @@ -686,7 +686,9 @@ set_list = sorted(self.set) self.assertEqual(len(dup_list), len(set_list)) for i, el in enumerate(dup_list): - self.assertIs(el, set_list[i]) + # Object identity is not guaranteed for immutable objects, so we + # can't use assertIs here. + self.assertEqual(el, set_list[i]) def test_deep_copy(self): dup = copy.deepcopy(self.set) diff --git a/lib-python/2.7/test/test_site.py b/lib-python/2.7/test/test_site.py --- a/lib-python/2.7/test/test_site.py +++ b/lib-python/2.7/test/test_site.py @@ -226,6 +226,10 @@ self.assertEqual(len(dirs), 1) wanted = os.path.join('xoxo', 'Lib', 'site-packages') self.assertEqual(dirs[0], wanted) + elif '__pypy__' in sys.builtin_module_names: + self.assertEquals(len(dirs), 1) + wanted = os.path.join('xoxo', 'site-packages') + self.assertEquals(dirs[0], wanted) elif os.sep == '/': self.assertEqual(len(dirs), 2) wanted = os.path.join('xoxo', 'lib', 'python' + sys.version[:3], diff --git a/lib-python/2.7/test/test_socket.py b/lib-python/2.7/test/test_socket.py --- a/lib-python/2.7/test/test_socket.py +++ b/lib-python/2.7/test/test_socket.py @@ -252,6 +252,7 @@ self.assertEqual(p.fileno(), s.fileno()) s.close() s = None + test_support.gc_collect() try: p.fileno() except ReferenceError: @@ -285,32 +286,34 @@ s.sendto(u'\u2620', 
sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None) - self.assertIn('not NoneType', str(cm.exception)) + self.assertIn('NoneType', str(cm.exception)) # 3 args with self.assertRaises(UnicodeEncodeError): s.sendto(u'\u2620', 0, sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, 0, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, None) - self.assertIn('not NoneType', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('not NoneType', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 'bar', sockname) - self.assertIn('an integer is required', str(cm.exception)) + self.assertIn('integer', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None, None) - self.assertIn('an integer is required', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('an integer is required', str(cm.exception)) # wrong number of args with self.assertRaises(TypeError) as cm: s.sendto('foo') - self.assertIn('(1 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, sockname, 4) - self.assertIn('(4 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) def testCrucialConstants(self): @@ -385,10 +388,10 @@ socket.htonl(k) socket.htons(k) for k in bad_values: - self.assertRaises(OverflowError, socket.ntohl, k) - self.assertRaises(OverflowError, socket.ntohs, k) - self.assertRaises(OverflowError, socket.htonl, k) - self.assertRaises(OverflowError, socket.htons, k) + self.assertRaises((OverflowError, ValueError), socket.ntohl, k) + self.assertRaises((OverflowError, ValueError), socket.ntohs, k) + 
self.assertRaises((OverflowError, ValueError), socket.htonl, k) + self.assertRaises((OverflowError, ValueError), socket.htons, k) def testGetServBy(self): eq = self.assertEqual @@ -428,8 +431,8 @@ if udpport is not None: eq(socket.getservbyport(udpport, 'udp'), service) # Make sure getservbyport does not accept out of range ports. - self.assertRaises(OverflowError, socket.getservbyport, -1) - self.assertRaises(OverflowError, socket.getservbyport, 65536) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, -1) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, 65536) def testDefaultTimeout(self): # Testing default timeout @@ -608,8 +611,8 @@ neg_port = port - 65536 sock = socket.socket() try: - self.assertRaises(OverflowError, sock.bind, (host, big_port)) - self.assertRaises(OverflowError, sock.bind, (host, neg_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, big_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, neg_port)) sock.bind((host, port)) finally: sock.close() @@ -1309,6 +1312,7 @@ closed = False def flush(self): pass def close(self): self.closed = True + def _decref_socketios(self): pass # must not close unless we request it: the original use of _fileobject # by module socket requires that the underlying socket not be closed until diff --git a/lib-python/2.7/test/test_sort.py b/lib-python/2.7/test/test_sort.py --- a/lib-python/2.7/test/test_sort.py +++ b/lib-python/2.7/test/test_sort.py @@ -140,7 +140,10 @@ return random.random() < 0.5 L = [C() for i in range(50)] - self.assertRaises(ValueError, L.sort) + try: + L.sort() + except ValueError: + pass def test_cmpNone(self): # Testing None as a comparison function. @@ -150,8 +153,10 @@ L.sort(None) self.assertEqual(L, range(50)) + @test_support.impl_detail(pypy=False) def test_undetected_mutation(self): # Python 2.4a1 did not always detect mutation + # So does pypy... 
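The try/except pattern used in these test_sort hunks, where CPython's mutation detection is allowed but not required, can be sketched standalone as follows. This is an illustration, not part of the patch: the class name `Mutates` and the module-level `data` list are invented, and it assumes CPython raises ValueError when the list changes under `sort()` while PyPy may simply finish.

```python
import random

data = []

class Mutates(object):
    # Comparing two instances mutates the list currently being sorted.
    # CPython notices this at the end of sort() and raises ValueError
    # ("list modified during sort"); PyPy may not detect it.
    def __lt__(self, other):
        data.append(None)
        return random.random() < 0.5

data[:] = [Mutates() for _ in range(50)]
caught = False
try:
    data.sort()
except ValueError:
    # CPython's mutation detection fired; tolerating either outcome
    # makes the test pass on both implementations.
    caught = True
```

Accepting both outcomes is exactly why the diff replaces `assertRaises(ValueError, L.sort)` with a bare try/except.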
memorywaster = [] for i in range(20): def mutating_cmp(x, y): @@ -226,7 +231,10 @@ def __del__(self): del data[:] data[:] = range(20) - self.assertRaises(ValueError, data.sort, key=SortKiller) + try: + data.sort(key=SortKiller) + except ValueError: + pass def test_key_with_mutating_del_and_exception(self): data = range(10) diff --git a/lib-python/2.7/test/test_ssl.py b/lib-python/2.7/test/test_ssl.py --- a/lib-python/2.7/test/test_ssl.py +++ b/lib-python/2.7/test/test_ssl.py @@ -881,6 +881,8 @@ c = socket.socket() c.connect((HOST, port)) listener_gone.wait() + # XXX why is it necessary? + test_support.gc_collect() try: ssl_sock = ssl.wrap_socket(c) except IOError: @@ -1330,10 +1332,8 @@ def test_main(verbose=False): global CERTFILE, SVN_PYTHON_ORG_ROOT_CERT - CERTFILE = os.path.join(os.path.dirname(__file__) or os.curdir, - "keycert.pem") - SVN_PYTHON_ORG_ROOT_CERT = os.path.join( - os.path.dirname(__file__) or os.curdir, + CERTFILE = test_support.findfile("keycert.pem") + SVN_PYTHON_ORG_ROOT_CERT = test_support.findfile( "https_svn_python_org_root.pem") if (not os.path.exists(CERTFILE) or diff --git a/lib-python/2.7/test/test_str.py b/lib-python/2.7/test/test_str.py --- a/lib-python/2.7/test/test_str.py +++ b/lib-python/2.7/test/test_str.py @@ -422,10 +422,11 @@ for meth in ('foo'.startswith, 'foo'.endswith): with self.assertRaises(TypeError) as cm: meth(['f']) - exc = str(cm.exception) - self.assertIn('unicode', exc) - self.assertIn('str', exc) - self.assertIn('tuple', exc) + if test_support.check_impl_detail(): + exc = str(cm.exception) + self.assertIn('unicode', exc) + self.assertIn('str', exc) + self.assertIn('tuple', exc) def test_main(): test_support.run_unittest(StrTest) diff --git a/lib-python/2.7/test/test_struct.py b/lib-python/2.7/test/test_struct.py --- a/lib-python/2.7/test/test_struct.py +++ b/lib-python/2.7/test/test_struct.py @@ -535,7 +535,8 @@ @unittest.skipUnless(IS32BIT, "Specific to 32bit machines") def test_crasher(self): - 
self.assertRaises(MemoryError, struct.pack, "357913941c", "a") + self.assertRaises((MemoryError, struct.error), struct.pack, + "357913941c", "a") def test_count_overflow(self): hugecount = '{}b'.format(sys.maxsize+1) diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/lib-python/2.7/test/test_support.py b/lib-python/2.7/test/test_support.py --- a/lib-python/2.7/test/test_support.py +++ b/lib-python/2.7/test/test_support.py @@ -431,16 +431,20 @@ rmtree(name) -def findfile(file, here=__file__, subdir=None): +def findfile(file, here=None, subdir=None): """Try to find a file on sys.path and the working directory. 
If it is not found the argument passed to the function is returned (this does not necessarily signal failure; could still be the legitimate path).""" + import test if os.path.isabs(file): return file if subdir is not None: file = os.path.join(subdir, file) path = sys.path - path = [os.path.dirname(here)] + path + if here is None: + path = test.__path__ + path + else: + path = [os.path.dirname(here)] + path for dn in path: fn = os.path.join(dn, file) if os.path.exists(fn): return fn @@ -1050,15 +1054,33 @@ guards, default = _parse_guards(guards) return guards.get(platform.python_implementation().lower(), default) +# ---------------------------------- +# PyPy extension: you can run:: +# python ..../test_foo.py --pdb +# to get a pdb prompt in case of exceptions +ResultClass = unittest.TextTestRunner.resultclass + +class TestResultWithPdb(ResultClass): + + def addError(self, testcase, exc_info): + ResultClass.addError(self, testcase, exc_info) + if '--pdb' in sys.argv: + import pdb, traceback + traceback.print_tb(exc_info[2]) + pdb.post_mortem(exc_info[2]) + +# ---------------------------------- def _run_suite(suite): """Run tests from a unittest.TestSuite-derived class.""" if verbose: - runner = unittest.TextTestRunner(sys.stdout, verbosity=2) + runner = unittest.TextTestRunner(sys.stdout, verbosity=2, + resultclass=TestResultWithPdb) else: runner = BasicTestRunner() + result = runner.run(suite) if not result.wasSuccessful(): if len(result.errors) == 1 and not result.failures: @@ -1071,6 +1093,34 @@ err += "; run in verbose mode for details" raise TestFailed(err) +# ---------------------------------- +# PyPy extension: you can run:: +# python ..../test_foo.py --filter bar +# to run only the test cases whose name contains bar + +def filter_maybe(suite): + try: + i = sys.argv.index('--filter') + filter = sys.argv[i+1] + except (ValueError, IndexError): + return suite + tests = [] + for test in linearize_suite(suite): + if filter in test._testMethodName: + 
tests.append(test) + return unittest.TestSuite(tests) + +def linearize_suite(suite_or_test): + try: + it = iter(suite_or_test) + except TypeError: + yield suite_or_test + return + for subsuite in it: + for item in linearize_suite(subsuite): + yield item + +# ---------------------------------- def run_unittest(*classes): """Run tests from unittest.TestCase-derived classes.""" @@ -1086,6 +1136,7 @@ suite.addTest(cls) else: suite.addTest(unittest.makeSuite(cls)) + suite = filter_maybe(suite) _run_suite(suite) diff --git a/lib-python/2.7/test/test_syntax.py b/lib-python/2.7/test/test_syntax.py --- a/lib-python/2.7/test/test_syntax.py +++ b/lib-python/2.7/test/test_syntax.py @@ -5,7 +5,8 @@ >>> def f(x): ... global x Traceback (most recent call last): -SyntaxError: name 'x' is local and global (, line 1) + File "", line 1 +SyntaxError: name 'x' is local and global The tests are all raise SyntaxErrors. They were created by checking each C call that raises SyntaxError. There are several modules that @@ -375,7 +376,7 @@ In 2.5 there was a missing exception and an assert was triggered in a debug build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514 - >>> while 1: + >>> while 1: # doctest:+SKIP ... while 2: ... while 3: ... while 4: diff --git a/lib-python/2.7/test/test_sys.py b/lib-python/2.7/test/test_sys.py --- a/lib-python/2.7/test/test_sys.py +++ b/lib-python/2.7/test/test_sys.py @@ -264,6 +264,7 @@ self.assertEqual(sys.getdlopenflags(), oldflags+1) sys.setdlopenflags(oldflags) + @test.test_support.impl_detail("reference counting") def test_refcount(self): # n here must be a global in order for this test to pass while # tracing with a python function. Tracing calls PyFrame_FastToLocals @@ -287,7 +288,7 @@ is sys._getframe().f_code ) - # sys._current_frames() is a CPython-only gimmick. 
+ @test.test_support.impl_detail("current_frames") def test_current_frames(self): have_threads = True try: @@ -383,7 +384,10 @@ self.assertEqual(len(sys.float_info), 11) self.assertEqual(sys.float_info.radix, 2) self.assertEqual(len(sys.long_info), 2) - self.assertTrue(sys.long_info.bits_per_digit % 5 == 0) + if test.test_support.check_impl_detail(cpython=True): + self.assertTrue(sys.long_info.bits_per_digit % 5 == 0) + else: + self.assertTrue(sys.long_info.bits_per_digit >= 1) self.assertTrue(sys.long_info.sizeof_digit >= 1) self.assertEqual(type(sys.long_info.bits_per_digit), int) self.assertEqual(type(sys.long_info.sizeof_digit), int) @@ -432,6 +436,7 @@ self.assertEqual(type(getattr(sys.flags, attr)), int, attr) self.assertTrue(repr(sys.flags)) + @test.test_support.impl_detail("sys._clear_type_cache") def test_clear_type_cache(self): sys._clear_type_cache() @@ -473,6 +478,7 @@ p.wait() self.assertIn(executable, ["''", repr(sys.executable)]) + at unittest.skipUnless(test.test_support.check_impl_detail(), "sys.getsizeof()") class SizeofTest(unittest.TestCase): TPFLAGS_HAVE_GC = 1<<14 diff --git a/lib-python/2.7/test/test_sys_settrace.py b/lib-python/2.7/test/test_sys_settrace.py --- a/lib-python/2.7/test/test_sys_settrace.py +++ b/lib-python/2.7/test/test_sys_settrace.py @@ -213,12 +213,16 @@ "finally" def generator_example(): # any() will leave the generator before its end - x = any(generator_function()) + x = any(generator_function()); gc.collect() # the following lines were not traced for x in range(10): y = x +# On CPython, when the generator is decref'ed to zero, we see the trace +# for the "finally:" portion. On PyPy, we don't see it before the next +# garbage collection. That's why we put gc.collect() on the same line above. 
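The comment above describes the refcounting-versus-GC timing difference that motivates the many `gc.collect()`/`gc_collect()` calls added throughout these diffs. A minimal standalone illustration (the class name `Obj` is invented; this is not part of the patch):

```python
import gc
import weakref

class Obj(object):
    pass

o = Obj()
ref = weakref.ref(o)
del o
# On CPython the object dies immediately here (reference counting).
# On PyPy it stays alive until the next collection, so portable
# tests force a collection before checking that the weakref cleared.
gc.collect()
```

After the collection, `ref()` is None on both implementations; without it, the assertion would hold on CPython but could fail on PyPy.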
+ generator_example.events = ([(0, 'call'), (2, 'line'), (-6, 'call'), @@ -282,11 +286,11 @@ self.compare_events(func.func_code.co_firstlineno, tracer.events, func.events) - def set_and_retrieve_none(self): + def test_set_and_retrieve_none(self): sys.settrace(None) assert sys.gettrace() is None - def set_and_retrieve_func(self): + def test_set_and_retrieve_func(self): def fn(*args): pass @@ -323,17 +327,24 @@ self.run_test(tighterloop_example) def test_13_genexp(self): - self.run_test(generator_example) - # issue1265: if the trace function contains a generator, - # and if the traced function contains another generator - # that is not completely exhausted, the trace stopped. - # Worse: the 'finally' clause was not invoked. - tracer = Tracer() - sys.settrace(tracer.traceWithGenexp) - generator_example() - sys.settrace(None) - self.compare_events(generator_example.__code__.co_firstlineno, - tracer.events, generator_example.events) + if self.using_gc: + test_support.gc_collect() + gc.enable() + try: + self.run_test(generator_example) + # issue1265: if the trace function contains a generator, + # and if the traced function contains another generator + # that is not completely exhausted, the trace stopped. + # Worse: the 'finally' clause was not invoked. 
+ tracer = Tracer() + sys.settrace(tracer.traceWithGenexp) + generator_example() + sys.settrace(None) + self.compare_events(generator_example.__code__.co_firstlineno, + tracer.events, generator_example.events) + finally: + if self.using_gc: + gc.disable() def test_14_onliner_if(self): def onliners(): diff --git a/lib-python/2.7/test/test_sysconfig.py b/lib-python/2.7/test/test_sysconfig.py --- a/lib-python/2.7/test/test_sysconfig.py +++ b/lib-python/2.7/test/test_sysconfig.py @@ -209,13 +209,22 @@ self.assertEqual(get_platform(), 'macosx-10.4-fat64') - for arch in ('ppc', 'i386', 'x86_64', 'ppc64'): + for arch in ('ppc', 'i386', 'ppc64', 'x86_64'): get_config_vars()['CFLAGS'] = ('-arch %s -isysroot ' '/Developer/SDKs/MacOSX10.4u.sdk ' '-fno-strict-aliasing -fno-common ' '-dynamic -DNDEBUG -g -O3'%(arch,)) self.assertEqual(get_platform(), 'macosx-10.4-%s'%(arch,)) + + # macosx with ARCHFLAGS set and empty _CONFIG_VARS + os.environ['ARCHFLAGS'] = '-arch i386' + sysconfig._CONFIG_VARS = None + + # this will attempt to recreate the _CONFIG_VARS based on environment + # variables; used to check a problem with the PyPy's _init_posix + # implementation; see: issue 705 + get_config_vars() # linux debian sarge os.name = 'posix' @@ -235,7 +244,7 @@ def test_get_scheme_names(self): wanted = ('nt', 'nt_user', 'os2', 'os2_home', 'osx_framework_user', - 'posix_home', 'posix_prefix', 'posix_user') + 'posix_home', 'posix_prefix', 'posix_user', 'pypy') self.assertEqual(get_scheme_names(), wanted) def test_symlink(self): diff --git a/lib-python/2.7/test/test_tarfile.py b/lib-python/2.7/test/test_tarfile.py --- a/lib-python/2.7/test/test_tarfile.py +++ b/lib-python/2.7/test/test_tarfile.py @@ -169,6 +169,7 @@ except tarfile.ReadError: self.fail("tarfile.open() failed on empty archive") self.assertListEqual(tar.getmembers(), []) + tar.close() def test_null_tarfile(self): # Test for issue6123: Allow opening empty archives. 
@@ -207,16 +208,21 @@
         fobj = open(self.tarname, "rb")
         tar = tarfile.open(fileobj=fobj, mode=self.mode)
         self.assertEqual(tar.name, os.path.abspath(fobj.name))
+        tar.close()

     def test_no_name_attribute(self):
-        data = open(self.tarname, "rb").read()
+        f = open(self.tarname, "rb")
+        data = f.read()
+        f.close()
         fobj = StringIO.StringIO(data)
         self.assertRaises(AttributeError, getattr, fobj, "name")
         tar = tarfile.open(fileobj=fobj, mode=self.mode)
         self.assertEqual(tar.name, None)

     def test_empty_name_attribute(self):
-        data = open(self.tarname, "rb").read()
+        f = open(self.tarname, "rb")
+        data = f.read()
+        f.close()
         fobj = StringIO.StringIO(data)
         fobj.name = ""
         tar = tarfile.open(fileobj=fobj, mode=self.mode)
@@ -515,6 +521,7 @@
         self.tar = tarfile.open(self.tarname, mode=self.mode, encoding="iso8859-1")
         tarinfo = self.tar.getmember("pax/umlauts-ÄÖÜäöüß")
         self._test_member(tarinfo, size=7011, chksum=md5_regtype)
+        self.tar.close()

 class LongnameTest(ReadTest):
@@ -675,6 +682,7 @@
             tar = tarfile.open(tmpname, self.mode)
             tarinfo = tar.gettarinfo(path)
             self.assertEqual(tarinfo.size, 0)
+            tar.close()
         finally:
             os.rmdir(path)

@@ -692,6 +700,7 @@
             tar.gettarinfo(target)
             tarinfo = tar.gettarinfo(link)
             self.assertEqual(tarinfo.size, 0)
+            tar.close()
         finally:
             os.remove(target)
             os.remove(link)

@@ -704,6 +713,7 @@
             tar = tarfile.open(tmpname, self.mode)
             tarinfo = tar.gettarinfo(path)
             self.assertEqual(tarinfo.size, 0)
+            tar.close()
         finally:
             os.remove(path)

@@ -722,6 +732,7 @@
         tar.add(dstname)
         os.chdir(cwd)
         self.assertTrue(tar.getnames() == [], "added the archive to itself")
+        tar.close()

     def test_exclude(self):
         tempdir = os.path.join(TEMPDIR, "exclude")
@@ -742,6 +753,7 @@
             tar = tarfile.open(tmpname, "r")
             self.assertEqual(len(tar.getmembers()), 1)
             self.assertEqual(tar.getnames()[0], "empty_dir")
+            tar.close()
         finally:
             shutil.rmtree(tempdir)

@@ -947,7 +959,9 @@
             fobj.close()
         elif self.mode.endswith("bz2"):
             dec = bz2.BZ2Decompressor()
-            data = open(tmpname, "rb").read()
+            f = open(tmpname, "rb")
+            data = f.read()
+            f.close()
             data = dec.decompress(data)
             self.assertTrue(len(dec.unused_data) == 0,
                             "found trailing data")
@@ -1026,6 +1040,7 @@
                              "unable to read longname member")
             self.assertEqual(tarinfo.linkname, member.linkname,
                              "unable to read longname member")
+        tar.close()

     def test_longname_1023(self):
         self._test(("longnam/" * 127) + "longnam")
@@ -1118,6 +1133,7 @@
         else:
             n = tar.getmembers()[0].name
             self.assertTrue(name == n, "PAX longname creation failed")
+        tar.close()

     def test_pax_global_header(self):
         pax_headers = {
@@ -1146,6 +1162,7 @@
                 tarfile.PAX_NUMBER_FIELDS[key](val)
             except (TypeError, ValueError):
                 self.fail("unable to convert pax header field")
+        tar.close()

     def test_pax_extended_header(self):
         # The fields from the pax header have priority over the
@@ -1165,6 +1182,7 @@
         self.assertEqual(t.pax_headers, pax_headers)
         self.assertEqual(t.name, "foo")
         self.assertEqual(t.uid, 123)
+        tar.close()

 class UstarUnicodeTest(unittest.TestCase):
@@ -1208,6 +1226,7 @@
         tarinfo.name = "foo"
         tarinfo.uname = u"äöü"
         self.assertRaises(UnicodeError, tar.addfile, tarinfo)
+        tar.close()

     def test_unicode_argument(self):
         tar = tarfile.open(tarname, "r", encoding="iso8859-1", errors="strict")
@@ -1262,6 +1281,7 @@
             tar = tarfile.open(tmpname, format=self.format, encoding="ascii",
                                errors=handler)
             self.assertEqual(tar.getnames()[0], name)
+            tar.close()

         self.assertRaises(UnicodeError, tarfile.open, tmpname,
                           encoding="ascii", errors="strict")
@@ -1274,6 +1294,7 @@
         tar = tarfile.open(tmpname, format=self.format, encoding="iso8859-1",
                            errors="utf-8")
         self.assertEqual(tar.getnames()[0], "äöü/" + u"€".encode("utf8"))
+        tar.close()

 class AppendTest(unittest.TestCase):
@@ -1301,6 +1322,7 @@
     def _test(self, names=["bar"], fileobj=None):
         tar = tarfile.open(self.tarname, fileobj=fileobj)
         self.assertEqual(tar.getnames(), names)
+        tar.close()

     def test_non_existing(self):
         self._add_testfile()
@@ -1319,7 +1341,9 @@
     def test_fileobj(self):
         self._create_testtar()
-        data = open(self.tarname).read()
+        f = open(self.tarname)
+        data = f.read()
+        f.close()
         fobj = StringIO.StringIO(data)
         self._add_testfile(fobj)
         fobj.seek(0)
@@ -1345,7 +1369,9 @@
     # Append mode is supposed to fail if the tarfile to append to
     # does not end with a zero block.
     def _test_error(self, data):
-        open(self.tarname, "wb").write(data)
+        f = open(self.tarname, "wb")
+        f.write(data)
+        f.close()
         self.assertRaises(tarfile.ReadError, self._add_testfile)

     def test_null(self):
diff --git a/lib-python/2.7/test/test_tempfile.py b/lib-python/2.7/test/test_tempfile.py
--- a/lib-python/2.7/test/test_tempfile.py
+++ b/lib-python/2.7/test/test_tempfile.py
@@ -23,8 +23,8 @@
 # TEST_FILES may need to be tweaked for systems depending on the maximum
 # number of files that can be opened at one time (see ulimit -n)
-if sys.platform in ('openbsd3', 'openbsd4'):
-    TEST_FILES = 48
+if sys.platform.startswith("openbsd"):
+    TEST_FILES = 64 # ulimit -n defaults to 128 for normal users
 else:
     TEST_FILES = 100
@@ -244,6 +244,7 @@
         dir = tempfile.mkdtemp()
         try:
             self.do_create(dir=dir).write("blat")
+            test_support.gc_collect()
         finally:
             os.rmdir(dir)
@@ -528,12 +529,15 @@
         self.do_create(suf="b")
         self.do_create(pre="a", suf="b")
         self.do_create(pre="aa", suf=".txt")
+        test_support.gc_collect()

     def test_many(self):
         # mktemp can choose many usable file names (stochastic)
         extant = range(TEST_FILES)
         for i in extant:
             extant[i] = self.do_create(pre="aa")
+        del extant
+        test_support.gc_collect()

##     def test_warning(self):
##         # mktemp issues a warning when used
diff --git a/lib-python/2.7/test/test_thread.py b/lib-python/2.7/test/test_thread.py
--- a/lib-python/2.7/test/test_thread.py
+++ b/lib-python/2.7/test/test_thread.py
@@ -128,6 +128,7 @@
         del task
         while not done:
             time.sleep(0.01)
+        test_support.gc_collect()
         self.assertEqual(thread._count(), orig)

diff --git a/lib-python/2.7/test/test_threading.py b/lib-python/2.7/test/test_threading.py
--- a/lib-python/2.7/test/test_threading.py
+++ b/lib-python/2.7/test/test_threading.py
@@ -161,6 +161,7 @@
     # PyThreadState_SetAsyncExc() is a CPython-only gimmick, not (currently)
     # exposed at the Python level.  This test relies on ctypes to get at it.
+    @test.test_support.cpython_only
     def test_PyThreadState_SetAsyncExc(self):
         try:
             import ctypes
@@ -266,6 +267,7 @@
         finally:
             threading._start_new_thread = _start_new_thread

+    @test.test_support.cpython_only
     def test_finalize_runnning_thread(self):
         # Issue 1402: the PyGILState_Ensure / _Release functions may be called
         # very late on python exit: on deallocation of a running thread for
@@ -383,6 +385,7 @@
         finally:
             sys.setcheckinterval(old_interval)

+    @test.test_support.cpython_only
     def test_no_refcycle_through_target(self):
         class RunSelfFunction(object):
             def __init__(self, should_raise):
@@ -425,6 +428,9 @@
             def joiningfunc(mainthread):
                 mainthread.join()
                 print 'end of thread'
+                # stdout is fully buffered because not a tty, we have to flush
+                # before exit.
+                sys.stdout.flush()
         \n""" + script

         p = subprocess.Popen([sys.executable, "-c", script], stdout=subprocess.PIPE)
diff --git a/lib-python/2.7/test/test_threading_local.py b/lib-python/2.7/test/test_threading_local.py
--- a/lib-python/2.7/test/test_threading_local.py
+++ b/lib-python/2.7/test/test_threading_local.py
@@ -173,8 +173,9 @@
         obj = cls()
         obj.x = 5
         self.assertEqual(obj.__dict__, {'x': 5})
-        with self.assertRaises(AttributeError):
-            obj.__dict__ = {}
+        if test_support.check_impl_detail():
+            with self.assertRaises(AttributeError):
+                obj.__dict__ = {}
         with self.assertRaises(AttributeError):
             del obj.__dict__

diff --git a/lib-python/2.7/test/test_traceback.py b/lib-python/2.7/test/test_traceback.py
--- a/lib-python/2.7/test/test_traceback.py
+++ b/lib-python/2.7/test/test_traceback.py
@@ -5,7 +5,8 @@
 import sys
 import unittest
 from imp import reload
-from test.test_support import run_unittest, is_jython, Error
+from test.test_support import run_unittest, Error
+from test.test_support import impl_detail, check_impl_detail

 import traceback

@@ -49,10 +50,8 @@
         self.assertTrue(err[2].count('\n') == 1)   # and no additional newline
         self.assertTrue(err[1].find("+") == err[2].find("^"))  # in the right place

+    @impl_detail("other implementations may add a caret (why shouldn't they?)")
     def test_nocaret(self):
-        if is_jython:
-            # jython adds a caret in this case (why shouldn't it?)
-            return
         err = self.get_exception_format(self.syntax_error_without_caret,
                                         SyntaxError)
         self.assertTrue(len(err) == 3)
@@ -63,8 +62,11 @@
                                         IndentationError)
         self.assertTrue(len(err) == 4)
         self.assertTrue(err[1].strip() == "print 2")
-        self.assertIn("^", err[2])
-        self.assertTrue(err[1].find("2") == err[2].find("^"))
+        if check_impl_detail():
+            # on CPython, there is a "^" at the end of the line
+            # on PyPy, there is a "^" too, but at the start, more logically
+            self.assertIn("^", err[2])
+            self.assertTrue(err[1].find("2") == err[2].find("^"))

     def test_bug737473(self):
         import os, tempfile, time
@@ -74,7 +76,8 @@
         try:
             sys.path.insert(0, testdir)
             testfile = os.path.join(testdir, 'test_bug737473.py')
-            print >> open(testfile, 'w'), """
+            with open(testfile, 'w') as f:
+                print >> f, """
def test():
    raise ValueError"""

@@ -96,7 +99,8 @@
             # three seconds are needed for this test to pass reliably :-(
             time.sleep(4)

-            print >> open(testfile, 'w'), """
+            with open(testfile, 'w') as f:
+                print >> f, """
def test():
    raise NotImplementedError"""
             reload(test_bug737473)
diff --git a/lib-python/2.7/test/test_types.py b/lib-python/2.7/test/test_types.py
--- a/lib-python/2.7/test/test_types.py
+++ b/lib-python/2.7/test/test_types.py
@@ -1,7 +1,8 @@
 # Python test set -- part 6, built-in types

 from test.test_support import run_unittest, have_unicode, run_with_locale, \
-                              check_py3k_warnings
+                              check_py3k_warnings, \
+                              impl_detail, check_impl_detail
 import unittest
 import sys
 import locale
@@ -289,9 +290,14 @@
         # array.array() returns an object that does not implement a char buffer,
         # something which int() uses for conversion.
         import array
-        try: int(buffer(array.array('c')))
+        try: int(buffer(array.array('c', '5')))
         except TypeError: pass
-        else: self.fail("char buffer (at C level) not working")
+        else:
+            if check_impl_detail():
+                self.fail("char buffer (at C level) not working")
+            #else:
+            #   it works on PyPy, which does not have the distinction
+            #   between char buffer and binary buffer.  XXX fine enough?

     def test_int__format__(self):
         def test(i, format_spec, result):
@@ -741,6 +747,7 @@
         for code in 'xXobns':
             self.assertRaises(ValueError, format, 0, ',' + code)

+    @impl_detail("the types' internal size attributes are CPython-only")
     def test_internal_sizes(self):
         self.assertGreater(object.__basicsize__, 0)
         self.assertGreater(tuple.__itemsize__, 0)
diff --git a/lib-python/2.7/test/test_unicode.py b/lib-python/2.7/test/test_unicode.py
--- a/lib-python/2.7/test/test_unicode.py
+++ b/lib-python/2.7/test/test_unicode.py
@@ -448,10 +448,11 @@
             meth('\xff')
             with self.assertRaises(TypeError) as cm:
                 meth(['f'])
-            exc = str(cm.exception)
-            self.assertIn('unicode', exc)
-            self.assertIn('str', exc)
-            self.assertIn('tuple', exc)
+            if test_support.check_impl_detail():
+                exc = str(cm.exception)
+                self.assertIn('unicode', exc)
+                self.assertIn('str', exc)
+                self.assertIn('tuple', exc)

     @test_support.run_with_locale('LC_ALL', 'de_DE', 'fr_FR')
     def test_format_float(self):
@@ -1062,7 +1063,8 @@
         # to take a 64-bit long, this test should apply to all platforms.
         if sys.maxint > (1 << 32) or struct.calcsize('P') != 4:
             return
-        self.assertRaises(OverflowError, u't\tt\t'.expandtabs, sys.maxint)
+        self.assertRaises((OverflowError, MemoryError),
+                          u't\tt\t'.expandtabs, sys.maxint)

     def test__format__(self):
         def test(value, format, expected):
diff --git a/lib-python/2.7/test/test_unicodedata.py b/lib-python/2.7/test/test_unicodedata.py
--- a/lib-python/2.7/test/test_unicodedata.py
+++ b/lib-python/2.7/test/test_unicodedata.py
@@ -233,10 +233,12 @@
         # been loaded in this process.
         popen = subprocess.Popen(args, stderr=subprocess.PIPE)
         popen.wait()
-        self.assertEqual(popen.returncode, 1)
-        error = "SyntaxError: (unicode error) \N escapes not supported " \
-            "(can't load unicodedata module)"
-        self.assertIn(error, popen.stderr.read())
+        self.assertIn(popen.returncode, [0, 1]) # at least it did not segfault
+        if test.test_support.check_impl_detail():
+            self.assertEqual(popen.returncode, 1)
+            error = "SyntaxError: (unicode error) \N escapes not supported " \
+                "(can't load unicodedata module)"
+            self.assertIn(error, popen.stderr.read())

     def test_decimal_numeric_consistent(self):
         # Test that decimal and numeric are consistent,
diff --git a/lib-python/2.7/test/test_unpack.py b/lib-python/2.7/test/test_unpack.py
--- a/lib-python/2.7/test/test_unpack.py
+++ b/lib-python/2.7/test/test_unpack.py
@@ -62,14 +62,14 @@
     >>> a, b = t
     Traceback (most recent call last):
       ...
-    ValueError: too many values to unpack
+    ValueError: expected length 2, got 3

 Unpacking tuple of wrong size

     >>> a, b = l
     Traceback (most recent call last):
       ...
-    ValueError: too many values to unpack
+    ValueError: expected length 2, got 3

 Unpacking sequence too short

diff --git a/lib-python/2.7/test/test_urllib2.py b/lib-python/2.7/test/test_urllib2.py
--- a/lib-python/2.7/test/test_urllib2.py
+++ b/lib-python/2.7/test/test_urllib2.py
@@ -307,6 +307,9 @@
     def getresponse(self):
         return MockHTTPResponse(MockFile(), {}, 200, "OK")

+    def close(self):
+        pass
+
 class MockHandler:
     # useful for testing handler machinery
     # see add_ordered_mock_handlers() docstring
diff --git a/lib-python/2.7/test/test_warnings.py b/lib-python/2.7/test/test_warnings.py
--- a/lib-python/2.7/test/test_warnings.py
+++ b/lib-python/2.7/test/test_warnings.py
@@ -355,7 +355,8 @@
     # test_support.import_fresh_module utility function
     def test_accelerated(self):
         self.assertFalse(original_warnings is self.module)
-        self.assertFalse(hasattr(self.module.warn, 'func_code'))
+        self.assertFalse(hasattr(self.module.warn, 'func_code') and
+                         hasattr(self.module.warn.func_code, 'co_filename'))

 class PyWarnTests(BaseTest, WarnTests):
     module = py_warnings
@@ -364,7 +365,8 @@
     # test_support.import_fresh_module utility function
     def test_pure_python(self):
         self.assertFalse(original_warnings is self.module)
-        self.assertTrue(hasattr(self.module.warn, 'func_code'))
+        self.assertTrue(hasattr(self.module.warn, 'func_code') and
+                        hasattr(self.module.warn.func_code, 'co_filename'))

 class WCmdLineTests(unittest.TestCase):

diff --git a/lib-python/2.7/test/test_weakref.py b/lib-python/2.7/test/test_weakref.py
--- a/lib-python/2.7/test/test_weakref.py
+++ b/lib-python/2.7/test/test_weakref.py
@@ -1,4 +1,3 @@
-import gc
 import sys
 import unittest
 import UserList
@@ -6,6 +5,7 @@
 import operator

 from test import test_support
+from test.test_support import gc_collect

 # Used in ReferencesTestCase.test_ref_created_during_del() .
 ref_from_del = None

@@ -70,6 +70,7 @@
         ref1 = weakref.ref(o, self.callback)
         ref2 = weakref.ref(o, self.callback)
         del o
+        gc_collect()
         self.assertTrue(ref1() is None,
                      "expected reference to be invalidated")
         self.assertTrue(ref2() is None,
@@ -101,13 +102,16 @@
         ref1 = weakref.proxy(o, self.callback)
         ref2 = weakref.proxy(o, self.callback)
         del o
+        gc_collect()

         def check(proxy):
             proxy.bar

         self.assertRaises(weakref.ReferenceError, check, ref1)
         self.assertRaises(weakref.ReferenceError, check, ref2)
-        self.assertRaises(weakref.ReferenceError, bool, weakref.proxy(C()))
+        ref3 = weakref.proxy(C())
+        gc_collect()
+        self.assertRaises(weakref.ReferenceError, bool, ref3)
         self.assertTrue(self.cbcalled == 2)

     def check_basic_ref(self, factory):
@@ -124,6 +128,7 @@
         o = factory()
         ref = weakref.ref(o, self.callback)
         del o
+        gc_collect()
         self.assertTrue(self.cbcalled == 1,
                      "callback did not properly set 'cbcalled'")
         self.assertTrue(ref() is None,
@@ -148,6 +153,7 @@
         self.assertTrue(weakref.getweakrefcount(o) == 2,
                      "wrong weak ref count for object")
         del proxy
+        gc_collect()
         self.assertTrue(weakref.getweakrefcount(o) == 1,
                      "wrong weak ref count for object after deleting proxy")

@@ -325,6 +331,7 @@
                      "got wrong number of weak reference objects")

         del ref1, ref2, proxy1, proxy2
+        gc_collect()
         self.assertTrue(weakref.getweakrefcount(o) == 0,
                      "weak reference objects not unlinked from"
                      " referent when discarded.")
@@ -338,6 +345,7 @@
         ref1 = weakref.ref(o, self.callback)
         ref2 = weakref.ref(o, self.callback)
         del ref1
+        gc_collect()
         self.assertTrue(weakref.getweakrefs(o) == [ref2],
                      "list of refs does not match")

@@ -345,10 +353,12 @@
         ref1 = weakref.ref(o, self.callback)
         ref2 = weakref.ref(o, self.callback)
         del ref2
+        gc_collect()
         self.assertTrue(weakref.getweakrefs(o) == [ref1],
                      "list of refs does not match")

         del ref1
+        gc_collect()
         self.assertTrue(weakref.getweakrefs(o) == [],
                      "list of refs not cleared")

@@ -400,13 +410,11 @@
         # when the second attempt to remove the instance from the "list
         # of all objects" occurs.

-        import gc
-
         class C(object):
             pass

         c = C()
-        wr = weakref.ref(c, lambda ignore: gc.collect())
+        wr = weakref.ref(c, lambda ignore: gc_collect())
         del c

         # There endeth the first part.  It gets worse.
@@ -414,7 +422,7 @@

         c1 = C()
         c1.i = C()
-        wr = weakref.ref(c1.i, lambda ignore: gc.collect())
+        wr = weakref.ref(c1.i, lambda ignore: gc_collect())

         c2 = C()
         c2.c1 = c1
@@ -430,8 +438,6 @@
         del c2

     def test_callback_in_cycle_1(self):
-        import gc
-
         class J(object):
             pass

@@ -467,11 +473,9 @@
         # search II.__mro__, but that's NULL.  The result was a segfault in
         # a release build, and an assert failure in a debug build.
         del I, J, II
-        gc.collect()
+        gc_collect()

     def test_callback_in_cycle_2(self):
-        import gc
-
         # This is just like test_callback_in_cycle_1, except that II is an
         # old-style class.  The symptom is different then:  an instance of an
         # old-style class looks in its own __dict__ first.  'J' happens to
@@ -496,11 +500,9 @@
         I.wr = weakref.ref(J, I.acallback)

         del I, J, II
-        gc.collect()
+        gc_collect()

     def test_callback_in_cycle_3(self):
-        import gc
-
         # This one broke the first patch that fixed the last two.  In this
         # case, the objects reachable from the callback aren't also reachable
         # from the object (c1) *triggering* the callback:  you can get to
@@ -520,11 +522,9 @@
         c2.wr = weakref.ref(c1, c2.cb)

         del c1, c2
-        gc.collect()
+        gc_collect()

     def test_callback_in_cycle_4(self):
-        import gc
-
         # Like test_callback_in_cycle_3, except c2 and c1 have different
         # classes.  c2's class (C) isn't reachable from c1 then, so protecting
         # objects reachable from the dying object (c1) isn't enough to stop
@@ -548,11 +548,9 @@
         c2.wr = weakref.ref(c1, c2.cb)

         del c1, c2, C, D
-        gc.collect()
+        gc_collect()

     def test_callback_in_cycle_resurrection(self):
-        import gc
-
         # Do something nasty in a weakref callback:  resurrect objects
         # from dead cycles.  For this to be attempted, the weakref and
         # its callback must also be part of the cyclic trash (else the
@@ -583,7 +581,7 @@
         del c1, c2, C   # make them all trash
         self.assertEqual(alist, [])  # del isn't enough to reclaim anything

-        gc.collect()
+        gc_collect()
         # c1.wr and c2.wr were part of the cyclic trash, so should have
         # been cleared without their callbacks executing.  OTOH, the weakref
         # to C is bound to a function local (wr), and wasn't trash, so that
@@ -593,12 +591,10 @@
         self.assertEqual(wr(), None)

         del alist[:]
-        gc.collect()
+        gc_collect()
         self.assertEqual(alist, [])

     def test_callbacks_on_callback(self):
-        import gc
-
         # Set up weakref callbacks *on* weakref callbacks.
         alist = []
         def safe_callback(ignore):
@@ -626,12 +622,12 @@
         del callback, c, d, C
         self.assertEqual(alist, [])  # del isn't enough to clean up cycles
-        gc.collect()
+        gc_collect()
         self.assertEqual(alist, ["safe_callback called"])
         self.assertEqual(external_wr(), None)

         del alist[:]
-        gc.collect()
+        gc_collect()
         self.assertEqual(alist, [])

     def test_gc_during_ref_creation(self):
@@ -641,9 +637,11 @@
         self.check_gc_during_creation(weakref.proxy)

     def check_gc_during_creation(self, makeref):
-        thresholds = gc.get_threshold()
-        gc.set_threshold(1, 1, 1)
-        gc.collect()
+        if test_support.check_impl_detail():
+            import gc
+            thresholds = gc.get_threshold()
+            gc.set_threshold(1, 1, 1)
+            gc_collect()

         class A:
             pass

@@ -663,7 +661,8 @@
             weakref.ref(referenced, callback)

         finally:
-            gc.set_threshold(*thresholds)
+            if test_support.check_impl_detail():
+                gc.set_threshold(*thresholds)

     def test_ref_created_during_del(self):
         # Bug #1377858
@@ -683,7 +682,7 @@
         r = weakref.ref(Exception)
         self.assertRaises(TypeError, r.__init__, 0, 0, 0, 0, 0)
         # No exception should be raised here
-        gc.collect()
+        gc_collect()

     def test_classes(self):
         # Check that both old-style classes and new-style classes
@@ -696,12 +695,12 @@
         weakref.ref(int)
         a = weakref.ref(A, l.append)
         A = None
-        gc.collect()
+        gc_collect()
         self.assertEqual(a(), None)
         self.assertEqual(l, [a])

         b = weakref.ref(B, l.append)
         B = None
-        gc.collect()
+        gc_collect()
         self.assertEqual(b(), None)
         self.assertEqual(l, [a, b])

@@ -722,6 +721,7 @@
         self.assertTrue(mr.called)
         self.assertEqual(mr.value, 24)
         del o
+        gc_collect()
         self.assertTrue(mr() is None)
         self.assertTrue(mr.called)

@@ -738,9 +738,11 @@
         self.assertEqual(weakref.getweakrefcount(o), 3)
         refs = weakref.getweakrefs(o)
         self.assertEqual(len(refs), 3)
-        self.assertTrue(r2 is refs[0])
-        self.assertIn(r1, refs[1:])
-        self.assertIn(r3, refs[1:])
+        assert set(refs) == set((r1, r2, r3))
+        if test_support.check_impl_detail():
+            self.assertTrue(r2 is refs[0])
+            self.assertIn(r1, refs[1:])
+            self.assertIn(r3, refs[1:])

     def test_subclass_refs_dont_conflate_callbacks(self):
         class MyRef(weakref.ref):
@@ -839,15 +841,18 @@
         del items1, items2
         self.assertTrue(len(dict) == self.COUNT)
         del objects[0]
+        gc_collect()
         self.assertTrue(len(dict) == (self.COUNT - 1),
                      "deleting object did not cause dictionary update")
         del objects, o
+        gc_collect()
         self.assertTrue(len(dict) == 0,
                      "deleting the values did not clear the dictionary")
         # regression on SF bug #447152:
         dict = weakref.WeakValueDictionary()
         self.assertRaises(KeyError, dict.__getitem__, 1)
         dict[2] = C()
+        gc_collect()
         self.assertRaises(KeyError, dict.__getitem__, 2)

     def test_weak_keys(self):
@@ -868,9 +873,11 @@
         del items1, items2
         self.assertTrue(len(dict) == self.COUNT)
         del objects[0]
+        gc_collect()
         self.assertTrue(len(dict) == (self.COUNT - 1),
                      "deleting object did not cause dictionary update")
         del objects, o
+        gc_collect()
         self.assertTrue(len(dict) == 0,
                      "deleting the keys did not clear the dictionary")
         o = Object(42)
@@ -986,13 +993,13 @@
         self.assertTrue(len(weakdict) == 2)
         k, v = weakdict.popitem()
         self.assertTrue(len(weakdict) == 1)
-        if k is key1:
+        if k == key1:
             self.assertTrue(v is value1)
         else:
             self.assertTrue(v is value2)
         k, v = weakdict.popitem()
         self.assertTrue(len(weakdict) == 0)
-        if k is key1:
+        if k == key1:
             self.assertTrue(v is value1)
         else:
             self.assertTrue(v is value2)
@@ -1137,6 +1144,7 @@
         for o in objs:
             count += 1
             del d[o]
+        gc_collect()
         self.assertEqual(len(d), 0)
         self.assertEqual(count, 2)

@@ -1177,6 +1185,7 @@
 >>> o is o2
 True
 >>> del o, o2
+>>> gc_collect()
 >>> print r()
 None

@@ -1229,6 +1238,7 @@
 >>> id2obj(a_id) is a
 True
 >>> del a
+>>> gc_collect()
 >>> try:
 ...     id2obj(a_id)
 ... except KeyError:
diff --git a/lib-python/2.7/test/test_weakset.py b/lib-python/2.7/test/test_weakset.py
--- a/lib-python/2.7/test/test_weakset.py
+++ b/lib-python/2.7/test/test_weakset.py
@@ -57,6 +57,7 @@
         self.assertEqual(len(self.s), len(self.d))
         self.assertEqual(len(self.fs), 1)
         del self.obj
+        test_support.gc_collect()
         self.assertEqual(len(self.fs), 0)

     def test_contains(self):
@@ -66,6 +67,7 @@
         self.assertNotIn(1, self.s)
         self.assertIn(self.obj, self.fs)
         del self.obj
+        test_support.gc_collect()
         self.assertNotIn(SomeClass('F'), self.fs)

     def test_union(self):
@@ -204,6 +206,7 @@
         self.assertEqual(self.s, dup)
         self.assertRaises(TypeError, self.s.add, [])
         self.fs.add(Foo())
+        test_support.gc_collect()
         self.assertTrue(len(self.fs) == 1)
         self.fs.add(self.obj)
         self.assertTrue(len(self.fs) == 1)
@@ -330,10 +333,11 @@
             next(it)             # Trigger internal iteration
             # Destroy an item
             del items[-1]
-            gc.collect()    # just in case
+            test_support.gc_collect()    # just in case
             # We have removed either the first consumed items, or another one
             self.assertIn(len(list(it)), [len(items), len(items) - 1])
             del it
+            test_support.gc_collect()
             # The removal has been committed
             self.assertEqual(len(s), len(items))

diff --git a/lib-python/2.7/test/test_xml_etree.py b/lib-python/2.7/test/test_xml_etree.py
--- a/lib-python/2.7/test/test_xml_etree.py
+++ b/lib-python/2.7/test/test_xml_etree.py
@@ -1633,10 +1633,10 @@

     Check reference leak.
     >>> xmltoolkit63()
-    >>> count = sys.getrefcount(None)
+    >>> count = sys.getrefcount(None) #doctest: +SKIP
     >>> for i in range(1000):
     ...     xmltoolkit63()
-    >>> sys.getrefcount(None) - count
+    >>> sys.getrefcount(None) - count #doctest: +SKIP
     0

     """
diff --git a/lib-python/2.7/test/test_xmlrpc.py b/lib-python/2.7/test/test_xmlrpc.py
--- a/lib-python/2.7/test/test_xmlrpc.py
+++ b/lib-python/2.7/test/test_xmlrpc.py
@@ -308,7 +308,7 @@
     global ADDR, PORT, URL
     ADDR, PORT = serv.socket.getsockname()
     #connect to IP address directly.  This avoids socket.create_connection()
-    #trying to connect to "localhost" using all address families, which
+    #trying to connect to to "localhost" using all address families, which
     #causes slowdown e.g. on vista which supports AF_INET6.  The server listens
     #on AF_INET only.
     URL = "http://%s:%d"%(ADDR, PORT)
@@ -367,7 +367,7 @@
     global ADDR, PORT, URL
     ADDR, PORT = serv.socket.getsockname()
     #connect to IP address directly.  This avoids socket.create_connection()
-    #trying to connect to "localhost" using all address families, which
+    #trying to connect to to "localhost" using all address families, which
    #causes slowdown e.g. on vista which supports AF_INET6.  The server listens
     #on AF_INET only.
     URL = "http://%s:%d"%(ADDR, PORT)
@@ -435,6 +435,7 @@

     def tearDown(self):
         # wait on the server thread to terminate
+        test_support.gc_collect()  # to close the active connections
         self.evt.wait(10)

         # disable traceback reporting
@@ -472,9 +473,6 @@
             # protocol error; provide additional information in test output
             self.fail("%s\n%s" % (e, getattr(e, "headers", "")))

-    def test_unicode_host(self):
-        server = xmlrpclib.ServerProxy(u"http://%s:%d/RPC2"%(ADDR, PORT))
-        self.assertEqual(server.add("a", u"\xe9"), u"a\xe9")
-
     # [ch] The test 404 is causing lots of false alarms.
     def XXXtest_404(self):
@@ -589,12 +587,6 @@
         # This avoids waiting for the socket timeout.
         self.test_simple1()

-    def test_partial_post(self):
-        # Check that a partial POST doesn't make the server loop: issue #14001.
-        conn = httplib.HTTPConnection(ADDR, PORT)
-        conn.request('POST', '/RPC2 HTTP/1.0\r\nContent-Length: 100\r\n\r\nbye')
-        conn.close()
-
 class MultiPathServerTestCase(BaseServerTestCase):
     threadFunc = staticmethod(http_multi_server)
     request_count = 2
diff --git a/lib-python/2.7/test/test_zlib.py b/lib-python/2.7/test/test_zlib.py
--- a/lib-python/2.7/test/test_zlib.py
+++ b/lib-python/2.7/test/test_zlib.py
@@ -1,6 +1,7 @@
 import unittest
 from test.test_support import TESTFN, run_unittest, import_module, unlink, requires
 import binascii
+import os
 import random
 from test.test_support import precisionbigmemtest, _1G, _4G
 import sys
@@ -99,14 +100,7 @@
 class BaseCompressTestCase(object):
     def check_big_compress_buffer(self, size, compress_func):
-        _1M = 1024 * 1024
-        fmt = "%%0%dx" % (2 * _1M)
-        # Generate 10MB worth of random, and expand it by repeating it.
-        # The assumption is that zlib's memory is not big enough to exploit
-        # such spread out redundancy.
-        data = ''.join([binascii.a2b_hex(fmt % random.getrandbits(8 * _1M))
-                        for i in range(10)])
-        data = data * (size // len(data) + 1)
+        data = os.urandom(size)
         try:
             compress_func(data)
         finally:
diff --git a/lib-python/2.7/trace.py b/lib-python/2.7/trace.py
--- a/lib-python/2.7/trace.py
+++ b/lib-python/2.7/trace.py
@@ -559,6 +559,10 @@
         if len(funcs) == 1:
             dicts = [d for d in gc.get_referrers(funcs[0])
                          if isinstance(d, dict)]
+            if len(dicts) == 0:
+                # PyPy may store functions directly on the class
+                # (more exactly: the container is not a Python object)
+                dicts = funcs
             if len(dicts) == 1:
                 classes = [c for c in gc.get_referrers(dicts[0])
                                if hasattr(c, "__bases__")]
diff --git a/lib-python/2.7/urllib2.py b/lib-python/2.7/urllib2.py
--- a/lib-python/2.7/urllib2.py
+++ b/lib-python/2.7/urllib2.py
@@ -1171,6 +1171,7 @@
             except TypeError: #buffering kw not supported
                 r = h.getresponse()
         except socket.error, err: # XXX what error?
+            h.close()
             raise URLError(err)

         # Pick apart the HTTPResponse object to get the addinfourl
diff --git a/lib-python/2.7/uuid.py b/lib-python/2.7/uuid.py
--- a/lib-python/2.7/uuid.py
+++ b/lib-python/2.7/uuid.py
@@ -406,8 +406,12 @@
                 continue
             if hasattr(lib, 'uuid_generate_random'):
                 _uuid_generate_random = lib.uuid_generate_random
+                _uuid_generate_random.argtypes = [ctypes.c_char * 16]
+                _uuid_generate_random.restype = None
             if hasattr(lib, 'uuid_generate_time'):
                 _uuid_generate_time = lib.uuid_generate_time
+                _uuid_generate_time.argtypes = [ctypes.c_char * 16]
+                _uuid_generate_time.restype = None

     # The uuid_generate_* functions are broken on MacOS X 10.5, as noted
     # in issue #8621 the function generates the same sequence of values
@@ -436,6 +440,9 @@
             lib = None
         _UuidCreate = getattr(lib, 'UuidCreateSequential',
                               getattr(lib, 'UuidCreate', None))
+        if _UuidCreate is not None:
+            _UuidCreate.argtypes = [ctypes.c_char * 16]
+            _UuidCreate.restype = ctypes.c_int
     except:
         pass

diff --git a/lib-python/3.2/_markupbase.py b/lib-python/3.2/_markupbase.py
--- a/lib-python/3.2/_markupbase.py
+++ b/lib-python/3.2/_markupbase.py
@@ -107,6 +107,10 @@
             if decltype == "doctype":
                 self.handle_decl(data)
             else:
+                # According to the HTML5 specs sections "8.2.4.44 Bogus
+                # comment state" and "8.2.4.45 Markup declaration open
+                # state", a comment token should be emitted.
+                # Calling unknown_decl provides more flexibility though.
                 self.unknown_decl(data)
             return j + 1
         if c in "\"'":
diff --git a/lib-python/3.2/_pyio.py b/lib-python/3.2/_pyio.py
--- a/lib-python/3.2/_pyio.py
+++ b/lib-python/3.2/_pyio.py
@@ -6,6 +6,7 @@
 import abc
 import codecs
 import warnings
+import errno
 # Import _thread instead of threading to reduce startup cost
 try:
     from _thread import allocate_lock as Lock
@@ -720,8 +721,11 @@

     def close(self):
         if self.raw is not None and not self.closed:
-            self.flush()
-            self.raw.close()
+            try:
+                # may raise BlockingIOError or BrokenPipeError etc
+                self.flush()
+            finally:
+                self.raw.close()

     def detach(self):
         if self.raw is None:
@@ -1080,13 +1084,9 @@
         # XXX we can implement some more tricks to try and avoid
         # partial writes
         if len(self._write_buf) > self.buffer_size:
-            # We're full, so let's pre-flush the buffer
-            try:
-                self._flush_unlocked()
-            except BlockingIOError as e:
-                # We can't accept anything else.
-                # XXX Why not just let the exception pass through?
-                raise BlockingIOError(e.errno, e.strerror, 0)
+            # We're full, so let's pre-flush the buffer.  (This may
+            # raise BlockingIOError with characters_written == 0.)
+            self._flush_unlocked()
         before = len(self._write_buf)
         self._write_buf.extend(b)
         written = len(self._write_buf) - before
@@ -1117,24 +1117,23 @@
     def _flush_unlocked(self):
         if self.closed:
             raise ValueError("flush of closed file")
-        written = 0
-        try:
-            while self._write_buf:
-                try:
-                    n = self.raw.write(self._write_buf)
-                except IOError as e:
-                    if e.errno != EINTR:
-                        raise
-                    continue
-                if n > len(self._write_buf) or n < 0:
-                    raise IOError("write() returned incorrect number of bytes")
-                del self._write_buf[:n]
-                written += n
-        except BlockingIOError as e:
-            n = e.characters_written
+        while self._write_buf:
+            try:
+                n = self.raw.write(self._write_buf)
+            except BlockingIOError:
+                raise RuntimeError("self.raw should implement RawIOBase: it "
+                                   "should not raise BlockingIOError")
+            except IOError as e:
+                if e.errno != EINTR:
+                    raise
+                continue
+            if n is None:
+                raise BlockingIOError(
+                    errno.EAGAIN,
+                    "write could not complete without blocking", 0)
+            if n > len(self._write_buf) or n < 0:
+                raise IOError("write() returned incorrect number of bytes")
             del self._write_buf[:n]
-            written += n
-            raise BlockingIOError(e.errno, e.strerror, written)

     def tell(self):
         return _BufferedIOMixin.tell(self) + len(self._write_buf)
@@ -1460,7 +1459,7 @@
     enabled.  With this enabled, on input, the lines endings '\n', '\r',
     or '\r\n' are translated to '\n' before being returned to the
     caller.  Conversely, on output, '\n' is translated to the system
-    default line seperator, os.linesep.  If newline is any other of its
+    default line separator, os.linesep.  If newline is any other of its
     legal values, that newline becomes the newline when the file is
     read and it is returned untranslated.  On output, '\n' is converted
     to the newline.
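The `_pyio` hunk above changes `BufferedWriter.close()` to run the flush inside a `try`/`finally`, so that the raw stream is closed even when flushing raises (a broken pipe, for instance). A minimal sketch of that pattern follows; `FlakyRaw` and `Buffered` are illustrative stand-ins, not the real `io` internals:

```python
class FlakyRaw(object):
    """Stand-in raw stream whose write() always fails."""
    def __init__(self):
        self.closed = False

    def write(self, data):
        raise IOError("simulated broken pipe")

    def close(self):
        self.closed = True


class Buffered(object):
    """Stand-in buffered wrapper using the close-in-finally pattern."""
    def __init__(self, raw):
        self.raw = raw
        self._buf = b"pending"

    def flush(self):
        self.raw.write(self._buf)

    def close(self):
        try:
            self.flush()        # may raise, like BufferedWriter.flush()
        finally:
            self.raw.close()    # runs regardless, so no fd is leaked


raw = FlakyRaw()
stream = Buffered(raw)
try:
    stream.close()
except IOError:
    pass
assert raw.closed               # raw stream was closed despite the error
```

Without the `finally`, a failing flush would propagate before `raw.close()` ran and the underlying descriptor would leak, which is exactly what the patch avoids.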
diff --git a/lib-python/3.2/aifc.py b/lib-python/3.2/aifc.py --- a/lib-python/3.2/aifc.py +++ b/lib-python/3.2/aifc.py @@ -162,6 +162,12 @@ except struct.error: raise EOFError +def _read_ushort(file): + try: + return struct.unpack('>H', file.read(2))[0] + except struct.error: + raise EOFError + def _read_string(file): length = ord(file.read(1)) if length == 0: @@ -194,13 +200,19 @@ def _write_short(f, x): f.write(struct.pack('>h', x)) +def _write_ushort(f, x): + f.write(struct.pack('>H', x)) + def _write_long(f, x): + f.write(struct.pack('>l', x)) + +def _write_ulong(f, x): f.write(struct.pack('>L', x)) def _write_string(f, s): if len(s) > 255: raise ValueError("string exceeds maximum pstring length") - f.write(struct.pack('b', len(s))) + f.write(struct.pack('B', len(s))) f.write(s) if len(s) & 1 == 0: f.write(b'\x00') @@ -218,7 +230,7 @@ lomant = 0 else: fmant, expon = math.frexp(x) - if expon > 16384 or fmant >= 1: # Infinity or NaN + if expon > 16384 or fmant >= 1 or fmant != fmant: # Infinity or NaN expon = sign|0x7FFF himant = 0 lomant = 0 @@ -234,9 +246,9 @@ fmant = math.ldexp(fmant - fsmant, 32) fsmant = math.floor(fmant) lomant = int(fsmant) - _write_short(f, expon) - _write_long(f, himant) - _write_long(f, lomant) + _write_ushort(f, expon) + _write_ulong(f, himant) + _write_ulong(f, lomant) from chunk import Chunk @@ -539,8 +551,7 @@ self._aifc = 1 # AIFF-C is default def __del__(self): - if self._file: - self.close() + self.close() # # User visible methods. 
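Two details in the aifc hunks above are easy to miss. `'>h'` is a *signed* big-endian short, so any header field with the top bit set (for instance an 80-bit-float exponent with the sign bit, 0x8000 and up) cannot be packed with it; that is what the new unsigned `_write_ushort`/`_write_ulong` helpers fix. And the added `fmant != fmant` test is the classic self-inequality check for NaN. A quick illustration (not part of the patch):

```python
import math
import struct

# '>h' is signed: 0x8000 (32768) is out of range for a signed short,
# while the unsigned '>H' used by _write_ushort handles it fine.
assert struct.pack('>H', 0x8000) == b'\x80\x00'
signed_overflow = False
try:
    struct.pack('>h', 0x8000)
except struct.error:
    signed_overflow = True
assert signed_overflow

# NaN is the only float value that compares unequal to itself, so
# `fmant != fmant` is a portable NaN test, equivalent to math.isnan().
nan = float('nan')
assert nan != nan
assert math.isnan(nan)
```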
@@ -643,8 +654,8 @@ raise Error('marker ID must be > 0') if pos < 0: raise Error('marker position must be >= 0') - if not isinstance(name, str): - raise Error('marker name must be a string') + if not isinstance(name, bytes): + raise Error('marker name must be bytes') for i in range(len(self._markers)): if id == self._markers[i][0]: self._markers[i] = id, pos, name @@ -681,19 +692,21 @@ self._patchheader() def close(self): - self._ensure_header_written(0) - if self._datawritten & 1: - # quick pad to even size - self._file.write(b'\x00') - self._datawritten = self._datawritten + 1 - self._writemarkers() - if self._nframeswritten != self._nframes or \ - self._datalength != self._datawritten or \ - self._marklength: - self._patchheader() - # Prevent ref cycles - self._convert = None - self._file.close() + if self._file: + self._ensure_header_written(0) + if self._datawritten & 1: + # quick pad to even size + self._file.write(b'\x00') + self._datawritten = self._datawritten + 1 + self._writemarkers() + if self._nframeswritten != self._nframes or \ + self._datalength != self._datawritten or \ + self._marklength: + self._patchheader() + # Prevent ref cycles + self._convert = None + self._file.close() + self._file = None # # Internal methods. 
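The `close()`/`__del__` reshuffle above makes finalization idempotent: all the work is guarded by `if self._file:` and the handle is cleared at the end, so a second `close()`, including the one now issued unconditionally from `__del__`, is a no-op. The shape of the pattern, reduced to a sketch (the class and attribute names here are illustrative, not the real aifc internals):

```python
import io

class Writer:
    def __init__(self, file):
        self._file = file
        self.finalized = 0       # counts how often the trailer was patched

    def close(self):
        if self._file:
            self.finalized += 1  # write markers, patch header, etc.
            self._file.close()
            self._file = None    # makes a repeated close() harmless

    def __del__(self):
        # Safe to call unconditionally now, unlike the old
        # `if self._file: self.close()` dance.
        self.close()

w = Writer(io.BytesIO())
w.close()
w.close()  # second call does nothing
assert w.finalized == 1
```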
@@ -716,18 +729,12 @@ def _ensure_header_written(self, datasize): if not self._nframeswritten: - if self._comptype in (b'ULAW', b'ALAW'): + if self._comptype in (b'ULAW', b'ulaw', b'ALAW', b'alaw', b'G722'): if not self._sampwidth: self._sampwidth = 2 if self._sampwidth != 2: raise Error('sample width must be 2 when compressing ' - 'with ulaw/ULAW or alaw/ALAW') - if self._comptype == b'G722': - if not self._sampwidth: - self._sampwidth = 2 - if self._sampwidth != 2: - raise Error('sample width must be 2 when compressing ' - 'with G7.22 (ADPCM)') + 'with ulaw/ULAW, alaw/ALAW or G7.22 (ADPCM)') if not self._nchannels: raise Error('# channels not specified') if not self._sampwidth: @@ -743,8 +750,6 @@ self._convert = self._lin2ulaw elif self._comptype in (b'alaw', b'ALAW'): self._convert = self._lin2alaw - else: - raise Error('unsupported compression type') def _write_header(self, initlength): if self._aifc and self._comptype != b'NONE': @@ -769,15 +774,15 @@ if self._aifc: self._file.write(b'AIFC') self._file.write(b'FVER') - _write_long(self._file, 4) - _write_long(self._file, self._version) + _write_ulong(self._file, 4) + _write_ulong(self._file, self._version) else: self._file.write(b'AIFF') self._file.write(b'COMM') - _write_long(self._file, commlength) + _write_ulong(self._file, commlength) _write_short(self._file, self._nchannels) self._nframes_pos = self._file.tell() - _write_long(self._file, self._nframes) + _write_ulong(self._file, self._nframes) _write_short(self._file, self._sampwidth * 8) _write_float(self._file, self._framerate) if self._aifc: @@ -785,9 +790,9 @@ _write_string(self._file, self._compname) self._file.write(b'SSND') self._ssnd_length_pos = self._file.tell() - _write_long(self._file, self._datalength + 8) - _write_long(self._file, 0) - _write_long(self._file, 0) + _write_ulong(self._file, self._datalength + 8) + _write_ulong(self._file, 0) + _write_ulong(self._file, 0) def _write_form_length(self, datalength): if self._aifc: @@ -798,8 
+803,8 @@ else: commlength = 18 verslength = 0 - _write_long(self._file, 4 + verslength + self._marklength + \ - 8 + commlength + 16 + datalength) + _write_ulong(self._file, 4 + verslength + self._marklength + \ + 8 + commlength + 16 + datalength) return commlength def _patchheader(self): @@ -817,9 +822,9 @@ self._file.seek(self._form_length_pos, 0) dummy = self._write_form_length(datalength) self._file.seek(self._nframes_pos, 0) - _write_long(self._file, self._nframeswritten) + _write_ulong(self._file, self._nframeswritten) self._file.seek(self._ssnd_length_pos, 0) - _write_long(self._file, datalength + 8) + _write_ulong(self._file, datalength + 8) self._file.seek(curpos, 0) self._nframes = self._nframeswritten self._datalength = datalength @@ -834,13 +839,13 @@ length = length + len(name) + 1 + 6 if len(name) & 1 == 0: length = length + 1 - _write_long(self._file, length) + _write_ulong(self._file, length) self._marklength = length + 8 _write_short(self._file, len(self._markers)) for marker in self._markers: id, pos, name = marker _write_short(self._file, id) - _write_long(self._file, pos) + _write_ulong(self._file, pos) _write_string(self._file, name) def open(f, mode=None): diff --git a/lib-python/3.2/argparse.py b/lib-python/3.2/argparse.py --- a/lib-python/3.2/argparse.py +++ b/lib-python/3.2/argparse.py @@ -92,10 +92,6 @@ from gettext import gettext as _, ngettext -def _callable(obj): - return hasattr(obj, '__call__') or hasattr(obj, '__bases__') - - SUPPRESS = '==SUPPRESS==' OPTIONAL = '?' 
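Dropping the private `_callable()` helper in favor of the builtin works because `callable()` returned to the language in Python 3.2, and it covers both cases the helper approximated with `hasattr(obj, '__call__') or hasattr(obj, '__bases__')`: objects defining `__call__` and classes. For example (`Action` is just an illustrative stand-in):

```python
class Action:
    """Stand-in for an argparse action class; any class is callable."""
    def __call__(self):
        return "invoked"

assert callable(len)          # builtin function
assert callable(Action)       # classes construct instances when called
assert callable(Action())     # instances defining __call__
assert not callable("store")  # plain data is not
assert Action()() == "invoked"
```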
@@ -1286,13 +1282,13 @@ # create the action object, and add it to the parser action_class = self._pop_action_class(kwargs) - if not _callable(action_class): + if not callable(action_class): raise ValueError('unknown action "%s"' % (action_class,)) action = action_class(**kwargs) # raise an error if the action type is not callable type_func = self._registry_get('type', action.type, action.type) - if not _callable(type_func): + if not callable(type_func): raise ValueError('%r is not callable' % (type_func,)) # raise an error if the metavar does not match the type @@ -2240,7 +2236,7 @@ def _get_value(self, action, arg_string): type_func = self._registry_get('type', action.type, action.type) - if not _callable(type_func): + if not callable(type_func): msg = _('%r is not callable') raise ArgumentError(action, msg % type_func) diff --git a/lib-python/3.2/base64.py b/lib-python/3.2/base64.py old mode 100644 new mode 100755 diff --git a/lib-python/3.2/cProfile.py b/lib-python/3.2/cProfile.py old mode 100644 new mode 100755 diff --git a/lib-python/3.2/cgi.py b/lib-python/3.2/cgi.py old mode 100644 new mode 100755 --- a/lib-python/3.2/cgi.py +++ b/lib-python/3.2/cgi.py @@ -291,7 +291,7 @@ while s[:1] == ';': s = s[1:] end = s.find(';') - while end > 0 and s.count('"', 0, end) % 2: + while end > 0 and (s.count('"', 0, end) - s.count('\\"', 0, end)) % 2: end = s.find(';', end + 1) if end < 0: end = len(s) diff --git a/lib-python/3.2/cmd.py b/lib-python/3.2/cmd.py --- a/lib-python/3.2/cmd.py +++ b/lib-python/3.2/cmd.py @@ -205,6 +205,8 @@ if cmd is None: return self.default(line) self.lastcmd = line + if line == 'EOF' : + self.lastcmd = '' if cmd == '': return self.default(line) else: diff --git a/lib-python/3.2/collections.py b/lib-python/3.2/collections.py --- a/lib-python/3.2/collections.py +++ b/lib-python/3.2/collections.py @@ -33,7 +33,7 @@ # The circular doubly linked list starts and ends with a sentinel element. 
# The sentinel element never gets deleted (this simplifies the algorithm). # The sentinel is in self.__hardroot with a weakref proxy in self.__root. - # The prev/next links are weakref proxies (to prevent circular references). + # The prev links are weakref proxies (to prevent circular references). # Individual links are kept alive by the hard reference in self.__map. # Those hard references disappear when a key is deleted from an OrderedDict. @@ -583,8 +583,12 @@ def __repr__(self): if not self: return '%s()' % self.__class__.__name__ - items = ', '.join(map('%r: %r'.__mod__, self.most_common())) - return '%s({%s})' % (self.__class__.__name__, items) + try: + items = ', '.join(map('%r: %r'.__mod__, self.most_common())) + return '%s({%s})' % (self.__class__.__name__, items) + except TypeError: + # handle case where values are not orderable + return '{0}({1!r})'.format(self.__class__.__name__, dict(self)) # Multiset-style mathematical operations discussed in: # Knuth TAOCP Volume II section 4.6.3 exercise 19 diff --git a/lib-python/3.2/compileall.py b/lib-python/3.2/compileall.py --- a/lib-python/3.2/compileall.py +++ b/lib-python/3.2/compileall.py @@ -142,7 +142,7 @@ Arguments (all optional): - skip_curdir: if true, skip current directory (default true) + skip_curdir: if true, skip current directory (default True) maxlevels: max recursion level (default 0) force: as for compile_dir() (default False) quiet: as for compile_dir() (default False) @@ -177,17 +177,17 @@ help='use legacy (pre-PEP3147) compiled file locations') parser.add_argument('-d', metavar='DESTDIR', dest='ddir', default=None, help=('directory to prepend to file paths for use in ' - 'compile time tracebacks and in runtime ' + 'compile-time tracebacks and in runtime ' 'tracebacks in cases where the source file is ' 'unavailable')) parser.add_argument('-x', metavar='REGEXP', dest='rx', default=None, - help=('skip files matching the regular expression. 
' - 'The regexp is searched for in the full path ' - 'to each file considered for compilation.')) + help=('skip files matching the regular expression; ' + 'the regexp is searched for in the full path ' + 'of each file considered for compilation')) parser.add_argument('-i', metavar='FILE', dest='flist', help=('add all the files and directories listed in ' - 'FILE to the list considered for compilation. ' - 'If "-", names are read from stdin.')) + 'FILE to the list considered for compilation; ' + 'if "-", names are read from stdin')) parser.add_argument('compile_dest', metavar='FILE|DIR', nargs='*', help=('zero or more file and directory names ' 'to compile; if no arguments given, defaults ' diff --git a/lib-python/3.2/concurrent/futures/process.py b/lib-python/3.2/concurrent/futures/process.py --- a/lib-python/3.2/concurrent/futures/process.py +++ b/lib-python/3.2/concurrent/futures/process.py @@ -213,9 +213,7 @@ work_item.future.set_exception(result_item.exception) else: work_item.future.set_result(result_item.result) - continue - # If we come here, we either got a timeout or were explicitly woken up. - # In either case, check whether we should start shutting down. + # Check whether we should start shutting down. executor = executor_reference() # No more work items can be added if: # - The interpreter is shutting down OR @@ -234,9 +232,6 @@ p.join() call_queue.close() return - else: - # Start shutting down by telling a process it can exit. - shutdown_one_process() del executor _system_limits_checked = False diff --git a/lib-python/3.2/configparser.py b/lib-python/3.2/configparser.py --- a/lib-python/3.2/configparser.py +++ b/lib-python/3.2/configparser.py @@ -381,7 +381,7 @@ would resolve the "%(dir)s" to the value of dir. All reference expansions are done late, on demand. If a user needs to use a bare % in - a configuration file, she can escape it by writing %%. Other other % usage + a configuration file, she can escape it by writing %%. 
Other % usage is considered a user error and raises `InterpolationSyntaxError'.""" _KEYCRE = re.compile(r"%\(([^)]+)\)s") diff --git a/lib-python/3.2/copyreg.py b/lib-python/3.2/copyreg.py --- a/lib-python/3.2/copyreg.py +++ b/lib-python/3.2/copyreg.py @@ -10,7 +10,7 @@ dispatch_table = {} def pickle(ob_type, pickle_function, constructor_ob=None): - if not hasattr(pickle_function, '__call__'): + if not callable(pickle_function): raise TypeError("reduction functions must be callable") dispatch_table[ob_type] = pickle_function @@ -20,7 +20,7 @@ constructor(constructor_ob) def constructor(object): - if not hasattr(object, '__call__'): + if not callable(object): raise TypeError("constructors must be callable") # Example: provide pickling support for complex numbers. diff --git a/lib-python/3.2/ctypes/__init__.py b/lib-python/3.2/ctypes/__init__.py --- a/lib-python/3.2/ctypes/__init__.py +++ b/lib-python/3.2/ctypes/__init__.py @@ -265,7 +265,21 @@ class c_wchar(_SimpleCData): _type_ = "u" -POINTER(c_wchar).from_param = c_wchar_p.from_param #_SimpleCData.c_wchar_p_from_param +def _reset_cache(): + _pointer_type_cache.clear() + _c_functype_cache.clear() + if _os.name in ("nt", "ce"): + _win_functype_cache.clear() + # _SimpleCData.c_wchar_p_from_param + POINTER(c_wchar).from_param = c_wchar_p.from_param + # _SimpleCData.c_char_p_from_param + POINTER(c_char).from_param = c_char_p.from_param + _pointer_type_cache[None] = c_void_p + # XXX for whatever reasons, creating the first instance of a callback + # function is needed for the unittests on Win64 to succeed. This MAY + # be a compiler bug, since the problem occurs only when _ctypes is + # compiled with the MS SDK compiler. Or an uninitialized variable? 
+ CFUNCTYPE(c_int)(lambda: None) def create_unicode_buffer(init, size=None): """create_unicode_buffer(aString) -> character array @@ -285,7 +299,6 @@ return buf raise TypeError(init) -POINTER(c_char).from_param = c_char_p.from_param #_SimpleCData.c_char_p_from_param # XXX Deprecated def SetPointerType(pointer, cls): @@ -445,8 +458,6 @@ descr = FormatError(code).strip() return WindowsError(code, descr) -_pointer_type_cache[None] = c_void_p - if sizeof(c_uint) == sizeof(c_void_p): c_size_t = c_uint c_ssize_t = c_int @@ -529,8 +540,4 @@ elif sizeof(kind) == 8: c_uint64 = kind del(kind) -# XXX for whatever reasons, creating the first instance of a callback -# function is needed for the unittests on Win64 to succeed. This MAY -# be a compiler bug, since the problem occurs only when _ctypes is -# compiled with the MS SDK compiler. Or an uninitialized variable? -CFUNCTYPE(c_int)(lambda: None) +_reset_cache() diff --git a/lib-python/3.2/ctypes/_endian.py b/lib-python/3.2/ctypes/_endian.py --- a/lib-python/3.2/ctypes/_endian.py +++ b/lib-python/3.2/ctypes/_endian.py @@ -1,7 +1,7 @@ import sys from ctypes import * -_array_type = type(c_int * 3) +_array_type = type(Array) def _other_endian(typ): """Return the type with the 'other' byte order. 
Simple types like diff --git a/lib-python/3.2/ctypes/macholib/fetch_macholib b/lib-python/3.2/ctypes/macholib/fetch_macholib old mode 100644 new mode 100755 diff --git a/lib-python/3.2/ctypes/test/test_arrays.py b/lib-python/3.2/ctypes/test/test_arrays.py --- a/lib-python/3.2/ctypes/test/test_arrays.py +++ b/lib-python/3.2/ctypes/test/test_arrays.py @@ -127,5 +127,57 @@ t2 = my_int * 1 self.assertTrue(t1 is t2) + def test_subclass(self): + class T(Array): + _type_ = c_int + _length_ = 13 + class U(T): + pass + class V(U): + pass + class W(V): + pass + class X(T): + _type_ = c_short + class Y(T): + _length_ = 187 + + for c in [T, U, V, W]: + self.assertEqual(c._type_, c_int) + self.assertEqual(c._length_, 13) + self.assertEqual(c()._type_, c_int) + self.assertEqual(c()._length_, 13) + + self.assertEqual(X._type_, c_short) + self.assertEqual(X._length_, 13) + self.assertEqual(X()._type_, c_short) + self.assertEqual(X()._length_, 13) + + self.assertEqual(Y._type_, c_int) + self.assertEqual(Y._length_, 187) + self.assertEqual(Y()._type_, c_int) + self.assertEqual(Y()._length_, 187) + + def test_bad_subclass(self): + import sys + + with self.assertRaises(AttributeError): + class T(Array): + pass + with self.assertRaises(AttributeError): + class T(Array): + _type_ = c_int + with self.assertRaises(AttributeError): + class T(Array): + _length_ = 13 + with self.assertRaises(OverflowError): + class T(Array): + _type_ = c_int + _length_ = sys.maxsize * 2 + with self.assertRaises(AttributeError): + class T(Array): + _type_ = c_int + _length_ = 1.87 + if __name__ == '__main__': unittest.main() diff --git a/lib-python/3.2/ctypes/test/test_as_parameter.py b/lib-python/3.2/ctypes/test/test_as_parameter.py --- a/lib-python/3.2/ctypes/test/test_as_parameter.py +++ b/lib-python/3.2/ctypes/test/test_as_parameter.py @@ -74,6 +74,7 @@ def test_callbacks(self): f = dll._testfunc_callback_i_if f.restype = c_int + f.argtypes = None MyCallback = CFUNCTYPE(c_int, c_int) diff --git 
a/lib-python/3.2/ctypes/test/test_callbacks.py b/lib-python/3.2/ctypes/test/test_callbacks.py --- a/lib-python/3.2/ctypes/test/test_callbacks.py +++ b/lib-python/3.2/ctypes/test/test_callbacks.py @@ -134,6 +134,14 @@ if isinstance(x, X)] self.assertEqual(len(live), 0) + def test_issue12483(self): + import gc + class Nasty: + def __del__(self): + gc.collect() + CFUNCTYPE(None)(lambda x=Nasty(): None) + + try: WINFUNCTYPE except NameError: diff --git a/lib-python/3.2/ctypes/test/test_functions.py b/lib-python/3.2/ctypes/test/test_functions.py --- a/lib-python/3.2/ctypes/test/test_functions.py +++ b/lib-python/3.2/ctypes/test/test_functions.py @@ -250,6 +250,7 @@ def test_callbacks(self): f = dll._testfunc_callback_i_if f.restype = c_int + f.argtypes = None MyCallback = CFUNCTYPE(c_int, c_int) diff --git a/lib-python/3.2/ctypes/test/test_structures.py b/lib-python/3.2/ctypes/test/test_structures.py --- a/lib-python/3.2/ctypes/test/test_structures.py +++ b/lib-python/3.2/ctypes/test/test_structures.py @@ -239,6 +239,14 @@ pass self.assertRaises(TypeError, setattr, POINT, "_fields_", [("x", 1), ("y", 2)]) + def test_invalid_name(self): + # field name must be string + def declare_with_name(name): + class S(Structure): + _fields_ = [(name, c_int)] + + self.assertRaises(TypeError, declare_with_name, b"x") + def test_intarray_fields(self): class SomeInts(Structure): _fields_ = [("a", c_int * 4)] @@ -318,6 +326,18 @@ else: self.assertEqual(msg, "(Phone) TypeError: too many initializers") + def test_huge_field_name(self): + # issue12881: segfault with large structure field names + def create_class(length): + class S(Structure): + _fields_ = [('x' * length, c_int)] + + for length in [10 ** i for i in range(0, 8)]: + try: + create_class(length) + except MemoryError: + # MemoryErrors are OK, we just don't want to segfault + pass def get_except(self, func, *args): try: diff --git a/lib-python/3.2/ctypes/util.py b/lib-python/3.2/ctypes/util.py --- a/lib-python/3.2/ctypes/util.py 
+++ b/lib-python/3.2/ctypes/util.py @@ -171,22 +171,6 @@ else: - def _findLib_ldconfig(name): - # XXX assuming GLIBC's ldconfig (with option -p) - expr = r'/[^\(\)\s]*lib%s\.[^\(\)\s]*' % re.escape(name) - with contextlib.closing(os.popen('/sbin/ldconfig -p 2>/dev/null')) as f: - data = f.read() - res = re.search(expr, data) - if not res: - # Hm, this works only for libs needed by the python executable. - cmd = 'ldd %s 2>/dev/null' % sys.executable - with contextlib.closing(os.popen(cmd)) as f: - data = f.read() - res = re.search(expr, data) - if not res: - return None - return res.group(0) - def _findSoname_ldconfig(name): import struct if struct.calcsize('l') == 4: @@ -203,8 +187,7 @@ abi_type = mach_map.get(machine, 'libc6') # XXX assuming GLIBC's ldconfig (with option -p) - expr = r'(\S+)\s+\((%s(?:, OS ABI:[^\)]*)?)\)[^/]*(/[^\(\)\s]*lib%s\.[^\(\)\s]*)' \ - % (abi_type, re.escape(name)) + expr = r'\s+(lib%s\.[^\s]+)\s+\(%s' % (re.escape(name), abi_type) with contextlib.closing(os.popen('LC_ALL=C LANG=C /sbin/ldconfig -p 2>/dev/null')) as f: data = f.read() res = re.search(expr, data) diff --git a/lib-python/3.2/curses/__init__.py b/lib-python/3.2/curses/__init__.py --- a/lib-python/3.2/curses/__init__.py +++ b/lib-python/3.2/curses/__init__.py @@ -54,4 +54,4 @@ try: has_key except NameError: - from has_key import has_key + from .has_key import has_key diff --git a/lib-python/3.2/datetime.py b/lib-python/3.2/datetime.py --- a/lib-python/3.2/datetime.py +++ b/lib-python/3.2/datetime.py @@ -2057,7 +2057,7 @@ Because we know z.d said z was in daylight time (else [5] would have held and we would have stopped then), and we know z.d != z'.d (else [8] would have held -and we we have stopped then), and there are only 2 possible values dst() can +and we have stopped then), and there are only 2 possible values dst() can return in Eastern, it follows that z'.d must be 0 (which it is in the example, but the reasoning doesn't depend on the example -- it depends on there 
being two possible dst() outcomes, one zero and the other non-zero). Therefore diff --git a/lib-python/3.2/dbm/__init__.py b/lib-python/3.2/dbm/__init__.py --- a/lib-python/3.2/dbm/__init__.py +++ b/lib-python/3.2/dbm/__init__.py @@ -166,7 +166,7 @@ return "" # Check for GNU dbm - if magic == 0x13579ace: + if magic in (0x13579ace, 0x13579acd, 0x13579acf): return "dbm.gnu" # Later versions of Berkeley db hash file have a 12-byte pad in diff --git a/lib-python/3.2/distutils/__init__.py b/lib-python/3.2/distutils/__init__.py --- a/lib-python/3.2/distutils/__init__.py +++ b/lib-python/3.2/distutils/__init__.py @@ -13,5 +13,5 @@ # Updated automatically by the Python release process. # #--start constants-- -__version__ = "3.2.2" +__version__ = "3.2.3" #--end constants-- diff --git a/lib-python/3.2/distutils/command/bdist_rpm.py b/lib-python/3.2/distutils/command/bdist_rpm.py --- a/lib-python/3.2/distutils/command/bdist_rpm.py +++ b/lib-python/3.2/distutils/command/bdist_rpm.py @@ -365,16 +365,28 @@ self.spawn(rpm_cmd) if not self.dry_run: + if self.distribution.has_ext_modules(): + pyversion = get_python_version() + else: + pyversion = 'any' + if not self.binary_only: srpm = os.path.join(rpm_dir['SRPMS'], source_rpm) assert(os.path.exists(srpm)) self.move_file(srpm, self.dist_dir) + filename = os.path.join(self.dist_dir, source_rpm) + self.distribution.dist_files.append( + ('bdist_rpm', pyversion, filename)) if not self.source_only: for rpm in binary_rpms: rpm = os.path.join(rpm_dir['RPMS'], rpm) if os.path.exists(rpm): self.move_file(rpm, self.dist_dir) + filename = os.path.join(self.dist_dir, + os.path.basename(rpm)) + self.distribution.dist_files.append( + ('bdist_rpm', pyversion, filename)) def _dist_path(self, path): return os.path.join(self.dist_dir, os.path.basename(path)) diff --git a/lib-python/3.2/distutils/command/build_ext.py b/lib-python/3.2/distutils/command/build_ext.py --- a/lib-python/3.2/distutils/command/build_ext.py +++ 
b/lib-python/3.2/distutils/command/build_ext.py @@ -165,8 +165,7 @@ if plat_py_include != py_include: self.include_dirs.append(plat_py_include) - if isinstance(self.libraries, str): - self.libraries = [self.libraries] + self.ensure_string_list('libraries') # Life is easier if we're not forever checking for None, so # simplify these options to empty lists if unset diff --git a/lib-python/3.2/distutils/command/build_py.py b/lib-python/3.2/distutils/command/build_py.py --- a/lib-python/3.2/distutils/command/build_py.py +++ b/lib-python/3.2/distutils/command/build_py.py @@ -2,7 +2,8 @@ Implements the Distutils 'build_py' command.""" -import sys, os +import os +import imp import sys from glob import glob @@ -311,9 +312,11 @@ outputs.append(filename) if include_bytecode: if self.compile: - outputs.append(filename + "c") + outputs.append(imp.cache_from_source(filename, + debug_override=True)) if self.optimize > 0: - outputs.append(filename + "o") + outputs.append(imp.cache_from_source(filename, + debug_override=False)) outputs += [ os.path.join(build_dir, filename) diff --git a/lib-python/3.2/distutils/command/install_egg_info.py b/lib-python/3.2/distutils/command/install_egg_info.py --- a/lib-python/3.2/distutils/command/install_egg_info.py +++ b/lib-python/3.2/distutils/command/install_egg_info.py @@ -40,9 +40,8 @@ "Creating "+self.install_dir) log.info("Writing %s", target) if not self.dry_run: - f = open(target, 'w') - self.distribution.metadata.write_pkg_file(f) - f.close() + with open(target, 'w', encoding='UTF-8') as f: + self.distribution.metadata.write_pkg_file(f) def get_outputs(self): return self.outputs diff --git a/lib-python/3.2/distutils/command/install_lib.py b/lib-python/3.2/distutils/command/install_lib.py --- a/lib-python/3.2/distutils/command/install_lib.py +++ b/lib-python/3.2/distutils/command/install_lib.py @@ -4,6 +4,7 @@ (install all Python modules).""" import os +import imp import sys from distutils.core import Command @@ -164,9 +165,11 @@ if ext 
!= PYTHON_SOURCE_EXTENSION: continue if self.compile: - bytecode_files.append(py_file + "c") + bytecode_files.append(imp.cache_from_source( + py_file, debug_override=True)) if self.optimize > 0: - bytecode_files.append(py_file + "o") + bytecode_files.append(imp.cache_from_source( + py_file, debug_override=False)) return bytecode_files diff --git a/lib-python/3.2/distutils/command/sdist.py b/lib-python/3.2/distutils/command/sdist.py --- a/lib-python/3.2/distutils/command/sdist.py +++ b/lib-python/3.2/distutils/command/sdist.py @@ -306,7 +306,10 @@ try: self.filelist.process_template_line(line) - except DistutilsTemplateError as msg: + # the call above can raise a DistutilsTemplateError for + # malformed lines, or a ValueError from the lower-level + # convert_path function + except (DistutilsTemplateError, ValueError) as msg: self.warn("%s, line %d: %s" % (template.filename, template.current_line, msg)) diff --git a/lib-python/3.2/distutils/dist.py b/lib-python/3.2/distutils/dist.py --- a/lib-python/3.2/distutils/dist.py +++ b/lib-python/3.2/distutils/dist.py @@ -537,7 +537,7 @@ for (help_option, short, desc, func) in cmd_class.help_options: if hasattr(opts, parser.get_attr_name(help_option)): help_option_found=1 - if hasattr(func, '__call__'): + if callable(func): func() else: raise DistutilsClassError( @@ -1010,17 +1010,16 @@ def write_pkg_info(self, base_dir): """Write the PKG-INFO file into the release tree. """ - pkg_info = open(os.path.join(base_dir, 'PKG-INFO'), 'w') - try: + with open(os.path.join(base_dir, 'PKG-INFO'), 'w', + encoding='UTF-8') as pkg_info: self.write_pkg_file(pkg_info) - finally: - pkg_info.close() def write_pkg_file(self, file): """Write the PKG-INFO format data to a file object. 
""" version = '1.0' - if self.provides or self.requires or self.obsoletes: + if (self.provides or self.requires or self.obsoletes or + self.classifiers or self.download_url): version = '1.1' file.write('Metadata-Version: %s\n' % version) diff --git a/lib-python/3.2/distutils/filelist.py b/lib-python/3.2/distutils/filelist.py --- a/lib-python/3.2/distutils/filelist.py +++ b/lib-python/3.2/distutils/filelist.py @@ -201,6 +201,7 @@ Return True if files are found, False otherwise. """ + # XXX docstring lying about what the special chars are? files_found = False pattern_re = translate_pattern(pattern, anchor, prefix, is_regex) self.debug_print("include_pattern: applying regex r'%s'" % @@ -284,11 +285,14 @@ # IMHO is wrong -- '?' and '*' aren't supposed to match slash in Unix, # and by extension they shouldn't match such "special characters" under # any OS. So change all non-escaped dots in the RE to match any - # character except the special characters. - # XXX currently the "special characters" are just slash -- i.e. this is - # Unix-only. - pattern_re = re.sub(r'((?\s*" manifest_buf = re.sub(pattern, "", manifest_buf) + # Now see if any other assemblies are referenced - if not, we + # don't want a manifest embedded. + pattern = re.compile( + r"""|)""", re.DOTALL) + if re.search(pattern, manifest_buf) is None: + return None + manifest_f = open(manifest_file, 'w') try: manifest_f.write(manifest_buf) + return manifest_file finally: manifest_f.close() except IOError: diff --git a/lib-python/3.2/distutils/sysconfig.py b/lib-python/3.2/distutils/sysconfig.py --- a/lib-python/3.2/distutils/sysconfig.py +++ b/lib-python/3.2/distutils/sysconfig.py @@ -146,6 +146,7 @@ "I don't know where Python installs its library " "on platform '%s'" % os.name) +_USE_CLANG = None def customize_compiler(compiler): """Do any platform-specific customization of a CCompiler instance. 
@@ -158,8 +159,38 @@ get_config_vars('CC', 'CXX', 'OPT', 'CFLAGS', 'CCSHARED', 'LDSHARED', 'SO', 'AR', 'ARFLAGS') + newcc = None if 'CC' in os.environ: - cc = os.environ['CC'] + newcc = os.environ['CC'] + elif sys.platform == 'darwin' and cc == 'gcc-4.2': + # Issue #13590: + # Since Apple removed gcc-4.2 in Xcode 4.2, we can no + # longer assume it is available for extension module builds. + # If Python was built with gcc-4.2, check first to see if + # it is available on this system; if not, try to use clang + # instead unless the caller explicitly set CC. + global _USE_CLANG + if _USE_CLANG is None: + from distutils import log + from subprocess import Popen, PIPE + p = Popen("! type gcc-4.2 && type clang && exit 2", + shell=True, stdout=PIPE, stderr=PIPE) + p.wait() + if p.returncode == 2: + _USE_CLANG = True + log.warn("gcc-4.2 not found, using clang instead") + else: + _USE_CLANG = False + if _USE_CLANG: + newcc = 'clang' + if newcc: + # On OS X, if CC is overridden, use that as the default + # command for LDSHARED as well + if (sys.platform == 'darwin' + and 'LDSHARED' not in os.environ + and ldshared.startswith(cc)): + ldshared = newcc + ldshared[len(cc):] + cc = newcc if 'CXX' in os.environ: cxx = os.environ['CXX'] if 'LDSHARED' in os.environ: @@ -218,7 +249,7 @@ """Return full pathname of installed Makefile from the Python build.""" if python_build: return os.path.join(os.path.dirname(sys.executable), "Makefile") - lib_dir = get_python_lib(plat_specific=1, standard_lib=1) + lib_dir = get_python_lib(plat_specific=0, standard_lib=1) config_file = 'config-{}{}'.format(get_python_version(), build_flags) return os.path.join(lib_dir, config_file, 'Makefile') diff --git a/lib-python/3.2/distutils/tests/support.py b/lib-python/3.2/distutils/tests/support.py --- a/lib-python/3.2/distutils/tests/support.py +++ b/lib-python/3.2/distutils/tests/support.py @@ -141,9 +141,9 @@ Example use: - def test_compile(self): - copy_xxmodule_c(self.tmpdir) - 
self.assertIn('xxmodule.c', os.listdir(self.tmpdir) + def test_compile(self): + copy_xxmodule_c(self.tmpdir) + self.assertIn('xxmodule.c', os.listdir(self.tmpdir)) If the source file can be found, it will be copied to *directory*. If not, the test will be skipped. Errors during copy are not caught. @@ -175,10 +175,9 @@ def fixup_build_ext(cmd): """Function needed to make build_ext tests pass. - When Python was build with --enable-shared on Unix, -L. is not good - enough to find the libpython.so. This is because regrtest runs - it under a tempdir, not in the top level where the .so lives. By the - time we've gotten here, Python's already been chdir'd to the tempdir. + When Python was built with --enable-shared on Unix, -L. is not enough to + find libpython.so, because regrtest runs in a tempdir, not in the + source directory where the .so lives. When Python was built with in debug mode on Windows, build_ext commands need their debug attribute set, and it is not done automatically for @@ -189,6 +188,9 @@ cmd = build_ext(dist) support.fixup_build_ext(cmd) cmd.ensure_finalized() + + Unlike most other Unix platforms, Mac OS X embeds absolute paths + to shared libraries into executables, so the fixup is not needed there. 
""" if os.name == 'nt': cmd.debug = sys.executable.endswith('_d.exe') @@ -200,5 +202,8 @@ if runshared is None: cmd.library_dirs = ['.'] else: - name, equals, value = runshared.partition('=') - cmd.library_dirs = value.split(os.pathsep) + if sys.platform == 'darwin': + cmd.library_dirs = [] + else: + name, equals, value = runshared.partition('=') + cmd.library_dirs = value.split(os.pathsep) diff --git a/lib-python/3.2/distutils/tests/test_bdist_dumb.py b/lib-python/3.2/distutils/tests/test_bdist_dumb.py --- a/lib-python/3.2/distutils/tests/test_bdist_dumb.py +++ b/lib-python/3.2/distutils/tests/test_bdist_dumb.py @@ -1,8 +1,10 @@ """Tests for distutils.command.bdist_dumb.""" +import os +import imp +import sys +import zipfile import unittest -import sys -import os from test.support import run_unittest from distutils.core import Distribution @@ -72,15 +74,24 @@ # see what we have dist_created = os.listdir(os.path.join(pkg_dir, 'dist')) - base = "%s.%s" % (dist.get_fullname(), cmd.plat_name) + base = "%s.%s.zip" % (dist.get_fullname(), cmd.plat_name) if os.name == 'os2': base = base.replace(':', '-') - wanted = ['%s.zip' % base] - self.assertEqual(dist_created, wanted) + self.assertEqual(dist_created, [base]) # now let's check what we have in the zip file - # XXX to be done + fp = zipfile.ZipFile(os.path.join('dist', base)) + try: + contents = fp.namelist() + finally: + fp.close() + + contents = sorted(os.path.basename(fn) for fn in contents) + wanted = ['foo-0.1-py%s.%s.egg-info' % sys.version_info[:2], + 'foo.%s.pyc' % imp.get_tag(), + 'foo.py'] + self.assertEqual(contents, sorted(wanted)) def test_suite(): return unittest.makeSuite(BuildDumbTestCase) diff --git a/lib-python/3.2/distutils/tests/test_bdist_rpm.py b/lib-python/3.2/distutils/tests/test_bdist_rpm.py --- a/lib-python/3.2/distutils/tests/test_bdist_rpm.py +++ b/lib-python/3.2/distutils/tests/test_bdist_rpm.py @@ -78,6 +78,10 @@ dist_created = os.listdir(os.path.join(pkg_dir, 'dist')) 
self.assertTrue('foo-0.1-1.noarch.rpm' in dist_created) + # bug #2945: upload ignores bdist_rpm files + self.assertIn(('bdist_rpm', 'any', 'dist/foo-0.1-1.src.rpm'), dist.dist_files) + self.assertIn(('bdist_rpm', 'any', 'dist/foo-0.1-1.noarch.rpm'), dist.dist_files) + def test_no_optimize_flag(self): # XXX I am unable yet to make this test work without @@ -117,6 +121,11 @@ dist_created = os.listdir(os.path.join(pkg_dir, 'dist')) self.assertTrue('foo-0.1-1.noarch.rpm' in dist_created) + + # bug #2945: upload ignores bdist_rpm files + self.assertIn(('bdist_rpm', 'any', 'dist/foo-0.1-1.src.rpm'), dist.dist_files) + self.assertIn(('bdist_rpm', 'any', 'dist/foo-0.1-1.noarch.rpm'), dist.dist_files) + os.remove(os.path.join(pkg_dir, 'dist', 'foo-0.1-1.noarch.rpm')) def test_suite(): diff --git a/lib-python/3.2/distutils/tests/test_build_ext.py b/lib-python/3.2/distutils/tests/test_build_ext.py --- a/lib-python/3.2/distutils/tests/test_build_ext.py +++ b/lib-python/3.2/distutils/tests/test_build_ext.py @@ -178,21 +178,22 @@ # make sure cmd.libraries is turned into a list # if it's a string cmd = build_ext(dist) - cmd.libraries = 'my_lib' + cmd.libraries = 'my_lib, other_lib lastlib' cmd.finalize_options() - self.assertEqual(cmd.libraries, ['my_lib']) + self.assertEqual(cmd.libraries, ['my_lib', 'other_lib', 'lastlib']) # make sure cmd.library_dirs is turned into a list # if it's a string cmd = build_ext(dist) - cmd.library_dirs = 'my_lib_dir' + cmd.library_dirs = 'my_lib_dir%sother_lib_dir' % os.pathsep cmd.finalize_options() - self.assertTrue('my_lib_dir' in cmd.library_dirs) + self.assertIn('my_lib_dir', cmd.library_dirs) + self.assertIn('other_lib_dir', cmd.library_dirs) # make sure rpath is turned into a list - # if it's a list of os.pathsep's paths + # if it's a string cmd = build_ext(dist) - cmd.rpath = os.pathsep.join(['one', 'two']) + cmd.rpath = 'one%stwo' % os.pathsep cmd.finalize_options() self.assertEqual(cmd.rpath, ['one', 'two']) diff --git 
a/lib-python/3.2/distutils/tests/test_build_py.py b/lib-python/3.2/distutils/tests/test_build_py.py --- a/lib-python/3.2/distutils/tests/test_build_py.py +++ b/lib-python/3.2/distutils/tests/test_build_py.py @@ -2,7 +2,7 @@ import os import sys -import io +import imp import unittest from distutils.command.build_py import build_py @@ -53,23 +53,20 @@ # This makes sure the list of outputs includes byte-compiled # files for Python modules but not for package data files # (there shouldn't *be* byte-code files for those!). - # self.assertEqual(len(cmd.get_outputs()), 3) pkgdest = os.path.join(destination, "pkg") files = os.listdir(pkgdest) + pycache_dir = os.path.join(pkgdest, "__pycache__") self.assertIn("__init__.py", files) self.assertIn("README.txt", files) - # XXX even with -O, distutils writes pyc, not pyo; bug? if sys.dont_write_bytecode: - self.assertNotIn("__init__.pyc", files) + self.assertFalse(os.path.exists(pycache_dir)) else: - self.assertIn("__init__.pyc", files) + pyc_files = os.listdir(pycache_dir) + self.assertIn("__init__.%s.pyc" % imp.get_tag(), pyc_files) def test_empty_package_dir(self): - # See SF 1668596/1720897. - cwd = os.getcwd() - - # create the distribution files. 
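The switch above from checking for `__init__.pyc` next to the source to looking inside `__pycache__` follows PEP 3147: since Python 3.2, byte-compiled files carry the interpreter's magic tag (what `imp.get_tag()` returned) in their name. A quick illustration using the modern `importlib`/`sys` spellings of the same helpers:

```python
import sys
import importlib.util

# sys.implementation.cache_tag is the modern spelling of imp.get_tag(),
# e.g. 'cpython-32' on the interpreter these tests targeted.
print(sys.implementation.cache_tag)

# cache_from_source maps a source path to its PEP 3147 cache location,
# i.e. __pycache__/boiledeggs.<tag>.pyc
path = importlib.util.cache_from_source("boiledeggs.py")
print(path)
```

This is why the tests assert on `'boiledeggs.%s.pyc' % imp.get_tag()` rather than a fixed filename.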
+ # See bugs #1668596/#1720897 sources = self.mkdtemp() open(os.path.join(sources, "__init__.py"), "w").close() @@ -78,30 +75,55 @@ open(os.path.join(testdir, "testfile"), "w").close() os.chdir(sources) - old_stdout = sys.stdout - sys.stdout = io.StringIO() + dist = Distribution({"packages": ["pkg"], + "package_dir": {"pkg": ""}, + "package_data": {"pkg": ["doc/*"]}}) + # script_name need not exist, it just need to be initialized + dist.script_name = os.path.join(sources, "setup.py") + dist.script_args = ["build"] + dist.parse_command_line() try: - dist = Distribution({"packages": ["pkg"], - "package_dir": {"pkg": ""}, - "package_data": {"pkg": ["doc/*"]}}) - # script_name need not exist, it just need to be initialized - dist.script_name = os.path.join(sources, "setup.py") - dist.script_args = ["build"] - dist.parse_command_line() + dist.run_commands() + except DistutilsFileError: + self.fail("failed package_data test when package_dir is ''") - try: - dist.run_commands() - except DistutilsFileError: - self.fail("failed package_data test when package_dir is ''") - finally: - # Restore state. 
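Several hunks in this patch (the warnings capture in test_dist, the user-site setup in test_install) replace try/finally save-and-restore blocks with unittest's addCleanup, which registers teardown callbacks that run even when the test body fails. A minimal sketch of the pattern:

```python
import unittest
import warnings


class CleanupDemo(unittest.TestCase):
    def test_patched_warn(self):
        captured = []
        # Register the restore FIRST (warnings.warn is still the original
        # here), then patch; the cleanup runs after the test no matter how
        # it exits, which is what the try/finally blocks used to guarantee.
        self.addCleanup(setattr, warnings, "warn", warnings.warn)
        warnings.warn = captured.append
        warnings.warn("boo")
        self.assertEqual(captured, ["boo"])


suite = unittest.defaultTestLoader.loadTestsFromTestCase(CleanupDemo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Note that the restore value is captured at registration time, so the original `warnings.warn` is what gets put back.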
- os.chdir(cwd) - sys.stdout = old_stdout + @unittest.skipIf(sys.dont_write_bytecode, 'byte-compile disabled') + def test_byte_compile(self): + project_dir, dist = self.create_dist(py_modules=['boiledeggs']) + os.chdir(project_dir) + self.write_file('boiledeggs.py', 'import antigravity') + cmd = build_py(dist) + cmd.compile = 1 + cmd.build_lib = 'here' + cmd.finalize_options() + cmd.run() + + found = os.listdir(cmd.build_lib) + self.assertEqual(sorted(found), ['__pycache__', 'boiledeggs.py']) + found = os.listdir(os.path.join(cmd.build_lib, '__pycache__')) + self.assertEqual(found, ['boiledeggs.%s.pyc' % imp.get_tag()]) + + @unittest.skipIf(sys.dont_write_bytecode, 'byte-compile disabled') + def test_byte_compile_optimized(self): + project_dir, dist = self.create_dist(py_modules=['boiledeggs']) + os.chdir(project_dir) + self.write_file('boiledeggs.py', 'import antigravity') + cmd = build_py(dist) + cmd.compile = 0 + cmd.optimize = 1 + cmd.build_lib = 'here' + cmd.finalize_options() + cmd.run() + + found = os.listdir(cmd.build_lib) + self.assertEqual(sorted(found), ['__pycache__', 'boiledeggs.py']) + found = os.listdir(os.path.join(cmd.build_lib, '__pycache__')) + self.assertEqual(sorted(found), ['boiledeggs.%s.pyo' % imp.get_tag()]) def test_dont_write_bytecode(self): # makes sure byte_compile is not used - pkg_dir, dist = self.create_dist() + dist = self.create_dist()[1] cmd = build_py(dist) cmd.compile = 1 cmd.optimize = 1 @@ -115,6 +137,7 @@ self.assertIn('byte-compiling is disabled', self.logs[0][1]) + def test_suite(): return unittest.makeSuite(BuildPyTestCase) diff --git a/lib-python/3.2/distutils/tests/test_check.py b/lib-python/3.2/distutils/tests/test_check.py --- a/lib-python/3.2/distutils/tests/test_check.py +++ b/lib-python/3.2/distutils/tests/test_check.py @@ -46,6 +46,15 @@ cmd = self._run(metadata, strict=1) self.assertEqual(cmd._warnings, 0) + # now a test with non-ASCII characters + metadata = {'url': 'xxx', 'author': '\u00c9ric', + 'author_email': 
'xxx', 'name': 'xxx', + 'version': 'xxx', + 'description': 'Something about esszet \u00df', + 'long_description': 'More things about esszet \u00df'} + cmd = self._run(metadata) + self.assertEqual(cmd._warnings, 0) + def test_check_document(self): if not HAS_DOCUTILS: # won't test without docutils return @@ -80,8 +89,8 @@ self.assertRaises(DistutilsSetupError, self._run, metadata, **{'strict': 1, 'restructuredtext': 1}) - # and non-broken rest - metadata['long_description'] = 'title\n=====\n\ntest' + # and non-broken rest, including a non-ASCII character to test #12114 + metadata['long_description'] = 'title\n=====\n\ntest \u00df' cmd = self._run(metadata, strict=1, restructuredtext=1) self.assertEqual(cmd._warnings, 0) diff --git a/lib-python/3.2/distutils/tests/test_config_cmd.py b/lib-python/3.2/distutils/tests/test_config_cmd.py --- a/lib-python/3.2/distutils/tests/test_config_cmd.py +++ b/lib-python/3.2/distutils/tests/test_config_cmd.py @@ -44,10 +44,10 @@ cmd = config(dist) # simple pattern searches - match = cmd.search_cpp(pattern='xxx', body='// xxx') + match = cmd.search_cpp(pattern='xxx', body='/* xxx */') self.assertEqual(match, 0) - match = cmd.search_cpp(pattern='_configtest', body='// xxx') + match = cmd.search_cpp(pattern='_configtest', body='/* xxx */') self.assertEqual(match, 1) def test_finalize_options(self): diff --git a/lib-python/3.2/distutils/tests/test_dist.py b/lib-python/3.2/distutils/tests/test_dist.py --- a/lib-python/3.2/distutils/tests/test_dist.py +++ b/lib-python/3.2/distutils/tests/test_dist.py @@ -74,7 +74,7 @@ self.assertEqual(d.get_command_packages(), ["distutils.command", "foo.bar", "distutils.tests"]) cmd = d.get_command_obj("test_dist") - self.assertTrue(isinstance(cmd, test_dist)) + self.assertIsInstance(cmd, test_dist) self.assertEqual(cmd.sample_option, "sometext") def test_command_packages_configfile(self): @@ -106,28 +106,23 @@ def test_empty_options(self): # an empty options dictionary should not stay in the # list of 
attributes - klass = Distribution # catching warnings warns = [] + def _warn(msg): warns.append(msg) - old_warn = warnings.warn + self.addCleanup(setattr, warnings, 'warn', warnings.warn) warnings.warn = _warn - try: - dist = klass(attrs={'author': 'xxx', - 'name': 'xxx', - 'version': 'xxx', - 'url': 'xxxx', - 'options': {}}) - finally: - warnings.warn = old_warn + dist = Distribution(attrs={'author': 'xxx', 'name': 'xxx', + 'version': 'xxx', 'url': 'xxxx', + 'options': {}}) self.assertEqual(len(warns), 0) + self.assertNotIn('options', dir(dist)) def test_finalize_options(self): - attrs = {'keywords': 'one,two', 'platforms': 'one,two'} @@ -150,7 +145,6 @@ cmds = dist.get_command_packages() self.assertEqual(cmds, ['distutils.command', 'one', 'two']) - def test_announce(self): # make sure the level is known dist = Distribution() @@ -158,6 +152,7 @@ kwargs = {'level': 'ok2'} self.assertRaises(ValueError, dist.announce, args, kwargs) + class MetadataTestCase(support.TempdirManager, support.EnvironGuard, unittest.TestCase): @@ -170,15 +165,20 @@ sys.argv[:] = self.argv[1] super(MetadataTestCase, self).tearDown() + def format_metadata(self, dist): + sio = io.StringIO() + dist.metadata.write_pkg_file(sio) + return sio.getvalue() + def test_simple_metadata(self): attrs = {"name": "package", "version": "1.0"} dist = Distribution(attrs) meta = self.format_metadata(dist) - self.assertTrue("Metadata-Version: 1.0" in meta) - self.assertTrue("provides:" not in meta.lower()) - self.assertTrue("requires:" not in meta.lower()) - self.assertTrue("obsoletes:" not in meta.lower()) + self.assertIn("Metadata-Version: 1.0", meta) + self.assertNotIn("provides:", meta.lower()) + self.assertNotIn("requires:", meta.lower()) + self.assertNotIn("obsoletes:", meta.lower()) def test_provides(self): attrs = {"name": "package", @@ -190,9 +190,9 @@ self.assertEqual(dist.get_provides(), ["package", "package.sub"]) meta = self.format_metadata(dist) - self.assertTrue("Metadata-Version: 1.1" in meta) - 
self.assertTrue("requires:" not in meta.lower()) - self.assertTrue("obsoletes:" not in meta.lower()) + self.assertIn("Metadata-Version: 1.1", meta) + self.assertNotIn("requires:", meta.lower()) + self.assertNotIn("obsoletes:", meta.lower()) def test_provides_illegal(self): self.assertRaises(ValueError, Distribution, @@ -210,11 +210,11 @@ self.assertEqual(dist.get_requires(), ["other", "another (==1.0)"]) meta = self.format_metadata(dist) - self.assertTrue("Metadata-Version: 1.1" in meta) - self.assertTrue("provides:" not in meta.lower()) - self.assertTrue("Requires: other" in meta) - self.assertTrue("Requires: another (==1.0)" in meta) - self.assertTrue("obsoletes:" not in meta.lower()) + self.assertIn("Metadata-Version: 1.1", meta) + self.assertNotIn("provides:", meta.lower()) + self.assertIn("Requires: other", meta) + self.assertIn("Requires: another (==1.0)", meta) + self.assertNotIn("obsoletes:", meta.lower()) def test_requires_illegal(self): self.assertRaises(ValueError, Distribution, @@ -232,11 +232,11 @@ self.assertEqual(dist.get_obsoletes(), ["other", "another (<1.0)"]) meta = self.format_metadata(dist) - self.assertTrue("Metadata-Version: 1.1" in meta) - self.assertTrue("provides:" not in meta.lower()) - self.assertTrue("requires:" not in meta.lower()) - self.assertTrue("Obsoletes: other" in meta) - self.assertTrue("Obsoletes: another (<1.0)" in meta) + self.assertIn("Metadata-Version: 1.1", meta) + self.assertNotIn("provides:", meta.lower()) + self.assertNotIn("requires:", meta.lower()) + self.assertIn("Obsoletes: other", meta) + self.assertIn("Obsoletes: another (<1.0)", meta) def test_obsoletes_illegal(self): self.assertRaises(ValueError, Distribution, @@ -244,10 +244,34 @@ "version": "1.0", "obsoletes": ["my.pkg (splat)"]}) - def format_metadata(self, dist): - sio = io.StringIO() - dist.metadata.write_pkg_file(sio) - return sio.getvalue() + def test_classifier(self): + attrs = {'name': 'Boa', 'version': '3.0', + 'classifiers': ['Programming Language :: 
Python :: 3']} + dist = Distribution(attrs) + meta = self.format_metadata(dist) + self.assertIn('Metadata-Version: 1.1', meta) + + def test_download_url(self): + attrs = {'name': 'Boa', 'version': '3.0', + 'download_url': 'http://example.org/boa'} + dist = Distribution(attrs) + meta = self.format_metadata(dist) + self.assertIn('Metadata-Version: 1.1', meta) + + def test_long_description(self): + long_desc = textwrap.dedent("""\ + example:: + We start here + and continue here + and end here.""") + attrs = {"name": "package", + "version": "1.0", + "long_description": long_desc} + + dist = Distribution(attrs) + meta = self.format_metadata(dist) + meta = meta.replace('\n' + 8 * ' ', '\n') + self.assertIn(long_desc, meta) def test_custom_pydistutils(self): # fixes #2166 @@ -272,14 +296,14 @@ if sys.platform in ('linux', 'darwin'): os.environ['HOME'] = temp_dir files = dist.find_config_files() - self.assertTrue(user_filename in files) + self.assertIn(user_filename, files) # win32-style if sys.platform == 'win32': # home drive should be found os.environ['HOME'] = temp_dir files = dist.find_config_files() - self.assertTrue(user_filename in files, + self.assertIn(user_filename, files, '%r not found in %r' % (user_filename, files)) finally: os.remove(user_filename) @@ -301,22 +325,8 @@ output = [line for line in s.getvalue().split('\n') if line.strip() != ''] - self.assertTrue(len(output) > 0) + self.assertTrue(output) - def test_long_description(self): - long_desc = textwrap.dedent("""\ - example:: - We start here - and continue here - and end here.""") - attrs = {"name": "package", - "version": "1.0", - "long_description": long_desc} - - dist = Distribution(attrs) - meta = self.format_metadata(dist) - meta = meta.replace('\n' + 8 * ' ', '\n') - self.assertTrue(long_desc in meta) def test_suite(): suite = unittest.TestSuite() diff --git a/lib-python/3.2/distutils/tests/test_filelist.py b/lib-python/3.2/distutils/tests/test_filelist.py --- 
a/lib-python/3.2/distutils/tests/test_filelist.py +++ b/lib-python/3.2/distutils/tests/test_filelist.py @@ -1,40 +1,297 @@ """Tests for distutils.filelist.""" +import os +import re import unittest +from distutils import debug +from distutils.log import WARN +from distutils.errors import DistutilsTemplateError +from distutils.filelist import glob_to_re, translate_pattern, FileList -from distutils.filelist import glob_to_re, FileList from test.support import captured_stdout, run_unittest -from distutils import debug +from distutils.tests import support -class FileListTestCase(unittest.TestCase): +MANIFEST_IN = """\ +include ok +include xo +exclude xo +include foo.tmp +include buildout.cfg +global-include *.x +global-include *.txt +global-exclude *.tmp +recursive-include f *.oo +recursive-exclude global *.x +graft dir +prune dir3 +""" + + +def make_local_path(s): + """Converts '/' in a string to os.sep""" + return s.replace('/', os.sep) + + +class FileListTestCase(support.LoggingSilencer, + unittest.TestCase): + + def assertNoWarnings(self): + self.assertEqual(self.get_logs(WARN), []) + self.clear_logs() + + def assertWarnings(self): + self.assertGreater(len(self.get_logs(WARN)), 0) + self.clear_logs() def test_glob_to_re(self): - # simple cases - self.assertEqual(glob_to_re('foo*'), 'foo[^/]*\\Z(?ms)') - self.assertEqual(glob_to_re('foo?'), 'foo[^/]\\Z(?ms)') - self.assertEqual(glob_to_re('foo??'), 'foo[^/][^/]\\Z(?ms)') + sep = os.sep + if os.sep == '\\': + sep = re.escape(os.sep) - # special cases - self.assertEqual(glob_to_re(r'foo\\*'), r'foo\\\\[^/]*\Z(?ms)') - self.assertEqual(glob_to_re(r'foo\\\*'), r'foo\\\\\\[^/]*\Z(?ms)') - self.assertEqual(glob_to_re('foo????'), r'foo[^/][^/][^/][^/]\Z(?ms)') - self.assertEqual(glob_to_re(r'foo\\??'), r'foo\\\\[^/][^/]\Z(?ms)') + for glob, regex in ( + # simple cases + ('foo*', r'foo[^%(sep)s]*\Z(?ms)'), + ('foo?', r'foo[^%(sep)s]\Z(?ms)'), + ('foo??', r'foo[^%(sep)s][^%(sep)s]\Z(?ms)'), + # special cases + (r'foo\\*', 
r'foo\\\\[^%(sep)s]*\Z(?ms)'), + (r'foo\\\*', r'foo\\\\\\[^%(sep)s]*\Z(?ms)'), + ('foo????', r'foo[^%(sep)s][^%(sep)s][^%(sep)s][^%(sep)s]\Z(?ms)'), + (r'foo\\??', r'foo\\\\[^%(sep)s][^%(sep)s]\Z(?ms)')): + regex = regex % {'sep': sep} + self.assertEqual(glob_to_re(glob), regex) + + def test_process_template_line(self): + # testing all MANIFEST.in template patterns + file_list = FileList() + l = make_local_path + + # simulated file list + file_list.allfiles = ['foo.tmp', 'ok', 'xo', 'four.txt', + 'buildout.cfg', + # filelist does not filter out VCS directories, + # it's sdist that does + l('.hg/last-message.txt'), + l('global/one.txt'), + l('global/two.txt'), + l('global/files.x'), + l('global/here.tmp'), + l('f/o/f.oo'), + l('dir/graft-one'), + l('dir/dir2/graft2'), + l('dir3/ok'), + l('dir3/sub/ok.txt'), + ] + + for line in MANIFEST_IN.split('\n'): + if line.strip() == '': + continue + file_list.process_template_line(line) + + wanted = ['ok', + 'buildout.cfg', + 'four.txt', + l('.hg/last-message.txt'), + l('global/one.txt'), + l('global/two.txt'), + l('f/o/f.oo'), + l('dir/graft-one'), + l('dir/dir2/graft2'), + ] + + self.assertEqual(file_list.files, wanted) def test_debug_print(self): file_list = FileList() with captured_stdout() as stdout: file_list.debug_print('xxx') - stdout.seek(0) - self.assertEqual(stdout.read(), '') + self.assertEqual(stdout.getvalue(), '') debug.DEBUG = True try: with captured_stdout() as stdout: file_list.debug_print('xxx') - stdout.seek(0) - self.assertEqual(stdout.read(), 'xxx\n') + self.assertEqual(stdout.getvalue(), 'xxx\n') finally: debug.DEBUG = False + def test_set_allfiles(self): + file_list = FileList() + files = ['a', 'b', 'c'] + file_list.set_allfiles(files) + self.assertEqual(file_list.allfiles, files) + + def test_remove_duplicates(self): + file_list = FileList() + file_list.files = ['a', 'b', 'a', 'g', 'c', 'g'] + # files must be sorted beforehand (sdist does it) + file_list.sort() + file_list.remove_duplicates() + 
self.assertEqual(file_list.files, ['a', 'b', 'c', 'g']) + + def test_translate_pattern(self): + # not regex + self.assertTrue(hasattr( + translate_pattern('a', anchor=True, is_regex=False), + 'search')) + + # is a regex + regex = re.compile('a') + self.assertEqual( + translate_pattern(regex, anchor=True, is_regex=True), + regex) + + # plain string flagged as regex + self.assertTrue(hasattr( + translate_pattern('a', anchor=True, is_regex=True), + 'search')) + + # glob support + self.assertTrue(translate_pattern( + '*.py', anchor=True, is_regex=False).search('filelist.py')) + + def test_exclude_pattern(self): + # return False if no match + file_list = FileList() + self.assertFalse(file_list.exclude_pattern('*.py')) + + # return True if files match + file_list = FileList() + file_list.files = ['a.py', 'b.py'] + self.assertTrue(file_list.exclude_pattern('*.py')) + + # test excludes + file_list = FileList() + file_list.files = ['a.py', 'a.txt'] + file_list.exclude_pattern('*.py') + self.assertEqual(file_list.files, ['a.txt']) + + def test_include_pattern(self): + # return False if no match + file_list = FileList() + file_list.set_allfiles([]) + self.assertFalse(file_list.include_pattern('*.py')) + + # return True if files match + file_list = FileList() + file_list.set_allfiles(['a.py', 'b.txt']) + self.assertTrue(file_list.include_pattern('*.py')) + + # test * matches all files + file_list = FileList() + self.assertIsNone(file_list.allfiles) + file_list.set_allfiles(['a.py', 'b.txt']) + file_list.include_pattern('*') + self.assertEqual(file_list.allfiles, ['a.py', 'b.txt']) + + def test_process_template(self): + l = make_local_path + # invalid lines + file_list = FileList() + for action in ('include', 'exclude', 'global-include', + 'global-exclude', 'recursive-include', + 'recursive-exclude', 'graft', 'prune', 'blarg'): + self.assertRaises(DistutilsTemplateError, + file_list.process_template_line, action) + + # include + file_list = FileList() + 
file_list.set_allfiles(['a.py', 'b.txt', l('d/c.py')]) + + file_list.process_template_line('include *.py') + self.assertEqual(file_list.files, ['a.py']) + self.assertNoWarnings() + + file_list.process_template_line('include *.rb') + self.assertEqual(file_list.files, ['a.py']) + self.assertWarnings() + + # exclude + file_list = FileList() + file_list.files = ['a.py', 'b.txt', l('d/c.py')] + + file_list.process_template_line('exclude *.py') + self.assertEqual(file_list.files, ['b.txt', l('d/c.py')]) + self.assertNoWarnings() + + file_list.process_template_line('exclude *.rb') + self.assertEqual(file_list.files, ['b.txt', l('d/c.py')]) + self.assertWarnings() + + # global-include + file_list = FileList() + file_list.set_allfiles(['a.py', 'b.txt', l('d/c.py')]) + + file_list.process_template_line('global-include *.py') + self.assertEqual(file_list.files, ['a.py', l('d/c.py')]) + self.assertNoWarnings() + + file_list.process_template_line('global-include *.rb') + self.assertEqual(file_list.files, ['a.py', l('d/c.py')]) + self.assertWarnings() + + # global-exclude + file_list = FileList() + file_list.files = ['a.py', 'b.txt', l('d/c.py')] + + file_list.process_template_line('global-exclude *.py') + self.assertEqual(file_list.files, ['b.txt']) + self.assertNoWarnings() + + file_list.process_template_line('global-exclude *.rb') + self.assertEqual(file_list.files, ['b.txt']) + self.assertWarnings() + + # recursive-include + file_list = FileList() + file_list.set_allfiles(['a.py', l('d/b.py'), l('d/c.txt'), + l('d/d/e.py')]) + + file_list.process_template_line('recursive-include d *.py') + self.assertEqual(file_list.files, [l('d/b.py'), l('d/d/e.py')]) + self.assertNoWarnings() + + file_list.process_template_line('recursive-include e *.py') + self.assertEqual(file_list.files, [l('d/b.py'), l('d/d/e.py')]) + self.assertWarnings() + + # recursive-exclude + file_list = FileList() + file_list.files = ['a.py', l('d/b.py'), l('d/c.txt'), l('d/d/e.py')] + + 
file_list.process_template_line('recursive-exclude d *.py') + self.assertEqual(file_list.files, ['a.py', l('d/c.txt')]) + self.assertNoWarnings() + + file_list.process_template_line('recursive-exclude e *.py') + self.assertEqual(file_list.files, ['a.py', l('d/c.txt')]) + self.assertWarnings() + + # graft + file_list = FileList() + file_list.set_allfiles(['a.py', l('d/b.py'), l('d/d/e.py'), + l('f/f.py')]) + + file_list.process_template_line('graft d') + self.assertEqual(file_list.files, [l('d/b.py'), l('d/d/e.py')]) + self.assertNoWarnings() + + file_list.process_template_line('graft e') + self.assertEqual(file_list.files, [l('d/b.py'), l('d/d/e.py')]) + self.assertWarnings() + + # prune + file_list = FileList() + file_list.files = ['a.py', l('d/b.py'), l('d/d/e.py'), l('f/f.py')] + + file_list.process_template_line('prune d') + self.assertEqual(file_list.files, ['a.py', l('f/f.py')]) + self.assertNoWarnings() + + file_list.process_template_line('prune e') + self.assertEqual(file_list.files, ['a.py', l('f/f.py')]) + self.assertWarnings() + + def test_suite(): return unittest.makeSuite(FileListTestCase) diff --git a/lib-python/3.2/distutils/tests/test_install.py b/lib-python/3.2/distutils/tests/test_install.py --- a/lib-python/3.2/distutils/tests/test_install.py +++ b/lib-python/3.2/distutils/tests/test_install.py @@ -1,6 +1,7 @@ """Tests for distutils.command.install.""" import os +import imp import sys import unittest import site @@ -20,9 +21,8 @@ def _make_ext_name(modname): - if os.name == 'nt': - if sys.executable.endswith('_d.exe'): - modname += '_d' + if os.name == 'nt' and sys.executable.endswith('_d.exe'): + modname += '_d' return modname + sysconfig.get_config_var('SO') @@ -68,10 +68,7 @@ check_path(cmd.install_data, destination) def test_user_site(self): - # site.USER_SITE was introduced in 2.6 - if sys.version < '2.6': - return - + # test install with --user # preparing the environment for the test self.old_user_base = site.USER_BASE self.old_user_site = 
site.USER_SITE @@ -88,19 +85,17 @@ self.old_expand = os.path.expanduser os.path.expanduser = _expanduser - try: - # this is the actual test - self._test_user_site() - finally: + def cleanup(): site.USER_BASE = self.old_user_base site.USER_SITE = self.old_user_site install_module.USER_BASE = self.old_user_base install_module.USER_SITE = self.old_user_site os.path.expanduser = self.old_expand - def _test_user_site(self): + self.addCleanup(cleanup) + for key in ('nt_user', 'unix_user', 'os2_home'): - self.assertTrue(key in INSTALL_SCHEMES) + self.assertIn(key, INSTALL_SCHEMES) dist = Distribution({'name': 'xx'}) cmd = install(dist) @@ -108,14 +103,14 @@ # making sure the user option is there options = [name for name, short, lable in cmd.user_options] - self.assertTrue('user' in options) + self.assertIn('user', options) # setting a value cmd.user = 1 # user base and site shouldn't be created yet - self.assertTrue(not os.path.exists(self.user_base)) - self.assertTrue(not os.path.exists(self.user_site)) + self.assertFalse(os.path.exists(self.user_base)) + self.assertFalse(os.path.exists(self.user_site)) # let's run finalize cmd.ensure_finalized() @@ -124,8 +119,8 @@ self.assertTrue(os.path.exists(self.user_base)) self.assertTrue(os.path.exists(self.user_site)) - self.assertTrue('userbase' in cmd.config_vars) - self.assertTrue('usersite' in cmd.config_vars) + self.assertIn('userbase', cmd.config_vars) + self.assertIn('usersite', cmd.config_vars) def test_handle_extra_path(self): dist = Distribution({'name': 'xx', 'extra_path': 'path,dirs'}) @@ -178,15 +173,16 @@ def test_record(self): install_dir = self.mkdtemp() - project_dir, dist = self.create_dist(scripts=['hello']) - self.addCleanup(os.chdir, os.getcwd()) + project_dir, dist = self.create_dist(py_modules=['hello'], + scripts=['sayhi']) os.chdir(project_dir) - self.write_file('hello', "print('o hai')") + self.write_file('hello.py', "def main(): print('o hai')") + self.write_file('sayhi', 'from hello import main; 
main()') cmd = install(dist) dist.command_obj['install'] = cmd cmd.root = install_dir - cmd.record = os.path.join(project_dir, 'RECORD') + cmd.record = os.path.join(project_dir, 'filelist') cmd.ensure_finalized() cmd.run() @@ -197,7 +193,7 @@ f.close() found = [os.path.basename(line) for line in content.splitlines()] - expected = ['hello', + expected = ['hello.py', 'hello.%s.pyc' % imp.get_tag(), 'sayhi', 'UNKNOWN-0.0.0-py%s.%s.egg-info' % sys.version_info[:2]] self.assertEqual(found, expected) @@ -205,7 +201,6 @@ install_dir = self.mkdtemp() project_dir, dist = self.create_dist(ext_modules=[ Extension('xx', ['xxmodule.c'])]) - self.addCleanup(os.chdir, os.getcwd()) os.chdir(project_dir) support.copy_xxmodule_c(project_dir) @@ -217,7 +212,7 @@ dist.command_obj['install'] = cmd dist.command_obj['build_ext'] = buildextcmd cmd.root = install_dir - cmd.record = os.path.join(project_dir, 'RECORD') + cmd.record = os.path.join(project_dir, 'filelist') cmd.ensure_finalized() cmd.run() @@ -243,6 +238,7 @@ install_module.DEBUG = False self.assertTrue(len(self.logs) > old_logs_len) + def test_suite(): return unittest.makeSuite(InstallTestCase) diff --git a/lib-python/3.2/distutils/tests/test_install_lib.py b/lib-python/3.2/distutils/tests/test_install_lib.py --- a/lib-python/3.2/distutils/tests/test_install_lib.py +++ b/lib-python/3.2/distutils/tests/test_install_lib.py @@ -1,6 +1,7 @@ """Tests for distutils.command.install_data.""" import sys import os +import imp import unittest from distutils.command.install_lib import install_lib @@ -9,13 +10,14 @@ from distutils.errors import DistutilsOptionError from test.support import run_unittest + class InstallLibTestCase(support.TempdirManager, support.LoggingSilencer, support.EnvironGuard, unittest.TestCase): From noreply at buildbot.pypy.org Sun May 6 18:38:15 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Sun, 6 May 2012 18:38:15 +0200 (CEST) Subject: [pypy-commit] pypy win32-cleanup2: merge default 
Message-ID: <20120506163815.0518682E46@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: win32-cleanup2 Changeset: r54914:43d10e49640d Date: 2012-05-06 18:37 +0200 http://bitbucket.org/pypy/pypy/changeset/43d10e49640d/ Log: merge default diff too long, truncating to 10000 out of 754062 lines diff --git a/lib-python/2.7/UserDict.py b/lib-python/2.7/UserDict.py --- a/lib-python/2.7/UserDict.py +++ b/lib-python/2.7/UserDict.py @@ -80,8 +80,12 @@ def __iter__(self): return iter(self.data) -import _abcoll -_abcoll.MutableMapping.register(IterableUserDict) +try: + import _abcoll +except ImportError: + pass # e.g. no '_weakref' module on this pypy +else: + _abcoll.MutableMapping.register(IterableUserDict) class DictMixin: diff --git a/lib-python/2.7/_threading_local.py b/lib-python/2.7/_threading_local.py --- a/lib-python/2.7/_threading_local.py +++ b/lib-python/2.7/_threading_local.py @@ -155,7 +155,7 @@ object.__setattr__(self, '_local__args', (args, kw)) object.__setattr__(self, '_local__lock', RLock()) - if (args or kw) and (cls.__init__ is object.__init__): + if (args or kw) and (cls.__init__ == object.__init__): raise TypeError("Initialization arguments are not supported") # We need to create the thread dict in anticipation of diff --git a/lib-python/2.7/ctypes/__init__.py b/lib-python/2.7/ctypes/__init__.py --- a/lib-python/2.7/ctypes/__init__.py +++ b/lib-python/2.7/ctypes/__init__.py @@ -7,6 +7,7 @@ __version__ = "1.1.0" +import _ffi from _ctypes import Union, Structure, Array from _ctypes import _Pointer from _ctypes import CFuncPtr as _CFuncPtr @@ -350,16 +351,17 @@ self._FuncPtr = _FuncPtr if handle is None: - self._handle = _dlopen(self._name, mode) + self._handle = _ffi.CDLL(name, mode) else: self._handle = handle def __repr__(self): - return "<%s '%s', handle %x at %x>" % \ + return "<%s '%s', handle %r at %x>" % \ (self.__class__.__name__, self._name, - (self._handle & (_sys.maxint*2 + 1)), + (self._handle), id(self) & (_sys.maxint*2 + 1)) + 
     def __getattr__(self, name):
         if name.startswith('__') and name.endswith('__'):
             raise AttributeError(name)
@@ -449,6 +451,7 @@
         GetLastError = windll.kernel32.GetLastError
     else:
         GetLastError = windll.coredll.GetLastError
+    GetLastError.argtypes=[]
     from _ctypes import get_last_error, set_last_error
 
     def WinError(code=None, descr=None):
@@ -487,9 +490,12 @@
         _flags_ = _FUNCFLAG_CDECL | _FUNCFLAG_PYTHONAPI
         return CFunctionType
 
-_cast = PYFUNCTYPE(py_object, c_void_p, py_object, py_object)(_cast_addr)
 def cast(obj, typ):
-    return _cast(obj, obj, typ)
+    try:
+        c_void_p.from_param(obj)
+    except TypeError, e:
+        raise ArgumentError(str(e))
+    return _cast_addr(obj, obj, typ)
 
 _string_at = PYFUNCTYPE(py_object, c_void_p, c_int)(_string_at_addr)
 def string_at(ptr, size=-1):
diff --git a/lib-python/2.7/ctypes/test/__init__.py b/lib-python/2.7/ctypes/test/__init__.py
--- a/lib-python/2.7/ctypes/test/__init__.py
+++ b/lib-python/2.7/ctypes/test/__init__.py
@@ -206,3 +206,16 @@
     result = unittest.TestResult()
     test(result)
     return result
+
+def xfail(method):
+    """
+    Poor's man xfail: remove it when all the failures have been fixed
+    """
+    def new_method(self, *args, **kwds):
+        try:
+            method(self, *args, **kwds)
+        except:
+            pass
+        else:
+            self.assertTrue(False, "DID NOT RAISE")
+    return new_method
diff --git a/lib-python/2.7/ctypes/test/test_arrays.py b/lib-python/2.7/ctypes/test/test_arrays.py
--- a/lib-python/2.7/ctypes/test/test_arrays.py
+++ b/lib-python/2.7/ctypes/test/test_arrays.py
@@ -1,12 +1,23 @@
 import unittest
 from ctypes import *
+from test.test_support import impl_detail
 
 formats = "bBhHiIlLqQfd"
 
+# c_longdouble commented out for PyPy, look at the commend in test_longdouble
 formats = c_byte, c_ubyte, c_short, c_ushort, c_int, c_uint, \
-          c_long, c_ulonglong, c_float, c_double, c_longdouble
+          c_long, c_ulonglong, c_float, c_double #, c_longdouble
 
 class ArrayTestCase(unittest.TestCase):
+
+    @impl_detail('long double not supported by PyPy', pypy=False)
+    def test_longdouble(self):
+        """
+        This test is empty. It's just here to remind that we commented out
+        c_longdouble in "formats". If pypy will ever supports c_longdouble, we
+        should kill this test and uncomment c_longdouble inside formats.
+        """
+
     def test_simple(self):
         # create classes holding simple numeric types, and check
         # various properties.
diff --git a/lib-python/2.7/ctypes/test/test_bitfields.py b/lib-python/2.7/ctypes/test/test_bitfields.py
--- a/lib-python/2.7/ctypes/test/test_bitfields.py
+++ b/lib-python/2.7/ctypes/test/test_bitfields.py
@@ -115,17 +115,21 @@
     def test_nonint_types(self):
         # bit fields are not allowed on non-integer types.
         result = self.fail_fields(("a", c_char_p, 1))
-        self.assertEqual(result, (TypeError, 'bit fields not allowed for type c_char_p'))
+        self.assertEqual(result[0], TypeError)
+        self.assertIn('bit fields not allowed for type', result[1])
 
         result = self.fail_fields(("a", c_void_p, 1))
-        self.assertEqual(result, (TypeError, 'bit fields not allowed for type c_void_p'))
+        self.assertEqual(result[0], TypeError)
+        self.assertIn('bit fields not allowed for type', result[1])
 
         if c_int != c_long:
             result = self.fail_fields(("a", POINTER(c_int), 1))
-            self.assertEqual(result, (TypeError, 'bit fields not allowed for type LP_c_int'))
+            self.assertEqual(result[0], TypeError)
+            self.assertIn('bit fields not allowed for type', result[1])
 
         result = self.fail_fields(("a", c_char, 1))
-        self.assertEqual(result, (TypeError, 'bit fields not allowed for type c_char'))
+        self.assertEqual(result[0], TypeError)
+        self.assertIn('bit fields not allowed for type', result[1])
 
         try:
             c_wchar
@@ -133,13 +137,15 @@
             pass
         else:
             result = self.fail_fields(("a", c_wchar, 1))
-            self.assertEqual(result, (TypeError, 'bit fields not allowed for type c_wchar'))
+            self.assertEqual(result[0], TypeError)
+            self.assertIn('bit fields not allowed for type', result[1])
 
         class Dummy(Structure):
             _fields_ = []
 
         result = self.fail_fields(("a", Dummy, 1))
-        self.assertEqual(result, (TypeError, 'bit fields not allowed for type Dummy'))
+        self.assertEqual(result[0], TypeError)
+        self.assertIn('bit fields not allowed for type', result[1])
 
     def test_single_bitfield_size(self):
         for c_typ in int_types:
diff --git a/lib-python/2.7/ctypes/test/test_byteswap.py b/lib-python/2.7/ctypes/test/test_byteswap.py
--- a/lib-python/2.7/ctypes/test/test_byteswap.py
+++ b/lib-python/2.7/ctypes/test/test_byteswap.py
@@ -2,6 +2,7 @@
 from binascii import hexlify
 
 from ctypes import *
+from ctypes.test import xfail
 
 def bin(s):
     return hexlify(memoryview(s)).upper()
@@ -21,6 +22,7 @@
             setattr(bits, "i%s" % i, 1)
             dump(bits)
 
+    @xfail
     def test_endian_short(self):
         if sys.byteorder == "little":
             self.assertTrue(c_short.__ctype_le__ is c_short)
@@ -48,6 +50,7 @@
         self.assertEqual(bin(s), "3412")
         self.assertEqual(s.value, 0x1234)
 
+    @xfail
     def test_endian_int(self):
         if sys.byteorder == "little":
             self.assertTrue(c_int.__ctype_le__ is c_int)
@@ -76,6 +79,7 @@
         self.assertEqual(bin(s), "78563412")
         self.assertEqual(s.value, 0x12345678)
 
+    @xfail
     def test_endian_longlong(self):
         if sys.byteorder == "little":
             self.assertTrue(c_longlong.__ctype_le__ is c_longlong)
@@ -104,6 +108,7 @@
         self.assertEqual(bin(s), "EFCDAB9078563412")
         self.assertEqual(s.value, 0x1234567890ABCDEF)
 
+    @xfail
     def test_endian_float(self):
         if sys.byteorder == "little":
             self.assertTrue(c_float.__ctype_le__ is c_float)
@@ -122,6 +127,7 @@
         self.assertAlmostEqual(s.value, math.pi, 6)
         self.assertEqual(bin(struct.pack(">f", math.pi)), bin(s))
 
+    @xfail
     def test_endian_double(self):
         if sys.byteorder == "little":
             self.assertTrue(c_double.__ctype_le__ is c_double)
@@ -149,6 +155,7 @@
         self.assertTrue(c_char.__ctype_le__ is c_char)
         self.assertTrue(c_char.__ctype_be__ is c_char)
 
+    @xfail
     def test_struct_fields_1(self):
         if sys.byteorder == "little":
             base = BigEndianStructure
@@ -198,6 +205,7 @@
                 pass
         self.assertRaises(TypeError, setattr, S, "_fields_", [("s", T)])
 
+    @xfail
     def test_struct_fields_2(self):
         # standard packing in struct uses no alignment.
         # So, we have to align using pad bytes.
@@ -221,6 +229,7 @@
         s2 = struct.pack(fmt, 0x12, 0x1234, 0x12345678, 3.14)
         self.assertEqual(bin(s1), bin(s2))
 
+    @xfail
     def test_unaligned_nonnative_struct_fields(self):
         if sys.byteorder == "little":
             base = BigEndianStructure
diff --git a/lib-python/2.7/ctypes/test/test_callbacks.py b/lib-python/2.7/ctypes/test/test_callbacks.py
--- a/lib-python/2.7/ctypes/test/test_callbacks.py
+++ b/lib-python/2.7/ctypes/test/test_callbacks.py
@@ -1,5 +1,6 @@
 import unittest
 from ctypes import *
+from ctypes.test import xfail
 import _ctypes_test
 
 class Callbacks(unittest.TestCase):
@@ -98,6 +99,7 @@
 ##        self.check_type(c_char_p, "abc")
 ##        self.check_type(c_char_p, "def")
 
+    @xfail
     def test_pyobject(self):
         o = ()
         from sys import getrefcount as grc
diff --git a/lib-python/2.7/ctypes/test/test_cfuncs.py b/lib-python/2.7/ctypes/test/test_cfuncs.py
--- a/lib-python/2.7/ctypes/test/test_cfuncs.py
+++ b/lib-python/2.7/ctypes/test/test_cfuncs.py
@@ -3,8 +3,8 @@
 
 import unittest
 from ctypes import *
-
 import _ctypes_test
+from test.test_support import impl_detail
 
 class CFunctions(unittest.TestCase):
     _dll = CDLL(_ctypes_test.__file__)
@@ -158,12 +158,14 @@
         self.assertEqual(self._dll.tf_bd(0, 42.), 14.)
         self.assertEqual(self.S(), 42)
 
+    @impl_detail('long double not supported by PyPy', pypy=False)
     def test_longdouble(self):
         self._dll.tf_D.restype = c_longdouble
         self._dll.tf_D.argtypes = (c_longdouble,)
         self.assertEqual(self._dll.tf_D(42.), 14.)
         self.assertEqual(self.S(), 42)
-
+
+    @impl_detail('long double not supported by PyPy', pypy=False)
     def test_longdouble_plus(self):
         self._dll.tf_bD.restype = c_longdouble
         self._dll.tf_bD.argtypes = (c_byte, c_longdouble)
diff --git a/lib-python/2.7/ctypes/test/test_delattr.py b/lib-python/2.7/ctypes/test/test_delattr.py
--- a/lib-python/2.7/ctypes/test/test_delattr.py
+++ b/lib-python/2.7/ctypes/test/test_delattr.py
@@ -6,15 +6,15 @@
 class TestCase(unittest.TestCase):
     def test_simple(self):
-        self.assertRaises(TypeError,
+        self.assertRaises((TypeError, AttributeError),
                           delattr, c_int(42), "value")
 
     def test_chararray(self):
-        self.assertRaises(TypeError,
+        self.assertRaises((TypeError, AttributeError),
                           delattr, (c_char * 5)(), "value")
 
     def test_struct(self):
-        self.assertRaises(TypeError,
+        self.assertRaises((TypeError, AttributeError),
                           delattr, X(), "foo")
 
 if __name__ == "__main__":
diff --git a/lib-python/2.7/ctypes/test/test_frombuffer.py b/lib-python/2.7/ctypes/test/test_frombuffer.py
--- a/lib-python/2.7/ctypes/test/test_frombuffer.py
+++ b/lib-python/2.7/ctypes/test/test_frombuffer.py
@@ -2,6 +2,7 @@
 import array
 import gc
 import unittest
+from ctypes.test import xfail
 
 class X(Structure):
     _fields_ = [("c_int", c_int)]
@@ -10,6 +11,7 @@
         self._init_called = True
 
 class Test(unittest.TestCase):
+    @xfail
     def test_fom_buffer(self):
         a = array.array("i", range(16))
         x = (c_int * 16).from_buffer(a)
@@ -35,6 +37,7 @@
         self.assertRaises(TypeError,
                           (c_char * 16).from_buffer, "a" * 16)
 
+    @xfail
     def test_fom_buffer_with_offset(self):
         a = array.array("i", range(16))
         x = (c_int * 15).from_buffer(a, sizeof(c_int))
@@ -43,6 +46,7 @@
         self.assertRaises(ValueError, lambda: (c_int * 16).from_buffer(a, sizeof(c_int)))
         self.assertRaises(ValueError, lambda: (c_int * 1).from_buffer(a, 16 * sizeof(c_int)))
 
+    @xfail
     def test_from_buffer_copy(self):
         a = array.array("i", range(16))
         x = (c_int * 16).from_buffer_copy(a)
@@ -67,6 +71,7 @@
         x = (c_char * 16).from_buffer_copy("a" * 16)
         self.assertEqual(x[:], "a" * 16)
 
+    @xfail
     def test_fom_buffer_copy_with_offset(self):
         a = array.array("i", range(16))
         x = (c_int * 15).from_buffer_copy(a, sizeof(c_int))
diff --git a/lib-python/2.7/ctypes/test/test_functions.py b/lib-python/2.7/ctypes/test/test_functions.py
--- a/lib-python/2.7/ctypes/test/test_functions.py
+++ b/lib-python/2.7/ctypes/test/test_functions.py
@@ -7,6 +7,8 @@
 
 from ctypes import *
 import sys, unittest
+from ctypes.test import xfail
+from test.test_support import impl_detail
 
 try:
     WINFUNCTYPE
@@ -143,6 +145,7 @@
         self.assertEqual(result, -21)
         self.assertEqual(type(result), float)
 
+    @impl_detail('long double not supported by PyPy', pypy=False)
     def test_longdoubleresult(self):
         f = dll._testfunc_D_bhilfD
         f.argtypes = [c_byte, c_short, c_int, c_long, c_float, c_longdouble]
@@ -393,6 +396,7 @@
         self.assertEqual((s8i.a, s8i.b, s8i.c, s8i.d, s8i.e, s8i.f, s8i.g, s8i.h),
                          (9*2, 8*3, 7*4, 6*5, 5*6, 4*7, 3*8, 2*9))
 
+    @xfail
     def test_sf1651235(self):
         # see http://www.python.org/sf/1651235
 
diff --git a/lib-python/2.7/ctypes/test/test_internals.py b/lib-python/2.7/ctypes/test/test_internals.py
--- a/lib-python/2.7/ctypes/test/test_internals.py
+++ b/lib-python/2.7/ctypes/test/test_internals.py
@@ -33,7 +33,13 @@
         refcnt = grc(s)
         cs = c_char_p(s)
         self.assertEqual(refcnt + 1, grc(s))
-        self.assertSame(cs._objects, s)
+        try:
+            # Moving gcs need to allocate a nonmoving buffer
+            cs._objects._obj
+        except AttributeError:
+            self.assertSame(cs._objects, s)
+        else:
+            self.assertSame(cs._objects._obj, s)
 
     def test_simple_struct(self):
         class X(Structure):
diff --git a/lib-python/2.7/ctypes/test/test_libc.py b/lib-python/2.7/ctypes/test/test_libc.py
--- a/lib-python/2.7/ctypes/test/test_libc.py
+++ b/lib-python/2.7/ctypes/test/test_libc.py
@@ -25,5 +25,14 @@
         lib.my_qsort(chars, len(chars)-1, sizeof(c_char), comparefunc(sort))
         self.assertEqual(chars.raw, " ,,aaaadmmmnpppsss\x00")
 
+    def SKIPPED_test_no_more_xfail(self):
+        # We decided to not explicitly support the whole ctypes-2.7
+        # and instead go for a case-by-case, demand-driven approach.
+        # So this test is skipped instead of failing.
+        import socket
+        import ctypes.test
+        self.assertTrue(not hasattr(ctypes.test, 'xfail'),
+                        "You should incrementally grep for '@xfail' and remove them, they are real failures")
+
 if __name__ == "__main__":
     unittest.main()
diff --git a/lib-python/2.7/ctypes/test/test_loading.py b/lib-python/2.7/ctypes/test/test_loading.py
--- a/lib-python/2.7/ctypes/test/test_loading.py
+++ b/lib-python/2.7/ctypes/test/test_loading.py
@@ -2,7 +2,7 @@
 import sys, unittest
 import os
 from ctypes.util import find_library
-from ctypes.test import is_resource_enabled
+from ctypes.test import is_resource_enabled, xfail
 
 libc_name = None
 if os.name == "nt":
@@ -75,6 +75,7 @@
         self.assertRaises(AttributeError, dll.__getitem__, 1234)
 
     if os.name == "nt":
+        @xfail
         def test_1703286_A(self):
             from _ctypes import LoadLibrary, FreeLibrary
             # On winXP 64-bit, advapi32 loads at an address that does
@@ -85,6 +86,7 @@
             handle = LoadLibrary("advapi32")
             FreeLibrary(handle)
 
+        @xfail
         def test_1703286_B(self):
             # Since on winXP 64-bit advapi32 loads like described
             # above, the (arbitrarily selected) CloseEventLog function
diff --git a/lib-python/2.7/ctypes/test/test_macholib.py b/lib-python/2.7/ctypes/test/test_macholib.py
--- a/lib-python/2.7/ctypes/test/test_macholib.py
+++ b/lib-python/2.7/ctypes/test/test_macholib.py
@@ -52,7 +52,6 @@
                          '/usr/lib/libSystem.B.dylib')
 
         result = find_lib('z')
-        self.assertTrue(result.startswith('/usr/lib/libz.1'))
         self.assertTrue(result.endswith('.dylib'))
 
         self.assertEqual(find_lib('IOKit'),
diff --git a/lib-python/2.7/ctypes/test/test_numbers.py b/lib-python/2.7/ctypes/test/test_numbers.py
--- a/lib-python/2.7/ctypes/test/test_numbers.py
+++ b/lib-python/2.7/ctypes/test/test_numbers.py
@@ -1,6 +1,7 @@
 from ctypes import *
 import unittest
 import struct
+from ctypes.test import xfail
 
 def valid_ranges(*types):
     # given a sequence of numeric types, collect their _type_
@@ -89,12 +90,14 @@
 ##            self.assertRaises(ValueError, t, l-1)
 ##            self.assertRaises(ValueError, t, h+1)
 
+    @xfail
     def test_from_param(self):
         # the from_param class method attribute always
         # returns PyCArgObject instances
         for t in signed_types + unsigned_types + float_types:
             self.assertEqual(ArgType, type(t.from_param(0)))
 
+    @xfail
     def test_byref(self):
         # calling byref returns also a PyCArgObject instance
         for t in signed_types + unsigned_types + float_types + bool_types:
@@ -102,6 +105,7 @@
             self.assertEqual(ArgType, type(parm))
 
 
+    @xfail
     def test_floats(self):
         # c_float and c_double can be created from
         # Python int, long and float
@@ -115,6 +119,7 @@
             self.assertEqual(t(2L).value, 2.0)
             self.assertEqual(t(f).value, 2.0)
 
+    @xfail
     def test_integers(self):
         class FloatLike(object):
             def __float__(self):
diff --git a/lib-python/2.7/ctypes/test/test_objects.py b/lib-python/2.7/ctypes/test/test_objects.py
--- a/lib-python/2.7/ctypes/test/test_objects.py
+++ b/lib-python/2.7/ctypes/test/test_objects.py
@@ -22,7 +22,7 @@
 >>> array[4] = 'foo bar'
 >>> array._objects
-{'4': 'foo bar'}
+{'4': }
 >>> array[4]
 'foo bar'
 >>>
@@ -47,9 +47,9 @@
 >>> x.array[0] = 'spam spam spam'
 >>> x._objects
-{'0:2': 'spam spam spam'}
+{'0:2': }
 >>> x.array._b_base_._objects
-{'0:2': 'spam spam spam'}
+{'0:2': }
 >>>
 '''
diff --git a/lib-python/2.7/ctypes/test/test_parameters.py b/lib-python/2.7/ctypes/test/test_parameters.py
--- a/lib-python/2.7/ctypes/test/test_parameters.py
+++ b/lib-python/2.7/ctypes/test/test_parameters.py
@@ -1,5 +1,7 @@
 import unittest, sys
 
+from ctypes.test import xfail
+
 class SimpleTypesTestCase(unittest.TestCase):
 
     def setUp(self):
@@ -49,6 +51,7 @@
         self.assertEqual(CWCHARP.from_param("abc"), "abcabcabc")
 
     # XXX Replace by c_char_p tests
+    @xfail
     def test_cstrings(self):
         from ctypes import c_char_p, byref
 
@@ -86,7 +89,10 @@
         pa = c_wchar_p.from_param(c_wchar_p(u"123"))
         self.assertEqual(type(pa), c_wchar_p)
 
+    if sys.platform == "win32":
+        test_cw_strings = xfail(test_cw_strings)
 
+    @xfail
     def test_int_pointers(self):
         from ctypes import c_short, c_uint, c_int, c_long, POINTER, pointer
         LPINT = POINTER(c_int)
diff --git a/lib-python/2.7/ctypes/test/test_pep3118.py b/lib-python/2.7/ctypes/test/test_pep3118.py
--- a/lib-python/2.7/ctypes/test/test_pep3118.py
+++ b/lib-python/2.7/ctypes/test/test_pep3118.py
@@ -1,6 +1,7 @@
 import unittest
 from ctypes import *
 import re, sys
+from ctypes.test import xfail
 
 if sys.byteorder == "little":
     THIS_ENDIAN = "<"
@@ -19,6 +20,7 @@
 
 class Test(unittest.TestCase):
 
+    @xfail
     def test_native_types(self):
         for tp, fmt, shape, itemtp in native_types:
             ob = tp()
@@ -46,6 +48,7 @@
                 print(tp)
                 raise
 
+    @xfail
     def test_endian_types(self):
         for tp, fmt, shape, itemtp in endian_types:
             ob = tp()
diff --git a/lib-python/2.7/ctypes/test/test_pickling.py b/lib-python/2.7/ctypes/test/test_pickling.py
--- a/lib-python/2.7/ctypes/test/test_pickling.py
+++ b/lib-python/2.7/ctypes/test/test_pickling.py
@@ -3,6 +3,7 @@
 from ctypes import *
 import _ctypes_test
 dll = CDLL(_ctypes_test.__file__)
+from ctypes.test import xfail
 
 class X(Structure):
     _fields_ = [("a", c_int), ("b", c_double)]
@@ -21,6 +22,7 @@
     def loads(self, item):
         return pickle.loads(item)
 
+    @xfail
     def test_simple(self):
         for src in [
             c_int(42),
@@ -31,6 +33,7 @@
             self.assertEqual(memoryview(src).tobytes(),
                                  memoryview(dst).tobytes())
 
+    @xfail
     def test_struct(self):
         X.init_called = 0
 
@@ -49,6 +52,7 @@
         self.assertEqual(memoryview(y).tobytes(),
                              memoryview(x).tobytes())
 
+    @xfail
     def test_unpickable(self):
         # ctypes objects that are pointers or contain pointers are
         # unpickable.
@@ -66,6 +70,7 @@
             ]:
             self.assertRaises(ValueError, lambda: self.dumps(item))
 
+    @xfail
     def test_wchar(self):
         pickle.dumps(c_char("x"))
 
         # Issue 5049
diff --git a/lib-python/2.7/ctypes/test/test_python_api.py b/lib-python/2.7/ctypes/test/test_python_api.py
--- a/lib-python/2.7/ctypes/test/test_python_api.py
+++ b/lib-python/2.7/ctypes/test/test_python_api.py
@@ -1,6 +1,6 @@
 from ctypes import *
 import unittest, sys
-from ctypes.test import is_resource_enabled
+from ctypes.test import is_resource_enabled, xfail
 
 ################################################################
 # This section should be moved into ctypes\__init__.py, when it's ready.
@@ -17,6 +17,7 @@
 
 class PythonAPITestCase(unittest.TestCase):
 
+    @xfail
     def test_PyString_FromStringAndSize(self):
         PyString_FromStringAndSize = pythonapi.PyString_FromStringAndSize
 
@@ -25,6 +26,7 @@
 
         self.assertEqual(PyString_FromStringAndSize("abcdefghi", 3), "abc")
 
+    @xfail
     def test_PyString_FromString(self):
         pythonapi.PyString_FromString.restype = py_object
         pythonapi.PyString_FromString.argtypes = (c_char_p,)
@@ -56,6 +58,7 @@
         del res
         self.assertEqual(grc(42), ref42)
 
+    @xfail
     def test_PyObj_FromPtr(self):
         s = "abc def ghi jkl"
         ref = grc(s)
@@ -81,6 +84,7 @@
         # not enough arguments
         self.assertRaises(TypeError, PyOS_snprintf, buf)
 
+    @xfail
     def test_pyobject_repr(self):
         self.assertEqual(repr(py_object()), "py_object()")
         self.assertEqual(repr(py_object(42)), "py_object(42)")
diff --git a/lib-python/2.7/ctypes/test/test_refcounts.py b/lib-python/2.7/ctypes/test/test_refcounts.py
--- a/lib-python/2.7/ctypes/test/test_refcounts.py
+++ b/lib-python/2.7/ctypes/test/test_refcounts.py
@@ -90,6 +90,7 @@
             return a * b * 2
         f = proto(func)
 
+        gc.collect()
         a = sys.getrefcount(ctypes.c_int)
         f(1, 2)
         self.assertEqual(sys.getrefcount(ctypes.c_int), a)
diff --git a/lib-python/2.7/ctypes/test/test_stringptr.py b/lib-python/2.7/ctypes/test/test_stringptr.py
--- a/lib-python/2.7/ctypes/test/test_stringptr.py
+++ b/lib-python/2.7/ctypes/test/test_stringptr.py
@@ -2,11 +2,13 @@
 from ctypes import *
 import _ctypes_test
 
+from ctypes.test import xfail
+
 lib = CDLL(_ctypes_test.__file__)
 
 class StringPtrTestCase(unittest.TestCase):
 
+    @xfail
     def test__POINTER_c_char(self):
         class X(Structure):
             _fields_ = [("str", POINTER(c_char))]
@@ -27,6 +29,7 @@
 
         self.assertRaises(TypeError, setattr, x, "str", "Hello, World")
 
+    @xfail
     def test__c_char_p(self):
         class X(Structure):
             _fields_ = [("str", c_char_p)]
diff --git a/lib-python/2.7/ctypes/test/test_strings.py b/lib-python/2.7/ctypes/test/test_strings.py
--- a/lib-python/2.7/ctypes/test/test_strings.py
+++ b/lib-python/2.7/ctypes/test/test_strings.py
@@ -31,8 +31,9 @@
         buf.value = "Hello, World"
         self.assertEqual(buf.value, "Hello, World")
 
-        self.assertRaises(TypeError, setattr, buf, "value", memoryview("Hello, World"))
-        self.assertRaises(TypeError, setattr, buf, "value", memoryview("abc"))
+        if test_support.check_impl_detail():
+            self.assertRaises(TypeError, setattr, buf, "value", memoryview("Hello, World"))
+            self.assertRaises(TypeError, setattr, buf, "value", memoryview("abc"))
         self.assertRaises(ValueError, setattr, buf, "raw", memoryview("x" * 100))
 
     def test_c_buffer_raw(self, memoryview=memoryview):
@@ -40,7 +41,8 @@
         buf.raw = memoryview("Hello, World")
         self.assertEqual(buf.value, "Hello, World")
 
-        self.assertRaises(TypeError, setattr, buf, "value", memoryview("abc"))
+        if test_support.check_impl_detail():
+            self.assertRaises(TypeError, setattr, buf, "value", memoryview("abc"))
         self.assertRaises(ValueError, setattr, buf, "raw", memoryview("x" * 100))
 
     def test_c_buffer_deprecated(self):
diff --git a/lib-python/2.7/ctypes/test/test_structures.py b/lib-python/2.7/ctypes/test/test_structures.py
--- a/lib-python/2.7/ctypes/test/test_structures.py
+++ b/lib-python/2.7/ctypes/test/test_structures.py
@@ -194,8 +194,8 @@
 
         self.assertEqual(X.b.offset, min(8, longlong_align))
 
-        d = {"_fields_": [("a", "b"),
-                          ("b", "q")],
+        d = {"_fields_": [("a", c_byte),
+                          ("b", c_longlong)],
             "_pack_": -1}
         self.assertRaises(ValueError, type(Structure), "X", (Structure,), d)
 
diff --git a/lib-python/2.7/ctypes/test/test_varsize_struct.py b/lib-python/2.7/ctypes/test/test_varsize_struct.py
--- a/lib-python/2.7/ctypes/test/test_varsize_struct.py
+++ b/lib-python/2.7/ctypes/test/test_varsize_struct.py
@@ -1,7 +1,9 @@
 from ctypes import *
 import unittest
+from ctypes.test import xfail
 
 class VarSizeTest(unittest.TestCase):
+    @xfail
     def test_resize(self):
         class X(Structure):
             _fields_ = [("item", c_int),
diff --git a/lib-python/2.7/ctypes/util.py b/lib-python/2.7/ctypes/util.py
--- a/lib-python/2.7/ctypes/util.py
+++ b/lib-python/2.7/ctypes/util.py
@@ -72,8 +72,8 @@
         return name
 
 if os.name == "posix" and sys.platform == "darwin":
-    from ctypes.macholib.dyld import dyld_find as _dyld_find
     def find_library(name):
+        from ctypes.macholib.dyld import dyld_find as _dyld_find
         possible = ['lib%s.dylib' % name,
                     '%s.dylib' % name,
                     '%s.framework/%s' % (name, name)]
diff --git a/lib-python/2.7/distutils/command/bdist_wininst.py b/lib-python/2.7/distutils/command/bdist_wininst.py
--- a/lib-python/2.7/distutils/command/bdist_wininst.py
+++ b/lib-python/2.7/distutils/command/bdist_wininst.py
@@ -298,7 +298,8 @@
                  bitmaplen,        # number of bytes in bitmap
                  )
         file.write(header)
-        file.write(open(arcname, "rb").read())
+        with open(arcname, "rb") as arcfile:
+            file.write(arcfile.read())
 
     # create_exe()
 
diff --git a/lib-python/2.7/distutils/command/build_ext.py b/lib-python/2.7/distutils/command/build_ext.py
--- a/lib-python/2.7/distutils/command/build_ext.py
+++ b/lib-python/2.7/distutils/command/build_ext.py
@@ -184,7 +184,7 @@
             # the 'libs' directory is for binary installs - we assume that
             # must be the *native* platform.  But we don't really support
             # cross-compiling via a binary install anyway, so we let it go.
-            self.library_dirs.append(os.path.join(sys.exec_prefix, 'libs'))
+            self.library_dirs.append(os.path.join(sys.exec_prefix, 'include'))
             if self.debug:
                 self.build_temp = os.path.join(self.build_temp, "Debug")
             else:
@@ -192,8 +192,13 @@
 
             # Append the source distribution include and library directories,
             # this allows distutils on windows to work in the source tree
-            self.include_dirs.append(os.path.join(sys.exec_prefix, 'PC'))
-            if MSVC_VERSION == 9:
+            if 0:
+                # pypy has no PC directory
+                self.include_dirs.append(os.path.join(sys.exec_prefix, 'PC'))
+            if 1:
+                # pypy has no PCBuild directory
+                pass
+            elif MSVC_VERSION == 9:
                 # Use the .lib files for the correct architecture
                 if self.plat_name == 'win32':
                     suffix = ''
@@ -695,24 +700,14 @@
         shared extension.  On most platforms, this is just 'ext.libraries';
         on Windows and OS/2, we add the Python library (eg. python20.dll).
         """
-        # The python library is always needed on Windows.  For MSVC, this
-        # is redundant, since the library is mentioned in a pragma in
-        # pyconfig.h that MSVC groks.  The other Windows compilers all seem
-        # to need it mentioned explicitly, though, so that's what we do.
-        # Append '_d' to the python import library on debug builds.
+        # The python library is always needed on Windows.
         if sys.platform == "win32":
-            from distutils.msvccompiler import MSVCCompiler
-            if not isinstance(self.compiler, MSVCCompiler):
-                template = "python%d%d"
-                if self.debug:
-                    template = template + '_d'
-                pythonlib = (template %
-                       (sys.hexversion >> 24, (sys.hexversion >> 16) & 0xff))
-                # don't extend ext.libraries, it may be shared with other
-                # extensions, it is a reference to the original list
-                return ext.libraries + [pythonlib]
-            else:
-                return ext.libraries
+            template = "python%d%d"
+            pythonlib = (template %
+                   (sys.hexversion >> 24, (sys.hexversion >> 16) & 0xff))
+            # don't extend ext.libraries, it may be shared with other
+            # extensions, it is a reference to the original list
+            return ext.libraries + [pythonlib]
         elif sys.platform == "os2emx":
             # EMX/GCC requires the python library explicitly, and I
             # believe VACPP does as well (though not confirmed) - AIM Apr01
diff --git a/lib-python/2.7/distutils/command/install.py b/lib-python/2.7/distutils/command/install.py
--- a/lib-python/2.7/distutils/command/install.py
+++ b/lib-python/2.7/distutils/command/install.py
@@ -83,6 +83,13 @@
         'scripts': '$userbase/bin',
         'data'   : '$userbase',
         },
+    'pypy': {
+        'purelib': '$base/site-packages',
+        'platlib': '$base/site-packages',
+        'headers': '$base/include',
+        'scripts': '$base/bin',
+        'data'   : '$base',
+        },
     }
 
 # The keys to an installation scheme; if any new types of files are to be
@@ -467,6 +474,8 @@
 
     def select_scheme (self, name):
         # it's the caller's problem if they supply a bad name!
+        if hasattr(sys, 'pypy_version_info'):
+            name = 'pypy'
         scheme = INSTALL_SCHEMES[name]
         for key in SCHEME_KEYS:
             attrname = 'install_' + key
diff --git a/lib-python/2.7/distutils/cygwinccompiler.py b/lib-python/2.7/distutils/cygwinccompiler.py
--- a/lib-python/2.7/distutils/cygwinccompiler.py
+++ b/lib-python/2.7/distutils/cygwinccompiler.py
@@ -75,6 +75,9 @@
     elif msc_ver == '1500':
         # VS2008 / MSVC 9.0
         return ['msvcr90']
+    elif msc_ver == '1600':
+        # VS2010 / MSVC 10.0
+        return ['msvcr100']
     else:
         raise ValueError("Unknown MS Compiler version %s " % msc_ver)
 
diff --git a/lib-python/2.7/distutils/msvc9compiler.py b/lib-python/2.7/distutils/msvc9compiler.py
--- a/lib-python/2.7/distutils/msvc9compiler.py
+++ b/lib-python/2.7/distutils/msvc9compiler.py
@@ -648,6 +648,7 @@
             temp_manifest = os.path.join(
                     build_temp,
                     os.path.basename(output_filename) + ".manifest")
+            ld_args.append('/MANIFEST')
             ld_args.append('/MANIFESTFILE:' + temp_manifest)
 
         if extra_preargs:
diff --git a/lib-python/2.7/distutils/spawn.py b/lib-python/2.7/distutils/spawn.py
--- a/lib-python/2.7/distutils/spawn.py
+++ b/lib-python/2.7/distutils/spawn.py
@@ -58,7 +58,6 @@
 
 def _spawn_nt(cmd, search_path=1, verbose=0, dry_run=0):
     executable = cmd[0]
-    cmd = _nt_quote_args(cmd)
     if search_path:
         # either we find one or it stays the same
         executable = find_executable(executable) or executable
@@ -66,7 +65,8 @@
     if not dry_run:
         # spawn for NT requires a full path to the .exe
         try:
-            rc = os.spawnv(os.P_WAIT, executable, cmd)
+            import subprocess
+            rc = subprocess.call(cmd)
         except OSError, exc:
             # this seems to happen when the command isn't found
             raise DistutilsExecError, \
diff --git a/lib-python/2.7/distutils/sysconfig.py b/lib-python/2.7/distutils/sysconfig.py
--- a/lib-python/2.7/distutils/sysconfig.py
+++ b/lib-python/2.7/distutils/sysconfig.py
@@ -9,563 +9,21 @@
 Email:
 """
 
-__revision__ = "$Id$"
+__revision__ = "$Id: sysconfig.py 85358 2010-10-10 09:54:59Z antoine.pitrou $"
 
-import os
-import re
-import string
 import sys
-from distutils.errors import DistutilsPlatformError
 
-# These are needed in a couple of spots, so just compute them once.
-PREFIX = os.path.normpath(sys.prefix)
-EXEC_PREFIX = os.path.normpath(sys.exec_prefix)
+# The content of this file is redirected from
+# sysconfig_cpython or sysconfig_pypy.
 
-# Path to the base directory of the project. On Windows the binary may
-# live in project/PCBuild9. If we're dealing with an x64 Windows build,
-# it'll live in project/PCbuild/amd64.
-project_base = os.path.dirname(os.path.abspath(sys.executable))
-if os.name == "nt" and "pcbuild" in project_base[-8:].lower():
-    project_base = os.path.abspath(os.path.join(project_base, os.path.pardir))
-# PC/VS7.1
-if os.name == "nt" and "\\pc\\v" in project_base[-10:].lower():
-    project_base = os.path.abspath(os.path.join(project_base, os.path.pardir,
-                                                os.path.pardir))
-# PC/AMD64
-if os.name == "nt" and "\\pcbuild\\amd64" in project_base[-14:].lower():
-    project_base = os.path.abspath(os.path.join(project_base, os.path.pardir,
-                                                os.path.pardir))
+if '__pypy__' in sys.builtin_module_names:
+    from distutils.sysconfig_pypy import *
+    from distutils.sysconfig_pypy import _config_vars # needed by setuptools
+    from distutils.sysconfig_pypy import _variable_rx # read_setup_file()
+else:
+    from distutils.sysconfig_cpython import *
+    from distutils.sysconfig_cpython import _config_vars # needed by setuptools
+    from distutils.sysconfig_cpython import _variable_rx # read_setup_file()
 
-# python_build: (Boolean) if true, we're either building Python or
-# building an extension with an un-installed Python, so we use
-# different (hard-wired) directories.
-# Setup.local is available for Makefile builds including VPATH builds,
-# Setup.dist is available on Windows
-def _python_build():
-    for fn in ("Setup.dist", "Setup.local"):
-        if os.path.isfile(os.path.join(project_base, "Modules", fn)):
-            return True
-    return False
-python_build = _python_build()
-
-def get_python_version():
-    """Return a string containing the major and minor Python version,
-    leaving off the patchlevel.  Sample return values could be '1.5'
-    or '2.2'.
-    """
-    return sys.version[:3]
-
-
-def get_python_inc(plat_specific=0, prefix=None):
-    """Return the directory containing installed Python header files.
-
-    If 'plat_specific' is false (the default), this is the path to the
-    non-platform-specific header files, i.e. Python.h and so on;
-    otherwise, this is the path to platform-specific header files
-    (namely pyconfig.h).
-
-    If 'prefix' is supplied, use it instead of sys.prefix or
-    sys.exec_prefix -- i.e., ignore 'plat_specific'.
-    """
-    if prefix is None:
-        prefix = plat_specific and EXEC_PREFIX or PREFIX
-
-    if os.name == "posix":
-        if python_build:
-            buildir = os.path.dirname(sys.executable)
-            if plat_specific:
-                # python.h is located in the buildir
-                inc_dir = buildir
-            else:
-                # the source dir is relative to the buildir
-                srcdir = os.path.abspath(os.path.join(buildir,
-                                         get_config_var('srcdir')))
-                # Include is located in the srcdir
-                inc_dir = os.path.join(srcdir, "Include")
-            return inc_dir
-        return os.path.join(prefix, "include", "python" + get_python_version())
-    elif os.name == "nt":
-        return os.path.join(prefix, "include")
-    elif os.name == "os2":
-        return os.path.join(prefix, "Include")
-    else:
-        raise DistutilsPlatformError(
-            "I don't know where Python installs its C header files "
-            "on platform '%s'" % os.name)
-
-
-def get_python_lib(plat_specific=0, standard_lib=0, prefix=None):
-    """Return the directory containing the Python library (standard or
-    site additions).
-
-    If 'plat_specific' is true, return the directory containing
-    platform-specific modules, i.e. any module from a non-pure-Python
-    module distribution; otherwise, return the platform-shared library
-    directory.  If 'standard_lib' is true, return the directory
-    containing standard Python library modules; otherwise, return the
-    directory for site-specific modules.
-
-    If 'prefix' is supplied, use it instead of sys.prefix or
-    sys.exec_prefix -- i.e., ignore 'plat_specific'.
-    """
-    if prefix is None:
-        prefix = plat_specific and EXEC_PREFIX or PREFIX
-
-    if os.name == "posix":
-        libpython = os.path.join(prefix,
-                                 "lib", "python" + get_python_version())
-        if standard_lib:
-            return libpython
-        else:
-            return os.path.join(libpython, "site-packages")
-
-    elif os.name == "nt":
-        if standard_lib:
-            return os.path.join(prefix, "Lib")
-        else:
-            if get_python_version() < "2.2":
-                return prefix
-            else:
-                return os.path.join(prefix, "Lib", "site-packages")
-
-    elif os.name == "os2":
-        if standard_lib:
-            return os.path.join(prefix, "Lib")
-        else:
-            return os.path.join(prefix, "Lib", "site-packages")
-
-    else:
-        raise DistutilsPlatformError(
-            "I don't know where Python installs its library "
-            "on platform '%s'" % os.name)
-
-
-def customize_compiler(compiler):
-    """Do any platform-specific customization of a CCompiler instance.
-
-    Mainly needed on Unix, so we can plug in the information that
-    varies across Unices and is stored in Python's Makefile.
- """ - if compiler.compiler_type == "unix": - (cc, cxx, opt, cflags, ccshared, ldshared, so_ext) = \ - get_config_vars('CC', 'CXX', 'OPT', 'CFLAGS', - 'CCSHARED', 'LDSHARED', 'SO') - - if 'CC' in os.environ: - cc = os.environ['CC'] - if 'CXX' in os.environ: - cxx = os.environ['CXX'] - if 'LDSHARED' in os.environ: - ldshared = os.environ['LDSHARED'] - if 'CPP' in os.environ: - cpp = os.environ['CPP'] - else: - cpp = cc + " -E" # not always - if 'LDFLAGS' in os.environ: - ldshared = ldshared + ' ' + os.environ['LDFLAGS'] - if 'CFLAGS' in os.environ: - cflags = opt + ' ' + os.environ['CFLAGS'] - ldshared = ldshared + ' ' + os.environ['CFLAGS'] - if 'CPPFLAGS' in os.environ: - cpp = cpp + ' ' + os.environ['CPPFLAGS'] - cflags = cflags + ' ' + os.environ['CPPFLAGS'] - ldshared = ldshared + ' ' + os.environ['CPPFLAGS'] - - cc_cmd = cc + ' ' + cflags - compiler.set_executables( - preprocessor=cpp, - compiler=cc_cmd, - compiler_so=cc_cmd + ' ' + ccshared, - compiler_cxx=cxx, - linker_so=ldshared, - linker_exe=cc) - - compiler.shared_lib_extension = so_ext - - -def get_config_h_filename(): - """Return full pathname of installed pyconfig.h file.""" - if python_build: - if os.name == "nt": - inc_dir = os.path.join(project_base, "PC") - else: - inc_dir = project_base - else: - inc_dir = get_python_inc(plat_specific=1) - if get_python_version() < '2.2': - config_h = 'config.h' - else: - # The name of the config.h file changed in 2.2 - config_h = 'pyconfig.h' - return os.path.join(inc_dir, config_h) - - -def get_makefile_filename(): - """Return full pathname of installed Makefile from the Python build.""" - if python_build: - return os.path.join(os.path.dirname(sys.executable), "Makefile") - lib_dir = get_python_lib(plat_specific=1, standard_lib=1) - return os.path.join(lib_dir, "config", "Makefile") - - -def parse_config_h(fp, g=None): - """Parse a config.h-style file. - - A dictionary containing name/value pairs is returned. 
If an - optional dictionary is passed in as the second argument, it is - used instead of a new dictionary. - """ - if g is None: - g = {} - define_rx = re.compile("#define ([A-Z][A-Za-z0-9_]+) (.*)\n") - undef_rx = re.compile("/[*] #undef ([A-Z][A-Za-z0-9_]+) [*]/\n") - # - while 1: - line = fp.readline() - if not line: - break - m = define_rx.match(line) - if m: - n, v = m.group(1, 2) - try: v = int(v) - except ValueError: pass - g[n] = v - else: - m = undef_rx.match(line) - if m: - g[m.group(1)] = 0 - return g - - -# Regexes needed for parsing Makefile (and similar syntaxes, -# like old-style Setup files). -_variable_rx = re.compile("([a-zA-Z][a-zA-Z0-9_]+)\s*=\s*(.*)") -_findvar1_rx = re.compile(r"\$\(([A-Za-z][A-Za-z0-9_]*)\)") -_findvar2_rx = re.compile(r"\${([A-Za-z][A-Za-z0-9_]*)}") - -def parse_makefile(fn, g=None): - """Parse a Makefile-style file. - - A dictionary containing name/value pairs is returned. If an - optional dictionary is passed in as the second argument, it is - used instead of a new dictionary. 
- """ - from distutils.text_file import TextFile - fp = TextFile(fn, strip_comments=1, skip_blanks=1, join_lines=1) - - if g is None: - g = {} - done = {} - notdone = {} - - while 1: - line = fp.readline() - if line is None: # eof - break - m = _variable_rx.match(line) - if m: - n, v = m.group(1, 2) - v = v.strip() - # `$$' is a literal `$' in make - tmpv = v.replace('$$', '') - - if "$" in tmpv: - notdone[n] = v - else: - try: - v = int(v) - except ValueError: - # insert literal `$' - done[n] = v.replace('$$', '$') - else: - done[n] = v - - # do variable interpolation here - while notdone: - for name in notdone.keys(): - value = notdone[name] - m = _findvar1_rx.search(value) or _findvar2_rx.search(value) - if m: - n = m.group(1) - found = True - if n in done: - item = str(done[n]) - elif n in notdone: - # get it on a subsequent round - found = False - elif n in os.environ: - # do it like make: fall back to environment - item = os.environ[n] - else: - done[n] = item = "" - if found: - after = value[m.end():] - value = value[:m.start()] + item + after - if "$" in after: - notdone[name] = value - else: - try: value = int(value) - except ValueError: - done[name] = value.strip() - else: - done[name] = value - del notdone[name] - else: - # bogus variable reference; just drop it since we can't deal - del notdone[name] - - fp.close() - - # strip spurious spaces - for k, v in done.items(): - if isinstance(v, str): - done[k] = v.strip() - - # save the results in the global dictionary - g.update(done) - return g - - -def expand_makefile_vars(s, vars): - """Expand Makefile-style variables -- "${foo}" or "$(foo)" -- in - 'string' according to 'vars' (a dictionary mapping variable names to - values). Variables not present in 'vars' are silently expanded to the - empty string. The variable values in 'vars' should not contain further - variable expansions; if 'vars' is the output of 'parse_makefile()', - you're fine. Returns a variable-expanded version of 's'. 
- """ - - # This algorithm does multiple expansion, so if vars['foo'] contains - # "${bar}", it will expand ${foo} to ${bar}, and then expand - # ${bar}... and so forth. This is fine as long as 'vars' comes from - # 'parse_makefile()', which takes care of such expansions eagerly, - # according to make's variable expansion semantics. - - while 1: - m = _findvar1_rx.search(s) or _findvar2_rx.search(s) - if m: - (beg, end) = m.span() - s = s[0:beg] + vars.get(m.group(1)) + s[end:] - else: - break - return s - - -_config_vars = None - -def _init_posix(): - """Initialize the module as appropriate for POSIX systems.""" - g = {} - # load the installed Makefile: - try: - filename = get_makefile_filename() - parse_makefile(filename, g) - except IOError, msg: - my_msg = "invalid Python installation: unable to open %s" % filename - if hasattr(msg, "strerror"): - my_msg = my_msg + " (%s)" % msg.strerror - - raise DistutilsPlatformError(my_msg) - - # load the installed pyconfig.h: - try: - filename = get_config_h_filename() - parse_config_h(file(filename), g) - except IOError, msg: - my_msg = "invalid Python installation: unable to open %s" % filename - if hasattr(msg, "strerror"): - my_msg = my_msg + " (%s)" % msg.strerror - - raise DistutilsPlatformError(my_msg) - - # On MacOSX we need to check the setting of the environment variable - # MACOSX_DEPLOYMENT_TARGET: configure bases some choices on it so - # it needs to be compatible. 
- # If it isn't set we set it to the configure-time value - if sys.platform == 'darwin' and 'MACOSX_DEPLOYMENT_TARGET' in g: - cfg_target = g['MACOSX_DEPLOYMENT_TARGET'] - cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') - if cur_target == '': - cur_target = cfg_target - os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target - elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): - my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure' - % (cur_target, cfg_target)) - raise DistutilsPlatformError(my_msg) - - # On AIX, there are wrong paths to the linker scripts in the Makefile - # -- these paths are relative to the Python source, but when installed - # the scripts are in another directory. - if python_build: - g['LDSHARED'] = g['BLDSHARED'] - - elif get_python_version() < '2.1': - # The following two branches are for 1.5.2 compatibility. - if sys.platform == 'aix4': # what about AIX 3.x ? - # Linker script is in the config directory, not in Modules as the - # Makefile says. - python_lib = get_python_lib(standard_lib=1) - ld_so_aix = os.path.join(python_lib, 'config', 'ld_so_aix') - python_exp = os.path.join(python_lib, 'config', 'python.exp') - - g['LDSHARED'] = "%s %s -bI:%s" % (ld_so_aix, g['CC'], python_exp) - - elif sys.platform == 'beos': - # Linker script is in the config directory. In the Makefile it is - # relative to the srcdir, which after installation no longer makes - # sense. - python_lib = get_python_lib(standard_lib=1) - linkerscript_path = string.split(g['LDSHARED'])[0] - linkerscript_name = os.path.basename(linkerscript_path) - linkerscript = os.path.join(python_lib, 'config', - linkerscript_name) - - # XXX this isn't the right place to do this: adding the Python - # library to the link, if needed, should be in the "build_ext" - # command. (It's also needed for non-MS compilers on Windows, and - # it's taken care of for them by the 'build_ext.get_libraries()' - # method.) 
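The `map(int, ...)` comparison above works because dotted version strings are compared numerically, element by element, once converted; a plain string comparison would get this wrong. A standalone sketch (Python 3 syntax; the helper name is illustrative):

```python
# Compare dotted version strings such as MACOSX_DEPLOYMENT_TARGET values
# numerically, mirroring the map(int, cfg_target.split('.')) trick above.
def version_tuple(v):
    return tuple(int(part) for part in v.split('.'))

# '10.10' is newer than '10.4', but lexicographic string comparison
# would claim the opposite:
assert version_tuple('10.4') < version_tuple('10.10')
assert '10.4' > '10.10'  # why the int conversion is needed
```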
- g['LDSHARED'] = ("%s -L%s/lib -lpython%s" % - (linkerscript, PREFIX, get_python_version())) - - global _config_vars - _config_vars = g - - -def _init_nt(): - """Initialize the module as appropriate for NT""" - g = {} - # set basic install directories - g['LIBDEST'] = get_python_lib(plat_specific=0, standard_lib=1) - g['BINLIBDEST'] = get_python_lib(plat_specific=1, standard_lib=1) - - # XXX hmmm.. a normal install puts include files here - g['INCLUDEPY'] = get_python_inc(plat_specific=0) - - g['SO'] = '.pyd' - g['EXE'] = ".exe" - g['VERSION'] = get_python_version().replace(".", "") - g['BINDIR'] = os.path.dirname(os.path.abspath(sys.executable)) - - global _config_vars - _config_vars = g - - -def _init_os2(): - """Initialize the module as appropriate for OS/2""" - g = {} - # set basic install directories - g['LIBDEST'] = get_python_lib(plat_specific=0, standard_lib=1) - g['BINLIBDEST'] = get_python_lib(plat_specific=1, standard_lib=1) - - # XXX hmmm.. a normal install puts include files here - g['INCLUDEPY'] = get_python_inc(plat_specific=0) - - g['SO'] = '.pyd' - g['EXE'] = ".exe" - - global _config_vars - _config_vars = g - - -def get_config_vars(*args): - """With no arguments, return a dictionary of all configuration - variables relevant for the current platform. Generally this includes - everything needed to build extensions and install both pure modules and - extensions. On Unix, this means every variable defined in Python's - installed Makefile; on Windows and Mac OS it's a much smaller set. - - With arguments, return a list of values that result from looking up - each argument in the configuration variable dictionary. - """ - global _config_vars - if _config_vars is None: - func = globals().get("_init_" + os.name) - if func: - func() - else: - _config_vars = {} - - # Normalized versions of prefix and exec_prefix are handy to have; - # in fact, these are the standard versions used most places in the - # Distutils. 
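As the docstring above states, `get_config_vars()` returns the full dictionary when called with no arguments, and a list of looked-up values otherwise. A minimal usage sketch against the modern `sysconfig` module, which exposes the same contract:

```python
import sysconfig

# No arguments: the full configuration dictionary for this platform.
all_vars = sysconfig.get_config_vars()

# With arguments: a list of values, None for names that are not defined.
prefix, missing = sysconfig.get_config_vars('prefix', 'NO_SUCH_VAR')

assert isinstance(all_vars, dict)
assert missing is None
```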
- _config_vars['prefix'] = PREFIX - _config_vars['exec_prefix'] = EXEC_PREFIX - - if sys.platform == 'darwin': - kernel_version = os.uname()[2] # Kernel version (8.4.3) - major_version = int(kernel_version.split('.')[0]) - - if major_version < 8: - # On Mac OS X before 10.4, check if -arch and -isysroot - # are in CFLAGS or LDFLAGS and remove them if they are. - # This is needed when building extensions on a 10.3 system - # using a universal build of python. - for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED', - # a number of derived variables. These need to be - # patched up as well. - 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): - flags = _config_vars[key] - flags = re.sub('-arch\s+\w+\s', ' ', flags) - flags = re.sub('-isysroot [^ \t]*', ' ', flags) - _config_vars[key] = flags - - else: - - # Allow the user to override the architecture flags using - # an environment variable. - # NOTE: This name was introduced by Apple in OSX 10.5 and - # is used by several scripting languages distributed with - # that OS release. - - if 'ARCHFLAGS' in os.environ: - arch = os.environ['ARCHFLAGS'] - for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED', - # a number of derived variables. These need to be - # patched up as well. - 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): - - flags = _config_vars[key] - flags = re.sub('-arch\s+\w+\s', ' ', flags) - flags = flags + ' ' + arch - _config_vars[key] = flags - - # If we're on OSX 10.5 or later and the user tries to - # compiles an extension using an SDK that is not present - # on the current machine it is better to not use an SDK - # than to fail. - # - # The major usecase for this is users using a Python.org - # binary installer on OSX 10.6: that installer uses - # the 10.4u SDK, but that SDK is not installed by default - # when you install Xcode. 
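The `-arch`/`-isysroot` stripping above is just a pair of `re.sub` calls over a flags string; in isolation (the flag values here are made up for the example):

```python
import re

flags = '-g -arch i386 -arch ppc -isysroot /Developer/SDKs/MacOSX10.4u.sdk -O2'
# Drop every '-arch XXX' pair, then the '-isysroot PATH' pair,
# exactly as done for CFLAGS/LDFLAGS and friends above.
flags = re.sub(r'-arch\s+\w+\s', ' ', flags)
flags = re.sub(r'-isysroot\s+\S+(\s|$)', ' ', flags)
assert flags.split() == ['-g', '-O2']
```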
- # - m = re.search('-isysroot\s+(\S+)', _config_vars['CFLAGS']) - if m is not None: - sdk = m.group(1) - if not os.path.exists(sdk): - for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED', - # a number of derived variables. These need to be - # patched up as well. - 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): - - flags = _config_vars[key] - flags = re.sub('-isysroot\s+\S+(\s|$)', ' ', flags) - _config_vars[key] = flags - - if args: - vals = [] - for name in args: - vals.append(_config_vars.get(name)) - return vals - else: - return _config_vars - -def get_config_var(name): - """Return the value of a single variable using the dictionary - returned by 'get_config_vars()'. Equivalent to - get_config_vars().get(name) - """ - return get_config_vars().get(name) diff --git a/lib-python/modified-2.7/distutils/sysconfig_cpython.py b/lib-python/2.7/distutils/sysconfig_cpython.py rename from lib-python/modified-2.7/distutils/sysconfig_cpython.py rename to lib-python/2.7/distutils/sysconfig_cpython.py diff --git a/lib-python/modified-2.7/distutils/sysconfig_pypy.py b/lib-python/2.7/distutils/sysconfig_pypy.py rename from lib-python/modified-2.7/distutils/sysconfig_pypy.py rename to lib-python/2.7/distutils/sysconfig_pypy.py diff --git a/lib-python/2.7/distutils/tests/test_build_ext.py b/lib-python/2.7/distutils/tests/test_build_ext.py --- a/lib-python/2.7/distutils/tests/test_build_ext.py +++ b/lib-python/2.7/distutils/tests/test_build_ext.py @@ -293,7 +293,7 @@ finally: os.chdir(old_wd) self.assertTrue(os.path.exists(so_file)) - self.assertEqual(os.path.splitext(so_file)[-1], + self.assertEqual(so_file[so_file.index(os.path.extsep):], sysconfig.get_config_var('SO')) so_dir = os.path.dirname(so_file) self.assertEqual(so_dir, other_tmp_dir) @@ -302,7 +302,7 @@ cmd.run() so_file = cmd.get_outputs()[0] self.assertTrue(os.path.exists(so_file)) - self.assertEqual(os.path.splitext(so_file)[-1], + self.assertEqual(so_file[so_file.index(os.path.extsep):], sysconfig.get_config_var('SO')) so_dir = 
os.path.dirname(so_file) self.assertEqual(so_dir, cmd.build_lib) diff --git a/lib-python/2.7/distutils/tests/test_install.py b/lib-python/2.7/distutils/tests/test_install.py --- a/lib-python/2.7/distutils/tests/test_install.py +++ b/lib-python/2.7/distutils/tests/test_install.py @@ -2,6 +2,7 @@ import os import unittest +from test import test_support from test.test_support import run_unittest @@ -40,14 +41,15 @@ expected = os.path.normpath(expected) self.assertEqual(got, expected) - libdir = os.path.join(destination, "lib", "python") - check_path(cmd.install_lib, libdir) - check_path(cmd.install_platlib, libdir) - check_path(cmd.install_purelib, libdir) - check_path(cmd.install_headers, - os.path.join(destination, "include", "python", "foopkg")) - check_path(cmd.install_scripts, os.path.join(destination, "bin")) - check_path(cmd.install_data, destination) + if test_support.check_impl_detail(): + libdir = os.path.join(destination, "lib", "python") + check_path(cmd.install_lib, libdir) + check_path(cmd.install_platlib, libdir) + check_path(cmd.install_purelib, libdir) + check_path(cmd.install_headers, + os.path.join(destination, "include", "python", "foopkg")) + check_path(cmd.install_scripts, os.path.join(destination, "bin")) + check_path(cmd.install_data, destination) def test_suite(): diff --git a/lib-python/2.7/distutils/unixccompiler.py b/lib-python/2.7/distutils/unixccompiler.py --- a/lib-python/2.7/distutils/unixccompiler.py +++ b/lib-python/2.7/distutils/unixccompiler.py @@ -125,7 +125,22 @@ } if sys.platform[:6] == "darwin": + import platform + if platform.machine() == 'i386': + if platform.architecture()[0] == '32bit': + arch = 'i386' + else: + arch = 'x86_64' + else: + # just a guess + arch = platform.machine() executables['ranlib'] = ["ranlib"] + executables['linker_so'] += ['-undefined', 'dynamic_lookup'] + + for k, v in executables.iteritems(): + if v and v[0] == 'cc': + v += ['-arch', arch] + # Needed for the filename generation methods provided by the 
base # class, CCompiler. NB. whoever instantiates/uses a particular @@ -309,7 +324,7 @@ # On OSX users can specify an alternate SDK using # '-isysroot', calculate the SDK root if it is specified # (and use it further on) - cflags = sysconfig.get_config_var('CFLAGS') + cflags = sysconfig.get_config_var('CFLAGS') or '' m = re.search(r'-isysroot\s+(\S+)', cflags) if m is None: sysroot = '/' diff --git a/lib-python/2.7/heapq.py b/lib-python/2.7/heapq.py --- a/lib-python/2.7/heapq.py +++ b/lib-python/2.7/heapq.py @@ -193,6 +193,8 @@ Equivalent to: sorted(iterable, reverse=True)[:n] """ + if n < 0: # for consistency with the c impl + return [] it = iter(iterable) result = list(islice(it, n)) if not result: @@ -209,6 +211,8 @@ Equivalent to: sorted(iterable)[:n] """ + if n < 0: # for consistency with the c impl + return [] if hasattr(iterable, '__len__') and n * 10 <= len(iterable): # For smaller values of n, the bisect method is faster than a minheap. # It is also memory efficient, consuming only n elements of space. diff --git a/lib-python/2.7/httplib.py b/lib-python/2.7/httplib.py --- a/lib-python/2.7/httplib.py +++ b/lib-python/2.7/httplib.py @@ -1024,7 +1024,11 @@ kwds["buffering"] = True; response = self.response_class(*args, **kwds) - response.begin() + try: + response.begin() + except: + response.close() + raise assert response.will_close != _UNKNOWN self.__state = _CS_IDLE diff --git a/lib-python/2.7/idlelib/Delegator.py b/lib-python/2.7/idlelib/Delegator.py --- a/lib-python/2.7/idlelib/Delegator.py +++ b/lib-python/2.7/idlelib/Delegator.py @@ -12,6 +12,14 @@ self.__cache[name] = attr return attr + def __nonzero__(self): + # this is needed for PyPy: else, if self.delegate is None, the + # __getattr__ above picks NoneType.__nonzero__, which returns + # False. Thus, bool(Delegator()) is False as well, but it's not what + # we want. 
On CPython, bool(Delegator()) is True because NoneType + # does not have __nonzero__ + return True + def resetcache(self): for key in self.__cache.keys(): try: diff --git a/lib-python/2.7/inspect.py b/lib-python/2.7/inspect.py --- a/lib-python/2.7/inspect.py +++ b/lib-python/2.7/inspect.py @@ -746,8 +746,15 @@ 'varargs' and 'varkw' are the names of the * and ** arguments or None.""" if not iscode(co): - raise TypeError('{!r} is not a code object'.format(co)) + if hasattr(len, 'func_code') and type(co) is type(len.func_code): + # PyPy extension: built-in function objects have a func_code too. + # There is no co_code on it, but co_argcount and co_varnames and + # co_flags are present. + pass + else: + raise TypeError('{!r} is not a code object'.format(co)) + code = getattr(co, 'co_code', '') nargs = co.co_argcount names = co.co_varnames args = list(names[:nargs]) @@ -757,12 +764,12 @@ for i in range(nargs): if args[i][:1] in ('', '.'): stack, remain, count = [], [], [] - while step < len(co.co_code): - op = ord(co.co_code[step]) + while step < len(code): + op = ord(code[step]) step = step + 1 if op >= dis.HAVE_ARGUMENT: opname = dis.opname[op] - value = ord(co.co_code[step]) + ord(co.co_code[step+1])*256 + value = ord(code[step]) + ord(code[step+1])*256 step = step + 2 if opname in ('UNPACK_TUPLE', 'UNPACK_SEQUENCE'): remain.append(value) @@ -809,7 +816,9 @@ if ismethod(func): func = func.im_func - if not isfunction(func): + if not (isfunction(func) or + isbuiltin(func) and hasattr(func, 'func_code')): + # PyPy extension: this works for built-in functions too raise TypeError('{!r} is not a Python function'.format(func)) args, varargs, varkw = getargs(func.func_code) return ArgSpec(args, varargs, varkw, func.func_defaults) @@ -949,7 +958,7 @@ raise TypeError('%s() takes exactly 0 arguments ' '(%d given)' % (f_name, num_total)) else: - raise TypeError('%s() takes no arguments (%d given)' % + raise TypeError('%s() takes no argument (%d given)' % (f_name, num_total)) for 
arg in args: if isinstance(arg, str) and arg in named: diff --git a/lib-python/2.7/json/encoder.py b/lib-python/2.7/json/encoder.py --- a/lib-python/2.7/json/encoder.py +++ b/lib-python/2.7/json/encoder.py @@ -2,14 +2,7 @@ """ import re -try: - from _json import encode_basestring_ascii as c_encode_basestring_ascii -except ImportError: - c_encode_basestring_ascii = None -try: - from _json import make_encoder as c_make_encoder -except ImportError: - c_make_encoder = None +from __pypy__.builders import StringBuilder, UnicodeBuilder ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]') ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])') @@ -24,23 +17,22 @@ '\t': '\\t', } for i in range(0x20): - ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i)) - #ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) + ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) # Assume this produces an infinity on all machines (probably not guaranteed) INFINITY = float('1e66666') FLOAT_REPR = repr -def encode_basestring(s): +def raw_encode_basestring(s): """Return a JSON representation of a Python string """ def replace(match): return ESCAPE_DCT[match.group(0)] - return '"' + ESCAPE.sub(replace, s) + '"' + return ESCAPE.sub(replace, s) +encode_basestring = lambda s: '"' + raw_encode_basestring(s) + '"' - -def py_encode_basestring_ascii(s): +def raw_encode_basestring_ascii(s): """Return an ASCII-only JSON representation of a Python string """ @@ -53,21 +45,19 @@ except KeyError: n = ord(s) if n < 0x10000: - return '\\u{0:04x}'.format(n) - #return '\\u%04x' % (n,) + return '\\u%04x' % (n,) else: # surrogate pair n -= 0x10000 s1 = 0xd800 | ((n >> 10) & 0x3ff) s2 = 0xdc00 | (n & 0x3ff) - return '\\u{0:04x}\\u{1:04x}'.format(s1, s2) - #return '\\u%04x\\u%04x' % (s1, s2) - return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"' + return '\\u%04x\\u%04x' % (s1, s2) + if ESCAPE_ASCII.search(s): + return str(ESCAPE_ASCII.sub(replace, s)) + return s +encode_basestring_ascii = lambda s: '"' + raw_encode_basestring_ascii(s) 
+ '"' -encode_basestring_ascii = ( - c_encode_basestring_ascii or py_encode_basestring_ascii) - class JSONEncoder(object): """Extensible JSON encoder for Python data structures. @@ -147,6 +137,17 @@ self.skipkeys = skipkeys self.ensure_ascii = ensure_ascii + if ensure_ascii: + self.encoder = raw_encode_basestring_ascii + else: + self.encoder = raw_encode_basestring + if encoding != 'utf-8': + orig_encoder = self.encoder + def encoder(o): + if isinstance(o, str): + o = o.decode(encoding) + return orig_encoder(o) + self.encoder = encoder self.check_circular = check_circular self.allow_nan = allow_nan self.sort_keys = sort_keys @@ -184,24 +185,126 @@ '{"foo": ["bar", "baz"]}' """ - # This is for extremely simple cases and benchmarks. + if self.check_circular: + markers = {} + else: + markers = None + if self.ensure_ascii: + builder = StringBuilder() + else: + builder = UnicodeBuilder() + self._encode(o, markers, builder, 0) + return builder.build() + + def _emit_indent(self, builder, _current_indent_level): + if self.indent is not None: + _current_indent_level += 1 + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent + builder.append(newline_indent) + else: + separator = self.item_separator + return separator, _current_indent_level + + def _emit_unindent(self, builder, _current_indent_level): + if self.indent is not None: + builder.append('\n') + builder.append(' ' * (self.indent * (_current_indent_level - 1))) + + def _encode(self, o, markers, builder, _current_indent_level): if isinstance(o, basestring): - if isinstance(o, str): - _encoding = self.encoding - if (_encoding is not None - and not (_encoding == 'utf-8')): - o = o.decode(_encoding) - if self.ensure_ascii: - return encode_basestring_ascii(o) + builder.append('"') + builder.append(self.encoder(o)) + builder.append('"') + elif o is None: + builder.append('null') + elif o is True: + builder.append('true') + elif o is False: + 
builder.append('false') + elif isinstance(o, (int, long)): + builder.append(str(o)) + elif isinstance(o, float): + builder.append(self._floatstr(o)) + elif isinstance(o, (list, tuple)): + if not o: + builder.append('[]') + return + self._encode_list(o, markers, builder, _current_indent_level) + elif isinstance(o, dict): + if not o: + builder.append('{}') + return + self._encode_dict(o, markers, builder, _current_indent_level) + else: + self._mark_markers(markers, o) + res = self.default(o) + self._encode(res, markers, builder, _current_indent_level) + self._remove_markers(markers, o) + return res + + def _encode_list(self, l, markers, builder, _current_indent_level): + self._mark_markers(markers, l) + builder.append('[') + first = True + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + for elem in l: + if first: + first = False else: - return encode_basestring(o) - # This doesn't pass the iterator directly to ''.join() because the - # exceptions aren't as detailed. The list call should be roughly - # equivalent to the PySequence_Fast that ''.join() would do. 
- chunks = self.iterencode(o, _one_shot=True) - if not isinstance(chunks, (list, tuple)): - chunks = list(chunks) - return ''.join(chunks) + builder.append(separator) + self._encode(elem, markers, builder, _current_indent_level) + del elem # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append(']') + self._remove_markers(markers, l) + + def _encode_dict(self, d, markers, builder, _current_indent_level): + self._mark_markers(markers, d) + first = True + builder.append('{') + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + if self.sort_keys: + items = sorted(d.items(), key=lambda kv: kv[0]) + else: + items = d.iteritems() + + for key, v in items: + if first: + first = False + else: + builder.append(separator) + if isinstance(key, basestring): + pass + # JavaScript is weakly typed for these, so it makes sense to + # also allow them. Many encoders seem to do something like this. + elif isinstance(key, float): + key = self._floatstr(key) + elif key is True: + key = 'true' + elif key is False: + key = 'false' + elif key is None: + key = 'null' + elif isinstance(key, (int, long)): + key = str(key) + elif self.skipkeys: + continue + else: + raise TypeError("key " + repr(key) + " is not a string") + builder.append('"') + builder.append(self.encoder(key)) + builder.append('"') + builder.append(self.key_separator) + self._encode(v, markers, builder, _current_indent_level) + del key + del v # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append('}') + self._remove_markers(markers, d) def iterencode(self, o, _one_shot=False): """Encode the given object and yield each string @@ -217,86 +320,54 @@ markers = {} else: markers = None - if self.ensure_ascii: - _encoder = encode_basestring_ascii + return self._iterencode(o, markers, 0) + + def _floatstr(self, o): + # Check for specials. 
Note that this type of test is processor + # and/or platform-specific, so do tests which don't depend on the + # internals. + + if o != o: + text = 'NaN' + elif o == INFINITY: + text = 'Infinity' + elif o == -INFINITY: + text = '-Infinity' else: - _encoder = encode_basestring - if self.encoding != 'utf-8': - def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding): - if isinstance(o, str): - o = o.decode(_encoding) - return _orig_encoder(o) + return FLOAT_REPR(o) - def floatstr(o, allow_nan=self.allow_nan, - _repr=FLOAT_REPR, _inf=INFINITY, _neginf=-INFINITY): - # Check for specials. Note that this type of test is processor - # and/or platform-specific, so do tests which don't depend on the - # internals. + if not self.allow_nan: + raise ValueError( + "Out of range float values are not JSON compliant: " + + repr(o)) - if o != o: - text = 'NaN' - elif o == _inf: - text = 'Infinity' - elif o == _neginf: - text = '-Infinity' - else: - return _repr(o) + return text - if not allow_nan: - raise ValueError( - "Out of range float values are not JSON compliant: " + - repr(o)) + def _mark_markers(self, markers, o): + if markers is not None: + if id(o) in markers: + raise ValueError("Circular reference detected") + markers[id(o)] = None - return text + def _remove_markers(self, markers, o): + if markers is not None: + del markers[id(o)] - - if (_one_shot and c_make_encoder is not None - and self.indent is None and not self.sort_keys): - _iterencode = c_make_encoder( - markers, self.default, _encoder, self.indent, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, self.allow_nan) - else: - _iterencode = _make_iterencode( - markers, self.default, _encoder, self.indent, floatstr, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, _one_shot) - return _iterencode(o, 0) - -def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, - _key_separator, _item_separator, _sort_keys, _skipkeys, _one_shot, - ## HACK: 
hand-optimized bytecode; turn globals into locals - ValueError=ValueError, - basestring=basestring, - dict=dict, - float=float, - id=id, - int=int, - isinstance=isinstance, - list=list, - long=long, - str=str, - tuple=tuple, - ): - - def _iterencode_list(lst, _current_indent_level): + def _iterencode_list(self, lst, markers, _current_indent_level): if not lst: yield '[]' return - if markers is not None: - markerid = id(lst) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = lst + self._mark_markers(markers, lst) buf = '[' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent buf += newline_indent else: newline_indent = None - separator = _item_separator + separator = self.item_separator first = True for value in lst: if first: @@ -304,7 +375,7 @@ else: buf = separator if isinstance(value, basestring): - yield buf + _encoder(value) + yield buf + '"' + self.encoder(value) + '"' elif value is None: yield buf + 'null' elif value is True: @@ -314,44 +385,43 @@ elif isinstance(value, (int, long)): yield buf + str(value) elif isinstance(value, float): - yield buf + _floatstr(value) + yield buf + self._floatstr(value) else: yield buf if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: 
                 _current_indent_level -= 1
-                yield '\n' + (' ' * (_indent * _current_indent_level))
+                yield '\n' + (' ' * (self.indent * _current_indent_level))
         yield ']'
-        if markers is not None:
-            del markers[markerid]
+        self._remove_markers(markers, lst)
 
-    def _iterencode_dict(dct, _current_indent_level):
+    def _iterencode_dict(self, dct, markers, _current_indent_level):
         if not dct:
             yield '{}'
             return
-        if markers is not None:
-            markerid = id(dct)
-            if markerid in markers:
-                raise ValueError("Circular reference detected")
-            markers[markerid] = dct
+        self._mark_markers(markers, dct)
         yield '{'
-        if _indent is not None:
+        if self.indent is not None:
             _current_indent_level += 1
-            newline_indent = '\n' + (' ' * (_indent * _current_indent_level))
-            item_separator = _item_separator + newline_indent
+            newline_indent = '\n' + (' ' * (self.indent *
+                                            _current_indent_level))
+            item_separator = self.item_separator + newline_indent
             yield newline_indent
         else:
             newline_indent = None
-            item_separator = _item_separator
+            item_separator = self.item_separator
         first = True
-        if _sort_keys:
+        if self.sort_keys:
             items = sorted(dct.items(), key=lambda kv: kv[0])
         else:
             items = dct.iteritems()
@@ -361,7 +431,7 @@
             # JavaScript is weakly typed for these, so it makes sense to
             # also allow them.  Many encoders seem to do something like this.
             elif isinstance(key, float):
-                key = _floatstr(key)
+                key = self._floatstr(key)
             elif key is True:
                 key = 'true'
             elif key is False:
@@ -370,7 +440,7 @@
                 key = 'null'
             elif isinstance(key, (int, long)):
                 key = str(key)
-            elif _skipkeys:
+            elif self.skipkeys:
                 continue
             else:
                 raise TypeError("key " + repr(key) + " is not a string")
@@ -378,10 +448,10 @@
                 first = False
             else:
                 yield item_separator
-            yield _encoder(key)
-            yield _key_separator
+            yield '"' + self.encoder(key) + '"'
+            yield self.key_separator
             if isinstance(value, basestring):
-                yield _encoder(value)
+                yield '"' + self.encoder(value) + '"'
             elif value is None:
                 yield 'null'
             elif value is True:
@@ -391,26 +461,28 @@
             elif isinstance(value, (int, long)):
                 yield str(value)
             elif isinstance(value, float):
-                yield _floatstr(value)
+                yield self._floatstr(value)
             else:
                 if isinstance(value, (list, tuple)):
-                    chunks = _iterencode_list(value, _current_indent_level)
+                    chunks = self._iterencode_list(value, markers,
+                                                   _current_indent_level)
                 elif isinstance(value, dict):
-                    chunks = _iterencode_dict(value, _current_indent_level)
+                    chunks = self._iterencode_dict(value, markers,
+                                                   _current_indent_level)
                 else:
-                    chunks = _iterencode(value, _current_indent_level)
+                    chunks = self._iterencode(value, markers,
+                                              _current_indent_level)
                 for chunk in chunks:
                     yield chunk
         if newline_indent is not None:
             _current_indent_level -= 1
-            yield '\n' + (' ' * (_indent * _current_indent_level))
+            yield '\n' + (' ' * (self.indent * _current_indent_level))
         yield '}'
-        if markers is not None:
-            del markers[markerid]
+        self._remove_markers(markers, dct)
 
-    def _iterencode(o, _current_indent_level):
+    def _iterencode(self, o, markers, _current_indent_level):
         if isinstance(o, basestring):
-            yield _encoder(o)
+            yield '"' + self.encoder(o) + '"'
         elif o is None:
             yield 'null'
         elif o is True:
@@ -420,23 +492,19 @@
         elif isinstance(o, (int, long)):
             yield str(o)
         elif isinstance(o, float):
-            yield _floatstr(o)
+            yield self._floatstr(o)
         elif isinstance(o, (list, tuple)):
-            for chunk in _iterencode_list(o, _current_indent_level):
+            for chunk in self._iterencode_list(o, markers,
+                                               _current_indent_level):
                 yield chunk
         elif isinstance(o, dict):
-            for chunk in _iterencode_dict(o, _current_indent_level):
+            for chunk in self._iterencode_dict(o, markers,
+                                               _current_indent_level):
                 yield chunk
         else:
-            if markers is not None:
-                markerid = id(o)
-                if markerid in markers:
-                    raise ValueError("Circular reference detected")
-                markers[markerid] = o
-            o = _default(o)
-            for chunk in _iterencode(o, _current_indent_level):
+            self._mark_markers(markers, o)
+            obj = self.default(o)
+            for chunk in self._iterencode(obj, markers,
+                                          _current_indent_level):
                 yield chunk
-            if markers is not None:
-                del markers[markerid]
-
-    return _iterencode
+            self._remove_markers(markers, o)
diff --git a/lib-python/2.7/json/tests/test_unicode.py b/lib-python/2.7/json/tests/test_unicode.py
--- a/lib-python/2.7/json/tests/test_unicode.py
+++ b/lib-python/2.7/json/tests/test_unicode.py
@@ -80,6 +80,12 @@
         # Issue 10038.
         self.assertEqual(type(self.loads('"foo"')), unicode)
 
+    def test_encode_not_utf_8(self):
+        self.assertEqual(self.dumps('\xb1\xe6', encoding='iso8859-2'),
+                         '"\\u0105\\u0107"')
+        self.assertEqual(self.dumps(['\xb1\xe6'], encoding='iso8859-2'),
+                         '["\\u0105\\u0107"]')
+
 class TestPyUnicode(TestUnicode, PyTest): pass
 class TestCUnicode(TestUnicode, CTest): pass
diff --git a/lib-python/2.7/mailbox.py b/lib-python/2.7/mailbox.py
--- a/lib-python/2.7/mailbox.py
+++ b/lib-python/2.7/mailbox.py
@@ -619,7 +619,9 @@
         """Write any pending changes to disk."""
         if not self._pending:
             return
-
+        if self._file.closed:
+            self._pending = False
+            return
         # In order to be writing anything out at all, self._toc must
         # already have been generated (and presumably has been modified
         # by adding or deleting an item).
@@ -747,6 +749,7 @@ """Return a file-like representation or raise a KeyError.""" start, stop = self._lookup(key) self._file.seek(start) + if not from_: self._file.readline() return _PartialFile(self._file, self._file.tell(), stop) @@ -1818,6 +1821,10 @@ else: self._pos = pos + def __del__(self): + if hasattr(self,'_file'): + self.close() + def read(self, size=None): """Read bytes.""" return self._read(size, self._file.read) @@ -1854,6 +1861,7 @@ def close(self): """Close the file.""" + self._file.close() del self._file def _read(self, size, read_method): diff --git a/lib-python/2.7/multiprocessing/forking.py b/lib-python/2.7/multiprocessing/forking.py --- a/lib-python/2.7/multiprocessing/forking.py +++ b/lib-python/2.7/multiprocessing/forking.py @@ -73,15 +73,12 @@ return getattr, (m.im_self, m.im_func.func_name) ForkingPickler.register(type(ForkingPickler.save), _reduce_method) -def _reduce_method_descriptor(m): - return getattr, (m.__objclass__, m.__name__) -ForkingPickler.register(type(list.append), _reduce_method_descriptor) -ForkingPickler.register(type(int.__add__), _reduce_method_descriptor) - -#def _reduce_builtin_function_or_method(m): -# return getattr, (m.__self__, m.__name__) -#ForkingPickler.register(type(list().append), _reduce_builtin_function_or_method) -#ForkingPickler.register(type(int().__add__), _reduce_builtin_function_or_method) +if type(list.append) is not type(ForkingPickler.save): + # Some python implementations have unbound methods even for builtin types + def _reduce_method_descriptor(m): + return getattr, (m.__objclass__, m.__name__) + ForkingPickler.register(type(list.append), _reduce_method_descriptor) + ForkingPickler.register(type(int.__add__), _reduce_method_descriptor) try: from functools import partial diff --git a/lib-python/2.7/opcode.py b/lib-python/2.7/opcode.py --- a/lib-python/2.7/opcode.py +++ b/lib-python/2.7/opcode.py @@ -1,4 +1,3 @@ - """ opcode module - potentially shared between dis and other modules which operate on 
bytecodes (e.g. peephole optimizers). @@ -189,4 +188,10 @@ def_op('SET_ADD', 146) def_op('MAP_ADD', 147) +# pypy modification, experimental bytecode +def_op('LOOKUP_METHOD', 201) # Index in name list +hasname.append(201) +def_op('CALL_METHOD', 202) # #args not including 'self' +def_op('BUILD_LIST_FROM_ARG', 203) + del def_op, name_op, jrel_op, jabs_op diff --git a/lib-python/2.7/pickle.py b/lib-python/2.7/pickle.py --- a/lib-python/2.7/pickle.py +++ b/lib-python/2.7/pickle.py @@ -168,7 +168,7 @@ # Pickling machinery -class Pickler: +class Pickler(object): def __init__(self, file, protocol=None): """This takes a file-like object for writing a pickle data stream. @@ -638,6 +638,10 @@ # else tmp is empty, and we're done def save_dict(self, obj): + modict_saver = self._pickle_moduledict(obj) + if modict_saver is not None: + return self.save_reduce(*modict_saver) + write = self.write if self.bin: @@ -687,6 +691,29 @@ write(SETITEM) # else tmp is empty, and we're done + def _pickle_moduledict(self, obj): + # save module dictionary as "getattr(module, '__dict__')" + + # build index of module dictionaries + try: + modict = self.module_dict_ids + except AttributeError: + modict = {} + from sys import modules + for mod in modules.values(): + if isinstance(mod, ModuleType): + modict[id(mod.__dict__)] = mod + self.module_dict_ids = modict + + thisid = id(obj) + try: + themodule = modict[thisid] + except KeyError: + return None + from __builtin__ import getattr + return getattr, (themodule, '__dict__') + + def save_inst(self, obj): cls = obj.__class__ @@ -727,6 +754,29 @@ dispatch[InstanceType] = save_inst + def save_function(self, obj): + try: + return self.save_global(obj) + except PicklingError, e: + pass + # Check copy_reg.dispatch_table + reduce = dispatch_table.get(type(obj)) + if reduce: + rv = reduce(obj) + else: + # Check for a __reduce_ex__ method, fall back to __reduce__ + reduce = getattr(obj, "__reduce_ex__", None) + if reduce: + rv = reduce(self.proto) + else: + 
reduce = getattr(obj, "__reduce__", None) + if reduce: + rv = reduce() + else: + raise e + return self.save_reduce(obj=obj, *rv) + dispatch[FunctionType] = save_function + def save_global(self, obj, name=None, pack=struct.pack): write = self.write memo = self.memo @@ -768,7 +818,6 @@ self.memoize(obj) dispatch[ClassType] = save_global - dispatch[FunctionType] = save_global dispatch[BuiltinFunctionType] = save_global dispatch[TypeType] = save_global @@ -824,7 +873,7 @@ # Unpickling machinery -class Unpickler: +class Unpickler(object): def __init__(self, file): """This takes a file-like object for reading a pickle data stream. diff --git a/lib-python/2.7/pkgutil.py b/lib-python/2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: diff --git a/lib-python/2.7/pprint.py b/lib-python/2.7/pprint.py --- a/lib-python/2.7/pprint.py +++ b/lib-python/2.7/pprint.py @@ -144,7 +144,7 @@ return r = getattr(typ, "__repr__", None) - if issubclass(typ, dict) and r is dict.__repr__: + if issubclass(typ, dict) and r == dict.__repr__: write('{') if self._indent_per_level > 1: write((self._indent_per_level - 1) * ' ') @@ -173,10 +173,10 @@ write('}') return - if ((issubclass(typ, list) and r is list.__repr__) or - (issubclass(typ, tuple) and r is tuple.__repr__) or - (issubclass(typ, set) and r is set.__repr__) or - (issubclass(typ, frozenset) and r is frozenset.__repr__) + if ((issubclass(typ, list) and r == list.__repr__) or + (issubclass(typ, tuple) and r == tuple.__repr__) or + (issubclass(typ, set) and r == set.__repr__) or + (issubclass(typ, frozenset) and r == frozenset.__repr__) ): length = _len(object) if issubclass(typ, list): @@ -266,7 +266,7 @@ return ("%s%s%s" % (closure, sio.getvalue(), closure)), True, False r = getattr(typ, "__repr__", None) - if 
issubclass(typ, dict) and r is dict.__repr__: + if issubclass(typ, dict) and r == dict.__repr__: if not object: return "{}", True, False objid = _id(object) @@ -291,8 +291,8 @@ del context[objid] return "{%s}" % _commajoin(components), readable, recursive - if (issubclass(typ, list) and r is list.__repr__) or \ - (issubclass(typ, tuple) and r is tuple.__repr__): + if (issubclass(typ, list) and r == list.__repr__) or \ + (issubclass(typ, tuple) and r == tuple.__repr__): if issubclass(typ, list): if not object: return "[]", True, False diff --git a/lib-python/2.7/pydoc.py b/lib-python/2.7/pydoc.py --- a/lib-python/2.7/pydoc.py +++ b/lib-python/2.7/pydoc.py @@ -623,7 +623,9 @@ head, '#ffffff', '#7799ee', 'index
' + filelink + docloc) - modules = inspect.getmembers(object, inspect.ismodule) + def isnonbuiltinmodule(obj): + return inspect.ismodule(obj) and obj is not __builtin__ + modules = inspect.getmembers(object, isnonbuiltinmodule) classes, cdict = [], {} for key, value in inspect.getmembers(object, inspect.isclass): diff --git a/lib-python/2.7/random.py b/lib-python/2.7/random.py --- a/lib-python/2.7/random.py +++ b/lib-python/2.7/random.py @@ -41,7 +41,6 @@ from __future__ import division from warnings import warn as _warn -from types import MethodType as _MethodType, BuiltinMethodType as _BuiltinMethodType from math import log as _log, exp as _exp, pi as _pi, e as _e, ceil as _ceil from math import sqrt as _sqrt, acos as _acos, cos as _cos, sin as _sin from os import urandom as _urandom @@ -240,8 +239,7 @@ return self.randrange(a, b+1) - def _randbelow(self, n, _log=_log, int=int, _maxwidth=1L< n-1 > 2**(k-2) r = getrandbits(k) while r >= n: diff --git a/lib-python/2.7/site.py b/lib-python/2.7/site.py --- a/lib-python/2.7/site.py +++ b/lib-python/2.7/site.py @@ -75,7 +75,6 @@ USER_SITE = None USER_BASE = None - def makepath(*paths): dir = os.path.join(*paths) try: @@ -91,7 +90,10 @@ if hasattr(m, '__loader__'): continue # don't mess with a PEP 302-supplied __file__ try: - m.__file__ = os.path.abspath(m.__file__) + prev = m.__file__ + new = os.path.abspath(m.__file__) + if prev != new: + m.__file__ = new except (AttributeError, OSError): pass @@ -289,6 +291,7 @@ will find its `site-packages` subdirectory depending on the system environment, and will return a list of full paths. 
""" + is_pypy = '__pypy__' in sys.builtin_module_names sitepackages = [] seen = set() @@ -299,6 +302,10 @@ if sys.platform in ('os2emx', 'riscos'): sitepackages.append(os.path.join(prefix, "Lib", "site-packages")) + elif is_pypy: + from distutils.sysconfig import get_python_lib + sitedir = get_python_lib(standard_lib=False, prefix=prefix) + sitepackages.append(sitedir) elif os.sep == '/': sitepackages.append(os.path.join(prefix, "lib", "python" + sys.version[:3], @@ -435,22 +442,33 @@ if key == 'q': break +##def setcopyright(): +## """Set 'copyright' and 'credits' in __builtin__""" +## __builtin__.copyright = _Printer("copyright", sys.copyright) +## if sys.platform[:4] == 'java': +## __builtin__.credits = _Printer( +## "credits", +## "Jython is maintained by the Jython developers (www.jython.org).") +## else: +## __builtin__.credits = _Printer("credits", """\ +## Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands +## for supporting Python development. See www.python.org for more information.""") +## here = os.path.dirname(os.__file__) +## __builtin__.license = _Printer( +## "license", "See http://www.python.org/%.3s/license.html" % sys.version, +## ["LICENSE.txt", "LICENSE"], +## [os.path.join(here, os.pardir), here, os.curdir]) + def setcopyright(): - """Set 'copyright' and 'credits' in __builtin__""" + # XXX this is the PyPy-specific version. Should be unified with the above. __builtin__.copyright = _Printer("copyright", sys.copyright) - if sys.platform[:4] == 'java': - __builtin__.credits = _Printer( - "credits", - "Jython is maintained by the Jython developers (www.jython.org).") - else: - __builtin__.credits = _Printer("credits", """\ - Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands - for supporting Python development. 
See www.python.org for more information.""") - here = os.path.dirname(os.__file__) + __builtin__.credits = _Printer( + "credits", + "PyPy is maintained by the PyPy developers: http://pypy.org/") __builtin__.license = _Printer( - "license", "See http://www.python.org/%.3s/license.html" % sys.version, - ["LICENSE.txt", "LICENSE"], - [os.path.join(here, os.pardir), here, os.curdir]) + "license", + "See https://bitbucket.org/pypy/pypy/src/default/LICENSE") + class _Helper(object): @@ -476,7 +494,7 @@ if sys.platform == 'win32': import locale, codecs enc = locale.getdefaultlocale()[1] - if enc.startswith('cp'): # "cp***" ? + if enc is not None and enc.startswith('cp'): # "cp***" ? try: codecs.lookup(enc) except LookupError: @@ -532,9 +550,18 @@ "'import usercustomize' failed; use -v for traceback" +def import_builtin_stuff(): + """PyPy specific: pre-import a few built-in modules, because + some programs actually rely on them to be in sys.modules :-(""" + import exceptions + if 'zipimport' in sys.builtin_module_names: + import zipimport + + def main(): global ENABLE_USER_SITE + import_builtin_stuff() abs__file__() known_paths = removeduppaths() if (os.name == "posix" and sys.path and diff --git a/lib-python/2.7/socket.py b/lib-python/2.7/socket.py --- a/lib-python/2.7/socket.py +++ b/lib-python/2.7/socket.py @@ -46,8 +46,6 @@ import _socket from _socket import * -from functools import partial -from types import MethodType try: import _ssl @@ -159,11 +157,6 @@ if sys.platform == "riscos": _socketmethods = _socketmethods + ('sleeptaskw',) -# All the method names that must be delegated to either the real socket -# object or the _closedsocket object. 
-_delegate_methods = ("recv", "recvfrom", "recv_into", "recvfrom_into", - "send", "sendto") - class _closedsocket(object): __slots__ = [] def _dummy(*args): @@ -180,22 +173,43 @@ __doc__ = _realsocket.__doc__ - __slots__ = ["_sock", "__weakref__"] + list(_delegate_methods) - def __init__(self, family=AF_INET, type=SOCK_STREAM, proto=0, _sock=None): if _sock is None: _sock = _realsocket(family, type, proto) self._sock = _sock - for method in _delegate_methods: - setattr(self, method, getattr(_sock, method)) + self._io_refs = 0 + self._closed = False - def close(self, _closedsocket=_closedsocket, - _delegate_methods=_delegate_methods, setattr=setattr): + def send(self, data, flags=0): + return self._sock.send(data, flags=flags) + send.__doc__ = _realsocket.send.__doc__ + + def recv(self, buffersize, flags=0): + return self._sock.recv(buffersize, flags=flags) + recv.__doc__ = _realsocket.recv.__doc__ + + def recv_into(self, buffer, nbytes=0, flags=0): + return self._sock.recv_into(buffer, nbytes=nbytes, flags=flags) + recv_into.__doc__ = _realsocket.recv_into.__doc__ + + def recvfrom(self, buffersize, flags=0): + return self._sock.recvfrom(buffersize, flags=flags) + recvfrom.__doc__ = _realsocket.recvfrom.__doc__ + + def recvfrom_into(self, buffer, nbytes=0, flags=0): + return self._sock.recvfrom_into(buffer, nbytes=nbytes, flags=flags) + recvfrom_into.__doc__ = _realsocket.recvfrom_into.__doc__ + + def sendto(self, data, param2, param3=None): + if param3 is None: + return self._sock.sendto(data, param2) + else: + return self._sock.sendto(data, param2, param3) + sendto.__doc__ = _realsocket.sendto.__doc__ + + def close(self): # This function should not reference any globals. See issue #808164. self._sock = _closedsocket() - dummy = self._sock._dummy - for method in _delegate_methods: - setattr(self, method, dummy) close.__doc__ = _realsocket.close.__doc__ def accept(self): @@ -214,21 +228,49 @@ Return a regular file object corresponding to the socket. 
The mode and bufsize arguments are as for the built-in open() function.""" - return _fileobject(self._sock, mode, bufsize) + self._io_refs += 1 + return _fileobject(self, mode, bufsize) + + def _decref_socketios(self): + if self._io_refs > 0: + self._io_refs -= 1 + if self._closed: + self.close() + + def _real_close(self): + # This function should not reference any globals. See issue #808164. + self._sock.close() + + def close(self): + # This function should not reference any globals. See issue #808164. + self._closed = True + if self._io_refs <= 0: + self._real_close() family = property(lambda self: self._sock.family, doc="the socket family") type = property(lambda self: self._sock.type, doc="the socket type") proto = property(lambda self: self._sock.proto, doc="the socket protocol") -def meth(name,self,*args): - return getattr(self._sock,name)(*args) + # Delegate many calls to the raw socket object. + _s = ("def %(name)s(self, %(args)s): return self._sock.%(name)s(%(args)s)\n\n" + "%(name)s.__doc__ = _realsocket.%(name)s.__doc__\n") + for _m in _socketmethods: + # yupi! 
we're on pypy, all code objects have this interface + argcount = getattr(_realsocket, _m).im_func.func_code.co_argcount - 1 + exec _s % {'name': _m, 'args': ', '.join('arg%d' % i for i in range(argcount))} + del _m, _s, argcount -for _m in _socketmethods: - p = partial(meth,_m) - p.__name__ = _m - p.__doc__ = getattr(_realsocket,_m).__doc__ - m = MethodType(p,None,_socketobject) - setattr(_socketobject,_m,m) + # Delegation methods with default arguments, that the code above + # cannot handle correctly + def sendall(self, data, flags=0): + self._sock.sendall(data, flags) + sendall.__doc__ = _realsocket.sendall.__doc__ + + def getsockopt(self, level, optname, buflen=None): + if buflen is None: + return self._sock.getsockopt(level, optname) + return self._sock.getsockopt(level, optname, buflen) + getsockopt.__doc__ = _realsocket.getsockopt.__doc__ socket = SocketType = _socketobject @@ -278,8 +320,11 @@ if self._sock: self.flush() finally: - if self._close: - self._sock.close() + if self._sock: + if self._close: + self._sock.close() + else: + self._sock._decref_socketios() self._sock = None def __del__(self): diff --git a/lib-python/2.7/sqlite3/test/dbapi.py b/lib-python/2.7/sqlite3/test/dbapi.py --- a/lib-python/2.7/sqlite3/test/dbapi.py +++ b/lib-python/2.7/sqlite3/test/dbapi.py @@ -1,4 +1,4 @@ -#-*- coding: ISO-8859-1 -*- +#-*- coding: iso-8859-1 -*- # pysqlite2/test/dbapi.py: tests for DB-API compliance # # Copyright (C) 2004-2010 Gerhard Häring @@ -332,6 +332,9 @@ def __init__(self): self.value = 5 + def __iter__(self): + return self + def next(self): if self.value == 10: raise StopIteration @@ -826,7 +829,7 @@ con = sqlite.connect(":memory:") con.close() try: - con() + con("select 1") self.fail("Should have raised a ProgrammingError") except sqlite.ProgrammingError: pass diff --git a/lib-python/2.7/sqlite3/test/regression.py b/lib-python/2.7/sqlite3/test/regression.py --- a/lib-python/2.7/sqlite3/test/regression.py +++ b/lib-python/2.7/sqlite3/test/regression.py
@@ -264,6 +264,28 @@ """ self.assertRaises(sqlite.Warning, self.con, 1) + def CheckUpdateDescriptionNone(self): + """ + Call Cursor.update with an UPDATE query and check that it sets the + cursor's description to be None. + """ + cur = self.con.cursor() + cur.execute("CREATE TABLE foo (id INTEGER)") + cur.execute("UPDATE foo SET id = 3 WHERE id = 1") + self.assertEqual(cur.description, None) + + def CheckStatementCache(self): + cur = self.con.cursor() + cur.execute("CREATE TABLE foo (id INTEGER)") + values = [(i,) for i in xrange(5)] + cur.executemany("INSERT INTO foo (id) VALUES (?)", values) + + cur.execute("SELECT id FROM foo") + self.assertEqual(list(cur), values) + self.con.commit() + cur.execute("SELECT id FROM foo") + self.assertEqual(list(cur), values) + def suite(): regression_suite = unittest.makeSuite(RegressionTests, "Check") return unittest.TestSuite((regression_suite,)) diff --git a/lib-python/2.7/sqlite3/test/userfunctions.py b/lib-python/2.7/sqlite3/test/userfunctions.py --- a/lib-python/2.7/sqlite3/test/userfunctions.py +++ b/lib-python/2.7/sqlite3/test/userfunctions.py @@ -275,12 +275,14 @@ pass def CheckAggrNoStep(self): + # XXX it's better to raise OperationalError in order to stop + # the query earlier. 
cur = self.con.cursor() try: cur.execute("select nostep(t) from test") - self.fail("should have raised an AttributeError") - except AttributeError, e: - self.assertEqual(e.args[0], "AggrNoStep instance has no attribute 'step'") + self.fail("should have raised an OperationalError") + except sqlite.OperationalError, e: + self.assertEqual(e.args[0], "user-defined aggregate's 'step' method raised error") def CheckAggrNoFinalize(self): cur = self.con.cursor() diff --git a/lib-python/2.7/ssl.py b/lib-python/2.7/ssl.py --- a/lib-python/2.7/ssl.py +++ b/lib-python/2.7/ssl.py @@ -86,7 +86,7 @@ else: _PROTOCOL_NAMES[PROTOCOL_SSLv2] = "SSLv2" -from socket import socket, _fileobject, _delegate_methods, error as socket_error +from socket import socket, _fileobject, error as socket_error from socket import getnameinfo as _getnameinfo import base64 # for DER-to-PEM translation import errno @@ -103,14 +103,6 @@ do_handshake_on_connect=True, suppress_ragged_eofs=True, ciphers=None): socket.__init__(self, _sock=sock._sock) - # The initializer for socket overrides the methods send(), recv(), etc. - # in the instancce, which we don't need -- but we want to provide the - # methods defined in SSLSocket. 
- for attr in _delegate_methods: - try: - delattr(self, attr) - except AttributeError: - pass if certfile and not keyfile: keyfile = certfile diff --git a/lib-python/2.7/subprocess.py b/lib-python/2.7/subprocess.py --- a/lib-python/2.7/subprocess.py +++ b/lib-python/2.7/subprocess.py @@ -803,7 +803,7 @@ elif stderr == PIPE: errread, errwrite = _subprocess.CreatePipe(None, 0) elif stderr == STDOUT: - errwrite = c2pwrite + errwrite = c2pwrite.handle # pass id to not close it elif isinstance(stderr, int): errwrite = msvcrt.get_osfhandle(stderr) else: @@ -818,9 +818,13 @@ def _make_inheritable(self, handle): """Return a duplicate of handle, which is inheritable""" - return _subprocess.DuplicateHandle(_subprocess.GetCurrentProcess(), + dupl = _subprocess.DuplicateHandle(_subprocess.GetCurrentProcess(), handle, _subprocess.GetCurrentProcess(), 0, 1, _subprocess.DUPLICATE_SAME_ACCESS) + # If the initial handle was obtained with CreatePipe, close it. + if not isinstance(handle, int): + handle.Close() + return dupl def _find_w9xpopen(self): diff --git a/lib-python/2.7/sysconfig.py b/lib-python/2.7/sysconfig.py --- a/lib-python/2.7/sysconfig.py +++ b/lib-python/2.7/sysconfig.py @@ -26,6 +26,16 @@ 'scripts': '{base}/bin', 'data' : '{base}', }, + 'pypy': { + 'stdlib': '{base}/lib-python', + 'platstdlib': '{base}/lib-python', + 'purelib': '{base}/lib-python', + 'platlib': '{base}/lib-python', + 'include': '{base}/include', + 'platinclude': '{base}/include', + 'scripts': '{base}/bin', + 'data' : '{base}', + }, 'nt': { 'stdlib': '{base}/Lib', 'platstdlib': '{base}/Lib', @@ -158,7 +168,9 @@ return res def _get_default_scheme(): - if os.name == 'posix': + if '__pypy__' in sys.builtin_module_names: + return 'pypy' + elif os.name == 'posix': # the default scheme for posix is posix_prefix return 'posix_prefix' return os.name @@ -182,126 +194,9 @@ return env_base if env_base else joinuser("~", ".local") -def _parse_makefile(filename, vars=None): - """Parse a Makefile-style file. 
- - A dictionary containing name/value pairs is returned. If an - optional dictionary is passed in as the second argument, it is - used instead of a new dictionary. - """ - import re - # Regexes needed for parsing Makefile (and similar syntaxes, - # like old-style Setup files). - _variable_rx = re.compile("([a-zA-Z][a-zA-Z0-9_]+)\s*=\s*(.*)") - _findvar1_rx = re.compile(r"\$\(([A-Za-z][A-Za-z0-9_]*)\)") - _findvar2_rx = re.compile(r"\${([A-Za-z][A-Za-z0-9_]*)}") - - if vars is None: - vars = {} - done = {} - notdone = {} - - with open(filename) as f: - lines = f.readlines() - - for line in lines: - if line.startswith('#') or line.strip() == '': - continue - m = _variable_rx.match(line) - if m: - n, v = m.group(1, 2) - v = v.strip() - # `$$' is a literal `$' in make - tmpv = v.replace('$$', '') - - if "$" in tmpv: - notdone[n] = v - else: - try: - v = int(v) - except ValueError: - # insert literal `$' - done[n] = v.replace('$$', '$') - else: - done[n] = v - - # do variable interpolation here - while notdone: - for name in notdone.keys(): - value = notdone[name] - m = _findvar1_rx.search(value) or _findvar2_rx.search(value) - if m: - n = m.group(1) - found = True - if n in done: - item = str(done[n]) - elif n in notdone: - # get it on a subsequent round - found = False - elif n in os.environ: - # do it like make: fall back to environment - item = os.environ[n] - else: - done[n] = item = "" - if found: - after = value[m.end():] - value = value[:m.start()] + item + after - if "$" in after: - notdone[name] = value - else: - try: value = int(value) - except ValueError: - done[name] = value.strip() - else: - done[name] = value - del notdone[name] - else: - # bogus variable reference; just drop it since we can't deal - del notdone[name] - # strip spurious spaces - for k, v in done.items(): - if isinstance(v, str): - done[k] = v.strip() - - # save the results in the global dictionary - vars.update(done) - return vars - - -def _get_makefile_filename(): - if _PYTHON_BUILD: - 
return os.path.join(_PROJECT_BASE, "Makefile") - return os.path.join(get_path('platstdlib'), "config", "Makefile") - - def _init_posix(vars): """Initialize the module as appropriate for POSIX systems.""" - # load the installed Makefile: - makefile = _get_makefile_filename() - try: - _parse_makefile(makefile, vars) - except IOError, e: - msg = "invalid Python installation: unable to open %s" % makefile - if hasattr(e, "strerror"): - msg = msg + " (%s)" % e.strerror - raise IOError(msg) - - # load the installed pyconfig.h: - config_h = get_config_h_filename() - try: - with open(config_h) as f: - parse_config_h(f, vars) - except IOError, e: - msg = "invalid Python installation: unable to open %s" % config_h - if hasattr(e, "strerror"): - msg = msg + " (%s)" % e.strerror - raise IOError(msg) - - # On AIX, there are wrong paths to the linker scripts in the Makefile - # -- these paths are relative to the Python source, but when installed - # the scripts are in another directory. - if _PYTHON_BUILD: - vars['LDSHARED'] = vars['BLDSHARED'] + return def _init_non_posix(vars): """Initialize the module as appropriate for NT""" @@ -474,10 +369,11 @@ # patched up as well. 
'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): - flags = _CONFIG_VARS[key] - flags = re.sub('-arch\s+\w+\s', ' ', flags) - flags = flags + ' ' + arch - _CONFIG_VARS[key] = flags + if key in _CONFIG_VARS: + flags = _CONFIG_VARS[key] + flags = re.sub('-arch\s+\w+\s', ' ', flags) + flags = flags + ' ' + arch + _CONFIG_VARS[key] = flags # If we're on OSX 10.5 or later and the user tries to # compiles an extension using an SDK that is not present diff --git a/lib-python/2.7/tarfile.py b/lib-python/2.7/tarfile.py --- a/lib-python/2.7/tarfile.py +++ b/lib-python/2.7/tarfile.py @@ -425,10 +425,16 @@ raise CompressionError("zlib module is not available") self.zlib = zlib self.crc = zlib.crc32("") & 0xffffffffL - if mode == "r": - self._init_read_gz() - else: - self._init_write_gz() + try: + if mode == "r": + self._init_read_gz() + else: + self._init_write_gz() + except: + if not self._extfileobj: + fileobj.close() + self.closed = True + raise if comptype == "bz2": try: @@ -1682,13 +1688,14 @@ if filemode not in "rw": raise ValueError("mode must be 'r' or 'w'") - - t = cls(name, filemode, - _Stream(name, filemode, comptype, fileobj, bufsize), - **kwargs) - t._extfileobj = False - return t - + fid = _Stream(name, filemode, comptype, fileobj, bufsize) + try: + t = cls(name, filemode, fid, **kwargs) + t._extfileobj = False + return t + except: + fid.close() + raise elif mode in "aw": return cls.taropen(name, mode, fileobj, **kwargs) @@ -1715,16 +1722,18 @@ gzip.GzipFile except (ImportError, AttributeError): raise CompressionError("gzip module is not available") - - if fileobj is None: - fileobj = bltn_open(name, mode + "b") - + gz_fid = None try: - t = cls.taropen(name, mode, - gzip.GzipFile(name, mode, compresslevel, fileobj), - **kwargs) + gz_fid = gzip.GzipFile(name, mode, compresslevel, fileobj) + t = cls.taropen(name, mode, gz_fid, **kwargs) except IOError: + if gz_fid: + gz_fid.close() raise ReadError("not a gzip file") + except: + if gz_fid: + gz_fid.close() + raise t._extfileobj 
= False return t @@ -1741,15 +1750,21 @@ except ImportError: raise CompressionError("bz2 module is not available") - if fileobj is not None: - fileobj = _BZ2Proxy(fileobj, mode) - else: - fileobj = bz2.BZ2File(name, mode, compresslevel=compresslevel) + try: + if fileobj is not None: + bzfileobj = _BZ2Proxy(fileobj, mode) + else: + bzfileobj = bz2.BZ2File(name, mode, compresslevel=compresslevel) + t = cls.taropen(name, mode, bzfileobj, **kwargs) - try: - t = cls.taropen(name, mode, fileobj, **kwargs) except (IOError, EOFError): + if fileobj is None: + bzfileobj.close() raise ReadError("not a bzip2 file") + except: + if fileobj is None: + bzfileobj.close() + raise t._extfileobj = False return t diff --git a/lib-python/2.7/test/list_tests.py b/lib-python/2.7/test/list_tests.py --- a/lib-python/2.7/test/list_tests.py +++ b/lib-python/2.7/test/list_tests.py @@ -45,8 +45,12 @@ self.assertEqual(str(a2), "[0, 1, 2, [...], 3]") self.assertEqual(repr(a2), "[0, 1, 2, [...], 3]") + if test_support.check_impl_detail(): + depth = sys.getrecursionlimit() + 100 + else: + depth = 1000 * 1000 # should be enough to exhaust the stack l0 = [] - for i in xrange(sys.getrecursionlimit() + 100): + for i in xrange(depth): l0 = [l0] self.assertRaises(RuntimeError, repr, l0) @@ -472,7 +476,11 @@ u += "eggs" self.assertEqual(u, self.type2test("spameggs")) - self.assertRaises(TypeError, u.__iadd__, None) + def f_iadd(u, x): + u += x + return u + + self.assertRaises(TypeError, f_iadd, u, None) def test_imul(self): u = self.type2test([0, 1]) diff --git a/lib-python/2.7/test/mapping_tests.py b/lib-python/2.7/test/mapping_tests.py --- a/lib-python/2.7/test/mapping_tests.py +++ b/lib-python/2.7/test/mapping_tests.py @@ -531,7 +531,10 @@ self.assertEqual(va, int(ka)) kb, vb = tb = b.popitem() self.assertEqual(vb, int(kb)) - self.assertTrue(not(copymode < 0 and ta != tb)) + if copymode < 0 and test_support.check_impl_detail(): + # popitem() is not guaranteed to be deterministic on + # all 
implementations + self.assertEqual(ta, tb) self.assertTrue(not a) self.assertTrue(not b) diff --git a/lib-python/2.7/test/pickletester.py b/lib-python/2.7/test/pickletester.py --- a/lib-python/2.7/test/pickletester.py +++ b/lib-python/2.7/test/pickletester.py @@ -6,7 +6,7 @@ import pickletools import copy_reg -from test.test_support import TestFailed, have_unicode, TESTFN +from test.test_support import TestFailed, have_unicode, TESTFN, impl_detail # Tests that try a number of pickle protocols should have a # for proto in protocols: @@ -949,6 +949,7 @@ "Failed protocol %d: %r != %r" % (proto, obj, loaded)) + @impl_detail("pypy does not store attribute names", pypy=False) def test_attribute_name_interning(self): # Test that attribute names of pickled objects are interned when # unpickling. @@ -1091,6 +1092,7 @@ s = StringIO.StringIO("X''.") self.assertRaises(EOFError, self.module.load, s) + @impl_detail("no full restricted mode in pypy", pypy=False) def test_restricted(self): # issue7128: cPickle failed in restricted mode builtins = {self.module.__name__: self.module, diff --git a/lib-python/2.7/test/regrtest.py b/lib-python/2.7/test/regrtest.py --- a/lib-python/2.7/test/regrtest.py +++ b/lib-python/2.7/test/regrtest.py @@ -1388,7 +1388,26 @@ test_zipimport test_zlib """, - 'openbsd3': + 'openbsd4': + """ + test_ascii_formatd + test_bsddb + test_bsddb3 + test_ctypes + test_dl + test_epoll + test_gdbm + test_locale + test_normalization + test_ossaudiodev + test_pep277 + test_tcl + test_tk + test_ttk_guionly + test_ttk_textonly + test_multiprocessing + """, + 'openbsd5': """ test_ascii_formatd test_bsddb @@ -1503,13 +1522,7 @@ return self.expected if __name__ == '__main__': - # findtestdir() gets the dirname out of __file__, so we have to make it - # absolute before changing the working directory. - # For example __file__ may be relative when running trace or profile. - # See issue #9323. 
- __file__ = os.path.abspath(__file__) - - # sanity check + # Simplification for findtestdir(). assert __file__ == os.path.abspath(sys.argv[0]) # When tests are run from the Python build directory, it is best practice diff --git a/lib-python/2.7/test/seq_tests.py b/lib-python/2.7/test/seq_tests.py --- a/lib-python/2.7/test/seq_tests.py +++ b/lib-python/2.7/test/seq_tests.py @@ -307,12 +307,18 @@ def test_bigrepeat(self): import sys - if sys.maxint <= 2147483647: - x = self.type2test([0]) - x *= 2**16 - self.assertRaises(MemoryError, x.__mul__, 2**16) - if hasattr(x, '__imul__'): - self.assertRaises(MemoryError, x.__imul__, 2**16) + # we chose an N such as 2**16 * N does not fit into a cpu word + if sys.maxint == 2147483647: + # 32 bit system + N = 2**16 + else: + # 64 bit system + N = 2**48 + x = self.type2test([0]) + x *= 2**16 + self.assertRaises(MemoryError, x.__mul__, N) + if hasattr(x, '__imul__'): + self.assertRaises(MemoryError, x.__imul__, N) def test_subscript(self): a = self.type2test([10, 11]) diff --git a/lib-python/2.7/test/string_tests.py b/lib-python/2.7/test/string_tests.py --- a/lib-python/2.7/test/string_tests.py +++ b/lib-python/2.7/test/string_tests.py @@ -1024,7 +1024,10 @@ self.checkequal('abc', 'abc', '__mul__', 1) self.checkequal('abcabcabc', 'abc', '__mul__', 3) self.checkraises(TypeError, 'abc', '__mul__') - self.checkraises(TypeError, 'abc', '__mul__', '') + class Mul(object): + def mul(self, a, b): + return a * b + self.checkraises(TypeError, Mul(), 'mul', 'abc', '') # XXX: on a 64-bit system, this doesn't raise an overflow error, # but either raises a MemoryError, or succeeds (if you have 54TiB) #self.checkraises(OverflowError, 10000*'abc', '__mul__', 2000000000) diff --git a/lib-python/2.7/test/test_abstract_numbers.py b/lib-python/2.7/test/test_abstract_numbers.py --- a/lib-python/2.7/test/test_abstract_numbers.py +++ b/lib-python/2.7/test/test_abstract_numbers.py @@ -40,7 +40,8 @@ c1, c2 = complex(3, 2), complex(4,1) # XXX: This is 
not ideal, but see the comment in math_trunc(). - self.assertRaises(AttributeError, math.trunc, c1) + # Modified to suit PyPy, which gives TypeError in all cases + self.assertRaises((AttributeError, TypeError), math.trunc, c1) self.assertRaises(TypeError, float, c1) self.assertRaises(TypeError, int, c1) diff --git a/lib-python/2.7/test/test_aifc.py b/lib-python/2.7/test/test_aifc.py --- a/lib-python/2.7/test/test_aifc.py +++ b/lib-python/2.7/test/test_aifc.py @@ -1,4 +1,4 @@ -from test.test_support import findfile, run_unittest, TESTFN +from test.test_support import findfile, run_unittest, TESTFN, impl_detail import unittest import os @@ -68,6 +68,7 @@ self.assertEqual(f.getparams(), fout.getparams()) self.assertEqual(f.readframes(5), fout.readframes(5)) + @impl_detail("PyPy has no audioop module yet", pypy=False) def test_compress(self): f = self.f = aifc.open(self.sndfilepath) fout = self.fout = aifc.open(TESTFN, 'wb') diff --git a/lib-python/2.7/test/test_array.py b/lib-python/2.7/test/test_array.py --- a/lib-python/2.7/test/test_array.py +++ b/lib-python/2.7/test/test_array.py @@ -295,9 +295,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, a.__add__, b) - - self.assertRaises(TypeError, a.__add__, "bad") + with self.assertRaises(TypeError): + a + b + with self.assertRaises(TypeError): + a + 'bad' def test_iadd(self): a = array.array(self.typecode, self.example[::-1]) @@ -316,9 +317,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, a.__add__, b) - - self.assertRaises(TypeError, a.__iadd__, "bad") + with self.assertRaises(TypeError): + a += b + with self.assertRaises(TypeError): + a += 'bad' def test_mul(self): a = 5*array.array(self.typecode, self.example) @@ -345,7 +347,8 @@ array.array(self.typecode) ) - self.assertRaises(TypeError, a.__mul__, "bad") + with self.assertRaises(TypeError): + a * 'bad' def test_imul(self): a = array.array(self.typecode, self.example) @@ -374,7 +377,8 @@ a *= -1 self.assertEqual(a, 
array.array(self.typecode)) - self.assertRaises(TypeError, a.__imul__, "bad") + with self.assertRaises(TypeError): + a *= 'bad' def test_getitem(self): a = array.array(self.typecode, self.example) @@ -769,6 +773,7 @@ p = proxy(s) self.assertEqual(p.tostring(), s.tostring()) s = None + test_support.gc_collect() self.assertRaises(ReferenceError, len, p) def test_bug_782369(self): diff --git a/lib-python/2.7/test/test_ascii_formatd.py b/lib-python/2.7/test/test_ascii_formatd.py --- a/lib-python/2.7/test/test_ascii_formatd.py +++ b/lib-python/2.7/test/test_ascii_formatd.py @@ -4,6 +4,10 @@ import unittest from test.test_support import check_warnings, run_unittest, import_module +from test.test_support import check_impl_detail + +if not check_impl_detail(cpython=True): + raise unittest.SkipTest("this test is only for CPython") # Skip tests if _ctypes module does not exist import_module('_ctypes') diff --git a/lib-python/2.7/test/test_ast.py b/lib-python/2.7/test/test_ast.py --- a/lib-python/2.7/test/test_ast.py +++ b/lib-python/2.7/test/test_ast.py @@ -20,10 +20,24 @@ # These tests are compiled through "exec" # There should be atleast one test per statement exec_tests = [ + # None + "None", # FunctionDef "def f(): pass", + # FunctionDef with arg + "def f(a): pass", + # FunctionDef with arg and default value + "def f(a=0): pass", + # FunctionDef with varargs + "def f(*args): pass", + # FunctionDef with kwargs + "def f(**kwargs): pass", + # FunctionDef with all kind of args + "def f(a, b=1, c=None, d=[], e={}, *args, **kwargs): pass", # ClassDef "class C:pass", + # ClassDef, new style class + "class C(object): pass", # Return "def f():return 1", # Delete @@ -68,6 +82,27 @@ "for a,b in c: pass", "[(a,b) for a,b in c]", "((a,b) for a,b in c)", + "((a,b) for (a,b) in c)", + # Multiline generator expression + """( + ( + Aa + , + Bb + ) + for + Aa + , + Bb in Cc + )""", + # dictcomp + "{a : b for w in x for m in p if g}", + # dictcomp with naked tuple + "{a : b for v,w in x}", 
+ # setcomp + "{r for l in x if g}", + # setcomp with naked tuple + "{r for l,m in x}", ] # These are compiled through "single" @@ -80,6 +115,8 @@ # These are compiled through "eval" # It should test all expressions eval_tests = [ + # None + "None", # BoolOp "a and b", # BinOp @@ -90,6 +127,16 @@ "lambda:None", # Dict "{ 1:2 }", + # Empty dict + "{}", + # Set + "{None,}", + # Multiline dict + """{ + 1 + : + 2 + }""", # ListComp "[a for b in c if d]", # GeneratorExp @@ -114,8 +161,14 @@ "v", # List "[1,2,3]", + # Empty list + "[]", # Tuple "1,2,3", + # Tuple + "(1,2,3)", + # Empty tuple + "()", # Combination "a.b.c.d(a.b[1:2])", @@ -141,6 +194,35 @@ elif value is not None: self._assertTrueorder(value, parent_pos) + def test_AST_objects(self): + if test_support.check_impl_detail(): + # PyPy also provides a __dict__ to the ast.AST base class. + + x = ast.AST() + try: + x.foobar = 21 + except AttributeError, e: + self.assertEquals(e.args[0], + "'_ast.AST' object has no attribute 'foobar'") + else: + self.assert_(False) + + try: + ast.AST(lineno=2) + except AttributeError, e: + self.assertEquals(e.args[0], + "'_ast.AST' object has no attribute 'lineno'") + else: + self.assert_(False) + + try: + ast.AST(2) + except TypeError, e: + self.assertEquals(e.args[0], + "_ast.AST constructor takes 0 positional arguments") + else: + self.assert_(False) + def test_snippets(self): for input, output, kind in ((exec_tests, exec_results, "exec"), (single_tests, single_results, "single"), @@ -169,6 +251,114 @@ self.assertTrue(issubclass(ast.comprehension, ast.AST)) self.assertTrue(issubclass(ast.Gt, ast.AST)) + def test_field_attr_existence(self): + for name, item in ast.__dict__.iteritems(): + if isinstance(item, type) and name != 'AST' and name[0].isupper(): # XXX: pypy does not allow abstract ast class instanciation + x = item() + if isinstance(x, ast.AST): + self.assertEquals(type(x._fields), tuple) + + def test_arguments(self): + x = ast.arguments() + self.assertEquals(x._fields, 
('args', 'vararg', 'kwarg', 'defaults')) + try: + x.vararg + except AttributeError, e: + self.assertEquals(e.args[0], + "'arguments' object has no attribute 'vararg'") + else: + self.assert_(False) + x = ast.arguments(1, 2, 3, 4) + self.assertEquals(x.vararg, 2) + + def test_field_attr_writable(self): + x = ast.Num() + # We can assign to _fields + x._fields = 666 + self.assertEquals(x._fields, 666) + + def test_classattrs(self): + x = ast.Num() + self.assertEquals(x._fields, ('n',)) + try: + x.n + except AttributeError, e: + self.assertEquals(e.args[0], + "'Num' object has no attribute 'n'") + else: + self.assert_(False) + + x = ast.Num(42) + self.assertEquals(x.n, 42) + try: + x.lineno + except AttributeError, e: + self.assertEquals(e.args[0], + "'Num' object has no attribute 'lineno'") + else: + self.assert_(False) + + y = ast.Num() + x.lineno = y + self.assertEquals(x.lineno, y) + + try: + x.foobar + except AttributeError, e: + self.assertEquals(e.args[0], + "'Num' object has no attribute 'foobar'") + else: + self.assert_(False) + + x = ast.Num(lineno=2) + self.assertEquals(x.lineno, 2) + + x = ast.Num(42, lineno=0) + self.assertEquals(x.lineno, 0) + self.assertEquals(x._fields, ('n',)) + self.assertEquals(x.n, 42) + + self.assertRaises(TypeError, ast.Num, 1, 2) + self.assertRaises(TypeError, ast.Num, 1, 2, lineno=0) + + def test_module(self): + body = [ast.Num(42)] + x = ast.Module(body) + self.assertEquals(x.body, body) + + def test_nodeclass(self): + x = ast.BinOp() + self.assertEquals(x._fields, ('left', 'op', 'right')) + + # Zero arguments constructor explicitely allowed + x = ast.BinOp() + # Random attribute allowed too + x.foobarbaz = 5 + self.assertEquals(x.foobarbaz, 5) + + n1 = ast.Num(1) + n3 = ast.Num(3) + addop = ast.Add() + x = ast.BinOp(n1, addop, n3) + self.assertEquals(x.left, n1) + self.assertEquals(x.op, addop) + self.assertEquals(x.right, n3) + + x = ast.BinOp(1, 2, 3) + self.assertEquals(x.left, 1) + self.assertEquals(x.op, 2) + 
self.assertEquals(x.right, 3) + + x = ast.BinOp(1, 2, 3, lineno=0) + self.assertEquals(x.lineno, 0) + + def test_nodeclasses(self): + x = ast.BinOp(1, 2, 3, lineno=0) + self.assertEquals(x.left, 1) + self.assertEquals(x.op, 2) + self.assertEquals(x.right, 3) + self.assertEquals(x.lineno, 0) + def test_nodeclasses(self): x = ast.BinOp(1, 2, 3, lineno=0) self.assertEqual(x.left, 1) @@ -178,6 +368,12 @@ # node raises exception when not given enough arguments self.assertRaises(TypeError, ast.BinOp, 1, 2) + # node raises exception when given too many arguments + self.assertRaises(TypeError, ast.BinOp, 1, 2, 3, 4) + # node raises exception when not given enough arguments + self.assertRaises(TypeError, ast.BinOp, 1, 2, lineno=0) + # node raises exception when given too many arguments + self.assertRaises(TypeError, ast.BinOp, 1, 2, 3, 4, lineno=0) # can set attributes through kwargs too x = ast.BinOp(left=1, op=2, right=3, lineno=0) @@ -186,8 +382,14 @@ self.assertEqual(x.right, 3) self.assertEqual(x.lineno, 0) + # Random kwargs also allowed + x = ast.BinOp(1, 2, 3, foobarbaz=42) + self.assertEquals(x.foobarbaz, 42) + + def test_no_fields(self): # this used to fail because Sub._fields was None x = ast.Sub() + self.assertEquals(x._fields, ()) def test_pickling(self): import pickle @@ -330,8 +532,15 @@ #### EVERYTHING BELOW IS GENERATED ##### exec_results = [ +('Module', [('Expr', (1, 0), ('Name', (1, 0), 'None', ('Load',)))]), ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], None, None, []), [('Pass', (1, 9))], [])]), +('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [('Name', (1, 6), 'a', ('Param',))], None, None, []), [('Pass', (1, 10))], [])]), +('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [('Name', (1, 6), 'a', ('Param',))], None, None, [('Num', (1, 8), 0)]), [('Pass', (1, 12))], [])]), +('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], 'args', None, []), [('Pass', (1, 14))], [])]), +('Module', [('FunctionDef', (1, 0), 'f', 
('arguments', [], None, 'kwargs', []), [('Pass', (1, 17))], [])]), +('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [('Name', (1, 6), 'a', ('Param',)), ('Name', (1, 9), 'b', ('Param',)), ('Name', (1, 14), 'c', ('Param',)), ('Name', (1, 22), 'd', ('Param',)), ('Name', (1, 28), 'e', ('Param',))], 'args', 'kwargs', [('Num', (1, 11), 1), ('Name', (1, 16), 'None', ('Load',)), ('List', (1, 24), [], ('Load',)), ('Dict', (1, 30), [], [])]), [('Pass', (1, 52))], [])]), ('Module', [('ClassDef', (1, 0), 'C', [], [('Pass', (1, 8))], [])]), +('Module', [('ClassDef', (1, 0), 'C', [('Name', (1, 8), 'object', ('Load',))], [('Pass', (1, 17))], [])]), ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], None, None, []), [('Return', (1, 8), ('Num', (1, 15), 1))], [])]), ('Module', [('Delete', (1, 0), [('Name', (1, 4), 'v', ('Del',))])]), ('Module', [('Assign', (1, 0), [('Name', (1, 0), 'v', ('Store',))], ('Num', (1, 4), 1))]), @@ -355,16 +564,26 @@ ('Module', [('For', (1, 0), ('Tuple', (1, 4), [('Name', (1, 4), 'a', ('Store',)), ('Name', (1, 6), 'b', ('Store',))], ('Store',)), ('Name', (1, 11), 'c', ('Load',)), [('Pass', (1, 14))], [])]), ('Module', [('Expr', (1, 0), ('ListComp', (1, 1), ('Tuple', (1, 2), [('Name', (1, 2), 'a', ('Load',)), ('Name', (1, 4), 'b', ('Load',))], ('Load',)), [('comprehension', ('Tuple', (1, 11), [('Name', (1, 11), 'a', ('Store',)), ('Name', (1, 13), 'b', ('Store',))], ('Store',)), ('Name', (1, 18), 'c', ('Load',)), [])]))]), ('Module', [('Expr', (1, 0), ('GeneratorExp', (1, 1), ('Tuple', (1, 2), [('Name', (1, 2), 'a', ('Load',)), ('Name', (1, 4), 'b', ('Load',))], ('Load',)), [('comprehension', ('Tuple', (1, 11), [('Name', (1, 11), 'a', ('Store',)), ('Name', (1, 13), 'b', ('Store',))], ('Store',)), ('Name', (1, 18), 'c', ('Load',)), [])]))]), +('Module', [('Expr', (1, 0), ('GeneratorExp', (1, 1), ('Tuple', (1, 2), [('Name', (1, 2), 'a', ('Load',)), ('Name', (1, 4), 'b', ('Load',))], ('Load',)), [('comprehension', ('Tuple', (1, 12), [('Name', 
(1, 12), 'a', ('Store',)), ('Name', (1, 14), 'b', ('Store',))], ('Store',)), ('Name', (1, 20), 'c', ('Load',)), [])]))]), +('Module', [('Expr', (1, 0), ('GeneratorExp', (2, 4), ('Tuple', (3, 4), [('Name', (3, 4), 'Aa', ('Load',)), ('Name', (5, 7), 'Bb', ('Load',))], ('Load',)), [('comprehension', ('Tuple', (8, 4), [('Name', (8, 4), 'Aa', ('Store',)), ('Name', (10, 4), 'Bb', ('Store',))], ('Store',)), ('Name', (10, 10), 'Cc', ('Load',)), [])]))]), +('Module', [('Expr', (1, 0), ('DictComp', (1, 1), ('Name', (1, 1), 'a', ('Load',)), ('Name', (1, 5), 'b', ('Load',)), [('comprehension', ('Name', (1, 11), 'w', ('Store',)), ('Name', (1, 16), 'x', ('Load',)), []), ('comprehension', ('Name', (1, 22), 'm', ('Store',)), ('Name', (1, 27), 'p', ('Load',)), [('Name', (1, 32), 'g', ('Load',))])]))]), +('Module', [('Expr', (1, 0), ('DictComp', (1, 1), ('Name', (1, 1), 'a', ('Load',)), ('Name', (1, 5), 'b', ('Load',)), [('comprehension', ('Tuple', (1, 11), [('Name', (1, 11), 'v', ('Store',)), ('Name', (1, 13), 'w', ('Store',))], ('Store',)), ('Name', (1, 18), 'x', ('Load',)), [])]))]), +('Module', [('Expr', (1, 0), ('SetComp', (1, 1), ('Name', (1, 1), 'r', ('Load',)), [('comprehension', ('Name', (1, 7), 'l', ('Store',)), ('Name', (1, 12), 'x', ('Load',)), [('Name', (1, 17), 'g', ('Load',))])]))]), +('Module', [('Expr', (1, 0), ('SetComp', (1, 1), ('Name', (1, 1), 'r', ('Load',)), [('comprehension', ('Tuple', (1, 7), [('Name', (1, 7), 'l', ('Store',)), ('Name', (1, 9), 'm', ('Store',))], ('Store',)), ('Name', (1, 14), 'x', ('Load',)), [])]))]), ] single_results = [ ('Interactive', [('Expr', (1, 0), ('BinOp', (1, 0), ('Num', (1, 0), 1), ('Add',), ('Num', (1, 2), 2)))]), ] eval_results = [ +('Expression', ('Name', (1, 0), 'None', ('Load',))), ('Expression', ('BoolOp', (1, 0), ('And',), [('Name', (1, 0), 'a', ('Load',)), ('Name', (1, 6), 'b', ('Load',))])), ('Expression', ('BinOp', (1, 0), ('Name', (1, 0), 'a', ('Load',)), ('Add',), ('Name', (1, 4), 'b', ('Load',)))), ('Expression', 
('UnaryOp', (1, 0), ('Not',), ('Name', (1, 4), 'v', ('Load',)))), ('Expression', ('Lambda', (1, 0), ('arguments', [], None, None, []), ('Name', (1, 7), 'None', ('Load',)))), ('Expression', ('Dict', (1, 0), [('Num', (1, 2), 1)], [('Num', (1, 4), 2)])), +('Expression', ('Dict', (1, 0), [], [])), +('Expression', ('Set', (1, 0), [('Name', (1, 1), 'None', ('Load',))])), +('Expression', ('Dict', (1, 0), [('Num', (2, 6), 1)], [('Num', (4, 10), 2)])), ('Expression', ('ListComp', (1, 1), ('Name', (1, 1), 'a', ('Load',)), [('comprehension', ('Name', (1, 7), 'b', ('Store',)), ('Name', (1, 12), 'c', ('Load',)), [('Name', (1, 17), 'd', ('Load',))])])), ('Expression', ('GeneratorExp', (1, 1), ('Name', (1, 1), 'a', ('Load',)), [('comprehension', ('Name', (1, 7), 'b', ('Store',)), ('Name', (1, 12), 'c', ('Load',)), [('Name', (1, 17), 'd', ('Load',))])])), ('Expression', ('Compare', (1, 0), ('Num', (1, 0), 1), [('Lt',), ('Lt',)], [('Num', (1, 4), 2), ('Num', (1, 8), 3)])), @@ -376,7 +595,10 @@ ('Expression', ('Subscript', (1, 0), ('Name', (1, 0), 'a', ('Load',)), ('Slice', ('Name', (1, 2), 'b', ('Load',)), ('Name', (1, 4), 'c', ('Load',)), None), ('Load',))), ('Expression', ('Name', (1, 0), 'v', ('Load',))), ('Expression', ('List', (1, 0), [('Num', (1, 1), 1), ('Num', (1, 3), 2), ('Num', (1, 5), 3)], ('Load',))), +('Expression', ('List', (1, 0), [], ('Load',))), ('Expression', ('Tuple', (1, 0), [('Num', (1, 0), 1), ('Num', (1, 2), 2), ('Num', (1, 4), 3)], ('Load',))), +('Expression', ('Tuple', (1, 1), [('Num', (1, 1), 1), ('Num', (1, 3), 2), ('Num', (1, 5), 3)], ('Load',))), +('Expression', ('Tuple', (1, 0), [], ('Load',))), ('Expression', ('Call', (1, 0), ('Attribute', (1, 0), ('Attribute', (1, 0), ('Attribute', (1, 0), ('Name', (1, 0), 'a', ('Load',)), 'b', ('Load',)), 'c', ('Load',)), 'd', ('Load',)), [('Subscript', (1, 8), ('Attribute', (1, 8), ('Name', (1, 8), 'a', ('Load',)), 'b', ('Load',)), ('Slice', ('Num', (1, 12), 1), ('Num', (1, 14), 2), None), ('Load',))], [], None, 
None)), ] main() diff --git a/lib-python/2.7/test/test_builtin.py b/lib-python/2.7/test/test_builtin.py --- a/lib-python/2.7/test/test_builtin.py +++ b/lib-python/2.7/test/test_builtin.py @@ -3,7 +3,8 @@ import platform import unittest from test.test_support import fcmp, have_unicode, TESTFN, unlink, \ - run_unittest, check_py3k_warnings + run_unittest, check_py3k_warnings, \ + check_impl_detail import warnings from operator import neg @@ -247,12 +248,14 @@ self.assertRaises(TypeError, compile) self.assertRaises(ValueError, compile, 'print 42\n', '', 'badmode') self.assertRaises(ValueError, compile, 'print 42\n', '', 'single', 0xff) - self.assertRaises(TypeError, compile, chr(0), 'f', 'exec') + if check_impl_detail(cpython=True): + self.assertRaises(TypeError, compile, chr(0), 'f', 'exec') self.assertRaises(TypeError, compile, 'pass', '?', 'exec', mode='eval', source='0', filename='tmp') if have_unicode: compile(unicode('print u"\xc3\xa5"\n', 'utf8'), '', 'exec') - self.assertRaises(TypeError, compile, unichr(0), 'f', 'exec') + if check_impl_detail(cpython=True): + self.assertRaises(TypeError, compile, unichr(0), 'f', 'exec') self.assertRaises(ValueError, compile, unicode('a = 1'), 'f', 'bad') @@ -395,12 +398,16 @@ self.assertEqual(eval('dir()', g, m), list('xyz')) self.assertEqual(eval('globals()', g, m), g) self.assertEqual(eval('locals()', g, m), m) - self.assertRaises(TypeError, eval, 'a', m) + # on top of CPython, the first dictionary (the globals) has to + # be a real dict. This is not the case on top of PyPy. 
+ if check_impl_detail(pypy=False): + self.assertRaises(TypeError, eval, 'a', m) + class A: "Non-mapping" pass m = A() - self.assertRaises(TypeError, eval, 'a', g, m) + self.assertRaises((TypeError, AttributeError), eval, 'a', g, m) # Verify that dict subclasses work as well class D(dict): @@ -491,9 +498,10 @@ execfile(TESTFN, globals, locals) self.assertEqual(locals['z'], 2) + self.assertRaises(TypeError, execfile, TESTFN, {}, ()) unlink(TESTFN) self.assertRaises(TypeError, execfile) - self.assertRaises(TypeError, execfile, TESTFN, {}, ()) + self.assertRaises((TypeError, IOError), execfile, TESTFN, {}, ()) import os self.assertRaises(IOError, execfile, os.curdir) self.assertRaises(IOError, execfile, "I_dont_exist") @@ -1108,7 +1116,8 @@ def __cmp__(self, other): raise RuntimeError __hash__ = None # Invalid cmp makes this unhashable - self.assertRaises(RuntimeError, range, a, a + 1, badzero(1)) + if check_impl_detail(cpython=True): + self.assertRaises(RuntimeError, range, a, a + 1, badzero(1)) # Reject floats. self.assertRaises(TypeError, range, 1., 1., 1.) diff --git a/lib-python/2.7/test/test_bytes.py b/lib-python/2.7/test/test_bytes.py --- a/lib-python/2.7/test/test_bytes.py +++ b/lib-python/2.7/test/test_bytes.py @@ -694,6 +694,7 @@ self.assertEqual(b, b1) self.assertTrue(b is b1) + @test.test_support.impl_detail("undocumented bytes.__alloc__()") def test_alloc(self): b = bytearray() alloc = b.__alloc__() @@ -821,6 +822,8 @@ self.assertEqual(b, b"") self.assertEqual(c, b"") + @test.test_support.impl_detail( + "resizing semantics of CPython rely on refcounting") def test_resize_forbidden(self): # #4509: can't resize a bytearray when there are buffer exports, even # if it wouldn't reallocate the underlying buffer. 
@@ -853,6 +856,26 @@ self.assertRaises(BufferError, delslice) self.assertEqual(b, orig) + @test.test_support.impl_detail("resizing semantics", cpython=False) + def test_resize_forbidden_non_cpython(self): + # on non-CPython implementations, we cannot prevent changes to + # bytearrays just because there are buffers around. Instead, + # we get (on PyPy) a buffer that follows the changes and resizes. + b = bytearray(range(10)) + for v in [memoryview(b), buffer(b)]: + b[5] = 99 + self.assertIn(v[5], (99, chr(99))) + b[5] = 100 + b += b + b += b + b += b + self.assertEquals(len(v), 80) + self.assertIn(v[5], (100, chr(100))) + self.assertIn(v[79], (9, chr(9))) + del b[10:] + self.assertRaises(IndexError, lambda: v[10]) + self.assertEquals(len(v), 10) + def test_empty_bytearray(self): # Issue #7561: operations on empty bytearrays could crash in many # situations, due to a fragile implementation of the diff --git a/lib-python/2.7/test/test_bz2.py b/lib-python/2.7/test/test_bz2.py --- a/lib-python/2.7/test/test_bz2.py +++ b/lib-python/2.7/test/test_bz2.py @@ -50,6 +50,7 @@ self.filename = TESTFN def tearDown(self): + test_support.gc_collect() if os.path.isfile(self.filename): os.unlink(self.filename) @@ -246,6 +247,8 @@ for i in xrange(10000): o = BZ2File(self.filename) del o + if i % 100 == 0: + test_support.gc_collect() def testOpenNonexistent(self): # "Test opening a nonexistent file" @@ -310,6 +313,7 @@ for t in threads: t.join() + @test_support.impl_detail() def testMixedIterationReads(self): # Issue #8397: mixed iteration and reads should be forbidden. 
with bz2.BZ2File(self.filename, 'wb') as f: diff --git a/lib-python/2.7/test/test_cmd_line_script.py b/lib-python/2.7/test/test_cmd_line_script.py --- a/lib-python/2.7/test/test_cmd_line_script.py +++ b/lib-python/2.7/test/test_cmd_line_script.py @@ -112,6 +112,8 @@ self._check_script(script_dir, script_name, script_dir, '') def test_directory_compiled(self): + if test.test_support.check_impl_detail(pypy=True): + raise unittest.SkipTest("pypy won't load lone .pyc files") with temp_dir() as script_dir: script_name = _make_test_script(script_dir, '__main__') compiled_name = compile_script(script_name) @@ -173,6 +175,8 @@ script_name, 'test_pkg') def test_package_compiled(self): + if test.test_support.check_impl_detail(pypy=True): + raise unittest.SkipTest("pypy won't load lone .pyc files") with temp_dir() as script_dir: pkg_dir = os.path.join(script_dir, 'test_pkg') make_pkg(pkg_dir) diff --git a/lib-python/2.7/test/test_code.py b/lib-python/2.7/test/test_code.py --- a/lib-python/2.7/test/test_code.py +++ b/lib-python/2.7/test/test_code.py @@ -82,7 +82,7 @@ import unittest import weakref -import _testcapi +from test import test_support def consts(t): @@ -104,7 +104,9 @@ class CodeTest(unittest.TestCase): + @test_support.impl_detail("test for PyCode_NewEmpty") def test_newempty(self): + import _testcapi co = _testcapi.code_newempty("filename", "funcname", 15) self.assertEqual(co.co_filename, "filename") self.assertEqual(co.co_name, "funcname") @@ -132,6 +134,7 @@ coderef = weakref.ref(f.__code__, callback) self.assertTrue(bool(coderef())) del f + test_support.gc_collect() self.assertFalse(bool(coderef())) self.assertTrue(self.called) diff --git a/lib-python/2.7/test/test_codeop.py b/lib-python/2.7/test/test_codeop.py --- a/lib-python/2.7/test/test_codeop.py +++ b/lib-python/2.7/test/test_codeop.py @@ -3,7 +3,7 @@ Nick Mathewson """ import unittest -from test.test_support import run_unittest, is_jython +from test.test_support import run_unittest, is_jython, 
check_impl_detail from codeop import compile_command, PyCF_DONT_IMPLY_DEDENT @@ -270,7 +270,9 @@ ai("a = 'a\\\n") ai("a = 1","eval") - ai("a = (","eval") + if check_impl_detail(): # on PyPy it asks for more data, which is not + ai("a = (","eval") # completely correct but hard to fix and + # really a detail (in my opinion ) ai("]","eval") ai("())","eval") ai("[}","eval") diff --git a/lib-python/2.7/test/test_coercion.py b/lib-python/2.7/test/test_coercion.py --- a/lib-python/2.7/test/test_coercion.py +++ b/lib-python/2.7/test/test_coercion.py @@ -1,6 +1,7 @@ import copy import unittest -from test.test_support import run_unittest, TestFailed, check_warnings +from test.test_support import ( + run_unittest, TestFailed, check_warnings, check_impl_detail) # Fake a number that implements numeric methods through __coerce__ @@ -306,12 +307,18 @@ self.assertNotEqual(cmp(u'fish', evil_coercer), 0) self.assertNotEqual(cmp(slice(1), evil_coercer), 0) # ...but that this still works - class WackyComparer(object): - def __cmp__(slf, other): - self.assertTrue(other == 42, 'expected evil_coercer, got %r' % other) - return 0 - __hash__ = None # Invalid cmp makes this unhashable - self.assertEqual(cmp(WackyComparer(), evil_coercer), 0) + if check_impl_detail(): + # NB. I (arigo) would consider the following as implementation- + # specific. For example, in CPython, if we replace 42 with 42.0 + # both below and in CoerceTo() above, then the test fails. This + # hints that the behavior is really dependent on some obscure + # internal details. 
+ class WackyComparer(object): + def __cmp__(slf, other): + self.assertTrue(other == 42, 'expected evil_coercer, got %r' % other) + return 0 + __hash__ = None # Invalid cmp makes this unhashable + self.assertEqual(cmp(WackyComparer(), evil_coercer), 0) # ...and classic classes too, since that code path is a little different class ClassicWackyComparer: def __cmp__(slf, other): diff --git a/lib-python/2.7/test/test_compile.py b/lib-python/2.7/test/test_compile.py --- a/lib-python/2.7/test/test_compile.py +++ b/lib-python/2.7/test/test_compile.py @@ -3,6 +3,7 @@ import _ast from test import test_support import textwrap +from test.test_support import check_impl_detail class TestSpecifics(unittest.TestCase): @@ -90,12 +91,13 @@ self.assertEqual(m.results, ('z', g)) exec 'z = locals()' in g, m self.assertEqual(m.results, ('z', m)) - try: - exec 'z = b' in m - except TypeError: - pass - else: - self.fail('Did not validate globals as a real dict') + if check_impl_detail(): + try: + exec 'z = b' in m + except TypeError: + pass + else: + self.fail('Did not validate globals as a real dict') class A: "Non-mapping" diff --git a/lib-python/2.7/test/test_copy.py b/lib-python/2.7/test/test_copy.py --- a/lib-python/2.7/test/test_copy.py +++ b/lib-python/2.7/test/test_copy.py @@ -637,6 +637,7 @@ self.assertEqual(v[c], d) self.assertEqual(len(v), 2) del c, d + test_support.gc_collect() self.assertEqual(len(v), 1) x, y = C(), C() # The underlying containers are decoupled @@ -666,6 +667,7 @@ self.assertEqual(v[a].i, b.i) self.assertEqual(v[c].i, d.i) del c + test_support.gc_collect() self.assertEqual(len(v), 1) def test_deepcopy_weakvaluedict(self): @@ -689,6 +691,7 @@ self.assertTrue(t is d) del x, y, z, t del d + test_support.gc_collect() self.assertEqual(len(v), 1) def test_deepcopy_bound_method(self): diff --git a/lib-python/2.7/test/test_cpickle.py b/lib-python/2.7/test/test_cpickle.py --- a/lib-python/2.7/test/test_cpickle.py +++ b/lib-python/2.7/test/test_cpickle.py @@ -61,27 
+61,27 @@ error = cPickle.BadPickleGet def test_recursive_list(self): - self.assertRaises(ValueError, + self.assertRaises((ValueError, RuntimeError), AbstractPickleTests.test_recursive_list, self) def test_recursive_tuple(self): - self.assertRaises(ValueError, + self.assertRaises((ValueError, RuntimeError), AbstractPickleTests.test_recursive_tuple, self) def test_recursive_inst(self): - self.assertRaises(ValueError, + self.assertRaises((ValueError, RuntimeError), AbstractPickleTests.test_recursive_inst, self) def test_recursive_dict(self): - self.assertRaises(ValueError, + self.assertRaises((ValueError, RuntimeError), AbstractPickleTests.test_recursive_dict, self) def test_recursive_multi(self): - self.assertRaises(ValueError, + self.assertRaises((ValueError, RuntimeError), AbstractPickleTests.test_recursive_multi, self) diff --git a/lib-python/2.7/test/test_csv.py b/lib-python/2.7/test/test_csv.py --- a/lib-python/2.7/test/test_csv.py +++ b/lib-python/2.7/test/test_csv.py @@ -54,8 +54,10 @@ self.assertEqual(obj.dialect.skipinitialspace, False) self.assertEqual(obj.dialect.strict, False) # Try deleting or changing attributes (they are read-only) - self.assertRaises(TypeError, delattr, obj.dialect, 'delimiter') - self.assertRaises(TypeError, setattr, obj.dialect, 'delimiter', ':') + self.assertRaises((TypeError, AttributeError), delattr, obj.dialect, + 'delimiter') + self.assertRaises((TypeError, AttributeError), setattr, obj.dialect, + 'delimiter', ':') self.assertRaises(AttributeError, delattr, obj.dialect, 'quoting') self.assertRaises(AttributeError, setattr, obj.dialect, 'quoting', None) diff --git a/lib-python/2.7/test/test_deque.py b/lib-python/2.7/test/test_deque.py --- a/lib-python/2.7/test/test_deque.py +++ b/lib-python/2.7/test/test_deque.py @@ -109,7 +109,7 @@ self.assertEqual(deque('abc', maxlen=4).maxlen, 4) self.assertEqual(deque('abc', maxlen=2).maxlen, 2) self.assertEqual(deque('abc', maxlen=0).maxlen, 0) - with self.assertRaises(AttributeError): + 
with self.assertRaises((AttributeError, TypeError)): d = deque('abc') d.maxlen = 10 @@ -352,7 +352,10 @@ for match in (True, False): d = deque(['ab']) d.extend([MutateCmp(d, match), 'c']) - self.assertRaises(IndexError, d.remove, 'c') + # On CPython we get IndexError: deque mutated during remove(). + # Why is it an IndexError during remove() only??? + # On PyPy it is a RuntimeError, as in the other operations. + self.assertRaises((IndexError, RuntimeError), d.remove, 'c') self.assertEqual(d, deque()) def test_repr(self): @@ -514,7 +517,7 @@ container = reversed(deque([obj, 1])) obj.x = iter(container) del obj, container - gc.collect() + test_support.gc_collect() self.assertTrue(ref() is None, "Cycle was not collected") class TestVariousIteratorArgs(unittest.TestCase): @@ -630,6 +633,7 @@ p = weakref.proxy(d) self.assertEqual(str(p), str(d)) d = None + test_support.gc_collect() self.assertRaises(ReferenceError, str, p) def test_strange_subclass(self): diff --git a/lib-python/2.7/test/test_descr.py b/lib-python/2.7/test/test_descr.py --- a/lib-python/2.7/test/test_descr.py +++ b/lib-python/2.7/test/test_descr.py @@ -2,6 +2,7 @@ import sys import types import unittest +import popen2 # trigger early the warning from popen2.py from copy import deepcopy from test import test_support @@ -1128,7 +1129,7 @@ # Test lookup leaks [SF bug 572567] import gc - if hasattr(gc, 'get_objects'): + if test_support.check_impl_detail(): class G(object): def __cmp__(self, other): return 0 @@ -1741,6 +1742,10 @@ raise MyException for name, runner, meth_impl, ok, env in specials: + if name == '__length_hint__' or name == '__sizeof__': + if not test_support.check_impl_detail(): + continue + class X(Checker): pass for attr, obj in env.iteritems(): @@ -1980,7 +1985,9 @@ except TypeError, msg: self.assertTrue(str(msg).find("weak reference") >= 0) else: - self.fail("weakref.ref(no) should be illegal") + if test_support.check_impl_detail(pypy=False): + self.fail("weakref.ref(no) should be 
illegal") + #else: pypy supports taking weakrefs to some more objects class Weak(object): __slots__ = ['foo', '__weakref__'] yes = Weak() @@ -3092,7 +3099,16 @@ class R(J): __slots__ = ["__dict__", "__weakref__"] - for cls, cls2 in ((G, H), (G, I), (I, H), (Q, R), (R, Q)): + if test_support.check_impl_detail(pypy=False): + lst = ((G, H), (G, I), (I, H), (Q, R), (R, Q)) + else: + # Not supported in pypy: changing the __class__ of an object + # to another __class__ that just happens to have the same slots. + # If needed, we can add the feature, but what we'll likely do + # then is to allow mostly any __class__ assignment, even if the + # classes have different __slots__, because we it's easier. + lst = ((Q, R), (R, Q)) + for cls, cls2 in lst: x = cls() x.a = 1 x.__class__ = cls2 @@ -3175,7 +3191,8 @@ except TypeError: pass else: - self.fail("%r's __dict__ can be modified" % cls) + if test_support.check_impl_detail(pypy=False): + self.fail("%r's __dict__ can be modified" % cls) # Modules also disallow __dict__ assignment class Module1(types.ModuleType, Base): @@ -4383,13 +4400,10 @@ self.assertTrue(l.__add__ != [5].__add__) self.assertTrue(l.__add__ != l.__mul__) self.assertTrue(l.__add__.__name__ == '__add__') - if hasattr(l.__add__, '__self__'): - # CPython - self.assertTrue(l.__add__.__self__ is l) + self.assertTrue(l.__add__.__self__ is l) + if hasattr(l.__add__, '__objclass__'): # CPython self.assertTrue(l.__add__.__objclass__ is list) - else: - # Python implementations where [].__add__ is a normal bound method - self.assertTrue(l.__add__.im_self is l) + else: # PyPy self.assertTrue(l.__add__.im_class is list) self.assertEqual(l.__add__.__doc__, list.__add__.__doc__) try: @@ -4578,8 +4592,12 @@ str.split(fake_str) # call a slot wrapper descriptor - with self.assertRaises(TypeError): - str.__add__(fake_str, "abc") + try: + r = str.__add__(fake_str, "abc") + except TypeError: + pass + else: + self.assertEqual(r, NotImplemented) class 
DictProxyTests(unittest.TestCase): diff --git a/lib-python/2.7/test/test_descrtut.py b/lib-python/2.7/test/test_descrtut.py --- a/lib-python/2.7/test/test_descrtut.py +++ b/lib-python/2.7/test/test_descrtut.py @@ -172,46 +172,12 @@ AttributeError: 'list' object has no attribute '__methods__' >>> -Instead, you can get the same information from the list type: +Instead, you can get the same information from the list type +(the following example filters out the numerous method names +starting with '_'): - >>> pprint.pprint(dir(list)) # like list.__dict__.keys(), but sorted - ['__add__', - '__class__', - '__contains__', - '__delattr__', - '__delitem__', - '__delslice__', - '__doc__', - '__eq__', - '__format__', - '__ge__', - '__getattribute__', - '__getitem__', - '__getslice__', - '__gt__', - '__hash__', - '__iadd__', - '__imul__', - '__init__', - '__iter__', - '__le__', - '__len__', - '__lt__', - '__mul__', - '__ne__', - '__new__', - '__reduce__', - '__reduce_ex__', - '__repr__', - '__reversed__', - '__rmul__', - '__setattr__', - '__setitem__', - '__setslice__', - '__sizeof__', - '__str__', - '__subclasshook__', - 'append', + >>> pprint.pprint([name for name in dir(list) if not name.startswith('_')]) + ['append', 'count', 'extend', 'index', diff --git a/lib-python/2.7/test/test_dict.py b/lib-python/2.7/test/test_dict.py --- a/lib-python/2.7/test/test_dict.py +++ b/lib-python/2.7/test/test_dict.py @@ -319,7 +319,8 @@ self.assertEqual(va, int(ka)) kb, vb = tb = b.popitem() self.assertEqual(vb, int(kb)) - self.assertFalse(copymode < 0 and ta != tb) + if test_support.check_impl_detail(): + self.assertFalse(copymode < 0 and ta != tb) self.assertFalse(a) self.assertFalse(b) diff --git a/lib-python/2.7/test/test_dis.py b/lib-python/2.7/test/test_dis.py --- a/lib-python/2.7/test/test_dis.py +++ b/lib-python/2.7/test/test_dis.py @@ -56,8 +56,8 @@ %-4d 0 LOAD_CONST 1 (0) 3 POP_JUMP_IF_TRUE 38 6 LOAD_GLOBAL 0 (AssertionError) - 9 BUILD_LIST 0 - 12 LOAD_FAST 0 (x) + 9 LOAD_FAST 0 
(x) + 12 BUILD_LIST_FROM_ARG 0 15 GET_ITER >> 16 FOR_ITER 12 (to 31) 19 STORE_FAST 1 (s) diff --git a/lib-python/2.7/test/test_doctest.py b/lib-python/2.7/test/test_doctest.py --- a/lib-python/2.7/test/test_doctest.py +++ b/lib-python/2.7/test/test_doctest.py @@ -782,7 +782,7 @@ ... >>> x = 12 ... >>> print x//0 ... Traceback (most recent call last): - ... ZeroDivisionError: integer division or modulo by zero + ... ZeroDivisionError: integer division by zero ... ''' >>> test = doctest.DocTestFinder().find(f)[0] >>> doctest.DocTestRunner(verbose=False).run(test) @@ -799,7 +799,7 @@ ... >>> print 'pre-exception output', x//0 ... pre-exception output ... Traceback (most recent call last): - ... ZeroDivisionError: integer division or modulo by zero + ... ZeroDivisionError: integer division by zero ... ''' >>> test = doctest.DocTestFinder().find(f)[0] >>> doctest.DocTestRunner(verbose=False).run(test) @@ -810,7 +810,7 @@ print 'pre-exception output', x//0 Exception raised: ... - ZeroDivisionError: integer division or modulo by zero + ZeroDivisionError: integer division by zero TestResults(failed=1, attempted=2) Exception messages may contain newlines: @@ -978,7 +978,7 @@ Exception raised: Traceback (most recent call last): ... - ZeroDivisionError: integer division or modulo by zero + ZeroDivisionError: integer division by zero TestResults(failed=1, attempted=1) """ def displayhook(): r""" @@ -1924,7 +1924,7 @@ > (1)() -> calls_set_trace() (Pdb) print foo - *** NameError: name 'foo' is not defined + *** NameError: global name 'foo' is not defined (Pdb) continue TestResults(failed=0, attempted=2) """ @@ -2229,7 +2229,7 @@ favorite_color Exception raised: ... - NameError: name 'favorite_color' is not defined + NameError: global name 'favorite_color' is not defined @@ -2289,7 +2289,7 @@ favorite_color Exception raised: ... 
- NameError: name 'favorite_color' is not defined + NameError: global name 'favorite_color' is not defined ********************************************************************** 1 items had failures: 1 of 2 in test_doctest.txt @@ -2382,7 +2382,7 @@ favorite_color Exception raised: ... - NameError: name 'favorite_color' is not defined + NameError: global name 'favorite_color' is not defined TestResults(failed=1, attempted=2) >>> doctest.master = None # Reset master. diff --git a/lib-python/2.7/test/test_dumbdbm.py b/lib-python/2.7/test/test_dumbdbm.py --- a/lib-python/2.7/test/test_dumbdbm.py +++ b/lib-python/2.7/test/test_dumbdbm.py @@ -107,9 +107,11 @@ f.close() # Mangle the file by adding \r before each newline - data = open(_fname + '.dir').read() + with open(_fname + '.dir') as f: + data = f.read() data = data.replace('\n', '\r\n') - open(_fname + '.dir', 'wb').write(data) + with open(_fname + '.dir', 'wb') as f: + f.write(data) f = dumbdbm.open(_fname) self.assertEqual(f['1'], 'hello') diff --git a/lib-python/2.7/test/test_extcall.py b/lib-python/2.7/test/test_extcall.py --- a/lib-python/2.7/test/test_extcall.py +++ b/lib-python/2.7/test/test_extcall.py @@ -90,19 +90,19 @@ >>> class Nothing: pass ... - >>> g(*Nothing()) + >>> g(*Nothing()) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: g() argument after * must be a sequence, not instance + TypeError: ...argument after * must be a sequence, not instance >>> class Nothing: ... def __len__(self): return 5 ... - >>> g(*Nothing()) + >>> g(*Nothing()) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: g() argument after * must be a sequence, not instance + TypeError: ...argument after * must be a sequence, not instance >>> class Nothing(): ... def __len__(self): return 5 @@ -154,52 +154,50 @@ ... TypeError: g() got multiple values for keyword argument 'x' - >>> f(**{1:2}) + >>> f(**{1:2}) #doctest: +ELLIPSIS Traceback (most recent call last): ... 
- TypeError: f() keywords must be strings + TypeError: ...keywords must be strings >>> h(**{'e': 2}) Traceback (most recent call last): ... TypeError: h() got an unexpected keyword argument 'e' - >>> h(*h) + >>> h(*h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: h() argument after * must be a sequence, not function + TypeError: ...argument after * must be a sequence, not function - >>> dir(*h) + >>> dir(*h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: dir() argument after * must be a sequence, not function + TypeError: ...argument after * must be a sequence, not function - >>> None(*h) + >>> None(*h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: NoneType object argument after * must be a sequence, \ -not function + TypeError: ...argument after * must be a sequence, not function - >>> h(**h) + >>> h(**h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: h() argument after ** must be a mapping, not function + TypeError: ...argument after ** must be a mapping, not function - >>> dir(**h) + >>> dir(**h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: dir() argument after ** must be a mapping, not function + TypeError: ...argument after ** must be a mapping, not function - >>> None(**h) + >>> None(**h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: NoneType object argument after ** must be a mapping, \ -not function + TypeError: ...argument after ** must be a mapping, not function - >>> dir(b=1, **{'b': 1}) + >>> dir(b=1, **{'b': 1}) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: dir() got multiple values for keyword argument 'b' + TypeError: ...got multiple values for keyword argument 'b' Another helper function @@ -247,10 +245,10 @@ ... False True - >>> id(1, **{'foo': 1}) + >>> id(1, **{'foo': 1}) #doctest: +ELLIPSIS Traceback (most recent call last): ... 
- TypeError: id() takes no keyword arguments + TypeError: id() ... keyword argument... A corner case of keyword dictionary items being deleted during the function call setup. See . diff --git a/lib-python/2.7/test/test_fcntl.py b/lib-python/2.7/test/test_fcntl.py --- a/lib-python/2.7/test/test_fcntl.py +++ b/lib-python/2.7/test/test_fcntl.py @@ -32,7 +32,7 @@ 'freebsd2', 'freebsd3', 'freebsd4', 'freebsd5', 'freebsd6', 'freebsd7', 'freebsd8', 'bsdos2', 'bsdos3', 'bsdos4', - 'openbsd', 'openbsd2', 'openbsd3', 'openbsd4'): + 'openbsd', 'openbsd2', 'openbsd3', 'openbsd4', 'openbsd5'): if struct.calcsize('l') == 8: off_t = 'l' pid_t = 'i' diff --git a/lib-python/2.7/test/test_file.py b/lib-python/2.7/test/test_file.py --- a/lib-python/2.7/test/test_file.py +++ b/lib-python/2.7/test/test_file.py @@ -12,7 +12,7 @@ import io import _pyio as pyio -from test.test_support import TESTFN, run_unittest +from test.test_support import TESTFN, run_unittest, gc_collect from UserList import UserList class AutoFileTests(unittest.TestCase): @@ -33,6 +33,7 @@ self.assertEqual(self.f.tell(), p.tell()) self.f.close() self.f = None + gc_collect() self.assertRaises(ReferenceError, getattr, p, 'tell') def testAttributes(self): @@ -157,7 +158,12 @@ def testStdin(self): # This causes the interpreter to exit on OSF1 v5.1. if sys.platform != 'osf1V5': - self.assertRaises((IOError, ValueError), sys.stdin.seek, -1) + if sys.stdin.isatty(): + self.assertRaises((IOError, ValueError), sys.stdin.seek, -1) + else: + print(( + ' Skipping sys.stdin.seek(-1): stdin is not a tty.' + ' Test manually.'), file=sys.__stdout__) else: print(( ' Skipping sys.stdin.seek(-1), it may crash the interpreter.' 
diff --git a/lib-python/2.7/test/test_file2k.py b/lib-python/2.7/test/test_file2k.py --- a/lib-python/2.7/test/test_file2k.py +++ b/lib-python/2.7/test/test_file2k.py @@ -11,7 +11,7 @@ threading = None from test import test_support -from test.test_support import TESTFN, run_unittest +from test.test_support import TESTFN, run_unittest, gc_collect from UserList import UserList class AutoFileTests(unittest.TestCase): @@ -32,6 +32,7 @@ self.assertEqual(self.f.tell(), p.tell()) self.f.close() self.f = None + gc_collect() self.assertRaises(ReferenceError, getattr, p, 'tell') def testAttributes(self): @@ -116,8 +117,12 @@ for methodname in methods: method = getattr(self.f, methodname) + args = {'readinto': (bytearray(''),), + 'seek': (0,), + 'write': ('',), + }.get(methodname, ()) # should raise on closed file - self.assertRaises(ValueError, method) + self.assertRaises(ValueError, method, *args) with test_support.check_py3k_warnings(): for methodname in deprecated_methods: method = getattr(self.f, methodname) @@ -216,7 +221,12 @@ def testStdin(self): # This causes the interpreter to exit on OSF1 v5.1. if sys.platform != 'osf1V5': - self.assertRaises(IOError, sys.stdin.seek, -1) + if sys.stdin.isatty(): + self.assertRaises(IOError, sys.stdin.seek, -1) + else: + print >>sys.__stdout__, ( + ' Skipping sys.stdin.seek(-1): stdin is not a tty.' + ' Test manually.') else: print >>sys.__stdout__, ( ' Skipping sys.stdin.seek(-1), it may crash the interpreter.' @@ -336,8 +346,9 @@ except ValueError: pass else: - self.fail("%s%r after next() didn't raise ValueError" % - (methodname, args)) + if test_support.check_impl_detail(): + self.fail("%s%r after next() didn't raise ValueError" % + (methodname, args)) f.close() # Test to see if harmless (by accident) mixing of read* and @@ -388,6 +399,7 @@ if lines != testlines: self.fail("readlines() after next() with empty buffer " "failed. 
Got %r, expected %r" % (line, testline)) + f.close() # Reading after iteration hit EOF shouldn't hurt either f = open(TESTFN) try: @@ -438,6 +450,9 @@ self.close_count = 0 self.close_success_count = 0 self.use_buffering = False + # to prevent running out of file descriptors on PyPy, + # we only keep the 50 most recent files open + self.all_files = [None] * 50 def tearDown(self): if self.f: @@ -453,9 +468,14 @@ def _create_file(self): if self.use_buffering: - self.f = open(self.filename, "w+", buffering=1024*16) + f = open(self.filename, "w+", buffering=1024*16) else: - self.f = open(self.filename, "w+") + f = open(self.filename, "w+") + self.f = f + self.all_files.append(f) + oldf = self.all_files.pop(0) + if oldf is not None: + oldf.close() def _close_file(self): with self._count_lock: @@ -496,7 +516,6 @@ def _test_close_open_io(self, io_func, nb_workers=5): def worker(): - self._create_file() funcs = itertools.cycle(( lambda: io_func(), lambda: self._close_and_reopen_file(), @@ -508,7 +527,11 @@ f() except (IOError, ValueError): pass + self._create_file() self._run_workers(worker, nb_workers) + # make sure that all files can be closed now + del self.all_files + gc_collect() if test_support.verbose: # Useful verbose statistics when tuning this test to take # less time to run but still ensuring that its still useful. 
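The test_file2k change above avoids running out of file descriptors on a non-refcounting GC by keeping only the 50 most recently opened files open, closing the oldest on each new open. The same rotation can be sketched as a standalone helper (`FilePool` is a hypothetical name, not part of the patch):

```python
import os
import tempfile

class FilePool(object):
    """Keep at most `limit` files open: opening a new file evicts and
    closes the oldest one still tracked, so unreachable-but-uncollected
    file objects cannot pile up file descriptors."""

    def __init__(self, limit=50):
        # Pre-fill with None so the list length stays constant:
        # one append is always balanced by one pop(0).
        self.all_files = [None] * limit

    def open(self, path, mode="w+"):
        f = open(path, mode)
        self.all_files.append(f)
        oldest = self.all_files.pop(0)
        if oldest is not None:
            oldest.close()
        return f

tmpdir = tempfile.mkdtemp()
pool = FilePool(limit=2)
f1 = pool.open(os.path.join(tmpdir, "a.txt"))
f2 = pool.open(os.path.join(tmpdir, "b.txt"))
f3 = pool.open(os.path.join(tmpdir, "c.txt"))  # evicts and closes f1
print(f1.closed, f2.closed, f3.closed)  # -> True False False
```

Note that the pool closes files deterministically regardless of when `__del__` would eventually run, which is exactly the property the PyPy test needs.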
diff --git a/lib-python/2.7/test/test_fileio.py b/lib-python/2.7/test/test_fileio.py --- a/lib-python/2.7/test/test_fileio.py +++ b/lib-python/2.7/test/test_fileio.py @@ -12,6 +12,7 @@ from test.test_support import TESTFN, check_warnings, run_unittest, make_bad_fd from test.test_support import py3k_bytes as bytes +from test.test_support import gc_collect from test.script_helper import run_python from _io import FileIO as _FileIO @@ -34,6 +35,7 @@ self.assertEqual(self.f.tell(), p.tell()) self.f.close() self.f = None + gc_collect() self.assertRaises(ReferenceError, getattr, p, 'tell') def testSeekTell(self): @@ -104,8 +106,8 @@ self.assertTrue(f.closed) def testMethods(self): - methods = ['fileno', 'isatty', 'read', 'readinto', - 'seek', 'tell', 'truncate', 'write', 'seekable', + methods = ['fileno', 'isatty', 'read', + 'tell', 'truncate', 'seekable', 'readable', 'writable'] if sys.platform.startswith('atheos'): methods.remove('truncate') @@ -117,6 +119,10 @@ method = getattr(self.f, methodname) # should raise on closed file self.assertRaises(ValueError, method) + # methods with one argument + self.assertRaises(ValueError, self.f.readinto, 0) + self.assertRaises(ValueError, self.f.write, 0) + self.assertRaises(ValueError, self.f.seek, 0) def testOpendir(self): # Issue 3703: opening a directory should fill the errno diff --git a/lib-python/2.7/test/test_format.py b/lib-python/2.7/test/test_format.py --- a/lib-python/2.7/test/test_format.py +++ b/lib-python/2.7/test/test_format.py @@ -242,7 +242,7 @@ try: testformat(formatstr, args) except exception, exc: - if str(exc) == excmsg: + if str(exc) == excmsg or not test_support.check_impl_detail(): if verbose: print "yes" else: @@ -272,13 +272,16 @@ test_exc(u'no format', u'1', TypeError, "not all arguments converted during string formatting") - class Foobar(long): - def __oct__(self): - # Returning a non-string should not blow up. 
- return self + 1 - - test_exc('%o', Foobar(), TypeError, - "expected string or Unicode object, long found") + if test_support.check_impl_detail(): + # __oct__() is called if Foobar inherits from 'long', but + # not, say, 'object' or 'int' or 'str'. This seems strange + # enough to consider it a complete implementation detail. + class Foobar(long): + def __oct__(self): + # Returning a non-string should not blow up. + return self + 1 + test_exc('%o', Foobar(), TypeError, + "expected string or Unicode object, long found") if maxsize == 2**31-1: # crashes 2.2.1 and earlier: diff --git a/lib-python/2.7/test/test_funcattrs.py b/lib-python/2.7/test/test_funcattrs.py --- a/lib-python/2.7/test/test_funcattrs.py +++ b/lib-python/2.7/test/test_funcattrs.py @@ -14,6 +14,8 @@ self.b = b def cannot_set_attr(self, obj, name, value, exceptions): + if not test_support.check_impl_detail(): + exceptions = (TypeError, AttributeError) # Helper method for other tests. try: setattr(obj, name, value) @@ -286,13 +288,13 @@ def test_delete_func_dict(self): try: del self.b.__dict__ - except TypeError: + except (AttributeError, TypeError): pass else: self.fail("deleting function dictionary should raise TypeError") try: del self.b.func_dict - except TypeError: + except (AttributeError, TypeError): pass else: self.fail("deleting function dictionary should raise TypeError") diff --git a/lib-python/2.7/test/test_functools.py b/lib-python/2.7/test/test_functools.py --- a/lib-python/2.7/test/test_functools.py +++ b/lib-python/2.7/test/test_functools.py @@ -45,6 +45,8 @@ # attributes should not be writable if not isinstance(self.thetype, type): return + if not test_support.check_impl_detail(): + return self.assertRaises(TypeError, setattr, p, 'func', map) self.assertRaises(TypeError, setattr, p, 'args', (1, 2)) self.assertRaises(TypeError, setattr, p, 'keywords', dict(a=1, b=2)) @@ -136,6 +138,7 @@ p = proxy(f) self.assertEqual(f.func, p.func) f = None + test_support.gc_collect() 
self.assertRaises(ReferenceError, getattr, p, 'func') def test_with_bound_and_unbound_methods(self): @@ -172,7 +175,7 @@ updated=functools.WRAPPER_UPDATES): # Check attributes were assigned for name in assigned: - self.assertTrue(getattr(wrapper, name) is getattr(wrapped, name)) + self.assertTrue(getattr(wrapper, name) == getattr(wrapped, name), name) # Check attributes were updated for name in updated: wrapper_attr = getattr(wrapper, name) diff --git a/lib-python/2.7/test/test_generators.py b/lib-python/2.7/test/test_generators.py --- a/lib-python/2.7/test/test_generators.py +++ b/lib-python/2.7/test/test_generators.py @@ -190,7 +190,7 @@ File "", line 1, in ? File "", line 2, in g File "", line 2, in f - ZeroDivisionError: integer division or modulo by zero + ZeroDivisionError: integer division by zero >>> k.next() # and the generator cannot be resumed Traceback (most recent call last): File "", line 1, in ? @@ -733,14 +733,16 @@ ... yield 1 Traceback (most recent call last): .. -SyntaxError: 'return' with argument inside generator (, line 3) + File "", line 3 +SyntaxError: 'return' with argument inside generator >>> def f(): ... yield 1 ... return 22 Traceback (most recent call last): .. -SyntaxError: 'return' with argument inside generator (, line 3) + File "", line 3 +SyntaxError: 'return' with argument inside generator "return None" is not the same as "return" in a generator: @@ -749,7 +751,8 @@ ... return None Traceback (most recent call last): .. -SyntaxError: 'return' with argument inside generator (, line 3) + File "", line 3 +SyntaxError: 'return' with argument inside generator These are fine: @@ -878,7 +881,9 @@ ... if 0: ... yield 2 # because it's a generator (line 10) Traceback (most recent call last): -SyntaxError: 'return' with argument inside generator (, line 10) + ... 
+ File "", line 10 +SyntaxError: 'return' with argument inside generator This one caused a crash (see SF bug 567538): @@ -1496,6 +1501,10 @@ """ coroutine_tests = """\ +A helper function to call gc.collect() without printing +>>> import gc +>>> def gc_collect(): gc.collect() + Sending a value into a started generator: >>> def f(): @@ -1570,13 +1579,14 @@ >>> def f(): return lambda x=(yield): 1 Traceback (most recent call last): ... -SyntaxError: 'return' with argument inside generator (, line 1) + File "", line 1 +SyntaxError: 'return' with argument inside generator >>> def f(): x = yield = y Traceback (most recent call last): ... File "", line 1 -SyntaxError: assignment to yield expression not possible +SyntaxError: can't assign to yield expression >>> def f(): (yield bar) = y Traceback (most recent call last): @@ -1665,7 +1675,7 @@ >>> f().throw("abc") # throw on just-opened generator Traceback (most recent call last): ... -TypeError: exceptions must be classes, or instances, not str +TypeError: exceptions must be old-style classes or derived from BaseException, not str Now let's try closing a generator: @@ -1697,7 +1707,7 @@ >>> g = f() >>> g.next() ->>> del g +>>> del g; gc_collect() exiting >>> class context(object): @@ -1708,7 +1718,7 @@ ... yield >>> g = f() >>> g.next() ->>> del g +>>> del g; gc_collect() exiting @@ -1721,7 +1731,7 @@ >>> g = f() >>> g.next() ->>> del g +>>> del g; gc_collect() finally @@ -1747,6 +1757,7 @@ >>> g = f() >>> g.next() >>> del g +>>> gc_collect() >>> sys.stderr.getvalue().startswith( ... "Exception RuntimeError: 'generator ignored GeneratorExit' in " ... ) @@ -1812,6 +1823,9 @@ references. We add it to the standard suite so the routine refleak-tests would trigger if it starts being uncleanable again. +>>> import gc +>>> def gc_collect(): gc.collect() + >>> import itertools >>> def leak(): ... class gen: @@ -1863,9 +1877,10 @@ ... ... l = Leaker() ... del l +... gc_collect() ... err = sys.stderr.getvalue().strip() ... 
err.startswith( -... "Exception RuntimeError: RuntimeError() in <" +... "Exception RuntimeError: RuntimeError() in " ... ) ... err.endswith("> ignored") ... len(err.splitlines()) diff --git a/lib-python/2.7/test/test_genexps.py b/lib-python/2.7/test/test_genexps.py --- a/lib-python/2.7/test/test_genexps.py +++ b/lib-python/2.7/test/test_genexps.py @@ -128,8 +128,9 @@ Verify re-use of tuples (a side benefit of using genexps over listcomps) + >>> from test.test_support import check_impl_detail >>> tupleids = map(id, ((i,i) for i in xrange(10))) - >>> int(max(tupleids) - min(tupleids)) + >>> int(max(tupleids) - min(tupleids)) if check_impl_detail() else 0 0 Verify that syntax error's are raised for genexps used as lvalues @@ -198,13 +199,13 @@ >>> g = (10 // i for i in (5, 0, 2)) >>> g.next() 2 - >>> g.next() + >>> g.next() # doctest: +ELLIPSIS Traceback (most recent call last): File "", line 1, in -toplevel- g.next() File "", line 1, in g = (10 // i for i in (5, 0, 2)) - ZeroDivisionError: integer division or modulo by zero + ZeroDivisionError: integer division...by zero >>> g.next() Traceback (most recent call last): File "", line 1, in -toplevel- diff --git a/lib-python/2.7/test/test_heapq.py b/lib-python/2.7/test/test_heapq.py --- a/lib-python/2.7/test/test_heapq.py +++ b/lib-python/2.7/test/test_heapq.py @@ -215,6 +215,11 @@ class TestHeapPython(TestHeap): module = py_heapq + def test_islice_protection(self): + m = self.module + self.assertFalse(m.nsmallest(-1, [1])) + self.assertFalse(m.nlargest(-1, [1])) + @skipUnless(c_heapq, 'requires _heapq') class TestHeapC(TestHeap): diff --git a/lib-python/2.7/test/test_import.py b/lib-python/2.7/test/test_import.py --- a/lib-python/2.7/test/test_import.py +++ b/lib-python/2.7/test/test_import.py @@ -7,7 +7,8 @@ import sys import unittest from test.test_support import (unlink, TESTFN, unload, run_unittest, rmtree, - is_jython, check_warnings, EnvironmentVarGuard) + is_jython, check_warnings, EnvironmentVarGuard, + 
impl_detail, check_impl_detail) import textwrap from test import script_helper @@ -69,7 +70,8 @@ self.assertEqual(mod.b, b, "module loaded (%s) but contents invalid" % mod) finally: - unlink(source) + if check_impl_detail(pypy=False): + unlink(source) try: imp.reload(mod) @@ -149,13 +151,16 @@ # Compile & remove .py file, we only need .pyc (or .pyo). with open(filename, 'r') as f: py_compile.compile(filename) - unlink(filename) + if check_impl_detail(pypy=False): + # pypy refuses to import a .pyc if the .py does not exist + unlink(filename) # Need to be able to load from current dir. sys.path.append('') # This used to crash. exec 'import ' + module + reload(longlist) # Cleanup. del sys.path[-1] @@ -326,6 +331,7 @@ self.assertEqual(mod.code_filename, self.file_name) self.assertEqual(mod.func_filename, self.file_name) + @impl_detail("pypy refuses to import without a .py source", pypy=False) def test_module_without_source(self): target = "another_module.py" py_compile.compile(self.file_name, dfile=target) diff --git a/lib-python/2.7/test/test_inspect.py b/lib-python/2.7/test/test_inspect.py --- a/lib-python/2.7/test/test_inspect.py +++ b/lib-python/2.7/test/test_inspect.py @@ -4,11 +4,11 @@ import unittest import inspect import linecache -import datetime from UserList import UserList from UserDict import UserDict from test.test_support import run_unittest, check_py3k_warnings +from test.test_support import check_impl_detail with check_py3k_warnings( ("tuple parameter unpacking has been removed", SyntaxWarning), @@ -74,7 +74,8 @@ def test_excluding_predicates(self): self.istest(inspect.isbuiltin, 'sys.exit') - self.istest(inspect.isbuiltin, '[].append') + if check_impl_detail(): + self.istest(inspect.isbuiltin, '[].append') self.istest(inspect.iscode, 'mod.spam.func_code') self.istest(inspect.isframe, 'tb.tb_frame') self.istest(inspect.isfunction, 'mod.spam') @@ -92,9 +93,9 @@ else: self.assertFalse(inspect.isgetsetdescriptor(type(tb.tb_frame).f_locals)) if 
hasattr(types, 'MemberDescriptorType'): - self.istest(inspect.ismemberdescriptor, 'datetime.timedelta.days') + self.istest(inspect.ismemberdescriptor, 'type(lambda: None).func_globals') else: - self.assertFalse(inspect.ismemberdescriptor(datetime.timedelta.days)) + self.assertFalse(inspect.ismemberdescriptor(type(lambda: None).func_globals)) def test_isroutine(self): self.assertTrue(inspect.isroutine(mod.spam)) @@ -567,7 +568,8 @@ else: self.fail('Exception not raised') self.assertIs(type(ex1), type(ex2)) - self.assertEqual(str(ex1), str(ex2)) + if check_impl_detail(): + self.assertEqual(str(ex1), str(ex2)) def makeCallable(self, signature): """Create a function that returns its locals(), excluding the diff --git a/lib-python/2.7/test/test_int.py b/lib-python/2.7/test/test_int.py --- a/lib-python/2.7/test/test_int.py +++ b/lib-python/2.7/test/test_int.py @@ -1,7 +1,7 @@ import sys import unittest -from test.test_support import run_unittest, have_unicode +from test.test_support import run_unittest, have_unicode, check_impl_detail import math L = [ @@ -392,9 +392,10 @@ try: int(TruncReturnsNonIntegral()) except TypeError as e: - self.assertEqual(str(e), - "__trunc__ returned non-Integral" - " (type NonIntegral)") + if check_impl_detail(cpython=True): + self.assertEqual(str(e), + "__trunc__ returned non-Integral" + " (type NonIntegral)") else: self.fail("Failed to raise TypeError with %s" % ((base, trunc_result_base),)) diff --git a/lib-python/2.7/test/test_io.py b/lib-python/2.7/test/test_io.py --- a/lib-python/2.7/test/test_io.py +++ b/lib-python/2.7/test/test_io.py @@ -2561,6 +2561,31 @@ """Check that a partial write, when it gets interrupted, properly invokes the signal handler, and bubbles up the exception raised in the latter.""" + + # XXX This test has three flaws that appear when objects are + # XXX not reference counted. 
+ + # - if wio.write() happens to trigger a garbage collection, + the signal exception may be raised when some __del__ + method is running; it will not reach the assertRaises() + call. + + # - more subtle, if the wio object is not destroyed at once + and survives this function, the next opened file is likely + to have the same fileno (since the file descriptor was + actively closed). When wio.__del__ is finally called, it + will close the other's test file... To trigger this with + CPython, try adding "global wio" in this function. + + # - This happens only for streams created by the _pyio module, + because a wio.close() that fails still considers that the + file needs to be closed again. You can try adding an + "assert wio.closed" at the end of the function. + + # Fortunately, a little gc.collect() seems to be enough to + work around all these issues. + support.gc_collect() + read_results = [] def _read(): s = os.read(r, 1) diff --git a/lib-python/2.7/test/test_isinstance.py b/lib-python/2.7/test/test_isinstance.py --- a/lib-python/2.7/test/test_isinstance.py +++ b/lib-python/2.7/test/test_isinstance.py @@ -260,7 +260,18 @@ # Make sure that calling isinstance with a deeply nested tuple for its # argument will raise RuntimeError eventually. 
tuple_arg = (compare_to,) - for cnt in xrange(sys.getrecursionlimit()+5): + + + if test_support.check_impl_detail(cpython=True): + RECURSION_LIMIT = sys.getrecursionlimit() + else: + # on non-CPython implementations, the maximum + # actual recursion limit might be higher, but + # probably not higher than 99999 + # + RECURSION_LIMIT = 99999 + + for cnt in xrange(RECURSION_LIMIT+5): tuple_arg = (tuple_arg,) fxn(arg, tuple_arg) diff --git a/lib-python/2.7/test/test_itertools.py b/lib-python/2.7/test/test_itertools.py --- a/lib-python/2.7/test/test_itertools.py +++ b/lib-python/2.7/test/test_itertools.py @@ -137,6 +137,8 @@ self.assertEqual(result, list(combinations2(values, r))) # matches second pure python version self.assertEqual(result, list(combinations3(values, r))) # matches second pure python version + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_combinations_tuple_reuse(self): # Test implementation detail: tuple re-use self.assertEqual(len(set(map(id, combinations('abcde', 3)))), 1) self.assertNotEqual(len(set(map(id, list(combinations('abcde', 3))))), 1) @@ -207,7 +209,10 @@ self.assertEqual(result, list(cwr1(values, r))) # matches first pure python version self.assertEqual(result, list(cwr2(values, r))) # matches second pure python version + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_combinations_with_replacement_tuple_reuse(self): # Test implementation detail: tuple re-use + cwr = combinations_with_replacement self.assertEqual(len(set(map(id, cwr('abcde', 3)))), 1) self.assertNotEqual(len(set(map(id, list(cwr('abcde', 3))))), 1) @@ -271,6 +276,8 @@ self.assertEqual(result, list(permutations(values, None))) # test r as None self.assertEqual(result, list(permutations(values))) # test default r + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_permutations_tuple_reuse(self): # Test implementation detail: tuple re-use self.assertEqual(len(set(map(id, permutations('abcde', 
3)))), 1) self.assertNotEqual(len(set(map(id, list(permutations('abcde', 3))))), 1) @@ -526,6 +533,9 @@ self.assertEqual(list(izip()), zip()) self.assertRaises(TypeError, izip, 3) self.assertRaises(TypeError, izip, range(3), 3) + + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_izip_tuple_reuse(self): # Check tuple re-use (implementation detail) self.assertEqual([tuple(list(pair)) for pair in izip('abc', 'def')], zip('abc', 'def')) @@ -575,6 +585,8 @@ else: self.fail('Did not raise Type in: ' + stmt) + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_iziplongest_tuple_reuse(self): # Check tuple re-use (implementation detail) self.assertEqual([tuple(list(pair)) for pair in izip_longest('abc', 'def')], zip('abc', 'def')) @@ -683,6 +695,8 @@ args = map(iter, args) self.assertEqual(len(list(product(*args))), expected_len) + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_product_tuple_reuse(self): # Test implementation detail: tuple re-use self.assertEqual(len(set(map(id, product('abc', 'def')))), 1) self.assertNotEqual(len(set(map(id, list(product('abc', 'def'))))), 1) @@ -771,11 +785,11 @@ self.assertRaises(ValueError, islice, xrange(10), 1, -5, -1) self.assertRaises(ValueError, islice, xrange(10), 1, 10, -1) self.assertRaises(ValueError, islice, xrange(10), 1, 10, 0) - self.assertRaises(ValueError, islice, xrange(10), 'a') - self.assertRaises(ValueError, islice, xrange(10), 'a', 1) - self.assertRaises(ValueError, islice, xrange(10), 1, 'a') - self.assertRaises(ValueError, islice, xrange(10), 'a', 1, 1) - self.assertRaises(ValueError, islice, xrange(10), 1, 'a', 1) + self.assertRaises((ValueError, TypeError), islice, xrange(10), 'a') + self.assertRaises((ValueError, TypeError), islice, xrange(10), 'a', 1) + self.assertRaises((ValueError, TypeError), islice, xrange(10), 1, 'a') + self.assertRaises((ValueError, TypeError), islice, xrange(10), 'a', 1, 1) + self.assertRaises((ValueError, 
TypeError), islice, xrange(10), 1, 'a', 1) self.assertEqual(len(list(islice(count(), 1, 10, maxsize))), 1) # Issue #10323: Less islice in a predictable state @@ -855,9 +869,17 @@ self.assertRaises(TypeError, tee, [1,2], 3, 'x') # tee object should be instantiable - a, b = tee('abc') - c = type(a)('def') - self.assertEqual(list(c), list('def')) + if test_support.check_impl_detail(): + # XXX I (arigo) would argue that 'type(a)(iterable)' has + # ill-defined semantics: it always return a fresh tee object, + # but depending on whether 'iterable' is itself a tee object + # or not, it is ok or not to continue using 'iterable' after + # the call. I cannot imagine why 'type(a)(non_tee_object)' + # would be useful, as 'iter(non_tee_obect)' is equivalent + # as far as I can see. + a, b = tee('abc') + c = type(a)('def') + self.assertEqual(list(c), list('def')) # test long-lagged and multi-way split a, b, c = tee(xrange(2000), 3) @@ -895,6 +917,7 @@ p = proxy(a) self.assertEqual(getattr(p, '__class__'), type(b)) del a + test_support.gc_collect() self.assertRaises(ReferenceError, getattr, p, '__class__') def test_StopIteration(self): @@ -1317,6 +1340,7 @@ class LengthTransparency(unittest.TestCase): + @test_support.impl_detail("__length_hint__() API is undocumented") def test_repeat(self): from test.test_iterlen import len self.assertEqual(len(repeat(None, 50)), 50) diff --git a/lib-python/2.7/test/test_linecache.py b/lib-python/2.7/test/test_linecache.py --- a/lib-python/2.7/test/test_linecache.py +++ b/lib-python/2.7/test/test_linecache.py @@ -54,13 +54,13 @@ # Check whether lines correspond to those from file iteration for entry in TESTS: - filename = os.path.join(TEST_PATH, entry) + '.py' + filename = support.findfile( entry + '.py') for index, line in enumerate(open(filename)): self.assertEqual(line, getline(filename, index + 1)) # Check module loading for entry in MODULES: - filename = os.path.join(MODULE_PATH, entry) + '.py' + filename = support.findfile( entry + '.py') 
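The tuple-reuse hunks above move those assertions into separate methods guarded by `@test_support.impl_detail`, because recycling the result tuple is a CPython-only optimization. Outside the stdlib test suite, the same guard can be sketched with `platform.python_implementation()`; the `check_impl_detail` helper below is a minimal stand-in for the real `test_support` function, not its actual implementation:

```python
import platform
from itertools import combinations

def check_impl_detail(cpython=True):
    # Minimal stand-in for test_support.check_impl_detail().
    return (platform.python_implementation() == 'CPython') == cpython

# Streaming consumption lets CPython recycle a single result tuple
# (each tuple's refcount drops to zero before the next one is produced) ...
ids_streaming = set(map(id, combinations('abcde', 3)))
# ... while keeping every tuple alive in a list forces distinct objects
# on any implementation.
ids_retained = set(map(id, list(combinations('abcde', 3))))

if check_impl_detail(cpython=True):
    assert len(ids_streaming) == 1   # the CPython-only detail
assert len(ids_retained) == 10       # C(5, 3) distinct tuples everywhere
```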
for index, line in enumerate(open(filename)): self.assertEqual(line, getline(filename, index + 1)) @@ -78,7 +78,7 @@ def test_clearcache(self): cached = [] for entry in TESTS: - filename = os.path.join(TEST_PATH, entry) + '.py' + filename = support.findfile( entry + '.py') cached.append(filename) linecache.getline(filename, 1) diff --git a/lib-python/2.7/test/test_list.py b/lib-python/2.7/test/test_list.py --- a/lib-python/2.7/test/test_list.py +++ b/lib-python/2.7/test/test_list.py @@ -15,6 +15,10 @@ self.assertEqual(list(''), []) self.assertEqual(list('spam'), ['s', 'p', 'a', 'm']) + # the following test also works with pypy, but eats all your address + # space's RAM before raising and takes too long. + @test_support.impl_detail("eats all your RAM before working", pypy=False) + def test_segfault_1(self): if sys.maxsize == 0x7fffffff: # This test can currently only work on 32-bit machines. # XXX If/when PySequence_Length() returns a ssize_t, it should be @@ -32,6 +36,7 @@ # http://sources.redhat.com/ml/newlib/2002/msg00369.html self.assertRaises(MemoryError, list, xrange(sys.maxint // 2)) + def test_segfault_2(self): # This code used to segfault in Py2.4a3 x = [] x.extend(-y for y in x) diff --git a/lib-python/2.7/test/test_long.py b/lib-python/2.7/test/test_long.py --- a/lib-python/2.7/test/test_long.py +++ b/lib-python/2.7/test/test_long.py @@ -530,9 +530,10 @@ try: long(TruncReturnsNonIntegral()) except TypeError as e: - self.assertEqual(str(e), - "__trunc__ returned non-Integral" - " (type NonIntegral)") + if test_support.check_impl_detail(cpython=True): + self.assertEqual(str(e), + "__trunc__ returned non-Integral" + " (type NonIntegral)") else: self.fail("Failed to raise TypeError with %s" % ((base, trunc_result_base),)) diff --git a/lib-python/2.7/test/test_mailbox.py b/lib-python/2.7/test/test_mailbox.py --- a/lib-python/2.7/test/test_mailbox.py +++ b/lib-python/2.7/test/test_mailbox.py @@ -60,6 +60,8 @@ def tearDown(self): self._box.close() + if os.name 
== 'nt': + time.sleep(0.1) #Allow all syncing to take place self._delete_recursively(self._path) def test_add(self): @@ -137,6 +139,7 @@ msg = self._box.get(key1) self.assertEqual(msg['from'], 'foo') self.assertEqual(msg.fp.read(), '1') + msg.fp.close() def test_getitem(self): # Retrieve message using __getitem__() @@ -169,10 +172,12 @@ # Get file representations of messages key0 = self._box.add(self._template % 0) key1 = self._box.add(_sample_message) - self.assertEqual(self._box.get_file(key0).read().replace(os.linesep, '\n'), - self._template % 0) - self.assertEqual(self._box.get_file(key1).read().replace(os.linesep, '\n'), - _sample_message) + msg0 = self._box.get_file(key0) + self.assertEqual(msg0.read().replace(os.linesep, '\n'), self._template % 0) + msg1 = self._box.get_file(key1) + self.assertEqual(msg1.read().replace(os.linesep, '\n'), _sample_message) + msg0.close() + msg1.close() def test_iterkeys(self): # Get keys using iterkeys() @@ -786,6 +791,8 @@ class _TestMboxMMDF(TestMailbox): def tearDown(self): + if os.name == 'nt': + time.sleep(0.1) #Allow os to sync files self._box.close() self._delete_recursively(self._path) for lock_remnant in glob.glob(self._path + '.*'): @@ -1837,7 +1844,9 @@ self.createMessage("cur") self.mbox = mailbox.Maildir(test_support.TESTFN) #self.assertTrue(len(self.mbox.boxes) == 1) - self.assertIsNot(self.mbox.next(), None) + msg = self.mbox.next() + self.assertIsNot(msg, None) + msg.fp.close() self.assertIs(self.mbox.next(), None) self.assertIs(self.mbox.next(), None) @@ -1845,7 +1854,9 @@ self.createMessage("new") self.mbox = mailbox.Maildir(test_support.TESTFN) #self.assertTrue(len(self.mbox.boxes) == 1) - self.assertIsNot(self.mbox.next(), None) + msg = self.mbox.next() + self.assertIsNot(msg, None) + msg.fp.close() self.assertIs(self.mbox.next(), None) self.assertIs(self.mbox.next(), None) @@ -1854,8 +1865,12 @@ self.createMessage("new") self.mbox = mailbox.Maildir(test_support.TESTFN) 
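The mailbox changes above (`msg.fp.close()`, binding `get_file()` results to named variables that are closed explicitly) all follow one rule: never rely on garbage collection to close a file. CPython's reference counting closes a dropped file object immediately; under a tracing GC the descriptor can stay open indefinitely, leaking handles and, on Windows, keeping the file locked. A self-contained sketch of the pattern (the file contents here are illustrative):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
try:
    # Fragile (relies on refcounting to close):  open(path, 'w').write(...)
    # Portable: scope the file object explicitly with a with-block.
    with open(path, 'w') as f:
        f.write('From: foo\n\n1')

    with open(path) as f:
        content = f.read()
    assert content.endswith('1')
finally:
    os.unlink(path)
```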
#self.assertTrue(len(self.mbox.boxes) == 2) - self.assertIsNot(self.mbox.next(), None) - self.assertIsNot(self.mbox.next(), None) + msg = self.mbox.next() + self.assertIsNot(msg, None) + msg.fp.close() + msg = self.mbox.next() + self.assertIsNot(msg, None) + msg.fp.close() self.assertIs(self.mbox.next(), None) self.assertIs(self.mbox.next(), None) @@ -1864,11 +1879,13 @@ import email.parser fname = self.createMessage("cur", True) n = 0 - for msg in mailbox.PortableUnixMailbox(open(fname), + fid = open(fname) + for msg in mailbox.PortableUnixMailbox(fid, email.parser.Parser().parse): n += 1 self.assertEqual(msg["subject"], "Simple Test") self.assertEqual(len(str(msg)), len(FROM_)+len(DUMMY_MESSAGE)) + fid.close() self.assertEqual(n, 1) ## End: classes from the original module (for backward compatibility). diff --git a/lib-python/2.7/test/test_marshal.py b/lib-python/2.7/test/test_marshal.py --- a/lib-python/2.7/test/test_marshal.py +++ b/lib-python/2.7/test/test_marshal.py @@ -7,20 +7,31 @@ import unittest import os -class IntTestCase(unittest.TestCase): +class HelperMixin: + def helper(self, sample, *extra, **kwargs): + expected = kwargs.get('expected', sample) + new = marshal.loads(marshal.dumps(sample, *extra)) + self.assertEqual(expected, new) + self.assertEqual(type(expected), type(new)) + try: + with open(test_support.TESTFN, "wb") as f: + marshal.dump(sample, f, *extra) + with open(test_support.TESTFN, "rb") as f: + new = marshal.load(f) + self.assertEqual(expected, new) + self.assertEqual(type(expected), type(new)) + finally: + test_support.unlink(test_support.TESTFN) + + +class IntTestCase(unittest.TestCase, HelperMixin): def test_ints(self): # Test the full range of Python ints. 
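The new `HelperMixin.helper()` in test_marshal deduplicates the dumps/loads and dump/load round-trips and, unlike the old `file(...)` one-liners, closes its files deterministically. A standalone sketch of the same round-trip check (using `tempfile` instead of `test_support.TESTFN`):

```python
import marshal
import os
import tempfile

def roundtrip(sample, *version):
    """Marshal `sample` through both the bytes API and the file API,
    checking that the value and its exact type survive."""
    new = marshal.loads(marshal.dumps(sample, *version))
    assert new == sample and type(new) is type(sample)

    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, 'wb') as f:
            marshal.dump(sample, f, *version)
        with open(path, 'rb') as f:
            new = marshal.load(f)
        assert new == sample and type(new) is type(sample)
    finally:
        os.unlink(path)

for value in (0, -1, 2 ** 40, 3.14, 'abc', (1, 2), [1, 2], {'a': 1},
              frozenset('abc')):
    roundtrip(value)
```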
n = sys.maxint while n: for expected in (-n, n): - s = marshal.dumps(expected) - got = marshal.loads(s) - self.assertEqual(expected, got) - marshal.dump(expected, file(test_support.TESTFN, "wb")) - got = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(expected, got) + self.helper(expected) n = n >> 1 - os.unlink(test_support.TESTFN) def test_int64(self): # Simulate int marshaling on a 64-bit box. This is most interesting if @@ -48,28 +59,16 @@ def test_bool(self): for b in (True, False): - new = marshal.loads(marshal.dumps(b)) - self.assertEqual(b, new) - self.assertEqual(type(b), type(new)) - marshal.dump(b, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(b, new) - self.assertEqual(type(b), type(new)) + self.helper(b) -class FloatTestCase(unittest.TestCase): +class FloatTestCase(unittest.TestCase, HelperMixin): def test_floats(self): # Test a few floats small = 1e-25 n = sys.maxint * 3.7e250 while n > small: for expected in (-n, n): - f = float(expected) - s = marshal.dumps(f) - got = marshal.loads(s) - self.assertEqual(f, got) - marshal.dump(f, file(test_support.TESTFN, "wb")) - got = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(f, got) + self.helper(expected) n /= 123.4567 f = 0.0 @@ -85,59 +84,25 @@ while n < small: for expected in (-n, n): f = float(expected) + self.helper(f) + self.helper(f, 1) + n *= 123.4567 - s = marshal.dumps(f) - got = marshal.loads(s) - self.assertEqual(f, got) - - s = marshal.dumps(f, 1) - got = marshal.loads(s) - self.assertEqual(f, got) - - marshal.dump(f, file(test_support.TESTFN, "wb")) - got = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(f, got) - - marshal.dump(f, file(test_support.TESTFN, "wb"), 1) - got = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(f, got) - n *= 123.4567 - os.unlink(test_support.TESTFN) - -class StringTestCase(unittest.TestCase): +class StringTestCase(unittest.TestCase, 
HelperMixin): def test_unicode(self): for s in [u"", u"Andr� Previn", u"abc", u" "*10000]: - new = marshal.loads(marshal.dumps(s)) - self.assertEqual(s, new) - self.assertEqual(type(s), type(new)) - marshal.dump(s, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(s, new) - self.assertEqual(type(s), type(new)) - os.unlink(test_support.TESTFN) + self.helper(s) def test_string(self): for s in ["", "Andr� Previn", "abc", " "*10000]: - new = marshal.loads(marshal.dumps(s)) - self.assertEqual(s, new) - self.assertEqual(type(s), type(new)) - marshal.dump(s, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(s, new) - self.assertEqual(type(s), type(new)) - os.unlink(test_support.TESTFN) + self.helper(s) def test_buffer(self): for s in ["", "Andr� Previn", "abc", " "*10000]: with test_support.check_py3k_warnings(("buffer.. not supported", DeprecationWarning)): b = buffer(s) - new = marshal.loads(marshal.dumps(b)) - self.assertEqual(s, new) - marshal.dump(b, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(s, new) - os.unlink(test_support.TESTFN) + self.helper(b, expected=s) class ExceptionTestCase(unittest.TestCase): def test_exceptions(self): @@ -150,7 +115,7 @@ new = marshal.loads(marshal.dumps(co)) self.assertEqual(co, new) -class ContainerTestCase(unittest.TestCase): +class ContainerTestCase(unittest.TestCase, HelperMixin): d = {'astring': 'foo at bar.baz.spam', 'afloat': 7283.43, 'anint': 2**20, @@ -161,42 +126,20 @@ 'aunicode': u"Andr� Previn" } def test_dict(self): - new = marshal.loads(marshal.dumps(self.d)) - self.assertEqual(self.d, new) - marshal.dump(self.d, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(self.d, new) - os.unlink(test_support.TESTFN) + self.helper(self.d) def test_list(self): lst = self.d.items() - new = 
marshal.loads(marshal.dumps(lst)) - self.assertEqual(lst, new) - marshal.dump(lst, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(lst, new) - os.unlink(test_support.TESTFN) + self.helper(lst) def test_tuple(self): t = tuple(self.d.keys()) - new = marshal.loads(marshal.dumps(t)) - self.assertEqual(t, new) - marshal.dump(t, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(t, new) - os.unlink(test_support.TESTFN) + self.helper(t) def test_sets(self): for constructor in (set, frozenset): t = constructor(self.d.keys()) - new = marshal.loads(marshal.dumps(t)) - self.assertEqual(t, new) - self.assertTrue(isinstance(new, constructor)) - self.assertNotEqual(id(t), id(new)) - marshal.dump(t, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(t, new) - os.unlink(test_support.TESTFN) + self.helper(t) class BugsTestCase(unittest.TestCase): def test_bug_5888452(self): @@ -226,6 +169,7 @@ s = 'c' + ('X' * 4*4) + '{' * 2**20 self.assertRaises(ValueError, marshal.loads, s) + @test_support.impl_detail('specific recursion check') def test_recursion_limit(self): # Create a deeply nested structure. head = last = [] diff --git a/lib-python/2.7/test/test_memoryio.py b/lib-python/2.7/test/test_memoryio.py --- a/lib-python/2.7/test/test_memoryio.py +++ b/lib-python/2.7/test/test_memoryio.py @@ -617,7 +617,7 @@ state = memio.__getstate__() self.assertEqual(len(state), 3) bytearray(state[0]) # Check if state[0] supports the buffer interface. 
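Where CPython and another implementation raise different but equally defensible exception types, the hunks above widen `assertRaises` to a tuple, e.g. `(ValueError, TypeError)` for `islice` with a non-integer index. Like an `except` clause, `assertRaises` accepts a tuple of exception classes, so the test passes under either behavior:

```python
import io
import unittest
from itertools import islice

class IsliceArgTest(unittest.TestCase):
    def test_non_integer_index(self):
        # CPython raises ValueError for a non-integer islice bound;
        # another implementation could reasonably raise TypeError.
        # A tuple accepts either.
        self.assertRaises((ValueError, TypeError), islice, range(10), 'a')

suite = unittest.defaultTestLoader.loadTestsFromTestCase(IsliceArgTest)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
assert result.wasSuccessful()
```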
- self.assertIsInstance(state[1], int) + self.assertIsInstance(state[1], (int, long)) self.assertTrue(isinstance(state[2], dict) or state[2] is None) memio.close() self.assertRaises(ValueError, memio.__getstate__) diff --git a/lib-python/2.7/test/test_memoryview.py b/lib-python/2.7/test/test_memoryview.py --- a/lib-python/2.7/test/test_memoryview.py +++ b/lib-python/2.7/test/test_memoryview.py @@ -26,7 +26,8 @@ def check_getitem_with_type(self, tp): item = self.getitem_type b = tp(self._source) - oldrefcount = sys.getrefcount(b) + if hasattr(sys, 'getrefcount'): + oldrefcount = sys.getrefcount(b) m = self._view(b) self.assertEqual(m[0], item(b"a")) self.assertIsInstance(m[0], bytes) @@ -43,7 +44,8 @@ self.assertRaises(TypeError, lambda: m[0.0]) self.assertRaises(TypeError, lambda: m["a"]) m = None - self.assertEqual(sys.getrefcount(b), oldrefcount) + if hasattr(sys, 'getrefcount'): + self.assertEqual(sys.getrefcount(b), oldrefcount) def test_getitem(self): for tp in self._types: @@ -65,7 +67,8 @@ if not self.ro_type: return b = self.ro_type(self._source) - oldrefcount = sys.getrefcount(b) + if hasattr(sys, 'getrefcount'): + oldrefcount = sys.getrefcount(b) m = self._view(b) def setitem(value): m[0] = value @@ -73,14 +76,16 @@ self.assertRaises(TypeError, setitem, 65) self.assertRaises(TypeError, setitem, memoryview(b"a")) m = None - self.assertEqual(sys.getrefcount(b), oldrefcount) + if hasattr(sys, 'getrefcount'): + self.assertEqual(sys.getrefcount(b), oldrefcount) def test_setitem_writable(self): if not self.rw_type: return tp = self.rw_type b = self.rw_type(self._source) - oldrefcount = sys.getrefcount(b) + if hasattr(sys, 'getrefcount'): + oldrefcount = sys.getrefcount(b) m = self._view(b) m[0] = tp(b"0") self._check_contents(tp, b, b"0bcdef") @@ -110,13 +115,14 @@ self.assertRaises(TypeError, setitem, (0,), b"a") self.assertRaises(TypeError, setitem, "a", b"a") # Trying to resize the memory object - self.assertRaises(ValueError, setitem, 0, b"") - 
self.assertRaises(ValueError, setitem, 0, b"ab") + self.assertRaises((ValueError, TypeError), setitem, 0, b"") + self.assertRaises((ValueError, TypeError), setitem, 0, b"ab") self.assertRaises(ValueError, setitem, slice(1,1), b"a") self.assertRaises(ValueError, setitem, slice(0,2), b"a") m = None - self.assertEqual(sys.getrefcount(b), oldrefcount) + if hasattr(sys, 'getrefcount'): + self.assertEqual(sys.getrefcount(b), oldrefcount) def test_delitem(self): for tp in self._types: @@ -292,6 +298,7 @@ def _check_contents(self, tp, obj, contents): self.assertEqual(obj[1:7], tp(contents)) + @unittest.skipUnless(hasattr(sys, 'getrefcount'), "Reference counting") def test_refs(self): for tp in self._types: m = memoryview(tp(self._source)) diff --git a/lib-python/2.7/test/test_mmap.py b/lib-python/2.7/test/test_mmap.py --- a/lib-python/2.7/test/test_mmap.py +++ b/lib-python/2.7/test/test_mmap.py @@ -119,7 +119,8 @@ def test_access_parameter(self): # Test for "access" keyword parameter mapsize = 10 - open(TESTFN, "wb").write("a"*mapsize) + with open(TESTFN, "wb") as f: + f.write("a"*mapsize) f = open(TESTFN, "rb") m = mmap.mmap(f.fileno(), mapsize, access=mmap.ACCESS_READ) self.assertEqual(m[:], 'a'*mapsize, "Readonly memory map data incorrect.") @@ -168,9 +169,11 @@ else: self.fail("Able to resize readonly memory map") f.close() + m.close() del m, f - self.assertEqual(open(TESTFN, "rb").read(), 'a'*mapsize, - "Readonly memory map data file was modified") + with open(TESTFN, "rb") as f: + self.assertEqual(f.read(), 'a'*mapsize, + "Readonly memory map data file was modified") # Opening mmap with size too big import sys @@ -220,11 +223,13 @@ self.assertEqual(m[:], 'd' * mapsize, "Copy-on-write memory map data not written correctly.") m.flush() - self.assertEqual(open(TESTFN, "rb").read(), 'c'*mapsize, - "Copy-on-write test data file should not be modified.") + f.close() + with open(TESTFN, "rb") as f: + self.assertEqual(f.read(), 'c'*mapsize, + "Copy-on-write test data file 
should not be modified.") # Ensuring copy-on-write maps cannot be resized self.assertRaises(TypeError, m.resize, 2*mapsize) - f.close() + m.close() del m, f # Ensuring invalid access parameter raises exception @@ -287,6 +292,7 @@ self.assertEqual(m.find('one', 1), 8) self.assertEqual(m.find('one', 1, -1), 8) self.assertEqual(m.find('one', 1, -2), -1) + m.close() def test_rfind(self): @@ -305,6 +311,7 @@ self.assertEqual(m.rfind('one', 0, -2), 0) self.assertEqual(m.rfind('one', 1, -1), 8) self.assertEqual(m.rfind('one', 1, -2), -1) + m.close() def test_double_close(self): @@ -533,7 +540,8 @@ if not hasattr(mmap, 'PROT_READ'): return mapsize = 10 - open(TESTFN, "wb").write("a"*mapsize) + with open(TESTFN, "wb") as f: + f.write("a"*mapsize) f = open(TESTFN, "rb") m = mmap.mmap(f.fileno(), mapsize, prot=mmap.PROT_READ) self.assertRaises(TypeError, m.write, "foo") @@ -545,7 +553,8 @@ def test_io_methods(self): data = "0123456789" - open(TESTFN, "wb").write("x"*len(data)) + with open(TESTFN, "wb") as f: + f.write("x"*len(data)) f = open(TESTFN, "r+b") m = mmap.mmap(f.fileno(), len(data)) f.close() @@ -574,6 +583,7 @@ self.assertEqual(m[:], "012bar6789") m.seek(8) self.assertRaises(ValueError, m.write, "bar") + m.close() if os.name == 'nt': def test_tagname(self): @@ -611,7 +621,8 @@ m.close() # Should not crash (Issue 5385) - open(TESTFN, "wb").write("x"*10) + with open(TESTFN, "wb") as f: + f.write("x"*10) f = open(TESTFN, "r+b") m = mmap.mmap(f.fileno(), 0) f.close() diff --git a/lib-python/2.7/test/test_module.py b/lib-python/2.7/test/test_module.py --- a/lib-python/2.7/test/test_module.py +++ b/lib-python/2.7/test/test_module.py @@ -1,6 +1,6 @@ # Test the module type import unittest -from test.test_support import run_unittest, gc_collect +from test.test_support import run_unittest, gc_collect, check_impl_detail import sys ModuleType = type(sys) @@ -10,8 +10,10 @@ # An uninitialized module has no __dict__ or __name__, # and __doc__ is None foo = 
ModuleType.__new__(ModuleType) - self.assertTrue(foo.__dict__ is None) - self.assertRaises(SystemError, dir, foo) + self.assertFalse(foo.__dict__) + if check_impl_detail(): + self.assertTrue(foo.__dict__ is None) + self.assertRaises(SystemError, dir, foo) try: s = foo.__name__ self.fail("__name__ = %s" % repr(s)) diff --git a/lib-python/2.7/test/test_multibytecodec.py b/lib-python/2.7/test/test_multibytecodec.py --- a/lib-python/2.7/test/test_multibytecodec.py +++ b/lib-python/2.7/test/test_multibytecodec.py @@ -42,7 +42,7 @@ dec = codecs.getdecoder('euc-kr') myreplace = lambda exc: (u'', sys.maxint+1) codecs.register_error('test.cjktest', myreplace) - self.assertRaises(IndexError, dec, + self.assertRaises((IndexError, OverflowError), dec, 'apple\x92ham\x93spam', 'test.cjktest') def test_codingspec(self): @@ -148,7 +148,8 @@ class Test_StreamReader(unittest.TestCase): def test_bug1728403(self): try: - open(TESTFN, 'w').write('\xa1') + with open(TESTFN, 'w') as f: + f.write('\xa1') f = codecs.open(TESTFN, encoding='cp949') self.assertRaises(UnicodeDecodeError, f.read, 2) finally: diff --git a/lib-python/2.7/test/test_multibytecodec_support.py b/lib-python/2.7/test/test_multibytecodec_support.py --- a/lib-python/2.7/test/test_multibytecodec_support.py +++ b/lib-python/2.7/test/test_multibytecodec_support.py @@ -110,8 +110,8 @@ def myreplace(exc): return (u'x', sys.maxint + 1) codecs.register_error("test.cjktest", myreplace) - self.assertRaises(IndexError, self.encode, self.unmappedunicode, - 'test.cjktest') + self.assertRaises((IndexError, OverflowError), self.encode, + self.unmappedunicode, 'test.cjktest') def test_callback_None_index(self): def myreplace(exc): @@ -330,7 +330,7 @@ repr(csetch), repr(unich), exc.reason)) def load_teststring(name): - dir = os.path.join(os.path.dirname(__file__), 'cjkencodings') + dir = test_support.findfile('cjkencodings') with open(os.path.join(dir, name + '.txt'), 'rb') as f: encoded = f.read() with open(os.path.join(dir, name + 
'-utf8.txt'), 'rb') as f: diff --git a/lib-python/2.7/test/test_multiprocessing.py b/lib-python/2.7/test/test_multiprocessing.py --- a/lib-python/2.7/test/test_multiprocessing.py +++ b/lib-python/2.7/test/test_multiprocessing.py @@ -1316,6 +1316,7 @@ queue = manager.get_queue() self.assertEqual(queue.get(), 'hello world') del queue + test_support.gc_collect() manager.shutdown() manager = QueueManager( address=addr, authkey=authkey, serializer=SERIALIZER) @@ -1605,6 +1606,10 @@ if len(blocks) > maxblocks: i = random.randrange(maxblocks) del blocks[i] + # XXX There should be a better way to release resources for a + # single block + if i % maxblocks == 0: + import gc; gc.collect() # get the heap object heap = multiprocessing.heap.BufferWrapper._heap @@ -1704,6 +1709,7 @@ a = Foo() util.Finalize(a, conn.send, args=('a',)) del a # triggers callback for a + test_support.gc_collect() b = Foo() close_b = util.Finalize(b, conn.send, args=('b',)) diff --git a/lib-python/2.7/test/test_mutants.py b/lib-python/2.7/test/test_mutants.py --- a/lib-python/2.7/test/test_mutants.py +++ b/lib-python/2.7/test/test_mutants.py @@ -1,4 +1,4 @@ -from test.test_support import verbose, TESTFN +from test.test_support import verbose, TESTFN, check_impl_detail import random import os @@ -137,10 +137,16 @@ while dict1 and len(dict1) == len(dict2): if verbose: print ".", - if random.random() < 0.5: - c = cmp(dict1, dict2) - else: - c = dict1 == dict2 + try: + if random.random() < 0.5: + c = cmp(dict1, dict2) + else: + c = dict1 == dict2 + except RuntimeError: + # CPython never raises RuntimeError here, but other implementations + # might, and it's fine. 
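The `test_support.gc_collect()` calls inserted after `del` statements above (the itertools weakref-proxy test, the multiprocessing finalizer tests) exist because only reference counting frees an object at the exact moment of `del`; a tracing collector clears weak references only after a collection actually runs. A self-contained sketch of the pattern:

```python
import gc
import weakref

class Resource(object):
    pass

obj = Resource()
proxy = weakref.proxy(obj)

del obj
gc.collect()  # effectively a no-op on CPython (already freed),
              # but required on PyPy before the weakref dies

try:
    proxy.__class__          # any attribute access on a dead proxy
    dead = False
except ReferenceError:
    dead = True
assert dead
```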
+ if check_impl_detail(cpython=True): + raise if verbose: print diff --git a/lib-python/2.7/test/test_old_mailbox.py b/lib-python/2.7/test/test_old_mailbox.py --- a/lib-python/2.7/test/test_old_mailbox.py +++ b/lib-python/2.7/test/test_old_mailbox.py @@ -73,7 +73,9 @@ self.createMessage("cur") self.mbox = mailbox.Maildir(test_support.TESTFN) self.assertTrue(len(self.mbox) == 1) - self.assertTrue(self.mbox.next() is not None) + msg = self.mbox.next() + self.assertTrue(msg is not None) + msg.fp.close() self.assertTrue(self.mbox.next() is None) self.assertTrue(self.mbox.next() is None) @@ -81,7 +83,9 @@ self.createMessage("new") self.mbox = mailbox.Maildir(test_support.TESTFN) self.assertTrue(len(self.mbox) == 1) - self.assertTrue(self.mbox.next() is not None) + msg = self.mbox.next() + self.assertTrue(msg is not None) + msg.fp.close() self.assertTrue(self.mbox.next() is None) self.assertTrue(self.mbox.next() is None) @@ -90,8 +94,12 @@ self.createMessage("new") self.mbox = mailbox.Maildir(test_support.TESTFN) self.assertTrue(len(self.mbox) == 2) - self.assertTrue(self.mbox.next() is not None) - self.assertTrue(self.mbox.next() is not None) + msg = self.mbox.next() + self.assertTrue(msg is not None) + msg.fp.close() + msg = self.mbox.next() + self.assertTrue(msg is not None) + msg.fp.close() self.assertTrue(self.mbox.next() is None) self.assertTrue(self.mbox.next() is None) diff --git a/lib-python/2.7/test/test_optparse.py b/lib-python/2.7/test/test_optparse.py --- a/lib-python/2.7/test/test_optparse.py +++ b/lib-python/2.7/test/test_optparse.py @@ -383,6 +383,7 @@ self.assertRaises(self.parser.remove_option, ('foo',), None, ValueError, "no such option 'foo'") + @test_support.impl_detail("sys.getrefcount") def test_refleak(self): # If an OptionParser is carrying around a reference to a large # object, various cycles can prevent it from being GC'd in diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ 
b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + fid = open(name, "w") + fid.close() self.files.append(name) def test_tempnam(self): diff --git a/lib-python/2.7/test/test_peepholer.py b/lib-python/2.7/test/test_peepholer.py --- a/lib-python/2.7/test/test_peepholer.py +++ b/lib-python/2.7/test/test_peepholer.py @@ -41,7 +41,7 @@ def test_none_as_constant(self): # LOAD_GLOBAL None --> LOAD_CONST None def f(x): - None + y = None return x asm = disassemble(f) for elem in ('LOAD_GLOBAL',): @@ -67,10 +67,13 @@ self.assertIn(elem, asm) def test_pack_unpack(self): + # On PyPy, "a, b = ..." is even more optimized, by removing + # the ROT_TWO. But the ROT_TWO is not removed if assigning + # to more complex expressions, so check that. for line, elem in ( ('a, = a,', 'LOAD_CONST',), - ('a, b = a, b', 'ROT_TWO',), - ('a, b, c = a, b, c', 'ROT_THREE',), + ('a[1], b = a, b', 'ROT_TWO',), + ('a, b[2], c = a, b, c', 'ROT_THREE',), ): asm = dis_single(line) self.assertIn(elem, asm) @@ -78,6 +81,8 @@ self.assertNotIn('UNPACK_TUPLE', asm) def test_folding_of_tuples_of_constants(self): + # On CPython, "a,b,c=1,2,3" turns into "a,b,c=" + # but on PyPy, it turns into "a=1;b=2;c=3". 
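The peephole comment above describes the divergence: CPython folds `a, b, c = 1, 2, 3` into a single constant tuple plus an unpack, while PyPy rewrites it into three plain stores. The folding can be observed with `dis`; since the exact opcode stream is version- and implementation-specific, the stronger assertion is guarded:

```python
import dis
import io
import platform

buf = io.StringIO()
dis.dis(compile('a, b, c = 1, 2, 3', '<peephole>', 'exec'), file=buf)
asm = buf.getvalue()

assert 'LOAD_CONST' in asm  # the constants are loaded one way or another
if platform.python_implementation() == 'CPython':
    # CPython folds the right-hand side into one constant tuple, so no
    # BUILD_TUPLE opcode survives the optimizer.
    assert 'BUILD_TUPLE' not in asm
```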
for line, elem in ( ('a = 1,2,3', '((1, 2, 3))'), ('("a","b","c")', "(('a', 'b', 'c'))"), @@ -86,7 +91,8 @@ ('((1, 2), 3, 4)', '(((1, 2), 3, 4))'), ): asm = dis_single(line) - self.assertIn(elem, asm) + self.assert_(elem in asm or ( + line == 'a,b,c = 1,2,3' and 'UNPACK_TUPLE' not in asm)) self.assertNotIn('BUILD_TUPLE', asm) # Bug 1053819: Tuple of constants misidentified when presented with: @@ -139,12 +145,15 @@ def test_binary_subscr_on_unicode(self): # valid code get optimized - asm = dis_single('u"foo"[0]') - self.assertIn("(u'f')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) - asm = dis_single('u"\u0061\uffff"[1]') - self.assertIn("(u'\\uffff')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) + # XXX for now we always disable this optimization + # XXX see CPython's issue5057 + if 0: + asm = dis_single('u"foo"[0]') + self.assertIn("(u'f')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + asm = dis_single('u"\u0061\uffff"[1]') + self.assertIn("(u'\\uffff')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) # invalid code doesn't get optimized # out of range diff --git a/lib-python/2.7/test/test_pprint.py b/lib-python/2.7/test/test_pprint.py --- a/lib-python/2.7/test/test_pprint.py +++ b/lib-python/2.7/test/test_pprint.py @@ -233,7 +233,16 @@ frozenset([0, 2]), frozenset([0, 1])])}""" cube = test.test_set.cube(3) - self.assertEqual(pprint.pformat(cube), cube_repr_tgt) + # XXX issues of dictionary order, and for the case below, + # order of items in the frozenset([...]) representation. + # Whether we get precisely cube_repr_tgt or not is open + # to implementation-dependent choices (this test probably + # fails horribly in CPython if we tweak the dict order too). 
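The pprint fallback sketched in the comment above avoids pinning the exact formatted string (which depends on dict/set ordering choices) by evaluating the output and comparing values instead. The same idea in miniature, with a small hypothetical data structure in place of the test's cube:

```python
import pprint

data = {frozenset([0, 1]): [frozenset([0]), frozenset([1])],
        frozenset([2]): [frozenset()]}

formatted = pprint.pformat(data)
# The precise layout and element order are implementation choices,
# but the value always round-trips through eval().
assert eval(formatted) == data
```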
+ got = pprint.pformat(cube) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cube_repr_tgt) + else: + self.assertEqual(eval(got), cube) cubo_repr_tgt = """\ {frozenset([frozenset([0, 2]), frozenset([0])]): frozenset([frozenset([frozenset([0, 2]), @@ -393,7 +402,11 @@ 2])])])}""" cubo = test.test_set.linegraph(cube) - self.assertEqual(pprint.pformat(cubo), cubo_repr_tgt) + got = pprint.pformat(cubo) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cubo_repr_tgt) + else: + self.assertEqual(eval(got), cubo) def test_depth(self): nested_tuple = (1, (2, (3, (4, (5, 6))))) diff --git a/lib-python/2.7/test/test_pydoc.py b/lib-python/2.7/test/test_pydoc.py --- a/lib-python/2.7/test/test_pydoc.py +++ b/lib-python/2.7/test/test_pydoc.py @@ -267,8 +267,8 @@ testpairs = ( ('i_am_not_here', 'i_am_not_here'), ('test.i_am_not_here_either', 'i_am_not_here_either'), - ('test.i_am_not_here.neither_am_i', 'i_am_not_here.neither_am_i'), - ('i_am_not_here.{}'.format(modname), 'i_am_not_here.{}'.format(modname)), + ('test.i_am_not_here.neither_am_i', 'i_am_not_here'), + ('i_am_not_here.{}'.format(modname), 'i_am_not_here'), ('test.{}'.format(modname), modname), ) @@ -292,8 +292,8 @@ result = run_pydoc(modname) finally: forget(modname) - expected = badimport_pattern % (modname, expectedinmsg) - self.assertEqual(expected, result) + expected = badimport_pattern % (modname, '(.+\\.)?' 
+ expectedinmsg + '(\\..+)?$') + self.assertTrue(re.match(expected, result)) def test_input_strip(self): missing_module = " test.i_am_not_here " diff --git a/lib-python/2.7/test/test_pyexpat.py b/lib-python/2.7/test/test_pyexpat.py --- a/lib-python/2.7/test/test_pyexpat.py +++ b/lib-python/2.7/test/test_pyexpat.py @@ -570,6 +570,9 @@ self.assertEqual(self.n, 4) class MalformedInputText(unittest.TestCase): + # CPython seems to ship its own version of expat, they fixed it on this commit : + # http://svn.python.org/view?revision=74429&view=revision + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test1(self): xml = "\0\r\n" parser = expat.ParserCreate() @@ -579,6 +582,7 @@ except expat.ExpatError as e: self.assertEqual(str(e), 'unclosed token: line 2, column 0') + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test2(self): xml = "\r\n" parser = expat.ParserCreate() diff --git a/lib-python/2.7/test/test_repr.py b/lib-python/2.7/test/test_repr.py --- a/lib-python/2.7/test/test_repr.py +++ b/lib-python/2.7/test/test_repr.py @@ -9,6 +9,7 @@ import unittest from test.test_support import run_unittest, check_py3k_warnings +from test.test_support import check_impl_detail from repr import repr as r # Don't shadow builtin repr from repr import Repr @@ -145,8 +146,11 @@ # Functions eq(repr(hash), '') # Methods - self.assertTrue(repr(''.split).startswith( - '") def test_xrange(self): eq = self.assertEqual @@ -185,7 +189,10 @@ def test_descriptors(self): eq = self.assertEqual # method descriptors - eq(repr(dict.items), "") + if check_impl_detail(cpython=True): + eq(repr(dict.items), "") + elif check_impl_detail(pypy=True): + eq(repr(dict.items), "") # XXX member descriptors # XXX attribute descriptors # XXX slot descriptors @@ -247,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from 
areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_runpy.py b/lib-python/2.7/test/test_runpy.py --- a/lib-python/2.7/test/test_runpy.py +++ b/lib-python/2.7/test/test_runpy.py @@ -5,10 +5,15 @@ import sys import re import tempfile -from test.test_support import verbose, run_unittest, forget +from test.test_support import verbose, run_unittest, forget, check_impl_detail from test.script_helper import (temp_dir, make_script, compile_script, make_pkg, make_zip_script, make_zip_pkg) +if check_impl_detail(pypy=True): + no_lone_pyc_file = True +else: + no_lone_pyc_file = False + from runpy import _run_code, _run_module_code, run_module, run_path # Note: This module can't safely test _run_module_as_main as it @@ -168,13 +173,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 
1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + d2 = run_module(mod_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -190,13 +196,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", pkg_name - d2 = run_module(pkg_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", pkg_name + d2 = run_module(pkg_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, pkg_name) if verbose: print "Package executed successfully" @@ -244,15 +251,17 @@ self.assertIn("sibling", d1) self.assertIn("nephew", d1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name, run_name=run_name) # Read from bytecode - self.assertIn("__package__", d2) - self.assertTrue(d2["__package__"] == pkg_name) - self.assertIn("sibling", d2) - self.assertIn("nephew", d2) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + # Read from bytecode + d2 = run_module(mod_name, run_name=run_name) + self.assertIn("__package__", d2) + 
self.assertTrue(d2["__package__"] == pkg_name) + self.assertIn("sibling", d2) + self.assertIn("nephew", d2) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -345,6 +354,8 @@ script_dir, '') def test_directory_compiled(self): + if no_lone_pyc_file: + return with temp_dir() as script_dir: mod_name = '__main__' script_name = self._make_test_script(script_dir, mod_name) diff --git a/lib-python/2.7/test/test_scope.py b/lib-python/2.7/test/test_scope.py --- a/lib-python/2.7/test/test_scope.py +++ b/lib-python/2.7/test/test_scope.py @@ -1,6 +1,6 @@ import unittest from test.test_support import check_syntax_error, check_py3k_warnings, \ - check_warnings, run_unittest + check_warnings, run_unittest, gc_collect class ScopeTests(unittest.TestCase): @@ -432,6 +432,7 @@ for i in range(100): f1() + gc_collect() self.assertEqual(Foo.count, 0) diff --git a/lib-python/2.7/test/test_set.py b/lib-python/2.7/test/test_set.py --- a/lib-python/2.7/test/test_set.py +++ b/lib-python/2.7/test/test_set.py @@ -309,6 +309,7 @@ fo.close() test_support.unlink(test_support.TESTFN) + @test_support.impl_detail(pypy=False) def test_do_not_rehash_dict_keys(self): n = 10 d = dict.fromkeys(map(HashCountingInt, xrange(n))) @@ -559,6 +560,7 @@ p = weakref.proxy(s) self.assertEqual(str(p), str(s)) s = None + test_support.gc_collect() self.assertRaises(ReferenceError, str, p) # C API test only available in a debug build @@ -590,6 +592,7 @@ s.__init__(self.otherword) self.assertEqual(s, set(self.word)) + @test_support.impl_detail() def test_singleton_empty_frozenset(self): f = frozenset() efs = [frozenset(), frozenset([]), frozenset(()), frozenset(''), @@ -770,9 +773,10 @@ for v in self.set: self.assertIn(v, self.values) setiter = iter(self.set) - # note: __length_hint__ is an internal undocumented API, - # don't rely on it in your own programs - self.assertEqual(setiter.__length_hint__(), 
len(self.set)) + if test_support.check_impl_detail(): + # note: __length_hint__ is an internal undocumented API, + # don't rely on it in your own programs + self.assertEqual(setiter.__length_hint__(), len(self.set)) def test_pickling(self): p = pickle.dumps(self.set) @@ -1564,7 +1568,7 @@ for meth in (s.union, s.intersection, s.difference, s.symmetric_difference, s.isdisjoint): for g in (G, I, Ig, L, R): expected = meth(data) - actual = meth(G(data)) + actual = meth(g(data)) if isinstance(expected, bool): self.assertEqual(actual, expected) else: diff --git a/lib-python/2.7/test/test_sets.py b/lib-python/2.7/test/test_sets.py --- a/lib-python/2.7/test/test_sets.py +++ b/lib-python/2.7/test/test_sets.py @@ -686,7 +686,9 @@ set_list = sorted(self.set) self.assertEqual(len(dup_list), len(set_list)) for i, el in enumerate(dup_list): - self.assertIs(el, set_list[i]) + # Object identity is not guaranteed for immutable objects, so we + # can't use assertIs here. + self.assertEqual(el, set_list[i]) def test_deep_copy(self): dup = copy.deepcopy(self.set) diff --git a/lib-python/2.7/test/test_site.py b/lib-python/2.7/test/test_site.py --- a/lib-python/2.7/test/test_site.py +++ b/lib-python/2.7/test/test_site.py @@ -226,6 +226,10 @@ self.assertEqual(len(dirs), 1) wanted = os.path.join('xoxo', 'Lib', 'site-packages') self.assertEqual(dirs[0], wanted) + elif '__pypy__' in sys.builtin_module_names: + self.assertEquals(len(dirs), 1) + wanted = os.path.join('xoxo', 'site-packages') + self.assertEquals(dirs[0], wanted) elif os.sep == '/': self.assertEqual(len(dirs), 2) wanted = os.path.join('xoxo', 'lib', 'python' + sys.version[:3], diff --git a/lib-python/2.7/test/test_socket.py b/lib-python/2.7/test/test_socket.py --- a/lib-python/2.7/test/test_socket.py +++ b/lib-python/2.7/test/test_socket.py @@ -252,6 +252,7 @@ self.assertEqual(p.fileno(), s.fileno()) s.close() s = None + test_support.gc_collect() try: p.fileno() except ReferenceError: @@ -285,32 +286,34 @@ s.sendto(u'\u2620', 
sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None) - self.assertIn('not NoneType', str(cm.exception)) + self.assertIn('NoneType', str(cm.exception)) # 3 args with self.assertRaises(UnicodeEncodeError): s.sendto(u'\u2620', 0, sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, 0, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, None) - self.assertIn('not NoneType', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('not NoneType', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 'bar', sockname) - self.assertIn('an integer is required', str(cm.exception)) + self.assertIn('integer', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None, None) - self.assertIn('an integer is required', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('an integer is required', str(cm.exception)) # wrong number of args with self.assertRaises(TypeError) as cm: s.sendto('foo') - self.assertIn('(1 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, sockname, 4) - self.assertIn('(4 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) def testCrucialConstants(self): @@ -385,10 +388,10 @@ socket.htonl(k) socket.htons(k) for k in bad_values: - self.assertRaises(OverflowError, socket.ntohl, k) - self.assertRaises(OverflowError, socket.ntohs, k) - self.assertRaises(OverflowError, socket.htonl, k) - self.assertRaises(OverflowError, socket.htons, k) + self.assertRaises((OverflowError, ValueError), socket.ntohl, k) + self.assertRaises((OverflowError, ValueError), socket.ntohs, k) + 
self.assertRaises((OverflowError, ValueError), socket.htonl, k) + self.assertRaises((OverflowError, ValueError), socket.htons, k) def testGetServBy(self): eq = self.assertEqual @@ -428,8 +431,8 @@ if udpport is not None: eq(socket.getservbyport(udpport, 'udp'), service) # Make sure getservbyport does not accept out of range ports. - self.assertRaises(OverflowError, socket.getservbyport, -1) - self.assertRaises(OverflowError, socket.getservbyport, 65536) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, -1) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, 65536) def testDefaultTimeout(self): # Testing default timeout @@ -608,8 +611,8 @@ neg_port = port - 65536 sock = socket.socket() try: - self.assertRaises(OverflowError, sock.bind, (host, big_port)) - self.assertRaises(OverflowError, sock.bind, (host, neg_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, big_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, neg_port)) sock.bind((host, port)) finally: sock.close() @@ -1309,6 +1312,7 @@ closed = False def flush(self): pass def close(self): self.closed = True + def _decref_socketios(self): pass # must not close unless we request it: the original use of _fileobject # by module socket requires that the underlying socket not be closed until diff --git a/lib-python/2.7/test/test_sort.py b/lib-python/2.7/test/test_sort.py --- a/lib-python/2.7/test/test_sort.py +++ b/lib-python/2.7/test/test_sort.py @@ -140,7 +140,10 @@ return random.random() < 0.5 L = [C() for i in range(50)] - self.assertRaises(ValueError, L.sort) + try: + L.sort() + except ValueError: + pass def test_cmpNone(self): # Testing None as a comparison function. @@ -150,8 +153,10 @@ L.sort(None) self.assertEqual(L, range(50)) + @test_support.impl_detail(pypy=False) def test_undetected_mutation(self): # Python 2.4a1 did not always detect mutation + # So does pypy... 
memorywaster = [] for i in range(20): def mutating_cmp(x, y): @@ -226,7 +231,10 @@ def __del__(self): del data[:] data[:] = range(20) - self.assertRaises(ValueError, data.sort, key=SortKiller) + try: + data.sort(key=SortKiller) + except ValueError: + pass def test_key_with_mutating_del_and_exception(self): data = range(10) diff --git a/lib-python/2.7/test/test_ssl.py b/lib-python/2.7/test/test_ssl.py --- a/lib-python/2.7/test/test_ssl.py +++ b/lib-python/2.7/test/test_ssl.py @@ -881,6 +881,8 @@ c = socket.socket() c.connect((HOST, port)) listener_gone.wait() + # XXX why is it necessary? + test_support.gc_collect() try: ssl_sock = ssl.wrap_socket(c) except IOError: @@ -1330,10 +1332,8 @@ def test_main(verbose=False): global CERTFILE, SVN_PYTHON_ORG_ROOT_CERT - CERTFILE = os.path.join(os.path.dirname(__file__) or os.curdir, - "keycert.pem") - SVN_PYTHON_ORG_ROOT_CERT = os.path.join( - os.path.dirname(__file__) or os.curdir, + CERTFILE = test_support.findfile("keycert.pem") + SVN_PYTHON_ORG_ROOT_CERT = test_support.findfile( "https_svn_python_org_root.pem") if (not os.path.exists(CERTFILE) or diff --git a/lib-python/2.7/test/test_str.py b/lib-python/2.7/test/test_str.py --- a/lib-python/2.7/test/test_str.py +++ b/lib-python/2.7/test/test_str.py @@ -422,10 +422,11 @@ for meth in ('foo'.startswith, 'foo'.endswith): with self.assertRaises(TypeError) as cm: meth(['f']) - exc = str(cm.exception) - self.assertIn('unicode', exc) - self.assertIn('str', exc) - self.assertIn('tuple', exc) + if test_support.check_impl_detail(): + exc = str(cm.exception) + self.assertIn('unicode', exc) + self.assertIn('str', exc) + self.assertIn('tuple', exc) def test_main(): test_support.run_unittest(StrTest) diff --git a/lib-python/2.7/test/test_struct.py b/lib-python/2.7/test/test_struct.py --- a/lib-python/2.7/test/test_struct.py +++ b/lib-python/2.7/test/test_struct.py @@ -535,7 +535,8 @@ @unittest.skipUnless(IS32BIT, "Specific to 32bit machines") def test_crasher(self): - 
self.assertRaises(MemoryError, struct.pack, "357913941c", "a") + self.assertRaises((MemoryError, struct.error), struct.pack, + "357913941c", "a") def test_count_overflow(self): hugecount = '{}b'.format(sys.maxsize+1) diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/lib-python/2.7/test/test_support.py b/lib-python/2.7/test/test_support.py --- a/lib-python/2.7/test/test_support.py +++ b/lib-python/2.7/test/test_support.py @@ -431,16 +431,20 @@ rmtree(name) -def findfile(file, here=__file__, subdir=None): +def findfile(file, here=None, subdir=None): """Try to find a file on sys.path and the working directory. 
If it is not found the argument passed to the function is returned (this does not necessarily signal failure; could still be the legitimate path).""" + import test if os.path.isabs(file): return file if subdir is not None: file = os.path.join(subdir, file) path = sys.path - path = [os.path.dirname(here)] + path + if here is None: + path = test.__path__ + path + else: + path = [os.path.dirname(here)] + path for dn in path: fn = os.path.join(dn, file) if os.path.exists(fn): return fn @@ -1050,15 +1054,33 @@ guards, default = _parse_guards(guards) return guards.get(platform.python_implementation().lower(), default) +# ---------------------------------- +# PyPy extension: you can run:: +# python ..../test_foo.py --pdb +# to get a pdb prompt in case of exceptions +ResultClass = unittest.TextTestRunner.resultclass + +class TestResultWithPdb(ResultClass): + + def addError(self, testcase, exc_info): + ResultClass.addError(self, testcase, exc_info) + if '--pdb' in sys.argv: + import pdb, traceback + traceback.print_tb(exc_info[2]) + pdb.post_mortem(exc_info[2]) + +# ---------------------------------- def _run_suite(suite): """Run tests from a unittest.TestSuite-derived class.""" if verbose: - runner = unittest.TextTestRunner(sys.stdout, verbosity=2) + runner = unittest.TextTestRunner(sys.stdout, verbosity=2, + resultclass=TestResultWithPdb) else: runner = BasicTestRunner() + result = runner.run(suite) if not result.wasSuccessful(): if len(result.errors) == 1 and not result.failures: @@ -1071,6 +1093,34 @@ err += "; run in verbose mode for details" raise TestFailed(err) +# ---------------------------------- +# PyPy extension: you can run:: +# python ..../test_foo.py --filter bar +# to run only the test cases whose name contains bar + +def filter_maybe(suite): + try: + i = sys.argv.index('--filter') + filter = sys.argv[i+1] + except (ValueError, IndexError): + return suite + tests = [] + for test in linearize_suite(suite): + if filter in test._testMethodName: + 
tests.append(test) + return unittest.TestSuite(tests) + +def linearize_suite(suite_or_test): + try: + it = iter(suite_or_test) + except TypeError: + yield suite_or_test + return + for subsuite in it: + for item in linearize_suite(subsuite): + yield item + +# ---------------------------------- def run_unittest(*classes): """Run tests from unittest.TestCase-derived classes.""" @@ -1086,6 +1136,7 @@ suite.addTest(cls) else: suite.addTest(unittest.makeSuite(cls)) + suite = filter_maybe(suite) _run_suite(suite) diff --git a/lib-python/2.7/test/test_syntax.py b/lib-python/2.7/test/test_syntax.py --- a/lib-python/2.7/test/test_syntax.py +++ b/lib-python/2.7/test/test_syntax.py @@ -5,7 +5,8 @@ >>> def f(x): ... global x Traceback (most recent call last): -SyntaxError: name 'x' is local and global (, line 1) + File "", line 1 +SyntaxError: name 'x' is local and global The tests are all raise SyntaxErrors. They were created by checking each C call that raises SyntaxError. There are several modules that @@ -375,7 +376,7 @@ In 2.5 there was a missing exception and an assert was triggered in a debug build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514 - >>> while 1: + >>> while 1: # doctest:+SKIP ... while 2: ... while 3: ... while 4: diff --git a/lib-python/2.7/test/test_sys.py b/lib-python/2.7/test/test_sys.py --- a/lib-python/2.7/test/test_sys.py +++ b/lib-python/2.7/test/test_sys.py @@ -264,6 +264,7 @@ self.assertEqual(sys.getdlopenflags(), oldflags+1) sys.setdlopenflags(oldflags) + @test.test_support.impl_detail("reference counting") def test_refcount(self): # n here must be a global in order for this test to pass while # tracing with a python function. Tracing calls PyFrame_FastToLocals @@ -287,7 +288,7 @@ is sys._getframe().f_code ) - # sys._current_frames() is a CPython-only gimmick. 
+ @test.test_support.impl_detail("current_frames") def test_current_frames(self): have_threads = True try: @@ -383,7 +384,10 @@ self.assertEqual(len(sys.float_info), 11) self.assertEqual(sys.float_info.radix, 2) self.assertEqual(len(sys.long_info), 2) - self.assertTrue(sys.long_info.bits_per_digit % 5 == 0) + if test.test_support.check_impl_detail(cpython=True): + self.assertTrue(sys.long_info.bits_per_digit % 5 == 0) + else: + self.assertTrue(sys.long_info.bits_per_digit >= 1) self.assertTrue(sys.long_info.sizeof_digit >= 1) self.assertEqual(type(sys.long_info.bits_per_digit), int) self.assertEqual(type(sys.long_info.sizeof_digit), int) @@ -432,6 +436,7 @@ self.assertEqual(type(getattr(sys.flags, attr)), int, attr) self.assertTrue(repr(sys.flags)) + @test.test_support.impl_detail("sys._clear_type_cache") def test_clear_type_cache(self): sys._clear_type_cache() @@ -473,6 +478,7 @@ p.wait() self.assertIn(executable, ["''", repr(sys.executable)]) +@unittest.skipUnless(test.test_support.check_impl_detail(), "sys.getsizeof()") class SizeofTest(unittest.TestCase): TPFLAGS_HAVE_GC = 1<<14 diff --git a/lib-python/2.7/test/test_sys_settrace.py b/lib-python/2.7/test/test_sys_settrace.py --- a/lib-python/2.7/test/test_sys_settrace.py +++ b/lib-python/2.7/test/test_sys_settrace.py @@ -213,12 +213,16 @@ "finally" def generator_example(): # any() will leave the generator before its end - x = any(generator_function()) + x = any(generator_function()); gc.collect() # the following lines were not traced for x in range(10): y = x +# On CPython, when the generator is decref'ed to zero, we see the trace +# for the "finally:" portion. On PyPy, we don't see it before the next +# garbage collection. That's why we put gc.collect() on the same line above. 
+ generator_example.events = ([(0, 'call'), (2, 'line'), (-6, 'call'), @@ -282,11 +286,11 @@ self.compare_events(func.func_code.co_firstlineno, tracer.events, func.events) - def set_and_retrieve_none(self): + def test_set_and_retrieve_none(self): sys.settrace(None) assert sys.gettrace() is None - def set_and_retrieve_func(self): + def test_set_and_retrieve_func(self): def fn(*args): pass @@ -323,17 +327,24 @@ self.run_test(tighterloop_example) def test_13_genexp(self): - self.run_test(generator_example) - # issue1265: if the trace function contains a generator, - # and if the traced function contains another generator - # that is not completely exhausted, the trace stopped. - # Worse: the 'finally' clause was not invoked. - tracer = Tracer() - sys.settrace(tracer.traceWithGenexp) - generator_example() - sys.settrace(None) - self.compare_events(generator_example.__code__.co_firstlineno, - tracer.events, generator_example.events) + if self.using_gc: + test_support.gc_collect() + gc.enable() + try: + self.run_test(generator_example) + # issue1265: if the trace function contains a generator, + # and if the traced function contains another generator + # that is not completely exhausted, the trace stopped. + # Worse: the 'finally' clause was not invoked. 
+ tracer = Tracer() + sys.settrace(tracer.traceWithGenexp) + generator_example() + sys.settrace(None) + self.compare_events(generator_example.__code__.co_firstlineno, + tracer.events, generator_example.events) + finally: + if self.using_gc: + gc.disable() def test_14_onliner_if(self): def onliners(): diff --git a/lib-python/2.7/test/test_sysconfig.py b/lib-python/2.7/test/test_sysconfig.py --- a/lib-python/2.7/test/test_sysconfig.py +++ b/lib-python/2.7/test/test_sysconfig.py @@ -209,13 +209,22 @@ self.assertEqual(get_platform(), 'macosx-10.4-fat64') - for arch in ('ppc', 'i386', 'x86_64', 'ppc64'): + for arch in ('ppc', 'i386', 'ppc64', 'x86_64'): get_config_vars()['CFLAGS'] = ('-arch %s -isysroot ' '/Developer/SDKs/MacOSX10.4u.sdk ' '-fno-strict-aliasing -fno-common ' '-dynamic -DNDEBUG -g -O3'%(arch,)) self.assertEqual(get_platform(), 'macosx-10.4-%s'%(arch,)) + + # macosx with ARCHFLAGS set and empty _CONFIG_VARS + os.environ['ARCHFLAGS'] = '-arch i386' + sysconfig._CONFIG_VARS = None + + # this will attempt to recreate the _CONFIG_VARS based on environment + # variables; used to check a problem with the PyPy's _init_posix + # implementation; see: issue 705 + get_config_vars() # linux debian sarge os.name = 'posix' @@ -235,7 +244,7 @@ def test_get_scheme_names(self): wanted = ('nt', 'nt_user', 'os2', 'os2_home', 'osx_framework_user', - 'posix_home', 'posix_prefix', 'posix_user') + 'posix_home', 'posix_prefix', 'posix_user', 'pypy') self.assertEqual(get_scheme_names(), wanted) def test_symlink(self): diff --git a/lib-python/2.7/test/test_tarfile.py b/lib-python/2.7/test/test_tarfile.py --- a/lib-python/2.7/test/test_tarfile.py +++ b/lib-python/2.7/test/test_tarfile.py @@ -169,6 +169,7 @@ except tarfile.ReadError: self.fail("tarfile.open() failed on empty archive") self.assertListEqual(tar.getmembers(), []) + tar.close() def test_null_tarfile(self): # Test for issue6123: Allow opening empty archives. 
@@ -207,16 +208,21 @@ fobj = open(self.tarname, "rb") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, os.path.abspath(fobj.name)) + tar.close() def test_no_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) self.assertRaises(AttributeError, getattr, fobj, "name") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, None) def test_empty_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) fobj.name = "" tar = tarfile.open(fileobj=fobj, mode=self.mode) @@ -334,7 +340,8 @@ # constructor in case of an error. For the test we rely on # the fact that opening an empty file raises a ReadError. empty = os.path.join(TEMPDIR, "empty") - open(empty, "wb").write("") + with open(empty, "wb") as fid: + fid.write("") try: tar = object.__new__(tarfile.TarFile) @@ -515,6 +522,7 @@ self.tar = tarfile.open(self.tarname, mode=self.mode, encoding="iso8859-1") tarinfo = self.tar.getmember("pax/umlauts-ÄÖÜäöüß") self._test_member(tarinfo, size=7011, chksum=md5_regtype) + self.tar.close() class LongnameTest(ReadTest): @@ -675,6 +683,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.rmdir(path) @@ -692,6 +701,7 @@ tar.gettarinfo(target) tarinfo = tar.gettarinfo(link) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(target) os.remove(link) @@ -704,6 +714,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(path) @@ -722,6 +733,7 @@ tar.add(dstname) os.chdir(cwd) self.assertTrue(tar.getnames() == [], "added the archive to itself") + tar.close() def test_exclude(self): tempdir = os.path.join(TEMPDIR, "exclude") @@ -742,6 +754,7 @@ tar = 
tarfile.open(tmpname, "r") self.assertEqual(len(tar.getmembers()), 1) self.assertEqual(tar.getnames()[0], "empty_dir") + tar.close() finally: shutil.rmtree(tempdir) @@ -947,7 +960,9 @@ fobj.close() elif self.mode.endswith("bz2"): dec = bz2.BZ2Decompressor() - data = open(tmpname, "rb").read() + f = open(tmpname, "rb") + data = f.read() + f.close() data = dec.decompress(data) self.assertTrue(len(dec.unused_data) == 0, "found trailing data") @@ -1026,6 +1041,7 @@ "unable to read longname member") self.assertEqual(tarinfo.linkname, member.linkname, "unable to read longname member") + tar.close() def test_longname_1023(self): self._test(("longnam/" * 127) + "longnam") @@ -1118,6 +1134,7 @@ else: n = tar.getmembers()[0].name self.assertTrue(name == n, "PAX longname creation failed") + tar.close() def test_pax_global_header(self): pax_headers = { @@ -1146,6 +1163,7 @@ tarfile.PAX_NUMBER_FIELDS[key](val) except (TypeError, ValueError): self.fail("unable to convert pax header field") + tar.close() def test_pax_extended_header(self): # The fields from the pax header have priority over the @@ -1165,6 +1183,7 @@ self.assertEqual(t.pax_headers, pax_headers) self.assertEqual(t.name, "foo") self.assertEqual(t.uid, 123) + tar.close() class UstarUnicodeTest(unittest.TestCase): @@ -1208,6 +1227,7 @@ tarinfo.name = "foo" tarinfo.uname = u"äöü" self.assertRaises(UnicodeError, tar.addfile, tarinfo) + tar.close() def test_unicode_argument(self): tar = tarfile.open(tarname, "r", encoding="iso8859-1", errors="strict") @@ -1262,6 +1282,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="ascii", errors=handler) self.assertEqual(tar.getnames()[0], name) + tar.close() self.assertRaises(UnicodeError, tarfile.open, tmpname, encoding="ascii", errors="strict") @@ -1274,6 +1295,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="iso8859-1", errors="utf-8") self.assertEqual(tar.getnames()[0], "äöü/" + u"ß".encode("utf8")) + tar.close() class AppendTest(unittest.TestCase): @@ 
-1301,6 +1323,7 @@ def _test(self, names=["bar"], fileobj=None): tar = tarfile.open(self.tarname, fileobj=fileobj) self.assertEqual(tar.getnames(), names) + tar.close() def test_non_existing(self): self._add_testfile() @@ -1319,7 +1342,9 @@ def test_fileobj(self): self._create_testtar() - data = open(self.tarname).read() + f = open(self.tarname) + data = f.read() + f.close() fobj = StringIO.StringIO(data) self._add_testfile(fobj) fobj.seek(0) @@ -1345,7 +1370,9 @@ # Append mode is supposed to fail if the tarfile to append to # does not end with a zero block. def _test_error(self, data): - open(self.tarname, "wb").write(data) + f = open(self.tarname, "wb") + f.write(data) + f.close() self.assertRaises(tarfile.ReadError, self._add_testfile) def test_null(self): diff --git a/lib-python/2.7/test/test_tempfile.py b/lib-python/2.7/test/test_tempfile.py --- a/lib-python/2.7/test/test_tempfile.py +++ b/lib-python/2.7/test/test_tempfile.py @@ -23,8 +23,8 @@ # TEST_FILES may need to be tweaked for systems depending on the maximum # number of files that can be opened at one time (see ulimit -n) -if sys.platform in ('openbsd3', 'openbsd4'): - TEST_FILES = 48 +if sys.platform.startswith("openbsd"): + TEST_FILES = 64 # ulimit -n defaults to 128 for normal users else: TEST_FILES = 100 @@ -244,6 +244,7 @@ dir = tempfile.mkdtemp() try: self.do_create(dir=dir).write("blat") + test_support.gc_collect() finally: os.rmdir(dir) @@ -528,12 +529,15 @@ self.do_create(suf="b") self.do_create(pre="a", suf="b") self.do_create(pre="aa", suf=".txt") + test_support.gc_collect() def test_many(self): # mktemp can choose many usable file names (stochastic) extant = range(TEST_FILES) for i in extant: extant[i] = self.do_create(pre="aa") + del extant + test_support.gc_collect() ## def test_warning(self): ## # mktemp issues a warning when used diff --git a/lib-python/2.7/test/test_thread.py b/lib-python/2.7/test/test_thread.py --- a/lib-python/2.7/test/test_thread.py +++ 
b/lib-python/2.7/test/test_thread.py @@ -128,6 +128,7 @@ del task while not done: time.sleep(0.01) + test_support.gc_collect() self.assertEqual(thread._count(), orig) diff --git a/lib-python/2.7/test/test_threading.py b/lib-python/2.7/test/test_threading.py --- a/lib-python/2.7/test/test_threading.py +++ b/lib-python/2.7/test/test_threading.py @@ -161,6 +161,7 @@ # PyThreadState_SetAsyncExc() is a CPython-only gimmick, not (currently) # exposed at the Python level. This test relies on ctypes to get at it. + @test.test_support.cpython_only def test_PyThreadState_SetAsyncExc(self): try: import ctypes @@ -266,6 +267,7 @@ finally: threading._start_new_thread = _start_new_thread + @test.test_support.cpython_only def test_finalize_runnning_thread(self): # Issue 1402: the PyGILState_Ensure / _Release functions may be called # very late on python exit: on deallocation of a running thread for @@ -383,6 +385,7 @@ finally: sys.setcheckinterval(old_interval) + @test.test_support.cpython_only def test_no_refcycle_through_target(self): class RunSelfFunction(object): def __init__(self, should_raise): @@ -425,6 +428,9 @@ def joiningfunc(mainthread): mainthread.join() print 'end of thread' + # stdout is fully buffered because not a tty, we have to flush + # before exit. 
+ sys.stdout.flush() \n""" + script p = subprocess.Popen([sys.executable, "-c", script], stdout=subprocess.PIPE) diff --git a/lib-python/2.7/test/test_threading_local.py b/lib-python/2.7/test/test_threading_local.py --- a/lib-python/2.7/test/test_threading_local.py +++ b/lib-python/2.7/test/test_threading_local.py @@ -173,8 +173,9 @@ obj = cls() obj.x = 5 self.assertEqual(obj.__dict__, {'x': 5}) - with self.assertRaises(AttributeError): - obj.__dict__ = {} + if test_support.check_impl_detail(): + with self.assertRaises(AttributeError): + obj.__dict__ = {} with self.assertRaises(AttributeError): del obj.__dict__ diff --git a/lib-python/2.7/test/test_traceback.py b/lib-python/2.7/test/test_traceback.py --- a/lib-python/2.7/test/test_traceback.py +++ b/lib-python/2.7/test/test_traceback.py @@ -5,7 +5,8 @@ import sys import unittest from imp import reload -from test.test_support import run_unittest, is_jython, Error +from test.test_support import run_unittest, Error +from test.test_support import impl_detail, check_impl_detail import traceback @@ -49,10 +50,8 @@ self.assertTrue(err[2].count('\n') == 1) # and no additional newline self.assertTrue(err[1].find("+") == err[2].find("^")) # in the right place + @impl_detail("other implementations may add a caret (why shouldn't they?)") def test_nocaret(self): - if is_jython: - # jython adds a caret in this case (why shouldn't it?) 
- return err = self.get_exception_format(self.syntax_error_without_caret, SyntaxError) self.assertTrue(len(err) == 3) @@ -63,8 +62,11 @@ IndentationError) self.assertTrue(len(err) == 4) self.assertTrue(err[1].strip() == "print 2") - self.assertIn("^", err[2]) - self.assertTrue(err[1].find("2") == err[2].find("^")) + if check_impl_detail(): + # on CPython, there is a "^" at the end of the line + # on PyPy, there is a "^" too, but at the start, more logically + self.assertIn("^", err[2]) + self.assertTrue(err[1].find("2") == err[2].find("^")) def test_bug737473(self): import os, tempfile, time @@ -74,7 +76,8 @@ try: sys.path.insert(0, testdir) testfile = os.path.join(testdir, 'test_bug737473.py') - print >> open(testfile, 'w'), """ + with open(testfile, 'w') as f: + print >> f, """ def test(): raise ValueError""" @@ -96,7 +99,8 @@ # three seconds are needed for this test to pass reliably :-( time.sleep(4) - print >> open(testfile, 'w'), """ + with open(testfile, 'w') as f: + print >> f, """ def test(): raise NotImplementedError""" reload(test_bug737473) diff --git a/lib-python/2.7/test/test_types.py b/lib-python/2.7/test/test_types.py --- a/lib-python/2.7/test/test_types.py +++ b/lib-python/2.7/test/test_types.py @@ -1,7 +1,8 @@ # Python test set -- part 6, built-in types from test.test_support import run_unittest, have_unicode, run_with_locale, \ - check_py3k_warnings + check_py3k_warnings, \ + impl_detail, check_impl_detail import unittest import sys import locale @@ -289,9 +290,14 @@ # array.array() returns an object that does not implement a char buffer, # something which int() uses for conversion. import array - try: int(buffer(array.array('c'))) + try: int(buffer(array.array('c', '5'))) except TypeError: pass - else: self.fail("char buffer (at C level) not working") + else: + if check_impl_detail(): + self.fail("char buffer (at C level) not working") + #else: + # it works on PyPy, which does not have the distinction + # between char buffer and binary buffer. 
XXX fine enough? def test_int__format__(self): def test(i, format_spec, result): @@ -741,6 +747,7 @@ for code in 'xXobns': self.assertRaises(ValueError, format, 0, ',' + code) + @impl_detail("the types' internal size attributes are CPython-only") def test_internal_sizes(self): self.assertGreater(object.__basicsize__, 0) self.assertGreater(tuple.__itemsize__, 0) diff --git a/lib-python/2.7/test/test_unicode.py b/lib-python/2.7/test/test_unicode.py --- a/lib-python/2.7/test/test_unicode.py +++ b/lib-python/2.7/test/test_unicode.py @@ -448,10 +448,11 @@ meth('\xff') with self.assertRaises(TypeError) as cm: meth(['f']) - exc = str(cm.exception) - self.assertIn('unicode', exc) - self.assertIn('str', exc) - self.assertIn('tuple', exc) + if test_support.check_impl_detail(): + exc = str(cm.exception) + self.assertIn('unicode', exc) + self.assertIn('str', exc) + self.assertIn('tuple', exc) @test_support.run_with_locale('LC_ALL', 'de_DE', 'fr_FR') def test_format_float(self): @@ -1062,7 +1063,8 @@ # to take a 64-bit long, this test should apply to all platforms. if sys.maxint > (1 << 32) or struct.calcsize('P') != 4: return - self.assertRaises(OverflowError, u't\tt\t'.expandtabs, sys.maxint) + self.assertRaises((OverflowError, MemoryError), + u't\tt\t'.expandtabs, sys.maxint) def test__format__(self): def test(value, format, expected): diff --git a/lib-python/2.7/test/test_unicodedata.py b/lib-python/2.7/test/test_unicodedata.py --- a/lib-python/2.7/test/test_unicodedata.py +++ b/lib-python/2.7/test/test_unicodedata.py @@ -233,10 +233,12 @@ # been loaded in this process. 
popen = subprocess.Popen(args, stderr=subprocess.PIPE) popen.wait() - self.assertEqual(popen.returncode, 1) - error = "SyntaxError: (unicode error) \N escapes not supported " \ - "(can't load unicodedata module)" - self.assertIn(error, popen.stderr.read()) + self.assertIn(popen.returncode, [0, 1]) # at least it did not segfault + if test.test_support.check_impl_detail(): + self.assertEqual(popen.returncode, 1) + error = "SyntaxError: (unicode error) \N escapes not supported " \ + "(can't load unicodedata module)" + self.assertIn(error, popen.stderr.read()) def test_decimal_numeric_consistent(self): # Test that decimal and numeric are consistent, diff --git a/lib-python/2.7/test/test_unpack.py b/lib-python/2.7/test/test_unpack.py --- a/lib-python/2.7/test/test_unpack.py +++ b/lib-python/2.7/test/test_unpack.py @@ -62,14 +62,14 @@ >>> a, b = t Traceback (most recent call last): ... - ValueError: too many values to unpack + ValueError: expected length 2, got 3 Unpacking tuple of wrong size >>> a, b = l Traceback (most recent call last): ... 
- ValueError: too many values to unpack + ValueError: expected length 2, got 3 Unpacking sequence too short diff --git a/lib-python/2.7/test/test_urllib2.py b/lib-python/2.7/test/test_urllib2.py --- a/lib-python/2.7/test/test_urllib2.py +++ b/lib-python/2.7/test/test_urllib2.py @@ -307,6 +307,9 @@ def getresponse(self): return MockHTTPResponse(MockFile(), {}, 200, "OK") + def close(self): + pass + class MockHandler: # useful for testing handler machinery # see add_ordered_mock_handlers() docstring diff --git a/lib-python/2.7/test/test_warnings.py b/lib-python/2.7/test/test_warnings.py --- a/lib-python/2.7/test/test_warnings.py +++ b/lib-python/2.7/test/test_warnings.py @@ -355,7 +355,8 @@ # test_support.import_fresh_module utility function def test_accelerated(self): self.assertFalse(original_warnings is self.module) - self.assertFalse(hasattr(self.module.warn, 'func_code')) + self.assertFalse(hasattr(self.module.warn, 'func_code') and + hasattr(self.module.warn.func_code, 'co_filename')) class PyWarnTests(BaseTest, WarnTests): module = py_warnings @@ -364,7 +365,8 @@ # test_support.import_fresh_module utility function def test_pure_python(self): self.assertFalse(original_warnings is self.module) - self.assertTrue(hasattr(self.module.warn, 'func_code')) + self.assertTrue(hasattr(self.module.warn, 'func_code') and + hasattr(self.module.warn.func_code, 'co_filename')) class WCmdLineTests(unittest.TestCase): diff --git a/lib-python/2.7/test/test_weakref.py b/lib-python/2.7/test/test_weakref.py --- a/lib-python/2.7/test/test_weakref.py +++ b/lib-python/2.7/test/test_weakref.py @@ -1,4 +1,3 @@ -import gc import sys import unittest import UserList @@ -6,6 +5,7 @@ import operator from test import test_support +from test.test_support import gc_collect # Used in ReferencesTestCase.test_ref_created_during_del() . 
ref_from_del = None @@ -70,6 +70,7 @@ ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del o + gc_collect() self.assertTrue(ref1() is None, "expected reference to be invalidated") self.assertTrue(ref2() is None, @@ -101,13 +102,16 @@ ref1 = weakref.proxy(o, self.callback) ref2 = weakref.proxy(o, self.callback) del o + gc_collect() def check(proxy): proxy.bar self.assertRaises(weakref.ReferenceError, check, ref1) self.assertRaises(weakref.ReferenceError, check, ref2) - self.assertRaises(weakref.ReferenceError, bool, weakref.proxy(C())) + ref3 = weakref.proxy(C()) + gc_collect() + self.assertRaises(weakref.ReferenceError, bool, ref3) self.assertTrue(self.cbcalled == 2) def check_basic_ref(self, factory): @@ -124,6 +128,7 @@ o = factory() ref = weakref.ref(o, self.callback) del o + gc_collect() self.assertTrue(self.cbcalled == 1, "callback did not properly set 'cbcalled'") self.assertTrue(ref() is None, @@ -148,6 +153,7 @@ self.assertTrue(weakref.getweakrefcount(o) == 2, "wrong weak ref count for object") del proxy + gc_collect() self.assertTrue(weakref.getweakrefcount(o) == 1, "wrong weak ref count for object after deleting proxy") @@ -325,6 +331,7 @@ "got wrong number of weak reference objects") del ref1, ref2, proxy1, proxy2 + gc_collect() self.assertTrue(weakref.getweakrefcount(o) == 0, "weak reference objects not unlinked from" " referent when discarded.") @@ -338,6 +345,7 @@ ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del ref1 + gc_collect() self.assertTrue(weakref.getweakrefs(o) == [ref2], "list of refs does not match") @@ -345,10 +353,12 @@ ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del ref2 + gc_collect() self.assertTrue(weakref.getweakrefs(o) == [ref1], "list of refs does not match") del ref1 + gc_collect() self.assertTrue(weakref.getweakrefs(o) == [], "list of refs not cleared") @@ -400,13 +410,11 @@ # when the second attempt to remove the instance from the "list # of all 
objects" occurs. - import gc - class C(object): pass c = C() - wr = weakref.ref(c, lambda ignore: gc.collect()) + wr = weakref.ref(c, lambda ignore: gc_collect()) del c # There endeth the first part. It gets worse. @@ -414,7 +422,7 @@ c1 = C() c1.i = C() - wr = weakref.ref(c1.i, lambda ignore: gc.collect()) + wr = weakref.ref(c1.i, lambda ignore: gc_collect()) c2 = C() c2.c1 = c1 @@ -430,8 +438,6 @@ del c2 def test_callback_in_cycle_1(self): - import gc - class J(object): pass @@ -467,11 +473,9 @@ # search II.__mro__, but that's NULL. The result was a segfault in # a release build, and an assert failure in a debug build. del I, J, II - gc.collect() + gc_collect() def test_callback_in_cycle_2(self): - import gc - # This is just like test_callback_in_cycle_1, except that II is an # old-style class. The symptom is different then: an instance of an # old-style class looks in its own __dict__ first. 'J' happens to @@ -496,11 +500,9 @@ I.wr = weakref.ref(J, I.acallback) del I, J, II - gc.collect() + gc_collect() def test_callback_in_cycle_3(self): - import gc - # This one broke the first patch that fixed the last two. In this # case, the objects reachable from the callback aren't also reachable # from the object (c1) *triggering* the callback: you can get to @@ -520,11 +522,9 @@ c2.wr = weakref.ref(c1, c2.cb) del c1, c2 - gc.collect() + gc_collect() def test_callback_in_cycle_4(self): - import gc - # Like test_callback_in_cycle_3, except c2 and c1 have different # classes. c2's class (C) isn't reachable from c1 then, so protecting # objects reachable from the dying object (c1) isn't enough to stop @@ -548,11 +548,9 @@ c2.wr = weakref.ref(c1, c2.cb) del c1, c2, C, D - gc.collect() + gc_collect() def test_callback_in_cycle_resurrection(self): - import gc - # Do something nasty in a weakref callback: resurrect objects # from dead cycles. 
For this to be attempted, the weakref and # its callback must also be part of the cyclic trash (else the @@ -583,7 +581,7 @@ del c1, c2, C # make them all trash self.assertEqual(alist, []) # del isn't enough to reclaim anything - gc.collect() + gc_collect() # c1.wr and c2.wr were part of the cyclic trash, so should have # been cleared without their callbacks executing. OTOH, the weakref # to C is bound to a function local (wr), and wasn't trash, so that @@ -593,12 +591,10 @@ self.assertEqual(wr(), None) del alist[:] - gc.collect() + gc_collect() self.assertEqual(alist, []) def test_callbacks_on_callback(self): - import gc - # Set up weakref callbacks *on* weakref callbacks. alist = [] def safe_callback(ignore): @@ -626,12 +622,12 @@ del callback, c, d, C self.assertEqual(alist, []) # del isn't enough to clean up cycles - gc.collect() + gc_collect() self.assertEqual(alist, ["safe_callback called"]) self.assertEqual(external_wr(), None) del alist[:] - gc.collect() + gc_collect() self.assertEqual(alist, []) def test_gc_during_ref_creation(self): @@ -641,9 +637,11 @@ self.check_gc_during_creation(weakref.proxy) def check_gc_during_creation(self, makeref): - thresholds = gc.get_threshold() - gc.set_threshold(1, 1, 1) - gc.collect() + if test_support.check_impl_detail(): + import gc + thresholds = gc.get_threshold() + gc.set_threshold(1, 1, 1) + gc_collect() class A: pass @@ -663,7 +661,8 @@ weakref.ref(referenced, callback) finally: - gc.set_threshold(*thresholds) + if test_support.check_impl_detail(): + gc.set_threshold(*thresholds) def test_ref_created_during_del(self): # Bug #1377858 @@ -683,7 +682,7 @@ r = weakref.ref(Exception) self.assertRaises(TypeError, r.__init__, 0, 0, 0, 0, 0) # No exception should be raised here - gc.collect() + gc_collect() def test_classes(self): # Check that both old-style classes and new-style classes @@ -696,12 +695,12 @@ weakref.ref(int) a = weakref.ref(A, l.append) A = None - gc.collect() + gc_collect() self.assertEqual(a(), None) 
self.assertEqual(l, [a]) b = weakref.ref(B, l.append) B = None - gc.collect() + gc_collect() self.assertEqual(b(), None) self.assertEqual(l, [a, b]) @@ -722,6 +721,7 @@ self.assertTrue(mr.called) self.assertEqual(mr.value, 24) del o + gc_collect() self.assertTrue(mr() is None) self.assertTrue(mr.called) @@ -738,9 +738,11 @@ self.assertEqual(weakref.getweakrefcount(o), 3) refs = weakref.getweakrefs(o) self.assertEqual(len(refs), 3) - self.assertTrue(r2 is refs[0]) - self.assertIn(r1, refs[1:]) - self.assertIn(r3, refs[1:]) + assert set(refs) == set((r1, r2, r3)) + if test_support.check_impl_detail(): + self.assertTrue(r2 is refs[0]) + self.assertIn(r1, refs[1:]) + self.assertIn(r3, refs[1:]) def test_subclass_refs_dont_conflate_callbacks(self): class MyRef(weakref.ref): @@ -839,15 +841,18 @@ del items1, items2 self.assertTrue(len(dict) == self.COUNT) del objects[0] + gc_collect() self.assertTrue(len(dict) == (self.COUNT - 1), "deleting object did not cause dictionary update") del objects, o + gc_collect() self.assertTrue(len(dict) == 0, "deleting the values did not clear the dictionary") # regression on SF bug #447152: dict = weakref.WeakValueDictionary() self.assertRaises(KeyError, dict.__getitem__, 1) dict[2] = C() + gc_collect() self.assertRaises(KeyError, dict.__getitem__, 2) def test_weak_keys(self): @@ -868,9 +873,11 @@ del items1, items2 self.assertTrue(len(dict) == self.COUNT) del objects[0] + gc_collect() self.assertTrue(len(dict) == (self.COUNT - 1), "deleting object did not cause dictionary update") del objects, o + gc_collect() self.assertTrue(len(dict) == 0, "deleting the keys did not clear the dictionary") o = Object(42) @@ -986,13 +993,13 @@ self.assertTrue(len(weakdict) == 2) k, v = weakdict.popitem() self.assertTrue(len(weakdict) == 1) - if k is key1: + if k == key1: self.assertTrue(v is value1) else: self.assertTrue(v is value2) k, v = weakdict.popitem() self.assertTrue(len(weakdict) == 0) - if k is key1: + if k == key1: self.assertTrue(v is 
value1) else: self.assertTrue(v is value2) @@ -1137,6 +1144,7 @@ for o in objs: count += 1 del d[o] + gc_collect() self.assertEqual(len(d), 0) self.assertEqual(count, 2) @@ -1177,6 +1185,7 @@ >>> o is o2 True >>> del o, o2 +>>> gc_collect() >>> print r() None @@ -1229,6 +1238,7 @@ >>> id2obj(a_id) is a True >>> del a +>>> gc_collect() >>> try: ... id2obj(a_id) ... except KeyError: diff --git a/lib-python/2.7/test/test_weakset.py b/lib-python/2.7/test/test_weakset.py --- a/lib-python/2.7/test/test_weakset.py +++ b/lib-python/2.7/test/test_weakset.py @@ -57,6 +57,7 @@ self.assertEqual(len(self.s), len(self.d)) self.assertEqual(len(self.fs), 1) del self.obj + test_support.gc_collect() self.assertEqual(len(self.fs), 0) def test_contains(self): @@ -66,6 +67,7 @@ self.assertNotIn(1, self.s) self.assertIn(self.obj, self.fs) del self.obj + test_support.gc_collect() self.assertNotIn(SomeClass('F'), self.fs) def test_union(self): @@ -204,6 +206,7 @@ self.assertEqual(self.s, dup) self.assertRaises(TypeError, self.s.add, []) self.fs.add(Foo()) + test_support.gc_collect() self.assertTrue(len(self.fs) == 1) self.fs.add(self.obj) self.assertTrue(len(self.fs) == 1) @@ -330,10 +333,11 @@ next(it) # Trigger internal iteration # Destroy an item del items[-1] - gc.collect() # just in case + test_support.gc_collect() # We have removed either the first consumed items, or another one self.assertIn(len(list(it)), [len(items), len(items) - 1]) del it + test_support.gc_collect() # The removal has been committed self.assertEqual(len(s), len(items)) diff --git a/lib-python/2.7/test/test_xml_etree.py b/lib-python/2.7/test/test_xml_etree.py --- a/lib-python/2.7/test/test_xml_etree.py +++ b/lib-python/2.7/test/test_xml_etree.py @@ -1633,10 +1633,10 @@ Check reference leak. >>> xmltoolkit63() - >>> count = sys.getrefcount(None) + >>> count = sys.getrefcount(None) #doctest: +SKIP >>> for i in range(1000): ... 
xmltoolkit63() - >>> sys.getrefcount(None) - count + >>> sys.getrefcount(None) - count #doctest: +SKIP 0 """ diff --git a/lib-python/2.7/test/test_xmlrpc.py b/lib-python/2.7/test/test_xmlrpc.py --- a/lib-python/2.7/test/test_xmlrpc.py +++ b/lib-python/2.7/test/test_xmlrpc.py @@ -308,7 +308,7 @@ global ADDR, PORT, URL ADDR, PORT = serv.socket.getsockname() #connect to IP address directly. This avoids socket.create_connection() - #trying to connect to "localhost" using all address families, which + #trying to connect to to "localhost" using all address families, which #causes slowdown e.g. on vista which supports AF_INET6. The server listens #on AF_INET only. URL = "http://%s:%d"%(ADDR, PORT) @@ -367,7 +367,7 @@ global ADDR, PORT, URL ADDR, PORT = serv.socket.getsockname() #connect to IP address directly. This avoids socket.create_connection() - #trying to connect to "localhost" using all address families, which + #trying to connect to to "localhost" using all address families, which #causes slowdown e.g. on vista which supports AF_INET6. The server listens #on AF_INET only. URL = "http://%s:%d"%(ADDR, PORT) @@ -435,6 +435,7 @@ def tearDown(self): # wait on the server thread to terminate + test_support.gc_collect() # to close the active connections self.evt.wait(10) # disable traceback reporting @@ -472,9 +473,6 @@ # protocol error; provide additional information in test output self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) - def test_unicode_host(self): - server = xmlrpclib.ServerProxy(u"http://%s:%d/RPC2"%(ADDR, PORT)) - self.assertEqual(server.add("a", u"\xe9"), u"a\xe9") # [ch] The test 404 is causing lots of false alarms. def XXXtest_404(self): @@ -589,12 +587,6 @@ # This avoids waiting for the socket timeout. self.test_simple1() - def test_partial_post(self): - # Check that a partial POST doesn't make the server loop: issue #14001. 
- conn = httplib.HTTPConnection(ADDR, PORT) - conn.request('POST', '/RPC2 HTTP/1.0\r\nContent-Length: 100\r\n\r\nbye') - conn.close() - class MultiPathServerTestCase(BaseServerTestCase): threadFunc = staticmethod(http_multi_server) request_count = 2 diff --git a/lib-python/2.7/test/test_zipfile.py b/lib-python/2.7/test/test_zipfile.py --- a/lib-python/2.7/test/test_zipfile.py +++ b/lib-python/2.7/test/test_zipfile.py @@ -234,8 +234,9 @@ # Read the ZIP archive with zipfile.ZipFile(f, "r") as zipfp: - for line, zipline in zip(self.line_gen, zipfp.open(TESTFN)): - self.assertEqual(zipline, line + '\n') + with zipfp.open(TESTFN) as f: + for line, zipline in zip(self.line_gen, f): + self.assertEqual(zipline, line + '\n') def test_readline_read_stored(self): # Issue #7610: calls to readline() interleaved with calls to read(). @@ -340,7 +341,8 @@ produces the expected result.""" with zipfile.ZipFile(TESTFN2, "w") as zipfp: zipfp.write(TESTFN) - self.assertEqual(zipfp.read(TESTFN), open(TESTFN).read()) + with open(TESTFN) as f: + self.assertEqual(zipfp.read(TESTFN), f.read()) @skipUnless(zlib, "requires zlib") def test_per_file_compression(self): @@ -382,7 +384,8 @@ self.assertEqual(writtenfile, correctfile) # make sure correct data is in correct file - self.assertEqual(fdata, open(writtenfile, "rb").read()) + with open(writtenfile, "rb") as fid: + self.assertEqual(fdata, fid.read()) os.remove(writtenfile) # remove the test file subdirectories @@ -401,24 +404,25 @@ else: outfile = os.path.join(os.getcwd(), fpath) - self.assertEqual(fdata, open(outfile, "rb").read()) + with open(outfile, "rb") as fid: + self.assertEqual(fdata, fid.read()) os.remove(outfile) # remove the test file subdirectories shutil.rmtree(os.path.join(os.getcwd(), 'ziptest2dir')) def test_writestr_compression(self): - zipfp = zipfile.ZipFile(TESTFN2, "w") - zipfp.writestr("a.txt", "hello world", compress_type=zipfile.ZIP_STORED) - if zlib: - zipfp.writestr("b.txt", "hello world", 
compress_type=zipfile.ZIP_DEFLATED) + with zipfile.ZipFile(TESTFN2, "w") as zipfp: + zipfp.writestr("a.txt", "hello world", compress_type=zipfile.ZIP_STORED) + if zlib: + zipfp.writestr("b.txt", "hello world", compress_type=zipfile.ZIP_DEFLATED) - info = zipfp.getinfo('a.txt') - self.assertEqual(info.compress_type, zipfile.ZIP_STORED) + info = zipfp.getinfo('a.txt') + self.assertEqual(info.compress_type, zipfile.ZIP_STORED) - if zlib: - info = zipfp.getinfo('b.txt') - self.assertEqual(info.compress_type, zipfile.ZIP_DEFLATED) + if zlib: + info = zipfp.getinfo('b.txt') + self.assertEqual(info.compress_type, zipfile.ZIP_DEFLATED) def zip_test_writestr_permissions(self, f, compression): @@ -646,7 +650,8 @@ def test_write_non_pyfile(self): with zipfile.PyZipFile(TemporaryFile(), "w") as zipfp: - open(TESTFN, 'w').write('most definitely not a python file') + with open(TESTFN, 'w') as f: + f.write('most definitely not a python file') self.assertRaises(RuntimeError, zipfp.writepy, TESTFN) os.remove(TESTFN) @@ -795,7 +800,8 @@ self.assertRaises(RuntimeError, zipf.open, "foo.txt") self.assertRaises(RuntimeError, zipf.testzip) self.assertRaises(RuntimeError, zipf.writestr, "bogus.txt", "bogus") - open(TESTFN, 'w').write('zipfile test data') + with open(TESTFN, 'w') as fp: + fp.write('zipfile test data') self.assertRaises(RuntimeError, zipf.write, TESTFN) def test_bad_constructor_mode(self): @@ -803,7 +809,6 @@ self.assertRaises(RuntimeError, zipfile.ZipFile, TESTFN, "q") def test_bad_open_mode(self): - """Check that bad modes passed to ZipFile.open are caught.""" with zipfile.ZipFile(TESTFN, mode="w") as zipf: zipf.writestr("foo.txt", "O, for a Muse of Fire!") @@ -851,7 +856,6 @@ def test_comments(self): """Check that comments on the archive are handled properly.""" - # check default comment is empty with zipfile.ZipFile(TESTFN, mode="w") as zipf: self.assertEqual(zipf.comment, '') @@ -953,14 +957,16 @@ with zipfile.ZipFile(TESTFN, mode="w") as zipf: pass try: - zipf = 
zipfile.ZipFile(TESTFN, mode="r") + with zipfile.ZipFile(TESTFN, mode="r") as zipf: + pass except zipfile.BadZipfile: self.fail("Unable to create empty ZIP file in 'w' mode") with zipfile.ZipFile(TESTFN, mode="a") as zipf: pass try: - zipf = zipfile.ZipFile(TESTFN, mode="r") + with zipfile.ZipFile(TESTFN, mode="r") as zipf: + pass except: self.fail("Unable to create empty ZIP file in 'a' mode") @@ -1160,6 +1166,8 @@ data1 += zopen1.read(500) data2 += zopen2.read(500) self.assertEqual(data1, data2) + zopen1.close() + zopen2.close() def test_different_file(self): # Verify that (when the ZipFile is in control of creating file objects) @@ -1207,9 +1215,9 @@ def test_store_dir(self): os.mkdir(os.path.join(TESTFN2, "x")) - zipf = zipfile.ZipFile(TESTFN, "w") - zipf.write(os.path.join(TESTFN2, "x"), "x") - self.assertTrue(zipf.filelist[0].filename.endswith("x/")) + with zipfile.ZipFile(TESTFN, "w") as zipf: + zipf.write(os.path.join(TESTFN2, "x"), "x") + self.assertTrue(zipf.filelist[0].filename.endswith("x/")) def tearDown(self): shutil.rmtree(TESTFN2) @@ -1226,7 +1234,8 @@ for n, s in enumerate(self.seps): self.arcdata[s] = s.join(self.line_gen) + s self.arcfiles[s] = '%s-%d' % (TESTFN, n) - open(self.arcfiles[s], "wb").write(self.arcdata[s]) + with open(self.arcfiles[s], "wb") as f: + f.write(self.arcdata[s]) def make_test_archive(self, f, compression): # Create the ZIP archive @@ -1295,8 +1304,9 @@ # Read the ZIP archive with zipfile.ZipFile(f, "r") as zipfp: for sep, fn in self.arcfiles.items(): - for line, zipline in zip(self.line_gen, zipfp.open(fn, "rU")): - self.assertEqual(zipline, line + '\n') + with zipfp.open(fn, "rU") as f: + for line, zipline in zip(self.line_gen, f): + self.assertEqual(zipline, line + '\n') def test_read_stored(self): for f in (TESTFN2, TemporaryFile(), StringIO()): diff --git a/lib-python/2.7/test/test_zlib.py b/lib-python/2.7/test/test_zlib.py --- a/lib-python/2.7/test/test_zlib.py +++ b/lib-python/2.7/test/test_zlib.py @@ -1,6 +1,7 @@ 
import unittest from test.test_support import TESTFN, run_unittest, import_module, unlink, requires import binascii +import os import random from test.test_support import precisionbigmemtest, _1G, _4G import sys @@ -99,14 +100,7 @@ class BaseCompressTestCase(object): def check_big_compress_buffer(self, size, compress_func): - _1M = 1024 * 1024 - fmt = "%%0%dx" % (2 * _1M) - # Generate 10MB worth of random, and expand it by repeating it. - # The assumption is that zlib's memory is not big enough to exploit - # such spread out redundancy. - data = ''.join([binascii.a2b_hex(fmt % random.getrandbits(8 * _1M)) - for i in range(10)]) - data = data * (size // len(data) + 1) + data = os.urandom(size) try: compress_func(data) finally: diff --git a/lib-python/2.7/trace.py b/lib-python/2.7/trace.py --- a/lib-python/2.7/trace.py +++ b/lib-python/2.7/trace.py @@ -559,6 +559,10 @@ if len(funcs) == 1: dicts = [d for d in gc.get_referrers(funcs[0]) if isinstance(d, dict)] + if len(dicts) == 0: + # PyPy may store functions directly on the class + # (more exactly: the container is not a Python object) + dicts = funcs if len(dicts) == 1: classes = [c for c in gc.get_referrers(dicts[0]) if hasattr(c, "__bases__")] diff --git a/lib-python/2.7/urllib2.py b/lib-python/2.7/urllib2.py --- a/lib-python/2.7/urllib2.py +++ b/lib-python/2.7/urllib2.py @@ -1171,6 +1171,7 @@ except TypeError: #buffering kw not supported r = h.getresponse() except socket.error, err: # XXX what error? 
+ h.close() raise URLError(err) # Pick apart the HTTPResponse object to get the addinfourl diff --git a/lib-python/2.7/uuid.py b/lib-python/2.7/uuid.py --- a/lib-python/2.7/uuid.py +++ b/lib-python/2.7/uuid.py @@ -406,8 +406,12 @@ continue if hasattr(lib, 'uuid_generate_random'): _uuid_generate_random = lib.uuid_generate_random + _uuid_generate_random.argtypes = [ctypes.c_char * 16] + _uuid_generate_random.restype = None if hasattr(lib, 'uuid_generate_time'): _uuid_generate_time = lib.uuid_generate_time + _uuid_generate_time.argtypes = [ctypes.c_char * 16] + _uuid_generate_time.restype = None # The uuid_generate_* functions are broken on MacOS X 10.5, as noted # in issue #8621 the function generates the same sequence of values @@ -436,6 +440,9 @@ lib = None _UuidCreate = getattr(lib, 'UuidCreateSequential', getattr(lib, 'UuidCreate', None)) + if _UuidCreate is not None: + _UuidCreate.argtypes = [ctypes.c_char * 16] + _UuidCreate.restype = ctypes.c_int except: pass diff --git a/lib-python/2.7/zipfile.py b/lib-python/2.7/zipfile.py --- a/lib-python/2.7/zipfile.py +++ b/lib-python/2.7/zipfile.py @@ -648,6 +648,10 @@ return data +class ZipExtFileWithClose(ZipExtFile): + def close(self): + self._fileobj.close() + class ZipFile: """ Class with methods to open, read, write, close, list zip files. @@ -843,9 +847,9 @@ try: # Read by chunks, to avoid an OverflowError or a # MemoryError with very large embedded files. 
- f = self.open(zinfo.filename, "r") - while f.read(chunk_size): # Check CRC-32 - pass + with self.open(zinfo.filename, "r") as f: + while f.read(chunk_size): # Check CRC-32 + pass except BadZipfile: return zinfo.filename @@ -864,7 +868,9 @@ def read(self, name, pwd=None): """Return file bytes (as a string) for name.""" - return self.open(name, "r", pwd).read() + with self.open(name, "r", pwd) as f: + retval = f.read() + return retval def open(self, name, mode="r", pwd=None): """Return file-like object for 'name'.""" @@ -881,59 +887,66 @@ else: zef_file = open(self.filename, 'rb') - # Make sure we have an info object - if isinstance(name, ZipInfo): - # 'name' is already an info object - zinfo = name + try: + # Make sure we have an info object + if isinstance(name, ZipInfo): + # 'name' is already an info object + zinfo = name + else: + # Get info object for name + zinfo = self.getinfo(name) + + zef_file.seek(zinfo.header_offset, 0) + + # Skip the file header: + fheader = zef_file.read(sizeFileHeader) + if fheader[0:4] != stringFileHeader: + raise BadZipfile, "Bad magic number for file header" + + fheader = struct.unpack(structFileHeader, fheader) + fname = zef_file.read(fheader[_FH_FILENAME_LENGTH]) + if fheader[_FH_EXTRA_FIELD_LENGTH]: + zef_file.read(fheader[_FH_EXTRA_FIELD_LENGTH]) + + if fname != zinfo.orig_filename: + raise BadZipfile, \ + 'File name in directory "%s" and header "%s" differ.' % ( + zinfo.orig_filename, fname) + + # check for encrypted flag & handle password + is_encrypted = zinfo.flag_bits & 0x1 + zd = None + if is_encrypted: + if not pwd: + pwd = self.pwd + if not pwd: + raise RuntimeError, "File %s is encrypted, " \ + "password required for extraction" % name + + zd = _ZipDecrypter(pwd) + # The first 12 bytes in the cypher stream is an encryption header + # used to strengthen the algorithm. 
The first 11 bytes are + # completely random, while the 12th contains the MSB of the CRC, + # or the MSB of the file time depending on the header type + # and is used to check the correctness of the password. + bytes = zef_file.read(12) + h = map(zd, bytes[0:12]) + if zinfo.flag_bits & 0x8: + # compare against the file type from extended local headers + check_byte = (zinfo._raw_time >> 8) & 0xff + else: + # compare against the CRC otherwise + check_byte = (zinfo.CRC >> 24) & 0xff + if ord(h[11]) != check_byte: + raise RuntimeError("Bad password for file", name) + except: + if not self._filePassed: + zef_file.close() + raise + if self._filePassed: + return ZipExtFile(zef_file, mode, zinfo, zd) else: - # Get info object for name - zinfo = self.getinfo(name) - - zef_file.seek(zinfo.header_offset, 0) - - # Skip the file header: - fheader = zef_file.read(sizeFileHeader) - if fheader[0:4] != stringFileHeader: - raise BadZipfile, "Bad magic number for file header" - - fheader = struct.unpack(structFileHeader, fheader) - fname = zef_file.read(fheader[_FH_FILENAME_LENGTH]) - if fheader[_FH_EXTRA_FIELD_LENGTH]: - zef_file.read(fheader[_FH_EXTRA_FIELD_LENGTH]) - - if fname != zinfo.orig_filename: - raise BadZipfile, \ - 'File name in directory "%s" and header "%s" differ.' % ( - zinfo.orig_filename, fname) - - # check for encrypted flag & handle password - is_encrypted = zinfo.flag_bits & 0x1 - zd = None - if is_encrypted: - if not pwd: - pwd = self.pwd - if not pwd: - raise RuntimeError, "File %s is encrypted, " \ - "password required for extraction" % name - - zd = _ZipDecrypter(pwd) - # The first 12 bytes in the cypher stream is an encryption header - # used to strengthen the algorithm. The first 11 bytes are - # completely random, while the 12th contains the MSB of the CRC, - # or the MSB of the file time depending on the header type - # and is used to check the correctness of the password. 
- bytes = zef_file.read(12) - h = map(zd, bytes[0:12]) - if zinfo.flag_bits & 0x8: - # compare against the file type from extended local headers - check_byte = (zinfo._raw_time >> 8) & 0xff - else: - # compare against the CRC otherwise - check_byte = (zinfo.CRC >> 24) & 0xff - if ord(h[11]) != check_byte: - raise RuntimeError("Bad password for file", name) - - return ZipExtFile(zef_file, mode, zinfo, zd) + return ZipExtFileWithClose(zef_file, mode, zinfo, zd) def extract(self, member, path=None, pwd=None): """Extract a member from the archive to the current working directory, @@ -989,7 +1002,6 @@ if not os.path.isdir(targetpath): os.mkdir(targetpath) return targetpath - source = self.open(member, pwd=pwd) target = file(targetpath, "wb") shutil.copyfileobj(source, target) diff --git a/lib-python/3.2/__future__.py b/lib-python/3.2/__future__.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/__future__.py @@ -0,0 +1,134 @@ +"""Record of phased-in incompatible language changes. + +Each line is of the form: + + FeatureName = "_Feature(" OptionalRelease "," MandatoryRelease "," + CompilerFlag ")" + +where, normally, OptionalRelease < MandatoryRelease, and both are 5-tuples +of the same form as sys.version_info: + + (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int + PY_MINOR_VERSION, # the 1; an int + PY_MICRO_VERSION, # the 0; an int + PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string + PY_RELEASE_SERIAL # the 3; an int + ) + +OptionalRelease records the first release in which + + from __future__ import FeatureName + +was accepted. + +In the case of MandatoryReleases that have not yet occurred, +MandatoryRelease predicts the release in which the feature will become part +of the language. + +Else MandatoryRelease records when the feature became part of the language; +in releases at or after that, modules no longer need + + from __future__ import FeatureName + +to use the feature in question, but may continue to use such imports. 
+ +MandatoryRelease may also be None, meaning that a planned feature got +dropped. + +Instances of class _Feature have two corresponding methods, +.getOptionalRelease() and .getMandatoryRelease(). + +CompilerFlag is the (bitfield) flag that should be passed in the fourth +argument to the builtin function compile() to enable the feature in +dynamically compiled code. This flag is stored in the .compiler_flag +attribute on _Future instances. These values must match the appropriate +#defines of CO_xxx flags in Include/compile.h. + +No feature line is ever to be deleted from this file. +""" + +all_feature_names = [ + "nested_scopes", + "generators", + "division", + "absolute_import", + "with_statement", + "print_function", + "unicode_literals", + "barry_as_FLUFL", +] + +__all__ = ["all_feature_names"] + all_feature_names + +# The CO_xxx symbols are defined here under the same names used by +# compile.h, so that an editor search will find them here. However, +# they're not exported in __all__, because they don't really belong to +# this module. +CO_NESTED = 0x0010 # nested_scopes +CO_GENERATOR_ALLOWED = 0 # generators (obsolete, was 0x1000) +CO_FUTURE_DIVISION = 0x2000 # division +CO_FUTURE_ABSOLUTE_IMPORT = 0x4000 # perform absolute imports by default +CO_FUTURE_WITH_STATEMENT = 0x8000 # with statement +CO_FUTURE_PRINT_FUNCTION = 0x10000 # print function +CO_FUTURE_UNICODE_LITERALS = 0x20000 # unicode string literals +CO_FUTURE_BARRY_AS_BDFL = 0x40000 + +class _Feature: + def __init__(self, optionalRelease, mandatoryRelease, compiler_flag): + self.optional = optionalRelease + self.mandatory = mandatoryRelease + self.compiler_flag = compiler_flag + + def getOptionalRelease(self): + """Return first release in which this feature was recognized. + + This is a 5-tuple, of the same form as sys.version_info. + """ + + return self.optional + + def getMandatoryRelease(self): + """Return release in which this feature will become mandatory. 
+ + This is a 5-tuple, of the same form as sys.version_info, or, if + the feature was dropped, is None. + """ + + return self.mandatory + + def __repr__(self): + return "_Feature" + repr((self.optional, + self.mandatory, + self.compiler_flag)) + +nested_scopes = _Feature((2, 1, 0, "beta", 1), + (2, 2, 0, "alpha", 0), + CO_NESTED) + +generators = _Feature((2, 2, 0, "alpha", 1), + (2, 3, 0, "final", 0), + CO_GENERATOR_ALLOWED) + +division = _Feature((2, 2, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_DIVISION) + +absolute_import = _Feature((2, 5, 0, "alpha", 1), + (2, 7, 0, "alpha", 0), + CO_FUTURE_ABSOLUTE_IMPORT) + +with_statement = _Feature((2, 5, 0, "alpha", 1), + (2, 6, 0, "alpha", 0), + CO_FUTURE_WITH_STATEMENT) + +print_function = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_PRINT_FUNCTION) + +unicode_literals = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_UNICODE_LITERALS) + +barry_as_FLUFL = _Feature((3, 1, 0, "alpha", 2), + (3, 9, 0, "alpha", 0), + CO_FUTURE_BARRY_AS_BDFL) diff --git a/lib-python/3.2/__phello__.foo.py b/lib-python/3.2/__phello__.foo.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/__phello__.foo.py @@ -0,0 +1,1 @@ +# This file exists as a helper for the test.test_frozen module. diff --git a/lib-python/3.2/_abcoll.py b/lib-python/3.2/_abcoll.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_abcoll.py @@ -0,0 +1,623 @@ +# Copyright 2007 Google, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +"""Abstract Base Classes (ABCs) for collections, according to PEP 3119. + +DON'T USE THIS MODULE DIRECTLY! The classes here should be imported +via collections; they are defined here only to alleviate certain +bootstrapping issues. Unit tests are in test_collections. 
+""" + +from abc import ABCMeta, abstractmethod +import sys + +__all__ = ["Hashable", "Iterable", "Iterator", + "Sized", "Container", "Callable", + "Set", "MutableSet", + "Mapping", "MutableMapping", + "MappingView", "KeysView", "ItemsView", "ValuesView", + "Sequence", "MutableSequence", + "ByteString", + ] + + +### collection related types which are not exposed through builtin ### +## iterators ## +bytes_iterator = type(iter(b'')) +bytearray_iterator = type(iter(bytearray())) +#callable_iterator = ??? +dict_keyiterator = type(iter({}.keys())) +dict_valueiterator = type(iter({}.values())) +dict_itemiterator = type(iter({}.items())) +list_iterator = type(iter([])) +list_reverseiterator = type(iter(reversed([]))) +range_iterator = type(iter(range(0))) +set_iterator = type(iter(set())) +str_iterator = type(iter("")) +tuple_iterator = type(iter(())) +zip_iterator = type(iter(zip())) +## views ## +dict_keys = type({}.keys()) +dict_values = type({}.values()) +dict_items = type({}.items()) +## misc ## +dict_proxy = type(type.__dict__) + + +### ONE-TRICK PONIES ### + +class Hashable(metaclass=ABCMeta): + + @abstractmethod + def __hash__(self): + return 0 + + @classmethod + def __subclasshook__(cls, C): + if cls is Hashable: + for B in C.__mro__: + if "__hash__" in B.__dict__: + if B.__dict__["__hash__"]: + return True + break + return NotImplemented + + +class Iterable(metaclass=ABCMeta): + + @abstractmethod + def __iter__(self): + while False: + yield None + + @classmethod + def __subclasshook__(cls, C): + if cls is Iterable: + if any("__iter__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Iterator(Iterable): + + @abstractmethod + def __next__(self): + raise StopIteration + + def __iter__(self): + return self + + @classmethod + def __subclasshook__(cls, C): + if cls is Iterator: + if (any("__next__" in B.__dict__ for B in C.__mro__) and + any("__iter__" in B.__dict__ for B in C.__mro__)): + return True + return NotImplemented + 
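The `__subclasshook__` pattern defined above is what lets these ABCs recognize "structural" subclasses: any class whose MRO defines the required special methods is treated as a virtual subclass, with no inheritance or registration. A minimal sketch, using the public `collections.abc` names under which these bootstrap classes are normally imported:

```python
from collections.abc import Iterable, Iterator

# No registration and no inheritance: defining __iter__/__next__ is enough,
# because Iterator.__subclasshook__ scans the MRO for those method names.
class Countdown:
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return self
    def __next__(self):
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1

c = Countdown(3)
assert isinstance(c, Iterable)
assert isinstance(c, Iterator)
assert list(Countdown(3)) == [3, 2, 1]
```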
+Iterator.register(bytes_iterator) +Iterator.register(bytearray_iterator) +#Iterator.register(callable_iterator) +Iterator.register(dict_keyiterator) +Iterator.register(dict_valueiterator) +Iterator.register(dict_itemiterator) +Iterator.register(list_iterator) +Iterator.register(list_reverseiterator) +Iterator.register(range_iterator) +Iterator.register(set_iterator) +Iterator.register(str_iterator) +Iterator.register(tuple_iterator) +Iterator.register(zip_iterator) + +class Sized(metaclass=ABCMeta): + + @abstractmethod + def __len__(self): + return 0 + + @classmethod + def __subclasshook__(cls, C): + if cls is Sized: + if any("__len__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Container(metaclass=ABCMeta): + + @abstractmethod + def __contains__(self, x): + return False + + @classmethod + def __subclasshook__(cls, C): + if cls is Container: + if any("__contains__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Callable(metaclass=ABCMeta): + + @abstractmethod + def __call__(self, *args, **kwds): + return False + + @classmethod + def __subclasshook__(cls, C): + if cls is Callable: + if any("__call__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +### SETS ### + + +class Set(Sized, Iterable, Container): + + """A set is a finite, iterable container. + + This class provides concrete generic implementations of all + methods except for __contains__, __iter__ and __len__. + + To override the comparisons (presumably for speed, as the + semantics are fixed), all you have to do is redefine __le__ and + then the other operations will automatically follow suit. 
+ """ + + def __le__(self, other): + if not isinstance(other, Set): + return NotImplemented + if len(self) > len(other): + return False + for elem in self: + if elem not in other: + return False + return True + + def __lt__(self, other): + if not isinstance(other, Set): + return NotImplemented + return len(self) < len(other) and self.__le__(other) + + def __gt__(self, other): + if not isinstance(other, Set): + return NotImplemented + return other < self + + def __ge__(self, other): + if not isinstance(other, Set): + return NotImplemented + return other <= self + + def __eq__(self, other): + if not isinstance(other, Set): + return NotImplemented + return len(self) == len(other) and self.__le__(other) + + def __ne__(self, other): + return not (self == other) + + @classmethod + def _from_iterable(cls, it): + '''Construct an instance of the class from any iterable input. + + Must override this method if the class constructor signature + does not accept an iterable for an input. + ''' + return cls(it) + + def __and__(self, other): + if not isinstance(other, Iterable): + return NotImplemented + return self._from_iterable(value for value in other if value in self) + + def isdisjoint(self, other): + for value in other: + if value in self: + return False + return True + + def __or__(self, other): + if not isinstance(other, Iterable): + return NotImplemented + chain = (e for s in (self, other) for e in s) + return self._from_iterable(chain) + + def __sub__(self, other): + if not isinstance(other, Set): + if not isinstance(other, Iterable): + return NotImplemented + other = self._from_iterable(other) + return self._from_iterable(value for value in self + if value not in other) + + def __xor__(self, other): + if not isinstance(other, Set): + if not isinstance(other, Iterable): + return NotImplemented + other = self._from_iterable(other) + return (self - other) | (other - self) + + def _hash(self): + """Compute the hash value of a set. 
+ + Note that we don't define __hash__: not all sets are hashable. + But if you define a hashable set type, its __hash__ should + call this function. + + This must be compatible __eq__. + + All sets ought to compare equal if they contain the same + elements, regardless of how they are implemented, and + regardless of the order of the elements; so there's not much + freedom for __eq__ or __hash__. We match the algorithm used + by the built-in frozenset type. + """ + MAX = sys.maxsize + MASK = 2 * MAX + 1 + n = len(self) + h = 1927868237 * (n + 1) + h &= MASK + for x in self: + hx = hash(x) + h ^= (hx ^ (hx << 16) ^ 89869747) * 3644798167 + h &= MASK + h = h * 69069 + 907133923 + h &= MASK + if h > MAX: + h -= MASK + 1 + if h == -1: + h = 590923713 + return h + +Set.register(frozenset) + + +class MutableSet(Set): + + @abstractmethod + def add(self, value): + """Add an element.""" + raise NotImplementedError + + @abstractmethod + def discard(self, value): + """Remove an element. Do not raise an exception if absent.""" + raise NotImplementedError + + def remove(self, value): + """Remove an element. If not a member, raise a KeyError.""" + if value not in self: + raise KeyError(value) + self.discard(value) + + def pop(self): + """Return the popped value. Raise KeyError if empty.""" + it = iter(self) + try: + value = next(it) + except StopIteration: + raise KeyError + self.discard(value) + return value + + def clear(self): + """This is slow (creates N new iterators!) 
but effective.""" + try: + while True: + self.pop() + except KeyError: + pass + + def __ior__(self, it): + for value in it: + self.add(value) + return self + + def __iand__(self, it): + for value in (self - it): + self.discard(value) + return self + + def __ixor__(self, it): + if it is self: + self.clear() + else: + if not isinstance(it, Set): + it = self._from_iterable(it) + for value in it: + if value in self: + self.discard(value) + else: + self.add(value) + return self + + def __isub__(self, it): + if it is self: + self.clear() + else: + for value in it: + self.discard(value) + return self + +MutableSet.register(set) + + +### MAPPINGS ### + + +class Mapping(Sized, Iterable, Container): + + @abstractmethod + def __getitem__(self, key): + raise KeyError + + def get(self, key, default=None): + try: + return self[key] + except KeyError: + return default + + def __contains__(self, key): + try: + self[key] + except KeyError: + return False + else: + return True + + def keys(self): + return KeysView(self) + + def items(self): + return ItemsView(self) + + def values(self): + return ValuesView(self) + + def __eq__(self, other): + if not isinstance(other, Mapping): + return NotImplemented + return dict(self.items()) == dict(other.items()) + + def __ne__(self, other): + return not (self == other) + + +class MappingView(Sized): + + def __init__(self, mapping): + self._mapping = mapping + + def __len__(self): + return len(self._mapping) + + def __repr__(self): + return '{0.__class__.__name__}({0._mapping!r})'.format(self) + + +class KeysView(MappingView, Set): + + @classmethod + def _from_iterable(self, it): + return set(it) + + def __contains__(self, key): + return key in self._mapping + + def __iter__(self): + for key in self._mapping: + yield key + +KeysView.register(dict_keys) + + +class ItemsView(MappingView, Set): + + @classmethod + def _from_iterable(self, it): + return set(it) + + def __contains__(self, item): + key, value = item + try: + v = self._mapping[key] + 
except KeyError: + return False + else: + return v == value + + def __iter__(self): + for key in self._mapping: + yield (key, self._mapping[key]) + +ItemsView.register(dict_items) + + +class ValuesView(MappingView): + + def __contains__(self, value): + for key in self._mapping: + if value == self._mapping[key]: + return True + return False + + def __iter__(self): + for key in self._mapping: + yield self._mapping[key] + +ValuesView.register(dict_values) + + +class MutableMapping(Mapping): + + @abstractmethod + def __setitem__(self, key, value): + raise KeyError + + @abstractmethod + def __delitem__(self, key): + raise KeyError + + __marker = object() + + def pop(self, key, default=__marker): + try: + value = self[key] + except KeyError: + if default is self.__marker: + raise + return default + else: + del self[key] + return value + + def popitem(self): + try: + key = next(iter(self)) + except StopIteration: + raise KeyError + value = self[key] + del self[key] + return key, value + + def clear(self): + try: + while True: + self.popitem() + except KeyError: + pass + + def update(*args, **kwds): + if len(args) > 2: + raise TypeError("update() takes at most 2 positional " + "arguments ({} given)".format(len(args))) + elif not args: + raise TypeError("update() takes at least 1 argument (0 given)") + self = args[0] + other = args[1] if len(args) >= 2 else () + + if isinstance(other, Mapping): + for key in other: + self[key] = other[key] + elif hasattr(other, "keys"): + for key in other.keys(): + self[key] = other[key] + else: + for key, value in other: + self[key] = value + for key, value in kwds.items(): + self[key] = value + + def setdefault(self, key, default=None): + try: + return self[key] + except KeyError: + self[key] = default + return default + +MutableMapping.register(dict) + + +### SEQUENCES ### + + +class Sequence(Sized, Iterable, Container): + + """All the operations on a read-only sequence. 
+ + Concrete subclasses must override __new__ or __init__, + __getitem__, and __len__. + """ + + @abstractmethod + def __getitem__(self, index): + raise IndexError + + def __iter__(self): + i = 0 + try: + while True: + v = self[i] + yield v + i += 1 + except IndexError: + return + + def __contains__(self, value): + for v in self: + if v == value: + return True + return False + + def __reversed__(self): + for i in reversed(range(len(self))): + yield self[i] + + def index(self, value): + for i, v in enumerate(self): + if v == value: + return i + raise ValueError + + def count(self, value): + return sum(1 for v in self if v == value) + +Sequence.register(tuple) +Sequence.register(str) +Sequence.register(range) + + +class ByteString(Sequence): + + """This unifies bytes and bytearray. + + XXX Should add all their methods. + """ + +ByteString.register(bytes) +ByteString.register(bytearray) + + +class MutableSequence(Sequence): + + @abstractmethod + def __setitem__(self, index, value): + raise IndexError + + @abstractmethod + def __delitem__(self, index): + raise IndexError + + @abstractmethod + def insert(self, index, value): + raise IndexError + + def append(self, value): + self.insert(len(self), value) + + def reverse(self): + n = len(self) + for i in range(n//2): + self[i], self[n-i-1] = self[n-i-1], self[i] + + def extend(self, values): + for v in values: + self.append(v) + + def pop(self, index=-1): + v = self[index] + del self[index] + return v + + def remove(self, value): + del self[self.index(value)] + + def __iadd__(self, values): + self.extend(values) + return self + +MutableSequence.register(list) +MutableSequence.register(bytearray) # Multiply inheriting, see ByteString diff --git a/lib-python/3.2/_compat_pickle.py b/lib-python/3.2/_compat_pickle.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_compat_pickle.py @@ -0,0 +1,81 @@ +# This module is used to map the old Python 2 names to the new names used in +# Python 3 for the pickle module. 
This needed to make pickle streams +# generated with Python 2 loadable by Python 3. + +# This is a copy of lib2to3.fixes.fix_imports.MAPPING. We cannot import +# lib2to3 and use the mapping defined there, because lib2to3 uses pickle. +# Thus, this could cause the module to be imported recursively. +IMPORT_MAPPING = { + 'StringIO': 'io', + 'cStringIO': 'io', + 'cPickle': 'pickle', + '__builtin__' : 'builtins', + 'copy_reg': 'copyreg', + 'Queue': 'queue', + 'SocketServer': 'socketserver', + 'ConfigParser': 'configparser', + 'repr': 'reprlib', + 'FileDialog': 'tkinter.filedialog', + 'tkFileDialog': 'tkinter.filedialog', + 'SimpleDialog': 'tkinter.simpledialog', + 'tkSimpleDialog': 'tkinter.simpledialog', + 'tkColorChooser': 'tkinter.colorchooser', + 'tkCommonDialog': 'tkinter.commondialog', + 'Dialog': 'tkinter.dialog', + 'Tkdnd': 'tkinter.dnd', + 'tkFont': 'tkinter.font', + 'tkMessageBox': 'tkinter.messagebox', + 'ScrolledText': 'tkinter.scrolledtext', + 'Tkconstants': 'tkinter.constants', + 'Tix': 'tkinter.tix', + 'ttk': 'tkinter.ttk', + 'Tkinter': 'tkinter', + 'markupbase': '_markupbase', + '_winreg': 'winreg', + 'thread': '_thread', + 'dummy_thread': '_dummy_thread', + 'dbhash': 'dbm.bsd', + 'dumbdbm': 'dbm.dumb', + 'dbm': 'dbm.ndbm', + 'gdbm': 'dbm.gnu', + 'xmlrpclib': 'xmlrpc.client', + 'DocXMLRPCServer': 'xmlrpc.server', + 'SimpleXMLRPCServer': 'xmlrpc.server', + 'httplib': 'http.client', + 'htmlentitydefs' : 'html.entities', + 'HTMLParser' : 'html.parser', + 'Cookie': 'http.cookies', + 'cookielib': 'http.cookiejar', + 'BaseHTTPServer': 'http.server', + 'SimpleHTTPServer': 'http.server', + 'CGIHTTPServer': 'http.server', + 'test.test_support': 'test.support', + 'commands': 'subprocess', + 'UserString' : 'collections', + 'UserList' : 'collections', + 'urlparse' : 'urllib.parse', + 'robotparser' : 'urllib.robotparser', + 'whichdb': 'dbm', + 'anydbm': 'dbm' +} + + +# This contains rename rules that are easy to handle. We ignore the more +# complex stuff (e.g. 
mapping the names in the urllib and types modules). +# These rules should be run before import names are fixed. +NAME_MAPPING = { + ('__builtin__', 'xrange'): ('builtins', 'range'), + ('__builtin__', 'reduce'): ('functools', 'reduce'), + ('__builtin__', 'intern'): ('sys', 'intern'), + ('__builtin__', 'unichr'): ('builtins', 'chr'), + ('__builtin__', 'basestring'): ('builtins', 'str'), + ('__builtin__', 'long'): ('builtins', 'int'), + ('itertools', 'izip'): ('builtins', 'zip'), + ('itertools', 'imap'): ('builtins', 'map'), + ('itertools', 'ifilter'): ('builtins', 'filter'), + ('itertools', 'ifilterfalse'): ('itertools', 'filterfalse'), +} + +# Same, but for 3.x to 2.x +REVERSE_IMPORT_MAPPING = dict((v, k) for (k, v) in IMPORT_MAPPING.items()) +REVERSE_NAME_MAPPING = dict((v, k) for (k, v) in NAME_MAPPING.items()) diff --git a/lib-python/3.2/_dummy_thread.py b/lib-python/3.2/_dummy_thread.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_dummy_thread.py @@ -0,0 +1,155 @@ +"""Drop-in replacement for the thread module. + +Meant to be used as a brain-dead substitute so that threaded code does +not need to be rewritten for when the thread module is not present. + +Suggested usage is:: + + try: + import _thread + except ImportError: + import _dummy_thread as _thread + +""" +# Exports only things specified by thread documentation; +# skipping obsolete synonyms allocate(), start_new(), exit_thread(). +__all__ = ['error', 'start_new_thread', 'exit', 'get_ident', 'allocate_lock', + 'interrupt_main', 'LockType'] + +# A dummy value +TIMEOUT_MAX = 2**31 + +# NOTE: this module can be imported early in the extension building process, +# and so top level imports of other modules should be avoided. Instead, all +# imports are done when needed on a function-by-function basis. Since threads +# are disabled, the import lock should not be an issue anyway (??). 
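The suggested usage from the docstring above can be sketched as follows; on any interpreter with threads the real `_thread` module is found and the dummy is never imported, but the resulting lock object supports the same `acquire`/`release`/context-manager protocol either way:

```python
# Fallback import pattern from the _dummy_thread docstring above.
try:
    import _thread
except ImportError:
    import _dummy_thread as _thread  # single-threaded stand-in

lock = _thread.allocate_lock()
with lock:                   # __enter__ is acquire()
    assert lock.locked()
assert not lock.locked()     # __exit__ released it
```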
+ +class error(Exception): + """Dummy implementation of _thread.error.""" + + def __init__(self, *args): + self.args = args + +def start_new_thread(function, args, kwargs={}): + """Dummy implementation of _thread.start_new_thread(). + + Compatibility is maintained by making sure that ``args`` is a + tuple and ``kwargs`` is a dictionary. If an exception is raised + and it is SystemExit (which can be done by _thread.exit()) it is + caught and nothing is done; all other exceptions are printed out + by using traceback.print_exc(). + + If the executed function calls interrupt_main the KeyboardInterrupt will be + raised when the function returns. + + """ + if type(args) != type(tuple()): + raise TypeError("2nd arg must be a tuple") + if type(kwargs) != type(dict()): + raise TypeError("3rd arg must be a dict") + global _main + _main = False + try: + function(*args, **kwargs) + except SystemExit: + pass + except: + import traceback + traceback.print_exc() + _main = True + global _interrupt + if _interrupt: + _interrupt = False + raise KeyboardInterrupt + +def exit(): + """Dummy implementation of _thread.exit().""" + raise SystemExit + +def get_ident(): + """Dummy implementation of _thread.get_ident(). + + Since this module should only be used when _threadmodule is not + available, it is safe to assume that the current process is the + only thread. Thus a constant can be safely returned. + """ + return -1 + +def allocate_lock(): + """Dummy implementation of _thread.allocate_lock().""" + return LockType() + +def stack_size(size=None): + """Dummy implementation of _thread.stack_size().""" + if size is not None: + raise error("setting thread stack size not supported") + return 0 + +class LockType(object): + """Class implementing dummy implementation of _thread.LockType. + + Compatibility is maintained by maintaining self.locked_status + which is a boolean that stores the state of the lock. 
Pickling of + the lock, though, should not be done since if the _thread module is + then used with an unpickled ``lock()`` from here problems could + occur from this class not having atomic methods. + + """ + + def __init__(self): + self.locked_status = False + + def acquire(self, waitflag=None, timeout=-1): + """Dummy implementation of acquire(). + + For blocking calls, self.locked_status is automatically set to + True and returned appropriately based on value of + ``waitflag``. If it is non-blocking, then the value is + actually checked and not set if it is already acquired. This + is all done so that threading.Condition's assert statements + aren't triggered and throw a little fit. + + """ + if waitflag is None or waitflag: + self.locked_status = True + return True + else: + if not self.locked_status: + self.locked_status = True + return True + else: + if timeout > 0: + import time + time.sleep(timeout) + return False + + __enter__ = acquire + + def __exit__(self, typ, val, tb): + self.release() + + def release(self): + """Release the dummy lock.""" + # XXX Perhaps shouldn't actually bother to test? Could lead + # to problems for complex, threaded code. + if not self.locked_status: + raise error + self.locked_status = False + return True + + def locked(self): + return self.locked_status + +# Used to signal that interrupt_main was called in a "thread" +_interrupt = False +# True when not executing in a "thread" +_main = True + +def interrupt_main(): + """Set _interrupt flag to True to have start_new_thread raise + KeyboardInterrupt upon exiting.""" + if _main: + raise KeyboardInterrupt + else: + global _interrupt + _interrupt = True diff --git a/lib-python/3.2/_markupbase.py b/lib-python/3.2/_markupbase.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_markupbase.py @@ -0,0 +1,395 @@ +"""Shared support for scanning document type declarations in HTML and XHTML. + +This module is used as a foundation for the html.parser module. 
It has no
+documented public API and should not be used directly.
+
+"""
+
+import re
+
+_declname_match = re.compile(r'[a-zA-Z][-_.a-zA-Z0-9]*\s*').match
+_declstringlit_match = re.compile(r'(\'[^\']*\'|"[^"]*")\s*').match
+_commentclose = re.compile(r'--\s*>')
+_markedsectionclose = re.compile(r']\s*]\s*>')
+
+# An analysis of the MS-Word extensions is available at
+# http://www.planetpublish.com/xmlarena/xap/Thursday/WordtoXML.pdf
+
+_msmarkedsectionclose = re.compile(r']\s*>')
+
+del re
+
+
+class ParserBase:
+    """Parser base class which provides some common support methods used
+    by the SGML/HTML and XHTML parsers."""
+
+    def __init__(self):
+        if self.__class__ is ParserBase:
+            raise RuntimeError(
+                "_markupbase.ParserBase must be subclassed")
+
+    def error(self, message):
+        raise NotImplementedError(
+            "subclasses of ParserBase must override error()")
+
+    def reset(self):
+        self.lineno = 1
+        self.offset = 0
+
+    def getpos(self):
+        """Return current line number and offset."""
+        return self.lineno, self.offset
+
+    # Internal -- update line number and offset.  This should be
+    # called for each piece of data exactly once, in order -- in other
+    # words the concatenation of all the input strings to this
+    # function should be exactly the entire input.
+    def updatepos(self, i, j):
+        if i >= j:
+            return j
+        rawdata = self.rawdata
+        nlines = rawdata.count("\n", i, j)
+        if nlines:
+            self.lineno = self.lineno + nlines
+            pos = rawdata.rindex("\n", i, j) # Should not fail
+            self.offset = j-(pos+1)
+        else:
+            self.offset = self.offset + j-i
+        return j
+
+    _decl_otherchars = ''
+
+    # Internal -- parse declaration (for use by subclasses).
+    def parse_declaration(self, i):
+        # This is some sort of declaration; in "HTML as
+        # deployed," this should only be the document type
+        # declaration ("<!DOCTYPE html...>").
+        # ISO 8879:1986, however, has more complex
+        # declaration syntax for elements in <!...>, including:
+        # --comment--
+        # [marked section]
+        # name in the following list: ENTITY, DOCTYPE, ELEMENT,
+        #   ATTLIST, NOTATION, SHORTREF, USEMAP,
+        #   LINKTYPE, LINK, IDLINK, USELINK, SYSTEM
+        rawdata = self.rawdata
+        j = i + 2
+        assert rawdata[i:j] == "<!", "unexpected call to parse_declaration"
+        if rawdata[j:j+1] == ">":
+            # the empty comment <!>
+            return j + 1
+        if rawdata[j:j+1] in ("-", ""):
+            # Start of comment followed by buffer boundary,
+            # or just a buffer boundary.
+            return -1
+        # A simple, practical version could look like: ((name|stringlit) S*) + '>'
+        n = len(rawdata)
+        if rawdata[j:j+2] == '--': #comment
+            # Locate --.*-- as the body of the comment
+            return self.parse_comment(i)
+        elif rawdata[j] == '[': #marked section
+            # Locate [statusWord [...arbitrary SGML...]] as the body of the marked section
+            # Where statusWord is one of TEMP, CDATA, IGNORE, INCLUDE, RCDATA
+            # Note that this is extended by Microsoft Office "Save as Web" function
+            # to include [if...] and [endif].
+            return self.parse_marked_section(i)
+        else: #all other declaration elements
+            decltype, j = self._scan_name(j, i)
+            if j < 0:
+                return j
+            if decltype == "doctype":
+                self._decl_otherchars = ''
+            while j < n:
+                c = rawdata[j]
+                if c == ">":
+                    # end of declaration syntax
+                    data = rawdata[i+2:j]
+                    if decltype == "doctype":
+                        self.handle_decl(data)
+                    else:
+                        # According to the HTML5 specs sections "8.2.4.44 Bogus
+                        # comment state" and "8.2.4.45 Markup declaration open
+                        # state", a comment token should be emitted.
+                        # Calling unknown_decl provides more flexibility though.
+                        self.unknown_decl(data)
+                    return j + 1
+                if c in "\"'":
+                    m = _declstringlit_match(rawdata, j)
+                    if not m:
+                        return -1 # incomplete
+                    j = m.end()
+                elif c in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ":
+                    name, j = self._scan_name(j, i)
+                elif c in self._decl_otherchars:
+                    j = j + 1
+                elif c == "[":
+                    # this could be handled in a separate doctype parser
+                    if decltype == "doctype":
+                        j = self._parse_doctype_subset(j + 1, i)
+                    elif decltype in {"attlist", "linktype", "link", "element"}:
+                        # must tolerate []'d groups in a content model in an element declaration
+                        # also in data attribute specifications of attlist declaration
+                        # also link type declaration subsets in linktype declarations
+                        # also link attribute specification lists in link declarations
+                        self.error("unsupported '[' char in %s declaration" % decltype)
+                    else:
+                        self.error("unexpected '[' char in declaration")
+                else:
+                    self.error(
+                        "unexpected %r char in declaration" % rawdata[j])
+                if j < 0:
+                    return j
+            return -1 # incomplete
+
+    # Internal -- parse a marked section
+    # Override this to handle MS-word extension syntax content
+    def parse_marked_section(self, i, report=1):
+        rawdata= self.rawdata
+        assert rawdata[i:i+3] == '<![', "unexpected call to parse_marked_section()"
+        sectName, j = self._scan_name( i+3, i )
+        if j < 0:
+            return j
+        if sectName in {"temp", "cdata", "ignore", "include", "rcdata"}:
+            # look for standard ]]> ending
+            match= _markedsectionclose.search(rawdata, i+3)
+        elif sectName in {"if", "else", "endif"}:
+            # look for MS Office ]> ending
+            match= _msmarkedsectionclose.search(rawdata, i+3)
+        else:
+            self.error('unknown status keyword %r in marked section' % rawdata[i+3:j])
+        if not match:
+            return -1
+        if report:
+            j = match.start(0)
+            self.unknown_decl(rawdata[i+3: j])
+        return match.end(0)
+
+    # Internal -- parse comment, return length or -1 if not terminated
+    def parse_comment(self, i, report=1):
+        rawdata = self.rawdata
+        if rawdata[i:i+4] != '<!--':
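`_markupbase` has no public API of its own, but its declaration parsing is exercised through `html.parser`, whose `HTMLParser` subclasses `ParserBase`. A small sketch (using the standard `html.parser` module, which is not part of this diff): `parse_declaration()` hands the text between `<!` and `>` to `handle_decl()`.

```python
from html.parser import HTMLParser

class DeclCatcher(HTMLParser):
    def __init__(self):
        super().__init__()
        self.decls = []
    def handle_decl(self, decl):
        # called by ParserBase.parse_declaration() for "<!...>" declarations
        self.decls.append(decl)

p = DeclCatcher()
p.feed("<!DOCTYPE html><p>hi</p>")
assert p.decls == ["DOCTYPE html"]
```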
is even more optimized, by removing + # the ROT_TWO. But the ROT_TWO is not removed if assigning + # to more complex expressions, so check that. for line, elem in ( ('a, = a,', 'LOAD_CONST',), - ('a, b = a, b', 'ROT_TWO',), - ('a, b, c = a, b, c', 'ROT_THREE',), + ('a[1], b = a, b', 'ROT_TWO',), + ('a, b[2], c = a, b, c', 'ROT_THREE',), ): asm = dis_single(line) self.assertIn(elem, asm) @@ -78,6 +81,8 @@ self.assertNotIn('UNPACK_TUPLE', asm) def test_folding_of_tuples_of_constants(self): + # On CPython, "a,b,c=1,2,3" turns into "a,b,c=" + # but on PyPy, it turns into "a=1;b=2;c=3". for line, elem in ( ('a = 1,2,3', '((1, 2, 3))'), ('("a","b","c")', "(('a', 'b', 'c'))"), @@ -86,7 +91,8 @@ ('((1, 2), 3, 4)', '(((1, 2), 3, 4))'), ): asm = dis_single(line) - self.assertIn(elem, asm) + self.assert_(elem in asm or ( + line == 'a,b,c = 1,2,3' and 'UNPACK_TUPLE' not in asm)) self.assertNotIn('BUILD_TUPLE', asm) # Bug 1053819: Tuple of constants misidentified when presented with: @@ -139,12 +145,15 @@ def test_binary_subscr_on_unicode(self): # valid code get optimized - asm = dis_single('u"foo"[0]') - self.assertIn("(u'f')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) - asm = dis_single('u"\u0061\uffff"[1]') - self.assertIn("(u'\\uffff')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) + # XXX for now we always disable this optimization + # XXX see CPython's issue5057 + if 0: + asm = dis_single('u"foo"[0]') + self.assertIn("(u'f')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + asm = dis_single('u"\u0061\uffff"[1]') + self.assertIn("(u'\\uffff')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) # invalid code doesn't get optimized # out of range diff --git a/lib-python/2.7/test/test_pprint.py b/lib-python/2.7/test/test_pprint.py --- a/lib-python/2.7/test/test_pprint.py +++ b/lib-python/2.7/test/test_pprint.py @@ -233,7 +233,16 @@ frozenset([0, 2]), frozenset([0, 1])])}""" cube = test.test_set.cube(3) - self.assertEqual(pprint.pformat(cube), cube_repr_tgt) + # XXX issues of 
dictionary order, and for the case below, + # order of items in the frozenset([...]) representation. + # Whether we get precisely cube_repr_tgt or not is open + # to implementation-dependent choices (this test probably + # fails horribly in CPython if we tweak the dict order too). + got = pprint.pformat(cube) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cube_repr_tgt) + else: + self.assertEqual(eval(got), cube) cubo_repr_tgt = """\ {frozenset([frozenset([0, 2]), frozenset([0])]): frozenset([frozenset([frozenset([0, 2]), @@ -393,7 +402,11 @@ 2])])])}""" cubo = test.test_set.linegraph(cube) - self.assertEqual(pprint.pformat(cubo), cubo_repr_tgt) + got = pprint.pformat(cubo) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cubo_repr_tgt) + else: + self.assertEqual(eval(got), cubo) def test_depth(self): nested_tuple = (1, (2, (3, (4, (5, 6))))) diff --git a/lib-python/2.7/test/test_pydoc.py b/lib-python/2.7/test/test_pydoc.py --- a/lib-python/2.7/test/test_pydoc.py +++ b/lib-python/2.7/test/test_pydoc.py @@ -267,8 +267,8 @@ testpairs = ( ('i_am_not_here', 'i_am_not_here'), ('test.i_am_not_here_either', 'i_am_not_here_either'), - ('test.i_am_not_here.neither_am_i', 'i_am_not_here.neither_am_i'), - ('i_am_not_here.{}'.format(modname), 'i_am_not_here.{}'.format(modname)), + ('test.i_am_not_here.neither_am_i', 'i_am_not_here'), + ('i_am_not_here.{}'.format(modname), 'i_am_not_here'), ('test.{}'.format(modname), modname), ) @@ -292,8 +292,8 @@ result = run_pydoc(modname) finally: forget(modname) - expected = badimport_pattern % (modname, expectedinmsg) - self.assertEqual(expected, result) + expected = badimport_pattern % (modname, '(.+\\.)?' 
+ expectedinmsg + '(\\..+)?$') + self.assertTrue(re.match(expected, result)) def test_input_strip(self): missing_module = " test.i_am_not_here " diff --git a/lib-python/2.7/test/test_pyexpat.py b/lib-python/2.7/test/test_pyexpat.py --- a/lib-python/2.7/test/test_pyexpat.py +++ b/lib-python/2.7/test/test_pyexpat.py @@ -570,6 +570,9 @@ self.assertEqual(self.n, 4) class MalformedInputText(unittest.TestCase): + # CPython seems to ship its own version of expat, they fixed it on this commit : + # http://svn.python.org/view?revision=74429&view=revision + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test1(self): xml = "\0\r\n" parser = expat.ParserCreate() @@ -579,6 +582,7 @@ except expat.ExpatError as e: self.assertEqual(str(e), 'unclosed token: line 2, column 0') + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test2(self): xml = "\r\n" parser = expat.ParserCreate() diff --git a/lib-python/2.7/test/test_repr.py b/lib-python/2.7/test/test_repr.py --- a/lib-python/2.7/test/test_repr.py +++ b/lib-python/2.7/test/test_repr.py @@ -9,6 +9,7 @@ import unittest from test.test_support import run_unittest, check_py3k_warnings +from test.test_support import check_impl_detail from repr import repr as r # Don't shadow builtin repr from repr import Repr @@ -145,8 +146,11 @@ # Functions eq(repr(hash), '') # Methods - self.assertTrue(repr(''.split).startswith( - '") def test_xrange(self): eq = self.assertEqual @@ -185,7 +189,10 @@ def test_descriptors(self): eq = self.assertEqual # method descriptors - eq(repr(dict.items), "") + if check_impl_detail(cpython=True): + eq(repr(dict.items), "") + elif check_impl_detail(pypy=True): + eq(repr(dict.items), "") # XXX member descriptors # XXX attribute descriptors # XXX slot descriptors @@ -247,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from 
areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer. + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_runpy.py b/lib-python/2.7/test/test_runpy.py --- a/lib-python/2.7/test/test_runpy.py +++ b/lib-python/2.7/test/test_runpy.py @@ -5,10 +5,15 @@ import sys import re import tempfile -from test.test_support import verbose, run_unittest, forget +from test.test_support import verbose, run_unittest, forget, check_impl_detail from test.script_helper import (temp_dir, make_script, compile_script, make_pkg, make_zip_script, make_zip_pkg) +if check_impl_detail(pypy=True): + no_lone_pyc_file = True +else: + no_lone_pyc_file = False + from runpy import _run_code, _run_module_code, run_module, run_path # Note: This module can't safely test _run_module_as_main as it @@ -168,13 +173,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] ==
1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + d2 = run_module(mod_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -190,13 +196,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", pkg_name - d2 = run_module(pkg_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", pkg_name + d2 = run_module(pkg_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, pkg_name) if verbose: print "Package executed successfully" @@ -244,15 +251,17 @@ self.assertIn("sibling", d1) self.assertIn("nephew", d1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name, run_name=run_name) # Read from bytecode - self.assertIn("__package__", d2) - self.assertTrue(d2["__package__"] == pkg_name) - self.assertIn("sibling", d2) - self.assertIn("nephew", d2) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + # Read from bytecode + d2 = run_module(mod_name, run_name=run_name) + self.assertIn("__package__", d2) + 
self.assertTrue(d2["__package__"] == pkg_name) + self.assertIn("sibling", d2) + self.assertIn("nephew", d2) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -345,6 +354,8 @@ script_dir, '') def test_directory_compiled(self): + if no_lone_pyc_file: + return with temp_dir() as script_dir: mod_name = '__main__' script_name = self._make_test_script(script_dir, mod_name) diff --git a/lib-python/2.7/test/test_scope.py b/lib-python/2.7/test/test_scope.py --- a/lib-python/2.7/test/test_scope.py +++ b/lib-python/2.7/test/test_scope.py @@ -1,6 +1,6 @@ import unittest from test.test_support import check_syntax_error, check_py3k_warnings, \ - check_warnings, run_unittest + check_warnings, run_unittest, gc_collect class ScopeTests(unittest.TestCase): @@ -432,6 +432,7 @@ for i in range(100): f1() + gc_collect() self.assertEqual(Foo.count, 0) diff --git a/lib-python/2.7/test/test_set.py b/lib-python/2.7/test/test_set.py --- a/lib-python/2.7/test/test_set.py +++ b/lib-python/2.7/test/test_set.py @@ -309,6 +309,7 @@ fo.close() test_support.unlink(test_support.TESTFN) + @test_support.impl_detail(pypy=False) def test_do_not_rehash_dict_keys(self): n = 10 d = dict.fromkeys(map(HashCountingInt, xrange(n))) @@ -559,6 +560,7 @@ p = weakref.proxy(s) self.assertEqual(str(p), str(s)) s = None + test_support.gc_collect() self.assertRaises(ReferenceError, str, p) # C API test only available in a debug build @@ -590,6 +592,7 @@ s.__init__(self.otherword) self.assertEqual(s, set(self.word)) + @test_support.impl_detail() def test_singleton_empty_frozenset(self): f = frozenset() efs = [frozenset(), frozenset([]), frozenset(()), frozenset(''), @@ -770,9 +773,10 @@ for v in self.set: self.assertIn(v, self.values) setiter = iter(self.set) - # note: __length_hint__ is an internal undocumented API, - # don't rely on it in your own programs - self.assertEqual(setiter.__length_hint__(), 
len(self.set)) + if test_support.check_impl_detail(): + # note: __length_hint__ is an internal undocumented API, + # don't rely on it in your own programs + self.assertEqual(setiter.__length_hint__(), len(self.set)) def test_pickling(self): p = pickle.dumps(self.set) @@ -1564,7 +1568,7 @@ for meth in (s.union, s.intersection, s.difference, s.symmetric_difference, s.isdisjoint): for g in (G, I, Ig, L, R): expected = meth(data) - actual = meth(G(data)) + actual = meth(g(data)) if isinstance(expected, bool): self.assertEqual(actual, expected) else: diff --git a/lib-python/2.7/test/test_sets.py b/lib-python/2.7/test/test_sets.py --- a/lib-python/2.7/test/test_sets.py +++ b/lib-python/2.7/test/test_sets.py @@ -686,7 +686,9 @@ set_list = sorted(self.set) self.assertEqual(len(dup_list), len(set_list)) for i, el in enumerate(dup_list): - self.assertIs(el, set_list[i]) + # Object identity is not guaranteed for immutable objects, so we + # can't use assertIs here. + self.assertEqual(el, set_list[i]) def test_deep_copy(self): dup = copy.deepcopy(self.set) diff --git a/lib-python/2.7/test/test_site.py b/lib-python/2.7/test/test_site.py --- a/lib-python/2.7/test/test_site.py +++ b/lib-python/2.7/test/test_site.py @@ -226,6 +226,10 @@ self.assertEqual(len(dirs), 1) wanted = os.path.join('xoxo', 'Lib', 'site-packages') self.assertEqual(dirs[0], wanted) + elif '__pypy__' in sys.builtin_module_names: + self.assertEquals(len(dirs), 1) + wanted = os.path.join('xoxo', 'site-packages') + self.assertEquals(dirs[0], wanted) elif os.sep == '/': self.assertEqual(len(dirs), 2) wanted = os.path.join('xoxo', 'lib', 'python' + sys.version[:3], diff --git a/lib-python/2.7/test/test_socket.py b/lib-python/2.7/test/test_socket.py --- a/lib-python/2.7/test/test_socket.py +++ b/lib-python/2.7/test/test_socket.py @@ -252,6 +252,7 @@ self.assertEqual(p.fileno(), s.fileno()) s.close() s = None + test_support.gc_collect() try: p.fileno() except ReferenceError: @@ -285,32 +286,34 @@ s.sendto(u'\u2620',
sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None) - self.assertIn('not NoneType', str(cm.exception)) + self.assertIn('NoneType', str(cm.exception)) # 3 args with self.assertRaises(UnicodeEncodeError): s.sendto(u'\u2620', 0, sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, 0, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, None) - self.assertIn('not NoneType', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('not NoneType', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 'bar', sockname) - self.assertIn('an integer is required', str(cm.exception)) + self.assertIn('integer', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None, None) - self.assertIn('an integer is required', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('an integer is required', str(cm.exception)) # wrong number of args with self.assertRaises(TypeError) as cm: s.sendto('foo') - self.assertIn('(1 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, sockname, 4) - self.assertIn('(4 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) def testCrucialConstants(self): @@ -385,10 +388,10 @@ socket.htonl(k) socket.htons(k) for k in bad_values: - self.assertRaises(OverflowError, socket.ntohl, k) - self.assertRaises(OverflowError, socket.ntohs, k) - self.assertRaises(OverflowError, socket.htonl, k) - self.assertRaises(OverflowError, socket.htons, k) + self.assertRaises((OverflowError, ValueError), socket.ntohl, k) + self.assertRaises((OverflowError, ValueError), socket.ntohs, k) + 
self.assertRaises((OverflowError, ValueError), socket.htonl, k) + self.assertRaises((OverflowError, ValueError), socket.htons, k) def testGetServBy(self): eq = self.assertEqual @@ -428,8 +431,8 @@ if udpport is not None: eq(socket.getservbyport(udpport, 'udp'), service) # Make sure getservbyport does not accept out of range ports. - self.assertRaises(OverflowError, socket.getservbyport, -1) - self.assertRaises(OverflowError, socket.getservbyport, 65536) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, -1) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, 65536) def testDefaultTimeout(self): # Testing default timeout @@ -608,8 +611,8 @@ neg_port = port - 65536 sock = socket.socket() try: - self.assertRaises(OverflowError, sock.bind, (host, big_port)) - self.assertRaises(OverflowError, sock.bind, (host, neg_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, big_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, neg_port)) sock.bind((host, port)) finally: sock.close() @@ -1309,6 +1312,7 @@ closed = False def flush(self): pass def close(self): self.closed = True + def _decref_socketios(self): pass # must not close unless we request it: the original use of _fileobject # by module socket requires that the underlying socket not be closed until diff --git a/lib-python/2.7/test/test_sort.py b/lib-python/2.7/test/test_sort.py --- a/lib-python/2.7/test/test_sort.py +++ b/lib-python/2.7/test/test_sort.py @@ -140,7 +140,10 @@ return random.random() < 0.5 L = [C() for i in range(50)] - self.assertRaises(ValueError, L.sort) + try: + L.sort() + except ValueError: + pass def test_cmpNone(self): # Testing None as a comparison function. @@ -150,8 +153,10 @@ L.sort(None) self.assertEqual(L, range(50)) + @test_support.impl_detail(pypy=False) def test_undetected_mutation(self): # Python 2.4a1 did not always detect mutation + # Neither does PyPy...
memorywaster = [] for i in range(20): def mutating_cmp(x, y): @@ -226,7 +231,10 @@ def __del__(self): del data[:] data[:] = range(20) - self.assertRaises(ValueError, data.sort, key=SortKiller) + try: + data.sort(key=SortKiller) + except ValueError: + pass def test_key_with_mutating_del_and_exception(self): data = range(10) diff --git a/lib-python/2.7/test/test_ssl.py b/lib-python/2.7/test/test_ssl.py --- a/lib-python/2.7/test/test_ssl.py +++ b/lib-python/2.7/test/test_ssl.py @@ -881,6 +881,8 @@ c = socket.socket() c.connect((HOST, port)) listener_gone.wait() + # XXX why is this necessary? + test_support.gc_collect() try: ssl_sock = ssl.wrap_socket(c) except IOError: @@ -1330,10 +1332,8 @@ def test_main(verbose=False): global CERTFILE, SVN_PYTHON_ORG_ROOT_CERT - CERTFILE = os.path.join(os.path.dirname(__file__) or os.curdir, - "keycert.pem") - SVN_PYTHON_ORG_ROOT_CERT = os.path.join( - os.path.dirname(__file__) or os.curdir, + CERTFILE = test_support.findfile("keycert.pem") + SVN_PYTHON_ORG_ROOT_CERT = test_support.findfile( "https_svn_python_org_root.pem") if (not os.path.exists(CERTFILE) or diff --git a/lib-python/2.7/test/test_str.py b/lib-python/2.7/test/test_str.py --- a/lib-python/2.7/test/test_str.py +++ b/lib-python/2.7/test/test_str.py @@ -422,10 +422,11 @@ for meth in ('foo'.startswith, 'foo'.endswith): with self.assertRaises(TypeError) as cm: meth(['f']) - exc = str(cm.exception) - self.assertIn('unicode', exc) - self.assertIn('str', exc) - self.assertIn('tuple', exc) + if test_support.check_impl_detail(): + exc = str(cm.exception) + self.assertIn('unicode', exc) + self.assertIn('str', exc) + self.assertIn('tuple', exc) def test_main(): test_support.run_unittest(StrTest) diff --git a/lib-python/2.7/test/test_struct.py b/lib-python/2.7/test/test_struct.py --- a/lib-python/2.7/test/test_struct.py +++ b/lib-python/2.7/test/test_struct.py @@ -535,7 +535,8 @@ @unittest.skipUnless(IS32BIT, "Specific to 32bit machines") def test_crasher(self): -
self.assertRaises(MemoryError, struct.pack, "357913941c", "a") + self.assertRaises((MemoryError, struct.error), struct.pack, + "357913941c", "a") def test_count_overflow(self): hugecount = '{}b'.format(sys.maxsize+1) diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/lib-python/2.7/test/test_support.py b/lib-python/2.7/test/test_support.py --- a/lib-python/2.7/test/test_support.py +++ b/lib-python/2.7/test/test_support.py @@ -431,16 +431,20 @@ rmtree(name) -def findfile(file, here=__file__, subdir=None): +def findfile(file, here=None, subdir=None): """Try to find a file on sys.path and the working directory. 
If it is not found the argument passed to the function is returned (this does not necessarily signal failure; could still be the legitimate path).""" + import test if os.path.isabs(file): return file if subdir is not None: file = os.path.join(subdir, file) path = sys.path - path = [os.path.dirname(here)] + path + if here is None: + path = test.__path__ + path + else: + path = [os.path.dirname(here)] + path for dn in path: fn = os.path.join(dn, file) if os.path.exists(fn): return fn @@ -1050,15 +1054,33 @@ guards, default = _parse_guards(guards) return guards.get(platform.python_implementation().lower(), default) +# ---------------------------------- +# PyPy extension: you can run:: +# python ..../test_foo.py --pdb +# to get a pdb prompt in case of exceptions +ResultClass = unittest.TextTestRunner.resultclass + +class TestResultWithPdb(ResultClass): + + def addError(self, testcase, exc_info): + ResultClass.addError(self, testcase, exc_info) + if '--pdb' in sys.argv: + import pdb, traceback + traceback.print_tb(exc_info[2]) + pdb.post_mortem(exc_info[2]) + +# ---------------------------------- def _run_suite(suite): """Run tests from a unittest.TestSuite-derived class.""" if verbose: - runner = unittest.TextTestRunner(sys.stdout, verbosity=2) + runner = unittest.TextTestRunner(sys.stdout, verbosity=2, + resultclass=TestResultWithPdb) else: runner = BasicTestRunner() + result = runner.run(suite) if not result.wasSuccessful(): if len(result.errors) == 1 and not result.failures: @@ -1071,6 +1093,34 @@ err += "; run in verbose mode for details" raise TestFailed(err) +# ---------------------------------- +# PyPy extension: you can run:: +# python ..../test_foo.py --filter bar +# to run only the test cases whose name contains bar + +def filter_maybe(suite): + try: + i = sys.argv.index('--filter') + filter = sys.argv[i+1] + except (ValueError, IndexError): + return suite + tests = [] + for test in linearize_suite(suite): + if filter in test._testMethodName: + 
tests.append(test) + return unittest.TestSuite(tests) + +def linearize_suite(suite_or_test): + try: + it = iter(suite_or_test) + except TypeError: + yield suite_or_test + return + for subsuite in it: + for item in linearize_suite(subsuite): + yield item + +# ---------------------------------- def run_unittest(*classes): """Run tests from unittest.TestCase-derived classes.""" @@ -1086,6 +1136,7 @@ suite.addTest(cls) else: suite.addTest(unittest.makeSuite(cls)) + suite = filter_maybe(suite) _run_suite(suite) diff --git a/lib-python/2.7/test/test_syntax.py b/lib-python/2.7/test/test_syntax.py --- a/lib-python/2.7/test/test_syntax.py +++ b/lib-python/2.7/test/test_syntax.py @@ -5,7 +5,8 @@ >>> def f(x): ... global x Traceback (most recent call last): -SyntaxError: name 'x' is local and global (, line 1) + File "", line 1 +SyntaxError: name 'x' is local and global The tests are all raise SyntaxErrors. They were created by checking each C call that raises SyntaxError. There are several modules that @@ -375,7 +376,7 @@ In 2.5 there was a missing exception and an assert was triggered in a debug build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514 - >>> while 1: + >>> while 1: # doctest:+SKIP ... while 2: ... while 3: ... while 4: diff --git a/lib-python/2.7/test/test_sys.py b/lib-python/2.7/test/test_sys.py --- a/lib-python/2.7/test/test_sys.py +++ b/lib-python/2.7/test/test_sys.py @@ -264,6 +264,7 @@ self.assertEqual(sys.getdlopenflags(), oldflags+1) sys.setdlopenflags(oldflags) + @test.test_support.impl_detail("reference counting") def test_refcount(self): # n here must be a global in order for this test to pass while # tracing with a python function. Tracing calls PyFrame_FastToLocals @@ -287,7 +288,7 @@ is sys._getframe().f_code ) - # sys._current_frames() is a CPython-only gimmick. 
+ @test.test_support.impl_detail("current_frames") def test_current_frames(self): have_threads = True try: @@ -383,7 +384,10 @@ self.assertEqual(len(sys.float_info), 11) self.assertEqual(sys.float_info.radix, 2) self.assertEqual(len(sys.long_info), 2) - self.assertTrue(sys.long_info.bits_per_digit % 5 == 0) + if test.test_support.check_impl_detail(cpython=True): + self.assertTrue(sys.long_info.bits_per_digit % 5 == 0) + else: + self.assertTrue(sys.long_info.bits_per_digit >= 1) self.assertTrue(sys.long_info.sizeof_digit >= 1) self.assertEqual(type(sys.long_info.bits_per_digit), int) self.assertEqual(type(sys.long_info.sizeof_digit), int) @@ -432,6 +436,7 @@ self.assertEqual(type(getattr(sys.flags, attr)), int, attr) self.assertTrue(repr(sys.flags)) + @test.test_support.impl_detail("sys._clear_type_cache") def test_clear_type_cache(self): sys._clear_type_cache() @@ -473,6 +478,7 @@ p.wait() self.assertIn(executable, ["''", repr(sys.executable)]) + at unittest.skipUnless(test.test_support.check_impl_detail(), "sys.getsizeof()") class SizeofTest(unittest.TestCase): TPFLAGS_HAVE_GC = 1<<14 diff --git a/lib-python/2.7/test/test_sys_settrace.py b/lib-python/2.7/test/test_sys_settrace.py --- a/lib-python/2.7/test/test_sys_settrace.py +++ b/lib-python/2.7/test/test_sys_settrace.py @@ -213,12 +213,16 @@ "finally" def generator_example(): # any() will leave the generator before its end - x = any(generator_function()) + x = any(generator_function()); gc.collect() # the following lines were not traced for x in range(10): y = x +# On CPython, when the generator is decref'ed to zero, we see the trace +# for the "finally:" portion. On PyPy, we don't see it before the next +# garbage collection. That's why we put gc.collect() on the same line above. 
+ generator_example.events = ([(0, 'call'), (2, 'line'), (-6, 'call'), @@ -282,11 +286,11 @@ self.compare_events(func.func_code.co_firstlineno, tracer.events, func.events) - def set_and_retrieve_none(self): + def test_set_and_retrieve_none(self): sys.settrace(None) assert sys.gettrace() is None - def set_and_retrieve_func(self): + def test_set_and_retrieve_func(self): def fn(*args): pass @@ -323,17 +327,24 @@ self.run_test(tighterloop_example) def test_13_genexp(self): - self.run_test(generator_example) - # issue1265: if the trace function contains a generator, - # and if the traced function contains another generator - # that is not completely exhausted, the trace stopped. - # Worse: the 'finally' clause was not invoked. - tracer = Tracer() - sys.settrace(tracer.traceWithGenexp) - generator_example() - sys.settrace(None) - self.compare_events(generator_example.__code__.co_firstlineno, - tracer.events, generator_example.events) + if self.using_gc: + test_support.gc_collect() + gc.enable() + try: + self.run_test(generator_example) + # issue1265: if the trace function contains a generator, + # and if the traced function contains another generator + # that is not completely exhausted, the trace stopped. + # Worse: the 'finally' clause was not invoked. 
+ tracer = Tracer() + sys.settrace(tracer.traceWithGenexp) + generator_example() + sys.settrace(None) + self.compare_events(generator_example.__code__.co_firstlineno, + tracer.events, generator_example.events) + finally: + if self.using_gc: + gc.disable() def test_14_onliner_if(self): def onliners(): diff --git a/lib-python/2.7/test/test_sysconfig.py b/lib-python/2.7/test/test_sysconfig.py --- a/lib-python/2.7/test/test_sysconfig.py +++ b/lib-python/2.7/test/test_sysconfig.py @@ -209,13 +209,22 @@ self.assertEqual(get_platform(), 'macosx-10.4-fat64') - for arch in ('ppc', 'i386', 'x86_64', 'ppc64'): + for arch in ('ppc', 'i386', 'ppc64', 'x86_64'): get_config_vars()['CFLAGS'] = ('-arch %s -isysroot ' '/Developer/SDKs/MacOSX10.4u.sdk ' '-fno-strict-aliasing -fno-common ' '-dynamic -DNDEBUG -g -O3'%(arch,)) self.assertEqual(get_platform(), 'macosx-10.4-%s'%(arch,)) + + # macosx with ARCHFLAGS set and empty _CONFIG_VARS + os.environ['ARCHFLAGS'] = '-arch i386' + sysconfig._CONFIG_VARS = None + + # this will attempt to recreate the _CONFIG_VARS based on environment + # variables; used to check a problem with the PyPy's _init_posix + # implementation; see: issue 705 + get_config_vars() # linux debian sarge os.name = 'posix' @@ -235,7 +244,7 @@ def test_get_scheme_names(self): wanted = ('nt', 'nt_user', 'os2', 'os2_home', 'osx_framework_user', - 'posix_home', 'posix_prefix', 'posix_user') + 'posix_home', 'posix_prefix', 'posix_user', 'pypy') self.assertEqual(get_scheme_names(), wanted) def test_symlink(self): diff --git a/lib-python/2.7/test/test_tarfile.py b/lib-python/2.7/test/test_tarfile.py --- a/lib-python/2.7/test/test_tarfile.py +++ b/lib-python/2.7/test/test_tarfile.py @@ -169,6 +169,7 @@ except tarfile.ReadError: self.fail("tarfile.open() failed on empty archive") self.assertListEqual(tar.getmembers(), []) + tar.close() def test_null_tarfile(self): # Test for issue6123: Allow opening empty archives. 
@@ -207,16 +208,21 @@ fobj = open(self.tarname, "rb") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, os.path.abspath(fobj.name)) + tar.close() def test_no_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) self.assertRaises(AttributeError, getattr, fobj, "name") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, None) def test_empty_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) fobj.name = "" tar = tarfile.open(fileobj=fobj, mode=self.mode) @@ -515,6 +521,7 @@ self.tar = tarfile.open(self.tarname, mode=self.mode, encoding="iso8859-1") tarinfo = self.tar.getmember("pax/umlauts-�������") self._test_member(tarinfo, size=7011, chksum=md5_regtype) + self.tar.close() class LongnameTest(ReadTest): @@ -675,6 +682,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.rmdir(path) @@ -692,6 +700,7 @@ tar.gettarinfo(target) tarinfo = tar.gettarinfo(link) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(target) os.remove(link) @@ -704,6 +713,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(path) @@ -722,6 +732,7 @@ tar.add(dstname) os.chdir(cwd) self.assertTrue(tar.getnames() == [], "added the archive to itself") + tar.close() def test_exclude(self): tempdir = os.path.join(TEMPDIR, "exclude") @@ -742,6 +753,7 @@ tar = tarfile.open(tmpname, "r") self.assertEqual(len(tar.getmembers()), 1) self.assertEqual(tar.getnames()[0], "empty_dir") + tar.close() finally: shutil.rmtree(tempdir) @@ -947,7 +959,9 @@ fobj.close() elif self.mode.endswith("bz2"): dec = bz2.BZ2Decompressor() - data = open(tmpname, "rb").read() + f = open(tmpname, 
"rb") + data = f.read() + f.close() data = dec.decompress(data) self.assertTrue(len(dec.unused_data) == 0, "found trailing data") @@ -1026,6 +1040,7 @@ "unable to read longname member") self.assertEqual(tarinfo.linkname, member.linkname, "unable to read longname member") + tar.close() def test_longname_1023(self): self._test(("longnam/" * 127) + "longnam") @@ -1118,6 +1133,7 @@ else: n = tar.getmembers()[0].name self.assertTrue(name == n, "PAX longname creation failed") + tar.close() def test_pax_global_header(self): pax_headers = { @@ -1146,6 +1162,7 @@ tarfile.PAX_NUMBER_FIELDS[key](val) except (TypeError, ValueError): self.fail("unable to convert pax header field") + tar.close() def test_pax_extended_header(self): # The fields from the pax header have priority over the @@ -1165,6 +1182,7 @@ self.assertEqual(t.pax_headers, pax_headers) self.assertEqual(t.name, "foo") self.assertEqual(t.uid, 123) + tar.close() class UstarUnicodeTest(unittest.TestCase): @@ -1208,6 +1226,7 @@ tarinfo.name = "foo" tarinfo.uname = u"���" self.assertRaises(UnicodeError, tar.addfile, tarinfo) + tar.close() def test_unicode_argument(self): tar = tarfile.open(tarname, "r", encoding="iso8859-1", errors="strict") @@ -1262,6 +1281,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="ascii", errors=handler) self.assertEqual(tar.getnames()[0], name) + tar.close() self.assertRaises(UnicodeError, tarfile.open, tmpname, encoding="ascii", errors="strict") @@ -1274,6 +1294,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="iso8859-1", errors="utf-8") self.assertEqual(tar.getnames()[0], "���/" + u"�".encode("utf8")) + tar.close() class AppendTest(unittest.TestCase): @@ -1301,6 +1322,7 @@ def _test(self, names=["bar"], fileobj=None): tar = tarfile.open(self.tarname, fileobj=fileobj) self.assertEqual(tar.getnames(), names) + tar.close() def test_non_existing(self): self._add_testfile() @@ -1319,7 +1341,9 @@ def test_fileobj(self): self._create_testtar() - data = 
open(self.tarname).read() + f = open(self.tarname) + data = f.read() + f.close() fobj = StringIO.StringIO(data) self._add_testfile(fobj) fobj.seek(0) @@ -1345,7 +1369,9 @@ # Append mode is supposed to fail if the tarfile to append to # does not end with a zero block. def _test_error(self, data): - open(self.tarname, "wb").write(data) + f = open(self.tarname, "wb") + f.write(data) + f.close() self.assertRaises(tarfile.ReadError, self._add_testfile) def test_null(self): diff --git a/lib-python/2.7/test/test_tempfile.py b/lib-python/2.7/test/test_tempfile.py --- a/lib-python/2.7/test/test_tempfile.py +++ b/lib-python/2.7/test/test_tempfile.py @@ -23,8 +23,8 @@ # TEST_FILES may need to be tweaked for systems depending on the maximum # number of files that can be opened at one time (see ulimit -n) -if sys.platform in ('openbsd3', 'openbsd4'): - TEST_FILES = 48 +if sys.platform.startswith("openbsd"): + TEST_FILES = 64 # ulimit -n defaults to 128 for normal users else: TEST_FILES = 100 @@ -244,6 +244,7 @@ dir = tempfile.mkdtemp() try: self.do_create(dir=dir).write("blat") + test_support.gc_collect() finally: os.rmdir(dir) @@ -528,12 +529,15 @@ self.do_create(suf="b") self.do_create(pre="a", suf="b") self.do_create(pre="aa", suf=".txt") + test_support.gc_collect() def test_many(self): # mktemp can choose many usable file names (stochastic) extant = range(TEST_FILES) for i in extant: extant[i] = self.do_create(pre="aa") + del extant + test_support.gc_collect() ## def test_warning(self): ## # mktemp issues a warning when used diff --git a/lib-python/2.7/test/test_thread.py b/lib-python/2.7/test/test_thread.py --- a/lib-python/2.7/test/test_thread.py +++ b/lib-python/2.7/test/test_thread.py @@ -128,6 +128,7 @@ del task while not done: time.sleep(0.01) + test_support.gc_collect() self.assertEqual(thread._count(), orig) diff --git a/lib-python/2.7/test/test_threading.py b/lib-python/2.7/test/test_threading.py --- a/lib-python/2.7/test/test_threading.py +++ 
b/lib-python/2.7/test/test_threading.py @@ -161,6 +161,7 @@ # PyThreadState_SetAsyncExc() is a CPython-only gimmick, not (currently) # exposed at the Python level. This test relies on ctypes to get at it. + @test.test_support.cpython_only def test_PyThreadState_SetAsyncExc(self): try: import ctypes @@ -266,6 +267,7 @@ finally: threading._start_new_thread = _start_new_thread + @test.test_support.cpython_only def test_finalize_runnning_thread(self): # Issue 1402: the PyGILState_Ensure / _Release functions may be called # very late on python exit: on deallocation of a running thread for @@ -383,6 +385,7 @@ finally: sys.setcheckinterval(old_interval) + @test.test_support.cpython_only def test_no_refcycle_through_target(self): class RunSelfFunction(object): def __init__(self, should_raise): @@ -425,6 +428,9 @@ def joiningfunc(mainthread): mainthread.join() print 'end of thread' + # stdout is fully buffered because not a tty, we have to flush + # before exit. + sys.stdout.flush() \n""" + script p = subprocess.Popen([sys.executable, "-c", script], stdout=subprocess.PIPE) diff --git a/lib-python/2.7/test/test_threading_local.py b/lib-python/2.7/test/test_threading_local.py --- a/lib-python/2.7/test/test_threading_local.py +++ b/lib-python/2.7/test/test_threading_local.py @@ -173,8 +173,9 @@ obj = cls() obj.x = 5 self.assertEqual(obj.__dict__, {'x': 5}) - with self.assertRaises(AttributeError): - obj.__dict__ = {} + if test_support.check_impl_detail(): + with self.assertRaises(AttributeError): + obj.__dict__ = {} with self.assertRaises(AttributeError): del obj.__dict__ diff --git a/lib-python/2.7/test/test_traceback.py b/lib-python/2.7/test/test_traceback.py --- a/lib-python/2.7/test/test_traceback.py +++ b/lib-python/2.7/test/test_traceback.py @@ -5,7 +5,8 @@ import sys import unittest from imp import reload -from test.test_support import run_unittest, is_jython, Error +from test.test_support import run_unittest, Error +from test.test_support import impl_detail, 
check_impl_detail import traceback @@ -49,10 +50,8 @@ self.assertTrue(err[2].count('\n') == 1) # and no additional newline self.assertTrue(err[1].find("+") == err[2].find("^")) # in the right place + @impl_detail("other implementations may add a caret (why shouldn't they?)") def test_nocaret(self): - if is_jython: - # jython adds a caret in this case (why shouldn't it?) - return err = self.get_exception_format(self.syntax_error_without_caret, SyntaxError) self.assertTrue(len(err) == 3) @@ -63,8 +62,11 @@ IndentationError) self.assertTrue(len(err) == 4) self.assertTrue(err[1].strip() == "print 2") - self.assertIn("^", err[2]) - self.assertTrue(err[1].find("2") == err[2].find("^")) + if check_impl_detail(): + # on CPython, there is a "^" at the end of the line + # on PyPy, there is a "^" too, but at the start, more logically + self.assertIn("^", err[2]) + self.assertTrue(err[1].find("2") == err[2].find("^")) def test_bug737473(self): import os, tempfile, time @@ -74,7 +76,8 @@ try: sys.path.insert(0, testdir) testfile = os.path.join(testdir, 'test_bug737473.py') - print >> open(testfile, 'w'), """ + with open(testfile, 'w') as f: + print >> f, """ def test(): raise ValueError""" @@ -96,7 +99,8 @@ # three seconds are needed for this test to pass reliably :-( time.sleep(4) - print >> open(testfile, 'w'), """ + with open(testfile, 'w') as f: + print >> f, """ def test(): raise NotImplementedError""" reload(test_bug737473) diff --git a/lib-python/2.7/test/test_types.py b/lib-python/2.7/test/test_types.py --- a/lib-python/2.7/test/test_types.py +++ b/lib-python/2.7/test/test_types.py @@ -1,7 +1,8 @@ # Python test set -- part 6, built-in types from test.test_support import run_unittest, have_unicode, run_with_locale, \ - check_py3k_warnings + check_py3k_warnings, \ + impl_detail, check_impl_detail import unittest import sys import locale @@ -289,9 +290,14 @@ # array.array() returns an object that does not implement a char buffer, # something which int() uses for 
conversion. import array - try: int(buffer(array.array('c'))) + try: int(buffer(array.array('c', '5'))) except TypeError: pass - else: self.fail("char buffer (at C level) not working") + else: + if check_impl_detail(): + self.fail("char buffer (at C level) not working") + #else: + # it works on PyPy, which does not have the distinction + # between char buffer and binary buffer. XXX fine enough? def test_int__format__(self): def test(i, format_spec, result): @@ -741,6 +747,7 @@ for code in 'xXobns': self.assertRaises(ValueError, format, 0, ',' + code) + @impl_detail("the types' internal size attributes are CPython-only") def test_internal_sizes(self): self.assertGreater(object.__basicsize__, 0) self.assertGreater(tuple.__itemsize__, 0) diff --git a/lib-python/2.7/test/test_unicode.py b/lib-python/2.7/test/test_unicode.py --- a/lib-python/2.7/test/test_unicode.py +++ b/lib-python/2.7/test/test_unicode.py @@ -448,10 +448,11 @@ meth('\xff') with self.assertRaises(TypeError) as cm: meth(['f']) - exc = str(cm.exception) - self.assertIn('unicode', exc) - self.assertIn('str', exc) - self.assertIn('tuple', exc) + if test_support.check_impl_detail(): + exc = str(cm.exception) + self.assertIn('unicode', exc) + self.assertIn('str', exc) + self.assertIn('tuple', exc) @test_support.run_with_locale('LC_ALL', 'de_DE', 'fr_FR') def test_format_float(self): @@ -1062,7 +1063,8 @@ # to take a 64-bit long, this test should apply to all platforms. if sys.maxint > (1 << 32) or struct.calcsize('P') != 4: return - self.assertRaises(OverflowError, u't\tt\t'.expandtabs, sys.maxint) + self.assertRaises((OverflowError, MemoryError), + u't\tt\t'.expandtabs, sys.maxint) def test__format__(self): def test(value, format, expected): diff --git a/lib-python/2.7/test/test_unicodedata.py b/lib-python/2.7/test/test_unicodedata.py --- a/lib-python/2.7/test/test_unicodedata.py +++ b/lib-python/2.7/test/test_unicodedata.py @@ -233,10 +233,12 @@ # been loaded in this process. 
popen = subprocess.Popen(args, stderr=subprocess.PIPE) popen.wait() - self.assertEqual(popen.returncode, 1) - error = "SyntaxError: (unicode error) \N escapes not supported " \ - "(can't load unicodedata module)" - self.assertIn(error, popen.stderr.read()) + self.assertIn(popen.returncode, [0, 1]) # at least it did not segfault + if test.test_support.check_impl_detail(): + self.assertEqual(popen.returncode, 1) + error = "SyntaxError: (unicode error) \N escapes not supported " \ + "(can't load unicodedata module)" + self.assertIn(error, popen.stderr.read()) def test_decimal_numeric_consistent(self): # Test that decimal and numeric are consistent, diff --git a/lib-python/2.7/test/test_unpack.py b/lib-python/2.7/test/test_unpack.py --- a/lib-python/2.7/test/test_unpack.py +++ b/lib-python/2.7/test/test_unpack.py @@ -62,14 +62,14 @@ >>> a, b = t Traceback (most recent call last): ... - ValueError: too many values to unpack + ValueError: expected length 2, got 3 Unpacking tuple of wrong size >>> a, b = l Traceback (most recent call last): ... 
- ValueError: too many values to unpack + ValueError: expected length 2, got 3 Unpacking sequence too short diff --git a/lib-python/2.7/test/test_urllib2.py b/lib-python/2.7/test/test_urllib2.py --- a/lib-python/2.7/test/test_urllib2.py +++ b/lib-python/2.7/test/test_urllib2.py @@ -307,6 +307,9 @@ def getresponse(self): return MockHTTPResponse(MockFile(), {}, 200, "OK") + def close(self): + pass + class MockHandler: # useful for testing handler machinery # see add_ordered_mock_handlers() docstring diff --git a/lib-python/2.7/test/test_warnings.py b/lib-python/2.7/test/test_warnings.py --- a/lib-python/2.7/test/test_warnings.py +++ b/lib-python/2.7/test/test_warnings.py @@ -355,7 +355,8 @@ # test_support.import_fresh_module utility function def test_accelerated(self): self.assertFalse(original_warnings is self.module) - self.assertFalse(hasattr(self.module.warn, 'func_code')) + self.assertFalse(hasattr(self.module.warn, 'func_code') and + hasattr(self.module.warn.func_code, 'co_filename')) class PyWarnTests(BaseTest, WarnTests): module = py_warnings @@ -364,7 +365,8 @@ # test_support.import_fresh_module utility function def test_pure_python(self): self.assertFalse(original_warnings is self.module) - self.assertTrue(hasattr(self.module.warn, 'func_code')) + self.assertTrue(hasattr(self.module.warn, 'func_code') and + hasattr(self.module.warn.func_code, 'co_filename')) class WCmdLineTests(unittest.TestCase): diff --git a/lib-python/2.7/test/test_weakref.py b/lib-python/2.7/test/test_weakref.py --- a/lib-python/2.7/test/test_weakref.py +++ b/lib-python/2.7/test/test_weakref.py @@ -1,4 +1,3 @@ -import gc import sys import unittest import UserList @@ -6,6 +5,7 @@ import operator from test import test_support +from test.test_support import gc_collect # Used in ReferencesTestCase.test_ref_created_during_del() . 
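The `gc_collect()` calls threaded through these tests all address the same portability issue: under CPython's reference counting, a weak reference dies the moment its referent's last strong reference goes away, but under PyPy (or any tracing GC) it dies only at the next collection. A minimal sketch of the portable pattern these hunks apply (the class name is illustrative, not from the patch):

```python
import gc
import weakref

class C(object):
    pass

o = C()
r = weakref.ref(o)
del o          # under refcounting the referent dies right here...
gc.collect()   # ...under a tracing GC, only after a forced collection
assert r() is None
```

Calling `gc.collect()` is harmless but redundant on CPython, which is why the tests route it through `test_support.gc_collect()` rather than calling `gc` directly.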
ref_from_del = None @@ -70,6 +70,7 @@ ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del o + gc_collect() self.assertTrue(ref1() is None, "expected reference to be invalidated") self.assertTrue(ref2() is None, @@ -101,13 +102,16 @@ ref1 = weakref.proxy(o, self.callback) ref2 = weakref.proxy(o, self.callback) del o + gc_collect() def check(proxy): proxy.bar self.assertRaises(weakref.ReferenceError, check, ref1) self.assertRaises(weakref.ReferenceError, check, ref2) - self.assertRaises(weakref.ReferenceError, bool, weakref.proxy(C())) + ref3 = weakref.proxy(C()) + gc_collect() + self.assertRaises(weakref.ReferenceError, bool, ref3) self.assertTrue(self.cbcalled == 2) def check_basic_ref(self, factory): @@ -124,6 +128,7 @@ o = factory() ref = weakref.ref(o, self.callback) del o + gc_collect() self.assertTrue(self.cbcalled == 1, "callback did not properly set 'cbcalled'") self.assertTrue(ref() is None, @@ -148,6 +153,7 @@ self.assertTrue(weakref.getweakrefcount(o) == 2, "wrong weak ref count for object") del proxy + gc_collect() self.assertTrue(weakref.getweakrefcount(o) == 1, "wrong weak ref count for object after deleting proxy") @@ -325,6 +331,7 @@ "got wrong number of weak reference objects") del ref1, ref2, proxy1, proxy2 + gc_collect() self.assertTrue(weakref.getweakrefcount(o) == 0, "weak reference objects not unlinked from" " referent when discarded.") @@ -338,6 +345,7 @@ ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del ref1 + gc_collect() self.assertTrue(weakref.getweakrefs(o) == [ref2], "list of refs does not match") @@ -345,10 +353,12 @@ ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del ref2 + gc_collect() self.assertTrue(weakref.getweakrefs(o) == [ref1], "list of refs does not match") del ref1 + gc_collect() self.assertTrue(weakref.getweakrefs(o) == [], "list of refs not cleared") @@ -400,13 +410,11 @@ # when the second attempt to remove the instance from the "list # of all 
objects" occurs. - import gc - class C(object): pass c = C() - wr = weakref.ref(c, lambda ignore: gc.collect()) + wr = weakref.ref(c, lambda ignore: gc_collect()) del c # There endeth the first part. It gets worse. @@ -414,7 +422,7 @@ c1 = C() c1.i = C() - wr = weakref.ref(c1.i, lambda ignore: gc.collect()) + wr = weakref.ref(c1.i, lambda ignore: gc_collect()) c2 = C() c2.c1 = c1 @@ -430,8 +438,6 @@ del c2 def test_callback_in_cycle_1(self): - import gc - class J(object): pass @@ -467,11 +473,9 @@ # search II.__mro__, but that's NULL. The result was a segfault in # a release build, and an assert failure in a debug build. del I, J, II - gc.collect() + gc_collect() def test_callback_in_cycle_2(self): - import gc - # This is just like test_callback_in_cycle_1, except that II is an # old-style class. The symptom is different then: an instance of an # old-style class looks in its own __dict__ first. 'J' happens to @@ -496,11 +500,9 @@ I.wr = weakref.ref(J, I.acallback) del I, J, II - gc.collect() + gc_collect() def test_callback_in_cycle_3(self): - import gc - # This one broke the first patch that fixed the last two. In this # case, the objects reachable from the callback aren't also reachable # from the object (c1) *triggering* the callback: you can get to @@ -520,11 +522,9 @@ c2.wr = weakref.ref(c1, c2.cb) del c1, c2 - gc.collect() + gc_collect() def test_callback_in_cycle_4(self): - import gc - # Like test_callback_in_cycle_3, except c2 and c1 have different # classes. c2's class (C) isn't reachable from c1 then, so protecting # objects reachable from the dying object (c1) isn't enough to stop @@ -548,11 +548,9 @@ c2.wr = weakref.ref(c1, c2.cb) del c1, c2, C, D - gc.collect() + gc_collect() def test_callback_in_cycle_resurrection(self): - import gc - # Do something nasty in a weakref callback: resurrect objects # from dead cycles. 
For this to be attempted, the weakref and # its callback must also be part of the cyclic trash (else the @@ -583,7 +581,7 @@ del c1, c2, C # make them all trash self.assertEqual(alist, []) # del isn't enough to reclaim anything - gc.collect() + gc_collect() # c1.wr and c2.wr were part of the cyclic trash, so should have # been cleared without their callbacks executing. OTOH, the weakref # to C is bound to a function local (wr), and wasn't trash, so that @@ -593,12 +591,10 @@ self.assertEqual(wr(), None) del alist[:] - gc.collect() + gc_collect() self.assertEqual(alist, []) def test_callbacks_on_callback(self): - import gc - # Set up weakref callbacks *on* weakref callbacks. alist = [] def safe_callback(ignore): @@ -626,12 +622,12 @@ del callback, c, d, C self.assertEqual(alist, []) # del isn't enough to clean up cycles - gc.collect() + gc_collect() self.assertEqual(alist, ["safe_callback called"]) self.assertEqual(external_wr(), None) del alist[:] - gc.collect() + gc_collect() self.assertEqual(alist, []) def test_gc_during_ref_creation(self): @@ -641,9 +637,11 @@ self.check_gc_during_creation(weakref.proxy) def check_gc_during_creation(self, makeref): - thresholds = gc.get_threshold() - gc.set_threshold(1, 1, 1) - gc.collect() + if test_support.check_impl_detail(): + import gc + thresholds = gc.get_threshold() + gc.set_threshold(1, 1, 1) + gc_collect() class A: pass @@ -663,7 +661,8 @@ weakref.ref(referenced, callback) finally: - gc.set_threshold(*thresholds) + if test_support.check_impl_detail(): + gc.set_threshold(*thresholds) def test_ref_created_during_del(self): # Bug #1377858 @@ -683,7 +682,7 @@ r = weakref.ref(Exception) self.assertRaises(TypeError, r.__init__, 0, 0, 0, 0, 0) # No exception should be raised here - gc.collect() + gc_collect() def test_classes(self): # Check that both old-style classes and new-style classes @@ -696,12 +695,12 @@ weakref.ref(int) a = weakref.ref(A, l.append) A = None - gc.collect() + gc_collect() self.assertEqual(a(), None) 
self.assertEqual(l, [a]) b = weakref.ref(B, l.append) B = None - gc.collect() + gc_collect() self.assertEqual(b(), None) self.assertEqual(l, [a, b]) @@ -722,6 +721,7 @@ self.assertTrue(mr.called) self.assertEqual(mr.value, 24) del o + gc_collect() self.assertTrue(mr() is None) self.assertTrue(mr.called) @@ -738,9 +738,11 @@ self.assertEqual(weakref.getweakrefcount(o), 3) refs = weakref.getweakrefs(o) self.assertEqual(len(refs), 3) - self.assertTrue(r2 is refs[0]) - self.assertIn(r1, refs[1:]) - self.assertIn(r3, refs[1:]) + assert set(refs) == set((r1, r2, r3)) + if test_support.check_impl_detail(): + self.assertTrue(r2 is refs[0]) + self.assertIn(r1, refs[1:]) + self.assertIn(r3, refs[1:]) def test_subclass_refs_dont_conflate_callbacks(self): class MyRef(weakref.ref): @@ -839,15 +841,18 @@ del items1, items2 self.assertTrue(len(dict) == self.COUNT) del objects[0] + gc_collect() self.assertTrue(len(dict) == (self.COUNT - 1), "deleting object did not cause dictionary update") del objects, o + gc_collect() self.assertTrue(len(dict) == 0, "deleting the values did not clear the dictionary") # regression on SF bug #447152: dict = weakref.WeakValueDictionary() self.assertRaises(KeyError, dict.__getitem__, 1) dict[2] = C() + gc_collect() self.assertRaises(KeyError, dict.__getitem__, 2) def test_weak_keys(self): @@ -868,9 +873,11 @@ del items1, items2 self.assertTrue(len(dict) == self.COUNT) del objects[0] + gc_collect() self.assertTrue(len(dict) == (self.COUNT - 1), "deleting object did not cause dictionary update") del objects, o + gc_collect() self.assertTrue(len(dict) == 0, "deleting the keys did not clear the dictionary") o = Object(42) @@ -986,13 +993,13 @@ self.assertTrue(len(weakdict) == 2) k, v = weakdict.popitem() self.assertTrue(len(weakdict) == 1) - if k is key1: + if k == key1: self.assertTrue(v is value1) else: self.assertTrue(v is value2) k, v = weakdict.popitem() self.assertTrue(len(weakdict) == 0) - if k is key1: + if k == key1: self.assertTrue(v is 
value1) else: self.assertTrue(v is value2) @@ -1137,6 +1144,7 @@ for o in objs: count += 1 del d[o] + gc_collect() self.assertEqual(len(d), 0) self.assertEqual(count, 2) @@ -1177,6 +1185,7 @@ >>> o is o2 True >>> del o, o2 +>>> gc_collect() >>> print r() None @@ -1229,6 +1238,7 @@ >>> id2obj(a_id) is a True >>> del a +>>> gc_collect() >>> try: ... id2obj(a_id) ... except KeyError: diff --git a/lib-python/2.7/test/test_weakset.py b/lib-python/2.7/test/test_weakset.py --- a/lib-python/2.7/test/test_weakset.py +++ b/lib-python/2.7/test/test_weakset.py @@ -57,6 +57,7 @@ self.assertEqual(len(self.s), len(self.d)) self.assertEqual(len(self.fs), 1) del self.obj + test_support.gc_collect() self.assertEqual(len(self.fs), 0) def test_contains(self): @@ -66,6 +67,7 @@ self.assertNotIn(1, self.s) self.assertIn(self.obj, self.fs) del self.obj + test_support.gc_collect() self.assertNotIn(SomeClass('F'), self.fs) def test_union(self): @@ -204,6 +206,7 @@ self.assertEqual(self.s, dup) self.assertRaises(TypeError, self.s.add, []) self.fs.add(Foo()) + test_support.gc_collect() self.assertTrue(len(self.fs) == 1) self.fs.add(self.obj) self.assertTrue(len(self.fs) == 1) @@ -330,10 +333,11 @@ next(it) # Trigger internal iteration # Destroy an item del items[-1] - gc.collect() # just in case + test_support.gc_collect() # We have removed either the first consumed items, or another one self.assertIn(len(list(it)), [len(items), len(items) - 1]) del it + test_support.gc_collect() # The removal has been committed self.assertEqual(len(s), len(items)) diff --git a/lib-python/2.7/test/test_xml_etree.py b/lib-python/2.7/test/test_xml_etree.py --- a/lib-python/2.7/test/test_xml_etree.py +++ b/lib-python/2.7/test/test_xml_etree.py @@ -1633,10 +1633,10 @@ Check reference leak. >>> xmltoolkit63() - >>> count = sys.getrefcount(None) + >>> count = sys.getrefcount(None) #doctest: +SKIP >>> for i in range(1000): ... 
xmltoolkit63() - >>> sys.getrefcount(None) - count + >>> sys.getrefcount(None) - count #doctest: +SKIP 0 """ diff --git a/lib-python/2.7/test/test_xmlrpc.py b/lib-python/2.7/test/test_xmlrpc.py --- a/lib-python/2.7/test/test_xmlrpc.py +++ b/lib-python/2.7/test/test_xmlrpc.py @@ -308,7 +308,7 @@ global ADDR, PORT, URL ADDR, PORT = serv.socket.getsockname() #connect to IP address directly. This avoids socket.create_connection() - #trying to connect to "localhost" using all address families, which + #trying to connect to to "localhost" using all address families, which #causes slowdown e.g. on vista which supports AF_INET6. The server listens #on AF_INET only. URL = "http://%s:%d"%(ADDR, PORT) @@ -367,7 +367,7 @@ global ADDR, PORT, URL ADDR, PORT = serv.socket.getsockname() #connect to IP address directly. This avoids socket.create_connection() - #trying to connect to "localhost" using all address families, which + #trying to connect to to "localhost" using all address families, which #causes slowdown e.g. on vista which supports AF_INET6. The server listens #on AF_INET only. URL = "http://%s:%d"%(ADDR, PORT) @@ -435,6 +435,7 @@ def tearDown(self): # wait on the server thread to terminate + test_support.gc_collect() # to close the active connections self.evt.wait(10) # disable traceback reporting @@ -472,9 +473,6 @@ # protocol error; provide additional information in test output self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) - def test_unicode_host(self): - server = xmlrpclib.ServerProxy(u"http://%s:%d/RPC2"%(ADDR, PORT)) - self.assertEqual(server.add("a", u"\xe9"), u"a\xe9") # [ch] The test 404 is causing lots of false alarms. def XXXtest_404(self): @@ -589,12 +587,6 @@ # This avoids waiting for the socket timeout. self.test_simple1() - def test_partial_post(self): - # Check that a partial POST doesn't make the server loop: issue #14001. 
- conn = httplib.HTTPConnection(ADDR, PORT) - conn.request('POST', '/RPC2 HTTP/1.0\r\nContent-Length: 100\r\n\r\nbye') - conn.close() - class MultiPathServerTestCase(BaseServerTestCase): threadFunc = staticmethod(http_multi_server) request_count = 2 diff --git a/lib-python/2.7/test/test_zlib.py b/lib-python/2.7/test/test_zlib.py --- a/lib-python/2.7/test/test_zlib.py +++ b/lib-python/2.7/test/test_zlib.py @@ -1,6 +1,7 @@ import unittest from test.test_support import TESTFN, run_unittest, import_module, unlink, requires import binascii +import os import random from test.test_support import precisionbigmemtest, _1G, _4G import sys @@ -99,14 +100,7 @@ class BaseCompressTestCase(object): def check_big_compress_buffer(self, size, compress_func): - _1M = 1024 * 1024 - fmt = "%%0%dx" % (2 * _1M) - # Generate 10MB worth of random, and expand it by repeating it. - # The assumption is that zlib's memory is not big enough to exploit - # such spread out redundancy. - data = ''.join([binascii.a2b_hex(fmt % random.getrandbits(8 * _1M)) - for i in range(10)]) - data = data * (size // len(data) + 1) + data = os.urandom(size) try: compress_func(data) finally: diff --git a/lib-python/2.7/trace.py b/lib-python/2.7/trace.py --- a/lib-python/2.7/trace.py +++ b/lib-python/2.7/trace.py @@ -559,6 +559,10 @@ if len(funcs) == 1: dicts = [d for d in gc.get_referrers(funcs[0]) if isinstance(d, dict)] + if len(dicts) == 0: + # PyPy may store functions directly on the class + # (more exactly: the container is not a Python object) + dicts = funcs if len(dicts) == 1: classes = [c for c in gc.get_referrers(dicts[0]) if hasattr(c, "__bases__")] diff --git a/lib-python/2.7/urllib2.py b/lib-python/2.7/urllib2.py --- a/lib-python/2.7/urllib2.py +++ b/lib-python/2.7/urllib2.py @@ -1171,6 +1171,7 @@ except TypeError: #buffering kw not supported r = h.getresponse() except socket.error, err: # XXX what error? 
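The one-line urllib2 fix just below follows the same rule as the file-handling changes earlier in this patch: when a call on a connection object fails, close it explicitly before translating and re-raising the error, instead of leaving the socket for the garbage collector (which PyPy may run much later). A self-contained sketch with a stand-in connection class (`FakeConnection` and `fetch` are illustrative, not part of the patch):

```python
class FakeConnection(object):
    """Stand-in for httplib.HTTPConnection, for illustration only."""
    def __init__(self):
        self.closed = False
    def getresponse(self):
        raise IOError("connection reset")
    def close(self):
        self.closed = True

def fetch(conn):
    try:
        return conn.getresponse()
    except IOError as err:
        conn.close()            # release the socket now, not at GC time
        raise RuntimeError(err)

conn = FakeConnection()
try:
    fetch(conn)
except RuntimeError:
    pass
assert conn.closed
```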
+ h.close() raise URLError(err) # Pick apart the HTTPResponse object to get the addinfourl diff --git a/lib-python/2.7/uuid.py b/lib-python/2.7/uuid.py --- a/lib-python/2.7/uuid.py +++ b/lib-python/2.7/uuid.py @@ -406,8 +406,12 @@ continue if hasattr(lib, 'uuid_generate_random'): _uuid_generate_random = lib.uuid_generate_random + _uuid_generate_random.argtypes = [ctypes.c_char * 16] + _uuid_generate_random.restype = None if hasattr(lib, 'uuid_generate_time'): _uuid_generate_time = lib.uuid_generate_time + _uuid_generate_time.argtypes = [ctypes.c_char * 16] + _uuid_generate_time.restype = None # The uuid_generate_* functions are broken on MacOS X 10.5, as noted # in issue #8621 the function generates the same sequence of values @@ -436,6 +440,9 @@ lib = None _UuidCreate = getattr(lib, 'UuidCreateSequential', getattr(lib, 'UuidCreate', None)) + if _UuidCreate is not None: + _UuidCreate.argtypes = [ctypes.c_char * 16] + _UuidCreate.restype = ctypes.c_int except: pass diff --git a/lib-python/3.2/__future__.py b/lib-python/3.2/__future__.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/__future__.py @@ -0,0 +1,134 @@ +"""Record of phased-in incompatible language changes. + +Each line is of the form: + + FeatureName = "_Feature(" OptionalRelease "," MandatoryRelease "," + CompilerFlag ")" + +where, normally, OptionalRelease < MandatoryRelease, and both are 5-tuples +of the same form as sys.version_info: + + (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int + PY_MINOR_VERSION, # the 1; an int + PY_MICRO_VERSION, # the 0; an int + PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string + PY_RELEASE_SERIAL # the 3; an int + ) + +OptionalRelease records the first release in which + + from __future__ import FeatureName + +was accepted. + +In the case of MandatoryReleases that have not yet occurred, +MandatoryRelease predicts the release in which the feature will become part +of the language. 
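The docstring above describes the layout of each feature record; the `_Feature` instances defined later in this same file can be inspected directly at runtime. For example (the expected tuples are taken from the `division` definition below):

```python
import __future__

# Each feature records its optional release, its mandatory release,
# and the compile() flag that enables it in dynamically compiled code.
feat = __future__.division
assert feat.getOptionalRelease() == (2, 2, 0, "alpha", 2)
assert feat.getMandatoryRelease() == (3, 0, 0, "alpha", 0)
assert feat.compiler_flag == __future__.CO_FUTURE_DIVISION
```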
+ +Else MandatoryRelease records when the feature became part of the language; +in releases at or after that, modules no longer need + + from __future__ import FeatureName + +to use the feature in question, but may continue to use such imports. + +MandatoryRelease may also be None, meaning that a planned feature got +dropped. + +Instances of class _Feature have two corresponding methods, +.getOptionalRelease() and .getMandatoryRelease(). + +CompilerFlag is the (bitfield) flag that should be passed in the fourth +argument to the builtin function compile() to enable the feature in +dynamically compiled code. This flag is stored in the .compiler_flag +attribute on _Future instances. These values must match the appropriate +#defines of CO_xxx flags in Include/compile.h. + +No feature line is ever to be deleted from this file. +""" + +all_feature_names = [ + "nested_scopes", + "generators", + "division", + "absolute_import", + "with_statement", + "print_function", + "unicode_literals", + "barry_as_FLUFL", +] + +__all__ = ["all_feature_names"] + all_feature_names + +# The CO_xxx symbols are defined here under the same names used by +# compile.h, so that an editor search will find them here. However, +# they're not exported in __all__, because they don't really belong to +# this module. 
+CO_NESTED = 0x0010 # nested_scopes +CO_GENERATOR_ALLOWED = 0 # generators (obsolete, was 0x1000) +CO_FUTURE_DIVISION = 0x2000 # division +CO_FUTURE_ABSOLUTE_IMPORT = 0x4000 # perform absolute imports by default +CO_FUTURE_WITH_STATEMENT = 0x8000 # with statement +CO_FUTURE_PRINT_FUNCTION = 0x10000 # print function +CO_FUTURE_UNICODE_LITERALS = 0x20000 # unicode string literals +CO_FUTURE_BARRY_AS_BDFL = 0x40000 + +class _Feature: + def __init__(self, optionalRelease, mandatoryRelease, compiler_flag): + self.optional = optionalRelease + self.mandatory = mandatoryRelease + self.compiler_flag = compiler_flag + + def getOptionalRelease(self): + """Return first release in which this feature was recognized. + + This is a 5-tuple, of the same form as sys.version_info. + """ + + return self.optional + + def getMandatoryRelease(self): + """Return release in which this feature will become mandatory. + + This is a 5-tuple, of the same form as sys.version_info, or, if + the feature was dropped, is None. 
+ """ + + return self.mandatory + + def __repr__(self): + return "_Feature" + repr((self.optional, + self.mandatory, + self.compiler_flag)) + +nested_scopes = _Feature((2, 1, 0, "beta", 1), + (2, 2, 0, "alpha", 0), + CO_NESTED) + +generators = _Feature((2, 2, 0, "alpha", 1), + (2, 3, 0, "final", 0), + CO_GENERATOR_ALLOWED) + +division = _Feature((2, 2, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_DIVISION) + +absolute_import = _Feature((2, 5, 0, "alpha", 1), + (2, 7, 0, "alpha", 0), + CO_FUTURE_ABSOLUTE_IMPORT) + +with_statement = _Feature((2, 5, 0, "alpha", 1), + (2, 6, 0, "alpha", 0), + CO_FUTURE_WITH_STATEMENT) + +print_function = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_PRINT_FUNCTION) + +unicode_literals = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_UNICODE_LITERALS) + +barry_as_FLUFL = _Feature((3, 1, 0, "alpha", 2), + (3, 9, 0, "alpha", 0), + CO_FUTURE_BARRY_AS_BDFL) diff --git a/lib-python/3.2/__phello__.foo.py b/lib-python/3.2/__phello__.foo.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/__phello__.foo.py @@ -0,0 +1,1 @@ +# This file exists as a helper for the test.test_frozen module. diff --git a/lib-python/3.2/_abcoll.py b/lib-python/3.2/_abcoll.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_abcoll.py @@ -0,0 +1,623 @@ +# Copyright 2007 Google, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +"""Abstract Base Classes (ABCs) for collections, according to PEP 3119. + +DON'T USE THIS MODULE DIRECTLY! The classes here should be imported +via collections; they are defined here only to alleviate certain +bootstrapping issues. Unit tests are in test_collections. 
+""" + +from abc import ABCMeta, abstractmethod +import sys + +__all__ = ["Hashable", "Iterable", "Iterator", + "Sized", "Container", "Callable", + "Set", "MutableSet", + "Mapping", "MutableMapping", + "MappingView", "KeysView", "ItemsView", "ValuesView", + "Sequence", "MutableSequence", + "ByteString", + ] + + +### collection related types which are not exposed through builtin ### +## iterators ## +bytes_iterator = type(iter(b'')) +bytearray_iterator = type(iter(bytearray())) +#callable_iterator = ??? +dict_keyiterator = type(iter({}.keys())) +dict_valueiterator = type(iter({}.values())) +dict_itemiterator = type(iter({}.items())) +list_iterator = type(iter([])) +list_reverseiterator = type(iter(reversed([]))) +range_iterator = type(iter(range(0))) +set_iterator = type(iter(set())) +str_iterator = type(iter("")) +tuple_iterator = type(iter(())) +zip_iterator = type(iter(zip())) +## views ## +dict_keys = type({}.keys()) +dict_values = type({}.values()) +dict_items = type({}.items()) +## misc ## +dict_proxy = type(type.__dict__) + + +### ONE-TRICK PONIES ### + +class Hashable(metaclass=ABCMeta): + + @abstractmethod + def __hash__(self): + return 0 + + @classmethod + def __subclasshook__(cls, C): + if cls is Hashable: + for B in C.__mro__: + if "__hash__" in B.__dict__: + if B.__dict__["__hash__"]: + return True + break + return NotImplemented + + +class Iterable(metaclass=ABCMeta): + + @abstractmethod + def __iter__(self): + while False: + yield None + + @classmethod + def __subclasshook__(cls, C): + if cls is Iterable: + if any("__iter__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Iterator(Iterable): + + @abstractmethod + def __next__(self): + raise StopIteration + + def __iter__(self): + return self + + @classmethod + def __subclasshook__(cls, C): + if cls is Iterator: + if (any("__next__" in B.__dict__ for B in C.__mro__) and + any("__iter__" in B.__dict__ for B in C.__mro__)): + return True + return NotImplemented + 
+Iterator.register(bytes_iterator) +Iterator.register(bytearray_iterator) +#Iterator.register(callable_iterator) +Iterator.register(dict_keyiterator) +Iterator.register(dict_valueiterator) +Iterator.register(dict_itemiterator) +Iterator.register(list_iterator) +Iterator.register(list_reverseiterator) +Iterator.register(range_iterator) +Iterator.register(set_iterator) +Iterator.register(str_iterator) +Iterator.register(tuple_iterator) +Iterator.register(zip_iterator) + +class Sized(metaclass=ABCMeta): + + @abstractmethod + def __len__(self): + return 0 + + @classmethod + def __subclasshook__(cls, C): + if cls is Sized: + if any("__len__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Container(metaclass=ABCMeta): + + @abstractmethod + def __contains__(self, x): + return False + + @classmethod + def __subclasshook__(cls, C): + if cls is Container: + if any("__contains__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Callable(metaclass=ABCMeta): + + @abstractmethod + def __call__(self, *args, **kwds): + return False + + @classmethod + def __subclasshook__(cls, C): + if cls is Callable: + if any("__call__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +### SETS ### + + +class Set(Sized, Iterable, Container): + + """A set is a finite, iterable container. + + This class provides concrete generic implementations of all + methods except for __contains__, __iter__ and __len__. + + To override the comparisons (presumably for speed, as the + semantics are fixed), all you have to do is redefine __le__ and + then the other operations will automatically follow suit. 
+ """ + + def __le__(self, other): + if not isinstance(other, Set): + return NotImplemented + if len(self) > len(other): + return False + for elem in self: + if elem not in other: + return False + return True + + def __lt__(self, other): + if not isinstance(other, Set): + return NotImplemented + return len(self) < len(other) and self.__le__(other) + + def __gt__(self, other): + if not isinstance(other, Set): + return NotImplemented + return other < self + + def __ge__(self, other): + if not isinstance(other, Set): + return NotImplemented + return other <= self + + def __eq__(self, other): + if not isinstance(other, Set): + return NotImplemented + return len(self) == len(other) and self.__le__(other) + + def __ne__(self, other): + return not (self == other) + + @classmethod + def _from_iterable(cls, it): + '''Construct an instance of the class from any iterable input. + + Must override this method if the class constructor signature + does not accept an iterable for an input. + ''' + return cls(it) + + def __and__(self, other): + if not isinstance(other, Iterable): + return NotImplemented + return self._from_iterable(value for value in other if value in self) + + def isdisjoint(self, other): + for value in other: + if value in self: + return False + return True + + def __or__(self, other): + if not isinstance(other, Iterable): + return NotImplemented + chain = (e for s in (self, other) for e in s) + return self._from_iterable(chain) + + def __sub__(self, other): + if not isinstance(other, Set): + if not isinstance(other, Iterable): + return NotImplemented + other = self._from_iterable(other) + return self._from_iterable(value for value in self + if value not in other) + + def __xor__(self, other): + if not isinstance(other, Set): + if not isinstance(other, Iterable): + return NotImplemented + other = self._from_iterable(other) + return (self - other) | (other - self) + + def _hash(self): + """Compute the hash value of a set. 
+ + Note that we don't define __hash__: not all sets are hashable. + But if you define a hashable set type, its __hash__ should + call this function. + + This must be compatible __eq__. + + All sets ought to compare equal if they contain the same + elements, regardless of how they are implemented, and + regardless of the order of the elements; so there's not much + freedom for __eq__ or __hash__. We match the algorithm used + by the built-in frozenset type. + """ + MAX = sys.maxsize + MASK = 2 * MAX + 1 + n = len(self) + h = 1927868237 * (n + 1) + h &= MASK + for x in self: + hx = hash(x) + h ^= (hx ^ (hx << 16) ^ 89869747) * 3644798167 + h &= MASK + h = h * 69069 + 907133923 + h &= MASK + if h > MAX: + h -= MASK + 1 + if h == -1: + h = 590923713 + return h + +Set.register(frozenset) + + +class MutableSet(Set): + + @abstractmethod + def add(self, value): + """Add an element.""" + raise NotImplementedError + + @abstractmethod + def discard(self, value): + """Remove an element. Do not raise an exception if absent.""" + raise NotImplementedError + + def remove(self, value): + """Remove an element. If not a member, raise a KeyError.""" + if value not in self: + raise KeyError(value) + self.discard(value) + + def pop(self): + """Return the popped value. Raise KeyError if empty.""" + it = iter(self) + try: + value = next(it) + except StopIteration: + raise KeyError + self.discard(value) + return value + + def clear(self): + """This is slow (creates N new iterators!) 
but effective.""" + try: + while True: + self.pop() + except KeyError: + pass + + def __ior__(self, it): + for value in it: + self.add(value) + return self + + def __iand__(self, it): + for value in (self - it): + self.discard(value) + return self + + def __ixor__(self, it): + if it is self: + self.clear() + else: + if not isinstance(it, Set): + it = self._from_iterable(it) + for value in it: + if value in self: + self.discard(value) + else: + self.add(value) + return self + + def __isub__(self, it): + if it is self: + self.clear() + else: + for value in it: + self.discard(value) + return self + +MutableSet.register(set) + + +### MAPPINGS ### + + +class Mapping(Sized, Iterable, Container): + + @abstractmethod + def __getitem__(self, key): + raise KeyError + + def get(self, key, default=None): + try: + return self[key] + except KeyError: + return default + + def __contains__(self, key): + try: + self[key] + except KeyError: + return False + else: + return True + + def keys(self): + return KeysView(self) + + def items(self): + return ItemsView(self) + + def values(self): + return ValuesView(self) + + def __eq__(self, other): + if not isinstance(other, Mapping): + return NotImplemented + return dict(self.items()) == dict(other.items()) + + def __ne__(self, other): + return not (self == other) + + +class MappingView(Sized): + + def __init__(self, mapping): + self._mapping = mapping + + def __len__(self): + return len(self._mapping) + + def __repr__(self): + return '{0.__class__.__name__}({0._mapping!r})'.format(self) + + +class KeysView(MappingView, Set): + + @classmethod + def _from_iterable(self, it): + return set(it) + + def __contains__(self, key): + return key in self._mapping + + def __iter__(self): + for key in self._mapping: + yield key + +KeysView.register(dict_keys) + + +class ItemsView(MappingView, Set): + + @classmethod + def _from_iterable(self, it): + return set(it) + + def __contains__(self, item): + key, value = item + try: + v = self._mapping[key] + 
except KeyError: + return False + else: + return v == value + + def __iter__(self): + for key in self._mapping: + yield (key, self._mapping[key]) + +ItemsView.register(dict_items) + + +class ValuesView(MappingView): + + def __contains__(self, value): + for key in self._mapping: + if value == self._mapping[key]: + return True + return False + + def __iter__(self): + for key in self._mapping: + yield self._mapping[key] + +ValuesView.register(dict_values) + + +class MutableMapping(Mapping): + + @abstractmethod + def __setitem__(self, key, value): + raise KeyError + + @abstractmethod + def __delitem__(self, key): + raise KeyError + + __marker = object() + + def pop(self, key, default=__marker): + try: + value = self[key] + except KeyError: + if default is self.__marker: + raise + return default + else: + del self[key] + return value + + def popitem(self): + try: + key = next(iter(self)) + except StopIteration: + raise KeyError + value = self[key] + del self[key] + return key, value + + def clear(self): + try: + while True: + self.popitem() + except KeyError: + pass + + def update(*args, **kwds): + if len(args) > 2: + raise TypeError("update() takes at most 2 positional " + "arguments ({} given)".format(len(args))) + elif not args: + raise TypeError("update() takes at least 1 argument (0 given)") + self = args[0] + other = args[1] if len(args) >= 2 else () + + if isinstance(other, Mapping): + for key in other: + self[key] = other[key] + elif hasattr(other, "keys"): + for key in other.keys(): + self[key] = other[key] + else: + for key, value in other: + self[key] = value + for key, value in kwds.items(): + self[key] = value + + def setdefault(self, key, default=None): + try: + return self[key] + except KeyError: + self[key] = default + return default + +MutableMapping.register(dict) + + +### SEQUENCES ### + + +class Sequence(Sized, Iterable, Container): + + """All the operations on a read-only sequence. 
+ + Concrete subclasses must override __new__ or __init__, + __getitem__, and __len__. + """ + + @abstractmethod + def __getitem__(self, index): + raise IndexError + + def __iter__(self): + i = 0 + try: + while True: + v = self[i] + yield v + i += 1 + except IndexError: + return + + def __contains__(self, value): + for v in self: + if v == value: + return True + return False + + def __reversed__(self): + for i in reversed(range(len(self))): + yield self[i] + + def index(self, value): + for i, v in enumerate(self): + if v == value: + return i + raise ValueError + + def count(self, value): + return sum(1 for v in self if v == value) + +Sequence.register(tuple) +Sequence.register(str) +Sequence.register(range) + + +class ByteString(Sequence): + + """This unifies bytes and bytearray. + + XXX Should add all their methods. + """ + +ByteString.register(bytes) +ByteString.register(bytearray) + + +class MutableSequence(Sequence): + + @abstractmethod + def __setitem__(self, index, value): + raise IndexError + + @abstractmethod + def __delitem__(self, index): + raise IndexError + + @abstractmethod + def insert(self, index, value): + raise IndexError + + def append(self, value): + self.insert(len(self), value) + + def reverse(self): + n = len(self) + for i in range(n//2): + self[i], self[n-i-1] = self[n-i-1], self[i] + + def extend(self, values): + for v in values: + self.append(v) + + def pop(self, index=-1): + v = self[index] + del self[index] + return v + + def remove(self, value): + del self[self.index(value)] + + def __iadd__(self, values): + self.extend(values) + return self + +MutableSequence.register(list) +MutableSequence.register(bytearray) # Multiply inheriting, see ByteString diff --git a/lib-python/3.2/_compat_pickle.py b/lib-python/3.2/_compat_pickle.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_compat_pickle.py @@ -0,0 +1,81 @@ +# This module is used to map the old Python 2 names to the new names used in +# Python 3 for the pickle module. 
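The Sequence/MutableSequence machinery in the hunk above rewards a quick demonstration: a concrete class supplies only the abstract methods, and the mixin methods (append, extend, pop, remove, __iadd__, index, count) come for free. A minimal sketch, not part of the patch — the Deck name is ours, and we import from collections.abc, where these ABCs live in modern Python (in the 3.2 stdlib shown here they live in collections):

```python
from collections.abc import MutableSequence  # 'collections' in Python 3.2

class Deck(MutableSequence):
    """Toy MutableSequence: only the five abstract methods are written;
    append/extend/pop/remove/__iadd__/index/count come from the mixins."""
    def __init__(self, items=()):
        self._items = list(items)
    def __getitem__(self, i):
        return self._items[i]
    def __setitem__(self, i, v):
        self._items[i] = v
    def __delitem__(self, i):
        del self._items[i]
    def __len__(self):
        return len(self._items)
    def insert(self, i, v):
        self._items.insert(i, v)

d = Deck([1, 2])
d.append(3)          # provided by the MutableSequence mixin via insert()
d += [4]             # __iadd__ also comes from the mixin (calls extend())
d.remove(2)          # uses the inherited index() plus __delitem__
print(list(d))       # [1, 3, 4]
```

Note that remove() goes through the inherited index(), which does a linear scan, so the mixins trade speed for the minimal five-method interface.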
This is needed to make pickle streams +# generated with Python 2 loadable by Python 3. + +# This is a copy of lib2to3.fixes.fix_imports.MAPPING. We cannot import +# lib2to3 and use the mapping defined there, because lib2to3 uses pickle. +# Thus, this could cause the module to be imported recursively. +IMPORT_MAPPING = { + 'StringIO': 'io', + 'cStringIO': 'io', + 'cPickle': 'pickle', + '__builtin__' : 'builtins', + 'copy_reg': 'copyreg', + 'Queue': 'queue', + 'SocketServer': 'socketserver', + 'ConfigParser': 'configparser', + 'repr': 'reprlib', + 'FileDialog': 'tkinter.filedialog', + 'tkFileDialog': 'tkinter.filedialog', + 'SimpleDialog': 'tkinter.simpledialog', + 'tkSimpleDialog': 'tkinter.simpledialog', + 'tkColorChooser': 'tkinter.colorchooser', + 'tkCommonDialog': 'tkinter.commondialog', + 'Dialog': 'tkinter.dialog', + 'Tkdnd': 'tkinter.dnd', + 'tkFont': 'tkinter.font', + 'tkMessageBox': 'tkinter.messagebox', + 'ScrolledText': 'tkinter.scrolledtext', + 'Tkconstants': 'tkinter.constants', + 'Tix': 'tkinter.tix', + 'ttk': 'tkinter.ttk', + 'Tkinter': 'tkinter', + 'markupbase': '_markupbase', + '_winreg': 'winreg', + 'thread': '_thread', + 'dummy_thread': '_dummy_thread', + 'dbhash': 'dbm.bsd', + 'dumbdbm': 'dbm.dumb', + 'dbm': 'dbm.ndbm', + 'gdbm': 'dbm.gnu', + 'xmlrpclib': 'xmlrpc.client', + 'DocXMLRPCServer': 'xmlrpc.server', + 'SimpleXMLRPCServer': 'xmlrpc.server', + 'httplib': 'http.client', + 'htmlentitydefs' : 'html.entities', + 'HTMLParser' : 'html.parser', + 'Cookie': 'http.cookies', + 'cookielib': 'http.cookiejar', + 'BaseHTTPServer': 'http.server', + 'SimpleHTTPServer': 'http.server', + 'CGIHTTPServer': 'http.server', + 'test.test_support': 'test.support', + 'commands': 'subprocess', + 'UserString' : 'collections', + 'UserList' : 'collections', + 'urlparse' : 'urllib.parse', + 'robotparser' : 'urllib.robotparser', + 'whichdb': 'dbm', + 'anydbm': 'dbm' +} + + +# This contains rename rules that are easy to handle. We ignore the more +# complex stuff (e.g. 
mapping the names in the urllib and types modules). +# These rules should be run before import names are fixed. +NAME_MAPPING = { + ('__builtin__', 'xrange'): ('builtins', 'range'), + ('__builtin__', 'reduce'): ('functools', 'reduce'), + ('__builtin__', 'intern'): ('sys', 'intern'), + ('__builtin__', 'unichr'): ('builtins', 'chr'), + ('__builtin__', 'basestring'): ('builtins', 'str'), + ('__builtin__', 'long'): ('builtins', 'int'), + ('itertools', 'izip'): ('builtins', 'zip'), + ('itertools', 'imap'): ('builtins', 'map'), + ('itertools', 'ifilter'): ('builtins', 'filter'), + ('itertools', 'ifilterfalse'): ('itertools', 'filterfalse'), +} + +# Same, but for 3.x to 2.x +REVERSE_IMPORT_MAPPING = dict((v, k) for (k, v) in IMPORT_MAPPING.items()) +REVERSE_NAME_MAPPING = dict((v, k) for (k, v) in NAME_MAPPING.items()) diff --git a/lib-python/3.2/_dummy_thread.py b/lib-python/3.2/_dummy_thread.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_dummy_thread.py @@ -0,0 +1,155 @@ +"""Drop-in replacement for the thread module. + +Meant to be used as a brain-dead substitute so that threaded code does +not need to be rewritten for when the thread module is not present. + +Suggested usage is:: + + try: + import _thread + except ImportError: + import _dummy_thread as _thread + +""" +# Exports only things specified by thread documentation; +# skipping obsolete synonyms allocate(), start_new(), exit_thread(). +__all__ = ['error', 'start_new_thread', 'exit', 'get_ident', 'allocate_lock', + 'interrupt_main', 'LockType'] + +# A dummy value +TIMEOUT_MAX = 2**31 + +# NOTE: this module can be imported early in the extension building process, +# and so top level imports of other modules should be avoided. Instead, all +# imports are done when needed on a function-by-function basis. Since threads +# are disabled, the import lock should not be an issue anyway (??). 
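The _dummy_thread docstring above prescribes a try/except import fallback; here is a short sketch of how code written against _thread degrades gracefully to the dummy module. This is illustrative, not part of the patch — the worker/done names are ours, and the lock doubles as a crude join so the result is deterministic under both the real and the dummy module:

```python
# Sketch of the fallback pattern from the _dummy_thread docstring above.
# Under _dummy_thread, start_new_thread() runs the function synchronously;
# under the real _thread it runs concurrently, so a lock serves as a join.
try:
    import _thread
except ImportError:          # no thread support: fall back to the dummy
    import _dummy_thread as _thread

done = _thread.allocate_lock()
done.acquire()               # held until the worker finishes
results = []

def worker(a, b):
    results.append(a + b)
    done.release()

_thread.start_new_thread(worker, (1, 2))
done.acquire()               # blocks until worker releases (no-op wait w/ dummy)
print(results)               # [3]
```

With the dummy module, the second acquire() succeeds immediately because the "thread" already ran to completion inside start_new_thread().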
+ +class error(Exception): + """Dummy implementation of _thread.error.""" + + def __init__(self, *args): + self.args = args + +def start_new_thread(function, args, kwargs={}): + """Dummy implementation of _thread.start_new_thread(). + + Compatibility is maintained by making sure that ``args`` is a + tuple and ``kwargs`` is a dictionary. If an exception is raised + and it is SystemExit (which can be done by _thread.exit()) it is + caught and nothing is done; all other exceptions are printed out + by using traceback.print_exc(). + + If the executed function calls interrupt_main the KeyboardInterrupt will be + raised when the function returns. + + """ + if type(args) != type(tuple()): + raise TypeError("2nd arg must be a tuple") + if type(kwargs) != type(dict()): + raise TypeError("3rd arg must be a dict") + global _main + _main = False + try: + function(*args, **kwargs) + except SystemExit: + pass + except: + import traceback + traceback.print_exc() + _main = True + global _interrupt + if _interrupt: + _interrupt = False + raise KeyboardInterrupt + +def exit(): + """Dummy implementation of _thread.exit().""" + raise SystemExit + +def get_ident(): + """Dummy implementation of _thread.get_ident(). + + Since this module should only be used when _threadmodule is not + available, it is safe to assume that the current process is the + only thread. Thus a constant can be safely returned. + """ + return -1 + +def allocate_lock(): + """Dummy implementation of _thread.allocate_lock().""" + return LockType() + +def stack_size(size=None): + """Dummy implementation of _thread.stack_size().""" + if size is not None: + raise error("setting thread stack size not supported") + return 0 + +class LockType(object): + """Class implementing dummy implementation of _thread.LockType. + + Compatibility is maintained by maintaining self.locked_status + which is a boolean that stores the state of the lock. 
Pickling of + the lock, though, should not be done since if the _thread module is + then used with an unpickled ``lock()`` from here problems could + occur from this class not having atomic methods. + + """ + + def __init__(self): + self.locked_status = False + + def acquire(self, waitflag=None, timeout=-1): + """Dummy implementation of acquire(). + + For blocking calls, self.locked_status is automatically set to + True and returned appropriately based on value of + ``waitflag``. If it is non-blocking, then the value is + actually checked and not set if it is already acquired. This + is all done so that threading.Condition's assert statements + aren't triggered and throw a little fit. + + """ + if waitflag is None or waitflag: + self.locked_status = True + return True + else: + if not self.locked_status: + self.locked_status = True + return True + else: + if timeout > 0: + import time + time.sleep(timeout) + return False + + __enter__ = acquire + + def __exit__(self, typ, val, tb): + self.release() + + def release(self): + """Release the dummy lock.""" + # XXX Perhaps shouldn't actually bother to test? Could lead + # to problems for complex, threaded code. + if not self.locked_status: + raise error + self.locked_status = False + return True + + def locked(self): + return self.locked_status + +# Used to signal that interrupt_main was called in a "thread" +_interrupt = False +# True when not executing in a "thread" +_main = True + +def interrupt_main(): + """Set _interrupt flag to True to have start_new_thread raise + KeyboardInterrupt upon exiting.""" + if _main: + raise KeyboardInterrupt + else: + global _interrupt + _interrupt = True diff --git a/lib-python/3.2/_markupbase.py b/lib-python/3.2/_markupbase.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_markupbase.py @@ -0,0 +1,395 @@ +"""Shared support for scanning document type declarations in HTML and XHTML. + +This module is used as a foundation for the html.parser module. 
It has no +documented public API and should not be used directly. + +""" + +import re + +_declname_match = re.compile(r'[a-zA-Z][-_.a-zA-Z0-9]*\s*').match +_declstringlit_match = re.compile(r'(\'[^\']*\'|"[^"]*")\s*').match +_commentclose = re.compile(r'--\s*>') +_markedsectionclose = re.compile(r']\s*]\s*>') + +# An analysis of the MS-Word extensions is available at +# http://www.planetpublish.com/xmlarena/xap/Thursday/WordtoXML.pdf + +_msmarkedsectionclose = re.compile(r']\s*>') + +del re + + +class ParserBase: + """Parser base class which provides some common support methods used + by the SGML/HTML and XHTML parsers.""" + + def __init__(self): + if self.__class__ is ParserBase: + raise RuntimeError( + "_markupbase.ParserBase must be subclassed") + + def error(self, message): + raise NotImplementedError( + "subclasses of ParserBase must override error()") + + def reset(self): + self.lineno = 1 + self.offset = 0 + + def getpos(self): + """Return current line number and offset.""" + return self.lineno, self.offset + + # Internal -- update line number and offset. This should be + # called for each piece of data exactly once, in order -- in other + # words the concatenation of all the input strings to this + # function should be exactly the entire input. + def updatepos(self, i, j): + if i >= j: + return j + rawdata = self.rawdata + nlines = rawdata.count("\n", i, j) + if nlines: + self.lineno = self.lineno + nlines + pos = rawdata.rindex("\n", i, j) # Should not fail + self.offset = j-(pos+1) + else: + self.offset = self.offset + j-i + return j + + _decl_otherchars = '' + + # Internal -- parse declaration (for use by subclasses). + def parse_declaration(self, i): + # This is some sort of declaration; in "HTML as + # deployed," this should only be the document type + # declaration ("<!DOCTYPE html...>"). 
+ # ISO 8879:1986, however, has more complex + # declaration syntax for elements in <!...>, including: + # --comment-- + # [marked section] + # name in the following list: ENTITY, DOCTYPE, ELEMENT, + # ATTLIST, NOTATION, SHORTREF, USEMAP, + # LINKTYPE, LINK, IDLINK, USELINK, SYSTEM + rawdata = self.rawdata + j = i + 2 + assert rawdata[i:j] == "<!", "unexpected call to parse_declaration" + if rawdata[j:j+1] == ">": + # the empty comment <!> + return j + 1 + if rawdata[j:j+1] in ("-", ""): + # Start of comment followed by buffer boundary, + # or just a buffer boundary. + return -1 + # A simple, practical version could look like: ((name|stringlit) S*) + '>' + n = len(rawdata) + if rawdata[j:j+2] == '--': #comment + # Locate --.*-- as the body of the comment + return self.parse_comment(i) + elif rawdata[j] == '[': #marked section + # Locate [statusWord [...arbitrary SGML...]] as the body of the marked section + # Where statusWord is one of TEMP, CDATA, IGNORE, INCLUDE, RCDATA + # Note that this is extended by Microsoft Office "Save as Web" function + # to include [if...] and [endif]. + return self.parse_marked_section(i) + else: #all other declaration elements + decltype, j = self._scan_name(j, i) + if j < 0: + return j + if decltype == "doctype": + self._decl_otherchars = '' + while j < n: + c = rawdata[j] + if c == ">": + # end of declaration syntax + data = rawdata[i+2:j] + if decltype == "doctype": + self.handle_decl(data) + else: + # According to the HTML5 specs sections "8.2.4.44 Bogus + # comment state" and "8.2.4.45 Markup declaration open + # state", a comment token should be emitted. + # Calling unknown_decl provides more flexibility though. 
+ self.unknown_decl(data) + return j + 1 + if c in "\"'": + m = _declstringlit_match(rawdata, j) + if not m: + return -1 # incomplete + j = m.end() + elif c in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ": + name, j = self._scan_name(j, i) + elif c in self._decl_otherchars: + j = j + 1 + elif c == "[": + # this could be handled in a separate doctype parser + if decltype == "doctype": + j = self._parse_doctype_subset(j + 1, i) + elif decltype in {"attlist", "linktype", "link", "element"}: + # must tolerate []'d groups in a content model in an element declaration + # also in data attribute specifications of attlist declaration + # also link type declaration subsets in linktype declarations + # also link attribute specification lists in link declarations + self.error("unsupported '[' char in %s declaration" % decltype) + else: + self.error("unexpected '[' char in declaration") + else: + self.error( + "unexpected %r char in declaration" % rawdata[j]) + if j < 0: + return j + return -1 # incomplete + + # Internal -- parse a marked section + # Override this to handle MS-word extension syntax content + def parse_marked_section(self, i, report=1): + rawdata= self.rawdata + assert rawdata[i:i+3] == '<![', "unexpected call to parse_marked_section()" + sectName, j = self._scan_name( i+3, i ) + if j < 0: + return j + if sectName in {"temp", "cdata", "ignore", "include", "rcdata"}: + # look for standard ]]> ending + match= _markedsectionclose.search(rawdata, i+3) + elif sectName in {"if", "else", "endif"}: + # look for MS Office ]> ending + match= _msmarkedsectionclose.search(rawdata, i+3) + else: + self.error('unknown status keyword %r in marked section' % rawdata[i+3:j]) + if not match: + return -1 + if report: + j = match.start(0) + self.unknown_decl(rawdata[i+3: j]) + return match.end(0) + + # Internal -- parse comment, return length or -1 if not terminated + def parse_comment(self, i, report=1): + rawdata = self.rawdata + if rawdata[i:i+4] != ' LOAD_CONST None def f(x): - None + y = None return x asm = disassemble(f) for elem in ('LOAD_GLOBAL',): @@ -67,10 +67,13 @@ self.assertIn(elem, asm) def test_pack_unpack(self): + # On PyPy, "a, b = ..." 
is even more optimized, by removing + # the ROT_TWO. But the ROT_TWO is not removed if assigning + # to more complex expressions, so check that. for line, elem in ( ('a, = a,', 'LOAD_CONST',), - ('a, b = a, b', 'ROT_TWO',), - ('a, b, c = a, b, c', 'ROT_THREE',), + ('a[1], b = a, b', 'ROT_TWO',), + ('a, b[2], c = a, b, c', 'ROT_THREE',), ): asm = dis_single(line) self.assertIn(elem, asm) @@ -78,6 +81,8 @@ self.assertNotIn('UNPACK_TUPLE', asm) def test_folding_of_tuples_of_constants(self): + # On CPython, "a,b,c=1,2,3" turns into "a,b,c=" + # but on PyPy, it turns into "a=1;b=2;c=3". for line, elem in ( ('a = 1,2,3', '((1, 2, 3))'), ('("a","b","c")', "(('a', 'b', 'c'))"), @@ -86,7 +91,8 @@ ('((1, 2), 3, 4)', '(((1, 2), 3, 4))'), ): asm = dis_single(line) - self.assertIn(elem, asm) + self.assert_(elem in asm or ( + line == 'a,b,c = 1,2,3' and 'UNPACK_TUPLE' not in asm)) self.assertNotIn('BUILD_TUPLE', asm) # Bug 1053819: Tuple of constants misidentified when presented with: @@ -139,12 +145,15 @@ def test_binary_subscr_on_unicode(self): # valid code get optimized - asm = dis_single('u"foo"[0]') - self.assertIn("(u'f')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) - asm = dis_single('u"\u0061\uffff"[1]') - self.assertIn("(u'\\uffff')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) + # XXX for now we always disable this optimization + # XXX see CPython's issue5057 + if 0: + asm = dis_single('u"foo"[0]') + self.assertIn("(u'f')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + asm = dis_single('u"\u0061\uffff"[1]') + self.assertIn("(u'\\uffff')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) # invalid code doesn't get optimized # out of range diff --git a/lib-python/2.7/test/test_pprint.py b/lib-python/2.7/test/test_pprint.py --- a/lib-python/2.7/test/test_pprint.py +++ b/lib-python/2.7/test/test_pprint.py @@ -233,7 +233,16 @@ frozenset([0, 2]), frozenset([0, 1])])}""" cube = test.test_set.cube(3) - self.assertEqual(pprint.pformat(cube), cube_repr_tgt) + # XXX issues of 
dictionary order, and for the case below, + # order of items in the frozenset([...]) representation. + # Whether we get precisely cube_repr_tgt or not is open + # to implementation-dependent choices (this test probably + # fails horribly in CPython if we tweak the dict order too). + got = pprint.pformat(cube) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cube_repr_tgt) + else: + self.assertEqual(eval(got), cube) cubo_repr_tgt = """\ {frozenset([frozenset([0, 2]), frozenset([0])]): frozenset([frozenset([frozenset([0, 2]), @@ -393,7 +402,11 @@ 2])])])}""" cubo = test.test_set.linegraph(cube) - self.assertEqual(pprint.pformat(cubo), cubo_repr_tgt) + got = pprint.pformat(cubo) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cubo_repr_tgt) + else: + self.assertEqual(eval(got), cubo) def test_depth(self): nested_tuple = (1, (2, (3, (4, (5, 6))))) diff --git a/lib-python/2.7/test/test_pydoc.py b/lib-python/2.7/test/test_pydoc.py --- a/lib-python/2.7/test/test_pydoc.py +++ b/lib-python/2.7/test/test_pydoc.py @@ -267,8 +267,8 @@ testpairs = ( ('i_am_not_here', 'i_am_not_here'), ('test.i_am_not_here_either', 'i_am_not_here_either'), - ('test.i_am_not_here.neither_am_i', 'i_am_not_here.neither_am_i'), - ('i_am_not_here.{}'.format(modname), 'i_am_not_here.{}'.format(modname)), + ('test.i_am_not_here.neither_am_i', 'i_am_not_here'), + ('i_am_not_here.{}'.format(modname), 'i_am_not_here'), ('test.{}'.format(modname), modname), ) @@ -292,8 +292,8 @@ result = run_pydoc(modname) finally: forget(modname) - expected = badimport_pattern % (modname, expectedinmsg) - self.assertEqual(expected, result) + expected = badimport_pattern % (modname, '(.+\\.)?' 
+ expectedinmsg + '(\\..+)?$') + self.assertTrue(re.match(expected, result)) def test_input_strip(self): missing_module = " test.i_am_not_here " diff --git a/lib-python/2.7/test/test_pyexpat.py b/lib-python/2.7/test/test_pyexpat.py --- a/lib-python/2.7/test/test_pyexpat.py +++ b/lib-python/2.7/test/test_pyexpat.py @@ -570,6 +570,9 @@ self.assertEqual(self.n, 4) class MalformedInputText(unittest.TestCase): + # CPython seems to ship its own version of expat, they fixed it on this commit : + # http://svn.python.org/view?revision=74429&view=revision + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test1(self): xml = "\0\r\n" parser = expat.ParserCreate() @@ -579,6 +582,7 @@ except expat.ExpatError as e: self.assertEqual(str(e), 'unclosed token: line 2, column 0') + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test2(self): xml = "\r\n" parser = expat.ParserCreate() diff --git a/lib-python/2.7/test/test_repr.py b/lib-python/2.7/test/test_repr.py --- a/lib-python/2.7/test/test_repr.py +++ b/lib-python/2.7/test/test_repr.py @@ -9,6 +9,7 @@ import unittest from test.test_support import run_unittest, check_py3k_warnings +from test.test_support import check_impl_detail from repr import repr as r # Don't shadow builtin repr from repr import Repr @@ -145,8 +146,11 @@ # Functions eq(repr(hash), '') # Methods - self.assertTrue(repr(''.split).startswith( - '") def test_xrange(self): eq = self.assertEqual @@ -185,7 +189,10 @@ def test_descriptors(self): eq = self.assertEqual # method descriptors - eq(repr(dict.items), "") + if check_impl_detail(cpython=True): + eq(repr(dict.items), "") + elif check_impl_detail(pypy=True): + eq(repr(dict.items), "") # XXX member descriptors # XXX attribute descriptors # XXX slot descriptors @@ -247,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from 
areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_runpy.py b/lib-python/2.7/test/test_runpy.py --- a/lib-python/2.7/test/test_runpy.py +++ b/lib-python/2.7/test/test_runpy.py @@ -5,10 +5,15 @@ import sys import re import tempfile -from test.test_support import verbose, run_unittest, forget +from test.test_support import verbose, run_unittest, forget, check_impl_detail from test.script_helper import (temp_dir, make_script, compile_script, make_pkg, make_zip_script, make_zip_pkg) +if check_impl_detail(pypy=True): + no_lone_pyc_file = True +else: + no_lone_pyc_file = False + from runpy import _run_code, _run_module_code, run_module, run_path # Note: This module can't safely test _run_module_as_main as it @@ -168,13 +173,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 
1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + d2 = run_module(mod_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -190,13 +196,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", pkg_name - d2 = run_module(pkg_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", pkg_name + d2 = run_module(pkg_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, pkg_name) if verbose: print "Package executed successfully" @@ -244,15 +251,17 @@ self.assertIn("sibling", d1) self.assertIn("nephew", d1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name, run_name=run_name) # Read from bytecode - self.assertIn("__package__", d2) - self.assertTrue(d2["__package__"] == pkg_name) - self.assertIn("sibling", d2) - self.assertIn("nephew", d2) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + # Read from bytecode + d2 = run_module(mod_name, run_name=run_name) + self.assertIn("__package__", d2) + 
self.assertTrue(d2["__package__"] == pkg_name) + self.assertIn("sibling", d2) + self.assertIn("nephew", d2) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -345,6 +354,8 @@ script_dir, '') def test_directory_compiled(self): + if no_lone_pyc_file: + return with temp_dir() as script_dir: mod_name = '__main__' script_name = self._make_test_script(script_dir, mod_name) diff --git a/lib-python/2.7/test/test_scope.py b/lib-python/2.7/test/test_scope.py --- a/lib-python/2.7/test/test_scope.py +++ b/lib-python/2.7/test/test_scope.py @@ -1,6 +1,6 @@ import unittest from test.test_support import check_syntax_error, check_py3k_warnings, \ - check_warnings, run_unittest + check_warnings, run_unittest, gc_collect class ScopeTests(unittest.TestCase): @@ -432,6 +432,7 @@ for i in range(100): f1() + gc_collect() self.assertEqual(Foo.count, 0) diff --git a/lib-python/2.7/test/test_set.py b/lib-python/2.7/test/test_set.py --- a/lib-python/2.7/test/test_set.py +++ b/lib-python/2.7/test/test_set.py @@ -309,6 +309,7 @@ fo.close() test_support.unlink(test_support.TESTFN) + @test_support.impl_detail(pypy=False) def test_do_not_rehash_dict_keys(self): n = 10 d = dict.fromkeys(map(HashCountingInt, xrange(n))) @@ -559,6 +560,7 @@ p = weakref.proxy(s) self.assertEqual(str(p), str(s)) s = None + test_support.gc_collect() self.assertRaises(ReferenceError, str, p) # C API test only available in a debug build @@ -590,6 +592,7 @@ s.__init__(self.otherword) self.assertEqual(s, set(self.word)) + @test_support.impl_detail() def test_singleton_empty_frozenset(self): f = frozenset() efs = [frozenset(), frozenset([]), frozenset(()), frozenset(''), @@ -770,9 +773,10 @@ for v in self.set: self.assertIn(v, self.values) setiter = iter(self.set) - # note: __length_hint__ is an internal undocumented API, - # don't rely on it in your own programs - self.assertEqual(setiter.__length_hint__(), 
len(self.set)) + if test_support.check_impl_detail(): + # note: __length_hint__ is an internal undocumented API, + # don't rely on it in your own programs + self.assertEqual(setiter.__length_hint__(), len(self.set)) def test_pickling(self): p = pickle.dumps(self.set) @@ -1564,7 +1568,7 @@ for meth in (s.union, s.intersection, s.difference, s.symmetric_difference, s.isdisjoint): for g in (G, I, Ig, L, R): expected = meth(data) - actual = meth(G(data)) + actual = meth(g(data)) if isinstance(expected, bool): self.assertEqual(actual, expected) else: diff --git a/lib-python/2.7/test/test_sets.py b/lib-python/2.7/test/test_sets.py --- a/lib-python/2.7/test/test_sets.py +++ b/lib-python/2.7/test/test_sets.py @@ -686,7 +686,9 @@ set_list = sorted(self.set) self.assertEqual(len(dup_list), len(set_list)) for i, el in enumerate(dup_list): - self.assertIs(el, set_list[i]) + # Object identity is not guaranteed for immutable objects, so we + # can't use assertIs here. + self.assertEqual(el, set_list[i]) def test_deep_copy(self): dup = copy.deepcopy(self.set) diff --git a/lib-python/2.7/test/test_site.py b/lib-python/2.7/test/test_site.py --- a/lib-python/2.7/test/test_site.py +++ b/lib-python/2.7/test/test_site.py @@ -226,6 +226,10 @@ self.assertEqual(len(dirs), 1) wanted = os.path.join('xoxo', 'Lib', 'site-packages') self.assertEqual(dirs[0], wanted) + elif '__pypy__' in sys.builtin_module_names: + self.assertEquals(len(dirs), 1) + wanted = os.path.join('xoxo', 'site-packages') + self.assertEquals(dirs[0], wanted) elif os.sep == '/': self.assertEqual(len(dirs), 2) wanted = os.path.join('xoxo', 'lib', 'python' + sys.version[:3], diff --git a/lib-python/2.7/test/test_socket.py b/lib-python/2.7/test/test_socket.py --- a/lib-python/2.7/test/test_socket.py +++ b/lib-python/2.7/test/test_socket.py @@ -252,6 +252,7 @@ self.assertEqual(p.fileno(), s.fileno()) s.close() s = None + test_support.gc_collect() try: p.fileno() except ReferenceError: @@ -285,32 +286,34 @@ s.sendto(u'\u2620', 
sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None) - self.assertIn('not NoneType', str(cm.exception)) + self.assertIn('NoneType', str(cm.exception)) # 3 args with self.assertRaises(UnicodeEncodeError): s.sendto(u'\u2620', 0, sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, 0, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, None) - self.assertIn('not NoneType', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('not NoneType', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 'bar', sockname) - self.assertIn('an integer is required', str(cm.exception)) + self.assertIn('integer', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None, None) - self.assertIn('an integer is required', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('an integer is required', str(cm.exception)) # wrong number of args with self.assertRaises(TypeError) as cm: s.sendto('foo') - self.assertIn('(1 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, sockname, 4) - self.assertIn('(4 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) def testCrucialConstants(self): @@ -385,10 +388,10 @@ socket.htonl(k) socket.htons(k) for k in bad_values: - self.assertRaises(OverflowError, socket.ntohl, k) - self.assertRaises(OverflowError, socket.ntohs, k) - self.assertRaises(OverflowError, socket.htonl, k) - self.assertRaises(OverflowError, socket.htons, k) + self.assertRaises((OverflowError, ValueError), socket.ntohl, k) + self.assertRaises((OverflowError, ValueError), socket.ntohs, k) + 
self.assertRaises((OverflowError, ValueError), socket.htonl, k) + self.assertRaises((OverflowError, ValueError), socket.htons, k) def testGetServBy(self): eq = self.assertEqual @@ -428,8 +431,8 @@ if udpport is not None: eq(socket.getservbyport(udpport, 'udp'), service) # Make sure getservbyport does not accept out of range ports. - self.assertRaises(OverflowError, socket.getservbyport, -1) - self.assertRaises(OverflowError, socket.getservbyport, 65536) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, -1) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, 65536) def testDefaultTimeout(self): # Testing default timeout @@ -608,8 +611,8 @@ neg_port = port - 65536 sock = socket.socket() try: - self.assertRaises(OverflowError, sock.bind, (host, big_port)) - self.assertRaises(OverflowError, sock.bind, (host, neg_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, big_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, neg_port)) sock.bind((host, port)) finally: sock.close() @@ -1309,6 +1312,7 @@ closed = False def flush(self): pass def close(self): self.closed = True + def _decref_socketios(self): pass # must not close unless we request it: the original use of _fileobject # by module socket requires that the underlying socket not be closed until diff --git a/lib-python/2.7/test/test_sort.py b/lib-python/2.7/test/test_sort.py --- a/lib-python/2.7/test/test_sort.py +++ b/lib-python/2.7/test/test_sort.py @@ -140,7 +140,10 @@ return random.random() < 0.5 L = [C() for i in range(50)] - self.assertRaises(ValueError, L.sort) + try: + L.sort() + except ValueError: + pass def test_cmpNone(self): # Testing None as a comparison function. @@ -150,8 +153,10 @@ L.sort(None) self.assertEqual(L, range(50)) + @test_support.impl_detail(pypy=False) def test_undetected_mutation(self): # Python 2.4a1 did not always detect mutation + # Neither does PyPy...
memorywaster = [] for i in range(20): def mutating_cmp(x, y): @@ -226,7 +231,10 @@ def __del__(self): del data[:] data[:] = range(20) - self.assertRaises(ValueError, data.sort, key=SortKiller) + try: + data.sort(key=SortKiller) + except ValueError: + pass def test_key_with_mutating_del_and_exception(self): data = range(10) diff --git a/lib-python/2.7/test/test_ssl.py b/lib-python/2.7/test/test_ssl.py --- a/lib-python/2.7/test/test_ssl.py +++ b/lib-python/2.7/test/test_ssl.py @@ -881,6 +881,8 @@ c = socket.socket() c.connect((HOST, port)) listener_gone.wait() + # XXX why is it necessary? + test_support.gc_collect() try: ssl_sock = ssl.wrap_socket(c) except IOError: @@ -1330,10 +1332,8 @@ def test_main(verbose=False): global CERTFILE, SVN_PYTHON_ORG_ROOT_CERT - CERTFILE = os.path.join(os.path.dirname(__file__) or os.curdir, - "keycert.pem") - SVN_PYTHON_ORG_ROOT_CERT = os.path.join( - os.path.dirname(__file__) or os.curdir, + CERTFILE = test_support.findfile("keycert.pem") + SVN_PYTHON_ORG_ROOT_CERT = test_support.findfile( "https_svn_python_org_root.pem") if (not os.path.exists(CERTFILE) or diff --git a/lib-python/2.7/test/test_str.py b/lib-python/2.7/test/test_str.py --- a/lib-python/2.7/test/test_str.py +++ b/lib-python/2.7/test/test_str.py @@ -422,10 +422,11 @@ for meth in ('foo'.startswith, 'foo'.endswith): with self.assertRaises(TypeError) as cm: meth(['f']) - exc = str(cm.exception) - self.assertIn('unicode', exc) - self.assertIn('str', exc) - self.assertIn('tuple', exc) + if test_support.check_impl_detail(): + exc = str(cm.exception) + self.assertIn('unicode', exc) + self.assertIn('str', exc) + self.assertIn('tuple', exc) def test_main(): test_support.run_unittest(StrTest) diff --git a/lib-python/2.7/test/test_struct.py b/lib-python/2.7/test/test_struct.py --- a/lib-python/2.7/test/test_struct.py +++ b/lib-python/2.7/test/test_struct.py @@ -535,7 +535,8 @@ @unittest.skipUnless(IS32BIT, "Specific to 32bit machines") def test_crasher(self): - 
self.assertRaises(MemoryError, struct.pack, "357913941c", "a") + self.assertRaises((MemoryError, struct.error), struct.pack, + "357913941c", "a") def test_count_overflow(self): hugecount = '{}b'.format(sys.maxsize+1) diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/lib-python/2.7/test/test_support.py b/lib-python/2.7/test/test_support.py --- a/lib-python/2.7/test/test_support.py +++ b/lib-python/2.7/test/test_support.py @@ -431,16 +431,20 @@ rmtree(name) -def findfile(file, here=__file__, subdir=None): +def findfile(file, here=None, subdir=None): """Try to find a file on sys.path and the working directory. 
If it is not found the argument passed to the function is returned (this does not necessarily signal failure; could still be the legitimate path).""" + import test if os.path.isabs(file): return file if subdir is not None: file = os.path.join(subdir, file) path = sys.path - path = [os.path.dirname(here)] + path + if here is None: + path = test.__path__ + path + else: + path = [os.path.dirname(here)] + path for dn in path: fn = os.path.join(dn, file) if os.path.exists(fn): return fn @@ -1050,15 +1054,33 @@ guards, default = _parse_guards(guards) return guards.get(platform.python_implementation().lower(), default) +# ---------------------------------- +# PyPy extension: you can run:: +# python ..../test_foo.py --pdb +# to get a pdb prompt in case of exceptions +ResultClass = unittest.TextTestRunner.resultclass + +class TestResultWithPdb(ResultClass): + + def addError(self, testcase, exc_info): + ResultClass.addError(self, testcase, exc_info) + if '--pdb' in sys.argv: + import pdb, traceback + traceback.print_tb(exc_info[2]) + pdb.post_mortem(exc_info[2]) + +# ---------------------------------- def _run_suite(suite): """Run tests from a unittest.TestSuite-derived class.""" if verbose: - runner = unittest.TextTestRunner(sys.stdout, verbosity=2) + runner = unittest.TextTestRunner(sys.stdout, verbosity=2, + resultclass=TestResultWithPdb) else: runner = BasicTestRunner() + result = runner.run(suite) if not result.wasSuccessful(): if len(result.errors) == 1 and not result.failures: @@ -1071,6 +1093,34 @@ err += "; run in verbose mode for details" raise TestFailed(err) +# ---------------------------------- +# PyPy extension: you can run:: +# python ..../test_foo.py --filter bar +# to run only the test cases whose name contains bar + +def filter_maybe(suite): + try: + i = sys.argv.index('--filter') + filter = sys.argv[i+1] + except (ValueError, IndexError): + return suite + tests = [] + for test in linearize_suite(suite): + if filter in test._testMethodName: + 
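The `findfile()` change above makes the helper search the `test` package directory when no explicit `here` argument is given, instead of always anchoring on `test_support.py`'s own location. A self-contained sketch of the patched lookup order (the `ImportError` fallback is an addition for environments that ship without the stdlib `test` package):

```python
import os
import sys

def findfile(name, here=None, subdir=None):
    # Sketch of the patched test_support.findfile(): absolute paths are
    # returned unchanged; otherwise the test package directory (or the
    # directory of 'here', when given) is searched before sys.path. If
    # nothing matches, the argument is returned as-is, which the
    # docstring above notes is not necessarily a failure.
    if os.path.isabs(name):
        return name
    if subdir is not None:
        name = os.path.join(subdir, name)
    if here is None:
        try:
            import test
            path = list(test.__path__) + sys.path
        except ImportError:
            path = list(sys.path)
    else:
        path = [os.path.dirname(here)] + sys.path
    for dn in path:
        fn = os.path.join(dn, name)
        if os.path.exists(fn):
            return fn
    return name
```

This is what lets the `test_ssl.py` hunk above replace the hand-built `os.path.join(os.path.dirname(__file__), ...)` paths with plain `test_support.findfile("keycert.pem")` calls.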
tests.append(test) + return unittest.TestSuite(tests) + +def linearize_suite(suite_or_test): + try: + it = iter(suite_or_test) + except TypeError: + yield suite_or_test + return + for subsuite in it: + for item in linearize_suite(subsuite): + yield item + +# ---------------------------------- def run_unittest(*classes): """Run tests from unittest.TestCase-derived classes.""" @@ -1086,6 +1136,7 @@ suite.addTest(cls) else: suite.addTest(unittest.makeSuite(cls)) + suite = filter_maybe(suite) _run_suite(suite) diff --git a/lib-python/2.7/test/test_syntax.py b/lib-python/2.7/test/test_syntax.py --- a/lib-python/2.7/test/test_syntax.py +++ b/lib-python/2.7/test/test_syntax.py @@ -5,7 +5,8 @@ >>> def f(x): ... global x Traceback (most recent call last): -SyntaxError: name 'x' is local and global (, line 1) + File "", line 1 +SyntaxError: name 'x' is local and global The tests are all raise SyntaxErrors. They were created by checking each C call that raises SyntaxError. There are several modules that @@ -375,7 +376,7 @@ In 2.5 there was a missing exception and an assert was triggered in a debug build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514 - >>> while 1: + >>> while 1: # doctest:+SKIP ... while 2: ... while 3: ... while 4: diff --git a/lib-python/2.7/test/test_sys.py b/lib-python/2.7/test/test_sys.py --- a/lib-python/2.7/test/test_sys.py +++ b/lib-python/2.7/test/test_sys.py @@ -264,6 +264,7 @@ self.assertEqual(sys.getdlopenflags(), oldflags+1) sys.setdlopenflags(oldflags) + @test.test_support.impl_detail("reference counting") def test_refcount(self): # n here must be a global in order for this test to pass while # tracing with a python function. Tracing calls PyFrame_FastToLocals @@ -287,7 +288,7 @@ is sys._getframe().f_code ) - # sys._current_frames() is a CPython-only gimmick. 
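The `--filter` extension above works by flattening the `unittest.TestSuite` tree into individual test cases and keeping only those whose method name contains the given substring. The flattening generator can be exercised on its own; `Demo` below is a made-up test case purely for illustration:

```python
import unittest

def linearize_suite(suite_or_test):
    # Recursively flatten nested TestSuites into individual test cases,
    # mirroring the helper added to test_support.py above. A TestSuite
    # is iterable; a bare TestCase is not, so it is yielded directly.
    try:
        it = iter(suite_or_test)
    except TypeError:
        yield suite_or_test
        return
    for subsuite in it:
        for item in linearize_suite(subsuite):
            yield item

class Demo(unittest.TestCase):
    def test_foo(self):
        pass
    def test_bar(self):
        pass

suite = unittest.TestSuite([unittest.TestSuite([Demo('test_foo')]),
                            Demo('test_bar')])
names = sorted(t._testMethodName for t in linearize_suite(suite))
filtered = [t for t in linearize_suite(suite) if 'foo' in t._testMethodName]
print(names)  # ['test_bar', 'test_foo']
```

`filter_maybe()` then simply rebuilds a flat `unittest.TestSuite(tests)` from the surviving cases, so the rest of `run_unittest()` is untouched.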
+ @test.test_support.impl_detail("current_frames") def test_current_frames(self): have_threads = True try: @@ -383,7 +384,10 @@ self.assertEqual(len(sys.float_info), 11) self.assertEqual(sys.float_info.radix, 2) self.assertEqual(len(sys.long_info), 2) - self.assertTrue(sys.long_info.bits_per_digit % 5 == 0) + if test.test_support.check_impl_detail(cpython=True): + self.assertTrue(sys.long_info.bits_per_digit % 5 == 0) + else: + self.assertTrue(sys.long_info.bits_per_digit >= 1) self.assertTrue(sys.long_info.sizeof_digit >= 1) self.assertEqual(type(sys.long_info.bits_per_digit), int) self.assertEqual(type(sys.long_info.sizeof_digit), int) @@ -432,6 +436,7 @@ self.assertEqual(type(getattr(sys.flags, attr)), int, attr) self.assertTrue(repr(sys.flags)) + @test.test_support.impl_detail("sys._clear_type_cache") def test_clear_type_cache(self): sys._clear_type_cache() @@ -473,6 +478,7 @@ p.wait() self.assertIn(executable, ["''", repr(sys.executable)]) +@unittest.skipUnless(test.test_support.check_impl_detail(), "sys.getsizeof()") class SizeofTest(unittest.TestCase): TPFLAGS_HAVE_GC = 1<<14 diff --git a/lib-python/2.7/test/test_sys_settrace.py b/lib-python/2.7/test/test_sys_settrace.py --- a/lib-python/2.7/test/test_sys_settrace.py +++ b/lib-python/2.7/test/test_sys_settrace.py @@ -213,12 +213,16 @@ "finally" def generator_example(): # any() will leave the generator before its end - x = any(generator_function()) + x = any(generator_function()); gc.collect() # the following lines were not traced for x in range(10): y = x +# On CPython, when the generator is decref'ed to zero, we see the trace +# for the "finally:" portion. On PyPy, we don't see it before the next +# garbage collection. That's why we put gc.collect() on the same line above.
+ generator_example.events = ([(0, 'call'), (2, 'line'), (-6, 'call'), @@ -282,11 +286,11 @@ self.compare_events(func.func_code.co_firstlineno, tracer.events, func.events) - def set_and_retrieve_none(self): + def test_set_and_retrieve_none(self): sys.settrace(None) assert sys.gettrace() is None - def set_and_retrieve_func(self): + def test_set_and_retrieve_func(self): def fn(*args): pass @@ -323,17 +327,24 @@ self.run_test(tighterloop_example) def test_13_genexp(self): - self.run_test(generator_example) - # issue1265: if the trace function contains a generator, - # and if the traced function contains another generator - # that is not completely exhausted, the trace stopped. - # Worse: the 'finally' clause was not invoked. - tracer = Tracer() - sys.settrace(tracer.traceWithGenexp) - generator_example() - sys.settrace(None) - self.compare_events(generator_example.__code__.co_firstlineno, - tracer.events, generator_example.events) + if self.using_gc: + test_support.gc_collect() + gc.enable() + try: + self.run_test(generator_example) + # issue1265: if the trace function contains a generator, + # and if the traced function contains another generator + # that is not completely exhausted, the trace stopped. + # Worse: the 'finally' clause was not invoked. 
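The comment added above explains why `gc.collect()` sits on the same line as the `any()` call: CPython closes an abandoned generator (running its `finally:` block) the moment its refcount drops to zero, while PyPy only does so at the next collection. The difference is easy to observe outside the tracing machinery; a small illustrative snippet, not part of the patch:

```python
import gc

events = []

def generator_function():
    try:
        yield True
        yield False
    finally:
        events.append('finally')

# any() stops at the first true value, abandoning the generator while it
# is still suspended at the first yield.
x = any(generator_function())
# On CPython the finally block has already run by now (refcount hit
# zero); on PyPy it only runs once the collector closes the generator.
gc.collect()
print(events)  # ['finally']
```

Keeping the `gc.collect()` on the same source line as the `any()` call means the extra trace events it produces are attributed to a line the test already expects, so the expected event list stays valid on both interpreters.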
+ tracer = Tracer() + sys.settrace(tracer.traceWithGenexp) + generator_example() + sys.settrace(None) + self.compare_events(generator_example.__code__.co_firstlineno, + tracer.events, generator_example.events) + finally: + if self.using_gc: + gc.disable() def test_14_onliner_if(self): def onliners(): diff --git a/lib-python/2.7/test/test_sysconfig.py b/lib-python/2.7/test/test_sysconfig.py --- a/lib-python/2.7/test/test_sysconfig.py +++ b/lib-python/2.7/test/test_sysconfig.py @@ -209,13 +209,22 @@ self.assertEqual(get_platform(), 'macosx-10.4-fat64') - for arch in ('ppc', 'i386', 'x86_64', 'ppc64'): + for arch in ('ppc', 'i386', 'ppc64', 'x86_64'): get_config_vars()['CFLAGS'] = ('-arch %s -isysroot ' '/Developer/SDKs/MacOSX10.4u.sdk ' '-fno-strict-aliasing -fno-common ' '-dynamic -DNDEBUG -g -O3'%(arch,)) self.assertEqual(get_platform(), 'macosx-10.4-%s'%(arch,)) + + # macosx with ARCHFLAGS set and empty _CONFIG_VARS + os.environ['ARCHFLAGS'] = '-arch i386' + sysconfig._CONFIG_VARS = None + + # this will attempt to recreate the _CONFIG_VARS based on environment + # variables; used to check a problem with the PyPy's _init_posix + # implementation; see: issue 705 + get_config_vars() # linux debian sarge os.name = 'posix' @@ -235,7 +244,7 @@ def test_get_scheme_names(self): wanted = ('nt', 'nt_user', 'os2', 'os2_home', 'osx_framework_user', - 'posix_home', 'posix_prefix', 'posix_user') + 'posix_home', 'posix_prefix', 'posix_user', 'pypy') self.assertEqual(get_scheme_names(), wanted) def test_symlink(self): diff --git a/lib-python/2.7/test/test_tarfile.py b/lib-python/2.7/test/test_tarfile.py --- a/lib-python/2.7/test/test_tarfile.py +++ b/lib-python/2.7/test/test_tarfile.py @@ -169,6 +169,7 @@ except tarfile.ReadError: self.fail("tarfile.open() failed on empty archive") self.assertListEqual(tar.getmembers(), []) + tar.close() def test_null_tarfile(self): # Test for issue6123: Allow opening empty archives. 
@@ -207,16 +208,21 @@ fobj = open(self.tarname, "rb") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, os.path.abspath(fobj.name)) + tar.close() def test_no_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) self.assertRaises(AttributeError, getattr, fobj, "name") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, None) def test_empty_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) fobj.name = "" tar = tarfile.open(fileobj=fobj, mode=self.mode) @@ -515,6 +521,7 @@ self.tar = tarfile.open(self.tarname, mode=self.mode, encoding="iso8859-1") tarinfo = self.tar.getmember("pax/umlauts-�������") self._test_member(tarinfo, size=7011, chksum=md5_regtype) + self.tar.close() class LongnameTest(ReadTest): @@ -675,6 +682,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.rmdir(path) @@ -692,6 +700,7 @@ tar.gettarinfo(target) tarinfo = tar.gettarinfo(link) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(target) os.remove(link) @@ -704,6 +713,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(path) @@ -722,6 +732,7 @@ tar.add(dstname) os.chdir(cwd) self.assertTrue(tar.getnames() == [], "added the archive to itself") + tar.close() def test_exclude(self): tempdir = os.path.join(TEMPDIR, "exclude") @@ -742,6 +753,7 @@ tar = tarfile.open(tmpname, "r") self.assertEqual(len(tar.getmembers()), 1) self.assertEqual(tar.getnames()[0], "empty_dir") + tar.close() finally: shutil.rmtree(tempdir) @@ -947,7 +959,9 @@ fobj.close() elif self.mode.endswith("bz2"): dec = bz2.BZ2Decompressor() - data = open(tmpname, "rb").read() + f = open(tmpname, 
"rb") + data = f.read() + f.close() data = dec.decompress(data) self.assertTrue(len(dec.unused_data) == 0, "found trailing data") @@ -1026,6 +1040,7 @@ "unable to read longname member") self.assertEqual(tarinfo.linkname, member.linkname, "unable to read longname member") + tar.close() def test_longname_1023(self): self._test(("longnam/" * 127) + "longnam") @@ -1118,6 +1133,7 @@ else: n = tar.getmembers()[0].name self.assertTrue(name == n, "PAX longname creation failed") + tar.close() def test_pax_global_header(self): pax_headers = { @@ -1146,6 +1162,7 @@ tarfile.PAX_NUMBER_FIELDS[key](val) except (TypeError, ValueError): self.fail("unable to convert pax header field") + tar.close() def test_pax_extended_header(self): # The fields from the pax header have priority over the @@ -1165,6 +1182,7 @@ self.assertEqual(t.pax_headers, pax_headers) self.assertEqual(t.name, "foo") self.assertEqual(t.uid, 123) + tar.close() class UstarUnicodeTest(unittest.TestCase): @@ -1208,6 +1226,7 @@ tarinfo.name = "foo" tarinfo.uname = u"���" self.assertRaises(UnicodeError, tar.addfile, tarinfo) + tar.close() def test_unicode_argument(self): tar = tarfile.open(tarname, "r", encoding="iso8859-1", errors="strict") @@ -1262,6 +1281,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="ascii", errors=handler) self.assertEqual(tar.getnames()[0], name) + tar.close() self.assertRaises(UnicodeError, tarfile.open, tmpname, encoding="ascii", errors="strict") @@ -1274,6 +1294,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="iso8859-1", errors="utf-8") self.assertEqual(tar.getnames()[0], "���/" + u"�".encode("utf8")) + tar.close() class AppendTest(unittest.TestCase): @@ -1301,6 +1322,7 @@ def _test(self, names=["bar"], fileobj=None): tar = tarfile.open(self.tarname, fileobj=fileobj) self.assertEqual(tar.getnames(), names) + tar.close() def test_non_existing(self): self._add_testfile() @@ -1319,7 +1341,9 @@ def test_fileobj(self): self._create_testtar() - data = 
open(self.tarname).read() + f = open(self.tarname) + data = f.read() + f.close() fobj = StringIO.StringIO(data) self._add_testfile(fobj) fobj.seek(0) @@ -1345,7 +1369,9 @@ # Append mode is supposed to fail if the tarfile to append to # does not end with a zero block. def _test_error(self, data): - open(self.tarname, "wb").write(data) + f = open(self.tarname, "wb") + f.write(data) + f.close() self.assertRaises(tarfile.ReadError, self._add_testfile) def test_null(self): diff --git a/lib-python/2.7/test/test_tempfile.py b/lib-python/2.7/test/test_tempfile.py --- a/lib-python/2.7/test/test_tempfile.py +++ b/lib-python/2.7/test/test_tempfile.py @@ -23,8 +23,8 @@ # TEST_FILES may need to be tweaked for systems depending on the maximum # number of files that can be opened at one time (see ulimit -n) -if sys.platform in ('openbsd3', 'openbsd4'): - TEST_FILES = 48 +if sys.platform.startswith("openbsd"): + TEST_FILES = 64 # ulimit -n defaults to 128 for normal users else: TEST_FILES = 100 @@ -244,6 +244,7 @@ dir = tempfile.mkdtemp() try: self.do_create(dir=dir).write("blat") + test_support.gc_collect() finally: os.rmdir(dir) @@ -528,12 +529,15 @@ self.do_create(suf="b") self.do_create(pre="a", suf="b") self.do_create(pre="aa", suf=".txt") + test_support.gc_collect() def test_many(self): # mktemp can choose many usable file names (stochastic) extant = range(TEST_FILES) for i in extant: extant[i] = self.do_create(pre="aa") + del extant + test_support.gc_collect() ## def test_warning(self): ## # mktemp issues a warning when used diff --git a/lib-python/2.7/test/test_thread.py b/lib-python/2.7/test/test_thread.py --- a/lib-python/2.7/test/test_thread.py +++ b/lib-python/2.7/test/test_thread.py @@ -128,6 +128,7 @@ del task while not done: time.sleep(0.01) + test_support.gc_collect() self.assertEqual(thread._count(), orig) diff --git a/lib-python/2.7/test/test_threading.py b/lib-python/2.7/test/test_threading.py --- a/lib-python/2.7/test/test_threading.py +++ 
b/lib-python/2.7/test/test_threading.py @@ -161,6 +161,7 @@ # PyThreadState_SetAsyncExc() is a CPython-only gimmick, not (currently) # exposed at the Python level. This test relies on ctypes to get at it. + @test.test_support.cpython_only def test_PyThreadState_SetAsyncExc(self): try: import ctypes @@ -266,6 +267,7 @@ finally: threading._start_new_thread = _start_new_thread + @test.test_support.cpython_only def test_finalize_runnning_thread(self): # Issue 1402: the PyGILState_Ensure / _Release functions may be called # very late on python exit: on deallocation of a running thread for @@ -383,6 +385,7 @@ finally: sys.setcheckinterval(old_interval) + @test.test_support.cpython_only def test_no_refcycle_through_target(self): class RunSelfFunction(object): def __init__(self, should_raise): @@ -425,6 +428,9 @@ def joiningfunc(mainthread): mainthread.join() print 'end of thread' + # stdout is fully buffered because not a tty, we have to flush + # before exit. + sys.stdout.flush() \n""" + script p = subprocess.Popen([sys.executable, "-c", script], stdout=subprocess.PIPE) diff --git a/lib-python/2.7/test/test_threading_local.py b/lib-python/2.7/test/test_threading_local.py --- a/lib-python/2.7/test/test_threading_local.py +++ b/lib-python/2.7/test/test_threading_local.py @@ -173,8 +173,9 @@ obj = cls() obj.x = 5 self.assertEqual(obj.__dict__, {'x': 5}) - with self.assertRaises(AttributeError): - obj.__dict__ = {} + if test_support.check_impl_detail(): + with self.assertRaises(AttributeError): + obj.__dict__ = {} with self.assertRaises(AttributeError): del obj.__dict__ diff --git a/lib-python/2.7/test/test_traceback.py b/lib-python/2.7/test/test_traceback.py --- a/lib-python/2.7/test/test_traceback.py +++ b/lib-python/2.7/test/test_traceback.py @@ -5,7 +5,8 @@ import sys import unittest from imp import reload -from test.test_support import run_unittest, is_jython, Error +from test.test_support import run_unittest, Error +from test.test_support import impl_detail, 
check_impl_detail import traceback @@ -49,10 +50,8 @@ self.assertTrue(err[2].count('\n') == 1) # and no additional newline self.assertTrue(err[1].find("+") == err[2].find("^")) # in the right place + @impl_detail("other implementations may add a caret (why shouldn't they?)") def test_nocaret(self): - if is_jython: - # jython adds a caret in this case (why shouldn't it?) - return err = self.get_exception_format(self.syntax_error_without_caret, SyntaxError) self.assertTrue(len(err) == 3) @@ -63,8 +62,11 @@ IndentationError) self.assertTrue(len(err) == 4) self.assertTrue(err[1].strip() == "print 2") - self.assertIn("^", err[2]) - self.assertTrue(err[1].find("2") == err[2].find("^")) + if check_impl_detail(): + # on CPython, there is a "^" at the end of the line + # on PyPy, there is a "^" too, but at the start, more logically + self.assertIn("^", err[2]) + self.assertTrue(err[1].find("2") == err[2].find("^")) def test_bug737473(self): import os, tempfile, time @@ -74,7 +76,8 @@ try: sys.path.insert(0, testdir) testfile = os.path.join(testdir, 'test_bug737473.py') - print >> open(testfile, 'w'), """ + with open(testfile, 'w') as f: + print >> f, """ def test(): raise ValueError""" @@ -96,7 +99,8 @@ # three seconds are needed for this test to pass reliably :-( time.sleep(4) - print >> open(testfile, 'w'), """ + with open(testfile, 'w') as f: + print >> f, """ def test(): raise NotImplementedError""" reload(test_bug737473) diff --git a/lib-python/2.7/test/test_types.py b/lib-python/2.7/test/test_types.py --- a/lib-python/2.7/test/test_types.py +++ b/lib-python/2.7/test/test_types.py @@ -1,7 +1,8 @@ # Python test set -- part 6, built-in types from test.test_support import run_unittest, have_unicode, run_with_locale, \ - check_py3k_warnings + check_py3k_warnings, \ + impl_detail, check_impl_detail import unittest import sys import locale @@ -289,9 +290,14 @@ # array.array() returns an object that does not implement a char buffer, # something which int() uses for 
conversion. import array - try: int(buffer(array.array('c'))) + try: int(buffer(array.array('c', '5'))) except TypeError: pass - else: self.fail("char buffer (at C level) not working") + else: + if check_impl_detail(): + self.fail("char buffer (at C level) not working") + #else: + # it works on PyPy, which does not have the distinction + # between char buffer and binary buffer. XXX fine enough? def test_int__format__(self): def test(i, format_spec, result): @@ -741,6 +747,7 @@ for code in 'xXobns': self.assertRaises(ValueError, format, 0, ',' + code) + @impl_detail("the types' internal size attributes are CPython-only") def test_internal_sizes(self): self.assertGreater(object.__basicsize__, 0) self.assertGreater(tuple.__itemsize__, 0) diff --git a/lib-python/2.7/test/test_unicode.py b/lib-python/2.7/test/test_unicode.py --- a/lib-python/2.7/test/test_unicode.py +++ b/lib-python/2.7/test/test_unicode.py @@ -448,10 +448,11 @@ meth('\xff') with self.assertRaises(TypeError) as cm: meth(['f']) - exc = str(cm.exception) - self.assertIn('unicode', exc) - self.assertIn('str', exc) - self.assertIn('tuple', exc) + if test_support.check_impl_detail(): + exc = str(cm.exception) + self.assertIn('unicode', exc) + self.assertIn('str', exc) + self.assertIn('tuple', exc) @test_support.run_with_locale('LC_ALL', 'de_DE', 'fr_FR') def test_format_float(self): @@ -1062,7 +1063,8 @@ # to take a 64-bit long, this test should apply to all platforms. if sys.maxint > (1 << 32) or struct.calcsize('P') != 4: return - self.assertRaises(OverflowError, u't\tt\t'.expandtabs, sys.maxint) + self.assertRaises((OverflowError, MemoryError), + u't\tt\t'.expandtabs, sys.maxint) def test__format__(self): def test(value, format, expected): diff --git a/lib-python/2.7/test/test_unicodedata.py b/lib-python/2.7/test/test_unicodedata.py --- a/lib-python/2.7/test/test_unicodedata.py +++ b/lib-python/2.7/test/test_unicodedata.py @@ -233,10 +233,12 @@ # been loaded in this process. 
popen = subprocess.Popen(args, stderr=subprocess.PIPE) popen.wait() - self.assertEqual(popen.returncode, 1) - error = "SyntaxError: (unicode error) \N escapes not supported " \ - "(can't load unicodedata module)" - self.assertIn(error, popen.stderr.read()) + self.assertIn(popen.returncode, [0, 1]) # at least it did not segfault + if test.test_support.check_impl_detail(): + self.assertEqual(popen.returncode, 1) + error = "SyntaxError: (unicode error) \N escapes not supported " \ + "(can't load unicodedata module)" + self.assertIn(error, popen.stderr.read()) def test_decimal_numeric_consistent(self): # Test that decimal and numeric are consistent, diff --git a/lib-python/2.7/test/test_unpack.py b/lib-python/2.7/test/test_unpack.py --- a/lib-python/2.7/test/test_unpack.py +++ b/lib-python/2.7/test/test_unpack.py @@ -62,14 +62,14 @@ >>> a, b = t Traceback (most recent call last): ... - ValueError: too many values to unpack + ValueError: expected length 2, got 3 Unpacking tuple of wrong size >>> a, b = l Traceback (most recent call last): ... 
- ValueError: too many values to unpack + ValueError: expected length 2, got 3 Unpacking sequence too short diff --git a/lib-python/2.7/test/test_urllib2.py b/lib-python/2.7/test/test_urllib2.py --- a/lib-python/2.7/test/test_urllib2.py +++ b/lib-python/2.7/test/test_urllib2.py @@ -307,6 +307,9 @@ def getresponse(self): return MockHTTPResponse(MockFile(), {}, 200, "OK") + def close(self): + pass + class MockHandler: # useful for testing handler machinery # see add_ordered_mock_handlers() docstring diff --git a/lib-python/2.7/test/test_warnings.py b/lib-python/2.7/test/test_warnings.py --- a/lib-python/2.7/test/test_warnings.py +++ b/lib-python/2.7/test/test_warnings.py @@ -355,7 +355,8 @@ # test_support.import_fresh_module utility function def test_accelerated(self): self.assertFalse(original_warnings is self.module) - self.assertFalse(hasattr(self.module.warn, 'func_code')) + self.assertFalse(hasattr(self.module.warn, 'func_code') and + hasattr(self.module.warn.func_code, 'co_filename')) class PyWarnTests(BaseTest, WarnTests): module = py_warnings @@ -364,7 +365,8 @@ # test_support.import_fresh_module utility function def test_pure_python(self): self.assertFalse(original_warnings is self.module) - self.assertTrue(hasattr(self.module.warn, 'func_code')) + self.assertTrue(hasattr(self.module.warn, 'func_code') and + hasattr(self.module.warn.func_code, 'co_filename')) class WCmdLineTests(unittest.TestCase): diff --git a/lib-python/2.7/test/test_weakref.py b/lib-python/2.7/test/test_weakref.py --- a/lib-python/2.7/test/test_weakref.py +++ b/lib-python/2.7/test/test_weakref.py @@ -1,4 +1,3 @@ -import gc import sys import unittest import UserList @@ -6,6 +5,7 @@ import operator from test import test_support +from test.test_support import gc_collect # Used in ReferencesTestCase.test_ref_created_during_del() . 
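The doctest edits above reflect that PyPy words the unpacking error differently ("expected length 2, got 3") from CPython ("too many values to unpack"). A test meant to pass on both implementations should pin the exception type rather than the full message; an illustrative check:

```python
def unpack_error_message():
    # Trigger the unpacking failure shown in the doctest above and
    # capture the implementation-specific message.
    try:
        a, b = (1, 2, 3)
    except ValueError as exc:
        return str(exc)
    raise AssertionError('unpacking 3 values into 2 names must fail')

message = unpack_error_message()
# Only the ValueError itself is portable; the wording varies, so at most
# match a substring that each implementation happens to use.
assert 'too many values' in message or 'expected length' in message
```

This is the same loosening strategy the `test_socket.py` hunks above apply when they relax `assertIn('not complex', ...)` to `assertIn('complex', ...)`.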
ref_from_del = None @@ -70,6 +70,7 @@ ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del o + gc_collect() self.assertTrue(ref1() is None, "expected reference to be invalidated") self.assertTrue(ref2() is None, @@ -101,13 +102,16 @@ ref1 = weakref.proxy(o, self.callback) ref2 = weakref.proxy(o, self.callback) del o + gc_collect() def check(proxy): proxy.bar self.assertRaises(weakref.ReferenceError, check, ref1) self.assertRaises(weakref.ReferenceError, check, ref2) - self.assertRaises(weakref.ReferenceError, bool, weakref.proxy(C())) + ref3 = weakref.proxy(C()) + gc_collect() + self.assertRaises(weakref.ReferenceError, bool, ref3) self.assertTrue(self.cbcalled == 2) def check_basic_ref(self, factory): @@ -124,6 +128,7 @@ o = factory() ref = weakref.ref(o, self.callback) del o + gc_collect() self.assertTrue(self.cbcalled == 1, "callback did not properly set 'cbcalled'") self.assertTrue(ref() is None, @@ -148,6 +153,7 @@ self.assertTrue(weakref.getweakrefcount(o) == 2, "wrong weak ref count for object") del proxy + gc_collect() self.assertTrue(weakref.getweakrefcount(o) == 1, "wrong weak ref count for object after deleting proxy") @@ -325,6 +331,7 @@ "got wrong number of weak reference objects") del ref1, ref2, proxy1, proxy2 + gc_collect() self.assertTrue(weakref.getweakrefcount(o) == 0, "weak reference objects not unlinked from" " referent when discarded.") @@ -338,6 +345,7 @@ ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del ref1 + gc_collect() self.assertTrue(weakref.getweakrefs(o) == [ref2], "list of refs does not match") @@ -345,10 +353,12 @@ ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del ref2 + gc_collect() self.assertTrue(weakref.getweakrefs(o) == [ref1], "list of refs does not match") del ref1 + gc_collect() self.assertTrue(weakref.getweakrefs(o) == [], "list of refs not cleared") @@ -400,13 +410,11 @@ # when the second attempt to remove the instance from the "list # of all 
objects" occurs. - import gc - class C(object): pass c = C() - wr = weakref.ref(c, lambda ignore: gc.collect()) + wr = weakref.ref(c, lambda ignore: gc_collect()) del c # There endeth the first part. It gets worse. @@ -414,7 +422,7 @@ c1 = C() c1.i = C() - wr = weakref.ref(c1.i, lambda ignore: gc.collect()) + wr = weakref.ref(c1.i, lambda ignore: gc_collect()) c2 = C() c2.c1 = c1 @@ -430,8 +438,6 @@ del c2 def test_callback_in_cycle_1(self): - import gc - class J(object): pass @@ -467,11 +473,9 @@ # search II.__mro__, but that's NULL. The result was a segfault in # a release build, and an assert failure in a debug build. del I, J, II - gc.collect() + gc_collect() def test_callback_in_cycle_2(self): - import gc - # This is just like test_callback_in_cycle_1, except that II is an # old-style class. The symptom is different then: an instance of an # old-style class looks in its own __dict__ first. 'J' happens to @@ -496,11 +500,9 @@ I.wr = weakref.ref(J, I.acallback) del I, J, II - gc.collect() + gc_collect() def test_callback_in_cycle_3(self): - import gc - # This one broke the first patch that fixed the last two. In this # case, the objects reachable from the callback aren't also reachable # from the object (c1) *triggering* the callback: you can get to @@ -520,11 +522,9 @@ c2.wr = weakref.ref(c1, c2.cb) del c1, c2 - gc.collect() + gc_collect() def test_callback_in_cycle_4(self): - import gc - # Like test_callback_in_cycle_3, except c2 and c1 have different # classes. c2's class (C) isn't reachable from c1 then, so protecting # objects reachable from the dying object (c1) isn't enough to stop @@ -548,11 +548,9 @@ c2.wr = weakref.ref(c1, c2.cb) del c1, c2, C, D - gc.collect() + gc_collect() def test_callback_in_cycle_resurrection(self): - import gc - # Do something nasty in a weakref callback: resurrect objects # from dead cycles. 
For this to be attempted, the weakref and # its callback must also be part of the cyclic trash (else the @@ -583,7 +581,7 @@ del c1, c2, C # make them all trash self.assertEqual(alist, []) # del isn't enough to reclaim anything - gc.collect() + gc_collect() # c1.wr and c2.wr were part of the cyclic trash, so should have # been cleared without their callbacks executing. OTOH, the weakref # to C is bound to a function local (wr), and wasn't trash, so that @@ -593,12 +591,10 @@ self.assertEqual(wr(), None) del alist[:] - gc.collect() + gc_collect() self.assertEqual(alist, []) def test_callbacks_on_callback(self): - import gc - # Set up weakref callbacks *on* weakref callbacks. alist = [] def safe_callback(ignore): @@ -626,12 +622,12 @@ del callback, c, d, C self.assertEqual(alist, []) # del isn't enough to clean up cycles - gc.collect() + gc_collect() self.assertEqual(alist, ["safe_callback called"]) self.assertEqual(external_wr(), None) del alist[:] - gc.collect() + gc_collect() self.assertEqual(alist, []) def test_gc_during_ref_creation(self): @@ -641,9 +637,11 @@ self.check_gc_during_creation(weakref.proxy) def check_gc_during_creation(self, makeref): - thresholds = gc.get_threshold() - gc.set_threshold(1, 1, 1) - gc.collect() + if test_support.check_impl_detail(): + import gc + thresholds = gc.get_threshold() + gc.set_threshold(1, 1, 1) + gc_collect() class A: pass @@ -663,7 +661,8 @@ weakref.ref(referenced, callback) finally: - gc.set_threshold(*thresholds) + if test_support.check_impl_detail(): + gc.set_threshold(*thresholds) def test_ref_created_during_del(self): # Bug #1377858 @@ -683,7 +682,7 @@ r = weakref.ref(Exception) self.assertRaises(TypeError, r.__init__, 0, 0, 0, 0, 0) # No exception should be raised here - gc.collect() + gc_collect() def test_classes(self): # Check that both old-style classes and new-style classes @@ -696,12 +695,12 @@ weakref.ref(int) a = weakref.ref(A, l.append) A = None - gc.collect() + gc_collect() self.assertEqual(a(), None) 
self.assertEqual(l, [a]) b = weakref.ref(B, l.append) B = None - gc.collect() + gc_collect() self.assertEqual(b(), None) self.assertEqual(l, [a, b]) @@ -722,6 +721,7 @@ self.assertTrue(mr.called) self.assertEqual(mr.value, 24) del o + gc_collect() self.assertTrue(mr() is None) self.assertTrue(mr.called) @@ -738,9 +738,11 @@ self.assertEqual(weakref.getweakrefcount(o), 3) refs = weakref.getweakrefs(o) self.assertEqual(len(refs), 3) - self.assertTrue(r2 is refs[0]) - self.assertIn(r1, refs[1:]) - self.assertIn(r3, refs[1:]) + assert set(refs) == set((r1, r2, r3)) + if test_support.check_impl_detail(): + self.assertTrue(r2 is refs[0]) + self.assertIn(r1, refs[1:]) + self.assertIn(r3, refs[1:]) def test_subclass_refs_dont_conflate_callbacks(self): class MyRef(weakref.ref): @@ -839,15 +841,18 @@ del items1, items2 self.assertTrue(len(dict) == self.COUNT) del objects[0] + gc_collect() self.assertTrue(len(dict) == (self.COUNT - 1), "deleting object did not cause dictionary update") del objects, o + gc_collect() self.assertTrue(len(dict) == 0, "deleting the values did not clear the dictionary") # regression on SF bug #447152: dict = weakref.WeakValueDictionary() self.assertRaises(KeyError, dict.__getitem__, 1) dict[2] = C() + gc_collect() self.assertRaises(KeyError, dict.__getitem__, 2) def test_weak_keys(self): @@ -868,9 +873,11 @@ del items1, items2 self.assertTrue(len(dict) == self.COUNT) del objects[0] + gc_collect() self.assertTrue(len(dict) == (self.COUNT - 1), "deleting object did not cause dictionary update") del objects, o + gc_collect() self.assertTrue(len(dict) == 0, "deleting the keys did not clear the dictionary") o = Object(42) @@ -986,13 +993,13 @@ self.assertTrue(len(weakdict) == 2) k, v = weakdict.popitem() self.assertTrue(len(weakdict) == 1) - if k is key1: + if k == key1: self.assertTrue(v is value1) else: self.assertTrue(v is value2) k, v = weakdict.popitem() self.assertTrue(len(weakdict) == 0) - if k is key1: + if k == key1: self.assertTrue(v is 
value1) else: self.assertTrue(v is value2) @@ -1137,6 +1144,7 @@ for o in objs: count += 1 del d[o] + gc_collect() self.assertEqual(len(d), 0) self.assertEqual(count, 2) @@ -1177,6 +1185,7 @@ >>> o is o2 True >>> del o, o2 +>>> gc_collect() >>> print r() None @@ -1229,6 +1238,7 @@ >>> id2obj(a_id) is a True >>> del a +>>> gc_collect() >>> try: ... id2obj(a_id) ... except KeyError: diff --git a/lib-python/2.7/test/test_weakset.py b/lib-python/2.7/test/test_weakset.py --- a/lib-python/2.7/test/test_weakset.py +++ b/lib-python/2.7/test/test_weakset.py @@ -57,6 +57,7 @@ self.assertEqual(len(self.s), len(self.d)) self.assertEqual(len(self.fs), 1) del self.obj + test_support.gc_collect() self.assertEqual(len(self.fs), 0) def test_contains(self): @@ -66,6 +67,7 @@ self.assertNotIn(1, self.s) self.assertIn(self.obj, self.fs) del self.obj + test_support.gc_collect() self.assertNotIn(SomeClass('F'), self.fs) def test_union(self): @@ -204,6 +206,7 @@ self.assertEqual(self.s, dup) self.assertRaises(TypeError, self.s.add, []) self.fs.add(Foo()) + test_support.gc_collect() self.assertTrue(len(self.fs) == 1) self.fs.add(self.obj) self.assertTrue(len(self.fs) == 1) @@ -330,10 +333,11 @@ next(it) # Trigger internal iteration # Destroy an item del items[-1] - gc.collect() # just in case + test_support.gc_collect() # We have removed either the first consumed items, or another one self.assertIn(len(list(it)), [len(items), len(items) - 1]) del it + test_support.gc_collect() # The removal has been committed self.assertEqual(len(s), len(items)) diff --git a/lib-python/2.7/test/test_xml_etree.py b/lib-python/2.7/test/test_xml_etree.py --- a/lib-python/2.7/test/test_xml_etree.py +++ b/lib-python/2.7/test/test_xml_etree.py @@ -1633,10 +1633,10 @@ Check reference leak. >>> xmltoolkit63() - >>> count = sys.getrefcount(None) + >>> count = sys.getrefcount(None) #doctest: +SKIP >>> for i in range(1000): ... 
xmltoolkit63() - >>> sys.getrefcount(None) - count + >>> sys.getrefcount(None) - count #doctest: +SKIP 0 """ diff --git a/lib-python/2.7/test/test_xmlrpc.py b/lib-python/2.7/test/test_xmlrpc.py --- a/lib-python/2.7/test/test_xmlrpc.py +++ b/lib-python/2.7/test/test_xmlrpc.py @@ -308,7 +308,7 @@ global ADDR, PORT, URL ADDR, PORT = serv.socket.getsockname() #connect to IP address directly. This avoids socket.create_connection() - #trying to connect to "localhost" using all address families, which + #trying to connect to to "localhost" using all address families, which #causes slowdown e.g. on vista which supports AF_INET6. The server listens #on AF_INET only. URL = "http://%s:%d"%(ADDR, PORT) @@ -367,7 +367,7 @@ global ADDR, PORT, URL ADDR, PORT = serv.socket.getsockname() #connect to IP address directly. This avoids socket.create_connection() - #trying to connect to "localhost" using all address families, which + #trying to connect to to "localhost" using all address families, which #causes slowdown e.g. on vista which supports AF_INET6. The server listens #on AF_INET only. URL = "http://%s:%d"%(ADDR, PORT) @@ -435,6 +435,7 @@ def tearDown(self): # wait on the server thread to terminate + test_support.gc_collect() # to close the active connections self.evt.wait(10) # disable traceback reporting @@ -472,9 +473,6 @@ # protocol error; provide additional information in test output self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) - def test_unicode_host(self): - server = xmlrpclib.ServerProxy(u"http://%s:%d/RPC2"%(ADDR, PORT)) - self.assertEqual(server.add("a", u"\xe9"), u"a\xe9") # [ch] The test 404 is causing lots of false alarms. def XXXtest_404(self): @@ -589,12 +587,6 @@ # This avoids waiting for the socket timeout. self.test_simple1() - def test_partial_post(self): - # Check that a partial POST doesn't make the server loop: issue #14001. 
- conn = httplib.HTTPConnection(ADDR, PORT) - conn.request('POST', '/RPC2 HTTP/1.0\r\nContent-Length: 100\r\n\r\nbye') - conn.close() - class MultiPathServerTestCase(BaseServerTestCase): threadFunc = staticmethod(http_multi_server) request_count = 2 diff --git a/lib-python/2.7/test/test_zlib.py b/lib-python/2.7/test/test_zlib.py --- a/lib-python/2.7/test/test_zlib.py +++ b/lib-python/2.7/test/test_zlib.py @@ -1,6 +1,7 @@ import unittest from test.test_support import TESTFN, run_unittest, import_module, unlink, requires import binascii +import os import random from test.test_support import precisionbigmemtest, _1G, _4G import sys @@ -99,14 +100,7 @@ class BaseCompressTestCase(object): def check_big_compress_buffer(self, size, compress_func): - _1M = 1024 * 1024 - fmt = "%%0%dx" % (2 * _1M) - # Generate 10MB worth of random, and expand it by repeating it. - # The assumption is that zlib's memory is not big enough to exploit - # such spread out redundancy. - data = ''.join([binascii.a2b_hex(fmt % random.getrandbits(8 * _1M)) - for i in range(10)]) - data = data * (size // len(data) + 1) + data = os.urandom(size) try: compress_func(data) finally: diff --git a/lib-python/2.7/trace.py b/lib-python/2.7/trace.py --- a/lib-python/2.7/trace.py +++ b/lib-python/2.7/trace.py @@ -559,6 +559,10 @@ if len(funcs) == 1: dicts = [d for d in gc.get_referrers(funcs[0]) if isinstance(d, dict)] + if len(dicts) == 0: + # PyPy may store functions directly on the class + # (more exactly: the container is not a Python object) + dicts = funcs if len(dicts) == 1: classes = [c for c in gc.get_referrers(dicts[0]) if hasattr(c, "__bases__")] diff --git a/lib-python/2.7/urllib2.py b/lib-python/2.7/urllib2.py --- a/lib-python/2.7/urllib2.py +++ b/lib-python/2.7/urllib2.py @@ -1171,6 +1171,7 @@ except TypeError: #buffering kw not supported r = h.getresponse() except socket.error, err: # XXX what error? 
+ h.close() raise URLError(err) # Pick apart the HTTPResponse object to get the addinfourl diff --git a/lib-python/2.7/uuid.py b/lib-python/2.7/uuid.py --- a/lib-python/2.7/uuid.py +++ b/lib-python/2.7/uuid.py @@ -406,8 +406,12 @@ continue if hasattr(lib, 'uuid_generate_random'): _uuid_generate_random = lib.uuid_generate_random + _uuid_generate_random.argtypes = [ctypes.c_char * 16] + _uuid_generate_random.restype = None if hasattr(lib, 'uuid_generate_time'): _uuid_generate_time = lib.uuid_generate_time + _uuid_generate_time.argtypes = [ctypes.c_char * 16] + _uuid_generate_time.restype = None # The uuid_generate_* functions are broken on MacOS X 10.5, as noted # in issue #8621 the function generates the same sequence of values @@ -436,6 +440,9 @@ lib = None _UuidCreate = getattr(lib, 'UuidCreateSequential', getattr(lib, 'UuidCreate', None)) + if _UuidCreate is not None: + _UuidCreate.argtypes = [ctypes.c_char * 16] + _UuidCreate.restype = ctypes.c_int except: pass diff --git a/lib-python/3.2/__future__.py b/lib-python/3.2/__future__.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/__future__.py @@ -0,0 +1,134 @@ +"""Record of phased-in incompatible language changes. + +Each line is of the form: + + FeatureName = "_Feature(" OptionalRelease "," MandatoryRelease "," + CompilerFlag ")" + +where, normally, OptionalRelease < MandatoryRelease, and both are 5-tuples +of the same form as sys.version_info: + + (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int + PY_MINOR_VERSION, # the 1; an int + PY_MICRO_VERSION, # the 0; an int + PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string + PY_RELEASE_SERIAL # the 3; an int + ) + +OptionalRelease records the first release in which + + from __future__ import FeatureName + +was accepted. + +In the case of MandatoryReleases that have not yet occurred, +MandatoryRelease predicts the release in which the feature will become part +of the language. 
+ +Else MandatoryRelease records when the feature became part of the language; +in releases at or after that, modules no longer need + + from __future__ import FeatureName + +to use the feature in question, but may continue to use such imports. + +MandatoryRelease may also be None, meaning that a planned feature got +dropped. + +Instances of class _Feature have two corresponding methods, +.getOptionalRelease() and .getMandatoryRelease(). + +CompilerFlag is the (bitfield) flag that should be passed in the fourth +argument to the builtin function compile() to enable the feature in +dynamically compiled code. This flag is stored in the .compiler_flag +attribute on _Future instances. These values must match the appropriate +#defines of CO_xxx flags in Include/compile.h. + +No feature line is ever to be deleted from this file. +""" + +all_feature_names = [ + "nested_scopes", + "generators", + "division", + "absolute_import", + "with_statement", + "print_function", + "unicode_literals", + "barry_as_FLUFL", +] + +__all__ = ["all_feature_names"] + all_feature_names + +# The CO_xxx symbols are defined here under the same names used by +# compile.h, so that an editor search will find them here. However, +# they're not exported in __all__, because they don't really belong to +# this module. 
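[Editor's note: the `_Feature` records this new `__future__.py` defines are best exercised through their public accessors; a small sketch, using release tuples taken from the file itself and avoiding the raw `CO_*` values, whose numbers have shifted between interpreter versions:]

```python
import __future__

# Release tuples have the same shape as sys.version_info.
feat = __future__.division
assert feat.getOptionalRelease() == (2, 2, 0, "alpha", 2)
assert feat.getMandatoryRelease() == (3, 0, 0, "alpha", 0)

# compiler_flag enables the feature for dynamically compiled code.
code = compile("x = 3 / 2", "<sketch>", "exec", feat.compiler_flag)
ns = {}
exec(code, ns)
assert ns["x"] == 1.5  # true division even where it is not yet the default
```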
+CO_NESTED = 0x0010 # nested_scopes +CO_GENERATOR_ALLOWED = 0 # generators (obsolete, was 0x1000) +CO_FUTURE_DIVISION = 0x2000 # division +CO_FUTURE_ABSOLUTE_IMPORT = 0x4000 # perform absolute imports by default +CO_FUTURE_WITH_STATEMENT = 0x8000 # with statement +CO_FUTURE_PRINT_FUNCTION = 0x10000 # print function +CO_FUTURE_UNICODE_LITERALS = 0x20000 # unicode string literals +CO_FUTURE_BARRY_AS_BDFL = 0x40000 + +class _Feature: + def __init__(self, optionalRelease, mandatoryRelease, compiler_flag): + self.optional = optionalRelease + self.mandatory = mandatoryRelease + self.compiler_flag = compiler_flag + + def getOptionalRelease(self): + """Return first release in which this feature was recognized. + + This is a 5-tuple, of the same form as sys.version_info. + """ + + return self.optional + + def getMandatoryRelease(self): + """Return release in which this feature will become mandatory. + + This is a 5-tuple, of the same form as sys.version_info, or, if + the feature was dropped, is None. 
+ """ + + return self.mandatory + + def __repr__(self): + return "_Feature" + repr((self.optional, + self.mandatory, + self.compiler_flag)) + +nested_scopes = _Feature((2, 1, 0, "beta", 1), + (2, 2, 0, "alpha", 0), + CO_NESTED) + +generators = _Feature((2, 2, 0, "alpha", 1), + (2, 3, 0, "final", 0), + CO_GENERATOR_ALLOWED) + +division = _Feature((2, 2, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_DIVISION) + +absolute_import = _Feature((2, 5, 0, "alpha", 1), + (2, 7, 0, "alpha", 0), + CO_FUTURE_ABSOLUTE_IMPORT) + +with_statement = _Feature((2, 5, 0, "alpha", 1), + (2, 6, 0, "alpha", 0), + CO_FUTURE_WITH_STATEMENT) + +print_function = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_PRINT_FUNCTION) + +unicode_literals = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_UNICODE_LITERALS) + +barry_as_FLUFL = _Feature((3, 1, 0, "alpha", 2), + (3, 9, 0, "alpha", 0), + CO_FUTURE_BARRY_AS_BDFL) diff --git a/lib-python/3.2/__phello__.foo.py b/lib-python/3.2/__phello__.foo.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/__phello__.foo.py @@ -0,0 +1,1 @@ +# This file exists as a helper for the test.test_frozen module. diff --git a/lib-python/3.2/_abcoll.py b/lib-python/3.2/_abcoll.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_abcoll.py @@ -0,0 +1,623 @@ +# Copyright 2007 Google, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +"""Abstract Base Classes (ABCs) for collections, according to PEP 3119. + +DON'T USE THIS MODULE DIRECTLY! The classes here should be imported +via collections; they are defined here only to alleviate certain +bootstrapping issues. Unit tests are in test_collections. 
+""" + +from abc import ABCMeta, abstractmethod +import sys + +__all__ = ["Hashable", "Iterable", "Iterator", + "Sized", "Container", "Callable", + "Set", "MutableSet", + "Mapping", "MutableMapping", + "MappingView", "KeysView", "ItemsView", "ValuesView", + "Sequence", "MutableSequence", + "ByteString", + ] + + +### collection related types which are not exposed through builtin ### +## iterators ## +bytes_iterator = type(iter(b'')) +bytearray_iterator = type(iter(bytearray())) +#callable_iterator = ??? +dict_keyiterator = type(iter({}.keys())) +dict_valueiterator = type(iter({}.values())) +dict_itemiterator = type(iter({}.items())) +list_iterator = type(iter([])) +list_reverseiterator = type(iter(reversed([]))) +range_iterator = type(iter(range(0))) +set_iterator = type(iter(set())) +str_iterator = type(iter("")) +tuple_iterator = type(iter(())) +zip_iterator = type(iter(zip())) +## views ## +dict_keys = type({}.keys()) +dict_values = type({}.values()) +dict_items = type({}.items()) +## misc ## +dict_proxy = type(type.__dict__) + + +### ONE-TRICK PONIES ### + +class Hashable(metaclass=ABCMeta): + + @abstractmethod + def __hash__(self): + return 0 + + @classmethod + def __subclasshook__(cls, C): + if cls is Hashable: + for B in C.__mro__: + if "__hash__" in B.__dict__: + if B.__dict__["__hash__"]: + return True + break + return NotImplemented + + +class Iterable(metaclass=ABCMeta): + + @abstractmethod + def __iter__(self): + while False: + yield None + + @classmethod + def __subclasshook__(cls, C): + if cls is Iterable: + if any("__iter__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Iterator(Iterable): + + @abstractmethod + def __next__(self): + raise StopIteration + + def __iter__(self): + return self + + @classmethod + def __subclasshook__(cls, C): + if cls is Iterator: + if (any("__next__" in B.__dict__ for B in C.__mro__) and + any("__iter__" in B.__dict__ for B in C.__mro__)): + return True + return NotImplemented + 
+Iterator.register(bytes_iterator) +Iterator.register(bytearray_iterator) +#Iterator.register(callable_iterator) +Iterator.register(dict_keyiterator) +Iterator.register(dict_valueiterator) +Iterator.register(dict_itemiterator) +Iterator.register(list_iterator) +Iterator.register(list_reverseiterator) +Iterator.register(range_iterator) +Iterator.register(set_iterator) +Iterator.register(str_iterator) +Iterator.register(tuple_iterator) +Iterator.register(zip_iterator) + +class Sized(metaclass=ABCMeta): + + @abstractmethod + def __len__(self): + return 0 + + @classmethod + def __subclasshook__(cls, C): + if cls is Sized: + if any("__len__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Container(metaclass=ABCMeta): + + @abstractmethod + def __contains__(self, x): + return False + + @classmethod + def __subclasshook__(cls, C): + if cls is Container: + if any("__contains__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Callable(metaclass=ABCMeta): + + @abstractmethod + def __call__(self, *args, **kwds): + return False + + @classmethod + def __subclasshook__(cls, C): + if cls is Callable: + if any("__call__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +### SETS ### + + +class Set(Sized, Iterable, Container): + + """A set is a finite, iterable container. + + This class provides concrete generic implementations of all + methods except for __contains__, __iter__ and __len__. + + To override the comparisons (presumably for speed, as the + semantics are fixed), all you have to do is redefine __le__ and + then the other operations will automatically follow suit. 
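[Editor's note: as the `Set` docstring says, only `__contains__`, `__iter__` and `__len__` are abstract; comparisons and the bitwise operators are mixin methods. A minimal concrete subclass — a hypothetical list-backed set, for illustration only:]

```python
from collections.abc import Set

class ListSet(Set):
    """Supplies only the three abstract methods; <=, &, |, -, ^
    all come from the Set mixins (via _from_iterable)."""

    def __init__(self, iterable=()):
        self._items = []
        for x in iterable:
            if x not in self._items:
                self._items.append(x)

    def __contains__(self, x):
        return x in self._items

    def __iter__(self):
        return iter(self._items)

    def __len__(self):
        return len(self._items)

a = ListSet([1, 2, 3])
b = ListSet([2, 3])
assert b <= a                    # mixin __le__
assert sorted(a & b) == [2, 3]   # mixin __and__
assert sorted(a - b) == [1]      # mixin __sub__
```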
+ """ + + def __le__(self, other): + if not isinstance(other, Set): + return NotImplemented + if len(self) > len(other): + return False + for elem in self: + if elem not in other: + return False + return True + + def __lt__(self, other): + if not isinstance(other, Set): + return NotImplemented + return len(self) < len(other) and self.__le__(other) + + def __gt__(self, other): + if not isinstance(other, Set): + return NotImplemented + return other < self + + def __ge__(self, other): + if not isinstance(other, Set): + return NotImplemented + return other <= self + + def __eq__(self, other): + if not isinstance(other, Set): + return NotImplemented + return len(self) == len(other) and self.__le__(other) + + def __ne__(self, other): + return not (self == other) + + @classmethod + def _from_iterable(cls, it): + '''Construct an instance of the class from any iterable input. + + Must override this method if the class constructor signature + does not accept an iterable for an input. + ''' + return cls(it) + + def __and__(self, other): + if not isinstance(other, Iterable): + return NotImplemented + return self._from_iterable(value for value in other if value in self) + + def isdisjoint(self, other): + for value in other: + if value in self: + return False + return True + + def __or__(self, other): + if not isinstance(other, Iterable): + return NotImplemented + chain = (e for s in (self, other) for e in s) + return self._from_iterable(chain) + + def __sub__(self, other): + if not isinstance(other, Set): + if not isinstance(other, Iterable): + return NotImplemented + other = self._from_iterable(other) + return self._from_iterable(value for value in self + if value not in other) + + def __xor__(self, other): + if not isinstance(other, Set): + if not isinstance(other, Iterable): + return NotImplemented + other = self._from_iterable(other) + return (self - other) | (other - self) + + def _hash(self): + """Compute the hash value of a set. 
+ + Note that we don't define __hash__: not all sets are hashable. + But if you define a hashable set type, its __hash__ should + call this function. + + This must be compatible __eq__. + + All sets ought to compare equal if they contain the same + elements, regardless of how they are implemented, and + regardless of the order of the elements; so there's not much + freedom for __eq__ or __hash__. We match the algorithm used + by the built-in frozenset type. + """ + MAX = sys.maxsize + MASK = 2 * MAX + 1 + n = len(self) + h = 1927868237 * (n + 1) + h &= MASK + for x in self: + hx = hash(x) + h ^= (hx ^ (hx << 16) ^ 89869747) * 3644798167 + h &= MASK + h = h * 69069 + 907133923 + h &= MASK + if h > MAX: + h -= MASK + 1 + if h == -1: + h = 590923713 + return h + +Set.register(frozenset) + + +class MutableSet(Set): + + @abstractmethod + def add(self, value): + """Add an element.""" + raise NotImplementedError + + @abstractmethod + def discard(self, value): + """Remove an element. Do not raise an exception if absent.""" + raise NotImplementedError + + def remove(self, value): + """Remove an element. If not a member, raise a KeyError.""" + if value not in self: + raise KeyError(value) + self.discard(value) + + def pop(self): + """Return the popped value. Raise KeyError if empty.""" + it = iter(self) + try: + value = next(it) + except StopIteration: + raise KeyError + self.discard(value) + return value + + def clear(self): + """This is slow (creates N new iterators!) 
but effective.""" + try: + while True: + self.pop() + except KeyError: + pass + + def __ior__(self, it): + for value in it: + self.add(value) + return self + + def __iand__(self, it): + for value in (self - it): + self.discard(value) + return self + + def __ixor__(self, it): + if it is self: + self.clear() + else: + if not isinstance(it, Set): + it = self._from_iterable(it) + for value in it: + if value in self: + self.discard(value) + else: + self.add(value) + return self + + def __isub__(self, it): + if it is self: + self.clear() + else: + for value in it: + self.discard(value) + return self + +MutableSet.register(set) + + +### MAPPINGS ### + + +class Mapping(Sized, Iterable, Container): + + @abstractmethod + def __getitem__(self, key): + raise KeyError + + def get(self, key, default=None): + try: + return self[key] + except KeyError: + return default + + def __contains__(self, key): + try: + self[key] + except KeyError: + return False + else: + return True + + def keys(self): + return KeysView(self) + + def items(self): + return ItemsView(self) + + def values(self): + return ValuesView(self) + + def __eq__(self, other): + if not isinstance(other, Mapping): + return NotImplemented + return dict(self.items()) == dict(other.items()) + + def __ne__(self, other): + return not (self == other) + + +class MappingView(Sized): + + def __init__(self, mapping): + self._mapping = mapping + + def __len__(self): + return len(self._mapping) + + def __repr__(self): + return '{0.__class__.__name__}({0._mapping!r})'.format(self) + + +class KeysView(MappingView, Set): + + @classmethod + def _from_iterable(self, it): + return set(it) + + def __contains__(self, key): + return key in self._mapping + + def __iter__(self): + for key in self._mapping: + yield key + +KeysView.register(dict_keys) + + +class ItemsView(MappingView, Set): + + @classmethod + def _from_iterable(self, it): + return set(it) + + def __contains__(self, item): + key, value = item + try: + v = self._mapping[key] + 
except KeyError: + return False + else: + return v == value + + def __iter__(self): + for key in self._mapping: + yield (key, self._mapping[key]) + +ItemsView.register(dict_items) + + +class ValuesView(MappingView): + + def __contains__(self, value): + for key in self._mapping: + if value == self._mapping[key]: + return True + return False + + def __iter__(self): + for key in self._mapping: + yield self._mapping[key] + +ValuesView.register(dict_values) + + +class MutableMapping(Mapping): + + @abstractmethod + def __setitem__(self, key, value): + raise KeyError + + @abstractmethod + def __delitem__(self, key): + raise KeyError + + __marker = object() + + def pop(self, key, default=__marker): + try: + value = self[key] + except KeyError: + if default is self.__marker: + raise + return default + else: + del self[key] + return value + + def popitem(self): + try: + key = next(iter(self)) + except StopIteration: + raise KeyError + value = self[key] + del self[key] + return key, value + + def clear(self): + try: + while True: + self.popitem() + except KeyError: + pass + + def update(*args, **kwds): + if len(args) > 2: + raise TypeError("update() takes at most 2 positional " + "arguments ({} given)".format(len(args))) + elif not args: + raise TypeError("update() takes at least 1 argument (0 given)") + self = args[0] + other = args[1] if len(args) >= 2 else () + + if isinstance(other, Mapping): + for key in other: + self[key] = other[key] + elif hasattr(other, "keys"): + for key in other.keys(): + self[key] = other[key] + else: + for key, value in other: + self[key] = value + for key, value in kwds.items(): + self[key] = value + + def setdefault(self, key, default=None): + try: + return self[key] + except KeyError: + self[key] = default + return default + +MutableMapping.register(dict) + + +### SEQUENCES ### + + +class Sequence(Sized, Iterable, Container): + + """All the operations on a read-only sequence. 
+ + Concrete subclasses must override __new__ or __init__, + __getitem__, and __len__. + """ + + @abstractmethod + def __getitem__(self, index): + raise IndexError + + def __iter__(self): + i = 0 + try: + while True: + v = self[i] + yield v + i += 1 + except IndexError: + return + + def __contains__(self, value): + for v in self: + if v == value: + return True + return False + + def __reversed__(self): + for i in reversed(range(len(self))): + yield self[i] + + def index(self, value): + for i, v in enumerate(self): + if v == value: + return i + raise ValueError + + def count(self, value): + return sum(1 for v in self if v == value) + +Sequence.register(tuple) +Sequence.register(str) +Sequence.register(range) + + +class ByteString(Sequence): + + """This unifies bytes and bytearray. + + XXX Should add all their methods. + """ + +ByteString.register(bytes) +ByteString.register(bytearray) + + +class MutableSequence(Sequence): + + @abstractmethod + def __setitem__(self, index, value): + raise IndexError + + @abstractmethod + def __delitem__(self, index): + raise IndexError + + @abstractmethod + def insert(self, index, value): + raise IndexError + + def append(self, value): + self.insert(len(self), value) + + def reverse(self): + n = len(self) + for i in range(n//2): + self[i], self[n-i-1] = self[n-i-1], self[i] + + def extend(self, values): + for v in values: + self.append(v) + + def pop(self, index=-1): + v = self[index] + del self[index] + return v + + def remove(self, value): + del self[self.index(value)] + + def __iadd__(self, values): + self.extend(values) + return self + +MutableSequence.register(list) +MutableSequence.register(bytearray) # Multiply inheriting, see ByteString diff --git a/lib-python/3.2/_compat_pickle.py b/lib-python/3.2/_compat_pickle.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_compat_pickle.py @@ -0,0 +1,81 @@ +# This module is used to map the old Python 2 names to the new names used in +# Python 3 for the pickle module. 
This needed to make pickle streams +# generated with Python 2 loadable by Python 3. + +# This is a copy of lib2to3.fixes.fix_imports.MAPPING. We cannot import +# lib2to3 and use the mapping defined there, because lib2to3 uses pickle. +# Thus, this could cause the module to be imported recursively. +IMPORT_MAPPING = { + 'StringIO': 'io', + 'cStringIO': 'io', + 'cPickle': 'pickle', + '__builtin__' : 'builtins', + 'copy_reg': 'copyreg', + 'Queue': 'queue', + 'SocketServer': 'socketserver', + 'ConfigParser': 'configparser', + 'repr': 'reprlib', + 'FileDialog': 'tkinter.filedialog', + 'tkFileDialog': 'tkinter.filedialog', + 'SimpleDialog': 'tkinter.simpledialog', + 'tkSimpleDialog': 'tkinter.simpledialog', + 'tkColorChooser': 'tkinter.colorchooser', + 'tkCommonDialog': 'tkinter.commondialog', + 'Dialog': 'tkinter.dialog', + 'Tkdnd': 'tkinter.dnd', + 'tkFont': 'tkinter.font', + 'tkMessageBox': 'tkinter.messagebox', + 'ScrolledText': 'tkinter.scrolledtext', + 'Tkconstants': 'tkinter.constants', + 'Tix': 'tkinter.tix', + 'ttk': 'tkinter.ttk', + 'Tkinter': 'tkinter', + 'markupbase': '_markupbase', + '_winreg': 'winreg', + 'thread': '_thread', + 'dummy_thread': '_dummy_thread', + 'dbhash': 'dbm.bsd', + 'dumbdbm': 'dbm.dumb', + 'dbm': 'dbm.ndbm', + 'gdbm': 'dbm.gnu', + 'xmlrpclib': 'xmlrpc.client', + 'DocXMLRPCServer': 'xmlrpc.server', + 'SimpleXMLRPCServer': 'xmlrpc.server', + 'httplib': 'http.client', + 'htmlentitydefs' : 'html.entities', + 'HTMLParser' : 'html.parser', + 'Cookie': 'http.cookies', + 'cookielib': 'http.cookiejar', + 'BaseHTTPServer': 'http.server', + 'SimpleHTTPServer': 'http.server', + 'CGIHTTPServer': 'http.server', + 'test.test_support': 'test.support', + 'commands': 'subprocess', + 'UserString' : 'collections', + 'UserList' : 'collections', + 'urlparse' : 'urllib.parse', + 'robotparser' : 'urllib.robotparser', + 'whichdb': 'dbm', + 'anydbm': 'dbm' +} + + +# This contains rename rules that are easy to handle. We ignore the more +# complex stuff (e.g. 
mapping the names in the urllib and types modules). +# These rules should be run before import names are fixed. +NAME_MAPPING = { + ('__builtin__', 'xrange'): ('builtins', 'range'), + ('__builtin__', 'reduce'): ('functools', 'reduce'), + ('__builtin__', 'intern'): ('sys', 'intern'), + ('__builtin__', 'unichr'): ('builtins', 'chr'), + ('__builtin__', 'basestring'): ('builtins', 'str'), + ('__builtin__', 'long'): ('builtins', 'int'), + ('itertools', 'izip'): ('builtins', 'zip'), + ('itertools', 'imap'): ('builtins', 'map'), + ('itertools', 'ifilter'): ('builtins', 'filter'), + ('itertools', 'ifilterfalse'): ('itertools', 'filterfalse'), +} + +# Same, but for 3.x to 2.x +REVERSE_IMPORT_MAPPING = dict((v, k) for (k, v) in IMPORT_MAPPING.items()) +REVERSE_NAME_MAPPING = dict((v, k) for (k, v) in NAME_MAPPING.items()) diff --git a/lib-python/3.2/_dummy_thread.py b/lib-python/3.2/_dummy_thread.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_dummy_thread.py @@ -0,0 +1,155 @@ +"""Drop-in replacement for the thread module. + +Meant to be used as a brain-dead substitute so that threaded code does +not need to be rewritten for when the thread module is not present. + +Suggested usage is:: + + try: + import _thread + except ImportError: + import _dummy_thread as _thread + +""" +# Exports only things specified by thread documentation; +# skipping obsolete synonyms allocate(), start_new(), exit_thread(). +__all__ = ['error', 'start_new_thread', 'exit', 'get_ident', 'allocate_lock', + 'interrupt_main', 'LockType'] + +# A dummy value +TIMEOUT_MAX = 2**31 + +# NOTE: this module can be imported early in the extension building process, +# and so top level imports of other modules should be avoided. Instead, all +# imports are done when needed on a function-by-function basis. Since threads +# are disabled, the import lock should not be an issue anyway (??). 
+ +class error(Exception): + """Dummy implementation of _thread.error.""" + + def __init__(self, *args): + self.args = args + +def start_new_thread(function, args, kwargs={}): + """Dummy implementation of _thread.start_new_thread(). + + Compatibility is maintained by making sure that ``args`` is a + tuple and ``kwargs`` is a dictionary. If an exception is raised + and it is SystemExit (which can be done by _thread.exit()) it is + caught and nothing is done; all other exceptions are printed out + by using traceback.print_exc(). + + If the executed function calls interrupt_main the KeyboardInterrupt will be + raised when the function returns. + + """ + if type(args) != type(tuple()): + raise TypeError("2nd arg must be a tuple") + if type(kwargs) != type(dict()): + raise TypeError("3rd arg must be a dict") + global _main + _main = False + try: + function(*args, **kwargs) + except SystemExit: + pass + except: + import traceback + traceback.print_exc() + _main = True + global _interrupt + if _interrupt: + _interrupt = False + raise KeyboardInterrupt + +def exit(): + """Dummy implementation of _thread.exit().""" + raise SystemExit + +def get_ident(): + """Dummy implementation of _thread.get_ident(). + + Since this module should only be used when _threadmodule is not + available, it is safe to assume that the current process is the + only thread. Thus a constant can be safely returned. + """ + return -1 + +def allocate_lock(): + """Dummy implementation of _thread.allocate_lock().""" + return LockType() + +def stack_size(size=None): + """Dummy implementation of _thread.stack_size().""" + if size is not None: + raise error("setting thread stack size not supported") + return 0 + +class LockType(object): + """Class implementing dummy implementation of _thread.LockType. + + Compatibility is maintained by maintaining self.locked_status + which is a boolean that stores the state of the lock. 
Pickling of + the lock, though, should not be done since if the _thread module is + then used with an unpickled ``lock()`` from here problems could + occur from this class not having atomic methods. + + """ + + def __init__(self): + self.locked_status = False + + def acquire(self, waitflag=None, timeout=-1): + """Dummy implementation of acquire(). + + For blocking calls, self.locked_status is automatically set to + True and returned appropriately based on value of + ``waitflag``. If it is non-blocking, then the value is + actually checked and not set if it is already acquired. This + is all done so that threading.Condition's assert statements + aren't triggered and throw a little fit. + + """ + if waitflag is None or waitflag: + self.locked_status = True + return True + else: + if not self.locked_status: + self.locked_status = True + return True + else: + if timeout > 0: + import time + time.sleep(timeout) + return False + + __enter__ = acquire + + def __exit__(self, typ, val, tb): + self.release() + + def release(self): + """Release the dummy lock.""" + # XXX Perhaps shouldn't actually bother to test? Could lead + # to problems for complex, threaded code. + if not self.locked_status: + raise error + self.locked_status = False + return True + + def locked(self): + return self.locked_status + +# Used to signal that interrupt_main was called in a "thread" +_interrupt = False +# True when not executing in a "thread" +_main = True + +def interrupt_main(): + """Set _interrupt flag to True to have start_new_thread raise + KeyboardInterrupt upon exiting.""" + if _main: + raise KeyboardInterrupt + else: + global _interrupt + _interrupt = True diff --git a/lib-python/3.2/_markupbase.py b/lib-python/3.2/_markupbase.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_markupbase.py @@ -0,0 +1,395 @@ +"""Shared support for scanning document type declarations in HTML and XHTML. + +This module is used as a foundation for the html.parser module. 
It has no +documented public API and should not be used directly. + +""" + +import re + +_declname_match = re.compile(r'[a-zA-Z][-_.a-zA-Z0-9]*\s*').match +_declstringlit_match = re.compile(r'(\'[^\']*\'|"[^"]*")\s*').match +_commentclose = re.compile(r'--\s*>') +_markedsectionclose = re.compile(r']\s*]\s*>') + +# An analysis of the MS-Word extensions is available at +# http://www.planetpublish.com/xmlarena/xap/Thursday/WordtoXML.pdf + +_msmarkedsectionclose = re.compile(r']\s*>') + +del re + + +class ParserBase: + """Parser base class which provides some common support methods used + by the SGML/HTML and XHTML parsers.""" + + def __init__(self): + if self.__class__ is ParserBase: + raise RuntimeError( + "_markupbase.ParserBase must be subclassed") + + def error(self, message): + raise NotImplementedError( + "subclasses of ParserBase must override error()") + + def reset(self): + self.lineno = 1 + self.offset = 0 + + def getpos(self): + """Return current line number and offset.""" + return self.lineno, self.offset + + # Internal -- update line number and offset. This should be + # called for each piece of data exactly once, in order -- in other + # words the concatenation of all the input strings to this + # function should be exactly the entire input. + def updatepos(self, i, j): + if i >= j: + return j + rawdata = self.rawdata + nlines = rawdata.count("\n", i, j) + if nlines: + self.lineno = self.lineno + nlines + pos = rawdata.rindex("\n", i, j) # Should not fail + self.offset = j-(pos+1) + else: + self.offset = self.offset + j-i + return j + + _decl_otherchars = '' + + # Internal -- parse declaration (for use by subclasses). + def parse_declaration(self, i): + # This is some sort of declaration; in "HTML as + # deployed," this should only be the document type + # declaration (""). 
+ # ISO 8879:1986, however, has more complex + # declaration syntax for elements in , including: + # --comment-- + # [marked section] + # name in the following list: ENTITY, DOCTYPE, ELEMENT, + # ATTLIST, NOTATION, SHORTREF, USEMAP, + # LINKTYPE, LINK, IDLINK, USELINK, SYSTEM + rawdata = self.rawdata + j = i + 2 + assert rawdata[i:j] == "": + # the empty comment + return j + 1 + if rawdata[j:j+1] in ("-", ""): + # Start of comment followed by buffer boundary, + # or just a buffer boundary. + return -1 + # A simple, practical version could look like: ((name|stringlit) S*) + '>' + n = len(rawdata) + if rawdata[j:j+2] == '--': #comment + # Locate --.*-- as the body of the comment + return self.parse_comment(i) + elif rawdata[j] == '[': #marked section + # Locate [statusWord [...arbitrary SGML...]] as the body of the marked section + # Where statusWord is one of TEMP, CDATA, IGNORE, INCLUDE, RCDATA + # Note that this is extended by Microsoft Office "Save as Web" function + # to include [if...] and [endif]. + return self.parse_marked_section(i) + else: #all other declaration elements + decltype, j = self._scan_name(j, i) + if j < 0: + return j + if decltype == "doctype": + self._decl_otherchars = '' + while j < n: + c = rawdata[j] + if c == ">": + # end of declaration syntax + data = rawdata[i+2:j] + if decltype == "doctype": + self.handle_decl(data) + else: + # According to the HTML5 specs sections "8.2.4.44 Bogus + # comment state" and "8.2.4.45 Markup declaration open + # state", a comment token should be emitted. + # Calling unknown_decl provides more flexibility though. 
+ self.unknown_decl(data) + return j + 1 + if c in "\"'": + m = _declstringlit_match(rawdata, j) + if not m: + return -1 # incomplete + j = m.end() + elif c in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ": + name, j = self._scan_name(j, i) + elif c in self._decl_otherchars: + j = j + 1 + elif c == "[": + # this could be handled in a separate doctype parser + if decltype == "doctype": + j = self._parse_doctype_subset(j + 1, i) + elif decltype in {"attlist", "linktype", "link", "element"}: + # must tolerate []'d groups in a content model in an element declaration + # also in data attribute specifications of attlist declaration + # also link type declaration subsets in linktype declarations + # also link attribute specification lists in link declarations + self.error("unsupported '[' char in %s declaration" % decltype) + else: + self.error("unexpected '[' char in declaration") + else: + self.error( + "unexpected %r char in declaration" % rawdata[j]) + if j < 0: + return j + return -1 # incomplete + + # Internal -- parse a marked section + # Override this to handle MS-word extension syntax content + def parse_marked_section(self, i, report=1): + rawdata= self.rawdata + assert rawdata[i:i+3] == ' ending + match= _markedsectionclose.search(rawdata, i+3) + elif sectName in {"if", "else", "endif"}: + # look for MS Office ]> ending + match= _msmarkedsectionclose.search(rawdata, i+3) + else: + self.error('unknown status keyword %r in marked section' % rawdata[i+3:j]) + if not match: + return -1 + if report: + j = match.start(0) + self.unknown_decl(rawdata[i+3: j]) + return match.end(0) + + # Internal -- parse comment, return length or -1 if not terminated + def parse_comment(self, i, report=1): + rawdata = self.rawdata + if rawdata[i:i+4] != ' LOAD_CONST None def f(x): - None + y = None return x asm = disassemble(f) for elem in ('LOAD_GLOBAL',): @@ -67,10 +67,13 @@ self.assertIn(elem, asm) def test_pack_unpack(self): + # On PyPy, "a, b = ..." 
is even more optimized, by removing + # the ROT_TWO. But the ROT_TWO is not removed if assigning + # to more complex expressions, so check that. for line, elem in ( ('a, = a,', 'LOAD_CONST',), - ('a, b = a, b', 'ROT_TWO',), - ('a, b, c = a, b, c', 'ROT_THREE',), + ('a[1], b = a, b', 'ROT_TWO',), + ('a, b[2], c = a, b, c', 'ROT_THREE',), ): asm = dis_single(line) self.assertIn(elem, asm) @@ -78,6 +81,8 @@ self.assertNotIn('UNPACK_TUPLE', asm) def test_folding_of_tuples_of_constants(self): + # On CPython, "a,b,c=1,2,3" turns into "a,b,c=" + # but on PyPy, it turns into "a=1;b=2;c=3". for line, elem in ( ('a = 1,2,3', '((1, 2, 3))'), ('("a","b","c")', "(('a', 'b', 'c'))"), @@ -86,7 +91,8 @@ ('((1, 2), 3, 4)', '(((1, 2), 3, 4))'), ): asm = dis_single(line) - self.assertIn(elem, asm) + self.assert_(elem in asm or ( + line == 'a,b,c = 1,2,3' and 'UNPACK_TUPLE' not in asm)) self.assertNotIn('BUILD_TUPLE', asm) # Bug 1053819: Tuple of constants misidentified when presented with: @@ -139,12 +145,15 @@ def test_binary_subscr_on_unicode(self): # valid code get optimized - asm = dis_single('u"foo"[0]') - self.assertIn("(u'f')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) - asm = dis_single('u"\u0061\uffff"[1]') - self.assertIn("(u'\\uffff')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) + # XXX for now we always disable this optimization + # XXX see CPython's issue5057 + if 0: + asm = dis_single('u"foo"[0]') + self.assertIn("(u'f')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + asm = dis_single('u"\u0061\uffff"[1]') + self.assertIn("(u'\\uffff')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) # invalid code doesn't get optimized # out of range diff --git a/lib-python/2.7/test/test_pprint.py b/lib-python/2.7/test/test_pprint.py --- a/lib-python/2.7/test/test_pprint.py +++ b/lib-python/2.7/test/test_pprint.py @@ -233,7 +233,16 @@ frozenset([0, 2]), frozenset([0, 1])])}""" cube = test.test_set.cube(3) - self.assertEqual(pprint.pformat(cube), cube_repr_tgt) + # XXX issues of 
dictionary order, and for the case below, + # order of items in the frozenset([...]) representation. + # Whether we get precisely cube_repr_tgt or not is open + # to implementation-dependent choices (this test probably + # fails horribly in CPython if we tweak the dict order too). + got = pprint.pformat(cube) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cube_repr_tgt) + else: + self.assertEqual(eval(got), cube) cubo_repr_tgt = """\ {frozenset([frozenset([0, 2]), frozenset([0])]): frozenset([frozenset([frozenset([0, 2]), @@ -393,7 +402,11 @@ 2])])])}""" cubo = test.test_set.linegraph(cube) - self.assertEqual(pprint.pformat(cubo), cubo_repr_tgt) + got = pprint.pformat(cubo) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cubo_repr_tgt) + else: + self.assertEqual(eval(got), cubo) def test_depth(self): nested_tuple = (1, (2, (3, (4, (5, 6))))) diff --git a/lib-python/2.7/test/test_pydoc.py b/lib-python/2.7/test/test_pydoc.py --- a/lib-python/2.7/test/test_pydoc.py +++ b/lib-python/2.7/test/test_pydoc.py @@ -267,8 +267,8 @@ testpairs = ( ('i_am_not_here', 'i_am_not_here'), ('test.i_am_not_here_either', 'i_am_not_here_either'), - ('test.i_am_not_here.neither_am_i', 'i_am_not_here.neither_am_i'), - ('i_am_not_here.{}'.format(modname), 'i_am_not_here.{}'.format(modname)), + ('test.i_am_not_here.neither_am_i', 'i_am_not_here'), + ('i_am_not_here.{}'.format(modname), 'i_am_not_here'), ('test.{}'.format(modname), modname), ) @@ -292,8 +292,8 @@ result = run_pydoc(modname) finally: forget(modname) - expected = badimport_pattern % (modname, expectedinmsg) - self.assertEqual(expected, result) + expected = badimport_pattern % (modname, '(.+\\.)?' 
+ expectedinmsg + '(\\..+)?$') + self.assertTrue(re.match(expected, result)) def test_input_strip(self): missing_module = " test.i_am_not_here " diff --git a/lib-python/2.7/test/test_pyexpat.py b/lib-python/2.7/test/test_pyexpat.py --- a/lib-python/2.7/test/test_pyexpat.py +++ b/lib-python/2.7/test/test_pyexpat.py @@ -570,6 +570,9 @@ self.assertEqual(self.n, 4) class MalformedInputText(unittest.TestCase): + # CPython seems to ship its own version of expat, they fixed it on this commit : + # http://svn.python.org/view?revision=74429&view=revision + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test1(self): xml = "\0\r\n" parser = expat.ParserCreate() @@ -579,6 +582,7 @@ except expat.ExpatError as e: self.assertEqual(str(e), 'unclosed token: line 2, column 0') + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test2(self): xml = "\r\n" parser = expat.ParserCreate() diff --git a/lib-python/2.7/test/test_repr.py b/lib-python/2.7/test/test_repr.py --- a/lib-python/2.7/test/test_repr.py +++ b/lib-python/2.7/test/test_repr.py @@ -9,6 +9,7 @@ import unittest from test.test_support import run_unittest, check_py3k_warnings +from test.test_support import check_impl_detail from repr import repr as r # Don't shadow builtin repr from repr import Repr @@ -145,8 +146,11 @@ # Functions eq(repr(hash), '') # Methods - self.assertTrue(repr(''.split).startswith( - '") def test_xrange(self): eq = self.assertEqual @@ -185,7 +189,10 @@ def test_descriptors(self): eq = self.assertEqual # method descriptors - eq(repr(dict.items), "") + if check_impl_detail(cpython=True): + eq(repr(dict.items), "") + elif check_impl_detail(pypy=True): + eq(repr(dict.items), "") # XXX member descriptors # XXX attribute descriptors # XXX slot descriptors @@ -247,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from 
areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_runpy.py b/lib-python/2.7/test/test_runpy.py --- a/lib-python/2.7/test/test_runpy.py +++ b/lib-python/2.7/test/test_runpy.py @@ -5,10 +5,15 @@ import sys import re import tempfile -from test.test_support import verbose, run_unittest, forget +from test.test_support import verbose, run_unittest, forget, check_impl_detail from test.script_helper import (temp_dir, make_script, compile_script, make_pkg, make_zip_script, make_zip_pkg) +if check_impl_detail(pypy=True): + no_lone_pyc_file = True +else: + no_lone_pyc_file = False + from runpy import _run_code, _run_module_code, run_module, run_path # Note: This module can't safely test _run_module_as_main as it @@ -168,13 +173,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 
1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + d2 = run_module(mod_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -190,13 +196,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", pkg_name - d2 = run_module(pkg_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", pkg_name + d2 = run_module(pkg_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, pkg_name) if verbose: print "Package executed successfully" @@ -244,15 +251,17 @@ self.assertIn("sibling", d1) self.assertIn("nephew", d1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name, run_name=run_name) # Read from bytecode - self.assertIn("__package__", d2) - self.assertTrue(d2["__package__"] == pkg_name) - self.assertIn("sibling", d2) - self.assertIn("nephew", d2) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + # Read from bytecode + d2 = run_module(mod_name, run_name=run_name) + self.assertIn("__package__", d2) + 
self.assertTrue(d2["__package__"] == pkg_name) + self.assertIn("sibling", d2) + self.assertIn("nephew", d2) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -345,6 +354,8 @@ script_dir, '') def test_directory_compiled(self): + if no_lone_pyc_file: + return with temp_dir() as script_dir: mod_name = '__main__' script_name = self._make_test_script(script_dir, mod_name) diff --git a/lib-python/2.7/test/test_scope.py b/lib-python/2.7/test/test_scope.py --- a/lib-python/2.7/test/test_scope.py +++ b/lib-python/2.7/test/test_scope.py @@ -1,6 +1,6 @@ import unittest from test.test_support import check_syntax_error, check_py3k_warnings, \ - check_warnings, run_unittest + check_warnings, run_unittest, gc_collect class ScopeTests(unittest.TestCase): @@ -432,6 +432,7 @@ for i in range(100): f1() + gc_collect() self.assertEqual(Foo.count, 0) diff --git a/lib-python/2.7/test/test_set.py b/lib-python/2.7/test/test_set.py --- a/lib-python/2.7/test/test_set.py +++ b/lib-python/2.7/test/test_set.py @@ -309,6 +309,7 @@ fo.close() test_support.unlink(test_support.TESTFN) + @test_support.impl_detail(pypy=False) def test_do_not_rehash_dict_keys(self): n = 10 d = dict.fromkeys(map(HashCountingInt, xrange(n))) @@ -559,6 +560,7 @@ p = weakref.proxy(s) self.assertEqual(str(p), str(s)) s = None + test_support.gc_collect() self.assertRaises(ReferenceError, str, p) # C API test only available in a debug build @@ -590,6 +592,7 @@ s.__init__(self.otherword) self.assertEqual(s, set(self.word)) + @test_support.impl_detail() def test_singleton_empty_frozenset(self): f = frozenset() efs = [frozenset(), frozenset([]), frozenset(()), frozenset(''), @@ -770,9 +773,10 @@ for v in self.set: self.assertIn(v, self.values) setiter = iter(self.set) - # note: __length_hint__ is an internal undocumented API, - # don't rely on it in your own programs - self.assertEqual(setiter.__length_hint__(), 
len(self.set)) + if test_support.check_impl_detail(): + # note: __length_hint__ is an internal undocumented API, + # don't rely on it in your own programs + self.assertEqual(setiter.__length_hint__(), len(self.set)) def test_pickling(self): p = pickle.dumps(self.set) @@ -1564,7 +1568,7 @@ for meth in (s.union, s.intersection, s.difference, s.symmetric_difference, s.isdisjoint): for g in (G, I, Ig, L, R): expected = meth(data) - actual = meth(G(data)) + actual = meth(g(data)) if isinstance(expected, bool): self.assertEqual(actual, expected) else: diff --git a/lib-python/2.7/test/test_sets.py b/lib-python/2.7/test/test_sets.py --- a/lib-python/2.7/test/test_sets.py +++ b/lib-python/2.7/test/test_sets.py @@ -686,7 +686,9 @@ set_list = sorted(self.set) self.assertEqual(len(dup_list), len(set_list)) for i, el in enumerate(dup_list): - self.assertIs(el, set_list[i]) + # Object identity is not guarnteed for immutable objects, so we + # can't use assertIs here. + self.assertEqual(el, set_list[i]) def test_deep_copy(self): dup = copy.deepcopy(self.set) diff --git a/lib-python/2.7/test/test_site.py b/lib-python/2.7/test/test_site.py --- a/lib-python/2.7/test/test_site.py +++ b/lib-python/2.7/test/test_site.py @@ -226,6 +226,10 @@ self.assertEqual(len(dirs), 1) wanted = os.path.join('xoxo', 'Lib', 'site-packages') self.assertEqual(dirs[0], wanted) + elif '__pypy__' in sys.builtin_module_names: + self.assertEquals(len(dirs), 1) + wanted = os.path.join('xoxo', 'site-packages') + self.assertEquals(dirs[0], wanted) elif os.sep == '/': self.assertEqual(len(dirs), 2) wanted = os.path.join('xoxo', 'lib', 'python' + sys.version[:3], diff --git a/lib-python/2.7/test/test_socket.py b/lib-python/2.7/test/test_socket.py --- a/lib-python/2.7/test/test_socket.py +++ b/lib-python/2.7/test/test_socket.py @@ -252,6 +252,7 @@ self.assertEqual(p.fileno(), s.fileno()) s.close() s = None + test_support.gc_collect() try: p.fileno() except ReferenceError: @@ -285,32 +286,34 @@ s.sendto(u'\u2620', 
sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None) - self.assertIn('not NoneType', str(cm.exception)) + self.assertIn('NoneType', str(cm.exception)) # 3 args with self.assertRaises(UnicodeEncodeError): s.sendto(u'\u2620', 0, sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, 0, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, None) - self.assertIn('not NoneType', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('not NoneType', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 'bar', sockname) - self.assertIn('an integer is required', str(cm.exception)) + self.assertIn('integer', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None, None) - self.assertIn('an integer is required', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('an integer is required', str(cm.exception)) # wrong number of args with self.assertRaises(TypeError) as cm: s.sendto('foo') - self.assertIn('(1 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, sockname, 4) - self.assertIn('(4 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) def testCrucialConstants(self): @@ -385,10 +388,10 @@ socket.htonl(k) socket.htons(k) for k in bad_values: - self.assertRaises(OverflowError, socket.ntohl, k) - self.assertRaises(OverflowError, socket.ntohs, k) - self.assertRaises(OverflowError, socket.htonl, k) - self.assertRaises(OverflowError, socket.htons, k) + self.assertRaises((OverflowError, ValueError), socket.ntohl, k) + self.assertRaises((OverflowError, ValueError), socket.ntohs, k) + 
self.assertRaises((OverflowError, ValueError), socket.htonl, k) + self.assertRaises((OverflowError, ValueError), socket.htons, k) def testGetServBy(self): eq = self.assertEqual @@ -428,8 +431,8 @@ if udpport is not None: eq(socket.getservbyport(udpport, 'udp'), service) # Make sure getservbyport does not accept out of range ports. - self.assertRaises(OverflowError, socket.getservbyport, -1) - self.assertRaises(OverflowError, socket.getservbyport, 65536) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, -1) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, 65536) def testDefaultTimeout(self): # Testing default timeout @@ -608,8 +611,8 @@ neg_port = port - 65536 sock = socket.socket() try: - self.assertRaises(OverflowError, sock.bind, (host, big_port)) - self.assertRaises(OverflowError, sock.bind, (host, neg_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, big_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, neg_port)) sock.bind((host, port)) finally: sock.close() @@ -1309,6 +1312,7 @@ closed = False def flush(self): pass def close(self): self.closed = True + def _decref_socketios(self): pass # must not close unless we request it: the original use of _fileobject # by module socket requires that the underlying socket not be closed until diff --git a/lib-python/2.7/test/test_sort.py b/lib-python/2.7/test/test_sort.py --- a/lib-python/2.7/test/test_sort.py +++ b/lib-python/2.7/test/test_sort.py @@ -140,7 +140,10 @@ return random.random() < 0.5 L = [C() for i in range(50)] - self.assertRaises(ValueError, L.sort) + try: + L.sort() + except ValueError: + pass def test_cmpNone(self): # Testing None as a comparison function. @@ -150,8 +153,10 @@ L.sort(None) self.assertEqual(L, range(50)) + @test_support.impl_detail(pypy=False) def test_undetected_mutation(self): # Python 2.4a1 did not always detect mutation + # So does pypy... 
memorywaster = [] for i in range(20): def mutating_cmp(x, y): @@ -226,7 +231,10 @@ def __del__(self): del data[:] data[:] = range(20) - self.assertRaises(ValueError, data.sort, key=SortKiller) + try: + data.sort(key=SortKiller) + except ValueError: + pass def test_key_with_mutating_del_and_exception(self): data = range(10) diff --git a/lib-python/2.7/test/test_ssl.py b/lib-python/2.7/test/test_ssl.py --- a/lib-python/2.7/test/test_ssl.py +++ b/lib-python/2.7/test/test_ssl.py @@ -881,6 +881,8 @@ c = socket.socket() c.connect((HOST, port)) listener_gone.wait() + # XXX why is it necessary? + test_support.gc_collect() try: ssl_sock = ssl.wrap_socket(c) except IOError: @@ -1330,10 +1332,8 @@ def test_main(verbose=False): global CERTFILE, SVN_PYTHON_ORG_ROOT_CERT - CERTFILE = os.path.join(os.path.dirname(__file__) or os.curdir, - "keycert.pem") - SVN_PYTHON_ORG_ROOT_CERT = os.path.join( - os.path.dirname(__file__) or os.curdir, + CERTFILE = test_support.findfile("keycert.pem") + SVN_PYTHON_ORG_ROOT_CERT = test_support.findfile( "https_svn_python_org_root.pem") if (not os.path.exists(CERTFILE) or diff --git a/lib-python/2.7/test/test_str.py b/lib-python/2.7/test/test_str.py --- a/lib-python/2.7/test/test_str.py +++ b/lib-python/2.7/test/test_str.py @@ -422,10 +422,11 @@ for meth in ('foo'.startswith, 'foo'.endswith): with self.assertRaises(TypeError) as cm: meth(['f']) - exc = str(cm.exception) - self.assertIn('unicode', exc) - self.assertIn('str', exc) - self.assertIn('tuple', exc) + if test_support.check_impl_detail(): + exc = str(cm.exception) + self.assertIn('unicode', exc) + self.assertIn('str', exc) + self.assertIn('tuple', exc) def test_main(): test_support.run_unittest(StrTest) diff --git a/lib-python/2.7/test/test_struct.py b/lib-python/2.7/test/test_struct.py --- a/lib-python/2.7/test/test_struct.py +++ b/lib-python/2.7/test/test_struct.py @@ -535,7 +535,8 @@ @unittest.skipUnless(IS32BIT, "Specific to 32bit machines") def test_crasher(self): - 
self.assertRaises(MemoryError, struct.pack, "357913941c", "a") + self.assertRaises((MemoryError, struct.error), struct.pack, + "357913941c", "a") def test_count_overflow(self): hugecount = '{}b'.format(sys.maxsize+1) diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/lib-python/2.7/test/test_support.py b/lib-python/2.7/test/test_support.py --- a/lib-python/2.7/test/test_support.py +++ b/lib-python/2.7/test/test_support.py @@ -431,16 +431,20 @@ rmtree(name) -def findfile(file, here=__file__, subdir=None): +def findfile(file, here=None, subdir=None): """Try to find a file on sys.path and the working directory. 
    If it is not found the argument passed to the function is returned (this
    does not necessarily signal failure; could still be the legitimate
    path)."""
+    import test
     if os.path.isabs(file):
         return file
     if subdir is not None:
         file = os.path.join(subdir, file)
     path = sys.path
-    path = [os.path.dirname(here)] + path
+    if here is None:
+        path = test.__path__ + path
+    else:
+        path = [os.path.dirname(here)] + path
     for dn in path:
         fn = os.path.join(dn, file)
         if os.path.exists(fn): return fn
@@ -1050,15 +1054,33 @@
     guards, default = _parse_guards(guards)
     return guards.get(platform.python_implementation().lower(), default)
 
+# ----------------------------------
+# PyPy extension: you can run::
+#     python ..../test_foo.py --pdb
+# to get a pdb prompt in case of exceptions
+ResultClass = unittest.TextTestRunner.resultclass
+
+class TestResultWithPdb(ResultClass):
+
+    def addError(self, testcase, exc_info):
+        ResultClass.addError(self, testcase, exc_info)
+        if '--pdb' in sys.argv:
+            import pdb, traceback
+            traceback.print_tb(exc_info[2])
+            pdb.post_mortem(exc_info[2])
+
+# ----------------------------------
 
 def _run_suite(suite):
     """Run tests from a unittest.TestSuite-derived class."""
     if verbose:
-        runner = unittest.TextTestRunner(sys.stdout, verbosity=2)
+        runner = unittest.TextTestRunner(sys.stdout, verbosity=2,
+                                         resultclass=TestResultWithPdb)
     else:
         runner = BasicTestRunner()
+
     result = runner.run(suite)
     if not result.wasSuccessful():
         if len(result.errors) == 1 and not result.failures:
@@ -1071,6 +1093,34 @@
             err += "; run in verbose mode for details"
     raise TestFailed(err)
 
+# ----------------------------------
+# PyPy extension: you can run::
+#     python ..../test_foo.py --filter bar
+# to run only the test cases whose name contains bar
+
+def filter_maybe(suite):
+    try:
+        i = sys.argv.index('--filter')
+        filter = sys.argv[i+1]
+    except (ValueError, IndexError):
+        return suite
+    tests = []
+    for test in linearize_suite(suite):
+        if filter in test._testMethodName:
+            tests.append(test)
+    return unittest.TestSuite(tests)
+
+def linearize_suite(suite_or_test):
+    try:
+        it = iter(suite_or_test)
+    except TypeError:
+        yield suite_or_test
+        return
+    for subsuite in it:
+        for item in linearize_suite(subsuite):
+            yield item
+
+# ----------------------------------
 
 def run_unittest(*classes):
     """Run tests from unittest.TestCase-derived classes."""
@@ -1086,6 +1136,7 @@
             suite.addTest(cls)
         else:
             suite.addTest(unittest.makeSuite(cls))
+    suite = filter_maybe(suite)
     _run_suite(suite)
 
 
diff --git a/lib-python/2.7/test/test_syntax.py b/lib-python/2.7/test/test_syntax.py
--- a/lib-python/2.7/test/test_syntax.py
+++ b/lib-python/2.7/test/test_syntax.py
@@ -5,7 +5,8 @@
 >>> def f(x):
 ...     global x
 Traceback (most recent call last):
-SyntaxError: name 'x' is local and global (, line 1)
+  File "", line 1
+SyntaxError: name 'x' is local and global
 
 The tests are all raise SyntaxErrors.  They were created by checking
 each C call that raises SyntaxError.  There are several modules that
@@ -375,7 +376,7 @@
 In 2.5 there was a missing exception and an assert was triggered in a debug
 build.  The number of blocks must be greater than CO_MAXBLOCKS.  SF #1565514
 
-   >>> while 1:
+   >>> while 1: # doctest:+SKIP
    ...  while 2:
    ...   while 3:
    ...    while 4:
diff --git a/lib-python/2.7/test/test_sys.py b/lib-python/2.7/test/test_sys.py
--- a/lib-python/2.7/test/test_sys.py
+++ b/lib-python/2.7/test/test_sys.py
@@ -264,6 +264,7 @@
         self.assertEqual(sys.getdlopenflags(), oldflags+1)
         sys.setdlopenflags(oldflags)
 
+    @test.test_support.impl_detail("reference counting")
     def test_refcount(self):
         # n here must be a global in order for this test to pass while
         # tracing with a python function.  Tracing calls PyFrame_FastToLocals
@@ -287,7 +288,7 @@
             is sys._getframe().f_code
         )
 
-    # sys._current_frames() is a CPython-only gimmick.
+    @test.test_support.impl_detail("current_frames")
     def test_current_frames(self):
         have_threads = True
         try:
@@ -383,7 +384,10 @@
         self.assertEqual(len(sys.float_info), 11)
         self.assertEqual(sys.float_info.radix, 2)
         self.assertEqual(len(sys.long_info), 2)
-        self.assertTrue(sys.long_info.bits_per_digit % 5 == 0)
+        if test.test_support.check_impl_detail(cpython=True):
+            self.assertTrue(sys.long_info.bits_per_digit % 5 == 0)
+        else:
+            self.assertTrue(sys.long_info.bits_per_digit >= 1)
         self.assertTrue(sys.long_info.sizeof_digit >= 1)
         self.assertEqual(type(sys.long_info.bits_per_digit), int)
         self.assertEqual(type(sys.long_info.sizeof_digit), int)
@@ -432,6 +436,7 @@
             self.assertEqual(type(getattr(sys.flags, attr)), int, attr)
         self.assertTrue(repr(sys.flags))
 
+    @test.test_support.impl_detail("sys._clear_type_cache")
     def test_clear_type_cache(self):
         sys._clear_type_cache()
 
@@ -473,6 +478,7 @@
             p.wait()
             self.assertIn(executable, ["''", repr(sys.executable)])
 
+@unittest.skipUnless(test.test_support.check_impl_detail(), "sys.getsizeof()")
 class SizeofTest(unittest.TestCase):
 
     TPFLAGS_HAVE_GC = 1<<14
diff --git a/lib-python/2.7/test/test_sys_settrace.py b/lib-python/2.7/test/test_sys_settrace.py
--- a/lib-python/2.7/test/test_sys_settrace.py
+++ b/lib-python/2.7/test/test_sys_settrace.py
@@ -213,12 +213,16 @@
     "finally"
 
 def generator_example():
     # any() will leave the generator before its end
-    x = any(generator_function())
+    x = any(generator_function()); gc.collect()
 
     # the following lines were not traced
     for x in range(10):
         y = x
 
+# On CPython, when the generator is decref'ed to zero, we see the trace
+# for the "finally:" portion.  On PyPy, we don't see it before the next
+# garbage collection.  That's why we put gc.collect() on the same line above.
+
 generator_example.events = ([(0, 'call'),
                              (2, 'line'),
                              (-6, 'call'),
@@ -282,11 +286,11 @@
         self.compare_events(func.func_code.co_firstlineno,
                             tracer.events, func.events)
 
-    def set_and_retrieve_none(self):
+    def test_set_and_retrieve_none(self):
         sys.settrace(None)
         assert sys.gettrace() is None
 
-    def set_and_retrieve_func(self):
+    def test_set_and_retrieve_func(self):
         def fn(*args):
             pass
 
@@ -323,17 +327,24 @@
         self.run_test(tighterloop_example)
 
     def test_13_genexp(self):
-        self.run_test(generator_example)
-        # issue1265: if the trace function contains a generator,
-        # and if the traced function contains another generator
-        # that is not completely exhausted, the trace stopped.
-        # Worse: the 'finally' clause was not invoked.
+        if self.using_gc:
+            test_support.gc_collect()
+            gc.enable()
+        try:
+            self.run_test(generator_example)
+            # issue1265: if the trace function contains a generator,
+            # and if the traced function contains another generator
+            # that is not completely exhausted, the trace stopped.
+            # Worse: the 'finally' clause was not invoked.
+            tracer = Tracer()
+            sys.settrace(tracer.traceWithGenexp)
+            generator_example()
+            sys.settrace(None)
+            self.compare_events(generator_example.__code__.co_firstlineno,
+                                tracer.events, generator_example.events)
+        finally:
+            if self.using_gc:
+                gc.disable()
 
     def test_14_onliner_if(self):
         def onliners():
diff --git a/lib-python/2.7/test/test_sysconfig.py b/lib-python/2.7/test/test_sysconfig.py
--- a/lib-python/2.7/test/test_sysconfig.py
+++ b/lib-python/2.7/test/test_sysconfig.py
@@ -209,13 +209,22 @@
 
         self.assertEqual(get_platform(), 'macosx-10.4-fat64')
 
-        for arch in ('ppc', 'i386', 'x86_64', 'ppc64'):
+        for arch in ('ppc', 'i386', 'ppc64', 'x86_64'):
             get_config_vars()['CFLAGS'] = ('-arch %s -isysroot '
                                            '/Developer/SDKs/MacOSX10.4u.sdk '
                                            '-fno-strict-aliasing -fno-common '
                                            '-dynamic -DNDEBUG -g -O3'%(arch,))
 
             self.assertEqual(get_platform(), 'macosx-10.4-%s'%(arch,))
+
+        # macosx with ARCHFLAGS set and empty _CONFIG_VARS
+        os.environ['ARCHFLAGS'] = '-arch i386'
+        sysconfig._CONFIG_VARS = None
+
+        # this will attempt to recreate the _CONFIG_VARS based on environment
+        # variables; used to check a problem with the PyPy's _init_posix
+        # implementation; see: issue 705
+        get_config_vars()
 
         # linux debian sarge
         os.name = 'posix'
@@ -235,7 +244,7 @@
 
     def test_get_scheme_names(self):
         wanted = ('nt', 'nt_user', 'os2', 'os2_home', 'osx_framework_user',
-                  'posix_home', 'posix_prefix', 'posix_user')
+                  'posix_home', 'posix_prefix', 'posix_user', 'pypy')
         self.assertEqual(get_scheme_names(), wanted)
 
     def test_symlink(self):
diff --git a/lib-python/2.7/test/test_tarfile.py b/lib-python/2.7/test/test_tarfile.py
--- a/lib-python/2.7/test/test_tarfile.py
+++ b/lib-python/2.7/test/test_tarfile.py
@@ -169,6 +169,7 @@
         except tarfile.ReadError:
             self.fail("tarfile.open() failed on empty archive")
         self.assertListEqual(tar.getmembers(), [])
+        tar.close()
 
     def test_null_tarfile(self):
         # Test for issue6123: Allow opening empty archives.
@@ -207,16 +208,21 @@
         fobj = open(self.tarname, "rb")
         tar = tarfile.open(fileobj=fobj, mode=self.mode)
         self.assertEqual(tar.name, os.path.abspath(fobj.name))
+        tar.close()
 
     def test_no_name_attribute(self):
-        data = open(self.tarname, "rb").read()
+        f = open(self.tarname, "rb")
+        data = f.read()
+        f.close()
         fobj = StringIO.StringIO(data)
         self.assertRaises(AttributeError, getattr, fobj, "name")
         tar = tarfile.open(fileobj=fobj, mode=self.mode)
         self.assertEqual(tar.name, None)
 
     def test_empty_name_attribute(self):
-        data = open(self.tarname, "rb").read()
+        f = open(self.tarname, "rb")
+        data = f.read()
+        f.close()
         fobj = StringIO.StringIO(data)
         fobj.name = ""
         tar = tarfile.open(fileobj=fobj, mode=self.mode)
@@ -515,6 +521,7 @@
         self.tar = tarfile.open(self.tarname, mode=self.mode, encoding="iso8859-1")
         tarinfo = self.tar.getmember("pax/umlauts-ÄÖÜäöüß")
         self._test_member(tarinfo, size=7011, chksum=md5_regtype)
+        self.tar.close()
 
 
 class LongnameTest(ReadTest):
@@ -675,6 +682,7 @@
             tar = tarfile.open(tmpname, self.mode)
             tarinfo = tar.gettarinfo(path)
             self.assertEqual(tarinfo.size, 0)
+            tar.close()
         finally:
             os.rmdir(path)
 
@@ -692,6 +700,7 @@
             tar.gettarinfo(target)
             tarinfo = tar.gettarinfo(link)
             self.assertEqual(tarinfo.size, 0)
+            tar.close()
         finally:
             os.remove(target)
             os.remove(link)
@@ -704,6 +713,7 @@
             tar = tarfile.open(tmpname, self.mode)
             tarinfo = tar.gettarinfo(path)
             self.assertEqual(tarinfo.size, 0)
+            tar.close()
         finally:
             os.remove(path)
 
@@ -722,6 +732,7 @@
         tar.add(dstname)
         os.chdir(cwd)
         self.assertTrue(tar.getnames() == [], "added the archive to itself")
+        tar.close()
 
     def test_exclude(self):
         tempdir = os.path.join(TEMPDIR, "exclude")
@@ -742,6 +753,7 @@
             tar = tarfile.open(tmpname, "r")
             self.assertEqual(len(tar.getmembers()), 1)
             self.assertEqual(tar.getnames()[0], "empty_dir")
+            tar.close()
         finally:
             shutil.rmtree(tempdir)
 
@@ -947,7 +959,9 @@
             fobj.close()
         elif self.mode.endswith("bz2"):
             dec = bz2.BZ2Decompressor()
-            data = open(tmpname, "rb").read()
+            f = open(tmpname, "rb")
+            data = f.read()
+            f.close()
             data = dec.decompress(data)
             self.assertTrue(len(dec.unused_data) == 0,
                     "found trailing data")
@@ -1026,6 +1040,7 @@
                     "unable to read longname member")
             self.assertEqual(tarinfo.linkname, member.linkname,
                     "unable to read longname member")
+        tar.close()
 
     def test_longname_1023(self):
         self._test(("longnam/" * 127) + "longnam")
@@ -1118,6 +1133,7 @@
         else:
             n = tar.getmembers()[0].name
             self.assertTrue(name == n, "PAX longname creation failed")
+        tar.close()
 
     def test_pax_global_header(self):
         pax_headers = {
@@ -1146,6 +1162,7 @@
                 tarfile.PAX_NUMBER_FIELDS[key](val)
             except (TypeError, ValueError):
                 self.fail("unable to convert pax header field")
+        tar.close()
 
     def test_pax_extended_header(self):
         # The fields from the pax header have priority over the
@@ -1165,6 +1182,7 @@
         self.assertEqual(t.pax_headers, pax_headers)
         self.assertEqual(t.name, "foo")
         self.assertEqual(t.uid, 123)
+        tar.close()
 
 
 class UstarUnicodeTest(unittest.TestCase):
@@ -1208,6 +1226,7 @@
         tarinfo.name = "foo"
         tarinfo.uname = u"äöü"
         self.assertRaises(UnicodeError, tar.addfile, tarinfo)
+        tar.close()
 
     def test_unicode_argument(self):
         tar = tarfile.open(tarname, "r", encoding="iso8859-1", errors="strict")
@@ -1262,6 +1281,7 @@
             tar = tarfile.open(tmpname, format=self.format, encoding="ascii",
                                errors=handler)
             self.assertEqual(tar.getnames()[0], name)
+            tar.close()
 
         self.assertRaises(UnicodeError, tarfile.open, tmpname,
                           encoding="ascii", errors="strict")
@@ -1274,6 +1294,7 @@
         tar = tarfile.open(tmpname, format=self.format, encoding="iso8859-1",
                            errors="utf-8")
         self.assertEqual(tar.getnames()[0], "äöü/" + u"ß".encode("utf8"))
+        tar.close()
 
 
 class AppendTest(unittest.TestCase):
@@ -1301,6 +1322,7 @@
     def _test(self, names=["bar"], fileobj=None):
         tar = tarfile.open(self.tarname, fileobj=fileobj)
         self.assertEqual(tar.getnames(), names)
+        tar.close()
 
     def test_non_existing(self):
         self._add_testfile()
@@ -1319,7 +1341,9 @@
 
     def test_fileobj(self):
         self._create_testtar()
-        data = open(self.tarname).read()
+        f = open(self.tarname)
+        data = f.read()
+        f.close()
         fobj = StringIO.StringIO(data)
         self._add_testfile(fobj)
         fobj.seek(0)
@@ -1345,7 +1369,9 @@
     # Append mode is supposed to fail if the tarfile to append to
     # does not end with a zero block.
     def _test_error(self, data):
-        open(self.tarname, "wb").write(data)
+        f = open(self.tarname, "wb")
+        f.write(data)
+        f.close()
         self.assertRaises(tarfile.ReadError, self._add_testfile)
 
     def test_null(self):
diff --git a/lib-python/2.7/test/test_tempfile.py b/lib-python/2.7/test/test_tempfile.py
--- a/lib-python/2.7/test/test_tempfile.py
+++ b/lib-python/2.7/test/test_tempfile.py
@@ -23,8 +23,8 @@
 # TEST_FILES may need to be tweaked for systems depending on the maximum
 # number of files that can be opened at one time (see ulimit -n)
-if sys.platform in ('openbsd3', 'openbsd4'):
-    TEST_FILES = 48
+if sys.platform.startswith("openbsd"):
+    TEST_FILES = 64  # ulimit -n defaults to 128 for normal users
 else:
     TEST_FILES = 100
 
@@ -244,6 +244,7 @@
         dir = tempfile.mkdtemp()
         try:
             self.do_create(dir=dir).write("blat")
+            test_support.gc_collect()
         finally:
             os.rmdir(dir)
 
@@ -528,12 +529,15 @@
         self.do_create(suf="b")
         self.do_create(pre="a", suf="b")
         self.do_create(pre="aa", suf=".txt")
+        test_support.gc_collect()
 
     def test_many(self):
         # mktemp can choose many usable file names (stochastic)
         extant = range(TEST_FILES)
         for i in extant:
             extant[i] = self.do_create(pre="aa")
+        del extant
+        test_support.gc_collect()
 
 ##     def test_warning(self):
 ##         # mktemp issues a warning when used
diff --git a/lib-python/2.7/test/test_thread.py b/lib-python/2.7/test/test_thread.py
--- a/lib-python/2.7/test/test_thread.py
+++ b/lib-python/2.7/test/test_thread.py
@@ -128,6 +128,7 @@
         del task
         while not done:
             time.sleep(0.01)
+        test_support.gc_collect()
         self.assertEqual(thread._count(), orig)
 
 
diff --git a/lib-python/2.7/test/test_threading.py b/lib-python/2.7/test/test_threading.py
--- a/lib-python/2.7/test/test_threading.py
+++ b/lib-python/2.7/test/test_threading.py
@@ -161,6 +161,7 @@
 
     # PyThreadState_SetAsyncExc() is a CPython-only gimmick, not (currently)
     # exposed at the Python level.  This test relies on ctypes to get at it.
+    @test.test_support.cpython_only
     def test_PyThreadState_SetAsyncExc(self):
         try:
             import ctypes
@@ -266,6 +267,7 @@
         finally:
             threading._start_new_thread = _start_new_thread
 
+    @test.test_support.cpython_only
     def test_finalize_runnning_thread(self):
         # Issue 1402: the PyGILState_Ensure / _Release functions may be called
         # very late on python exit: on deallocation of a running thread for
@@ -383,6 +385,7 @@
         finally:
             sys.setcheckinterval(old_interval)
 
+    @test.test_support.cpython_only
     def test_no_refcycle_through_target(self):
         class RunSelfFunction(object):
             def __init__(self, should_raise):
@@ -425,6 +428,9 @@
             def joiningfunc(mainthread):
                 mainthread.join()
                 print 'end of thread'
+                # stdout is fully buffered because not a tty, we have to flush
+                # before exit.
+                sys.stdout.flush()
         \n""" + script
 
         p = subprocess.Popen([sys.executable, "-c", script],
                              stdout=subprocess.PIPE)
diff --git a/lib-python/2.7/test/test_threading_local.py b/lib-python/2.7/test/test_threading_local.py
--- a/lib-python/2.7/test/test_threading_local.py
+++ b/lib-python/2.7/test/test_threading_local.py
@@ -173,8 +173,9 @@
         obj = cls()
         obj.x = 5
         self.assertEqual(obj.__dict__, {'x': 5})
-        with self.assertRaises(AttributeError):
-            obj.__dict__ = {}
+        if test_support.check_impl_detail():
+            with self.assertRaises(AttributeError):
+                obj.__dict__ = {}
         with self.assertRaises(AttributeError):
             del obj.__dict__
 
diff --git a/lib-python/2.7/test/test_traceback.py b/lib-python/2.7/test/test_traceback.py
--- a/lib-python/2.7/test/test_traceback.py
+++ b/lib-python/2.7/test/test_traceback.py
@@ -5,7 +5,8 @@
 import sys
 import unittest
 from imp import reload
-from test.test_support import run_unittest, is_jython, Error
+from test.test_support import run_unittest, Error
+from test.test_support import impl_detail, check_impl_detail
 
 import traceback
 
@@ -49,10 +50,8 @@
         self.assertTrue(err[2].count('\n') == 1)   # and no additional newline
         self.assertTrue(err[1].find("+") == err[2].find("^"))  # in the right place
 
+    @impl_detail("other implementations may add a caret (why shouldn't they?)")
     def test_nocaret(self):
-        if is_jython:
-            # jython adds a caret in this case (why shouldn't it?)
-            return
         err = self.get_exception_format(self.syntax_error_without_caret,
                                         SyntaxError)
         self.assertTrue(len(err) == 3)
@@ -63,8 +62,11 @@
                                         IndentationError)
         self.assertTrue(len(err) == 4)
         self.assertTrue(err[1].strip() == "print 2")
-        self.assertIn("^", err[2])
-        self.assertTrue(err[1].find("2") == err[2].find("^"))
+        if check_impl_detail():
+            # on CPython, there is a "^" at the end of the line
+            # on PyPy, there is a "^" too, but at the start, more logically
+            self.assertIn("^", err[2])
+            self.assertTrue(err[1].find("2") == err[2].find("^"))
 
     def test_bug737473(self):
         import os, tempfile, time
@@ -74,7 +76,8 @@
         try:
             sys.path.insert(0, testdir)
             testfile = os.path.join(testdir, 'test_bug737473.py')
-            print >> open(testfile, 'w'), """
+            with open(testfile, 'w') as f:
+                print >> f, """
 def test():
     raise ValueError"""
 
@@ -96,7 +99,8 @@
             # three seconds are needed for this test to pass reliably :-(
             time.sleep(4)
-            print >> open(testfile, 'w'), """
+            with open(testfile, 'w') as f:
+                print >> f, """
 def test():
     raise NotImplementedError"""
             reload(test_bug737473)
diff --git a/lib-python/2.7/test/test_types.py b/lib-python/2.7/test/test_types.py
--- a/lib-python/2.7/test/test_types.py
+++ b/lib-python/2.7/test/test_types.py
@@ -1,7 +1,8 @@
 # Python test set -- part 6, built-in types
 
 from test.test_support import run_unittest, have_unicode, run_with_locale, \
-                              check_py3k_warnings
+                              check_py3k_warnings, \
+                              impl_detail, check_impl_detail
 
 import unittest
 import sys
 import locale
@@ -289,9 +290,14 @@
         # array.array() returns an object that does not implement a char buffer,
         # something which int() uses for conversion.
         import array
-        try: int(buffer(array.array('c')))
+        try: int(buffer(array.array('c', '5')))
         except TypeError: pass
-        else: self.fail("char buffer (at C level) not working")
+        else:
+            if check_impl_detail():
+                self.fail("char buffer (at C level) not working")
+            #else:
+            #    it works on PyPy, which does not have the distinction
+            #    between char buffer and binary buffer.  XXX fine enough?
 
     def test_int__format__(self):
         def test(i, format_spec, result):
@@ -741,6 +747,7 @@
         for code in 'xXobns':
             self.assertRaises(ValueError, format, 0, ',' + code)
 
+    @impl_detail("the types' internal size attributes are CPython-only")
     def test_internal_sizes(self):
         self.assertGreater(object.__basicsize__, 0)
         self.assertGreater(tuple.__itemsize__, 0)
diff --git a/lib-python/2.7/test/test_unicode.py b/lib-python/2.7/test/test_unicode.py
--- a/lib-python/2.7/test/test_unicode.py
+++ b/lib-python/2.7/test/test_unicode.py
@@ -448,10 +448,11 @@
             meth('\xff')
             with self.assertRaises(TypeError) as cm:
                 meth(['f'])
-            exc = str(cm.exception)
-            self.assertIn('unicode', exc)
-            self.assertIn('str', exc)
-            self.assertIn('tuple', exc)
+            if test_support.check_impl_detail():
+                exc = str(cm.exception)
+                self.assertIn('unicode', exc)
+                self.assertIn('str', exc)
+                self.assertIn('tuple', exc)
 
     @test_support.run_with_locale('LC_ALL', 'de_DE', 'fr_FR')
     def test_format_float(self):
@@ -1062,7 +1063,8 @@
         # to take a 64-bit long, this test should apply to all platforms.
         if sys.maxint > (1 << 32) or struct.calcsize('P') != 4:
             return
-        self.assertRaises(OverflowError, u't\tt\t'.expandtabs, sys.maxint)
+        self.assertRaises((OverflowError, MemoryError),
+                          u't\tt\t'.expandtabs, sys.maxint)
 
     def test__format__(self):
         def test(value, format, expected):
diff --git a/lib-python/2.7/test/test_unicodedata.py b/lib-python/2.7/test/test_unicodedata.py
--- a/lib-python/2.7/test/test_unicodedata.py
+++ b/lib-python/2.7/test/test_unicodedata.py
@@ -233,10 +233,12 @@
         # been loaded in this process.
         popen = subprocess.Popen(args, stderr=subprocess.PIPE)
         popen.wait()
-        self.assertEqual(popen.returncode, 1)
-        error = "SyntaxError: (unicode error) \N escapes not supported " \
-            "(can't load unicodedata module)"
-        self.assertIn(error, popen.stderr.read())
+        self.assertIn(popen.returncode, [0, 1])  # at least it did not segfault
+        if test.test_support.check_impl_detail():
+            self.assertEqual(popen.returncode, 1)
+            error = "SyntaxError: (unicode error) \N escapes not supported " \
+                "(can't load unicodedata module)"
+            self.assertIn(error, popen.stderr.read())
 
     def test_decimal_numeric_consistent(self):
         # Test that decimal and numeric are consistent,
diff --git a/lib-python/2.7/test/test_unpack.py b/lib-python/2.7/test/test_unpack.py
--- a/lib-python/2.7/test/test_unpack.py
+++ b/lib-python/2.7/test/test_unpack.py
@@ -62,14 +62,14 @@
     >>> a, b = t
     Traceback (most recent call last):
       ...
-    ValueError: too many values to unpack
+    ValueError: expected length 2, got 3
 
 Unpacking tuple of wrong size
 
     >>> a, b = l
     Traceback (most recent call last):
       ...
-    ValueError: too many values to unpack
+    ValueError: expected length 2, got 3
 
 Unpacking sequence too short
 
diff --git a/lib-python/2.7/test/test_urllib2.py b/lib-python/2.7/test/test_urllib2.py
--- a/lib-python/2.7/test/test_urllib2.py
+++ b/lib-python/2.7/test/test_urllib2.py
@@ -307,6 +307,9 @@
     def getresponse(self):
         return MockHTTPResponse(MockFile(), {}, 200, "OK")
 
+    def close(self):
+        pass
+
 class MockHandler:
     # useful for testing handler machinery
     # see add_ordered_mock_handlers() docstring
diff --git a/lib-python/2.7/test/test_warnings.py b/lib-python/2.7/test/test_warnings.py
--- a/lib-python/2.7/test/test_warnings.py
+++ b/lib-python/2.7/test/test_warnings.py
@@ -355,7 +355,8 @@
     # test_support.import_fresh_module utility function
     def test_accelerated(self):
         self.assertFalse(original_warnings is self.module)
-        self.assertFalse(hasattr(self.module.warn, 'func_code'))
+        self.assertFalse(hasattr(self.module.warn, 'func_code') and
+                         hasattr(self.module.warn.func_code, 'co_filename'))
 
 class PyWarnTests(BaseTest, WarnTests):
     module = py_warnings
@@ -364,7 +365,8 @@
     # test_support.import_fresh_module utility function
     def test_pure_python(self):
         self.assertFalse(original_warnings is self.module)
-        self.assertTrue(hasattr(self.module.warn, 'func_code'))
+        self.assertTrue(hasattr(self.module.warn, 'func_code') and
+                        hasattr(self.module.warn.func_code, 'co_filename'))
 
 class WCmdLineTests(unittest.TestCase):
 
diff --git a/lib-python/2.7/test/test_weakref.py b/lib-python/2.7/test/test_weakref.py
--- a/lib-python/2.7/test/test_weakref.py
+++ b/lib-python/2.7/test/test_weakref.py
@@ -1,4 +1,3 @@
-import gc
 import sys
 import unittest
 import UserList
@@ -6,6 +5,7 @@
 import operator
 
 from test import test_support
+from test.test_support import gc_collect
 
 # Used in ReferencesTestCase.test_ref_created_during_del() .
 ref_from_del = None
 
@@ -70,6 +70,7 @@
         ref1 = weakref.ref(o, self.callback)
         ref2 = weakref.ref(o, self.callback)
         del o
+        gc_collect()
         self.assertTrue(ref1() is None,
                      "expected reference to be invalidated")
         self.assertTrue(ref2() is None,
@@ -101,13 +102,16 @@
         ref1 = weakref.proxy(o, self.callback)
         ref2 = weakref.proxy(o, self.callback)
         del o
+        gc_collect()
 
         def check(proxy):
             proxy.bar
 
         self.assertRaises(weakref.ReferenceError, check, ref1)
         self.assertRaises(weakref.ReferenceError, check, ref2)
-        self.assertRaises(weakref.ReferenceError, bool, weakref.proxy(C()))
+        ref3 = weakref.proxy(C())
+        gc_collect()
+        self.assertRaises(weakref.ReferenceError, bool, ref3)
         self.assertTrue(self.cbcalled == 2)
 
     def check_basic_ref(self, factory):
@@ -124,6 +128,7 @@
         o = factory()
         ref = weakref.ref(o, self.callback)
         del o
+        gc_collect()
         self.assertTrue(self.cbcalled == 1,
                      "callback did not properly set 'cbcalled'")
         self.assertTrue(ref() is None,
@@ -148,6 +153,7 @@
         self.assertTrue(weakref.getweakrefcount(o) == 2,
                      "wrong weak ref count for object")
         del proxy
+        gc_collect()
         self.assertTrue(weakref.getweakrefcount(o) == 1,
                      "wrong weak ref count for object after deleting proxy")
 
@@ -325,6 +331,7 @@
                      "got wrong number of weak reference objects")
 
         del ref1, ref2, proxy1, proxy2
+        gc_collect()
         self.assertTrue(weakref.getweakrefcount(o) == 0,
                      "weak reference objects not unlinked from"
                      " referent when discarded.")
@@ -338,6 +345,7 @@
         ref1 = weakref.ref(o, self.callback)
         ref2 = weakref.ref(o, self.callback)
         del ref1
+        gc_collect()
         self.assertTrue(weakref.getweakrefs(o) == [ref2],
                      "list of refs does not match")
 
@@ -345,10 +353,12 @@
         ref1 = weakref.ref(o, self.callback)
         ref2 = weakref.ref(o, self.callback)
         del ref2
+        gc_collect()
         self.assertTrue(weakref.getweakrefs(o) == [ref1],
                      "list of refs does not match")
 
         del ref1
+        gc_collect()
         self.assertTrue(weakref.getweakrefs(o) == [],
                      "list of refs not cleared")
 
@@ -400,13 +410,11 @@
         # when the second attempt to remove the instance from the "list
         # of all objects" occurs.
 
-        import gc
-
         class C(object):
             pass
 
         c = C()
-        wr = weakref.ref(c, lambda ignore: gc.collect())
+        wr = weakref.ref(c, lambda ignore: gc_collect())
         del c
 
         # There endeth the first part.  It gets worse.
@@ -414,7 +422,7 @@
 
         c1 = C()
         c1.i = C()
-        wr = weakref.ref(c1.i, lambda ignore: gc.collect())
+        wr = weakref.ref(c1.i, lambda ignore: gc_collect())
 
         c2 = C()
         c2.c1 = c1
@@ -430,8 +438,6 @@
         del c2
 
     def test_callback_in_cycle_1(self):
-        import gc
-
         class J(object):
             pass
 
@@ -467,11 +473,9 @@
         # search II.__mro__, but that's NULL.  The result was a segfault in
         # a release build, and an assert failure in a debug build.
         del I, J, II
-        gc.collect()
+        gc_collect()
 
     def test_callback_in_cycle_2(self):
-        import gc
-
         # This is just like test_callback_in_cycle_1, except that II is an
         # old-style class.  The symptom is different then:  an instance of an
         # old-style class looks in its own __dict__ first.  'J' happens to
@@ -496,11 +500,9 @@
         I.wr = weakref.ref(J, I.acallback)
 
         del I, J, II
-        gc.collect()
+        gc_collect()
 
     def test_callback_in_cycle_3(self):
-        import gc
-
         # This one broke the first patch that fixed the last two.  In this
         # case, the objects reachable from the callback aren't also reachable
         # from the object (c1) *triggering* the callback:  you can get to
@@ -520,11 +522,9 @@
         c2.wr = weakref.ref(c1, c2.cb)
 
         del c1, c2
-        gc.collect()
+        gc_collect()
 
     def test_callback_in_cycle_4(self):
-        import gc
-
         # Like test_callback_in_cycle_3, except c2 and c1 have different
         # classes.  c2's class (C) isn't reachable from c1 then, so protecting
         # objects reachable from the dying object (c1) isn't enough to stop
@@ -548,11 +548,9 @@
         c2.wr = weakref.ref(c1, c2.cb)
 
         del c1, c2, C, D
-        gc.collect()
+        gc_collect()
 
     def test_callback_in_cycle_resurrection(self):
-        import gc
-
         # Do something nasty in a weakref callback:  resurrect objects
         # from dead cycles.  For this to be attempted, the weakref and
         # its callback must also be part of the cyclic trash (else the
@@ -583,7 +581,7 @@
         del c1, c2, C   # make them all trash
         self.assertEqual(alist, [])  # del isn't enough to reclaim anything
 
-        gc.collect()
+        gc_collect()
         # c1.wr and c2.wr were part of the cyclic trash, so should have
         # been cleared without their callbacks executing.  OTOH, the weakref
         # to C is bound to a function local (wr), and wasn't trash, so that
@@ -593,12 +591,10 @@
         self.assertEqual(wr(), None)
 
         del alist[:]
-        gc.collect()
+        gc_collect()
         self.assertEqual(alist, [])
 
     def test_callbacks_on_callback(self):
-        import gc
-
         # Set up weakref callbacks *on* weakref callbacks.
         alist = []
         def safe_callback(ignore):
@@ -626,12 +622,12 @@
         del callback, c, d, C
         self.assertEqual(alist, [])  # del isn't enough to clean up cycles
-        gc.collect()
+        gc_collect()
         self.assertEqual(alist, ["safe_callback called"])
         self.assertEqual(external_wr(), None)
 
         del alist[:]
-        gc.collect()
+        gc_collect()
         self.assertEqual(alist, [])
 
     def test_gc_during_ref_creation(self):
@@ -641,9 +637,11 @@
         self.check_gc_during_creation(weakref.proxy)
 
     def check_gc_during_creation(self, makeref):
-        thresholds = gc.get_threshold()
-        gc.set_threshold(1, 1, 1)
-        gc.collect()
+        if test_support.check_impl_detail():
+            import gc
+            thresholds = gc.get_threshold()
+            gc.set_threshold(1, 1, 1)
+        gc_collect()
         class A:
             pass
 
@@ -663,7 +661,8 @@
                 weakref.ref(referenced, callback)
 
         finally:
-            gc.set_threshold(*thresholds)
+            if test_support.check_impl_detail():
+                gc.set_threshold(*thresholds)
 
     def test_ref_created_during_del(self):
         # Bug #1377858
@@ -683,7 +682,7 @@
         r = weakref.ref(Exception)
         self.assertRaises(TypeError, r.__init__, 0, 0, 0, 0, 0)
         # No exception should be raised here
-        gc.collect()
+        gc_collect()
 
     def test_classes(self):
         # Check that both old-style classes and new-style classes
@@ -696,12 +695,12 @@
         weakref.ref(int)
         a = weakref.ref(A, l.append)
         A = None
-        gc.collect()
+        gc_collect()
         self.assertEqual(a(), None)
self.assertEqual(l, [a]) b = weakref.ref(B, l.append) B = None - gc.collect() + gc_collect() self.assertEqual(b(), None) self.assertEqual(l, [a, b]) @@ -722,6 +721,7 @@ self.assertTrue(mr.called) self.assertEqual(mr.value, 24) del o + gc_collect() self.assertTrue(mr() is None) self.assertTrue(mr.called) @@ -738,9 +738,11 @@ self.assertEqual(weakref.getweakrefcount(o), 3) refs = weakref.getweakrefs(o) self.assertEqual(len(refs), 3) - self.assertTrue(r2 is refs[0]) - self.assertIn(r1, refs[1:]) - self.assertIn(r3, refs[1:]) + assert set(refs) == set((r1, r2, r3)) + if test_support.check_impl_detail(): + self.assertTrue(r2 is refs[0]) + self.assertIn(r1, refs[1:]) + self.assertIn(r3, refs[1:]) def test_subclass_refs_dont_conflate_callbacks(self): class MyRef(weakref.ref): @@ -839,15 +841,18 @@ del items1, items2 self.assertTrue(len(dict) == self.COUNT) del objects[0] + gc_collect() self.assertTrue(len(dict) == (self.COUNT - 1), "deleting object did not cause dictionary update") del objects, o + gc_collect() self.assertTrue(len(dict) == 0, "deleting the values did not clear the dictionary") # regression on SF bug #447152: dict = weakref.WeakValueDictionary() self.assertRaises(KeyError, dict.__getitem__, 1) dict[2] = C() + gc_collect() self.assertRaises(KeyError, dict.__getitem__, 2) def test_weak_keys(self): @@ -868,9 +873,11 @@ del items1, items2 self.assertTrue(len(dict) == self.COUNT) del objects[0] + gc_collect() self.assertTrue(len(dict) == (self.COUNT - 1), "deleting object did not cause dictionary update") del objects, o + gc_collect() self.assertTrue(len(dict) == 0, "deleting the keys did not clear the dictionary") o = Object(42) @@ -986,13 +993,13 @@ self.assertTrue(len(weakdict) == 2) k, v = weakdict.popitem() self.assertTrue(len(weakdict) == 1) - if k is key1: + if k == key1: self.assertTrue(v is value1) else: self.assertTrue(v is value2) k, v = weakdict.popitem() self.assertTrue(len(weakdict) == 0) - if k is key1: + if k == key1: self.assertTrue(v is 
value1) else: self.assertTrue(v is value2) @@ -1137,6 +1144,7 @@ for o in objs: count += 1 del d[o] + gc_collect() self.assertEqual(len(d), 0) self.assertEqual(count, 2) @@ -1177,6 +1185,7 @@ >>> o is o2 True >>> del o, o2 +>>> gc_collect() >>> print r() None @@ -1229,6 +1238,7 @@ >>> id2obj(a_id) is a True >>> del a +>>> gc_collect() >>> try: ... id2obj(a_id) ... except KeyError: diff --git a/lib-python/2.7/test/test_weakset.py b/lib-python/2.7/test/test_weakset.py --- a/lib-python/2.7/test/test_weakset.py +++ b/lib-python/2.7/test/test_weakset.py @@ -57,6 +57,7 @@ self.assertEqual(len(self.s), len(self.d)) self.assertEqual(len(self.fs), 1) del self.obj + test_support.gc_collect() self.assertEqual(len(self.fs), 0) def test_contains(self): @@ -66,6 +67,7 @@ self.assertNotIn(1, self.s) self.assertIn(self.obj, self.fs) del self.obj + test_support.gc_collect() self.assertNotIn(SomeClass('F'), self.fs) def test_union(self): @@ -204,6 +206,7 @@ self.assertEqual(self.s, dup) self.assertRaises(TypeError, self.s.add, []) self.fs.add(Foo()) + test_support.gc_collect() self.assertTrue(len(self.fs) == 1) self.fs.add(self.obj) self.assertTrue(len(self.fs) == 1) @@ -330,10 +333,11 @@ next(it) # Trigger internal iteration # Destroy an item del items[-1] - gc.collect() # just in case + test_support.gc_collect() # We have removed either the first consumed items, or another one self.assertIn(len(list(it)), [len(items), len(items) - 1]) del it + test_support.gc_collect() # The removal has been committed self.assertEqual(len(s), len(items)) diff --git a/lib-python/2.7/test/test_xml_etree.py b/lib-python/2.7/test/test_xml_etree.py --- a/lib-python/2.7/test/test_xml_etree.py +++ b/lib-python/2.7/test/test_xml_etree.py @@ -1633,10 +1633,10 @@ Check reference leak. >>> xmltoolkit63() - >>> count = sys.getrefcount(None) + >>> count = sys.getrefcount(None) #doctest: +SKIP >>> for i in range(1000): ... 
xmltoolkit63() - >>> sys.getrefcount(None) - count + >>> sys.getrefcount(None) - count #doctest: +SKIP 0 """ diff --git a/lib-python/2.7/test/test_xmlrpc.py b/lib-python/2.7/test/test_xmlrpc.py --- a/lib-python/2.7/test/test_xmlrpc.py +++ b/lib-python/2.7/test/test_xmlrpc.py @@ -308,7 +308,7 @@ global ADDR, PORT, URL ADDR, PORT = serv.socket.getsockname() #connect to IP address directly. This avoids socket.create_connection() - #trying to connect to "localhost" using all address families, which + #trying to connect to to "localhost" using all address families, which #causes slowdown e.g. on vista which supports AF_INET6. The server listens #on AF_INET only. URL = "http://%s:%d"%(ADDR, PORT) @@ -367,7 +367,7 @@ global ADDR, PORT, URL ADDR, PORT = serv.socket.getsockname() #connect to IP address directly. This avoids socket.create_connection() - #trying to connect to "localhost" using all address families, which + #trying to connect to to "localhost" using all address families, which #causes slowdown e.g. on vista which supports AF_INET6. The server listens #on AF_INET only. URL = "http://%s:%d"%(ADDR, PORT) @@ -435,6 +435,7 @@ def tearDown(self): # wait on the server thread to terminate + test_support.gc_collect() # to close the active connections self.evt.wait(10) # disable traceback reporting @@ -472,9 +473,6 @@ # protocol error; provide additional information in test output self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) - def test_unicode_host(self): - server = xmlrpclib.ServerProxy(u"http://%s:%d/RPC2"%(ADDR, PORT)) - self.assertEqual(server.add("a", u"\xe9"), u"a\xe9") # [ch] The test 404 is causing lots of false alarms. def XXXtest_404(self): @@ -589,12 +587,6 @@ # This avoids waiting for the socket timeout. self.test_simple1() - def test_partial_post(self): - # Check that a partial POST doesn't make the server loop: issue #14001. 
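The `test_support.gc_collect()` call added to `tearDown` above (and throughout the weakref tests earlier in this diff) reflects that PyPy's garbage collector is not reference-counting: objects whose finalizers release resources such as sockets may linger until a collection actually runs. A minimal sketch of the pattern, with a hypothetical `Conn` class standing in for a connection object:

```python
import gc

closed = []

class Conn(object):
    """Hypothetical stand-in for an object whose __del__ closes a socket."""
    def __del__(self):
        closed.append('closed')

c = Conn()
del c
# On CPython the finalizer has already run at this point (refcounting);
# on PyPy it may not have. Forcing a collection makes finalizer-dependent
# cleanup deterministic on both implementations:
gc.collect()
assert closed == ['closed']
```

This is the same reason the plain `gc.collect()` calls in the stdlib tests are replaced by the `gc_collect()` helper, which can loop or wait as needed on implementations with deferred finalization.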
- conn = httplib.HTTPConnection(ADDR, PORT) - conn.request('POST', '/RPC2 HTTP/1.0\r\nContent-Length: 100\r\n\r\nbye') - conn.close() - class MultiPathServerTestCase(BaseServerTestCase): threadFunc = staticmethod(http_multi_server) request_count = 2 diff --git a/lib-python/2.7/test/test_zlib.py b/lib-python/2.7/test/test_zlib.py --- a/lib-python/2.7/test/test_zlib.py +++ b/lib-python/2.7/test/test_zlib.py @@ -1,6 +1,7 @@ import unittest from test.test_support import TESTFN, run_unittest, import_module, unlink, requires import binascii +import os import random from test.test_support import precisionbigmemtest, _1G, _4G import sys @@ -99,14 +100,7 @@ class BaseCompressTestCase(object): def check_big_compress_buffer(self, size, compress_func): - _1M = 1024 * 1024 - fmt = "%%0%dx" % (2 * _1M) - # Generate 10MB worth of random, and expand it by repeating it. - # The assumption is that zlib's memory is not big enough to exploit - # such spread out redundancy. - data = ''.join([binascii.a2b_hex(fmt % random.getrandbits(8 * _1M)) - for i in range(10)]) - data = data * (size // len(data) + 1) + data = os.urandom(size) try: compress_func(data) finally: diff --git a/lib-python/2.7/trace.py b/lib-python/2.7/trace.py --- a/lib-python/2.7/trace.py +++ b/lib-python/2.7/trace.py @@ -559,6 +559,10 @@ if len(funcs) == 1: dicts = [d for d in gc.get_referrers(funcs[0]) if isinstance(d, dict)] + if len(dicts) == 0: + # PyPy may store functions directly on the class + # (more exactly: the container is not a Python object) + dicts = funcs if len(dicts) == 1: classes = [c for c in gc.get_referrers(dicts[0]) if hasattr(c, "__bases__")] diff --git a/lib-python/2.7/urllib2.py b/lib-python/2.7/urllib2.py --- a/lib-python/2.7/urllib2.py +++ b/lib-python/2.7/urllib2.py @@ -1171,6 +1171,7 @@ except TypeError: #buffering kw not supported r = h.getresponse() except socket.error, err: # XXX what error? 
+ h.close() raise URLError(err) # Pick apart the HTTPResponse object to get the addinfourl diff --git a/lib-python/2.7/uuid.py b/lib-python/2.7/uuid.py --- a/lib-python/2.7/uuid.py +++ b/lib-python/2.7/uuid.py @@ -406,8 +406,12 @@ continue if hasattr(lib, 'uuid_generate_random'): _uuid_generate_random = lib.uuid_generate_random + _uuid_generate_random.argtypes = [ctypes.c_char * 16] + _uuid_generate_random.restype = None if hasattr(lib, 'uuid_generate_time'): _uuid_generate_time = lib.uuid_generate_time + _uuid_generate_time.argtypes = [ctypes.c_char * 16] + _uuid_generate_time.restype = None # The uuid_generate_* functions are broken on MacOS X 10.5, as noted # in issue #8621 the function generates the same sequence of values @@ -436,6 +440,9 @@ lib = None _UuidCreate = getattr(lib, 'UuidCreateSequential', getattr(lib, 'UuidCreate', None)) + if _UuidCreate is not None: + _UuidCreate.argtypes = [ctypes.c_char * 16] + _UuidCreate.restype = ctypes.c_int except: pass diff --git a/lib-python/3.2/__future__.py b/lib-python/3.2/__future__.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/__future__.py @@ -0,0 +1,134 @@ +"""Record of phased-in incompatible language changes. + +Each line is of the form: + + FeatureName = "_Feature(" OptionalRelease "," MandatoryRelease "," + CompilerFlag ")" + +where, normally, OptionalRelease < MandatoryRelease, and both are 5-tuples +of the same form as sys.version_info: + + (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int + PY_MINOR_VERSION, # the 1; an int + PY_MICRO_VERSION, # the 0; an int + PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string + PY_RELEASE_SERIAL # the 3; an int + ) + +OptionalRelease records the first release in which + + from __future__ import FeatureName + +was accepted. + +In the case of MandatoryReleases that have not yet occurred, +MandatoryRelease predicts the release in which the feature will become part +of the language. 
+ +Else MandatoryRelease records when the feature became part of the language; +in releases at or after that, modules no longer need + + from __future__ import FeatureName + +to use the feature in question, but may continue to use such imports. + +MandatoryRelease may also be None, meaning that a planned feature got +dropped. + +Instances of class _Feature have two corresponding methods, +.getOptionalRelease() and .getMandatoryRelease(). + +CompilerFlag is the (bitfield) flag that should be passed in the fourth +argument to the builtin function compile() to enable the feature in +dynamically compiled code. This flag is stored in the .compiler_flag +attribute on _Future instances. These values must match the appropriate +#defines of CO_xxx flags in Include/compile.h. + +No feature line is ever to be deleted from this file. +""" + +all_feature_names = [ + "nested_scopes", + "generators", + "division", + "absolute_import", + "with_statement", + "print_function", + "unicode_literals", + "barry_as_FLUFL", +] + +__all__ = ["all_feature_names"] + all_feature_names + +# The CO_xxx symbols are defined here under the same names used by +# compile.h, so that an editor search will find them here. However, +# they're not exported in __all__, because they don't really belong to +# this module. 
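The `CompilerFlag` mechanism described in the docstring above can be exercised directly. A small sketch (on Python 3, true division is already mandatory, so the flag is a no-op there, but the call shape is the same):

```python
import __future__

# The fourth argument of compile() accepts a feature's compiler_flag to
# enable that feature in dynamically compiled code:
code = compile("x = 7 / 2", "<string>", "exec",
               __future__.division.compiler_flag)
ns = {}
exec(code, ns)
assert ns["x"] == 3.5
```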
+CO_NESTED = 0x0010 # nested_scopes +CO_GENERATOR_ALLOWED = 0 # generators (obsolete, was 0x1000) +CO_FUTURE_DIVISION = 0x2000 # division +CO_FUTURE_ABSOLUTE_IMPORT = 0x4000 # perform absolute imports by default +CO_FUTURE_WITH_STATEMENT = 0x8000 # with statement +CO_FUTURE_PRINT_FUNCTION = 0x10000 # print function +CO_FUTURE_UNICODE_LITERALS = 0x20000 # unicode string literals +CO_FUTURE_BARRY_AS_BDFL = 0x40000 + +class _Feature: + def __init__(self, optionalRelease, mandatoryRelease, compiler_flag): + self.optional = optionalRelease + self.mandatory = mandatoryRelease + self.compiler_flag = compiler_flag + + def getOptionalRelease(self): + """Return first release in which this feature was recognized. + + This is a 5-tuple, of the same form as sys.version_info. + """ + + return self.optional + + def getMandatoryRelease(self): + """Return release in which this feature will become mandatory. + + This is a 5-tuple, of the same form as sys.version_info, or, if + the feature was dropped, is None. 
+ """ + + return self.mandatory + + def __repr__(self): + return "_Feature" + repr((self.optional, + self.mandatory, + self.compiler_flag)) + +nested_scopes = _Feature((2, 1, 0, "beta", 1), + (2, 2, 0, "alpha", 0), + CO_NESTED) + +generators = _Feature((2, 2, 0, "alpha", 1), + (2, 3, 0, "final", 0), + CO_GENERATOR_ALLOWED) + +division = _Feature((2, 2, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_DIVISION) + +absolute_import = _Feature((2, 5, 0, "alpha", 1), + (2, 7, 0, "alpha", 0), + CO_FUTURE_ABSOLUTE_IMPORT) + +with_statement = _Feature((2, 5, 0, "alpha", 1), + (2, 6, 0, "alpha", 0), + CO_FUTURE_WITH_STATEMENT) + +print_function = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_PRINT_FUNCTION) + +unicode_literals = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_UNICODE_LITERALS) + +barry_as_FLUFL = _Feature((3, 1, 0, "alpha", 2), + (3, 9, 0, "alpha", 0), + CO_FUTURE_BARRY_AS_BDFL) diff --git a/lib-python/3.2/__phello__.foo.py b/lib-python/3.2/__phello__.foo.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/__phello__.foo.py @@ -0,0 +1,1 @@ +# This file exists as a helper for the test.test_frozen module. diff --git a/lib-python/3.2/_abcoll.py b/lib-python/3.2/_abcoll.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_abcoll.py @@ -0,0 +1,623 @@ +# Copyright 2007 Google, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +"""Abstract Base Classes (ABCs) for collections, according to PEP 3119. + +DON'T USE THIS MODULE DIRECTLY! The classes here should be imported +via collections; they are defined here only to alleviate certain +bootstrapping issues. Unit tests are in test_collections. 
+""" + +from abc import ABCMeta, abstractmethod +import sys + +__all__ = ["Hashable", "Iterable", "Iterator", + "Sized", "Container", "Callable", + "Set", "MutableSet", + "Mapping", "MutableMapping", + "MappingView", "KeysView", "ItemsView", "ValuesView", + "Sequence", "MutableSequence", + "ByteString", + ] + + +### collection related types which are not exposed through builtin ### +## iterators ## +bytes_iterator = type(iter(b'')) +bytearray_iterator = type(iter(bytearray())) +#callable_iterator = ??? +dict_keyiterator = type(iter({}.keys())) +dict_valueiterator = type(iter({}.values())) +dict_itemiterator = type(iter({}.items())) +list_iterator = type(iter([])) +list_reverseiterator = type(iter(reversed([]))) +range_iterator = type(iter(range(0))) +set_iterator = type(iter(set())) +str_iterator = type(iter("")) +tuple_iterator = type(iter(())) +zip_iterator = type(iter(zip())) +## views ## +dict_keys = type({}.keys()) +dict_values = type({}.values()) +dict_items = type({}.items()) +## misc ## +dict_proxy = type(type.__dict__) + + +### ONE-TRICK PONIES ### + +class Hashable(metaclass=ABCMeta): + + @abstractmethod + def __hash__(self): + return 0 + + @classmethod + def __subclasshook__(cls, C): + if cls is Hashable: + for B in C.__mro__: + if "__hash__" in B.__dict__: + if B.__dict__["__hash__"]: + return True + break + return NotImplemented + + +class Iterable(metaclass=ABCMeta): + + @abstractmethod + def __iter__(self): + while False: + yield None + + @classmethod + def __subclasshook__(cls, C): + if cls is Iterable: + if any("__iter__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Iterator(Iterable): + + @abstractmethod + def __next__(self): + raise StopIteration + + def __iter__(self): + return self + + @classmethod + def __subclasshook__(cls, C): + if cls is Iterator: + if (any("__next__" in B.__dict__ for B in C.__mro__) and + any("__iter__" in B.__dict__ for B in C.__mro__)): + return True + return NotImplemented + 
+Iterator.register(bytes_iterator) +Iterator.register(bytearray_iterator) +#Iterator.register(callable_iterator) +Iterator.register(dict_keyiterator) +Iterator.register(dict_valueiterator) +Iterator.register(dict_itemiterator) +Iterator.register(list_iterator) +Iterator.register(list_reverseiterator) +Iterator.register(range_iterator) +Iterator.register(set_iterator) +Iterator.register(str_iterator) +Iterator.register(tuple_iterator) +Iterator.register(zip_iterator) + +class Sized(metaclass=ABCMeta): + + @abstractmethod + def __len__(self): + return 0 + + @classmethod + def __subclasshook__(cls, C): + if cls is Sized: + if any("__len__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Container(metaclass=ABCMeta): + + @abstractmethod + def __contains__(self, x): + return False + + @classmethod + def __subclasshook__(cls, C): + if cls is Container: + if any("__contains__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Callable(metaclass=ABCMeta): + + @abstractmethod + def __call__(self, *args, **kwds): + return False + + @classmethod + def __subclasshook__(cls, C): + if cls is Callable: + if any("__call__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +### SETS ### + + +class Set(Sized, Iterable, Container): + + """A set is a finite, iterable container. + + This class provides concrete generic implementations of all + methods except for __contains__, __iter__ and __len__. + + To override the comparisons (presumably for speed, as the + semantics are fixed), all you have to do is redefine __le__ and + then the other operations will automatically follow suit. 
+ """ + + def __le__(self, other): + if not isinstance(other, Set): + return NotImplemented + if len(self) > len(other): + return False + for elem in self: + if elem not in other: + return False + return True + + def __lt__(self, other): + if not isinstance(other, Set): + return NotImplemented + return len(self) < len(other) and self.__le__(other) + + def __gt__(self, other): + if not isinstance(other, Set): + return NotImplemented + return other < self + + def __ge__(self, other): + if not isinstance(other, Set): + return NotImplemented + return other <= self + + def __eq__(self, other): + if not isinstance(other, Set): + return NotImplemented + return len(self) == len(other) and self.__le__(other) + + def __ne__(self, other): + return not (self == other) + + @classmethod + def _from_iterable(cls, it): + '''Construct an instance of the class from any iterable input. + + Must override this method if the class constructor signature + does not accept an iterable for an input. + ''' + return cls(it) + + def __and__(self, other): + if not isinstance(other, Iterable): + return NotImplemented + return self._from_iterable(value for value in other if value in self) + + def isdisjoint(self, other): + for value in other: + if value in self: + return False + return True + + def __or__(self, other): + if not isinstance(other, Iterable): + return NotImplemented + chain = (e for s in (self, other) for e in s) + return self._from_iterable(chain) + + def __sub__(self, other): + if not isinstance(other, Set): + if not isinstance(other, Iterable): + return NotImplemented + other = self._from_iterable(other) + return self._from_iterable(value for value in self + if value not in other) + + def __xor__(self, other): + if not isinstance(other, Set): + if not isinstance(other, Iterable): + return NotImplemented + other = self._from_iterable(other) + return (self - other) | (other - self) + + def _hash(self): + """Compute the hash value of a set. 
+ + Note that we don't define __hash__: not all sets are hashable. + But if you define a hashable set type, its __hash__ should + call this function. + + This must be compatible __eq__. + + All sets ought to compare equal if they contain the same + elements, regardless of how they are implemented, and + regardless of the order of the elements; so there's not much + freedom for __eq__ or __hash__. We match the algorithm used + by the built-in frozenset type. + """ + MAX = sys.maxsize + MASK = 2 * MAX + 1 + n = len(self) + h = 1927868237 * (n + 1) + h &= MASK + for x in self: + hx = hash(x) + h ^= (hx ^ (hx << 16) ^ 89869747) * 3644798167 + h &= MASK + h = h * 69069 + 907133923 + h &= MASK + if h > MAX: + h -= MASK + 1 + if h == -1: + h = 590923713 + return h + +Set.register(frozenset) + + +class MutableSet(Set): + + @abstractmethod + def add(self, value): + """Add an element.""" + raise NotImplementedError + + @abstractmethod + def discard(self, value): + """Remove an element. Do not raise an exception if absent.""" + raise NotImplementedError + + def remove(self, value): + """Remove an element. If not a member, raise a KeyError.""" + if value not in self: + raise KeyError(value) + self.discard(value) + + def pop(self): + """Return the popped value. Raise KeyError if empty.""" + it = iter(self) + try: + value = next(it) + except StopIteration: + raise KeyError + self.discard(value) + return value + + def clear(self): + """This is slow (creates N new iterators!) 
but effective.""" + try: + while True: + self.pop() + except KeyError: + pass + + def __ior__(self, it): + for value in it: + self.add(value) + return self + + def __iand__(self, it): + for value in (self - it): + self.discard(value) + return self + + def __ixor__(self, it): + if it is self: + self.clear() + else: + if not isinstance(it, Set): + it = self._from_iterable(it) + for value in it: + if value in self: + self.discard(value) + else: + self.add(value) + return self + + def __isub__(self, it): + if it is self: + self.clear() + else: + for value in it: + self.discard(value) + return self + +MutableSet.register(set) + + +### MAPPINGS ### + + +class Mapping(Sized, Iterable, Container): + + @abstractmethod + def __getitem__(self, key): + raise KeyError + + def get(self, key, default=None): + try: + return self[key] + except KeyError: + return default + + def __contains__(self, key): + try: + self[key] + except KeyError: + return False + else: + return True + + def keys(self): + return KeysView(self) + + def items(self): + return ItemsView(self) + + def values(self): + return ValuesView(self) + + def __eq__(self, other): + if not isinstance(other, Mapping): + return NotImplemented + return dict(self.items()) == dict(other.items()) + + def __ne__(self, other): + return not (self == other) + + +class MappingView(Sized): + + def __init__(self, mapping): + self._mapping = mapping + + def __len__(self): + return len(self._mapping) + + def __repr__(self): + return '{0.__class__.__name__}({0._mapping!r})'.format(self) + + +class KeysView(MappingView, Set): + + @classmethod + def _from_iterable(self, it): + return set(it) + + def __contains__(self, key): + return key in self._mapping + + def __iter__(self): + for key in self._mapping: + yield key + +KeysView.register(dict_keys) + + +class ItemsView(MappingView, Set): + + @classmethod + def _from_iterable(self, it): + return set(it) + + def __contains__(self, item): + key, value = item + try: + v = self._mapping[key] + 
except KeyError: + return False + else: + return v == value + + def __iter__(self): + for key in self._mapping: + yield (key, self._mapping[key]) + +ItemsView.register(dict_items) + + +class ValuesView(MappingView): + + def __contains__(self, value): + for key in self._mapping: + if value == self._mapping[key]: + return True + return False + + def __iter__(self): + for key in self._mapping: + yield self._mapping[key] + +ValuesView.register(dict_values) + + +class MutableMapping(Mapping): + + @abstractmethod + def __setitem__(self, key, value): + raise KeyError + + @abstractmethod + def __delitem__(self, key): + raise KeyError + + __marker = object() + + def pop(self, key, default=__marker): + try: + value = self[key] + except KeyError: + if default is self.__marker: + raise + return default + else: + del self[key] + return value + + def popitem(self): + try: + key = next(iter(self)) + except StopIteration: + raise KeyError + value = self[key] + del self[key] + return key, value + + def clear(self): + try: + while True: + self.popitem() + except KeyError: + pass + + def update(*args, **kwds): + if len(args) > 2: + raise TypeError("update() takes at most 2 positional " + "arguments ({} given)".format(len(args))) + elif not args: + raise TypeError("update() takes at least 1 argument (0 given)") + self = args[0] + other = args[1] if len(args) >= 2 else () + + if isinstance(other, Mapping): + for key in other: + self[key] = other[key] + elif hasattr(other, "keys"): + for key in other.keys(): + self[key] = other[key] + else: + for key, value in other: + self[key] = value + for key, value in kwds.items(): + self[key] = value + + def setdefault(self, key, default=None): + try: + return self[key] + except KeyError: + self[key] = default + return default + +MutableMapping.register(dict) + + +### SEQUENCES ### + + +class Sequence(Sized, Iterable, Container): + + """All the operations on a read-only sequence. 
+ + Concrete subclasses must override __new__ or __init__, + __getitem__, and __len__. + """ + + @abstractmethod + def __getitem__(self, index): + raise IndexError + + def __iter__(self): + i = 0 + try: + while True: + v = self[i] + yield v + i += 1 + except IndexError: + return + + def __contains__(self, value): + for v in self: + if v == value: + return True + return False + + def __reversed__(self): + for i in reversed(range(len(self))): + yield self[i] + + def index(self, value): + for i, v in enumerate(self): + if v == value: + return i + raise ValueError + + def count(self, value): + return sum(1 for v in self if v == value) + +Sequence.register(tuple) +Sequence.register(str) +Sequence.register(range) + + +class ByteString(Sequence): + + """This unifies bytes and bytearray. + + XXX Should add all their methods. + """ + +ByteString.register(bytes) +ByteString.register(bytearray) + + +class MutableSequence(Sequence): + + @abstractmethod + def __setitem__(self, index, value): + raise IndexError + + @abstractmethod + def __delitem__(self, index): + raise IndexError + + @abstractmethod + def insert(self, index, value): + raise IndexError + + def append(self, value): + self.insert(len(self), value) + + def reverse(self): + n = len(self) + for i in range(n//2): + self[i], self[n-i-1] = self[n-i-1], self[i] + + def extend(self, values): + for v in values: + self.append(v) + + def pop(self, index=-1): + v = self[index] + del self[index] + return v + + def remove(self, value): + del self[self.index(value)] + + def __iadd__(self, values): + self.extend(values) + return self + +MutableSequence.register(list) +MutableSequence.register(bytearray) # Multiply inheriting, see ByteString diff --git a/lib-python/3.2/_compat_pickle.py b/lib-python/3.2/_compat_pickle.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_compat_pickle.py @@ -0,0 +1,81 @@ +# This module is used to map the old Python 2 names to the new names used in +# Python 3 for the pickle module. 
This needed to make pickle streams +# generated with Python 2 loadable by Python 3. + +# This is a copy of lib2to3.fixes.fix_imports.MAPPING. We cannot import +# lib2to3 and use the mapping defined there, because lib2to3 uses pickle. +# Thus, this could cause the module to be imported recursively. +IMPORT_MAPPING = { + 'StringIO': 'io', + 'cStringIO': 'io', + 'cPickle': 'pickle', + '__builtin__' : 'builtins', + 'copy_reg': 'copyreg', + 'Queue': 'queue', + 'SocketServer': 'socketserver', + 'ConfigParser': 'configparser', + 'repr': 'reprlib', + 'FileDialog': 'tkinter.filedialog', + 'tkFileDialog': 'tkinter.filedialog', + 'SimpleDialog': 'tkinter.simpledialog', + 'tkSimpleDialog': 'tkinter.simpledialog', + 'tkColorChooser': 'tkinter.colorchooser', + 'tkCommonDialog': 'tkinter.commondialog', + 'Dialog': 'tkinter.dialog', + 'Tkdnd': 'tkinter.dnd', + 'tkFont': 'tkinter.font', + 'tkMessageBox': 'tkinter.messagebox', + 'ScrolledText': 'tkinter.scrolledtext', + 'Tkconstants': 'tkinter.constants', + 'Tix': 'tkinter.tix', + 'ttk': 'tkinter.ttk', + 'Tkinter': 'tkinter', + 'markupbase': '_markupbase', + '_winreg': 'winreg', + 'thread': '_thread', + 'dummy_thread': '_dummy_thread', + 'dbhash': 'dbm.bsd', + 'dumbdbm': 'dbm.dumb', + 'dbm': 'dbm.ndbm', + 'gdbm': 'dbm.gnu', + 'xmlrpclib': 'xmlrpc.client', + 'DocXMLRPCServer': 'xmlrpc.server', + 'SimpleXMLRPCServer': 'xmlrpc.server', + 'httplib': 'http.client', + 'htmlentitydefs' : 'html.entities', + 'HTMLParser' : 'html.parser', + 'Cookie': 'http.cookies', + 'cookielib': 'http.cookiejar', + 'BaseHTTPServer': 'http.server', + 'SimpleHTTPServer': 'http.server', + 'CGIHTTPServer': 'http.server', + 'test.test_support': 'test.support', + 'commands': 'subprocess', + 'UserString' : 'collections', + 'UserList' : 'collections', + 'urlparse' : 'urllib.parse', + 'robotparser' : 'urllib.robotparser', + 'whichdb': 'dbm', + 'anydbm': 'dbm' +} + + +# This contains rename rules that are easy to handle. We ignore the more +# complex stuff (e.g. 
mapping the names in the urllib and types modules). +# These rules should be run before import names are fixed. +NAME_MAPPING = { + ('__builtin__', 'xrange'): ('builtins', 'range'), + ('__builtin__', 'reduce'): ('functools', 'reduce'), + ('__builtin__', 'intern'): ('sys', 'intern'), + ('__builtin__', 'unichr'): ('builtins', 'chr'), + ('__builtin__', 'basestring'): ('builtins', 'str'), + ('__builtin__', 'long'): ('builtins', 'int'), + ('itertools', 'izip'): ('builtins', 'zip'), + ('itertools', 'imap'): ('builtins', 'map'), + ('itertools', 'ifilter'): ('builtins', 'filter'), + ('itertools', 'ifilterfalse'): ('itertools', 'filterfalse'), +} + +# Same, but for 3.x to 2.x +REVERSE_IMPORT_MAPPING = dict((v, k) for (k, v) in IMPORT_MAPPING.items()) +REVERSE_NAME_MAPPING = dict((v, k) for (k, v) in NAME_MAPPING.items()) diff --git a/lib-python/3.2/_dummy_thread.py b/lib-python/3.2/_dummy_thread.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_dummy_thread.py @@ -0,0 +1,155 @@ +"""Drop-in replacement for the thread module. + +Meant to be used as a brain-dead substitute so that threaded code does +not need to be rewritten for when the thread module is not present. + +Suggested usage is:: + + try: + import _thread + except ImportError: + import _dummy_thread as _thread + +""" +# Exports only things specified by thread documentation; +# skipping obsolete synonyms allocate(), start_new(), exit_thread(). +__all__ = ['error', 'start_new_thread', 'exit', 'get_ident', 'allocate_lock', + 'interrupt_main', 'LockType'] + +# A dummy value +TIMEOUT_MAX = 2**31 + +# NOTE: this module can be imported early in the extension building process, +# and so top level imports of other modules should be avoided. Instead, all +# imports are done when needed on a function-by-function basis. Since threads +# are disabled, the import lock should not be an issue anyway (??). 
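The suggested usage from the `_dummy_thread` docstring above, made runnable (on an interpreter built with threads, the real `_thread` module is found and the fallback branch is never taken):

```python
try:
    import _thread
except ImportError:
    import _dummy_thread as _thread  # single-threaded drop-in

# Both modules expose the same minimal locking API:
lock = _thread.allocate_lock()
assert lock.acquire()
lock.release()
assert not lock.locked()
```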
+ +class error(Exception): + """Dummy implementation of _thread.error.""" + + def __init__(self, *args): + self.args = args + +def start_new_thread(function, args, kwargs={}): + """Dummy implementation of _thread.start_new_thread(). + + Compatibility is maintained by making sure that ``args`` is a + tuple and ``kwargs`` is a dictionary. If an exception is raised + and it is SystemExit (which can be done by _thread.exit()) it is + caught and nothing is done; all other exceptions are printed out + by using traceback.print_exc(). + + If the executed function calls interrupt_main the KeyboardInterrupt will be + raised when the function returns. + + """ + if type(args) != type(tuple()): + raise TypeError("2nd arg must be a tuple") + if type(kwargs) != type(dict()): + raise TypeError("3rd arg must be a dict") + global _main + _main = False + try: + function(*args, **kwargs) + except SystemExit: + pass + except: + import traceback + traceback.print_exc() + _main = True + global _interrupt + if _interrupt: + _interrupt = False + raise KeyboardInterrupt + +def exit(): + """Dummy implementation of _thread.exit().""" + raise SystemExit + +def get_ident(): + """Dummy implementation of _thread.get_ident(). + + Since this module should only be used when _threadmodule is not + available, it is safe to assume that the current process is the + only thread. Thus a constant can be safely returned. + """ + return -1 + +def allocate_lock(): + """Dummy implementation of _thread.allocate_lock().""" + return LockType() + +def stack_size(size=None): + """Dummy implementation of _thread.stack_size().""" + if size is not None: + raise error("setting thread stack size not supported") + return 0 + +class LockType(object): + """Class implementing dummy implementation of _thread.LockType. + + Compatibility is maintained by maintaining self.locked_status + which is a boolean that stores the state of the lock. 
Pickling of + the lock, though, should not be done since if the _thread module is + then used with an unpickled ``lock()`` from here problems could + occur from this class not having atomic methods. + + """ + + def __init__(self): + self.locked_status = False + + def acquire(self, waitflag=None, timeout=-1): + """Dummy implementation of acquire(). + + For blocking calls, self.locked_status is automatically set to + True and returned appropriately based on value of + ``waitflag``. If it is non-blocking, then the value is + actually checked and not set if it is already acquired. This + is all done so that threading.Condition's assert statements + aren't triggered and throw a little fit. + + """ + if waitflag is None or waitflag: + self.locked_status = True + return True + else: + if not self.locked_status: + self.locked_status = True + return True + else: + if timeout > 0: + import time + time.sleep(timeout) + return False + + __enter__ = acquire + + def __exit__(self, typ, val, tb): + self.release() + + def release(self): + """Release the dummy lock.""" + # XXX Perhaps shouldn't actually bother to test? Could lead + # to problems for complex, threaded code. + if not self.locked_status: + raise error + self.locked_status = False + return True + + def locked(self): + return self.locked_status + +# Used to signal that interrupt_main was called in a "thread" +_interrupt = False +# True when not executing in a "thread" +_main = True + +def interrupt_main(): + """Set _interrupt flag to True to have start_new_thread raise + KeyboardInterrupt upon exiting.""" + if _main: + raise KeyboardInterrupt + else: + global _interrupt + _interrupt = True diff --git a/lib-python/3.2/_markupbase.py b/lib-python/3.2/_markupbase.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_markupbase.py @@ -0,0 +1,395 @@ +"""Shared support for scanning document type declarations in HTML and XHTML. + +This module is used as a foundation for the html.parser module. 
It has no +documented public API and should not be used directly. + +""" + +import re + +_declname_match = re.compile(r'[a-zA-Z][-_.a-zA-Z0-9]*\s*').match +_declstringlit_match = re.compile(r'(\'[^\']*\'|"[^"]*")\s*').match +_commentclose = re.compile(r'--\s*>') +_markedsectionclose = re.compile(r']\s*]\s*>') + +# An analysis of the MS-Word extensions is available at +# http://www.planetpublish.com/xmlarena/xap/Thursday/WordtoXML.pdf + +_msmarkedsectionclose = re.compile(r']\s*>') + +del re + + +class ParserBase: + """Parser base class which provides some common support methods used + by the SGML/HTML and XHTML parsers.""" + + def __init__(self): + if self.__class__ is ParserBase: + raise RuntimeError( + "_markupbase.ParserBase must be subclassed") + + def error(self, message): + raise NotImplementedError( + "subclasses of ParserBase must override error()") + + def reset(self): + self.lineno = 1 + self.offset = 0 + + def getpos(self): + """Return current line number and offset.""" + return self.lineno, self.offset + + # Internal -- update line number and offset. This should be + # called for each piece of data exactly once, in order -- in other + # words the concatenation of all the input strings to this + # function should be exactly the entire input. + def updatepos(self, i, j): + if i >= j: + return j + rawdata = self.rawdata + nlines = rawdata.count("\n", i, j) + if nlines: + self.lineno = self.lineno + nlines + pos = rawdata.rindex("\n", i, j) # Should not fail + self.offset = j-(pos+1) + else: + self.offset = self.offset + j-i + return j + + _decl_otherchars = '' + + # Internal -- parse declaration (for use by subclasses). + def parse_declaration(self, i): + # This is some sort of declaration; in "HTML as + # deployed," this should only be the document type + # declaration (""). 
+ # ISO 8879:1986, however, has more complex + # declaration syntax for elements in , including: + # --comment-- + # [marked section] + # name in the following list: ENTITY, DOCTYPE, ELEMENT, + # ATTLIST, NOTATION, SHORTREF, USEMAP, + # LINKTYPE, LINK, IDLINK, USELINK, SYSTEM + rawdata = self.rawdata + j = i + 2 + assert rawdata[i:j] == "": + # the empty comment + return j + 1 + if rawdata[j:j+1] in ("-", ""): + # Start of comment followed by buffer boundary, + # or just a buffer boundary. + return -1 + # A simple, practical version could look like: ((name|stringlit) S*) + '>' + n = len(rawdata) + if rawdata[j:j+2] == '--': #comment + # Locate --.*-- as the body of the comment + return self.parse_comment(i) + elif rawdata[j] == '[': #marked section + # Locate [statusWord [...arbitrary SGML...]] as the body of the marked section + # Where statusWord is one of TEMP, CDATA, IGNORE, INCLUDE, RCDATA + # Note that this is extended by Microsoft Office "Save as Web" function + # to include [if...] and [endif]. + return self.parse_marked_section(i) + else: #all other declaration elements + decltype, j = self._scan_name(j, i) + if j < 0: + return j + if decltype == "doctype": + self._decl_otherchars = '' + while j < n: + c = rawdata[j] + if c == ">": + # end of declaration syntax + data = rawdata[i+2:j] + if decltype == "doctype": + self.handle_decl(data) + else: + # According to the HTML5 specs sections "8.2.4.44 Bogus + # comment state" and "8.2.4.45 Markup declaration open + # state", a comment token should be emitted. + # Calling unknown_decl provides more flexibility though. 
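The comment and marked-section scanners above all follow the same shape: search for a closing delimiter from a given offset, returning -1 when the buffer ends before the delimiter arrives (i.e. the declaration is incomplete and more data is needed). A sketch using the same closing-delimiter regex as the module:

```python
import re

# Same pattern _markupbase compiles at import time.
_commentclose = re.compile(r'--\s*>')


def find_comment_end(rawdata, i):
    """Return the index just past '-->' for a comment starting at i,
    or -1 if the comment is not yet terminated in the buffer."""
    # i points at "<!--"; the comment body starts at i + 4.
    match = _commentclose.search(rawdata, i + 4)
    if not match:
        return -1  # incomplete: caller should wait for more data
    return match.end(0)
```

The incremental parser relies on that -1 convention so a comment split across `feed()` calls is retried rather than mis-parsed.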
+ self.unknown_decl(data) + return j + 1 + if c in "\"'": + m = _declstringlit_match(rawdata, j) + if not m: + return -1 # incomplete + j = m.end() + elif c in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ": + name, j = self._scan_name(j, i) + elif c in self._decl_otherchars: + j = j + 1 + elif c == "[": + # this could be handled in a separate doctype parser + if decltype == "doctype": + j = self._parse_doctype_subset(j + 1, i) + elif decltype in {"attlist", "linktype", "link", "element"}: + # must tolerate []'d groups in a content model in an element declaration + # also in data attribute specifications of attlist declaration + # also link type declaration subsets in linktype declarations + # also link attribute specification lists in link declarations + self.error("unsupported '[' char in %s declaration" % decltype) + else: + self.error("unexpected '[' char in declaration") + else: + self.error( + "unexpected %r char in declaration" % rawdata[j]) + if j < 0: + return j + return -1 # incomplete + + # Internal -- parse a marked section + # Override this to handle MS-word extension syntax content + def parse_marked_section(self, i, report=1): + rawdata= self.rawdata + assert rawdata[i:i+3] == ' ending + match= _markedsectionclose.search(rawdata, i+3) + elif sectName in {"if", "else", "endif"}: + # look for MS Office ]> ending + match= _msmarkedsectionclose.search(rawdata, i+3) + else: + self.error('unknown status keyword %r in marked section' % rawdata[i+3:j]) + if not match: + return -1 + if report: + j = match.start(0) + self.unknown_decl(rawdata[i+3: j]) + return match.end(0) + + # Internal -- parse comment, return length or -1 if not terminated + def parse_comment(self, i, report=1): + rawdata = self.rawdata + if rawdata[i:i+4] != ' Largest of the nsmallest for elem in it: - if los <= elem: - continue - insort(result, elem) - pop() - los = result[-1] + if cmp_lt(elem, los): + insort(result, elem) + pop() + los = result[-1] return result # An alternative 
approach manifests the whole iterable in memory but # saves comparisons by heapifying all at once. Also, saves time @@ -240,7 +248,7 @@ while pos > startpos: parentpos = (pos - 1) >> 1 parent = heap[parentpos] - if newitem < parent: + if cmp_lt(newitem, parent): heap[pos] = parent pos = parentpos continue @@ -295,7 +303,7 @@ while childpos < endpos: # Set childpos to index of smaller child. rightpos = childpos + 1 - if rightpos < endpos and not heap[childpos] < heap[rightpos]: + if rightpos < endpos and not cmp_lt(heap[childpos], heap[rightpos]): childpos = rightpos # Move the smaller child up. heap[pos] = heap[childpos] @@ -364,7 +372,7 @@ return [min(chain(head, it))] return [min(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): @@ -402,7 +410,7 @@ return [max(chain(head, it))] return [max(chain(head, it), key=key)] - # When n>=size, it's faster to use sort() + # When n>=size, it's faster to use sorted() try: size = len(iterable) except (TypeError, AttributeError): diff --git a/lib-python/2.7/httplib.py b/lib-python/2.7/httplib.py --- a/lib-python/2.7/httplib.py +++ b/lib-python/2.7/httplib.py @@ -212,6 +212,9 @@ # maximal amount of data to read at one time in _safe_read MAXAMOUNT = 1048576 +# maximal line length when calling readline(). 
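The `nsmallest` hunk above keeps a sorted list of the n smallest elements seen so far; a new element only enters when it beats the current largest of those n (spelled `cmp_lt(elem, los)` in the diff, plain `<` in this sketch):

```python
from bisect import insort
from itertools import islice


def nsmallest_sketch(n, iterable):
    """Sketch of heapq.nsmallest's bounded-insert strategy."""
    it = iter(iterable)
    result = sorted(islice(it, n))
    if not result:
        return result
    los = result[-1]          # largest of the current n smallest
    for elem in it:
        if elem < los:
            insort(result, elem)
            result.pop()      # drop what is now the (n+1)-th smallest
            los = result[-1]
    return result
```

Because most elements fail the `elem < los` test, the common case is one comparison per element with no list mutation at all.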
+_MAXLINE = 65536 + class HTTPMessage(mimetools.Message): def addheader(self, key, value): @@ -274,7 +277,9 @@ except IOError: startofline = tell = None self.seekable = 0 - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if not line: self.status = 'EOF in headers' break @@ -404,7 +409,10 @@ break # skip the header from the 100 response while True: - skip = self.fp.readline().strip() + skip = self.fp.readline(_MAXLINE + 1) + if len(skip) > _MAXLINE: + raise LineTooLong("header line") + skip = skip.strip() if not skip: break if self.debuglevel > 0: @@ -563,7 +571,9 @@ value = [] while True: if chunk_left is None: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("chunk size") i = line.find(';') if i >= 0: line = line[:i] # strip chunk-extensions @@ -598,7 +608,9 @@ # read and discard trailer up to the CRLF terminator ### note: we shouldn't have any trailers! while True: - line = self.fp.readline() + line = self.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("trailer line") if not line: # a vanishingly small number of sites EOF without # sending the trailer @@ -730,7 +742,9 @@ raise socket.error("Tunnel connection failed: %d %s" % (code, message.strip())) while True: - line = response.fp.readline() + line = response.fp.readline(_MAXLINE + 1) + if len(line) > _MAXLINE: + raise LineTooLong("header line") if line == '\r\n': break @@ -790,7 +804,7 @@ del self._buffer[:] # If msg and message_body are sent in a single send() call, # it will avoid performance problems caused by the interaction - # between delayed ack and the Nagle algorithim. + # between delayed ack and the Nagle algorithm. 
if isinstance(message_body, str): msg += message_body message_body = None @@ -1010,7 +1024,11 @@ kwds["buffering"] = True; response = self.response_class(*args, **kwds) - response.begin() + try: + response.begin() + except: + response.close() + raise assert response.will_close != _UNKNOWN self.__state = _CS_IDLE @@ -1233,6 +1251,11 @@ self.args = line, self.line = line +class LineTooLong(HTTPException): + def __init__(self, line_type): + HTTPException.__init__(self, "got more than %d bytes when reading %s" + % (_MAXLINE, line_type)) + # for backwards compatibility error = HTTPException diff --git a/lib-python/2.7/idlelib/Bindings.py b/lib-python/2.7/idlelib/Bindings.py --- a/lib-python/2.7/idlelib/Bindings.py +++ b/lib-python/2.7/idlelib/Bindings.py @@ -98,14 +98,6 @@ # menu del menudefs[-1][1][0:2] - menudefs.insert(0, - ('application', [ - ('About IDLE', '<>'), - None, - ('_Preferences....', '<>'), - ])) - - default_keydefs = idleConf.GetCurrentKeySet() del sys diff --git a/lib-python/2.7/idlelib/Delegator.py b/lib-python/2.7/idlelib/Delegator.py --- a/lib-python/2.7/idlelib/Delegator.py +++ b/lib-python/2.7/idlelib/Delegator.py @@ -12,6 +12,14 @@ self.__cache[name] = attr return attr + def __nonzero__(self): + # this is needed for PyPy: else, if self.delegate is None, the + # __getattr__ above picks NoneType.__nonzero__, which returns + # False. Thus, bool(Delegator()) is False as well, but it's not what + # we want. 
On CPython, bool(Delegator()) is True because NoneType + # does not have __nonzero__ + return True + def resetcache(self): for key in self.__cache.keys(): try: diff --git a/lib-python/2.7/idlelib/EditorWindow.py b/lib-python/2.7/idlelib/EditorWindow.py --- a/lib-python/2.7/idlelib/EditorWindow.py +++ b/lib-python/2.7/idlelib/EditorWindow.py @@ -48,6 +48,21 @@ path = module.__path__ except AttributeError: raise ImportError, 'No source for module ' + module.__name__ + if descr[2] != imp.PY_SOURCE: + # If all of the above fails and didn't raise an exception,fallback + # to a straight import which can find __init__.py in a package. + m = __import__(fullname) + try: + filename = m.__file__ + except AttributeError: + pass + else: + file = None + base, ext = os.path.splitext(filename) + if ext == '.pyc': + ext = '.py' + filename = base + ext + descr = filename, None, imp.PY_SOURCE return file, filename, descr class EditorWindow(object): @@ -102,8 +117,8 @@ self.top = top = WindowList.ListedToplevel(root, menu=self.menubar) if flist: self.tkinter_vars = flist.vars - #self.top.instance_dict makes flist.inversedict avalable to - #configDialog.py so it can access all EditorWindow instaces + #self.top.instance_dict makes flist.inversedict available to + #configDialog.py so it can access all EditorWindow instances self.top.instance_dict = flist.inversedict else: self.tkinter_vars = {} # keys: Tkinter event names @@ -136,6 +151,14 @@ if macosxSupport.runningAsOSXApp(): # Command-W on editorwindows doesn't work without this. text.bind('<>', self.close_event) + # Some OS X systems have only one mouse button, + # so use control-click for pulldown menus there. + # (Note, AquaTk defines <2> as the right button if + # present and the Tk Text widget already binds <2>.) + text.bind("",self.right_menu_event) + else: + # Elsewhere, use right-click for pulldown menus. 
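The Delegator fix above illustrates a general pitfall: a class that forwards unknown attributes to a wrapped object should pin down its own truthiness explicitly, otherwise (on interpreters that resolve `__nonzero__`/`__bool__` through `__getattr__`) `bool(obj)` silently becomes `bool(obj.delegate)`. A minimal sketch, written so it runs on modern Python as well:

```python
class Delegator(object):
    """Forward unknown attribute lookups to a wrapped delegate."""

    def __init__(self, delegate=None):
        self.delegate = delegate

    def __getattr__(self, name):
        # Only called when normal lookup fails, so 'delegate' itself
        # and anything defined on the class are unaffected.
        return getattr(self.delegate, name)

    def __bool__(self):          # spelled __nonzero__ in the Python 2 diff
        # Pin truthiness: a Delegator is always "present", even when
        # its delegate is None.
        return True
    __nonzero__ = __bool__
```

Without the explicit method, `Delegator(None)` could test false, which is exactly the surprise the diff comment describes on PyPy.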
+ text.bind("<3>",self.right_menu_event) text.bind("<>", self.cut) text.bind("<>", self.copy) text.bind("<>", self.paste) @@ -154,7 +177,6 @@ text.bind("<>", self.find_selection_event) text.bind("<>", self.replace_event) text.bind("<>", self.goto_line_event) - text.bind("<3>", self.right_menu_event) text.bind("<>",self.smart_backspace_event) text.bind("<>",self.newline_and_indent_event) text.bind("<>",self.smart_indent_event) @@ -300,13 +322,13 @@ return "break" def home_callback(self, event): - if (event.state & 12) != 0 and event.keysym == "Home": - # state&1==shift, state&4==control, state&8==alt - return # ; fall back to class binding - + if (event.state & 4) != 0 and event.keysym == "Home": + # state&4==Control. If , use the Tk binding. + return if self.text.index("iomark") and \ self.text.compare("iomark", "<=", "insert lineend") and \ self.text.compare("insert linestart", "<=", "iomark"): + # In Shell on input line, go to just after prompt insertpt = int(self.text.index("iomark").split(".")[1]) else: line = self.text.get("insert linestart", "insert lineend") @@ -315,30 +337,27 @@ break else: insertpt=len(line) - lineat = int(self.text.index("insert").split('.')[1]) - if insertpt == lineat: insertpt = 0 - dest = "insert linestart+"+str(insertpt)+"c" - if (event.state&1) == 0: - # shift not pressed + # shift was not pressed self.text.tag_remove("sel", "1.0", "end") else: if not self.text.index("sel.first"): - self.text.mark_set("anchor","insert") - + self.text.mark_set("my_anchor", "insert") # there was no previous selection + else: + if self.text.compare(self.text.index("sel.first"), "<", self.text.index("insert")): + self.text.mark_set("my_anchor", "sel.first") # extend back + else: + self.text.mark_set("my_anchor", "sel.last") # extend forward first = self.text.index(dest) - last = self.text.index("anchor") - + last = self.text.index("my_anchor") if self.text.compare(first,">",last): first,last = last,first - self.text.tag_remove("sel", "1.0", "end") 
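The `home_callback` change above hinges on Tk's `event.state` bitmask: bit 0 (1) is Shift, bit 2 (4) is Control, bit 3 (8) is Alt/Mod1. The old code bailed out on Control *or* Alt (`state & 12`); the rewrite only defers to Tk's binding for Control-Home (`state & 4`). A sketch of the bit decoding, independent of Tk:

```python
# Tk modifier bits as documented in the diff's own comments.
SHIFT, CONTROL, ALT = 0x1, 0x4, 0x8


def decode_modifiers(state):
    """Return which keyboard modifiers a Tk event.state value carries."""
    return {
        "shift": bool(state & SHIFT),
        "control": bool(state & CONTROL),
        "alt": bool(state & ALT),
    }
```

With this decoding, the old test `state & 12` reads as "control or alt held", and the new test `state & 4` as "control held".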
self.text.tag_add("sel", first, last) - self.text.mark_set("insert", dest) self.text.see("insert") return "break" @@ -385,7 +404,7 @@ menudict[name] = menu = Menu(mbar, name=name) mbar.add_cascade(label=label, menu=menu, underline=underline) - if macosxSupport.runningAsOSXApp(): + if macosxSupport.isCarbonAquaTk(self.root): # Insert the application menu menudict['application'] = menu = Menu(mbar, name='apple') mbar.add_cascade(label='IDLE', menu=menu) @@ -445,7 +464,11 @@ def python_docs(self, event=None): if sys.platform[:3] == 'win': - os.startfile(self.help_url) + try: + os.startfile(self.help_url) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(self.help_url) return "break" @@ -740,9 +763,13 @@ "Create a callback with the helpfile value frozen at definition time" def display_extra_help(helpfile=helpfile): if not helpfile.startswith(('www', 'http')): - url = os.path.normpath(helpfile) + helpfile = os.path.normpath(helpfile) if sys.platform[:3] == 'win': - os.startfile(helpfile) + try: + os.startfile(helpfile) + except WindowsError as why: + tkMessageBox.showerror(title='Document Start Failure', + message=str(why), parent=self.text) else: webbrowser.open(helpfile) return display_extra_help @@ -1526,7 +1553,12 @@ def get_accelerator(keydefs, eventname): keylist = keydefs.get(eventname) - if not keylist: + # issue10940: temporary workaround to prevent hang with OS X Cocoa Tk 8.5 + # if not keylist: + if (not keylist) or (macosxSupport.runningAsOSXApp() and eventname in { + "<>", + "<>", + "<>"}): return "" s = keylist[0] s = re.sub(r"-[a-z]\b", lambda m: m.group().upper(), s) diff --git a/lib-python/2.7/idlelib/FileList.py b/lib-python/2.7/idlelib/FileList.py --- a/lib-python/2.7/idlelib/FileList.py +++ b/lib-python/2.7/idlelib/FileList.py @@ -43,7 +43,7 @@ def new(self, filename=None): return self.EditorWindow(self, filename) - def close_all_callback(self, event): + 
def close_all_callback(self, *args, **kwds): for edit in self.inversedict.keys(): reply = edit.close() if reply == "cancel": diff --git a/lib-python/2.7/idlelib/FormatParagraph.py b/lib-python/2.7/idlelib/FormatParagraph.py --- a/lib-python/2.7/idlelib/FormatParagraph.py +++ b/lib-python/2.7/idlelib/FormatParagraph.py @@ -54,7 +54,7 @@ # If the block ends in a \n, we dont want the comment # prefix inserted after it. (Im not sure it makes sense to # reformat a comment block that isnt made of complete - # lines, but whatever!) Can't think of a clean soltution, + # lines, but whatever!) Can't think of a clean solution, # so we hack away block_suffix = "" if not newdata[-1]: diff --git a/lib-python/2.7/idlelib/HISTORY.txt b/lib-python/2.7/idlelib/HISTORY.txt --- a/lib-python/2.7/idlelib/HISTORY.txt +++ b/lib-python/2.7/idlelib/HISTORY.txt @@ -13,7 +13,7 @@ - New tarball released as a result of the 'revitalisation' of the IDLEfork project. -- This release requires python 2.1 or better. Compatability with earlier +- This release requires python 2.1 or better. Compatibility with earlier versions of python (especially ancient ones like 1.5x) is no longer a priority in IDLEfork development. diff --git a/lib-python/2.7/idlelib/IOBinding.py b/lib-python/2.7/idlelib/IOBinding.py --- a/lib-python/2.7/idlelib/IOBinding.py +++ b/lib-python/2.7/idlelib/IOBinding.py @@ -320,17 +320,20 @@ return "yes" message = "Do you want to save %s before closing?" 
% ( self.filename or "this untitled document") - m = tkMessageBox.Message( - title="Save On Close", - message=message, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.YESNOCANCEL, - master=self.text) - reply = m.show() - if reply == "yes": + confirm = tkMessageBox.askyesnocancel( + title="Save On Close", + message=message, + default=tkMessageBox.YES, + master=self.text) + if confirm: + reply = "yes" self.save(None) if not self.get_saved(): reply = "cancel" + elif confirm is None: + reply = "cancel" + else: + reply = "no" self.text.focus_set() return reply @@ -339,7 +342,7 @@ self.save_as(event) else: if self.writefile(self.filename): - self.set_saved(1) + self.set_saved(True) try: self.editwin.store_file_breaks() except AttributeError: # may be a PyShell @@ -465,15 +468,12 @@ self.text.insert("end-1c", "\n") def print_window(self, event): - m = tkMessageBox.Message( - title="Print", - message="Print to Default Printer", - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.text) - reply = m.show() - if reply != tkMessageBox.OK: + confirm = tkMessageBox.askokcancel( + title="Print", + message="Print to Default Printer", + default=tkMessageBox.OK, + master=self.text) + if not confirm: self.text.focus_set() return "break" tempfilename = None @@ -488,8 +488,8 @@ if not self.writefile(tempfilename): os.unlink(tempfilename) return "break" - platform=os.name - printPlatform=1 + platform = os.name + printPlatform = True if platform == 'posix': #posix platform command = idleConf.GetOption('main','General', 'print-command-posix') @@ -497,7 +497,7 @@ elif platform == 'nt': #win32 platform command = idleConf.GetOption('main','General','print-command-win') else: #no printing for this platform - printPlatform=0 + printPlatform = False if printPlatform: #we can try to print for this platform command = command % filename pipe = os.popen(command, "r") @@ -511,7 +511,7 @@ output = "Printing command: %s\n" % repr(command) + output 
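The `maybesave()` rewrite above maps `tkMessageBox.askyesnocancel`'s tri-state result onto the old string protocol: `True` means Yes, `False` means No, and `None` means the dialog was cancelled or closed. The same mapping, factored out (the helper name is ours, not IDLE's):

```python
def confirm_to_reply(confirm):
    """Translate askyesnocancel()'s True/False/None into yes/no/cancel."""
    if confirm:            # True  -> user chose Yes
        return "yes"
    elif confirm is None:  # None  -> dialog cancelled or closed
        return "cancel"
    else:                  # False -> user chose No
        return "no"
```

The ordering matters: `None` is falsy, so the `is None` check must come before treating the result as a plain No.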
tkMessageBox.showerror("Print status", output, master=self.text) else: #no printing for this platform - message="Printing is not enabled for this platform: %s" % platform + message = "Printing is not enabled for this platform: %s" % platform tkMessageBox.showinfo("Print status", message, master=self.text) if tempfilename: os.unlink(tempfilename) diff --git a/lib-python/2.7/idlelib/NEWS.txt b/lib-python/2.7/idlelib/NEWS.txt --- a/lib-python/2.7/idlelib/NEWS.txt +++ b/lib-python/2.7/idlelib/NEWS.txt @@ -1,3 +1,18 @@ +What's New in IDLE 2.7.2? +======================= + +*Release date: 29-May-2011* + +- Issue #6378: Further adjust idle.bat to start associated Python + +- Issue #11896: Save on Close failed despite selecting "Yes" in dialog. + +- toggle failing on Tk 8.5, causing IDLE exits and strange selection + behavior. Issue 4676. Improve selection extension behaviour. + +- toggle non-functional when NumLock set on Windows. Issue 3851. + + What's New in IDLE 2.7? ======================= @@ -21,7 +36,7 @@ - Tk 8.5 Text widget requires 'wordprocessor' tabstyle attr to handle mixed space/tab properly. Issue 5129, patch by Guilherme Polo. - + - Issue #3549: On MacOS the preferences menu was not present diff --git a/lib-python/2.7/idlelib/PyShell.py b/lib-python/2.7/idlelib/PyShell.py --- a/lib-python/2.7/idlelib/PyShell.py +++ b/lib-python/2.7/idlelib/PyShell.py @@ -1432,6 +1432,13 @@ shell.interp.prepend_syspath(script) shell.interp.execfile(script) + # Check for problematic OS X Tk versions and print a warning message + # in the IDLE shell window; this is less intrusive than always opening + # a separate window. 
+ tkversionwarning = macosxSupport.tkVersionWarning(root) + if tkversionwarning: + shell.interp.runcommand(''.join(("print('", tkversionwarning, "')"))) + root.mainloop() root.destroy() diff --git a/lib-python/2.7/idlelib/ScriptBinding.py b/lib-python/2.7/idlelib/ScriptBinding.py --- a/lib-python/2.7/idlelib/ScriptBinding.py +++ b/lib-python/2.7/idlelib/ScriptBinding.py @@ -26,6 +26,7 @@ from idlelib import PyShell from idlelib.configHandler import idleConf +from idlelib import macosxSupport IDENTCHARS = string.ascii_letters + string.digits + "_" @@ -53,6 +54,9 @@ self.flist = self.editwin.flist self.root = self.editwin.root + if macosxSupport.runningAsOSXApp(): + self.editwin.text_frame.bind('<>', self._run_module_event) + def check_module_event(self, event): filename = self.getfilename() if not filename: @@ -166,6 +170,19 @@ interp.runcode(code) return 'break' + if macosxSupport.runningAsOSXApp(): + # Tk-Cocoa in MacOSX is broken until at least + # Tk 8.5.9, and without this rather + # crude workaround IDLE would hang when a user + # tries to run a module using the keyboard shortcut + # (the menu item works fine). + _run_module_event = run_module_event + + def run_module_event(self, event): + self.editwin.text_frame.after(200, + lambda: self.editwin.text_frame.event_generate('<>')) + return 'break' + def getfilename(self): """Get source filename. If not saved, offer to save (or create) file @@ -184,9 +201,9 @@ if autosave and filename: self.editwin.io.save(None) else: - reply = self.ask_save_dialog() + confirm = self.ask_save_dialog() self.editwin.text.focus_set() - if reply == "ok": + if confirm: self.editwin.io.save(None) filename = self.editwin.io.filename else: @@ -195,13 +212,11 @@ def ask_save_dialog(self): msg = "Source Must Be Saved\n" + 5*' ' + "OK to Save?" 
- mb = tkMessageBox.Message(title="Save Before Run or Check", - message=msg, - icon=tkMessageBox.QUESTION, - type=tkMessageBox.OKCANCEL, - default=tkMessageBox.OK, - master=self.editwin.text) - return mb.show() + confirm = tkMessageBox.askokcancel(title="Save Before Run or Check", + message=msg, + default=tkMessageBox.OK, + master=self.editwin.text) + return confirm def errorbox(self, title, message): # XXX This should really be a function of EditorWindow... diff --git a/lib-python/2.7/idlelib/config-keys.def b/lib-python/2.7/idlelib/config-keys.def --- a/lib-python/2.7/idlelib/config-keys.def +++ b/lib-python/2.7/idlelib/config-keys.def @@ -176,7 +176,7 @@ redo = close-window = restart-shell = -save-window-as-file = +save-window-as-file = close-all-windows = view-restart = tabify-region = @@ -208,7 +208,7 @@ open-module = find-selection = python-context-help = -save-copy-of-window-as-file = +save-copy-of-window-as-file = open-window-from-file = python-docs = diff --git a/lib-python/2.7/idlelib/extend.txt b/lib-python/2.7/idlelib/extend.txt --- a/lib-python/2.7/idlelib/extend.txt +++ b/lib-python/2.7/idlelib/extend.txt @@ -18,7 +18,7 @@ An IDLE extension class is instantiated with a single argument, `editwin', an EditorWindow instance. 
The extension cannot assume much -about this argument, but it is guarateed to have the following instance +about this argument, but it is guaranteed to have the following instance variables: text a Text instance (a widget) diff --git a/lib-python/2.7/idlelib/idle.bat b/lib-python/2.7/idlelib/idle.bat --- a/lib-python/2.7/idlelib/idle.bat +++ b/lib-python/2.7/idlelib/idle.bat @@ -1,4 +1,4 @@ @echo off rem Start IDLE using the appropriate Python interpreter set CURRDIR=%~dp0 -start "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 +start "IDLE" "%CURRDIR%..\..\pythonw.exe" "%CURRDIR%idle.pyw" %1 %2 %3 %4 %5 %6 %7 %8 %9 diff --git a/lib-python/2.7/idlelib/idlever.py b/lib-python/2.7/idlelib/idlever.py --- a/lib-python/2.7/idlelib/idlever.py +++ b/lib-python/2.7/idlelib/idlever.py @@ -1,1 +1,1 @@ -IDLE_VERSION = "2.7.1" +IDLE_VERSION = "2.7.2" diff --git a/lib-python/2.7/idlelib/macosxSupport.py b/lib-python/2.7/idlelib/macosxSupport.py --- a/lib-python/2.7/idlelib/macosxSupport.py +++ b/lib-python/2.7/idlelib/macosxSupport.py @@ -4,6 +4,7 @@ """ import sys import Tkinter +from os import path _appbundle = None @@ -19,10 +20,41 @@ _appbundle = (sys.platform == 'darwin' and '.app' in sys.executable) return _appbundle +_carbonaquatk = None + +def isCarbonAquaTk(root): + """ + Returns True if IDLE is using a Carbon Aqua Tk (instead of the + newer Cocoa Aqua Tk). + """ + global _carbonaquatk + if _carbonaquatk is None: + _carbonaquatk = (runningAsOSXApp() and + 'aqua' in root.tk.call('tk', 'windowingsystem') and + 'AppKit' not in root.tk.call('winfo', 'server', '.')) + return _carbonaquatk + +def tkVersionWarning(root): + """ + Returns a string warning message if the Tk version in use appears to + be one known to cause problems with IDLE. The Apple Cocoa-based Tk 8.5 + that was shipped with Mac OS X 10.6. 
+ """ + + if (runningAsOSXApp() and + ('AppKit' in root.tk.call('winfo', 'server', '.')) and + (root.tk.call('info', 'patchlevel') == '8.5.7') ): + return (r"WARNING: The version of Tcl/Tk (8.5.7) in use may" + r" be unstable.\n" + r"Visit http://www.python.org/download/mac/tcltk/" + r" for current information.") + else: + return False + def addOpenEventSupport(root, flist): """ - This ensures that the application will respont to open AppleEvents, which - makes is feaseable to use IDLE as the default application for python files. + This ensures that the application will respond to open AppleEvents, which + makes is feasible to use IDLE as the default application for python files. """ def doOpenFile(*args): for fn in args: @@ -79,9 +111,6 @@ WindowList.add_windows_to_menu(menu) WindowList.register_callback(postwindowsmenu) - menudict['application'] = menu = Menu(menubar, name='apple') - menubar.add_cascade(label='IDLE', menu=menu) - def about_dialog(event=None): from idlelib import aboutDialog aboutDialog.AboutDialog(root, 'About IDLE') @@ -91,41 +120,45 @@ root.instance_dict = flist.inversedict configDialog.ConfigDialog(root, 'Settings') + def help_dialog(event=None): + from idlelib import textView + fn = path.join(path.abspath(path.dirname(__file__)), 'help.txt') + textView.view_file(root, 'Help', fn) root.bind('<>', about_dialog) root.bind('<>', config_dialog) + root.createcommand('::tk::mac::ShowPreferences', config_dialog) if flist: root.bind('<>', flist.close_all_callback) + # The binding above doesn't reliably work on all versions of Tk + # on MacOSX. Adding command definition below does seem to do the + # right thing for now. + root.createcommand('exit', flist.close_all_callback) - ###check if Tk version >= 8.4.14; if so, use hard-coded showprefs binding - tkversion = root.tk.eval('info patchlevel') - # Note: we cannot check if the string tkversion >= '8.4.14', because - # the string '8.4.7' is greater than the string '8.4.14'. 
- if tuple(map(int, tkversion.split('.'))) >= (8, 4, 14): - Bindings.menudefs[0] = ('application', [ + if isCarbonAquaTk(root): + # for Carbon AquaTk, replace the default Tk apple menu + menudict['application'] = menu = Menu(menubar, name='apple') + menubar.add_cascade(label='IDLE', menu=menu) + Bindings.menudefs.insert(0, + ('application', [ ('About IDLE', '<>'), - None, - ]) - root.createcommand('::tk::mac::ShowPreferences', config_dialog) + None, + ])) + tkversion = root.tk.eval('info patchlevel') + if tuple(map(int, tkversion.split('.'))) < (8, 4, 14): + # for earlier AquaTk versions, supply a Preferences menu item + Bindings.menudefs[0][1].append( + ('_Preferences....', '<>'), + ) else: - for mname, entrylist in Bindings.menudefs: - menu = menudict.get(mname) - if not menu: - continue - else: - for entry in entrylist: - if not entry: - menu.add_separator() - else: - label, eventname = entry - underline, label = prepstr(label) - accelerator = get_accelerator(Bindings.default_keydefs, - eventname) - def command(text=root, eventname=eventname): - text.event_generate(eventname) - menu.add_command(label=label, underline=underline, - command=command, accelerator=accelerator) + # assume Cocoa AquaTk + # replace default About dialog with About IDLE one + root.createcommand('tkAboutDialog', about_dialog) + # replace default "Help" item in Help menu + root.createcommand('::tk::mac::ShowHelp', help_dialog) + # remove redundant "IDLE Help" from menu + del Bindings.menudefs[-1][1][0] def setupApp(root, flist): """ diff --git a/lib-python/2.7/imaplib.py b/lib-python/2.7/imaplib.py --- a/lib-python/2.7/imaplib.py +++ b/lib-python/2.7/imaplib.py @@ -1158,28 +1158,17 @@ self.port = port self.sock = socket.create_connection((host, port)) self.sslobj = ssl.wrap_socket(self.sock, self.keyfile, self.certfile) + self.file = self.sslobj.makefile('rb') def read(self, size): """Read 'size' bytes from remote.""" - # sslobj.read() sometimes returns < size bytes - chunks = [] - read = 0 
- while read < size: - data = self.sslobj.read(min(size-read, 16384)) - read += len(data) - chunks.append(data) - - return ''.join(chunks) + return self.file.read(size) def readline(self): """Read line from remote.""" - line = [] - while 1: - char = self.sslobj.read(1) - line.append(char) - if char in ("\n", ""): return ''.join(line) + return self.file.readline() def send(self, data): @@ -1195,6 +1184,7 @@ def shutdown(self): """Close I/O established in "open".""" + self.file.close() self.sock.close() @@ -1321,9 +1311,10 @@ 'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12} def Internaldate2tuple(resp): - """Convert IMAP4 INTERNALDATE to UT. + """Parse an IMAP4 INTERNALDATE string. - Returns Python time module tuple. + Return corresponding local time. The return value is a + time.struct_time instance or None if the string has wrong format. """ mo = InternalDate.match(resp) @@ -1390,9 +1381,14 @@ def Time2Internaldate(date_time): - """Convert 'date_time' to IMAP4 INTERNALDATE representation. + """Convert date_time to IMAP4 INTERNALDATE representation. - Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"' + Return string in form: '"DD-Mmm-YYYY HH:MM:SS +HHMM"'. The + date_time argument can be a number (int or float) representing + seconds since epoch (as returned by time.time()), a 9-tuple + representing local time (as returned by time.localtime()), or a + double-quoted string. In the last case, it is assumed to already + be in the correct format. """ if isinstance(date_time, (int, float)): diff --git a/lib-python/2.7/inspect.py b/lib-python/2.7/inspect.py --- a/lib-python/2.7/inspect.py +++ b/lib-python/2.7/inspect.py @@ -746,8 +746,15 @@ 'varargs' and 'varkw' are the names of the * and ** arguments or None.""" if not iscode(co): - raise TypeError('{!r} is not a code object'.format(co)) + if hasattr(len, 'func_code') and type(co) is type(len.func_code): + # PyPy extension: built-in function objects have a func_code too. 
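The loop deleted from imaplib above existed because a raw SSL read may return fewer bytes than requested; wrapping the socket with `makefile('rb')` gets the same exact-size guarantee from buffered I/O instead. A sketch of the underlying problem, with a generic reader callable standing in for `sslobj.read`:

```python
def read_exact(read_some, size):
    """Accumulate chunks from a short-read-prone reader until 'size'
    bytes are collected (or EOF)."""
    chunks = []
    got = 0
    while got < size:
        data = read_some(size - got)
        if not data:
            break  # EOF before 'size' bytes arrived
        got += len(data)
        chunks.append(data)
    return b"".join(chunks)
```

A buffered file object performs exactly this accumulation internally, which is why the patch can replace the hand-written loop with `self.file.read(size)`.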
+ # There is no co_code on it, but co_argcount and co_varnames and + # co_flags are present. + pass + else: + raise TypeError('{!r} is not a code object'.format(co)) + code = getattr(co, 'co_code', '') nargs = co.co_argcount names = co.co_varnames args = list(names[:nargs]) @@ -757,12 +764,12 @@ for i in range(nargs): if args[i][:1] in ('', '.'): stack, remain, count = [], [], [] - while step < len(co.co_code): - op = ord(co.co_code[step]) + while step < len(code): + op = ord(code[step]) step = step + 1 if op >= dis.HAVE_ARGUMENT: opname = dis.opname[op] - value = ord(co.co_code[step]) + ord(co.co_code[step+1])*256 + value = ord(code[step]) + ord(code[step+1])*256 step = step + 2 if opname in ('UNPACK_TUPLE', 'UNPACK_SEQUENCE'): remain.append(value) @@ -809,7 +816,9 @@ if ismethod(func): func = func.im_func - if not isfunction(func): + if not (isfunction(func) or + isbuiltin(func) and hasattr(func, 'func_code')): + # PyPy extension: this works for built-in functions too raise TypeError('{!r} is not a Python function'.format(func)) args, varargs, varkw = getargs(func.func_code) return ArgSpec(args, varargs, varkw, func.func_defaults) @@ -943,8 +952,14 @@ f_name, 'at most' if defaults else 'exactly', num_args, 'arguments' if num_args > 1 else 'argument', num_total)) elif num_args == 0 and num_total: - raise TypeError('%s() takes no arguments (%d given)' % - (f_name, num_total)) + if varkw: + if num_pos: + # XXX: We should use num_pos, but Python also uses num_total: + raise TypeError('%s() takes exactly 0 arguments ' + '(%d given)' % (f_name, num_total)) + else: + raise TypeError('%s() takes no argument (%d given)' % + (f_name, num_total)) for arg in args: if isinstance(arg, str) and arg in named: if is_assigned(arg): diff --git a/lib-python/2.7/json/decoder.py b/lib-python/2.7/json/decoder.py --- a/lib-python/2.7/json/decoder.py +++ b/lib-python/2.7/json/decoder.py @@ -4,7 +4,7 @@ import sys import struct -from json.scanner import make_scanner +from json import 
scanner try: from _json import scanstring as c_scanstring except ImportError: @@ -161,6 +161,12 @@ nextchar = s[end:end + 1] # Trivial empty object if nextchar == '}': + if object_pairs_hook is not None: + result = object_pairs_hook(pairs) + return result, end + pairs = {} + if object_hook is not None: + pairs = object_hook(pairs) return pairs, end + 1 elif nextchar != '"': raise ValueError(errmsg("Expecting property name", s, end)) @@ -350,7 +356,7 @@ self.parse_object = JSONObject self.parse_array = JSONArray self.parse_string = scanstring - self.scan_once = make_scanner(self) + self.scan_once = scanner.make_scanner(self) def decode(self, s, _w=WHITESPACE.match): """Return the Python representation of ``s`` (a ``str`` or ``unicode`` diff --git a/lib-python/2.7/json/encoder.py b/lib-python/2.7/json/encoder.py --- a/lib-python/2.7/json/encoder.py +++ b/lib-python/2.7/json/encoder.py @@ -2,14 +2,7 @@ """ import re -try: - from _json import encode_basestring_ascii as c_encode_basestring_ascii -except ImportError: - c_encode_basestring_ascii = None -try: - from _json import make_encoder as c_make_encoder -except ImportError: - c_make_encoder = None +from __pypy__.builders import StringBuilder, UnicodeBuilder ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]') ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])') @@ -24,23 +17,22 @@ '\t': '\\t', } for i in range(0x20): - ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i)) - #ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) + ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) # Assume this produces an infinity on all machines (probably not guaranteed) INFINITY = float('1e66666') FLOAT_REPR = repr -def encode_basestring(s): +def raw_encode_basestring(s): """Return a JSON representation of a Python string """ def replace(match): return ESCAPE_DCT[match.group(0)] - return '"' + ESCAPE.sub(replace, s) + '"' + return ESCAPE.sub(replace, s) +encode_basestring = lambda s: '"' + raw_encode_basestring(s) + '"' - -def 
py_encode_basestring_ascii(s): +def raw_encode_basestring_ascii(s): """Return an ASCII-only JSON representation of a Python string """ @@ -53,21 +45,19 @@ except KeyError: n = ord(s) if n < 0x10000: - return '\\u{0:04x}'.format(n) - #return '\\u%04x' % (n,) + return '\\u%04x' % (n,) else: # surrogate pair n -= 0x10000 s1 = 0xd800 | ((n >> 10) & 0x3ff) s2 = 0xdc00 | (n & 0x3ff) - return '\\u{0:04x}\\u{1:04x}'.format(s1, s2) - #return '\\u%04x\\u%04x' % (s1, s2) - return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"' + return '\\u%04x\\u%04x' % (s1, s2) + if ESCAPE_ASCII.search(s): + return str(ESCAPE_ASCII.sub(replace, s)) + return s +encode_basestring_ascii = lambda s: '"' + raw_encode_basestring_ascii(s) + '"' -encode_basestring_ascii = ( - c_encode_basestring_ascii or py_encode_basestring_ascii) - class JSONEncoder(object): """Extensible JSON encoder for Python data structures. @@ -147,6 +137,17 @@ self.skipkeys = skipkeys self.ensure_ascii = ensure_ascii + if ensure_ascii: + self.encoder = raw_encode_basestring_ascii + else: + self.encoder = raw_encode_basestring + if encoding != 'utf-8': + orig_encoder = self.encoder + def encoder(o): + if isinstance(o, str): + o = o.decode(encoding) + return orig_encoder(o) + self.encoder = encoder self.check_circular = check_circular self.allow_nan = allow_nan self.sort_keys = sort_keys @@ -184,24 +185,126 @@ '{"foo": ["bar", "baz"]}' """ - # This is for extremely simple cases and benchmarks. 
+ if self.check_circular: + markers = {} + else: + markers = None + if self.ensure_ascii: + builder = StringBuilder() + else: + builder = UnicodeBuilder() + self._encode(o, markers, builder, 0) + return builder.build() + + def _emit_indent(self, builder, _current_indent_level): + if self.indent is not None: + _current_indent_level += 1 + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent + builder.append(newline_indent) + else: + separator = self.item_separator + return separator, _current_indent_level + + def _emit_unindent(self, builder, _current_indent_level): + if self.indent is not None: + builder.append('\n') + builder.append(' ' * (self.indent * (_current_indent_level - 1))) + + def _encode(self, o, markers, builder, _current_indent_level): if isinstance(o, basestring): - if isinstance(o, str): - _encoding = self.encoding - if (_encoding is not None - and not (_encoding == 'utf-8')): - o = o.decode(_encoding) - if self.ensure_ascii: - return encode_basestring_ascii(o) + builder.append('"') + builder.append(self.encoder(o)) + builder.append('"') + elif o is None: + builder.append('null') + elif o is True: + builder.append('true') + elif o is False: + builder.append('false') + elif isinstance(o, (int, long)): + builder.append(str(o)) + elif isinstance(o, float): + builder.append(self._floatstr(o)) + elif isinstance(o, (list, tuple)): + if not o: + builder.append('[]') + return + self._encode_list(o, markers, builder, _current_indent_level) + elif isinstance(o, dict): + if not o: + builder.append('{}') + return + self._encode_dict(o, markers, builder, _current_indent_level) + else: + self._mark_markers(markers, o) + res = self.default(o) + self._encode(res, markers, builder, _current_indent_level) + self._remove_markers(markers, o) + return res + + def _encode_list(self, l, markers, builder, _current_indent_level): + self._mark_markers(markers, l) + builder.append('[') + first = True + 
separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + for elem in l: + if first: + first = False else: - return encode_basestring(o) - # This doesn't pass the iterator directly to ''.join() because the - # exceptions aren't as detailed. The list call should be roughly - # equivalent to the PySequence_Fast that ''.join() would do. - chunks = self.iterencode(o, _one_shot=True) - if not isinstance(chunks, (list, tuple)): - chunks = list(chunks) - return ''.join(chunks) + builder.append(separator) + self._encode(elem, markers, builder, _current_indent_level) + del elem # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append(']') + self._remove_markers(markers, l) + + def _encode_dict(self, d, markers, builder, _current_indent_level): + self._mark_markers(markers, d) + first = True + builder.append('{') + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + if self.sort_keys: + items = sorted(d.items(), key=lambda kv: kv[0]) + else: + items = d.iteritems() + + for key, v in items: + if first: + first = False + else: + builder.append(separator) + if isinstance(key, basestring): + pass + # JavaScript is weakly typed for these, so it makes sense to + # also allow them. Many encoders seem to do something like this. 
+ elif isinstance(key, float): + key = self._floatstr(key) + elif key is True: + key = 'true' + elif key is False: + key = 'false' + elif key is None: + key = 'null' + elif isinstance(key, (int, long)): + key = str(key) + elif self.skipkeys: + continue + else: + raise TypeError("key " + repr(key) + " is not a string") + builder.append('"') + builder.append(self.encoder(key)) + builder.append('"') + builder.append(self.key_separator) + self._encode(v, markers, builder, _current_indent_level) + del key + del v # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append('}') + self._remove_markers(markers, d) def iterencode(self, o, _one_shot=False): """Encode the given object and yield each string @@ -217,86 +320,54 @@ markers = {} else: markers = None - if self.ensure_ascii: - _encoder = encode_basestring_ascii + return self._iterencode(o, markers, 0) + + def _floatstr(self, o): + # Check for specials. Note that this type of test is processor + # and/or platform-specific, so do tests which don't depend on the + # internals. + + if o != o: + text = 'NaN' + elif o == INFINITY: + text = 'Infinity' + elif o == -INFINITY: + text = '-Infinity' else: - _encoder = encode_basestring - if self.encoding != 'utf-8': - def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding): - if isinstance(o, str): - o = o.decode(_encoding) - return _orig_encoder(o) + return FLOAT_REPR(o) - def floatstr(o, allow_nan=self.allow_nan, - _repr=FLOAT_REPR, _inf=INFINITY, _neginf=-INFINITY): - # Check for specials. Note that this type of test is processor - # and/or platform-specific, so do tests which don't depend on the - # internals. 
+ if not self.allow_nan: + raise ValueError( + "Out of range float values are not JSON compliant: " + + repr(o)) - if o != o: - text = 'NaN' - elif o == _inf: - text = 'Infinity' - elif o == _neginf: - text = '-Infinity' - else: - return _repr(o) + return text - if not allow_nan: - raise ValueError( - "Out of range float values are not JSON compliant: " + - repr(o)) + def _mark_markers(self, markers, o): + if markers is not None: + if id(o) in markers: + raise ValueError("Circular reference detected") + markers[id(o)] = None - return text + def _remove_markers(self, markers, o): + if markers is not None: + del markers[id(o)] - - if (_one_shot and c_make_encoder is not None - and not self.indent and not self.sort_keys): - _iterencode = c_make_encoder( - markers, self.default, _encoder, self.indent, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, self.allow_nan) - else: - _iterencode = _make_iterencode( - markers, self.default, _encoder, self.indent, floatstr, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, _one_shot) - return _iterencode(o, 0) - -def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, - _key_separator, _item_separator, _sort_keys, _skipkeys, _one_shot, - ## HACK: hand-optimized bytecode; turn globals into locals - ValueError=ValueError, - basestring=basestring, - dict=dict, - float=float, - id=id, - int=int, - isinstance=isinstance, - list=list, - long=long, - str=str, - tuple=tuple, - ): - - def _iterencode_list(lst, _current_indent_level): + def _iterencode_list(self, lst, markers, _current_indent_level): if not lst: yield '[]' return - if markers is not None: - markerid = id(lst) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = lst + self._mark_markers(markers, lst) buf = '[' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - 
separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent buf += newline_indent else: newline_indent = None - separator = _item_separator + separator = self.item_separator first = True for value in lst: if first: @@ -304,7 +375,7 @@ else: buf = separator if isinstance(value, basestring): - yield buf + _encoder(value) + yield buf + '"' + self.encoder(value) + '"' elif value is None: yield buf + 'null' elif value is True: @@ -314,44 +385,43 @@ elif isinstance(value, (int, long)): yield buf + str(value) elif isinstance(value, float): - yield buf + _floatstr(value) + yield buf + self._floatstr(value) else: yield buf if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield ']' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, lst) - def _iterencode_dict(dct, _current_indent_level): + def _iterencode_dict(self, dct, markers, _current_indent_level): if not dct: yield '{}' return - if markers is not None: - markerid = id(dct) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = dct + self._mark_markers(markers, dct) yield '{' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - 
item_separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + item_separator = self.item_separator + newline_indent yield newline_indent else: newline_indent = None - item_separator = _item_separator + item_separator = self.item_separator first = True - if _sort_keys: + if self.sort_keys: items = sorted(dct.items(), key=lambda kv: kv[0]) else: items = dct.iteritems() @@ -361,7 +431,7 @@ # JavaScript is weakly typed for these, so it makes sense to # also allow them. Many encoders seem to do something like this. elif isinstance(key, float): - key = _floatstr(key) + key = self._floatstr(key) elif key is True: key = 'true' elif key is False: @@ -370,7 +440,7 @@ key = 'null' elif isinstance(key, (int, long)): key = str(key) - elif _skipkeys: + elif self.skipkeys: continue else: raise TypeError("key " + repr(key) + " is not a string") @@ -378,10 +448,10 @@ first = False else: yield item_separator - yield _encoder(key) - yield _key_separator + yield '"' + self.encoder(key) + '"' + yield self.key_separator if isinstance(value, basestring): - yield _encoder(value) + yield '"' + self.encoder(value) + '"' elif value is None: yield 'null' elif value is True: @@ -391,26 +461,28 @@ elif isinstance(value, (int, long)): yield str(value) elif isinstance(value, float): - yield _floatstr(value) + yield self._floatstr(value) else: if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * 
_current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield '}' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, dct) - def _iterencode(o, _current_indent_level): + def _iterencode(self, o, markers, _current_indent_level): if isinstance(o, basestring): - yield _encoder(o) + yield '"' + self.encoder(o) + '"' elif o is None: yield 'null' elif o is True: @@ -420,23 +492,19 @@ elif isinstance(o, (int, long)): yield str(o) elif isinstance(o, float): - yield _floatstr(o) + yield self._floatstr(o) elif isinstance(o, (list, tuple)): - for chunk in _iterencode_list(o, _current_indent_level): + for chunk in self._iterencode_list(o, markers, + _current_indent_level): yield chunk elif isinstance(o, dict): - for chunk in _iterencode_dict(o, _current_indent_level): + for chunk in self._iterencode_dict(o, markers, + _current_indent_level): yield chunk else: - if markers is not None: - markerid = id(o) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = o - o = _default(o) - for chunk in _iterencode(o, _current_indent_level): + self._mark_markers(markers, o) + obj = self.default(o) + for chunk in self._iterencode(obj, markers, + _current_indent_level): yield chunk - if markers is not None: - del markers[markerid] - - return _iterencode + self._remove_markers(markers, o) diff --git a/lib-python/2.7/json/tests/__init__.py b/lib-python/2.7/json/tests/__init__.py --- a/lib-python/2.7/json/tests/__init__.py +++ b/lib-python/2.7/json/tests/__init__.py @@ -1,7 +1,46 @@ import os import sys +import json +import doctest import unittest -import doctest + +from test import test_support + +# import json with and without accelerations +cjson = test_support.import_fresh_module('json', fresh=['_json']) +pyjson = test_support.import_fresh_module('json', blocked=['_json']) + +# create two base classes that will be used by the other tests +class PyTest(unittest.TestCase): + json = pyjson + 
loads = staticmethod(pyjson.loads) + dumps = staticmethod(pyjson.dumps) + +@unittest.skipUnless(cjson, 'requires _json') +class CTest(unittest.TestCase): + if cjson is not None: + json = cjson + loads = staticmethod(cjson.loads) + dumps = staticmethod(cjson.dumps) + +# test PyTest and CTest checking if the functions come from the right module +class TestPyTest(PyTest): + def test_pyjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, + 'json.scanner') + self.assertEqual(self.json.decoder.scanstring.__module__, + 'json.decoder') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + 'json.encoder') + +class TestCTest(CTest): + def test_cjson(self): + self.assertEqual(self.json.scanner.make_scanner.__module__, '_json') + self.assertEqual(self.json.decoder.scanstring.__module__, '_json') + self.assertEqual(self.json.encoder.c_make_encoder.__module__, '_json') + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + '_json') + here = os.path.dirname(__file__) @@ -17,12 +56,11 @@ return suite def additional_tests(): - import json - import json.encoder - import json.decoder suite = unittest.TestSuite() for mod in (json, json.encoder, json.decoder): suite.addTest(doctest.DocTestSuite(mod)) + suite.addTest(TestPyTest('test_pyjson')) + suite.addTest(TestCTest('test_cjson')) return suite def main(): diff --git a/lib-python/2.7/json/tests/test_check_circular.py b/lib-python/2.7/json/tests/test_check_circular.py --- a/lib-python/2.7/json/tests/test_check_circular.py +++ b/lib-python/2.7/json/tests/test_check_circular.py @@ -1,30 +1,34 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + def default_iterable(obj): return list(obj) -class TestCheckCircular(TestCase): +class TestCheckCircular(object): def test_circular_dict(self): dct = {} dct['a'] = dct - self.assertRaises(ValueError, json.dumps, dct) + self.assertRaises(ValueError, self.dumps, dct) def test_circular_list(self): lst = 
[] lst.append(lst) - self.assertRaises(ValueError, json.dumps, lst) + self.assertRaises(ValueError, self.dumps, lst) def test_circular_composite(self): dct2 = {} dct2['a'] = [] dct2['a'].append(dct2) - self.assertRaises(ValueError, json.dumps, dct2) + self.assertRaises(ValueError, self.dumps, dct2) def test_circular_default(self): - json.dumps([set()], default=default_iterable) - self.assertRaises(TypeError, json.dumps, [set()]) + self.dumps([set()], default=default_iterable) + self.assertRaises(TypeError, self.dumps, [set()]) def test_circular_off_default(self): - json.dumps([set()], default=default_iterable, check_circular=False) - self.assertRaises(TypeError, json.dumps, [set()], check_circular=False) + self.dumps([set()], default=default_iterable, check_circular=False) + self.assertRaises(TypeError, self.dumps, [set()], check_circular=False) + + +class TestPyCheckCircular(TestCheckCircular, PyTest): pass +class TestCCheckCircular(TestCheckCircular, CTest): pass diff --git a/lib-python/2.7/json/tests/test_decode.py b/lib-python/2.7/json/tests/test_decode.py --- a/lib-python/2.7/json/tests/test_decode.py +++ b/lib-python/2.7/json/tests/test_decode.py @@ -1,18 +1,17 @@ import decimal -from unittest import TestCase from StringIO import StringIO +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestDecode(TestCase): +class TestDecode(object): def test_decimal(self): - rval = json.loads('1.1', parse_float=decimal.Decimal) + rval = self.loads('1.1', parse_float=decimal.Decimal) self.assertTrue(isinstance(rval, decimal.Decimal)) self.assertEqual(rval, decimal.Decimal('1.1')) def test_float(self): - rval = json.loads('1', parse_int=float) + rval = self.loads('1', parse_int=float) self.assertTrue(isinstance(rval, float)) self.assertEqual(rval, 1.0) @@ -20,22 +19,32 @@ # Several optimizations were made that skip over calls to # the whitespace regex, so this test is designed to try and # 
exercise the uncommon cases. The array cases are already covered. - rval = json.loads('{ "key" : "value" , "k":"v" }') + rval = self.loads('{ "key" : "value" , "k":"v" }') self.assertEqual(rval, {"key":"value", "k":"v"}) + def test_empty_objects(self): + self.assertEqual(self.loads('{}'), {}) + self.assertEqual(self.loads('[]'), []) + self.assertEqual(self.loads('""'), u"") + self.assertIsInstance(self.loads('""'), unicode) + def test_object_pairs_hook(self): s = '{"xkd":1, "kcw":2, "art":3, "hxm":4, "qrt":5, "pad":6, "hoy":7}' p = [("xkd", 1), ("kcw", 2), ("art", 3), ("hxm", 4), ("qrt", 5), ("pad", 6), ("hoy", 7)] - self.assertEqual(json.loads(s), eval(s)) - self.assertEqual(json.loads(s, object_pairs_hook=lambda x: x), p) - self.assertEqual(json.load(StringIO(s), - object_pairs_hook=lambda x: x), p) - od = json.loads(s, object_pairs_hook=OrderedDict) + self.assertEqual(self.loads(s), eval(s)) + self.assertEqual(self.loads(s, object_pairs_hook=lambda x: x), p) + self.assertEqual(self.json.load(StringIO(s), + object_pairs_hook=lambda x: x), p) + od = self.loads(s, object_pairs_hook=OrderedDict) self.assertEqual(od, OrderedDict(p)) self.assertEqual(type(od), OrderedDict) # the object_pairs_hook takes priority over the object_hook - self.assertEqual(json.loads(s, + self.assertEqual(self.loads(s, object_pairs_hook=OrderedDict, object_hook=lambda x: None), OrderedDict(p)) + + +class TestPyDecode(TestDecode, PyTest): pass +class TestCDecode(TestDecode, CTest): pass diff --git a/lib-python/2.7/json/tests/test_default.py b/lib-python/2.7/json/tests/test_default.py --- a/lib-python/2.7/json/tests/test_default.py +++ b/lib-python/2.7/json/tests/test_default.py @@ -1,9 +1,12 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestDefault(TestCase): +class TestDefault(object): def test_default(self): self.assertEqual( - json.dumps(type, default=repr), - json.dumps(repr(type))) + self.dumps(type, default=repr), + self.dumps(repr(type))) 
+ + +class TestPyDefault(TestDefault, PyTest): pass +class TestCDefault(TestDefault, CTest): pass diff --git a/lib-python/2.7/json/tests/test_dump.py b/lib-python/2.7/json/tests/test_dump.py --- a/lib-python/2.7/json/tests/test_dump.py +++ b/lib-python/2.7/json/tests/test_dump.py @@ -1,21 +1,23 @@ -from unittest import TestCase from cStringIO import StringIO +from json.tests import PyTest, CTest -import json -class TestDump(TestCase): +class TestDump(object): def test_dump(self): sio = StringIO() - json.dump({}, sio) + self.json.dump({}, sio) self.assertEqual(sio.getvalue(), '{}') def test_dumps(self): - self.assertEqual(json.dumps({}), '{}') + self.assertEqual(self.dumps({}), '{}') def test_encode_truefalse(self): - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {True: False, False: True}, sort_keys=True), '{"false": true, "true": false}') - self.assertEqual(json.dumps( + self.assertEqual(self.dumps( {2: 3.0, 4.0: 5L, False: 1, 6L: True}, sort_keys=True), '{"false": 1, "2": 3.0, "4.0": 5, "6": true}') + +class TestPyDump(TestDump, PyTest): pass +class TestCDump(TestDump, CTest): pass diff --git a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py --- a/lib-python/2.7/json/tests/test_encode_basestring_ascii.py +++ b/lib-python/2.7/json/tests/test_encode_basestring_ascii.py @@ -1,8 +1,6 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json.encoder -from json import dumps -from collections import OrderedDict CASES = [ (u'/\\"\ucafe\ubabe\uab98\ufcde\ubcda\uef4a\x08\x0c\n\r\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?', '"/\\\\\\"\\ucafe\\ubabe\\uab98\\ufcde\\ubcda\\uef4a\\b\\f\\n\\r\\t`1~!@#$%^&*()_+-=[]{}|;:\',./<>?"'), @@ -23,19 +21,11 @@ (u'\u0123\u4567\u89ab\ucdef\uabcd\uef4a', '"\\u0123\\u4567\\u89ab\\ucdef\\uabcd\\uef4a"'), ] -class TestEncodeBaseStringAscii(TestCase): - def test_py_encode_basestring_ascii(self): - 
self._test_encode_basestring_ascii(json.encoder.py_encode_basestring_ascii) - - def test_c_encode_basestring_ascii(self): - if not json.encoder.c_encode_basestring_ascii: - return - self._test_encode_basestring_ascii(json.encoder.c_encode_basestring_ascii) - - def _test_encode_basestring_ascii(self, encode_basestring_ascii): - fname = encode_basestring_ascii.__name__ +class TestEncodeBasestringAscii(object): + def test_encode_basestring_ascii(self): + fname = self.json.encoder.encode_basestring_ascii.__name__ for input_string, expect in CASES: - result = encode_basestring_ascii(input_string) + result = self.json.encoder.encode_basestring_ascii(input_string) self.assertEqual(result, expect, '{0!r} != {1!r} for {2}({3!r})'.format( result, expect, fname, input_string)) @@ -43,5 +33,9 @@ def test_ordered_dict(self): # See issue 6105 items = [('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)] - s = json.dumps(OrderedDict(items)) + s = self.dumps(OrderedDict(items)) self.assertEqual(s, '{"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}') + + +class TestPyEncodeBasestringAscii(TestEncodeBasestringAscii, PyTest): pass +class TestCEncodeBasestringAscii(TestEncodeBasestringAscii, CTest): pass diff --git a/lib-python/2.7/json/tests/test_fail.py b/lib-python/2.7/json/tests/test_fail.py --- a/lib-python/2.7/json/tests/test_fail.py +++ b/lib-python/2.7/json/tests/test_fail.py @@ -1,6 +1,4 @@ -from unittest import TestCase - -import json +from json.tests import PyTest, CTest # Fri Dec 30 18:57:26 2005 JSONDOCS = [ @@ -61,15 +59,15 @@ 18: "spec doesn't specify any nesting limitations", } -class TestFail(TestCase): +class TestFail(object): def test_failures(self): for idx, doc in enumerate(JSONDOCS): idx = idx + 1 if idx in SKIPS: - json.loads(doc) + self.loads(doc) continue try: - json.loads(doc) + self.loads(doc) except ValueError: pass else: @@ -79,7 +77,11 @@ data = {'a' : 1, (1, 2) : 2} #This is for c encoder - self.assertRaises(TypeError, json.dumps, data) + 
self.assertRaises(TypeError, self.dumps, data) #This is for python encoder - self.assertRaises(TypeError, json.dumps, data, indent=True) + self.assertRaises(TypeError, self.dumps, data, indent=True) + + +class TestPyFail(TestFail, PyTest): pass +class TestCFail(TestFail, CTest): pass diff --git a/lib-python/2.7/json/tests/test_float.py b/lib-python/2.7/json/tests/test_float.py --- a/lib-python/2.7/json/tests/test_float.py +++ b/lib-python/2.7/json/tests/test_float.py @@ -1,19 +1,22 @@ import math -from unittest import TestCase +from json.tests import PyTest, CTest -import json -class TestFloat(TestCase): +class TestFloat(object): def test_floats(self): for num in [1617161771.7650001, math.pi, math.pi**100, math.pi**-100, 3.1]: - self.assertEqual(float(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(float(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) def test_ints(self): for num in [1, 1L, 1<<32, 1<<64]: - self.assertEqual(json.dumps(num), str(num)) - self.assertEqual(int(json.dumps(num)), num) - self.assertEqual(json.loads(json.dumps(num)), num) - self.assertEqual(json.loads(unicode(json.dumps(num))), num) + self.assertEqual(self.dumps(num), str(num)) + self.assertEqual(int(self.dumps(num)), num) + self.assertEqual(self.loads(self.dumps(num)), num) + self.assertEqual(self.loads(unicode(self.dumps(num))), num) + + +class TestPyFloat(TestFloat, PyTest): pass +class TestCFloat(TestFloat, CTest): pass diff --git a/lib-python/2.7/json/tests/test_indent.py b/lib-python/2.7/json/tests/test_indent.py --- a/lib-python/2.7/json/tests/test_indent.py +++ b/lib-python/2.7/json/tests/test_indent.py @@ -1,9 +1,9 @@ -from unittest import TestCase +import textwrap +from StringIO import StringIO +from json.tests import PyTest, CTest -import json -import textwrap -class 
TestIndent(TestCase): +class TestIndent(object): def test_indent(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -30,12 +30,31 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(',', ': ')) - h1 = json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + def test_indent0(self): + h = {3: 1} + def check(indent, expected): + d1 = self.dumps(h, indent=indent) + self.assertEqual(d1, expected) + + sio = StringIO() + self.json.dump(h, sio, indent=indent) + self.assertEqual(sio.getvalue(), expected) + + # indent=0 should emit newlines + check(0, '{\n"3": 1\n}') + # indent=None is more compact + check(None, '{"3": 1}') + + +class TestPyIndent(TestIndent, PyTest): pass +class TestCIndent(TestIndent, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass1.py b/lib-python/2.7/json/tests/test_pass1.py --- a/lib-python/2.7/json/tests/test_pass1.py +++ b/lib-python/2.7/json/tests/test_pass1.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass1.json JSON = r''' @@ -62,15 +61,19 @@ ,"rosebud"] ''' -class TestPass1(TestCase): +class TestPass1(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) try: - json.dumps(res, allow_nan=False) + self.dumps(res, allow_nan=False) except ValueError: pass else: self.fail("23456789012E666 should be out of range") + + +class TestPyPass1(TestPass1, PyTest): pass +class TestCPass1(TestPass1, CTest): pass diff --git 
a/lib-python/2.7/json/tests/test_pass2.py b/lib-python/2.7/json/tests/test_pass2.py --- a/lib-python/2.7/json/tests/test_pass2.py +++ b/lib-python/2.7/json/tests/test_pass2.py @@ -1,14 +1,18 @@ -from unittest import TestCase -import json +from json.tests import PyTest, CTest + # from http://json.org/JSON_checker/test/pass2.json JSON = r''' [[[[[[[[[[[[[[[[[[["Not too deep"]]]]]]]]]]]]]]]]]]] ''' -class TestPass2(TestCase): +class TestPass2(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass2(TestPass2, PyTest): pass +class TestCPass2(TestPass2, CTest): pass diff --git a/lib-python/2.7/json/tests/test_pass3.py b/lib-python/2.7/json/tests/test_pass3.py --- a/lib-python/2.7/json/tests/test_pass3.py +++ b/lib-python/2.7/json/tests/test_pass3.py @@ -1,6 +1,5 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json # from http://json.org/JSON_checker/test/pass3.json JSON = r''' @@ -12,9 +11,14 @@ } ''' -class TestPass3(TestCase): + +class TestPass3(object): def test_parse(self): # test in/out equivalence and parsing - res = json.loads(JSON) - out = json.dumps(res) - self.assertEqual(res, json.loads(out)) + res = self.loads(JSON) + out = self.dumps(res) + self.assertEqual(res, self.loads(out)) + + +class TestPyPass3(TestPass3, PyTest): pass +class TestCPass3(TestPass3, CTest): pass diff --git a/lib-python/2.7/json/tests/test_recursion.py b/lib-python/2.7/json/tests/test_recursion.py --- a/lib-python/2.7/json/tests/test_recursion.py +++ b/lib-python/2.7/json/tests/test_recursion.py @@ -1,28 +1,16 @@ -from unittest import TestCase +from json.tests import PyTest, CTest -import json class JSONTestObject: pass -class RecursiveJSONEncoder(json.JSONEncoder): - recurse = False - def default(self, o): - if o is JSONTestObject: - if 
self.recurse: - return [JSONTestObject] - else: - return 'JSONTestObject' - return json.JSONEncoder.default(o) - - -class TestRecursion(TestCase): +class TestRecursion(object): def test_listrecursion(self): x = [] x.append(x) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -31,7 +19,7 @@ y = [x] x.append(y) try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -39,13 +27,13 @@ y = [] x = [y, y] # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_dictrecursion(self): x = {} x["test"] = x try: - json.dumps(x) + self.dumps(x) except ValueError: pass else: @@ -53,9 +41,19 @@ x = {} y = {"a": x, "b": x} # ensure that the marker is cleared - json.dumps(x) + self.dumps(x) def test_defaultrecursion(self): + class RecursiveJSONEncoder(self.json.JSONEncoder): + recurse = False + def default(self, o): + if o is JSONTestObject: + if self.recurse: + return [JSONTestObject] + else: + return 'JSONTestObject' + return pyjson.JSONEncoder.default(o) + enc = RecursiveJSONEncoder() self.assertEqual(enc.encode(JSONTestObject), '"JSONTestObject"') enc.recurse = True @@ -65,3 +63,46 @@ pass else: self.fail("didn't raise ValueError on default recursion") + + + def test_highly_nested_objects_decoding(self): + # test that loading highly-nested objects doesn't segfault when C + # accelerations are used. 
See #12017 + # str + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '1' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('{"a":' * 100000 + '[1]' + '}' * 100000) + with self.assertRaises(RuntimeError): + self.loads('[' * 100000 + '1' + ']' * 100000) + # unicode + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'1' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'{"a":' * 100000 + u'[1]' + u'}' * 100000) + with self.assertRaises(RuntimeError): + self.loads(u'[' * 100000 + u'1' + u']' * 100000) + + def test_highly_nested_objects_encoding(self): + # See #12051 + l, d = [], {} + for x in xrange(100000): + l, d = [l], {'k':d} + with self.assertRaises(RuntimeError): + self.dumps(l) + with self.assertRaises(RuntimeError): + self.dumps(d) + + def test_endless_recursion(self): + # See #12051 + class EndlessJSONEncoder(self.json.JSONEncoder): + def default(self, o): + """If check_circular is False, this will keep adding another list.""" + return [o] + + with self.assertRaises(RuntimeError): + EndlessJSONEncoder(check_circular=False).encode(5j) + + +class TestPyRecursion(TestRecursion, PyTest): pass +class TestCRecursion(TestRecursion, CTest): pass diff --git a/lib-python/2.7/json/tests/test_scanstring.py b/lib-python/2.7/json/tests/test_scanstring.py --- a/lib-python/2.7/json/tests/test_scanstring.py +++ b/lib-python/2.7/json/tests/test_scanstring.py @@ -1,18 +1,10 @@ import sys -import decimal -from unittest import TestCase +from json.tests import PyTest, CTest -import json -import json.decoder -class TestScanString(TestCase): - def test_py_scanstring(self): - self._test_scanstring(json.decoder.py_scanstring) - - def test_c_scanstring(self): - self._test_scanstring(json.decoder.c_scanstring) - - def _test_scanstring(self, scanstring): +class TestScanstring(object): + def test_scanstring(self): + scanstring = self.json.decoder.scanstring self.assertEqual( 
scanstring('"z\\ud834\\udd20x"', 1, None, True), (u'z\U0001d120x', 16)) @@ -103,10 +95,15 @@ (u'Bad value', 12)) def test_issue3623(self): - self.assertRaises(ValueError, json.decoder.scanstring, b"xxx", 1, + self.assertRaises(ValueError, self.json.decoder.scanstring, b"xxx", 1, "xxx") self.assertRaises(UnicodeDecodeError, - json.encoder.encode_basestring_ascii, b"xx\xff") + self.json.encoder.encode_basestring_ascii, b"xx\xff") def test_overflow(self): - self.assertRaises(OverflowError, json.decoder.scanstring, b"xxx", sys.maxsize+1) + with self.assertRaises(OverflowError): + self.json.decoder.scanstring(b"xxx", sys.maxsize+1) + + +class TestPyScanstring(TestScanstring, PyTest): pass +class TestCScanstring(TestScanstring, CTest): pass diff --git a/lib-python/2.7/json/tests/test_separators.py b/lib-python/2.7/json/tests/test_separators.py --- a/lib-python/2.7/json/tests/test_separators.py +++ b/lib-python/2.7/json/tests/test_separators.py @@ -1,10 +1,8 @@ import textwrap -from unittest import TestCase +from json.tests import PyTest, CTest -import json - -class TestSeparators(TestCase): +class TestSeparators(object): def test_separators(self): h = [['blorpie'], ['whoops'], [], 'd-shtaeou', 'd-nthiouh', 'i-vhbjkhnth', {'nifty': 87}, {'field': 'yes', 'morefield': False} ] @@ -31,12 +29,16 @@ ]""") - d1 = json.dumps(h) - d2 = json.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) + d1 = self.dumps(h) + d2 = self.dumps(h, indent=2, sort_keys=True, separators=(' ,', ' : ')) - h1 = json.loads(d1) - h2 = json.loads(d2) + h1 = self.loads(d1) + h2 = self.loads(d2) self.assertEqual(h1, h) self.assertEqual(h2, h) self.assertEqual(d2, expect) + + +class TestPySeparators(TestSeparators, PyTest): pass +class TestCSeparators(TestSeparators, CTest): pass diff --git a/lib-python/2.7/json/tests/test_speedups.py b/lib-python/2.7/json/tests/test_speedups.py --- a/lib-python/2.7/json/tests/test_speedups.py +++ b/lib-python/2.7/json/tests/test_speedups.py @@ -1,24 +1,23 @@ 
-import decimal -from unittest import TestCase +from json.tests import CTest -from json import decoder, encoder, scanner -class TestSpeedups(TestCase): +class TestSpeedups(CTest): def test_scanstring(self): - self.assertEqual(decoder.scanstring.__module__, "_json") - self.assertTrue(decoder.scanstring is decoder.c_scanstring) + self.assertEqual(self.json.decoder.scanstring.__module__, "_json") + self.assertIs(self.json.decoder.scanstring, self.json.decoder.c_scanstring) def test_encode_basestring_ascii(self): - self.assertEqual(encoder.encode_basestring_ascii.__module__, "_json") - self.assertTrue(encoder.encode_basestring_ascii is - encoder.c_encode_basestring_ascii) + self.assertEqual(self.json.encoder.encode_basestring_ascii.__module__, + "_json") + self.assertIs(self.json.encoder.encode_basestring_ascii, + self.json.encoder.c_encode_basestring_ascii) -class TestDecode(TestCase): +class TestDecode(CTest): def test_make_scanner(self): - self.assertRaises(AttributeError, scanner.c_make_scanner, 1) + self.assertRaises(AttributeError, self.json.scanner.c_make_scanner, 1) def test_make_encoder(self): - self.assertRaises(TypeError, encoder.c_make_encoder, + self.assertRaises(TypeError, self.json.encoder.c_make_encoder, None, "\xCD\x7D\x3D\x4E\x12\x4C\xF9\x79\xD7\x52\xBA\x82\xF2\x27\x4A\x7D\xA0\xCA\x75", None) diff --git a/lib-python/2.7/json/tests/test_unicode.py b/lib-python/2.7/json/tests/test_unicode.py --- a/lib-python/2.7/json/tests/test_unicode.py +++ b/lib-python/2.7/json/tests/test_unicode.py @@ -1,11 +1,10 @@ -from unittest import TestCase +from collections import OrderedDict +from json.tests import PyTest, CTest -import json -from collections import OrderedDict -class TestUnicode(TestCase): +class TestUnicode(object): def test_encoding1(self): - encoder = json.JSONEncoder(encoding='utf-8') + encoder = self.json.JSONEncoder(encoding='utf-8') u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') ju = encoder.encode(u) @@ 
-15,68 +14,78 @@ def test_encoding2(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' s = u.encode('utf-8') - ju = json.dumps(u, encoding='utf-8') - js = json.dumps(s, encoding='utf-8') + ju = self.dumps(u, encoding='utf-8') + js = self.dumps(s, encoding='utf-8') self.assertEqual(ju, js) def test_encoding3(self): u = u'\N{GREEK SMALL LETTER ALPHA}\N{GREEK CAPITAL LETTER OMEGA}' - j = json.dumps(u) + j = self.dumps(u) self.assertEqual(j, '"\\u03b1\\u03a9"') From noreply at buildbot.pypy.org Wed May 9 19:13:29 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 9 May 2012 19:13:29 +0200 (CEST) Subject: [pypy-commit] pypy step-one-xrange: merge messup Message-ID: <20120509171329.BCA7382E46@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: step-one-xrange Changeset: r54988:8f16e211c4b2 Date: 2012-05-09 19:07 +0200 http://bitbucket.org/pypy/pypy/changeset/8f16e211c4b2/ Log: merge messup diff --git a/pypy/module/__builtin__/functional.py b/pypy/module/__builtin__/functional.py --- a/pypy/module/__builtin__/functional.py +++ b/pypy/module/__builtin__/functional.py @@ -317,16 +317,10 @@ self.start = start self.len = len self.step = step - self.promote_step = promote_step - def descr_new(space, w_subtype, w_start, w_stop=None, w_step=None): + def descr_new(space, w_subtype, w_start, w_stop=None, w_step=1): start = _toint(space, w_start) - if space.is_w(w_step, space.w_None): # no step argument provided - step = 1 - promote_step = True - else: - step = _toint(space, w_step) - promote_step = False + step = _toint(space, w_step) if space.is_w(w_stop, space.w_None): # only 1 argument provided start, stop = 0, start else: @@ -362,12 +356,14 @@ space.wrap("xrange object index out of range")) def descr_iter(self): - if self.promote_step and self.step == 1: - return self.space.wrap(W_XRangeStepOneIterator(self.space, self.start, - self.stop)) + if self.step == 1: + stop = self.start + self.len + return 
self.space.wrap(W_XRangeStepOneIterator(self.space, + self.start, + stop)) else: return self.space.wrap(W_XRangeIterator(self.space, self.start, - self.len, self.step)) + self.len, self.step)) def descr_reversed(self): lastitem = self.start + (self.len-1) * self.step From noreply at buildbot.pypy.org Wed May 9 21:19:09 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 9 May 2012 21:19:09 +0200 (CEST) Subject: [pypy-commit] pypy gc-minimark-pinning: merge default Message-ID: <20120509191909.5EE1382E46@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: gc-minimark-pinning Changeset: r54989:41c37da0ae16 Date: 2012-05-08 17:21 +0200 http://bitbucket.org/pypy/pypy/changeset/41c37da0ae16/ Log: merge default diff too long, truncating to 10000 out of 14962 lines diff --git a/lib-python/modified-2.7/test/test_peepholer.py b/lib-python/modified-2.7/test/test_peepholer.py --- a/lib-python/modified-2.7/test/test_peepholer.py +++ b/lib-python/modified-2.7/test/test_peepholer.py @@ -145,12 +145,15 @@ def test_binary_subscr_on_unicode(self): # valid code get optimized - asm = dis_single('u"foo"[0]') - self.assertIn("(u'f')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) - asm = dis_single('u"\u0061\uffff"[1]') - self.assertIn("(u'\\uffff')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) + # XXX for now we always disable this optimization + # XXX see CPython's issue5057 + if 0: + asm = dis_single('u"foo"[0]') + self.assertIn("(u'f')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + asm = dis_single('u"\u0061\uffff"[1]') + self.assertIn("(u'\\uffff')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) # invalid code doesn't get optimized # out of range diff --git a/lib_pypy/_ctypes_test.py b/lib_pypy/_ctypes_test.py --- a/lib_pypy/_ctypes_test.py +++ b/lib_pypy/_ctypes_test.py @@ -21,7 +21,7 @@ # Compile .c file include_dir = os.path.join(thisdir, '..', 'include') if sys.platform == 'win32': - ccflags = [] + ccflags = ['-D_CRT_SECURE_NO_WARNINGS'] else: ccflags = 
['-fPIC'] res = compiler.compile([os.path.join(thisdir, '_ctypes_test.c')], @@ -34,6 +34,13 @@ if sys.platform == 'win32': # XXX libpypy-c.lib is currently not installed automatically library = os.path.join(thisdir, '..', 'include', 'libpypy-c') + if not os.path.exists(library + '.lib'): + #For a nightly build + library = os.path.join(thisdir, '..', 'include', 'python27') + if not os.path.exists(library + '.lib'): + # For a local translation + library = os.path.join(thisdir, '..', 'pypy', 'translator', + 'goal', 'libpypy-c') libraries = [library, 'oleaut32'] extra_ldargs = ['/MANIFEST'] # needed for VC10 else: diff --git a/lib_pypy/_testcapi.py b/lib_pypy/_testcapi.py --- a/lib_pypy/_testcapi.py +++ b/lib_pypy/_testcapi.py @@ -16,7 +16,7 @@ # Compile .c file include_dir = os.path.join(thisdir, '..', 'include') if sys.platform == 'win32': - ccflags = [] + ccflags = ['-D_CRT_SECURE_NO_WARNINGS'] else: ccflags = ['-fPIC', '-Wimplicit-function-declaration'] res = compiler.compile([os.path.join(thisdir, '_testcapimodule.c')], @@ -29,6 +29,13 @@ if sys.platform == 'win32': # XXX libpypy-c.lib is currently not installed automatically library = os.path.join(thisdir, '..', 'include', 'libpypy-c') + if not os.path.exists(library + '.lib'): + #For a nightly build + library = os.path.join(thisdir, '..', 'include', 'python27') + if not os.path.exists(library + '.lib'): + # For a local translation + library = os.path.join(thisdir, '..', 'pypy', 'translator', + 'goal', 'libpypy-c') libraries = [library, 'oleaut32'] extra_ldargs = ['/MANIFEST', # needed for VC10 '/EXPORT:init_testcapi'] diff --git a/pypy/doc/cppyy.rst b/pypy/doc/cppyy.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/cppyy.rst @@ -0,0 +1,587 @@ +============================ +cppyy: C++ bindings for PyPy +============================ + +The cppyy module provides C++ bindings for PyPy by using the reflection +information extracted from C++ header files by means of the +`Reflex package`_. 
+For this to work, you have to both install Reflex and build PyPy from the +reflex-support branch. +As indicated by this being a branch, support for Reflex is still +experimental. +However, it is functional enough to put it in the hands of those who want +to give it a try. +In the medium term, cppyy will move away from Reflex and instead use +`cling`_ as its backend, which is based on `llvm`_. +Although that will change the logistics of the generation of reflection +information, it will not change the python-side interface. + +.. _`Reflex package`: http://root.cern.ch/drupal/content/reflex +.. _`cling`: http://root.cern.ch/drupal/content/cling +.. _`llvm`: http://llvm.org/ + + +Motivation +========== + +The cppyy module offers two unique features, which result in great +performance as well as better functionality and cross-language integration +than would otherwise be possible. +First, cppyy is written in RPython and therefore open to optimizations by the +JIT up until the actual point of call into C++. +This means that there are no conversions necessary between a garbage collected +and a reference counted environment, as is needed for the use of existing +extension modules written or generated for CPython. +It also means that if variables are already unboxed by the JIT, they can be +passed through directly to C++. +Second, Reflex (and cling far more so) adds dynamic features to C++, thus +greatly reducing impedance mismatches between the two languages. +In fact, Reflex is dynamic enough that you could write the runtime bindings +generation in python (as opposed to RPython) and this is used to create very +natural "pythonizations" of the bound code. + + +Installation +============ + +For now, the easiest way of getting the latest version of Reflex is by +installing the ROOT package. +Besides getting the latest version of Reflex, another advantage is that with +the full ROOT package, you can also use your Reflex-bound code on `CPython`_. 
+`Download`_ a binary or install from `source`_. +Some Linux and Mac systems may have ROOT provided in the list of scientific +software of their packager. +If, however, you prefer a standalone version of Reflex, the best is to get +this `recent snapshot`_, and install like so:: + + $ tar jxf reflex-2012-05-02.tar.bz2 + $ cd reflex-2012-05-02 + $ build/autogen + $ ./configure + $ make && make install + +Also, make sure you have a version of `gccxml`_ installed, which is most +easily provided by the packager of your system. +If you read up on gccxml, you'll probably notice that it is no longer being +developed and hence will not provide C++11 support. +That's why the medium term plan is to move to `cling`_. + +.. _`Download`: http://root.cern.ch/drupal/content/downloading-root +.. _`source`: http://root.cern.ch/drupal/content/installing-root-source +.. _`recent snapshot`: http://cern.ch/wlav/reflex-2012-05-02.tar.bz2 +.. _`gccxml`: http://www.gccxml.org + +Next, get the `PyPy sources`_, select the reflex-support branch, and build +pypy-c. +For the build to succeed, the ``$ROOTSYS`` environment variable must point to +the location of your ROOT (or standalone Reflex) installation:: + + $ hg clone https://bitbucket.org/pypy/pypy + $ cd pypy + $ hg up reflex-support + $ cd pypy/translator/goal + $ python translate.py -O jit --gcrootfinder=shadowstack targetpypystandalone.py --withmod-cppyy + +This will build a ``pypy-c`` that includes the cppyy module, and through that, +Reflex support. +Of course, if you already have a pre-built version of the ``pypy`` interpreter, +you can use that for the translation rather than ``python``. + +.. _`PyPy sources`: https://bitbucket.org/pypy/pypy/overview + + +Basic example +============= + +Now test with a trivial example whether all packages are properly installed +and functional. 
+First, create a C++ header file with some class in it (note that all functions +are made inline for convenience; a real-world example would of course have a +corresponding source file):: + + $ cat MyClass.h + class MyClass { + public: + MyClass(int i = -99) : m_myint(i) {} + + int GetMyInt() { return m_myint; } + void SetMyInt(int i) { m_myint = i; } + + public: + int m_myint; + }; + +Then, generate the bindings using ``genreflex`` (part of ROOT), and compile the +code:: + + $ genreflex MyClass.h + $ g++ -fPIC -rdynamic -O2 -shared -I$ROOTSYS/include MyClass_rflx.cpp -o libMyClassDict.so + +Now you're ready to use the bindings. +Since the bindings are designed to look pythonistic, it should be +straightforward:: + + $ pypy-c + >>>> import cppyy + >>>> cppyy.load_reflection_info("libMyClassDict.so") + + >>>> myinst = cppyy.gbl.MyClass(42) + >>>> print myinst.GetMyInt() + 42 + >>>> myinst.SetMyInt(33) + >>>> print myinst.m_myint + 33 + >>>> myinst.m_myint = 77 + >>>> print myinst.GetMyInt() + 77 + >>>> help(cppyy.gbl.MyClass) # shows that normal python introspection works + +That's all there is to it! 
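The Motivation section above mentions that runtime "pythonizations" of bound code can be written in ordinary python. To make that concrete for the class just shown, here is a minimal plain-python sketch; ``MyClassStandIn`` is a hypothetical stand-in for the bound ``cppyy.gbl.MyClass``, so no Reflex or cppyy installation is needed to run it:

```python
# Plain-python sketch of a "pythonization": dress up C++-style Get/Set
# accessors as a single python property.  MyClassStandIn is a hypothetical
# stand-in for the bound cppyy.gbl.MyClass; cppyy itself is not required.

class MyClassStandIn(object):
    def __init__(self, i=-99):
        self.m_myint = i

    def GetMyInt(self):
        return self.m_myint

    def SetMyInt(self, i):
        self.m_myint = i

def pythonize_accessor(klass, name, getter, setter):
    # replace a Get/Set pair with a python property of the given name
    prop = property(getattr(klass, getter), getattr(klass, setter))
    setattr(klass, name, prop)

pythonize_accessor(MyClassStandIn, 'myint', 'GetMyInt', 'SetMyInt')

obj = MyClassStandIn(42)
print(obj.myint)       # -> 42, read goes through GetMyInt
obj.myint = 33         # write goes through SetMyInt
print(obj.GetMyInt())  # -> 33
```

Real cppyy pythonizations work the same way: since bound classes are ordinary python classes, properties and helper methods can simply be attached to them after loading the reflection info.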
+ + +Advanced example +================ +The following snippet of C++ is very contrived, to allow showing that such +pathological code can be handled and to show how certain features play out in +practice:: + + $ cat MyAdvanced.h + #include <string> + + class Base1 { + public: + Base1(int i) : m_i(i) {} + virtual ~Base1() {} + int m_i; + }; + + class Base2 { + public: + Base2(double d) : m_d(d) {} + virtual ~Base2() {} + double m_d; + }; + + class C; + + class Derived : public virtual Base1, public virtual Base2 { + public: + Derived(const std::string& name, int i, double d) : Base1(i), Base2(d), m_name(name) {} + virtual C* gimeC() { return (C*)0; } + std::string m_name; + }; + + Base1* BaseFactory(const std::string& name, int i, double d) { + return new Derived(name, i, d); + } + +This code is still only in a header file, with all functions inline, for +convenience of the example. +If the implementations live in a separate source file or shared library, the +only change needed is to link those in when building the reflection library. + +If you were to run ``genreflex`` like above in the basic example, you will +find that not all classes of interest will be reflected, nor will the +global factory function be. +In particular, ``std::string`` will be missing, since it is not defined in +this header file, but in a header file that is included. +In practical terms, general classes such as ``std::string`` should live in a +core reflection set, but for the moment assume we want to have it in the +reflection library that we are building for this example. + +The ``genreflex`` script can be steered using a so-called `selection file`_, +which is a simple XML file specifying, either explicitly or by using a +pattern, which classes, variables, namespaces, etc. to select from the given +header file. +With the aid of a selection file, a large project can be easily managed: +simply ``#include`` all relevant headers into a single header file that is +handed to ``genreflex``. 
+Then, apply a selection file to pick up all the relevant classes. +For our purposes, the following rather straightforward selection will do +(the name ``lcgdict`` for the root is historical, but required):: + + $ cat MyAdvanced.xml + <lcgdict> + <class pattern="Base?" /> + <class name="Derived" /> + <class name="std::string" /> + <function name="BaseFactory" /> + </lcgdict> + +.. _`selection file`: http://root.cern.ch/drupal/content/generating-reflex-dictionaries + +Now the reflection info can be generated and compiled:: + + $ genreflex MyAdvanced.h --selection=MyAdvanced.xml + $ g++ -fPIC -rdynamic -O2 -shared -I$ROOTSYS/include MyAdvanced_rflx.cpp -o libAdvExDict.so + +and subsequently be used from PyPy:: + + >>>> import cppyy + >>>> cppyy.load_reflection_info("libAdvExDict.so") + + >>>> d = cppyy.gbl.BaseFactory("name", 42, 3.14) + >>>> type(d) + <class '__main__.Derived'> + >>>> isinstance(d, cppyy.gbl.Base1) + True + >>>> isinstance(d, cppyy.gbl.Base2) + True + >>>> d.m_i, d.m_d + (42, 3.14) + >>>> d.m_name == "name" + True + >>>> + +Again, that's all there is to it! + +A couple of things to note, though. +If you look back at the C++ definition of the ``BaseFactory`` function, +you will see that it declares the return type to be a ``Base1``, so why do the +bindings return an object of the actual type ``Derived``? +This choice is made for a couple of reasons. +First, it makes method dispatching easier: if bound objects are always their +most derived type, then it is easy to calculate any offsets, if necessary. +Second, it makes memory management easier: the combination of the type and +the memory address uniquely identifies an object. +That way, it can be recycled and object identity can be maintained if it is +entered as a function argument into C++ and comes back to PyPy as a return +value. +Last, but not least, casting is decidedly unpythonistic. +By always providing the most derived type known, casting becomes unnecessary. +For example, the data member of ``Base2`` is simply directly available. +Note also that the unreflected ``gimeC`` method of ``Derived`` does not +preclude its use. 
+It is only the ``gimeC`` method that is unusable as long as class ``C`` is +unknown to the system. + + +Features +======== + +The following is not meant to be an exhaustive list, since cppyy is still +under active development. +Furthermore, the intention is that every feature is as natural as possible on +the python side, so if you find something missing in the list below, simply +try it out. +It is not always possible to provide exact mapping between python and C++ +(active memory management is one such case), but by and large, if the use of a +feature does not strike you as obvious, it is more likely to simply be a bug. +That is a strong statement to make, but also a worthy goal. + +* **abstract classes**: Are represented as python classes, since they are + needed to complete the inheritance hierarchies, but will raise an exception + if an attempt is made to instantiate from them. + +* **arrays**: Supported for builtin data types only, as used from module + ``array``. + Out-of-bounds checking is limited to those cases where the size is known at + compile time (and hence part of the reflection info). + +* **builtin data types**: Map onto the expected equivalent python types, with + the caveat that there may be size differences, and thus it is possible that + exceptions are raised if an overflow is detected. + +* **casting**: Is supposed to be unnecessary. + Object pointer returns from functions provide the most derived class known + in the hierarchy of the object being returned. + This is important to preserve object identity as well as to make casting, + a pure C++ feature after all, superfluous. + +* **classes and structs**: Get mapped onto python classes, where they can be + instantiated as expected. + If classes are inner classes or live in a namespace, their naming and + location will reflect that. + +* **data members**: Public data members are represented as python properties + and provide read and write access on instances as expected. 
+ +* **default arguments**: C++ default arguments work as expected, but python + keywords are not supported. + It is technically possible to support keywords, but for the C++ interface, + the formal argument names have no meaning and are not considered part of the + API, hence it is not a good idea to use keywords. + +* **doc strings**: The doc string of a method or function contains the C++ + arguments and return types of all overloads of that name, as applicable. + +* **enums**: Are translated as ints with no further checking. + +* **functions**: Work as expected and live in their appropriate namespace + (which can be the global one, ``cppyy.gbl``). + +* **inheritance**: All combinations of inheritance on the C++ (single, + multiple, virtual) are supported in the binding. + However, new python classes can only use single inheritance from a bound C++ + class. + Multiple inheritance would introduce two "this" pointers in the binding. + This is a current, not a fundamental, limitation. + The C++ side will not see any overridden methods on the python side, as + cross-inheritance is planned but not yet supported. + +* **methods**: Are represented as python methods and work as expected. + They are first class objects and can be bound to an instance. + Virtual C++ methods work as expected. + To select a specific virtual method, do like with normal python classes + that override methods: select it from the class that you need, rather than + calling the method on the instance. + To select a specific overload, use the __dispatch__ special function, which + takes the name of the desired method and its signature (which can be + obtained from the doc string) as arguments. + +* **namespaces**: Are represented as python classes. + Namespaces are more open-ended than classes, so sometimes initial access may + result in updates as data and functions are looked up and constructed + lazily. 
+ Thus the result of ``dir()`` on a namespace should not be relied upon: it + only shows the already accessed members. (TODO: to be fixed by implementing + __dir__.) + The global namespace is ``cppyy.gbl``. + +* **operator conversions**: If defined in the C++ class and a python + equivalent exists (i.e. all builtin integer and floating point types, as well + as ``bool``), it will map onto that python conversion. + Note that ``char*`` is mapped onto ``__str__``. + +* **operator overloads**: If defined in the C++ class and if a python + equivalent is available (not always the case, think e.g. of ``operator||``), + then they work as expected. + Special care needs to be taken for global operator overloads in C++: first, + make sure that they are actually reflected, especially for the global + overloads for ``operator==`` and ``operator!=`` of STL iterators in the case + of gcc. + Second, make sure that reflection info is loaded in the proper order. + I.e. that these global overloads are available before use. + +* **pointers**: For builtin data types, see arrays. + For objects, a pointer to an object and an object looks the same, unless + the pointer is a data member. + In that case, assigning to the data member will cause a copy of the pointer + and care should be taken about the object's life time. + If a pointer is a global variable, the C++ side can replace the underlying + object and the python side will immediately reflect that. + +* **static data members**: Are represented as python property objects on the + class and the meta-class. + Both read and write access is as expected. + +* **static methods**: Are represented as python's ``staticmethod`` objects + and can be called both from the class as well as from instances. + +* **strings**: The std::string class is considered a builtin C++ type and + mixes quite well with python's str. 
+ Python's str can be passed where a ``const char*`` is expected, and a str + will be returned if the return type is ``const char*``. + +* **templated classes**: Are represented in a meta-class style in python. + This looks a little bit confusing, but conceptually is rather natural. + For example, given the class ``std::vector``, the meta-class part would + be ``std.vector`` in python. + Then, to get the instantiation on ``int``, do ``std.vector(int)`` and to + create an instance of that class, do ``std.vector(int)()``. + Note that templates can be built up by handing actual types to the class + instantiation (as done in this vector example), or by passing in the list of + template arguments as a string. + The former is a lot easier to work with if you have template instantiations + using classes that themselves are templates (etc.) in the arguments. + All classes must already exist in the loaded reflection info. + +* **typedefs**: Are simple python references to the actual classes to which + they refer. + +* **unary operators**: Are supported if a python equivalent exists, and if the + operator is defined in the C++ class. + +You can always find more detailed examples and see the full set of supported +features by looking at the tests in pypy/module/cppyy/test. + +If a feature or reflection info is missing, this is supposed to be handled +gracefully. +In fact, there are unit tests explicitly for this purpose (even as their use +becomes less interesting over time, as the number of missing features +decreases). +Only when a missing feature is used should there be an exception. +For example, if no reflection info is available for a return type, then a +class that has a method with that return type can still be used. +Only that one specific method cannot be used. + + +Templates +========= + +A bit of special care needs to be taken for the use of templates. 
+For a templated class to be completely available, it must be guaranteed that +said class is fully instantiated, and hence all executable C++ code is +generated and compiled in. +The easiest way to fulfill that guarantee is by explicit instantiation in the +header file that is handed to ``genreflex``. +The following example should make that clear:: + + $ cat MyTemplate.h + #include <vector> + + class MyClass { + public: + MyClass(int i = -99) : m_i(i) {} + MyClass(const MyClass& s) : m_i(s.m_i) {} + MyClass& operator=(const MyClass& s) { m_i = s.m_i; return *this; } + ~MyClass() {} + int m_i; + }; + + template class std::vector<MyClass>; + +If you know for certain that all symbols will be linked in from other sources, +you can also declare the explicit template instantiation ``extern``. + +Unfortunately, this is not enough for gcc. +The iterators, if they are going to be used, need to be instantiated as well, +as do the comparison operators on those iterators, as these live in an +internal namespace, rather than in the iterator classes. +One way to handle this is to deal with it once in a macro, then reuse that +macro for all ``vector`` classes. 
+Thus, the header above needs this, instead of just the explicit instantiation +of the ``vector``:: + + #define STLTYPES_EXPLICIT_INSTANTIATION_DECL(STLTYPE, TTYPE) \ + template class std::STLTYPE< TTYPE >; \ + template class __gnu_cxx::__normal_iterator<TTYPE*, std::STLTYPE< TTYPE > >; \ + template class __gnu_cxx::__normal_iterator<const TTYPE*, std::STLTYPE< TTYPE > >;\ + namespace __gnu_cxx { \ + template bool operator==(const std::STLTYPE< TTYPE >::iterator&, \ + const std::STLTYPE< TTYPE >::iterator&); \ + template bool operator!=(const std::STLTYPE< TTYPE >::iterator&, \ + const std::STLTYPE< TTYPE >::iterator&); \ + } + + STLTYPES_EXPLICIT_INSTANTIATION_DECL(vector, MyClass) + +Then, still for gcc, the selection file needs to contain the full hierarchy as +well as the global overloads for comparisons for the iterators:: + + $ cat MyTemplate.xml + <lcgdict> + <class pattern="std::vector<*>" /> + <class pattern="__gnu_cxx::__normal_iterator<*>" /> + <class pattern="__gnu_cxx::new_allocator<*>" /> + <class pattern="std::_Vector_base<*>" /> + <class pattern="std::allocator<*>" /> + <function name="__gnu_cxx::operator=="/> + <function name="__gnu_cxx::operator!="/> + + <class name="MyClass" /> + </lcgdict> + +Run the normal ``genreflex`` and compilation steps:: + + $ genreflex MyTemplate.h --selection=MyTemplate.xml + $ g++ -fPIC -rdynamic -O2 -shared -I$ROOTSYS/include MyTemplate_rflx.cpp -o libTemplateDict.so + +Note: this is a dirty corner that clearly could do with some automation, +even if the macro already helps. +Such automation is planned. +In fact, in the cling world, the backend can perform the template +instantiations and generate the reflection info on the fly, and none of the +above will any longer be necessary. + +Subsequent use should be as expected. +Note the meta-class style of "instantiating" the template:: + + >>>> import cppyy + >>>> cppyy.load_reflection_info("libTemplateDict.so") + >>>> std = cppyy.gbl.std + >>>> MyClass = cppyy.gbl.MyClass + >>>> v = std.vector(MyClass)() + >>>> v += [MyClass(1), MyClass(2), MyClass(3)] + >>>> for m in v: + .... print m.m_i, + .... + 1 2 3 + >>>> + +Other templates work similarly. +The arguments to the template instantiation can either be a string with the +full list of arguments, or the explicit classes. 
+The latter makes for easier code writing if the classes passed to the
+instantiation are themselves templates.
+
+
+The fast lane
+=============
+
+The following is an experimental feature of cppyy, and that makes it doubly
+experimental, so caveat emptor.
+With a slight modification of Reflex, it can provide function pointers for
+C++ methods, and hence allow PyPy to call those pointers directly, rather than
+calling C++ through a Reflex stub.
+This results in a rather significant speed-up.
+Mind you, the normal stub path is not exactly slow, so for now only use this
+out of curiosity or if you really need it.
+
+To install this patch of Reflex, locate the file genreflex-methptrgetter.patch
+in pypy/module/cppyy and apply it to the genreflex python scripts found in
+``$ROOTSYS/lib``::
+
+    $ cd $ROOTSYS/lib
+    $ patch -p2 < genreflex-methptrgetter.patch
+
+With this patch, ``genreflex`` will have grown the ``--with-methptrgetter``
+option.
+Use this option when running ``genreflex``, and add the
+``-Wno-pmf-conversions`` option to ``g++`` when compiling.
+The rest works the same way: the fast path will be used transparently (which
+also means that you can't actually find out whether it is in use, other than
+by running a micro-benchmark).
+
+
+CPython
+=======
+
+Most of the ideas in cppyy come originally from the `PyROOT`_ project.
+Although PyROOT does not support Reflex directly, it has an alter ego called
+"PyCintex" that, in a somewhat roundabout way, does.
+If you installed ROOT, rather than just Reflex, PyCintex should be available
+immediately if you add ``$ROOTSYS/lib`` to the ``PYTHONPATH`` environment
+variable.
+
+.. _`PyROOT`: http://root.cern.ch/drupal/content/pyroot
+
+There are a couple of minor differences between PyCintex and cppyy, most to do
+with naming.
+The one that you will run into directly is that PyCintex uses a function
+called ``loadDictionary`` rather than ``load_reflection_info``.
+The reason for this is that Reflex calls the shared libraries that contain
+reflection info "dictionaries."
+However, in python, the name `dictionary` already has a well-defined meaning,
+so a more descriptive name was chosen for cppyy.
+In addition, PyCintex requires that the names of the shared libraries so
+loaded start with "lib".
+The basic example above, rewritten for PyCintex, goes like this::
+
+    $ python
+    >>> import PyCintex
+    >>> PyCintex.loadDictionary("libMyClassDict.so")
+    >>> myinst = PyCintex.gbl.MyClass(42)
+    >>> print myinst.GetMyInt()
+    42
+    >>> myinst.SetMyInt(33)
+    >>> print myinst.m_myint
+    33
+    >>> myinst.m_myint = 77
+    >>> print myinst.GetMyInt()
+    77
+    >>> help(PyCintex.gbl.MyClass)   # shows that normal python introspection works
+
+Other naming differences are such things as taking an address of an object.
+In PyCintex, this is done with ``AddressOf``, whereas in cppyy the choice was
+made to follow the naming as in ``ctypes`` and hence use ``addressof``
+(PyROOT/PyCintex predate ``ctypes`` by several years, and the ROOT project
+follows camel-case, hence the differences).
+
+Of course, this is python, so if any of the naming is not to your liking, all
+you have to do is provide a wrapper script that you import instead of
+importing the ``cppyy`` or ``PyCintex`` modules directly.
+In that wrapper script you can rename methods exactly the way you need.
+
+In the cling world, all these differences will be resolved.
diff --git a/pypy/doc/extending.rst b/pypy/doc/extending.rst
--- a/pypy/doc/extending.rst
+++ b/pypy/doc/extending.rst
@@ -23,6 +23,8 @@
 
 * Write them in RPython as mixedmodule_, using *rffi* as bindings.
 
+* Write them in C++ and bind them through Reflex_ (EXPERIMENTAL)
+
 .. _ctypes: #CTypes
 .. _\_ffi: #LibFFI
 .. _mixedmodule: #Mixed Modules
@@ -110,3 +112,59 @@
 
 XXX we should provide detailed docs about lltype and rffi, especially
 if we want people to follow that way.
+ +Reflex +====== + +This method is still experimental and is being exercised on a branch, +`reflex-support`_, which adds the `cppyy`_ module. +The method works by using the `Reflex package`_ to provide reflection +information of the C++ code, which is then used to automatically generate +bindings at runtime. +From a python standpoint, there is no difference between generating bindings +at runtime, or having them "statically" generated and available in scripts +or compiled into extension modules: python classes and functions are always +runtime structures, created when a script or module loads. +However, if the backend itself is capable of dynamic behavior, it is a much +better functional match to python, allowing tighter integration and more +natural language mappings. +Full details are `available here`_. + +.. _`cppyy`: cppyy.html +.. _`reflex-support`: cppyy.html +.. _`Reflex package`: http://root.cern.ch/drupal/content/reflex +.. _`available here`: cppyy.html + +Pros +---- + +The cppyy module is written in RPython, which makes it possible to keep the +code execution visible to the JIT all the way to the actual point of call into +C++, thus allowing for a very fast interface. +Reflex is currently in use in large software environments in High Energy +Physics (HEP), across many different projects and packages, and its use can be +virtually completely automated in a production environment. +One of its uses in HEP is in providing language bindings for CPython. +Thus, it is possible to use Reflex to have bound code work on both CPython and +on PyPy. +In the medium-term, Reflex will be replaced by `cling`_, which is based on +`llvm`_. +This will affect the backend only; the python-side interface is expected to +remain the same, except that cling adds a lot of dynamic behavior to C++, +enabling further language integration. + +.. _`cling`: http://root.cern.ch/drupal/content/cling +.. 
_`llvm`: http://llvm.org/
+
+Cons
+----
+
+C++ is a large language, and cppyy is not yet feature-complete.
+Still, the experience gained in developing the equivalent bindings for CPython
+means that adding missing features is a simple matter of engineering, not a
+question of research.
+The module is written so that currently missing features should do no harm if
+you don't use them; if you do need a particular feature, it may be necessary
+to work around it in python or with a C++ helper function.
+Although Reflex works on various platforms, the bindings with PyPy have only
+been tested on Linux.
diff --git a/pypy/doc/windows.rst b/pypy/doc/windows.rst
--- a/pypy/doc/windows.rst
+++ b/pypy/doc/windows.rst
@@ -24,7 +24,8 @@
 translation. Failing that, they will pick the most recent Visual Studio
 compiler they can find. In addition, the target architecture
 (32 bits, 64 bits) is automatically selected. A 32 bit build can only be built
-using a 32 bit Python and vice versa.
+using a 32 bit Python and vice versa. By default pypy is built using the
+Multi-threaded DLL (/MD) runtime environment.
 
 **Note:** PyPy is currently not supported for 64 bit Windows, and translation
 will fail in this case.
@@ -102,10 +103,12 @@
 
 Download the source code of expat on sourceforge:
 http://sourceforge.net/projects/expat/ and extract it in the base
-directory. Then open the project file ``expat.dsw`` with Visual
+directory. Version 2.1.0 is known to pass tests. Then open the project
+file ``expat.dsw`` with Visual
 Studio; follow the instruction for converting the project files,
-switch to the "Release" configuration, and build the solution (the
-``expat`` project is actually enough for pypy).
+switch to the "Release" configuration, reconfigure the runtime for
+Multi-threaded DLL (/MD) and build the solution (the ``expat`` project
+is actually enough for pypy).
 Then, copy the file ``win32\bin\release\libexpat.dll`` somewhere in
 your PATH.
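The wrapper-script idea from the CPython section above can be sketched in
plain python. Nothing here touches cppyy or PyCintex: the ``Bound`` class is
a hypothetical stand-in for a bound C++ class, and ``pythonize`` is an
illustrative helper, not part of either module; only the renaming pattern
itself is the point.

```python
# Sketch of a wrapper module that renames camel-case methods on a bound
# class.  Bound stands in for a class that cppyy or PyCintex would provide.

class Bound(object):
    """Hypothetical stand-in for a C++ class bound via Reflex."""
    def __init__(self, value=42):
        self.m_myint = value

    def GetMyInt(self):        # ROOT-style camel-case naming
        return self.m_myint

def pythonize(klass, renames):
    """Attach preferred aliases for existing method names on klass."""
    for new_name, old_name in renames.items():
        setattr(klass, new_name, getattr(klass, old_name))
    return klass

# alias the camel-case method under a ctypes-style snake_case name
pythonize(Bound, {'get_my_int': 'GetMyInt'})

obj = Bound(42)
assert obj.get_my_int() == obj.GetMyInt() == 42
```

A real wrapper script would import ``cppyy`` (or ``PyCintex``), apply such
renames, and re-export the result; client code then imports the wrapper
instead of the binding module directly.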
diff --git a/pypy/interpreter/astcompiler/optimize.py b/pypy/interpreter/astcompiler/optimize.py --- a/pypy/interpreter/astcompiler/optimize.py +++ b/pypy/interpreter/astcompiler/optimize.py @@ -304,14 +304,19 @@ # produce compatible pycs. if (self.space.isinstance_w(w_obj, self.space.w_unicode) and self.space.isinstance_w(w_const, self.space.w_unicode)): - unistr = self.space.unicode_w(w_const) - if len(unistr) == 1: - ch = ord(unistr[0]) - else: - ch = 0 - if (ch > 0xFFFF or - (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): - return subs + #unistr = self.space.unicode_w(w_const) + #if len(unistr) == 1: + # ch = ord(unistr[0]) + #else: + # ch = 0 + #if (ch > 0xFFFF or + # (MAXUNICODE == 0xFFFF and 0xD800 <= ch <= 0xDFFF)): + # --XXX-- for now we always disable optimization of + # u'...'[constant] because the tests above are not + # enough to fix issue5057 (CPython has the same + # problem as of April 24, 2012). + # See test_const_fold_unicode_subscr + return subs return ast.Const(w_const, subs.lineno, subs.col_offset) diff --git a/pypy/interpreter/astcompiler/test/test_compiler.py b/pypy/interpreter/astcompiler/test/test_compiler.py --- a/pypy/interpreter/astcompiler/test/test_compiler.py +++ b/pypy/interpreter/astcompiler/test/test_compiler.py @@ -844,7 +844,8 @@ return u"abc"[0] """ counts = self.count_instructions(source) - assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + if 0: # xxx later? 
+ assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} # getitem outside of the BMP should not be optimized source = """def f(): @@ -854,12 +855,20 @@ assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, ops.RETURN_VALUE: 1} + source = """def f(): + return u"\U00012345abcdef"[3] + """ + counts = self.count_instructions(source) + assert counts == {ops.LOAD_CONST: 2, ops.BINARY_SUBSCR: 1, + ops.RETURN_VALUE: 1} + monkeypatch.setattr(optimize, "MAXUNICODE", 0xFFFF) source = """def f(): return u"\uE01F"[0] """ counts = self.count_instructions(source) - assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} + if 0: # xxx later? + assert counts == {ops.LOAD_CONST: 1, ops.RETURN_VALUE: 1} monkeypatch.undo() # getslice is not yet optimized. diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1652,8 +1652,6 @@ 'UnicodeTranslateError', 'ValueError', 'ZeroDivisionError', - 'UnicodeEncodeError', - 'UnicodeDecodeError', ] if sys.platform.startswith("win"): diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -1290,10 +1290,6 @@ w(self.valuestackdepth)]) def handle(self, frame, unroller): - next_instr = self.really_handle(frame, unroller) # JIT hack - return r_uint(next_instr) - - def really_handle(self, frame, unroller): """ Purely abstract method """ raise NotImplementedError @@ -1305,17 +1301,17 @@ _opname = 'SETUP_LOOP' handling_mask = SBreakLoop.kind | SContinueLoop.kind - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): if isinstance(unroller, SContinueLoop): # re-push the loop block without cleaning up the value stack, # and jump to the beginning of the loop, stored in the # exception's argument frame.append_block(self) - return unroller.jump_to + return r_uint(unroller.jump_to) else: # jump to the end of the loop 
self.cleanupstack(frame) - return self.handlerposition + return r_uint(self.handlerposition) class ExceptBlock(FrameBlock): @@ -1325,7 +1321,7 @@ _opname = 'SETUP_EXCEPT' handling_mask = SApplicationException.kind - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): # push the exception to the value stack for inspection by the # exception handler (the code after the except:) self.cleanupstack(frame) @@ -1340,7 +1336,7 @@ frame.pushvalue(operationerr.get_w_value(frame.space)) frame.pushvalue(operationerr.w_type) frame.last_exception = operationerr - return self.handlerposition # jump to the handler + return r_uint(self.handlerposition) # jump to the handler class FinallyBlock(FrameBlock): @@ -1361,7 +1357,7 @@ frame.pushvalue(frame.space.w_None) frame.pushvalue(frame.space.w_None) - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): # any abnormal reason for unrolling a finally: triggers the end of # the block unrolling and the entering the finally: handler. # see comments in cleanup(). 
@@ -1369,18 +1365,18 @@ frame.pushvalue(frame.space.wrap(unroller)) frame.pushvalue(frame.space.w_None) frame.pushvalue(frame.space.w_None) - return self.handlerposition # jump to the handler + return r_uint(self.handlerposition) # jump to the handler class WithBlock(FinallyBlock): _immutable_ = True - def really_handle(self, frame, unroller): + def handle(self, frame, unroller): if (frame.space.full_exceptions and isinstance(unroller, SApplicationException)): unroller.operr.normalize_exception(frame.space) - return FinallyBlock.really_handle(self, frame, unroller) + return FinallyBlock.handle(self, frame, unroller) block_classes = {'SETUP_LOOP': LoopBlock, 'SETUP_EXCEPT': ExceptBlock, diff --git a/pypy/jit/backend/llsupport/asmmemmgr.py b/pypy/jit/backend/llsupport/asmmemmgr.py --- a/pypy/jit/backend/llsupport/asmmemmgr.py +++ b/pypy/jit/backend/llsupport/asmmemmgr.py @@ -277,6 +277,8 @@ from pypy.jit.backend.hlinfo import highleveljitinfo if highleveljitinfo.sys_executable: debug_print('SYS_EXECUTABLE', highleveljitinfo.sys_executable) + else: + debug_print('SYS_EXECUTABLE', '??') # HEX = '0123456789ABCDEF' dump = [] diff --git a/pypy/jit/backend/llsupport/test/test_asmmemmgr.py b/pypy/jit/backend/llsupport/test/test_asmmemmgr.py --- a/pypy/jit/backend/llsupport/test/test_asmmemmgr.py +++ b/pypy/jit/backend/llsupport/test/test_asmmemmgr.py @@ -217,7 +217,8 @@ encoded = ''.join(writtencode).encode('hex').upper() ataddr = '@%x' % addr assert log == [('test-logname-section', - [('debug_print', 'CODE_DUMP', ataddr, '+0 ', encoded)])] + [('debug_print', 'SYS_EXECUTABLE', '??'), + ('debug_print', 'CODE_DUMP', ataddr, '+0 ', encoded)])] lltype.free(p, flavor='raw') diff --git a/pypy/jit/metainterp/heapcache.py b/pypy/jit/metainterp/heapcache.py --- a/pypy/jit/metainterp/heapcache.py +++ b/pypy/jit/metainterp/heapcache.py @@ -20,6 +20,7 @@ self.dependencies = {} # contains frame boxes that are not virtualizables self.nonstandard_virtualizables = {} + # heap cache # maps 
descrs to {from_box, to_box} dicts self.heap_cache = {} @@ -29,6 +30,26 @@ # cache the length of arrays self.length_cache = {} + # replace_box is called surprisingly often, therefore it's not efficient + # to go over all the dicts and fix them. + # instead, these two dicts are kept, and a replace_box adds an entry to + # each of them. + # every time one of the dicts heap_cache, heap_array_cache, length_cache + # is accessed, suitable indirections need to be performed + + # this looks all very subtle, but in practice the patterns of + # replacements should not be that complex. Usually a box is replaced by + # a const, once. Also, if something goes wrong, the effect is that less + # caching than possible is done, which is not a huge problem. + self.input_indirections = {} + self.output_indirections = {} + + def _input_indirection(self, box): + return self.input_indirections.get(box, box) + + def _output_indirection(self, box): + return self.output_indirections.get(box, box) + def invalidate_caches(self, opnum, descr, argboxes): self.mark_escaped(opnum, argboxes) self.clear_caches(opnum, descr, argboxes) @@ -132,14 +153,16 @@ self.arraylen_now_known(box, lengthbox) def getfield(self, box, descr): + box = self._input_indirection(box) d = self.heap_cache.get(descr, None) if d: tobox = d.get(box, None) - if tobox: - return tobox + return self._output_indirection(tobox) return None def getfield_now_known(self, box, descr, fieldbox): + box = self._input_indirection(box) + fieldbox = self._input_indirection(fieldbox) self.heap_cache.setdefault(descr, {})[box] = fieldbox def setfield(self, box, descr, fieldbox): @@ -148,6 +171,8 @@ self.heap_cache[descr] = new_d def _do_write_with_aliasing(self, d, box, fieldbox): + box = self._input_indirection(box) + fieldbox = self._input_indirection(fieldbox) # slightly subtle logic here # a write to an arbitrary box, all other boxes can alias this one if not d or box not in self.new_boxes: @@ -166,6 +191,7 @@ return new_d def 
getarrayitem(self, box, descr, indexbox): + box = self._input_indirection(box) if not isinstance(indexbox, ConstInt): return index = indexbox.getint() @@ -173,9 +199,11 @@ if cache: indexcache = cache.get(index, None) if indexcache is not None: - return indexcache.get(box, None) + return self._output_indirection(indexcache.get(box, None)) def getarrayitem_now_known(self, box, descr, indexbox, valuebox): + box = self._input_indirection(box) + valuebox = self._input_indirection(valuebox) if not isinstance(indexbox, ConstInt): return index = indexbox.getint() @@ -198,25 +226,13 @@ cache[index] = self._do_write_with_aliasing(indexcache, box, valuebox) def arraylen(self, box): - return self.length_cache.get(box, None) + box = self._input_indirection(box) + return self._output_indirection(self.length_cache.get(box, None)) def arraylen_now_known(self, box, lengthbox): - self.length_cache[box] = lengthbox - - def _replace_box(self, d, oldbox, newbox): - new_d = {} - for frombox, tobox in d.iteritems(): - if frombox is oldbox: - frombox = newbox - if tobox is oldbox: - tobox = newbox - new_d[frombox] = tobox - return new_d + box = self._input_indirection(box) + self.length_cache[box] = self._input_indirection(lengthbox) def replace_box(self, oldbox, newbox): - for descr, d in self.heap_cache.iteritems(): - self.heap_cache[descr] = self._replace_box(d, oldbox, newbox) - for descr, d in self.heap_array_cache.iteritems(): - for index, cache in d.iteritems(): - d[index] = self._replace_box(cache, oldbox, newbox) - self.length_cache = self._replace_box(self.length_cache, oldbox, newbox) + self.input_indirections[self._output_indirection(newbox)] = self._input_indirection(oldbox) + self.output_indirections[self._input_indirection(oldbox)] = self._output_indirection(newbox) diff --git a/pypy/jit/metainterp/jitexc.py b/pypy/jit/metainterp/jitexc.py --- a/pypy/jit/metainterp/jitexc.py +++ b/pypy/jit/metainterp/jitexc.py @@ -12,7 +12,6 @@ """ _go_through_llinterp_uncaught_ = True # 
ugh - def _get_standard_error(rtyper, Class): exdata = rtyper.getexceptiondata() clsdef = rtyper.annotator.bookkeeper.getuniqueclassdef(Class) diff --git a/pypy/jit/metainterp/optimize.py b/pypy/jit/metainterp/optimize.py --- a/pypy/jit/metainterp/optimize.py +++ b/pypy/jit/metainterp/optimize.py @@ -5,3 +5,9 @@ """Raised when the optimize*.py detect that the loop that we are trying to build cannot possibly make sense as a long-running loop (e.g. it cannot run 2 complete iterations).""" + + def __init__(self, msg='?'): + debug_start("jit-abort") + debug_print(msg) + debug_stop("jit-abort") + self.msg = msg diff --git a/pypy/jit/metainterp/optimizeopt/__init__.py b/pypy/jit/metainterp/optimizeopt/__init__.py --- a/pypy/jit/metainterp/optimizeopt/__init__.py +++ b/pypy/jit/metainterp/optimizeopt/__init__.py @@ -49,8 +49,9 @@ optimizations.append(OptFfiCall()) if ('rewrite' not in enable_opts or 'virtualize' not in enable_opts - or 'heap' not in enable_opts or 'unroll' not in enable_opts): - optimizations.append(OptSimplify()) + or 'heap' not in enable_opts or 'unroll' not in enable_opts + or 'pure' not in enable_opts): + optimizations.append(OptSimplify(unroll)) return optimizations, unroll diff --git a/pypy/jit/metainterp/optimizeopt/heap.py b/pypy/jit/metainterp/optimizeopt/heap.py --- a/pypy/jit/metainterp/optimizeopt/heap.py +++ b/pypy/jit/metainterp/optimizeopt/heap.py @@ -257,8 +257,8 @@ opnum == rop.COPYSTRCONTENT or # no effect on GC struct/array opnum == rop.COPYUNICODECONTENT): # no effect on GC struct/array return - assert opnum != rop.CALL_PURE if (opnum == rop.CALL or + opnum == rop.CALL_PURE or opnum == rop.CALL_MAY_FORCE or opnum == rop.CALL_RELEASE_GIL or opnum == rop.CALL_ASSEMBLER): @@ -481,7 +481,7 @@ # already between the tracing and now. In this case, we are # simply ignoring the QUASIIMMUT_FIELD hint and compiling it # as a regular getfield. 
- if not qmutdescr.is_still_valid(): + if not qmutdescr.is_still_valid_for(structvalue.get_key_box()): self._remove_guard_not_invalidated = True return # record as an out-of-line guard diff --git a/pypy/jit/metainterp/optimizeopt/intbounds.py b/pypy/jit/metainterp/optimizeopt/intbounds.py --- a/pypy/jit/metainterp/optimizeopt/intbounds.py +++ b/pypy/jit/metainterp/optimizeopt/intbounds.py @@ -191,10 +191,13 @@ # GUARD_OVERFLOW, then the loop is invalid. lastop = self.last_emitted_operation if lastop is None: - raise InvalidLoop + raise InvalidLoop('An INT_xxx_OVF was proven not to overflow but' + + 'guarded with GUARD_OVERFLOW') opnum = lastop.getopnum() if opnum not in (rop.INT_ADD_OVF, rop.INT_SUB_OVF, rop.INT_MUL_OVF): - raise InvalidLoop + raise InvalidLoop('An INT_xxx_OVF was proven not to overflow but' + + 'guarded with GUARD_OVERFLOW') + self.emit_operation(op) def optimize_INT_ADD_OVF(self, op): diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -525,6 +525,7 @@ @specialize.argtype(0) def _emit_operation(self, op): + assert op.getopnum() != rop.CALL_PURE for i in range(op.numargs()): arg = op.getarg(i) try: diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -208,7 +208,8 @@ box = value.box assert isinstance(box, Const) if not box.same_constant(constbox): - raise InvalidLoop + raise InvalidLoop('A GURAD_{VALUE,TRUE,FALSE} was proven to' + + 'always fail') return if emit_operation: self.emit_operation(op) @@ -220,7 +221,7 @@ if value.is_null(): return elif value.is_nonnull(): - raise InvalidLoop + raise InvalidLoop('A GUARD_ISNULL was proven to always fail') self.emit_operation(op) value.make_constant(self.optimizer.cpu.ts.CONST_NULL) @@ -229,7 +230,7 @@ 
if value.is_nonnull(): return elif value.is_null(): - raise InvalidLoop + raise InvalidLoop('A GUARD_NONNULL was proven to always fail') self.emit_operation(op) value.make_nonnull(op) @@ -278,7 +279,7 @@ if realclassbox is not None: if realclassbox.same_constant(expectedclassbox): return - raise InvalidLoop + raise InvalidLoop('A GUARD_CLASS was proven to always fail') if value.last_guard: # there already has been a guard_nonnull or guard_class or # guard_nonnull_class on this value. @@ -301,7 +302,8 @@ def optimize_GUARD_NONNULL_CLASS(self, op): value = self.getvalue(op.getarg(0)) if value.is_null(): - raise InvalidLoop + raise InvalidLoop('A GUARD_NONNULL_CLASS was proven to always ' + + 'fail') self.optimize_GUARD_CLASS(op) def optimize_CALL_LOOPINVARIANT(self, op): diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -4,8 +4,9 @@ from pypy.jit.metainterp.history import TargetToken, JitCellToken class OptSimplify(Optimization): - def __init__(self): + def __init__(self, unroll): self.last_label_descr = None + self.unroll = unroll def optimize_CALL_PURE(self, op): args = op.getarglist() @@ -35,24 +36,26 @@ pass def optimize_LABEL(self, op): - descr = op.getdescr() - if isinstance(descr, JitCellToken): - return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) - self.last_label_descr = op.getdescr() + if not self.unroll: + descr = op.getdescr() + if isinstance(descr, JitCellToken): + return self.optimize_JUMP(op.copy_and_change(rop.JUMP)) + self.last_label_descr = op.getdescr() self.emit_operation(op) def optimize_JUMP(self, op): - descr = op.getdescr() - assert isinstance(descr, JitCellToken) - if not descr.target_tokens: - assert self.last_label_descr is not None - target_token = self.last_label_descr - assert isinstance(target_token, TargetToken) - assert target_token.targeting_jitcell_token is descr - 
op.setdescr(self.last_label_descr) - else: - assert len(descr.target_tokens) == 1 - op.setdescr(descr.target_tokens[0]) + if not self.unroll: + descr = op.getdescr() + assert isinstance(descr, JitCellToken) + if not descr.target_tokens: + assert self.last_label_descr is not None + target_token = self.last_label_descr + assert isinstance(target_token, TargetToken) + assert target_token.targeting_jitcell_token is descr + op.setdescr(self.last_label_descr) + else: + assert len(descr.target_tokens) == 1 + op.setdescr(descr.target_tokens[0]) self.emit_operation(op) dispatch_opt = make_dispatcher_method(OptSimplify, 'optimize_', diff --git a/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/optimizeopt/test/test_disable_optimizations.py @@ -0,0 +1,46 @@ +from pypy.jit.metainterp.optimizeopt.test.test_optimizeopt import OptimizeOptTest +from pypy.jit.metainterp.optimizeopt.test.test_util import LLtypeMixin +from pypy.jit.metainterp.resoperation import rop + + +allopts = OptimizeOptTest.enable_opts.split(':') +for optnum in range(len(allopts)): + myopts = allopts[:] + del myopts[optnum] + + class TestLLtype(OptimizeOptTest, LLtypeMixin): + enable_opts = ':'.join(myopts) + + def optimize_loop(self, ops, expected, expected_preamble=None, + call_pure_results=None, expected_short=None): + loop = self.parse(ops) + if expected != "crash!": + expected = self.parse(expected) + if expected_preamble: + expected_preamble = self.parse(expected_preamble) + if expected_short: + expected_short = self.parse(expected_short) + + preamble = self.unroll_and_optimize(loop, call_pure_results) + + for op in preamble.operations + loop.operations: + assert op.getopnum() not in (rop.CALL_PURE, + rop.CALL_LOOPINVARIANT, + rop.VIRTUAL_REF_FINISH, + rop.VIRTUAL_REF, + rop.QUASIIMMUT_FIELD, + rop.MARK_OPAQUE_PTR, + rop.RECORD_KNOWN_CLASS) + + def 
raises(self, e, fn, *args): + try: + fn(*args) + except e: + pass + + opt = allopts[optnum] + exec "TestNo%sLLtype = TestLLtype" % (opt[0].upper() + opt[1:]) + +del TestLLtype # No need to run the last set twice +del TestNoUnrollLLtype # This case is take care of by test_optimizebasic + diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizebasic.py @@ -7,7 +7,7 @@ import pypy.jit.metainterp.optimizeopt.optimizer as optimizeopt import pypy.jit.metainterp.optimizeopt.virtualize as virtualize from pypy.jit.metainterp.optimize import InvalidLoop -from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt +from pypy.jit.metainterp.history import AbstractDescr, ConstInt, BoxInt, get_const_ptr_for_string from pypy.jit.metainterp import executor, compile, resume, history from pypy.jit.metainterp.resoperation import rop, opname, ResOperation from pypy.rlib.rarithmetic import LONG_BIT @@ -5067,11 +5067,29 @@ """ self.optimize_strunicode_loop(ops, expected) + def test_call_pure_vstring_const(self): + ops = """ + [] + p0 = newstr(3) + strsetitem(p0, 0, 97) + strsetitem(p0, 1, 98) + strsetitem(p0, 2, 99) + i0 = call_pure(123, p0, descr=nonwritedescr) + finish(i0) + """ + expected = """ + [] + finish(5) + """ + call_pure_results = { + (ConstInt(123), get_const_ptr_for_string("abc"),): ConstInt(5), + } + self.optimize_loop(ops, expected, call_pure_results) + class TestLLtype(BaseTestOptimizeBasic, LLtypeMixin): pass - ##class TestOOtype(BaseTestOptimizeBasic, OOtypeMixin): ## def test_instanceof(self): diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizeopt.py @@ -105,6 +105,9 
@@ return loop + def raises(self, e, fn, *args): + py.test.raises(e, fn, *args) + class OptimizeOptTest(BaseTestWithUnroll): def setup_method(self, meth=None): @@ -2639,7 +2642,7 @@ p2 = new_with_vtable(ConstClass(node_vtable)) jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_2(self): @@ -2651,7 +2654,7 @@ escape(p2) # prevent it from staying Virtual jump(p2) """ - py.test.raises(InvalidLoop, self.optimize_loop, + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_invalid_loop_3(self): @@ -2665,7 +2668,7 @@ setfield_gc(p3, p4, descr=nextdescr) jump(p3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_merge_guard_class_guard_value(self): @@ -4411,7 +4414,7 @@ setfield_gc(p1, p3, descr=nextdescr) jump(p3) """ - py.test.raises(BogusPureField, self.optimize_loop, ops, "crash!") + self.raises(BogusPureField, self.optimize_loop, ops, "crash!") def test_dont_complains_different_field(self): ops = """ @@ -5024,7 +5027,7 @@ i2 = int_add(i0, 3) jump(i2) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_ne_const_not(self): ops = """ @@ -5074,7 +5077,7 @@ i3 = int_add(i0, 3) jump(i3) """ - py.test.raises(InvalidLoop, self.optimize_loop, ops, ops) + self.raises(InvalidLoop, self.optimize_loop, ops, ops) def test_bound_lshift(self): ops = """ @@ -6533,9 +6536,9 @@ def test_quasi_immut_2(self): ops = """ [] - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i1 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i1 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) escape(i1) jump() """ @@ -6585,13 +6588,13 @@ def test_call_may_force_invalidated_guards_reload(self): ops = """ [i0a, i0b] - 
quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i1 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i1 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) call_may_force(i0b, descr=mayforcevirtdescr) - quasiimmut_field(ConstPtr(myptr), descr=quasiimmutdescr) + quasiimmut_field(ConstPtr(quasiptr), descr=quasiimmutdescr) guard_not_invalidated() [] - i2 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + i2 = getfield_gc(ConstPtr(quasiptr), descr=quasifielddescr) i3 = escape(i1) i4 = escape(i2) jump(i3, i4) @@ -7813,6 +7816,52 @@ """ self.optimize_loop(ops, expected) + def test_issue1080_infinitie_loop_virtual(self): + ops = """ + [p10] + p52 = getfield_gc(p10, descr=nextdescr) # inst_storage + p54 = getarrayitem_gc(p52, 0, descr=arraydescr) + p69 = getfield_gc_pure(p54, descr=otherdescr) # inst_w_function + + quasiimmut_field(p69, descr=quasiimmutdescr) + guard_not_invalidated() [] + p71 = getfield_gc(p69, descr=quasifielddescr) # inst_code + guard_value(p71, -4247) [] + + p106 = new_with_vtable(ConstClass(node_vtable)) + p108 = new_array(3, descr=arraydescr) + p110 = new_with_vtable(ConstClass(node_vtable)) + setfield_gc(p110, ConstPtr(myptr2), descr=otherdescr) # inst_w_function + setarrayitem_gc(p108, 0, p110, descr=arraydescr) + setfield_gc(p106, p108, descr=nextdescr) # inst_storage + jump(p106) + """ + expected = """ + [] + p72 = getfield_gc(ConstPtr(myptr2), descr=quasifielddescr) + guard_value(p72, -4247) [] + jump() + """ + self.optimize_loop(ops, expected) + + + def test_issue1080_infinitie_loop_simple(self): + ops = """ + [p69] + quasiimmut_field(p69, descr=quasiimmutdescr) + guard_not_invalidated() [] + p71 = getfield_gc(p69, descr=quasifielddescr) # inst_code + guard_value(p71, -4247) [] + jump(ConstPtr(myptr)) + """ + expected = """ + [] + p72 = getfield_gc(ConstPtr(myptr), descr=quasifielddescr) + guard_value(p72, -4247) [] + jump() + """ 
+        self.optimize_loop(ops, expected)
+

class TestLLtype(OptimizeOptTest, LLtypeMixin):
    pass
diff --git a/pypy/jit/metainterp/optimizeopt/test/test_util.py b/pypy/jit/metainterp/optimizeopt/test/test_util.py
--- a/pypy/jit/metainterp/optimizeopt/test/test_util.py
+++ b/pypy/jit/metainterp/optimizeopt/test/test_util.py
@@ -122,6 +122,7 @@
     quasi.inst_field = -4247
     quasifielddescr = cpu.fielddescrof(QUASI, 'inst_field')
     quasibox = BoxPtr(lltype.cast_opaque_ptr(llmemory.GCREF, quasi))
+    quasiptr = quasibox.value
     quasiimmutdescr = QuasiImmutDescr(cpu, quasibox,
                                       quasifielddescr,
                                       cpu.fielddescrof(QUASI, 'mutate_field'))
diff --git a/pypy/jit/metainterp/optimizeopt/unroll.py b/pypy/jit/metainterp/optimizeopt/unroll.py
--- a/pypy/jit/metainterp/optimizeopt/unroll.py
+++ b/pypy/jit/metainterp/optimizeopt/unroll.py
@@ -315,7 +315,10 @@
         try:
             jumpargs = virtual_state.make_inputargs(values, self.optimizer)
         except BadVirtualState:
-            raise InvalidLoop
+            raise InvalidLoop('The state of the optimizer at the end of ' +
+                              'the peeled loop is inconsistent with the ' +
+                              'VirtualState at the beginning of the peeled ' +
+                              'loop')
         jumpop.initarglist(jumpargs)

         # Inline the short preamble at the end of the loop
@@ -325,12 +328,20 @@
         for i in range(len(short_inputargs)):
             if short_inputargs[i] in args:
                 if args[short_inputargs[i]] != jmp_to_short_args[i]:
-                    raise InvalidLoop
+                    raise InvalidLoop('The short preamble wants the ' +
+                                      'same box passed to more than one of ' +
+                                      'its inputargs, but the jump at the ' +
+                                      'end of this bridge does not do that.')
+
            args[short_inputargs[i]] = jmp_to_short_args[i]
         self.short_inliner = Inliner(short_inputargs, jmp_to_short_args)
-        for op in self.short[1:]:
+        i = 1
+        while i < len(self.short):
+            # Note that self.short might be extended during this loop
+            op = self.short[i]
             newop = self.short_inliner.inline_op(op)
             self.optimizer.send_extra_operation(newop)
+            i += 1

         # Import boxes produced in the preamble but used in the loop
         newoperations = self.optimizer.get_newoperations()
@@ -378,7 +389,10 @@
             #final_virtual_state.debug_print("Bad virtual state at end of loop, ",
             #                                bad)
             #debug_stop('jit-log-virtualstate')
-            raise InvalidLoop
+            raise InvalidLoop('The virtual state at the end of the peeled ' +
+                              'loop is not compatible with the virtual ' +
+                              'state at the start of the loop, which makes ' +
+                              'it impossible to close the loop')

         #debug_stop('jit-log-virtualstate')
@@ -526,8 +540,8 @@
         args = jumpop.getarglist()
         modifier = VirtualStateAdder(self.optimizer)
         virtual_state = modifier.get_virtual_state(args)
-        #debug_start('jit-log-virtualstate')
-        #virtual_state.debug_print("Looking for ")
+        debug_start('jit-log-virtualstate')
+        virtual_state.debug_print("Looking for ")

         for target in cell_token.target_tokens:
             if not target.virtual_state:
@@ -536,10 +550,10 @@
             extra_guards = []
             bad = {}
-            #debugmsg = 'Did not match '
+            debugmsg = 'Did not match '
             if target.virtual_state.generalization_of(virtual_state, bad):
                 ok = True
-                #debugmsg = 'Matched '
+                debugmsg = 'Matched '
             else:
                 try:
                     cpu = self.optimizer.cpu
@@ -548,13 +562,13 @@
                                                      extra_guards)
                     ok = True
-                    #debugmsg = 'Guarded to match '
+                    debugmsg = 'Guarded to match '
                 except InvalidLoop:
                     pass
-            #target.virtual_state.debug_print(debugmsg, bad)
+            target.virtual_state.debug_print(debugmsg, bad)

             if ok:
-                #debug_stop('jit-log-virtualstate')
+                debug_stop('jit-log-virtualstate')

                 values = [self.getvalue(arg)
                           for arg in jumpop.getarglist()]
@@ -581,7 +595,7 @@
                 jumpop.setdescr(cell_token.target_tokens[0])
                 self.optimizer.send_extra_operation(jumpop)
                 return True
-        #debug_stop('jit-log-virtualstate')
+        debug_stop('jit-log-virtualstate')
         return False

class ValueImporter(object):
diff --git a/pypy/jit/metainterp/optimizeopt/virtualstate.py b/pypy/jit/metainterp/optimizeopt/virtualstate.py
--- a/pypy/jit/metainterp/optimizeopt/virtualstate.py
+++ b/pypy/jit/metainterp/optimizeopt/virtualstate.py
@@ -27,11 +27,15 @@
         if self.generalization_of(other, renum, {}):
             return
         if renum[self.position] != other.position:
-            raise InvalidLoop
+            raise InvalidLoop('The numbering of the virtual states does not ' +
+                              'match. This means that two virtual fields ' +
+                              'have been set to the same Box in one of the ' +
+                              'virtual states but not in the other.')
         self._generate_guards(other, box, cpu, extra_guards)

     def _generate_guards(self, other, box, cpu, extra_guards):
-        raise InvalidLoop
+        raise InvalidLoop('Generating guards for making the VirtualStates ' +
+                          'at hand match has not been implemented')

     def enum_forced_boxes(self, boxes, value, optimizer):
         raise NotImplementedError
@@ -346,10 +350,12 @@
     def _generate_guards(self, other, box, cpu, extra_guards):
         if not isinstance(other, NotVirtualStateInfo):
-            raise InvalidLoop
+            raise InvalidLoop('The VirtualStates do not match as a ' +
+                              'virtual appears where a pointer is needed ' +
+                              'and it is too late to force it.')

         if self.lenbound or other.lenbound:
-            raise InvalidLoop
+            raise InvalidLoop('The array length bounds do not match.')

         if self.level == LEVEL_KNOWNCLASS and \
            box.nonnull() and \
@@ -400,7 +406,8 @@
             return

         # Remaining cases are probably not interesting
-        raise InvalidLoop
+        raise InvalidLoop('Generating guards for making the VirtualStates ' +
+                          'at hand match has not been implemented')
         if self.level == LEVEL_CONSTANT:
             import pdb; pdb.set_trace()
             raise NotImplementedError
diff --git a/pypy/jit/metainterp/quasiimmut.py b/pypy/jit/metainterp/quasiimmut.py
--- a/pypy/jit/metainterp/quasiimmut.py
+++ b/pypy/jit/metainterp/quasiimmut.py
@@ -120,8 +120,10 @@
                                          self.fielddescr, self.structbox)
         return fieldbox.constbox()

-    def is_still_valid(self):
+    def is_still_valid_for(self, structconst):
         assert self.structbox is not None
+        if not self.structbox.constbox().same_constant(structconst):
+            return False
         cpu = self.cpu
         gcref = self.structbox.getref_base()
         qmut = get_current_qmut_instance(cpu, gcref, self.mutatefielddescr)
diff --git a/pypy/jit/metainterp/test/test_heapcache.py b/pypy/jit/metainterp/test/test_heapcache.py
--- a/pypy/jit/metainterp/test/test_heapcache.py
+++ b/pypy/jit/metainterp/test/test_heapcache.py
@@ -2,12 +2,14 @@
 from pypy.jit.metainterp.resoperation import rop
 from pypy.jit.metainterp.history import ConstInt

-box1 = object()
-box2 = object()
-box3 = object()
-box4 = object()
+box1 = "box1"
+box2 = "box2"
+box3 = "box3"
+box4 = "box4"
+box5 = "box5"
 lengthbox1 = object()
 lengthbox2 = object()
+lengthbox3 = object()
 descr1 = object()
 descr2 = object()
 descr3 = object()
@@ -276,11 +278,43 @@
         h.setfield(box1, descr2, box3)
         h.setfield(box2, descr3, box3)
         h.replace_box(box1, box4)
-        assert h.getfield(box1, descr1) is None
-        assert h.getfield(box1, descr2) is None
         assert h.getfield(box4, descr1) is box2
         assert h.getfield(box4, descr2) is box3
         assert h.getfield(box2, descr3) is box3
+        h.setfield(box4, descr1, box3)
+        assert h.getfield(box4, descr1) is box3
+
+        h = HeapCache()
+        h.setfield(box1, descr1, box2)
+        h.setfield(box1, descr2, box3)
+        h.setfield(box2, descr3, box3)
+        h.replace_box(box3, box4)
+        assert h.getfield(box1, descr1) is box2
+        assert h.getfield(box1, descr2) is box4
+        assert h.getfield(box2, descr3) is box4
+
+    def test_replace_box_twice(self):
+        h = HeapCache()
+        h.setfield(box1, descr1, box2)
+        h.setfield(box1, descr2, box3)
+        h.setfield(box2, descr3, box3)
+        h.replace_box(box1, box4)
+        h.replace_box(box4, box5)
+        assert h.getfield(box5, descr1) is box2
+        assert h.getfield(box5, descr2) is box3
+        assert h.getfield(box2, descr3) is box3
+        h.setfield(box5, descr1, box3)
+        assert h.getfield(box4, descr1) is box3
+
+        h = HeapCache()
+        h.setfield(box1, descr1, box2)
+        h.setfield(box1, descr2, box3)
+        h.setfield(box2, descr3, box3)
+        h.replace_box(box3, box4)
+        h.replace_box(box4, box5)
+        assert h.getfield(box1, descr1) is box2
+        assert h.getfield(box1, descr2) is box5
+        assert h.getfield(box2, descr3) is box5

     def test_replace_box_array(self):
         h = HeapCache()
@@ -291,9 +325,6 @@
         h.setarrayitem(box3, descr2, index2, box1)
         h.setarrayitem(box2, descr3, index2, box3)
         h.replace_box(box1, box4)
-        assert h.getarrayitem(box1, descr1, index1) is None
-        assert h.getarrayitem(box1, descr2, index1) is None
-        assert h.arraylen(box1) is None
         assert h.arraylen(box4) is lengthbox1
         assert h.getarrayitem(box4, descr1, index1) is box2
         assert h.getarrayitem(box4, descr2, index1) is box3
@@ -304,6 +335,27 @@
         h.replace_box(lengthbox1, lengthbox2)
         assert h.arraylen(box4) is lengthbox2
+
+    def test_replace_box_array_twice(self):
+        h = HeapCache()
+        h.setarrayitem(box1, descr1, index1, box2)
+        h.setarrayitem(box1, descr2, index1, box3)
+        h.arraylen_now_known(box1, lengthbox1)
+        h.setarrayitem(box2, descr1, index2, box1)
+        h.setarrayitem(box3, descr2, index2, box1)
+        h.setarrayitem(box2, descr3, index2, box3)
+        h.replace_box(box1, box4)
+        h.replace_box(box4, box5)
+        assert h.arraylen(box4) is lengthbox1
+        assert h.getarrayitem(box5, descr1, index1) is box2
+        assert h.getarrayitem(box5, descr2, index1) is box3
+        assert h.getarrayitem(box2, descr1, index2) is box5
+        assert h.getarrayitem(box3, descr2, index2) is box5
+        assert h.getarrayitem(box2, descr3, index2) is box3
+
+        h.replace_box(lengthbox1, lengthbox2)
+        h.replace_box(lengthbox2, lengthbox3)
+        assert h.arraylen(box4) is lengthbox3
+
     def test_ll_arraycopy(self):
         h = HeapCache()
         h.new_array(box1, lengthbox1)
diff --git a/pypy/jit/metainterp/test/test_quasiimmut.py b/pypy/jit/metainterp/test/test_quasiimmut.py
--- a/pypy/jit/metainterp/test/test_quasiimmut.py
+++ b/pypy/jit/metainterp/test/test_quasiimmut.py
@@ -8,7 +8,7 @@
 from pypy.jit.metainterp.quasiimmut import get_current_qmut_instance
 from pypy.jit.metainterp.test.support import LLJitMixin
 from pypy.jit.codewriter.policy import StopAtXPolicy
-from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe
+from pypy.rlib.jit import JitDriver, dont_look_inside, unroll_safe, promote


 def test_get_current_qmut_instance():
@@ -506,6 +506,27 @@
             "guard_not_invalidated": 2
         })

+    def test_issue1080(self):
+        myjitdriver = JitDriver(greens=[], reds=["n", "sa", "a"])
+        class Foo(object):
+            _immutable_fields_ = ["x?"]
+            def __init__(self, x):
+                self.x = x
+        one, two = Foo(1), Foo(2)
+        def main(n):
+            sa = 0
+            a = one
+            while n:
+                myjitdriver.jit_merge_point(n=n, sa=sa, a=a)
+                sa += a.x
+                if a.x == 1:
+                    a = two
+                elif a.x == 2:
+                    a = one
+                n -= 1
+            return sa
+        res = self.meta_interp(main, [10])
+        assert res == main(10)

class TestLLtypeGreenFieldsTests(QuasiImmutTests, LLJitMixin):
    pass
diff --git a/pypy/module/_multiprocessing/test/test_connection.py b/pypy/module/_multiprocessing/test/test_connection.py
--- a/pypy/module/_multiprocessing/test/test_connection.py
+++ b/pypy/module/_multiprocessing/test/test_connection.py
@@ -157,13 +157,15 @@
         raises(IOError, _multiprocessing.Connection, -15)

     def test_byte_order(self):
+        import socket
+        if not 'fromfd' in dir(socket):
+            skip('No fromfd in socket')
         # The exact format of net strings (length in network byte
         # order) is important for interoperation with others
         # implementations.
         rhandle, whandle = self.make_pair()
         whandle.send_bytes("abc")
         whandle.send_bytes("defg")
-        import socket
         sock = socket.fromfd(rhandle.fileno(), socket.AF_INET,
                              socket.SOCK_STREAM)
         data1 = sock.recv(7)
diff --git a/pypy/module/_winreg/test/test_winreg.py b/pypy/module/_winreg/test/test_winreg.py
--- a/pypy/module/_winreg/test/test_winreg.py
+++ b/pypy/module/_winreg/test/test_winreg.py
@@ -198,7 +198,10 @@
         import nt
         r = ExpandEnvironmentStrings(u"%windir%\\test")
         assert isinstance(r, unicode)
-        assert r == nt.environ["WINDIR"] + "\\test"
+        if 'WINDIR' in nt.environ.keys():
+            assert r == nt.environ["WINDIR"] + "\\test"
+        else:
+            assert r == nt.environ["windir"] + "\\test"

     def test_long_key(self):
         from _winreg import (
diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py
--- a/pypy/module/cpyext/api.py
+++ b/pypy/module/cpyext/api.py
@@ -103,8 +103,8 @@
""".split()
for name in constant_names:
    setattr(CConfig_constants, name, rffi_platform.ConstantInteger(name))
-udir.join('pypy_decl.h').write("/* Will be filled later */")
-udir.join('pypy_macros.h').write("/* Will be filled later */")
+udir.join('pypy_decl.h').write("/* Will be filled later */\n")
+udir.join('pypy_macros.h').write("/* Will be filled later */\n")
globals().update(rffi_platform.configure(CConfig_constants))

def copy_header_files(dstdir):
@@ -927,12 +927,12 @@
        source_dir / "pyerrors.c",
        source_dir / "modsupport.c",
        source_dir / "getargs.c",
+       source_dir / "abstract.c",
        source_dir / "stringobject.c",
        source_dir / "mysnprintf.c",
        source_dir / "pythonrun.c",
        source_dir / "sysmodule.c",
        source_dir / "bufferobject.c",
-       source_dir / "object.c",
        source_dir / "cobject.c",
        source_dir / "structseq.c",
        source_dir / "capsule.c",
diff --git a/pypy/module/cpyext/complexobject.py b/pypy/module/cpyext/complexobject.py
--- a/pypy/module/cpyext/complexobject.py
+++ b/pypy/module/cpyext/complexobject.py
@@ -33,6 +33,11 @@
    # CPython also accepts anything
    return 0.0

+@cpython_api([Py_complex_ptr], PyObject)
+def _PyComplex_FromCComplex(space, v):
+    """Create a new Python complex number object from a C Py_complex value."""
+    return space.newcomplex(v.c_real, v.c_imag)
+
 # lltype does not handle functions returning a structure. This implements a
 # helper function, which takes as argument a reference to the return value.
 @cpython_api([PyObject, Py_complex_ptr], lltype.Void)
diff --git a/pypy/module/cpyext/floatobject.py b/pypy/module/cpyext/floatobject.py
--- a/pypy/module/cpyext/floatobject.py
+++ b/pypy/module/cpyext/floatobject.py
@@ -2,6 +2,7 @@
 from pypy.module.cpyext.api import (
     CANNOT_FAIL, cpython_api, PyObject, build_type_checkers, CONST_STRING)
 from pypy.interpreter.error import OperationError
+from pypy.rlib.rstruct import runpack

 PyFloat_Check, PyFloat_CheckExact = build_type_checkers("Float")
@@ -33,3 +34,19 @@
     backward compatibility."""
     return space.call_function(space.w_float, w_obj)

+@cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0)
+def _PyFloat_Unpack4(space, ptr, le):
+    input = rffi.charpsize2str(ptr, 4)
+    if rffi.cast(lltype.Signed, le):
+        return runpack.runpack("f", input)
+
+@cpython_api([CONST_STRING, rffi.INT_real], rffi.DOUBLE, error=-1.0)
+def _PyFloat_Unpack8(space, ptr, le):
+    input = rffi.charpsize2str(ptr, 8)
+    if rffi.cast(lltype.Signed, le):
+        return runpack.runpack("d", input)
+
diff --git a/pypy/module/cpyext/include/complexobject.h b/pypy/module/cpyext/include/complexobject.h
--- a/pypy/module/cpyext/include/complexobject.h
+++ b/pypy/module/cpyext/include/complexobject.h
@@ -21,6 +21,8 @@
     return result;
 }

+#define PyComplex_FromCComplex(c) _PyComplex_FromCComplex(&c)
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/pypy/module/cpyext/include/object.h b/pypy/module/cpyext/include/object.h
--- a/pypy/module/cpyext/include/object.h
+++ b/pypy/module/cpyext/include/object.h
@@ -38,10 +38,19 @@
     PyObject_VAR_HEAD
 } PyVarObject;

+#ifndef PYPY_DEBUG_REFCOUNT
 #define Py_INCREF(ob)   (Py_IncRef((PyObject *)ob))
 #define Py_DECREF(ob)   (Py_DecRef((PyObject *)ob))
 #define Py_XINCREF(ob)  (Py_IncRef((PyObject *)ob))
 #define Py_XDECREF(ob)  (Py_DecRef((PyObject *)ob))
+#else
+#define Py_INCREF(ob)   (((PyObject *)ob)->ob_refcnt++)
+#define Py_DECREF(ob)  ((((PyObject *)ob)->ob_refcnt > 1) ? \
+                        ((PyObject *)ob)->ob_refcnt-- : (Py_DecRef((PyObject *)ob)))
+
+#define Py_XINCREF(op) do { if ((op) == NULL) ; else Py_INCREF(op); } while (0)
+#define Py_XDECREF(op) do { if ((op) == NULL) ; else Py_DECREF(op); } while (0)
+#endif

 #define Py_CLEAR(op) \
     do { \
diff --git a/pypy/module/cpyext/include/pyerrors.h b/pypy/module/cpyext/include/pyerrors.h
--- a/pypy/module/cpyext/include/pyerrors.h
+++ b/pypy/module/cpyext/include/pyerrors.h
@@ -29,6 +29,10 @@
 # define vsnprintf _vsnprintf
 #endif

+#include <stdarg.h>
+PyAPI_FUNC(int) PyOS_snprintf(char *str, size_t size, const char *format, ...);
+PyAPI_FUNC(int) PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/pypy/module/cpyext/include/stringobject.h b/pypy/module/cpyext/include/stringobject.h
--- a/pypy/module/cpyext/include/stringobject.h
+++ b/pypy/module/cpyext/include/stringobject.h
@@ -7,8 +7,6 @@
 extern "C" {
 #endif

-int PyOS_snprintf(char *str, size_t size, const char *format, ...);
-
 #define PyString_GET_SIZE(op) PyString_Size(op)
 #define PyString_AS_STRING(op) PyString_AsString(op)
diff --git a/pypy/module/cpyext/include/structmember.h b/pypy/module/cpyext/include/structmember.h
--- a/pypy/module/cpyext/include/structmember.h
+++ b/pypy/module/cpyext/include/structmember.h
@@ -40,7 +40,8 @@
    when the value is NULL, instead of converting to None. */

 #define T_LONGLONG      17
-#define T_ULONGLONG     18
+#define T_ULONGLONG     18
+#define T_PYSSIZET      19

 /* Flags. These constants are also in structmemberdefs.py. */
 #define READONLY        1
diff --git a/pypy/module/cpyext/listobject.py b/pypy/module/cpyext/listobject.py
--- a/pypy/module/cpyext/listobject.py
+++ b/pypy/module/cpyext/listobject.py
@@ -110,6 +110,16 @@
     space.call_method(w_list, "reverse")
     return 0

+@cpython_api([PyObject, Py_ssize_t, Py_ssize_t], PyObject)
+def PyList_GetSlice(space, w_list, low, high):
+    """Return a list of the objects in list containing the objects between low
+    and high.  Return NULL and set an exception if unsuccessful.  Analogous
+    to list[low:high].  Negative indices, as when slicing from Python, are not
+    supported."""
+    w_start = space.wrap(low)
+    w_stop = space.wrap(high)
+    return space.getslice(w_list, w_start, w_stop)
+
 @cpython_api([PyObject, Py_ssize_t, Py_ssize_t, PyObject], rffi.INT_real, error=-1)
 def PyList_SetSlice(space, w_list, low, high, w_sequence):
     """Set the slice of list between low and high to the contents of
diff --git a/pypy/module/cpyext/longobject.py b/pypy/module/cpyext/longobject.py
--- a/pypy/module/cpyext/longobject.py
+++ b/pypy/module/cpyext/longobject.py
@@ -1,6 +1,7 @@
 from pypy.rpython.lltypesystem import lltype, rffi
-from pypy.module.cpyext.api import (cpython_api, PyObject, build_type_checkers,
-                                    CONST_STRING, ADDR, CANNOT_FAIL)
+from pypy.module.cpyext.api import (
+    cpython_api, PyObject, build_type_checkers, Py_ssize_t,
+    CONST_STRING, ADDR, CANNOT_FAIL)
 from pypy.objspace.std.longobject import W_LongObject
 from pypy.interpreter.error import OperationError
 from pypy.module.cpyext.intobject import PyInt_AsUnsignedLongMask
@@ -15,6 +16,13 @@
     """Return a new PyLongObject object from v, or NULL on failure."""
     return space.newlong(val)

+@cpython_api([Py_ssize_t], PyObject)
+def PyLong_FromSsize_t(space, val):
+    """Return a new PyLongObject object from a C Py_ssize_t, or
+    NULL on failure.
+    """
+    return space.newlong(val)
+
 @cpython_api([rffi.LONGLONG], PyObject)
 def PyLong_FromLongLong(space, val):
     """Return a new PyLongObject object from a C long long, or NULL
@@ -56,6 +64,14 @@
     and -1 will be returned."""
     return space.int_w(w_long)

+@cpython_api([PyObject], Py_ssize_t, error=-1)
+def PyLong_AsSsize_t(space, w_long):
+    """Return a C Py_ssize_t representation of the contents of pylong.  If
+    pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised
+    and -1 will be returned.
+    """
+    return space.int_w(w_long)
+
 @cpython_api([PyObject], rffi.LONGLONG, error=-1)
 def PyLong_AsLongLong(space, w_long):
     """
diff --git a/pypy/module/cpyext/object.py b/pypy/module/cpyext/object.py
--- a/pypy/module/cpyext/object.py
+++ b/pypy/module/cpyext/object.py
@@ -381,6 +381,15 @@
     This is the equivalent of the Python expression hash(o)."""
     return space.int_w(space.hash(w_obj))

+@cpython_api([PyObject], lltype.Signed, error=-1)
+def PyObject_HashNotImplemented(space, o):
+    """Set a TypeError indicating that type(o) is not hashable and return -1.
+    This function receives special treatment when stored in a tp_hash slot,
+    allowing a type to explicitly indicate to the interpreter that it is not
+    hashable.
+    """
+    raise OperationError(space.w_TypeError, space.wrap("unhashable type"))
+
 @cpython_api([PyObject], PyObject)
 def PyObject_Dir(space, w_o):
     """This is equivalent to the Python expression dir(o), returning a (possibly
diff --git a/pypy/module/cpyext/slotdefs.py b/pypy/module/cpyext/slotdefs.py
--- a/pypy/module/cpyext/slotdefs.py
+++ b/pypy/module/cpyext/slotdefs.py
@@ -7,7 +7,7 @@
     cpython_api, generic_cpy_call, PyObject, Py_ssize_t)
 from pypy.module.cpyext.typeobjectdefs import (
     unaryfunc, wrapperfunc, ternaryfunc, PyTypeObjectPtr, binaryfunc,
-    getattrfunc, getattrofunc, setattrofunc, lenfunc, ssizeargfunc,
+    getattrfunc, getattrofunc, setattrofunc, lenfunc, ssizeargfunc, inquiry,
     ssizessizeargfunc, ssizeobjargproc, iternextfunc, initproc,
     richcmpfunc, cmpfunc, hashfunc, descrgetfunc, descrsetfunc,
     objobjproc, objobjargproc, readbufferproc)
@@ -60,6 +60,16 @@
     args_w = space.fixedview(w_args)
     return generic_cpy_call(space, func_binary, w_self, args_w[0])

+def wrap_inquirypred(space, w_self, w_args, func):
+    func_inquiry = rffi.cast(inquiry, func)
+    check_num_args(space, w_args, 0)
+    args_w = space.fixedview(w_args)
+    res = generic_cpy_call(space, func_inquiry, w_self)
+    res = rffi.cast(lltype.Signed, res)
+    if res == -1:
+        space.fromcache(State).check_and_raise_exception()
+    return space.wrap(bool(res))
+
 def wrap_getattr(space, w_self, w_args, func):
     func_target = rffi.cast(getattrfunc, func)
     check_num_args(space, w_args, 1)
diff --git a/pypy/module/cpyext/src/abstract.c b/pypy/module/cpyext/src/abstract.c
new file mode 100644
--- /dev/null
+++ b/pypy/module/cpyext/src/abstract.c
@@ -0,0 +1,269 @@
+/* Abstract Object Interface */
+
+#include "Python.h"
+
+/* Shorthands to return certain errors */
+
+static PyObject *
+type_error(const char *msg, PyObject *obj)
+{
+    PyErr_Format(PyExc_TypeError, msg, obj->ob_type->tp_name);
+    return NULL;
+}
+
+static PyObject *
+null_error(void)
+{
+    if (!PyErr_Occurred())
+        PyErr_SetString(PyExc_SystemError,
+                        "null argument to internal routine");
+    return NULL;
+}
+
+/* Operations on any object */
+
+int
+PyObject_CheckReadBuffer(PyObject *obj)
+{
+    PyBufferProcs *pb = obj->ob_type->tp_as_buffer;
+
+    if (pb == NULL ||
+        pb->bf_getreadbuffer == NULL ||
+        pb->bf_getsegcount == NULL ||
+        (*pb->bf_getsegcount)(obj, NULL) != 1)
+        return 0;
+    return 1;
+}
+
+int PyObject_AsReadBuffer(PyObject *obj,
+                          const void **buffer,
+                          Py_ssize_t *buffer_len)
+{
+    PyBufferProcs *pb;
+    void *pp;
+    Py_ssize_t len;
+
+    if (obj == NULL || buffer == NULL || buffer_len == NULL) {
+        null_error();
+        return -1;
+    }
+    pb = obj->ob_type->tp_as_buffer;
+    if (pb == NULL ||
+        pb->bf_getreadbuffer == NULL ||
+        pb->bf_getsegcount == NULL) {
+        PyErr_SetString(PyExc_TypeError,
+                        "expected a readable buffer object");
+        return -1;
+    }
+    if ((*pb->bf_getsegcount)(obj, NULL) != 1) {
+        PyErr_SetString(PyExc_TypeError,
+                        "expected a single-segment buffer object");
+        return -1;
+    }
+    len = (*pb->bf_getreadbuffer)(obj, 0, &pp);
+    if (len < 0)
+        return -1;
+    *buffer = pp;
+    *buffer_len = len;
+    return 0;
+}
+
+int PyObject_AsWriteBuffer(PyObject *obj,
+                           void **buffer,
+                           Py_ssize_t *buffer_len)
+{
+    PyBufferProcs *pb;
+    void*pp;
+    Py_ssize_t len;
+
+    if (obj == NULL || buffer == NULL || buffer_len == NULL) {
+        null_error();
+        return -1;
+    }
+    pb = obj->ob_type->tp_as_buffer;
+    if (pb == NULL ||
+        pb->bf_getwritebuffer == NULL ||
+        pb->bf_getsegcount == NULL) {
+        PyErr_SetString(PyExc_TypeError,
+                        "expected a writeable buffer object");
+        return -1;
+    }
+    if ((*pb->bf_getsegcount)(obj, NULL) != 1) {
+        PyErr_SetString(PyExc_TypeError,
+                        "expected a single-segment buffer object");
+        return -1;
+    }
+    len = (*pb->bf_getwritebuffer)(obj,0,&pp);
+    if (len < 0)
+        return -1;
+    *buffer = pp;
+    *buffer_len = len;
+    return 0;
+}
+
+/* Operations on callable objects */
+
+static PyObject*
+call_function_tail(PyObject *callable, PyObject *args)
+{
+    PyObject *retval;
+
+    if (args == NULL)
+        return NULL;
+
+    if (!PyTuple_Check(args)) {
+        PyObject *a;
+
+        a = PyTuple_New(1);
+        if (a == NULL) {
+            Py_DECREF(args);
+            return NULL;
+        }
+        PyTuple_SET_ITEM(a, 0, args);
+        args = a;
+    }
+    retval = PyObject_Call(callable, args, NULL);
+
+    Py_DECREF(args);
+
+    return retval;
+}
+
+PyObject *
+PyObject_CallFunction(PyObject *callable, const char *format, ...)
+{
+    va_list va;
+    PyObject *args;
+
+    if (callable == NULL)
+        return null_error();
+
+    if (format && *format) {
+        va_start(va, format);
+        args = Py_VaBuildValue(format, va);
+        va_end(va);
+    }
+    else
+        args = PyTuple_New(0);
+
+    return call_function_tail(callable, args);
+}
+
+PyObject *
+PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...)
+{
+    va_list va;
+    PyObject *args;
+    PyObject *func = NULL;
+    PyObject *retval = NULL;
+
+    if (o == NULL || name == NULL)
+        return null_error();
+
+    func = PyObject_GetAttrString(o, name);
+    if (func == NULL) {
+        PyErr_SetString(PyExc_AttributeError, name);
+        return 0;
+    }
+
+    if (!PyCallable_Check(func)) {
+        type_error("attribute of type '%.200s' is not callable", func);
+        goto exit;
+    }
+
+    if (format && *format) {
+        va_start(va, format);
+        args = Py_VaBuildValue(format, va);
+        va_end(va);
+    }
+    else
+        args = PyTuple_New(0);
+
+    retval = call_function_tail(func, args);
+
+  exit:
+    /* args gets consumed in call_function_tail */
+    Py_XDECREF(func);
+
+    return retval;
+}
+
+static PyObject *
+objargs_mktuple(va_list va)
+{
+    int i, n = 0;
+    va_list countva;
+    PyObject *result, *tmp;
+
+#ifdef VA_LIST_IS_ARRAY
+    memcpy(countva, va, sizeof(va_list));
+#else
+#ifdef __va_copy
+    __va_copy(countva, va);
+#else
+    countva = va;
+#endif
+#endif
+
+    while (((PyObject *)va_arg(countva, PyObject *)) != NULL)
+        ++n;
+    result = PyTuple_New(n);
+    if (result != NULL && n > 0) {
+        for (i = 0; i < n; ++i) {
+            tmp = (PyObject *)va_arg(va, PyObject *);
+            Py_INCREF(tmp);
+            PyTuple_SET_ITEM(result, i, tmp);
+        }
+    }
+    return result;
+}
+
+PyObject *
+PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...)
+{
+    PyObject *args, *tmp;
+    va_list vargs;
+
+    if (callable == NULL || name == NULL)
+        return null_error();
+
+    callable = PyObject_GetAttr(callable, name);
+    if (callable == NULL)
+        return NULL;
+
+    /* count the args */
+    va_start(vargs, name);
+    args = objargs_mktuple(vargs);
+    va_end(vargs);
+    if (args == NULL) {
+        Py_DECREF(callable);
+        return NULL;
+    }
+    tmp = PyObject_Call(callable, args, NULL);
+    Py_DECREF(args);
+    Py_DECREF(callable);
+
+    return tmp;
+}
+
+PyObject *
+PyObject_CallFunctionObjArgs(PyObject *callable, ...)
+{
+    PyObject *args, *tmp;
+    va_list vargs;
+
+    if (callable == NULL)
+        return null_error();
+
+    /* count the args */
+    va_start(vargs, callable);
+    args = objargs_mktuple(vargs);
+    va_end(vargs);
+    if (args == NULL)
+        return NULL;
+    tmp = PyObject_Call(callable, args, NULL);
+    Py_DECREF(args);
+
+    return tmp;
+}
+
diff --git a/pypy/module/cpyext/src/bufferobject.c b/pypy/module/cpyext/src/bufferobject.c
--- a/pypy/module/cpyext/src/bufferobject.c
+++ b/pypy/module/cpyext/src/bufferobject.c
@@ -13,207 +13,207 @@
static int get_buf(PyBufferObject *self, void **ptr, Py_ssize_t *size, - enum buffer_t buffer_type) + enum buffer_t buffer_type) { - if (self->b_base == NULL) { - assert (ptr != NULL); - *ptr = self->b_ptr; - *size = self->b_size; - } - else { - Py_ssize_t count, offset; - readbufferproc proc = 0; - PyBufferProcs *bp = self->b_base->ob_type->tp_as_buffer; - if ((*bp->bf_getsegcount)(self->b_base, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return 0; - } - if ((buffer_type == READ_BUFFER) || - ((buffer_type == ANY_BUFFER) && self->b_readonly)) - proc = bp->bf_getreadbuffer; - else if ((buffer_type == WRITE_BUFFER) || - (buffer_type == ANY_BUFFER)) - proc = (readbufferproc)bp->bf_getwritebuffer; - else if (buffer_type == CHAR_BUFFER) { + if (self->b_base == NULL) { + assert (ptr != NULL); + *ptr = self->b_ptr; + *size = self->b_size; + } + else { + Py_ssize_t count, offset; + 
readbufferproc proc = 0; + PyBufferProcs *bp = self->b_base->ob_type->tp_as_buffer; + if ((*bp->bf_getsegcount)(self->b_base, NULL) != 1) { + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return 0; + } + if ((buffer_type == READ_BUFFER) || + ((buffer_type == ANY_BUFFER) && self->b_readonly)) + proc = bp->bf_getreadbuffer; + else if ((buffer_type == WRITE_BUFFER) || + (buffer_type == ANY_BUFFER)) + proc = (readbufferproc)bp->bf_getwritebuffer; + else if (buffer_type == CHAR_BUFFER) { if (!PyType_HasFeature(self->ob_type, - Py_TPFLAGS_HAVE_GETCHARBUFFER)) { - PyErr_SetString(PyExc_TypeError, - "Py_TPFLAGS_HAVE_GETCHARBUFFER needed"); - return 0; - } - proc = (readbufferproc)bp->bf_getcharbuffer; - } - if (!proc) { - char *buffer_type_name; - switch (buffer_type) { - case READ_BUFFER: - buffer_type_name = "read"; - break; - case WRITE_BUFFER: - buffer_type_name = "write"; - break; - case CHAR_BUFFER: - buffer_type_name = "char"; - break; - default: - buffer_type_name = "no"; - break; - } - PyErr_Format(PyExc_TypeError, - "%s buffer type not available", - buffer_type_name); - return 0; - } - if ((count = (*proc)(self->b_base, 0, ptr)) < 0) - return 0; - /* apply constraints to the start/end */ - if (self->b_offset > count) - offset = count; - else - offset = self->b_offset; - *(char **)ptr = *(char **)ptr + offset; - if (self->b_size == Py_END_OF_BUFFER) - *size = count; - else - *size = self->b_size; - if (offset + *size > count) - *size = count - offset; - } - return 1; + Py_TPFLAGS_HAVE_GETCHARBUFFER)) { + PyErr_SetString(PyExc_TypeError, + "Py_TPFLAGS_HAVE_GETCHARBUFFER needed"); + return 0; + } + proc = (readbufferproc)bp->bf_getcharbuffer; + } + if (!proc) { + char *buffer_type_name; + switch (buffer_type) { + case READ_BUFFER: + buffer_type_name = "read"; + break; + case WRITE_BUFFER: + buffer_type_name = "write"; + break; + case CHAR_BUFFER: + buffer_type_name = "char"; + break; + default: + buffer_type_name = "no"; + break; + } 
+ PyErr_Format(PyExc_TypeError, + "%s buffer type not available", + buffer_type_name); + return 0; + } + if ((count = (*proc)(self->b_base, 0, ptr)) < 0) + return 0; + /* apply constraints to the start/end */ + if (self->b_offset > count) + offset = count; + else + offset = self->b_offset; + *(char **)ptr = *(char **)ptr + offset; + if (self->b_size == Py_END_OF_BUFFER) + *size = count; + else + *size = self->b_size; + if (offset + *size > count) + *size = count - offset; + } + return 1; } static PyObject * buffer_from_memory(PyObject *base, Py_ssize_t size, Py_ssize_t offset, void *ptr, - int readonly) + int readonly) { - PyBufferObject * b; + PyBufferObject * b; - if (size < 0 && size != Py_END_OF_BUFFER) { - PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); - return NULL; - } - if (offset < 0) { - PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); - return NULL; - } + if (size < 0 && size != Py_END_OF_BUFFER) { + PyErr_SetString(PyExc_ValueError, + "size must be zero or positive"); + return NULL; + } + if (offset < 0) { + PyErr_SetString(PyExc_ValueError, + "offset must be zero or positive"); + return NULL; + } - b = PyObject_NEW(PyBufferObject, &PyBuffer_Type); - if ( b == NULL ) - return NULL; + b = PyObject_NEW(PyBufferObject, &PyBuffer_Type); + if ( b == NULL ) + return NULL; - Py_XINCREF(base); - b->b_base = base; - b->b_ptr = ptr; - b->b_size = size; - b->b_offset = offset; - b->b_readonly = readonly; - b->b_hash = -1; + Py_XINCREF(base); + b->b_base = base; + b->b_ptr = ptr; + b->b_size = size; + b->b_offset = offset; + b->b_readonly = readonly; + b->b_hash = -1; - return (PyObject *) b; + return (PyObject *) b; } static PyObject * buffer_from_object(PyObject *base, Py_ssize_t size, Py_ssize_t offset, int readonly) { - if (offset < 0) { - PyErr_SetString(PyExc_ValueError, - "offset must be zero or positive"); - return NULL; - } - if ( PyBuffer_Check(base) && (((PyBufferObject *)base)->b_base) ) { - /* another 
buffer, refer to the base object */ - PyBufferObject *b = (PyBufferObject *)base; - if (b->b_size != Py_END_OF_BUFFER) { - Py_ssize_t base_size = b->b_size - offset; - if (base_size < 0) - base_size = 0; - if (size == Py_END_OF_BUFFER || size > base_size) - size = base_size; - } - offset += b->b_offset; - base = b->b_base; - } - return buffer_from_memory(base, size, offset, NULL, readonly); + if (offset < 0) { + PyErr_SetString(PyExc_ValueError, + "offset must be zero or positive"); + return NULL; + } + if ( PyBuffer_Check(base) && (((PyBufferObject *)base)->b_base) ) { + /* another buffer, refer to the base object */ + PyBufferObject *b = (PyBufferObject *)base; + if (b->b_size != Py_END_OF_BUFFER) { + Py_ssize_t base_size = b->b_size - offset; + if (base_size < 0) + base_size = 0; + if (size == Py_END_OF_BUFFER || size > base_size) + size = base_size; + } + offset += b->b_offset; + base = b->b_base; + } + return buffer_from_memory(base, size, offset, NULL, readonly); } PyObject * PyBuffer_FromObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size) { - PyBufferProcs *pb = base->ob_type->tp_as_buffer; + PyBufferProcs *pb = base->ob_type->tp_as_buffer; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_SetString(PyExc_TypeError, "buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_SetString(PyExc_TypeError, "buffer object expected"); + return NULL; + } - return buffer_from_object(base, size, offset, 1); + return buffer_from_object(base, size, offset, 1); } PyObject * PyBuffer_FromReadWriteObject(PyObject *base, Py_ssize_t offset, Py_ssize_t size) { - PyBufferProcs *pb = base->ob_type->tp_as_buffer; + PyBufferProcs *pb = base->ob_type->tp_as_buffer; - if ( pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_SetString(PyExc_TypeError, "buffer object expected"); - return NULL; - } + if ( pb 
== NULL || + pb->bf_getwritebuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_SetString(PyExc_TypeError, "buffer object expected"); + return NULL; + } - return buffer_from_object(base, size, offset, 0); + return buffer_from_object(base, size, offset, 0); } PyObject * PyBuffer_FromMemory(void *ptr, Py_ssize_t size) { - return buffer_from_memory(NULL, size, 0, ptr, 1); + return buffer_from_memory(NULL, size, 0, ptr, 1); } PyObject * PyBuffer_FromReadWriteMemory(void *ptr, Py_ssize_t size) { - return buffer_from_memory(NULL, size, 0, ptr, 0); + return buffer_from_memory(NULL, size, 0, ptr, 0); } PyObject * PyBuffer_New(Py_ssize_t size) { - PyObject *o; - PyBufferObject * b; + PyObject *o; + PyBufferObject * b; - if (size < 0) { - PyErr_SetString(PyExc_ValueError, - "size must be zero or positive"); - return NULL; - } - if (sizeof(*b) > PY_SSIZE_T_MAX - size) { - /* unlikely */ - return PyErr_NoMemory(); - } - /* Inline PyObject_New */ - o = (PyObject *)PyObject_MALLOC(sizeof(*b) + size); - if ( o == NULL ) - return PyErr_NoMemory(); - b = (PyBufferObject *) PyObject_INIT(o, &PyBuffer_Type); + if (size < 0) { + PyErr_SetString(PyExc_ValueError, + "size must be zero or positive"); + return NULL; + } + if (sizeof(*b) > PY_SSIZE_T_MAX - size) { + /* unlikely */ + return PyErr_NoMemory(); + } + /* Inline PyObject_New */ + o = (PyObject *)PyObject_MALLOC(sizeof(*b) + size); + if ( o == NULL ) + return PyErr_NoMemory(); + b = (PyBufferObject *) PyObject_INIT(o, &PyBuffer_Type); - b->b_base = NULL; - b->b_ptr = (void *)(b + 1); - b->b_size = size; - b->b_offset = 0; - b->b_readonly = 0; - b->b_hash = -1; + b->b_base = NULL; + b->b_ptr = (void *)(b + 1); + b->b_size = size; + b->b_offset = 0; + b->b_readonly = 0; + b->b_hash = -1; - return o; + return o; } /* Methods */ @@ -221,19 +221,21 @@ static PyObject * buffer_new(PyTypeObject *type, PyObject *args, PyObject *kw) { - PyObject *ob; - Py_ssize_t offset = 0; - Py_ssize_t size = Py_END_OF_BUFFER; + PyObject *ob; + 
Py_ssize_t offset = 0; + Py_ssize_t size = Py_END_OF_BUFFER; - /*if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) - return NULL;*/ - - if (!_PyArg_NoKeywords("buffer()", kw)) - return NULL; + /* + * if (PyErr_WarnPy3k("buffer() not supported in 3.x", 1) < 0) + * return NULL; + */ - if (!PyArg_ParseTuple(args, "O|nn:buffer", &ob, &offset, &size)) - return NULL; - return PyBuffer_FromObject(ob, offset, size); + if (!_PyArg_NoKeywords("buffer()", kw)) + return NULL; + + if (!PyArg_ParseTuple(args, "O|nn:buffer", &ob, &offset, &size)) + return NULL; + return PyBuffer_FromObject(ob, offset, size); } PyDoc_STRVAR(buffer_doc, @@ -248,99 +250,99 @@ static void buffer_dealloc(PyBufferObject *self) { - Py_XDECREF(self->b_base); - PyObject_DEL(self); + Py_XDECREF(self->b_base); + PyObject_DEL(self); } static int buffer_compare(PyBufferObject *self, PyBufferObject *other) { - void *p1, *p2; - Py_ssize_t len_self, len_other, min_len; - int cmp; + void *p1, *p2; + Py_ssize_t len_self, len_other, min_len; + int cmp; - if (!get_buf(self, &p1, &len_self, ANY_BUFFER)) - return -1; - if (!get_buf(other, &p2, &len_other, ANY_BUFFER)) - return -1; - min_len = (len_self < len_other) ? len_self : len_other; - if (min_len > 0) { - cmp = memcmp(p1, p2, min_len); - if (cmp != 0) - return cmp < 0 ? -1 : 1; - } - return (len_self < len_other) ? -1 : (len_self > len_other) ? 1 : 0; + if (!get_buf(self, &p1, &len_self, ANY_BUFFER)) + return -1; + if (!get_buf(other, &p2, &len_other, ANY_BUFFER)) + return -1; + min_len = (len_self < len_other) ? len_self : len_other; + if (min_len > 0) { + cmp = memcmp(p1, p2, min_len); + if (cmp != 0) + return cmp < 0 ? -1 : 1; + } + return (len_self < len_other) ? -1 : (len_self > len_other) ? 1 : 0; } static PyObject * buffer_repr(PyBufferObject *self) { - const char *status = self->b_readonly ? "read-only" : "read-write"; + const char *status = self->b_readonly ? 
"read-only" : "read-write"; if ( self->b_base == NULL ) - return PyString_FromFormat("<%s buffer ptr %p, size %zd at %p>", - status, - self->b_ptr, - self->b_size, - self); - else - return PyString_FromFormat( - "<%s buffer for %p, size %zd, offset %zd at %p>", - status, - self->b_base, - self->b_size, - self->b_offset, - self); + return PyString_FromFormat("<%s buffer ptr %p, size %zd at %p>", + status, + self->b_ptr, + self->b_size, + self); + else + return PyString_FromFormat( + "<%s buffer for %p, size %zd, offset %zd at %p>", + status, + self->b_base, + self->b_size, + self->b_offset, + self); } static long buffer_hash(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - register Py_ssize_t len; - register unsigned char *p; - register long x; + void *ptr; + Py_ssize_t size; + register Py_ssize_t len; + register unsigned char *p; + register long x; - if ( self->b_hash != -1 ) - return self->b_hash; + if ( self->b_hash != -1 ) + return self->b_hash; - /* XXX potential bugs here, a readonly buffer does not imply that the - * underlying memory is immutable. b_readonly is a necessary but not - * sufficient condition for a buffer to be hashable. Perhaps it would - * be better to only allow hashing if the underlying object is known to - * be immutable (e.g. PyString_Check() is true). Another idea would - * be to call tp_hash on the underlying object and see if it raises - * an error. */ - if ( !self->b_readonly ) - { - PyErr_SetString(PyExc_TypeError, - "writable buffers are not hashable"); - return -1; - } + /* XXX potential bugs here, a readonly buffer does not imply that the + * underlying memory is immutable. b_readonly is a necessary but not + * sufficient condition for a buffer to be hashable. Perhaps it would + * be better to only allow hashing if the underlying object is known to + * be immutable (e.g. PyString_Check() is true). Another idea would + * be to call tp_hash on the underlying object and see if it raises + * an error. 
*/ + if ( !self->b_readonly ) + { + PyErr_SetString(PyExc_TypeError, + "writable buffers are not hashable"); + return -1; + } - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - p = (unsigned char *) ptr; - len = size; - x = *p << 7; - while (--len >= 0) - x = (1000003*x) ^ *p++; - x ^= size; - if (x == -1) - x = -2; - self->b_hash = x; - return x; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return -1; + p = (unsigned char *) ptr; + len = size; + x = *p << 7; + while (--len >= 0) + x = (1000003*x) ^ *p++; + x ^= size; + if (x == -1) + x = -2; + self->b_hash = x; + return x; } static PyObject * buffer_str(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - return PyString_FromStringAndSize((const char *)ptr, size); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + return PyString_FromStringAndSize((const char *)ptr, size); } /* Sequence methods */ @@ -348,374 +350,372 @@ static Py_ssize_t buffer_length(PyBufferObject *self) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - return size; + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return -1; + return size; } static PyObject * buffer_concat(PyBufferObject *self, PyObject *other) { - PyBufferProcs *pb = other->ob_type->tp_as_buffer; - void *ptr1, *ptr2; - char *p; - PyObject *ob; - Py_ssize_t size, count; + PyBufferProcs *pb = other->ob_type->tp_as_buffer; + void *ptr1, *ptr2; + char *p; + PyObject *ob; + Py_ssize_t size, count; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return NULL; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? 
*/ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return NULL; - } + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return NULL; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? */ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return NULL; + } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return NULL; - - /* optimize special case */ - if ( size == 0 ) - { - Py_INCREF(other); - return other; - } + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return NULL; - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return NULL; + /* optimize special case */ + if ( size == 0 ) + { + Py_INCREF(other); + return other; + } - assert(count <= PY_SIZE_MAX - size); + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return NULL; - ob = PyString_FromStringAndSize(NULL, size + count); - if ( ob == NULL ) - return NULL; - p = PyString_AS_STRING(ob); - memcpy(p, ptr1, size); - memcpy(p + size, ptr2, count); + assert(count <= PY_SIZE_MAX - size); - /* there is an extra byte in the string object, so this is safe */ - p[size + count] = '\0'; + ob = PyString_FromStringAndSize(NULL, size + count); + if ( ob == NULL ) + return NULL; + p = PyString_AS_STRING(ob); + memcpy(p, ptr1, size); + memcpy(p + size, ptr2, count); - return ob; + /* there is an extra byte in the string object, so this is safe */ + p[size + count] = '\0'; + + return ob; } static PyObject * buffer_repeat(PyBufferObject *self, Py_ssize_t count) { - PyObject *ob; - register char *p; - void *ptr; - Py_ssize_t size; + PyObject *ob; + register char *p; + void *ptr; + Py_ssize_t size; - if ( count < 0 ) - count = 0; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if (count > PY_SSIZE_T_MAX / size) { - PyErr_SetString(PyExc_MemoryError, "result too large"); - return NULL; - } - ob = 
PyString_FromStringAndSize(NULL, size * count); - if ( ob == NULL ) - return NULL; + if ( count < 0 ) + count = 0; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if (count > PY_SSIZE_T_MAX / size) { + PyErr_SetString(PyExc_MemoryError, "result too large"); + return NULL; + } + ob = PyString_FromStringAndSize(NULL, size * count); + if ( ob == NULL ) + return NULL; - p = PyString_AS_STRING(ob); - while ( count-- ) - { - memcpy(p, ptr, size); - p += size; - } + p = PyString_AS_STRING(ob); + while ( count-- ) + { + memcpy(p, ptr, size); + p += size; + } - /* there is an extra byte in the string object, so this is safe */ - *p = '\0'; + /* there is an extra byte in the string object, so this is safe */ + *p = '\0'; - return ob; + return ob; } static PyObject * buffer_item(PyBufferObject *self, Py_ssize_t idx) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if ( idx < 0 || idx >= size ) { - PyErr_SetString(PyExc_IndexError, "buffer index out of range"); - return NULL; - } - return PyString_FromStringAndSize((char *)ptr + idx, 1); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if ( idx < 0 || idx >= size ) { + PyErr_SetString(PyExc_IndexError, "buffer index out of range"); + return NULL; + } + return PyString_FromStringAndSize((char *)ptr + idx, 1); } static PyObject * buffer_slice(PyBufferObject *self, Py_ssize_t left, Py_ssize_t right) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return NULL; - if ( left < 0 ) - left = 0; - if ( right < 0 ) - right = 0; - if ( right > size ) - right = size; - if ( right < left ) - right = left; - return PyString_FromStringAndSize((char *)ptr + left, - right - left); + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, &size, ANY_BUFFER)) + return NULL; + if ( left < 0 ) + left = 0; + if ( right < 0 ) + right = 0; + if ( right > size ) + right = size; + if ( right < left ) + right 
= left; + return PyString_FromStringAndSize((char *)ptr + left, + right - left); } static PyObject * buffer_subscript(PyBufferObject *self, PyObject *item) { - void *p; - Py_ssize_t size; - - if (!get_buf(self, &p, &size, ANY_BUFFER)) - return NULL; - + void *p; + Py_ssize_t size; + + if (!get_buf(self, &p, &size, ANY_BUFFER)) + return NULL; if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i == -1 && PyErr_Occurred()) - return NULL; - if (i < 0) - i += size; - return buffer_item(self, i); - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelength, cur, i; + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); + if (i == -1 && PyErr_Occurred()) + return NULL; + if (i < 0) + i += size; + return buffer_item(self, i); + } + else if (PySlice_Check(item)) { + Py_ssize_t start, stop, step, slicelength, cur, i; - if (PySlice_GetIndicesEx((PySliceObject*)item, size, - &start, &stop, &step, &slicelength) < 0) { - return NULL; - } + if (PySlice_GetIndicesEx((PySliceObject*)item, size, + &start, &stop, &step, &slicelength) < 0) { + return NULL; + } - if (slicelength <= 0) - return PyString_FromStringAndSize("", 0); - else if (step == 1) - return PyString_FromStringAndSize((char *)p + start, - stop - start); - else { - PyObject *result; - char *source_buf = (char *)p; - char *result_buf = (char *)PyMem_Malloc(slicelength); + if (slicelength <= 0) + return PyString_FromStringAndSize("", 0); + else if (step == 1) + return PyString_FromStringAndSize((char *)p + start, + stop - start); + else { + PyObject *result; + char *source_buf = (char *)p; + char *result_buf = (char *)PyMem_Malloc(slicelength); - if (result_buf == NULL) - return PyErr_NoMemory(); + if (result_buf == NULL) + return PyErr_NoMemory(); - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - result_buf[i] = source_buf[cur]; - } + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + result_buf[i] = source_buf[cur]; + 
} - result = PyString_FromStringAndSize(result_buf, - slicelength); - PyMem_Free(result_buf); - return result; - } - } - else { - PyErr_SetString(PyExc_TypeError, - "sequence index must be integer"); - return NULL; - } + result = PyString_FromStringAndSize(result_buf, + slicelength); + PyMem_Free(result_buf); + return result; + } + } + else { + PyErr_SetString(PyExc_TypeError, + "sequence index must be integer"); + return NULL; + } } static int buffer_ass_item(PyBufferObject *self, Py_ssize_t idx, PyObject *other) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t size; - Py_ssize_t count; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t size; + Py_ssize_t count; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return -1; + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return -1; - if (idx < 0 || idx >= size) { - PyErr_SetString(PyExc_IndexError, - "buffer assignment index out of range"); - return -1; - } + if (idx < 0 || idx >= size) { + PyErr_SetString(PyExc_IndexError, + "buffer assignment index out of range"); + return -1; + } - pb = other ? other->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } + pb = other ? other->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? 
*/ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return -1; - if ( count != 1 ) { - PyErr_SetString(PyExc_TypeError, - "right operand must be a single byte"); - return -1; - } + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return -1; + if ( count != 1 ) { + PyErr_SetString(PyExc_TypeError, + "right operand must be a single byte"); + return -1; + } - ((char *)ptr1)[idx] = *(char *)ptr2; - return 0; + ((char *)ptr1)[idx] = *(char *)ptr2; + return 0; } static int buffer_ass_slice(PyBufferObject *self, Py_ssize_t left, Py_ssize_t right, PyObject *other) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t size; - Py_ssize_t slice_len; - Py_ssize_t count; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t size; + Py_ssize_t slice_len; + Py_ssize_t count; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - pb = other ? other->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } - if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) - return -1; - if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) - return -1; + pb = other ? other->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(other, NULL) != 1 ) + { + /* ### use a different exception type/message? 
*/ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } + if (!get_buf(self, &ptr1, &size, ANY_BUFFER)) + return -1; + if ( (count = (*pb->bf_getreadbuffer)(other, 0, &ptr2)) < 0 ) + return -1; - if ( left < 0 ) - left = 0; - else if ( left > size ) - left = size; - if ( right < left ) - right = left; - else if ( right > size ) - right = size; - slice_len = right - left; + if ( left < 0 ) + left = 0; + else if ( left > size ) + left = size; + if ( right < left ) + right = left; + else if ( right > size ) + right = size; + slice_len = right - left; - if ( count != slice_len ) { - PyErr_SetString( - PyExc_TypeError, - "right operand length must match slice length"); - return -1; - } + if ( count != slice_len ) { + PyErr_SetString( + PyExc_TypeError, + "right operand length must match slice length"); + return -1; + } - if ( slice_len ) - memcpy((char *)ptr1 + left, ptr2, slice_len); + if ( slice_len ) + memcpy((char *)ptr1 + left, ptr2, slice_len); - return 0; + return 0; } static int buffer_ass_subscript(PyBufferObject *self, PyObject *item, PyObject *value) { - PyBufferProcs *pb; - void *ptr1, *ptr2; - Py_ssize_t selfsize; - Py_ssize_t othersize; + PyBufferProcs *pb; + void *ptr1, *ptr2; + Py_ssize_t selfsize; + Py_ssize_t othersize; - if ( self->b_readonly ) { - PyErr_SetString(PyExc_TypeError, - "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) { + PyErr_SetString(PyExc_TypeError, + "buffer is read-only"); + return -1; + } - pb = value ? value->ob_type->tp_as_buffer : NULL; - if ( pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL ) - { - PyErr_BadArgument(); - return -1; - } - if ( (*pb->bf_getsegcount)(value, NULL) != 1 ) - { - /* ### use a different exception type/message? */ - PyErr_SetString(PyExc_TypeError, - "single-segment buffer object expected"); - return -1; - } - if (!get_buf(self, &ptr1, &selfsize, ANY_BUFFER)) - return -1; - + pb = value ? 
value->ob_type->tp_as_buffer : NULL; + if ( pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL ) + { + PyErr_BadArgument(); + return -1; + } + if ( (*pb->bf_getsegcount)(value, NULL) != 1 ) + { + /* ### use a different exception type/message? */ + PyErr_SetString(PyExc_TypeError, + "single-segment buffer object expected"); + return -1; + } + if (!get_buf(self, &ptr1, &selfsize, ANY_BUFFER)) + return -1; if (PyIndex_Check(item)) { - Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); - if (i == -1 && PyErr_Occurred()) - return -1; - if (i < 0) - i += selfsize; - return buffer_ass_item(self, i, value); - } - else if (PySlice_Check(item)) { - Py_ssize_t start, stop, step, slicelength; - - if (PySlice_GetIndicesEx((PySliceObject *)item, selfsize, - &start, &stop, &step, &slicelength) < 0) - return -1; + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError); + if (i == -1 && PyErr_Occurred()) + return -1; + if (i < 0) + i += selfsize; + return buffer_ass_item(self, i, value); + } + else if (PySlice_Check(item)) { + Py_ssize_t start, stop, step, slicelength; - if ((othersize = (*pb->bf_getreadbuffer)(value, 0, &ptr2)) < 0) - return -1; + if (PySlice_GetIndicesEx((PySliceObject *)item, selfsize, + &start, &stop, &step, &slicelength) < 0) + return -1; - if (othersize != slicelength) { - PyErr_SetString( - PyExc_TypeError, - "right operand length must match slice length"); - return -1; - } + if ((othersize = (*pb->bf_getreadbuffer)(value, 0, &ptr2)) < 0) + return -1; - if (slicelength == 0) - return 0; - else if (step == 1) { - memcpy((char *)ptr1 + start, ptr2, slicelength); - return 0; - } - else { - Py_ssize_t cur, i; - - for (cur = start, i = 0; i < slicelength; - cur += step, i++) { - ((char *)ptr1)[cur] = ((char *)ptr2)[i]; - } + if (othersize != slicelength) { + PyErr_SetString( + PyExc_TypeError, + "right operand length must match slice length"); + return -1; + } - return 0; - } - } else { - PyErr_SetString(PyExc_TypeError, - 
"buffer indices must be integers"); - return -1; - } + if (slicelength == 0) + return 0; + else if (step == 1) { + memcpy((char *)ptr1 + start, ptr2, slicelength); + return 0; + } + else { + Py_ssize_t cur, i; + + for (cur = start, i = 0; i < slicelength; + cur += step, i++) { + ((char *)ptr1)[cur] = ((char *)ptr2)[i]; + } + + return 0; + } + } else { + PyErr_SetString(PyExc_TypeError, + "buffer indices must be integers"); + return -1; + } } /* Buffer methods */ @@ -723,64 +723,64 @@ static Py_ssize_t buffer_getreadbuf(PyBufferObject *self, Py_ssize_t idx, void **pp) { - Py_ssize_t size; - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, pp, &size, READ_BUFFER)) - return -1; - return size; + Py_ssize_t size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, pp, &size, READ_BUFFER)) + return -1; + return size; } static Py_ssize_t buffer_getwritebuf(PyBufferObject *self, Py_ssize_t idx, void **pp) { - Py_ssize_t size; + Py_ssize_t size; - if ( self->b_readonly ) - { - PyErr_SetString(PyExc_TypeError, "buffer is read-only"); - return -1; - } + if ( self->b_readonly ) + { + PyErr_SetString(PyExc_TypeError, "buffer is read-only"); + return -1; + } - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, pp, &size, WRITE_BUFFER)) - return -1; - return size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, pp, &size, WRITE_BUFFER)) + return -1; + return size; } static Py_ssize_t buffer_getsegcount(PyBufferObject *self, Py_ssize_t *lenp) { - void *ptr; - Py_ssize_t size; - if (!get_buf(self, &ptr, &size, ANY_BUFFER)) - return -1; - if (lenp) - *lenp = size; - return 1; + void *ptr; + Py_ssize_t size; + if (!get_buf(self, &ptr, 
&size, ANY_BUFFER)) + return -1; + if (lenp) + *lenp = size; + return 1; } static Py_ssize_t buffer_getcharbuf(PyBufferObject *self, Py_ssize_t idx, const char **pp) { - void *ptr; - Py_ssize_t size; - if ( idx != 0 ) { - PyErr_SetString(PyExc_SystemError, - "accessing non-existent buffer segment"); - return -1; - } - if (!get_buf(self, &ptr, &size, CHAR_BUFFER)) - return -1; - *pp = (const char *)ptr; - return size; + void *ptr; + Py_ssize_t size; + if ( idx != 0 ) { + PyErr_SetString(PyExc_SystemError, + "accessing non-existent buffer segment"); + return -1; + } + if (!get_buf(self, &ptr, &size, CHAR_BUFFER)) + return -1; + *pp = (const char *)ptr; + return size; } void init_bufferobject(void) @@ -789,67 +789,65 @@ } static PySequenceMethods buffer_as_sequence = { - (lenfunc)buffer_length, /*sq_length*/ - (binaryfunc)buffer_concat, /*sq_concat*/ - (ssizeargfunc)buffer_repeat, /*sq_repeat*/ - (ssizeargfunc)buffer_item, /*sq_item*/ - (ssizessizeargfunc)buffer_slice, /*sq_slice*/ - (ssizeobjargproc)buffer_ass_item, /*sq_ass_item*/ - (ssizessizeobjargproc)buffer_ass_slice, /*sq_ass_slice*/ + (lenfunc)buffer_length, /*sq_length*/ + (binaryfunc)buffer_concat, /*sq_concat*/ + (ssizeargfunc)buffer_repeat, /*sq_repeat*/ + (ssizeargfunc)buffer_item, /*sq_item*/ + (ssizessizeargfunc)buffer_slice, /*sq_slice*/ + (ssizeobjargproc)buffer_ass_item, /*sq_ass_item*/ + (ssizessizeobjargproc)buffer_ass_slice, /*sq_ass_slice*/ }; static PyMappingMethods buffer_as_mapping = { - (lenfunc)buffer_length, - (binaryfunc)buffer_subscript, - (objobjargproc)buffer_ass_subscript, + (lenfunc)buffer_length, + (binaryfunc)buffer_subscript, + (objobjargproc)buffer_ass_subscript, }; static PyBufferProcs buffer_as_buffer = { - (readbufferproc)buffer_getreadbuf, - (writebufferproc)buffer_getwritebuf, - (segcountproc)buffer_getsegcount, - (charbufferproc)buffer_getcharbuf, + (readbufferproc)buffer_getreadbuf, + (writebufferproc)buffer_getwritebuf, + (segcountproc)buffer_getsegcount, + 
(charbufferproc)buffer_getcharbuf, }; PyTypeObject PyBuffer_Type = { - PyObject_HEAD_INIT(NULL) + PyVarObject_HEAD_INIT(NULL, 0) + "buffer", + sizeof(PyBufferObject), 0, - "buffer", - sizeof(PyBufferObject), - 0, - (destructor)buffer_dealloc, /* tp_dealloc */ - 0, /* tp_print */ - 0, /* tp_getattr */ - 0, /* tp_setattr */ - (cmpfunc)buffer_compare, /* tp_compare */ - (reprfunc)buffer_repr, /* tp_repr */ - 0, /* tp_as_number */ - &buffer_as_sequence, /* tp_as_sequence */ - &buffer_as_mapping, /* tp_as_mapping */ - (hashfunc)buffer_hash, /* tp_hash */ - 0, /* tp_call */ - (reprfunc)buffer_str, /* tp_str */ - PyObject_GenericGetAttr, /* tp_getattro */ - 0, /* tp_setattro */ - &buffer_as_buffer, /* tp_as_buffer */ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GETCHARBUFFER, /* tp_flags */ - buffer_doc, /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - 0, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - buffer_new, /* tp_new */ + (destructor)buffer_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + (cmpfunc)buffer_compare, /* tp_compare */ + (reprfunc)buffer_repr, /* tp_repr */ + 0, /* tp_as_number */ + &buffer_as_sequence, /* tp_as_sequence */ + &buffer_as_mapping, /* tp_as_mapping */ + (hashfunc)buffer_hash, /* tp_hash */ + 0, /* tp_call */ + (reprfunc)buffer_str, /* tp_str */ + PyObject_GenericGetAttr, /* tp_getattro */ + 0, /* tp_setattro */ + &buffer_as_buffer, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GETCHARBUFFER, /* tp_flags */ + buffer_doc, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + 0, /* tp_methods */ + 0, /* tp_members */ + 0, /* 
tp_getset */ + 0, /* tp_base */ + 0, /* tp_dict */ + 0, /* tp_descr_get */ + 0, /* tp_descr_set */ + 0, /* tp_dictoffset */ + 0, /* tp_init */ + 0, /* tp_alloc */ + buffer_new, /* tp_new */ }; - diff --git a/pypy/module/cpyext/src/cobject.c b/pypy/module/cpyext/src/cobject.c --- a/pypy/module/cpyext/src/cobject.c +++ b/pypy/module/cpyext/src/cobject.c @@ -50,6 +50,10 @@ PyCObject_AsVoidPtr(PyObject *self) { if (self) { + if (PyCapsule_CheckExact(self)) { + const char *name = PyCapsule_GetName(self); + return (void *)PyCapsule_GetPointer(self, name); + } if (self->ob_type == &PyCObject_Type) return ((PyCObject *)self)->cobject; PyErr_SetString(PyExc_TypeError, diff --git a/pypy/module/cpyext/src/getargs.c b/pypy/module/cpyext/src/getargs.c --- a/pypy/module/cpyext/src/getargs.c +++ b/pypy/module/cpyext/src/getargs.c @@ -7,349 +7,348 @@ #ifdef __cplusplus -extern "C" { +extern "C" { #endif - int PyArg_Parse(PyObject *, const char *, ...); int PyArg_ParseTuple(PyObject *, const char *, ...); int PyArg_VaParse(PyObject *, const char *, va_list); int PyArg_ParseTupleAndKeywords(PyObject *, PyObject *, - const char *, char **, ...); + const char *, char **, ...); int PyArg_VaParseTupleAndKeywords(PyObject *, PyObject *, - const char *, char **, va_list); + const char *, char **, va_list); #define FLAG_COMPAT 1 #define FLAG_SIZE_T 2 -typedef int (*destr_t)(PyObject *, void *); - - -/* Keep track of "objects" that have been allocated or initialized and - which will need to be deallocated or cleaned up somehow if overall - parsing fails. 
-*/ -typedef struct { - void *item; - destr_t destructor; -} freelistentry_t; - -typedef struct { - int first_available; - freelistentry_t *entries; -} freelist_t; - /* Forward */ static int vgetargs1(PyObject *, const char *, va_list *, int); static void seterror(int, const char *, int *, const char *, const char *); -static char *convertitem(PyObject *, const char **, va_list *, int, int *, - char *, size_t, freelist_t *); +static char *convertitem(PyObject *, const char **, va_list *, int, int *, + char *, size_t, PyObject **); static char *converttuple(PyObject *, const char **, va_list *, int, - int *, char *, size_t, int, freelist_t *); + int *, char *, size_t, int, PyObject **); static char *convertsimple(PyObject *, const char **, va_list *, int, char *, - size_t, freelist_t *); + size_t, PyObject **); static Py_ssize_t convertbuffer(PyObject *, void **p, char **); static int getbuffer(PyObject *, Py_buffer *, char**); static int vgetargskeywords(PyObject *, PyObject *, - const char *, char **, va_list *, int); + const char *, char **, va_list *, int); static char *skipitem(const char **, va_list *, int); int PyArg_Parse(PyObject *args, const char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_COMPAT); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_COMPAT); + va_end(va); + return retval; } int _PyArg_Parse_SizeT(PyObject *args, char *format, ...) { - int retval; - va_list va; - - va_start(va, format); - retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T); - va_end(va); - return retval; + int retval; + va_list va; + + va_start(va, format); + retval = vgetargs1(args, format, &va, FLAG_COMPAT|FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_ParseTuple(PyObject *args, const char *format, ...) 
 {
-    int retval;
-    va_list va;
-
-    va_start(va, format);
-    retval = vgetargs1(args, format, &va, 0);
-    va_end(va);
-    return retval;
+    int retval;
+    va_list va;
+
+    va_start(va, format);
+    retval = vgetargs1(args, format, &va, 0);
+    va_end(va);
+    return retval;
 }
 
 int
 _PyArg_ParseTuple_SizeT(PyObject *args, char *format, ...)
 {
-    int retval;
-    va_list va;
-
-    va_start(va, format);
-    retval = vgetargs1(args, format, &va, FLAG_SIZE_T);
-    va_end(va);
-    return retval;
+    int retval;
+    va_list va;
+
+    va_start(va, format);
+    retval = vgetargs1(args, format, &va, FLAG_SIZE_T);
+    va_end(va);
+    return retval;
 }
 
 int
 PyArg_VaParse(PyObject *args, const char *format, va_list va)
 {
-    va_list lva;
+    va_list lva;
 
 #ifdef VA_LIST_IS_ARRAY
-    memcpy(lva, va, sizeof(va_list));
+    memcpy(lva, va, sizeof(va_list));
 #else
 #ifdef __va_copy
-    __va_copy(lva, va);
+    __va_copy(lva, va);
 #else
-    lva = va;
+    lva = va;
 #endif
 #endif
 
-    return vgetargs1(args, format, &lva, 0);
+    return vgetargs1(args, format, &lva, 0);
 }
 
 int
 _PyArg_VaParse_SizeT(PyObject *args, char *format, va_list va)
 {
-    va_list lva;
+    va_list lva;
 
 #ifdef VA_LIST_IS_ARRAY
-    memcpy(lva, va, sizeof(va_list));
+    memcpy(lva, va, sizeof(va_list));
 #else
 #ifdef __va_copy
-    __va_copy(lva, va);
+    __va_copy(lva, va);
 #else
-    lva = va;
+    lva = va;
 #endif
 #endif
 
-    return vgetargs1(args, format, &lva, FLAG_SIZE_T);
+    return vgetargs1(args, format, &lva, FLAG_SIZE_T);
 }
 
 
 /* Handle cleanup of allocated memory in case of exception */
 
+#define GETARGS_CAPSULE_NAME_CLEANUP_PTR "getargs.cleanup_ptr"
+#define GETARGS_CAPSULE_NAME_CLEANUP_BUFFER "getargs.cleanup_buffer"
+
+static void
+cleanup_ptr(PyObject *self)
+{
+    void *ptr = PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_PTR);
+    if (ptr) {
+        PyMem_FREE(ptr);
+    }
+}
+
+static void
+cleanup_buffer(PyObject *self)
+{
+    Py_buffer *ptr = (Py_buffer *)PyCapsule_GetPointer(self, GETARGS_CAPSULE_NAME_CLEANUP_BUFFER);
+    if (ptr) {
+        PyBuffer_Release(ptr);
+    }
+}
+
 static int
-cleanup_ptr(PyObject *self, void *ptr)
+addcleanup(void *ptr, PyObject **freelist, PyCapsule_Destructor destr)
 {
-    if (ptr) {
-        PyMem_FREE(ptr);
+    PyObject *cobj;
+    const char *name;
+
+    if (!*freelist) {
+        *freelist = PyList_New(0);
+        if (!*freelist) {
+            destr(ptr);
+            return -1;
+        }
     }
+
+    if (destr == cleanup_ptr) {
+        name = GETARGS_CAPSULE_NAME_CLEANUP_PTR;
+    } else if (destr == cleanup_buffer) {
+        name = GETARGS_CAPSULE_NAME_CLEANUP_BUFFER;
+    } else {
+        return -1;
+    }
+    cobj = PyCapsule_New(ptr, name, destr);
+    if (!cobj) {
+        destr(ptr);
+        return -1;
+    }
+    if (PyList_Append(*freelist, cobj)) {
+        Py_DECREF(cobj);
+        return -1;
+    }
+    Py_DECREF(cobj);
     return 0;
 }
 
 static int
-cleanup_buffer(PyObject *self, void *ptr)
+cleanreturn(int retval, PyObject *freelist)
 {
-    Py_buffer *buf = (Py_buffer *)ptr;
-    if (buf) {
-        PyBuffer_Release(buf);
+    if (freelist && retval != 0) {
+        /* We were successful, reset the destructors so that they
+           don't get called. */
+        Py_ssize_t len = PyList_GET_SIZE(freelist), i;
+        for (i = 0; i < len; i++)
+            PyCapsule_SetDestructor(PyList_GET_ITEM(freelist, i), NULL);
     }
-    return 0;
+    Py_XDECREF(freelist);
+    return retval;
 }
 
-static int
-addcleanup(void *ptr, freelist_t *freelist, destr_t destructor)
-{
-    int index;
-
-    index = freelist->first_available;
-    freelist->first_available += 1;
-
-    freelist->entries[index].item = ptr;
-    freelist->entries[index].destructor = destructor;
-
-    return 0;
-}
-
-static int
-cleanreturn(int retval, freelist_t *freelist)
-{
-    int index;
-
-    if (retval == 0) {
-        /* A failure occurred, therefore execute all of the cleanup
-           functions.
-        */
-        for (index = 0; index < freelist->first_available; ++index) {
-            freelist->entries[index].destructor(NULL,
-                                                freelist->entries[index].item);
-        }
-    }
-    PyMem_Free(freelist->entries);
-    return retval;
-}
 
 static int
 vgetargs1(PyObject *args, const char *format, va_list *p_va, int flags)
 {
-    char msgbuf[256];
-    int levels[32];
-    const char *fname = NULL;
-    const char *message = NULL;
-    int min = -1;
-    int max = 0;
-    int level = 0;
-    int endfmt = 0;
-    const char *formatsave = format;
-    Py_ssize_t i, len;
-    char *msg;
-    freelist_t freelist = {0, NULL};
-    int compat = flags & FLAG_COMPAT;
+    char msgbuf[256];
+    int levels[32];
+    const char *fname = NULL;
+    const char *message = NULL;
+    int min = -1;
+    int max = 0;
+    int level = 0;
+    int endfmt = 0;
+    const char *formatsave = format;
+    Py_ssize_t i, len;
+    char *msg;
+    PyObject *freelist = NULL;
+    int compat = flags & FLAG_COMPAT;
 
-    assert(compat || (args != (PyObject*)NULL));
-    flags = flags & ~FLAG_COMPAT;
+    assert(compat || (args != (PyObject*)NULL));
+    flags = flags & ~FLAG_COMPAT;
 
-    while (endfmt == 0) {
-        int c = *format++;
-        switch (c) {
-        case '(':
-            if (level == 0)
-                max++;
-            level++;
-            if (level >= 30)
-                Py_FatalError("too many tuple nesting levels "
-                              "in argument format string");
-            break;
-        case ')':
-            if (level == 0)
-                Py_FatalError("excess ')' in getargs format");
-            else
-                level--;
-            break;
-        case '\0':
-            endfmt = 1;
-            break;
-        case ':':
-            fname = format;
-            endfmt = 1;
-            break;
-        case ';':
-            message = format;
-            endfmt = 1;
-            break;
-        default:
-            if (level == 0) {
-                if (c == 'O')
-                    max++;
-                else if (isalpha(Py_CHARMASK(c))) {
-                    if (c != 'e') /* skip encoded */
-                        max++;
-                } else if (c == '|')
-                    min = max;
-            }
-            break;
-        }
-    }
-
-    if (level != 0)
-        Py_FatalError(/* '(' */ "missing ')' in getargs format");
-
-    if (min < 0)
-        min = max;
-
-    format = formatsave;
-
-    freelist.entries = PyMem_New(freelistentry_t, max);
+    while (endfmt == 0) {
+        int c = *format++;
+        switch (c) {
+        case '(':
+            if (level == 0)
+                max++;
+            level++;
+            if (level >= 30)
+                Py_FatalError("too many tuple nesting levels "
+                              "in argument format string");
+            break;
+        case ')':
+            if (level == 0)
+                Py_FatalError("excess ')' in getargs format");
+            else
+                level--;
+            break;
+        case '\0':
+            endfmt = 1;
+            break;
+        case ':':
+            fname = format;
+            endfmt = 1;
+            break;
+        case ';':
+            message = format;
+            endfmt = 1;
+            break;
+        default:
+            if (level == 0) {
+                if (c == 'O')
+                    max++;
+                else if (isalpha(Py_CHARMASK(c))) {
+                    if (c != 'e') /* skip encoded */
+                        max++;
+                } else if (c == '|')
+                    min = max;
+            }
+            break;
+        }
+    }
 
-    if (compat) {
-        if (max == 0) {
-            if (args == NULL)
-                return cleanreturn(1, &freelist);
-            PyOS_snprintf(msgbuf, sizeof(msgbuf),
-                          "%.200s%s takes no arguments",
-                          fname==NULL ? "function" : fname,
-                          fname==NULL ? "" : "()");
-            PyErr_SetString(PyExc_TypeError, msgbuf);
-            return cleanreturn(0, &freelist);
-        }
-        else if (min == 1 && max == 1) {
-            if (args == NULL) {
-                PyOS_snprintf(msgbuf, sizeof(msgbuf),
-                              "%.200s%s takes at least one argument",
-                              fname==NULL ? "function" : fname,
-                              fname==NULL ? "" : "()");
-                PyErr_SetString(PyExc_TypeError, msgbuf);
-                return cleanreturn(0, &freelist);
-            }
-            msg = convertitem(args, &format, p_va, flags, levels,
-                              msgbuf, sizeof(msgbuf), &freelist);
-            if (msg == NULL)
-                return cleanreturn(1, &freelist);
-            seterror(levels[0], msg, levels+1, fname, message);
-            return cleanreturn(0, &freelist);
-        }
-        else {
-            PyErr_SetString(PyExc_SystemError,
-                "old style getargs format uses new features");
-            return cleanreturn(0, &freelist);
-        }
-    }
-
-    if (!PyTuple_Check(args)) {
-        PyErr_SetString(PyExc_SystemError,
-            "new style getargs format but argument is not a tuple");
-        return cleanreturn(0, &freelist);
-    }
-
-    len = PyTuple_GET_SIZE(args);
-
-    if (len < min || max < len) {
-        if (message == NULL) {
-            PyOS_snprintf(msgbuf, sizeof(msgbuf),
-                          "%.150s%s takes %s %d argument%s "
-                          "(%ld given)",
-                          fname==NULL ? "function" : fname,
-                          fname==NULL ? "" : "()",
-                          min==max ? "exactly"
-                          : len < min ? "at least" : "at most",
-                          len < min ? min : max,
-                          (len < min ? min : max) == 1 ? "" : "s",
-                          Py_SAFE_DOWNCAST(len, Py_ssize_t, long));
-            message = msgbuf;
-        }
-        PyErr_SetString(PyExc_TypeError, message);
-        return cleanreturn(0, &freelist);
-    }
-
-    for (i = 0; i < len; i++) {
-        if (*format == '|')
-            format++;
-        msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va,
-                          flags, levels, msgbuf,
-                          sizeof(msgbuf), &freelist);
-        if (msg) {
-            seterror(i+1, msg, levels, fname, message);
-            return cleanreturn(0, &freelist);
-        }
-    }
+    if (level != 0)
+        Py_FatalError(/* '(' */ "missing ')' in getargs format");
 
-    if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) &&
-        *format != '(' &&
-        *format != '|' && *format != ':' && *format != ';') {
-        PyErr_Format(PyExc_SystemError,
-                     "bad format string: %.200s", formatsave);
-        return cleanreturn(0, &freelist);
-    }
-
-    return cleanreturn(1, &freelist);
+    if (min < 0)
+        min = max;
+
+    format = formatsave;
+
+    if (compat) {
+        if (max == 0) {
+            if (args == NULL)
+                return 1;
+            PyOS_snprintf(msgbuf, sizeof(msgbuf),
+                          "%.200s%s takes no arguments",
+                          fname==NULL ? "function" : fname,
+                          fname==NULL ? "" : "()");
+            PyErr_SetString(PyExc_TypeError, msgbuf);
+            return 0;
+        }
+        else if (min == 1 && max == 1) {
+            if (args == NULL) {
+                PyOS_snprintf(msgbuf, sizeof(msgbuf),
+                              "%.200s%s takes at least one argument",
+                              fname==NULL ? "function" : fname,
+                              fname==NULL ? "" : "()");
+                PyErr_SetString(PyExc_TypeError, msgbuf);
+                return 0;
+            }
+            msg = convertitem(args, &format, p_va, flags, levels,
+                              msgbuf, sizeof(msgbuf), &freelist);
+            if (msg == NULL)
+                return cleanreturn(1, freelist);
+            seterror(levels[0], msg, levels+1, fname, message);
+            return cleanreturn(0, freelist);
+        }
+        else {
+            PyErr_SetString(PyExc_SystemError,
+                "old style getargs format uses new features");
+            return 0;
+        }
+    }
+
+    if (!PyTuple_Check(args)) {
+        PyErr_SetString(PyExc_SystemError,
+            "new style getargs format but argument is not a tuple");
+        return 0;
+    }
+
+    len = PyTuple_GET_SIZE(args);
+
+    if (len < min || max < len) {
+        if (message == NULL) {
+            PyOS_snprintf(msgbuf, sizeof(msgbuf),
+                          "%.150s%s takes %s %d argument%s "
+                          "(%ld given)",
+                          fname==NULL ? "function" : fname,
+                          fname==NULL ? "" : "()",
+                          min==max ? "exactly"
+                          : len < min ? "at least" : "at most",
+                          len < min ? min : max,
+                          (len < min ? min : max) == 1 ? "" : "s",
+                          Py_SAFE_DOWNCAST(len, Py_ssize_t, long));
+            message = msgbuf;
+        }
+        PyErr_SetString(PyExc_TypeError, message);
+        return 0;
+    }
+
+    for (i = 0; i < len; i++) {
+        if (*format == '|')
+            format++;
+        msg = convertitem(PyTuple_GET_ITEM(args, i), &format, p_va,
+                          flags, levels, msgbuf,
+                          sizeof(msgbuf), &freelist);
+        if (msg) {
+            seterror(i+1, msg, levels, fname, msg);
+            return cleanreturn(0, freelist);
+        }
+    }
+
+    if (*format != '\0' && !isalpha(Py_CHARMASK(*format)) &&
+        *format != '(' &&
+        *format != '|' && *format != ':' && *format != ';') {
+        PyErr_Format(PyExc_SystemError,
+                     "bad format string: %.200s", formatsave);
+        return cleanreturn(0, freelist);
+    }
+
+    return cleanreturn(1, freelist);
 }
 
 
@@ -358,37 +357,37 @@
 seterror(int iarg, const char *msg, int *levels, const char *fname,
          const char *message)
 {
-    char buf[512];
-    int i;
-    char *p = buf;
+    char buf[512];
+    int i;
+    char *p = buf;
 
-    if (PyErr_Occurred())
-        return;
-    else if (message == NULL) {
-        if (fname != NULL) {
-            PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname);
-            p += strlen(p);
-        }
-        if (iarg != 0) {
-            PyOS_snprintf(p, sizeof(buf) - (p - buf),
-                          "argument %d", iarg);
-            i = 0;
-            p += strlen(p);
-            while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) {
-                PyOS_snprintf(p, sizeof(buf) - (p - buf),
-                              ", item %d", levels[i]-1);
-                p += strlen(p);
-                i++;
-            }
-        }
-        else {
-            PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument");
-            p += strlen(p);
-        }
-        PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg);
-        message = buf;
-    }
-    PyErr_SetString(PyExc_TypeError, message);
+    if (PyErr_Occurred())
+        return;
+    else if (message == NULL) {
+        if (fname != NULL) {
+            PyOS_snprintf(p, sizeof(buf), "%.200s() ", fname);
+            p += strlen(p);
+        }
+        if (iarg != 0) {
+            PyOS_snprintf(p, sizeof(buf) - (p - buf),
+                          "argument %d", iarg);
+            i = 0;
+            p += strlen(p);
+            while (levels[i] > 0 && i < 32 && (int)(p-buf) < 220) {
+                PyOS_snprintf(p, sizeof(buf) - (p - buf),
+                              ", item %d", levels[i]-1);
+                p += strlen(p);
+                i++;
+            }
+        }
+        else {
+            PyOS_snprintf(p, sizeof(buf) - (p - buf), "argument");
+            p += strlen(p);
+        }
+        PyOS_snprintf(p, sizeof(buf) - (p - buf), " %.256s", msg);
+        message = buf;
+    }
+    PyErr_SetString(PyExc_TypeError, message);
 }
 
 
@@ -404,85 +403,84 @@
       *p_va is undefined,
       *levels is a 0-terminated list of item numbers,
       *msgbuf contains an error message, whose format is:
-     "must be <typename1>, not <typename2>", where:
-        <typename1> is the name of the expected type, and
-        <typename2> is the name of the actual type,
+     "must be <typename1>, not <typename2>", where:
+        <typename1> is the name of the expected type, and
+        <typename2> is the name of the actual type,
       and msgbuf is returned.
 */
 
 static char *
 converttuple(PyObject *arg, const char **p_format, va_list *p_va, int flags,
-             int *levels, char *msgbuf, size_t bufsize, int toplevel,
-             freelist_t *freelist)
+             int *levels, char *msgbuf, size_t bufsize, int toplevel,
+             PyObject **freelist)
 {
-    int level = 0;
-    int n = 0;
-    const char *format = *p_format;
-    int i;
-
-    for (;;) {
-        int c = *format++;
-        if (c == '(') {
-            if (level == 0)
-                n++;
-            level++;
-        }
-        else if (c == ')') {
-            if (level == 0)
-                break;
-            level--;
-        }
-        else if (c == ':' || c == ';' || c == '\0')
-            break;
-        else if (level == 0 && isalpha(Py_CHARMASK(c)))
-            n++;
-    }
-
-    if (!PySequence_Check(arg) || PyString_Check(arg)) {
-        levels[0] = 0;
-        PyOS_snprintf(msgbuf, bufsize,
-                      toplevel ? "expected %d arguments, not %.50s" :
-                      "must be %d-item sequence, not %.50s",
-                      n,
-                      arg == Py_None ? "None" : arg->ob_type->tp_name);
-        return msgbuf;
-    }
-
-    if ((i = PySequence_Size(arg)) != n) {
-        levels[0] = 0;
-        PyOS_snprintf(msgbuf, bufsize,
-                      toplevel ? "expected %d arguments, not %d" :
-                      "must be sequence of length %d, not %d",
-                      n, i);
-        return msgbuf;
-    }
+    int level = 0;
+    int n = 0;
+    const char *format = *p_format;
+    int i;
 
-    format = *p_format;
-    for (i = 0; i < n; i++) {
-        char *msg;
-        PyObject *item;
+    for (;;) {
+        int c = *format++;
+        if (c == '(') {
+            if (level == 0)
+                n++;
+            level++;
+        }
+        else if (c == ')') {
+            if (level == 0)
+                break;
+            level--;
+        }
+        else if (c == ':' || c == ';' || c == '\0')
+            break;
+        else if (level == 0 && isalpha(Py_CHARMASK(c)))
+            n++;
+    }
+
+    if (!PySequence_Check(arg) || PyString_Check(arg)) {
+        levels[0] = 0;
+        PyOS_snprintf(msgbuf, bufsize,
+                      toplevel ? "expected %d arguments, not %.50s" :
+                      "must be %d-item sequence, not %.50s",
+                      n,
+                      arg == Py_None ? "None" : arg->ob_type->tp_name);
+        return msgbuf;
+    }
+
+    if ((i = PySequence_Size(arg)) != n) {
+        levels[0] = 0;
+        PyOS_snprintf(msgbuf, bufsize,
+                      toplevel ? "expected %d arguments, not %d" :
+                      "must be sequence of length %d, not %d",
+                      n, i);
+        return msgbuf;
+    }
+
+    format = *p_format;
+    for (i = 0; i < n; i++) {
+        char *msg;
+        PyObject *item;
         item = PySequence_GetItem(arg, i);
-        if (item == NULL) {
-            PyErr_Clear();
-            levels[0] = i+1;
-            levels[1] = 0;
-            strncpy(msgbuf, "is not retrievable",
-                    bufsize);
-            return msgbuf;
-        }
-        PyPy_Borrow(arg, item);
-        msg = convertitem(item, &format, p_va, flags, levels+1,
-                          msgbuf, bufsize, freelist);
+        if (item == NULL) {
+            PyErr_Clear();
+            levels[0] = i+1;
+            levels[1] = 0;
+            strncpy(msgbuf, "is not retrievable", bufsize);
+            return msgbuf;
+        }
+        PyPy_Borrow(arg, item);
+        msg = convertitem(item, &format, p_va, flags, levels+1,
+                          msgbuf, bufsize, freelist);
         /* PySequence_GetItem calls tp->sq_item, which INCREFs */
         Py_XDECREF(item);
-        if (msg != NULL) {
-            levels[0] = i+1;
-            return msg;
-        }
-    }
+        if (msg != NULL) {
+            levels[0] = i+1;
+            return msg;
+        }
+    }
 
-    *p_format = format;
-    return NULL;
+    *p_format = format;
+    return NULL;
 }
 
 
@@ -490,45 +488,45 @@
 
 static char *
 convertitem(PyObject *arg, const char **p_format, va_list *p_va, int flags,
-            int *levels, char *msgbuf, size_t bufsize, freelist_t *freelist)
+            int *levels, char *msgbuf, size_t bufsize, PyObject **freelist)
 {
-    char *msg;
-    const char *format = *p_format;
-
-    if (*format == '(' /* ')' */) {
-        format++;
-        msg = converttuple(arg, &format, p_va, flags, levels, msgbuf,
-                           bufsize, 0, freelist);
-        if (msg == NULL)
-            format++;
-    }
-    else {
-        msg = convertsimple(arg, &format, p_va, flags,
-                            msgbuf, bufsize, freelist);
-        if (msg != NULL)
-            levels[0] = 0;
-    }
-    if (msg == NULL)
-        *p_format = format;
-    return msg;
+    char *msg;
+    const char *format = *p_format;
+
+    if (*format == '(' /* ')' */) {
+        format++;
+        msg = converttuple(arg, &format, p_va, flags, levels, msgbuf,
+                           bufsize, 0, freelist);
+        if (msg == NULL)
+            format++;
+    }
+    else {
+        msg = convertsimple(arg, &format, p_va, flags,
+                            msgbuf, bufsize, freelist);
+        if (msg != NULL)
+            levels[0] = 0;
+    }
+    if (msg == NULL)
+        *p_format = format;
+    return msg;
 }
 
 
 #define UNICODE_DEFAULT_ENCODING(arg) \
-    _PyUnicode_AsDefaultEncodedString(arg, NULL)
+    _PyUnicode_AsDefaultEncodedString(arg, NULL)
 
 
 /* Format an error message generated by convertsimple(). */
 
 static char *
 converterr(const char *expected, PyObject *arg, char *msgbuf, size_t bufsize)
 {
-    assert(expected != NULL);
-    assert(arg != NULL);
-    PyOS_snprintf(msgbuf, bufsize,
-                  "must be %.50s, not %.50s", expected,
-                  arg == Py_None ? "None" : arg->ob_type->tp_name);
-    return msgbuf;
+    assert(expected != NULL);
+    assert(arg != NULL);
+    PyOS_snprintf(msgbuf, bufsize,
+                  "must be %.50s, not %.50s", expected,
+                  arg == Py_None ? "None" : arg->ob_type->tp_name);
+    return msgbuf;
 }
 
 #define CONV_UNICODE "(unicode conversion error)"
 
@@ -536,14 +534,28 @@
 /* explicitly check for float arguments when integers are expected.  For now
  * signal a warning.  Returns true if an exception was raised. */
 static int
+float_argument_warning(PyObject *arg)
+{
+    if (PyFloat_Check(arg) &&
+        PyErr_Warn(PyExc_DeprecationWarning,
+                   "integer argument expected, got float" ))
+        return 1;
+    else
+        return 0;
+}
+
+/* explicitly check for float arguments when integers are expected.  Raises
+   TypeError and returns true for float arguments. */
+static int
 float_argument_error(PyObject *arg)
 {
-    if (PyFloat_Check(arg) &&
-        PyErr_Warn(PyExc_DeprecationWarning,
-                   "integer argument expected, got float" ))
-        return 1;
-    else
-        return 0;
+    if (PyFloat_Check(arg)) {
+        PyErr_SetString(PyExc_TypeError,
+                        "integer argument expected, got float");
+        return 1;
+    }
+    else
+        return 0;
 }
 
 /* Convert a non-tuple argument.  Return NULL if conversion went OK,
 
@@ -557,836 +569,839 @@
 
 static char *
 convertsimple(PyObject *arg, const char **p_format, va_list *p_va, int flags,
-              char *msgbuf, size_t bufsize, freelist_t *freelist)
+              char *msgbuf, size_t bufsize, PyObject **freelist)
 {
-    /* For # codes */
-#define FETCH_SIZE      int *q=NULL;Py_ssize_t *q2=NULL;\
-    if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \
-    else q=va_arg(*p_va, int*);
-#define STORE_SIZE(s)   if (flags & FLAG_SIZE_T) *q2=s; else *q=s;
+    /* For # codes */
+#define FETCH_SIZE      int *q=NULL;Py_ssize_t *q2=NULL;\
+    if (flags & FLAG_SIZE_T) q2=va_arg(*p_va, Py_ssize_t*); \
+    else q=va_arg(*p_va, int*);
+#define STORE_SIZE(s)   \
+    if (flags & FLAG_SIZE_T) \
+        *q2=s; \
+    else { \
+        if (INT_MAX < s) { \
+            PyErr_SetString(PyExc_OverflowError, \
+                "size does not fit in an int"); \
+            return converterr("", arg, msgbuf, bufsize); \
+        } \
+        *q=s; \
+    }
 #define BUFFER_LEN      ((flags & FLAG_SIZE_T) ? *q2:*q)
 
-    const char *format = *p_format;
-    char c = *format++;
+    const char *format = *p_format;
+    char c = *format++;
 #ifdef Py_USING_UNICODE
-    PyObject *uarg;
-#endif
-
-    switch (c) {
-
-    case 'b': { /* unsigned byte -- very short int */
-        char *p = va_arg(*p_va, char *);
-        long ival;
-        if (float_argument_error(arg))
-            return converterr("integer", arg, msgbuf, bufsize);
-        ival = PyInt_AsLong(arg);
-        if (ival == -1 && PyErr_Occurred())
-            return converterr("integer", arg, msgbuf, bufsize);
-        else if (ival < 0) {
-            PyErr_SetString(PyExc_OverflowError,
-                            "unsigned byte integer is less than minimum");
-            return converterr("integer", arg, msgbuf, bufsize);
-        }
-        else if (ival > UCHAR_MAX) {
-            PyErr_SetString(PyExc_OverflowError,
-                            "unsigned byte integer is greater than maximum");
-            return converterr("integer", arg, msgbuf, bufsize);
-        }
-        else
-            *p = (unsigned char) ival;
-        break;
-    }
-
-    case 'B': {/* byte sized bitfield - both signed and unsigned
-                  values allowed */
-        char *p = va_arg(*p_va, char *);
-        long ival;
-        if (float_argument_error(arg))
-            return converterr("integer", arg, msgbuf, bufsize);
-        ival = PyInt_AsUnsignedLongMask(arg);
-        if (ival == -1 && PyErr_Occurred())
-            return converterr("integer", arg, msgbuf, bufsize);
-        else
-            *p = (unsigned char) ival;
-        break;
-    }
-
-    case 'h': {/* signed short int */
-        short *p = va_arg(*p_va, short *);
-        long ival;
-        if (float_argument_error(arg))
-            return converterr("integer", arg, msgbuf, bufsize);
-        ival = PyInt_AsLong(arg);
-        if (ival == -1 && PyErr_Occurred())
-            return converterr("integer", arg, msgbuf, bufsize);
-        else if (ival < SHRT_MIN) {
-            PyErr_SetString(PyExc_OverflowError,
-                            "signed short integer is less than minimum");
-            return converterr("integer", arg, msgbuf, bufsize);
-        }
-        else if (ival > SHRT_MAX) {
-            PyErr_SetString(PyExc_OverflowError,
-                            "signed short integer is greater than maximum");
-            return converterr("integer", arg, msgbuf, bufsize);
-        }
-        else
-            *p = (short) ival;
-        break;
-    }
-
-    case 'H': { /* short int sized bitfield, both signed and
-                   unsigned allowed */
-        unsigned short *p = va_arg(*p_va, unsigned short *);
-        long ival;
-        if (float_argument_error(arg))
-            return converterr("integer", arg, msgbuf, bufsize);
-        ival = PyInt_AsUnsignedLongMask(arg);
-        if (ival == -1 && PyErr_Occurred())
-            return converterr("integer", arg, msgbuf, bufsize);
-        else
-            *p = (unsigned short) ival;
-        break;
-    }
-    case 'i': {/* signed int */
-        int *p = va_arg(*p_va, int *);
-        long ival;
-        if (float_argument_error(arg))
-            return converterr("integer", arg, msgbuf, bufsize);
-        ival = PyInt_AsLong(arg);
-        if (ival == -1 && PyErr_Occurred())
-            return converterr("integer", arg, msgbuf, bufsize);
-        else if (ival > INT_MAX) {
-            PyErr_SetString(PyExc_OverflowError,
-                            "signed integer is greater than maximum");
-            return converterr("integer", arg, msgbuf, bufsize);
-        }
-        else if (ival < INT_MIN) {
-            PyErr_SetString(PyExc_OverflowError,
-                            "signed integer is less than minimum");
-            return converterr("integer", arg, msgbuf, bufsize);
-        }
-        else
-            *p = ival;
-        break;
-    }
-    case 'I': { /* int sized bitfield, both signed and
-                   unsigned allowed */
-        unsigned int *p = va_arg(*p_va, unsigned int *);
-        unsigned int ival;
-        if (float_argument_error(arg))
-            return converterr("integer", arg, msgbuf, bufsize);
-        ival = (unsigned int)PyInt_AsUnsignedLongMask(arg);
-        if (ival == (unsigned int)-1 && PyErr_Occurred())
-            return converterr("integer", arg, msgbuf, bufsize);
-        else
-            *p = ival;
-        break;
-    }
-    case 'n': /* Py_ssize_t */
-#if SIZEOF_SIZE_T != SIZEOF_LONG
-    {
-        Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *);
-        Py_ssize_t ival;
-        if (float_argument_error(arg))
-            return converterr("integer", arg, msgbuf, bufsize);
-        ival = PyInt_AsSsize_t(arg);
-        if (ival == -1 && PyErr_Occurred())
-            return converterr("integer", arg, msgbuf, bufsize);
-        *p = ival;
-        break;
-    }
-#endif
-    /* Fall through from 'n' to 'l' if Py_ssize_t is int */
-    case 'l': {/* long int */
-        long *p = va_arg(*p_va, long *);
-        long ival;
-        if (float_argument_error(arg))
-            return converterr("integer", arg, msgbuf, bufsize);
-        ival = PyInt_AsLong(arg);
-        if (ival == -1 && PyErr_Occurred())
-            return converterr("integer", arg, msgbuf, bufsize);
-        else
-            *p = ival;
-        break;
-    }
-
-    case 'k': { /* long sized bitfield */
-        unsigned long *p = va_arg(*p_va, unsigned long *);
-        unsigned long ival;
-        if (PyInt_Check(arg))
-            ival = PyInt_AsUnsignedLongMask(arg);
-        else if (PyLong_Check(arg))
-            ival = PyLong_AsUnsignedLongMask(arg);
-        else
-            return converterr("integer", arg, msgbuf, bufsize);
-        *p = ival;
-        break;
-    }
-
-#ifdef HAVE_LONG_LONG
-    case 'L': {/* PY_LONG_LONG */
-        PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * );
-        PY_LONG_LONG ival = PyLong_AsLongLong( arg );
-        if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) {
-            return converterr("long", arg, msgbuf, bufsize);
-        } else {
-            *p = ival;
-        }
-        break;
-    }
-
-    case 'K': { /* long long sized bitfield */
-        unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *);
-        unsigned PY_LONG_LONG ival;
-        if (PyInt_Check(arg))
-            ival = PyInt_AsUnsignedLongMask(arg);
-        else if (PyLong_Check(arg))
-            ival = PyLong_AsUnsignedLongLongMask(arg);
-        else
-            return converterr("integer", arg, msgbuf, bufsize);
-        *p = ival;
-        break;
-    }
-#endif // HAVE_LONG_LONG
-
-    case 'f': {/* float */
-        float *p = va_arg(*p_va, float *);
-        double dval = PyFloat_AsDouble(arg);
-        if (PyErr_Occurred())
-            return converterr("float", arg, msgbuf, bufsize);
-        else
-            *p = (float) dval;
-        break;
-    }
-
-    case 'd': {/* double */
-        double *p = va_arg(*p_va, double *);
-        double dval = PyFloat_AsDouble(arg);
-        if (PyErr_Occurred())
-            return converterr("float", arg, msgbuf, bufsize);
-        else
-            *p = dval;
-        break;
-    }
-
-#ifndef WITHOUT_COMPLEX
-    case 'D': {/* complex double */
-        Py_complex *p = va_arg(*p_va, Py_complex *);
-        Py_complex cval;
-        cval = PyComplex_AsCComplex(arg);
-        if (PyErr_Occurred())
-            return converterr("complex", arg, msgbuf, bufsize);
-        else
-            *p = cval;
-        break;
-    }
-#endif /* WITHOUT_COMPLEX */
-
-    case 'c': {/* char */
-        char *p = va_arg(*p_va, char *);
-        if (PyString_Check(arg) && PyString_Size(arg) == 1)
-            *p = PyString_AS_STRING(arg)[0];
-        else
-            return converterr("char", arg, msgbuf, bufsize);
-        break;
-    }
-    case 's': {/* string */
-        if (*format == '*') {
-            Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
-
-            if (PyString_Check(arg)) {
-                fflush(stdout);
-                PyBuffer_FillInfo(p, arg,
-                                  PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
-                                  1, 0);
-            }
-#ifdef Py_USING_UNICODE
-            else if (PyUnicode_Check(arg)) {
-#if 0
-                uarg = UNICODE_DEFAULT_ENCODING(arg);
-                if (uarg == NULL)
-                    return converterr(CONV_UNICODE,
-                                      arg, msgbuf, bufsize);
-                PyBuffer_FillInfo(p, arg,
-                                  PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
-                                  1, 0);
-#else
-                return converterr("string or buffer", arg, msgbuf, bufsize);
-#endif
-            }
-#endif
-            else { /* any buffer-like object */
-                char *buf;
-                if (getbuffer(arg, p, &buf) < 0)
-                    return converterr(buf, arg, msgbuf, bufsize);
-            }
-            if (addcleanup(p, freelist, cleanup_buffer)) {
-                return converterr(
-                    "(cleanup problem)",
-                    arg, msgbuf, bufsize);
-            }
-            format++;
-        } else if (*format == '#') {
-            void **p = (void **)va_arg(*p_va, char **);
-            FETCH_SIZE;
-
-            if (PyString_Check(arg)) {
-                *p = PyString_AS_STRING(arg);
-                STORE_SIZE(PyString_GET_SIZE(arg));
-            }
-#ifdef Py_USING_UNICODE
-            else if (PyUnicode_Check(arg)) {
-                uarg = UNICODE_DEFAULT_ENCODING(arg);
-                if (uarg == NULL)
-                    return converterr(CONV_UNICODE,
-                                      arg, msgbuf, bufsize);
-                *p = PyString_AS_STRING(uarg);
-                STORE_SIZE(PyString_GET_SIZE(uarg));
-            }
-#endif
-            else { /* any buffer-like object */
-                char *buf;
-                Py_ssize_t count = convertbuffer(arg, p, &buf);
-                if (count < 0)
-                    return converterr(buf, arg, msgbuf, bufsize);
-                STORE_SIZE(count);
-            }
-            format++;
-        } else {
-            char **p = va_arg(*p_va, char **);
-
-            if (PyString_Check(arg))
-                *p = PyString_AS_STRING(arg);
-#ifdef Py_USING_UNICODE
-            else if (PyUnicode_Check(arg)) {
-                uarg = UNICODE_DEFAULT_ENCODING(arg);
-                if (uarg == NULL)
-                    return converterr(CONV_UNICODE,
-                                      arg, msgbuf, bufsize);
-                *p = PyString_AS_STRING(uarg);
-            }
-#endif
-            else
-                return converterr("string", arg, msgbuf, bufsize);
-            if ((Py_ssize_t)strlen(*p) != PyString_Size(arg))
-                return converterr("string without null bytes",
-                                  arg, msgbuf, bufsize);
-        }
-        break;
-    }
-
-    case 'z': {/* string, may be NULL (None) */
-        if (*format == '*') {
-            Py_FatalError("'*' format not supported in PyArg_*\n");
-#if 0
-            Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
-
-            if (arg == Py_None)
-                PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0);
-            else if (PyString_Check(arg)) {
-                PyBuffer_FillInfo(p, arg,
-                                  PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
-                                  1, 0);
-            }
-#ifdef Py_USING_UNICODE
-            else if (PyUnicode_Check(arg)) {
-                uarg = UNICODE_DEFAULT_ENCODING(arg);
-                if (uarg == NULL)
-                    return converterr(CONV_UNICODE,
-                                      arg, msgbuf, bufsize);
-                PyBuffer_FillInfo(p, arg,
-                                  PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
-                                  1, 0);
-            }
-#endif
-            else { /* any buffer-like object */
-                char *buf;
-                if (getbuffer(arg, p, &buf) < 0)
-                    return converterr(buf, arg, msgbuf, bufsize);
-            }
-            if (addcleanup(p, freelist, cleanup_buffer)) {
-                return converterr(
-                    "(cleanup problem)",
-                    arg, msgbuf, bufsize);
-            }
-            format++;
-#endif
-        } else if (*format == '#') { /* any buffer-like object */
-            void **p = (void **)va_arg(*p_va, char **);
-            FETCH_SIZE;
-
-            if (arg == Py_None) {
-                *p = 0;
-                STORE_SIZE(0);
-            }
-            else if (PyString_Check(arg)) {
-                *p = PyString_AS_STRING(arg);
-                STORE_SIZE(PyString_GET_SIZE(arg));
-            }
-#ifdef Py_USING_UNICODE
-            else if (PyUnicode_Check(arg)) {
-                uarg = UNICODE_DEFAULT_ENCODING(arg);
-                if (uarg == NULL)
-                    return converterr(CONV_UNICODE,
-                                      arg, msgbuf, bufsize);
-                *p = PyString_AS_STRING(uarg);
-                STORE_SIZE(PyString_GET_SIZE(uarg));
-            }
-#endif
-            else { /* any buffer-like object */
-                char *buf;
-                Py_ssize_t count = convertbuffer(arg, p, &buf);
-                if (count < 0)
-                    return converterr(buf, arg, msgbuf, bufsize);
-                STORE_SIZE(count);
-            }
-            format++;
-        } else {
-            char **p = va_arg(*p_va, char **);
-
-            if (arg == Py_None)
-                *p = 0;
-            else if (PyString_Check(arg))
-                *p = PyString_AS_STRING(arg);
-#ifdef Py_USING_UNICODE
-            else if (PyUnicode_Check(arg)) {
-                uarg = UNICODE_DEFAULT_ENCODING(arg);
-                if (uarg == NULL)
-                    return converterr(CONV_UNICODE,
-                                      arg, msgbuf, bufsize);
-                *p = PyString_AS_STRING(uarg);
-            }
-#endif
-            else
-                return converterr("string or None",
-                                  arg, msgbuf, bufsize);
-            if (*format == '#') {
-                FETCH_SIZE;
-                assert(0); /* XXX redundant with if-case */
-                if (arg == Py_None)
-                    *q = 0;
-                else
-                    *q = PyString_Size(arg);
-                format++;
-            }
-            else if (*p != NULL &&
-                     (Py_ssize_t)strlen(*p) != PyString_Size(arg))
-                return converterr(
-                    "string without null bytes or None",
-                    arg, msgbuf, bufsize);
-        }
-        break;
-    }
-    case 'e': {/* encoded string */
-        char **buffer;
-        const char *encoding;
-        PyObject *s;
-        Py_ssize_t size;
-        int recode_strings;
-
-        /* Get 'e' parameter: the encoding name */
-        encoding = (const char *)va_arg(*p_va, const char *);
-#ifdef Py_USING_UNICODE
-        if (encoding == NULL)
-            encoding = PyUnicode_GetDefaultEncoding();
+    PyObject *uarg;
 #endif
-        /* Get output buffer parameter:
-           's' (recode all objects via Unicode) or
-           't' (only recode non-string objects)
-        */
-        if (*format == 's')
-            recode_strings = 1;
-        else if (*format == 't')
-            recode_strings = 0;
-        else
-            return converterr(
-                "(unknown parser marker combination)",
-                arg, msgbuf, bufsize);
-        buffer = (char **)va_arg(*p_va, char **);
-        format++;
-        if (buffer == NULL)
-            return converterr("(buffer is NULL)",
-                              arg, msgbuf, bufsize);
-
-        /* Encode object */
-        if (!recode_strings && PyString_Check(arg)) {
-            s = arg;
-            Py_INCREF(s);
-        }
-        else {
+    switch (c) {
+
+    case 'b': { /* unsigned byte -- very short int */
+        char *p = va_arg(*p_va, char *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival < 0) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "unsigned byte integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival > UCHAR_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "unsigned byte integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = (unsigned char) ival;
+        break;
+    }
+
+    case 'B': {/* byte sized bitfield - both signed and unsigned
+                  values allowed */
+        char *p = va_arg(*p_va, char *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsUnsignedLongMask(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = (unsigned char) ival;
+        break;
+    }
+
+    case 'h': {/* signed short int */
+        short *p = va_arg(*p_va, short *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival < SHRT_MIN) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "signed short integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival > SHRT_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "signed short integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = (short) ival;
+        break;
+    }
+
+    case 'H': { /* short int sized bitfield, both signed and
+                   unsigned allowed */
+        unsigned short *p = va_arg(*p_va, unsigned short *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsUnsignedLongMask(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = (unsigned short) ival;
+        break;
+    }
+
+    case 'i': {/* signed int */
+        int *p = va_arg(*p_va, int *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else if (ival > INT_MAX) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "signed integer is greater than maximum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else if (ival < INT_MIN) {
+            PyErr_SetString(PyExc_OverflowError,
+                            "signed integer is less than minimum");
+            return converterr("integer", arg, msgbuf, bufsize);
+        }
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'I': { /* int sized bitfield, both signed and
+                   unsigned allowed */
+        unsigned int *p = va_arg(*p_va, unsigned int *);
+        unsigned int ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = (unsigned int)PyInt_AsUnsignedLongMask(arg);
+        if (ival == (unsigned int)-1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'n': /* Py_ssize_t */
+#if SIZEOF_SIZE_T != SIZEOF_LONG
+    {
+        Py_ssize_t *p = va_arg(*p_va, Py_ssize_t *);
+        Py_ssize_t ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsSsize_t(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+#endif
+    /* Fall through from 'n' to 'l' if Py_ssize_t is int */
+    case 'l': {/* long int */
+        long *p = va_arg(*p_va, long *);
+        long ival;
+        if (float_argument_error(arg))
+            return converterr("integer", arg, msgbuf, bufsize);
+        ival = PyInt_AsLong(arg);
+        if (ival == -1 && PyErr_Occurred())
+            return converterr("integer", arg, msgbuf, bufsize);
+        else
+            *p = ival;
+        break;
+    }
+
+    case 'k': { /* long sized bitfield */
+        unsigned long *p = va_arg(*p_va, unsigned long *);
+        unsigned long ival;
+        if (PyInt_Check(arg))
+            ival = PyInt_AsUnsignedLongMask(arg);
+        else if (PyLong_Check(arg))
+            ival = PyLong_AsUnsignedLongMask(arg);
+        else
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+
+#ifdef HAVE_LONG_LONG
+    case 'L': {/* PY_LONG_LONG */
+        PY_LONG_LONG *p = va_arg( *p_va, PY_LONG_LONG * );
+        PY_LONG_LONG ival;
+        if (float_argument_warning(arg))
+            return converterr("long", arg, msgbuf, bufsize);
+        ival = PyLong_AsLongLong(arg);
+        if (ival == (PY_LONG_LONG)-1 && PyErr_Occurred() ) {
+            return converterr("long", arg, msgbuf, bufsize);
+        } else {
+            *p = ival;
+        }
+        break;
+    }
+
+    case 'K': { /* long long sized bitfield */
+        unsigned PY_LONG_LONG *p = va_arg(*p_va, unsigned PY_LONG_LONG *);
+        unsigned PY_LONG_LONG ival;
+        if (PyInt_Check(arg))
+            ival = PyInt_AsUnsignedLongMask(arg);
+        else if (PyLong_Check(arg))
+            ival = PyLong_AsUnsignedLongLongMask(arg);
+        else
+            return converterr("integer", arg, msgbuf, bufsize);
+        *p = ival;
+        break;
+    }
+#endif
+
+    case 'f': {/* float */
+        float *p = va_arg(*p_va, float *);
+        double dval = PyFloat_AsDouble(arg);
+        if (PyErr_Occurred())
+            return converterr("float", arg, msgbuf, bufsize);
+        else
+            *p = (float) dval;
+        break;
+    }
+
+    case 'd': {/* double */
+        double *p = va_arg(*p_va, double *);
+        double dval = PyFloat_AsDouble(arg);
+        if (PyErr_Occurred())
+            return converterr("float", arg, msgbuf, bufsize);
+        else
+            *p = dval;
+        break;
+    }
+
+#ifndef WITHOUT_COMPLEX
+    case 'D': {/* complex double */
+        Py_complex *p = va_arg(*p_va, Py_complex *);
+        Py_complex cval;
+        cval = PyComplex_AsCComplex(arg);
+        if (PyErr_Occurred())
+            return converterr("complex", arg, msgbuf, bufsize);
+        else
+            *p = cval;
+        break;
+    }
+#endif /* WITHOUT_COMPLEX */
+
+    case 'c': {/* char */
+        char *p = va_arg(*p_va, char *);
+        if (PyString_Check(arg) && PyString_Size(arg) == 1)
+            *p = PyString_AS_STRING(arg)[0];
+        else
+            return converterr("char", arg, msgbuf, bufsize);
+        break;
+    }
+
+    case 's': {/* string */
+        if (*format == '*') {
+            Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *);
+
+            if (PyString_Check(arg)) {
+                PyBuffer_FillInfo(p, arg,
+                                  PyString_AS_STRING(arg), PyString_GET_SIZE(arg),
+                                  1, 0);
+            }
 #ifdef Py_USING_UNICODE
-            PyObject *u;
+            else if (PyUnicode_Check(arg)) {
+                uarg = UNICODE_DEFAULT_ENCODING(arg);
+                if (uarg == NULL)
+                    return converterr(CONV_UNICODE,
+                                      arg, msgbuf, bufsize);
+                PyBuffer_FillInfo(p, arg,
+                                  PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg),
+                                  1, 0);
+            }
+#endif
+            else { /* any buffer-like object */
+                char *buf;
+                if (getbuffer(arg, p, &buf) < 0)
+                    return converterr(buf, arg, msgbuf, bufsize);
+            }
+            if (addcleanup(p, freelist, cleanup_buffer)) {
+                return converterr(
+                    "(cleanup problem)",
+                    arg, msgbuf, bufsize);
+            }
+            format++;
+        } else if (*format == '#') {
+            void **p = (void **)va_arg(*p_va, char **);
+            FETCH_SIZE;
 
-        /* Convert object to Unicode */
-        u = PyUnicode_FromObject(arg);
-        if (u == NULL)
-            return converterr(
-                "string or unicode or text buffer",
-                arg, msgbuf, bufsize);
-
-        /* Encode object; use default error handling */
-        s =
PyUnicode_AsEncodedString(u, - encoding, - NULL); - Py_DECREF(u); - if (s == NULL) - return converterr("(encoding failed)", - arg, msgbuf, bufsize); - if (!PyString_Check(s)) { - Py_DECREF(s); - return converterr( - "(encoder failed to return a string)", - arg, msgbuf, bufsize); - } + if (PyString_Check(arg)) { + *p = PyString_AS_STRING(arg); + STORE_SIZE(PyString_GET_SIZE(arg)); + } +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyString_AS_STRING(uarg); + STORE_SIZE(PyString_GET_SIZE(uarg)); + } +#endif + else { /* any buffer-like object */ + char *buf; + Py_ssize_t count = convertbuffer(arg, p, &buf); + if (count < 0) + return converterr(buf, arg, msgbuf, bufsize); + STORE_SIZE(count); + } + format++; + } else { + char **p = va_arg(*p_va, char **); + + if (PyString_Check(arg)) + *p = PyString_AS_STRING(arg); +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyString_AS_STRING(uarg); + } +#endif + else + return converterr("string", arg, msgbuf, bufsize); + if ((Py_ssize_t)strlen(*p) != PyString_Size(arg)) + return converterr("string without null bytes", + arg, msgbuf, bufsize); + } + break; + } + + case 'z': {/* string, may be NULL (None) */ + if (*format == '*') { + Py_buffer *p = (Py_buffer *)va_arg(*p_va, Py_buffer *); + + if (arg == Py_None) + PyBuffer_FillInfo(p, NULL, NULL, 0, 1, 0); + else if (PyString_Check(arg)) { + PyBuffer_FillInfo(p, arg, + PyString_AS_STRING(arg), PyString_GET_SIZE(arg), + 1, 0); + } +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + PyBuffer_FillInfo(p, arg, + PyString_AS_STRING(uarg), PyString_GET_SIZE(uarg), + 1, 0); + } +#endif + 
else { /* any buffer-like object */ + char *buf; + if (getbuffer(arg, p, &buf) < 0) + return converterr(buf, arg, msgbuf, bufsize); + } + if (addcleanup(p, freelist, cleanup_buffer)) { + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + format++; + } else if (*format == '#') { /* any buffer-like object */ + void **p = (void **)va_arg(*p_va, char **); + FETCH_SIZE; + + if (arg == Py_None) { + *p = 0; + STORE_SIZE(0); + } + else if (PyString_Check(arg)) { + *p = PyString_AS_STRING(arg); + STORE_SIZE(PyString_GET_SIZE(arg)); + } +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyString_AS_STRING(uarg); + STORE_SIZE(PyString_GET_SIZE(uarg)); + } +#endif + else { /* any buffer-like object */ + char *buf; + Py_ssize_t count = convertbuffer(arg, p, &buf); + if (count < 0) + return converterr(buf, arg, msgbuf, bufsize); + STORE_SIZE(count); + } + format++; + } else { + char **p = va_arg(*p_va, char **); + + if (arg == Py_None) + *p = 0; + else if (PyString_Check(arg)) + *p = PyString_AS_STRING(arg); +#ifdef Py_USING_UNICODE + else if (PyUnicode_Check(arg)) { + uarg = UNICODE_DEFAULT_ENCODING(arg); + if (uarg == NULL) + return converterr(CONV_UNICODE, + arg, msgbuf, bufsize); + *p = PyString_AS_STRING(uarg); + } +#endif + else + return converterr("string or None", + arg, msgbuf, bufsize); + if (*format == '#') { + FETCH_SIZE; + assert(0); /* XXX redundant with if-case */ + if (arg == Py_None) { + STORE_SIZE(0); + } else { + STORE_SIZE(PyString_Size(arg)); + } + format++; + } + else if (*p != NULL && + (Py_ssize_t)strlen(*p) != PyString_Size(arg)) + return converterr( + "string without null bytes or None", + arg, msgbuf, bufsize); + } + break; + } + + case 'e': {/* encoded string */ + char **buffer; + const char *encoding; + PyObject *s; + Py_ssize_t size; + int recode_strings; + + /* Get 'e' parameter: the encoding 
name */ + encoding = (const char *)va_arg(*p_va, const char *); +#ifdef Py_USING_UNICODE + if (encoding == NULL) + encoding = PyUnicode_GetDefaultEncoding(); +#endif + + /* Get output buffer parameter: + 's' (recode all objects via Unicode) or + 't' (only recode non-string objects) + */ + if (*format == 's') + recode_strings = 1; + else if (*format == 't') + recode_strings = 0; + else + return converterr( + "(unknown parser marker combination)", + arg, msgbuf, bufsize); + buffer = (char **)va_arg(*p_va, char **); + format++; + if (buffer == NULL) + return converterr("(buffer is NULL)", + arg, msgbuf, bufsize); + + /* Encode object */ + if (!recode_strings && PyString_Check(arg)) { + s = arg; + Py_INCREF(s); + } + else { +#ifdef Py_USING_UNICODE + PyObject *u; + + /* Convert object to Unicode */ + u = PyUnicode_FromObject(arg); + if (u == NULL) + return converterr( + "string or unicode or text buffer", + arg, msgbuf, bufsize); + + /* Encode object; use default error handling */ + s = PyUnicode_AsEncodedString(u, + encoding, + NULL); + Py_DECREF(u); + if (s == NULL) + return converterr("(encoding failed)", + arg, msgbuf, bufsize); + if (!PyString_Check(s)) { + Py_DECREF(s); + return converterr( + "(encoder failed to return a string)", + arg, msgbuf, bufsize); + } #else - return converterr("string", arg, msgbuf, bufsize); + return converterr("string", arg, msgbuf, bufsize); #endif - } - size = PyString_GET_SIZE(s); + } + size = PyString_GET_SIZE(s); - /* Write output; output is guaranteed to be 0-terminated */ - if (*format == '#') { - /* Using buffer length parameter '#': - - - if *buffer is NULL, a new buffer of the - needed size is allocated and the data - copied into it; *buffer is updated to point - to the new buffer; the caller is - responsible for PyMem_Free()ing it after - usage + /* Write output; output is guaranteed to be 0-terminated */ + if (*format == '#') { + /* Using buffer length parameter '#': - - if *buffer is not NULL, the data is - copied to 
*buffer; *buffer_len has to be - set to the size of the buffer on input; - buffer overflow is signalled with an error; - buffer has to provide enough room for the - encoded string plus the trailing 0-byte - - - in both cases, *buffer_len is updated to - the size of the buffer /excluding/ the - trailing 0-byte - - */ - FETCH_SIZE; + - if *buffer is NULL, a new buffer of the + needed size is allocated and the data + copied into it; *buffer is updated to point + to the new buffer; the caller is + responsible for PyMem_Free()ing it after + usage - format++; - if (q == NULL && q2 == NULL) { - Py_DECREF(s); - return converterr( - "(buffer_len is NULL)", - arg, msgbuf, bufsize); - } - if (*buffer == NULL) { - *buffer = PyMem_NEW(char, size + 1); - if (*buffer == NULL) { - Py_DECREF(s); - return converterr( - "(memory error)", - arg, msgbuf, bufsize); - } - if (addcleanup(*buffer, freelist, cleanup_ptr)) { - Py_DECREF(s); - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - } else { - if (size + 1 > BUFFER_LEN) { - Py_DECREF(s); - return converterr( - "(buffer overflow)", - arg, msgbuf, bufsize); - } - } - memcpy(*buffer, - PyString_AS_STRING(s), - size + 1); - STORE_SIZE(size); - } else { - /* Using a 0-terminated buffer: - - - the encoded string has to be 0-terminated - for this variant to work; if it is not, an - error raised + - if *buffer is not NULL, the data is + copied to *buffer; *buffer_len has to be + set to the size of the buffer on input; + buffer overflow is signalled with an error; + buffer has to provide enough room for the + encoded string plus the trailing 0-byte - - a new buffer of the needed size is - allocated and the data copied into it; - *buffer is updated to point to the new - buffer; the caller is responsible for - PyMem_Free()ing it after usage + - in both cases, *buffer_len is updated to + the size of the buffer /excluding/ the + trailing 0-byte - */ - if ((Py_ssize_t)strlen(PyString_AS_STRING(s)) - != size) { - 
Py_DECREF(s); - return converterr( - "encoded string without NULL bytes", - arg, msgbuf, bufsize); - } - *buffer = PyMem_NEW(char, size + 1); - if (*buffer == NULL) { - Py_DECREF(s); - return converterr("(memory error)", - arg, msgbuf, bufsize); - } - if (addcleanup(*buffer, freelist, cleanup_ptr)) { - Py_DECREF(s); - return converterr("(cleanup problem)", - arg, msgbuf, bufsize); - } - memcpy(*buffer, - PyString_AS_STRING(s), - size + 1); - } - Py_DECREF(s); - break; - } + */ + FETCH_SIZE; + + format++; + if (q == NULL && q2 == NULL) { + Py_DECREF(s); + return converterr( + "(buffer_len is NULL)", + arg, msgbuf, bufsize); + } + if (*buffer == NULL) { + *buffer = PyMem_NEW(char, size + 1); + if (*buffer == NULL) { + Py_DECREF(s); + return converterr( + "(memory error)", + arg, msgbuf, bufsize); + } + if (addcleanup(*buffer, freelist, cleanup_ptr)) { + Py_DECREF(s); + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + } else { + if (size + 1 > BUFFER_LEN) { + Py_DECREF(s); + return converterr( + "(buffer overflow)", + arg, msgbuf, bufsize); + } + } + memcpy(*buffer, + PyString_AS_STRING(s), + size + 1); + STORE_SIZE(size); + } else { + /* Using a 0-terminated buffer: + + - the encoded string has to be 0-terminated + for this variant to work; if it is not, an + error raised + + - a new buffer of the needed size is + allocated and the data copied into it; + *buffer is updated to point to the new + buffer; the caller is responsible for + PyMem_Free()ing it after usage + + */ + if ((Py_ssize_t)strlen(PyString_AS_STRING(s)) + != size) { + Py_DECREF(s); + return converterr( + "encoded string without NULL bytes", + arg, msgbuf, bufsize); + } + *buffer = PyMem_NEW(char, size + 1); + if (*buffer == NULL) { + Py_DECREF(s); + return converterr("(memory error)", + arg, msgbuf, bufsize); + } + if (addcleanup(*buffer, freelist, cleanup_ptr)) { + Py_DECREF(s); + return converterr("(cleanup problem)", + arg, msgbuf, bufsize); + } + memcpy(*buffer, + 
PyString_AS_STRING(s), + size + 1); + } + Py_DECREF(s); + break; + } #ifdef Py_USING_UNICODE - case 'u': {/* raw unicode buffer (Py_UNICODE *) */ - if (*format == '#') { /* any buffer-like object */ - void **p = (void **)va_arg(*p_va, char **); - FETCH_SIZE; - if (PyUnicode_Check(arg)) { - *p = PyUnicode_AS_UNICODE(arg); - STORE_SIZE(PyUnicode_GET_SIZE(arg)); - } - else { - return converterr("cannot convert raw buffers", - arg, msgbuf, bufsize); - } - format++; - } else { - Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **); - if (PyUnicode_Check(arg)) - *p = PyUnicode_AS_UNICODE(arg); - else - return converterr("unicode", arg, msgbuf, bufsize); - } - break; - } + case 'u': {/* raw unicode buffer (Py_UNICODE *) */ + if (*format == '#') { /* any buffer-like object */ + void **p = (void **)va_arg(*p_va, char **); + FETCH_SIZE; + if (PyUnicode_Check(arg)) { + *p = PyUnicode_AS_UNICODE(arg); + STORE_SIZE(PyUnicode_GET_SIZE(arg)); + } + else { + return converterr("cannot convert raw buffers", + arg, msgbuf, bufsize); + } + format++; + } else { + Py_UNICODE **p = va_arg(*p_va, Py_UNICODE **); + if (PyUnicode_Check(arg)) + *p = PyUnicode_AS_UNICODE(arg); + else + return converterr("unicode", arg, msgbuf, bufsize); + } + break; + } #endif - case 'S': { /* string object */ - PyObject **p = va_arg(*p_va, PyObject **); - if (PyString_Check(arg)) - *p = arg; - else - return converterr("string", arg, msgbuf, bufsize); - break; - } - + case 'S': { /* string object */ + PyObject **p = va_arg(*p_va, PyObject **); + if (PyString_Check(arg)) + *p = arg; + else + return converterr("string", arg, msgbuf, bufsize); + break; + } + #ifdef Py_USING_UNICODE - case 'U': { /* Unicode object */ - PyObject **p = va_arg(*p_va, PyObject **); - if (PyUnicode_Check(arg)) - *p = arg; - else - return converterr("unicode", arg, msgbuf, bufsize); - break; - } + case 'U': { /* Unicode object */ + PyObject **p = va_arg(*p_va, PyObject **); + if (PyUnicode_Check(arg)) + *p = arg; + else + return 
converterr("unicode", arg, msgbuf, bufsize); + break; + } #endif - case 'O': { /* object */ - PyTypeObject *type; - PyObject **p; - if (*format == '!') { - type = va_arg(*p_va, PyTypeObject*); - p = va_arg(*p_va, PyObject **); - format++; - if (PyType_IsSubtype(arg->ob_type, type)) - *p = arg; - else - return converterr(type->tp_name, arg, msgbuf, bufsize); - } - else if (*format == '?') { - inquiry pred = va_arg(*p_va, inquiry); - p = va_arg(*p_va, PyObject **); - format++; - if ((*pred)(arg)) - *p = arg; - else - return converterr("(unspecified)", - arg, msgbuf, bufsize); - - } - else if (*format == '&') { - typedef int (*converter)(PyObject *, void *); - converter convert = va_arg(*p_va, converter); - void *addr = va_arg(*p_va, void *); - format++; - if (! (*convert)(arg, addr)) - return converterr("(unspecified)", - arg, msgbuf, bufsize); - } - else { - p = va_arg(*p_va, PyObject **); - *p = arg; - } - break; - } - - case 'w': { /* memory buffer, read-write access */ - Py_FatalError("'w' unsupported\n"); -#if 0 - void **p = va_arg(*p_va, void **); - void *res; - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - Py_ssize_t count; + case 'O': { /* object */ + PyTypeObject *type; + PyObject **p; + if (*format == '!') { + type = va_arg(*p_va, PyTypeObject*); + p = va_arg(*p_va, PyObject **); + format++; + if (PyType_IsSubtype(arg->ob_type, type)) + *p = arg; + else + return converterr(type->tp_name, arg, msgbuf, bufsize); - if (pb && pb->bf_releasebuffer && *format != '*') - /* Buffer must be released, yet caller does not use - the Py_buffer protocol. */ - return converterr("pinned buffer", arg, msgbuf, bufsize); + } + else if (*format == '?') { + inquiry pred = va_arg(*p_va, inquiry); + p = va_arg(*p_va, PyObject **); + format++; + if ((*pred)(arg)) + *p = arg; + else + return converterr("(unspecified)", + arg, msgbuf, bufsize); - if (pb && pb->bf_getbuffer && *format == '*') { - /* Caller is interested in Py_buffer, and the object - supports it directly. 
*/ - format++; - if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) { - PyErr_Clear(); - return converterr("read-write buffer", arg, msgbuf, bufsize); - } - if (addcleanup(p, freelist, cleanup_buffer)) { - return converterr( - "(cleanup problem)", - arg, msgbuf, bufsize); - } - if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C')) - return converterr("contiguous buffer", arg, msgbuf, bufsize); - break; - } + } + else if (*format == '&') { + typedef int (*converter)(PyObject *, void *); + converter convert = va_arg(*p_va, converter); + void *addr = va_arg(*p_va, void *); + format++; + if (! (*convert)(arg, addr)) + return converterr("(unspecified)", + arg, msgbuf, bufsize); + } + else { + p = va_arg(*p_va, PyObject **); + *p = arg; + } + break; + } - if (pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL) - return converterr("read-write buffer", arg, msgbuf, bufsize); - if ((*pb->bf_getsegcount)(arg, NULL) != 1) - return converterr("single-segment read-write buffer", - arg, msgbuf, bufsize); - if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0) - return converterr("(unspecified)", arg, msgbuf, bufsize); - if (*format == '*') { - PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0); - format++; - } - else { - *p = res; - if (*format == '#') { - FETCH_SIZE; - STORE_SIZE(count); - format++; - } - } - break; -#endif - } - - case 't': { /* 8-bit character buffer, read-only access */ - char **p = va_arg(*p_va, char **); - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - Py_ssize_t count; -#if 0 - if (*format++ != '#') - return converterr( - "invalid use of 't' format character", - arg, msgbuf, bufsize); -#endif - if (!PyType_HasFeature(arg->ob_type, - Py_TPFLAGS_HAVE_GETCHARBUFFER) -#if 0 - || pb == NULL || pb->bf_getcharbuffer == NULL || - pb->bf_getsegcount == NULL -#endif - ) - return converterr( - "string or read-only character buffer", - arg, msgbuf, bufsize); -#if 0 - if (pb->bf_getsegcount(arg, NULL) != 1) - return converterr( 
- "string or single-segment read-only buffer", - arg, msgbuf, bufsize); + case 'w': { /* memory buffer, read-write access */ + void **p = va_arg(*p_va, void **); + void *res; + PyBufferProcs *pb = arg->ob_type->tp_as_buffer; + Py_ssize_t count; - if (pb->bf_releasebuffer) - return converterr( - "string or pinned buffer", - arg, msgbuf, bufsize); -#endif - count = pb->bf_getcharbuffer(arg, 0, p); -#if 0 - if (count < 0) - return converterr("(unspecified)", arg, msgbuf, bufsize); -#endif - { - FETCH_SIZE; - STORE_SIZE(count); - ++format; - } - break; - } - default: - return converterr("impossible", arg, msgbuf, bufsize); - - } - - *p_format = format; - return NULL; + if (pb && pb->bf_releasebuffer && *format != '*') + /* Buffer must be released, yet caller does not use + the Py_buffer protocol. */ + return converterr("pinned buffer", arg, msgbuf, bufsize); + + if (pb && pb->bf_getbuffer && *format == '*') { + /* Caller is interested in Py_buffer, and the object + supports it directly. */ + format++; + if (pb->bf_getbuffer(arg, (Py_buffer*)p, PyBUF_WRITABLE) < 0) { + PyErr_Clear(); + return converterr("read-write buffer", arg, msgbuf, bufsize); + } + if (addcleanup(p, freelist, cleanup_buffer)) { + return converterr( + "(cleanup problem)", + arg, msgbuf, bufsize); + } + if (!PyBuffer_IsContiguous((Py_buffer*)p, 'C')) + return converterr("contiguous buffer", arg, msgbuf, bufsize); + break; + } + + if (pb == NULL || + pb->bf_getwritebuffer == NULL || + pb->bf_getsegcount == NULL) + return converterr("read-write buffer", arg, msgbuf, bufsize); + if ((*pb->bf_getsegcount)(arg, NULL) != 1) + return converterr("single-segment read-write buffer", + arg, msgbuf, bufsize); + if ((count = pb->bf_getwritebuffer(arg, 0, &res)) < 0) + return converterr("(unspecified)", arg, msgbuf, bufsize); + if (*format == '*') { + PyBuffer_FillInfo((Py_buffer*)p, arg, res, count, 1, 0); + format++; + } + else { + *p = res; + if (*format == '#') { + FETCH_SIZE; + STORE_SIZE(count); + format++; + 
} + } + break; + } + + case 't': { /* 8-bit character buffer, read-only access */ + char **p = va_arg(*p_va, char **); + PyBufferProcs *pb = arg->ob_type->tp_as_buffer; + Py_ssize_t count; + + if (*format++ != '#') + return converterr( + "invalid use of 't' format character", + arg, msgbuf, bufsize); + if (!PyType_HasFeature(arg->ob_type, + Py_TPFLAGS_HAVE_GETCHARBUFFER) || + pb == NULL || pb->bf_getcharbuffer == NULL || + pb->bf_getsegcount == NULL) + return converterr( + "string or read-only character buffer", + arg, msgbuf, bufsize); + + if (pb->bf_getsegcount(arg, NULL) != 1) + return converterr( + "string or single-segment read-only buffer", + arg, msgbuf, bufsize); + + if (pb->bf_releasebuffer) + return converterr( + "string or pinned buffer", + arg, msgbuf, bufsize); + + count = pb->bf_getcharbuffer(arg, 0, p); + if (count < 0) + return converterr("(unspecified)", arg, msgbuf, bufsize); + { + FETCH_SIZE; + STORE_SIZE(count); + } + break; + } + + default: + return converterr("impossible", arg, msgbuf, bufsize); + + } + + *p_format = format; + return NULL; } static Py_ssize_t convertbuffer(PyObject *arg, void **p, char **errmsg) { - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - Py_ssize_t count; - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL || - pb->bf_releasebuffer != NULL) { - *errmsg = "string or read-only buffer"; - return -1; - } - if ((*pb->bf_getsegcount)(arg, NULL) != 1) { - *errmsg = "string or single-segment read-only buffer"; - return -1; - } - if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) { - *errmsg = "(unspecified)"; - } - return count; + PyBufferProcs *pb = arg->ob_type->tp_as_buffer; + Py_ssize_t count; + if (pb == NULL || + pb->bf_getreadbuffer == NULL || + pb->bf_getsegcount == NULL || + pb->bf_releasebuffer != NULL) { + *errmsg = "string or read-only buffer"; + return -1; + } + if ((*pb->bf_getsegcount)(arg, NULL) != 1) { + *errmsg = "string or single-segment read-only buffer"; + return -1; + 
} + if ((count = (*pb->bf_getreadbuffer)(arg, 0, p)) < 0) { + *errmsg = "(unspecified)"; + } + return count; } static int getbuffer(PyObject *arg, Py_buffer *view, char **errmsg) { - void *buf; - Py_ssize_t count; - PyBufferProcs *pb = arg->ob_type->tp_as_buffer; - if (pb == NULL) { - *errmsg = "string or buffer"; - return -1; - } - if (pb->bf_getbuffer) { - if (pb->bf_getbuffer(arg, view, 0) < 0) { - *errmsg = "convertible to a buffer"; - return -1; - } - if (!PyBuffer_IsContiguous(view, 'C')) { - *errmsg = "contiguous buffer"; - return -1; - } - return 0; - } + void *buf; + Py_ssize_t count; + PyBufferProcs *pb = arg->ob_type->tp_as_buffer; + if (pb == NULL) { + *errmsg = "string or buffer"; + return -1; + } + if (pb->bf_getbuffer) { + if (pb->bf_getbuffer(arg, view, 0) < 0) { + *errmsg = "convertible to a buffer"; + return -1; + } + if (!PyBuffer_IsContiguous(view, 'C')) { + *errmsg = "contiguous buffer"; + return -1; + } + return 0; + } - count = convertbuffer(arg, &buf, errmsg); - if (count < 0) { - *errmsg = "convertible to a buffer"; - return count; - } - PyBuffer_FillInfo(view, NULL, buf, count, 1, 0); - return 0; + count = convertbuffer(arg, &buf, errmsg); + if (count < 0) { + *errmsg = "convertible to a buffer"; + return count; + } + PyBuffer_FillInfo(view, arg, buf, count, 1, 0); + return 0; } /* Support for keyword arguments donated by @@ -1395,501 +1410,487 @@ /* Return false (0) for error, else true. */ int PyArg_ParseTupleAndKeywords(PyObject *args, - PyObject *keywords, - const char *format, - char **kwlist, ...) + PyObject *keywords, + const char *format, + char **kwlist, ...) 
{ - int retval; - va_list va; + int retval; + va_list va; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } - va_start(va, kwlist); - retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0); - va_end(va); - return retval; + va_start(va, kwlist); + retval = vgetargskeywords(args, keywords, format, kwlist, &va, 0); + va_end(va); + return retval; } int _PyArg_ParseTupleAndKeywords_SizeT(PyObject *args, - PyObject *keywords, - const char *format, - char **kwlist, ...) + PyObject *keywords, + const char *format, + char **kwlist, ...) { - int retval; - va_list va; + int retval; + va_list va; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } - va_start(va, kwlist); - retval = vgetargskeywords(args, keywords, format, - kwlist, &va, FLAG_SIZE_T); - va_end(va); - return retval; + va_start(va, kwlist); + retval = vgetargskeywords(args, keywords, format, + kwlist, &va, FLAG_SIZE_T); + va_end(va); + return retval; } int PyArg_VaParseTupleAndKeywords(PyObject *args, PyObject *keywords, - const char *format, + const char *format, char **kwlist, va_list va) { - int retval; - va_list lva; + int retval; + va_list lva; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + 
(keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0); - return retval; + retval = vgetargskeywords(args, keywords, format, kwlist, &lva, 0); + return retval; } int _PyArg_VaParseTupleAndKeywords_SizeT(PyObject *args, - PyObject *keywords, - const char *format, - char **kwlist, va_list va) + PyObject *keywords, + const char *format, + char **kwlist, va_list va) { - int retval; - va_list lva; + int retval; + va_list lva; - if ((args == NULL || !PyTuple_Check(args)) || - (keywords != NULL && !PyDict_Check(keywords)) || - format == NULL || - kwlist == NULL) - { - PyErr_BadInternalCall(); - return 0; - } + if ((args == NULL || !PyTuple_Check(args)) || + (keywords != NULL && !PyDict_Check(keywords)) || + format == NULL || + kwlist == NULL) + { + PyErr_BadInternalCall(); + return 0; + } #ifdef VA_LIST_IS_ARRAY - memcpy(lva, va, sizeof(va_list)); + memcpy(lva, va, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(lva, va); + __va_copy(lva, va); #else - lva = va; + lva = va; #endif #endif - retval = vgetargskeywords(args, keywords, format, - kwlist, &lva, FLAG_SIZE_T); - return retval; + retval = vgetargskeywords(args, keywords, format, + kwlist, &lva, FLAG_SIZE_T); + return retval; } #define IS_END_OF_FORMAT(c) (c == '\0' || c == ';' || c == ':') static int vgetargskeywords(PyObject *args, PyObject *keywords, const char *format, - char **kwlist, va_list *p_va, int flags) + char **kwlist, va_list *p_va, int flags) { - char msgbuf[512]; - int levels[32]; - const char *fname, *msg, *custom_msg, *keyword; - int min = INT_MAX; - int i, len, nargs, nkeywords; - PyObject *current_arg; - freelist_t freelist = {0, NULL}; + char 
msgbuf[512];
+    int levels[32];
+    const char *fname, *msg, *custom_msg, *keyword;
+    int min = INT_MAX;
+    int i, len, nargs, nkeywords;
+    PyObject *freelist = NULL, *current_arg;
+    assert(args != NULL && PyTuple_Check(args));
+    assert(keywords == NULL || PyDict_Check(keywords));
+    assert(format != NULL);
+    assert(kwlist != NULL);
+    assert(p_va != NULL);
-    assert(args != NULL && PyTuple_Check(args));
-    assert(keywords == NULL || PyDict_Check(keywords));
-    assert(format != NULL);
-    assert(kwlist != NULL);
-    assert(p_va != NULL);
+    /* grab the function name or custom error msg first (mutually exclusive) */
+    fname = strchr(format, ':');
+    if (fname) {
+        fname++;
+        custom_msg = NULL;
+    }
+    else {
+        custom_msg = strchr(format,';');
+        if (custom_msg)
+            custom_msg++;
+    }
-    /* grab the function name or custom error msg first (mutually exclusive) */
-    fname = strchr(format, ':');
-    if (fname) {
-        fname++;
-        custom_msg = NULL;
-    }
-    else {
-        custom_msg = strchr(format,';');
-        if (custom_msg)
-            custom_msg++;
-    }
+    /* scan kwlist and get greatest possible nbr of args */
+    for (len=0; kwlist[len]; len++)
+        continue;
-    /* scan kwlist and get greatest possible nbr of args */
-    for (len=0; kwlist[len]; len++)
-        continue;
+    nargs = PyTuple_GET_SIZE(args);
+    nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords);
+    if (nargs + nkeywords > len) {
+        PyErr_Format(PyExc_TypeError, "%s%s takes at most %d "
+                     "argument%s (%d given)",
+                     (fname == NULL) ? "function" : fname,
+                     (fname == NULL) ? "" : "()",
+                     len,
+                     (len == 1) ? "" : "s",
+                     nargs + nkeywords);
+        return 0;
+    }
-    freelist.entries = PyMem_New(freelistentry_t, len);
+    /* convert tuple args and keyword args in same loop, using kwlist to drive process */
+    for (i = 0; i < len; i++) {
+        keyword = kwlist[i];
+        if (*format == '|') {
+            min = i;
+            format++;
+        }
+        if (IS_END_OF_FORMAT(*format)) {
+            PyErr_Format(PyExc_RuntimeError,
+                         "More keyword list entries (%d) than "
+                         "format specifiers (%d)", len, i);
+            return cleanreturn(0, freelist);
+        }
+        current_arg = NULL;
+        if (nkeywords) {
+            current_arg = PyDict_GetItemString(keywords, keyword);
+        }
+        if (current_arg) {
+            --nkeywords;
+            if (i < nargs) {
+                /* arg present in tuple and in dict */
+                PyErr_Format(PyExc_TypeError,
+                             "Argument given by name ('%s') "
+                             "and position (%d)",
+                             keyword, i+1);
+                return cleanreturn(0, freelist);
+            }
+        }
+        else if (nkeywords && PyErr_Occurred())
+            return cleanreturn(0, freelist);
+        else if (i < nargs)
+            current_arg = PyTuple_GET_ITEM(args, i);
-    nargs = PyTuple_GET_SIZE(args);
-    nkeywords = (keywords == NULL) ? 0 : PyDict_Size(keywords);
-    if (nargs + nkeywords > len) {
-        PyErr_Format(PyExc_TypeError, "%s%s takes at most %d "
-                     "argument%s (%d given)",
-                     (fname == NULL) ? "function" : fname,
-                     (fname == NULL) ? "" : "()",
-                     len,
-                     (len == 1) ? "" : "s",
-                     nargs + nkeywords);
-        return cleanreturn(0, &freelist);
-    }
+        if (current_arg) {
+            msg = convertitem(current_arg, &format, p_va, flags,
+                              levels, msgbuf, sizeof(msgbuf), &freelist);
+            if (msg) {
+                seterror(i+1, msg, levels, fname, custom_msg);
+                return cleanreturn(0, freelist);
+            }
+            continue;
+        }
-    /* convert tuple args and keyword args in same loop, using kwlist to drive process */
-    for (i = 0; i < len; i++) {
-        keyword = kwlist[i];
-        if (*format == '|') {
-            min = i;
-            format++;
-        }
-        if (IS_END_OF_FORMAT(*format)) {
-            PyErr_Format(PyExc_RuntimeError,
-                         "More keyword list entries (%d) than "
-                         "format specifiers (%d)", len, i);
-            return cleanreturn(0, &freelist);
-        }
-        current_arg = NULL;
-        if (nkeywords) {
-            current_arg = PyDict_GetItemString(keywords, keyword);
-        }
-        if (current_arg) {
-            --nkeywords;
-            if (i < nargs) {
-                /* arg present in tuple and in dict */
-                PyErr_Format(PyExc_TypeError,
-                             "Argument given by name ('%s') "
-                             "and position (%d)",
-                             keyword, i+1);
-                return cleanreturn(0, &freelist);
-            }
-        }
-        else if (nkeywords && PyErr_Occurred())
-            return cleanreturn(0, &freelist);
-        else if (i < nargs)
-            current_arg = PyTuple_GET_ITEM(args, i);
-
-        if (current_arg) {
-            msg = convertitem(current_arg, &format, p_va, flags,
-                              levels, msgbuf, sizeof(msgbuf), &freelist);
-            if (msg) {
-                seterror(i+1, msg, levels, fname, custom_msg);
-                return cleanreturn(0, &freelist);
-            }
-            continue;
-        }
+        if (i < min) {
+            PyErr_Format(PyExc_TypeError, "Required argument "
+                         "'%s' (pos %d) not found",
+                         keyword, i+1);
+            return cleanreturn(0, freelist);
+        }
+        /* current code reports success when all required args
+         * fulfilled and no keyword args left, with no further
+         * validation. XXX Maybe skip this in debug build ?
+         */
+        if (!nkeywords)
+            return cleanreturn(1, freelist);
-        if (i < min) {
-            PyErr_Format(PyExc_TypeError, "Required argument "
-                         "'%s' (pos %d) not found",
-                         keyword, i+1);
-            return cleanreturn(0, &freelist);
-        }
-        /* current code reports success when all required args
-         * fulfilled and no keyword args left, with no further
-         * validation. XXX Maybe skip this in debug build ?
-         */
-        if (!nkeywords)
-            return cleanreturn(1, &freelist);
+        /* We are into optional args, skip thru to any remaining
+         * keyword args */
+        msg = skipitem(&format, p_va, flags);
+        if (msg) {
+            PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg,
+                         format);
+            return cleanreturn(0, freelist);
+        }
+    }
-        /* We are into optional args, skip thru to any remaining
-         * keyword args */
-        msg = skipitem(&format, p_va, flags);
-        if (msg) {
-            PyErr_Format(PyExc_RuntimeError, "%s: '%s'", msg,
-                         format);
-            return cleanreturn(0, &freelist);
-        }
-    }
+    if (!IS_END_OF_FORMAT(*format) && *format != '|') {
+        PyErr_Format(PyExc_RuntimeError,
+                     "more argument specifiers than keyword list entries "
+                     "(remaining format:'%s')", format);
+        return cleanreturn(0, freelist);
+    }
-    if (!IS_END_OF_FORMAT(*format) && *format != '|') {
-        PyErr_Format(PyExc_RuntimeError,
-                     "more argument specifiers than keyword list entries "
-                     "(remaining format:'%s')", format);
-        return cleanreturn(0, &freelist);
-    }
+    /* make sure there are no extraneous keyword arguments */
+    if (nkeywords > 0) {
+        PyObject *key, *value;
+        Py_ssize_t pos = 0;
+        while (PyDict_Next(keywords, &pos, &key, &value)) {
+            int match = 0;
+            char *ks;
+            if (!PyString_Check(key)) {
+                PyErr_SetString(PyExc_TypeError,
+                                "keywords must be strings");
+                return cleanreturn(0, freelist);
+            }
+            ks = PyString_AsString(key);
+            for (i = 0; i < len; i++) {
+                if (!strcmp(ks, kwlist[i])) {
+                    match = 1;
+                    break;
+                }
+            }
+            if (!match) {
+                PyErr_Format(PyExc_TypeError,
+                             "'%s' is an invalid keyword "
+                             "argument for this function",
+                             ks);
+                return cleanreturn(0, freelist);
+            }
+        }
+    }
-
-    /* make sure there are no extraneous keyword arguments */
-    if (nkeywords > 0) {
-        PyObject *key, *value;
-        Py_ssize_t pos = 0;
-        while (PyDict_Next(keywords, &pos, &key, &value)) {
-            int match = 0;
-            char *ks;
-            if (!PyString_Check(key)) {
-                PyErr_SetString(PyExc_TypeError,
-                                "keywords must be strings");
-                return cleanreturn(0, &freelist);
-            }
-            ks = PyString_AsString(key);
-            for (i = 0; i < len; i++) {
-                if (!strcmp(ks, kwlist[i])) {
-                    match = 1;
-                    break;
-                }
-            }
-            if (!match) {
-                PyErr_Format(PyExc_TypeError,
-                             "'%s' is an invalid keyword "
-                             "argument for this function",
-                             ks);
-                return cleanreturn(0, &freelist);
-            }
-        }
-    }
-
-    return cleanreturn(1, &freelist);
+    return cleanreturn(1, freelist);
 }
 
 static char *
 skipitem(const char **p_format, va_list *p_va, int flags)
 {
-    const char *format = *p_format;
-    char c = *format++;
-
-    switch (c) {
+    const char *format = *p_format;
+    char c = *format++;
-    /* simple codes
-     * The individual types (second arg of va_arg) are irrelevant */
+    switch (c) {
-    case 'b': /* byte -- very short int */
-    case 'B': /* byte as bitfield */
-    case 'h': /* short int */
-    case 'H': /* short int as bitfield */
-    case 'i': /* int */
-    case 'I': /* int sized bitfield */
-    case 'l': /* long int */
-    case 'k': /* long int sized bitfield */
+    /* simple codes
+     * The individual types (second arg of va_arg) are irrelevant */
+
+    case 'b': /* byte -- very short int */
+    case 'B': /* byte as bitfield */
+    case 'h': /* short int */
+    case 'H': /* short int as bitfield */
+    case 'i': /* int */
+    case 'I': /* int sized bitfield */
+    case 'l': /* long int */
+    case 'k': /* long int sized bitfield */
 #ifdef HAVE_LONG_LONG
-    case 'L': /* PY_LONG_LONG */
-    case 'K': /* PY_LONG_LONG sized bitfield */
+    case 'L': /* PY_LONG_LONG */
+    case 'K': /* PY_LONG_LONG sized bitfield */
 #endif
-    case 'f': /* float */
-    case 'd': /* double */
+    case 'f': /* float */
+    case 'd': /* double */
 #ifndef WITHOUT_COMPLEX
-    case 'D': /*
complex double */
 #endif
-    case 'c': /* char */
-    {
-        (void) va_arg(*p_va, void *);
-        break;
-    }
+    case 'c': /* char */
+    {
+        (void) va_arg(*p_va, void *);
+        break;
+    }
-    case 'n': /* Py_ssize_t */
-    {
-        (void) va_arg(*p_va, Py_ssize_t *);
-        break;
-    }
-
-    /* string codes */
-
-    case 'e': /* string with encoding */
-    {
-        (void) va_arg(*p_va, const char *);
-        if (!(*format == 's' || *format == 't'))
-            /* after 'e', only 's' and 't' is allowed */
-            goto err;
-        format++;
-        /* explicit fallthrough to string cases */
-    }
-
-    case 's': /* string */
-    case 'z': /* string or None */
+    case 'n': /* Py_ssize_t */
+    {
+        (void) va_arg(*p_va, Py_ssize_t *);
+        break;
+    }
+
+    /* string codes */
+
+    case 'e': /* string with encoding */
+    {
+        (void) va_arg(*p_va, const char *);
+        if (!(*format == 's' || *format == 't'))
+            /* after 'e', only 's' and 't' is allowed */
+            goto err;
+        format++;
+        /* explicit fallthrough to string cases */
+    }
+
+    case 's': /* string */
+    case 'z': /* string or None */
 #ifdef Py_USING_UNICODE
-    case 'u': /* unicode string */
+    case 'u': /* unicode string */
 #endif
-    case 't': /* buffer, read-only */
-    case 'w': /* buffer, read-write */
-    {
-        (void) va_arg(*p_va, char **);
-        if (*format == '#') {
-            if (flags & FLAG_SIZE_T)
-                (void) va_arg(*p_va, Py_ssize_t *);
-            else
-                (void) va_arg(*p_va, int *);
-            format++;
-        } else if ((c == 's' || c == 'z') && *format == '*') {
-            format++;
-        }
-        break;
-    }
+    case 't': /* buffer, read-only */
+    case 'w': /* buffer, read-write */
+    {
+        (void) va_arg(*p_va, char **);
+        if (*format == '#') {
+            if (flags & FLAG_SIZE_T)
+                (void) va_arg(*p_va, Py_ssize_t *);
+            else
+                (void) va_arg(*p_va, int *);
+            format++;
+        } else if ((c == 's' || c == 'z') && *format == '*') {
+            format++;
+        }
+        break;
+    }
-    /* object codes */
+    /* object codes */
-    case 'S': /* string object */
+    case 'S': /* string object */
 #ifdef Py_USING_UNICODE
-    case 'U': /* unicode string object */
+    case 'U': /* unicode string object */
 #endif
-    {
-        (void) va_arg(*p_va, PyObject **);
-        break;
-    }
-
-    case 'O': /* object */
-    {
-        if (*format == '!') {
-            format++;
-            (void) va_arg(*p_va, PyTypeObject*);
-            (void) va_arg(*p_va, PyObject **);
-        }
-#if 0
-/* I don't know what this is for */
-        else if (*format == '?') {
-            inquiry pred = va_arg(*p_va, inquiry);
-            format++;
-            if ((*pred)(arg)) {
-                (void) va_arg(*p_va, PyObject **);
-            }
-        }
-#endif
-        else if (*format == '&') {
-            typedef int (*converter)(PyObject *, void *);
-            (void) va_arg(*p_va, converter);
-            (void) va_arg(*p_va, void *);
-            format++;
-        }
-        else {
-            (void) va_arg(*p_va, PyObject **);
-        }
-        break;
-    }
+    {
+        (void) va_arg(*p_va, PyObject **);
+        break;
+    }
-    case '(': /* bypass tuple, not handled at all previously */
-    {
-        char *msg;
-        for (;;) {
-            if (*format==')')
-                break;
-            if (IS_END_OF_FORMAT(*format))
-                return "Unmatched left paren in format "
-                       "string";
-            msg = skipitem(&format, p_va, flags);
-            if (msg)
-                return msg;
-        }
-        format++;
-        break;
-    }
+    case 'O': /* object */
+    {
+        if (*format == '!') {
+            format++;
+            (void) va_arg(*p_va, PyTypeObject*);
+            (void) va_arg(*p_va, PyObject **);
+        }
+        else if (*format == '&') {
+            typedef int (*converter)(PyObject *, void *);
+            (void) va_arg(*p_va, converter);
+            (void) va_arg(*p_va, void *);
+            format++;
+        }
+        else {
+            (void) va_arg(*p_va, PyObject **);
+        }
+        break;
+    }
-    case ')':
-        return "Unmatched right paren in format string";
+    case '(': /* bypass tuple, not handled at all previously */
+    {
+        char *msg;
+        for (;;) {
+            if (*format==')')
+                break;
+            if (IS_END_OF_FORMAT(*format))
+                return "Unmatched left paren in format "
+                       "string";
+            msg = skipitem(&format, p_va, flags);
+            if (msg)
+                return msg;
+        }
+        format++;
+        break;
+    }
-    default:
+    case ')':
+        return "Unmatched right paren in format string";
+
+    default:
 err:
-        return "impossible";
-
-    }
+        return "impossible";
-    *p_format = format;
-    return NULL;
+    }
+
+    *p_format = format;
+    return NULL;
 }
 
 int
 PyArg_UnpackTuple(PyObject *args, const char *name, Py_ssize_t min, Py_ssize_t max, ...)
 {
-    Py_ssize_t i, l;
-    PyObject **o;
-    va_list vargs;
+    Py_ssize_t i, l;
+    PyObject **o;
+    va_list vargs;
 
 #ifdef HAVE_STDARG_PROTOTYPES
-    va_start(vargs, max);
+    va_start(vargs, max);
 #else
-    va_start(vargs);
+    va_start(vargs);
 #endif
-    assert(min >= 0);
-    assert(min <= max);
-    if (!PyTuple_Check(args)) {
-        PyErr_SetString(PyExc_SystemError,
-            "PyArg_UnpackTuple() argument list is not a tuple");
-        return 0;
-    }
-    l = PyTuple_GET_SIZE(args);
-    if (l < min) {
-        if (name != NULL)
-            PyErr_Format(
-                PyExc_TypeError,
-                "%s expected %s%zd arguments, got %zd",
-                name, (min == max ? "" : "at least "), min, l);
-        else
-            PyErr_Format(
-                PyExc_TypeError,
-                "unpacked tuple should have %s%zd elements,"
-                " but has %zd",
-                (min == max ? "" : "at least "), min, l);
-        va_end(vargs);
-        return 0;
-    }
-    if (l > max) {
-        if (name != NULL)
-            PyErr_Format(
-                PyExc_TypeError,
-                "%s expected %s%zd arguments, got %zd",
-                name, (min == max ? "" : "at most "), max, l);
-        else
-            PyErr_Format(
-                PyExc_TypeError,
-                "unpacked tuple should have %s%zd elements,"
-                " but has %zd",
-                (min == max ? "" : "at most "), max, l);
-        va_end(vargs);
-        return 0;
-    }
-    for (i = 0; i < l; i++) {
-        o = va_arg(vargs, PyObject **);
-        *o = PyTuple_GET_ITEM(args, i);
-    }
-    va_end(vargs);
-    return 1;
+    assert(min >= 0);
+    assert(min <= max);
+    if (!PyTuple_Check(args)) {
+        PyErr_SetString(PyExc_SystemError,
+            "PyArg_UnpackTuple() argument list is not a tuple");
+        return 0;
+    }
+    l = PyTuple_GET_SIZE(args);
+    if (l < min) {
+        if (name != NULL)
+            PyErr_Format(
+                PyExc_TypeError,
+                "%s expected %s%zd arguments, got %zd",
+                name, (min == max ? "" : "at least "), min, l);
+        else
+            PyErr_Format(
+                PyExc_TypeError,
+                "unpacked tuple should have %s%zd elements,"
+                " but has %zd",
+                (min == max ? "" : "at least "), min, l);
+        va_end(vargs);
+        return 0;
+    }
+    if (l > max) {
+        if (name != NULL)
+            PyErr_Format(
+                PyExc_TypeError,
+                "%s expected %s%zd arguments, got %zd",
+                name, (min == max ? "" : "at most "), max, l);
+        else
+            PyErr_Format(
+                PyExc_TypeError,
+                "unpacked tuple should have %s%zd elements,"
+                " but has %zd",
+                (min == max ? "" : "at most "), max, l);
+        va_end(vargs);
+        return 0;
+    }
+    for (i = 0; i < l; i++) {
+        o = va_arg(vargs, PyObject **);
+        *o = PyTuple_GET_ITEM(args, i);
+    }
+    va_end(vargs);
+    return 1;
 }
 
 /* For type constructors that don't take keyword args
  *
- * Sets a TypeError and returns 0 if the kwds dict is
+ * Sets a TypeError and returns 0 if the kwds dict is
  * not empty, returns 1 otherwise
  */
 int
 _PyArg_NoKeywords(const char *funcname, PyObject *kw)
 {
-    if (kw == NULL)
-        return 1;
-    if (!PyDict_CheckExact(kw)) {
-        PyErr_BadInternalCall();
-        return 0;
-    }
-    if (PyDict_Size(kw) == 0)
-        return 1;
-
-    PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments",
-                 funcname);
-    return 0;
+    if (kw == NULL)
+        return 1;
+    if (!PyDict_CheckExact(kw)) {
+        PyErr_BadInternalCall();
+        return 0;
+    }
+    if (PyDict_Size(kw) == 0)
+        return 1;
+
+    PyErr_Format(PyExc_TypeError, "%s does not take keyword arguments",
+                 funcname);
+    return 0;
 }
 
 #ifdef __cplusplus
 };
diff --git a/pypy/module/cpyext/src/modsupport.c b/pypy/module/cpyext/src/modsupport.c
--- a/pypy/module/cpyext/src/modsupport.c
+++ b/pypy/module/cpyext/src/modsupport.c
@@ -33,41 +33,41 @@
 static int
 countformat(const char *format, int endchar)
 {
-    int count = 0;
-    int level = 0;
-    while (level > 0 || *format != endchar) {
-        switch (*format) {
-        case '\0':
-            /* Premature end */
-            PyErr_SetString(PyExc_SystemError,
-                            "unmatched paren in format");
-            return -1;
-        case '(':
-        case '[':
-        case '{':
-            if (level == 0)
-                count++;
-            level++;
-            break;
-        case ')':
-        case ']':
-        case '}':
-            level--;
-            break;
-        case '#':
-        case '&':
-        case ',':
-        case ':':
-        case ' ':
-        case '\t':
-            break;
-        default:
-            if (level == 0)
-                count++;
-        }
-        format++;
-    }
-    return count;
+    int count = 0;
+    int level = 0;
+    while (level > 0 || *format != endchar) {
+        switch (*format) {
+        case '\0':
+            /*
Premature end */
+            PyErr_SetString(PyExc_SystemError,
+                            "unmatched paren in format");
+            return -1;
+        case '(':
+        case '[':
+        case '{':
+            if (level == 0)
+                count++;
+            level++;
+            break;
+        case ')':
+        case ']':
+        case '}':
+            level--;
+            break;
+        case '#':
+        case '&':
+        case ',':
+        case ':':
+        case ' ':
+        case '\t':
+            break;
+        default:
+            if (level == 0)
+                count++;
+        }
+        format++;
+    }
+    return count;
 }
 
@@ -83,582 +83,435 @@
 static PyObject *
 do_mkdict(const char **p_format, va_list *p_va, int endchar, int n, int flags)
 {
-    PyObject *d;
-    int i;
-    int itemfailed = 0;
-    if (n < 0)
-        return NULL;
-    if ((d = PyDict_New()) == NULL)
-        return NULL;
-    /* Note that we can't bail immediately on error as this will leak
-       refcounts on any 'N' arguments. */
-    for (i = 0; i < n; i+= 2) {
-        PyObject *k, *v;
-        int err;
-        k = do_mkvalue(p_format, p_va, flags);
-        if (k == NULL) {
-            itemfailed = 1;
-            Py_INCREF(Py_None);
-            k = Py_None;
-        }
-        v = do_mkvalue(p_format, p_va, flags);
-        if (v == NULL) {
-            itemfailed = 1;
-            Py_INCREF(Py_None);
-            v = Py_None;
-        }
-        err = PyDict_SetItem(d, k, v);
-        Py_DECREF(k);
-        Py_DECREF(v);
-        if (err < 0 || itemfailed) {
-            Py_DECREF(d);
-            return NULL;
-        }
-    }
-    if (d != NULL && **p_format != endchar) {
-        Py_DECREF(d);
-        d = NULL;
-        PyErr_SetString(PyExc_SystemError,
-                        "Unmatched paren in format");
-    }
-    else if (endchar)
-        ++*p_format;
-    return d;
+    PyObject *d;
+    int i;
+    int itemfailed = 0;
+    if (n < 0)
+        return NULL;
+    if ((d = PyDict_New()) == NULL)
+        return NULL;
+    /* Note that we can't bail immediately on error as this will leak
+       refcounts on any 'N' arguments. */
+    for (i = 0; i < n; i+= 2) {
+        PyObject *k, *v;
+        int err;
+        k = do_mkvalue(p_format, p_va, flags);
+        if (k == NULL) {
+            itemfailed = 1;
+            Py_INCREF(Py_None);
+            k = Py_None;
+        }
+        v = do_mkvalue(p_format, p_va, flags);
+        if (v == NULL) {
+            itemfailed = 1;
+            Py_INCREF(Py_None);
+            v = Py_None;
+        }
+        err = PyDict_SetItem(d, k, v);
+        Py_DECREF(k);
+        Py_DECREF(v);
+        if (err < 0 || itemfailed) {
+            Py_DECREF(d);
+            return NULL;
+        }
+    }
+    if (d != NULL && **p_format != endchar) {
+        Py_DECREF(d);
+        d = NULL;
+        PyErr_SetString(PyExc_SystemError,
+                        "Unmatched paren in format");
+    }
+    else if (endchar)
+        ++*p_format;
+    return d;
 }
 
 static PyObject *
 do_mklist(const char **p_format, va_list *p_va, int endchar, int n, int flags)
 {
-    PyObject *v;
-    int i;
-    int itemfailed = 0;
-    if (n < 0)
-        return NULL;
-    v = PyList_New(n);
-    if (v == NULL)
-        return NULL;
-    /* Note that we can't bail immediately on error as this will leak
-       refcounts on any 'N' arguments. */
-    for (i = 0; i < n; i++) {
-        PyObject *w = do_mkvalue(p_format, p_va, flags);
-        if (w == NULL) {
-            itemfailed = 1;
-            Py_INCREF(Py_None);
-            w = Py_None;
-        }
-        PyList_SET_ITEM(v, i, w);
-    }
+    PyObject *v;
+    int i;
+    int itemfailed = 0;
+    if (n < 0)
+        return NULL;
+    v = PyList_New(n);
+    if (v == NULL)
+        return NULL;
+    /* Note that we can't bail immediately on error as this will leak
+       refcounts on any 'N' arguments. */
+    for (i = 0; i < n; i++) {
+        PyObject *w = do_mkvalue(p_format, p_va, flags);
+        if (w == NULL) {
+            itemfailed = 1;
+            Py_INCREF(Py_None);
+            w = Py_None;
+        }
+        PyList_SET_ITEM(v, i, w);
+    }
 
-    if (itemfailed) {
-        /* do_mkvalue() should have already set an error */
-        Py_DECREF(v);
-        return NULL;
-    }
-    if (**p_format != endchar) {
-        Py_DECREF(v);
-        PyErr_SetString(PyExc_SystemError,
-                        "Unmatched paren in format");
-        return NULL;
-    }
-    if (endchar)
-        ++*p_format;
-    return v;
+    if (itemfailed) {
+        /* do_mkvalue() should have already set an error */
+        Py_DECREF(v);
+        return NULL;
+    }
+    if (**p_format != endchar) {
+        Py_DECREF(v);
+        PyErr_SetString(PyExc_SystemError,
+                        "Unmatched paren in format");
+        return NULL;
+    }
+    if (endchar)
+        ++*p_format;
+    return v;
 }
 
 #ifdef Py_USING_UNICODE
 static int
 _ustrlen(Py_UNICODE *u)
 {
-    int i = 0;
-    Py_UNICODE *v = u;
-    while (*v != 0) { i++; v++; }
-    return i;
+    int i = 0;
+    Py_UNICODE *v = u;
+    while (*v != 0) { i++; v++; }
+    return i;
 }
 #endif
 
 static PyObject *
 do_mktuple(const char **p_format, va_list *p_va, int endchar, int n, int flags)
 {
-    PyObject *v;
-    int i;
-    int itemfailed = 0;
-    if (n < 0)
-        return NULL;
-    if ((v = PyTuple_New(n)) == NULL)
-        return NULL;
-    /* Note that we can't bail immediately on error as this will leak
-       refcounts on any 'N' arguments. */
-    for (i = 0; i < n; i++) {
-        PyObject *w = do_mkvalue(p_format, p_va, flags);
-        if (w == NULL) {
-            itemfailed = 1;
-            Py_INCREF(Py_None);
-            w = Py_None;
-        }
-        PyTuple_SET_ITEM(v, i, w);
-    }
-    if (itemfailed) {
-        /* do_mkvalue() should have already set an error */
-        Py_DECREF(v);
-        return NULL;
-    }
-    if (**p_format != endchar) {
-        Py_DECREF(v);
-        PyErr_SetString(PyExc_SystemError,
-                        "Unmatched paren in format");
-        return NULL;
-    }
-    if (endchar)
-        ++*p_format;
-    return v;
+    PyObject *v;
+    int i;
+    int itemfailed = 0;
+    if (n < 0)
+        return NULL;
+    if ((v = PyTuple_New(n)) == NULL)
+        return NULL;
+    /* Note that we can't bail immediately on error as this will leak
+       refcounts on any 'N' arguments. */
+    for (i = 0; i < n; i++) {
+        PyObject *w = do_mkvalue(p_format, p_va, flags);
+        if (w == NULL) {
+            itemfailed = 1;
+            Py_INCREF(Py_None);
+            w = Py_None;
+        }
+        PyTuple_SET_ITEM(v, i, w);
+    }
+    if (itemfailed) {
+        /* do_mkvalue() should have already set an error */
+        Py_DECREF(v);
+        return NULL;
+    }
+    if (**p_format != endchar) {
+        Py_DECREF(v);
+        PyErr_SetString(PyExc_SystemError,
+                        "Unmatched paren in format");
+        return NULL;
+    }
+    if (endchar)
+        ++*p_format;
+    return v;
 }
 
 static PyObject *
 do_mkvalue(const char **p_format, va_list *p_va, int flags)
 {
-    for (;;) {
-        switch (*(*p_format)++) {
-        case '(':
-            return do_mktuple(p_format, p_va, ')',
-                              countformat(*p_format, ')'), flags);
+    for (;;) {
+        switch (*(*p_format)++) {
+        case '(':
+            return do_mktuple(p_format, p_va, ')',
+                              countformat(*p_format, ')'), flags);
-        case '[':
-            return do_mklist(p_format, p_va, ']',
-                             countformat(*p_format, ']'), flags);
+        case '[':
+            return do_mklist(p_format, p_va, ']',
+                             countformat(*p_format, ']'), flags);
-        case '{':
-            return do_mkdict(p_format, p_va, '}',
-                             countformat(*p_format, '}'), flags);
+        case '{':
+            return do_mkdict(p_format, p_va, '}',
+                             countformat(*p_format, '}'), flags);
-        case 'b':
-        case 'B':
-        case 'h':
-        case 'i':
-            return PyInt_FromLong((long)va_arg(*p_va, int));
-
-
case 'H':
-            return PyInt_FromLong((long)va_arg(*p_va, unsigned int));
+        case 'b':
+        case 'B':
+        case 'h':
+        case 'i':
+            return PyInt_FromLong((long)va_arg(*p_va, int));
-        case 'I':
-        {
-            unsigned int n;
-            n = va_arg(*p_va, unsigned int);
-            if (n > (unsigned long)PyInt_GetMax())
-                return PyLong_FromUnsignedLong((unsigned long)n);
-            else
-                return PyInt_FromLong(n);
-        }
-
-        case 'n':
+        case 'H':
+            return PyInt_FromLong((long)va_arg(*p_va, unsigned int));
+
+        case 'I':
+        {
+            unsigned int n;
+            n = va_arg(*p_va, unsigned int);
+            if (n > (unsigned long)PyInt_GetMax())
+                return PyLong_FromUnsignedLong((unsigned long)n);
+            else
+                return PyInt_FromLong(n);
+        }
+
+        case 'n':
 #if SIZEOF_SIZE_T!=SIZEOF_LONG
-            return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t));
+            return PyInt_FromSsize_t(va_arg(*p_va, Py_ssize_t));
 #endif
-            /* Fall through from 'n' to 'l' if Py_ssize_t is long */
-        case 'l':
-            return PyInt_FromLong(va_arg(*p_va, long));
+            /* Fall through from 'n' to 'l' if Py_ssize_t is long */
+        case 'l':
+            return PyInt_FromLong(va_arg(*p_va, long));
-        case 'k':
-        {
-            unsigned long n;
-            n = va_arg(*p_va, unsigned long);
-            if (n > (unsigned long)PyInt_GetMax())
-                return PyLong_FromUnsignedLong(n);
-            else
-                return PyInt_FromLong(n);
-        }
+        case 'k':
+        {
+            unsigned long n;
+            n = va_arg(*p_va, unsigned long);
+            if (n > (unsigned long)PyInt_GetMax())
+                return PyLong_FromUnsignedLong(n);
+            else
+                return PyInt_FromLong(n);
+        }
 
 #ifdef HAVE_LONG_LONG
-        case 'L':
-            return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG));
+        case 'L':
+            return PyLong_FromLongLong((PY_LONG_LONG)va_arg(*p_va, PY_LONG_LONG));
-        case 'K':
-            return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG));
+        case 'K':
+            return PyLong_FromUnsignedLongLong((PY_LONG_LONG)va_arg(*p_va, unsigned PY_LONG_LONG));
 #endif
 
 #ifdef Py_USING_UNICODE
-        case 'u':
-        {
-            PyObject *v;
-            Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *);
-            Py_ssize_t n;
-            if (**p_format == '#') {
-                ++*p_format;
-                if (flags & FLAG_SIZE_T)
-                    n = va_arg(*p_va, Py_ssize_t);
-                else
-                    n = va_arg(*p_va, int);
-            }
-            else
-                n = -1;
-            if (u == NULL) {
-                v = Py_None;
-                Py_INCREF(v);
-            }
-            else {
-                if (n < 0)
-                    n = _ustrlen(u);
-                v = PyUnicode_FromUnicode(u, n);
-            }
-            return v;
-        }
+        case 'u':
+        {
+            PyObject *v;
+            Py_UNICODE *u = va_arg(*p_va, Py_UNICODE *);
+            Py_ssize_t n;
+            if (**p_format == '#') {
+                ++*p_format;
+                if (flags & FLAG_SIZE_T)
+                    n = va_arg(*p_va, Py_ssize_t);
+                else
+                    n = va_arg(*p_va, int);
+            }
+            else
+                n = -1;
+            if (u == NULL) {
+                v = Py_None;
+                Py_INCREF(v);
+            }
+            else {
+                if (n < 0)
+                    n = _ustrlen(u);
+                v = PyUnicode_FromUnicode(u, n);
+            }
+            return v;
+        }
 #endif
-        case 'f':
-        case 'd':
-            return PyFloat_FromDouble(
-                (double)va_arg(*p_va, va_double));
+        case 'f':
+        case 'd':
+            return PyFloat_FromDouble(
+                (double)va_arg(*p_va, va_double));
 
 #ifndef WITHOUT_COMPLEX
-        case 'D':
-            return PyComplex_FromCComplex(
-                *((Py_complex *)va_arg(*p_va, Py_complex *)));
+        case 'D':
+            return PyComplex_FromCComplex(
+                *((Py_complex *)va_arg(*p_va, Py_complex *)));
 #endif /* WITHOUT_COMPLEX */
 
-        case 'c':
-        {
-            char p[1];
-            p[0] = (char)va_arg(*p_va, int);
-            return PyString_FromStringAndSize(p, 1);
-        }
+        case 'c':
+        {
+            char p[1];
+            p[0] = (char)va_arg(*p_va, int);
+            return PyString_FromStringAndSize(p, 1);
+        }
-        case 's':
-        case 'z':
-        {
-            PyObject *v;
-            char *str = va_arg(*p_va, char *);
-            Py_ssize_t n;
-            if (**p_format == '#') {
-                ++*p_format;
-                if (flags & FLAG_SIZE_T)
-                    n = va_arg(*p_va, Py_ssize_t);
-                else
-                    n = va_arg(*p_va, int);
-            }
-            else
-                n = -1;
-            if (str == NULL) {
-                v = Py_None;
-                Py_INCREF(v);
-            }
-            else {
-                if (n < 0) {
-                    size_t m = strlen(str);
-                    if (m > PY_SSIZE_T_MAX) {
-                        PyErr_SetString(PyExc_OverflowError,
-                            "string too long for Python string");
-                        return NULL;
-                    }
-                    n = (Py_ssize_t)m;
-                }
-                v = PyString_FromStringAndSize(str, n);
-            }
-            return v;
-        }
+        case 's':
+        case 'z':
+        {
+            PyObject *v;
+            char *str = va_arg(*p_va, char *);
+            Py_ssize_t n;
+            if (**p_format == '#') {
+                ++*p_format;
+                if (flags & FLAG_SIZE_T)
+                    n = va_arg(*p_va, Py_ssize_t);
+                else
+                    n = va_arg(*p_va, int);
+            }
+            else
+                n = -1;
+            if (str == NULL) {
+                v = Py_None;
+                Py_INCREF(v);
+            }
+            else {
+                if (n < 0) {
+                    size_t m = strlen(str);
+                    if (m > PY_SSIZE_T_MAX) {
+                        PyErr_SetString(PyExc_OverflowError,
+                            "string too long for Python string");
+                        return NULL;
+                    }
+                    n = (Py_ssize_t)m;
+                }
+                v = PyString_FromStringAndSize(str, n);
+            }
+            return v;
+        }
-        case 'N':
-        case 'S':
-        case 'O':
-            if (**p_format == '&') {
-                typedef PyObject *(*converter)(void *);
-                converter func = va_arg(*p_va, converter);
-                void *arg = va_arg(*p_va, void *);
-                ++*p_format;
-                return (*func)(arg);
-            }
-            else {
-                PyObject *v;
-                v = va_arg(*p_va, PyObject *);
-                if (v != NULL) {
-                    if (*(*p_format - 1) != 'N')
-                        Py_INCREF(v);
-                }
-                else if (!PyErr_Occurred())
-                    /* If a NULL was passed
-                     * because a call that should
-                     * have constructed a value
-                     * failed, that's OK, and we
-                     * pass the error on; but if
-                     * no error occurred it's not
-                     * clear that the caller knew
-                     * what she was doing. */
-                    PyErr_SetString(PyExc_SystemError,
-                        "NULL object passed to Py_BuildValue");
-                return v;
-            }
+        case 'N':
+        case 'S':
+        case 'O':
+            if (**p_format == '&') {
+                typedef PyObject *(*converter)(void *);
+                converter func = va_arg(*p_va, converter);
+                void *arg = va_arg(*p_va, void *);
+                ++*p_format;
+                return (*func)(arg);
+            }
+            else {
+                PyObject *v;
+                v = va_arg(*p_va, PyObject *);
+                if (v != NULL) {
+                    if (*(*p_format - 1) != 'N')
+                        Py_INCREF(v);
+                }
+                else if (!PyErr_Occurred())
+                    /* If a NULL was passed
+                     * because a call that should
+                     * have constructed a value
+                     * failed, that's OK, and we
+                     * pass the error on; but if
+                     * no error occurred it's not
+                     * clear that the caller knew
+                     * what she was doing. */
+                    PyErr_SetString(PyExc_SystemError,
+                        "NULL object passed to Py_BuildValue");
+                return v;
+            }
-        case ':':
-        case ',':
-        case ' ':
-        case '\t':
-            break;
+        case ':':
+        case ',':
+        case ' ':
+        case '\t':
+            break;
-        default:
-            PyErr_SetString(PyExc_SystemError,
-                "bad format char passed to Py_BuildValue");
-            return NULL;
+        default:
+            PyErr_SetString(PyExc_SystemError,
+                "bad format char passed to Py_BuildValue");
+            return NULL;
-        }
-    }
+        }
+    }
 }
 
 PyObject *
 Py_BuildValue(const char *format, ...)
 {
-    va_list va;
-    PyObject* retval;
-    va_start(va, format);
-    retval = va_build_value(format, va, 0);
-    va_end(va);
-    return retval;
+    va_list va;
+    PyObject* retval;
+    va_start(va, format);
+    retval = va_build_value(format, va, 0);
+    va_end(va);
+    return retval;
 }
 
 PyObject *
 _Py_BuildValue_SizeT(const char *format, ...)
 {
-    va_list va;
-    PyObject* retval;
-    va_start(va, format);
-    retval = va_build_value(format, va, FLAG_SIZE_T);
-    va_end(va);
-    return retval;
+    va_list va;
+    PyObject* retval;
+    va_start(va, format);
+    retval = va_build_value(format, va, FLAG_SIZE_T);
+    va_end(va);
+    return retval;
 }
 
 PyObject *
 Py_VaBuildValue(const char *format, va_list va)
 {
-    return va_build_value(format, va, 0);
+    return va_build_value(format, va, 0);
 }
 
 PyObject *
 _Py_VaBuildValue_SizeT(const char *format, va_list va)
 {
-    return va_build_value(format, va, FLAG_SIZE_T);
+    return va_build_value(format, va, FLAG_SIZE_T);
}
 
 static PyObject *
 va_build_value(const char *format, va_list va, int flags)
 {
-    const char *f = format;
-    int n = countformat(f, '\0');
-    va_list lva;
+    const char *f = format;
+    int n = countformat(f, '\0');
+    va_list lva;
 
 #ifdef VA_LIST_IS_ARRAY
-    memcpy(lva, va, sizeof(va_list));
+    memcpy(lva, va, sizeof(va_list));
 #else
 #ifdef __va_copy
-    __va_copy(lva, va);
+    __va_copy(lva, va);
 #else
-    lva = va;
+    lva = va;
 #endif
 #endif
 
-    if (n < 0)
-        return NULL;
-    if (n == 0) {
-        Py_INCREF(Py_None);
-        return Py_None;
-    }
-    if (n == 1)
-        return do_mkvalue(&f, &lva, flags);
-
return do_mktuple(&f, &lva, '\0', n, flags); + if (n < 0) + return NULL; + if (n == 0) { + Py_INCREF(Py_None); + return Py_None; + } + if (n == 1) + return do_mkvalue(&f, &lva, flags); + return do_mktuple(&f, &lva, '\0', n, flags); } PyObject * PyEval_CallFunction(PyObject *obj, const char *format, ...) { - va_list vargs; - PyObject *args; - PyObject *res; + va_list vargs; + PyObject *args; + PyObject *res; - va_start(vargs, format); + va_start(vargs, format); - args = Py_VaBuildValue(format, vargs); - va_end(vargs); + args = Py_VaBuildValue(format, vargs); + va_end(vargs); - if (args == NULL) - return NULL; + if (args == NULL) + return NULL; - res = PyEval_CallObject(obj, args); - Py_DECREF(args); + res = PyEval_CallObject(obj, args); + Py_DECREF(args); - return res; + return res; } PyObject * PyEval_CallMethod(PyObject *obj, const char *methodname, const char *format, ...) { - va_list vargs; - PyObject *meth; - PyObject *args; - PyObject *res; + va_list vargs; + PyObject *meth; + PyObject *args; + PyObject *res; - meth = PyObject_GetAttrString(obj, methodname); - if (meth == NULL) - return NULL; + meth = PyObject_GetAttrString(obj, methodname); + if (meth == NULL) + return NULL; - va_start(vargs, format); + va_start(vargs, format); - args = Py_VaBuildValue(format, vargs); - va_end(vargs); + args = Py_VaBuildValue(format, vargs); + va_end(vargs); - if (args == NULL) { - Py_DECREF(meth); - return NULL; - } + if (args == NULL) { + Py_DECREF(meth); + return NULL; + } - res = PyEval_CallObject(meth, args); - Py_DECREF(meth); - Py_DECREF(args); + res = PyEval_CallObject(meth, args); + Py_DECREF(meth); + Py_DECREF(args); - return res; -} - -static PyObject* -call_function_tail(PyObject *callable, PyObject *args) -{ - PyObject *retval; - - if (args == NULL) - return NULL; - - if (!PyTuple_Check(args)) { - PyObject *a; - - a = PyTuple_New(1); - if (a == NULL) { - Py_DECREF(args); - return NULL; - } - PyTuple_SET_ITEM(a, 0, args); - args = a; - } - retval = 
PyObject_Call(callable, args, NULL); - - Py_DECREF(args); - - return retval; -} - -PyObject * -PyObject_CallFunction(PyObject *callable, const char *format, ...) -{ - va_list va; - PyObject *args; - - if (format && *format) { - va_start(va, format); - args = Py_VaBuildValue(format, va); - va_end(va); - } - else - args = PyTuple_New(0); - - return call_function_tail(callable, args); -} - -PyObject * -PyObject_CallMethod(PyObject *o, const char *name, const char *format, ...) -{ - va_list va; - PyObject *args; - PyObject *func = NULL; - PyObject *retval = NULL; - - func = PyObject_GetAttrString(o, name); - if (func == NULL) { - PyErr_SetString(PyExc_AttributeError, name); - return 0; - } - - if (format && *format) { - va_start(va, format); - args = Py_VaBuildValue(format, va); - va_end(va); - } - else - args = PyTuple_New(0); - - retval = call_function_tail(func, args); - - exit: - /* args gets consumed in call_function_tail */ - Py_XDECREF(func); - - return retval; -} - -static PyObject * -objargs_mktuple(va_list va) -{ - int i, n = 0; - va_list countva; - PyObject *result, *tmp; - -#ifdef VA_LIST_IS_ARRAY - memcpy(countva, va, sizeof(va_list)); -#else -#ifdef __va_copy - __va_copy(countva, va); -#else - countva = va; -#endif -#endif - - while (((PyObject *)va_arg(countva, PyObject *)) != NULL) - ++n; - result = PyTuple_New(n); - if (result != NULL && n > 0) { - for (i = 0; i < n; ++i) { - tmp = (PyObject *)va_arg(va, PyObject *); - Py_INCREF(tmp); - PyTuple_SET_ITEM(result, i, tmp); - } - } - return result; -} - -PyObject * -PyObject_CallFunctionObjArgs(PyObject *callable, ...) -{ - PyObject *args, *tmp; - va_list vargs; - - /* count the args */ - va_start(vargs, callable); - args = objargs_mktuple(vargs); - va_end(vargs); - if (args == NULL) - return NULL; - tmp = PyObject_Call(callable, args, NULL); - Py_DECREF(args); - - return tmp; -} - -PyObject * -PyObject_CallMethodObjArgs(PyObject *callable, PyObject *name, ...) 
-{ - PyObject *args, *tmp; - va_list vargs; - - callable = PyObject_GetAttr(callable, name); - if (callable == NULL) - return NULL; - - /* count the args */ - va_start(vargs, name); - args = objargs_mktuple(vargs); - va_end(vargs); - if (args == NULL) { - Py_DECREF(callable); - return NULL; - } - tmp = PyObject_Call(callable, args, NULL); - Py_DECREF(args); - Py_DECREF(callable); - - return tmp; + return res; } /* returns -1 in case of error, 0 if a new key was added, 1 if the key @@ -666,67 +519,67 @@ static int _PyModule_AddObject_NoConsumeRef(PyObject *m, const char *name, PyObject *o) { - PyObject *dict, *prev; - if (!PyModule_Check(m)) { - PyErr_SetString(PyExc_TypeError, - "PyModule_AddObject() needs module as first arg"); - return -1; - } - if (!o) { - if (!PyErr_Occurred()) - PyErr_SetString(PyExc_TypeError, - "PyModule_AddObject() needs non-NULL value"); - return -1; - } + PyObject *dict, *prev; + if (!PyModule_Check(m)) { + PyErr_SetString(PyExc_TypeError, + "PyModule_AddObject() needs module as first arg"); + return -1; + } + if (!o) { + if (!PyErr_Occurred()) + PyErr_SetString(PyExc_TypeError, + "PyModule_AddObject() needs non-NULL value"); + return -1; + } - dict = PyModule_GetDict(m); - if (dict == NULL) { - /* Internal error -- modules must have a dict! */ - PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__", - PyModule_GetName(m)); - return -1; - } - prev = PyDict_GetItemString(dict, name); - if (PyDict_SetItemString(dict, name, o)) - return -1; - return prev != NULL; + dict = PyModule_GetDict(m); + if (dict == NULL) { + /* Internal error -- modules must have a dict! 
*/ + PyErr_Format(PyExc_SystemError, "module '%s' has no __dict__", + PyModule_GetName(m)); + return -1; + } + prev = PyDict_GetItemString(dict, name); + if (PyDict_SetItemString(dict, name, o)) + return -1; + return prev != NULL; } int PyModule_AddObject(PyObject *m, const char *name, PyObject *o) { - int result = _PyModule_AddObject_NoConsumeRef(m, name, o); - /* XXX WORKAROUND for a common misusage of PyModule_AddObject: - for the common case of adding a new key, we don't consume a - reference, but instead just leak it away. The issue is that - people generally don't realize that this function consumes a - reference, because on CPython the reference is still stored - on the dictionary. */ - if (result != 0) - Py_DECREF(o); - return result < 0 ? -1 : 0; + int result = _PyModule_AddObject_NoConsumeRef(m, name, o); + /* XXX WORKAROUND for a common misusage of PyModule_AddObject: + for the common case of adding a new key, we don't consume a + reference, but instead just leak it away. The issue is that + people generally don't realize that this function consumes a + reference, because on CPython the reference is still stored + on the dictionary. */ + if (result != 0) + Py_DECREF(o); + return result < 0 ? -1 : 0; } -int +int PyModule_AddIntConstant(PyObject *m, const char *name, long value) { - int result; - PyObject *o = PyInt_FromLong(value); - if (!o) - return -1; - result = _PyModule_AddObject_NoConsumeRef(m, name, o); - Py_DECREF(o); - return result < 0 ? -1 : 0; + int result; + PyObject *o = PyInt_FromLong(value); + if (!o) + return -1; + result = _PyModule_AddObject_NoConsumeRef(m, name, o); + Py_DECREF(o); + return result < 0 ? -1 : 0; } -int +int PyModule_AddStringConstant(PyObject *m, const char *name, const char *value) { - int result; - PyObject *o = PyString_FromString(value); - if (!o) - return -1; - result = _PyModule_AddObject_NoConsumeRef(m, name, o); - Py_DECREF(o); - return result < 0 ? 
-1 : 0;
+    int result;
+    PyObject *o = PyString_FromString(value);
+    if (!o)
+        return -1;
+    result = _PyModule_AddObject_NoConsumeRef(m, name, o);
+    Py_DECREF(o);
+    return result < 0 ? -1 : 0;
 }
diff --git a/pypy/module/cpyext/src/mysnprintf.c b/pypy/module/cpyext/src/mysnprintf.c
--- a/pypy/module/cpyext/src/mysnprintf.c
+++ b/pypy/module/cpyext/src/mysnprintf.c
@@ -20,86 +20,86 @@
    Return value (rv):

-	When 0 <= rv < size, the output conversion was unexceptional, and
-	rv characters were written to str (excluding a trailing \0 byte at
-	str[rv]).
+   When 0 <= rv < size, the output conversion was unexceptional, and
+   rv characters were written to str (excluding a trailing \0 byte at
+   str[rv]).

-	When rv >= size, output conversion was truncated, and a buffer of
-	size rv+1 would have been needed to avoid truncation.  str[size-1]
-	is \0 in this case.
+   When rv >= size, output conversion was truncated, and a buffer of
+   size rv+1 would have been needed to avoid truncation.  str[size-1]
+   is \0 in this case.

-	When rv < 0, "something bad happened".  str[size-1] is \0 in this
-	case too, but the rest of str is unreliable.  It could be that
-	an error in format codes was detected by libc, or on platforms
-	with a non-C99 vsnprintf simply that the buffer wasn't big enough
-	to avoid truncation, or on platforms without any vsnprintf that
-	PyMem_Malloc couldn't obtain space for a temp buffer.
+   When rv < 0, "something bad happened".  str[size-1] is \0 in this
+   case too, but the rest of str is unreliable.  It could be that
+   an error in format codes was detected by libc, or on platforms
+   with a non-C99 vsnprintf simply that the buffer wasn't big enough
+   to avoid truncation, or on platforms without any vsnprintf that
+   PyMem_Malloc couldn't obtain space for a temp buffer.

    CAUTION:  Unlike C99, str != NULL and size > 0 are required.
 */

 int
+PyOS_snprintf(char *str, size_t size, const char *format, ...)
+{
+    int rc;
+    va_list va;
+
+    va_start(va, format);
+    rc = PyOS_vsnprintf(str, size, format, va);
+    va_end(va);
+    return rc;
+}
+
+int
 PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va)
 {
-	int len;  /* # bytes written, excluding \0 */
+    int len;  /* # bytes written, excluding \0 */
 #ifdef HAVE_SNPRINTF
#define _PyOS_vsnprintf_EXTRA_SPACE 1
 #else
#define _PyOS_vsnprintf_EXTRA_SPACE 512
-	char *buffer;
+    char *buffer;
 #endif
-	assert(str != NULL);
-	assert(size > 0);
-	assert(format != NULL);
-	/* We take a size_t as input but return an int.  Sanity check
-	 * our input so that it won't cause an overflow in the
-	 * vsnprintf return value or the buffer malloc size.  */
-	if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) {
-		len = -666;
-		goto Done;
-	}
+    assert(str != NULL);
+    assert(size > 0);
+    assert(format != NULL);
+    /* We take a size_t as input but return an int.  Sanity check
+     * our input so that it won't cause an overflow in the
+     * vsnprintf return value or the buffer malloc size.  */
+    if (size > INT_MAX - _PyOS_vsnprintf_EXTRA_SPACE) {
+        len = -666;
+        goto Done;
+    }

 #ifdef HAVE_SNPRINTF
-	len = vsnprintf(str, size, format, va);
+    len = vsnprintf(str, size, format, va);
 #else
-	/* Emulate it. */
-	buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE);
-	if (buffer == NULL) {
-		len = -666;
-		goto Done;
-	}
+    /* Emulate it. */
+    buffer = PyMem_MALLOC(size + _PyOS_vsnprintf_EXTRA_SPACE);
+    if (buffer == NULL) {
+        len = -666;
+        goto Done;
+    }

-	len = vsprintf(buffer, format, va);
-	if (len < 0)
-		/* ignore the error */;
+    len = vsprintf(buffer, format, va);
+    if (len < 0)
+        /* ignore the error */;

-	else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE)
-		Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf");
+    else if ((size_t)len >= size + _PyOS_vsnprintf_EXTRA_SPACE)
+        Py_FatalError("Buffer overflow in PyOS_snprintf/PyOS_vsnprintf");

-	else {
-		const size_t to_copy = (size_t)len < size ?
- (size_t)len : size - 1; - assert(to_copy < size); - memcpy(str, buffer, to_copy); - str[to_copy] = '\0'; - } - PyMem_FREE(buffer); + else { + const size_t to_copy = (size_t)len < size ? + (size_t)len : size - 1; + assert(to_copy < size); + memcpy(str, buffer, to_copy); + str[to_copy] = '\0'; + } + PyMem_FREE(buffer); #endif Done: - if (size > 0) - str[size-1] = '\0'; - return len; + if (size > 0) + str[size-1] = '\0'; + return len; #undef _PyOS_vsnprintf_EXTRA_SPACE } - -int -PyOS_snprintf(char *str, size_t size, const char *format, ...) -{ - int rc; - va_list va; - - va_start(va, format); - rc = PyOS_vsnprintf(str, size, format, va); - va_end(va); - return rc; -} diff --git a/pypy/module/cpyext/src/object.c b/pypy/module/cpyext/src/object.c deleted file mode 100644 --- a/pypy/module/cpyext/src/object.c +++ /dev/null @@ -1,91 +0,0 @@ -// contains code from abstract.c -#include - - -static PyObject * -null_error(void) -{ - if (!PyErr_Occurred()) - PyErr_SetString(PyExc_SystemError, - "null argument to internal routine"); - return NULL; -} - -int PyObject_AsReadBuffer(PyObject *obj, - const void **buffer, - Py_ssize_t *buffer_len) -{ - PyBufferProcs *pb; - void *pp; - Py_ssize_t len; - - if (obj == NULL || buffer == NULL || buffer_len == NULL) { - null_error(); - return -1; - } - pb = obj->ob_type->tp_as_buffer; - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a readable buffer object"); - return -1; - } - if ((*pb->bf_getsegcount)(obj, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "expected a single-segment buffer object"); - return -1; - } - len = (*pb->bf_getreadbuffer)(obj, 0, &pp); - if (len < 0) - return -1; - *buffer = pp; - *buffer_len = len; - return 0; -} - -int PyObject_AsWriteBuffer(PyObject *obj, - void **buffer, - Py_ssize_t *buffer_len) -{ - PyBufferProcs *pb; - void*pp; - Py_ssize_t len; - - if (obj == NULL || buffer == NULL || buffer_len == NULL) { - 
null_error(); - return -1; - } - pb = obj->ob_type->tp_as_buffer; - if (pb == NULL || - pb->bf_getwritebuffer == NULL || - pb->bf_getsegcount == NULL) { - PyErr_SetString(PyExc_TypeError, - "expected a writeable buffer object"); - return -1; - } - if ((*pb->bf_getsegcount)(obj, NULL) != 1) { - PyErr_SetString(PyExc_TypeError, - "expected a single-segment buffer object"); - return -1; - } - len = (*pb->bf_getwritebuffer)(obj,0,&pp); - if (len < 0) - return -1; - *buffer = pp; - *buffer_len = len; - return 0; -} - -int -PyObject_CheckReadBuffer(PyObject *obj) -{ - PyBufferProcs *pb = obj->ob_type->tp_as_buffer; - - if (pb == NULL || - pb->bf_getreadbuffer == NULL || - pb->bf_getsegcount == NULL || - (*pb->bf_getsegcount)(obj, NULL) != 1) - return 0; - return 1; -} diff --git a/pypy/module/cpyext/src/pyerrors.c b/pypy/module/cpyext/src/pyerrors.c --- a/pypy/module/cpyext/src/pyerrors.c +++ b/pypy/module/cpyext/src/pyerrors.c @@ -4,72 +4,75 @@ PyObject * PyErr_Format(PyObject *exception, const char *format, ...) 
{ - va_list vargs; - PyObject* string; + va_list vargs; + PyObject* string; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, format); + va_start(vargs, format); #else - va_start(vargs); + va_start(vargs); #endif - string = PyString_FromFormatV(format, vargs); - PyErr_SetObject(exception, string); - Py_XDECREF(string); - va_end(vargs); - return NULL; + string = PyString_FromFormatV(format, vargs); + PyErr_SetObject(exception, string); + Py_XDECREF(string); + va_end(vargs); + return NULL; } + + PyObject * PyErr_NewException(const char *name, PyObject *base, PyObject *dict) { - char *dot; - PyObject *modulename = NULL; - PyObject *classname = NULL; - PyObject *mydict = NULL; - PyObject *bases = NULL; - PyObject *result = NULL; - dot = strrchr(name, '.'); - if (dot == NULL) { - PyErr_SetString(PyExc_SystemError, - "PyErr_NewException: name must be module.class"); - return NULL; - } - if (base == NULL) - base = PyExc_Exception; - if (dict == NULL) { - dict = mydict = PyDict_New(); - if (dict == NULL) - goto failure; - } - if (PyDict_GetItemString(dict, "__module__") == NULL) { - modulename = PyString_FromStringAndSize(name, - (Py_ssize_t)(dot-name)); - if (modulename == NULL) - goto failure; - if (PyDict_SetItemString(dict, "__module__", modulename) != 0) - goto failure; - } - if (PyTuple_Check(base)) { - bases = base; - /* INCREF as we create a new ref in the else branch */ - Py_INCREF(bases); - } else { - bases = PyTuple_Pack(1, base); - if (bases == NULL) - goto failure; - } - /* Create a real new-style class. 
*/ - result = PyObject_CallFunction((PyObject *)&PyType_Type, "sOO", - dot+1, bases, dict); + char *dot; + PyObject *modulename = NULL; + PyObject *classname = NULL; + PyObject *mydict = NULL; + PyObject *bases = NULL; + PyObject *result = NULL; + dot = strrchr(name, '.'); + if (dot == NULL) { + PyErr_SetString(PyExc_SystemError, + "PyErr_NewException: name must be module.class"); + return NULL; + } + if (base == NULL) + base = PyExc_Exception; + if (dict == NULL) { + dict = mydict = PyDict_New(); + if (dict == NULL) + goto failure; + } + if (PyDict_GetItemString(dict, "__module__") == NULL) { + modulename = PyString_FromStringAndSize(name, + (Py_ssize_t)(dot-name)); + if (modulename == NULL) + goto failure; + if (PyDict_SetItemString(dict, "__module__", modulename) != 0) + goto failure; + } + if (PyTuple_Check(base)) { + bases = base; + /* INCREF as we create a new ref in the else branch */ + Py_INCREF(bases); + } else { + bases = PyTuple_Pack(1, base); + if (bases == NULL) + goto failure; + } + /* Create a real new-style class. 
*/ + result = PyObject_CallFunction((PyObject *)&PyType_Type, "sOO", + dot+1, bases, dict); failure: - Py_XDECREF(bases); - Py_XDECREF(mydict); - Py_XDECREF(classname); - Py_XDECREF(modulename); - return result; + Py_XDECREF(bases); + Py_XDECREF(mydict); + Py_XDECREF(classname); + Py_XDECREF(modulename); + return result; } + /* Create an exception with docstring */ PyObject * PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict) diff --git a/pypy/module/cpyext/src/pysignals.c b/pypy/module/cpyext/src/pysignals.c --- a/pypy/module/cpyext/src/pysignals.c +++ b/pypy/module/cpyext/src/pysignals.c @@ -17,17 +17,34 @@ PyOS_getsig(int sig) { #ifdef SA_RESTART - /* assume sigaction exists */ - struct sigaction context; - if (sigaction(sig, NULL, &context) == -1) - return SIG_ERR; - return context.sa_handler; + /* assume sigaction exists */ + struct sigaction context; + if (sigaction(sig, NULL, &context) == -1) + return SIG_ERR; + return context.sa_handler; #else - PyOS_sighandler_t handler; - handler = signal(sig, SIG_IGN); - if (handler != SIG_ERR) - signal(sig, handler); - return handler; + PyOS_sighandler_t handler; +/* Special signal handling for the secure CRT in Visual Studio 2005 */ +#if defined(_MSC_VER) && _MSC_VER >= 1400 + switch (sig) { + /* Only these signals are valid */ + case SIGINT: + case SIGILL: + case SIGFPE: + case SIGSEGV: + case SIGTERM: + case SIGBREAK: + case SIGABRT: + break; + /* Don't call signal() with other values or it will assert */ + default: + return SIG_ERR; + } +#endif /* _MSC_VER && _MSC_VER >= 1400 */ + handler = signal(sig, SIG_IGN); + if (handler != SIG_ERR) + signal(sig, handler); + return handler; #endif } @@ -35,21 +52,21 @@ PyOS_setsig(int sig, PyOS_sighandler_t handler) { #ifdef SA_RESTART - /* assume sigaction exists */ - struct sigaction context, ocontext; - context.sa_handler = handler; - sigemptyset(&context.sa_mask); - context.sa_flags = 0; - if (sigaction(sig, &context, &ocontext) == 
-1) - return SIG_ERR; - return ocontext.sa_handler; + /* assume sigaction exists */ + struct sigaction context, ocontext; + context.sa_handler = handler; + sigemptyset(&context.sa_mask); + context.sa_flags = 0; + if (sigaction(sig, &context, &ocontext) == -1) + return SIG_ERR; + return ocontext.sa_handler; #else - PyOS_sighandler_t oldhandler; - oldhandler = signal(sig, handler); + PyOS_sighandler_t oldhandler; + oldhandler = signal(sig, handler); #ifndef MS_WINDOWS - /* should check if this exists */ - siginterrupt(sig, 1); + /* should check if this exists */ + siginterrupt(sig, 1); #endif - return oldhandler; + return oldhandler; #endif } diff --git a/pypy/module/cpyext/src/pythonrun.c b/pypy/module/cpyext/src/pythonrun.c --- a/pypy/module/cpyext/src/pythonrun.c +++ b/pypy/module/cpyext/src/pythonrun.c @@ -9,28 +9,28 @@ void Py_FatalError(const char *msg) { - fprintf(stderr, "Fatal Python error: %s\n", msg); - fflush(stderr); /* it helps in Windows debug build */ + fprintf(stderr, "Fatal Python error: %s\n", msg); + fflush(stderr); /* it helps in Windows debug build */ #ifdef MS_WINDOWS - { - size_t len = strlen(msg); - WCHAR* buffer; - size_t i; + { + size_t len = strlen(msg); + WCHAR* buffer; + size_t i; - /* Convert the message to wchar_t. This uses a simple one-to-one - conversion, assuming that the this error message actually uses ASCII - only. If this ceases to be true, we will have to convert. */ - buffer = alloca( (len+1) * (sizeof *buffer)); - for( i=0; i<=len; ++i) - buffer[i] = msg[i]; - OutputDebugStringW(L"Fatal Python error: "); - OutputDebugStringW(buffer); - OutputDebugStringW(L"\n"); - } + /* Convert the message to wchar_t. This uses a simple one-to-one + conversion, assuming that the this error message actually uses ASCII + only. If this ceases to be true, we will have to convert. 
*/ + buffer = alloca( (len+1) * (sizeof *buffer)); + for( i=0; i<=len; ++i) + buffer[i] = msg[i]; + OutputDebugStringW(L"Fatal Python error: "); + OutputDebugStringW(buffer); + OutputDebugStringW(L"\n"); + } #ifdef _DEBUG - DebugBreak(); + DebugBreak(); #endif #endif /* MS_WINDOWS */ - abort(); + abort(); } diff --git a/pypy/module/cpyext/src/stringobject.c b/pypy/module/cpyext/src/stringobject.c --- a/pypy/module/cpyext/src/stringobject.c +++ b/pypy/module/cpyext/src/stringobject.c @@ -4,246 +4,247 @@ PyObject * PyString_FromFormatV(const char *format, va_list vargs) { - va_list count; - Py_ssize_t n = 0; - const char* f; - char *s; - PyObject* string; + va_list count; + Py_ssize_t n = 0; + const char* f; + char *s; + PyObject* string; #ifdef VA_LIST_IS_ARRAY - Py_MEMCPY(count, vargs, sizeof(va_list)); + Py_MEMCPY(count, vargs, sizeof(va_list)); #else #ifdef __va_copy - __va_copy(count, vargs); + __va_copy(count, vargs); #else - count = vargs; + count = vargs; #endif #endif - /* step 1: figure out how large a buffer we need */ - for (f = format; *f; f++) { - if (*f == '%') { + /* step 1: figure out how large a buffer we need */ + for (f = format; *f; f++) { + if (*f == '%') { #ifdef HAVE_LONG_LONG - int longlongflag = 0; + int longlongflag = 0; #endif - const char* p = f; - while (*++f && *f != '%' && !isalpha(Py_CHARMASK(*f))) - ; + const char* p = f; + while (*++f && *f != '%' && !isalpha(Py_CHARMASK(*f))) + ; - /* skip the 'l' or 'z' in {%ld, %zd, %lu, %zu} since - * they don't affect the amount of space we reserve. - */ - if (*f == 'l') { - if (f[1] == 'd' || f[1] == 'u') { - ++f; - } + /* skip the 'l' or 'z' in {%ld, %zd, %lu, %zu} since + * they don't affect the amount of space we reserve. 
+ */ + if (*f == 'l') { + if (f[1] == 'd' || f[1] == 'u') { + ++f; + } #ifdef HAVE_LONG_LONG - else if (f[1] == 'l' && - (f[2] == 'd' || f[2] == 'u')) { - longlongflag = 1; - f += 2; - } + else if (f[1] == 'l' && + (f[2] == 'd' || f[2] == 'u')) { + longlongflag = 1; + f += 2; + } #endif - } - else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { - ++f; - } + } + else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { + ++f; + } - switch (*f) { - case 'c': - (void)va_arg(count, int); - /* fall through... */ - case '%': - n++; - break; - case 'd': case 'u': case 'i': case 'x': - (void) va_arg(count, int); + switch (*f) { + case 'c': + (void)va_arg(count, int); + /* fall through... */ + case '%': + n++; + break; + case 'd': case 'u': case 'i': case 'x': + (void) va_arg(count, int); #ifdef HAVE_LONG_LONG - /* Need at most - ceil(log10(256)*SIZEOF_LONG_LONG) digits, - plus 1 for the sign. 53/22 is an upper - bound for log10(256). */ - if (longlongflag) - n += 2 + (SIZEOF_LONG_LONG*53-1) / 22; - else + /* Need at most + ceil(log10(256)*SIZEOF_LONG_LONG) digits, + plus 1 for the sign. 53/22 is an upper + bound for log10(256). */ + if (longlongflag) + n += 2 + (SIZEOF_LONG_LONG*53-1) / 22; + else #endif - /* 20 bytes is enough to hold a 64-bit - integer. Decimal takes the most - space. This isn't enough for - octal. */ - n += 20; + /* 20 bytes is enough to hold a 64-bit + integer. Decimal takes the most + space. This isn't enough for + octal. */ + n += 20; - break; - case 's': - s = va_arg(count, char*); - n += strlen(s); - break; - case 'p': - (void) va_arg(count, int); - /* maximum 64-bit pointer representation: - * 0xffffffffffffffff - * so 19 characters is enough. - * XXX I count 18 -- what's the extra for? - */ - n += 19; - break; - default: - /* if we stumble upon an unknown - formatting code, copy the rest of - the format string to the output - string. 
(we cannot just skip the - code, since there's no way to know - what's in the argument list) */ - n += strlen(p); - goto expand; - } - } else - n++; - } + break; + case 's': + s = va_arg(count, char*); + n += strlen(s); + break; + case 'p': + (void) va_arg(count, int); + /* maximum 64-bit pointer representation: + * 0xffffffffffffffff + * so 19 characters is enough. + * XXX I count 18 -- what's the extra for? + */ + n += 19; + break; + default: + /* if we stumble upon an unknown + formatting code, copy the rest of + the format string to the output + string. (we cannot just skip the + code, since there's no way to know + what's in the argument list) */ + n += strlen(p); + goto expand; + } + } else + n++; + } expand: - /* step 2: fill the buffer */ - /* Since we've analyzed how much space we need for the worst case, - use sprintf directly instead of the slower PyOS_snprintf. */ - string = PyString_FromStringAndSize(NULL, n); - if (!string) - return NULL; + /* step 2: fill the buffer */ + /* Since we've analyzed how much space we need for the worst case, + use sprintf directly instead of the slower PyOS_snprintf. */ + string = PyString_FromStringAndSize(NULL, n); + if (!string) + return NULL; - s = PyString_AsString(string); + s = PyString_AsString(string); - for (f = format; *f; f++) { - if (*f == '%') { - const char* p = f++; - Py_ssize_t i; - int longflag = 0; + for (f = format; *f; f++) { + if (*f == '%') { + const char* p = f++; + Py_ssize_t i; + int longflag = 0; #ifdef HAVE_LONG_LONG - int longlongflag = 0; + int longlongflag = 0; #endif - int size_tflag = 0; - /* parse the width.precision part (we're only - interested in the precision value, if any) */ - n = 0; - while (isdigit(Py_CHARMASK(*f))) - n = (n*10) + *f++ - '0'; - if (*f == '.') { - f++; - n = 0; - while (isdigit(Py_CHARMASK(*f))) - n = (n*10) + *f++ - '0'; - } - while (*f && *f != '%' && !isalpha(Py_CHARMASK(*f))) - f++; - /* Handle %ld, %lu, %lld and %llu. 
*/ - if (*f == 'l') { - if (f[1] == 'd' || f[1] == 'u') { - longflag = 1; - ++f; - } + int size_tflag = 0; + /* parse the width.precision part (we're only + interested in the precision value, if any) */ + n = 0; + while (isdigit(Py_CHARMASK(*f))) + n = (n*10) + *f++ - '0'; + if (*f == '.') { + f++; + n = 0; + while (isdigit(Py_CHARMASK(*f))) + n = (n*10) + *f++ - '0'; + } + while (*f && *f != '%' && !isalpha(Py_CHARMASK(*f))) + f++; + /* Handle %ld, %lu, %lld and %llu. */ + if (*f == 'l') { + if (f[1] == 'd' || f[1] == 'u') { + longflag = 1; + ++f; + } #ifdef HAVE_LONG_LONG - else if (f[1] == 'l' && - (f[2] == 'd' || f[2] == 'u')) { - longlongflag = 1; - f += 2; - } + else if (f[1] == 'l' && + (f[2] == 'd' || f[2] == 'u')) { + longlongflag = 1; + f += 2; + } #endif - } - /* handle the size_t flag. */ - else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { - size_tflag = 1; - ++f; - } + } + /* handle the size_t flag. */ + else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u')) { + size_tflag = 1; + ++f; + } - switch (*f) { - case 'c': - *s++ = va_arg(vargs, int); - break; - case 'd': - if (longflag) - sprintf(s, "%ld", va_arg(vargs, long)); + switch (*f) { + case 'c': + *s++ = va_arg(vargs, int); + break; + case 'd': + if (longflag) + sprintf(s, "%ld", va_arg(vargs, long)); #ifdef HAVE_LONG_LONG - else if (longlongflag) - sprintf(s, "%" PY_FORMAT_LONG_LONG "d", - va_arg(vargs, PY_LONG_LONG)); + else if (longlongflag) + sprintf(s, "%" PY_FORMAT_LONG_LONG "d", + va_arg(vargs, PY_LONG_LONG)); #endif - else if (size_tflag) - sprintf(s, "%" PY_FORMAT_SIZE_T "d", - va_arg(vargs, Py_ssize_t)); - else - sprintf(s, "%d", va_arg(vargs, int)); - s += strlen(s); - break; - case 'u': - if (longflag) - sprintf(s, "%lu", - va_arg(vargs, unsigned long)); + else if (size_tflag) + sprintf(s, "%" PY_FORMAT_SIZE_T "d", + va_arg(vargs, Py_ssize_t)); + else + sprintf(s, "%d", va_arg(vargs, int)); + s += strlen(s); + break; + case 'u': + if (longflag) + sprintf(s, "%lu", + va_arg(vargs, 
unsigned long)); #ifdef HAVE_LONG_LONG - else if (longlongflag) - sprintf(s, "%" PY_FORMAT_LONG_LONG "u", - va_arg(vargs, PY_LONG_LONG)); + else if (longlongflag) + sprintf(s, "%" PY_FORMAT_LONG_LONG "u", + va_arg(vargs, PY_LONG_LONG)); #endif - else if (size_tflag) - sprintf(s, "%" PY_FORMAT_SIZE_T "u", - va_arg(vargs, size_t)); - else - sprintf(s, "%u", - va_arg(vargs, unsigned int)); - s += strlen(s); - break; - case 'i': - sprintf(s, "%i", va_arg(vargs, int)); - s += strlen(s); - break; - case 'x': - sprintf(s, "%x", va_arg(vargs, int)); - s += strlen(s); - break; - case 's': - p = va_arg(vargs, char*); - i = strlen(p); - if (n > 0 && i > n) - i = n; - Py_MEMCPY(s, p, i); - s += i; - break; - case 'p': - sprintf(s, "%p", va_arg(vargs, void*)); - /* %p is ill-defined: ensure leading 0x. */ - if (s[1] == 'X') - s[1] = 'x'; - else if (s[1] != 'x') { - memmove(s+2, s, strlen(s)+1); - s[0] = '0'; - s[1] = 'x'; - } - s += strlen(s); - break; - case '%': - *s++ = '%'; - break; - default: - strcpy(s, p); - s += strlen(s); - goto end; - } - } else - *s++ = *f; - } + else if (size_tflag) + sprintf(s, "%" PY_FORMAT_SIZE_T "u", + va_arg(vargs, size_t)); + else + sprintf(s, "%u", + va_arg(vargs, unsigned int)); + s += strlen(s); + break; + case 'i': + sprintf(s, "%i", va_arg(vargs, int)); + s += strlen(s); + break; + case 'x': + sprintf(s, "%x", va_arg(vargs, int)); + s += strlen(s); + break; + case 's': + p = va_arg(vargs, char*); + i = strlen(p); + if (n > 0 && i > n) + i = n; + Py_MEMCPY(s, p, i); + s += i; + break; + case 'p': + sprintf(s, "%p", va_arg(vargs, void*)); + /* %p is ill-defined: ensure leading 0x. 
*/ + if (s[1] == 'X') + s[1] = 'x'; + else if (s[1] != 'x') { + memmove(s+2, s, strlen(s)+1); + s[0] = '0'; + s[1] = 'x'; + } + s += strlen(s); + break; + case '%': + *s++ = '%'; + break; + default: + strcpy(s, p); + s += strlen(s); + goto end; + } + } else + *s++ = *f; + } end: - _PyString_Resize(&string, s - PyString_AS_STRING(string)); - return string; + if (_PyString_Resize(&string, s - PyString_AS_STRING(string))) + return NULL; + return string; } PyObject * PyString_FromFormat(const char *format, ...) { - PyObject* ret; - va_list vargs; + PyObject* ret; + va_list vargs; #ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, format); + va_start(vargs, format); #else - va_start(vargs); + va_start(vargs); #endif - ret = PyString_FromFormatV(format, vargs); - va_end(vargs); - return ret; + ret = PyString_FromFormatV(format, vargs); + va_end(vargs); + return ret; } diff --git a/pypy/module/cpyext/src/structseq.c b/pypy/module/cpyext/src/structseq.c --- a/pypy/module/cpyext/src/structseq.c +++ b/pypy/module/cpyext/src/structseq.c @@ -175,32 +175,33 @@ if (min_len != max_len) { if (len < min_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes an at least %zd-sequence (%zd-sequence given)", - type->tp_name, min_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes an at least %zd-sequence (%zd-sequence given)", + type->tp_name, min_len, len); + Py_DECREF(arg); + return NULL; } if (len > max_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes an at most %zd-sequence (%zd-sequence given)", - type->tp_name, max_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes an at most %zd-sequence (%zd-sequence given)", + type->tp_name, max_len, len); + Py_DECREF(arg); + return NULL; } } else { if (len != min_len) { PyErr_Format(PyExc_TypeError, - "%.500s() takes a %zd-sequence (%zd-sequence given)", - type->tp_name, min_len, len); - Py_DECREF(arg); - return NULL; + "%.500s() takes a %zd-sequence (%zd-sequence given)", + type->tp_name, min_len, len); + 
Py_DECREF(arg);
+            return NULL;
         }
     }

     res = (PyStructSequence*) PyStructSequence_New(type);
     if (res == NULL) {
+        Py_DECREF(arg);
         return NULL;
     }

     for (i = 0; i < len; ++i) {
diff --git a/pypy/module/cpyext/src/sysmodule.c b/pypy/module/cpyext/src/sysmodule.c
--- a/pypy/module/cpyext/src/sysmodule.c
+++ b/pypy/module/cpyext/src/sysmodule.c
@@ -100,4 +100,3 @@
     sys_write("stderr", stderr, format, va);
     va_end(va);
 }
-
diff --git a/pypy/module/cpyext/src/varargwrapper.c b/pypy/module/cpyext/src/varargwrapper.c
--- a/pypy/module/cpyext/src/varargwrapper.c
+++ b/pypy/module/cpyext/src/varargwrapper.c
@@ -1,21 +1,25 @@
 #include <Python.h>
 #include <stdarg.h>

-PyObject * PyTuple_Pack(Py_ssize_t size, ...)
+PyObject *
+PyTuple_Pack(Py_ssize_t n, ...)
 {
-    va_list ap;
-    PyObject *cur, *tuple;
-    int i;
+    Py_ssize_t i;
+    PyObject *o;
+    PyObject *result;
+    va_list vargs;

-    tuple = PyTuple_New(size);
-    va_start(ap, size);
-    for (i = 0; i < size; i++) {
-        cur = va_arg(ap, PyObject*);
-        Py_INCREF(cur);
-        if (PyTuple_SetItem(tuple, i, cur) < 0)
+    va_start(vargs, n);
+    result = PyTuple_New(n);
+    if (result == NULL)
+        return NULL;
+    for (i = 0; i < n; i++) {
+        o = va_arg(vargs, PyObject *);
+        Py_INCREF(o);
+        if (PyTuple_SetItem(result, i, o) < 0)
             return NULL;
     }
-    va_end(ap);
-    return tuple;
+    va_end(vargs);
+    return result;
 }
diff --git a/pypy/module/cpyext/stringobject.py b/pypy/module/cpyext/stringobject.py
--- a/pypy/module/cpyext/stringobject.py
+++ b/pypy/module/cpyext/stringobject.py
@@ -294,6 +294,26 @@
     w_errors = space.wrap(rffi.charp2str(errors))
     return space.call_method(w_str, 'encode', w_encoding, w_errors)

+@cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject)
+def PyString_AsDecodedObject(space, w_str, encoding, errors):
+    """Decode a string object by passing it to the codec registered
+    for encoding and return the result as Python object. encoding and
+    errors have the same meaning as the parameters of the same name in
+    the string encode() method.
The codec to be used is looked up + using the Python codec registry. Return NULL if an exception was + raised by the codec. + + This function is not available in 3.x and does not have a PyBytes alias.""" + if not PyString_Check(space, w_str): + PyErr_BadArgument(space) + + w_encoding = w_errors = space.w_None + if encoding: + w_encoding = space.wrap(rffi.charp2str(encoding)) + if errors: + w_errors = space.wrap(rffi.charp2str(errors)) + return space.call_method(w_str, "decode", w_encoding, w_errors) + @cpython_api([PyObject, PyObject], PyObject) def _PyString_Join(space, w_sep, w_seq): return space.call_method(w_sep, 'join', w_seq) diff --git a/pypy/module/cpyext/structmember.py b/pypy/module/cpyext/structmember.py --- a/pypy/module/cpyext/structmember.py +++ b/pypy/module/cpyext/structmember.py @@ -10,7 +10,7 @@ PyString_FromString, PyString_FromStringAndSize) from pypy.module.cpyext.floatobject import PyFloat_AsDouble from pypy.module.cpyext.longobject import ( - PyLong_AsLongLong, PyLong_AsUnsignedLongLong) + PyLong_AsLongLong, PyLong_AsUnsignedLongLong, PyLong_AsSsize_t) from pypy.module.cpyext.typeobjectdefs import PyMemberDef from pypy.rlib.unroll import unrolling_iterable @@ -28,6 +28,7 @@ (T_DOUBLE, rffi.DOUBLE, PyFloat_AsDouble), (T_LONGLONG, rffi.LONGLONG, PyLong_AsLongLong), (T_ULONGLONG, rffi.ULONGLONG, PyLong_AsUnsignedLongLong), + (T_PYSSIZET, rffi.SSIZE_T, PyLong_AsSsize_t), ]) diff --git a/pypy/module/cpyext/structmemberdefs.py b/pypy/module/cpyext/structmemberdefs.py --- a/pypy/module/cpyext/structmemberdefs.py +++ b/pypy/module/cpyext/structmemberdefs.py @@ -18,6 +18,7 @@ T_OBJECT_EX = 16 T_LONGLONG = 17 T_ULONGLONG = 18 +T_PYSSIZET = 19 READONLY = RO = 1 READ_RESTRICTED = 2 diff --git a/pypy/module/cpyext/stubs.py b/pypy/module/cpyext/stubs.py --- a/pypy/module/cpyext/stubs.py +++ b/pypy/module/cpyext/stubs.py @@ -1405,24 +1405,6 @@ """ raise NotImplementedError - at cpython_api([PyObject, Py_ssize_t, Py_ssize_t], PyObject) -def 
PyList_GetSlice(space, list, low, high): - """Return a list of the objects in list containing the objects between low - and high. Return NULL and set an exception if unsuccessful. Analogous - to list[low:high]. Negative indices, as when slicing from Python, are not - supported. - - This function used an int for low and high. This might - require changes in your code for properly supporting 64-bit systems.""" - raise NotImplementedError - - at cpython_api([Py_ssize_t], PyObject) -def PyLong_FromSsize_t(space, v): - """Return a new PyLongObject object from a C Py_ssize_t, or - NULL on failure. - """ - raise NotImplementedError - @cpython_api([rffi.SIZE_T], PyObject) def PyLong_FromSize_t(space, v): """Return a new PyLongObject object from a C size_t, or @@ -1442,14 +1424,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([PyObject], Py_ssize_t, error=-1) -def PyLong_AsSsize_t(space, pylong): - """Return a C Py_ssize_t representation of the contents of pylong. If - pylong is greater than PY_SSIZE_T_MAX, an OverflowError is raised - and -1 will be returned. - """ - raise NotImplementedError - @cpython_api([PyObject, rffi.CCHARP], rffi.INT_real, error=-1) def PyMapping_DelItemString(space, o, key): """Remove the mapping for object key from the object o. Return -1 on @@ -1606,15 +1580,6 @@ for PyObject_Str().""" raise NotImplementedError - at cpython_api([PyObject], lltype.Signed, error=-1) -def PyObject_HashNotImplemented(space, o): - """Set a TypeError indicating that type(o) is not hashable and return -1. - This function receives special treatment when stored in a tp_hash slot, - allowing a type to explicitly indicate to the interpreter that it is not - hashable. 
- """ - raise NotImplementedError - @cpython_api([], PyFrameObject) def PyEval_GetFrame(space): """Return the current thread state's frame, which is NULL if no frame is @@ -1737,17 +1702,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([PyObject, rffi.CCHARP, rffi.CCHARP], PyObject) -def PyString_AsDecodedObject(space, str, encoding, errors): - """Decode a string object by passing it to the codec registered for encoding and - return the result as Python object. encoding and errors have the same - meaning as the parameters of the same name in the string encode() method. - The codec to be used is looked up using the Python codec registry. Return NULL - if an exception was raised by the codec. - - This function is not available in 3.x and does not have a PyBytes alias.""" - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.CCHARP], PyObject) def PyString_Encode(space, s, size, encoding, errors): """Encode the char buffer of the given size by passing it to the codec @@ -2011,35 +1965,6 @@ changes in your code for properly supporting 64-bit systems.""" raise NotImplementedError - at cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP], PyObject) -def PyUnicode_DecodeUTF32(space, s, size, errors, byteorder): - """Decode length bytes from a UTF-32 encoded buffer string and return the - corresponding Unicode object. errors (if non-NULL) defines the error - handling. It defaults to "strict". - - If byteorder is non-NULL, the decoder starts decoding using the given byte - order: - - *byteorder == -1: little endian - *byteorder == 0: native order - *byteorder == 1: big endian - - If *byteorder is zero, and the first four bytes of the input data are a - byte order mark (BOM), the decoder switches to this byte order and the BOM is - not copied into the resulting Unicode string. If *byteorder is -1 or - 1, any byte order mark is copied to the output. 
- - After completion, *byteorder is set to the current byte order at the end - of input data. - - In a narrow build codepoints outside the BMP will be decoded as surrogate pairs. - - If byteorder is NULL, the codec starts in native order mode. - - Return NULL if an exception was raised by the codec. - """ - raise NotImplementedError - @cpython_api([rffi.CCHARP, Py_ssize_t, rffi.CCHARP, rffi.INTP, Py_ssize_t], PyObject) def PyUnicode_DecodeUTF32Stateful(space, s, size, errors, byteorder, consumed): """If consumed is NULL, behave like PyUnicode_DecodeUTF32(). If diff --git a/pypy/module/cpyext/test/_sre.c b/pypy/module/cpyext/test/_sre.c --- a/pypy/module/cpyext/test/_sre.c +++ b/pypy/module/cpyext/test/_sre.c @@ -81,9 +81,6 @@ #define PyObject_DEL(op) PyMem_DEL((op)) #endif -#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size) -#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type) - /* -------------------------------------------------------------------- */ #if defined(_MSC_VER) @@ -1689,7 +1686,7 @@ if (PyUnicode_Check(string)) { /* unicode strings doesn't always support the buffer interface */ ptr = (void*) PyUnicode_AS_DATA(string); - bytes = PyUnicode_GET_DATA_SIZE(string); + /* bytes = PyUnicode_GET_DATA_SIZE(string); */ size = PyUnicode_GET_SIZE(string); charsize = sizeof(Py_UNICODE); @@ -2601,46 +2598,22 @@ {NULL, NULL} }; -static PyObject* -pattern_getattr(PatternObject* self, char* name) -{ - PyObject* res; - - res = Py_FindMethod(pattern_methods, (PyObject*) self, name); - - if (res) - return res; - - PyErr_Clear(); - - /* attributes */ - if (!strcmp(name, "pattern")) { - Py_INCREF(self->pattern); - return self->pattern; - } - - if (!strcmp(name, "flags")) - return Py_BuildValue("i", self->flags); - - if (!strcmp(name, "groups")) - return Py_BuildValue("i", self->groups); - - if (!strcmp(name, "groupindex") && self->groupindex) { - Py_INCREF(self->groupindex); - return self->groupindex; - } - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; -} 
+#define PAT_OFF(x) offsetof(PatternObject, x) +static PyMemberDef pattern_members[] = { + {"pattern", T_OBJECT, PAT_OFF(pattern), READONLY}, + {"flags", T_INT, PAT_OFF(flags), READONLY}, + {"groups", T_PYSSIZET, PAT_OFF(groups), READONLY}, + {"groupindex", T_OBJECT, PAT_OFF(groupindex), READONLY}, + {NULL} /* Sentinel */ +}; statichere PyTypeObject Pattern_Type = { PyObject_HEAD_INIT(NULL) 0, "_" SRE_MODULE ".SRE_Pattern", sizeof(PatternObject), sizeof(SRE_CODE), (destructor)pattern_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)pattern_getattr, /*tp_getattr*/ + 0, /* tp_print */ + 0, /* tp_getattrn */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ @@ -2653,12 +2626,16 @@ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ - Py_TPFLAGS_HAVE_WEAKREFS, /* tp_flags */ + Py_TPFLAGS_DEFAULT, /* tp_flags */ pattern_doc, /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ offsetof(PatternObject, weakreflist), /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + pattern_methods, /* tp_methods */ + pattern_members, /* tp_members */ }; static int _validate(PatternObject *self); /* Forward */ @@ -2767,7 +2744,7 @@ #if defined(VVERBOSE) #define VTRACE(v) printf v #else -#define VTRACE(v) +#define VTRACE(v) do {} while(0) /* do nothing */ #endif /* Report failure */ @@ -2970,13 +2947,13 @@ <1=skip> <2=flags> <3=min> <4=max>; If SRE_INFO_PREFIX or SRE_INFO_CHARSET is in the flags, more follows. 
*/ - SRE_CODE flags, min, max, i; + SRE_CODE flags, i; SRE_CODE *newcode; GET_SKIP; newcode = code+skip-1; GET_ARG; flags = arg; - GET_ARG; min = arg; - GET_ARG; max = arg; + GET_ARG; /* min */ + GET_ARG; /* max */ /* Check that only valid flags are present */ if ((flags & ~(SRE_INFO_PREFIX | SRE_INFO_LITERAL | @@ -2992,9 +2969,9 @@ FAIL; /* Validate the prefix */ if (flags & SRE_INFO_PREFIX) { - SRE_CODE prefix_len, prefix_skip; + SRE_CODE prefix_len; GET_ARG; prefix_len = arg; - GET_ARG; prefix_skip = arg; + GET_ARG; /* prefix skip */ /* Here comes the prefix string */ if (code+prefix_len < code || code+prefix_len > newcode) FAIL; @@ -3565,7 +3542,7 @@ #endif } -static PyMethodDef match_methods[] = { +static struct PyMethodDef match_methods[] = { {"group", (PyCFunction) match_group, METH_VARARGS}, {"start", (PyCFunction) match_start, METH_VARARGS}, {"end", (PyCFunction) match_end, METH_VARARGS}, @@ -3578,80 +3555,90 @@ {NULL, NULL} }; -static PyObject* -match_getattr(MatchObject* self, char* name) +static PyObject * +match_lastindex_get(MatchObject *self) { - PyObject* res; - - res = Py_FindMethod(match_methods, (PyObject*) self, name); - if (res) - return res; - - PyErr_Clear(); - - if (!strcmp(name, "lastindex")) { - if (self->lastindex >= 0) - return Py_BuildValue("i", self->lastindex); - Py_INCREF(Py_None); - return Py_None; + if (self->lastindex >= 0) + return Py_BuildValue("i", self->lastindex); + Py_INCREF(Py_None); + return Py_None; +} + +static PyObject * +match_lastgroup_get(MatchObject *self) +{ + if (self->pattern->indexgroup && self->lastindex >= 0) { + PyObject* result = PySequence_GetItem( + self->pattern->indexgroup, self->lastindex + ); + if (result) + return result; + PyErr_Clear(); } - - if (!strcmp(name, "lastgroup")) { - if (self->pattern->indexgroup && self->lastindex >= 0) { - PyObject* result = PySequence_GetItem( - self->pattern->indexgroup, self->lastindex - ); - if (result) - return result; - PyErr_Clear(); - } - Py_INCREF(Py_None); - 
return Py_None; - } - - if (!strcmp(name, "string")) { - if (self->string) { - Py_INCREF(self->string); - return self->string; - } else { - Py_INCREF(Py_None); - return Py_None; - } - } - - if (!strcmp(name, "regs")) { - if (self->regs) { - Py_INCREF(self->regs); - return self->regs; - } else - return match_regs(self); - } - - if (!strcmp(name, "re")) { - Py_INCREF(self->pattern); - return (PyObject*) self->pattern; - } - - if (!strcmp(name, "pos")) - return Py_BuildValue("i", self->pos); - - if (!strcmp(name, "endpos")) - return Py_BuildValue("i", self->endpos); - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; + Py_INCREF(Py_None); + return Py_None; } +static PyObject * +match_regs_get(MatchObject *self) +{ + if (self->regs) { + Py_INCREF(self->regs); + return self->regs; + } else + return match_regs(self); +} + +static PyGetSetDef match_getset[] = { + {"lastindex", (getter)match_lastindex_get, (setter)NULL}, + {"lastgroup", (getter)match_lastgroup_get, (setter)NULL}, + {"regs", (getter)match_regs_get, (setter)NULL}, + {NULL} +}; + +#define MATCH_OFF(x) offsetof(MatchObject, x) +static PyMemberDef match_members[] = { + {"string", T_OBJECT, MATCH_OFF(string), READONLY}, + {"re", T_OBJECT, MATCH_OFF(pattern), READONLY}, + {"pos", T_PYSSIZET, MATCH_OFF(pos), READONLY}, + {"endpos", T_PYSSIZET, MATCH_OFF(endpos), READONLY}, + {NULL} +}; + + /* FIXME: implement setattr("string", None) as a special case (to detach the associated string, if any */ -statichere PyTypeObject Match_Type = { - PyObject_HEAD_INIT(NULL) - 0, "_" SRE_MODULE ".SRE_Match", +static PyTypeObject Match_Type = { + PyVarObject_HEAD_INIT(NULL, 0) + "_" SRE_MODULE ".SRE_Match", sizeof(MatchObject), sizeof(Py_ssize_t), - (destructor)match_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)match_getattr /*tp_getattr*/ + (destructor)match_dealloc, /* tp_dealloc */ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_compare */ + 0, /* tp_repr */ + 0, /* 
tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ + 0, /* tp_str */ + 0, /* tp_getattro */ + 0, /* tp_setattro */ + 0, /* tp_as_buffer */ + Py_TPFLAGS_DEFAULT, + 0, /* tp_doc */ + 0, /* tp_traverse */ + 0, /* tp_clear */ + 0, /* tp_richcompare */ + 0, /* tp_weaklistoffset */ + 0, /* tp_iter */ + 0, /* tp_iternext */ + match_methods, /* tp_methods */ + match_members, /* tp_members */ + match_getset, /* tp_getset */ }; static PyObject* @@ -3800,34 +3787,42 @@ {NULL, NULL} }; -static PyObject* -scanner_getattr(ScannerObject* self, char* name) -{ - PyObject* res; - - res = Py_FindMethod(scanner_methods, (PyObject*) self, name); - if (res) - return res; - - PyErr_Clear(); - - /* attributes */ - if (!strcmp(name, "pattern")) { - Py_INCREF(self->pattern); - return self->pattern; - } - - PyErr_SetString(PyExc_AttributeError, name); - return NULL; -} +#define SCAN_OFF(x) offsetof(ScannerObject, x) +static PyMemberDef scanner_members[] = { + {"pattern", T_OBJECT, SCAN_OFF(pattern), READONLY}, + {NULL} /* Sentinel */ +}; statichere PyTypeObject Scanner_Type = { PyObject_HEAD_INIT(NULL) 0, "_" SRE_MODULE ".SRE_Scanner", sizeof(ScannerObject), 0, (destructor)scanner_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - (getattrfunc)scanner_getattr, /*tp_getattr*/ + 0, /* tp_print */ + 0, /* tp_getattr */ + 0, /* tp_setattr */ + 0, /* tp_reserved */ + 0, /* tp_repr */ + 0, /* tp_as_number */ + 0, /* tp_as_sequence */ + 0, /* tp_as_mapping */ + 0, /* tp_hash */ + 0, /* tp_call */ From noreply at buildbot.pypy.org Wed May 9 21:19:10 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 9 May 2012 21:19:10 +0200 (CEST) Subject: [pypy-commit] pypy gc-minimark-pinning: this is mostly done Message-ID: <20120509191910.AE06C82F4E@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: gc-minimark-pinning Changeset: r54990:70df5c8669f5 Date: 2012-05-08 17:25 +0200 http://bitbucket.org/pypy/pypy/changeset/70df5c8669f5/ Log: this 
is mostly done

diff --git a/TODO b/TODO
deleted file mode 100644
--- a/TODO
+++ /dev/null
@@ -1,4 +0,0 @@
-
-* implement limit on no of pinned objects + replace insert with qsort
-
-* implement tracing for pinned objects (check that it works)
\ No newline at end of file

From noreply at buildbot.pypy.org Wed May 9 21:19:12 2012
From: noreply at buildbot.pypy.org (fijal)
Date: Wed, 9 May 2012 21:19:12 +0200 (CEST)
Subject: [pypy-commit] pypy gc-minimark-pinning: remove malloc_and_pin after discussions with armin
Message-ID: <20120509191912.2E50182E46@wyvern.cs.uni-duesseldorf.de>

Author: Maciej Fijalkowski
Branch: gc-minimark-pinning
Changeset: r54991:e38b2238ce3b
Date: 2012-05-08 17:34 +0200
http://bitbucket.org/pypy/pypy/changeset/e38b2238ce3b/

Log: remove malloc_and_pin after discussions with armin

diff --git a/pypy/rlib/rgc.py b/pypy/rlib/rgc.py
--- a/pypy/rlib/rgc.py
+++ b/pypy/rlib/rgc.py
@@ -119,40 +119,6 @@
         hop.exception_is_here()
         return hop.genop('gc_heap_stats', [], resulttype=hop.r_result)
 
-def malloc_and_pin(TP, n=None, zero=False):
-    """ Allocate a buffer and immediately pin it. If the GC does not support
-    pinning or cannot fulfill the request for some other reason, null ptr
-    is returned.
Make sure you unpin this thing when you're done - """ - return lltype.malloc(TP, n, zero=zero) - -class MallocAndPinEntry(ExtRegistryEntry): - _about_ = malloc_and_pin - - def compute_result_annotation(self, s_TP, s_n=None, s_zero=None): - # basically return the same as malloc - from pypy.annotation.builtin import malloc - return malloc(s_TP, s_n, s_zero=s_zero) - - def specialize_call(self, hop, i_zero=None): - assert hop.args_s[0].is_constant() - vlist = [hop.inputarg(lltype.Void, arg=0)] - opname = 'malloc_and_pin' - flags = {'flavor': 'gc'} - if i_zero is not None: - flags['zero'] = hop.args_s[i_zero].const - nb_args = hop.nb_args - 1 - else: - nb_args = hop.nb_args - vlist.append(hop.inputconst(lltype.Void, flags)) - - if nb_args == 2: - vlist.append(hop.inputarg(lltype.Signed, arg=1)) - opname = 'malloc_varsize_and_pin' - - hop.exception_cannot_occur() - return hop.genop(opname, vlist, resulttype = hop.r_result.lowleveltype) - @jit.oopspec('list.ll_arraycopy(source, dest, source_start, dest_start, length)') @specialize.ll() @enforceargs(None, None, int, int, int) diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -702,18 +702,6 @@ except MemoryError: self.make_llexception() - def op_malloc_and_pin(self, TYPE, flags): - flavor = flags['flavor'] - assert flavor == 'gc' - zero = flags.get('zero', False) - return self.heap.malloc_and_pin(TYPE, zero=zero) - - def op_malloc_varsize_and_pin(self, TYPE, flags, size): - flavor = flags['flavor'] - assert flavor == 'gc' - zero = flags.get('zero', False) - return self.heap.malloc_varsize_and_pin(TYPE, size, zero=zero) - def op_free(self, obj, flags): assert flags['flavor'] == 'raw' track_allocation = flags.get('track_allocation', True) diff --git a/pypy/rpython/lltypesystem/llheap.py b/pypy/rpython/lltypesystem/llheap.py --- a/pypy/rpython/lltypesystem/llheap.py +++ b/pypy/rpython/lltypesystem/llheap.py @@ -18,9 +18,6 @@ def 
weakref_create_getlazy(objgetter): return weakref_create(objgetter()) -malloc_and_pin = malloc -malloc_varsize_and_pin = malloc - def shrink_array(p, smallersize): return False diff --git a/pypy/rpython/lltypesystem/lloperation.py b/pypy/rpython/lltypesystem/lloperation.py --- a/pypy/rpython/lltypesystem/lloperation.py +++ b/pypy/rpython/lltypesystem/lloperation.py @@ -357,8 +357,6 @@ 'malloc': LLOp(canmallocgc=True), 'malloc_varsize': LLOp(canmallocgc=True), - 'malloc_and_pin': LLOp(canmallocgc=True), - 'malloc_varsize_and_pin':LLOp(canmallocgc=True), 'shrink_array': LLOp(canrun=True), 'zero_gc_pointers_inside': LLOp(), 'free': LLOp(), diff --git a/pypy/rpython/memory/gc/base.py b/pypy/rpython/memory/gc/base.py --- a/pypy/rpython/memory/gc/base.py +++ b/pypy/rpython/memory/gc/base.py @@ -166,13 +166,6 @@ # lots of cast and reverse-cast around... return llmemory.cast_ptr_to_adr(ref) - def malloc_fixedsize_and_pin(self, typeid, size): - return lltype.nullptr(llmemory.GCREF.TO) - - def malloc_varsize_and_pin(self, typeid, length, size, itemsize, - offset_to_length): - return lltype.nullptr(llmemory.GCREF.TO) - def id(self, ptr): return lltype.cast_ptr_to_int(ptr) diff --git a/pypy/rpython/memory/gc/minimark.py b/pypy/rpython/memory/gc/minimark.py --- a/pypy/rpython/memory/gc/minimark.py +++ b/pypy/rpython/memory/gc/minimark.py @@ -581,20 +581,6 @@ # return llmemory.cast_adr_to_ptr(obj, llmemory.GCREF) - def malloc_fixedsize_and_pin(self, typeid, size): - r = self.malloc_fixedsize_clear(typeid, size) - if not self.pin(llmemory.cast_ptr_to_adr(r)): - return lltype.nullptr(llmemory.GCREF.TO) - return r - - def malloc_varsize_and_pin(self, typeid, length, size, itemsize, - offset_to_length): - r = self.malloc_varsize_clear(typeid, length, size, itemsize, - offset_to_length) - if not self.pin(llmemory.cast_ptr_to_adr(r)): - return lltype.nullptr(llmemory.GCREF.TO) - return r - def collect(self, gen=1): """Do a minor (gen=0) or major (gen>0) collection.""" 
self.minor_collection() diff --git a/pypy/rpython/memory/gctransform/framework.py b/pypy/rpython/memory/gctransform/framework.py --- a/pypy/rpython/memory/gctransform/framework.py +++ b/pypy/rpython/memory/gctransform/framework.py @@ -283,14 +283,6 @@ self.can_move_ptr = getfn(GCClass.can_move.im_func, [s_gc, annmodel.SomeAddress()], annmodel.SomeBool()) - self.malloc_fixedsize_and_pin_ptr = getfn( - GCClass.malloc_fixedsize_and_pin, - [s_gc, s_typeid16, annmodel.SomeInteger(nonneg=True)], - s_gcref) - self.malloc_varsize_and_pin_ptr = getfn( - GCClass.malloc_varsize_and_pin, - [s_gc, s_typeid16] + [annmodel.SomeInteger(nonneg=True)] * 4, - s_gcref) if hasattr(GCClass, 'shrink_array'): self.shrink_array_ptr = getfn( @@ -686,9 +678,7 @@ if not 'varsize' in op.opname and not flags.get('varsize'): #malloc_ptr = self.malloc_fixedsize_ptr zero = flags.get('zero', False) - if op.opname == 'malloc_and_pin': - malloc_ptr = self.malloc_fixedsize_and_pin_ptr - elif (self.malloc_fast_ptr is not None and + if (self.malloc_fast_ptr is not None and not c_has_finalizer.value and (self.malloc_fast_is_clearing or not zero)): malloc_ptr = self.malloc_fast_ptr @@ -707,9 +697,7 @@ info_varsize.ofstolength) c_varitemsize = rmodel.inputconst(lltype.Signed, info_varsize.varitemsize) - if op.opname == 'malloc_varsize_and_pin': - malloc_ptr = self.malloc_varsize_and_pin_ptr - elif self.malloc_varsize_clear_fast_ptr is not None: + if self.malloc_varsize_clear_fast_ptr is not None: malloc_ptr = self.malloc_varsize_clear_fast_ptr else: malloc_ptr = self.malloc_varsize_clear_ptr @@ -1101,16 +1089,6 @@ resultvar=hop.spaceop.result) self.pop_roots(hop, livevars) - def gct_malloc_and_pin(self, hop): - TYPE = hop.spaceop.result.concretetype - v_raw = self.gct_fv_gc_malloc(hop, {'flavor': 'gc'}, TYPE.TO) - hop.cast_result(v_raw) - - def gct_malloc_varisze_and_pin(self, hop): - TYPE = hop.spaceop.result.concretetype - v_raw = self.gct_fv_gc_malloc(hop, {'flavor': 'gc'}, TYPE.TO) - 
hop.cast_result(v_raw) - def _set_into_gc_array_part(self, op): if op.opname == 'setarrayitem': return op.args[1] diff --git a/pypy/rpython/memory/gctransform/transform.py b/pypy/rpython/memory/gctransform/transform.py --- a/pypy/rpython/memory/gctransform/transform.py +++ b/pypy/rpython/memory/gctransform/transform.py @@ -1,4 +1,3 @@ -import py from pypy.rpython.lltypesystem import lltype, llmemory from pypy.objspace.flow.model import SpaceOperation, Variable, Constant, \ c_last_exception, checkgraph @@ -7,22 +6,18 @@ from pypy.translator.unsimplify import starts_with_empty_block from pypy.translator.backendopt.support import var_needsgc from pypy.translator.backendopt import inline -from pypy.translator.backendopt import graphanalyze from pypy.translator.backendopt.canraise import RaiseAnalyzer from pypy.translator.backendopt.ssa import DataFlowFamilyBuilder from pypy.translator.backendopt.constfold import constant_fold_graph from pypy.annotation import model as annmodel from pypy.rpython import rmodel -from pypy.rpython.memory import gc from pypy.rpython.memory.gctransform.support import var_ispyobj from pypy.rpython.annlowlevel import MixLevelHelperAnnotator from pypy.rpython.rtyper import LowLevelOpList from pypy.rpython.rbuiltin import gen_cast from pypy.rlib.rarithmetic import ovfcheck -import sys -import os from pypy.rpython.lltypesystem.lloperation import llop -from pypy.translator.simplify import join_blocks, cleanup_graph +from pypy.translator.simplify import cleanup_graph PyObjPtr = lltype.Ptr(lltype.PyObject) @@ -560,12 +555,6 @@ assert meth, "%s has no support for malloc_varsize with flavor %r" % (self, flavor) return self.varsize_malloc_helper(hop, flags, meth, []) - def gct_malloc_and_pin(self, *args, **kwds): - return self.gct_malloc(*args, **kwds) - - def gct_malloc_varsize_and_pin(self, hop, *args, **kwds): - return self.gct_malloc_varsize(hop, *args, **kwds) - def gct_gc_add_memory_pressure(self, hop): if hasattr(self, 
'raw_malloc_memory_pressure_ptr'): op = hop.spaceop diff --git a/pypy/rpython/memory/gcwrapper.py b/pypy/rpython/memory/gcwrapper.py --- a/pypy/rpython/memory/gcwrapper.py +++ b/pypy/rpython/memory/gcwrapper.py @@ -56,25 +56,6 @@ return lltype.malloc(TYPE, n, flavor=flavor, zero=zero, track_allocation=track_allocation) - def malloc_and_pin(self, TYPE, n=None, zero=False): - typeid = self.get_type_id(TYPE) - size = self.gc.fixed_size(typeid) - result = self.gc.malloc_fixedsize_and_pin(typeid, size) - if result: - assert self.gc.malloc_zero_filled - return lltype.cast_opaque_ptr(lltype.Ptr(TYPE), result) - - def malloc_varsize_and_pin(self, TYPE, n=None, zero=False): - typeid = self.get_type_id(TYPE) - size = self.gc.fixed_size(typeid) - itemsize = self.gc.varsize_item_sizes(typeid) - offset_to_length = self.gc.varsize_offset_to_length(typeid) - result = self.gc.malloc_varsize_and_pin(typeid, n, size, itemsize, - offset_to_length) - if result: - assert self.gc.malloc_zero_filled - return lltype.cast_opaque_ptr(lltype.Ptr(TYPE), result) - def add_memory_pressure(self, size): if hasattr(self.gc, 'raw_malloc_memory_pressure'): self.gc.raw_malloc_memory_pressure(size) diff --git a/pypy/rpython/memory/test/test_gc.py b/pypy/rpython/memory/test/test_gc.py --- a/pypy/rpython/memory/test/test_gc.py +++ b/pypy/rpython/memory/test/test_gc.py @@ -25,7 +25,6 @@ class GCTest(object): GC_PARAMS = {} GC_CAN_MOVE = False - GC_CAN_MALLOC_AND_PIN = False GC_CAN_SHRINK_ARRAY = False GC_CAN_SHRINK_BIG_ARRAY = False BUT_HOW_BIG_IS_A_BIG_STRING = 3*WORD @@ -592,36 +591,6 @@ return rgc.can_move(lltype.malloc(TP, 1)) assert self.interpret(func, []) == self.GC_CAN_MOVE - - def test_malloc_and_pin(self): - TP = lltype.GcArray(lltype.Char) - def func(): - a = rgc.malloc_and_pin(TP, 3) - adr = llmemory.cast_ptr_to_adr(a) - if a: - rgc.collect() - assert adr == llmemory.cast_ptr_to_adr(a) - keepalive_until_here(a) - return 1 - return 0 - - assert self.interpret(func, []) == 
int(self.GC_CAN_MALLOC_AND_PIN) - - def test_malloc_and_pin_fixsize(self): - S = lltype.GcStruct('S', ('x', lltype.Float)) - TP = lltype.GcStruct('T', ('s', lltype.Ptr(S))) - def func(): - try: - a = rgc.malloc_and_pin(TP) - if a: - assert not rgc.can_move(a) - return 1 - return 0 - except Exception: - return 2 - - assert self.interpret(func, []) == int(self.GC_CAN_MALLOC_AND_PIN) - def test_shrink_array(self): from pypy.rpython.lltypesystem.rstr import STR @@ -928,7 +897,6 @@ class TestMiniMarkGC(TestSemiSpaceGC): from pypy.rpython.memory.gc.minimark import MiniMarkGC as GCClass GC_CAN_SHRINK_BIG_ARRAY = False - GC_CAN_MALLOC_AND_PIN = True BUT_HOW_BIG_IS_A_BIG_STRING = 11*WORD # those tests are here because they'll be messy and useless diff --git a/pypy/rpython/memory/test/test_transformed_gc.py b/pypy/rpython/memory/test/test_transformed_gc.py --- a/pypy/rpython/memory/test/test_transformed_gc.py +++ b/pypy/rpython/memory/test/test_transformed_gc.py @@ -1,10 +1,9 @@ import py -import sys import inspect from pypy.translator.c import gc from pypy.annotation import model as annmodel from pypy.annotation import policy as annpolicy -from pypy.rpython.lltypesystem import lltype, llmemory, llarena, rffi, llgroup +from pypy.rpython.lltypesystem import lltype, llmemory, rffi, llgroup from pypy.rpython.memory.gctransform import framework from pypy.rpython.lltypesystem.lloperation import llop, void from pypy.rlib.objectmodel import compute_unique_id, we_are_translated @@ -42,7 +41,6 @@ class GCTest(object): gcpolicy = None GC_CAN_MOVE = False - GC_CAN_MALLOC_AND_PIN = False GC_CAN_ALWAYS_PIN = False taggedpointers = False @@ -643,42 +641,6 @@ res = run([]) assert res == self.GC_CAN_MOVE - def define_malloc_and_pin(cls): - TP = lltype.GcArray(lltype.Char) - def func(): - a = rgc.malloc_and_pin(TP, 3, zero=True) - if a: - assert not rgc.can_move(a) - rgc.unpin(a) - return 1 - return 0 - - return func - - def test_malloc_and_pin(self): - run = self.runner("malloc_and_pin") - 
assert int(self.GC_CAN_MALLOC_AND_PIN) == run([]) - - def define_malloc_and_pin_fixsize(cls): - S = lltype.GcStruct('S', ('x', lltype.Float)) - TP = lltype.GcStruct('T', ('s', lltype.Ptr(S))) - def func(): - try: - a = rgc.malloc_and_pin(TP) - if a: - assert not rgc.can_move(a) - rgc.unpin(a) - return 1 - return 0 - except Exception: - return 2 - - return func - - def test_malloc_and_pin_fixsize(self): - run = self.runner("malloc_and_pin_fixsize") - assert run([]) == int(self.GC_CAN_MALLOC_AND_PIN) - def define_shrink_array(cls): from pypy.rpython.lltypesystem.rstr import STR @@ -1222,7 +1184,6 @@ class TestHybridGC(TestGenerationGC): gcname = "hybrid" - GC_CAN_MALLOC_AND_PIN = True class gcpolicy(gc.FrameworkGcPolicy): class transformerclass(framework.FrameworkGCTransformer): @@ -1290,7 +1251,6 @@ gcname = "minimark" GC_CAN_TEST_ID = True GC_CAN_ALWAYS_PIN = True - GC_CAN_MALLOC_AND_PIN = True class gcpolicy(gc.FrameworkGcPolicy): class transformerclass(framework.FrameworkGCTransformer): diff --git a/pypy/translator/backendopt/removenoops.py b/pypy/translator/backendopt/removenoops.py --- a/pypy/translator/backendopt/removenoops.py +++ b/pypy/translator/backendopt/removenoops.py @@ -1,8 +1,6 @@ -from pypy.objspace.flow.model import Block, Variable, Constant -from pypy.rpython.lltypesystem.lltype import Void +from pypy.objspace.flow.model import Variable, Constant from pypy.translator.backendopt.support import log from pypy.translator import simplify -from pypy import conftest def remove_unaryops(graph, opnames): """Removes unary low-level ops with a name appearing in the opnames list. 
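The `remove_unaryops` docstring above describes dropping unary low-level no-op operations from a graph. The core idea can be sketched over plain tuples; this is an illustrative model only, not the actual PyPy flow-graph API (real variables, blocks, and op objects are assumed away here):

```python
def remove_unaryops(operations, opnames):
    """Drop ops whose name is in opnames, forwarding each use of a
    removed op's result variable to that op's single argument.
    Operations are modeled as (name, arg, result) tuples."""
    replacements = {}
    kept = []
    for name, arg, result in operations:
        arg = replacements.get(arg, arg)      # follow earlier removals
        if name in opnames:
            replacements[result] = arg        # redirect uses to the input
        else:
            kept.append((name, arg, result))
    return kept

ops = [("cast", "x", "v1"), ("same_as", "v1", "v2"), ("use", "v2", "v3")]
# the same_as op vanishes and "v2" is rewired back to "v1"
assert remove_unaryops(ops, {"same_as"}) == [("cast", "x", "v1"), ("use", "v1", "v3")]
```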
diff --git a/pypy/translator/c/test/test_newgc.py b/pypy/translator/c/test/test_newgc.py --- a/pypy/translator/c/test/test_newgc.py +++ b/pypy/translator/c/test/test_newgc.py @@ -1,13 +1,12 @@ import py import sys, os, inspect -from pypy.objspace.flow.model import summary from pypy.rpython.lltypesystem import lltype, llmemory from pypy.rpython.lltypesystem.lloperation import llop from pypy.rpython.memory.test import snippet from pypy.rlib import rgc from pypy.rlib.objectmodel import keepalive_until_here -from pypy.rlib.rstring import StringBuilder, UnicodeBuilder +from pypy.rlib.rstring import StringBuilder from pypy.tool.udir import udir from pypy.translator.interactive import Translation from pypy.annotation import policy as annpolicy @@ -19,7 +18,6 @@ removetypeptr = False taggedpointers = False GC_CAN_MOVE = False - GC_CAN_MALLOC_AND_PIN = False GC_CAN_SHRINK_ARRAY = False _isolated_func = None @@ -723,25 +721,6 @@ def test_can_move(self): assert self.run('can_move') == self.GC_CAN_MOVE - def define_malloc_and_pin(cls): - TP = lltype.GcArray(lltype.Char) - def func(): - try: - a = rgc.malloc_and_pin(TP, 3) - if a: - assert not rgc.can_move(a) - rgc.unpin(a) - return 1 - return 0 - except Exception: - return 2 - - return func - - def test_malloc_and_pin(self): - res = self.run('malloc_and_pin') - assert res == self.GC_CAN_MALLOC_AND_PIN - def define_resizable_buffer(cls): from pypy.rpython.lltypesystem.rstr import STR @@ -1438,7 +1417,6 @@ class TestMiniMarkGC(TestSemiSpaceGC): gcpolicy = "minimark" should_be_moving = True - GC_CAN_MALLOC_AND_PIN = True GC_CAN_SHRINK_ARRAY = True def test_gc_heap_stats(self): diff --git a/pypy/translator/exceptiontransform.py b/pypy/translator/exceptiontransform.py --- a/pypy/translator/exceptiontransform.py +++ b/pypy/translator/exceptiontransform.py @@ -1,14 +1,13 @@ from pypy.translator.simplify import join_blocks, cleanup_graph from pypy.translator.unsimplify import copyvar, varoftype from pypy.translator.unsimplify import 
insert_empty_block, split_block -from pypy.translator.backendopt import canraise, inline, support, removenoops +from pypy.translator.backendopt import canraise, inline from pypy.objspace.flow.model import Block, Constant, Variable, Link, \ - c_last_exception, SpaceOperation, checkgraph, FunctionGraph, mkentrymap + c_last_exception, SpaceOperation, FunctionGraph, mkentrymap from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.rpython.ootypesystem import ootype from pypy.rpython.lltypesystem import lloperation from pypy.rpython import rtyper -from pypy.rpython import rclass from pypy.rpython.rmodel import inputconst from pypy.rlib.rarithmetic import r_uint, r_longlong, r_ulonglong from pypy.rlib.rarithmetic import r_singlefloat @@ -255,9 +254,7 @@ len(block.operations) and (block.exits[0].args[0].concretetype is lltype.Void or block.exits[0].args[0] is block.operations[-1].result) and - block.operations[-1].opname not in ('malloc', # special cases - 'malloc_and_pin', - 'malloc_varsize_and_pin')): + block.operations[-1].opname != 'malloc'): # special cases last_operation -= 1 lastblock = block for i in range(last_operation, -1, -1): @@ -438,14 +435,6 @@ flavor = spaceop.args[1].value['flavor'] if flavor == 'gc': insert_zeroing_op = True - elif spaceop.opname in ['malloc_and_pin', 'malloc_varsize_and_pin']: - # xxx we cannot insert zero_gc_pointers_inside after - # malloc_and_pin, because it can return null. For now - # we simply always force the zero=True flag on - # malloc_and_pin. - c_flags = spaceop.args[1] - c_flags.value = c_flags.value.copy() - spaceop.args[1].value['zero'] = True # NB. 
when inserting more special-cases here, keep in mind that # you also need to list the opnames in transform_block() # (see "special cases") From noreply at buildbot.pypy.org Wed May 9 21:19:13 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 9 May 2012 21:19:13 +0200 (CEST) Subject: [pypy-commit] pypy gc-minimark-pinning: use direct pinning in rffi, leave few unclear comments about the JIT, looks good Message-ID: <20120509191913.798F482E46@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: gc-minimark-pinning Changeset: r54992:96708b4d184f Date: 2012-05-08 17:50 +0200 http://bitbucket.org/pypy/pypy/changeset/96708b4d184f/ Log: use direct pinning in rffi, leave few unclear comments about the JIT, looks good diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -703,7 +703,8 @@ i += 1 return assert_str0(b.build()) - # str -> char* + # str -> char*, bool, bool + # XXX is still the JIT unhappy with that? investigate # Can't inline this because of the raw address manipulation. @jit.dont_look_inside def get_nonmovingbuffer(data): @@ -729,7 +730,8 @@ return cast(TYPEP, data_start), pinned, False get_nonmovingbuffer._annenforceargs_ = [strtype] - # (str, char*) -> None + # (str, char*, bool, bool) -> None + # XXX is still the JIT unhappy with that? investigate # Can't inline this because of the raw address manipulation. @jit.dont_look_inside def free_nonmovingbuffer(data, buf, pinned, is_raw): @@ -748,34 +750,41 @@ keepalive_until_here(data) free_nonmovingbuffer._annenforceargs_ = [strtype, None, bool, bool] - # int -> (char*, str) + # int -> (char*, str, bool) def alloc_buffer(count): """ - Returns a (raw_buffer, gc_buffer) pair, allocated with count bytes. - The raw_buffer can be safely passed to a native function which expects - it to not move. Call str_from_buffer with the returned values to get a - safe high-level string. 
When the garbage collector cooperates, this + Returns a (raw_buffer, gc_buffer, pinned) triplet, allocated + with count bytes. The raw_buffer can be safely passed to a + native function which expects it to not move. Call + str_from_buffer with the returned values to get a safe + high-level string. When the garbage collector cooperates, this allows for the process to be performed without an extra copy. - Make sure to call keep_buffer_alive_until_here on the returned values. + Make sure to call keep_buffer_alive_until_here on the returned + values. - Right now this is a version optimized for minimark which can pin values - in the nursery. + Right now this is a version optimized for minimark which can + pin values in the nursery. """ - ll_s = rgc.malloc_and_pin(STRTYPE, count) - if not ll_s: - raw_buf = lltype.malloc(TYPEP.TO, count, flavor='raw') - return raw_buf, lltype.nullptr(STRTYPE) + ll_s = lltype.malloc(STRTYPE, count) + if rgc.can_move(ll_s): + pinned = rgc.pin(ll_s) + if not pinned: + raw_buf = lltype.malloc(TYPEP.TO, count, flavor='raw') + return raw_buf, lltype.nullptr(STRTYPE), False else: - data_start = cast_ptr_to_adr(ll_s) + \ - offsetof(STRTYPE, 'chars') + itemoffsetof(STRTYPE.chars, 0) - raw_buf = cast(TYPEP, data_start) - return raw_buf, ll_s + pinned = False + data_start = cast_ptr_to_adr(ll_s) + \ + offsetof(STRTYPE, 'chars') + itemoffsetof(STRTYPE.chars, 0) + raw_buf = cast(TYPEP, data_start) + return raw_buf, ll_s, pinned alloc_buffer._always_inline_ = True # to get rid of the returned tuple alloc_buffer._annenforceargs_ = [int] - # (char*, str, int, int) -> None + # (char*, str, int, int, bool) -> None + # XXX is still the JIT unhappy with that? investigate @jit.dont_look_inside - def str_from_buffer(raw_buf, gc_buf, allocated_size, needed_size): + def str_from_buffer(raw_buf, gc_buf, allocated_size, needed_size, + is_pinned): """ Converts from a pair returned by alloc_buffer to a high-level string. 
The returned string will be truncated to needed_size. @@ -783,7 +792,8 @@ assert allocated_size >= needed_size if gc_buf: - rgc.unpin(gc_buf) + if is_pinned: + rgc.unpin(gc_buf) if allocated_size != needed_size: gc_buf = rgc.ll_shrink_array(gc_buf, needed_size) return hlstrtype(gc_buf) @@ -803,6 +813,7 @@ return hlstrtype(new_buf) # (char*, str) -> None + # XXX is the JIT unhappy? @jit.dont_look_inside def keep_buffer_alive_until_here(raw_buf, gc_buf): """ @@ -1088,23 +1099,25 @@ def __init__(self, size): self.size = size def __enter__(self): - self.raw, self.gc_buf = alloc_buffer(self.size) + self.raw, self.gc_buf, self.pinned = alloc_buffer(self.size) return self def __exit__(self, *args): keep_buffer_alive_until_here(self.raw, self.gc_buf) def str(self, length): - return str_from_buffer(self.raw, self.gc_buf, self.size, length) + return str_from_buffer(self.raw, self.gc_buf, self.size, length, + self.pinned) class scoped_alloc_unicodebuffer: def __init__(self, size): self.size = size def __enter__(self): - self.raw, self.gc_buf = alloc_unicodebuffer(self.size) + self.raw, self.gc_buf, self.pinned = alloc_unicodebuffer(self.size) return self def __exit__(self, *args): keep_unicodebuffer_alive_until_here(self.raw, self.gc_buf) def str(self, length): - return unicode_from_buffer(self.raw, self.gc_buf, self.size, length) + return unicode_from_buffer(self.raw, self.gc_buf, self.size, length, + self.pinned) # You would have to have a *huge* amount of data for this to block long enough # to be worth it to release the GIL. 
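The scoped_alloc_buffer helper changed above follows the standard Python context-manager pattern: allocate on __enter__, keep the buffer alive until __exit__, and extract a string of the needed size via .str(length). A simplified pure-Python sketch of that shape (illustrative only — the names mirror the rffi API, but plain Python cannot express the raw, non-moving memory the real helpers manage):

```python
# Sketch of the scoped-buffer pattern; bytearray stands in for the
# pinned/raw buffer that the real rffi implementation allocates.

class ScopedAllocBuffer(object):
    """Allocate a writable buffer on __enter__, keep it alive for the
    duration of the 'with' block, and truncate to the bytes actually
    used via .str(length)."""

    def __init__(self, size):
        self.size = size

    def __enter__(self):
        self.raw = bytearray(self.size)   # stands in for the raw buffer
        return self

    def __exit__(self, *args):
        self.raw = None   # stands in for keep_buffer_alive_until_here()

    def str(self, length):
        assert 0 <= length <= self.size   # mirrors str_from_buffer's check
        return bytes(self.raw[:length])


def read_exactly(data, count):
    """Toy analogue of os_read_llimpl below: fill a scoped buffer,
    then shrink it to the number of bytes actually 'read'."""
    with ScopedAllocBuffer(count) as buf:
        got = min(count, len(data))
        buf.raw[:got] = data[:got]
        return buf.str(got)
```

The point of the pattern is that callers no longer need the manual try/finally dance with keep_buffer_alive_until_here(); the __exit__ hook does it unconditionally.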
diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py
--- a/pypy/rpython/module/ll_os.py
+++ b/pypy/rpython/module/ll_os.py
@@ -880,20 +880,15 @@
                               [rffi.INT, rffi.VOIDP, rffi.SIZE_T],
                               rffi.SIZE_T)
 
-        offset = offsetof(STR, 'chars') + itemoffsetof(STR.chars, 0)
-
         def os_read_llimpl(fd, count):
             if count < 0:
                 raise OSError(errno.EINVAL, None)
-            raw_buf, gc_buf = rffi.alloc_buffer(count)
-            try:
-                void_buf = rffi.cast(rffi.VOIDP, raw_buf)
+            with rffi.scoped_alloc_buffer(count) as buf:
+                void_buf = rffi.cast(rffi.VOIDP, buf.raw)
                 got = rffi.cast(lltype.Signed, os_read(fd, void_buf, count))
                 if got < 0:
                     raise OSError(rposix.get_errno(), "os_read failed")
-                return rffi.str_from_buffer(raw_buf, gc_buf, count, got)
-            finally:
-                rffi.keep_buffer_alive_until_here(raw_buf, gc_buf)
+                return buf.str(got)
 
         def os_read_oofakeimpl(fd, count):
             return OOSupport.to_rstr(os.read(fd, count))

From noreply at buildbot.pypy.org  Wed May  9 21:21:29 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Wed, 9 May 2012 21:21:29 +0200 (CEST)
Subject: [pypy-commit] pypy default: A failing test about weak key dictionaries.
Message-ID: <20120509192129.AF82F82E46@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: 
Changeset: r54993:e37769dd2e8e
Date: 2012-05-09 20:13 +0200
http://bitbucket.org/pypy/pypy/changeset/e37769dd2e8e/

Log:	A failing test about weak key dictionaries.
diff --git a/pypy/rlib/test/test_rweakkeydict.py b/pypy/rlib/test/test_rweakkeydict.py --- a/pypy/rlib/test/test_rweakkeydict.py +++ b/pypy/rlib/test/test_rweakkeydict.py @@ -135,3 +135,31 @@ d = RWeakKeyDictionary(KX, VY) d.set(KX(), VX()) py.test.raises(Exception, interpret, g, [1]) + + +def test_rpython_free_values(): + class VXDel: + def __del__(self): + state.freed.append(1) + class State: + pass + state = State() + state.freed = [] + # + def add_me(): + k = KX() + v = VXDel() + d = RWeakKeyDictionary(KX, VXDel) + d.set(k, v) + return d + def f(): + del state.freed[:] + d = add_me() + rgc.collect() + # we want the dictionary to be really empty here. It's hard to + # ensure in the current implementation after just one collect(), + # but at least two collects should be enough. + rgc.collect() + return len(state.freed) + assert f() == 1 + assert interpret(f, []) == 1 From noreply at buildbot.pypy.org Wed May 9 21:21:30 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 May 2012 21:21:30 +0200 (CEST) Subject: [pypy-commit] pypy default: More messy than it looks :-( Will try another approach not Message-ID: <20120509192130.F04A582E46@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r54994:84c9def63fc4 Date: 2012-05-09 20:36 +0200 http://bitbucket.org/pypy/pypy/changeset/84c9def63fc4/ Log: More messy than it looks :-( Will try another approach not involving using RWeakKeyDictionary at all... diff --git a/pypy/rlib/test/test_rweakkeydict.py b/pypy/rlib/test/test_rweakkeydict.py --- a/pypy/rlib/test/test_rweakkeydict.py +++ b/pypy/rlib/test/test_rweakkeydict.py @@ -138,6 +138,7 @@ def test_rpython_free_values(): + import py; py.test.skip("XXX not implemented, messy") class VXDel: def __del__(self): state.freed.append(1) From noreply at buildbot.pypy.org Wed May 9 21:21:32 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 May 2012 21:21:32 +0200 (CEST) Subject: [pypy-commit] pypy default: Tweak again thread._local. 
Cannot use RWeakKeyDictionary in its current Message-ID: <20120509192132.4567582E46@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r54995:f86cddbfc751 Date: 2012-05-09 21:21 +0200 http://bitbucket.org/pypy/pypy/changeset/f86cddbfc751/ Log: Tweak again thread._local. Cannot use RWeakKeyDictionary in its current implementation, because of the warning in rlib._rweakkeydict:13. It would mean that the __dict__ of some long-dead threads might remain around for an unknown amount of time. diff --git a/pypy/module/thread/__init__.py b/pypy/module/thread/__init__.py --- a/pypy/module/thread/__init__.py +++ b/pypy/module/thread/__init__.py @@ -18,7 +18,7 @@ 'allocate_lock': 'os_lock.allocate_lock', 'allocate': 'os_lock.allocate_lock', # obsolete synonym 'LockType': 'os_lock.Lock', - #'_local': 'os_local.Local', # only if 'rweakref' + '_local': 'os_local.Local', 'error': 'space.fromcache(error.Cache).w_error', } @@ -34,8 +34,3 @@ from pypy.module.posix.interp_posix import add_fork_hook from pypy.module.thread.os_thread import reinit_threads add_fork_hook('child', reinit_threads) - - def setup_after_space_initialization(self): - """NOT_RPYTHON""" - if self.space.config.translation.rweakref: - self.extra_interpdef('_local', 'os_local.Local') diff --git a/pypy/module/thread/os_local.py b/pypy/module/thread/os_local.py --- a/pypy/module/thread/os_local.py +++ b/pypy/module/thread/os_local.py @@ -1,16 +1,19 @@ -from pypy.rlib.rweakref import RWeakKeyDictionary +import weakref from pypy.interpreter.baseobjspace import Wrappable, W_Root from pypy.interpreter.executioncontext import ExecutionContext from pypy.interpreter.typedef import (TypeDef, interp2app, GetSetProperty, descr_get_dict) +ExecutionContext._thread_local_objs = None + + class Local(Wrappable): """Thread-local data""" def __init__(self, space, initargs): self.initargs = initargs - self.dicts = RWeakKeyDictionary(ExecutionContext, W_Root) + self.dicts = {} # mapping ExecutionContexts to the wraped 
dict # The app-level __init__() will be called by the general # instance-creation logic. It causes getdict() to be # immediately called. If we don't prepare and set a w_dict @@ -18,15 +21,24 @@ # to call __init__() a second time. ec = space.getexecutioncontext() w_dict = space.newdict(instance=True) - self.dicts.set(ec, w_dict) + self.dicts[ec] = w_dict + self._register_in_ec(ec) + + def _register_in_ec(self, ec): + if not ec.space.config.translation.rweakref: + return # without weakrefs, works but 'dicts' is never cleared + if ec._thread_local_objs is None: + ec._thread_local_objs = [] + ec._thread_local_objs.append(weakref.ref(self)) def getdict(self, space): ec = space.getexecutioncontext() - w_dict = self.dicts.get(ec) - if w_dict is None: + try: + w_dict = self.dicts[ec] + except KeyError: # create a new dict for this thread w_dict = space.newdict(instance=True) - self.dicts.set(ec, w_dict) + self.dicts[ec] = w_dict # call __init__ try: w_self = space.wrap(self) @@ -35,9 +47,10 @@ space.call_obj_args(w_init, w_self, self.initargs) except: # failed, forget w_dict and propagate the exception - self.dicts.set(ec, None) + del self.dicts[ec] raise # ready + self._register_in_ec(ec) return w_dict def descr_local__new__(space, w_subtype, __args__): @@ -55,3 +68,13 @@ __init__ = interp2app(Local.descr_local__init__), __dict__ = GetSetProperty(descr_get_dict, cls=Local), ) + +def thread_is_stopping(ec): + tlobjs = ec._thread_local_objs + if tlobjs is None: + return + ec._thread_local_objs = None + for wref in tlobjs: + local = wref() + if local is not None: + del local.dicts[ec] diff --git a/pypy/module/thread/threadlocals.py b/pypy/module/thread/threadlocals.py --- a/pypy/module/thread/threadlocals.py +++ b/pypy/module/thread/threadlocals.py @@ -54,4 +54,8 @@ def leave_thread(self, space): "Notification that the current thread is about to stop." 
- self.setvalue(None) + from pypy.module.thread.os_local import thread_is_stopping + try: + thread_is_stopping(self.getvalue()) + finally: + self.setvalue(None) From noreply at buildbot.pypy.org Wed May 9 22:06:52 2012 From: noreply at buildbot.pypy.org (arigo) Date: Wed, 9 May 2012 22:06:52 +0200 (CEST) Subject: [pypy-commit] pypy default: JIT fixes. Message-ID: <20120509200652.709BF82E46@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r54996:09fc45e24db4 Date: 2012-05-09 20:03 +0000 http://bitbucket.org/pypy/pypy/changeset/09fc45e24db4/ Log: JIT fixes. diff --git a/pypy/module/thread/os_local.py b/pypy/module/thread/os_local.py --- a/pypy/module/thread/os_local.py +++ b/pypy/module/thread/os_local.py @@ -1,4 +1,5 @@ import weakref +from pypy.rlib import jit from pypy.interpreter.baseobjspace import Wrappable, W_Root from pypy.interpreter.executioncontext import ExecutionContext from pypy.interpreter.typedef import (TypeDef, interp2app, GetSetProperty, @@ -11,6 +12,7 @@ class Local(Wrappable): """Thread-local data""" + @jit.dont_look_inside def __init__(self, space, initargs): self.initargs = initargs self.dicts = {} # mapping ExecutionContexts to the wraped dict @@ -31,26 +33,32 @@ ec._thread_local_objs = [] ec._thread_local_objs.append(weakref.ref(self)) + @jit.dont_look_inside + def create_new_dict(self, ec): + # create a new dict for this thread + space = ec.space + w_dict = space.newdict(instance=True) + self.dicts[ec] = w_dict + # call __init__ + try: + w_self = space.wrap(self) + w_type = space.type(w_self) + w_init = space.getattr(w_type, space.wrap("__init__")) + space.call_obj_args(w_init, w_self, self.initargs) + except: + # failed, forget w_dict and propagate the exception + del self.dicts[ec] + raise + # ready + self._register_in_ec(ec) + return w_dict + def getdict(self, space): ec = space.getexecutioncontext() try: w_dict = self.dicts[ec] except KeyError: - # create a new dict for this thread - w_dict = 
space.newdict(instance=True) - self.dicts[ec] = w_dict - # call __init__ - try: - w_self = space.wrap(self) - w_type = space.type(w_self) - w_init = space.getattr(w_type, space.wrap("__init__")) - space.call_obj_args(w_init, w_self, self.initargs) - except: - # failed, forget w_dict and propagate the exception - del self.dicts[ec] - raise - # ready - self._register_in_ec(ec) + w_dict = self.create_new_dict(ec) return w_dict def descr_local__new__(space, w_subtype, __args__): From noreply at buildbot.pypy.org Wed May 9 22:55:49 2012 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 9 May 2012 22:55:49 +0200 (CEST) Subject: [pypy-commit] pypy default: call close Message-ID: <20120509205549.B4FBF82CE0@wyvern.cs.uni-duesseldorf.de> Author: Matti Picus Branch: Changeset: r54997:30e8354eba05 Date: 2012-05-09 19:02 +0300 http://bitbucket.org/pypy/pypy/changeset/30e8354eba05/ Log: call close diff --git a/pypy/module/mmap/test/test_mmap.py b/pypy/module/mmap/test/test_mmap.py --- a/pypy/module/mmap/test/test_mmap.py +++ b/pypy/module/mmap/test/test_mmap.py @@ -548,6 +548,8 @@ assert len(b) == 6 assert b[3] == "b" assert b[:] == "foobar" + m.close() + f.close() def test_offset(self): from mmap import mmap, ALLOCATIONGRANULARITY From noreply at buildbot.pypy.org Wed May 9 22:55:51 2012 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 9 May 2012 22:55:51 +0200 (CEST) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20120509205551.0B83582CE1@wyvern.cs.uni-duesseldorf.de> Author: Matti Picus Branch: Changeset: r54998:d8e00a3ec08d Date: 2012-05-09 23:55 +0300 http://bitbucket.org/pypy/pypy/changeset/d8e00a3ec08d/ Log: merge heads diff --git a/pypy/module/thread/__init__.py b/pypy/module/thread/__init__.py --- a/pypy/module/thread/__init__.py +++ b/pypy/module/thread/__init__.py @@ -18,7 +18,7 @@ 'allocate_lock': 'os_lock.allocate_lock', 'allocate': 'os_lock.allocate_lock', # obsolete synonym 'LockType': 'os_lock.Lock', - #'_local': 'os_local.Local', # only 
if 'rweakref' + '_local': 'os_local.Local', 'error': 'space.fromcache(error.Cache).w_error', } @@ -34,8 +34,3 @@ from pypy.module.posix.interp_posix import add_fork_hook from pypy.module.thread.os_thread import reinit_threads add_fork_hook('child', reinit_threads) - - def setup_after_space_initialization(self): - """NOT_RPYTHON""" - if self.space.config.translation.rweakref: - self.extra_interpdef('_local', 'os_local.Local') diff --git a/pypy/module/thread/os_local.py b/pypy/module/thread/os_local.py --- a/pypy/module/thread/os_local.py +++ b/pypy/module/thread/os_local.py @@ -1,16 +1,21 @@ -from pypy.rlib.rweakref import RWeakKeyDictionary +import weakref +from pypy.rlib import jit from pypy.interpreter.baseobjspace import Wrappable, W_Root from pypy.interpreter.executioncontext import ExecutionContext from pypy.interpreter.typedef import (TypeDef, interp2app, GetSetProperty, descr_get_dict) +ExecutionContext._thread_local_objs = None + + class Local(Wrappable): """Thread-local data""" + @jit.dont_look_inside def __init__(self, space, initargs): self.initargs = initargs - self.dicts = RWeakKeyDictionary(ExecutionContext, W_Root) + self.dicts = {} # mapping ExecutionContexts to the wraped dict # The app-level __init__() will be called by the general # instance-creation logic. It causes getdict() to be # immediately called. If we don't prepare and set a w_dict @@ -18,26 +23,42 @@ # to call __init__() a second time. 
ec = space.getexecutioncontext() w_dict = space.newdict(instance=True) - self.dicts.set(ec, w_dict) + self.dicts[ec] = w_dict + self._register_in_ec(ec) + + def _register_in_ec(self, ec): + if not ec.space.config.translation.rweakref: + return # without weakrefs, works but 'dicts' is never cleared + if ec._thread_local_objs is None: + ec._thread_local_objs = [] + ec._thread_local_objs.append(weakref.ref(self)) + + @jit.dont_look_inside + def create_new_dict(self, ec): + # create a new dict for this thread + space = ec.space + w_dict = space.newdict(instance=True) + self.dicts[ec] = w_dict + # call __init__ + try: + w_self = space.wrap(self) + w_type = space.type(w_self) + w_init = space.getattr(w_type, space.wrap("__init__")) + space.call_obj_args(w_init, w_self, self.initargs) + except: + # failed, forget w_dict and propagate the exception + del self.dicts[ec] + raise + # ready + self._register_in_ec(ec) + return w_dict def getdict(self, space): ec = space.getexecutioncontext() - w_dict = self.dicts.get(ec) - if w_dict is None: - # create a new dict for this thread - w_dict = space.newdict(instance=True) - self.dicts.set(ec, w_dict) - # call __init__ - try: - w_self = space.wrap(self) - w_type = space.type(w_self) - w_init = space.getattr(w_type, space.wrap("__init__")) - space.call_obj_args(w_init, w_self, self.initargs) - except: - # failed, forget w_dict and propagate the exception - self.dicts.set(ec, None) - raise - # ready + try: + w_dict = self.dicts[ec] + except KeyError: + w_dict = self.create_new_dict(ec) return w_dict def descr_local__new__(space, w_subtype, __args__): @@ -55,3 +76,13 @@ __init__ = interp2app(Local.descr_local__init__), __dict__ = GetSetProperty(descr_get_dict, cls=Local), ) + +def thread_is_stopping(ec): + tlobjs = ec._thread_local_objs + if tlobjs is None: + return + ec._thread_local_objs = None + for wref in tlobjs: + local = wref() + if local is not None: + del local.dicts[ec] diff --git a/pypy/module/thread/threadlocals.py 
b/pypy/module/thread/threadlocals.py --- a/pypy/module/thread/threadlocals.py +++ b/pypy/module/thread/threadlocals.py @@ -54,4 +54,8 @@ def leave_thread(self, space): "Notification that the current thread is about to stop." - self.setvalue(None) + from pypy.module.thread.os_local import thread_is_stopping + try: + thread_is_stopping(self.getvalue()) + finally: + self.setvalue(None) diff --git a/pypy/rlib/test/test_rweakkeydict.py b/pypy/rlib/test/test_rweakkeydict.py --- a/pypy/rlib/test/test_rweakkeydict.py +++ b/pypy/rlib/test/test_rweakkeydict.py @@ -135,3 +135,32 @@ d = RWeakKeyDictionary(KX, VY) d.set(KX(), VX()) py.test.raises(Exception, interpret, g, [1]) + + +def test_rpython_free_values(): + import py; py.test.skip("XXX not implemented, messy") + class VXDel: + def __del__(self): + state.freed.append(1) + class State: + pass + state = State() + state.freed = [] + # + def add_me(): + k = KX() + v = VXDel() + d = RWeakKeyDictionary(KX, VXDel) + d.set(k, v) + return d + def f(): + del state.freed[:] + d = add_me() + rgc.collect() + # we want the dictionary to be really empty here. It's hard to + # ensure in the current implementation after just one collect(), + # but at least two collects should be enough. 
+ rgc.collect() + return len(state.freed) + assert f() == 1 + assert interpret(f, []) == 1 From noreply at buildbot.pypy.org Wed May 9 23:15:03 2012 From: noreply at buildbot.pypy.org (fijal) Date: Wed, 9 May 2012 23:15:03 +0200 (CEST) Subject: [pypy-commit] pypy gc-minimark-pinning: fix more usages Message-ID: <20120509211503.0A5AD82CE0@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: gc-minimark-pinning Changeset: r54999:de5922dc19fa Date: 2012-05-09 23:14 +0200 http://bitbucket.org/pypy/pypy/changeset/de5922dc19fa/ Log: fix more usages diff --git a/pypy/module/bz2/interp_bz2.py b/pypy/module/bz2/interp_bz2.py --- a/pypy/module/bz2/interp_bz2.py +++ b/pypy/module/bz2/interp_bz2.py @@ -195,7 +195,7 @@ self._allocate_chunk(initial_size) def _allocate_chunk(self, size): - self.raw_buf, self.gc_buf = rffi.alloc_buffer(size) + self.raw_buf, self.gc_buf, self.is_pinned = rffi.alloc_buffer(size) self.current_size = size self.bzs.c_next_out = self.raw_buf rffi.setintfield(self.bzs, 'c_avail_out', size) @@ -204,7 +204,8 @@ assert 0 <= chunksize <= self.current_size raw_buf = self.raw_buf gc_buf = self.gc_buf - s = rffi.str_from_buffer(raw_buf, gc_buf, self.current_size, chunksize) + s = rffi.str_from_buffer(raw_buf, gc_buf, self.current_size, chunksize, + self.is_pinned) rffi.keep_buffer_alive_until_here(raw_buf, gc_buf) self.current_size = 0 return s diff --git a/pypy/rlib/rsocket.py b/pypy/rlib/rsocket.py --- a/pypy/rlib/rsocket.py +++ b/pypy/rlib/rsocket.py @@ -934,13 +934,10 @@ if timeout == 1: raise SocketTimeout elif timeout == 0: - raw_buf, gc_buf = rffi.alloc_buffer(buffersize) - try: - read_bytes = _c.socketrecv(self.fd, raw_buf, buffersize, flags) + with rffi.scoped_alloc_buffer(buffersize) as buf: + read_bytes = _c.socketrecv(self.fd, buf.raw, buffersize, flags) if read_bytes >= 0: - return rffi.str_from_buffer(raw_buf, gc_buf, buffersize, read_bytes) - finally: - rffi.keep_buffer_alive_until_here(raw_buf, gc_buf) + return buf.str(read_bytes) 
raise self.error_handler() def recvinto(self, rwbuffer, nbytes, flags=0): @@ -956,11 +953,11 @@ if timeout == 1: raise SocketTimeout elif timeout == 0: - raw_buf, gc_buf = rffi.alloc_buffer(buffersize) - try: + with rffi.scoped_alloc_buffer(buffersize) as buf: address, addr_p, addrlen_p = self._addrbuf() try: - read_bytes = _c.recvfrom(self.fd, raw_buf, buffersize, flags, + read_bytes = _c.recvfrom(self.fd, buf.raw, + buffersize, flags, addr_p, addrlen_p) addrlen = rffi.cast(lltype.Signed, addrlen_p[0]) finally: @@ -971,10 +968,7 @@ address.addrlen = addrlen else: address = None - data = rffi.str_from_buffer(raw_buf, gc_buf, buffersize, read_bytes) - return (data, address) - finally: - rffi.keep_buffer_alive_until_here(raw_buf, gc_buf) + return (buf.str(read_bytes), address) raise self.error_handler() def recvfrom_into(self, rwbuffer, nbytes, flags=0): From noreply at buildbot.pypy.org Thu May 10 10:04:13 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 May 2012 10:04:13 +0200 (CEST) Subject: [pypy-commit] pypy default: Add a generally useful auto-shrinking list type. Message-ID: <20120510080413.A611582F51@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r55000:083a5889c293 Date: 2012-05-10 08:10 +0200 http://bitbucket.org/pypy/pypy/changeset/083a5889c293/ Log: Add a generally useful auto-shrinking list type. 
diff --git a/pypy/rlib/rshrinklist.py b/pypy/rlib/rshrinklist.py
new file mode 100644
--- /dev/null
+++ b/pypy/rlib/rshrinklist.py
@@ -0,0 +1,27 @@
+
+class AbstractShrinkList(object):
+    _mixin_ = True
+
+    def __init__(self):
+        self._list = []
+        self._next_shrink = 16
+
+    def append(self, x):
+        self._do_shrink()
+        self._list.append(x)
+
+    def items(self):
+        return self._list
+
+    def _do_shrink(self):
+        if len(self._list) >= self._next_shrink:
+            rest = 0
+            for x in self._list:
+                if self.must_keep(x):
+                    self._list[rest] = x
+                    rest += 1
+            del self._list[rest:]
+            self._next_shrink = 16 + 2 * rest
+
+    def must_keep(self, x):
+        raise NotImplementedError
diff --git a/pypy/rlib/test/test_rshrinklist.py b/pypy/rlib/test/test_rshrinklist.py
new file mode 100644
--- /dev/null
+++ b/pypy/rlib/test/test_rshrinklist.py
@@ -0,0 +1,30 @@
+from pypy.rlib.rshrinklist import AbstractShrinkList
+
+class Item:
+    alive = True
+
+class ItemList(AbstractShrinkList):
+    def must_keep(self, x):
+        return x.alive
+
+def test_simple():
+    l = ItemList()
+    l2 = [Item() for i in range(150)]
+    for x in l2:
+        l.append(x)
+    assert l.items() == l2
+    #
+    for x in l2[::2]:
+        x.alive = False
+    l3 = [Item() for i in range(150 + 16)]
+    for x in l3:
+        l.append(x)
+    assert l.items() == l2[1::2] + l3
+
+def test_append_dead_items():
+    l = ItemList()
+    for i in range(150):
+        x = Item()
+        l.append(x)
+        x.alive = False
+    assert len(l.items()) <= 16

From noreply at buildbot.pypy.org  Thu May 10 10:04:15 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Thu, 10 May 2012 10:04:15 +0200 (CEST)
Subject: [pypy-commit] pypy default: Comments
Message-ID: <20120510080415.07D1B82F51@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: 
Changeset: r55001:7af5293034d4
Date: 2012-05-10 08:42 +0200
http://bitbucket.org/pypy/pypy/changeset/7af5293034d4/

Log:	Comments

diff --git a/pypy/rlib/rshrinklist.py b/pypy/rlib/rshrinklist.py
--- a/pypy/rlib/rshrinklist.py
+++ b/pypy/rlib/rshrinklist.py
@@ -1,5 +1,12 @@
 
 class AbstractShrinkList(object):
+    """A mixin base class. You should subclass it and add a method
+    must_keep(). Behaves like a list with the method append(), and
+    you can read *for reading* the list of items by calling items().
+    The twist is that occasionally append() will throw away the
+    items for which must_keep() returns False. (It does so without
+    changing the order.)
+    """
     _mixin_ = True
 
     def __init__(self):
diff --git a/pypy/rlib/test/test_rshrinklist.py b/pypy/rlib/test/test_rshrinklist.py
--- a/pypy/rlib/test/test_rshrinklist.py
+++ b/pypy/rlib/test/test_rshrinklist.py
@@ -19,7 +19,7 @@
     l3 = [Item() for i in range(150 + 16)]
     for x in l3:
         l.append(x)
-    assert l.items() == l2[1::2] + l3
+    assert l.items() == l2[1::2] + l3  # keeps the order
 
 def test_append_dead_items():
     l = ItemList()

From noreply at buildbot.pypy.org  Thu May 10 10:04:16 2012
From: noreply at buildbot.pypy.org (arigo)
Date: Thu, 10 May 2012 10:04:16 +0200 (CEST)
Subject: [pypy-commit] pypy default: Use an AbstractShrinkList instead of this ever-growing list.
Message-ID: <20120510080416.5500B82F51@wyvern.cs.uni-duesseldorf.de>

Author: Armin Rigo
Branch: 
Changeset: r55002:cc70e7e0ae58
Date: 2012-05-10 08:42 +0200
http://bitbucket.org/pypy/pypy/changeset/cc70e7e0ae58/

Log:	Use an AbstractShrinkList instead of this ever-growing list.
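The amortized behaviour of the AbstractShrinkList introduced above can be demonstrated with a plain-Python rendition (a re-implementation for illustration, not the rlib module itself; a pass counter is added to show how rarely the O(n) pruning actually runs):

```python
class ShrinkList(object):
    """Plain-Python copy of the AbstractShrinkList logic, instrumented
    with a counter of how many pruning passes have run."""

    def __init__(self, must_keep):
        self._list = []
        self._next_shrink = 16
        self.must_keep = must_keep
        self.shrink_passes = 0

    def append(self, x):
        if len(self._list) >= self._next_shrink:
            # occasional O(n) pass: keep only the live items, in order
            self.shrink_passes += 1
            self._list = [item for item in self._list if self.must_keep(item)]
            self._next_shrink = 16 + 2 * len(self._list)
        self._list.append(x)

    def items(self):
        return self._list


# All items dead: the list length stays bounded no matter how many
# appends happen.
l = ShrinkList(lambda x: False)
for i in range(10000):
    l.append(i)
assert len(l.items()) <= 16

# All items alive: the threshold roughly doubles after each pass, so
# only O(log n) pruning passes run over n appends.
l2 = ShrinkList(lambda x: True)
for i in range(10000):
    l2.append(i)
assert len(l2.items()) == 10000
assert l2.shrink_passes < 15
```

The threshold update `16 + 2 * rest` is what makes this geometric: each pruning pass "pays" for at least as many cheap appends as the pass itself cost, so append() is amortized O(1).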
diff --git a/pypy/module/thread/os_local.py b/pypy/module/thread/os_local.py --- a/pypy/module/thread/os_local.py +++ b/pypy/module/thread/os_local.py @@ -3,6 +3,11 @@ from pypy.interpreter.executioncontext import ExecutionContext from pypy.interpreter.typedef import (TypeDef, interp2app, GetSetProperty, descr_get_dict) +from pypy.rlib.rshrinklist import AbstractShrinkList + +class WRefShrinkList(AbstractShrinkList): + def must_keep(self, wref): + return wref() is not None ExecutionContext._thread_local_objs = None @@ -28,7 +33,7 @@ if not ec.space.config.translation.rweakref: return # without weakrefs, works but 'dicts' is never cleared if ec._thread_local_objs is None: - ec._thread_local_objs = [] + ec._thread_local_objs = WRefShrinkList() ec._thread_local_objs.append(weakref.ref(self)) def getdict(self, space): @@ -74,7 +79,7 @@ if tlobjs is None: return ec._thread_local_objs = None - for wref in tlobjs: + for wref in tlobjs.items(): local = wref() if local is not None: del local.dicts[ec] From noreply at buildbot.pypy.org Thu May 10 10:04:17 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 May 2012 10:04:17 +0200 (CEST) Subject: [pypy-commit] pypy default: Use here too an AbstractShrinkList instead of this ever-growing Message-ID: <20120510080417.A5D5482F51@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r55003:ea21f0cc18c3 Date: 2012-05-10 08:43 +0200 http://bitbucket.org/pypy/pypy/changeset/ea21f0cc18c3/ Log: Use here too an AbstractShrinkList instead of this ever-growing list. Some refactorings are needed because we cannot any more cache by index some entries of the list. 
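The must_keep() criterion of the WRefShrinkList above relies on weakref.ref() returning None once its referent has been collected. A standalone illustration (plain CPython, where refcounting frees an object as soon as its last reference disappears; on PyPy itself an explicit gc.collect() would be needed first):

```python
import weakref

class Obj(object):
    pass

live = Obj()
dead = Obj()
refs = [weakref.ref(live), weakref.ref(dead)]

del dead   # on CPython, the object is freed here by refcounting

# This is exactly the must_keep() test: a wref whose referent is gone
# calls back as None and gets pruned.
kept = [wr for wr in refs if wr() is not None]

assert len(kept) == 1
assert kept[0]() is live
```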
diff --git a/pypy/module/_weakref/interp__weakref.py b/pypy/module/_weakref/interp__weakref.py --- a/pypy/module/_weakref/interp__weakref.py +++ b/pypy/module/_weakref/interp__weakref.py @@ -4,25 +4,50 @@ from pypy.interpreter.gateway import interp2app, ObjSpace from pypy.interpreter.typedef import TypeDef from pypy.rlib import jit +from pypy.rlib.rshrinklist import AbstractShrinkList +from pypy.rlib.objectmodel import specialize import weakref +class WRefShrinkList(AbstractShrinkList): + def must_keep(self, wref): + return wref() is not None + + class WeakrefLifeline(W_Root): - cached_weakref_index = -1 - cached_proxy_index = -1 + cached_weakref = None + cached_proxy = None + other_refs_weak = None def __init__(self, space): self.space = space - self.refs_weak = [] + + def append_wref_to(self, w_ref): + if self.other_refs_weak is None: + self.other_refs_weak = WRefShrinkList() + self.other_refs_weak.append(weakref.ref(w_ref)) + + @specialize.arg(1) + def traverse(self, callback, arg=None): + if self.cached_weakref is not None: + arg = callback(self, self.cached_weakref, arg) + if self.cached_proxy is not None: + arg = callback(self, self.cached_proxy, arg) + if self.other_refs_weak is not None: + for ref_w_ref in self.other_refs_weak.items(): + arg = callback(self, ref_w_ref, arg) + return arg + + def _clear_wref(self, wref, _): + w_ref = wref() + if w_ref is not None: + w_ref.clear() def clear_all_weakrefs(self): """Clear all weakrefs. This is called when an app-level object has a __del__, just before the app-level __del__ method is called. """ - for ref_w_ref in self.refs_weak: - w_ref = ref_w_ref() - if w_ref is not None: - w_ref.clear() + self.traverse(WeakrefLifeline._clear_wref) # Note that for no particular reason other than convenience, # weakref callbacks are not invoked eagerly here. They are # invoked by self.__del__() anyway. 
@@ -30,49 +55,46 @@ def get_or_make_weakref(self, w_subtype, w_obj): space = self.space w_weakreftype = space.gettypeobject(W_Weakref.typedef) - is_weakreftype = space.is_w(w_weakreftype, w_subtype) - if is_weakreftype and self.cached_weakref_index >= 0: - w_cached = self.refs_weak[self.cached_weakref_index]() - if w_cached is not None: - return w_cached - else: - self.cached_weakref_index = -1 - w_ref = space.allocate_instance(W_Weakref, w_subtype) - index = len(self.refs_weak) - W_Weakref.__init__(w_ref, space, w_obj, None) - self.refs_weak.append(weakref.ref(w_ref)) - if is_weakreftype: - self.cached_weakref_index = index + # + if space.is_w(w_weakreftype, w_subtype): + if self.cached_weakref is not None: + w_cached = self.cached_weakref() + if w_cached is not None: + return w_cached + w_ref = W_Weakref(space, w_obj, None) + self.cached_weakref = weakref.ref(w_ref) + else: + # subclass: cannot cache + w_ref = space.allocate_instance(W_Weakref, w_subtype) + W_Weakref.__init__(w_ref, space, w_obj, None) + self.append_wref_to(w_ref) return w_ref def get_or_make_proxy(self, w_obj): space = self.space - if self.cached_proxy_index >= 0: - w_cached = self.refs_weak[self.cached_proxy_index]() + if self.cached_proxy is not None: + w_cached = self.cached_proxy() if w_cached is not None: return w_cached - else: - self.cached_proxy_index = -1 - index = len(self.refs_weak) if space.is_true(space.callable(w_obj)): w_proxy = W_CallableProxy(space, w_obj, None) else: w_proxy = W_Proxy(space, w_obj, None) - self.refs_weak.append(weakref.ref(w_proxy)) - self.cached_proxy_index = index + self.cached_proxy = weakref.ref(w_proxy) return w_proxy def get_any_weakref(self, space): - if self.cached_weakref_index != -1: - w_ref = self.refs_weak[self.cached_weakref_index]() + if self.cached_weakref is not None: + w_ref = self.cached_weakref() if w_ref is not None: return w_ref - w_weakreftype = space.gettypeobject(W_Weakref.typedef) - for i in range(len(self.refs_weak)): - w_ref = 
self.refs_weak[i]() - if (w_ref is not None and - space.is_true(space.isinstance(w_ref, w_weakreftype))): - return w_ref + if self.other_refs_weak is not None: + w_weakreftype = space.gettypeobject(W_Weakref.typedef) + for wref in self.other_refs_weak.items(): + w_ref = wref() + if (w_ref is not None and + space.is_true(space.isinstance(w_ref, w_weakreftype))): + return w_ref return space.w_None @@ -80,10 +102,10 @@ def __init__(self, space, oldlifeline=None): self.space = space - if oldlifeline is None: - self.refs_weak = [] - else: - self.refs_weak = oldlifeline.refs_weak + if oldlifeline is not None: + self.cached_weakref = oldlifeline.cached_weakref + self.cached_proxy = oldlifeline.cached_proxy + self.other_refs_weak = oldlifeline.other_refs_weak def __del__(self): """This runs when the interp-level object goes away, and allows @@ -91,8 +113,11 @@ callbacks even if there is no __del__ method on the interp-level W_Root subclass implementing the object. """ - for i in range(len(self.refs_weak) - 1, -1, -1): - w_ref = self.refs_weak[i]() + if self.other_refs_weak is None: + return + items = self.other_refs_weak.items() + for i in range(len(items)-1, -1, -1): + w_ref = items[i]() if w_ref is not None and w_ref.w_callable is not None: w_ref.enqueue_for_destruction(self.space, W_WeakrefBase.activate_callback, @@ -102,7 +127,7 @@ space = self.space w_ref = space.allocate_instance(W_Weakref, w_subtype) W_Weakref.__init__(w_ref, space, w_obj, w_callable) - self.refs_weak.append(weakref.ref(w_ref)) + self.append_wref_to(w_ref) return w_ref def make_proxy_with_callback(self, w_obj, w_callable): @@ -111,7 +136,7 @@ w_proxy = W_CallableProxy(space, w_obj, w_callable) else: w_proxy = W_Proxy(space, w_obj, w_callable) - self.refs_weak.append(weakref.ref(w_proxy)) + self.append_wref_to(w_proxy) return w_proxy # ____________________________________________________________ @@ -247,30 +272,33 @@ ) +def _weakref_count(lifeline, wref, count): + if wref() is not None: + count += 1 
+ return count + def getweakrefcount(space, w_obj): """Return the number of weak references to 'obj'.""" lifeline = w_obj.getweakref() if lifeline is None: return space.wrap(0) else: - result = 0 - for i in range(len(lifeline.refs_weak)): - if lifeline.refs_weak[i]() is not None: - result += 1 + result = lifeline.traverse(_weakref_count, 0) return space.wrap(result) +def _get_weakrefs(lifeline, wref, result): + w_ref = wref() + if w_ref is not None: + result.append(w_ref) + return result + def getweakrefs(space, w_obj): """Return a list of all weak reference objects that point to 'obj'.""" + result = [] lifeline = w_obj.getweakref() - if lifeline is None: - return space.newlist([]) - else: - result = [] - for i in range(len(lifeline.refs_weak)): - w_ref = lifeline.refs_weak[i]() - if w_ref is not None: - result.append(w_ref) - return space.newlist(result) + if lifeline is not None: + lifeline.traverse(_get_weakrefs, result) + return space.newlist(result) #_________________________________________________________________ # Proxy From noreply at buildbot.pypy.org Thu May 10 10:04:19 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 May 2012 10:04:19 +0200 (CEST) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20120510080419.126B682F51@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r55004:9690a8b51532 Date: 2012-05-10 08:47 +0200 http://bitbucket.org/pypy/pypy/changeset/9690a8b51532/ Log: merge heads diff --git a/pypy/module/mmap/test/test_mmap.py b/pypy/module/mmap/test/test_mmap.py --- a/pypy/module/mmap/test/test_mmap.py +++ b/pypy/module/mmap/test/test_mmap.py @@ -548,6 +548,8 @@ assert len(b) == 6 assert b[3] == "b" assert b[:] == "foobar" + m.close() + f.close() def test_offset(self): from mmap import mmap, ALLOCATIONGRANULARITY diff --git a/pypy/module/thread/os_local.py b/pypy/module/thread/os_local.py --- a/pypy/module/thread/os_local.py +++ b/pypy/module/thread/os_local.py @@ -1,4 +1,5 @@ import weakref +from 
pypy.rlib import jit from pypy.interpreter.baseobjspace import Wrappable, W_Root from pypy.interpreter.executioncontext import ExecutionContext from pypy.interpreter.typedef import (TypeDef, interp2app, GetSetProperty, @@ -16,6 +17,7 @@ class Local(Wrappable): """Thread-local data""" + @jit.dont_look_inside def __init__(self, space, initargs): self.initargs = initargs self.dicts = {} # mapping ExecutionContexts to the wraped dict @@ -36,26 +38,32 @@ ec._thread_local_objs = WRefShrinkList() ec._thread_local_objs.append(weakref.ref(self)) + @jit.dont_look_inside + def create_new_dict(self, ec): + # create a new dict for this thread + space = ec.space + w_dict = space.newdict(instance=True) + self.dicts[ec] = w_dict + # call __init__ + try: + w_self = space.wrap(self) + w_type = space.type(w_self) + w_init = space.getattr(w_type, space.wrap("__init__")) + space.call_obj_args(w_init, w_self, self.initargs) + except: + # failed, forget w_dict and propagate the exception + del self.dicts[ec] + raise + # ready + self._register_in_ec(ec) + return w_dict + def getdict(self, space): ec = space.getexecutioncontext() try: w_dict = self.dicts[ec] except KeyError: - # create a new dict for this thread - w_dict = space.newdict(instance=True) - self.dicts[ec] = w_dict - # call __init__ - try: - w_self = space.wrap(self) - w_type = space.type(w_self) - w_init = space.getattr(w_type, space.wrap("__init__")) - space.call_obj_args(w_init, w_self, self.initargs) - except: - # failed, forget w_dict and propagate the exception - del self.dicts[ec] - raise - # ready - self._register_in_ec(ec) + w_dict = self.create_new_dict(ec) return w_dict def descr_local__new__(space, w_subtype, __args__): From noreply at buildbot.pypy.org Thu May 10 10:05:31 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 May 2012 10:05:31 +0200 (CEST) Subject: [pypy-commit] pypy stm-thread: hg merge default Message-ID: <20120510080531.34CB282F51@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: 
stm-thread Changeset: r55005:143bcdb4406b Date: 2012-05-10 10:04 +0200 http://bitbucket.org/pypy/pypy/changeset/143bcdb4406b/ Log: hg merge default diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -18,6 +18,7 @@ from pypy.tool.pytest import appsupport from pypy.tool.pytest.confpath import pypydir, testdir, testresultdir +from pypy.config.parse import parse_info pytest_plugins = "resultlog", rsyncdirs = ['.', '../pypy/'] @@ -599,8 +600,9 @@ # check modules info = py.process.cmdexec("%s --info" % execpath) + info = parse_info(info) for mod in regrtest.usemodules: - if "objspace.usemodules.%s: False" % mod in info: + if info.get('objspace.usemodules.%s' % mod) is not True: py.test.skip("%s module not included in %s" % (mod, execpath)) diff --git a/pypy/bin/rpython b/pypy/bin/rpython old mode 100755 new mode 100644 diff --git a/pypy/config/parse.py b/pypy/config/parse.py new file mode 100644 --- /dev/null +++ b/pypy/config/parse.py @@ -0,0 +1,55 @@ + + +def parse_info(text): + """See test_parse.py.""" + text = text.lstrip() + result = {} + if (text+':').index(':') > (text+'=').index('='): + # found a '=' before a ':' means that we have the new format + current = {0: ''} + indentation_prefix = None + for line in text.splitlines(): + line = line.rstrip() + if not line: + continue + realline = line.lstrip() + indent = len(line) - len(realline) + # + # 'indentation_prefix' is set when the previous line was a [group] + if indentation_prefix is not None: + assert indent > max(current) # missing indent? + current[indent] = indentation_prefix + indentation_prefix = None + # + else: + # in case of dedent, must kill the extra items from 'current' + for n in current.keys(): + if n > indent: + del current[n] + # + prefix = current[indent] # KeyError if bad dedent + # + if realline.startswith('[') and realline.endswith(']'): + indentation_prefix = prefix + realline[1:-1] + '.' 
+ else: + # build the whole dotted key and evaluate the value + i = realline.index(' = ') + key = prefix + realline[:i] + value = realline[i+3:] + value = eval(value, {}) + result[key] = value + # + else: + # old format + for line in text.splitlines(): + i = line.index(':') + key = line[:i].strip() + value = line[i+1:].strip() + try: + value = int(value) + except ValueError: + if value in ('True', 'False', 'None'): + value = eval(value, {}) + result[key] = value + # + return result diff --git a/pypy/config/test/test_parse.py b/pypy/config/test/test_parse.py new file mode 100644 --- /dev/null +++ b/pypy/config/test/test_parse.py @@ -0,0 +1,38 @@ +from pypy.config.parse import parse_info + + +def test_parse_new_format(): + assert (parse_info("[foo]\n" + " bar = True\n") + == {'foo.bar': True}) + + assert (parse_info("[objspace]\n" + " x = 'hello'\n" + "[translation]\n" + " bar = 42\n" + " [egg]\n" + " something = None\n" + " foo = True\n") + == { + 'translation.foo': True, + 'translation.bar': 42, + 'translation.egg.something': None, + 'objspace.x': 'hello', + }) + + assert parse_info("simple = 43\n") == {'simple': 43} + + +def test_parse_old_format(): + assert (parse_info(" objspace.allworkingmodules: True\n" + " objspace.disable_call_speedhacks: False\n" + " objspace.extmodules: None\n" + " objspace.name: std\n" + " objspace.std.prebuiltintfrom: -5\n") + == { + 'objspace.allworkingmodules': True, + 'objspace.disable_call_speedhacks': False, + 'objspace.extmodules': None, + 'objspace.name': 'std', + 'objspace.std.prebuiltintfrom': -5, + }) diff --git a/pypy/jit/metainterp/optimizeopt/earlyforce.py b/pypy/jit/metainterp/optimizeopt/earlyforce.py --- a/pypy/jit/metainterp/optimizeopt/earlyforce.py +++ b/pypy/jit/metainterp/optimizeopt/earlyforce.py @@ -7,7 +7,9 @@ opnum = op.getopnum() if (opnum != rop.SETFIELD_GC and opnum != rop.SETARRAYITEM_GC and - opnum != rop.QUASIIMMUT_FIELD): + opnum != rop.QUASIIMMUT_FIELD and + opnum != rop.SAME_AS and + opnum != 
rop.MARK_OPAQUE_PTR): for arg in op.getarglist(): if arg in self.optimizer.values: diff --git a/pypy/jit/metainterp/optimizeopt/optimizer.py b/pypy/jit/metainterp/optimizeopt/optimizer.py --- a/pypy/jit/metainterp/optimizeopt/optimizer.py +++ b/pypy/jit/metainterp/optimizeopt/optimizer.py @@ -658,6 +658,9 @@ def optimize_SAME_AS(self, op): self.make_equal_to(op.result, self.getvalue(op.getarg(0))) + def optimize_MARK_OPAQUE_PTR(self, op): + value = self.getvalue(op.getarg(0)) + self.optimizer.opaque_pointers[value] = True dispatch_opt = make_dispatcher_method(Optimizer, 'optimize_', default=Optimizer.optimize_default) diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -208,7 +208,7 @@ box = value.box assert isinstance(box, Const) if not box.same_constant(constbox): - raise InvalidLoop('A GURAD_{VALUE,TRUE,FALSE} was proven to' + + raise InvalidLoop('A GUARD_{VALUE,TRUE,FALSE} was proven to' + 'always fail') return if emit_operation: @@ -481,10 +481,6 @@ args = [op.getarg(0), ConstInt(highest_bit(val))]) self.emit_operation(op) - def optimize_MARK_OPAQUE_PTR(self, op): - value = self.getvalue(op.getarg(0)) - self.optimizer.opaque_pointers[value] = True - def optimize_CAST_PTR_TO_INT(self, op): self.pure(rop.CAST_INT_TO_PTR, [op.result], op.getarg(0)) self.emit_operation(op) diff --git a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -29,9 +29,6 @@ # but it's a bit hard to implement robustly if heap.py is also run pass - def optimize_MARK_OPAQUE_PTR(self, op): - pass - def optimize_RECORD_KNOWN_CLASS(self, op): pass diff --git a/pypy/jit/metainterp/test/test_loop_unroll.py b/pypy/jit/metainterp/test/test_loop_unroll.py --- a/pypy/jit/metainterp/test/test_loop_unroll.py +++ 
b/pypy/jit/metainterp/test/test_loop_unroll.py @@ -19,3 +19,4 @@ class TestOOtype(LoopUnrollTest, OOJitMixin): pass + diff --git a/pypy/jit/metainterp/test/test_loop_unroll_disopt.py b/pypy/jit/metainterp/test/test_loop_unroll_disopt.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/test/test_loop_unroll_disopt.py @@ -0,0 +1,25 @@ +import py +from pypy.rlib.jit import JitDriver +from pypy.jit.metainterp.test import test_loop +from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin +from pypy.jit.metainterp.optimizeopt import ALL_OPTS_NAMES + +allopts = ALL_OPTS_NAMES.split(':') +for optnum in range(len(allopts)): + myopts = allopts[:] + del myopts[optnum] + + class TestLLtype(test_loop.LoopTest, LLJitMixin): + enable_opts = ':'.join(myopts) + + def check_resops(self, *args, **kwargs): + pass + def check_trace_count(self, count): + pass + + opt = allopts[optnum] + exec "TestLoopNo%sLLtype = TestLLtype" % (opt[0].upper() + opt[1:]) + +del TestLLtype # No need to run the last set twice +del TestLoopNoUnrollLLtype # This case is take care of by test_loop + diff --git a/pypy/module/__pypy__/interp_time.py b/pypy/module/__pypy__/interp_time.py --- a/pypy/module/__pypy__/interp_time.py +++ b/pypy/module/__pypy__/interp_time.py @@ -1,5 +1,5 @@ from __future__ import with_statement -import sys +import os from pypy.interpreter.error import exception_from_errno from pypy.interpreter.gateway import unwrap_spec @@ -7,11 +7,15 @@ from pypy.rpython.tool import rffi_platform from pypy.translator.tool.cbuild import ExternalCompilationInfo +if os.name == 'nt': + libraries = [] +else: + libraries = ["rt"] class CConfig: _compilation_info_ = ExternalCompilationInfo( includes=["time.h"], - libraries=["rt"], + libraries=libraries, ) HAS_CLOCK_GETTIME = rffi_platform.Has('clock_gettime') @@ -22,11 +26,6 @@ CLOCK_PROCESS_CPUTIME_ID = rffi_platform.DefinedConstantInteger("CLOCK_PROCESS_CPUTIME_ID") CLOCK_THREAD_CPUTIME_ID = 
rffi_platform.DefinedConstantInteger("CLOCK_THREAD_CPUTIME_ID") - TIMESPEC = rffi_platform.Struct("struct timespec", [ - ("tv_sec", rffi.TIME_T), - ("tv_nsec", rffi.LONG), - ]) - cconfig = rffi_platform.configure(CConfig) HAS_CLOCK_GETTIME = cconfig["HAS_CLOCK_GETTIME"] @@ -37,29 +36,36 @@ CLOCK_PROCESS_CPUTIME_ID = cconfig["CLOCK_PROCESS_CPUTIME_ID"] CLOCK_THREAD_CPUTIME_ID = cconfig["CLOCK_THREAD_CPUTIME_ID"] -TIMESPEC = cconfig["TIMESPEC"] +if HAS_CLOCK_GETTIME: + #redo it for timespec + CConfig.TIMESPEC = rffi_platform.Struct("struct timespec", [ + ("tv_sec", rffi.TIME_T), + ("tv_nsec", rffi.LONG), + ]) + cconfig = rffi_platform.configure(CConfig) + TIMESPEC = cconfig['TIMESPEC'] -c_clock_gettime = rffi.llexternal("clock_gettime", - [lltype.Signed, lltype.Ptr(TIMESPEC)], rffi.INT, - compilation_info=CConfig._compilation_info_, threadsafe=False -) -c_clock_getres = rffi.llexternal("clock_getres", - [lltype.Signed, lltype.Ptr(TIMESPEC)], rffi.INT, - compilation_info=CConfig._compilation_info_, threadsafe=False -) + c_clock_gettime = rffi.llexternal("clock_gettime", + [lltype.Signed, lltype.Ptr(TIMESPEC)], rffi.INT, + compilation_info=CConfig._compilation_info_, threadsafe=False + ) + c_clock_getres = rffi.llexternal("clock_getres", + [lltype.Signed, lltype.Ptr(TIMESPEC)], rffi.INT, + compilation_info=CConfig._compilation_info_, threadsafe=False + ) - at unwrap_spec(clk_id="c_int") -def clock_gettime(space, clk_id): - with lltype.scoped_alloc(TIMESPEC) as tp: - ret = c_clock_gettime(clk_id, tp) - if ret != 0: - raise exception_from_errno(space, space.w_IOError) - return space.wrap(tp.c_tv_sec + tp.c_tv_nsec * 1e-9) + @unwrap_spec(clk_id="c_int") + def clock_gettime(space, clk_id): + with lltype.scoped_alloc(TIMESPEC) as tp: + ret = c_clock_gettime(clk_id, tp) + if ret != 0: + raise exception_from_errno(space, space.w_IOError) + return space.wrap(tp.c_tv_sec + tp.c_tv_nsec * 1e-9) - at unwrap_spec(clk_id="c_int") -def clock_getres(space, clk_id): - with 
lltype.scoped_alloc(TIMESPEC) as tp: - ret = c_clock_getres(clk_id, tp) - if ret != 0: - raise exception_from_errno(space, space.w_IOError) - return space.wrap(tp.c_tv_sec + tp.c_tv_nsec * 1e-9) + @unwrap_spec(clk_id="c_int") + def clock_getres(space, clk_id): + with lltype.scoped_alloc(TIMESPEC) as tp: + ret = c_clock_getres(clk_id, tp) + if ret != 0: + raise exception_from_errno(space, space.w_IOError) + return space.wrap(tp.c_tv_sec + tp.c_tv_nsec * 1e-9) diff --git a/pypy/module/_file/test/test_file.py b/pypy/module/_file/test/test_file.py --- a/pypy/module/_file/test/test_file.py +++ b/pypy/module/_file/test/test_file.py @@ -265,14 +265,6 @@ if option.runappdirect: py.test.skip("works with internals of _file impl on py.py") - import platform - if platform.system() == 'Windows': - # XXX This test crashes until someone implements something like - # XXX verify_fd from - # XXX http://hg.python.org/cpython/file/80ddbd822227/Modules/posixmodule.c#l434 - # XXX and adds it to fopen - assert False - state = [0] def read(fd, n=None): if fd != 42: @@ -286,7 +278,7 @@ return '' os.read = read stdin = W_File(cls.space) - stdin.file_fdopen(42, "r", 1) + stdin.file_fdopen(42, 'rb', 1) stdin.name = '' cls.w_stream = stdin diff --git a/pypy/module/_socket/test/test_sock_app.py b/pypy/module/_socket/test/test_sock_app.py --- a/pypy/module/_socket/test/test_sock_app.py +++ b/pypy/module/_socket/test/test_sock_app.py @@ -616,8 +616,9 @@ cli.send('foobar' * 70) except timeout: pass - # test sendall() timeout - raises(timeout, cli.sendall, 'foobar' * 70) + # test sendall() timeout, be sure to send data larger than the + # socket buffer + raises(timeout, cli.sendall, 'foobar' * 7000) # done cli.close() t.close() diff --git a/pypy/module/_weakref/interp__weakref.py b/pypy/module/_weakref/interp__weakref.py --- a/pypy/module/_weakref/interp__weakref.py +++ b/pypy/module/_weakref/interp__weakref.py @@ -4,25 +4,50 @@ from pypy.interpreter.gateway import interp2app, ObjSpace from 
pypy.interpreter.typedef import TypeDef from pypy.rlib import jit +from pypy.rlib.rshrinklist import AbstractShrinkList +from pypy.rlib.objectmodel import specialize import weakref +class WRefShrinkList(AbstractShrinkList): + def must_keep(self, wref): + return wref() is not None + + class WeakrefLifeline(W_Root): - cached_weakref_index = -1 - cached_proxy_index = -1 + cached_weakref = None + cached_proxy = None + other_refs_weak = None def __init__(self, space): self.space = space - self.refs_weak = [] + + def append_wref_to(self, w_ref): + if self.other_refs_weak is None: + self.other_refs_weak = WRefShrinkList() + self.other_refs_weak.append(weakref.ref(w_ref)) + + @specialize.arg(1) + def traverse(self, callback, arg=None): + if self.cached_weakref is not None: + arg = callback(self, self.cached_weakref, arg) + if self.cached_proxy is not None: + arg = callback(self, self.cached_proxy, arg) + if self.other_refs_weak is not None: + for ref_w_ref in self.other_refs_weak.items(): + arg = callback(self, ref_w_ref, arg) + return arg + + def _clear_wref(self, wref, _): + w_ref = wref() + if w_ref is not None: + w_ref.clear() def clear_all_weakrefs(self): """Clear all weakrefs. This is called when an app-level object has a __del__, just before the app-level __del__ method is called. """ - for ref_w_ref in self.refs_weak: - w_ref = ref_w_ref() - if w_ref is not None: - w_ref.clear() + self.traverse(WeakrefLifeline._clear_wref) # Note that for no particular reason other than convenience, # weakref callbacks are not invoked eagerly here. They are # invoked by self.__del__() anyway. 
@@ -30,49 +55,46 @@ def get_or_make_weakref(self, w_subtype, w_obj): space = self.space w_weakreftype = space.gettypeobject(W_Weakref.typedef) - is_weakreftype = space.is_w(w_weakreftype, w_subtype) - if is_weakreftype and self.cached_weakref_index >= 0: - w_cached = self.refs_weak[self.cached_weakref_index]() - if w_cached is not None: - return w_cached - else: - self.cached_weakref_index = -1 - w_ref = space.allocate_instance(W_Weakref, w_subtype) - index = len(self.refs_weak) - W_Weakref.__init__(w_ref, space, w_obj, None) - self.refs_weak.append(weakref.ref(w_ref)) - if is_weakreftype: - self.cached_weakref_index = index + # + if space.is_w(w_weakreftype, w_subtype): + if self.cached_weakref is not None: + w_cached = self.cached_weakref() + if w_cached is not None: + return w_cached + w_ref = W_Weakref(space, w_obj, None) + self.cached_weakref = weakref.ref(w_ref) + else: + # subclass: cannot cache + w_ref = space.allocate_instance(W_Weakref, w_subtype) + W_Weakref.__init__(w_ref, space, w_obj, None) + self.append_wref_to(w_ref) return w_ref def get_or_make_proxy(self, w_obj): space = self.space - if self.cached_proxy_index >= 0: - w_cached = self.refs_weak[self.cached_proxy_index]() + if self.cached_proxy is not None: + w_cached = self.cached_proxy() if w_cached is not None: return w_cached - else: - self.cached_proxy_index = -1 - index = len(self.refs_weak) if space.is_true(space.callable(w_obj)): w_proxy = W_CallableProxy(space, w_obj, None) else: w_proxy = W_Proxy(space, w_obj, None) - self.refs_weak.append(weakref.ref(w_proxy)) - self.cached_proxy_index = index + self.cached_proxy = weakref.ref(w_proxy) return w_proxy def get_any_weakref(self, space): - if self.cached_weakref_index != -1: - w_ref = self.refs_weak[self.cached_weakref_index]() + if self.cached_weakref is not None: + w_ref = self.cached_weakref() if w_ref is not None: return w_ref - w_weakreftype = space.gettypeobject(W_Weakref.typedef) - for i in range(len(self.refs_weak)): - w_ref = 
self.refs_weak[i]() - if (w_ref is not None and - space.is_true(space.isinstance(w_ref, w_weakreftype))): - return w_ref + if self.other_refs_weak is not None: + w_weakreftype = space.gettypeobject(W_Weakref.typedef) + for wref in self.other_refs_weak.items(): + w_ref = wref() + if (w_ref is not None and + space.is_true(space.isinstance(w_ref, w_weakreftype))): + return w_ref return space.w_None @@ -80,10 +102,10 @@ def __init__(self, space, oldlifeline=None): self.space = space - if oldlifeline is None: - self.refs_weak = [] - else: - self.refs_weak = oldlifeline.refs_weak + if oldlifeline is not None: + self.cached_weakref = oldlifeline.cached_weakref + self.cached_proxy = oldlifeline.cached_proxy + self.other_refs_weak = oldlifeline.other_refs_weak def __del__(self): """This runs when the interp-level object goes away, and allows @@ -91,8 +113,11 @@ callbacks even if there is no __del__ method on the interp-level W_Root subclass implementing the object. """ - for i in range(len(self.refs_weak) - 1, -1, -1): - w_ref = self.refs_weak[i]() + if self.other_refs_weak is None: + return + items = self.other_refs_weak.items() + for i in range(len(items)-1, -1, -1): + w_ref = items[i]() if w_ref is not None and w_ref.w_callable is not None: w_ref.enqueue_for_destruction(self.space, W_WeakrefBase.activate_callback, @@ -102,7 +127,7 @@ space = self.space w_ref = space.allocate_instance(W_Weakref, w_subtype) W_Weakref.__init__(w_ref, space, w_obj, w_callable) - self.refs_weak.append(weakref.ref(w_ref)) + self.append_wref_to(w_ref) return w_ref def make_proxy_with_callback(self, w_obj, w_callable): @@ -111,7 +136,7 @@ w_proxy = W_CallableProxy(space, w_obj, w_callable) else: w_proxy = W_Proxy(space, w_obj, w_callable) - self.refs_weak.append(weakref.ref(w_proxy)) + self.append_wref_to(w_proxy) return w_proxy # ____________________________________________________________ @@ -247,30 +272,33 @@ ) +def _weakref_count(lifeline, wref, count): + if wref() is not None: + count += 1 
+ return count + def getweakrefcount(space, w_obj): """Return the number of weak references to 'obj'.""" lifeline = w_obj.getweakref() if lifeline is None: return space.wrap(0) else: - result = 0 - for i in range(len(lifeline.refs_weak)): - if lifeline.refs_weak[i]() is not None: - result += 1 + result = lifeline.traverse(_weakref_count, 0) return space.wrap(result) +def _get_weakrefs(lifeline, wref, result): + w_ref = wref() + if w_ref is not None: + result.append(w_ref) + return result + def getweakrefs(space, w_obj): """Return a list of all weak reference objects that point to 'obj'.""" + result = [] lifeline = w_obj.getweakref() - if lifeline is None: - return space.newlist([]) - else: - result = [] - for i in range(len(lifeline.refs_weak)): - w_ref = lifeline.refs_weak[i]() - if w_ref is not None: - result.append(w_ref) - return space.newlist(result) + if lifeline is not None: + lifeline.traverse(_get_weakrefs, result) + return space.newlist(result) #_________________________________________________________________ # Proxy diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -26,6 +26,7 @@ from pypy.module.__builtin__.interp_classobj import W_ClassObject from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint +from pypy.rlib.rposix import is_valid_fd, validate_fd from pypy.rlib.unroll import unrolling_iterable from pypy.rlib.objectmodel import specialize from pypy.rlib.exports import export_struct @@ -79,20 +80,39 @@ # FILE* interface FILEP = rffi.COpaquePtr('FILE') -fopen = rffi.llexternal('fopen', [CONST_STRING, CONST_STRING], FILEP) -fclose = rffi.llexternal('fclose', [FILEP], rffi.INT) -fwrite = rffi.llexternal('fwrite', - [rffi.VOIDP, rffi.SIZE_T, rffi.SIZE_T, FILEP], - rffi.SIZE_T) -fread = rffi.llexternal('fread', - [rffi.VOIDP, rffi.SIZE_T, rffi.SIZE_T, FILEP], - rffi.SIZE_T) -feof = rffi.llexternal('feof', [FILEP], 
rffi.INT) + if sys.platform == 'win32': fileno = rffi.llexternal('_fileno', [FILEP], rffi.INT) else: fileno = rffi.llexternal('fileno', [FILEP], rffi.INT) +fopen = rffi.llexternal('fopen', [CONST_STRING, CONST_STRING], FILEP) + +_fclose = rffi.llexternal('fclose', [FILEP], rffi.INT) +def fclose(fp): + if not is_valid_fd(fileno(fp)): + return -1 + return _fclose(fp) + +_fwrite = rffi.llexternal('fwrite', + [rffi.VOIDP, rffi.SIZE_T, rffi.SIZE_T, FILEP], + rffi.SIZE_T) +def fwrite(buf, sz, n, fp): + validate_fd(fileno(fp)) + return _fwrite(buf, sz, n, fp) + +_fread = rffi.llexternal('fread', + [rffi.VOIDP, rffi.SIZE_T, rffi.SIZE_T, FILEP], + rffi.SIZE_T) +def fread(buf, sz, n, fp): + validate_fd(fileno(fp)) + return _fread(buf, sz, n, fp) + +_feof = rffi.llexternal('feof', [FILEP], rffi.INT) +def feof(fp): + validate_fd(fileno(fp)) + return _feof(fp) + constant_names = """ Py_TPFLAGS_READY Py_TPFLAGS_READYING Py_TPFLAGS_HAVE_GETCHARBUFFER diff --git a/pypy/module/math/test/test_math.py b/pypy/module/math/test/test_math.py --- a/pypy/module/math/test/test_math.py +++ b/pypy/module/math/test/test_math.py @@ -271,5 +271,6 @@ assert math.trunc(foo()) == "truncated" def test_copysign_nan(self): + skip('sign of nan is undefined') import math assert math.copysign(1.0, float('-nan')) == -1.0 diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -233,12 +233,15 @@ assert a[1] == 0 def test_signbit(self): - from _numpypy import signbit, copysign + from _numpypy import signbit - assert (signbit([0, 0.0, 1, 1.0, float('inf'), float('nan')]) == - [False, False, False, False, False, False]).all() - assert (signbit([-0, -0.0, -1, -1.0, float('-inf'), -float('nan'), float('-nan')]) == - [False, True, True, True, True, True, True]).all() + assert (signbit([0, 0.0, 1, 1.0, float('inf')]) == + [False, False, False, False, False]).all() + 
assert (signbit([-0, -0.0, -1, -1.0, float('-inf')]) == + [False, True, True, True, True]).all() + skip('sign of nan is non-determinant') + assert (signbit([float('nan'), float('-nan'), -float('nan')]) == + [False, True, True]).all() def test_reciporocal(self): from _numpypy import array, reciprocal @@ -267,8 +270,8 @@ assert ([ninf, -1.0, -1.0, -1.0, 0.0, 1.0, 2.0, 1.0, inf] == ceil(a)).all() assert ([ninf, -1.0, -1.0, -1.0, 0.0, 1.0, 1.0, 0.0, inf] == trunc(a)).all() assert all([math.isnan(f(float("nan"))) for f in floor, ceil, trunc]) - assert all([math.copysign(1, f(float("nan"))) == 1 for f in floor, ceil, trunc]) - assert all([math.copysign(1, f(float("-nan"))) == -1 for f in floor, ceil, trunc]) + assert all([math.copysign(1, f(abs(float("nan")))) == 1 for f in floor, ceil, trunc]) + assert all([math.copysign(1, f(-abs(float("nan")))) == -1 for f in floor, ceil, trunc]) def test_copysign(self): from _numpypy import array, copysign diff --git a/pypy/module/mmap/test/test_mmap.py b/pypy/module/mmap/test/test_mmap.py --- a/pypy/module/mmap/test/test_mmap.py +++ b/pypy/module/mmap/test/test_mmap.py @@ -548,6 +548,8 @@ assert len(b) == 6 assert b[3] == "b" assert b[:] == "foobar" + m.close() + f.close() def test_offset(self): from mmap import mmap, ALLOCATIONGRANULARITY diff --git a/pypy/module/rctime/test/test_rctime.py b/pypy/module/rctime/test/test_rctime.py --- a/pypy/module/rctime/test/test_rctime.py +++ b/pypy/module/rctime/test/test_rctime.py @@ -213,6 +213,7 @@ def test_strftime(self): import time as rctime + import os t = rctime.time() tt = rctime.gmtime(t) @@ -228,6 +229,14 @@ exp = '2000 01 01 00 00 00 1 001' assert rctime.strftime("%Y %m %d %H %M %S %w %j", (0,)*9) == exp + # Guard against invalid/non-supported format string + # so that Python don't crash (Windows crashes when the format string + # input to [w]strftime is not kosher. 
+ if os.name == 'nt': + raises(ValueError, rctime.strftime, '%f') + else: + assert rctime.strftime('%f') == '%f' + def test_strftime_ext(self): import time as rctime diff --git a/pypy/module/signal/interp_signal.py b/pypy/module/signal/interp_signal.py --- a/pypy/module/signal/interp_signal.py +++ b/pypy/module/signal/interp_signal.py @@ -15,7 +15,8 @@ def setup(): for key, value in cpy_signal.__dict__.items(): - if key.startswith('SIG') and is_valid_int(value): + if (key.startswith('SIG') or key.startswith('CTRL_')) and \ + is_valid_int(value): globals()[key] = value yield key @@ -23,6 +24,10 @@ SIG_DFL = cpy_signal.SIG_DFL SIG_IGN = cpy_signal.SIG_IGN signal_names = list(setup()) +signal_values = [globals()[key] for key in signal_names] +signal_values = {} +for key in signal_names: + signal_values[globals()[key]] = None includes = ['stdlib.h', 'src/signals.h'] if sys.platform != 'win32': @@ -242,9 +247,11 @@ return space.w_None def check_signum(space, signum): - if signum < 1 or signum >= NSIG: - raise OperationError(space.w_ValueError, - space.wrap("signal number out of range")) + if signum in signal_values: + return + raise OperationError(space.w_ValueError, + space.wrap("invalid signal value")) + @jit.dont_look_inside @unwrap_spec(signum=int) diff --git a/pypy/module/signal/test/test_interp_signal.py b/pypy/module/signal/test/test_interp_signal.py --- a/pypy/module/signal/test/test_interp_signal.py +++ b/pypy/module/signal/test/test_interp_signal.py @@ -6,6 +6,8 @@ def setup_module(mod): if not hasattr(os, 'kill') or not hasattr(os, 'getpid'): py.test.skip("requires os.kill() and os.getpid()") + if not hasattr(interp_signal, 'SIGUSR1'): + py.test.skip("requires SIGUSR1 in signal") def check(expected): diff --git a/pypy/module/signal/test/test_signal.py b/pypy/module/signal/test/test_signal.py --- a/pypy/module/signal/test/test_signal.py +++ b/pypy/module/signal/test/test_signal.py @@ -8,6 +8,8 @@ def setup_class(cls): if not hasattr(os, 'kill') or not 
hasattr(os, 'getpid'): py.test.skip("requires os.kill() and os.getpid()") + if not hasattr(cpy_signal, 'SIGUSR1'): + py.test.skip("requires SIGUSR1 in signal") cls.space = gettestobjspace(usemodules=['signal']) def test_checksignals(self): @@ -36,8 +38,6 @@ class AppTestSignal: def setup_class(cls): - if not hasattr(os, 'kill') or not hasattr(os, 'getpid'): - py.test.skip("requires os.kill() and os.getpid()") space = gettestobjspace(usemodules=['signal']) cls.space = space cls.w_signal = space.appexec([], "(): import signal; return signal") @@ -45,64 +45,72 @@ def test_exported_names(self): self.signal.__dict__ # crashes if the interpleveldefs are invalid - def test_usr1(self): - import types, posix + def test_basics(self): + import types, os + if not hasattr(os, 'kill') or not hasattr(os, 'getpid'): + skip("requires os.kill() and os.getpid()") signal = self.signal # the signal module to test + if hasattr(signal,'SIGUSR1'): + signum = signal.SIGUSR1 + else: + signum = signal.CTRL_BREAK_EVENT received = [] def myhandler(signum, frame): assert isinstance(frame, types.FrameType) received.append(signum) - signal.signal(signal.SIGUSR1, myhandler) + signal.signal(signum, myhandler) - posix.kill(posix.getpid(), signal.SIGUSR1) + print dir(os) + + os.kill(os.getpid(), signum) # the signal should be delivered to the handler immediately - assert received == [signal.SIGUSR1] + assert received == [signum] del received[:] - posix.kill(posix.getpid(), signal.SIGUSR1) + os.kill(os.getpid(), signum) # the signal should be delivered to the handler immediately - assert received == [signal.SIGUSR1] + assert received == [signum] del received[:] - signal.signal(signal.SIGUSR1, signal.SIG_IGN) + signal.signal(signum, signal.SIG_IGN) - posix.kill(posix.getpid(), signal.SIGUSR1) + os.kill(os.getpid(), signum) for i in range(10000): # wait a bit - signal should not arrive if received: break assert received == [] - signal.signal(signal.SIGUSR1, signal.SIG_DFL) + signal.signal(signum, 
signal.SIG_DFL) def test_default_return(self): """ Test that signal.signal returns SIG_DFL if that is the current handler. """ - from signal import signal, SIGUSR1, SIG_DFL, SIG_IGN + from signal import signal, SIGINT, SIG_DFL, SIG_IGN try: for handler in SIG_DFL, SIG_IGN, lambda *a: None: - signal(SIGUSR1, SIG_DFL) - assert signal(SIGUSR1, handler) == SIG_DFL + signal(SIGINT, SIG_DFL) + assert signal(SIGINT, handler) == SIG_DFL finally: - signal(SIGUSR1, SIG_DFL) + signal(SIGINT, SIG_DFL) def test_ignore_return(self): """ Test that signal.signal returns SIG_IGN if that is the current handler. """ - from signal import signal, SIGUSR1, SIG_DFL, SIG_IGN + from signal import signal, SIGINT, SIG_DFL, SIG_IGN try: for handler in SIG_DFL, SIG_IGN, lambda *a: None: - signal(SIGUSR1, SIG_IGN) - assert signal(SIGUSR1, handler) == SIG_IGN + signal(SIGINT, SIG_IGN) + assert signal(SIGINT, handler) == SIG_IGN finally: - signal(SIGUSR1, SIG_DFL) + signal(SIGINT, SIG_DFL) def test_obj_return(self): @@ -110,43 +118,47 @@ Test that signal.signal returns a Python object if one is the current handler. """ - from signal import signal, SIGUSR1, SIG_DFL, SIG_IGN + from signal import signal, SIGINT, SIG_DFL, SIG_IGN def installed(*a): pass try: for handler in SIG_DFL, SIG_IGN, lambda *a: None: - signal(SIGUSR1, installed) - assert signal(SIGUSR1, handler) is installed + signal(SIGINT, installed) + assert signal(SIGINT, handler) is installed finally: - signal(SIGUSR1, SIG_DFL) + signal(SIGINT, SIG_DFL) def test_getsignal(self): """ Test that signal.getsignal returns the currently installed handler. 
""" - from signal import getsignal, signal, SIGUSR1, SIG_DFL, SIG_IGN + from signal import getsignal, signal, SIGINT, SIG_DFL, SIG_IGN def handler(*a): pass try: - assert getsignal(SIGUSR1) == SIG_DFL - signal(SIGUSR1, SIG_DFL) - assert getsignal(SIGUSR1) == SIG_DFL - signal(SIGUSR1, SIG_IGN) - assert getsignal(SIGUSR1) == SIG_IGN - signal(SIGUSR1, handler) - assert getsignal(SIGUSR1) is handler + assert getsignal(SIGINT) == SIG_DFL + signal(SIGINT, SIG_DFL) + assert getsignal(SIGINT) == SIG_DFL + signal(SIGINT, SIG_IGN) + assert getsignal(SIGINT) == SIG_IGN + signal(SIGINT, handler) + assert getsignal(SIGINT) is handler finally: - signal(SIGUSR1, SIG_DFL) + signal(SIGINT, SIG_DFL) raises(ValueError, getsignal, 4444) raises(ValueError, signal, 4444, lambda *args: None) + raises(ValueError, signal, 42, lambda *args: None) def test_alarm(self): - from signal import alarm, signal, SIG_DFL, SIGALRM + try: + from signal import alarm, signal, SIG_DFL, SIGALRM + except: + skip('no alarm on this platform') import time l = [] def handler(*a): @@ -163,10 +175,13 @@ signal(SIGALRM, SIG_DFL) def test_set_wakeup_fd(self): - import signal, posix, fcntl + try: + import signal, posix, fcntl + except ImportError: + skip('cannot import posix or fcntl') def myhandler(signum, frame): pass - signal.signal(signal.SIGUSR1, myhandler) + signal.signal(signal.SIGINT, myhandler) # def cannot_read(): try: @@ -187,17 +202,19 @@ old_wakeup = signal.set_wakeup_fd(fd_write) try: cannot_read() - posix.kill(posix.getpid(), signal.SIGUSR1) + posix.kill(posix.getpid(), signal.SIGINT) res = posix.read(fd_read, 1) assert res == '\x00' cannot_read() finally: old_wakeup = signal.set_wakeup_fd(old_wakeup) # - signal.signal(signal.SIGUSR1, signal.SIG_DFL) + signal.signal(signal.SIGINT, signal.SIG_DFL) def test_siginterrupt(self): import signal, os, time + if not hasattr(signal, 'siginterrupt'): + skip('non siginterrupt in signal') signum = signal.SIGUSR1 def readpipe_is_not_interrupted(): # from CPython's 
test_signal.readpipe_interrupted() diff --git a/pypy/module/thread/__init__.py b/pypy/module/thread/__init__.py --- a/pypy/module/thread/__init__.py +++ b/pypy/module/thread/__init__.py @@ -19,7 +19,7 @@ 'allocate_lock': 'os_lock.allocate_lock', 'allocate': 'os_lock.allocate_lock', # obsolete synonym 'LockType': 'os_lock.Lock', - #'_local': 'os_local.Local', # only if 'rweakref' + '_local': 'os_local.Local', 'error': 'space.fromcache(error.Cache).w_error', '_atomic_enter': 'atomic.atomic_enter', '_atomic_exit': 'atomic.atomic_exit', @@ -41,8 +41,3 @@ from pypy.module.posix.interp_posix import add_fork_hook from pypy.module.thread.os_thread import reinit_threads add_fork_hook('child', reinit_threads) - - def setup_after_space_initialization(self): - """NOT_RPYTHON""" - if self.space.config.translation.rweakref: - self.extra_interpdef('_local', 'os_local.Local') diff --git a/pypy/module/thread/os_local.py b/pypy/module/thread/os_local.py --- a/pypy/module/thread/os_local.py +++ b/pypy/module/thread/os_local.py @@ -1,16 +1,26 @@ -from pypy.rlib.rweakref import RWeakKeyDictionary +import weakref +from pypy.rlib import jit from pypy.interpreter.baseobjspace import Wrappable, W_Root from pypy.interpreter.executioncontext import ExecutionContext from pypy.interpreter.typedef import (TypeDef, interp2app, GetSetProperty, descr_get_dict) +from pypy.rlib.rshrinklist import AbstractShrinkList + +class WRefShrinkList(AbstractShrinkList): + def must_keep(self, wref): + return wref() is not None + + +ExecutionContext._thread_local_objs = None class Local(Wrappable): """Thread-local data""" + @jit.dont_look_inside def __init__(self, space, initargs): self.initargs = initargs - self.dicts = RWeakKeyDictionary(ExecutionContext, W_Root) + self.dicts = {} # mapping ExecutionContexts to the wrapped dict # The app-level __init__() will be called by the general # instance-creation logic. It causes getdict() to be # immediately called. 
If we don't prepare and set a w_dict @@ -18,26 +28,42 @@ # to call __init__() a second time. ec = space.getexecutioncontext() w_dict = space.newdict(instance=True) - self.dicts.set(ec, w_dict) + self.dicts[ec] = w_dict + self._register_in_ec(ec) + + def _register_in_ec(self, ec): + if not ec.space.config.translation.rweakref: + return # without weakrefs, works but 'dicts' is never cleared + if ec._thread_local_objs is None: + ec._thread_local_objs = WRefShrinkList() + ec._thread_local_objs.append(weakref.ref(self)) + + @jit.dont_look_inside + def create_new_dict(self, ec): + # create a new dict for this thread + space = ec.space + w_dict = space.newdict(instance=True) + self.dicts[ec] = w_dict + # call __init__ + try: + w_self = space.wrap(self) + w_type = space.type(w_self) + w_init = space.getattr(w_type, space.wrap("__init__")) + space.call_obj_args(w_init, w_self, self.initargs) + except: + # failed, forget w_dict and propagate the exception + del self.dicts[ec] + raise + # ready + self._register_in_ec(ec) + return w_dict def getdict(self, space): ec = space.getexecutioncontext() - w_dict = self.dicts.get(ec) - if w_dict is None: - # create a new dict for this thread - w_dict = space.newdict(instance=True) - self.dicts.set(ec, w_dict) - # call __init__ - try: - w_self = space.wrap(self) - w_type = space.type(w_self) - w_init = space.getattr(w_type, space.wrap("__init__")) - space.call_obj_args(w_init, w_self, self.initargs) - except: - # failed, forget w_dict and propagate the exception - self.dicts.set(ec, None) - raise - # ready + try: + w_dict = self.dicts[ec] + except KeyError: + w_dict = self.create_new_dict(ec) return w_dict def descr_local__new__(space, w_subtype, __args__): @@ -55,3 +81,13 @@ __init__ = interp2app(Local.descr_local__init__), __dict__ = GetSetProperty(descr_get_dict, cls=Local), ) + +def thread_is_stopping(ec): + tlobjs = ec._thread_local_objs + if tlobjs is None: + return + ec._thread_local_objs = None + for wref in tlobjs.items(): + 
local = wref() + if local is not None: + del local.dicts[ec] diff --git a/pypy/module/thread/threadlocals.py b/pypy/module/thread/threadlocals.py --- a/pypy/module/thread/threadlocals.py +++ b/pypy/module/thread/threadlocals.py @@ -59,4 +59,8 @@ def leave_thread(self, space): "Notification that the current thread is about to stop." - self.setvalue(None) + from pypy.module.thread.os_local import thread_is_stopping + try: + thread_is_stopping(self.getvalue()) + finally: + self.setvalue(None) diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -413,8 +413,8 @@ 'retrace_limit': 'how many times we can try retracing before giving up', 'max_retrace_guards': 'number of extra guards a retrace can cause', 'max_unroll_loops': 'number of extra unrollings a loop can cause', - 'enable_opts': 'INTERNAL USE ONLY: optimizations to enable, or all = %s' % - ENABLE_ALL_OPTS, + 'enable_opts': 'INTERNAL USE ONLY (MAY NOT WORK OR LEAD TO CRASHES): ' + 'optimizations to enable, or all = %s' % ENABLE_ALL_OPTS, } PARAMETERS = {'threshold': 1039, # just above 1024, prime diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -71,7 +71,7 @@ # used in tests for ctypes and for genc and friends # to handle the win64 special case: -is_emulated_long = _long_typecode <> 'l' +is_emulated_long = _long_typecode != 'l' LONG_BIT = _get_long_bit() LONG_MASK = (2**LONG_BIT)-1 diff --git a/pypy/rlib/rmmap.py b/pypy/rlib/rmmap.py --- a/pypy/rlib/rmmap.py +++ b/pypy/rlib/rmmap.py @@ -739,14 +739,35 @@ # assume -1 and 0 both mean invalid file descriptor # to 'anonymously' map memory. 
if fileno != -1 and fileno != 0: - fh = rwin32._get_osfhandle(fileno) - if fh == INVALID_HANDLE: - errno = rposix.get_errno() - raise OSError(errno, os.strerror(errno)) + fh = rwin32.get_osfhandle(fileno) # Win9x appears to need us seeked to zero # SEEK_SET = 0 # libc._lseek(fileno, 0, SEEK_SET) + # check file size + try: + low, high = _get_file_size(fh) + except OSError: + pass # ignore non-seeking files and errors and trust map_size + else: + if not high and low <= sys.maxint: + size = low + else: + # not so sure if the signed/unsigned strictness is a good idea: + high = rffi.cast(lltype.Unsigned, high) + low = rffi.cast(lltype.Unsigned, low) + size = (high << 32) + low + size = rffi.cast(lltype.Signed, size) + if map_size == 0: + if offset > size: + raise RValueError( + "mmap offset is greater than file size") + map_size = int(size - offset) + if map_size != size - offset: + raise RValueError("mmap length is too large") + elif offset + map_size > size: + raise RValueError("mmap length is greater than file size") + m = MMap(access, offset) m.file_handle = INVALID_HANDLE m.map_handle = INVALID_HANDLE diff --git a/pypy/rlib/rposix.py b/pypy/rlib/rposix.py --- a/pypy/rlib/rposix.py +++ b/pypy/rlib/rposix.py @@ -98,16 +98,16 @@ _set_errno(rffi.cast(INT, errno)) if os.name == 'nt': - _validate_fd = rffi.llexternal( + is_valid_fd = rffi.llexternal( "_PyVerify_fd", [rffi.INT], rffi.INT, compilation_info=errno_eci, ) @jit.dont_look_inside def validate_fd(fd): - if not _validate_fd(fd): + if not is_valid_fd(fd): raise OSError(get_errno(), 'Bad file descriptor') else: - def _validate_fd(fd): + def is_valid_fd(fd): return 1 def validate_fd(fd): @@ -117,7 +117,8 @@ # this behaves like os.closerange() from Python 2.6. 
for fd in xrange(fd_low, fd_high): try: - os.close(fd) + if is_valid_fd(fd): + os.close(fd) except OSError: pass diff --git a/pypy/rlib/rshrinklist.py b/pypy/rlib/rshrinklist.py new file mode 100644 --- /dev/null +++ b/pypy/rlib/rshrinklist.py @@ -0,0 +1,34 @@ + +class AbstractShrinkList(object): + """A mixin base class. You should subclass it and add a method + must_keep(). Behaves like a list with the method append(), and + you can read *for reading* the list of items by calling items(). + The twist is that occasionally append() will throw away the + items for which must_keep() returns False. (It does so without + changing the order.) + """ + _mixin_ = True + + def __init__(self): + self._list = [] + self._next_shrink = 16 + + def append(self, x): + self._do_shrink() + self._list.append(x) + + def items(self): + return self._list + + def _do_shrink(self): + if len(self._list) >= self._next_shrink: + rest = 0 + for x in self._list: + if self.must_keep(x): + self._list[rest] = x + rest += 1 + del self._list[rest:] + self._next_shrink = 16 + 2 * rest + + def must_keep(self, x): + raise NotImplementedError diff --git a/pypy/rlib/rwin32.py b/pypy/rlib/rwin32.py --- a/pypy/rlib/rwin32.py +++ b/pypy/rlib/rwin32.py @@ -8,6 +8,7 @@ from pypy.translator.platform import CompilationError from pypy.rpython.lltypesystem import lltype, rffi from pypy.rlib.rarithmetic import intmask +from pypy.rlib.rposix import validate_fd from pypy.rlib import jit import os, sys, errno @@ -78,6 +79,7 @@ for name in """FORMAT_MESSAGE_ALLOCATE_BUFFER FORMAT_MESSAGE_FROM_SYSTEM MAX_PATH WAIT_OBJECT_0 WAIT_TIMEOUT INFINITE + ERROR_INVALID_HANDLE """.split(): locals()[name] = rffi_platform.ConstantInteger(name) @@ -126,6 +128,13 @@ _get_osfhandle = rffi.llexternal('_get_osfhandle', [rffi.INT], HANDLE) + def get_osfhandle(fd): + validate_fd(fd) + handle = _get_osfhandle(fd) + if handle == INVALID_HANDLE_VALUE: + raise WindowsError(ERROR_INVALID_HANDLE, "Invalid file handle") + return handle + def 
build_winerror_to_errno(): """Build a dictionary mapping windows error numbers to POSIX errno. The function returns the dict, and the default value for codes not diff --git a/pypy/rlib/streamio.py b/pypy/rlib/streamio.py --- a/pypy/rlib/streamio.py +++ b/pypy/rlib/streamio.py @@ -175,24 +175,23 @@ if sys.platform == "win32": - from pypy.rlib import rwin32 + from pypy.rlib.rwin32 import BOOL, HANDLE, get_osfhandle, GetLastError from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.lltypesystem import rffi - import errno _eci = ExternalCompilationInfo() - _get_osfhandle = rffi.llexternal('_get_osfhandle', [rffi.INT], rffi.LONG, - compilation_info=_eci) _setmode = rffi.llexternal('_setmode', [rffi.INT, rffi.INT], rffi.INT, compilation_info=_eci) - SetEndOfFile = rffi.llexternal('SetEndOfFile', [rffi.LONG], rwin32.BOOL, + SetEndOfFile = rffi.llexternal('SetEndOfFile', [HANDLE], BOOL, compilation_info=_eci) # HACK: These implementations are specific to MSVCRT and the C backend. # When generating on CLI or JVM, these are patched out. # See PyPyTarget.target() in targetpypystandalone.py def _setfd_binary(fd): - _setmode(fd, os.O_BINARY) + #Allow this to succeed on invalid fd's + if rposix.is_valid_fd(fd): + _setmode(fd, os.O_BINARY) def ftruncate_win32(fd, size): curpos = os.lseek(fd, 0, 1) @@ -200,11 +199,9 @@ # move to the position to be truncated os.lseek(fd, size, 0) # Truncate. Note that this may grow the file! 
- handle = _get_osfhandle(fd) - if handle == -1: - raise OSError(errno.EBADF, "Invalid file handle") + handle = get_osfhandle(fd) if not SetEndOfFile(handle): - raise WindowsError(rwin32.GetLastError(), + raise WindowsError(GetLastError(), "Could not truncate file") finally: # we restore the file pointer position in any case diff --git a/pypy/rlib/test/test_rmmap.py b/pypy/rlib/test/test_rmmap.py --- a/pypy/rlib/test/test_rmmap.py +++ b/pypy/rlib/test/test_rmmap.py @@ -33,8 +33,6 @@ interpret(f, []) def test_file_size(self): - if os.name == "nt": - skip("Only Unix checks file size") def func(no): try: @@ -433,15 +431,16 @@ def test_windows_crasher_1(self): if sys.platform != "win32": skip("Windows-only test") - - m = mmap.mmap(-1, 1000, tagname="foo") - # same tagname, but larger size - try: - m2 = mmap.mmap(-1, 5000, tagname="foo") - m2.getitem(4500) - except WindowsError: - pass - m.close() + def func(): + m = mmap.mmap(-1, 1000, tagname="foo") + # same tagname, but larger size + try: + m2 = mmap.mmap(-1, 5000, tagname="foo") + m2.getitem(4500) + except WindowsError: + pass + m.close() + interpret(func, []) def test_windows_crasher_2(self): if sys.platform != "win32": diff --git a/pypy/rlib/test/test_rposix.py b/pypy/rlib/test/test_rposix.py --- a/pypy/rlib/test/test_rposix.py +++ b/pypy/rlib/test/test_rposix.py @@ -132,14 +132,12 @@ except Exception: pass - def test_validate_fd(self): + def test_is_valid_fd(self): if os.name != 'nt': skip('relevant for windows only') - assert rposix._validate_fd(0) == 1 + assert rposix.is_valid_fd(0) == 1 fid = open(str(udir.join('validate_test.txt')), 'w') fd = fid.fileno() - assert rposix._validate_fd(fd) == 1 + assert rposix.is_valid_fd(fd) == 1 fid.close() - assert rposix._validate_fd(fd) == 0 - - + assert rposix.is_valid_fd(fd) == 0 diff --git a/pypy/rlib/test/test_rshrinklist.py b/pypy/rlib/test/test_rshrinklist.py new file mode 100644 --- /dev/null +++ b/pypy/rlib/test/test_rshrinklist.py @@ -0,0 +1,30 @@ +from 
pypy.rlib.rshrinklist import AbstractShrinkList + +class Item: + alive = True + +class ItemList(AbstractShrinkList): + def must_keep(self, x): + return x.alive + +def test_simple(): + l = ItemList() + l2 = [Item() for i in range(150)] + for x in l2: + l.append(x) + assert l.items() == l2 + # + for x in l2[::2]: + x.alive = False + l3 = [Item() for i in range(150 + 16)] + for x in l3: + l.append(x) + assert l.items() == l2[1::2] + l3 # keeps the order + +def test_append_dead_items(): + l = ItemList() + for i in range(150): + x = Item() + l.append(x) + x.alive = False + assert len(l.items()) <= 16 diff --git a/pypy/rlib/test/test_rweakkeydict.py b/pypy/rlib/test/test_rweakkeydict.py --- a/pypy/rlib/test/test_rweakkeydict.py +++ b/pypy/rlib/test/test_rweakkeydict.py @@ -135,3 +135,32 @@ d = RWeakKeyDictionary(KX, VY) d.set(KX(), VX()) py.test.raises(Exception, interpret, g, [1]) + + +def test_rpython_free_values(): + import py; py.test.skip("XXX not implemented, messy") + class VXDel: + def __del__(self): + state.freed.append(1) + class State: + pass + state = State() + state.freed = [] + # + def add_me(): + k = KX() + v = VXDel() + d = RWeakKeyDictionary(KX, VXDel) + d.set(k, v) + return d + def f(): + del state.freed[:] + d = add_me() + rgc.collect() + # we want the dictionary to be really empty here. It's hard to + # ensure in the current implementation after just one collect(), + # but at least two collects should be enough. 
+ rgc.collect() + return len(state.freed) + assert f() == 1 + assert interpret(f, []) == 1 diff --git a/pypy/rlib/test/test_rwin32.py b/pypy/rlib/test/test_rwin32.py new file mode 100644 --- /dev/null +++ b/pypy/rlib/test/test_rwin32.py @@ -0,0 +1,28 @@ +import os +if os.name != 'nt': + skip('tests for win32 only') + +from pypy.rlib import rwin32 +from pypy.tool.udir import udir + + +def test_get_osfhandle(): + fid = open(str(udir.join('validate_test.txt')), 'w') + fd = fid.fileno() + rwin32.get_osfhandle(fd) + fid.close() + raises(OSError, rwin32.get_osfhandle, fd) + rwin32.get_osfhandle(0) + +def test_get_osfhandle_raising(): + #try to test what kind of exception get_osfhandle raises w/out fd validation + skip('Crashes python') + fid = open(str(udir.join('validate_test.txt')), 'w') + fd = fid.fileno() + fid.close() + def validate_fd(fd): + return 1 + _validate_fd = rwin32.validate_fd + rwin32.validate_fd = validate_fd + raises(WindowsError, rwin32.get_osfhandle, fd) + rwin32.validate_fd = _validate_fd diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py --- a/pypy/rpython/llinterp.py +++ b/pypy/rpython/llinterp.py @@ -165,7 +165,8 @@ etype = frame.op_direct_call(exdata.fn_type_of_exc_inst, evalue) if etype == klass: return cls - raise ValueError, "couldn't match exception" + raise ValueError("couldn't match exception, maybe it" + " has RPython attributes like OSError?") def get_transformed_exc_data(self, graph): if hasattr(graph, 'exceptiontransformed'): diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py --- a/pypy/rpython/module/ll_os.py +++ b/pypy/rpython/module/ll_os.py @@ -397,6 +397,7 @@ os_dup = self.llexternal(underscore_on_windows+'dup', [rffi.INT], rffi.INT) def dup_llimpl(fd): + rposix.validate_fd(fd) newfd = rffi.cast(lltype.Signed, os_dup(rffi.cast(rffi.INT, fd))) if newfd == -1: raise OSError(rposix.get_errno(), "dup failed") @@ -411,6 +412,7 @@ [rffi.INT, rffi.INT], rffi.INT) def dup2_llimpl(fd, newfd): + 
rposix.validate_fd(fd) error = rffi.cast(lltype.Signed, os_dup2(rffi.cast(rffi.INT, fd), rffi.cast(rffi.INT, newfd))) if error == -1: @@ -891,6 +893,7 @@ def os_read_llimpl(fd, count): if count < 0: raise OSError(errno.EINVAL, None) + rposix.validate_fd(fd) raw_buf, gc_buf = rffi.alloc_buffer(count) try: void_buf = rffi.cast(rffi.VOIDP, raw_buf) @@ -916,6 +919,7 @@ def os_write_llimpl(fd, data): count = len(data) + rposix.validate_fd(fd) buf = rffi.get_nonmovingbuffer(data) try: written = rffi.cast(lltype.Signed, os_write( @@ -940,6 +944,7 @@ rffi.INT, threadsafe=False) def close_llimpl(fd): + rposix.validate_fd(fd) error = rffi.cast(lltype.Signed, os_close(rffi.cast(rffi.INT, fd))) if error == -1: raise OSError(rposix.get_errno(), "close failed") @@ -975,6 +980,7 @@ rffi.LONGLONG) def lseek_llimpl(fd, pos, how): + rposix.validate_fd(fd) how = fix_seek_arg(how) res = os_lseek(rffi.cast(rffi.INT, fd), rffi.cast(rffi.LONGLONG, pos), @@ -1000,6 +1006,7 @@ [rffi.INT, rffi.LONGLONG], rffi.INT) def ftruncate_llimpl(fd, length): + rposix.validate_fd(fd) res = rffi.cast(rffi.LONG, os_ftruncate(rffi.cast(rffi.INT, fd), rffi.cast(rffi.LONGLONG, length))) @@ -1018,6 +1025,7 @@ os_fsync = self.llexternal('_commit', [rffi.INT], rffi.INT) def fsync_llimpl(fd): + rposix.validate_fd(fd) res = rffi.cast(rffi.SIGNED, os_fsync(rffi.cast(rffi.INT, fd))) if res < 0: raise OSError(rposix.get_errno(), "fsync failed") @@ -1030,6 +1038,7 @@ os_fdatasync = self.llexternal('fdatasync', [rffi.INT], rffi.INT) def fdatasync_llimpl(fd): + rposix.validate_fd(fd) res = rffi.cast(rffi.SIGNED, os_fdatasync(rffi.cast(rffi.INT, fd))) if res < 0: raise OSError(rposix.get_errno(), "fdatasync failed") @@ -1042,6 +1051,7 @@ os_fchdir = self.llexternal('fchdir', [rffi.INT], rffi.INT) def fchdir_llimpl(fd): + rposix.validate_fd(fd) res = rffi.cast(rffi.SIGNED, os_fchdir(rffi.cast(rffi.INT, fd))) if res < 0: raise OSError(rposix.get_errno(), "fchdir failed") @@ -1357,6 +1367,7 @@ os_isatty = 
self.llexternal(underscore_on_windows+'isatty', [rffi.INT], rffi.INT) def isatty_llimpl(fd): + rposix.validate_fd(fd) res = rffi.cast(lltype.Signed, os_isatty(rffi.cast(rffi.INT, fd))) return res != 0 @@ -1534,6 +1545,7 @@ os_umask = self.llexternal(underscore_on_windows+'umask', [rffi.MODE_T], rffi.MODE_T) def umask_llimpl(fd): + rposix.validate_fd(fd) res = os_umask(rffi.cast(rffi.MODE_T, fd)) return rffi.cast(lltype.Signed, res) diff --git a/pypy/rpython/module/ll_os_stat.py b/pypy/rpython/module/ll_os_stat.py --- a/pypy/rpython/module/ll_os_stat.py +++ b/pypy/rpython/module/ll_os_stat.py @@ -402,8 +402,7 @@ lltype.free(data, flavor='raw') def win32_fstat_llimpl(fd): - handle = rwin32._get_osfhandle(fd) - + handle = rwin32.get_osfhandle(fd) filetype = win32traits.GetFileType(handle) if filetype == win32traits.FILE_TYPE_CHAR: # console or LPT device diff --git a/pypy/rpython/module/test/test_ll_os.py b/pypy/rpython/module/test/test_ll_os.py --- a/pypy/rpython/module/test/test_ll_os.py +++ b/pypy/rpython/module/test/test_ll_os.py @@ -85,8 +85,10 @@ if (len == 0) and "WINGDB_PYTHON" in os.environ: # the ctypes call seems not to work in the Wing debugger return - assert str(buf.value).lower() == pwd - # ctypes returns the drive letter in uppercase, os.getcwd does not + assert str(buf.value).lower() == pwd.lower() + # ctypes returns the drive letter in uppercase, + # os.getcwd does not, + # but there may be uppercase in os.getcwd path pwd = os.getcwd() try: @@ -188,7 +190,67 @@ OSError, ll_execve, "/etc/passwd", [], {}) assert info.value.errno == errno.EACCES +def test_os_write(): + #Same as test in rpython/test/test_rbuiltin + fname = str(udir.join('os_test.txt')) + fd = os.open(fname, os.O_WRONLY|os.O_CREAT, 0777) + assert fd >= 0 + f = getllimpl(os.write) + f(fd, 'Hello world') + os.close(fd) + with open(fname) as fid: + assert fid.read() == "Hello world" + fd = os.open(fname, os.O_WRONLY|os.O_CREAT, 0777) + os.close(fd) + raises(OSError, f, fd, 'Hello world') 
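The new `test_os_write`/`test_os_lseek`-style tests above all exercise the same contract that the added `rposix.validate_fd()` calls enforce: an operation works while the descriptor is open, and once it is closed a second call must raise OSError instead of crashing inside the C runtime. A minimal plain-CPython sketch of that test pattern (standard library only, not RPython; the helper name is mine):

```python
import os
import tempfile

def raises_oserror_after_close(op, *args):
    """Open a temp file, close its fd, then check that op(fd, *args) raises OSError."""
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"Hello world")   # the fd works while it is open
        os.close(fd)
        try:
            op(fd, *args)              # the fd is now invalid (EBADF)
        except OSError:
            return True
        return False
    finally:
        os.remove(path)

print(raises_oserror_after_close(os.write, b"x"))
print(raises_oserror_after_close(os.lseek, 0, 0))
print(raises_oserror_after_close(os.fsync))
```

Each call should print True: operating on a closed descriptor raises OSError with EBADF rather than invoking C-level undefined behavior, which is exactly what the `validate_fd` checks added to `ll_os.py` guarantee at the RPython level.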
+def test_os_close(): + fname = str(udir.join('os_test.txt')) + fd = os.open(fname, os.O_WRONLY|os.O_CREAT, 0777) + assert fd >= 0 + os.write(fd, 'Hello world') + f = getllimpl(os.close) + f(fd) + raises(OSError, f, fd) + +def test_os_lseek(): + fname = str(udir.join('os_test.txt')) + fd = os.open(fname, os.O_RDWR|os.O_CREAT, 0777) + assert fd >= 0 + os.write(fd, 'Hello world') + f = getllimpl(os.lseek) + f(fd,0,0) + assert os.read(fd, 11) == 'Hello world' + os.close(fd) + raises(OSError, f, fd, 0, 0) + +def test_os_fsync(): + fname = str(udir.join('os_test.txt')) + fd = os.open(fname, os.O_WRONLY|os.O_CREAT, 0777) + assert fd >= 0 + os.write(fd, 'Hello world') + f = getllimpl(os.fsync) + f(fd) + os.close(fd) + fid = open(fname) + assert fid.read() == 'Hello world' + fid.close() + raises(OSError, f, fd) + +def test_os_fdatasync(): + try: + f = getllimpl(os.fdatasync) + except: + skip('No fdatasync in os') + fname = str(udir.join('os_test.txt')) + fd = os.open(fname, os.O_WRONLY|os.O_CREAT, 0777) + assert fd >= 0 + os.write(fd, 'Hello world') + f(fd) + fid = open(fname) + assert fid.read() == 'Hello world' + os.close(fd) + raises(OSError, f, fd) class ExpectTestOs: def setup_class(cls): diff --git a/pypy/rpython/test/test_llinterp.py b/pypy/rpython/test/test_llinterp.py --- a/pypy/rpython/test/test_llinterp.py +++ b/pypy/rpython/test/test_llinterp.py @@ -34,7 +34,7 @@ #start = time.time() res = call(*args, **kwds) #elapsed = time.time() - start - #print "%.2f secs" %(elapsed,) + #print "%.2f secs" % (elapsed,) return res def gengraph(func, argtypes=[], viewbefore='auto', policy=None, @@ -137,9 +137,9 @@ info = py.test.raises(LLException, "interp.eval_graph(graph, values)") try: got = interp.find_exception(info.value) - except ValueError: - got = None - assert got is exc, "wrong exception type" + except ValueError as message: + got = 'None %r' % message + assert got is exc, "wrong exception type, expected %r got %r" % (exc, got) 
#__________________________________________________________________ # tests diff --git a/pypy/rpython/test/test_rbuiltin.py b/pypy/rpython/test/test_rbuiltin.py --- a/pypy/rpython/test/test_rbuiltin.py +++ b/pypy/rpython/test/test_rbuiltin.py @@ -201,6 +201,9 @@ os.close(res) hello = open(tmpdir).read() assert hello == "hello world" + fd = os.open(tmpdir, os.O_WRONLY|os.O_CREAT, 777) + os.close(fd) + raises(OSError, os.write, fd, "hello world") def test_os_write_single_char(self): tmpdir = str(udir.udir.join("os_write_test_char")) diff --git a/pypy/translator/c/src/main.h b/pypy/translator/c/src/main.h --- a/pypy/translator/c/src/main.h +++ b/pypy/translator/c/src/main.h @@ -23,10 +23,6 @@ #define PYPY_MAIN_FUNCTION main #endif -#ifdef MS_WINDOWS -#include "src/winstuff.c" -#endif - #ifdef __GNUC__ /* Hack to prevent this function from being inlined. Helps asmgcc because the main() function has often a different prologue/epilogue. */ @@ -54,10 +50,6 @@ } #endif -#ifdef MS_WINDOWS - pypy_Windows_startup(); -#endif - errmsg = RPython_StartupCode(); if (errmsg) goto error; diff --git a/pypy/translator/c/src/winstuff.c b/pypy/translator/c/src/winstuff.c deleted file mode 100644 --- a/pypy/translator/c/src/winstuff.c +++ /dev/null @@ -1,37 +0,0 @@ - -/************************************************************/ - /***** Windows-specific stuff. *****/ - - -/* copied from CPython. */ - -#if defined _MSC_VER && _MSC_VER >= 1400 && defined(__STDC_SECURE_LIB__) -/* crt variable checking in VisualStudio .NET 2005 */ -#include - -/* Invalid parameter handler. Sets a ValueError exception */ -static void -InvalidParameterHandler( - const wchar_t * expression, - const wchar_t * function, - const wchar_t * file, - unsigned int line, - uintptr_t pReserved) -{ - /* Do nothing, allow execution to continue. 
Usually this - * means that the CRT will set errno to EINVAL - */ -} -#endif - - -void pypy_Windows_startup(void) -{ -#if defined _MSC_VER && _MSC_VER >= 1400 && defined(__STDC_SECURE_LIB__) - /* Set CRT argument error handler */ - _set_invalid_parameter_handler(InvalidParameterHandler); - /* turn off assertions within CRT in debug mode; - instead just return EINVAL */ - _CrtSetReportMode(_CRT_ASSERT, 0); -#endif -} diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py --- a/pypy/translator/goal/app_main.py +++ b/pypy/translator/goal/app_main.py @@ -121,8 +121,18 @@ else: optitems = options.items() optitems.sort() - for name, value in optitems: - print ' %51s: %s' % (name, value) + current = [] + for key, value in optitems: + group = key.split('.') + name = group.pop() + n = 0 + while n < min(len(current), len(group)) and current[n] == group[n]: + n += 1 + while n < len(group): + print '%s[%s]' % (' ' * n, group[n]) + n += 1 + print '%s%s = %r' % (' ' * n, name, value) + current = group raise SystemExit def print_help(*args): diff --git a/pypy/translator/goal/test2/test_app_main.py b/pypy/translator/goal/test2/test_app_main.py --- a/pypy/translator/goal/test2/test_app_main.py +++ b/pypy/translator/goal/test2/test_app_main.py @@ -787,6 +787,37 @@ assert data.startswith("15\xe2\x82\xac") +class TestAppMain: + + def test_print_info(self): + from pypy.translator.goal import app_main + import sys, cStringIO + prev_so = sys.stdout + prev_ti = getattr(sys, 'pypy_translation_info', 'missing') + sys.pypy_translation_info = { + 'translation.foo': True, + 'translation.bar': 42, + 'translation.egg.something': None, + 'objspace.x': 'hello', + } + try: + sys.stdout = f = cStringIO.StringIO() + py.test.raises(SystemExit, app_main.print_info) + finally: + sys.stdout = prev_so + if prev_ti == 'missing': + del sys.pypy_translation_info + else: + sys.pypy_translation_info = prev_ti + assert f.getvalue() == ("[objspace]\n" + " x = 'hello'\n" + 
"[translation]\n" + " bar = 42\n" + " [egg]\n" + " something = None\n" + " foo = True\n") + + class AppTestAppMain: def setup_class(self): diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py --- a/pypy/translator/platform/__init__.py +++ b/pypy/translator/platform/__init__.py @@ -299,10 +299,11 @@ def set_platform(new_platform, cc): global platform - log.msg("Setting platform to %r cc=%s" % (new_platform,cc)) platform = pick_platform(new_platform, cc) if not platform: - raise ValueError("pick_platform failed") + raise ValueError("pick_platform(%r, %s) failed"%(new_platform, cc)) + log.msg("Set platform with %r cc=%s, using cc=%r" % (new_platform, cc, + getattr(platform, 'cc','Unknown'))) if new_platform == 'host': global host diff --git a/pypy/translator/platform/windows.py b/pypy/translator/platform/windows.py --- a/pypy/translator/platform/windows.py +++ b/pypy/translator/platform/windows.py @@ -83,13 +83,9 @@ if env is not None: return env - log.error("Could not find a Microsoft Compiler") # Assume that the compiler is already part of the environment -msvc_compiler_environ32 = find_msvc_env(False) -msvc_compiler_environ64 = find_msvc_env(True) - class MsvcPlatform(Platform): name = "msvc" so_ext = 'dll' @@ -108,10 +104,7 @@ def __init__(self, cc=None, x64=False): self.x64 = x64 - if x64: - msvc_compiler_environ = msvc_compiler_environ64 - else: - msvc_compiler_environ = msvc_compiler_environ32 + msvc_compiler_environ = find_msvc_env(x64) Platform.__init__(self, 'cl.exe') if msvc_compiler_environ: self.c_environ = os.environ.copy() From noreply at buildbot.pypy.org Thu May 10 16:46:18 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 10 May 2012 16:46:18 +0200 (CEST) Subject: [pypy-commit] pypy sanitize-finally-stack: (arigo, antocuni): a branch where to try to refactor how the valuestack is handled inside finally: block. Right now two dummy Nones values are always pushed to be popped() and ignored immediately after. 
Also, it prevents implementing POP_EXCEPT sanely on py3k Message-ID: <20120510144618.89EA9B60BB@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: sanitize-finally-stack Changeset: r55006:ca69981170e1 Date: 2012-05-10 16:42 +0200 http://bitbucket.org/pypy/pypy/changeset/ca69981170e1/ Log: (arigo, antocuni): a branch where to try to refactor how the valuestack is handled inside finally: block. Right now two dummy Nones values are always pushed to be popped() and ignored immediately after. Also, it prevents implementing POP_EXCEPT sanely on py3k diff --git a/pypy/interpreter/astcompiler/assemble.py b/pypy/interpreter/astcompiler/assemble.py --- a/pypy/interpreter/astcompiler/assemble.py +++ b/pypy/interpreter/astcompiler/assemble.py @@ -562,7 +562,7 @@ ops.WITH_CLEANUP : -1, ops.POP_BLOCK : 0, - ops.END_FINALLY : -3, + ops.END_FINALLY : -1, ops.SETUP_WITH : 1, ops.SETUP_FINALLY : 0, ops.SETUP_EXCEPT : 0, diff --git a/pypy/interpreter/astcompiler/codegen.py b/pypy/interpreter/astcompiler/codegen.py --- a/pypy/interpreter/astcompiler/codegen.py +++ b/pypy/interpreter/astcompiler/codegen.py @@ -564,7 +564,8 @@ self.visit_sequence(handler.body) self.emit_jump(ops.JUMP_FORWARD, end) self.use_next_block(next_except) - self.emit_op(ops.END_FINALLY) + self.emit_op(ops.END_FINALLY) # this END_FINALLY will always re-raise + self.is_dead_code() self.use_next_block(otherwise) self.visit_sequence(te.orelse) self.use_next_block(end) diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -801,6 +801,12 @@ return obj interp_w._annspecialcase_ = 'specialize:arg(1)' + def _check_interp_w_or_none(self, RequiredClass, w_obj): + if self.is_w(w_obj, self.w_None): + return True + obj = self.interpclass_w(w_obj) + return isinstance(obj, RequiredClass) + def unpackiterable(self, w_iterable, expected_length=-1): """Unpack an iterable object into a real (interpreter-level) list. 
Raise an OperationError(w_ValueError) if the length is wrong.""" diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -600,16 +600,28 @@ block.cleanup(self) # the block knows how to clean up the value stack def end_finally(self): - # unlike CPython, when we reach this opcode the value stack has - # always been set up as follows (topmost first): - # [exception type or None] - # [exception value or None] - # [wrapped stack unroller ] - self.popvalue() # ignore the exception type - self.popvalue() # ignore the exception value - w_unroller = self.popvalue() - unroller = self.space.interpclass_w(w_unroller) - return unroller + # unlike CPython, there are two statically distinct cases: the + # END_FINALLY might be closing an 'except' block or a 'finally' + # block. In the first case, the stack contains three items: + # [exception type we are now handling] + # [exception value we are now handling] + # [wrapped SApplicationException] + # In the case of a finally: block, the stack contains only one + # item (unlike CPython which can have 1, 2 or 3 items): + # [wrapped subclass of SuspendedUnroller] + w_top = self.popvalue() + # the following logic is a mess for the flow objspace, + # so we hide it specially in the space :-/ + if self.space._check_interp_w_or_none(SuspendedUnroller, w_top): + # case of a finally: block + unroller = self.space.interpclass_w(w_top) + return unroller + else: + # case of an except: block. 
We popped the exception type + self.popvalue() # Now we pop the exception value + unroller = self.space.interpclass_w(self.popvalue()) + assert unroller is not None + return unroller def BUILD_CLASS(self, oparg, next_instr): w_methodsdict = self.popvalue() @@ -939,17 +951,17 @@ # Implementation since 2.7a0: 62191 (introduce SETUP_WITH) or self.pycode.magic >= 0xa0df2d1): # implementation since 2.6a1: 62161 (WITH_CLEANUP optimization) - self.popvalue() - self.popvalue() + #self.popvalue() + #self.popvalue() w_unroller = self.popvalue() w_exitfunc = self.popvalue() self.pushvalue(w_unroller) - self.pushvalue(self.space.w_None) - self.pushvalue(self.space.w_None) + #self.pushvalue(self.space.w_None) + #self.pushvalue(self.space.w_None) elif self.pycode.magic >= 0xa0df28c: # Implementation since 2.5a0: 62092 (changed WITH_CLEANUP opcode) w_exitfunc = self.popvalue() - w_unroller = self.peekvalue(2) + w_unroller = self.peekvalue(0) else: raise NotImplementedError("WITH_CLEANUP for CPython <= 2.4") @@ -966,7 +978,7 @@ w_traceback) if self.space.is_true(w_suppress): # __exit__() returned True -> Swallow the exception. - self.settopvalue(self.space.w_None, 2) + self.settopvalue(self.space.w_None) else: self.call_contextmanager_exit_function( w_exitfunc, @@ -1354,8 +1366,8 @@ # here). self.cleanupstack(frame) # one None already pushed by the bytecode - frame.pushvalue(frame.space.w_None) - frame.pushvalue(frame.space.w_None) + #frame.pushvalue(frame.space.w_None) + #frame.pushvalue(frame.space.w_None) def handle(self, frame, unroller): # any abnormal reason for unrolling a finally: triggers the end of @@ -1363,8 +1375,8 @@ # see comments in cleanup(). 
self.cleanupstack(frame) frame.pushvalue(frame.space.wrap(unroller)) - frame.pushvalue(frame.space.w_None) - frame.pushvalue(frame.space.w_None) + #frame.pushvalue(frame.space.w_None) + #frame.pushvalue(frame.space.w_None) return r_uint(self.handlerposition) # jump to the handler From noreply at buildbot.pypy.org Thu May 10 17:12:22 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 10 May 2012 17:12:22 +0200 (CEST) Subject: [pypy-commit] pypy py3k: add a failing test Message-ID: <20120510151222.604EC9B60BB@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r55007:9c2f3d02ce3d Date: 2012-05-09 10:41 +0200 http://bitbucket.org/pypy/pypy/changeset/9c2f3d02ce3d/ Log: add a failing test diff --git a/pypy/interpreter/test/test_raise.py b/pypy/interpreter/test/test_raise.py --- a/pypy/interpreter/test/test_raise.py +++ b/pypy/interpreter/test/test_raise.py @@ -279,3 +279,12 @@ def __new__(cls, *args): return object() raises(TypeError, "raise MyException") + + + def test_pop_exception_value(self): + # assert that this code don't crash + for i in range(10): + try: + raise ValueError + except ValueError as e: + continue From noreply at buildbot.pypy.org Thu May 10 17:12:23 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 10 May 2012 17:12:23 +0200 (CEST) Subject: [pypy-commit] pypy py3k: generate the very same code as cpython for try/except blocks. In particular, make sure to delete the local variable which contains the exception after we exit the except: block Message-ID: <20120510151223.A80459B60BB@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r55008:f9ef9c856d38 Date: 2012-05-09 11:02 +0200 http://bitbucket.org/pypy/pypy/changeset/f9ef9c856d38/ Log: generate the very same code as cpython for try/except blocks. 
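The Python 3 semantics that this codegen change reproduces can be checked from plain Python: the name bound by `except ... as e` is unbound again when the handler exits, because CPython compiles the handler as a nested try/finally that does `e = None; del e`. A minimal sketch (ordinary Python 3, not PyPy-specific):

```python
def handler_name_is_deleted():
    try:
        raise ValueError("boom")
    except ValueError as e:
        captured = str(e)
    # CPython 3 (and, after this commit, PyPy's py3k branch) deletes the
    # handler's target name here, so 'e' no longer exists in the frame.
    return captured, 'e' in locals()

msg, still_bound = handler_name_is_deleted()
print(msg, still_bound)  # boom False
```

This is exactly the `name = None; del name` pattern spelled out in the comment added to `codegen.py`.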
In particular, make sure to delete the local variable which contains the exception after we exit the except: block diff --git a/pypy/interpreter/astcompiler/codegen.py b/pypy/interpreter/astcompiler/codegen.py --- a/pypy/interpreter/astcompiler/codegen.py +++ b/pypy/interpreter/astcompiler/codegen.py @@ -576,12 +576,50 @@ self.emit_jump(ops.POP_JUMP_IF_FALSE, next_except, True) self.emit_op(ops.POP_TOP) if handler.name: - self.name_op(handler.name, ast.Store); + ## generate the equivalent of: + ## + ## try: + ## # body + ## except type as name: + ## try: + ## # body + ## finally: + ## name = None + ## del name + # + cleanup_end = self.new_block() + self.name_op(handler.name, ast.Store) + self.emit_op(ops.POP_TOP) + # second try + self.emit_jump(ops.SETUP_FINALLY, cleanup_end) + cleanup_body = self.use_next_block() + self.push_frame_block(F_BLOCK_FINALLY, cleanup_body) + # second # body + self.visit_sequence(handler.body) + self.emit_op(ops.POP_BLOCK) + self.emit_op(ops.POP_EXCEPT) + self.pop_frame_block(F_BLOCK_FINALLY, cleanup_body) + # finally + self.load_const(self.space.w_None) + self.use_next_block(cleanup_end) + self.push_frame_block(F_BLOCK_FINALLY_END, cleanup_end) + # name = None + self.load_const(self.space.w_None) + self.name_op(handler.name, ast.Store) + # del name + self.name_op(handler.name, ast.Del) + # + self.emit_op(ops.END_FINALLY) + self.pop_frame_block(F_BLOCK_FINALLY_END, cleanup_end) else: self.emit_op(ops.POP_TOP) - self.emit_op(ops.POP_TOP) - self.visit_sequence(handler.body) - self.emit_op(ops.POP_EXCEPT) + self.emit_op(ops.POP_TOP) + cleanup_body = self.use_next_block() + self.push_frame_block(F_BLOCK_FINALLY, cleanup_body) + self.visit_sequence(handler.body) + self.emit_op(ops.POP_EXCEPT) + self.pop_frame_block(F_BLOCK_FINALLY, cleanup_body) + # self.emit_jump(ops.JUMP_FORWARD, end) self.use_next_block(next_except) self.emit_op(ops.END_FINALLY) From noreply at buildbot.pypy.org Thu May 10 17:12:25 2012 From: noreply at buildbot.pypy.org 
(antocuni) Date: Thu, 10 May 2012 17:12:25 +0200 (CEST) Subject: [pypy-commit] pypy sanitize-finally-stack: (antocuni, arigo around): implement this logic for the flow objspace, and rename it to be even more obscure so that it's clear that it should not be used generally Message-ID: <20120510151225.15B8B9B60BB@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: sanitize-finally-stack Changeset: r55009:6511d86d1c8f Date: 2012-05-10 17:05 +0200 http://bitbucket.org/pypy/pypy/changeset/6511d86d1c8f/ Log: (antocuni, arigo around): implement this logic for the flow objspace, and rename it to be even more obscure so that it's clear that it should not be used generally diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -801,7 +801,13 @@ return obj interp_w._annspecialcase_ = 'specialize:arg(1)' - def _check_interp_w_or_none(self, RequiredClass, w_obj): + def _check_constant_interp_w_or_w_None(self, RequiredClass, w_obj): + """ + This method should NOT be called unless you are really sure about + it. It is used inside the implementation of end_finally() in + pyopcode.py, and it's there so that it can be overridden by the + FlowObjSpace. 
+ """ if self.is_w(w_obj, self.w_None): return True obj = self.interpclass_w(w_obj) diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -612,7 +612,7 @@ w_top = self.popvalue() # the following logic is a mess for the flow objspace, # so we hide it specially in the space :-/ - if self.space._check_interp_w_or_none(SuspendedUnroller, w_top): + if self.space._check_constant_interp_w_or_w_None(SuspendedUnroller, w_top): # case of a finally: block unroller = self.space.interpclass_w(w_top) return unroller diff --git a/pypy/objspace/flow/objspace.py b/pypy/objspace/flow/objspace.py --- a/pypy/objspace/flow/objspace.py +++ b/pypy/objspace/flow/objspace.py @@ -204,6 +204,14 @@ return obj return None + def _check_constant_interp_w_or_w_None(self, RequiredClass, w_obj): + """ + WARNING: this implementation is not complete at all. It's just enough + to be used by end_finally() inside pyopcode.py. + """ + return w_obj == self.w_None or (isinstance(w_obj, Constant) and + isinstance(w_obj.value, RequiredClass)) + def getexecutioncontext(self): return getattr(self, 'executioncontext', None) From noreply at buildbot.pypy.org Thu May 10 19:19:11 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 10 May 2012 19:19:11 +0200 (CEST) Subject: [pypy-commit] pypy sanitize-finally-stack: close to-be-merged branch Message-ID: <20120510171911.1F6129B60BB@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: sanitize-finally-stack Changeset: r55010:4c5ee58f7d42 Date: 2012-05-10 19:17 +0200 http://bitbucket.org/pypy/pypy/changeset/4c5ee58f7d42/ Log: close to-be-merged branch From noreply at buildbot.pypy.org Thu May 10 19:19:12 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 10 May 2012 19:19:12 +0200 (CEST) Subject: [pypy-commit] pypy default: (antocuni, arigo) Message-ID: <20120510171912.8F79C9B60BB@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: 
r55011:c25300b1bfff Date: 2012-05-10 19:18 +0200 http://bitbucket.org/pypy/pypy/changeset/c25300b1bfff/ Log: (antocuni, arigo) merge the sanitize-finally-stack branch, to refactor how the valuestack is handled inside finally: block. Right now two dummy Nones values are always pushed to be popped() and ignored immediately after. Also, it prevent to implement POP_EXCEPT sanely on py3k diff --git a/pypy/interpreter/astcompiler/assemble.py b/pypy/interpreter/astcompiler/assemble.py --- a/pypy/interpreter/astcompiler/assemble.py +++ b/pypy/interpreter/astcompiler/assemble.py @@ -562,7 +562,7 @@ ops.WITH_CLEANUP : -1, ops.POP_BLOCK : 0, - ops.END_FINALLY : -3, + ops.END_FINALLY : -1, ops.SETUP_WITH : 1, ops.SETUP_FINALLY : 0, ops.SETUP_EXCEPT : 0, diff --git a/pypy/interpreter/astcompiler/codegen.py b/pypy/interpreter/astcompiler/codegen.py --- a/pypy/interpreter/astcompiler/codegen.py +++ b/pypy/interpreter/astcompiler/codegen.py @@ -564,7 +564,8 @@ self.visit_sequence(handler.body) self.emit_jump(ops.JUMP_FORWARD, end) self.use_next_block(next_except) - self.emit_op(ops.END_FINALLY) + self.emit_op(ops.END_FINALLY) # this END_FINALLY will always re-raise + self.is_dead_code() self.use_next_block(otherwise) self.visit_sequence(te.orelse) self.use_next_block(end) diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -801,6 +801,18 @@ return obj interp_w._annspecialcase_ = 'specialize:arg(1)' + def _check_constant_interp_w_or_w_None(self, RequiredClass, w_obj): + """ + This method should NOT be called unless you are really sure about + it. It is used inside the implementation of end_finally() in + pyopcode.py, and it's there so that it can be overridden by the + FlowObjSpace. 
+ """ + if self.is_w(w_obj, self.w_None): + return True + obj = self.interpclass_w(w_obj) + return isinstance(obj, RequiredClass) + def unpackiterable(self, w_iterable, expected_length=-1): """Unpack an iterable object into a real (interpreter-level) list. Raise an OperationError(w_ValueError) if the length is wrong.""" diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -600,16 +600,28 @@ block.cleanup(self) # the block knows how to clean up the value stack def end_finally(self): - # unlike CPython, when we reach this opcode the value stack has - # always been set up as follows (topmost first): - # [exception type or None] - # [exception value or None] - # [wrapped stack unroller ] - self.popvalue() # ignore the exception type - self.popvalue() # ignore the exception value - w_unroller = self.popvalue() - unroller = self.space.interpclass_w(w_unroller) - return unroller + # unlike CPython, there are two statically distinct cases: the + # END_FINALLY might be closing an 'except' block or a 'finally' + # block. In the first case, the stack contains three items: + # [exception type we are now handling] + # [exception value we are now handling] + # [wrapped SApplicationException] + # In the case of a finally: block, the stack contains only one + # item (unlike CPython which can have 1, 2 or 3 items): + # [wrapped subclass of SuspendedUnroller] + w_top = self.popvalue() + # the following logic is a mess for the flow objspace, + # so we hide it specially in the space :-/ + if self.space._check_constant_interp_w_or_w_None(SuspendedUnroller, w_top): + # case of a finally: block + unroller = self.space.interpclass_w(w_top) + return unroller + else: + # case of an except: block. 
We popped the exception type + self.popvalue() # Now we pop the exception value + unroller = self.space.interpclass_w(self.popvalue()) + assert unroller is not None + return unroller def BUILD_CLASS(self, oparg, next_instr): w_methodsdict = self.popvalue() @@ -939,17 +951,17 @@ # Implementation since 2.7a0: 62191 (introduce SETUP_WITH) or self.pycode.magic >= 0xa0df2d1): # implementation since 2.6a1: 62161 (WITH_CLEANUP optimization) - self.popvalue() - self.popvalue() + #self.popvalue() + #self.popvalue() w_unroller = self.popvalue() w_exitfunc = self.popvalue() self.pushvalue(w_unroller) - self.pushvalue(self.space.w_None) - self.pushvalue(self.space.w_None) + #self.pushvalue(self.space.w_None) + #self.pushvalue(self.space.w_None) elif self.pycode.magic >= 0xa0df28c: # Implementation since 2.5a0: 62092 (changed WITH_CLEANUP opcode) w_exitfunc = self.popvalue() - w_unroller = self.peekvalue(2) + w_unroller = self.peekvalue(0) else: raise NotImplementedError("WITH_CLEANUP for CPython <= 2.4") @@ -966,7 +978,7 @@ w_traceback) if self.space.is_true(w_suppress): # __exit__() returned True -> Swallow the exception. - self.settopvalue(self.space.w_None, 2) + self.settopvalue(self.space.w_None) else: self.call_contextmanager_exit_function( w_exitfunc, @@ -1354,8 +1366,8 @@ # here). self.cleanupstack(frame) # one None already pushed by the bytecode - frame.pushvalue(frame.space.w_None) - frame.pushvalue(frame.space.w_None) + #frame.pushvalue(frame.space.w_None) + #frame.pushvalue(frame.space.w_None) def handle(self, frame, unroller): # any abnormal reason for unrolling a finally: triggers the end of @@ -1363,8 +1375,8 @@ # see comments in cleanup(). 
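The unroller objects handled above exist because `break`, `continue` and `return` must thread through any enclosing `finally:` block before they take effect. A small app-level illustration of the behaviour the interpreter has to implement:

```python
def loop_with_finally():
    visited = []
    for i in range(5):
        try:
            if i == 2:
                break          # unwinds through the enclosing finally
        finally:
            visited.append(i)  # runs on every iteration, including the breaking one
    return visited

print(loop_with_finally())  # [0, 1, 2]
```

The `break` at `i == 2` is what gets wrapped as a `SuspendedUnroller` subclass on the value stack until `END_FINALLY` resumes it.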
self.cleanupstack(frame) frame.pushvalue(frame.space.wrap(unroller)) - frame.pushvalue(frame.space.w_None) - frame.pushvalue(frame.space.w_None) + #frame.pushvalue(frame.space.w_None) + #frame.pushvalue(frame.space.w_None) return r_uint(self.handlerposition) # jump to the handler diff --git a/pypy/objspace/flow/objspace.py b/pypy/objspace/flow/objspace.py --- a/pypy/objspace/flow/objspace.py +++ b/pypy/objspace/flow/objspace.py @@ -204,6 +204,14 @@ return obj return None + def _check_constant_interp_w_or_w_None(self, RequiredClass, w_obj): + """ + WARNING: this implementation is not complete at all. It's just enough + to be used by end_finally() inside pyopcode.py. + """ + return w_obj == self.w_None or (isinstance(w_obj, Constant) and + isinstance(w_obj.value, RequiredClass)) + def getexecutioncontext(self): return getattr(self, 'executioncontext', None) From noreply at buildbot.pypy.org Thu May 10 23:30:37 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 10 May 2012 23:30:37 +0200 (CEST) Subject: [pypy-commit] pypy py3k: hg merge default Message-ID: <20120510213037.1E8C382114@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r55012:5b636ed4f998 Date: 2012-05-10 23:13 +0200 http://bitbucket.org/pypy/pypy/changeset/5b636ed4f998/ Log: hg merge default diff --git a/lib-python/conftest.py b/lib-python/conftest.py --- a/lib-python/conftest.py +++ b/lib-python/conftest.py @@ -18,6 +18,7 @@ from pypy.tool.pytest import appsupport from pypy.tool.pytest.confpath import pypydir, testdir, testresultdir +from pypy.config.parse import parse_info pytest_plugins = "resultlog", rsyncdirs = ['.', '../pypy/'] @@ -538,8 +539,9 @@ # check modules info = py.process.cmdexec("%s --info" % execpath) + info = parse_info(info) for mod in regrtest.usemodules: - if "objspace.usemodules.%s: False" % mod in info: + if info.get('objspace.usemodules.%s' % mod) is not True: py.test.skip("%s module not included in %s" % (mod, execpath)) diff --git 
a/pypy/bin/rpython b/pypy/bin/rpython old mode 100755 new mode 100644 diff --git a/pypy/config/parse.py b/pypy/config/parse.py new file mode 100644 --- /dev/null +++ b/pypy/config/parse.py @@ -0,0 +1,55 @@ + + +def parse_info(text): + """See test_parse.py.""" + text = text.lstrip() + result = {} + if (text+':').index(':') > (text+'=').index('='): + # found a '=' before a ':' means that we have the new format + current = {0: ''} + indentation_prefix = None + for line in text.splitlines(): + line = line.rstrip() + if not line: + continue + realline = line.lstrip() + indent = len(line) - len(realline) + # + # 'indentation_prefix' is set when the previous line was a [group] + if indentation_prefix is not None: + assert indent > max(current) # missing indent? + current[indent] = indentation_prefix + indentation_prefix = None + # + else: + # in case of dedent, must kill the extra items from 'current' + for n in current.keys(): + if n > indent: + del current[n] + # + prefix = current[indent] # KeyError if bad dedent + # + if realline.startswith('[') and realline.endswith(']'): + indentation_prefix = prefix + realline[1:-1] + '.' 
+ else: + # build the whole dotted key and evaluate the value + i = realline.index(' = ') + key = prefix + realline[:i] + value = realline[i+3:] + value = eval(value, {}) + result[key] = value + # + else: + # old format + for line in text.splitlines(): + i = line.index(':') + key = line[:i].strip() + value = line[i+1:].strip() + try: + value = int(value) + except ValueError: + if value in ('True', 'False', 'None'): + value = eval(value, {}) + result[key] = value + # + return result diff --git a/pypy/config/test/test_parse.py b/pypy/config/test/test_parse.py new file mode 100644 --- /dev/null +++ b/pypy/config/test/test_parse.py @@ -0,0 +1,38 @@ +from pypy.config.parse import parse_info + + +def test_parse_new_format(): + assert (parse_info("[foo]\n" + " bar = True\n") + == {'foo.bar': True}) + + assert (parse_info("[objspace]\n" + " x = 'hello'\n" + "[translation]\n" + " bar = 42\n" + " [egg]\n" + " something = None\n" + " foo = True\n") + == { + 'translation.foo': True, + 'translation.bar': 42, + 'translation.egg.something': None, + 'objspace.x': 'hello', + }) + + assert parse_info("simple = 43\n") == {'simple': 43} + + +def test_parse_old_format(): + assert (parse_info(" objspace.allworkingmodules: True\n" + " objspace.disable_call_speedhacks: False\n" + " objspace.extmodules: None\n" + " objspace.name: std\n" + " objspace.std.prebuiltintfrom: -5\n") + == { + 'objspace.allworkingmodules': True, + 'objspace.disable_call_speedhacks': False, + 'objspace.extmodules': None, + 'objspace.name': 'std', + 'objspace.std.prebuiltintfrom': -5, + }) diff --git a/pypy/interpreter/astcompiler/assemble.py b/pypy/interpreter/astcompiler/assemble.py --- a/pypy/interpreter/astcompiler/assemble.py +++ b/pypy/interpreter/astcompiler/assemble.py @@ -547,7 +547,7 @@ ops.STORE_LOCALS : -1, ops.POP_BLOCK : 0, ops.POP_EXCEPT : 0, - ops.END_FINALLY : -3, + ops.END_FINALLY : -1, ops.SETUP_WITH : 1, ops.SETUP_FINALLY : 0, ops.SETUP_EXCEPT : 4, diff --git 
a/pypy/interpreter/astcompiler/codegen.py b/pypy/interpreter/astcompiler/codegen.py --- a/pypy/interpreter/astcompiler/codegen.py +++ b/pypy/interpreter/astcompiler/codegen.py @@ -622,7 +622,8 @@ # self.emit_jump(ops.JUMP_FORWARD, end) self.use_next_block(next_except) - self.emit_op(ops.END_FINALLY) + self.emit_op(ops.END_FINALLY) # this END_FINALLY will always re-raise + self.is_dead_code() self.use_next_block(otherwise) self.visit_sequence(te.orelse) self.use_next_block(end) diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -807,6 +807,18 @@ return obj interp_w._annspecialcase_ = 'specialize:arg(1)' + def _check_constant_interp_w_or_w_None(self, RequiredClass, w_obj): + """ + This method should NOT be called unless you are really sure about + it. It is used inside the implementation of end_finally() in + pyopcode.py, and it's there so that it can be overridden by the + FlowObjSpace. + """ + if self.is_w(w_obj, self.w_None): + return True + obj = self.interpclass_w(w_obj) + return isinstance(obj, RequiredClass) + def unpackiterable(self, w_iterable, expected_length=-1): """Unpack an iterable object into a real (interpreter-level) list. 
Raise an OperationError(w_ValueError) if the length is wrong.""" diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -543,16 +543,28 @@ block.cleanup(self) # the block knows how to clean up the value stack def end_finally(self): - # unlike CPython, when we reach this opcode the value stack has - # always been set up as follows (topmost first): - # [exception type or None] - # [exception value or None] - # [wrapped stack unroller ] - self.popvalue() # ignore the exception type - self.popvalue() # ignore the exception value - w_unroller = self.popvalue() - unroller = self.space.interpclass_w(w_unroller) - return unroller + # unlike CPython, there are two statically distinct cases: the + # END_FINALLY might be closing an 'except' block or a 'finally' + # block. In the first case, the stack contains three items: + # [exception type we are now handling] + # [exception value we are now handling] + # [wrapped SApplicationException] + # In the case of a finally: block, the stack contains only one + # item (unlike CPython which can have 1, 2 or 3 items): + # [wrapped subclass of SuspendedUnroller] + w_top = self.popvalue() + # the following logic is a mess for the flow objspace, + # so we hide it specially in the space :-/ + if self.space._check_constant_interp_w_or_w_None(SuspendedUnroller, w_top): + # case of a finally: block + unroller = self.space.interpclass_w(w_top) + return unroller + else: + # case of an except: block. 
We popped the exception type + self.popvalue() # Now we pop the exception value + unroller = self.space.interpclass_w(self.popvalue()) + assert unroller is not None + return unroller def LOAD_BUILD_CLASS(self, oparg, next_instr): w_build_class = self.get_builtin().getdictvalue( @@ -903,14 +915,9 @@ def WITH_CLEANUP(self, oparg, next_instr): # see comment in END_FINALLY for stack state # This opcode changed a lot between CPython versions - self.popvalue() - self.popvalue() w_unroller = self.popvalue() w_exitfunc = self.popvalue() self.pushvalue(w_unroller) - self.pushvalue(self.space.w_None) - self.pushvalue(self.space.w_None) - unroller = self.space.interpclass_w(w_unroller) is_app_exc = (unroller is not None and isinstance(unroller, SApplicationException)) @@ -924,7 +931,7 @@ w_traceback) if self.space.is_true(w_suppress): # __exit__() returned True -> Swallow the exception. - self.settopvalue(self.space.w_None, 2) + self.settopvalue(self.space.w_None) else: self.call_contextmanager_exit_function( w_exitfunc, @@ -1342,8 +1349,8 @@ # here). 
self.cleanupstack(frame) # one None already pushed by the bytecode - frame.pushvalue(frame.space.w_None) - frame.pushvalue(frame.space.w_None) + #frame.pushvalue(frame.space.w_None) + #frame.pushvalue(frame.space.w_None) def handle(self, frame, unroller): # any abnormal reason for unrolling a finally: triggers the end of @@ -1356,8 +1363,6 @@ if frame.space.full_exceptions: operationerr.normalize_exception(frame.space) frame.pushvalue(frame.space.wrap(unroller)) - frame.pushvalue(frame.space.w_None) - frame.pushvalue(frame.space.w_None) if operationerr and self.restore_last_exception: frame.last_exception = operationerr return r_uint(self.handlerposition) # jump to the handler diff --git a/pypy/jit/metainterp/optimizeopt/earlyforce.py b/pypy/jit/metainterp/optimizeopt/earlyforce.py --- a/pypy/jit/metainterp/optimizeopt/earlyforce.py +++ b/pypy/jit/metainterp/optimizeopt/earlyforce.py @@ -8,7 +8,8 @@ if (opnum != rop.SETFIELD_GC and opnum != rop.SETARRAYITEM_GC and opnum != rop.QUASIIMMUT_FIELD and - opnum != rop.SAME_AS): + opnum != rop.SAME_AS and + opnum != rop.MARK_OPAQUE_PTR): for arg in op.getarglist(): if arg in self.optimizer.values: diff --git a/pypy/jit/metainterp/optimizeopt/rewrite.py b/pypy/jit/metainterp/optimizeopt/rewrite.py --- a/pypy/jit/metainterp/optimizeopt/rewrite.py +++ b/pypy/jit/metainterp/optimizeopt/rewrite.py @@ -208,7 +208,7 @@ box = value.box assert isinstance(box, Const) if not box.same_constant(constbox): - raise InvalidLoop('A GURAD_{VALUE,TRUE,FALSE} was proven to' + + raise InvalidLoop('A GUARD_{VALUE,TRUE,FALSE} was proven to' + 'always fail') return if emit_operation: @@ -481,10 +481,6 @@ args = [op.getarg(0), ConstInt(highest_bit(val))]) self.emit_operation(op) - def optimize_MARK_OPAQUE_PTR(self, op): - value = self.getvalue(op.getarg(0)) - self.optimizer.opaque_pointers[value] = True - def optimize_CAST_PTR_TO_INT(self, op): self.pure(rop.CAST_INT_TO_PTR, [op.result], op.getarg(0)) self.emit_operation(op) diff --git 
a/pypy/jit/metainterp/optimizeopt/simplify.py b/pypy/jit/metainterp/optimizeopt/simplify.py --- a/pypy/jit/metainterp/optimizeopt/simplify.py +++ b/pypy/jit/metainterp/optimizeopt/simplify.py @@ -29,9 +29,6 @@ # but it's a bit hard to implement robustly if heap.py is also run pass - def optimize_MARK_OPAQUE_PTR(self, op): - pass - def optimize_RECORD_KNOWN_CLASS(self, op): pass diff --git a/pypy/jit/metainterp/test/test_loop_unroll.py b/pypy/jit/metainterp/test/test_loop_unroll.py --- a/pypy/jit/metainterp/test/test_loop_unroll.py +++ b/pypy/jit/metainterp/test/test_loop_unroll.py @@ -19,3 +19,4 @@ class TestOOtype(LoopUnrollTest, OOJitMixin): pass + diff --git a/pypy/jit/metainterp/test/test_loop_unroll_disopt.py b/pypy/jit/metainterp/test/test_loop_unroll_disopt.py new file mode 100644 --- /dev/null +++ b/pypy/jit/metainterp/test/test_loop_unroll_disopt.py @@ -0,0 +1,25 @@ +import py +from pypy.rlib.jit import JitDriver +from pypy.jit.metainterp.test import test_loop +from pypy.jit.metainterp.test.support import LLJitMixin, OOJitMixin +from pypy.jit.metainterp.optimizeopt import ALL_OPTS_NAMES + +allopts = ALL_OPTS_NAMES.split(':') +for optnum in range(len(allopts)): + myopts = allopts[:] + del myopts[optnum] + + class TestLLtype(test_loop.LoopTest, LLJitMixin): + enable_opts = ':'.join(myopts) + + def check_resops(self, *args, **kwargs): + pass + def check_trace_count(self, count): + pass + + opt = allopts[optnum] + exec "TestLoopNo%sLLtype = TestLLtype" % (opt[0].upper() + opt[1:]) + +del TestLLtype # No need to run the last set twice +del TestLoopNoUnrollLLtype # This case is take care of by test_loop + diff --git a/pypy/module/__pypy__/interp_time.py b/pypy/module/__pypy__/interp_time.py --- a/pypy/module/__pypy__/interp_time.py +++ b/pypy/module/__pypy__/interp_time.py @@ -1,5 +1,5 @@ from __future__ import with_statement -import sys +import os from pypy.interpreter.error import exception_from_errno from pypy.interpreter.gateway import unwrap_spec @@ -7,11 
+7,15 @@ from pypy.rpython.tool import rffi_platform from pypy.translator.tool.cbuild import ExternalCompilationInfo +if os.name == 'nt': + libraries = [] +else: + libraries = ["rt"] class CConfig: _compilation_info_ = ExternalCompilationInfo( includes=["time.h"], - libraries=["rt"], + libraries=libraries, ) HAS_CLOCK_GETTIME = rffi_platform.Has('clock_gettime') @@ -22,11 +26,6 @@ CLOCK_PROCESS_CPUTIME_ID = rffi_platform.DefinedConstantInteger("CLOCK_PROCESS_CPUTIME_ID") CLOCK_THREAD_CPUTIME_ID = rffi_platform.DefinedConstantInteger("CLOCK_THREAD_CPUTIME_ID") - TIMESPEC = rffi_platform.Struct("struct timespec", [ - ("tv_sec", rffi.TIME_T), - ("tv_nsec", rffi.LONG), - ]) - cconfig = rffi_platform.configure(CConfig) HAS_CLOCK_GETTIME = cconfig["HAS_CLOCK_GETTIME"] @@ -37,29 +36,36 @@ CLOCK_PROCESS_CPUTIME_ID = cconfig["CLOCK_PROCESS_CPUTIME_ID"] CLOCK_THREAD_CPUTIME_ID = cconfig["CLOCK_THREAD_CPUTIME_ID"] -TIMESPEC = cconfig["TIMESPEC"] +if HAS_CLOCK_GETTIME: + #redo it for timespec + CConfig.TIMESPEC = rffi_platform.Struct("struct timespec", [ + ("tv_sec", rffi.TIME_T), + ("tv_nsec", rffi.LONG), + ]) + cconfig = rffi_platform.configure(CConfig) + TIMESPEC = cconfig['TIMESPEC'] -c_clock_gettime = rffi.llexternal("clock_gettime", - [lltype.Signed, lltype.Ptr(TIMESPEC)], rffi.INT, - compilation_info=CConfig._compilation_info_, threadsafe=False -) -c_clock_getres = rffi.llexternal("clock_getres", - [lltype.Signed, lltype.Ptr(TIMESPEC)], rffi.INT, - compilation_info=CConfig._compilation_info_, threadsafe=False -) + c_clock_gettime = rffi.llexternal("clock_gettime", + [lltype.Signed, lltype.Ptr(TIMESPEC)], rffi.INT, + compilation_info=CConfig._compilation_info_, threadsafe=False + ) + c_clock_getres = rffi.llexternal("clock_getres", + [lltype.Signed, lltype.Ptr(TIMESPEC)], rffi.INT, + compilation_info=CConfig._compilation_info_, threadsafe=False + ) - at unwrap_spec(clk_id="c_int") -def clock_gettime(space, clk_id): - with lltype.scoped_alloc(TIMESPEC) as tp: - ret = 
c_clock_gettime(clk_id, tp) - if ret != 0: - raise exception_from_errno(space, space.w_IOError) - return space.wrap(tp.c_tv_sec + tp.c_tv_nsec * 1e-9) + @unwrap_spec(clk_id="c_int") + def clock_gettime(space, clk_id): + with lltype.scoped_alloc(TIMESPEC) as tp: + ret = c_clock_gettime(clk_id, tp) + if ret != 0: + raise exception_from_errno(space, space.w_IOError) + return space.wrap(tp.c_tv_sec + tp.c_tv_nsec * 1e-9) - at unwrap_spec(clk_id="c_int") -def clock_getres(space, clk_id): - with lltype.scoped_alloc(TIMESPEC) as tp: - ret = c_clock_getres(clk_id, tp) - if ret != 0: - raise exception_from_errno(space, space.w_IOError) - return space.wrap(tp.c_tv_sec + tp.c_tv_nsec * 1e-9) + @unwrap_spec(clk_id="c_int") + def clock_getres(space, clk_id): + with lltype.scoped_alloc(TIMESPEC) as tp: + ret = c_clock_getres(clk_id, tp) + if ret != 0: + raise exception_from_errno(space, space.w_IOError) + return space.wrap(tp.c_tv_sec + tp.c_tv_nsec * 1e-9) diff --git a/pypy/module/_socket/test/test_sock_app.py b/pypy/module/_socket/test/test_sock_app.py --- a/pypy/module/_socket/test/test_sock_app.py +++ b/pypy/module/_socket/test/test_sock_app.py @@ -627,8 +627,9 @@ cli.send(b'foobar' * 70) except timeout: pass - # test sendall() timeout - raises(timeout, cli.sendall, b'foobar' * 70) + # test sendall() timeout, be sure to send data larger than the + # socket buffer + raises(timeout, cli.sendall, b'foobar' * 7000) # done cli.close() t.close() diff --git a/pypy/module/_weakref/interp__weakref.py b/pypy/module/_weakref/interp__weakref.py --- a/pypy/module/_weakref/interp__weakref.py +++ b/pypy/module/_weakref/interp__weakref.py @@ -4,25 +4,50 @@ from pypy.interpreter.gateway import interp2app, ObjSpace from pypy.interpreter.typedef import TypeDef from pypy.rlib import jit +from pypy.rlib.rshrinklist import AbstractShrinkList +from pypy.rlib.objectmodel import specialize import weakref +class WRefShrinkList(AbstractShrinkList): + def must_keep(self, wref): + return wref() is not 
None + + class WeakrefLifeline(W_Root): - cached_weakref_index = -1 - cached_proxy_index = -1 + cached_weakref = None + cached_proxy = None + other_refs_weak = None def __init__(self, space): self.space = space - self.refs_weak = [] + + def append_wref_to(self, w_ref): + if self.other_refs_weak is None: + self.other_refs_weak = WRefShrinkList() + self.other_refs_weak.append(weakref.ref(w_ref)) + + @specialize.arg(1) + def traverse(self, callback, arg=None): + if self.cached_weakref is not None: + arg = callback(self, self.cached_weakref, arg) + if self.cached_proxy is not None: + arg = callback(self, self.cached_proxy, arg) + if self.other_refs_weak is not None: + for ref_w_ref in self.other_refs_weak.items(): + arg = callback(self, ref_w_ref, arg) + return arg + + def _clear_wref(self, wref, _): + w_ref = wref() + if w_ref is not None: + w_ref.clear() def clear_all_weakrefs(self): """Clear all weakrefs. This is called when an app-level object has a __del__, just before the app-level __del__ method is called. """ - for ref_w_ref in self.refs_weak: - w_ref = ref_w_ref() - if w_ref is not None: - w_ref.clear() + self.traverse(WeakrefLifeline._clear_wref) # Note that for no particular reason other than convenience, # weakref callbacks are not invoked eagerly here. They are # invoked by self.__del__() anyway. 
@@ -30,49 +55,46 @@ def get_or_make_weakref(self, w_subtype, w_obj): space = self.space w_weakreftype = space.gettypeobject(W_Weakref.typedef) - is_weakreftype = space.is_w(w_weakreftype, w_subtype) - if is_weakreftype and self.cached_weakref_index >= 0: - w_cached = self.refs_weak[self.cached_weakref_index]() - if w_cached is not None: - return w_cached - else: - self.cached_weakref_index = -1 - w_ref = space.allocate_instance(W_Weakref, w_subtype) - index = len(self.refs_weak) - W_Weakref.__init__(w_ref, space, w_obj, None) - self.refs_weak.append(weakref.ref(w_ref)) - if is_weakreftype: - self.cached_weakref_index = index + # + if space.is_w(w_weakreftype, w_subtype): + if self.cached_weakref is not None: + w_cached = self.cached_weakref() + if w_cached is not None: + return w_cached + w_ref = W_Weakref(space, w_obj, None) + self.cached_weakref = weakref.ref(w_ref) + else: + # subclass: cannot cache + w_ref = space.allocate_instance(W_Weakref, w_subtype) + W_Weakref.__init__(w_ref, space, w_obj, None) + self.append_wref_to(w_ref) return w_ref def get_or_make_proxy(self, w_obj): space = self.space - if self.cached_proxy_index >= 0: - w_cached = self.refs_weak[self.cached_proxy_index]() + if self.cached_proxy is not None: + w_cached = self.cached_proxy() if w_cached is not None: return w_cached - else: - self.cached_proxy_index = -1 - index = len(self.refs_weak) if space.is_true(space.callable(w_obj)): w_proxy = W_CallableProxy(space, w_obj, None) else: w_proxy = W_Proxy(space, w_obj, None) - self.refs_weak.append(weakref.ref(w_proxy)) - self.cached_proxy_index = index + self.cached_proxy = weakref.ref(w_proxy) return w_proxy def get_any_weakref(self, space): - if self.cached_weakref_index != -1: - w_ref = self.refs_weak[self.cached_weakref_index]() + if self.cached_weakref is not None: + w_ref = self.cached_weakref() if w_ref is not None: return w_ref - w_weakreftype = space.gettypeobject(W_Weakref.typedef) - for i in range(len(self.refs_weak)): - w_ref = 
self.refs_weak[i]() - if (w_ref is not None and - space.is_true(space.isinstance(w_ref, w_weakreftype))): - return w_ref + if self.other_refs_weak is not None: + w_weakreftype = space.gettypeobject(W_Weakref.typedef) + for wref in self.other_refs_weak.items(): + w_ref = wref() + if (w_ref is not None and + space.is_true(space.isinstance(w_ref, w_weakreftype))): + return w_ref return space.w_None @@ -80,10 +102,10 @@ def __init__(self, space, oldlifeline=None): self.space = space - if oldlifeline is None: - self.refs_weak = [] - else: - self.refs_weak = oldlifeline.refs_weak + if oldlifeline is not None: + self.cached_weakref = oldlifeline.cached_weakref + self.cached_proxy = oldlifeline.cached_proxy + self.other_refs_weak = oldlifeline.other_refs_weak def __del__(self): """This runs when the interp-level object goes away, and allows @@ -91,8 +113,11 @@ callbacks even if there is no __del__ method on the interp-level W_Root subclass implementing the object. """ - for i in range(len(self.refs_weak) - 1, -1, -1): - w_ref = self.refs_weak[i]() + if self.other_refs_weak is None: + return + items = self.other_refs_weak.items() + for i in range(len(items)-1, -1, -1): + w_ref = items[i]() if w_ref is not None and w_ref.w_callable is not None: w_ref.enqueue_for_destruction(self.space, W_WeakrefBase.activate_callback, @@ -102,7 +127,7 @@ space = self.space w_ref = space.allocate_instance(W_Weakref, w_subtype) W_Weakref.__init__(w_ref, space, w_obj, w_callable) - self.refs_weak.append(weakref.ref(w_ref)) + self.append_wref_to(w_ref) return w_ref def make_proxy_with_callback(self, w_obj, w_callable): @@ -111,7 +136,7 @@ w_proxy = W_CallableProxy(space, w_obj, w_callable) else: w_proxy = W_Proxy(space, w_obj, w_callable) - self.refs_weak.append(weakref.ref(w_proxy)) + self.append_wref_to(w_proxy) return w_proxy # ____________________________________________________________ @@ -247,30 +272,33 @@ ) +def _weakref_count(lifeline, wref, count): + if wref() is not None: + count += 1 
+ return count + def getweakrefcount(space, w_obj): """Return the number of weak references to 'obj'.""" lifeline = w_obj.getweakref() if lifeline is None: return space.wrap(0) else: - result = 0 - for i in range(len(lifeline.refs_weak)): - if lifeline.refs_weak[i]() is not None: - result += 1 + result = lifeline.traverse(_weakref_count, 0) return space.wrap(result) +def _get_weakrefs(lifeline, wref, result): + w_ref = wref() + if w_ref is not None: + result.append(w_ref) + return result + def getweakrefs(space, w_obj): """Return a list of all weak reference objects that point to 'obj'.""" + result = [] lifeline = w_obj.getweakref() - if lifeline is None: - return space.newlist([]) - else: - result = [] - for i in range(len(lifeline.refs_weak)): - w_ref = lifeline.refs_weak[i]() - if w_ref is not None: - result.append(w_ref) - return space.newlist(result) + if lifeline is not None: + lifeline.traverse(_get_weakrefs, result) + return space.newlist(result) #_________________________________________________________________ # Proxy diff --git a/pypy/module/cpyext/api.py b/pypy/module/cpyext/api.py --- a/pypy/module/cpyext/api.py +++ b/pypy/module/cpyext/api.py @@ -25,6 +25,7 @@ from pypy.module.__builtin__.descriptor import W_Property from pypy.module.__builtin__.interp_memoryview import W_MemoryView from pypy.rlib.entrypoint import entrypoint +from pypy.rlib.rposix import is_valid_fd, validate_fd from pypy.rlib.unroll import unrolling_iterable from pypy.rlib.objectmodel import specialize from pypy.rlib.exports import export_struct @@ -78,20 +79,39 @@ # FILE* interface FILEP = rffi.COpaquePtr('FILE') -fopen = rffi.llexternal('fopen', [CONST_STRING, CONST_STRING], FILEP) -fclose = rffi.llexternal('fclose', [FILEP], rffi.INT) -fwrite = rffi.llexternal('fwrite', - [rffi.VOIDP, rffi.SIZE_T, rffi.SIZE_T, FILEP], - rffi.SIZE_T) -fread = rffi.llexternal('fread', - [rffi.VOIDP, rffi.SIZE_T, rffi.SIZE_T, FILEP], - rffi.SIZE_T) -feof = rffi.llexternal('feof', [FILEP], rffi.INT) 
+ if sys.platform == 'win32': fileno = rffi.llexternal('_fileno', [FILEP], rffi.INT) else: fileno = rffi.llexternal('fileno', [FILEP], rffi.INT) +fopen = rffi.llexternal('fopen', [CONST_STRING, CONST_STRING], FILEP) + +_fclose = rffi.llexternal('fclose', [FILEP], rffi.INT) +def fclose(fp): + if not is_valid_fd(fileno(fp)): + return -1 + return _fclose(fp) + +_fwrite = rffi.llexternal('fwrite', + [rffi.VOIDP, rffi.SIZE_T, rffi.SIZE_T, FILEP], + rffi.SIZE_T) +def fwrite(buf, sz, n, fp): + validate_fd(fileno(fp)) + return _fwrite(buf, sz, n, fp) + +_fread = rffi.llexternal('fread', + [rffi.VOIDP, rffi.SIZE_T, rffi.SIZE_T, FILEP], + rffi.SIZE_T) +def fread(buf, sz, n, fp): + validate_fd(fileno(fp)) + return _fread(buf, sz, n, fp) + +_feof = rffi.llexternal('feof', [FILEP], rffi.INT) +def feof(fp): + validate_fd(fileno(fp)) + return _feof(fp) + constant_names = """ Py_TPFLAGS_READY Py_TPFLAGS_READYING Py_TPFLAGS_HAVE_GETCHARBUFFER diff --git a/pypy/module/math/test/test_math.py b/pypy/module/math/test/test_math.py --- a/pypy/module/math/test/test_math.py +++ b/pypy/module/math/test/test_math.py @@ -273,5 +273,6 @@ assert math.trunc(foo()) == "truncated" def test_copysign_nan(self): + skip('sign of nan is undefined') import math assert math.copysign(1.0, float('-nan')) == -1.0 diff --git a/pypy/module/micronumpy/test/test_ufuncs.py b/pypy/module/micronumpy/test/test_ufuncs.py --- a/pypy/module/micronumpy/test/test_ufuncs.py +++ b/pypy/module/micronumpy/test/test_ufuncs.py @@ -233,12 +233,15 @@ assert a[1] == 0 def test_signbit(self): - from _numpypy import signbit, copysign + from _numpypy import signbit - assert (signbit([0, 0.0, 1, 1.0, float('inf'), float('nan')]) == - [False, False, False, False, False, False]).all() - assert (signbit([-0, -0.0, -1, -1.0, float('-inf'), -float('nan'), float('-nan')]) == - [False, True, True, True, True, True, True]).all() + assert (signbit([0, 0.0, 1, 1.0, float('inf')]) == + [False, False, False, False, False]).all() + assert 
(signbit([-0, -0.0, -1, -1.0, float('-inf')]) == + [False, True, True, True, True]).all() + skip('sign of nan is non-deterministic') + assert (signbit([float('nan'), float('-nan'), -float('nan')]) == + [False, True, True]).all() def test_reciporocal(self): from _numpypy import array, reciprocal @@ -267,8 +270,8 @@ assert ([ninf, -1.0, -1.0, -1.0, 0.0, 1.0, 2.0, 1.0, inf] == ceil(a)).all() assert ([ninf, -1.0, -1.0, -1.0, 0.0, 1.0, 1.0, 0.0, inf] == trunc(a)).all() assert all([math.isnan(f(float("nan"))) for f in floor, ceil, trunc]) - assert all([math.copysign(1, f(float("nan"))) == 1 for f in floor, ceil, trunc]) - assert all([math.copysign(1, f(float("-nan"))) == -1 for f in floor, ceil, trunc]) + assert all([math.copysign(1, f(abs(float("nan")))) == 1 for f in floor, ceil, trunc]) + assert all([math.copysign(1, f(-abs(float("nan")))) == -1 for f in floor, ceil, trunc]) def test_copysign(self): from _numpypy import array, copysign diff --git a/pypy/module/mmap/test/test_mmap.py b/pypy/module/mmap/test/test_mmap.py --- a/pypy/module/mmap/test/test_mmap.py +++ b/pypy/module/mmap/test/test_mmap.py @@ -538,6 +538,8 @@ assert len(b) == 6 assert b[3] == b"b" assert b[:] == b"foobar" + m.close() + f.close() def test_offset(self): from mmap import mmap, ALLOCATIONGRANULARITY diff --git a/pypy/module/rctime/test/test_rctime.py b/pypy/module/rctime/test/test_rctime.py --- a/pypy/module/rctime/test/test_rctime.py +++ b/pypy/module/rctime/test/test_rctime.py @@ -213,6 +213,7 @@ def test_strftime(self): import time as rctime + import os t = rctime.time() tt = rctime.gmtime(t) @@ -228,6 +229,14 @@ exp = '2000 01 01 00 00 00 1 001' assert rctime.strftime("%Y %m %d %H %M %S %w %j", (0,)*9) == exp + # Guard against invalid/non-supported format strings + # so that Python doesn't crash (Windows crashes when the format string + # input to [w]strftime is not kosher).
+ if os.name == 'nt': + raises(ValueError, rctime.strftime, '%f') + else: + assert rctime.strftime('%f') == '%f' + def test_strftime_ext(self): import time as rctime diff --git a/pypy/module/signal/interp_signal.py b/pypy/module/signal/interp_signal.py --- a/pypy/module/signal/interp_signal.py +++ b/pypy/module/signal/interp_signal.py @@ -15,7 +15,8 @@ def setup(): for key, value in cpy_signal.__dict__.items(): - if key.startswith('SIG') and is_valid_int(value): + if (key.startswith('SIG') or key.startswith('CTRL_')) and \ + is_valid_int(value): globals()[key] = value yield key @@ -23,6 +24,10 @@ SIG_DFL = cpy_signal.SIG_DFL SIG_IGN = cpy_signal.SIG_IGN signal_names = list(setup()) +signal_values = [globals()[key] for key in signal_names] +signal_values = {} +for key in signal_names: + signal_values[globals()[key]] = None includes = ['stdlib.h', 'src/signals.h'] if sys.platform != 'win32': @@ -242,9 +247,11 @@ return space.w_None def check_signum(space, signum): - if signum < 1 or signum >= NSIG: - raise OperationError(space.w_ValueError, - space.wrap("signal number out of range")) + if signum in signal_values: + return + raise OperationError(space.w_ValueError, + space.wrap("invalid signal value")) + @jit.dont_look_inside @unwrap_spec(signum=int) diff --git a/pypy/module/signal/test/test_interp_signal.py b/pypy/module/signal/test/test_interp_signal.py --- a/pypy/module/signal/test/test_interp_signal.py +++ b/pypy/module/signal/test/test_interp_signal.py @@ -6,6 +6,8 @@ def setup_module(mod): if not hasattr(os, 'kill') or not hasattr(os, 'getpid'): py.test.skip("requires os.kill() and os.getpid()") + if not hasattr(interp_signal, 'SIGUSR1'): + py.test.skip("requires SIGUSR1 in signal") def check(expected): diff --git a/pypy/module/signal/test/test_signal.py b/pypy/module/signal/test/test_signal.py --- a/pypy/module/signal/test/test_signal.py +++ b/pypy/module/signal/test/test_signal.py @@ -8,6 +8,8 @@ def setup_class(cls): if not hasattr(os, 'kill') or not 
hasattr(os, 'getpid'): py.test.skip("requires os.kill() and os.getpid()") + if not hasattr(cpy_signal, 'SIGUSR1'): + py.test.skip("requires SIGUSR1 in signal") cls.space = gettestobjspace(usemodules=['signal']) def test_checksignals(self): @@ -36,8 +38,6 @@ class AppTestSignal: def setup_class(cls): - if not hasattr(os, 'kill') or not hasattr(os, 'getpid'): - py.test.skip("requires os.kill() and os.getpid()") space = gettestobjspace(usemodules=['signal']) cls.space = space cls.w_signal = space.appexec([], "(): import signal; return signal") @@ -45,64 +45,72 @@ def test_exported_names(self): self.signal.__dict__ # crashes if the interpleveldefs are invalid - def test_usr1(self): - import types, posix + def test_basics(self): + import types, os + if not hasattr(os, 'kill') or not hasattr(os, 'getpid'): + skip("requires os.kill() and os.getpid()") signal = self.signal # the signal module to test + if hasattr(signal, 'SIGUSR1'): + signum = signal.SIGUSR1 + else: + signum = signal.CTRL_BREAK_EVENT received = [] def myhandler(signum, frame): assert isinstance(frame, types.FrameType) received.append(signum) - signal.signal(signal.SIGUSR1, myhandler) + signal.signal(signum, myhandler) - posix.kill(posix.getpid(), signal.SIGUSR1) + os.kill(os.getpid(), signum) # the signal should be delivered to the handler immediately - assert received == [signal.SIGUSR1] + assert received == [signum] del received[:] - posix.kill(posix.getpid(), signal.SIGUSR1) + os.kill(os.getpid(), signum) # the signal should be delivered to the handler immediately - assert received == [signal.SIGUSR1] + assert received == [signum] del received[:] - signal.signal(signal.SIGUSR1, signal.SIG_IGN) + signal.signal(signum, signal.SIG_IGN) - posix.kill(posix.getpid(), signal.SIGUSR1) + os.kill(os.getpid(), signum) for i in range(10000): # wait a bit - signal should not arrive if received: break assert received == [] - signal.signal(signal.SIGUSR1, signal.SIG_DFL) + signal.signal(signum,
signal.SIG_DFL) def test_default_return(self): """ Test that signal.signal returns SIG_DFL if that is the current handler. """ - from signal import signal, SIGUSR1, SIG_DFL, SIG_IGN + from signal import signal, SIGINT, SIG_DFL, SIG_IGN try: for handler in SIG_DFL, SIG_IGN, lambda *a: None: - signal(SIGUSR1, SIG_DFL) - assert signal(SIGUSR1, handler) == SIG_DFL + signal(SIGINT, SIG_DFL) + assert signal(SIGINT, handler) == SIG_DFL finally: - signal(SIGUSR1, SIG_DFL) + signal(SIGINT, SIG_DFL) def test_ignore_return(self): """ Test that signal.signal returns SIG_IGN if that is the current handler. """ - from signal import signal, SIGUSR1, SIG_DFL, SIG_IGN + from signal import signal, SIGINT, SIG_DFL, SIG_IGN try: for handler in SIG_DFL, SIG_IGN, lambda *a: None: - signal(SIGUSR1, SIG_IGN) - assert signal(SIGUSR1, handler) == SIG_IGN + signal(SIGINT, SIG_IGN) + assert signal(SIGINT, handler) == SIG_IGN finally: - signal(SIGUSR1, SIG_DFL) + signal(SIGINT, SIG_DFL) def test_obj_return(self): @@ -110,43 +118,47 @@ Test that signal.signal returns a Python object if one is the current handler. """ - from signal import signal, SIGUSR1, SIG_DFL, SIG_IGN + from signal import signal, SIGINT, SIG_DFL, SIG_IGN def installed(*a): pass try: for handler in SIG_DFL, SIG_IGN, lambda *a: None: - signal(SIGUSR1, installed) - assert signal(SIGUSR1, handler) is installed + signal(SIGINT, installed) + assert signal(SIGINT, handler) is installed finally: - signal(SIGUSR1, SIG_DFL) + signal(SIGINT, SIG_DFL) def test_getsignal(self): """ Test that signal.getsignal returns the currently installed handler. 
""" - from signal import getsignal, signal, SIGUSR1, SIG_DFL, SIG_IGN + from signal import getsignal, signal, SIGINT, SIG_DFL, SIG_IGN def handler(*a): pass try: - assert getsignal(SIGUSR1) == SIG_DFL - signal(SIGUSR1, SIG_DFL) - assert getsignal(SIGUSR1) == SIG_DFL - signal(SIGUSR1, SIG_IGN) - assert getsignal(SIGUSR1) == SIG_IGN - signal(SIGUSR1, handler) - assert getsignal(SIGUSR1) is handler + assert getsignal(SIGINT) == SIG_DFL + signal(SIGINT, SIG_DFL) + assert getsignal(SIGINT) == SIG_DFL + signal(SIGINT, SIG_IGN) + assert getsignal(SIGINT) == SIG_IGN + signal(SIGINT, handler) + assert getsignal(SIGINT) is handler finally: - signal(SIGUSR1, SIG_DFL) + signal(SIGINT, SIG_DFL) raises(ValueError, getsignal, 4444) raises(ValueError, signal, 4444, lambda *args: None) + raises(ValueError, signal, 42, lambda *args: None) def test_alarm(self): - from signal import alarm, signal, SIG_DFL, SIGALRM + try: + from signal import alarm, signal, SIG_DFL, SIGALRM + except: + skip('no alarm on this platform') import time l = [] def handler(*a): @@ -163,10 +175,13 @@ signal(SIGALRM, SIG_DFL) def test_set_wakeup_fd(self): - import signal, posix, fcntl + try: + import signal, posix, fcntl + except ImportError: + skip('cannot import posix or fcntl') def myhandler(signum, frame): pass - signal.signal(signal.SIGUSR1, myhandler) + signal.signal(signal.SIGINT, myhandler) # def cannot_read(): try: @@ -187,17 +202,19 @@ old_wakeup = signal.set_wakeup_fd(fd_write) try: cannot_read() - posix.kill(posix.getpid(), signal.SIGUSR1) + posix.kill(posix.getpid(), signal.SIGINT) res = posix.read(fd_read, 1) assert res == b'\x00' cannot_read() finally: old_wakeup = signal.set_wakeup_fd(old_wakeup) # - signal.signal(signal.SIGUSR1, signal.SIG_DFL) + signal.signal(signal.SIGINT, signal.SIG_DFL) def test_siginterrupt(self): import signal, os, time + if not hasattr(signal, 'siginterrupt'): + skip('non siginterrupt in signal') signum = signal.SIGUSR1 def readpipe_is_not_interrupted(): # from CPython's 
test_signal.readpipe_interrupted() diff --git a/pypy/module/thread/__init__.py b/pypy/module/thread/__init__.py --- a/pypy/module/thread/__init__.py +++ b/pypy/module/thread/__init__.py @@ -21,7 +21,7 @@ 'allocate': 'os_lock.allocate_lock', # obsolete synonym 'LockType': 'os_lock.Lock', 'RLock': 'os_lock.W_RLock', - #'_local': 'os_local.Local', + '_local': 'os_local.Local', 'TIMEOUT_MAX': 'space.wrap(float(os_lock.TIMEOUT_MAX) / 1000000.0)', 'error': 'space.fromcache(error.Cache).w_error', } @@ -38,8 +38,3 @@ from pypy.module.posix.interp_posix import add_fork_hook from pypy.module.thread.os_thread import reinit_threads add_fork_hook('child', reinit_threads) - - def setup_after_space_initialization(self): - """NOT_RPYTHON""" - if self.space.config.translation.rweakref: - self.extra_interpdef('_local', 'os_local.Local') diff --git a/pypy/module/thread/os_local.py b/pypy/module/thread/os_local.py --- a/pypy/module/thread/os_local.py +++ b/pypy/module/thread/os_local.py @@ -1,16 +1,26 @@ -from pypy.rlib.rweakref import RWeakKeyDictionary +import weakref +from pypy.rlib import jit from pypy.interpreter.baseobjspace import Wrappable, W_Root from pypy.interpreter.executioncontext import ExecutionContext from pypy.interpreter.typedef import (TypeDef, interp2app, GetSetProperty, descr_get_dict) +from pypy.rlib.rshrinklist import AbstractShrinkList + +class WRefShrinkList(AbstractShrinkList): + def must_keep(self, wref): + return wref() is not None + + +ExecutionContext._thread_local_objs = None class Local(Wrappable): """Thread-local data""" + @jit.dont_look_inside def __init__(self, space, initargs): self.initargs = initargs - self.dicts = RWeakKeyDictionary(ExecutionContext, W_Root) + self.dicts = {} # mapping ExecutionContexts to the wrapped dict # The app-level __init__() will be called by the general # instance-creation logic. It causes getdict() to be # immediately called. If we don't prepare and set a w_dict @@ -18,26 +28,42 @@ # to call __init__() a second time.
ec = space.getexecutioncontext() w_dict = space.newdict(instance=True) - self.dicts.set(ec, w_dict) + self.dicts[ec] = w_dict + self._register_in_ec(ec) + + def _register_in_ec(self, ec): + if not ec.space.config.translation.rweakref: + return # without weakrefs, works but 'dicts' is never cleared + if ec._thread_local_objs is None: + ec._thread_local_objs = WRefShrinkList() + ec._thread_local_objs.append(weakref.ref(self)) + + @jit.dont_look_inside + def create_new_dict(self, ec): + # create a new dict for this thread + space = ec.space + w_dict = space.newdict(instance=True) + self.dicts[ec] = w_dict + # call __init__ + try: + w_self = space.wrap(self) + w_type = space.type(w_self) + w_init = space.getattr(w_type, space.wrap("__init__")) + space.call_obj_args(w_init, w_self, self.initargs) + except: + # failed, forget w_dict and propagate the exception + del self.dicts[ec] + raise + # ready + self._register_in_ec(ec) + return w_dict def getdict(self, space): ec = space.getexecutioncontext() - w_dict = self.dicts.get(ec) - if w_dict is None: - # create a new dict for this thread - w_dict = space.newdict(instance=True) - self.dicts.set(ec, w_dict) - # call __init__ - try: - w_self = space.wrap(self) - w_type = space.type(w_self) - w_init = space.getattr(w_type, space.wrap("__init__")) - space.call_obj_args(w_init, w_self, self.initargs) - except: - # failed, forget w_dict and propagate the exception - self.dicts.set(ec, None) - raise - # ready + try: + w_dict = self.dicts[ec] + except KeyError: + w_dict = self.create_new_dict(ec) return w_dict def descr_local__new__(space, w_subtype, __args__): @@ -55,3 +81,13 @@ __init__ = interp2app(Local.descr_local__init__), __dict__ = GetSetProperty(descr_get_dict, cls=Local), ) + +def thread_is_stopping(ec): + tlobjs = ec._thread_local_objs + if tlobjs is None: + return + ec._thread_local_objs = None + for wref in tlobjs.items(): + local = wref() + if local is not None: + del local.dicts[ec] diff --git 
a/pypy/module/thread/threadlocals.py b/pypy/module/thread/threadlocals.py --- a/pypy/module/thread/threadlocals.py +++ b/pypy/module/thread/threadlocals.py @@ -54,4 +54,8 @@ def leave_thread(self, space): "Notification that the current thread is about to stop." - self.setvalue(None) + from pypy.module.thread.os_local import thread_is_stopping + try: + thread_is_stopping(self.getvalue()) + finally: + self.setvalue(None) diff --git a/pypy/objspace/flow/objspace.py b/pypy/objspace/flow/objspace.py --- a/pypy/objspace/flow/objspace.py +++ b/pypy/objspace/flow/objspace.py @@ -205,6 +205,14 @@ return obj return None + def _check_constant_interp_w_or_w_None(self, RequiredClass, w_obj): + """ + WARNING: this implementation is not complete at all. It's just enough + to be used by end_finally() inside pyopcode.py. + """ + return w_obj == self.w_None or (isinstance(w_obj, Constant) and + isinstance(w_obj.value, RequiredClass)) + def getexecutioncontext(self): return getattr(self, 'executioncontext', None) diff --git a/pypy/rlib/jit.py b/pypy/rlib/jit.py --- a/pypy/rlib/jit.py +++ b/pypy/rlib/jit.py @@ -413,8 +413,8 @@ 'retrace_limit': 'how many times we can try retracing before giving up', 'max_retrace_guards': 'number of extra guards a retrace can cause', 'max_unroll_loops': 'number of extra unrollings a loop can cause', - 'enable_opts': 'INTERNAL USE ONLY: optimizations to enable, or all = %s' % - ENABLE_ALL_OPTS, + 'enable_opts': 'INTERNAL USE ONLY (MAY NOT WORK OR LEAD TO CRASHES): ' + 'optimizations to enable, or all = %s' % ENABLE_ALL_OPTS, } PARAMETERS = {'threshold': 1039, # just above 1024, prime diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -71,7 +71,7 @@ # used in tests for ctypes and for genc and friends # to handle the win64 special case: -is_emulated_long = _long_typecode <> 'l' +is_emulated_long = _long_typecode != 'l' LONG_BIT = _get_long_bit() LONG_MASK = (2**LONG_BIT)-1 diff 
--git a/pypy/rlib/rmmap.py b/pypy/rlib/rmmap.py --- a/pypy/rlib/rmmap.py +++ b/pypy/rlib/rmmap.py @@ -739,14 +739,35 @@ # assume -1 and 0 both mean invalid file descriptor # to 'anonymously' map memory. if fileno != -1 and fileno != 0: - fh = rwin32._get_osfhandle(fileno) - if fh == INVALID_HANDLE: - errno = rposix.get_errno() - raise OSError(errno, os.strerror(errno)) + fh = rwin32.get_osfhandle(fileno) # Win9x appears to need us seeked to zero # SEEK_SET = 0 # libc._lseek(fileno, 0, SEEK_SET) + # check file size + try: + low, high = _get_file_size(fh) + except OSError: + pass # ignore non-seeking files and errors and trust map_size + else: + if not high and low <= sys.maxint: + size = low + else: + # not so sure if the signed/unsigned strictness is a good idea: + high = rffi.cast(lltype.Unsigned, high) + low = rffi.cast(lltype.Unsigned, low) + size = (high << 32) + low + size = rffi.cast(lltype.Signed, size) + if map_size == 0: + if offset > size: + raise RValueError( + "mmap offset is greater than file size") + map_size = int(size - offset) + if map_size != size - offset: + raise RValueError("mmap length is too large") + elif offset + map_size > size: + raise RValueError("mmap length is greater than file size") + m = MMap(access, offset) m.file_handle = INVALID_HANDLE m.map_handle = INVALID_HANDLE diff --git a/pypy/rlib/rposix.py b/pypy/rlib/rposix.py --- a/pypy/rlib/rposix.py +++ b/pypy/rlib/rposix.py @@ -98,16 +98,16 @@ _set_errno(rffi.cast(INT, errno)) if os.name == 'nt': - _validate_fd = rffi.llexternal( + is_valid_fd = rffi.llexternal( "_PyVerify_fd", [rffi.INT], rffi.INT, compilation_info=errno_eci, ) @jit.dont_look_inside def validate_fd(fd): - if not _validate_fd(fd): + if not is_valid_fd(fd): raise OSError(get_errno(), 'Bad file descriptor') else: - def _validate_fd(fd): + def is_valid_fd(fd): return 1 def validate_fd(fd): @@ -117,7 +117,8 @@ # this behaves like os.closerange() from Python 2.6. 
for fd in xrange(fd_low, fd_high): try: - os.close(fd) + if is_valid_fd(fd): + os.close(fd) except OSError: pass diff --git a/pypy/rlib/rshrinklist.py b/pypy/rlib/rshrinklist.py new file mode 100644 --- /dev/null +++ b/pypy/rlib/rshrinklist.py @@ -0,0 +1,34 @@ + +class AbstractShrinkList(object): + """A mixin base class. You should subclass it and add a method + must_keep(). Behaves like a list with the method append(), and + you can access the list of items (read-only) by calling items(). + The twist is that occasionally append() will throw away the + items for which must_keep() returns False. (It does so without + changing the order.) + """ + _mixin_ = True + + def __init__(self): + self._list = [] + self._next_shrink = 16 + + def append(self, x): + self._do_shrink() + self._list.append(x) + + def items(self): + return self._list + + def _do_shrink(self): + if len(self._list) >= self._next_shrink: + rest = 0 + for x in self._list: + if self.must_keep(x): + self._list[rest] = x + rest += 1 + del self._list[rest:] + self._next_shrink = 16 + 2 * rest + + def must_keep(self, x): + raise NotImplementedError diff --git a/pypy/rlib/rwin32.py b/pypy/rlib/rwin32.py --- a/pypy/rlib/rwin32.py +++ b/pypy/rlib/rwin32.py @@ -8,6 +8,7 @@ from pypy.translator.platform import CompilationError from pypy.rpython.lltypesystem import lltype, rffi from pypy.rlib.rarithmetic import intmask +from pypy.rlib.rposix import validate_fd from pypy.rlib import jit import os, sys, errno @@ -78,6 +79,7 @@ for name in """FORMAT_MESSAGE_ALLOCATE_BUFFER FORMAT_MESSAGE_FROM_SYSTEM MAX_PATH WAIT_OBJECT_0 WAIT_TIMEOUT INFINITE + ERROR_INVALID_HANDLE """.split(): locals()[name] = rffi_platform.ConstantInteger(name) @@ -126,6 +128,13 @@ _get_osfhandle = rffi.llexternal('_get_osfhandle', [rffi.INT], HANDLE) + def get_osfhandle(fd): + validate_fd(fd) + handle = _get_osfhandle(fd) + if handle == INVALID_HANDLE_VALUE: + raise WindowsError(ERROR_INVALID_HANDLE, "Invalid file handle") + return handle + def
build_winerror_to_errno(): """Build a dictionary mapping windows error numbers to POSIX errno. The function returns the dict, and the default value for codes not diff --git a/pypy/rlib/streamio.py b/pypy/rlib/streamio.py --- a/pypy/rlib/streamio.py +++ b/pypy/rlib/streamio.py @@ -175,24 +175,23 @@ if sys.platform == "win32": - from pypy.rlib import rwin32 + from pypy.rlib.rwin32 import BOOL, HANDLE, get_osfhandle, GetLastError from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.rpython.lltypesystem import rffi - import errno _eci = ExternalCompilationInfo() - _get_osfhandle = rffi.llexternal('_get_osfhandle', [rffi.INT], rffi.LONG, - compilation_info=_eci) _setmode = rffi.llexternal('_setmode', [rffi.INT, rffi.INT], rffi.INT, compilation_info=_eci) - SetEndOfFile = rffi.llexternal('SetEndOfFile', [rffi.LONG], rwin32.BOOL, + SetEndOfFile = rffi.llexternal('SetEndOfFile', [HANDLE], BOOL, compilation_info=_eci) # HACK: These implementations are specific to MSVCRT and the C backend. # When generating on CLI or JVM, these are patched out. # See PyPyTarget.target() in targetpypystandalone.py def _setfd_binary(fd): - _setmode(fd, os.O_BINARY) + #Allow this to succeed on invalid fd's + if rposix.is_valid_fd(fd): + _setmode(fd, os.O_BINARY) def ftruncate_win32(fd, size): curpos = os.lseek(fd, 0, 1) @@ -200,11 +199,9 @@ # move to the position to be truncated os.lseek(fd, size, 0) # Truncate. Note that this may grow the file! 
- handle = _get_osfhandle(fd) - if handle == -1: - raise OSError(errno.EBADF, "Invalid file handle") + handle = get_osfhandle(fd) if not SetEndOfFile(handle): - raise WindowsError(rwin32.GetLastError(), + raise WindowsError(GetLastError(), "Could not truncate file") finally: # we restore the file pointer position in any case diff --git a/pypy/rlib/test/test_rmmap.py b/pypy/rlib/test/test_rmmap.py --- a/pypy/rlib/test/test_rmmap.py +++ b/pypy/rlib/test/test_rmmap.py @@ -33,8 +33,6 @@ interpret(f, []) def test_file_size(self): - if os.name == "nt": - skip("Only Unix checks file size") def func(no): try: @@ -433,15 +431,16 @@ def test_windows_crasher_1(self): if sys.platform != "win32": skip("Windows-only test") - - m = mmap.mmap(-1, 1000, tagname="foo") - # same tagname, but larger size - try: - m2 = mmap.mmap(-1, 5000, tagname="foo") - m2.getitem(4500) - except WindowsError: - pass - m.close() + def func(): + m = mmap.mmap(-1, 1000, tagname="foo") + # same tagname, but larger size + try: + m2 = mmap.mmap(-1, 5000, tagname="foo") + m2.getitem(4500) + except WindowsError: + pass + m.close() + interpret(func, []) def test_windows_crasher_2(self): if sys.platform != "win32": diff --git a/pypy/rlib/test/test_rposix.py b/pypy/rlib/test/test_rposix.py --- a/pypy/rlib/test/test_rposix.py +++ b/pypy/rlib/test/test_rposix.py @@ -132,14 +132,12 @@ except Exception: pass - def test_validate_fd(self): + def test_is_valid_fd(self): if os.name != 'nt': skip('relevant for windows only') - assert rposix._validate_fd(0) == 1 + assert rposix.is_valid_fd(0) == 1 fid = open(str(udir.join('validate_test.txt')), 'w') fd = fid.fileno() - assert rposix._validate_fd(fd) == 1 + assert rposix.is_valid_fd(fd) == 1 fid.close() - assert rposix._validate_fd(fd) == 0 - - + assert rposix.is_valid_fd(fd) == 0 diff --git a/pypy/rlib/test/test_rshrinklist.py b/pypy/rlib/test/test_rshrinklist.py new file mode 100644 --- /dev/null +++ b/pypy/rlib/test/test_rshrinklist.py @@ -0,0 +1,30 @@ +from 
pypy.rlib.rshrinklist import AbstractShrinkList + +class Item: + alive = True + +class ItemList(AbstractShrinkList): + def must_keep(self, x): + return x.alive + +def test_simple(): + l = ItemList() + l2 = [Item() for i in range(150)] + for x in l2: + l.append(x) + assert l.items() == l2 + # + for x in l2[::2]: + x.alive = False + l3 = [Item() for i in range(150 + 16)] + for x in l3: + l.append(x) + assert l.items() == l2[1::2] + l3 # keeps the order + +def test_append_dead_items(): + l = ItemList() + for i in range(150): + x = Item() + l.append(x) + x.alive = False + assert len(l.items()) <= 16 diff --git a/pypy/rlib/test/test_rweakkeydict.py b/pypy/rlib/test/test_rweakkeydict.py --- a/pypy/rlib/test/test_rweakkeydict.py +++ b/pypy/rlib/test/test_rweakkeydict.py @@ -135,3 +135,32 @@ d = RWeakKeyDictionary(KX, VY) d.set(KX(), VX()) py.test.raises(Exception, interpret, g, [1]) + + +def test_rpython_free_values(): + import py; py.test.skip("XXX not implemented, messy") + class VXDel: + def __del__(self): + state.freed.append(1) + class State: + pass + state = State() + state.freed = [] + # + def add_me(): + k = KX() + v = VXDel() + d = RWeakKeyDictionary(KX, VXDel) + d.set(k, v) + return d + def f(): + del state.freed[:] + d = add_me() + rgc.collect() + # we want the dictionary to be really empty here. It's hard to + # ensure in the current implementation after just one collect(), + # but at least two collects should be enough. 
+        rgc.collect()
+        return len(state.freed)
+    assert f() == 1
+    assert interpret(f, []) == 1
diff --git a/pypy/rlib/test/test_rwin32.py b/pypy/rlib/test/test_rwin32.py
new file mode 100644
--- /dev/null
+++ b/pypy/rlib/test/test_rwin32.py
@@ -0,0 +1,28 @@
+import os
+if os.name != 'nt':
+    skip('tests for win32 only')
+
+from pypy.rlib import rwin32
+from pypy.tool.udir import udir
+
+
+def test_get_osfhandle():
+    fid = open(str(udir.join('validate_test.txt')), 'w')
+    fd = fid.fileno()
+    rwin32.get_osfhandle(fd)
+    fid.close()
+    raises(OSError, rwin32.get_osfhandle, fd)
+    rwin32.get_osfhandle(0)
+
+def test_get_osfhandle_raising():
+    #try to test what kind of exception get_osfhandle raises w/out fd validation
+    skip('Crashes python')
+    fid = open(str(udir.join('validate_test.txt')), 'w')
+    fd = fid.fileno()
+    fid.close()
+    def validate_fd(fd):
+        return 1
+    _validate_fd = rwin32.validate_fd
+    rwin32.validate_fd = validate_fd
+    raises(WindowsError, rwin32.get_osfhandle, fd)
+    rwin32.validate_fd = _validate_fd
diff --git a/pypy/rpython/llinterp.py b/pypy/rpython/llinterp.py
--- a/pypy/rpython/llinterp.py
+++ b/pypy/rpython/llinterp.py
@@ -152,7 +152,8 @@
             etype = frame.op_direct_call(exdata.fn_type_of_exc_inst, evalue)
             if etype == klass:
                 return cls
-        raise ValueError, "couldn't match exception"
+        raise ValueError("couldn't match exception, maybe it"
+                         " has RPython attributes like OSError?")
 
     def get_transformed_exc_data(self, graph):
         if hasattr(graph, 'exceptiontransformed'):
diff --git a/pypy/rpython/module/ll_os.py b/pypy/rpython/module/ll_os.py
--- a/pypy/rpython/module/ll_os.py
+++ b/pypy/rpython/module/ll_os.py
@@ -397,6 +397,7 @@
         os_dup = self.llexternal(underscore_on_windows+'dup', [rffi.INT], rffi.INT)
 
         def dup_llimpl(fd):
+            rposix.validate_fd(fd)
             newfd = rffi.cast(lltype.Signed, os_dup(rffi.cast(rffi.INT, fd)))
             if newfd == -1:
                 raise OSError(rposix.get_errno(), "dup failed")
@@ -411,6 +412,7 @@
                                   [rffi.INT, rffi.INT], rffi.INT)
 
         def dup2_llimpl(fd, newfd):
+            rposix.validate_fd(fd)
             error = rffi.cast(lltype.Signed, os_dup2(rffi.cast(rffi.INT, fd),
                                                      rffi.cast(rffi.INT, newfd)))
             if error == -1:
@@ -891,6 +893,7 @@
        def os_read_llimpl(fd, count):
            if count < 0:
                raise OSError(errno.EINVAL, None)
+           rposix.validate_fd(fd)
            raw_buf, gc_buf = rffi.alloc_buffer(count)
            try:
                void_buf = rffi.cast(rffi.VOIDP, raw_buf)
@@ -916,6 +919,7 @@
        def os_write_llimpl(fd, data):
            count = len(data)
+           rposix.validate_fd(fd)
            buf = rffi.get_nonmovingbuffer(data)
            try:
                written = rffi.cast(lltype.Signed, os_write(
@@ -940,6 +944,7 @@
                                   rffi.INT, threadsafe=False)
 
        def close_llimpl(fd):
+           rposix.validate_fd(fd)
            error = rffi.cast(lltype.Signed, os_close(rffi.cast(rffi.INT, fd)))
            if error == -1:
                raise OSError(rposix.get_errno(), "close failed")
@@ -975,6 +980,7 @@
                                   rffi.LONGLONG)
 
        def lseek_llimpl(fd, pos, how):
+           rposix.validate_fd(fd)
            how = fix_seek_arg(how)
            res = os_lseek(rffi.cast(rffi.INT, fd),
                           rffi.cast(rffi.LONGLONG, pos),
@@ -1000,6 +1006,7 @@
                                       [rffi.INT, rffi.LONGLONG], rffi.INT)
 
        def ftruncate_llimpl(fd, length):
+           rposix.validate_fd(fd)
            res = rffi.cast(rffi.LONG,
                            os_ftruncate(rffi.cast(rffi.INT, fd),
                                         rffi.cast(rffi.LONGLONG, length)))
@@ -1018,6 +1025,7 @@
            os_fsync = self.llexternal('_commit', [rffi.INT], rffi.INT)
 
        def fsync_llimpl(fd):
+           rposix.validate_fd(fd)
            res = rffi.cast(rffi.SIGNED, os_fsync(rffi.cast(rffi.INT, fd)))
            if res < 0:
                raise OSError(rposix.get_errno(), "fsync failed")
@@ -1030,6 +1038,7 @@
        os_fdatasync = self.llexternal('fdatasync', [rffi.INT], rffi.INT)
 
        def fdatasync_llimpl(fd):
+           rposix.validate_fd(fd)
            res = rffi.cast(rffi.SIGNED, os_fdatasync(rffi.cast(rffi.INT, fd)))
            if res < 0:
                raise OSError(rposix.get_errno(), "fdatasync failed")
@@ -1042,6 +1051,7 @@
        os_fchdir = self.llexternal('fchdir', [rffi.INT], rffi.INT)
 
        def fchdir_llimpl(fd):
+           rposix.validate_fd(fd)
            res = rffi.cast(rffi.SIGNED, os_fchdir(rffi.cast(rffi.INT, fd)))
            if res < 0:
                raise OSError(rposix.get_errno(), "fchdir failed")
@@ -1357,6 +1367,7 @@
        os_isatty = self.llexternal(underscore_on_windows+'isatty', [rffi.INT], rffi.INT)
 
        def isatty_llimpl(fd):
+           rposix.validate_fd(fd)
            res = rffi.cast(lltype.Signed, os_isatty(rffi.cast(rffi.INT, fd)))
            return res != 0
@@ -1534,6 +1545,7 @@
        os_umask = self.llexternal(underscore_on_windows+'umask', [rffi.MODE_T], rffi.MODE_T)
 
        def umask_llimpl(fd):
+           rposix.validate_fd(fd)
            res = os_umask(rffi.cast(rffi.MODE_T, fd))
            return rffi.cast(lltype.Signed, res)
diff --git a/pypy/rpython/module/ll_os_stat.py b/pypy/rpython/module/ll_os_stat.py
--- a/pypy/rpython/module/ll_os_stat.py
+++ b/pypy/rpython/module/ll_os_stat.py
@@ -402,8 +402,7 @@
            lltype.free(data, flavor='raw')
 
    def win32_fstat_llimpl(fd):
-       handle = rwin32._get_osfhandle(fd)
-
+       handle = rwin32.get_osfhandle(fd)
        filetype = win32traits.GetFileType(handle)
        if filetype == win32traits.FILE_TYPE_CHAR:
            # console or LPT device
diff --git a/pypy/rpython/module/test/test_ll_os.py b/pypy/rpython/module/test/test_ll_os.py
--- a/pypy/rpython/module/test/test_ll_os.py
+++ b/pypy/rpython/module/test/test_ll_os.py
@@ -85,8 +85,10 @@
    if (len == 0) and "WINGDB_PYTHON" in os.environ:
        # the ctypes call seems not to work in the Wing debugger
        return
-   assert str(buf.value).lower() == pwd
-   # ctypes returns the drive letter in uppercase, os.getcwd does not
+   assert str(buf.value).lower() == pwd.lower()
+   # ctypes returns the drive letter in uppercase,
+   # os.getcwd does not,
+   # but there may be uppercase in os.getcwd path
 
 pwd = os.getcwd()
 try:
@@ -188,7 +190,67 @@
                         OSError, ll_execve, "/etc/passwd", [], {})
    assert info.value.errno == errno.EACCES
 
+def test_os_write():
+    #Same as test in rpython/test/test_rbuiltin
+    fname = str(udir.join('os_test.txt'))
+    fd = os.open(fname, os.O_WRONLY|os.O_CREAT, 0777)
+    assert fd >= 0
+    f = getllimpl(os.write)
+    f(fd, 'Hello world')
+    os.close(fd)
+    with open(fname) as fid:
+        assert fid.read() == "Hello world"
+    fd = os.open(fname, os.O_WRONLY|os.O_CREAT, 0777)
+    os.close(fd)
+    raises(OSError, f, fd, 'Hello world')
+def test_os_close():
+    fname = str(udir.join('os_test.txt'))
+    fd = os.open(fname, os.O_WRONLY|os.O_CREAT, 0777)
+    assert fd >= 0
+    os.write(fd, 'Hello world')
+    f = getllimpl(os.close)
+    f(fd)
+    raises(OSError, f, fd)
+
+def test_os_lseek():
+    fname = str(udir.join('os_test.txt'))
+    fd = os.open(fname, os.O_RDWR|os.O_CREAT, 0777)
+    assert fd >= 0
+    os.write(fd, 'Hello world')
+    f = getllimpl(os.lseek)
+    f(fd,0,0)
+    assert os.read(fd, 11) == 'Hello world'
+    os.close(fd)
+    raises(OSError, f, fd, 0, 0)
+
+def test_os_fsync():
+    fname = str(udir.join('os_test.txt'))
+    fd = os.open(fname, os.O_WRONLY|os.O_CREAT, 0777)
+    assert fd >= 0
+    os.write(fd, 'Hello world')
+    f = getllimpl(os.fsync)
+    f(fd)
+    os.close(fd)
+    fid = open(fname)
+    assert fid.read() == 'Hello world'
+    fid.close()
+    raises(OSError, f, fd)
+
+def test_os_fdatasync():
+    try:
+        f = getllimpl(os.fdatasync)
+    except:
+        skip('No fdatasync in os')
+    fname = str(udir.join('os_test.txt'))
+    fd = os.open(fname, os.O_WRONLY|os.O_CREAT, 0777)
+    assert fd >= 0
+    os.write(fd, 'Hello world')
+    f(fd)
+    fid = open(fname)
+    assert fid.read() == 'Hello world'
+    os.close(fd)
+    raises(OSError, f, fd)
 
 class ExpectTestOs:
     def setup_class(cls):
diff --git a/pypy/rpython/test/test_llinterp.py b/pypy/rpython/test/test_llinterp.py
--- a/pypy/rpython/test/test_llinterp.py
+++ b/pypy/rpython/test/test_llinterp.py
@@ -34,7 +34,7 @@
     #start = time.time()
     res = call(*args, **kwds)
     #elapsed = time.time() - start
-    #print "%.2f secs" %(elapsed,)
+    #print "%.2f secs" % (elapsed,)
     return res
 
 def gengraph(func, argtypes=[], viewbefore='auto', policy=None,
@@ -137,9 +137,9 @@
     info = py.test.raises(LLException, "interp.eval_graph(graph, values)")
     try:
         got = interp.find_exception(info.value)
-    except ValueError:
-        got = None
-    assert got is exc, "wrong exception type"
+    except ValueError as message:
+        got = 'None %r' % message
+    assert got is exc, "wrong exception type, expected %r got %r" % (exc, got)
 #__________________________________________________________________
 # tests
diff --git a/pypy/rpython/test/test_rbuiltin.py b/pypy/rpython/test/test_rbuiltin.py
--- a/pypy/rpython/test/test_rbuiltin.py
+++ b/pypy/rpython/test/test_rbuiltin.py
@@ -201,6 +201,9 @@
         os.close(res)
         hello = open(tmpdir).read()
         assert hello == "hello world"
+        fd = os.open(tmpdir, os.O_WRONLY|os.O_CREAT, 777)
+        os.close(fd)
+        raises(OSError, os.write, fd, "hello world")
 
     def test_os_write_single_char(self):
         tmpdir = str(udir.udir.join("os_write_test_char"))
diff --git a/pypy/translator/c/src/main.h b/pypy/translator/c/src/main.h
--- a/pypy/translator/c/src/main.h
+++ b/pypy/translator/c/src/main.h
@@ -19,10 +19,6 @@
 #define PYPY_MAIN_FUNCTION main
 #endif
 
-#ifdef MS_WINDOWS
-#include "src/winstuff.c"
-#endif
-
 #ifdef __GNUC__
 /* Hack to prevent this function from being inlined. Helps asmgcc
    because the main() function has often a different prologue/epilogue. */
@@ -50,10 +46,6 @@
     }
 #endif
 
-#ifdef MS_WINDOWS
-    pypy_Windows_startup();
-#endif
-
     errmsg = RPython_StartupCode();
     if (errmsg) goto error;
 
diff --git a/pypy/translator/c/src/winstuff.c b/pypy/translator/c/src/winstuff.c
deleted file mode 100644
--- a/pypy/translator/c/src/winstuff.c
+++ /dev/null
@@ -1,37 +0,0 @@
-
-/************************************************************/
- /*****  Windows-specific stuff.  *****/
-
-
-/* copied from CPython. */
-
-#if defined _MSC_VER && _MSC_VER >= 1400 && defined(__STDC_SECURE_LIB__)
-/* crt variable checking in VisualStudio .NET 2005 */
-#include
-
-/* Invalid parameter handler.  Sets a ValueError exception */
-static void
-InvalidParameterHandler(
-    const wchar_t * expression,
-    const wchar_t * function,
-    const wchar_t * file,
-    unsigned int line,
-    uintptr_t pReserved)
-{
-    /* Do nothing, allow execution to continue.  Usually this
-     * means that the CRT will set errno to EINVAL
-     */
-}
-#endif
-
-
-void pypy_Windows_startup(void)
-{
-#if defined _MSC_VER && _MSC_VER >= 1400 && defined(__STDC_SECURE_LIB__)
-    /* Set CRT argument error handler */
-    _set_invalid_parameter_handler(InvalidParameterHandler);
-    /* turn off assertions within CRT in debug mode;
-       instead just return EINVAL */
-    _CrtSetReportMode(_CRT_ASSERT, 0);
-#endif
-}
diff --git a/pypy/translator/goal/app_main.py b/pypy/translator/goal/app_main.py
--- a/pypy/translator/goal/app_main.py
+++ b/pypy/translator/goal/app_main.py
@@ -122,8 +122,18 @@
     else:
         optitems = list(options.items())
         optitems.sort()
-        for name, value in optitems:
-            print(' %51s: %s' % (name, value))
+        current = []
+        for key, value in optitems:
+            group = key.split('.')
+            name = group.pop()
+            n = 0
+            while n < min(len(current), len(group)) and current[n] == group[n]:
+                n += 1
+            while n < len(group):
+                print('%s[%s]' % (' ' * n, group[n]))
+                n += 1
+            print('%s%s = %r' % (' ' * n, name, value))
+            current = group
     raise SystemExit
 
 def print_help(*args):
diff --git a/pypy/translator/goal/test2/test_app_main.py b/pypy/translator/goal/test2/test_app_main.py
--- a/pypy/translator/goal/test2/test_app_main.py
+++ b/pypy/translator/goal/test2/test_app_main.py
@@ -790,6 +790,37 @@
     assert data.startswith("15\xe2\x82\xac")
 
 
+class TestAppMain:
+
+    def test_print_info(self):
+        from pypy.translator.goal import app_main
+        import sys, cStringIO
+        prev_so = sys.stdout
+        prev_ti = getattr(sys, 'pypy_translation_info', 'missing')
+        sys.pypy_translation_info = {
+            'translation.foo': True,
+            'translation.bar': 42,
+            'translation.egg.something': None,
+            'objspace.x': 'hello',
+        }
+        try:
+            sys.stdout = f = cStringIO.StringIO()
+            py.test.raises(SystemExit, app_main.print_info)
+        finally:
+            sys.stdout = prev_so
+            if prev_ti == 'missing':
+                del sys.pypy_translation_info
+            else:
+                sys.pypy_translation_info = prev_ti
+        assert f.getvalue() == ("[objspace]\n"
+                                " x = 'hello'\n"
+                                "[translation]\n"
+                                " bar = 42\n"
+                                " [egg]\n"
+                                " something = None\n"
+                                " foo = True\n")
+
+
 class AppTestAppMain:
 
     def setup_class(self):
diff --git a/pypy/translator/platform/__init__.py b/pypy/translator/platform/__init__.py
--- a/pypy/translator/platform/__init__.py
+++ b/pypy/translator/platform/__init__.py
@@ -299,10 +299,11 @@
 
 def set_platform(new_platform, cc):
     global platform
-    log.msg("Setting platform to %r cc=%s" % (new_platform,cc))
     platform = pick_platform(new_platform, cc)
     if not platform:
-        raise ValueError("pick_platform failed")
+        raise ValueError("pick_platform(%r, %s) failed"%(new_platform, cc))
+    log.msg("Set platform with %r cc=%s, using cc=%r" % (new_platform, cc,
+                    getattr(platform, 'cc','Unknown')))
 
     if new_platform == 'host':
         global host
diff --git a/pypy/translator/platform/windows.py b/pypy/translator/platform/windows.py
--- a/pypy/translator/platform/windows.py
+++ b/pypy/translator/platform/windows.py
@@ -83,13 +83,9 @@
         if env is not None:
             return env
 
-    log.error("Could not find a Microsoft Compiler")
     # Assume that the compiler is already part of the environment
 
-msvc_compiler_environ32 = find_msvc_env(False)
-msvc_compiler_environ64 = find_msvc_env(True)
-
 class MsvcPlatform(Platform):
     name = "msvc"
     so_ext = 'dll'
@@ -108,10 +104,7 @@
 
     def __init__(self, cc=None, x64=False):
         self.x64 = x64
-        if x64:
-            msvc_compiler_environ = msvc_compiler_environ64
-        else:
-            msvc_compiler_environ = msvc_compiler_environ32
+        msvc_compiler_environ = find_msvc_env(x64)
         Platform.__init__(self, 'cl.exe')
         if msvc_compiler_environ:
             self.c_environ = os.environ.copy()

From noreply at buildbot.pypy.org  Thu May 10 23:30:38 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Thu, 10 May 2012 23:30:38 +0200 (CEST)
Subject: [pypy-commit] pypy default: kill commented-out lines, and a
	FinallyBlock.cleanup, which now no longer needs to be overridden
Message-ID: <20120510213038.6D46982115@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: 
Changeset:
r55013:8021ff42995c
Date: 2012-05-10 23:16 +0200
http://bitbucket.org/pypy/pypy/changeset/8021ff42995c/

Log:	kill commented-out lines, and a FinallyBlock.cleanup, which now no
	longer needs to be overridden

diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py
--- a/pypy/interpreter/pyopcode.py
+++ b/pypy/interpreter/pyopcode.py
@@ -951,13 +951,9 @@
             # Implementation since 2.7a0: 62191 (introduce SETUP_WITH)
               or self.pycode.magic >= 0xa0df2d1):
             # implementation since 2.6a1: 62161 (WITH_CLEANUP optimization)
-            #self.popvalue()
-            #self.popvalue()
             w_unroller = self.popvalue()
             w_exitfunc = self.popvalue()
             self.pushvalue(w_unroller)
-            #self.pushvalue(self.space.w_None)
-            #self.pushvalue(self.space.w_None)
         elif self.pycode.magic >= 0xa0df28c:
             # Implementation since 2.5a0: 62092 (changed WITH_CLEANUP opcode)
             w_exitfunc = self.popvalue()
@@ -1358,25 +1354,12 @@
     _opname = 'SETUP_FINALLY'
     handling_mask = -1     # handles every kind of SuspendedUnroller
 
-    def cleanup(self, frame):
-        # upon normal entry into the finally: part, the standard Python
-        # bytecode pushes a single None for END_FINALLY.  In our case we
-        # always push three values into the stack: the wrapped ctlflowexc,
-        # the exception value and the exception type (which are all None
-        # here).
-        self.cleanupstack(frame)
-        # one None already pushed by the bytecode
-        #frame.pushvalue(frame.space.w_None)
-        #frame.pushvalue(frame.space.w_None)
-
     def handle(self, frame, unroller):
         # any abnormal reason for unrolling a finally: triggers the end of
         # the block unrolling and the entering the finally: handler.
         # see comments in cleanup().
self.cleanupstack(frame) frame.pushvalue(frame.space.wrap(unroller)) - #frame.pushvalue(frame.space.w_None) - #frame.pushvalue(frame.space.w_None) return r_uint(self.handlerposition) # jump to the handler From noreply at buildbot.pypy.org Thu May 10 23:30:39 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 10 May 2012 23:30:39 +0200 (CEST) Subject: [pypy-commit] pypy py3k: hg merge default (again) Message-ID: <20120510213039.CFAC182116@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r55014:219a3ed423e6 Date: 2012-05-10 23:18 +0200 http://bitbucket.org/pypy/pypy/changeset/219a3ed423e6/ Log: hg merge default (again) diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -1341,17 +1341,6 @@ handling_mask = -1 # handles every kind of SuspendedUnroller restore_last_exception = True # set to False by WithBlock - def cleanup(self, frame): - # upon normal entry into the finally: part, the standard Python - # bytecode pushes a single None for END_FINALLY. In our case we - # always push three values into the stack: the wrapped ctlflowexc, - # the exception value and the exception type (which are all None - # here). - self.cleanupstack(frame) - # one None already pushed by the bytecode - #frame.pushvalue(frame.space.w_None) - #frame.pushvalue(frame.space.w_None) - def handle(self, frame, unroller): # any abnormal reason for unrolling a finally: triggers the end of # the block unrolling and the entering the finally: handler. From noreply at buildbot.pypy.org Thu May 10 23:30:41 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 10 May 2012 23:30:41 +0200 (CEST) Subject: [pypy-commit] pypy py3k: properly implement POP_EXCEPT as a block, instead of popping the last_exception from the stack directly in the implementation of the opcode. This fixes test_pop_exception_value. 
Took two days to hunt&fix :-( Message-ID: <20120510213041.2652482118@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: py3k Changeset: r55015:6c1827662c80 Date: 2012-05-10 23:30 +0200 http://bitbucket.org/pypy/pypy/changeset/6c1827662c80/ Log: properly implement POP_EXCEPT as a block, instead of popping the last_exception from the stack directly in the implementation of the opcode. This fixes test_pop_exception_value. Took two days to hunt&fix :-( diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py --- a/pypy/interpreter/pyopcode.py +++ b/pypy/interpreter/pyopcode.py @@ -529,14 +529,9 @@ def POP_EXCEPT(self, oparg, next_instr): assert self.space.py3k - # on CPython, POP_EXCEPT also pops the block. Here, the block is - # automatically popped by unrollstack() - w_last_exception = self.popvalue() - if not isinstance(w_last_exception, W_OperationError): - msg = "expected an OperationError, got %s" % ( - self.space.str_w(w_last_exception)) - raise BytecodeCorruption(msg) - self.last_exception = w_last_exception.operr + block = self.pop_block() + block.cleanup(self) + return def POP_BLOCK(self, oparg, next_instr): block = self.pop_block() @@ -1303,6 +1298,30 @@ return r_uint(self.handlerposition) +class ExceptHandlerBlock(FrameBlock): + """ + This is a special, implicit block type which is created when entering an + except handler. 
It does not belong to any opcode + """ + + _immutable_ = True + _opname = 'EXCEPT_HANDLER_BLOCK' # it's not associated to any opcode + handling_mask = 0 # this block is never handled, only popped by POP_EXCEPT + + def handle(self, frame, unroller): + assert False # never called + + def cleanup(self, frame): + frame.dropvaluesuntil(self.valuestackdepth+1) + w_last_exception = frame.popvalue() + if not isinstance(w_last_exception, W_OperationError): + msg = "expected an OperationError, got %s" % ( + frame.space.str_w(w_last_exception)) + raise BytecodeCorruption(msg) + frame.last_exception = w_last_exception.operr + FrameBlock.cleanup(self, frame) + + class ExceptBlock(FrameBlock): """An try:except: block. Stores the position of the exception handler.""" @@ -1326,6 +1345,8 @@ w_last_exception = W_OperationError(frame.last_exception) w_last_exception = frame.space.wrap(w_last_exception) frame.pushvalue(w_last_exception) + block = ExceptHandlerBlock(self, 0, frame.lastblock) + frame.lastblock = block frame.pushvalue(frame.space.wrap(unroller)) frame.pushvalue(operationerr.get_w_value(frame.space)) frame.pushvalue(operationerr.w_type) diff --git a/pypy/interpreter/test/test_raise.py b/pypy/interpreter/test/test_raise.py --- a/pypy/interpreter/test/test_raise.py +++ b/pypy/interpreter/test/test_raise.py @@ -15,6 +15,13 @@ else: raise AssertionError("exception executing else clause!") + def test_store_exception(self): + try: + raise ValueError + except Exception as e: + assert e + + def test_args(self): try: raise SystemError(1, 2) From noreply at buildbot.pypy.org Thu May 10 23:43:05 2012 From: noreply at buildbot.pypy.org (mattip) Date: Thu, 10 May 2012 23:43:05 +0200 (CEST) Subject: [pypy-commit] pypy win32-stdlib: merge from default Message-ID: <20120510214305.0FB0D82114@wyvern.cs.uni-duesseldorf.de> Author: Matti Picus Branch: win32-stdlib Changeset: r55016:bfd0e77384ff Date: 2012-05-10 22:28 +0300 http://bitbucket.org/pypy/pypy/changeset/bfd0e77384ff/ Log: merge 
from default diff too long, truncating to 10000 out of 756140 lines diff --git a/lib-python/2.7/UserDict.py b/lib-python/2.7/UserDict.py --- a/lib-python/2.7/UserDict.py +++ b/lib-python/2.7/UserDict.py @@ -80,8 +80,12 @@ def __iter__(self): return iter(self.data) -import _abcoll -_abcoll.MutableMapping.register(IterableUserDict) +try: + import _abcoll +except ImportError: + pass # e.g. no '_weakref' module on this pypy +else: + _abcoll.MutableMapping.register(IterableUserDict) class DictMixin: diff --git a/lib-python/2.7/_threading_local.py b/lib-python/2.7/_threading_local.py --- a/lib-python/2.7/_threading_local.py +++ b/lib-python/2.7/_threading_local.py @@ -155,7 +155,7 @@ object.__setattr__(self, '_local__args', (args, kw)) object.__setattr__(self, '_local__lock', RLock()) - if (args or kw) and (cls.__init__ is object.__init__): + if (args or kw) and (cls.__init__ == object.__init__): raise TypeError("Initialization arguments are not supported") # We need to create the thread dict in anticipation of diff --git a/lib-python/2.7/ctypes/__init__.py b/lib-python/2.7/ctypes/__init__.py --- a/lib-python/2.7/ctypes/__init__.py +++ b/lib-python/2.7/ctypes/__init__.py @@ -7,6 +7,7 @@ __version__ = "1.1.0" +import _ffi from _ctypes import Union, Structure, Array from _ctypes import _Pointer from _ctypes import CFuncPtr as _CFuncPtr @@ -350,16 +351,17 @@ self._FuncPtr = _FuncPtr if handle is None: - self._handle = _dlopen(self._name, mode) + self._handle = _ffi.CDLL(name, mode) else: self._handle = handle def __repr__(self): - return "<%s '%s', handle %x at %x>" % \ + return "<%s '%s', handle %r at %x>" % \ (self.__class__.__name__, self._name, - (self._handle & (_sys.maxint*2 + 1)), + (self._handle), id(self) & (_sys.maxint*2 + 1)) + def __getattr__(self, name): if name.startswith('__') and name.endswith('__'): raise AttributeError(name) @@ -487,9 +489,12 @@ _flags_ = _FUNCFLAG_CDECL | _FUNCFLAG_PYTHONAPI return CFunctionType -_cast = PYFUNCTYPE(py_object, c_void_p, 
py_object, py_object)(_cast_addr) def cast(obj, typ): - return _cast(obj, obj, typ) + try: + c_void_p.from_param(obj) + except TypeError, e: + raise ArgumentError(str(e)) + return _cast_addr(obj, obj, typ) _string_at = PYFUNCTYPE(py_object, c_void_p, c_int)(_string_at_addr) def string_at(ptr, size=-1): diff --git a/lib-python/2.7/ctypes/test/__init__.py b/lib-python/2.7/ctypes/test/__init__.py --- a/lib-python/2.7/ctypes/test/__init__.py +++ b/lib-python/2.7/ctypes/test/__init__.py @@ -206,3 +206,16 @@ result = unittest.TestResult() test(result) return result + +def xfail(method): + """ + Poor's man xfail: remove it when all the failures have been fixed + """ + def new_method(self, *args, **kwds): + try: + method(self, *args, **kwds) + except: + pass + else: + self.assertTrue(False, "DID NOT RAISE") + return new_method diff --git a/lib-python/2.7/ctypes/test/test_arrays.py b/lib-python/2.7/ctypes/test/test_arrays.py --- a/lib-python/2.7/ctypes/test/test_arrays.py +++ b/lib-python/2.7/ctypes/test/test_arrays.py @@ -1,12 +1,23 @@ import unittest from ctypes import * +from test.test_support import impl_detail formats = "bBhHiIlLqQfd" +# c_longdouble commented out for PyPy, look at the commend in test_longdouble formats = c_byte, c_ubyte, c_short, c_ushort, c_int, c_uint, \ - c_long, c_ulonglong, c_float, c_double, c_longdouble + c_long, c_ulonglong, c_float, c_double #, c_longdouble class ArrayTestCase(unittest.TestCase): + + @impl_detail('long double not supported by PyPy', pypy=False) + def test_longdouble(self): + """ + This test is empty. It's just here to remind that we commented out + c_longdouble in "formats". If pypy will ever supports c_longdouble, we + should kill this test and uncomment c_longdouble inside formats. + """ + def test_simple(self): # create classes holding simple numeric types, and check # various properties. 
diff --git a/lib-python/2.7/ctypes/test/test_bitfields.py b/lib-python/2.7/ctypes/test/test_bitfields.py --- a/lib-python/2.7/ctypes/test/test_bitfields.py +++ b/lib-python/2.7/ctypes/test/test_bitfields.py @@ -115,17 +115,21 @@ def test_nonint_types(self): # bit fields are not allowed on non-integer types. result = self.fail_fields(("a", c_char_p, 1)) - self.assertEqual(result, (TypeError, 'bit fields not allowed for type c_char_p')) + self.assertEqual(result[0], TypeError) + self.assertIn('bit fields not allowed for type', result[1]) result = self.fail_fields(("a", c_void_p, 1)) - self.assertEqual(result, (TypeError, 'bit fields not allowed for type c_void_p')) + self.assertEqual(result[0], TypeError) + self.assertIn('bit fields not allowed for type', result[1]) if c_int != c_long: result = self.fail_fields(("a", POINTER(c_int), 1)) - self.assertEqual(result, (TypeError, 'bit fields not allowed for type LP_c_int')) + self.assertEqual(result[0], TypeError) + self.assertIn('bit fields not allowed for type', result[1]) result = self.fail_fields(("a", c_char, 1)) - self.assertEqual(result, (TypeError, 'bit fields not allowed for type c_char')) + self.assertEqual(result[0], TypeError) + self.assertIn('bit fields not allowed for type', result[1]) try: c_wchar @@ -133,13 +137,15 @@ pass else: result = self.fail_fields(("a", c_wchar, 1)) - self.assertEqual(result, (TypeError, 'bit fields not allowed for type c_wchar')) + self.assertEqual(result[0], TypeError) + self.assertIn('bit fields not allowed for type', result[1]) class Dummy(Structure): _fields_ = [] result = self.fail_fields(("a", Dummy, 1)) - self.assertEqual(result, (TypeError, 'bit fields not allowed for type Dummy')) + self.assertEqual(result[0], TypeError) + self.assertIn('bit fields not allowed for type', result[1]) def test_single_bitfield_size(self): for c_typ in int_types: diff --git a/lib-python/2.7/ctypes/test/test_byteswap.py b/lib-python/2.7/ctypes/test/test_byteswap.py --- 
a/lib-python/2.7/ctypes/test/test_byteswap.py +++ b/lib-python/2.7/ctypes/test/test_byteswap.py @@ -2,6 +2,7 @@ from binascii import hexlify from ctypes import * +from ctypes.test import xfail def bin(s): return hexlify(memoryview(s)).upper() @@ -21,6 +22,7 @@ setattr(bits, "i%s" % i, 1) dump(bits) + @xfail def test_endian_short(self): if sys.byteorder == "little": self.assertTrue(c_short.__ctype_le__ is c_short) @@ -48,6 +50,7 @@ self.assertEqual(bin(s), "3412") self.assertEqual(s.value, 0x1234) + @xfail def test_endian_int(self): if sys.byteorder == "little": self.assertTrue(c_int.__ctype_le__ is c_int) @@ -76,6 +79,7 @@ self.assertEqual(bin(s), "78563412") self.assertEqual(s.value, 0x12345678) + @xfail def test_endian_longlong(self): if sys.byteorder == "little": self.assertTrue(c_longlong.__ctype_le__ is c_longlong) @@ -104,6 +108,7 @@ self.assertEqual(bin(s), "EFCDAB9078563412") self.assertEqual(s.value, 0x1234567890ABCDEF) + @xfail def test_endian_float(self): if sys.byteorder == "little": self.assertTrue(c_float.__ctype_le__ is c_float) @@ -122,6 +127,7 @@ self.assertAlmostEqual(s.value, math.pi, 6) self.assertEqual(bin(struct.pack(">f", math.pi)), bin(s)) + @xfail def test_endian_double(self): if sys.byteorder == "little": self.assertTrue(c_double.__ctype_le__ is c_double) @@ -149,6 +155,7 @@ self.assertTrue(c_char.__ctype_le__ is c_char) self.assertTrue(c_char.__ctype_be__ is c_char) + @xfail def test_struct_fields_1(self): if sys.byteorder == "little": base = BigEndianStructure @@ -198,6 +205,7 @@ pass self.assertRaises(TypeError, setattr, S, "_fields_", [("s", T)]) + @xfail def test_struct_fields_2(self): # standard packing in struct uses no alignment. # So, we have to align using pad bytes. 
@@ -221,6 +229,7 @@ s2 = struct.pack(fmt, 0x12, 0x1234, 0x12345678, 3.14) self.assertEqual(bin(s1), bin(s2)) + @xfail def test_unaligned_nonnative_struct_fields(self): if sys.byteorder == "little": base = BigEndianStructure diff --git a/lib-python/2.7/ctypes/test/test_callbacks.py b/lib-python/2.7/ctypes/test/test_callbacks.py --- a/lib-python/2.7/ctypes/test/test_callbacks.py +++ b/lib-python/2.7/ctypes/test/test_callbacks.py @@ -1,5 +1,6 @@ import unittest from ctypes import * +from ctypes.test import xfail import _ctypes_test class Callbacks(unittest.TestCase): @@ -98,6 +99,7 @@ ## self.check_type(c_char_p, "abc") ## self.check_type(c_char_p, "def") + @xfail def test_pyobject(self): o = () from sys import getrefcount as grc diff --git a/lib-python/2.7/ctypes/test/test_cfuncs.py b/lib-python/2.7/ctypes/test/test_cfuncs.py --- a/lib-python/2.7/ctypes/test/test_cfuncs.py +++ b/lib-python/2.7/ctypes/test/test_cfuncs.py @@ -3,8 +3,8 @@ import unittest from ctypes import * - import _ctypes_test +from test.test_support import impl_detail class CFunctions(unittest.TestCase): _dll = CDLL(_ctypes_test.__file__) @@ -158,12 +158,14 @@ self.assertEqual(self._dll.tf_bd(0, 42.), 14.) self.assertEqual(self.S(), 42) + @impl_detail('long double not supported by PyPy', pypy=False) def test_longdouble(self): self._dll.tf_D.restype = c_longdouble self._dll.tf_D.argtypes = (c_longdouble,) self.assertEqual(self._dll.tf_D(42.), 14.) 
self.assertEqual(self.S(), 42) - + + @impl_detail('long double not supported by PyPy', pypy=False) def test_longdouble_plus(self): self._dll.tf_bD.restype = c_longdouble self._dll.tf_bD.argtypes = (c_byte, c_longdouble) diff --git a/lib-python/2.7/ctypes/test/test_delattr.py b/lib-python/2.7/ctypes/test/test_delattr.py --- a/lib-python/2.7/ctypes/test/test_delattr.py +++ b/lib-python/2.7/ctypes/test/test_delattr.py @@ -6,15 +6,15 @@ class TestCase(unittest.TestCase): def test_simple(self): - self.assertRaises(TypeError, + self.assertRaises((TypeError, AttributeError), delattr, c_int(42), "value") def test_chararray(self): - self.assertRaises(TypeError, + self.assertRaises((TypeError, AttributeError), delattr, (c_char * 5)(), "value") def test_struct(self): - self.assertRaises(TypeError, + self.assertRaises((TypeError, AttributeError), delattr, X(), "foo") if __name__ == "__main__": diff --git a/lib-python/2.7/ctypes/test/test_frombuffer.py b/lib-python/2.7/ctypes/test/test_frombuffer.py --- a/lib-python/2.7/ctypes/test/test_frombuffer.py +++ b/lib-python/2.7/ctypes/test/test_frombuffer.py @@ -2,6 +2,7 @@ import array import gc import unittest +from ctypes.test import xfail class X(Structure): _fields_ = [("c_int", c_int)] @@ -10,6 +11,7 @@ self._init_called = True class Test(unittest.TestCase): + @xfail def test_fom_buffer(self): a = array.array("i", range(16)) x = (c_int * 16).from_buffer(a) @@ -35,6 +37,7 @@ self.assertRaises(TypeError, (c_char * 16).from_buffer, "a" * 16) + @xfail def test_fom_buffer_with_offset(self): a = array.array("i", range(16)) x = (c_int * 15).from_buffer(a, sizeof(c_int)) @@ -43,6 +46,7 @@ self.assertRaises(ValueError, lambda: (c_int * 16).from_buffer(a, sizeof(c_int))) self.assertRaises(ValueError, lambda: (c_int * 1).from_buffer(a, 16 * sizeof(c_int))) + @xfail def test_from_buffer_copy(self): a = array.array("i", range(16)) x = (c_int * 16).from_buffer_copy(a) @@ -67,6 +71,7 @@ x = (c_char * 16).from_buffer_copy("a" * 16) 
self.assertEqual(x[:], "a" * 16) + @xfail def test_fom_buffer_copy_with_offset(self): a = array.array("i", range(16)) x = (c_int * 15).from_buffer_copy(a, sizeof(c_int)) diff --git a/lib-python/2.7/ctypes/test/test_functions.py b/lib-python/2.7/ctypes/test/test_functions.py --- a/lib-python/2.7/ctypes/test/test_functions.py +++ b/lib-python/2.7/ctypes/test/test_functions.py @@ -7,6 +7,8 @@ from ctypes import * import sys, unittest +from ctypes.test import xfail +from test.test_support import impl_detail try: WINFUNCTYPE @@ -143,6 +145,7 @@ self.assertEqual(result, -21) self.assertEqual(type(result), float) + @impl_detail('long double not supported by PyPy', pypy=False) def test_longdoubleresult(self): f = dll._testfunc_D_bhilfD f.argtypes = [c_byte, c_short, c_int, c_long, c_float, c_longdouble] @@ -393,6 +396,7 @@ self.assertEqual((s8i.a, s8i.b, s8i.c, s8i.d, s8i.e, s8i.f, s8i.g, s8i.h), (9*2, 8*3, 7*4, 6*5, 5*6, 4*7, 3*8, 2*9)) + @xfail def test_sf1651235(self): # see http://www.python.org/sf/1651235 diff --git a/lib-python/2.7/ctypes/test/test_internals.py b/lib-python/2.7/ctypes/test/test_internals.py --- a/lib-python/2.7/ctypes/test/test_internals.py +++ b/lib-python/2.7/ctypes/test/test_internals.py @@ -33,7 +33,13 @@ refcnt = grc(s) cs = c_char_p(s) self.assertEqual(refcnt + 1, grc(s)) - self.assertSame(cs._objects, s) + try: + # Moving gcs need to allocate a nonmoving buffer + cs._objects._obj + except AttributeError: + self.assertSame(cs._objects, s) + else: + self.assertSame(cs._objects._obj, s) def test_simple_struct(self): class X(Structure): diff --git a/lib-python/2.7/ctypes/test/test_libc.py b/lib-python/2.7/ctypes/test/test_libc.py --- a/lib-python/2.7/ctypes/test/test_libc.py +++ b/lib-python/2.7/ctypes/test/test_libc.py @@ -25,5 +25,14 @@ lib.my_qsort(chars, len(chars)-1, sizeof(c_char), comparefunc(sort)) self.assertEqual(chars.raw, " ,,aaaadmmmnpppsss\x00") + def SKIPPED_test_no_more_xfail(self): + # We decided to not explicitly support the 
whole ctypes-2.7 + # and instead go for a case-by-case, demand-driven approach. + # So this test is skipped instead of failing. + import socket + import ctypes.test + self.assertTrue(not hasattr(ctypes.test, 'xfail'), + "You should incrementally grep for '@xfail' and remove them, they are real failures") + if __name__ == "__main__": unittest.main() diff --git a/lib-python/2.7/ctypes/test/test_loading.py b/lib-python/2.7/ctypes/test/test_loading.py --- a/lib-python/2.7/ctypes/test/test_loading.py +++ b/lib-python/2.7/ctypes/test/test_loading.py @@ -2,7 +2,7 @@ import sys, unittest import os from ctypes.util import find_library -from ctypes.test import is_resource_enabled +from ctypes.test import is_resource_enabled, xfail libc_name = None if os.name == "nt": @@ -75,6 +75,7 @@ self.assertRaises(AttributeError, dll.__getitem__, 1234) if os.name == "nt": + @xfail def test_1703286_A(self): from _ctypes import LoadLibrary, FreeLibrary # On winXP 64-bit, advapi32 loads at an address that does @@ -85,6 +86,7 @@ handle = LoadLibrary("advapi32") FreeLibrary(handle) + @xfail def test_1703286_B(self): # Since on winXP 64-bit advapi32 loads like described # above, the (arbitrarily selected) CloseEventLog function diff --git a/lib-python/2.7/ctypes/test/test_macholib.py b/lib-python/2.7/ctypes/test/test_macholib.py --- a/lib-python/2.7/ctypes/test/test_macholib.py +++ b/lib-python/2.7/ctypes/test/test_macholib.py @@ -52,7 +52,6 @@ '/usr/lib/libSystem.B.dylib') result = find_lib('z') - self.assertTrue(result.startswith('/usr/lib/libz.1')) self.assertTrue(result.endswith('.dylib')) self.assertEqual(find_lib('IOKit'), diff --git a/lib-python/2.7/ctypes/test/test_numbers.py b/lib-python/2.7/ctypes/test/test_numbers.py --- a/lib-python/2.7/ctypes/test/test_numbers.py +++ b/lib-python/2.7/ctypes/test/test_numbers.py @@ -1,6 +1,7 @@ from ctypes import * import unittest import struct +from ctypes.test import xfail def valid_ranges(*types): # given a sequence of numeric types, collect 
their _type_ @@ -89,12 +90,14 @@ ## self.assertRaises(ValueError, t, l-1) ## self.assertRaises(ValueError, t, h+1) + @xfail def test_from_param(self): # the from_param class method attribute always # returns PyCArgObject instances for t in signed_types + unsigned_types + float_types: self.assertEqual(ArgType, type(t.from_param(0))) + @xfail def test_byref(self): # calling byref returns also a PyCArgObject instance for t in signed_types + unsigned_types + float_types + bool_types: @@ -102,6 +105,7 @@ self.assertEqual(ArgType, type(parm)) + @xfail def test_floats(self): # c_float and c_double can be created from # Python int, long and float @@ -115,6 +119,7 @@ self.assertEqual(t(2L).value, 2.0) self.assertEqual(t(f).value, 2.0) + @xfail def test_integers(self): class FloatLike(object): def __float__(self): diff --git a/lib-python/2.7/ctypes/test/test_objects.py b/lib-python/2.7/ctypes/test/test_objects.py --- a/lib-python/2.7/ctypes/test/test_objects.py +++ b/lib-python/2.7/ctypes/test/test_objects.py @@ -22,7 +22,7 @@ >>> array[4] = 'foo bar' >>> array._objects -{'4': 'foo bar'} +{'4': } >>> array[4] 'foo bar' >>> @@ -47,9 +47,9 @@ >>> x.array[0] = 'spam spam spam' >>> x._objects -{'0:2': 'spam spam spam'} +{'0:2': } >>> x.array._b_base_._objects -{'0:2': 'spam spam spam'} +{'0:2': } >>> ''' diff --git a/lib-python/2.7/ctypes/test/test_parameters.py b/lib-python/2.7/ctypes/test/test_parameters.py --- a/lib-python/2.7/ctypes/test/test_parameters.py +++ b/lib-python/2.7/ctypes/test/test_parameters.py @@ -1,5 +1,7 @@ import unittest, sys +from ctypes.test import xfail + class SimpleTypesTestCase(unittest.TestCase): def setUp(self): @@ -49,6 +51,7 @@ self.assertEqual(CWCHARP.from_param("abc"), "abcabcabc") # XXX Replace by c_char_p tests + @xfail def test_cstrings(self): from ctypes import c_char_p, byref @@ -86,7 +89,10 @@ pa = c_wchar_p.from_param(c_wchar_p(u"123")) self.assertEqual(type(pa), c_wchar_p) + if sys.platform == "win32": + test_cw_strings = 
xfail(test_cw_strings) + @xfail def test_int_pointers(self): from ctypes import c_short, c_uint, c_int, c_long, POINTER, pointer LPINT = POINTER(c_int) diff --git a/lib-python/2.7/ctypes/test/test_pep3118.py b/lib-python/2.7/ctypes/test/test_pep3118.py --- a/lib-python/2.7/ctypes/test/test_pep3118.py +++ b/lib-python/2.7/ctypes/test/test_pep3118.py @@ -1,6 +1,7 @@ import unittest from ctypes import * import re, sys +from ctypes.test import xfail if sys.byteorder == "little": THIS_ENDIAN = "<" @@ -19,6 +20,7 @@ class Test(unittest.TestCase): + @xfail def test_native_types(self): for tp, fmt, shape, itemtp in native_types: ob = tp() @@ -46,6 +48,7 @@ print(tp) raise + @xfail def test_endian_types(self): for tp, fmt, shape, itemtp in endian_types: ob = tp() diff --git a/lib-python/2.7/ctypes/test/test_pickling.py b/lib-python/2.7/ctypes/test/test_pickling.py --- a/lib-python/2.7/ctypes/test/test_pickling.py +++ b/lib-python/2.7/ctypes/test/test_pickling.py @@ -3,6 +3,7 @@ from ctypes import * import _ctypes_test dll = CDLL(_ctypes_test.__file__) +from ctypes.test import xfail class X(Structure): _fields_ = [("a", c_int), ("b", c_double)] @@ -21,6 +22,7 @@ def loads(self, item): return pickle.loads(item) + @xfail def test_simple(self): for src in [ c_int(42), @@ -31,6 +33,7 @@ self.assertEqual(memoryview(src).tobytes(), memoryview(dst).tobytes()) + @xfail def test_struct(self): X.init_called = 0 @@ -49,6 +52,7 @@ self.assertEqual(memoryview(y).tobytes(), memoryview(x).tobytes()) + @xfail def test_unpickable(self): # ctypes objects that are pointers or contain pointers are # unpickable. 
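The pickling tests above are marked @xfail on PyPy; the CPython behaviour they encode — plain cdata objects round-trip through pickle, while anything that is or contains a pointer is refused — can be sketched on its own (a minimal illustration, not part of the patch):

```python
import ctypes
import pickle

# Simple cdata objects such as c_int implement __reduce__, so they pickle.
src = ctypes.c_int(42)
dst = pickle.loads(pickle.dumps(src))
assert dst.value == src.value

# Objects that are pointers (or contain pointers) raise ValueError,
# since a raw memory address is meaningless once unpickled elsewhere.
try:
    pickle.dumps(ctypes.pointer(ctypes.c_int(42)))
except ValueError:
    print("refused to pickle a pointer")
```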
@@ -66,6 +70,7 @@ ]: self.assertRaises(ValueError, lambda: self.dumps(item)) + @xfail def test_wchar(self): pickle.dumps(c_char("x")) # Issue 5049 diff --git a/lib-python/2.7/ctypes/test/test_python_api.py b/lib-python/2.7/ctypes/test/test_python_api.py --- a/lib-python/2.7/ctypes/test/test_python_api.py +++ b/lib-python/2.7/ctypes/test/test_python_api.py @@ -1,6 +1,6 @@ from ctypes import * import unittest, sys -from ctypes.test import is_resource_enabled +from ctypes.test import is_resource_enabled, xfail ################################################################ # This section should be moved into ctypes\__init__.py, when it's ready. @@ -17,6 +17,7 @@ class PythonAPITestCase(unittest.TestCase): + @xfail def test_PyString_FromStringAndSize(self): PyString_FromStringAndSize = pythonapi.PyString_FromStringAndSize @@ -25,6 +26,7 @@ self.assertEqual(PyString_FromStringAndSize("abcdefghi", 3), "abc") + @xfail def test_PyString_FromString(self): pythonapi.PyString_FromString.restype = py_object pythonapi.PyString_FromString.argtypes = (c_char_p,) @@ -56,6 +58,7 @@ del res self.assertEqual(grc(42), ref42) + @xfail def test_PyObj_FromPtr(self): s = "abc def ghi jkl" ref = grc(s) @@ -81,6 +84,7 @@ # not enough arguments self.assertRaises(TypeError, PyOS_snprintf, buf) + @xfail def test_pyobject_repr(self): self.assertEqual(repr(py_object()), "py_object()") self.assertEqual(repr(py_object(42)), "py_object(42)") diff --git a/lib-python/2.7/ctypes/test/test_refcounts.py b/lib-python/2.7/ctypes/test/test_refcounts.py --- a/lib-python/2.7/ctypes/test/test_refcounts.py +++ b/lib-python/2.7/ctypes/test/test_refcounts.py @@ -90,6 +90,7 @@ return a * b * 2 f = proto(func) + gc.collect() a = sys.getrefcount(ctypes.c_int) f(1, 2) self.assertEqual(sys.getrefcount(ctypes.c_int), a) diff --git a/lib-python/2.7/ctypes/test/test_stringptr.py b/lib-python/2.7/ctypes/test/test_stringptr.py --- a/lib-python/2.7/ctypes/test/test_stringptr.py +++ 
b/lib-python/2.7/ctypes/test/test_stringptr.py @@ -2,11 +2,13 @@ from ctypes import * import _ctypes_test +from ctypes.test import xfail lib = CDLL(_ctypes_test.__file__) class StringPtrTestCase(unittest.TestCase): + @xfail def test__POINTER_c_char(self): class X(Structure): _fields_ = [("str", POINTER(c_char))] @@ -27,6 +29,7 @@ self.assertRaises(TypeError, setattr, x, "str", "Hello, World") + @xfail def test__c_char_p(self): class X(Structure): _fields_ = [("str", c_char_p)] diff --git a/lib-python/2.7/ctypes/test/test_strings.py b/lib-python/2.7/ctypes/test/test_strings.py --- a/lib-python/2.7/ctypes/test/test_strings.py +++ b/lib-python/2.7/ctypes/test/test_strings.py @@ -31,8 +31,9 @@ buf.value = "Hello, World" self.assertEqual(buf.value, "Hello, World") - self.assertRaises(TypeError, setattr, buf, "value", memoryview("Hello, World")) - self.assertRaises(TypeError, setattr, buf, "value", memoryview("abc")) + if test_support.check_impl_detail(): + self.assertRaises(TypeError, setattr, buf, "value", memoryview("Hello, World")) + self.assertRaises(TypeError, setattr, buf, "value", memoryview("abc")) self.assertRaises(ValueError, setattr, buf, "raw", memoryview("x" * 100)) def test_c_buffer_raw(self, memoryview=memoryview): @@ -40,7 +41,8 @@ buf.raw = memoryview("Hello, World") self.assertEqual(buf.value, "Hello, World") - self.assertRaises(TypeError, setattr, buf, "value", memoryview("abc")) + if test_support.check_impl_detail(): + self.assertRaises(TypeError, setattr, buf, "value", memoryview("abc")) self.assertRaises(ValueError, setattr, buf, "raw", memoryview("x" * 100)) def test_c_buffer_deprecated(self): diff --git a/lib-python/2.7/ctypes/test/test_structures.py b/lib-python/2.7/ctypes/test/test_structures.py --- a/lib-python/2.7/ctypes/test/test_structures.py +++ b/lib-python/2.7/ctypes/test/test_structures.py @@ -194,8 +194,8 @@ self.assertEqual(X.b.offset, min(8, longlong_align)) - d = {"_fields_": [("a", "b"), - ("b", "q")], + d = {"_fields_": [("a", 
c_byte), + ("b", c_longlong)], "_pack_": -1} self.assertRaises(ValueError, type(Structure), "X", (Structure,), d) diff --git a/lib-python/2.7/ctypes/test/test_varsize_struct.py b/lib-python/2.7/ctypes/test/test_varsize_struct.py --- a/lib-python/2.7/ctypes/test/test_varsize_struct.py +++ b/lib-python/2.7/ctypes/test/test_varsize_struct.py @@ -1,7 +1,9 @@ from ctypes import * import unittest +from ctypes.test import xfail class VarSizeTest(unittest.TestCase): + @xfail def test_resize(self): class X(Structure): _fields_ = [("item", c_int), diff --git a/lib-python/2.7/ctypes/util.py b/lib-python/2.7/ctypes/util.py --- a/lib-python/2.7/ctypes/util.py +++ b/lib-python/2.7/ctypes/util.py @@ -72,8 +72,8 @@ return name if os.name == "posix" and sys.platform == "darwin": - from ctypes.macholib.dyld import dyld_find as _dyld_find def find_library(name): + from ctypes.macholib.dyld import dyld_find as _dyld_find possible = ['lib%s.dylib' % name, '%s.dylib' % name, '%s.framework/%s' % (name, name)] diff --git a/lib-python/2.7/distutils/command/bdist_wininst.py b/lib-python/2.7/distutils/command/bdist_wininst.py --- a/lib-python/2.7/distutils/command/bdist_wininst.py +++ b/lib-python/2.7/distutils/command/bdist_wininst.py @@ -298,7 +298,8 @@ bitmaplen, # number of bytes in bitmap ) file.write(header) - file.write(open(arcname, "rb").read()) + with open(arcname, "rb") as arcfile: + file.write(arcfile.read()) # create_exe() diff --git a/lib-python/2.7/distutils/command/build_ext.py b/lib-python/2.7/distutils/command/build_ext.py --- a/lib-python/2.7/distutils/command/build_ext.py +++ b/lib-python/2.7/distutils/command/build_ext.py @@ -184,7 +184,7 @@ # the 'libs' directory is for binary installs - we assume that # must be the *native* platform. But we don't really support # cross-compiling via a binary install anyway, so we let it go. 
- self.library_dirs.append(os.path.join(sys.exec_prefix, 'libs')) + self.library_dirs.append(os.path.join(sys.exec_prefix, 'include')) if self.debug: self.build_temp = os.path.join(self.build_temp, "Debug") else: @@ -192,8 +192,13 @@ # Append the source distribution include and library directories, # this allows distutils on windows to work in the source tree - self.include_dirs.append(os.path.join(sys.exec_prefix, 'PC')) - if MSVC_VERSION == 9: + if 0: + # pypy has no PC directory + self.include_dirs.append(os.path.join(sys.exec_prefix, 'PC')) + if 1: + # pypy has no PCBuild directory + pass + elif MSVC_VERSION == 9: # Use the .lib files for the correct architecture if self.plat_name == 'win32': suffix = '' @@ -695,24 +700,14 @@ shared extension. On most platforms, this is just 'ext.libraries'; on Windows and OS/2, we add the Python library (eg. python20.dll). """ - # The python library is always needed on Windows. For MSVC, this - # is redundant, since the library is mentioned in a pragma in - # pyconfig.h that MSVC groks. The other Windows compilers all seem - # to need it mentioned explicitly, though, so that's what we do. - # Append '_d' to the python import library on debug builds. + # The python library is always needed on Windows. 
if sys.platform == "win32": - from distutils.msvccompiler import MSVCCompiler - if not isinstance(self.compiler, MSVCCompiler): - template = "python%d%d" - if self.debug: - template = template + '_d' - pythonlib = (template % - (sys.hexversion >> 24, (sys.hexversion >> 16) & 0xff)) - # don't extend ext.libraries, it may be shared with other - # extensions, it is a reference to the original list - return ext.libraries + [pythonlib] - else: - return ext.libraries + template = "python%d%d" + pythonlib = (template % + (sys.hexversion >> 24, (sys.hexversion >> 16) & 0xff)) + # don't extend ext.libraries, it may be shared with other + # extensions, it is a reference to the original list + return ext.libraries + [pythonlib] elif sys.platform == "os2emx": # EMX/GCC requires the python library explicitly, and I # believe VACPP does as well (though not confirmed) - AIM Apr01 diff --git a/lib-python/2.7/distutils/command/install.py b/lib-python/2.7/distutils/command/install.py --- a/lib-python/2.7/distutils/command/install.py +++ b/lib-python/2.7/distutils/command/install.py @@ -83,6 +83,13 @@ 'scripts': '$userbase/bin', 'data' : '$userbase', }, + 'pypy': { + 'purelib': '$base/site-packages', + 'platlib': '$base/site-packages', + 'headers': '$base/include', + 'scripts': '$base/bin', + 'data' : '$base', + }, } # The keys to an installation scheme; if any new types of files are to be @@ -467,6 +474,8 @@ def select_scheme (self, name): # it's the caller's problem if they supply a bad name! 
+ if hasattr(sys, 'pypy_version_info'): + name = 'pypy' scheme = INSTALL_SCHEMES[name] for key in SCHEME_KEYS: attrname = 'install_' + key diff --git a/lib-python/2.7/distutils/cygwinccompiler.py b/lib-python/2.7/distutils/cygwinccompiler.py --- a/lib-python/2.7/distutils/cygwinccompiler.py +++ b/lib-python/2.7/distutils/cygwinccompiler.py @@ -75,6 +75,9 @@ elif msc_ver == '1500': # VS2008 / MSVC 9.0 return ['msvcr90'] + elif msc_ver == '1600': + # VS2010 / MSVC 10.0 + return ['msvcr100'] else: raise ValueError("Unknown MS Compiler version %s " % msc_ver) diff --git a/lib-python/2.7/distutils/msvc9compiler.py b/lib-python/2.7/distutils/msvc9compiler.py --- a/lib-python/2.7/distutils/msvc9compiler.py +++ b/lib-python/2.7/distutils/msvc9compiler.py @@ -648,6 +648,7 @@ temp_manifest = os.path.join( build_temp, os.path.basename(output_filename) + ".manifest") + ld_args.append('/MANIFEST') ld_args.append('/MANIFESTFILE:' + temp_manifest) if extra_preargs: diff --git a/lib-python/2.7/distutils/spawn.py b/lib-python/2.7/distutils/spawn.py --- a/lib-python/2.7/distutils/spawn.py +++ b/lib-python/2.7/distutils/spawn.py @@ -58,7 +58,6 @@ def _spawn_nt(cmd, search_path=1, verbose=0, dry_run=0): executable = cmd[0] - cmd = _nt_quote_args(cmd) if search_path: # either we find one or it stays the same executable = find_executable(executable) or executable @@ -66,7 +65,8 @@ if not dry_run: # spawn for NT requires a full path to the .exe try: - rc = os.spawnv(os.P_WAIT, executable, cmd) + import subprocess + rc = subprocess.call(cmd) except OSError, exc: # this seems to happen when the command isn't found raise DistutilsExecError, \ diff --git a/lib-python/2.7/distutils/sysconfig.py b/lib-python/2.7/distutils/sysconfig.py --- a/lib-python/2.7/distutils/sysconfig.py +++ b/lib-python/2.7/distutils/sysconfig.py @@ -9,563 +9,21 @@ Email: """ -__revision__ = "$Id$" +__revision__ = "$Id: sysconfig.py 85358 2010-10-10 09:54:59Z antoine.pitrou $" -import os -import re -import string 
import sys -from distutils.errors import DistutilsPlatformError -# These are needed in a couple of spots, so just compute them once. -PREFIX = os.path.normpath(sys.prefix) -EXEC_PREFIX = os.path.normpath(sys.exec_prefix) +# The content of this file is redirected from +# sysconfig_cpython or sysconfig_pypy. -# Path to the base directory of the project. On Windows the binary may -# live in project/PCBuild9. If we're dealing with an x64 Windows build, -# it'll live in project/PCbuild/amd64. -project_base = os.path.dirname(os.path.abspath(sys.executable)) -if os.name == "nt" and "pcbuild" in project_base[-8:].lower(): - project_base = os.path.abspath(os.path.join(project_base, os.path.pardir)) -# PC/VS7.1 -if os.name == "nt" and "\\pc\\v" in project_base[-10:].lower(): - project_base = os.path.abspath(os.path.join(project_base, os.path.pardir, - os.path.pardir)) -# PC/AMD64 -if os.name == "nt" and "\\pcbuild\\amd64" in project_base[-14:].lower(): - project_base = os.path.abspath(os.path.join(project_base, os.path.pardir, - os.path.pardir)) +if '__pypy__' in sys.builtin_module_names: + from distutils.sysconfig_pypy import * + from distutils.sysconfig_pypy import _config_vars # needed by setuptools + from distutils.sysconfig_pypy import _variable_rx # read_setup_file() +else: + from distutils.sysconfig_cpython import * + from distutils.sysconfig_cpython import _config_vars # needed by setuptools + from distutils.sysconfig_cpython import _variable_rx # read_setup_file() -# python_build: (Boolean) if true, we're either building Python or -# building an extension with an un-installed Python, so we use -# different (hard-wired) directories. 
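The replacement sysconfig above is only a dispatcher: it star-imports everything from an implementation-specific backend module chosen at import time. The detection idiom — '__pypy__' is a built-in module only on PyPy — can be sketched in isolation (the backend names below are just the ones from the diff, used illustratively):

```python
import sys

# On PyPy, '__pypy__' appears in sys.builtin_module_names; on CPython
# it does not, so this membership test cheaply identifies the interpreter.
if '__pypy__' in sys.builtin_module_names:
    backend = 'distutils.sysconfig_pypy'
else:
    backend = 'distutils.sysconfig_cpython'

print(backend)
```

An alternative check used elsewhere in the same diff is hasattr(sys, 'pypy_version_info'), which identifies PyPy without consulting the built-in module list.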
-# Setup.local is available for Makefile builds including VPATH builds, -# Setup.dist is available on Windows -def _python_build(): - for fn in ("Setup.dist", "Setup.local"): - if os.path.isfile(os.path.join(project_base, "Modules", fn)): - return True - return False -python_build = _python_build() - -def get_python_version(): - """Return a string containing the major and minor Python version, - leaving off the patchlevel. Sample return values could be '1.5' - or '2.2'. - """ - return sys.version[:3] - - -def get_python_inc(plat_specific=0, prefix=None): - """Return the directory containing installed Python header files. - - If 'plat_specific' is false (the default), this is the path to the - non-platform-specific header files, i.e. Python.h and so on; - otherwise, this is the path to platform-specific header files - (namely pyconfig.h). - - If 'prefix' is supplied, use it instead of sys.prefix or - sys.exec_prefix -- i.e., ignore 'plat_specific'. - """ - if prefix is None: - prefix = plat_specific and EXEC_PREFIX or PREFIX - - if os.name == "posix": - if python_build: - buildir = os.path.dirname(sys.executable) - if plat_specific: - # python.h is located in the buildir - inc_dir = buildir - else: - # the source dir is relative to the buildir - srcdir = os.path.abspath(os.path.join(buildir, - get_config_var('srcdir'))) - # Include is located in the srcdir - inc_dir = os.path.join(srcdir, "Include") - return inc_dir - return os.path.join(prefix, "include", "python" + get_python_version()) - elif os.name == "nt": - return os.path.join(prefix, "include") - elif os.name == "os2": - return os.path.join(prefix, "Include") - else: - raise DistutilsPlatformError( - "I don't know where Python installs its C header files " - "on platform '%s'" % os.name) - - -def get_python_lib(plat_specific=0, standard_lib=0, prefix=None): - """Return the directory containing the Python library (standard or - site additions). 
- - If 'plat_specific' is true, return the directory containing - platform-specific modules, i.e. any module from a non-pure-Python - module distribution; otherwise, return the platform-shared library - directory. If 'standard_lib' is true, return the directory - containing standard Python library modules; otherwise, return the - directory for site-specific modules. - - If 'prefix' is supplied, use it instead of sys.prefix or - sys.exec_prefix -- i.e., ignore 'plat_specific'. - """ - if prefix is None: - prefix = plat_specific and EXEC_PREFIX or PREFIX - - if os.name == "posix": - libpython = os.path.join(prefix, - "lib", "python" + get_python_version()) - if standard_lib: - return libpython - else: - return os.path.join(libpython, "site-packages") - - elif os.name == "nt": - if standard_lib: - return os.path.join(prefix, "Lib") - else: - if get_python_version() < "2.2": - return prefix - else: - return os.path.join(prefix, "Lib", "site-packages") - - elif os.name == "os2": - if standard_lib: - return os.path.join(prefix, "Lib") - else: - return os.path.join(prefix, "Lib", "site-packages") - - else: - raise DistutilsPlatformError( - "I don't know where Python installs its library " - "on platform '%s'" % os.name) - - -def customize_compiler(compiler): - """Do any platform-specific customization of a CCompiler instance. - - Mainly needed on Unix, so we can plug in the information that - varies across Unices and is stored in Python's Makefile. 
- """ - if compiler.compiler_type == "unix": - (cc, cxx, opt, cflags, ccshared, ldshared, so_ext) = \ - get_config_vars('CC', 'CXX', 'OPT', 'CFLAGS', - 'CCSHARED', 'LDSHARED', 'SO') - - if 'CC' in os.environ: - cc = os.environ['CC'] - if 'CXX' in os.environ: - cxx = os.environ['CXX'] - if 'LDSHARED' in os.environ: - ldshared = os.environ['LDSHARED'] - if 'CPP' in os.environ: - cpp = os.environ['CPP'] - else: - cpp = cc + " -E" # not always - if 'LDFLAGS' in os.environ: - ldshared = ldshared + ' ' + os.environ['LDFLAGS'] - if 'CFLAGS' in os.environ: - cflags = opt + ' ' + os.environ['CFLAGS'] - ldshared = ldshared + ' ' + os.environ['CFLAGS'] - if 'CPPFLAGS' in os.environ: - cpp = cpp + ' ' + os.environ['CPPFLAGS'] - cflags = cflags + ' ' + os.environ['CPPFLAGS'] - ldshared = ldshared + ' ' + os.environ['CPPFLAGS'] - - cc_cmd = cc + ' ' + cflags - compiler.set_executables( - preprocessor=cpp, - compiler=cc_cmd, - compiler_so=cc_cmd + ' ' + ccshared, - compiler_cxx=cxx, - linker_so=ldshared, - linker_exe=cc) - - compiler.shared_lib_extension = so_ext - - -def get_config_h_filename(): - """Return full pathname of installed pyconfig.h file.""" - if python_build: - if os.name == "nt": - inc_dir = os.path.join(project_base, "PC") - else: - inc_dir = project_base - else: - inc_dir = get_python_inc(plat_specific=1) - if get_python_version() < '2.2': - config_h = 'config.h' - else: - # The name of the config.h file changed in 2.2 - config_h = 'pyconfig.h' - return os.path.join(inc_dir, config_h) - - -def get_makefile_filename(): - """Return full pathname of installed Makefile from the Python build.""" - if python_build: - return os.path.join(os.path.dirname(sys.executable), "Makefile") - lib_dir = get_python_lib(plat_specific=1, standard_lib=1) - return os.path.join(lib_dir, "config", "Makefile") - - -def parse_config_h(fp, g=None): - """Parse a config.h-style file. - - A dictionary containing name/value pairs is returned. 
If an - optional dictionary is passed in as the second argument, it is - used instead of a new dictionary. - """ - if g is None: - g = {} - define_rx = re.compile("#define ([A-Z][A-Za-z0-9_]+) (.*)\n") - undef_rx = re.compile("/[*] #undef ([A-Z][A-Za-z0-9_]+) [*]/\n") - # - while 1: - line = fp.readline() - if not line: - break - m = define_rx.match(line) - if m: - n, v = m.group(1, 2) - try: v = int(v) - except ValueError: pass - g[n] = v - else: - m = undef_rx.match(line) - if m: - g[m.group(1)] = 0 - return g - - -# Regexes needed for parsing Makefile (and similar syntaxes, -# like old-style Setup files). -_variable_rx = re.compile("([a-zA-Z][a-zA-Z0-9_]+)\s*=\s*(.*)") -_findvar1_rx = re.compile(r"\$\(([A-Za-z][A-Za-z0-9_]*)\)") -_findvar2_rx = re.compile(r"\${([A-Za-z][A-Za-z0-9_]*)}") - -def parse_makefile(fn, g=None): - """Parse a Makefile-style file. - - A dictionary containing name/value pairs is returned. If an - optional dictionary is passed in as the second argument, it is - used instead of a new dictionary. 
- """ - from distutils.text_file import TextFile - fp = TextFile(fn, strip_comments=1, skip_blanks=1, join_lines=1) - - if g is None: - g = {} - done = {} - notdone = {} - - while 1: - line = fp.readline() - if line is None: # eof - break - m = _variable_rx.match(line) - if m: - n, v = m.group(1, 2) - v = v.strip() - # `$$' is a literal `$' in make - tmpv = v.replace('$$', '') - - if "$" in tmpv: - notdone[n] = v - else: - try: - v = int(v) - except ValueError: - # insert literal `$' - done[n] = v.replace('$$', '$') - else: - done[n] = v - - # do variable interpolation here - while notdone: - for name in notdone.keys(): - value = notdone[name] - m = _findvar1_rx.search(value) or _findvar2_rx.search(value) - if m: - n = m.group(1) - found = True - if n in done: - item = str(done[n]) - elif n in notdone: - # get it on a subsequent round - found = False - elif n in os.environ: - # do it like make: fall back to environment - item = os.environ[n] - else: - done[n] = item = "" - if found: - after = value[m.end():] - value = value[:m.start()] + item + after - if "$" in after: - notdone[name] = value - else: - try: value = int(value) - except ValueError: - done[name] = value.strip() - else: - done[name] = value - del notdone[name] - else: - # bogus variable reference; just drop it since we can't deal - del notdone[name] - - fp.close() - - # strip spurious spaces - for k, v in done.items(): - if isinstance(v, str): - done[k] = v.strip() - - # save the results in the global dictionary - g.update(done) - return g - - -def expand_makefile_vars(s, vars): - """Expand Makefile-style variables -- "${foo}" or "$(foo)" -- in - 'string' according to 'vars' (a dictionary mapping variable names to - values). Variables not present in 'vars' are silently expanded to the - empty string. The variable values in 'vars' should not contain further - variable expansions; if 'vars' is the output of 'parse_makefile()', - you're fine. Returns a variable-expanded version of 's'. 
- """ - - # This algorithm does multiple expansion, so if vars['foo'] contains - # "${bar}", it will expand ${foo} to ${bar}, and then expand - # ${bar}... and so forth. This is fine as long as 'vars' comes from - # 'parse_makefile()', which takes care of such expansions eagerly, - # according to make's variable expansion semantics. - - while 1: - m = _findvar1_rx.search(s) or _findvar2_rx.search(s) - if m: - (beg, end) = m.span() - s = s[0:beg] + vars.get(m.group(1)) + s[end:] - else: - break - return s - - -_config_vars = None - -def _init_posix(): - """Initialize the module as appropriate for POSIX systems.""" - g = {} - # load the installed Makefile: - try: - filename = get_makefile_filename() - parse_makefile(filename, g) - except IOError, msg: - my_msg = "invalid Python installation: unable to open %s" % filename - if hasattr(msg, "strerror"): - my_msg = my_msg + " (%s)" % msg.strerror - - raise DistutilsPlatformError(my_msg) - - # load the installed pyconfig.h: - try: - filename = get_config_h_filename() - parse_config_h(file(filename), g) - except IOError, msg: - my_msg = "invalid Python installation: unable to open %s" % filename - if hasattr(msg, "strerror"): - my_msg = my_msg + " (%s)" % msg.strerror - - raise DistutilsPlatformError(my_msg) - - # On MacOSX we need to check the setting of the environment variable - # MACOSX_DEPLOYMENT_TARGET: configure bases some choices on it so - # it needs to be compatible. 
- # If it isn't set we set it to the configure-time value - if sys.platform == 'darwin' and 'MACOSX_DEPLOYMENT_TARGET' in g: - cfg_target = g['MACOSX_DEPLOYMENT_TARGET'] - cur_target = os.getenv('MACOSX_DEPLOYMENT_TARGET', '') - if cur_target == '': - cur_target = cfg_target - os.environ['MACOSX_DEPLOYMENT_TARGET'] = cfg_target - elif map(int, cfg_target.split('.')) > map(int, cur_target.split('.')): - my_msg = ('$MACOSX_DEPLOYMENT_TARGET mismatch: now "%s" but "%s" during configure' - % (cur_target, cfg_target)) - raise DistutilsPlatformError(my_msg) - - # On AIX, there are wrong paths to the linker scripts in the Makefile - # -- these paths are relative to the Python source, but when installed - # the scripts are in another directory. - if python_build: - g['LDSHARED'] = g['BLDSHARED'] - - elif get_python_version() < '2.1': - # The following two branches are for 1.5.2 compatibility. - if sys.platform == 'aix4': # what about AIX 3.x ? - # Linker script is in the config directory, not in Modules as the - # Makefile says. - python_lib = get_python_lib(standard_lib=1) - ld_so_aix = os.path.join(python_lib, 'config', 'ld_so_aix') - python_exp = os.path.join(python_lib, 'config', 'python.exp') - - g['LDSHARED'] = "%s %s -bI:%s" % (ld_so_aix, g['CC'], python_exp) - - elif sys.platform == 'beos': - # Linker script is in the config directory. In the Makefile it is - # relative to the srcdir, which after installation no longer makes - # sense. - python_lib = get_python_lib(standard_lib=1) - linkerscript_path = string.split(g['LDSHARED'])[0] - linkerscript_name = os.path.basename(linkerscript_path) - linkerscript = os.path.join(python_lib, 'config', - linkerscript_name) - - # XXX this isn't the right place to do this: adding the Python - # library to the link, if needed, should be in the "build_ext" - # command. (It's also needed for non-MS compilers on Windows, and - # it's taken care of for them by the 'build_ext.get_libraries()' - # method.) 
- g['LDSHARED'] = ("%s -L%s/lib -lpython%s" % - (linkerscript, PREFIX, get_python_version())) - - global _config_vars - _config_vars = g - - -def _init_nt(): - """Initialize the module as appropriate for NT""" - g = {} - # set basic install directories - g['LIBDEST'] = get_python_lib(plat_specific=0, standard_lib=1) - g['BINLIBDEST'] = get_python_lib(plat_specific=1, standard_lib=1) - - # XXX hmmm.. a normal install puts include files here - g['INCLUDEPY'] = get_python_inc(plat_specific=0) - - g['SO'] = '.pyd' - g['EXE'] = ".exe" - g['VERSION'] = get_python_version().replace(".", "") - g['BINDIR'] = os.path.dirname(os.path.abspath(sys.executable)) - - global _config_vars - _config_vars = g - - -def _init_os2(): - """Initialize the module as appropriate for OS/2""" - g = {} - # set basic install directories - g['LIBDEST'] = get_python_lib(plat_specific=0, standard_lib=1) - g['BINLIBDEST'] = get_python_lib(plat_specific=1, standard_lib=1) - - # XXX hmmm.. a normal install puts include files here - g['INCLUDEPY'] = get_python_inc(plat_specific=0) - - g['SO'] = '.pyd' - g['EXE'] = ".exe" - - global _config_vars - _config_vars = g - - -def get_config_vars(*args): - """With no arguments, return a dictionary of all configuration - variables relevant for the current platform. Generally this includes - everything needed to build extensions and install both pure modules and - extensions. On Unix, this means every variable defined in Python's - installed Makefile; on Windows and Mac OS it's a much smaller set. - - With arguments, return a list of values that result from looking up - each argument in the configuration variable dictionary. - """ - global _config_vars - if _config_vars is None: - func = globals().get("_init_" + os.name) - if func: - func() - else: - _config_vars = {} - - # Normalized versions of prefix and exec_prefix are handy to have; - # in fact, these are the standard versions used most places in the - # Distutils. 
- _config_vars['prefix'] = PREFIX - _config_vars['exec_prefix'] = EXEC_PREFIX - - if sys.platform == 'darwin': - kernel_version = os.uname()[2] # Kernel version (8.4.3) - major_version = int(kernel_version.split('.')[0]) - - if major_version < 8: - # On Mac OS X before 10.4, check if -arch and -isysroot - # are in CFLAGS or LDFLAGS and remove them if they are. - # This is needed when building extensions on a 10.3 system - # using a universal build of python. - for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED', - # a number of derived variables. These need to be - # patched up as well. - 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): - flags = _config_vars[key] - flags = re.sub('-arch\s+\w+\s', ' ', flags) - flags = re.sub('-isysroot [^ \t]*', ' ', flags) - _config_vars[key] = flags - - else: - - # Allow the user to override the architecture flags using - # an environment variable. - # NOTE: This name was introduced by Apple in OSX 10.5 and - # is used by several scripting languages distributed with - # that OS release. - - if 'ARCHFLAGS' in os.environ: - arch = os.environ['ARCHFLAGS'] - for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED', - # a number of derived variables. These need to be - # patched up as well. - 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): - - flags = _config_vars[key] - flags = re.sub('-arch\s+\w+\s', ' ', flags) - flags = flags + ' ' + arch - _config_vars[key] = flags - - # If we're on OSX 10.5 or later and the user tries to - # compiles an extension using an SDK that is not present - # on the current machine it is better to not use an SDK - # than to fail. - # - # The major usecase for this is users using a Python.org - # binary installer on OSX 10.6: that installer uses - # the 10.4u SDK, but that SDK is not installed by default - # when you install Xcode. 
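The removed Darwin code above normalises compiler flags with two re.sub calls, dropping '-arch <name>' and '-isysroot <path>' pairs. The transformation is easy to exercise standalone (the flag string here is a made-up example):

```python
import re

# Strip '-arch <name>' and '-isysroot <path>' from a flags string,
# mirroring the substitutions in the darwin branch of the old sysconfig.
flags = "-g -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -O2"
flags = re.sub(r'-arch\s+\w+\s', ' ', flags)
flags = re.sub(r'-isysroot [^ \t]*', ' ', flags)
print(' '.join(flags.split()))  # only '-g -O2' remains
```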
- # - m = re.search('-isysroot\s+(\S+)', _config_vars['CFLAGS']) - if m is not None: - sdk = m.group(1) - if not os.path.exists(sdk): - for key in ('LDFLAGS', 'BASECFLAGS', 'LDSHARED', - # a number of derived variables. These need to be - # patched up as well. - 'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): - - flags = _config_vars[key] - flags = re.sub('-isysroot\s+\S+(\s|$)', ' ', flags) - _config_vars[key] = flags - - if args: - vals = [] - for name in args: - vals.append(_config_vars.get(name)) - return vals - else: - return _config_vars - -def get_config_var(name): - """Return the value of a single variable using the dictionary - returned by 'get_config_vars()'. Equivalent to - get_config_vars().get(name) - """ - return get_config_vars().get(name) diff --git a/lib-python/modified-2.7/distutils/sysconfig_cpython.py b/lib-python/2.7/distutils/sysconfig_cpython.py rename from lib-python/modified-2.7/distutils/sysconfig_cpython.py rename to lib-python/2.7/distutils/sysconfig_cpython.py diff --git a/lib-python/modified-2.7/distutils/sysconfig_pypy.py b/lib-python/2.7/distutils/sysconfig_pypy.py rename from lib-python/modified-2.7/distutils/sysconfig_pypy.py rename to lib-python/2.7/distutils/sysconfig_pypy.py diff --git a/lib-python/2.7/distutils/tests/test_build_ext.py b/lib-python/2.7/distutils/tests/test_build_ext.py --- a/lib-python/2.7/distutils/tests/test_build_ext.py +++ b/lib-python/2.7/distutils/tests/test_build_ext.py @@ -293,7 +293,7 @@ finally: os.chdir(old_wd) self.assertTrue(os.path.exists(so_file)) - self.assertEqual(os.path.splitext(so_file)[-1], + self.assertEqual(so_file[so_file.index(os.path.extsep):], sysconfig.get_config_var('SO')) so_dir = os.path.dirname(so_file) self.assertEqual(so_dir, other_tmp_dir) @@ -302,7 +302,7 @@ cmd.run() so_file = cmd.get_outputs()[0] self.assertTrue(os.path.exists(so_file)) - self.assertEqual(os.path.splitext(so_file)[-1], + self.assertEqual(so_file[so_file.index(os.path.extsep):], sysconfig.get_config_var('SO')) so_dir = 
os.path.dirname(so_file) self.assertEqual(so_dir, cmd.build_lib) diff --git a/lib-python/2.7/distutils/tests/test_install.py b/lib-python/2.7/distutils/tests/test_install.py --- a/lib-python/2.7/distutils/tests/test_install.py +++ b/lib-python/2.7/distutils/tests/test_install.py @@ -2,6 +2,7 @@ import os import unittest +from test import test_support from test.test_support import run_unittest @@ -40,14 +41,15 @@ expected = os.path.normpath(expected) self.assertEqual(got, expected) - libdir = os.path.join(destination, "lib", "python") - check_path(cmd.install_lib, libdir) - check_path(cmd.install_platlib, libdir) - check_path(cmd.install_purelib, libdir) - check_path(cmd.install_headers, - os.path.join(destination, "include", "python", "foopkg")) - check_path(cmd.install_scripts, os.path.join(destination, "bin")) - check_path(cmd.install_data, destination) + if test_support.check_impl_detail(): + libdir = os.path.join(destination, "lib", "python") + check_path(cmd.install_lib, libdir) + check_path(cmd.install_platlib, libdir) + check_path(cmd.install_purelib, libdir) + check_path(cmd.install_headers, + os.path.join(destination, "include", "python", "foopkg")) + check_path(cmd.install_scripts, os.path.join(destination, "bin")) + check_path(cmd.install_data, destination) def test_suite(): diff --git a/lib-python/2.7/distutils/unixccompiler.py b/lib-python/2.7/distutils/unixccompiler.py --- a/lib-python/2.7/distutils/unixccompiler.py +++ b/lib-python/2.7/distutils/unixccompiler.py @@ -125,7 +125,22 @@ } if sys.platform[:6] == "darwin": + import platform + if platform.machine() == 'i386': + if platform.architecture()[0] == '32bit': + arch = 'i386' + else: + arch = 'x86_64' + else: + # just a guess + arch = platform.machine() executables['ranlib'] = ["ranlib"] + executables['linker_so'] += ['-undefined', 'dynamic_lookup'] + + for k, v in executables.iteritems(): + if v and v[0] == 'cc': + v += ['-arch', arch] + # Needed for the filename generation methods provided by the 
base # class, CCompiler. NB. whoever instantiates/uses a particular @@ -309,7 +324,7 @@ # On OSX users can specify an alternate SDK using # '-isysroot', calculate the SDK root if it is specified # (and use it further on) - cflags = sysconfig.get_config_var('CFLAGS') + cflags = sysconfig.get_config_var('CFLAGS') or '' m = re.search(r'-isysroot\s+(\S+)', cflags) if m is None: sysroot = '/' diff --git a/lib-python/2.7/heapq.py b/lib-python/2.7/heapq.py --- a/lib-python/2.7/heapq.py +++ b/lib-python/2.7/heapq.py @@ -193,6 +193,8 @@ Equivalent to: sorted(iterable, reverse=True)[:n] """ + if n < 0: # for consistency with the c impl + return [] it = iter(iterable) result = list(islice(it, n)) if not result: @@ -209,6 +211,8 @@ Equivalent to: sorted(iterable)[:n] """ + if n < 0: # for consistency with the c impl + return [] if hasattr(iterable, '__len__') and n * 10 <= len(iterable): # For smaller values of n, the bisect method is faster than a minheap. # It is also memory efficient, consuming only n elements of space. diff --git a/lib-python/2.7/httplib.py b/lib-python/2.7/httplib.py --- a/lib-python/2.7/httplib.py +++ b/lib-python/2.7/httplib.py @@ -1024,7 +1024,11 @@ kwds["buffering"] = True; response = self.response_class(*args, **kwds) - response.begin() + try: + response.begin() + except: + response.close() + raise assert response.will_close != _UNKNOWN self.__state = _CS_IDLE diff --git a/lib-python/2.7/idlelib/Delegator.py b/lib-python/2.7/idlelib/Delegator.py --- a/lib-python/2.7/idlelib/Delegator.py +++ b/lib-python/2.7/idlelib/Delegator.py @@ -12,6 +12,14 @@ self.__cache[name] = attr return attr + def __nonzero__(self): + # this is needed for PyPy: else, if self.delegate is None, the + # __getattr__ above picks NoneType.__nonzero__, which returns + # False. Thus, bool(Delegator()) is False as well, but it's not what + # we want. 
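The `n < 0` guard added to `heapq.nlargest`/`nsmallest` above is needed because `itertools.islice` rejects negative counts; returning `[]` early matches what the C implementation does. A small demonstration:

```python
import heapq
from itertools import islice

# Without the guard, list(islice(it, n)) raises for negative n:
try:
    list(islice(iter([1, 2, 3]), -1))
    raised = False
except ValueError:
    raised = True
assert raised

# With the guard in place, a negative n simply yields an empty result,
# consistent with the C implementation of nlargest/nsmallest:
assert heapq.nlargest(-1, [3, 1, 2]) == []
assert heapq.nsmallest(-5, [3, 1, 2]) == []
```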
On CPython, bool(Delegator()) is True because NoneType + # does not have __nonzero__ + return True + def resetcache(self): for key in self.__cache.keys(): try: diff --git a/lib-python/2.7/inspect.py b/lib-python/2.7/inspect.py --- a/lib-python/2.7/inspect.py +++ b/lib-python/2.7/inspect.py @@ -746,8 +746,15 @@ 'varargs' and 'varkw' are the names of the * and ** arguments or None.""" if not iscode(co): - raise TypeError('{!r} is not a code object'.format(co)) + if hasattr(len, 'func_code') and type(co) is type(len.func_code): + # PyPy extension: built-in function objects have a func_code too. + # There is no co_code on it, but co_argcount and co_varnames and + # co_flags are present. + pass + else: + raise TypeError('{!r} is not a code object'.format(co)) + code = getattr(co, 'co_code', '') nargs = co.co_argcount names = co.co_varnames args = list(names[:nargs]) @@ -757,12 +764,12 @@ for i in range(nargs): if args[i][:1] in ('', '.'): stack, remain, count = [], [], [] - while step < len(co.co_code): - op = ord(co.co_code[step]) + while step < len(code): + op = ord(code[step]) step = step + 1 if op >= dis.HAVE_ARGUMENT: opname = dis.opname[op] - value = ord(co.co_code[step]) + ord(co.co_code[step+1])*256 + value = ord(code[step]) + ord(code[step+1])*256 step = step + 2 if opname in ('UNPACK_TUPLE', 'UNPACK_SEQUENCE'): remain.append(value) @@ -809,7 +816,9 @@ if ismethod(func): func = func.im_func - if not isfunction(func): + if not (isfunction(func) or + isbuiltin(func) and hasattr(func, 'func_code')): + # PyPy extension: this works for built-in functions too raise TypeError('{!r} is not a Python function'.format(func)) args, varargs, varkw = getargs(func.func_code) return ArgSpec(args, varargs, varkw, func.func_defaults) @@ -949,7 +958,7 @@ raise TypeError('%s() takes exactly 0 arguments ' '(%d given)' % (f_name, num_total)) else: - raise TypeError('%s() takes no arguments (%d given)' % + raise TypeError('%s() takes no argument (%d given)' % (f_name, num_total)) for 
arg in args: if isinstance(arg, str) and arg in named: diff --git a/lib-python/2.7/json/encoder.py b/lib-python/2.7/json/encoder.py --- a/lib-python/2.7/json/encoder.py +++ b/lib-python/2.7/json/encoder.py @@ -2,14 +2,7 @@ """ import re -try: - from _json import encode_basestring_ascii as c_encode_basestring_ascii -except ImportError: - c_encode_basestring_ascii = None -try: - from _json import make_encoder as c_make_encoder -except ImportError: - c_make_encoder = None +from __pypy__.builders import StringBuilder, UnicodeBuilder ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]') ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])') @@ -24,23 +17,22 @@ '\t': '\\t', } for i in range(0x20): - ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i)) - #ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) + ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,)) # Assume this produces an infinity on all machines (probably not guaranteed) INFINITY = float('1e66666') FLOAT_REPR = repr -def encode_basestring(s): +def raw_encode_basestring(s): """Return a JSON representation of a Python string """ def replace(match): return ESCAPE_DCT[match.group(0)] - return '"' + ESCAPE.sub(replace, s) + '"' + return ESCAPE.sub(replace, s) +encode_basestring = lambda s: '"' + raw_encode_basestring(s) + '"' - -def py_encode_basestring_ascii(s): +def raw_encode_basestring_ascii(s): """Return an ASCII-only JSON representation of a Python string """ @@ -53,21 +45,19 @@ except KeyError: n = ord(s) if n < 0x10000: - return '\\u{0:04x}'.format(n) - #return '\\u%04x' % (n,) + return '\\u%04x' % (n,) else: # surrogate pair n -= 0x10000 s1 = 0xd800 | ((n >> 10) & 0x3ff) s2 = 0xdc00 | (n & 0x3ff) - return '\\u{0:04x}\\u{1:04x}'.format(s1, s2) - #return '\\u%04x\\u%04x' % (s1, s2) - return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"' + return '\\u%04x\\u%04x' % (s1, s2) + if ESCAPE_ASCII.search(s): + return str(ESCAPE_ASCII.sub(replace, s)) + return s +encode_basestring_ascii = lambda s: '"' + raw_encode_basestring_ascii(s) 
+ '"' -encode_basestring_ascii = ( - c_encode_basestring_ascii or py_encode_basestring_ascii) - class JSONEncoder(object): """Extensible JSON encoder for Python data structures. @@ -147,6 +137,17 @@ self.skipkeys = skipkeys self.ensure_ascii = ensure_ascii + if ensure_ascii: + self.encoder = raw_encode_basestring_ascii + else: + self.encoder = raw_encode_basestring + if encoding != 'utf-8': + orig_encoder = self.encoder + def encoder(o): + if isinstance(o, str): + o = o.decode(encoding) + return orig_encoder(o) + self.encoder = encoder self.check_circular = check_circular self.allow_nan = allow_nan self.sort_keys = sort_keys @@ -184,24 +185,126 @@ '{"foo": ["bar", "baz"]}' """ - # This is for extremely simple cases and benchmarks. + if self.check_circular: + markers = {} + else: + markers = None + if self.ensure_ascii: + builder = StringBuilder() + else: + builder = UnicodeBuilder() + self._encode(o, markers, builder, 0) + return builder.build() + + def _emit_indent(self, builder, _current_indent_level): + if self.indent is not None: + _current_indent_level += 1 + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent + builder.append(newline_indent) + else: + separator = self.item_separator + return separator, _current_indent_level + + def _emit_unindent(self, builder, _current_indent_level): + if self.indent is not None: + builder.append('\n') + builder.append(' ' * (self.indent * (_current_indent_level - 1))) + + def _encode(self, o, markers, builder, _current_indent_level): if isinstance(o, basestring): - if isinstance(o, str): - _encoding = self.encoding - if (_encoding is not None - and not (_encoding == 'utf-8')): - o = o.decode(_encoding) - if self.ensure_ascii: - return encode_basestring_ascii(o) + builder.append('"') + builder.append(self.encoder(o)) + builder.append('"') + elif o is None: + builder.append('null') + elif o is True: + builder.append('true') + elif o is False: + 
builder.append('false') + elif isinstance(o, (int, long)): + builder.append(str(o)) + elif isinstance(o, float): + builder.append(self._floatstr(o)) + elif isinstance(o, (list, tuple)): + if not o: + builder.append('[]') + return + self._encode_list(o, markers, builder, _current_indent_level) + elif isinstance(o, dict): + if not o: + builder.append('{}') + return + self._encode_dict(o, markers, builder, _current_indent_level) + else: + self._mark_markers(markers, o) + res = self.default(o) + self._encode(res, markers, builder, _current_indent_level) + self._remove_markers(markers, o) + return res + + def _encode_list(self, l, markers, builder, _current_indent_level): + self._mark_markers(markers, l) + builder.append('[') + first = True + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + for elem in l: + if first: + first = False else: - return encode_basestring(o) - # This doesn't pass the iterator directly to ''.join() because the - # exceptions aren't as detailed. The list call should be roughly - # equivalent to the PySequence_Fast that ''.join() would do. 
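The rewritten `_encode` builds the whole document in a single pass by appending fragments to a builder (PyPy's `__pypy__.builders.StringBuilder`/`UnicodeBuilder`) instead of yielding chunks. A rough, runnable sketch of that type dispatch, with a plain list standing in for the builder and no escaping, indenting, or circular-reference markers:

```python
def encode(o, buf=None):
    # One-pass dispatch in the spirit of _encode above; a plain list
    # stands in for __pypy__.builders.StringBuilder.
    top = buf is None
    if top:
        buf = []
    if isinstance(o, str):
        buf.append('"' + o + '"')   # the real code escapes via self.encoder
    elif o is None:
        buf.append('null')
    elif o is True:
        buf.append('true')
    elif o is False:
        buf.append('false')
    elif isinstance(o, (int, float)):
        buf.append(str(o))
    elif isinstance(o, (list, tuple)):
        buf.append('[')
        for i, elem in enumerate(o):
            if i:
                buf.append(', ')
            encode(elem, buf)
        buf.append(']')
    elif isinstance(o, dict):
        buf.append('{')
        for i, (k, v) in enumerate(o.items()):
            if i:
                buf.append(', ')
            buf.append('"' + str(k) + '": ')
            encode(v, buf)
        buf.append('}')
    else:
        raise TypeError('cannot encode %r' % (o,))
    if top:
        return ''.join(buf)

assert encode({'a': [1, True, None]}) == '{"a": [1, true, null]}'
```

Note the `is True`/`is False` checks come before the `int` check, exactly as in the diff, because `bool` is a subclass of `int`.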
- chunks = self.iterencode(o, _one_shot=True) - if not isinstance(chunks, (list, tuple)): - chunks = list(chunks) - return ''.join(chunks) + builder.append(separator) + self._encode(elem, markers, builder, _current_indent_level) + del elem # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append(']') + self._remove_markers(markers, l) + + def _encode_dict(self, d, markers, builder, _current_indent_level): + self._mark_markers(markers, d) + first = True + builder.append('{') + separator, _current_indent_level = self._emit_indent(builder, + _current_indent_level) + if self.sort_keys: + items = sorted(d.items(), key=lambda kv: kv[0]) + else: + items = d.iteritems() + + for key, v in items: + if first: + first = False + else: + builder.append(separator) + if isinstance(key, basestring): + pass + # JavaScript is weakly typed for these, so it makes sense to + # also allow them. Many encoders seem to do something like this. + elif isinstance(key, float): + key = self._floatstr(key) + elif key is True: + key = 'true' + elif key is False: + key = 'false' + elif key is None: + key = 'null' + elif isinstance(key, (int, long)): + key = str(key) + elif self.skipkeys: + continue + else: + raise TypeError("key " + repr(key) + " is not a string") + builder.append('"') + builder.append(self.encoder(key)) + builder.append('"') + builder.append(self.key_separator) + self._encode(v, markers, builder, _current_indent_level) + del key + del v # XXX grumble + self._emit_unindent(builder, _current_indent_level) + builder.append('}') + self._remove_markers(markers, d) def iterencode(self, o, _one_shot=False): """Encode the given object and yield each string @@ -217,86 +320,54 @@ markers = {} else: markers = None - if self.ensure_ascii: - _encoder = encode_basestring_ascii + return self._iterencode(o, markers, 0) + + def _floatstr(self, o): + # Check for specials. 
Note that this type of test is processor + # and/or platform-specific, so do tests which don't depend on the + # internals. + + if o != o: + text = 'NaN' + elif o == INFINITY: + text = 'Infinity' + elif o == -INFINITY: + text = '-Infinity' else: - _encoder = encode_basestring - if self.encoding != 'utf-8': - def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding): - if isinstance(o, str): - o = o.decode(_encoding) - return _orig_encoder(o) + return FLOAT_REPR(o) - def floatstr(o, allow_nan=self.allow_nan, - _repr=FLOAT_REPR, _inf=INFINITY, _neginf=-INFINITY): - # Check for specials. Note that this type of test is processor - # and/or platform-specific, so do tests which don't depend on the - # internals. + if not self.allow_nan: + raise ValueError( + "Out of range float values are not JSON compliant: " + + repr(o)) - if o != o: - text = 'NaN' - elif o == _inf: - text = 'Infinity' - elif o == _neginf: - text = '-Infinity' - else: - return _repr(o) + return text - if not allow_nan: - raise ValueError( - "Out of range float values are not JSON compliant: " + - repr(o)) + def _mark_markers(self, markers, o): + if markers is not None: + if id(o) in markers: + raise ValueError("Circular reference detected") + markers[id(o)] = None - return text + def _remove_markers(self, markers, o): + if markers is not None: + del markers[id(o)] - - if (_one_shot and c_make_encoder is not None - and self.indent is None and not self.sort_keys): - _iterencode = c_make_encoder( - markers, self.default, _encoder, self.indent, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, self.allow_nan) - else: - _iterencode = _make_iterencode( - markers, self.default, _encoder, self.indent, floatstr, - self.key_separator, self.item_separator, self.sort_keys, - self.skipkeys, _one_shot) - return _iterencode(o, 0) - -def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, - _key_separator, _item_separator, _sort_keys, _skipkeys, _one_shot, - ## HACK: 
hand-optimized bytecode; turn globals into locals - ValueError=ValueError, - basestring=basestring, - dict=dict, - float=float, - id=id, - int=int, - isinstance=isinstance, - list=list, - long=long, - str=str, - tuple=tuple, - ): - - def _iterencode_list(lst, _current_indent_level): + def _iterencode_list(self, lst, markers, _current_indent_level): if not lst: yield '[]' return - if markers is not None: - markerid = id(lst) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = lst + self._mark_markers(markers, lst) buf = '[' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + separator = self.item_separator + newline_indent buf += newline_indent else: newline_indent = None - separator = _item_separator + separator = self.item_separator first = True for value in lst: if first: @@ -304,7 +375,7 @@ else: buf = separator if isinstance(value, basestring): - yield buf + _encoder(value) + yield buf + '"' + self.encoder(value) + '"' elif value is None: yield buf + 'null' elif value is True: @@ -314,44 +385,43 @@ elif isinstance(value, (int, long)): yield buf + str(value) elif isinstance(value, float): - yield buf + _floatstr(value) + yield buf + self._floatstr(value) else: yield buf if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: 
_current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield ']' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, lst) - def _iterencode_dict(dct, _current_indent_level): + def _iterencode_dict(self, dct, markers, _current_indent_level): if not dct: yield '{}' return - if markers is not None: - markerid = id(dct) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = dct + self._mark_markers(markers, dct) yield '{' - if _indent is not None: + if self.indent is not None: _current_indent_level += 1 - newline_indent = '\n' + (' ' * (_indent * _current_indent_level)) - item_separator = _item_separator + newline_indent + newline_indent = '\n' + (' ' * (self.indent * + _current_indent_level)) + item_separator = self.item_separator + newline_indent yield newline_indent else: newline_indent = None - item_separator = _item_separator + item_separator = self.item_separator first = True - if _sort_keys: + if self.sort_keys: items = sorted(dct.items(), key=lambda kv: kv[0]) else: items = dct.iteritems() @@ -361,7 +431,7 @@ # JavaScript is weakly typed for these, so it makes sense to # also allow them. Many encoders seem to do something like this. 
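JSON object keys must be strings, so `_iterencode_dict` coerces other scalar key types as shown above. The same chain, extracted as a standalone helper (using `repr` in place of `_floatstr` for brevity):

```python
def coerce_key(key):
    # Sketch of the key-coercion chain in _iterencode_dict: scalar
    # non-string keys become the JSON spellings of their values.
    if isinstance(key, str):
        return key
    if isinstance(key, float):
        return repr(key)        # the real code uses _floatstr
    if key is True:             # bool checks must precede the int check
        return 'true'
    if key is False:
        return 'false'
    if key is None:
        return 'null'
    if isinstance(key, int):
        return str(key)
    raise TypeError("key " + repr(key) + " is not a string")

assert coerce_key(True) == 'true'
assert coerce_key(7) == '7'
```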
elif isinstance(key, float): - key = _floatstr(key) + key = self._floatstr(key) elif key is True: key = 'true' elif key is False: @@ -370,7 +440,7 @@ key = 'null' elif isinstance(key, (int, long)): key = str(key) - elif _skipkeys: + elif self.skipkeys: continue else: raise TypeError("key " + repr(key) + " is not a string") @@ -378,10 +448,10 @@ first = False else: yield item_separator - yield _encoder(key) - yield _key_separator + yield '"' + self.encoder(key) + '"' + yield self.key_separator if isinstance(value, basestring): - yield _encoder(value) + yield '"' + self.encoder(value) + '"' elif value is None: yield 'null' elif value is True: @@ -391,26 +461,28 @@ elif isinstance(value, (int, long)): yield str(value) elif isinstance(value, float): - yield _floatstr(value) + yield self._floatstr(value) else: if isinstance(value, (list, tuple)): - chunks = _iterencode_list(value, _current_indent_level) + chunks = self._iterencode_list(value, markers, + _current_indent_level) elif isinstance(value, dict): - chunks = _iterencode_dict(value, _current_indent_level) + chunks = self._iterencode_dict(value, markers, + _current_indent_level) else: - chunks = _iterencode(value, _current_indent_level) + chunks = self._iterencode(value, markers, + _current_indent_level) for chunk in chunks: yield chunk if newline_indent is not None: _current_indent_level -= 1 - yield '\n' + (' ' * (_indent * _current_indent_level)) + yield '\n' + (' ' * (self.indent * _current_indent_level)) yield '}' - if markers is not None: - del markers[markerid] + self._remove_markers(markers, dct) - def _iterencode(o, _current_indent_level): + def _iterencode(self, o, markers, _current_indent_level): if isinstance(o, basestring): - yield _encoder(o) + yield '"' + self.encoder(o) + '"' elif o is None: yield 'null' elif o is True: @@ -420,23 +492,19 @@ elif isinstance(o, (int, long)): yield str(o) elif isinstance(o, float): - yield _floatstr(o) + yield self._floatstr(o) elif isinstance(o, (list, tuple)): - 
for chunk in _iterencode_list(o, _current_indent_level): + for chunk in self._iterencode_list(o, markers, + _current_indent_level): yield chunk elif isinstance(o, dict): - for chunk in _iterencode_dict(o, _current_indent_level): + for chunk in self._iterencode_dict(o, markers, + _current_indent_level): yield chunk else: - if markers is not None: - markerid = id(o) - if markerid in markers: - raise ValueError("Circular reference detected") - markers[markerid] = o - o = _default(o) - for chunk in _iterencode(o, _current_indent_level): + self._mark_markers(markers, o) + obj = self.default(o) + for chunk in self._iterencode(obj, markers, + _current_indent_level): yield chunk - if markers is not None: - del markers[markerid] - - return _iterencode + self._remove_markers(markers, o) diff --git a/lib-python/2.7/json/tests/test_unicode.py b/lib-python/2.7/json/tests/test_unicode.py --- a/lib-python/2.7/json/tests/test_unicode.py +++ b/lib-python/2.7/json/tests/test_unicode.py @@ -80,6 +80,12 @@ # Issue 10038. self.assertEqual(type(self.loads('"foo"')), unicode) + def test_encode_not_utf_8(self): + self.assertEqual(self.dumps('\xb1\xe6', encoding='iso8859-2'), + '"\\u0105\\u0107"') + self.assertEqual(self.dumps(['\xb1\xe6'], encoding='iso8859-2'), + '["\\u0105\\u0107"]') + class TestPyUnicode(TestUnicode, PyTest): pass class TestCUnicode(TestUnicode, CTest): pass diff --git a/lib-python/2.7/mailbox.py b/lib-python/2.7/mailbox.py --- a/lib-python/2.7/mailbox.py +++ b/lib-python/2.7/mailbox.py @@ -619,7 +619,9 @@ """Write any pending changes to disk.""" if not self._pending: return - + if self._file.closed: + self._pending = False + return # In order to be writing anything out at all, self._toc must # already have been generated (and presumably has been modified # by adding or deleting an item). 
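The mailbox.py proxy-file change above closes the underlying file from `__del__`, guarding with `hasattr` so a partially constructed proxy cannot raise. A minimal model of that lifetime handling (class name is invented for illustration):

```python
import io

class _ProxyFileSketch:
    # Minimal model of mailbox._ProxyFile's new lifetime handling:
    # close() really closes the underlying file and drops the
    # reference; __del__ guards with hasattr() so an object whose
    # __init__ never ran to completion does not raise AttributeError.
    def __init__(self, f):
        self._file = f

    def close(self):
        self._file.close()
        del self._file

    def __del__(self):
        if hasattr(self, '_file'):
            self.close()

f = io.StringIO()
p = _ProxyFileSketch(f)
del p          # __del__ runs promptly under CPython's refcounting
assert f.closed
```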
@@ -1818,6 +1820,10 @@ else: self._pos = pos + def __del__(self): + if hasattr(self,'_file'): + self.close() + def read(self, size=None): """Read bytes.""" return self._read(size, self._file.read) @@ -1854,6 +1860,7 @@ def close(self): """Close the file.""" + self._file.close() del self._file def _read(self, size, read_method): diff --git a/lib-python/2.7/multiprocessing/forking.py b/lib-python/2.7/multiprocessing/forking.py --- a/lib-python/2.7/multiprocessing/forking.py +++ b/lib-python/2.7/multiprocessing/forking.py @@ -73,15 +73,12 @@ return getattr, (m.im_self, m.im_func.func_name) ForkingPickler.register(type(ForkingPickler.save), _reduce_method) -def _reduce_method_descriptor(m): - return getattr, (m.__objclass__, m.__name__) -ForkingPickler.register(type(list.append), _reduce_method_descriptor) -ForkingPickler.register(type(int.__add__), _reduce_method_descriptor) - -#def _reduce_builtin_function_or_method(m): -# return getattr, (m.__self__, m.__name__) -#ForkingPickler.register(type(list().append), _reduce_builtin_function_or_method) -#ForkingPickler.register(type(int().__add__), _reduce_builtin_function_or_method) +if type(list.append) is not type(ForkingPickler.save): + # Some python implementations have unbound methods even for builtin types + def _reduce_method_descriptor(m): + return getattr, (m.__objclass__, m.__name__) + ForkingPickler.register(type(list.append), _reduce_method_descriptor) + ForkingPickler.register(type(int.__add__), _reduce_method_descriptor) try: from functools import partial diff --git a/lib-python/2.7/opcode.py b/lib-python/2.7/opcode.py --- a/lib-python/2.7/opcode.py +++ b/lib-python/2.7/opcode.py @@ -1,4 +1,3 @@ - """ opcode module - potentially shared between dis and other modules which operate on bytecodes (e.g. peephole optimizers). 
@@ -189,4 +188,10 @@ def_op('SET_ADD', 146) def_op('MAP_ADD', 147) +# pypy modification, experimental bytecode +def_op('LOOKUP_METHOD', 201) # Index in name list +hasname.append(201) +def_op('CALL_METHOD', 202) # #args not including 'self' +def_op('BUILD_LIST_FROM_ARG', 203) + del def_op, name_op, jrel_op, jabs_op diff --git a/lib-python/2.7/pickle.py b/lib-python/2.7/pickle.py --- a/lib-python/2.7/pickle.py +++ b/lib-python/2.7/pickle.py @@ -168,7 +168,7 @@ # Pickling machinery -class Pickler: +class Pickler(object): def __init__(self, file, protocol=None): """This takes a file-like object for writing a pickle data stream. @@ -638,6 +638,10 @@ # else tmp is empty, and we're done def save_dict(self, obj): + modict_saver = self._pickle_moduledict(obj) + if modict_saver is not None: + return self.save_reduce(*modict_saver) + write = self.write if self.bin: @@ -687,6 +691,29 @@ write(SETITEM) # else tmp is empty, and we're done + def _pickle_moduledict(self, obj): + # save module dictionary as "getattr(module, '__dict__')" + + # build index of module dictionaries + try: + modict = self.module_dict_ids + except AttributeError: + modict = {} + from sys import modules + for mod in modules.values(): + if isinstance(mod, ModuleType): + modict[id(mod.__dict__)] = mod + self.module_dict_ids = modict + + thisid = id(obj) + try: + themodule = modict[thisid] + except KeyError: + return None + from __builtin__ import getattr + return getattr, (themodule, '__dict__') + + def save_inst(self, obj): cls = obj.__class__ @@ -727,6 +754,29 @@ dispatch[InstanceType] = save_inst + def save_function(self, obj): + try: + return self.save_global(obj) + except PicklingError, e: + pass + # Check copy_reg.dispatch_table + reduce = dispatch_table.get(type(obj)) + if reduce: + rv = reduce(obj) + else: + # Check for a __reduce_ex__ method, fall back to __reduce__ + reduce = getattr(obj, "__reduce_ex__", None) + if reduce: + rv = reduce(self.proto) + else: + reduce = getattr(obj, "__reduce__", 
None) + if reduce: + rv = reduce() + else: + raise e + return self.save_reduce(obj=obj, *rv) + dispatch[FunctionType] = save_function + def save_global(self, obj, name=None, pack=struct.pack): write = self.write memo = self.memo @@ -768,7 +818,6 @@ self.memoize(obj) dispatch[ClassType] = save_global - dispatch[FunctionType] = save_global dispatch[BuiltinFunctionType] = save_global dispatch[TypeType] = save_global @@ -824,7 +873,7 @@ # Unpickling machinery -class Unpickler: +class Unpickler(object): def __init__(self, file): """This takes a file-like object for reading a pickle data stream. diff --git a/lib-python/2.7/pkgutil.py b/lib-python/2.7/pkgutil.py --- a/lib-python/2.7/pkgutil.py +++ b/lib-python/2.7/pkgutil.py @@ -244,7 +244,8 @@ return mod def get_data(self, pathname): - return open(pathname, "rb").read() + with open(pathname, "rb") as f: + return f.read() def _reopen(self): if self.file and self.file.closed: diff --git a/lib-python/2.7/pprint.py b/lib-python/2.7/pprint.py --- a/lib-python/2.7/pprint.py +++ b/lib-python/2.7/pprint.py @@ -144,7 +144,7 @@ return r = getattr(typ, "__repr__", None) - if issubclass(typ, dict) and r is dict.__repr__: + if issubclass(typ, dict) and r == dict.__repr__: write('{') if self._indent_per_level > 1: write((self._indent_per_level - 1) * ' ') @@ -173,10 +173,10 @@ write('}') return - if ((issubclass(typ, list) and r is list.__repr__) or - (issubclass(typ, tuple) and r is tuple.__repr__) or - (issubclass(typ, set) and r is set.__repr__) or - (issubclass(typ, frozenset) and r is frozenset.__repr__) + if ((issubclass(typ, list) and r == list.__repr__) or + (issubclass(typ, tuple) and r == tuple.__repr__) or + (issubclass(typ, set) and r == set.__repr__) or + (issubclass(typ, frozenset) and r == frozenset.__repr__) ): length = _len(object) if issubclass(typ, list): @@ -266,7 +266,7 @@ return ("%s%s%s" % (closure, sio.getvalue(), closure)), True, False r = getattr(typ, "__repr__", None) - if issubclass(typ, dict) and r is 
dict.__repr__: + if issubclass(typ, dict) and r == dict.__repr__: if not object: return "{}", True, False objid = _id(object) @@ -291,8 +291,8 @@ del context[objid] return "{%s}" % _commajoin(components), readable, recursive - if (issubclass(typ, list) and r is list.__repr__) or \ - (issubclass(typ, tuple) and r is tuple.__repr__): + if (issubclass(typ, list) and r == list.__repr__) or \ + (issubclass(typ, tuple) and r == tuple.__repr__): if issubclass(typ, list): if not object: return "[]", True, False diff --git a/lib-python/2.7/pydoc.py b/lib-python/2.7/pydoc.py --- a/lib-python/2.7/pydoc.py +++ b/lib-python/2.7/pydoc.py @@ -623,7 +623,9 @@ head, '#ffffff', '#7799ee', 'index
' + filelink + docloc) - modules = inspect.getmembers(object, inspect.ismodule) + def isnonbuiltinmodule(obj): + return inspect.ismodule(obj) and obj is not __builtin__ + modules = inspect.getmembers(object, isnonbuiltinmodule) classes, cdict = [], {} for key, value in inspect.getmembers(object, inspect.isclass): diff --git a/lib-python/2.7/random.py b/lib-python/2.7/random.py --- a/lib-python/2.7/random.py +++ b/lib-python/2.7/random.py @@ -41,7 +41,6 @@ from __future__ import division from warnings import warn as _warn -from types import MethodType as _MethodType, BuiltinMethodType as _BuiltinMethodType from math import log as _log, exp as _exp, pi as _pi, e as _e, ceil as _ceil from math import sqrt as _sqrt, acos as _acos, cos as _cos, sin as _sin from os import urandom as _urandom @@ -240,8 +239,7 @@ return self.randrange(a, b+1) - def _randbelow(self, n, _log=_log, int=int, _maxwidth=1L< n-1 > 2**(k-2) r = getrandbits(k) while r >= n: diff --git a/lib-python/2.7/site.py b/lib-python/2.7/site.py --- a/lib-python/2.7/site.py +++ b/lib-python/2.7/site.py @@ -75,7 +75,6 @@ USER_SITE = None USER_BASE = None - def makepath(*paths): dir = os.path.join(*paths) try: @@ -91,7 +90,10 @@ if hasattr(m, '__loader__'): continue # don't mess with a PEP 302-supplied __file__ try: - m.__file__ = os.path.abspath(m.__file__) + prev = m.__file__ + new = os.path.abspath(m.__file__) + if prev != new: + m.__file__ = new except (AttributeError, OSError): pass @@ -289,6 +291,7 @@ will find its `site-packages` subdirectory depending on the system environment, and will return a list of full paths. 
""" + is_pypy = '__pypy__' in sys.builtin_module_names sitepackages = [] seen = set() @@ -299,6 +302,10 @@ if sys.platform in ('os2emx', 'riscos'): sitepackages.append(os.path.join(prefix, "Lib", "site-packages")) + elif is_pypy: + from distutils.sysconfig import get_python_lib + sitedir = get_python_lib(standard_lib=False, prefix=prefix) + sitepackages.append(sitedir) elif os.sep == '/': sitepackages.append(os.path.join(prefix, "lib", "python" + sys.version[:3], @@ -435,22 +442,33 @@ if key == 'q': break +##def setcopyright(): +## """Set 'copyright' and 'credits' in __builtin__""" +## __builtin__.copyright = _Printer("copyright", sys.copyright) +## if sys.platform[:4] == 'java': +## __builtin__.credits = _Printer( +## "credits", +## "Jython is maintained by the Jython developers (www.jython.org).") +## else: +## __builtin__.credits = _Printer("credits", """\ +## Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands +## for supporting Python development. See www.python.org for more information.""") +## here = os.path.dirname(os.__file__) +## __builtin__.license = _Printer( +## "license", "See http://www.python.org/%.3s/license.html" % sys.version, +## ["LICENSE.txt", "LICENSE"], +## [os.path.join(here, os.pardir), here, os.curdir]) + def setcopyright(): - """Set 'copyright' and 'credits' in __builtin__""" + # XXX this is the PyPy-specific version. Should be unified with the above. __builtin__.copyright = _Printer("copyright", sys.copyright) - if sys.platform[:4] == 'java': - __builtin__.credits = _Printer( - "credits", - "Jython is maintained by the Jython developers (www.jython.org).") - else: - __builtin__.credits = _Printer("credits", """\ - Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands - for supporting Python development. 
See www.python.org for more information.""") - here = os.path.dirname(os.__file__) + __builtin__.credits = _Printer( + "credits", + "PyPy is maintained by the PyPy developers: http://pypy.org/") __builtin__.license = _Printer( - "license", "See http://www.python.org/%.3s/license.html" % sys.version, - ["LICENSE.txt", "LICENSE"], - [os.path.join(here, os.pardir), here, os.curdir]) + "license", + "See https://bitbucket.org/pypy/pypy/src/default/LICENSE") + class _Helper(object): @@ -476,7 +494,7 @@ if sys.platform == 'win32': import locale, codecs enc = locale.getdefaultlocale()[1] - if enc.startswith('cp'): # "cp***" ? + if enc is not None and enc.startswith('cp'): # "cp***" ? try: codecs.lookup(enc) except LookupError: @@ -532,9 +550,18 @@ "'import usercustomize' failed; use -v for traceback" +def import_builtin_stuff(): + """PyPy specific: pre-import a few built-in modules, because + some programs actually rely on them to be in sys.modules :-(""" + import exceptions + if 'zipimport' in sys.builtin_module_names: + import zipimport + + def main(): global ENABLE_USER_SITE + import_builtin_stuff() abs__file__() known_paths = removeduppaths() if (os.name == "posix" and sys.path and diff --git a/lib-python/2.7/socket.py b/lib-python/2.7/socket.py --- a/lib-python/2.7/socket.py +++ b/lib-python/2.7/socket.py @@ -46,8 +46,6 @@ import _socket from _socket import * -from functools import partial -from types import MethodType try: import _ssl @@ -159,11 +157,6 @@ if sys.platform == "riscos": _socketmethods = _socketmethods + ('sleeptaskw',) -# All the method names that must be delegated to either the real socket -# object or the _closedsocket object. 
-_delegate_methods = ("recv", "recvfrom", "recv_into", "recvfrom_into", - "send", "sendto") - class _closedsocket(object): __slots__ = [] def _dummy(*args): @@ -180,22 +173,43 @@ __doc__ = _realsocket.__doc__ - __slots__ = ["_sock", "__weakref__"] + list(_delegate_methods) - def __init__(self, family=AF_INET, type=SOCK_STREAM, proto=0, _sock=None): if _sock is None: _sock = _realsocket(family, type, proto) self._sock = _sock - for method in _delegate_methods: - setattr(self, method, getattr(_sock, method)) + self._io_refs = 0 + self._closed = False - def close(self, _closedsocket=_closedsocket, - _delegate_methods=_delegate_methods, setattr=setattr): + def send(self, data, flags=0): + return self._sock.send(data, flags=flags) + send.__doc__ = _realsocket.send.__doc__ + + def recv(self, buffersize, flags=0): + return self._sock.recv(buffersize, flags=flags) + recv.__doc__ = _realsocket.recv.__doc__ + + def recv_into(self, buffer, nbytes=0, flags=0): + return self._sock.recv_into(buffer, nbytes=nbytes, flags=flags) + recv_into.__doc__ = _realsocket.recv_into.__doc__ + + def recvfrom(self, buffersize, flags=0): + return self._sock.recvfrom(buffersize, flags=flags) + recvfrom.__doc__ = _realsocket.recvfrom.__doc__ + + def recvfrom_into(self, buffer, nbytes=0, flags=0): + return self._sock.recvfrom_into(buffer, nbytes=nbytes, flags=flags) + recvfrom_into.__doc__ = _realsocket.recvfrom_into.__doc__ + + def sendto(self, data, param2, param3=None): + if param3 is None: + return self._sock.sendto(data, param2) + else: + return self._sock.sendto(data, param2, param3) + sendto.__doc__ = _realsocket.sendto.__doc__ + + def close(self): # This function should not reference any globals. See issue #808164. self._sock = _closedsocket() - dummy = self._sock._dummy - for method in _delegate_methods: - setattr(self, method, dummy) close.__doc__ = _realsocket.close.__doc__ def accept(self): @@ -214,21 +228,49 @@ Return a regular file object corresponding to the socket. 
The mode and bufsize arguments are as for the built-in open() function.""" - return _fileobject(self._sock, mode, bufsize) + self._io_refs += 1 + return _fileobject(self, mode, bufsize) + + def _decref_socketios(self): + if self._io_refs > 0: + self._io_refs -= 1 + if self._closed: + self.close() + + def _real_close(self): + # This function should not reference any globals. See issue #808164. + self._sock.close() + + def close(self): + # This function should not reference any globals. See issue #808164. + self._closed = True + if self._io_refs <= 0: + self._real_close() family = property(lambda self: self._sock.family, doc="the socket family") type = property(lambda self: self._sock.type, doc="the socket type") proto = property(lambda self: self._sock.proto, doc="the socket protocol") -def meth(name,self,*args): - return getattr(self._sock,name)(*args) + # Delegate many calls to the raw socket object. + _s = ("def %(name)s(self, %(args)s): return self._sock.%(name)s(%(args)s)\n\n" + "%(name)s.__doc__ = _realsocket.%(name)s.__doc__\n") + for _m in _socketmethods: + # yupi! 
we're on pypy, all code objects have this interface + argcount = getattr(_realsocket, _m).im_func.func_code.co_argcount - 1 + exec _s % {'name': _m, 'args': ', '.join('arg%d' % i for i in range(argcount))} + del _m, _s, argcount -for _m in _socketmethods: - p = partial(meth,_m) - p.__name__ = _m - p.__doc__ = getattr(_realsocket,_m).__doc__ - m = MethodType(p,None,_socketobject) - setattr(_socketobject,_m,m) + # Delegation methods with default arguments, that the code above + # cannot handle correctly + def sendall(self, data, flags=0): + self._sock.sendall(data, flags) + sendall.__doc__ = _realsocket.sendall.__doc__ + + def getsockopt(self, level, optname, buflen=None): + if buflen is None: + return self._sock.getsockopt(level, optname) + return self._sock.getsockopt(level, optname, buflen) + getsockopt.__doc__ = _realsocket.getsockopt.__doc__ socket = SocketType = _socketobject @@ -278,8 +320,11 @@ if self._sock: self.flush() finally: - if self._close: - self._sock.close() + if self._sock: + if self._close: + self._sock.close() + else: + self._sock._decref_socketios() + self._sock = None def __del__(self): diff --git a/lib-python/2.7/sqlite3/test/dbapi.py b/lib-python/2.7/sqlite3/test/dbapi.py --- a/lib-python/2.7/sqlite3/test/dbapi.py +++ b/lib-python/2.7/sqlite3/test/dbapi.py @@ -1,4 +1,4 @@ -#-*- coding: ISO-8859-1 -*- +#-*- coding: iso-8859-1 -*- # pysqlite2/test/dbapi.py: tests for DB-API compliance # # Copyright (C) 2004-2010 Gerhard Häring @@ -332,6 +332,9 @@ def __init__(self): self.value = 5 + def __iter__(self): + return self + def next(self): if self.value == 10: raise StopIteration @@ -826,7 +829,7 @@ con = sqlite.connect(":memory:") con.close() try: - con() + con("select 1") self.fail("Should have raised a ProgrammingError") except sqlite.ProgrammingError: pass diff --git a/lib-python/2.7/sqlite3/test/regression.py b/lib-python/2.7/sqlite3/test/regression.py --- a/lib-python/2.7/sqlite3/test/regression.py +++ b/lib-python/2.7/sqlite3/test/regression.py 
@@ -264,6 +264,28 @@ """ self.assertRaises(sqlite.Warning, self.con, 1) + def CheckUpdateDescriptionNone(self): + """ + Call Cursor.update with an UPDATE query and check that it sets the + cursor's description to be None. + """ + cur = self.con.cursor() + cur.execute("CREATE TABLE foo (id INTEGER)") + cur.execute("UPDATE foo SET id = 3 WHERE id = 1") + self.assertEqual(cur.description, None) + + def CheckStatementCache(self): + cur = self.con.cursor() + cur.execute("CREATE TABLE foo (id INTEGER)") + values = [(i,) for i in xrange(5)] + cur.executemany("INSERT INTO foo (id) VALUES (?)", values) + + cur.execute("SELECT id FROM foo") + self.assertEqual(list(cur), values) + self.con.commit() + cur.execute("SELECT id FROM foo") + self.assertEqual(list(cur), values) + def suite(): regression_suite = unittest.makeSuite(RegressionTests, "Check") return unittest.TestSuite((regression_suite,)) diff --git a/lib-python/2.7/sqlite3/test/userfunctions.py b/lib-python/2.7/sqlite3/test/userfunctions.py --- a/lib-python/2.7/sqlite3/test/userfunctions.py +++ b/lib-python/2.7/sqlite3/test/userfunctions.py @@ -275,12 +275,14 @@ pass def CheckAggrNoStep(self): + # XXX it's better to raise OperationalError in order to stop + # the query earlier. 
cur = self.con.cursor() try: cur.execute("select nostep(t) from test") - self.fail("should have raised an AttributeError") - except AttributeError, e: - self.assertEqual(e.args[0], "AggrNoStep instance has no attribute 'step'") + self.fail("should have raised an OperationalError") + except sqlite.OperationalError, e: + self.assertEqual(e.args[0], "user-defined aggregate's 'step' method raised error") def CheckAggrNoFinalize(self): cur = self.con.cursor() diff --git a/lib-python/2.7/ssl.py b/lib-python/2.7/ssl.py --- a/lib-python/2.7/ssl.py +++ b/lib-python/2.7/ssl.py @@ -86,7 +86,7 @@ else: _PROTOCOL_NAMES[PROTOCOL_SSLv2] = "SSLv2" -from socket import socket, _fileobject, _delegate_methods, error as socket_error +from socket import socket, _fileobject, error as socket_error from socket import getnameinfo as _getnameinfo import base64 # for DER-to-PEM translation import errno @@ -103,14 +103,6 @@ do_handshake_on_connect=True, suppress_ragged_eofs=True, ciphers=None): socket.__init__(self, _sock=sock._sock) - # The initializer for socket overrides the methods send(), recv(), etc. - # in the instancce, which we don't need -- but we want to provide the - # methods defined in SSLSocket. 
- for attr in _delegate_methods: - try: - delattr(self, attr) - except AttributeError: - pass if certfile and not keyfile: keyfile = certfile diff --git a/lib-python/2.7/subprocess.py b/lib-python/2.7/subprocess.py --- a/lib-python/2.7/subprocess.py +++ b/lib-python/2.7/subprocess.py @@ -803,7 +803,7 @@ elif stderr == PIPE: errread, errwrite = _subprocess.CreatePipe(None, 0) elif stderr == STDOUT: - errwrite = c2pwrite + errwrite = c2pwrite.handle # pass id to not close it elif isinstance(stderr, int): errwrite = msvcrt.get_osfhandle(stderr) else: @@ -818,9 +818,13 @@ def _make_inheritable(self, handle): """Return a duplicate of handle, which is inheritable""" - return _subprocess.DuplicateHandle(_subprocess.GetCurrentProcess(), + dupl = _subprocess.DuplicateHandle(_subprocess.GetCurrentProcess(), handle, _subprocess.GetCurrentProcess(), 0, 1, _subprocess.DUPLICATE_SAME_ACCESS) + # If the initial handle was obtained with CreatePipe, close it. + if not isinstance(handle, int): + handle.Close() + return dupl def _find_w9xpopen(self): diff --git a/lib-python/2.7/sysconfig.py b/lib-python/2.7/sysconfig.py --- a/lib-python/2.7/sysconfig.py +++ b/lib-python/2.7/sysconfig.py @@ -26,6 +26,16 @@ 'scripts': '{base}/bin', 'data' : '{base}', }, + 'pypy': { + 'stdlib': '{base}/lib-python', + 'platstdlib': '{base}/lib-python', + 'purelib': '{base}/lib-python', + 'platlib': '{base}/lib-python', + 'include': '{base}/include', + 'platinclude': '{base}/include', + 'scripts': '{base}/bin', + 'data' : '{base}', + }, 'nt': { 'stdlib': '{base}/Lib', 'platstdlib': '{base}/Lib', @@ -158,7 +168,9 @@ return res def _get_default_scheme(): - if os.name == 'posix': + if '__pypy__' in sys.builtin_module_names: + return 'pypy' + elif os.name == 'posix': # the default scheme for posix is posix_prefix return 'posix_prefix' return os.name @@ -182,126 +194,9 @@ return env_base if env_base else joinuser("~", ".local") -def _parse_makefile(filename, vars=None): - """Parse a Makefile-style file. 
- - A dictionary containing name/value pairs is returned. If an - optional dictionary is passed in as the second argument, it is - used instead of a new dictionary. - """ - import re - # Regexes needed for parsing Makefile (and similar syntaxes, - # like old-style Setup files). - _variable_rx = re.compile("([a-zA-Z][a-zA-Z0-9_]+)\s*=\s*(.*)") - _findvar1_rx = re.compile(r"\$\(([A-Za-z][A-Za-z0-9_]*)\)") - _findvar2_rx = re.compile(r"\${([A-Za-z][A-Za-z0-9_]*)}") - - if vars is None: - vars = {} - done = {} - notdone = {} - - with open(filename) as f: - lines = f.readlines() - - for line in lines: - if line.startswith('#') or line.strip() == '': - continue - m = _variable_rx.match(line) - if m: - n, v = m.group(1, 2) - v = v.strip() - # `$$' is a literal `$' in make - tmpv = v.replace('$$', '') - - if "$" in tmpv: - notdone[n] = v - else: - try: - v = int(v) - except ValueError: - # insert literal `$' - done[n] = v.replace('$$', '$') - else: - done[n] = v - - # do variable interpolation here - while notdone: - for name in notdone.keys(): - value = notdone[name] - m = _findvar1_rx.search(value) or _findvar2_rx.search(value) - if m: - n = m.group(1) - found = True - if n in done: - item = str(done[n]) - elif n in notdone: - # get it on a subsequent round - found = False - elif n in os.environ: - # do it like make: fall back to environment - item = os.environ[n] - else: - done[n] = item = "" - if found: - after = value[m.end():] - value = value[:m.start()] + item + after - if "$" in after: - notdone[name] = value - else: - try: value = int(value) - except ValueError: - done[name] = value.strip() - else: - done[name] = value - del notdone[name] - else: - # bogus variable reference; just drop it since we can't deal - del notdone[name] - # strip spurious spaces - for k, v in done.items(): - if isinstance(v, str): - done[k] = v.strip() - - # save the results in the global dictionary - vars.update(done) - return vars - - -def _get_makefile_filename(): - if _PYTHON_BUILD: - 
return os.path.join(_PROJECT_BASE, "Makefile") - return os.path.join(get_path('platstdlib'), "config", "Makefile") - - def _init_posix(vars): """Initialize the module as appropriate for POSIX systems.""" - # load the installed Makefile: - makefile = _get_makefile_filename() - try: - _parse_makefile(makefile, vars) - except IOError, e: - msg = "invalid Python installation: unable to open %s" % makefile - if hasattr(e, "strerror"): - msg = msg + " (%s)" % e.strerror - raise IOError(msg) - - # load the installed pyconfig.h: - config_h = get_config_h_filename() - try: - with open(config_h) as f: - parse_config_h(f, vars) - except IOError, e: - msg = "invalid Python installation: unable to open %s" % config_h - if hasattr(e, "strerror"): - msg = msg + " (%s)" % e.strerror - raise IOError(msg) - - # On AIX, there are wrong paths to the linker scripts in the Makefile - # -- these paths are relative to the Python source, but when installed - # the scripts are in another directory. - if _PYTHON_BUILD: - vars['LDSHARED'] = vars['BLDSHARED'] + return def _init_non_posix(vars): """Initialize the module as appropriate for NT""" @@ -474,10 +369,11 @@ # patched up as well. 
'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'): - flags = _CONFIG_VARS[key] - flags = re.sub('-arch\s+\w+\s', ' ', flags) - flags = flags + ' ' + arch - _CONFIG_VARS[key] = flags + if key in _CONFIG_VARS: + flags = _CONFIG_VARS[key] + flags = re.sub('-arch\s+\w+\s', ' ', flags) + flags = flags + ' ' + arch + _CONFIG_VARS[key] = flags # If we're on OSX 10.5 or later and the user tries to # compiles an extension using an SDK that is not present diff --git a/lib-python/2.7/tarfile.py b/lib-python/2.7/tarfile.py --- a/lib-python/2.7/tarfile.py +++ b/lib-python/2.7/tarfile.py @@ -425,10 +425,16 @@ raise CompressionError("zlib module is not available") self.zlib = zlib self.crc = zlib.crc32("") & 0xffffffffL - if mode == "r": - self._init_read_gz() - else: - self._init_write_gz() + try: + if mode == "r": + self._init_read_gz() + else: + self._init_write_gz() + except: + if not self._extfileobj: + fileobj.close() + self.closed = True + raise if comptype == "bz2": try: @@ -1682,13 +1688,14 @@ if filemode not in "rw": raise ValueError("mode must be 'r' or 'w'") - - t = cls(name, filemode, - _Stream(name, filemode, comptype, fileobj, bufsize), - **kwargs) - t._extfileobj = False - return t - + fid = _Stream(name, filemode, comptype, fileobj, bufsize) + try: + t = cls(name, filemode, fid, **kwargs) + t._extfileobj = False + return t + except: + fid.close() + raise elif mode in "aw": return cls.taropen(name, mode, fileobj, **kwargs) @@ -1715,16 +1722,18 @@ gzip.GzipFile except (ImportError, AttributeError): raise CompressionError("gzip module is not available") - - if fileobj is None: - fileobj = bltn_open(name, mode + "b") - + gz_fid = None try: - t = cls.taropen(name, mode, - gzip.GzipFile(name, mode, compresslevel, fileobj), - **kwargs) + gz_fid = gzip.GzipFile(name, mode, compresslevel, fileobj) + t = cls.taropen(name, mode, gz_fid, **kwargs) except IOError: + if gz_fid: + gz_fid.close() raise ReadError("not a gzip file") + except: + if gz_fid: + gz_fid.close() + raise t._extfileobj 
= False return t @@ -1741,15 +1750,21 @@ except ImportError: raise CompressionError("bz2 module is not available") - if fileobj is not None: - fileobj = _BZ2Proxy(fileobj, mode) - else: - fileobj = bz2.BZ2File(name, mode, compresslevel=compresslevel) + try: + if fileobj is not None: + bzfileobj = _BZ2Proxy(fileobj, mode) + else: + bzfileobj = bz2.BZ2File(name, mode, compresslevel=compresslevel) + t = cls.taropen(name, mode, bzfileobj, **kwargs) - try: - t = cls.taropen(name, mode, fileobj, **kwargs) except (IOError, EOFError): + if fileobj is None: + bzfileobj.close() raise ReadError("not a bzip2 file") + except: + if fileobj is None: + bzfileobj.close() + raise t._extfileobj = False return t diff --git a/lib-python/2.7/test/list_tests.py b/lib-python/2.7/test/list_tests.py --- a/lib-python/2.7/test/list_tests.py +++ b/lib-python/2.7/test/list_tests.py @@ -45,8 +45,12 @@ self.assertEqual(str(a2), "[0, 1, 2, [...], 3]") self.assertEqual(repr(a2), "[0, 1, 2, [...], 3]") + if test_support.check_impl_detail(): + depth = sys.getrecursionlimit() + 100 + else: + depth = 1000 * 1000 # should be enough to exhaust the stack l0 = [] - for i in xrange(sys.getrecursionlimit() + 100): + for i in xrange(depth): l0 = [l0] self.assertRaises(RuntimeError, repr, l0) @@ -472,7 +476,11 @@ u += "eggs" self.assertEqual(u, self.type2test("spameggs")) - self.assertRaises(TypeError, u.__iadd__, None) + def f_iadd(u, x): + u += x + return u + + self.assertRaises(TypeError, f_iadd, u, None) def test_imul(self): u = self.type2test([0, 1]) diff --git a/lib-python/2.7/test/mapping_tests.py b/lib-python/2.7/test/mapping_tests.py --- a/lib-python/2.7/test/mapping_tests.py +++ b/lib-python/2.7/test/mapping_tests.py @@ -531,7 +531,10 @@ self.assertEqual(va, int(ka)) kb, vb = tb = b.popitem() self.assertEqual(vb, int(kb)) - self.assertTrue(not(copymode < 0 and ta != tb)) + if copymode < 0 and test_support.check_impl_detail(): + # popitem() is not guaranteed to be deterministic on + # all 
implementations + self.assertEqual(ta, tb) self.assertTrue(not a) self.assertTrue(not b) diff --git a/lib-python/2.7/test/pickletester.py b/lib-python/2.7/test/pickletester.py --- a/lib-python/2.7/test/pickletester.py +++ b/lib-python/2.7/test/pickletester.py @@ -6,7 +6,7 @@ import pickletools import copy_reg -from test.test_support import TestFailed, have_unicode, TESTFN +from test.test_support import TestFailed, have_unicode, TESTFN, impl_detail # Tests that try a number of pickle protocols should have a # for proto in protocols: @@ -949,6 +949,7 @@ "Failed protocol %d: %r != %r" % (proto, obj, loaded)) + @impl_detail("pypy does not store attribute names", pypy=False) def test_attribute_name_interning(self): # Test that attribute names of pickled objects are interned when # unpickling. @@ -1091,6 +1092,7 @@ s = StringIO.StringIO("X''.") self.assertRaises(EOFError, self.module.load, s) + @impl_detail("no full restricted mode in pypy", pypy=False) def test_restricted(self): # issue7128: cPickle failed in restricted mode builtins = {self.module.__name__: self.module, diff --git a/lib-python/2.7/test/regrtest.py b/lib-python/2.7/test/regrtest.py --- a/lib-python/2.7/test/regrtest.py +++ b/lib-python/2.7/test/regrtest.py @@ -1388,7 +1388,26 @@ test_zipimport test_zlib """, - 'openbsd3': + 'openbsd4': + """ + test_ascii_formatd + test_bsddb + test_bsddb3 + test_ctypes + test_dl + test_epoll + test_gdbm + test_locale + test_normalization + test_ossaudiodev + test_pep277 + test_tcl + test_tk + test_ttk_guionly + test_ttk_textonly + test_multiprocessing + """, + 'openbsd5': """ test_ascii_formatd test_bsddb @@ -1503,13 +1522,7 @@ return self.expected if __name__ == '__main__': - # findtestdir() gets the dirname out of __file__, so we have to make it - # absolute before changing the working directory. - # For example __file__ may be relative when running trace or profile. - # See issue #9323. 
- __file__ = os.path.abspath(__file__) - - # sanity check + # Simplification for findtestdir(). assert __file__ == os.path.abspath(sys.argv[0]) # When tests are run from the Python build directory, it is best practice diff --git a/lib-python/2.7/test/seq_tests.py b/lib-python/2.7/test/seq_tests.py --- a/lib-python/2.7/test/seq_tests.py +++ b/lib-python/2.7/test/seq_tests.py @@ -307,12 +307,18 @@ def test_bigrepeat(self): import sys - if sys.maxint <= 2147483647: - x = self.type2test([0]) - x *= 2**16 - self.assertRaises(MemoryError, x.__mul__, 2**16) - if hasattr(x, '__imul__'): - self.assertRaises(MemoryError, x.__imul__, 2**16) + # we chose an N such as 2**16 * N does not fit into a cpu word + if sys.maxint == 2147483647: + # 32 bit system + N = 2**16 + else: + # 64 bit system + N = 2**48 + x = self.type2test([0]) + x *= 2**16 + self.assertRaises(MemoryError, x.__mul__, N) + if hasattr(x, '__imul__'): + self.assertRaises(MemoryError, x.__imul__, N) def test_subscript(self): a = self.type2test([10, 11]) diff --git a/lib-python/2.7/test/string_tests.py b/lib-python/2.7/test/string_tests.py --- a/lib-python/2.7/test/string_tests.py +++ b/lib-python/2.7/test/string_tests.py @@ -1024,7 +1024,10 @@ self.checkequal('abc', 'abc', '__mul__', 1) self.checkequal('abcabcabc', 'abc', '__mul__', 3) self.checkraises(TypeError, 'abc', '__mul__') - self.checkraises(TypeError, 'abc', '__mul__', '') + class Mul(object): + def mul(self, a, b): + return a * b + self.checkraises(TypeError, Mul(), 'mul', 'abc', '') # XXX: on a 64-bit system, this doesn't raise an overflow error, # but either raises a MemoryError, or succeeds (if you have 54TiB) #self.checkraises(OverflowError, 10000*'abc', '__mul__', 2000000000) diff --git a/lib-python/2.7/test/test_abstract_numbers.py b/lib-python/2.7/test/test_abstract_numbers.py --- a/lib-python/2.7/test/test_abstract_numbers.py +++ b/lib-python/2.7/test/test_abstract_numbers.py @@ -40,7 +40,8 @@ c1, c2 = complex(3, 2), complex(4,1) # XXX: This is 
not ideal, but see the comment in math_trunc(). - self.assertRaises(AttributeError, math.trunc, c1) + # Modified to suit PyPy, which gives TypeError in all cases + self.assertRaises((AttributeError, TypeError), math.trunc, c1) self.assertRaises(TypeError, float, c1) self.assertRaises(TypeError, int, c1) diff --git a/lib-python/2.7/test/test_aifc.py b/lib-python/2.7/test/test_aifc.py --- a/lib-python/2.7/test/test_aifc.py +++ b/lib-python/2.7/test/test_aifc.py @@ -1,4 +1,4 @@ -from test.test_support import findfile, run_unittest, TESTFN +from test.test_support import findfile, run_unittest, TESTFN, impl_detail import unittest import os @@ -68,6 +68,7 @@ self.assertEqual(f.getparams(), fout.getparams()) self.assertEqual(f.readframes(5), fout.readframes(5)) + @impl_detail("PyPy has no audioop module yet", pypy=False) def test_compress(self): f = self.f = aifc.open(self.sndfilepath) fout = self.fout = aifc.open(TESTFN, 'wb') diff --git a/lib-python/2.7/test/test_array.py b/lib-python/2.7/test/test_array.py --- a/lib-python/2.7/test/test_array.py +++ b/lib-python/2.7/test/test_array.py @@ -295,9 +295,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, a.__add__, b) - - self.assertRaises(TypeError, a.__add__, "bad") + with self.assertRaises(TypeError): + a + b + with self.assertRaises(TypeError): + a + 'bad' def test_iadd(self): a = array.array(self.typecode, self.example[::-1]) @@ -316,9 +317,10 @@ ) b = array.array(self.badtypecode()) - self.assertRaises(TypeError, a.__add__, b) - - self.assertRaises(TypeError, a.__iadd__, "bad") + with self.assertRaises(TypeError): + a += b + with self.assertRaises(TypeError): + a += 'bad' def test_mul(self): a = 5*array.array(self.typecode, self.example) @@ -345,7 +347,8 @@ array.array(self.typecode) ) - self.assertRaises(TypeError, a.__mul__, "bad") + with self.assertRaises(TypeError): + a * 'bad' def test_imul(self): a = array.array(self.typecode, self.example) @@ -374,7 +377,8 @@ a *= -1 self.assertEqual(a, 
array.array(self.typecode)) - self.assertRaises(TypeError, a.__imul__, "bad") + with self.assertRaises(TypeError): + a *= 'bad' def test_getitem(self): a = array.array(self.typecode, self.example) @@ -769,6 +773,7 @@ p = proxy(s) self.assertEqual(p.tostring(), s.tostring()) s = None + test_support.gc_collect() self.assertRaises(ReferenceError, len, p) def test_bug_782369(self): diff --git a/lib-python/2.7/test/test_ascii_formatd.py b/lib-python/2.7/test/test_ascii_formatd.py --- a/lib-python/2.7/test/test_ascii_formatd.py +++ b/lib-python/2.7/test/test_ascii_formatd.py @@ -4,6 +4,10 @@ import unittest from test.test_support import check_warnings, run_unittest, import_module +from test.test_support import check_impl_detail + +if not check_impl_detail(cpython=True): + raise unittest.SkipTest("this test is only for CPython") # Skip tests if _ctypes module does not exist import_module('_ctypes') diff --git a/lib-python/2.7/test/test_ast.py b/lib-python/2.7/test/test_ast.py --- a/lib-python/2.7/test/test_ast.py +++ b/lib-python/2.7/test/test_ast.py @@ -20,10 +20,24 @@ # These tests are compiled through "exec" # There should be atleast one test per statement exec_tests = [ + # None + "None", # FunctionDef "def f(): pass", + # FunctionDef with arg + "def f(a): pass", + # FunctionDef with arg and default value + "def f(a=0): pass", + # FunctionDef with varargs + "def f(*args): pass", + # FunctionDef with kwargs + "def f(**kwargs): pass", + # FunctionDef with all kind of args + "def f(a, b=1, c=None, d=[], e={}, *args, **kwargs): pass", # ClassDef "class C:pass", + # ClassDef, new style class + "class C(object): pass", # Return "def f():return 1", # Delete @@ -68,6 +82,27 @@ "for a,b in c: pass", "[(a,b) for a,b in c]", "((a,b) for a,b in c)", + "((a,b) for (a,b) in c)", + # Multiline generator expression + """( + ( + Aa + , + Bb + ) + for + Aa + , + Bb in Cc + )""", + # dictcomp + "{a : b for w in x for m in p if g}", + # dictcomp with naked tuple + "{a : b for v,w in x}", 
+ # setcomp + "{r for l in x if g}", + # setcomp with naked tuple + "{r for l,m in x}", ] # These are compiled through "single" @@ -80,6 +115,8 @@ # These are compiled through "eval" # It should test all expressions eval_tests = [ + # None + "None", # BoolOp "a and b", # BinOp @@ -90,6 +127,16 @@ "lambda:None", # Dict "{ 1:2 }", + # Empty dict + "{}", + # Set + "{None,}", + # Multiline dict + """{ + 1 + : + 2 + }""", # ListComp "[a for b in c if d]", # GeneratorExp @@ -114,8 +161,14 @@ "v", # List "[1,2,3]", + # Empty list + "[]", # Tuple "1,2,3", + # Tuple + "(1,2,3)", + # Empty tuple + "()", # Combination "a.b.c.d(a.b[1:2])", @@ -141,6 +194,35 @@ elif value is not None: self._assertTrueorder(value, parent_pos) + def test_AST_objects(self): + if test_support.check_impl_detail(): + # PyPy also provides a __dict__ to the ast.AST base class. + + x = ast.AST() + try: + x.foobar = 21 + except AttributeError, e: + self.assertEquals(e.args[0], + "'_ast.AST' object has no attribute 'foobar'") + else: + self.assert_(False) + + try: + ast.AST(lineno=2) + except AttributeError, e: + self.assertEquals(e.args[0], + "'_ast.AST' object has no attribute 'lineno'") + else: + self.assert_(False) + + try: + ast.AST(2) + except TypeError, e: + self.assertEquals(e.args[0], + "_ast.AST constructor takes 0 positional arguments") + else: + self.assert_(False) + def test_snippets(self): for input, output, kind in ((exec_tests, exec_results, "exec"), (single_tests, single_results, "single"), @@ -169,6 +251,114 @@ self.assertTrue(issubclass(ast.comprehension, ast.AST)) self.assertTrue(issubclass(ast.Gt, ast.AST)) + def test_field_attr_existence(self): + for name, item in ast.__dict__.iteritems(): + if isinstance(item, type) and name != 'AST' and name[0].isupper(): # XXX: pypy does not allow abstract ast class instanciation + x = item() + if isinstance(x, ast.AST): + self.assertEquals(type(x._fields), tuple) + + def test_arguments(self): + x = ast.arguments() + self.assertEquals(x._fields, 
('args', 'vararg', 'kwarg', 'defaults')) + try: + x.vararg + except AttributeError, e: + self.assertEquals(e.args[0], + "'arguments' object has no attribute 'vararg'") + else: + self.assert_(False) + x = ast.arguments(1, 2, 3, 4) + self.assertEquals(x.vararg, 2) + + def test_field_attr_writable(self): + x = ast.Num() + # We can assign to _fields + x._fields = 666 + self.assertEquals(x._fields, 666) + + def test_classattrs(self): + x = ast.Num() + self.assertEquals(x._fields, ('n',)) + try: + x.n + except AttributeError, e: + self.assertEquals(e.args[0], + "'Num' object has no attribute 'n'") + else: + self.assert_(False) + + x = ast.Num(42) + self.assertEquals(x.n, 42) + try: + x.lineno + except AttributeError, e: + self.assertEquals(e.args[0], + "'Num' object has no attribute 'lineno'") + else: + self.assert_(False) + + y = ast.Num() + x.lineno = y + self.assertEquals(x.lineno, y) + + try: + x.foobar + except AttributeError, e: + self.assertEquals(e.args[0], + "'Num' object has no attribute 'foobar'") + else: + self.assert_(False) + + x = ast.Num(lineno=2) + self.assertEquals(x.lineno, 2) + + x = ast.Num(42, lineno=0) + self.assertEquals(x.lineno, 0) + self.assertEquals(x._fields, ('n',)) + self.assertEquals(x.n, 42) + + self.assertRaises(TypeError, ast.Num, 1, 2) + self.assertRaises(TypeError, ast.Num, 1, 2, lineno=0) + + def test_module(self): + body = [ast.Num(42)] + x = ast.Module(body) + self.assertEquals(x.body, body) + + def test_nodeclass(self): + x = ast.BinOp() + self.assertEquals(x._fields, ('left', 'op', 'right')) + + # Zero arguments constructor explicitely allowed + x = ast.BinOp() + # Random attribute allowed too + x.foobarbaz = 5 + self.assertEquals(x.foobarbaz, 5) + + n1 = ast.Num(1) + n3 = ast.Num(3) + addop = ast.Add() + x = ast.BinOp(n1, addop, n3) + self.assertEquals(x.left, n1) + self.assertEquals(x.op, addop) + self.assertEquals(x.right, n3) + + x = ast.BinOp(1, 2, 3) + self.assertEquals(x.left, 1) + self.assertEquals(x.op, 2) + 
self.assertEquals(x.right, 3) + + x = ast.BinOp(1, 2, 3, lineno=0) + self.assertEquals(x.lineno, 0) + + def test_nodeclasses(self): + x = ast.BinOp(1, 2, 3, lineno=0) + self.assertEquals(x.left, 1) + self.assertEquals(x.op, 2) + self.assertEquals(x.right, 3) + self.assertEquals(x.lineno, 0) + def test_nodeclasses(self): x = ast.BinOp(1, 2, 3, lineno=0) self.assertEqual(x.left, 1) @@ -178,6 +368,12 @@ # node raises exception when not given enough arguments self.assertRaises(TypeError, ast.BinOp, 1, 2) + # node raises exception when given too many arguments + self.assertRaises(TypeError, ast.BinOp, 1, 2, 3, 4) + # node raises exception when not given enough arguments + self.assertRaises(TypeError, ast.BinOp, 1, 2, lineno=0) + # node raises exception when given too many arguments + self.assertRaises(TypeError, ast.BinOp, 1, 2, 3, 4, lineno=0) # can set attributes through kwargs too x = ast.BinOp(left=1, op=2, right=3, lineno=0) @@ -186,8 +382,14 @@ self.assertEqual(x.right, 3) self.assertEqual(x.lineno, 0) + # Random kwargs also allowed + x = ast.BinOp(1, 2, 3, foobarbaz=42) + self.assertEquals(x.foobarbaz, 42) + + def test_no_fields(self): # this used to fail because Sub._fields was None x = ast.Sub() + self.assertEquals(x._fields, ()) def test_pickling(self): import pickle @@ -330,8 +532,15 @@ #### EVERYTHING BELOW IS GENERATED ##### exec_results = [ +('Module', [('Expr', (1, 0), ('Name', (1, 0), 'None', ('Load',)))]), ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], None, None, []), [('Pass', (1, 9))], [])]), +('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [('Name', (1, 6), 'a', ('Param',))], None, None, []), [('Pass', (1, 10))], [])]), +('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [('Name', (1, 6), 'a', ('Param',))], None, None, [('Num', (1, 8), 0)]), [('Pass', (1, 12))], [])]), +('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], 'args', None, []), [('Pass', (1, 14))], [])]), +('Module', [('FunctionDef', (1, 0), 'f', 
('arguments', [], None, 'kwargs', []), [('Pass', (1, 17))], [])]), +('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [('Name', (1, 6), 'a', ('Param',)), ('Name', (1, 9), 'b', ('Param',)), ('Name', (1, 14), 'c', ('Param',)), ('Name', (1, 22), 'd', ('Param',)), ('Name', (1, 28), 'e', ('Param',))], 'args', 'kwargs', [('Num', (1, 11), 1), ('Name', (1, 16), 'None', ('Load',)), ('List', (1, 24), [], ('Load',)), ('Dict', (1, 30), [], [])]), [('Pass', (1, 52))], [])]), ('Module', [('ClassDef', (1, 0), 'C', [], [('Pass', (1, 8))], [])]), +('Module', [('ClassDef', (1, 0), 'C', [('Name', (1, 8), 'object', ('Load',))], [('Pass', (1, 17))], [])]), ('Module', [('FunctionDef', (1, 0), 'f', ('arguments', [], None, None, []), [('Return', (1, 8), ('Num', (1, 15), 1))], [])]), ('Module', [('Delete', (1, 0), [('Name', (1, 4), 'v', ('Del',))])]), ('Module', [('Assign', (1, 0), [('Name', (1, 0), 'v', ('Store',))], ('Num', (1, 4), 1))]), @@ -355,16 +564,26 @@ ('Module', [('For', (1, 0), ('Tuple', (1, 4), [('Name', (1, 4), 'a', ('Store',)), ('Name', (1, 6), 'b', ('Store',))], ('Store',)), ('Name', (1, 11), 'c', ('Load',)), [('Pass', (1, 14))], [])]), ('Module', [('Expr', (1, 0), ('ListComp', (1, 1), ('Tuple', (1, 2), [('Name', (1, 2), 'a', ('Load',)), ('Name', (1, 4), 'b', ('Load',))], ('Load',)), [('comprehension', ('Tuple', (1, 11), [('Name', (1, 11), 'a', ('Store',)), ('Name', (1, 13), 'b', ('Store',))], ('Store',)), ('Name', (1, 18), 'c', ('Load',)), [])]))]), ('Module', [('Expr', (1, 0), ('GeneratorExp', (1, 1), ('Tuple', (1, 2), [('Name', (1, 2), 'a', ('Load',)), ('Name', (1, 4), 'b', ('Load',))], ('Load',)), [('comprehension', ('Tuple', (1, 11), [('Name', (1, 11), 'a', ('Store',)), ('Name', (1, 13), 'b', ('Store',))], ('Store',)), ('Name', (1, 18), 'c', ('Load',)), [])]))]), +('Module', [('Expr', (1, 0), ('GeneratorExp', (1, 1), ('Tuple', (1, 2), [('Name', (1, 2), 'a', ('Load',)), ('Name', (1, 4), 'b', ('Load',))], ('Load',)), [('comprehension', ('Tuple', (1, 12), [('Name', 
(1, 12), 'a', ('Store',)), ('Name', (1, 14), 'b', ('Store',))], ('Store',)), ('Name', (1, 20), 'c', ('Load',)), [])]))]), +('Module', [('Expr', (1, 0), ('GeneratorExp', (2, 4), ('Tuple', (3, 4), [('Name', (3, 4), 'Aa', ('Load',)), ('Name', (5, 7), 'Bb', ('Load',))], ('Load',)), [('comprehension', ('Tuple', (8, 4), [('Name', (8, 4), 'Aa', ('Store',)), ('Name', (10, 4), 'Bb', ('Store',))], ('Store',)), ('Name', (10, 10), 'Cc', ('Load',)), [])]))]), +('Module', [('Expr', (1, 0), ('DictComp', (1, 1), ('Name', (1, 1), 'a', ('Load',)), ('Name', (1, 5), 'b', ('Load',)), [('comprehension', ('Name', (1, 11), 'w', ('Store',)), ('Name', (1, 16), 'x', ('Load',)), []), ('comprehension', ('Name', (1, 22), 'm', ('Store',)), ('Name', (1, 27), 'p', ('Load',)), [('Name', (1, 32), 'g', ('Load',))])]))]), +('Module', [('Expr', (1, 0), ('DictComp', (1, 1), ('Name', (1, 1), 'a', ('Load',)), ('Name', (1, 5), 'b', ('Load',)), [('comprehension', ('Tuple', (1, 11), [('Name', (1, 11), 'v', ('Store',)), ('Name', (1, 13), 'w', ('Store',))], ('Store',)), ('Name', (1, 18), 'x', ('Load',)), [])]))]), +('Module', [('Expr', (1, 0), ('SetComp', (1, 1), ('Name', (1, 1), 'r', ('Load',)), [('comprehension', ('Name', (1, 7), 'l', ('Store',)), ('Name', (1, 12), 'x', ('Load',)), [('Name', (1, 17), 'g', ('Load',))])]))]), +('Module', [('Expr', (1, 0), ('SetComp', (1, 1), ('Name', (1, 1), 'r', ('Load',)), [('comprehension', ('Tuple', (1, 7), [('Name', (1, 7), 'l', ('Store',)), ('Name', (1, 9), 'm', ('Store',))], ('Store',)), ('Name', (1, 14), 'x', ('Load',)), [])]))]), ] single_results = [ ('Interactive', [('Expr', (1, 0), ('BinOp', (1, 0), ('Num', (1, 0), 1), ('Add',), ('Num', (1, 2), 2)))]), ] eval_results = [ +('Expression', ('Name', (1, 0), 'None', ('Load',))), ('Expression', ('BoolOp', (1, 0), ('And',), [('Name', (1, 0), 'a', ('Load',)), ('Name', (1, 6), 'b', ('Load',))])), ('Expression', ('BinOp', (1, 0), ('Name', (1, 0), 'a', ('Load',)), ('Add',), ('Name', (1, 4), 'b', ('Load',)))), ('Expression', 
('UnaryOp', (1, 0), ('Not',), ('Name', (1, 4), 'v', ('Load',)))), ('Expression', ('Lambda', (1, 0), ('arguments', [], None, None, []), ('Name', (1, 7), 'None', ('Load',)))), ('Expression', ('Dict', (1, 0), [('Num', (1, 2), 1)], [('Num', (1, 4), 2)])), +('Expression', ('Dict', (1, 0), [], [])), +('Expression', ('Set', (1, 0), [('Name', (1, 1), 'None', ('Load',))])), +('Expression', ('Dict', (1, 0), [('Num', (2, 6), 1)], [('Num', (4, 10), 2)])), ('Expression', ('ListComp', (1, 1), ('Name', (1, 1), 'a', ('Load',)), [('comprehension', ('Name', (1, 7), 'b', ('Store',)), ('Name', (1, 12), 'c', ('Load',)), [('Name', (1, 17), 'd', ('Load',))])])), ('Expression', ('GeneratorExp', (1, 1), ('Name', (1, 1), 'a', ('Load',)), [('comprehension', ('Name', (1, 7), 'b', ('Store',)), ('Name', (1, 12), 'c', ('Load',)), [('Name', (1, 17), 'd', ('Load',))])])), ('Expression', ('Compare', (1, 0), ('Num', (1, 0), 1), [('Lt',), ('Lt',)], [('Num', (1, 4), 2), ('Num', (1, 8), 3)])), @@ -376,7 +595,10 @@ ('Expression', ('Subscript', (1, 0), ('Name', (1, 0), 'a', ('Load',)), ('Slice', ('Name', (1, 2), 'b', ('Load',)), ('Name', (1, 4), 'c', ('Load',)), None), ('Load',))), ('Expression', ('Name', (1, 0), 'v', ('Load',))), ('Expression', ('List', (1, 0), [('Num', (1, 1), 1), ('Num', (1, 3), 2), ('Num', (1, 5), 3)], ('Load',))), +('Expression', ('List', (1, 0), [], ('Load',))), ('Expression', ('Tuple', (1, 0), [('Num', (1, 0), 1), ('Num', (1, 2), 2), ('Num', (1, 4), 3)], ('Load',))), +('Expression', ('Tuple', (1, 1), [('Num', (1, 1), 1), ('Num', (1, 3), 2), ('Num', (1, 5), 3)], ('Load',))), +('Expression', ('Tuple', (1, 0), [], ('Load',))), ('Expression', ('Call', (1, 0), ('Attribute', (1, 0), ('Attribute', (1, 0), ('Attribute', (1, 0), ('Name', (1, 0), 'a', ('Load',)), 'b', ('Load',)), 'c', ('Load',)), 'd', ('Load',)), [('Subscript', (1, 8), ('Attribute', (1, 8), ('Name', (1, 8), 'a', ('Load',)), 'b', ('Load',)), ('Slice', ('Num', (1, 12), 1), ('Num', (1, 14), 2), None), ('Load',))], [], None, 
None)), ] main() diff --git a/lib-python/2.7/test/test_builtin.py b/lib-python/2.7/test/test_builtin.py --- a/lib-python/2.7/test/test_builtin.py +++ b/lib-python/2.7/test/test_builtin.py @@ -3,7 +3,8 @@ import platform import unittest from test.test_support import fcmp, have_unicode, TESTFN, unlink, \ - run_unittest, check_py3k_warnings + run_unittest, check_py3k_warnings, \ + check_impl_detail import warnings from operator import neg @@ -247,12 +248,14 @@ self.assertRaises(TypeError, compile) self.assertRaises(ValueError, compile, 'print 42\n', '', 'badmode') self.assertRaises(ValueError, compile, 'print 42\n', '', 'single', 0xff) - self.assertRaises(TypeError, compile, chr(0), 'f', 'exec') + if check_impl_detail(cpython=True): + self.assertRaises(TypeError, compile, chr(0), 'f', 'exec') self.assertRaises(TypeError, compile, 'pass', '?', 'exec', mode='eval', source='0', filename='tmp') if have_unicode: compile(unicode('print u"\xc3\xa5"\n', 'utf8'), '', 'exec') - self.assertRaises(TypeError, compile, unichr(0), 'f', 'exec') + if check_impl_detail(cpython=True): + self.assertRaises(TypeError, compile, unichr(0), 'f', 'exec') self.assertRaises(ValueError, compile, unicode('a = 1'), 'f', 'bad') @@ -395,12 +398,16 @@ self.assertEqual(eval('dir()', g, m), list('xyz')) self.assertEqual(eval('globals()', g, m), g) self.assertEqual(eval('locals()', g, m), m) - self.assertRaises(TypeError, eval, 'a', m) + # on top of CPython, the first dictionary (the globals) has to + # be a real dict. This is not the case on top of PyPy. 
+ if check_impl_detail(pypy=False): + self.assertRaises(TypeError, eval, 'a', m) + class A: "Non-mapping" pass m = A() - self.assertRaises(TypeError, eval, 'a', g, m) + self.assertRaises((TypeError, AttributeError), eval, 'a', g, m) # Verify that dict subclasses work as well class D(dict): @@ -491,9 +498,10 @@ execfile(TESTFN, globals, locals) self.assertEqual(locals['z'], 2) + self.assertRaises(TypeError, execfile, TESTFN, {}, ()) unlink(TESTFN) self.assertRaises(TypeError, execfile) - self.assertRaises(TypeError, execfile, TESTFN, {}, ()) + self.assertRaises((TypeError, IOError), execfile, TESTFN, {}, ()) import os self.assertRaises(IOError, execfile, os.curdir) self.assertRaises(IOError, execfile, "I_dont_exist") @@ -1108,7 +1116,8 @@ def __cmp__(self, other): raise RuntimeError __hash__ = None # Invalid cmp makes this unhashable - self.assertRaises(RuntimeError, range, a, a + 1, badzero(1)) + if check_impl_detail(cpython=True): + self.assertRaises(RuntimeError, range, a, a + 1, badzero(1)) # Reject floats. self.assertRaises(TypeError, range, 1., 1., 1.) diff --git a/lib-python/2.7/test/test_bytes.py b/lib-python/2.7/test/test_bytes.py --- a/lib-python/2.7/test/test_bytes.py +++ b/lib-python/2.7/test/test_bytes.py @@ -694,6 +694,7 @@ self.assertEqual(b, b1) self.assertTrue(b is b1) + @test.test_support.impl_detail("undocumented bytes.__alloc__()") def test_alloc(self): b = bytearray() alloc = b.__alloc__() @@ -821,6 +822,8 @@ self.assertEqual(b, b"") self.assertEqual(c, b"") + @test.test_support.impl_detail( + "resizing semantics of CPython rely on refcounting") def test_resize_forbidden(self): # #4509: can't resize a bytearray when there are buffer exports, even # if it wouldn't reallocate the underlying buffer. 
@@ -853,6 +856,26 @@ self.assertRaises(BufferError, delslice) self.assertEqual(b, orig) + @test.test_support.impl_detail("resizing semantics", cpython=False) + def test_resize_forbidden_non_cpython(self): + # on non-CPython implementations, we cannot prevent changes to + # bytearrays just because there are buffers around. Instead, + # we get (on PyPy) a buffer that follows the changes and resizes. + b = bytearray(range(10)) + for v in [memoryview(b), buffer(b)]: + b[5] = 99 + self.assertIn(v[5], (99, chr(99))) + b[5] = 100 + b += b + b += b + b += b + self.assertEquals(len(v), 80) + self.assertIn(v[5], (100, chr(100))) + self.assertIn(v[79], (9, chr(9))) + del b[10:] + self.assertRaises(IndexError, lambda: v[10]) + self.assertEquals(len(v), 10) + def test_empty_bytearray(self): # Issue #7561: operations on empty bytearrays could crash in many # situations, due to a fragile implementation of the diff --git a/lib-python/2.7/test/test_bz2.py b/lib-python/2.7/test/test_bz2.py --- a/lib-python/2.7/test/test_bz2.py +++ b/lib-python/2.7/test/test_bz2.py @@ -50,6 +50,7 @@ self.filename = TESTFN def tearDown(self): + test_support.gc_collect() if os.path.isfile(self.filename): os.unlink(self.filename) @@ -246,6 +247,8 @@ for i in xrange(10000): o = BZ2File(self.filename) del o + if i % 100 == 0: + test_support.gc_collect() def testOpenNonexistent(self): # "Test opening a nonexistent file" @@ -310,6 +313,7 @@ for t in threads: t.join() + @test_support.impl_detail() def testMixedIterationReads(self): # Issue #8397: mixed iteration and reads should be forbidden. 
with bz2.BZ2File(self.filename, 'wb') as f: diff --git a/lib-python/2.7/test/test_cmd_line_script.py b/lib-python/2.7/test/test_cmd_line_script.py --- a/lib-python/2.7/test/test_cmd_line_script.py +++ b/lib-python/2.7/test/test_cmd_line_script.py @@ -112,6 +112,8 @@ self._check_script(script_dir, script_name, script_dir, '') def test_directory_compiled(self): + if test.test_support.check_impl_detail(pypy=True): + raise unittest.SkipTest("pypy won't load lone .pyc files") with temp_dir() as script_dir: script_name = _make_test_script(script_dir, '__main__') compiled_name = compile_script(script_name) @@ -173,6 +175,8 @@ script_name, 'test_pkg') def test_package_compiled(self): + if test.test_support.check_impl_detail(pypy=True): + raise unittest.SkipTest("pypy won't load lone .pyc files") with temp_dir() as script_dir: pkg_dir = os.path.join(script_dir, 'test_pkg') make_pkg(pkg_dir) diff --git a/lib-python/2.7/test/test_code.py b/lib-python/2.7/test/test_code.py --- a/lib-python/2.7/test/test_code.py +++ b/lib-python/2.7/test/test_code.py @@ -82,7 +82,7 @@ import unittest import weakref -import _testcapi +from test import test_support def consts(t): @@ -104,7 +104,9 @@ class CodeTest(unittest.TestCase): + @test_support.impl_detail("test for PyCode_NewEmpty") def test_newempty(self): + import _testcapi co = _testcapi.code_newempty("filename", "funcname", 15) self.assertEqual(co.co_filename, "filename") self.assertEqual(co.co_name, "funcname") @@ -132,6 +134,7 @@ coderef = weakref.ref(f.__code__, callback) self.assertTrue(bool(coderef())) del f + test_support.gc_collect() self.assertFalse(bool(coderef())) self.assertTrue(self.called) diff --git a/lib-python/2.7/test/test_codeop.py b/lib-python/2.7/test/test_codeop.py --- a/lib-python/2.7/test/test_codeop.py +++ b/lib-python/2.7/test/test_codeop.py @@ -3,7 +3,7 @@ Nick Mathewson """ import unittest -from test.test_support import run_unittest, is_jython +from test.test_support import run_unittest, is_jython, 
check_impl_detail
 from codeop import compile_command, PyCF_DONT_IMPLY_DEDENT
@@ -270,7 +270,9 @@
         ai("a = 'a\\\n")
 
         ai("a = 1","eval")
-        ai("a = (","eval")
+        if check_impl_detail():   # on PyPy it asks for more data, which is not
+            ai("a = (","eval")    # completely correct but hard to fix and
+                                  # really a detail (in my opinion)
         ai("]","eval")
         ai("())","eval")
         ai("[}","eval")
diff --git a/lib-python/2.7/test/test_coercion.py b/lib-python/2.7/test/test_coercion.py
--- a/lib-python/2.7/test/test_coercion.py
+++ b/lib-python/2.7/test/test_coercion.py
@@ -1,6 +1,7 @@
 import copy
 import unittest
-from test.test_support import run_unittest, TestFailed, check_warnings
+from test.test_support import (
+    run_unittest, TestFailed, check_warnings, check_impl_detail)
 
 # Fake a number that implements numeric methods through __coerce__
@@ -306,12 +307,18 @@
         self.assertNotEqual(cmp(u'fish', evil_coercer), 0)
         self.assertNotEqual(cmp(slice(1), evil_coercer), 0)
         # ...but that this still works
-        class WackyComparer(object):
-            def __cmp__(slf, other):
-                self.assertTrue(other == 42, 'expected evil_coercer, got %r' % other)
-                return 0
-            __hash__ = None # Invalid cmp makes this unhashable
-        self.assertEqual(cmp(WackyComparer(), evil_coercer), 0)
+        if check_impl_detail():
+            # NB. I (arigo) would consider the following as implementation-
+            # specific. For example, in CPython, if we replace 42 with 42.0
+            # both below and in CoerceTo() above, then the test fails. This
+            # hints that the behavior is really dependent on some obscure
+            # internal details.
+ class WackyComparer(object): + def __cmp__(slf, other): + self.assertTrue(other == 42, 'expected evil_coercer, got %r' % other) + return 0 + __hash__ = None # Invalid cmp makes this unhashable + self.assertEqual(cmp(WackyComparer(), evil_coercer), 0) # ...and classic classes too, since that code path is a little different class ClassicWackyComparer: def __cmp__(slf, other): diff --git a/lib-python/2.7/test/test_compile.py b/lib-python/2.7/test/test_compile.py --- a/lib-python/2.7/test/test_compile.py +++ b/lib-python/2.7/test/test_compile.py @@ -3,6 +3,7 @@ import _ast from test import test_support import textwrap +from test.test_support import check_impl_detail class TestSpecifics(unittest.TestCase): @@ -90,12 +91,13 @@ self.assertEqual(m.results, ('z', g)) exec 'z = locals()' in g, m self.assertEqual(m.results, ('z', m)) - try: - exec 'z = b' in m - except TypeError: - pass - else: - self.fail('Did not validate globals as a real dict') + if check_impl_detail(): + try: + exec 'z = b' in m + except TypeError: + pass + else: + self.fail('Did not validate globals as a real dict') class A: "Non-mapping" diff --git a/lib-python/2.7/test/test_copy.py b/lib-python/2.7/test/test_copy.py --- a/lib-python/2.7/test/test_copy.py +++ b/lib-python/2.7/test/test_copy.py @@ -637,6 +637,7 @@ self.assertEqual(v[c], d) self.assertEqual(len(v), 2) del c, d + test_support.gc_collect() self.assertEqual(len(v), 1) x, y = C(), C() # The underlying containers are decoupled @@ -666,6 +667,7 @@ self.assertEqual(v[a].i, b.i) self.assertEqual(v[c].i, d.i) del c + test_support.gc_collect() self.assertEqual(len(v), 1) def test_deepcopy_weakvaluedict(self): @@ -689,6 +691,7 @@ self.assertTrue(t is d) del x, y, z, t del d + test_support.gc_collect() self.assertEqual(len(v), 1) def test_deepcopy_bound_method(self): diff --git a/lib-python/2.7/test/test_cpickle.py b/lib-python/2.7/test/test_cpickle.py --- a/lib-python/2.7/test/test_cpickle.py +++ b/lib-python/2.7/test/test_cpickle.py @@ -61,27 
+61,27 @@ error = cPickle.BadPickleGet def test_recursive_list(self): - self.assertRaises(ValueError, + self.assertRaises((ValueError, RuntimeError), AbstractPickleTests.test_recursive_list, self) def test_recursive_tuple(self): - self.assertRaises(ValueError, + self.assertRaises((ValueError, RuntimeError), AbstractPickleTests.test_recursive_tuple, self) def test_recursive_inst(self): - self.assertRaises(ValueError, + self.assertRaises((ValueError, RuntimeError), AbstractPickleTests.test_recursive_inst, self) def test_recursive_dict(self): - self.assertRaises(ValueError, + self.assertRaises((ValueError, RuntimeError), AbstractPickleTests.test_recursive_dict, self) def test_recursive_multi(self): - self.assertRaises(ValueError, + self.assertRaises((ValueError, RuntimeError), AbstractPickleTests.test_recursive_multi, self) diff --git a/lib-python/2.7/test/test_csv.py b/lib-python/2.7/test/test_csv.py --- a/lib-python/2.7/test/test_csv.py +++ b/lib-python/2.7/test/test_csv.py @@ -54,8 +54,10 @@ self.assertEqual(obj.dialect.skipinitialspace, False) self.assertEqual(obj.dialect.strict, False) # Try deleting or changing attributes (they are read-only) - self.assertRaises(TypeError, delattr, obj.dialect, 'delimiter') - self.assertRaises(TypeError, setattr, obj.dialect, 'delimiter', ':') + self.assertRaises((TypeError, AttributeError), delattr, obj.dialect, + 'delimiter') + self.assertRaises((TypeError, AttributeError), setattr, obj.dialect, + 'delimiter', ':') self.assertRaises(AttributeError, delattr, obj.dialect, 'quoting') self.assertRaises(AttributeError, setattr, obj.dialect, 'quoting', None) diff --git a/lib-python/2.7/test/test_deque.py b/lib-python/2.7/test/test_deque.py --- a/lib-python/2.7/test/test_deque.py +++ b/lib-python/2.7/test/test_deque.py @@ -109,7 +109,7 @@ self.assertEqual(deque('abc', maxlen=4).maxlen, 4) self.assertEqual(deque('abc', maxlen=2).maxlen, 2) self.assertEqual(deque('abc', maxlen=0).maxlen, 0) - with self.assertRaises(AttributeError): + 
with self.assertRaises((AttributeError, TypeError)): d = deque('abc') d.maxlen = 10 @@ -352,7 +352,10 @@ for match in (True, False): d = deque(['ab']) d.extend([MutateCmp(d, match), 'c']) - self.assertRaises(IndexError, d.remove, 'c') + # On CPython we get IndexError: deque mutated during remove(). + # Why is it an IndexError during remove() only??? + # On PyPy it is a RuntimeError, as in the other operations. + self.assertRaises((IndexError, RuntimeError), d.remove, 'c') self.assertEqual(d, deque()) def test_repr(self): @@ -514,7 +517,7 @@ container = reversed(deque([obj, 1])) obj.x = iter(container) del obj, container - gc.collect() + test_support.gc_collect() self.assertTrue(ref() is None, "Cycle was not collected") class TestVariousIteratorArgs(unittest.TestCase): @@ -630,6 +633,7 @@ p = weakref.proxy(d) self.assertEqual(str(p), str(d)) d = None + test_support.gc_collect() self.assertRaises(ReferenceError, str, p) def test_strange_subclass(self): diff --git a/lib-python/2.7/test/test_descr.py b/lib-python/2.7/test/test_descr.py --- a/lib-python/2.7/test/test_descr.py +++ b/lib-python/2.7/test/test_descr.py @@ -2,6 +2,7 @@ import sys import types import unittest +import popen2 # trigger early the warning from popen2.py from copy import deepcopy from test import test_support @@ -1128,7 +1129,7 @@ # Test lookup leaks [SF bug 572567] import gc - if hasattr(gc, 'get_objects'): + if test_support.check_impl_detail(): class G(object): def __cmp__(self, other): return 0 @@ -1741,6 +1742,10 @@ raise MyException for name, runner, meth_impl, ok, env in specials: + if name == '__length_hint__' or name == '__sizeof__': + if not test_support.check_impl_detail(): + continue + class X(Checker): pass for attr, obj in env.iteritems(): @@ -1980,7 +1985,9 @@ except TypeError, msg: self.assertTrue(str(msg).find("weak reference") >= 0) else: - self.fail("weakref.ref(no) should be illegal") + if test_support.check_impl_detail(pypy=False): + self.fail("weakref.ref(no) should be 
illegal")
+            #else: pypy supports taking weakrefs to some more objects
         class Weak(object):
             __slots__ = ['foo', '__weakref__']
         yes = Weak()
@@ -3092,7 +3099,16 @@
         class R(J):
             __slots__ = ["__dict__", "__weakref__"]
 
-        for cls, cls2 in ((G, H), (G, I), (I, H), (Q, R), (R, Q)):
+        if test_support.check_impl_detail(pypy=False):
+            lst = ((G, H), (G, I), (I, H), (Q, R), (R, Q))
+        else:
+            # Not supported in pypy: changing the __class__ of an object
+            # to another __class__ that just happens to have the same slots.
+            # If needed, we can add the feature, but what we'll likely do
+            # then is to allow mostly any __class__ assignment, even if the
+            # classes have different __slots__, because it's easier.
+            lst = ((Q, R), (R, Q))
+        for cls, cls2 in lst:
             x = cls()
             x.a = 1
             x.__class__ = cls2
@@ -3175,7 +3191,8 @@
             except TypeError:
                 pass
             else:
-                self.fail("%r's __dict__ can be modified" % cls)
+                if test_support.check_impl_detail(pypy=False):
+                    self.fail("%r's __dict__ can be modified" % cls)
 
         # Modules also disallow __dict__ assignment
         class Module1(types.ModuleType, Base):
@@ -4383,13 +4400,10 @@
         self.assertTrue(l.__add__ != [5].__add__)
         self.assertTrue(l.__add__ != l.__mul__)
         self.assertTrue(l.__add__.__name__ == '__add__')
-        if hasattr(l.__add__, '__self__'):
-            # CPython
-            self.assertTrue(l.__add__.__self__ is l)
+        self.assertTrue(l.__add__.__self__ is l)
+        if hasattr(l.__add__, '__objclass__'):   # CPython
             self.assertTrue(l.__add__.__objclass__ is list)
-        else:
-            # Python implementations where [].__add__ is a normal bound method
-            self.assertTrue(l.__add__.im_self is l)
+        else:   # PyPy
             self.assertTrue(l.__add__.im_class is list)
         self.assertEqual(l.__add__.__doc__, list.__add__.__doc__)
         try:
@@ -4578,8 +4592,12 @@
             str.split(fake_str)
 
         # call a slot wrapper descriptor
-        with self.assertRaises(TypeError):
-            str.__add__(fake_str, "abc")
+        try:
+            r = str.__add__(fake_str, "abc")
+        except TypeError:
+            pass
+        else:
+            self.assertEqual(r, NotImplemented)
 
 class
DictProxyTests(unittest.TestCase): diff --git a/lib-python/2.7/test/test_descrtut.py b/lib-python/2.7/test/test_descrtut.py --- a/lib-python/2.7/test/test_descrtut.py +++ b/lib-python/2.7/test/test_descrtut.py @@ -172,46 +172,12 @@ AttributeError: 'list' object has no attribute '__methods__' >>> -Instead, you can get the same information from the list type: +Instead, you can get the same information from the list type +(the following example filters out the numerous method names +starting with '_'): - >>> pprint.pprint(dir(list)) # like list.__dict__.keys(), but sorted - ['__add__', - '__class__', - '__contains__', - '__delattr__', - '__delitem__', - '__delslice__', - '__doc__', - '__eq__', - '__format__', - '__ge__', - '__getattribute__', - '__getitem__', - '__getslice__', - '__gt__', - '__hash__', - '__iadd__', - '__imul__', - '__init__', - '__iter__', - '__le__', - '__len__', - '__lt__', - '__mul__', - '__ne__', - '__new__', - '__reduce__', - '__reduce_ex__', - '__repr__', - '__reversed__', - '__rmul__', - '__setattr__', - '__setitem__', - '__setslice__', - '__sizeof__', - '__str__', - '__subclasshook__', - 'append', + >>> pprint.pprint([name for name in dir(list) if not name.startswith('_')]) + ['append', 'count', 'extend', 'index', diff --git a/lib-python/2.7/test/test_dict.py b/lib-python/2.7/test/test_dict.py --- a/lib-python/2.7/test/test_dict.py +++ b/lib-python/2.7/test/test_dict.py @@ -319,7 +319,8 @@ self.assertEqual(va, int(ka)) kb, vb = tb = b.popitem() self.assertEqual(vb, int(kb)) - self.assertFalse(copymode < 0 and ta != tb) + if test_support.check_impl_detail(): + self.assertFalse(copymode < 0 and ta != tb) self.assertFalse(a) self.assertFalse(b) diff --git a/lib-python/2.7/test/test_dis.py b/lib-python/2.7/test/test_dis.py --- a/lib-python/2.7/test/test_dis.py +++ b/lib-python/2.7/test/test_dis.py @@ -56,8 +56,8 @@ %-4d 0 LOAD_CONST 1 (0) 3 POP_JUMP_IF_TRUE 38 6 LOAD_GLOBAL 0 (AssertionError) - 9 BUILD_LIST 0 - 12 LOAD_FAST 0 (x) + 9 LOAD_FAST 0 
(x) + 12 BUILD_LIST_FROM_ARG 0 15 GET_ITER >> 16 FOR_ITER 12 (to 31) 19 STORE_FAST 1 (s) diff --git a/lib-python/2.7/test/test_doctest.py b/lib-python/2.7/test/test_doctest.py --- a/lib-python/2.7/test/test_doctest.py +++ b/lib-python/2.7/test/test_doctest.py @@ -782,7 +782,7 @@ ... >>> x = 12 ... >>> print x//0 ... Traceback (most recent call last): - ... ZeroDivisionError: integer division or modulo by zero + ... ZeroDivisionError: integer division by zero ... ''' >>> test = doctest.DocTestFinder().find(f)[0] >>> doctest.DocTestRunner(verbose=False).run(test) @@ -799,7 +799,7 @@ ... >>> print 'pre-exception output', x//0 ... pre-exception output ... Traceback (most recent call last): - ... ZeroDivisionError: integer division or modulo by zero + ... ZeroDivisionError: integer division by zero ... ''' >>> test = doctest.DocTestFinder().find(f)[0] >>> doctest.DocTestRunner(verbose=False).run(test) @@ -810,7 +810,7 @@ print 'pre-exception output', x//0 Exception raised: ... - ZeroDivisionError: integer division or modulo by zero + ZeroDivisionError: integer division by zero TestResults(failed=1, attempted=2) Exception messages may contain newlines: @@ -978,7 +978,7 @@ Exception raised: Traceback (most recent call last): ... - ZeroDivisionError: integer division or modulo by zero + ZeroDivisionError: integer division by zero TestResults(failed=1, attempted=1) """ def displayhook(): r""" @@ -1924,7 +1924,7 @@ > (1)() -> calls_set_trace() (Pdb) print foo - *** NameError: name 'foo' is not defined + *** NameError: global name 'foo' is not defined (Pdb) continue TestResults(failed=0, attempted=2) """ @@ -2229,7 +2229,7 @@ favorite_color Exception raised: ... - NameError: name 'favorite_color' is not defined + NameError: global name 'favorite_color' is not defined @@ -2289,7 +2289,7 @@ favorite_color Exception raised: ... 
- NameError: name 'favorite_color' is not defined + NameError: global name 'favorite_color' is not defined ********************************************************************** 1 items had failures: 1 of 2 in test_doctest.txt @@ -2382,7 +2382,7 @@ favorite_color Exception raised: ... - NameError: name 'favorite_color' is not defined + NameError: global name 'favorite_color' is not defined TestResults(failed=1, attempted=2) >>> doctest.master = None # Reset master. diff --git a/lib-python/2.7/test/test_dumbdbm.py b/lib-python/2.7/test/test_dumbdbm.py --- a/lib-python/2.7/test/test_dumbdbm.py +++ b/lib-python/2.7/test/test_dumbdbm.py @@ -107,9 +107,11 @@ f.close() # Mangle the file by adding \r before each newline - data = open(_fname + '.dir').read() + with open(_fname + '.dir') as f: + data = f.read() data = data.replace('\n', '\r\n') - open(_fname + '.dir', 'wb').write(data) + with open(_fname + '.dir', 'wb') as f: + f.write(data) f = dumbdbm.open(_fname) self.assertEqual(f['1'], 'hello') diff --git a/lib-python/2.7/test/test_extcall.py b/lib-python/2.7/test/test_extcall.py --- a/lib-python/2.7/test/test_extcall.py +++ b/lib-python/2.7/test/test_extcall.py @@ -90,19 +90,19 @@ >>> class Nothing: pass ... - >>> g(*Nothing()) + >>> g(*Nothing()) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: g() argument after * must be a sequence, not instance + TypeError: ...argument after * must be a sequence, not instance >>> class Nothing: ... def __len__(self): return 5 ... - >>> g(*Nothing()) + >>> g(*Nothing()) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: g() argument after * must be a sequence, not instance + TypeError: ...argument after * must be a sequence, not instance >>> class Nothing(): ... def __len__(self): return 5 @@ -154,52 +154,50 @@ ... TypeError: g() got multiple values for keyword argument 'x' - >>> f(**{1:2}) + >>> f(**{1:2}) #doctest: +ELLIPSIS Traceback (most recent call last): ... 
- TypeError: f() keywords must be strings + TypeError: ...keywords must be strings >>> h(**{'e': 2}) Traceback (most recent call last): ... TypeError: h() got an unexpected keyword argument 'e' - >>> h(*h) + >>> h(*h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: h() argument after * must be a sequence, not function + TypeError: ...argument after * must be a sequence, not function - >>> dir(*h) + >>> dir(*h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: dir() argument after * must be a sequence, not function + TypeError: ...argument after * must be a sequence, not function - >>> None(*h) + >>> None(*h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: NoneType object argument after * must be a sequence, \ -not function + TypeError: ...argument after * must be a sequence, not function - >>> h(**h) + >>> h(**h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: h() argument after ** must be a mapping, not function + TypeError: ...argument after ** must be a mapping, not function - >>> dir(**h) + >>> dir(**h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: dir() argument after ** must be a mapping, not function + TypeError: ...argument after ** must be a mapping, not function - >>> None(**h) + >>> None(**h) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: NoneType object argument after ** must be a mapping, \ -not function + TypeError: ...argument after ** must be a mapping, not function - >>> dir(b=1, **{'b': 1}) + >>> dir(b=1, **{'b': 1}) #doctest: +ELLIPSIS Traceback (most recent call last): ... - TypeError: dir() got multiple values for keyword argument 'b' + TypeError: ...got multiple values for keyword argument 'b' Another helper function @@ -247,10 +245,10 @@ ... False True - >>> id(1, **{'foo': 1}) + >>> id(1, **{'foo': 1}) #doctest: +ELLIPSIS Traceback (most recent call last): ... 
- TypeError: id() takes no keyword arguments + TypeError: id() ... keyword argument... A corner case of keyword dictionary items being deleted during the function call setup. See . diff --git a/lib-python/2.7/test/test_fcntl.py b/lib-python/2.7/test/test_fcntl.py --- a/lib-python/2.7/test/test_fcntl.py +++ b/lib-python/2.7/test/test_fcntl.py @@ -32,7 +32,7 @@ 'freebsd2', 'freebsd3', 'freebsd4', 'freebsd5', 'freebsd6', 'freebsd7', 'freebsd8', 'bsdos2', 'bsdos3', 'bsdos4', - 'openbsd', 'openbsd2', 'openbsd3', 'openbsd4'): + 'openbsd', 'openbsd2', 'openbsd3', 'openbsd4', 'openbsd5'): if struct.calcsize('l') == 8: off_t = 'l' pid_t = 'i' diff --git a/lib-python/2.7/test/test_file.py b/lib-python/2.7/test/test_file.py --- a/lib-python/2.7/test/test_file.py +++ b/lib-python/2.7/test/test_file.py @@ -12,7 +12,7 @@ import io import _pyio as pyio -from test.test_support import TESTFN, run_unittest +from test.test_support import TESTFN, run_unittest, gc_collect from UserList import UserList class AutoFileTests(unittest.TestCase): @@ -33,6 +33,7 @@ self.assertEqual(self.f.tell(), p.tell()) self.f.close() self.f = None + gc_collect() self.assertRaises(ReferenceError, getattr, p, 'tell') def testAttributes(self): @@ -157,7 +158,12 @@ def testStdin(self): # This causes the interpreter to exit on OSF1 v5.1. if sys.platform != 'osf1V5': - self.assertRaises((IOError, ValueError), sys.stdin.seek, -1) + if sys.stdin.isatty(): + self.assertRaises((IOError, ValueError), sys.stdin.seek, -1) + else: + print(( + ' Skipping sys.stdin.seek(-1): stdin is not a tty.' + ' Test manually.'), file=sys.__stdout__) else: print(( ' Skipping sys.stdin.seek(-1), it may crash the interpreter.' 
diff --git a/lib-python/2.7/test/test_file2k.py b/lib-python/2.7/test/test_file2k.py
--- a/lib-python/2.7/test/test_file2k.py
+++ b/lib-python/2.7/test/test_file2k.py
@@ -11,7 +11,7 @@
     threading = None
 
 from test import test_support
-from test.test_support import TESTFN, run_unittest
+from test.test_support import TESTFN, run_unittest, gc_collect
 from UserList import UserList
 
 class AutoFileTests(unittest.TestCase):
@@ -32,6 +32,7 @@
         self.assertEqual(self.f.tell(), p.tell())
         self.f.close()
         self.f = None
+        gc_collect()
         self.assertRaises(ReferenceError, getattr, p, 'tell')
 
     def testAttributes(self):
@@ -116,8 +117,12 @@
         for methodname in methods:
             method = getattr(self.f, methodname)
+            args = {'readinto': (bytearray(''),),
+                    'seek': (0,),
+                    'write': ('',),
+                    }.get(methodname, ())
             # should raise on closed file
-            self.assertRaises(ValueError, method)
+            self.assertRaises(ValueError, method, *args)
         with test_support.check_py3k_warnings():
             for methodname in deprecated_methods:
                 method = getattr(self.f, methodname)
@@ -216,7 +221,12 @@
     def testStdin(self):
         # This causes the interpreter to exit on OSF1 v5.1.
         if sys.platform != 'osf1V5':
-            self.assertRaises(IOError, sys.stdin.seek, -1)
+            if sys.stdin.isatty():
+                self.assertRaises(IOError, sys.stdin.seek, -1)
+            else:
+                print >>sys.__stdout__, (
+                    ' Skipping sys.stdin.seek(-1): stdin is not a tty.'
+                    ' Test manually.')
         else:
             print >>sys.__stdout__, (
                 ' Skipping sys.stdin.seek(-1), it may crash the interpreter.'
@@ -336,8 +346,9 @@
             except ValueError:
                 pass
             else:
-                self.fail("%s%r after next() didn't raise ValueError" %
-                               (methodname, args))
+                if test_support.check_impl_detail():
+                    self.fail("%s%r after next() didn't raise ValueError" %
+                                   (methodname, args))
         f.close()
 
         # Test to see if harmless (by accident) mixing of read* and
@@ -388,6 +399,7 @@
             if lines != testlines:
                 self.fail("readlines() after next() with empty buffer "
                           "failed.
Got %r, expected %r" % (line, testline)) + f.close() # Reading after iteration hit EOF shouldn't hurt either f = open(TESTFN) try: @@ -438,6 +450,9 @@ self.close_count = 0 self.close_success_count = 0 self.use_buffering = False + # to prevent running out of file descriptors on PyPy, + # we only keep the 50 most recent files open + self.all_files = [None] * 50 def tearDown(self): if self.f: @@ -453,9 +468,14 @@ def _create_file(self): if self.use_buffering: - self.f = open(self.filename, "w+", buffering=1024*16) + f = open(self.filename, "w+", buffering=1024*16) else: - self.f = open(self.filename, "w+") + f = open(self.filename, "w+") + self.f = f + self.all_files.append(f) + oldf = self.all_files.pop(0) + if oldf is not None: + oldf.close() def _close_file(self): with self._count_lock: @@ -496,7 +516,6 @@ def _test_close_open_io(self, io_func, nb_workers=5): def worker(): - self._create_file() funcs = itertools.cycle(( lambda: io_func(), lambda: self._close_and_reopen_file(), @@ -508,7 +527,11 @@ f() except (IOError, ValueError): pass + self._create_file() self._run_workers(worker, nb_workers) + # make sure that all files can be closed now + del self.all_files + gc_collect() if test_support.verbose: # Useful verbose statistics when tuning this test to take # less time to run but still ensuring that its still useful. 
diff --git a/lib-python/2.7/test/test_fileio.py b/lib-python/2.7/test/test_fileio.py --- a/lib-python/2.7/test/test_fileio.py +++ b/lib-python/2.7/test/test_fileio.py @@ -12,6 +12,7 @@ from test.test_support import TESTFN, check_warnings, run_unittest, make_bad_fd from test.test_support import py3k_bytes as bytes +from test.test_support import gc_collect from test.script_helper import run_python from _io import FileIO as _FileIO @@ -34,6 +35,7 @@ self.assertEqual(self.f.tell(), p.tell()) self.f.close() self.f = None + gc_collect() self.assertRaises(ReferenceError, getattr, p, 'tell') def testSeekTell(self): @@ -104,8 +106,8 @@ self.assertTrue(f.closed) def testMethods(self): - methods = ['fileno', 'isatty', 'read', 'readinto', - 'seek', 'tell', 'truncate', 'write', 'seekable', + methods = ['fileno', 'isatty', 'read', + 'tell', 'truncate', 'seekable', 'readable', 'writable'] if sys.platform.startswith('atheos'): methods.remove('truncate') @@ -117,6 +119,10 @@ method = getattr(self.f, methodname) # should raise on closed file self.assertRaises(ValueError, method) + # methods with one argument + self.assertRaises(ValueError, self.f.readinto, 0) + self.assertRaises(ValueError, self.f.write, 0) + self.assertRaises(ValueError, self.f.seek, 0) def testOpendir(self): # Issue 3703: opening a directory should fill the errno diff --git a/lib-python/2.7/test/test_format.py b/lib-python/2.7/test/test_format.py --- a/lib-python/2.7/test/test_format.py +++ b/lib-python/2.7/test/test_format.py @@ -242,7 +242,7 @@ try: testformat(formatstr, args) except exception, exc: - if str(exc) == excmsg: + if str(exc) == excmsg or not test_support.check_impl_detail(): if verbose: print "yes" else: @@ -272,13 +272,16 @@ test_exc(u'no format', u'1', TypeError, "not all arguments converted during string formatting") - class Foobar(long): - def __oct__(self): - # Returning a non-string should not blow up. 
- return self + 1 - - test_exc('%o', Foobar(), TypeError, - "expected string or Unicode object, long found") + if test_support.check_impl_detail(): + # __oct__() is called if Foobar inherits from 'long', but + # not, say, 'object' or 'int' or 'str'. This seems strange + # enough to consider it a complete implementation detail. + class Foobar(long): + def __oct__(self): + # Returning a non-string should not blow up. + return self + 1 + test_exc('%o', Foobar(), TypeError, + "expected string or Unicode object, long found") if maxsize == 2**31-1: # crashes 2.2.1 and earlier: diff --git a/lib-python/2.7/test/test_funcattrs.py b/lib-python/2.7/test/test_funcattrs.py --- a/lib-python/2.7/test/test_funcattrs.py +++ b/lib-python/2.7/test/test_funcattrs.py @@ -14,6 +14,8 @@ self.b = b def cannot_set_attr(self, obj, name, value, exceptions): + if not test_support.check_impl_detail(): + exceptions = (TypeError, AttributeError) # Helper method for other tests. try: setattr(obj, name, value) @@ -286,13 +288,13 @@ def test_delete_func_dict(self): try: del self.b.__dict__ - except TypeError: + except (AttributeError, TypeError): pass else: self.fail("deleting function dictionary should raise TypeError") try: del self.b.func_dict - except TypeError: + except (AttributeError, TypeError): pass else: self.fail("deleting function dictionary should raise TypeError") diff --git a/lib-python/2.7/test/test_functools.py b/lib-python/2.7/test/test_functools.py --- a/lib-python/2.7/test/test_functools.py +++ b/lib-python/2.7/test/test_functools.py @@ -45,6 +45,8 @@ # attributes should not be writable if not isinstance(self.thetype, type): return + if not test_support.check_impl_detail(): + return self.assertRaises(TypeError, setattr, p, 'func', map) self.assertRaises(TypeError, setattr, p, 'args', (1, 2)) self.assertRaises(TypeError, setattr, p, 'keywords', dict(a=1, b=2)) @@ -136,6 +138,7 @@ p = proxy(f) self.assertEqual(f.func, p.func) f = None + test_support.gc_collect() 
self.assertRaises(ReferenceError, getattr, p, 'func') def test_with_bound_and_unbound_methods(self): @@ -172,7 +175,7 @@ updated=functools.WRAPPER_UPDATES): # Check attributes were assigned for name in assigned: - self.assertTrue(getattr(wrapper, name) is getattr(wrapped, name)) + self.assertTrue(getattr(wrapper, name) == getattr(wrapped, name), name) # Check attributes were updated for name in updated: wrapper_attr = getattr(wrapper, name) diff --git a/lib-python/2.7/test/test_generators.py b/lib-python/2.7/test/test_generators.py --- a/lib-python/2.7/test/test_generators.py +++ b/lib-python/2.7/test/test_generators.py @@ -190,7 +190,7 @@ File "", line 1, in ? File "", line 2, in g File "", line 2, in f - ZeroDivisionError: integer division or modulo by zero + ZeroDivisionError: integer division by zero >>> k.next() # and the generator cannot be resumed Traceback (most recent call last): File "", line 1, in ? @@ -733,14 +733,16 @@ ... yield 1 Traceback (most recent call last): .. -SyntaxError: 'return' with argument inside generator (, line 3) + File "", line 3 +SyntaxError: 'return' with argument inside generator >>> def f(): ... yield 1 ... return 22 Traceback (most recent call last): .. -SyntaxError: 'return' with argument inside generator (, line 3) + File "", line 3 +SyntaxError: 'return' with argument inside generator "return None" is not the same as "return" in a generator: @@ -749,7 +751,8 @@ ... return None Traceback (most recent call last): .. -SyntaxError: 'return' with argument inside generator (, line 3) + File "", line 3 +SyntaxError: 'return' with argument inside generator These are fine: @@ -878,7 +881,9 @@ ... if 0: ... yield 2 # because it's a generator (line 10) Traceback (most recent call last): -SyntaxError: 'return' with argument inside generator (, line 10) + ... 
+ File "", line 10 +SyntaxError: 'return' with argument inside generator This one caused a crash (see SF bug 567538): @@ -1496,6 +1501,10 @@ """ coroutine_tests = """\ +A helper function to call gc.collect() without printing +>>> import gc +>>> def gc_collect(): gc.collect() + Sending a value into a started generator: >>> def f(): @@ -1570,13 +1579,14 @@ >>> def f(): return lambda x=(yield): 1 Traceback (most recent call last): ... -SyntaxError: 'return' with argument inside generator (, line 1) + File "", line 1 +SyntaxError: 'return' with argument inside generator >>> def f(): x = yield = y Traceback (most recent call last): ... File "", line 1 -SyntaxError: assignment to yield expression not possible +SyntaxError: can't assign to yield expression >>> def f(): (yield bar) = y Traceback (most recent call last): @@ -1665,7 +1675,7 @@ >>> f().throw("abc") # throw on just-opened generator Traceback (most recent call last): ... -TypeError: exceptions must be classes, or instances, not str +TypeError: exceptions must be old-style classes or derived from BaseException, not str Now let's try closing a generator: @@ -1697,7 +1707,7 @@ >>> g = f() >>> g.next() ->>> del g +>>> del g; gc_collect() exiting >>> class context(object): @@ -1708,7 +1718,7 @@ ... yield >>> g = f() >>> g.next() ->>> del g +>>> del g; gc_collect() exiting @@ -1721,7 +1731,7 @@ >>> g = f() >>> g.next() ->>> del g +>>> del g; gc_collect() finally @@ -1747,6 +1757,7 @@ >>> g = f() >>> g.next() >>> del g +>>> gc_collect() >>> sys.stderr.getvalue().startswith( ... "Exception RuntimeError: 'generator ignored GeneratorExit' in " ... ) @@ -1812,6 +1823,9 @@ references. We add it to the standard suite so the routine refleak-tests would trigger if it starts being uncleanable again. +>>> import gc +>>> def gc_collect(): gc.collect() + >>> import itertools >>> def leak(): ... class gen: @@ -1863,9 +1877,10 @@ ... ... l = Leaker() ... del l +... gc_collect() ... err = sys.stderr.getvalue().strip() ... 
err.startswith( -... "Exception RuntimeError: RuntimeError() in <" +... "Exception RuntimeError: RuntimeError() in " ... ) ... err.endswith("> ignored") ... len(err.splitlines()) diff --git a/lib-python/2.7/test/test_genexps.py b/lib-python/2.7/test/test_genexps.py --- a/lib-python/2.7/test/test_genexps.py +++ b/lib-python/2.7/test/test_genexps.py @@ -128,8 +128,9 @@ Verify re-use of tuples (a side benefit of using genexps over listcomps) + >>> from test.test_support import check_impl_detail >>> tupleids = map(id, ((i,i) for i in xrange(10))) - >>> int(max(tupleids) - min(tupleids)) + >>> int(max(tupleids) - min(tupleids)) if check_impl_detail() else 0 0 Verify that syntax error's are raised for genexps used as lvalues @@ -198,13 +199,13 @@ >>> g = (10 // i for i in (5, 0, 2)) >>> g.next() 2 - >>> g.next() + >>> g.next() # doctest: +ELLIPSIS Traceback (most recent call last): File "", line 1, in -toplevel- g.next() File "", line 1, in g = (10 // i for i in (5, 0, 2)) - ZeroDivisionError: integer division or modulo by zero + ZeroDivisionError: integer division...by zero >>> g.next() Traceback (most recent call last): File "", line 1, in -toplevel- diff --git a/lib-python/2.7/test/test_heapq.py b/lib-python/2.7/test/test_heapq.py --- a/lib-python/2.7/test/test_heapq.py +++ b/lib-python/2.7/test/test_heapq.py @@ -215,6 +215,11 @@ class TestHeapPython(TestHeap): module = py_heapq + def test_islice_protection(self): + m = self.module + self.assertFalse(m.nsmallest(-1, [1])) + self.assertFalse(m.nlargest(-1, [1])) + @skipUnless(c_heapq, 'requires _heapq') class TestHeapC(TestHeap): diff --git a/lib-python/2.7/test/test_import.py b/lib-python/2.7/test/test_import.py --- a/lib-python/2.7/test/test_import.py +++ b/lib-python/2.7/test/test_import.py @@ -7,7 +7,8 @@ import sys import unittest from test.test_support import (unlink, TESTFN, unload, run_unittest, rmtree, - is_jython, check_warnings, EnvironmentVarGuard) + is_jython, check_warnings, EnvironmentVarGuard, + 
impl_detail, check_impl_detail) import textwrap from test import script_helper @@ -69,7 +70,8 @@ self.assertEqual(mod.b, b, "module loaded (%s) but contents invalid" % mod) finally: - unlink(source) + if check_impl_detail(pypy=False): + unlink(source) try: imp.reload(mod) @@ -149,13 +151,16 @@ # Compile & remove .py file, we only need .pyc (or .pyo). with open(filename, 'r') as f: py_compile.compile(filename) - unlink(filename) + if check_impl_detail(pypy=False): + # pypy refuses to import a .pyc if the .py does not exist + unlink(filename) # Need to be able to load from current dir. sys.path.append('') # This used to crash. exec 'import ' + module + reload(longlist) # Cleanup. del sys.path[-1] @@ -326,6 +331,7 @@ self.assertEqual(mod.code_filename, self.file_name) self.assertEqual(mod.func_filename, self.file_name) + @impl_detail("pypy refuses to import without a .py source", pypy=False) def test_module_without_source(self): target = "another_module.py" py_compile.compile(self.file_name, dfile=target) diff --git a/lib-python/2.7/test/test_inspect.py b/lib-python/2.7/test/test_inspect.py --- a/lib-python/2.7/test/test_inspect.py +++ b/lib-python/2.7/test/test_inspect.py @@ -4,11 +4,11 @@ import unittest import inspect import linecache -import datetime from UserList import UserList from UserDict import UserDict from test.test_support import run_unittest, check_py3k_warnings +from test.test_support import check_impl_detail with check_py3k_warnings( ("tuple parameter unpacking has been removed", SyntaxWarning), @@ -74,7 +74,8 @@ def test_excluding_predicates(self): self.istest(inspect.isbuiltin, 'sys.exit') - self.istest(inspect.isbuiltin, '[].append') + if check_impl_detail(): + self.istest(inspect.isbuiltin, '[].append') self.istest(inspect.iscode, 'mod.spam.func_code') self.istest(inspect.isframe, 'tb.tb_frame') self.istest(inspect.isfunction, 'mod.spam') @@ -92,9 +93,9 @@ else: self.assertFalse(inspect.isgetsetdescriptor(type(tb.tb_frame).f_locals)) if 
hasattr(types, 'MemberDescriptorType'): - self.istest(inspect.ismemberdescriptor, 'datetime.timedelta.days') + self.istest(inspect.ismemberdescriptor, 'type(lambda: None).func_globals') else: - self.assertFalse(inspect.ismemberdescriptor(datetime.timedelta.days)) + self.assertFalse(inspect.ismemberdescriptor(type(lambda: None).func_globals)) def test_isroutine(self): self.assertTrue(inspect.isroutine(mod.spam)) @@ -567,7 +568,8 @@ else: self.fail('Exception not raised') self.assertIs(type(ex1), type(ex2)) - self.assertEqual(str(ex1), str(ex2)) + if check_impl_detail(): + self.assertEqual(str(ex1), str(ex2)) def makeCallable(self, signature): """Create a function that returns its locals(), excluding the diff --git a/lib-python/2.7/test/test_int.py b/lib-python/2.7/test/test_int.py --- a/lib-python/2.7/test/test_int.py +++ b/lib-python/2.7/test/test_int.py @@ -1,7 +1,7 @@ import sys import unittest -from test.test_support import run_unittest, have_unicode +from test.test_support import run_unittest, have_unicode, check_impl_detail import math L = [ @@ -392,9 +392,10 @@ try: int(TruncReturnsNonIntegral()) except TypeError as e: - self.assertEqual(str(e), - "__trunc__ returned non-Integral" - " (type NonIntegral)") + if check_impl_detail(cpython=True): + self.assertEqual(str(e), + "__trunc__ returned non-Integral" + " (type NonIntegral)") else: self.fail("Failed to raise TypeError with %s" % ((base, trunc_result_base),)) diff --git a/lib-python/2.7/test/test_io.py b/lib-python/2.7/test/test_io.py --- a/lib-python/2.7/test/test_io.py +++ b/lib-python/2.7/test/test_io.py @@ -2561,6 +2561,31 @@ """Check that a partial write, when it gets interrupted, properly invokes the signal handler, and bubbles up the exception raised in the latter.""" + + # XXX This test has three flaws that appear when objects are + # XXX not reference counted. 
+ + # - if wio.write() happens to trigger a garbage collection, + the signal exception may be raised when some __del__ + method is running; it will not reach the assertRaises() + call. + + # - more subtle, if the wio object is not destroyed at once + and survives this function, the next opened file is likely + to have the same fileno (since the file descriptor was + actively closed). When wio.__del__ is finally called, it + will close the other's test file... To trigger this with + CPython, try adding "global wio" in this function. + + # - This happens only for streams created by the _pyio module, + because a wio.close() that fails still considers that the + file needs to be closed again. You can try adding an + "assert wio.closed" at the end of the function. + + # Fortunately, a little gc.collect() seems to be enough to + work around all these issues. + support.gc_collect() + read_results = [] def _read(): s = os.read(r, 1) diff --git a/lib-python/2.7/test/test_isinstance.py b/lib-python/2.7/test/test_isinstance.py --- a/lib-python/2.7/test/test_isinstance.py +++ b/lib-python/2.7/test/test_isinstance.py @@ -260,7 +260,18 @@ # Make sure that calling isinstance with a deeply nested tuple for its # argument will raise RuntimeError eventually. 
tuple_arg = (compare_to,) - for cnt in xrange(sys.getrecursionlimit()+5): + + + if test_support.check_impl_detail(cpython=True): + RECURSION_LIMIT = sys.getrecursionlimit() + else: + # on non-CPython implementations, the maximum + # actual recursion limit might be higher, but + # probably not higher than 99999 + # + RECURSION_LIMIT = 99999 + + for cnt in xrange(RECURSION_LIMIT+5): tuple_arg = (tuple_arg,) fxn(arg, tuple_arg) diff --git a/lib-python/2.7/test/test_itertools.py b/lib-python/2.7/test/test_itertools.py --- a/lib-python/2.7/test/test_itertools.py +++ b/lib-python/2.7/test/test_itertools.py @@ -137,6 +137,8 @@ self.assertEqual(result, list(combinations2(values, r))) # matches second pure python version self.assertEqual(result, list(combinations3(values, r))) # matches second pure python version + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_combinations_tuple_reuse(self): # Test implementation detail: tuple re-use self.assertEqual(len(set(map(id, combinations('abcde', 3)))), 1) self.assertNotEqual(len(set(map(id, list(combinations('abcde', 3))))), 1) @@ -207,7 +209,10 @@ self.assertEqual(result, list(cwr1(values, r))) # matches first pure python version self.assertEqual(result, list(cwr2(values, r))) # matches second pure python version + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_combinations_with_replacement_tuple_reuse(self): # Test implementation detail: tuple re-use + cwr = combinations_with_replacement self.assertEqual(len(set(map(id, cwr('abcde', 3)))), 1) self.assertNotEqual(len(set(map(id, list(cwr('abcde', 3))))), 1) @@ -271,6 +276,8 @@ self.assertEqual(result, list(permutations(values, None))) # test r as None self.assertEqual(result, list(permutations(values))) # test default r + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_permutations_tuple_reuse(self): # Test implementation detail: tuple re-use self.assertEqual(len(set(map(id, permutations('abcde', 
3)))), 1) self.assertNotEqual(len(set(map(id, list(permutations('abcde', 3))))), 1) @@ -526,6 +533,9 @@ self.assertEqual(list(izip()), zip()) self.assertRaises(TypeError, izip, 3) self.assertRaises(TypeError, izip, range(3), 3) + + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_izip_tuple_reuse(self): # Check tuple re-use (implementation detail) self.assertEqual([tuple(list(pair)) for pair in izip('abc', 'def')], zip('abc', 'def')) @@ -575,6 +585,8 @@ else: self.fail('Did not raise Type in: ' + stmt) + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_iziplongest_tuple_reuse(self): # Check tuple re-use (implementation detail) self.assertEqual([tuple(list(pair)) for pair in izip_longest('abc', 'def')], zip('abc', 'def')) @@ -683,6 +695,8 @@ args = map(iter, args) self.assertEqual(len(list(product(*args))), expected_len) + @test_support.impl_detail("tuple reuse is specific to CPython") + def test_product_tuple_reuse(self): # Test implementation detail: tuple re-use self.assertEqual(len(set(map(id, product('abc', 'def')))), 1) self.assertNotEqual(len(set(map(id, list(product('abc', 'def'))))), 1) @@ -771,11 +785,11 @@ self.assertRaises(ValueError, islice, xrange(10), 1, -5, -1) self.assertRaises(ValueError, islice, xrange(10), 1, 10, -1) self.assertRaises(ValueError, islice, xrange(10), 1, 10, 0) - self.assertRaises(ValueError, islice, xrange(10), 'a') - self.assertRaises(ValueError, islice, xrange(10), 'a', 1) - self.assertRaises(ValueError, islice, xrange(10), 1, 'a') - self.assertRaises(ValueError, islice, xrange(10), 'a', 1, 1) - self.assertRaises(ValueError, islice, xrange(10), 1, 'a', 1) + self.assertRaises((ValueError, TypeError), islice, xrange(10), 'a') + self.assertRaises((ValueError, TypeError), islice, xrange(10), 'a', 1) + self.assertRaises((ValueError, TypeError), islice, xrange(10), 1, 'a') + self.assertRaises((ValueError, TypeError), islice, xrange(10), 'a', 1, 1) + self.assertRaises((ValueError, 
TypeError), islice, xrange(10), 1, 'a', 1) self.assertEqual(len(list(islice(count(), 1, 10, maxsize))), 1) # Issue #10323: Less islice in a predictable state @@ -855,9 +869,17 @@ self.assertRaises(TypeError, tee, [1,2], 3, 'x') # tee object should be instantiable - a, b = tee('abc') - c = type(a)('def') - self.assertEqual(list(c), list('def')) + if test_support.check_impl_detail(): + # XXX I (arigo) would argue that 'type(a)(iterable)' has + # ill-defined semantics: it always returns a fresh tee object, + # but depending on whether 'iterable' is itself a tee object + # or not, it is ok or not to continue using 'iterable' after + # the call. I cannot imagine why 'type(a)(non_tee_object)' + # would be useful, as 'iter(non_tee_object)' is equivalent + # as far as I can see. + a, b = tee('abc') + c = type(a)('def') + self.assertEqual(list(c), list('def')) # test long-lagged and multi-way split a, b, c = tee(xrange(2000), 3) @@ -895,6 +917,7 @@ p = proxy(a) self.assertEqual(getattr(p, '__class__'), type(b)) del a + test_support.gc_collect() 
for index, line in enumerate(open(filename)): self.assertEqual(line, getline(filename, index + 1)) @@ -78,7 +78,7 @@ def test_clearcache(self): cached = [] for entry in TESTS: - filename = os.path.join(TEST_PATH, entry) + '.py' + filename = support.findfile( entry + '.py') cached.append(filename) linecache.getline(filename, 1) diff --git a/lib-python/2.7/test/test_list.py b/lib-python/2.7/test/test_list.py --- a/lib-python/2.7/test/test_list.py +++ b/lib-python/2.7/test/test_list.py @@ -15,6 +15,10 @@ self.assertEqual(list(''), []) self.assertEqual(list('spam'), ['s', 'p', 'a', 'm']) + # the following test also works with pypy, but eats all your address + # space's RAM before raising and takes too long. + @test_support.impl_detail("eats all your RAM before working", pypy=False) + def test_segfault_1(self): if sys.maxsize == 0x7fffffff: # This test can currently only work on 32-bit machines. # XXX If/when PySequence_Length() returns a ssize_t, it should be @@ -32,6 +36,7 @@ # http://sources.redhat.com/ml/newlib/2002/msg00369.html self.assertRaises(MemoryError, list, xrange(sys.maxint // 2)) + def test_segfault_2(self): # This code used to segfault in Py2.4a3 x = [] x.extend(-y for y in x) diff --git a/lib-python/2.7/test/test_long.py b/lib-python/2.7/test/test_long.py --- a/lib-python/2.7/test/test_long.py +++ b/lib-python/2.7/test/test_long.py @@ -530,9 +530,10 @@ try: long(TruncReturnsNonIntegral()) except TypeError as e: - self.assertEqual(str(e), - "__trunc__ returned non-Integral" - " (type NonIntegral)") + if test_support.check_impl_detail(cpython=True): + self.assertEqual(str(e), + "__trunc__ returned non-Integral" + " (type NonIntegral)") else: self.fail("Failed to raise TypeError with %s" % ((base, trunc_result_base),)) diff --git a/lib-python/2.7/test/test_mailbox.py b/lib-python/2.7/test/test_mailbox.py --- a/lib-python/2.7/test/test_mailbox.py +++ b/lib-python/2.7/test/test_mailbox.py @@ -137,6 +137,7 @@ msg = self._box.get(key1) 
self.assertEqual(msg['from'], 'foo') self.assertEqual(msg.fp.read(), '1') + msg.fp.close() def test_getitem(self): # Retrieve message using __getitem__() @@ -169,10 +170,12 @@ # Get file representations of messages key0 = self._box.add(self._template % 0) key1 = self._box.add(_sample_message) - self.assertEqual(self._box.get_file(key0).read().replace(os.linesep, '\n'), - self._template % 0) - self.assertEqual(self._box.get_file(key1).read().replace(os.linesep, '\n'), - _sample_message) + msg0 = self._box.get_file(key0) + self.assertEqual(msg0.read().replace(os.linesep, '\n'), self._template % 0) + msg1 = self._box.get_file(key1) + self.assertEqual(msg1.read().replace(os.linesep, '\n'), _sample_message) + msg0.close() + msg1.close() def test_iterkeys(self): # Get keys using iterkeys() @@ -1837,7 +1840,9 @@ self.createMessage("cur") self.mbox = mailbox.Maildir(test_support.TESTFN) #self.assertTrue(len(self.mbox.boxes) == 1) - self.assertIsNot(self.mbox.next(), None) + msg = self.mbox.next() + self.assertIsNot(msg, None) + msg.fp.close() self.assertIs(self.mbox.next(), None) self.assertIs(self.mbox.next(), None) @@ -1845,7 +1850,9 @@ self.createMessage("new") self.mbox = mailbox.Maildir(test_support.TESTFN) #self.assertTrue(len(self.mbox.boxes) == 1) - self.assertIsNot(self.mbox.next(), None) + msg = self.mbox.next() + self.assertIsNot(msg, None) + msg.fp.close() self.assertIs(self.mbox.next(), None) self.assertIs(self.mbox.next(), None) @@ -1854,8 +1861,12 @@ self.createMessage("new") self.mbox = mailbox.Maildir(test_support.TESTFN) #self.assertTrue(len(self.mbox.boxes) == 2) - self.assertIsNot(self.mbox.next(), None) - self.assertIsNot(self.mbox.next(), None) + msg = self.mbox.next() + self.assertIsNot(msg, None) + msg.fp.close() + msg = self.mbox.next() + self.assertIsNot(msg, None) + msg.fp.close() self.assertIs(self.mbox.next(), None) self.assertIs(self.mbox.next(), None) @@ -1864,11 +1875,13 @@ import email.parser fname = self.createMessage("cur", True) n = 0 - 
for msg in mailbox.PortableUnixMailbox(open(fname), + fid = open(fname) + for msg in mailbox.PortableUnixMailbox(fid, email.parser.Parser().parse): n += 1 self.assertEqual(msg["subject"], "Simple Test") self.assertEqual(len(str(msg)), len(FROM_)+len(DUMMY_MESSAGE)) + fid.close() self.assertEqual(n, 1) ## End: classes from the original module (for backward compatibility). diff --git a/lib-python/2.7/test/test_marshal.py b/lib-python/2.7/test/test_marshal.py --- a/lib-python/2.7/test/test_marshal.py +++ b/lib-python/2.7/test/test_marshal.py @@ -7,20 +7,31 @@ import unittest import os -class IntTestCase(unittest.TestCase): +class HelperMixin: + def helper(self, sample, *extra, **kwargs): + expected = kwargs.get('expected', sample) + new = marshal.loads(marshal.dumps(sample, *extra)) + self.assertEqual(expected, new) + self.assertEqual(type(expected), type(new)) + try: + with open(test_support.TESTFN, "wb") as f: + marshal.dump(sample, f, *extra) + with open(test_support.TESTFN, "rb") as f: + new = marshal.load(f) + self.assertEqual(expected, new) + self.assertEqual(type(expected), type(new)) + finally: + test_support.unlink(test_support.TESTFN) + + +class IntTestCase(unittest.TestCase, HelperMixin): def test_ints(self): # Test the full range of Python ints. n = sys.maxint while n: for expected in (-n, n): - s = marshal.dumps(expected) - got = marshal.loads(s) - self.assertEqual(expected, got) - marshal.dump(expected, file(test_support.TESTFN, "wb")) - got = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(expected, got) + self.helper(expected) n = n >> 1 - os.unlink(test_support.TESTFN) def test_int64(self): # Simulate int marshaling on a 64-bit box. 
This is most interesting if @@ -48,28 +59,16 @@ def test_bool(self): for b in (True, False): - new = marshal.loads(marshal.dumps(b)) - self.assertEqual(b, new) - self.assertEqual(type(b), type(new)) - marshal.dump(b, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(b, new) - self.assertEqual(type(b), type(new)) + self.helper(b) -class FloatTestCase(unittest.TestCase): +class FloatTestCase(unittest.TestCase, HelperMixin): def test_floats(self): # Test a few floats small = 1e-25 n = sys.maxint * 3.7e250 while n > small: for expected in (-n, n): - f = float(expected) - s = marshal.dumps(f) - got = marshal.loads(s) - self.assertEqual(f, got) - marshal.dump(f, file(test_support.TESTFN, "wb")) - got = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(f, got) + self.helper(expected) n /= 123.4567 f = 0.0 @@ -85,59 +84,25 @@ while n < small: for expected in (-n, n): f = float(expected) + self.helper(f) + self.helper(f, 1) + n *= 123.4567 - s = marshal.dumps(f) - got = marshal.loads(s) - self.assertEqual(f, got) - - s = marshal.dumps(f, 1) - got = marshal.loads(s) - self.assertEqual(f, got) - - marshal.dump(f, file(test_support.TESTFN, "wb")) - got = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(f, got) - - marshal.dump(f, file(test_support.TESTFN, "wb"), 1) - got = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(f, got) - n *= 123.4567 - os.unlink(test_support.TESTFN) - -class StringTestCase(unittest.TestCase): +class StringTestCase(unittest.TestCase, HelperMixin): def test_unicode(self): for s in [u"", u"André Previn", u"abc", u" "*10000]: - new = marshal.loads(marshal.dumps(s)) - self.assertEqual(s, new) - self.assertEqual(type(s), type(new)) - marshal.dump(s, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(s, new) - self.assertEqual(type(s), type(new)) - os.unlink(test_support.TESTFN) + self.helper(s) def 
test_string(self): for s in ["", "André Previn", "abc", " "*10000]: - new = marshal.loads(marshal.dumps(s)) - self.assertEqual(s, new) - self.assertEqual(type(s), type(new)) - marshal.dump(s, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(s, new) - self.assertEqual(type(s), type(new)) - os.unlink(test_support.TESTFN) + self.helper(s) def test_buffer(self): for s in ["", "André Previn", "abc", " "*10000]: with test_support.check_py3k_warnings(("buffer.. not supported", DeprecationWarning)): b = buffer(s) - new = marshal.loads(marshal.dumps(b)) - self.assertEqual(s, new) - marshal.dump(b, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(s, new) - os.unlink(test_support.TESTFN) + self.helper(b, expected=s) class ExceptionTestCase(unittest.TestCase): def test_exceptions(self): @@ -150,7 +115,7 @@ new = marshal.loads(marshal.dumps(co)) self.assertEqual(co, new) -class ContainerTestCase(unittest.TestCase): +class ContainerTestCase(unittest.TestCase, HelperMixin): d = {'astring': 'foo at bar.baz.spam', 'afloat': 7283.43, 'anint': 2**20, @@ -161,42 +126,20 @@ 'aunicode': u"André Previn" } def test_dict(self): - new = marshal.loads(marshal.dumps(self.d)) - self.assertEqual(self.d, new) - marshal.dump(self.d, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(self.d, new) - os.unlink(test_support.TESTFN) + self.helper(self.d) def test_list(self): lst = self.d.items() - new = marshal.loads(marshal.dumps(lst)) - self.assertEqual(lst, new) - marshal.dump(lst, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(lst, new) - os.unlink(test_support.TESTFN) + self.helper(lst) def test_tuple(self): t = tuple(self.d.keys()) - new = marshal.loads(marshal.dumps(t)) - self.assertEqual(t, new) - marshal.dump(t, file(test_support.TESTFN, "wb")) - new = 
marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(t, new) - os.unlink(test_support.TESTFN) + self.helper(t) def test_sets(self): for constructor in (set, frozenset): t = constructor(self.d.keys()) - new = marshal.loads(marshal.dumps(t)) - self.assertEqual(t, new) - self.assertTrue(isinstance(new, constructor)) - self.assertNotEqual(id(t), id(new)) - marshal.dump(t, file(test_support.TESTFN, "wb")) - new = marshal.load(file(test_support.TESTFN, "rb")) - self.assertEqual(t, new) - os.unlink(test_support.TESTFN) + self.helper(t) class BugsTestCase(unittest.TestCase): def test_bug_5888452(self): @@ -226,6 +169,7 @@ s = 'c' + ('X' * 4*4) + '{' * 2**20 self.assertRaises(ValueError, marshal.loads, s) + @test_support.impl_detail('specific recursion check') def test_recursion_limit(self): # Create a deeply nested structure. head = last = [] diff --git a/lib-python/2.7/test/test_memoryio.py b/lib-python/2.7/test/test_memoryio.py --- a/lib-python/2.7/test/test_memoryio.py +++ b/lib-python/2.7/test/test_memoryio.py @@ -617,7 +617,7 @@ state = memio.__getstate__() self.assertEqual(len(state), 3) bytearray(state[0]) # Check if state[0] supports the buffer interface. 
- self.assertIsInstance(state[1], int) + self.assertIsInstance(state[1], (int, long)) self.assertTrue(isinstance(state[2], dict) or state[2] is None) memio.close() self.assertRaises(ValueError, memio.__getstate__) diff --git a/lib-python/2.7/test/test_memoryview.py b/lib-python/2.7/test/test_memoryview.py --- a/lib-python/2.7/test/test_memoryview.py +++ b/lib-python/2.7/test/test_memoryview.py @@ -26,7 +26,8 @@ def check_getitem_with_type(self, tp): item = self.getitem_type b = tp(self._source) - oldrefcount = sys.getrefcount(b) + if hasattr(sys, 'getrefcount'): + oldrefcount = sys.getrefcount(b) m = self._view(b) self.assertEqual(m[0], item(b"a")) self.assertIsInstance(m[0], bytes) @@ -43,7 +44,8 @@ self.assertRaises(TypeError, lambda: m[0.0]) self.assertRaises(TypeError, lambda: m["a"]) m = None - self.assertEqual(sys.getrefcount(b), oldrefcount) + if hasattr(sys, 'getrefcount'): + self.assertEqual(sys.getrefcount(b), oldrefcount) def test_getitem(self): for tp in self._types: @@ -65,7 +67,8 @@ if not self.ro_type: return b = self.ro_type(self._source) - oldrefcount = sys.getrefcount(b) + if hasattr(sys, 'getrefcount'): + oldrefcount = sys.getrefcount(b) m = self._view(b) def setitem(value): m[0] = value @@ -73,14 +76,16 @@ self.assertRaises(TypeError, setitem, 65) self.assertRaises(TypeError, setitem, memoryview(b"a")) m = None - self.assertEqual(sys.getrefcount(b), oldrefcount) + if hasattr(sys, 'getrefcount'): + self.assertEqual(sys.getrefcount(b), oldrefcount) def test_setitem_writable(self): if not self.rw_type: return tp = self.rw_type b = self.rw_type(self._source) - oldrefcount = sys.getrefcount(b) + if hasattr(sys, 'getrefcount'): + oldrefcount = sys.getrefcount(b) m = self._view(b) m[0] = tp(b"0") self._check_contents(tp, b, b"0bcdef") @@ -110,13 +115,14 @@ self.assertRaises(TypeError, setitem, (0,), b"a") self.assertRaises(TypeError, setitem, "a", b"a") # Trying to resize the memory object - self.assertRaises(ValueError, setitem, 0, b"") - 
self.assertRaises(ValueError, setitem, 0, b"ab") + self.assertRaises((ValueError, TypeError), setitem, 0, b"") + self.assertRaises((ValueError, TypeError), setitem, 0, b"ab") self.assertRaises(ValueError, setitem, slice(1,1), b"a") self.assertRaises(ValueError, setitem, slice(0,2), b"a") m = None - self.assertEqual(sys.getrefcount(b), oldrefcount) + if hasattr(sys, 'getrefcount'): + self.assertEqual(sys.getrefcount(b), oldrefcount) def test_delitem(self): for tp in self._types: @@ -292,6 +298,7 @@ def _check_contents(self, tp, obj, contents): self.assertEqual(obj[1:7], tp(contents)) + @unittest.skipUnless(hasattr(sys, 'getrefcount'), "Reference counting") def test_refs(self): for tp in self._types: m = memoryview(tp(self._source)) diff --git a/lib-python/2.7/test/test_mmap.py b/lib-python/2.7/test/test_mmap.py --- a/lib-python/2.7/test/test_mmap.py +++ b/lib-python/2.7/test/test_mmap.py @@ -119,7 +119,8 @@ def test_access_parameter(self): # Test for "access" keyword parameter mapsize = 10 - open(TESTFN, "wb").write("a"*mapsize) + with open(TESTFN, "wb") as f: + f.write("a"*mapsize) f = open(TESTFN, "rb") m = mmap.mmap(f.fileno(), mapsize, access=mmap.ACCESS_READ) self.assertEqual(m[:], 'a'*mapsize, "Readonly memory map data incorrect.") @@ -168,9 +169,11 @@ else: self.fail("Able to resize readonly memory map") f.close() + m.close() del m, f - self.assertEqual(open(TESTFN, "rb").read(), 'a'*mapsize, - "Readonly memory map data file was modified") + with open(TESTFN, "rb") as f: + self.assertEqual(f.read(), 'a'*mapsize, + "Readonly memory map data file was modified") # Opening mmap with size too big import sys @@ -220,11 +223,13 @@ self.assertEqual(m[:], 'd' * mapsize, "Copy-on-write memory map data not written correctly.") m.flush() - self.assertEqual(open(TESTFN, "rb").read(), 'c'*mapsize, - "Copy-on-write test data file should not be modified.") + f.close() + with open(TESTFN, "rb") as f: + self.assertEqual(f.read(), 'c'*mapsize, + "Copy-on-write test data file 
should not be modified.") # Ensuring copy-on-write maps cannot be resized self.assertRaises(TypeError, m.resize, 2*mapsize) - f.close() + m.close() del m, f # Ensuring invalid access parameter raises exception @@ -287,6 +292,7 @@ self.assertEqual(m.find('one', 1), 8) self.assertEqual(m.find('one', 1, -1), 8) self.assertEqual(m.find('one', 1, -2), -1) + m.close() def test_rfind(self): @@ -305,6 +311,7 @@ self.assertEqual(m.rfind('one', 0, -2), 0) self.assertEqual(m.rfind('one', 1, -1), 8) self.assertEqual(m.rfind('one', 1, -2), -1) + m.close() def test_double_close(self): @@ -533,7 +540,8 @@ if not hasattr(mmap, 'PROT_READ'): return mapsize = 10 - open(TESTFN, "wb").write("a"*mapsize) + with open(TESTFN, "wb") as f: + f.write("a"*mapsize) f = open(TESTFN, "rb") m = mmap.mmap(f.fileno(), mapsize, prot=mmap.PROT_READ) self.assertRaises(TypeError, m.write, "foo") @@ -545,7 +553,8 @@ def test_io_methods(self): data = "0123456789" - open(TESTFN, "wb").write("x"*len(data)) + with open(TESTFN, "wb") as f: + f.write("x"*len(data)) f = open(TESTFN, "r+b") m = mmap.mmap(f.fileno(), len(data)) f.close() @@ -574,6 +583,7 @@ self.assertEqual(m[:], "012bar6789") m.seek(8) self.assertRaises(ValueError, m.write, "bar") + m.close() if os.name == 'nt': def test_tagname(self): @@ -611,7 +621,8 @@ m.close() # Should not crash (Issue 5385) - open(TESTFN, "wb").write("x"*10) + with open(TESTFN, "wb") as f: + f.write("x"*10) f = open(TESTFN, "r+b") m = mmap.mmap(f.fileno(), 0) f.close() diff --git a/lib-python/2.7/test/test_module.py b/lib-python/2.7/test/test_module.py --- a/lib-python/2.7/test/test_module.py +++ b/lib-python/2.7/test/test_module.py @@ -1,6 +1,6 @@ # Test the module type import unittest -from test.test_support import run_unittest, gc_collect +from test.test_support import run_unittest, gc_collect, check_impl_detail import sys ModuleType = type(sys) @@ -10,8 +10,10 @@ # An uninitialized module has no __dict__ or __name__, # and __doc__ is None foo = 
ModuleType.__new__(ModuleType) - self.assertTrue(foo.__dict__ is None) - self.assertRaises(SystemError, dir, foo) + self.assertFalse(foo.__dict__) + if check_impl_detail(): + self.assertTrue(foo.__dict__ is None) + self.assertRaises(SystemError, dir, foo) try: s = foo.__name__ self.fail("__name__ = %s" % repr(s)) diff --git a/lib-python/2.7/test/test_multibytecodec.py b/lib-python/2.7/test/test_multibytecodec.py --- a/lib-python/2.7/test/test_multibytecodec.py +++ b/lib-python/2.7/test/test_multibytecodec.py @@ -42,7 +42,7 @@ dec = codecs.getdecoder('euc-kr') myreplace = lambda exc: (u'', sys.maxint+1) codecs.register_error('test.cjktest', myreplace) - self.assertRaises(IndexError, dec, + self.assertRaises((IndexError, OverflowError), dec, 'apple\x92ham\x93spam', 'test.cjktest') def test_codingspec(self): @@ -148,7 +148,8 @@ class Test_StreamReader(unittest.TestCase): def test_bug1728403(self): try: - open(TESTFN, 'w').write('\xa1') + with open(TESTFN, 'w') as f: + f.write('\xa1') f = codecs.open(TESTFN, encoding='cp949') self.assertRaises(UnicodeDecodeError, f.read, 2) finally: diff --git a/lib-python/2.7/test/test_multibytecodec_support.py b/lib-python/2.7/test/test_multibytecodec_support.py --- a/lib-python/2.7/test/test_multibytecodec_support.py +++ b/lib-python/2.7/test/test_multibytecodec_support.py @@ -110,8 +110,8 @@ def myreplace(exc): return (u'x', sys.maxint + 1) codecs.register_error("test.cjktest", myreplace) - self.assertRaises(IndexError, self.encode, self.unmappedunicode, - 'test.cjktest') + self.assertRaises((IndexError, OverflowError), self.encode, + self.unmappedunicode, 'test.cjktest') def test_callback_None_index(self): def myreplace(exc): @@ -330,7 +330,7 @@ repr(csetch), repr(unich), exc.reason)) def load_teststring(name): - dir = os.path.join(os.path.dirname(__file__), 'cjkencodings') + dir = test_support.findfile('cjkencodings') with open(os.path.join(dir, name + '.txt'), 'rb') as f: encoded = f.read() with open(os.path.join(dir, name + 
'-utf8.txt'), 'rb') as f: diff --git a/lib-python/2.7/test/test_multiprocessing.py b/lib-python/2.7/test/test_multiprocessing.py --- a/lib-python/2.7/test/test_multiprocessing.py +++ b/lib-python/2.7/test/test_multiprocessing.py @@ -1316,6 +1316,7 @@ queue = manager.get_queue() self.assertEqual(queue.get(), 'hello world') del queue + test_support.gc_collect() manager.shutdown() manager = QueueManager( address=addr, authkey=authkey, serializer=SERIALIZER) @@ -1605,6 +1606,10 @@ if len(blocks) > maxblocks: i = random.randrange(maxblocks) del blocks[i] + # XXX There should be a better way to release resources for a + # single block + if i % maxblocks == 0: + import gc; gc.collect() # get the heap object heap = multiprocessing.heap.BufferWrapper._heap @@ -1704,6 +1709,7 @@ a = Foo() util.Finalize(a, conn.send, args=('a',)) del a # triggers callback for a + test_support.gc_collect() b = Foo() close_b = util.Finalize(b, conn.send, args=('b',)) diff --git a/lib-python/2.7/test/test_mutants.py b/lib-python/2.7/test/test_mutants.py --- a/lib-python/2.7/test/test_mutants.py +++ b/lib-python/2.7/test/test_mutants.py @@ -1,4 +1,4 @@ -from test.test_support import verbose, TESTFN +from test.test_support import verbose, TESTFN, check_impl_detail import random import os @@ -137,10 +137,16 @@ while dict1 and len(dict1) == len(dict2): if verbose: print ".", - if random.random() < 0.5: - c = cmp(dict1, dict2) - else: - c = dict1 == dict2 + try: + if random.random() < 0.5: + c = cmp(dict1, dict2) + else: + c = dict1 == dict2 + except RuntimeError: + # CPython never raises RuntimeError here, but other implementations + # might, and it's fine. 
+ if check_impl_detail(cpython=True): + raise if verbose: print diff --git a/lib-python/2.7/test/test_old_mailbox.py b/lib-python/2.7/test/test_old_mailbox.py --- a/lib-python/2.7/test/test_old_mailbox.py +++ b/lib-python/2.7/test/test_old_mailbox.py @@ -73,7 +73,9 @@ self.createMessage("cur") self.mbox = mailbox.Maildir(test_support.TESTFN) self.assertTrue(len(self.mbox) == 1) - self.assertTrue(self.mbox.next() is not None) + msg = self.mbox.next() + self.assertTrue(msg is not None) + msg.fp.close() self.assertTrue(self.mbox.next() is None) self.assertTrue(self.mbox.next() is None) @@ -81,7 +83,9 @@ self.createMessage("new") self.mbox = mailbox.Maildir(test_support.TESTFN) self.assertTrue(len(self.mbox) == 1) - self.assertTrue(self.mbox.next() is not None) + msg = self.mbox.next() + self.assertTrue(msg is not None) + msg.fp.close() self.assertTrue(self.mbox.next() is None) self.assertTrue(self.mbox.next() is None) @@ -90,8 +94,12 @@ self.createMessage("new") self.mbox = mailbox.Maildir(test_support.TESTFN) self.assertTrue(len(self.mbox) == 2) - self.assertTrue(self.mbox.next() is not None) - self.assertTrue(self.mbox.next() is not None) + msg = self.mbox.next() + self.assertTrue(msg is not None) + msg.fp.close() + msg = self.mbox.next() + self.assertTrue(msg is not None) + msg.fp.close() self.assertTrue(self.mbox.next() is None) self.assertTrue(self.mbox.next() is None) diff --git a/lib-python/2.7/test/test_optparse.py b/lib-python/2.7/test/test_optparse.py --- a/lib-python/2.7/test/test_optparse.py +++ b/lib-python/2.7/test/test_optparse.py @@ -383,6 +383,7 @@ self.assertRaises(self.parser.remove_option, ('foo',), None, ValueError, "no such option 'foo'") + @test_support.impl_detail("sys.getrefcount") def test_refleak(self): # If an OptionParser is carrying around a reference to a large # object, various cycles can prevent it from being GC'd in diff --git a/lib-python/2.7/test/test_os.py b/lib-python/2.7/test/test_os.py --- a/lib-python/2.7/test/test_os.py +++ 
b/lib-python/2.7/test/test_os.py @@ -74,7 +74,8 @@ self.assertFalse(os.path.exists(name), "file already exists for temporary file") # make sure we can create the file - open(name, "w") + fid = open(name, "w") + fid.close() self.files.append(name) def test_tempnam(self): diff --git a/lib-python/2.7/test/test_peepholer.py b/lib-python/2.7/test/test_peepholer.py --- a/lib-python/2.7/test/test_peepholer.py +++ b/lib-python/2.7/test/test_peepholer.py @@ -41,7 +41,7 @@ def test_none_as_constant(self): # LOAD_GLOBAL None --> LOAD_CONST None def f(x): - None + y = None return x asm = disassemble(f) for elem in ('LOAD_GLOBAL',): @@ -67,10 +67,13 @@ self.assertIn(elem, asm) def test_pack_unpack(self): + # On PyPy, "a, b = ..." is even more optimized, by removing + # the ROT_TWO. But the ROT_TWO is not removed if assigning + # to more complex expressions, so check that. for line, elem in ( ('a, = a,', 'LOAD_CONST',), - ('a, b = a, b', 'ROT_TWO',), - ('a, b, c = a, b, c', 'ROT_THREE',), + ('a[1], b = a, b', 'ROT_TWO',), + ('a, b[2], c = a, b, c', 'ROT_THREE',), ): asm = dis_single(line) self.assertIn(elem, asm) @@ -78,6 +81,8 @@ self.assertNotIn('UNPACK_TUPLE', asm) def test_folding_of_tuples_of_constants(self): + # On CPython, "a,b,c=1,2,3" turns into "a,b,c=" + # but on PyPy, it turns into "a=1;b=2;c=3". 
for line, elem in ( ('a = 1,2,3', '((1, 2, 3))'), ('("a","b","c")', "(('a', 'b', 'c'))"), @@ -86,7 +91,8 @@ ('((1, 2), 3, 4)', '(((1, 2), 3, 4))'), ): asm = dis_single(line) - self.assertIn(elem, asm) + self.assert_(elem in asm or ( + line == 'a,b,c = 1,2,3' and 'UNPACK_TUPLE' not in asm)) self.assertNotIn('BUILD_TUPLE', asm) # Bug 1053819: Tuple of constants misidentified when presented with: @@ -139,12 +145,15 @@ def test_binary_subscr_on_unicode(self): # valid code get optimized - asm = dis_single('u"foo"[0]') - self.assertIn("(u'f')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) - asm = dis_single('u"\u0061\uffff"[1]') - self.assertIn("(u'\\uffff')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) + # XXX for now we always disable this optimization + # XXX see CPython's issue5057 + if 0: + asm = dis_single('u"foo"[0]') + self.assertIn("(u'f')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + asm = dis_single('u"\u0061\uffff"[1]') + self.assertIn("(u'\\uffff')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) # invalid code doesn't get optimized # out of range diff --git a/lib-python/2.7/test/test_pprint.py b/lib-python/2.7/test/test_pprint.py --- a/lib-python/2.7/test/test_pprint.py +++ b/lib-python/2.7/test/test_pprint.py @@ -233,7 +233,16 @@ frozenset([0, 2]), frozenset([0, 1])])}""" cube = test.test_set.cube(3) - self.assertEqual(pprint.pformat(cube), cube_repr_tgt) + # XXX issues of dictionary order, and for the case below, + # order of items in the frozenset([...]) representation. + # Whether we get precisely cube_repr_tgt or not is open + # to implementation-dependent choices (this test probably + # fails horribly in CPython if we tweak the dict order too). 
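[Editor's aside, not part of the patch: the comment above explains why comparing pprint output against an exact target string is fragile across implementations. The portable check the hunk switches to is an eval() round-trip, which is insensitive to dict/set ordering. A minimal standalone sketch of that pattern, using a small nested structure of the same shape as test_set.cube:]

```python
import pprint

# Instead of comparing pformat() output to a hard-coded string (which bakes
# in implementation-dependent dict/set ordering), round-trip the formatted
# text through eval() and compare the resulting objects.
cube = {frozenset([0]): frozenset([frozenset([0, 1]), frozenset([0, 2])])}
got = pprint.pformat(cube)
assert eval(got) == cube  # holds regardless of item ordering
```
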
+ got = pprint.pformat(cube) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cube_repr_tgt) + else: + self.assertEqual(eval(got), cube) cubo_repr_tgt = """\ {frozenset([frozenset([0, 2]), frozenset([0])]): frozenset([frozenset([frozenset([0, 2]), @@ -393,7 +402,11 @@ 2])])])}""" cubo = test.test_set.linegraph(cube) - self.assertEqual(pprint.pformat(cubo), cubo_repr_tgt) + got = pprint.pformat(cubo) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cubo_repr_tgt) + else: + self.assertEqual(eval(got), cubo) def test_depth(self): nested_tuple = (1, (2, (3, (4, (5, 6))))) diff --git a/lib-python/2.7/test/test_pydoc.py b/lib-python/2.7/test/test_pydoc.py --- a/lib-python/2.7/test/test_pydoc.py +++ b/lib-python/2.7/test/test_pydoc.py @@ -267,8 +267,8 @@ testpairs = ( ('i_am_not_here', 'i_am_not_here'), ('test.i_am_not_here_either', 'i_am_not_here_either'), - ('test.i_am_not_here.neither_am_i', 'i_am_not_here.neither_am_i'), - ('i_am_not_here.{}'.format(modname), 'i_am_not_here.{}'.format(modname)), + ('test.i_am_not_here.neither_am_i', 'i_am_not_here'), + ('i_am_not_here.{}'.format(modname), 'i_am_not_here'), ('test.{}'.format(modname), modname), ) @@ -292,8 +292,8 @@ result = run_pydoc(modname) finally: forget(modname) - expected = badimport_pattern % (modname, expectedinmsg) - self.assertEqual(expected, result) + expected = badimport_pattern % (modname, '(.+\\.)?' 
+ expectedinmsg + '(\\..+)?$') + self.assertTrue(re.match(expected, result)) def test_input_strip(self): missing_module = " test.i_am_not_here " diff --git a/lib-python/2.7/test/test_pyexpat.py b/lib-python/2.7/test/test_pyexpat.py --- a/lib-python/2.7/test/test_pyexpat.py +++ b/lib-python/2.7/test/test_pyexpat.py @@ -570,6 +570,9 @@ self.assertEqual(self.n, 4) class MalformedInputText(unittest.TestCase): + # CPython seems to ship its own version of expat, they fixed it on this commit : + # http://svn.python.org/view?revision=74429&view=revision + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test1(self): xml = "\0\r\n" parser = expat.ParserCreate() @@ -579,6 +582,7 @@ except expat.ExpatError as e: self.assertEqual(str(e), 'unclosed token: line 2, column 0') + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test2(self): xml = "\r\n" parser = expat.ParserCreate() diff --git a/lib-python/2.7/test/test_repr.py b/lib-python/2.7/test/test_repr.py --- a/lib-python/2.7/test/test_repr.py +++ b/lib-python/2.7/test/test_repr.py @@ -9,6 +9,7 @@ import unittest from test.test_support import run_unittest, check_py3k_warnings +from test.test_support import check_impl_detail from repr import repr as r # Don't shadow builtin repr from repr import Repr @@ -145,8 +146,11 @@ # Functions eq(repr(hash), '') # Methods - self.assertTrue(repr(''.split).startswith( - '") def test_xrange(self): eq = self.assertEqual @@ -185,7 +189,10 @@ def test_descriptors(self): eq = self.assertEqual # method descriptors - eq(repr(dict.items), "") + if check_impl_detail(cpython=True): + eq(repr(dict.items), "") + elif check_impl_detail(pypy=True): + eq(repr(dict.items), "") # XXX member descriptors # XXX attribute descriptors # XXX slot descriptors @@ -247,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from 
areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_runpy.py b/lib-python/2.7/test/test_runpy.py --- a/lib-python/2.7/test/test_runpy.py +++ b/lib-python/2.7/test/test_runpy.py @@ -5,10 +5,15 @@ import sys import re import tempfile -from test.test_support import verbose, run_unittest, forget +from test.test_support import verbose, run_unittest, forget, check_impl_detail from test.script_helper import (temp_dir, make_script, compile_script, make_pkg, make_zip_script, make_zip_pkg) +if check_impl_detail(pypy=True): + no_lone_pyc_file = True +else: + no_lone_pyc_file = False + from runpy import _run_code, _run_module_code, run_module, run_path # Note: This module can't safely test _run_module_as_main as it @@ -168,13 +173,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 
1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + d2 = run_module(mod_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -190,13 +196,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", pkg_name - d2 = run_module(pkg_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", pkg_name + d2 = run_module(pkg_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, pkg_name) if verbose: print "Package executed successfully" @@ -244,15 +251,17 @@ self.assertIn("sibling", d1) self.assertIn("nephew", d1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name, run_name=run_name) # Read from bytecode - self.assertIn("__package__", d2) - self.assertTrue(d2["__package__"] == pkg_name) - self.assertIn("sibling", d2) - self.assertIn("nephew", d2) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + # Read from bytecode + d2 = run_module(mod_name, run_name=run_name) + self.assertIn("__package__", d2) + 
self.assertTrue(d2["__package__"] == pkg_name) + self.assertIn("sibling", d2) + self.assertIn("nephew", d2) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -345,6 +354,8 @@ script_dir, '') def test_directory_compiled(self): + if no_lone_pyc_file: + return with temp_dir() as script_dir: mod_name = '__main__' script_name = self._make_test_script(script_dir, mod_name) diff --git a/lib-python/2.7/test/test_scope.py b/lib-python/2.7/test/test_scope.py --- a/lib-python/2.7/test/test_scope.py +++ b/lib-python/2.7/test/test_scope.py @@ -1,6 +1,6 @@ import unittest from test.test_support import check_syntax_error, check_py3k_warnings, \ - check_warnings, run_unittest + check_warnings, run_unittest, gc_collect class ScopeTests(unittest.TestCase): @@ -432,6 +432,7 @@ for i in range(100): f1() + gc_collect() self.assertEqual(Foo.count, 0) diff --git a/lib-python/2.7/test/test_set.py b/lib-python/2.7/test/test_set.py --- a/lib-python/2.7/test/test_set.py +++ b/lib-python/2.7/test/test_set.py @@ -309,6 +309,7 @@ fo.close() test_support.unlink(test_support.TESTFN) + @test_support.impl_detail(pypy=False) def test_do_not_rehash_dict_keys(self): n = 10 d = dict.fromkeys(map(HashCountingInt, xrange(n))) @@ -559,6 +560,7 @@ p = weakref.proxy(s) self.assertEqual(str(p), str(s)) s = None + test_support.gc_collect() self.assertRaises(ReferenceError, str, p) # C API test only available in a debug build @@ -590,6 +592,7 @@ s.__init__(self.otherword) self.assertEqual(s, set(self.word)) + @test_support.impl_detail() def test_singleton_empty_frozenset(self): f = frozenset() efs = [frozenset(), frozenset([]), frozenset(()), frozenset(''), @@ -770,9 +773,10 @@ for v in self.set: self.assertIn(v, self.values) setiter = iter(self.set) - # note: __length_hint__ is an internal undocumented API, - # don't rely on it in your own programs - self.assertEqual(setiter.__length_hint__(), 
len(self.set))
+        if test_support.check_impl_detail():
+            # note: __length_hint__ is an internal undocumented API,
+            # don't rely on it in your own programs
+            self.assertEqual(setiter.__length_hint__(), len(self.set))

     def test_pickling(self):
         p = pickle.dumps(self.set)
@@ -1564,7 +1568,7 @@
         for meth in (s.union, s.intersection, s.difference, s.symmetric_difference, s.isdisjoint):
             for g in (G, I, Ig, L, R):
                 expected = meth(data)
-                actual = meth(G(data))
+                actual = meth(g(data))
                 if isinstance(expected, bool):
                     self.assertEqual(actual, expected)
                 else:
diff --git a/lib-python/2.7/test/test_sets.py b/lib-python/2.7/test/test_sets.py
--- a/lib-python/2.7/test/test_sets.py
+++ b/lib-python/2.7/test/test_sets.py
@@ -686,7 +686,9 @@
         set_list = sorted(self.set)
         self.assertEqual(len(dup_list), len(set_list))
         for i, el in enumerate(dup_list):
-            self.assertIs(el, set_list[i])
+            # Object identity is not guaranteed for immutable objects, so we
+            # can't use assertIs here.
+            self.assertEqual(el, set_list[i])

     def test_deep_copy(self):
         dup = copy.deepcopy(self.set)
diff --git a/lib-python/2.7/test/test_site.py b/lib-python/2.7/test/test_site.py
--- a/lib-python/2.7/test/test_site.py
+++ b/lib-python/2.7/test/test_site.py
@@ -226,6 +226,10 @@
             self.assertEqual(len(dirs), 1)
             wanted = os.path.join('xoxo', 'Lib', 'site-packages')
             self.assertEqual(dirs[0], wanted)
+        elif '__pypy__' in sys.builtin_module_names:
+            self.assertEquals(len(dirs), 1)
+            wanted = os.path.join('xoxo', 'site-packages')
+            self.assertEquals(dirs[0], wanted)
         elif os.sep == '/':
             self.assertEqual(len(dirs), 2)
             wanted = os.path.join('xoxo', 'lib', 'python' + sys.version[:3],
diff --git a/lib-python/2.7/test/test_socket.py b/lib-python/2.7/test/test_socket.py
--- a/lib-python/2.7/test/test_socket.py
+++ b/lib-python/2.7/test/test_socket.py
@@ -252,6 +252,7 @@
         self.assertEqual(p.fileno(), s.fileno())
         s.close()
         s = None
+        test_support.gc_collect()
         try:
             p.fileno()
         except ReferenceError:
@@ -285,32 +286,34 @@
             s.sendto(u'\u2620',
sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None) - self.assertIn('not NoneType', str(cm.exception)) + self.assertIn('NoneType', str(cm.exception)) # 3 args with self.assertRaises(UnicodeEncodeError): s.sendto(u'\u2620', 0, sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, 0, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, None) - self.assertIn('not NoneType', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('not NoneType', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 'bar', sockname) - self.assertIn('an integer is required', str(cm.exception)) + self.assertIn('integer', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None, None) - self.assertIn('an integer is required', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('an integer is required', str(cm.exception)) # wrong number of args with self.assertRaises(TypeError) as cm: s.sendto('foo') - self.assertIn('(1 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, sockname, 4) - self.assertIn('(4 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) def testCrucialConstants(self): @@ -385,10 +388,10 @@ socket.htonl(k) socket.htons(k) for k in bad_values: - self.assertRaises(OverflowError, socket.ntohl, k) - self.assertRaises(OverflowError, socket.ntohs, k) - self.assertRaises(OverflowError, socket.htonl, k) - self.assertRaises(OverflowError, socket.htons, k) + self.assertRaises((OverflowError, ValueError), socket.ntohl, k) + self.assertRaises((OverflowError, ValueError), socket.ntohs, k) + 
self.assertRaises((OverflowError, ValueError), socket.htonl, k) + self.assertRaises((OverflowError, ValueError), socket.htons, k) def testGetServBy(self): eq = self.assertEqual @@ -428,8 +431,8 @@ if udpport is not None: eq(socket.getservbyport(udpport, 'udp'), service) # Make sure getservbyport does not accept out of range ports. - self.assertRaises(OverflowError, socket.getservbyport, -1) - self.assertRaises(OverflowError, socket.getservbyport, 65536) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, -1) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, 65536) def testDefaultTimeout(self): # Testing default timeout @@ -608,8 +611,8 @@ neg_port = port - 65536 sock = socket.socket() try: - self.assertRaises(OverflowError, sock.bind, (host, big_port)) - self.assertRaises(OverflowError, sock.bind, (host, neg_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, big_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, neg_port)) sock.bind((host, port)) finally: sock.close() @@ -1309,6 +1312,7 @@ closed = False def flush(self): pass def close(self): self.closed = True + def _decref_socketios(self): pass # must not close unless we request it: the original use of _fileobject # by module socket requires that the underlying socket not be closed until diff --git a/lib-python/2.7/test/test_sort.py b/lib-python/2.7/test/test_sort.py --- a/lib-python/2.7/test/test_sort.py +++ b/lib-python/2.7/test/test_sort.py @@ -140,7 +140,10 @@ return random.random() < 0.5 L = [C() for i in range(50)] - self.assertRaises(ValueError, L.sort) + try: + L.sort() + except ValueError: + pass def test_cmpNone(self): # Testing None as a comparison function. @@ -150,8 +153,10 @@ L.sort(None) self.assertEqual(L, range(50)) + @test_support.impl_detail(pypy=False) def test_undetected_mutation(self): # Python 2.4a1 did not always detect mutation + # So does pypy... 
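[Editor's aside, not part of the patch: many hunks above wrap CPython-only behavior in test_support.impl_detail / check_impl_detail guards. A simplified, hypothetical sketch of how such guards work — the real helpers live in lib-python/2.7/test/test_support.py and differ in detail:]

```python
import platform
import unittest

def check_impl_detail(**guards):
    # Simplified sketch: check_impl_detail() asks "is this CPython?";
    # check_impl_detail(pypy=False) is true on everything except PyPy.
    if not guards:
        guards, default = {'cpython': True}, False
    else:
        default = not next(iter(guards.values()))
    return guards.get(platform.python_implementation().lower(), default)

def impl_detail(msg="implementation detail", **guards):
    # Decorator form: skip the test wherever the guard evaluates false.
    if check_impl_detail(**guards):
        return lambda func: func
    return unittest.skip(msg)
```
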
memorywaster = [] for i in range(20): def mutating_cmp(x, y): @@ -226,7 +231,10 @@ def __del__(self): del data[:] data[:] = range(20) - self.assertRaises(ValueError, data.sort, key=SortKiller) + try: + data.sort(key=SortKiller) + except ValueError: + pass def test_key_with_mutating_del_and_exception(self): data = range(10) diff --git a/lib-python/2.7/test/test_ssl.py b/lib-python/2.7/test/test_ssl.py --- a/lib-python/2.7/test/test_ssl.py +++ b/lib-python/2.7/test/test_ssl.py @@ -881,6 +881,8 @@ c = socket.socket() c.connect((HOST, port)) listener_gone.wait() + # XXX why is it necessary? + test_support.gc_collect() try: ssl_sock = ssl.wrap_socket(c) except IOError: @@ -1330,10 +1332,8 @@ def test_main(verbose=False): global CERTFILE, SVN_PYTHON_ORG_ROOT_CERT - CERTFILE = os.path.join(os.path.dirname(__file__) or os.curdir, - "keycert.pem") - SVN_PYTHON_ORG_ROOT_CERT = os.path.join( - os.path.dirname(__file__) or os.curdir, + CERTFILE = test_support.findfile("keycert.pem") + SVN_PYTHON_ORG_ROOT_CERT = test_support.findfile( "https_svn_python_org_root.pem") if (not os.path.exists(CERTFILE) or diff --git a/lib-python/2.7/test/test_str.py b/lib-python/2.7/test/test_str.py --- a/lib-python/2.7/test/test_str.py +++ b/lib-python/2.7/test/test_str.py @@ -422,10 +422,11 @@ for meth in ('foo'.startswith, 'foo'.endswith): with self.assertRaises(TypeError) as cm: meth(['f']) - exc = str(cm.exception) - self.assertIn('unicode', exc) - self.assertIn('str', exc) - self.assertIn('tuple', exc) + if test_support.check_impl_detail(): + exc = str(cm.exception) + self.assertIn('unicode', exc) + self.assertIn('str', exc) + self.assertIn('tuple', exc) def test_main(): test_support.run_unittest(StrTest) diff --git a/lib-python/2.7/test/test_struct.py b/lib-python/2.7/test/test_struct.py --- a/lib-python/2.7/test/test_struct.py +++ b/lib-python/2.7/test/test_struct.py @@ -535,7 +535,8 @@ @unittest.skipUnless(IS32BIT, "Specific to 32bit machines") def test_crasher(self): - 
-        self.assertRaises(MemoryError, struct.pack, "357913941c", "a")
+        self.assertRaises((MemoryError, struct.error), struct.pack,
+                          "357913941c", "a")

     def test_count_overflow(self):
         hugecount = '{}b'.format(sys.maxsize+1)
diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/2.7/test/test_subprocess.py
--- a/lib-python/2.7/test/test_subprocess.py
+++ b/lib-python/2.7/test/test_subprocess.py
@@ -16,11 +16,11 @@
 # Depends on the following external programs: Python
 #
-if mswindows:
-    SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), '
-                                                'os.O_BINARY);')
-else:
-    SETBINARY = ''
+#if mswindows:
+#    SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), '
+#                                                'os.O_BINARY);')
+#else:
+#    SETBINARY = ''

 try:
@@ -420,8 +420,9 @@
         self.assertStderrEqual(stderr, "")

     def test_universal_newlines(self):
-        p = subprocess.Popen([sys.executable, "-c",
-                          'import sys,os;' + SETBINARY +
+        # NB. replaced SETBINARY with the -u flag
+        p = subprocess.Popen([sys.executable, "-u", "-c",
+                          'import sys,os;' + #SETBINARY +
                           'sys.stdout.write("line1\\n");'
                           'sys.stdout.flush();'
                           'sys.stdout.write("line2\\r");'
@@ -448,8 +449,9 @@

     def test_universal_newlines_communicate(self):
         # universal newlines through communicate()
-        p = subprocess.Popen([sys.executable, "-c",
-                          'import sys,os;' + SETBINARY +
+        # NB. replaced SETBINARY with the -u flag
+        p = subprocess.Popen([sys.executable, "-u", "-c",
+                          'import sys,os;' + #SETBINARY +
                           'sys.stdout.write("line1\\n");'
                           'sys.stdout.flush();'
                           'sys.stdout.write("line2\\r");'
diff --git a/lib-python/2.7/test/test_support.py b/lib-python/2.7/test/test_support.py
--- a/lib-python/2.7/test/test_support.py
+++ b/lib-python/2.7/test/test_support.py
@@ -431,16 +431,20 @@
             rmtree(name)

-def findfile(file, here=__file__, subdir=None):
+def findfile(file, here=None, subdir=None):
     """Try to find a file on sys.path and the working directory.
    If it is not found the argument passed to the function is returned (this
    does not necessarily signal failure; could still be the legitimate path)."""
+    import test
     if os.path.isabs(file):
         return file
     if subdir is not None:
         file = os.path.join(subdir, file)
     path = sys.path
-    path = [os.path.dirname(here)] + path
+    if here is None:
+        path = test.__path__ + path
+    else:
+        path = [os.path.dirname(here)] + path
     for dn in path:
         fn = os.path.join(dn, file)
         if os.path.exists(fn): return fn
@@ -1050,15 +1054,33 @@
     guards, default = _parse_guards(guards)
     return guards.get(platform.python_implementation().lower(), default)

+# ----------------------------------
+# PyPy extension: you can run::
+#     python ..../test_foo.py --pdb
+# to get a pdb prompt in case of exceptions
+ResultClass = unittest.TextTestRunner.resultclass
+
+class TestResultWithPdb(ResultClass):
+
+    def addError(self, testcase, exc_info):
+        ResultClass.addError(self, testcase, exc_info)
+        if '--pdb' in sys.argv:
+            import pdb, traceback
+            traceback.print_tb(exc_info[2])
+            pdb.post_mortem(exc_info[2])
+
+# ----------------------------------

 def _run_suite(suite):
     """Run tests from a unittest.TestSuite-derived class."""
     if verbose:
-        runner = unittest.TextTestRunner(sys.stdout, verbosity=2)
+        runner = unittest.TextTestRunner(sys.stdout, verbosity=2,
+                                         resultclass=TestResultWithPdb)
     else:
         runner = BasicTestRunner()
+
     result = runner.run(suite)
     if not result.wasSuccessful():
         if len(result.errors) == 1 and not result.failures:
@@ -1071,6 +1093,34 @@
             err += "; run in verbose mode for details"
         raise TestFailed(err)

+# ----------------------------------
+# PyPy extension: you can run::
+#     python ..../test_foo.py --filter bar
+# to run only the test cases whose name contains bar
+
+def filter_maybe(suite):
+    try:
+        i = sys.argv.index('--filter')
+        filter = sys.argv[i+1]
+    except (ValueError, IndexError):
+        return suite
+    tests = []
+    for test in linearize_suite(suite):
+        if filter in test._testMethodName:
+            tests.append(test)
+    return unittest.TestSuite(tests)
+
+def linearize_suite(suite_or_test):
+    try:
+        it = iter(suite_or_test)
+    except TypeError:
+        yield suite_or_test
+        return
+    for subsuite in it:
+        for item in linearize_suite(subsuite):
+            yield item
+
+# ----------------------------------

 def run_unittest(*classes):
     """Run tests from unittest.TestCase-derived classes."""
@@ -1086,6 +1136,7 @@
             suite.addTest(cls)
         else:
             suite.addTest(unittest.makeSuite(cls))
+    suite = filter_maybe(suite)
     _run_suite(suite)

diff --git a/lib-python/2.7/test/test_syntax.py b/lib-python/2.7/test/test_syntax.py
--- a/lib-python/2.7/test/test_syntax.py
+++ b/lib-python/2.7/test/test_syntax.py
@@ -5,7 +5,8 @@
     >>> def f(x):
     ...     global x
     Traceback (most recent call last):
-    SyntaxError: name 'x' is local and global (, line 1)
+      File "", line 1
+    SyntaxError: name 'x' is local and global

 The tests are all raise SyntaxErrors.  They were created by checking
 each C call that raises SyntaxError.  There are several modules that
@@ -375,7 +376,7 @@
    In 2.5 there was a missing exception and an assert was triggered in a
    debug build.  The number of blocks must be greater than CO_MAXBLOCKS.  SF #1565514

-   >>> while 1:
+   >>> while 1: # doctest:+SKIP
    ...  while 2:
    ...   while 3:
    ...    while 4:
diff --git a/lib-python/2.7/test/test_sys.py b/lib-python/2.7/test/test_sys.py
--- a/lib-python/2.7/test/test_sys.py
+++ b/lib-python/2.7/test/test_sys.py
@@ -264,6 +264,7 @@
         self.assertEqual(sys.getdlopenflags(), oldflags+1)
         sys.setdlopenflags(oldflags)

+    @test.test_support.impl_detail("reference counting")
     def test_refcount(self):
         # n here must be a global in order for this test to pass while
         # tracing with a python function.  Tracing calls PyFrame_FastToLocals
@@ -287,7 +288,7 @@
             is sys._getframe().f_code
         )

-    # sys._current_frames() is a CPython-only gimmick.
+    @test.test_support.impl_detail("current_frames")
     def test_current_frames(self):
         have_threads = True
         try:
@@ -383,7 +384,10 @@
         self.assertEqual(len(sys.float_info), 11)
         self.assertEqual(sys.float_info.radix, 2)
         self.assertEqual(len(sys.long_info), 2)
-        self.assertTrue(sys.long_info.bits_per_digit % 5 == 0)
+        if test.test_support.check_impl_detail(cpython=True):
+            self.assertTrue(sys.long_info.bits_per_digit % 5 == 0)
+        else:
+            self.assertTrue(sys.long_info.bits_per_digit >= 1)
         self.assertTrue(sys.long_info.sizeof_digit >= 1)
         self.assertEqual(type(sys.long_info.bits_per_digit), int)
         self.assertEqual(type(sys.long_info.sizeof_digit), int)
@@ -432,6 +436,7 @@
             self.assertEqual(type(getattr(sys.flags, attr)), int, attr)
         self.assertTrue(repr(sys.flags))

+    @test.test_support.impl_detail("sys._clear_type_cache")
     def test_clear_type_cache(self):
         sys._clear_type_cache()

@@ -473,6 +478,7 @@
         p.wait()
         self.assertIn(executable, ["''", repr(sys.executable)])

+@unittest.skipUnless(test.test_support.check_impl_detail(), "sys.getsizeof()")
 class SizeofTest(unittest.TestCase):

     TPFLAGS_HAVE_GC = 1<<14
diff --git a/lib-python/2.7/test/test_sys_settrace.py b/lib-python/2.7/test/test_sys_settrace.py
--- a/lib-python/2.7/test/test_sys_settrace.py
+++ b/lib-python/2.7/test/test_sys_settrace.py
@@ -213,12 +213,16 @@
     "finally"

 def generator_example():
     # any() will leave the generator before its end
-    x = any(generator_function())
+    x = any(generator_function()); gc.collect()

     # the following lines were not traced
     for x in range(10):
         y = x

+# On CPython, when the generator is decref'ed to zero, we see the trace
+# for the "finally:" portion.  On PyPy, we don't see it before the next
+# garbage collection.  That's why we put gc.collect() on the same line above.
+
 generator_example.events = ([(0, 'call'),
                              (2, 'line'),
                              (-6, 'call'),
@@ -282,11 +286,11 @@
         self.compare_events(func.func_code.co_firstlineno,
                             tracer.events, func.events)

-    def set_and_retrieve_none(self):
+    def test_set_and_retrieve_none(self):
         sys.settrace(None)
         assert sys.gettrace() is None

-    def set_and_retrieve_func(self):
+    def test_set_and_retrieve_func(self):
         def fn(*args):
             pass

@@ -323,17 +327,24 @@
         self.run_test(tighterloop_example)

     def test_13_genexp(self):
-        self.run_test(generator_example)
-        # issue1265: if the trace function contains a generator,
-        # and if the traced function contains another generator
-        # that is not completely exhausted, the trace stopped.
-        # Worse: the 'finally' clause was not invoked.
-        tracer = Tracer()
-        sys.settrace(tracer.traceWithGenexp)
-        generator_example()
-        sys.settrace(None)
-        self.compare_events(generator_example.__code__.co_firstlineno,
-                            tracer.events, generator_example.events)
+        if self.using_gc:
+            test_support.gc_collect()
+            gc.enable()
+        try:
+            self.run_test(generator_example)
+            # issue1265: if the trace function contains a generator,
+            # and if the traced function contains another generator
+            # that is not completely exhausted, the trace stopped.
+            # Worse: the 'finally' clause was not invoked.
+            tracer = Tracer()
+            sys.settrace(tracer.traceWithGenexp)
+            generator_example()
+            sys.settrace(None)
+            self.compare_events(generator_example.__code__.co_firstlineno,
+                                tracer.events, generator_example.events)
+        finally:
+            if self.using_gc:
+                gc.disable()

     def test_14_onliner_if(self):
         def onliners():
diff --git a/lib-python/2.7/test/test_sysconfig.py b/lib-python/2.7/test/test_sysconfig.py
--- a/lib-python/2.7/test/test_sysconfig.py
+++ b/lib-python/2.7/test/test_sysconfig.py
@@ -209,13 +209,22 @@

         self.assertEqual(get_platform(), 'macosx-10.4-fat64')

-        for arch in ('ppc', 'i386', 'x86_64', 'ppc64'):
+        for arch in ('ppc', 'i386', 'ppc64', 'x86_64'):
             get_config_vars()['CFLAGS'] = ('-arch %s -isysroot '
                                            '/Developer/SDKs/MacOSX10.4u.sdk '
                                            '-fno-strict-aliasing -fno-common '
                                            '-dynamic -DNDEBUG -g -O3'%(arch,))

             self.assertEqual(get_platform(), 'macosx-10.4-%s'%(arch,))
+
+        # macosx with ARCHFLAGS set and empty _CONFIG_VARS
+        os.environ['ARCHFLAGS'] = '-arch i386'
+        sysconfig._CONFIG_VARS = None
+
+        # this will attempt to recreate the _CONFIG_VARS based on environment
+        # variables; used to check a problem with the PyPy's _init_posix
+        # implementation; see: issue 705
+        get_config_vars()

         # linux debian sarge
         os.name = 'posix'
@@ -235,7 +244,7 @@
     def test_get_scheme_names(self):
         wanted = ('nt', 'nt_user', 'os2', 'os2_home', 'osx_framework_user',
-                  'posix_home', 'posix_prefix', 'posix_user')
+                  'posix_home', 'posix_prefix', 'posix_user', 'pypy')
         self.assertEqual(get_scheme_names(), wanted)

     def test_symlink(self):
diff --git a/lib-python/2.7/test/test_tarfile.py b/lib-python/2.7/test/test_tarfile.py
--- a/lib-python/2.7/test/test_tarfile.py
+++ b/lib-python/2.7/test/test_tarfile.py
@@ -169,6 +169,7 @@
         except tarfile.ReadError:
             self.fail("tarfile.open() failed on empty archive")
         self.assertListEqual(tar.getmembers(), [])
+        tar.close()

     def test_null_tarfile(self):
         # Test for issue6123: Allow opening empty archives.
@@ -207,16 +208,21 @@
         fobj = open(self.tarname, "rb")
         tar = tarfile.open(fileobj=fobj, mode=self.mode)
         self.assertEqual(tar.name, os.path.abspath(fobj.name))
+        tar.close()

     def test_no_name_attribute(self):
-        data = open(self.tarname, "rb").read()
+        f = open(self.tarname, "rb")
+        data = f.read()
+        f.close()
         fobj = StringIO.StringIO(data)
         self.assertRaises(AttributeError, getattr, fobj, "name")
         tar = tarfile.open(fileobj=fobj, mode=self.mode)
         self.assertEqual(tar.name, None)

     def test_empty_name_attribute(self):
-        data = open(self.tarname, "rb").read()
+        f = open(self.tarname, "rb")
+        data = f.read()
+        f.close()
         fobj = StringIO.StringIO(data)
         fobj.name = ""
         tar = tarfile.open(fileobj=fobj, mode=self.mode)
@@ -334,7 +340,8 @@
         # constructor in case of an error. For the test we rely on
         # the fact that opening an empty file raises a ReadError.
         empty = os.path.join(TEMPDIR, "empty")
-        open(empty, "wb").write("")
+        with open(empty, "wb") as fid:
+            fid.write("")

         try:
             tar = object.__new__(tarfile.TarFile)
@@ -515,6 +522,7 @@
         self.tar = tarfile.open(self.tarname, mode=self.mode, encoding="iso8859-1")
         tarinfo = self.tar.getmember("pax/umlauts-�������")
         self._test_member(tarinfo, size=7011, chksum=md5_regtype)
+        self.tar.close()

 class LongnameTest(ReadTest):
@@ -675,6 +683,7 @@
             tar = tarfile.open(tmpname, self.mode)
             tarinfo = tar.gettarinfo(path)
             self.assertEqual(tarinfo.size, 0)
+            tar.close()
         finally:
             os.rmdir(path)
@@ -692,6 +701,7 @@
             tar.gettarinfo(target)
             tarinfo = tar.gettarinfo(link)
             self.assertEqual(tarinfo.size, 0)
+            tar.close()
         finally:
             os.remove(target)
             os.remove(link)
@@ -704,6 +714,7 @@
             tar = tarfile.open(tmpname, self.mode)
             tarinfo = tar.gettarinfo(path)
             self.assertEqual(tarinfo.size, 0)
+            tar.close()
         finally:
             os.remove(path)
@@ -722,6 +733,7 @@
             tar.add(dstname)
         os.chdir(cwd)
         self.assertTrue(tar.getnames() == [], "added the archive to itself")
+        tar.close()

     def test_exclude(self):
         tempdir = os.path.join(TEMPDIR, "exclude")
@@ -742,6 +754,7 @@
             tar = tarfile.open(tmpname, "r")
             self.assertEqual(len(tar.getmembers()), 1)
             self.assertEqual(tar.getnames()[0], "empty_dir")
+            tar.close()
         finally:
             shutil.rmtree(tempdir)
@@ -947,7 +960,9 @@
             fobj.close()
         elif self.mode.endswith("bz2"):
             dec = bz2.BZ2Decompressor()
-            data = open(tmpname, "rb").read()
+            f = open(tmpname, "rb")
+            data = f.read()
+            f.close()
             data = dec.decompress(data)
             self.assertTrue(len(dec.unused_data) == 0,
                     "found trailing data")
@@ -1026,6 +1041,7 @@
                         "unable to read longname member")
             self.assertEqual(tarinfo.linkname, member.linkname,
                         "unable to read longname member")
+        tar.close()

     def test_longname_1023(self):
         self._test(("longnam/" * 127) + "longnam")
@@ -1118,6 +1134,7 @@
         else:
             n = tar.getmembers()[0].name
             self.assertTrue(name == n, "PAX longname creation failed")
+        tar.close()

     def test_pax_global_header(self):
         pax_headers = {
@@ -1146,6 +1163,7 @@
                 tarfile.PAX_NUMBER_FIELDS[key](val)
             except (TypeError, ValueError):
                 self.fail("unable to convert pax header field")
+        tar.close()

     def test_pax_extended_header(self):
         # The fields from the pax header have priority over the
@@ -1165,6 +1183,7 @@
         self.assertEqual(t.pax_headers, pax_headers)
         self.assertEqual(t.name, "foo")
         self.assertEqual(t.uid, 123)
+        tar.close()

 class UstarUnicodeTest(unittest.TestCase):
@@ -1208,6 +1227,7 @@
         tarinfo.name = "foo"
         tarinfo.uname = u"���"
         self.assertRaises(UnicodeError, tar.addfile, tarinfo)
+        tar.close()

     def test_unicode_argument(self):
         tar = tarfile.open(tarname, "r", encoding="iso8859-1", errors="strict")
@@ -1262,6 +1282,7 @@
             tar = tarfile.open(tmpname, format=self.format, encoding="ascii",
                                errors=handler)
             self.assertEqual(tar.getnames()[0], name)
+            tar.close()

         self.assertRaises(UnicodeError, tarfile.open, tmpname,
                           encoding="ascii", errors="strict")
@@ -1274,6 +1295,7 @@
         tar = tarfile.open(tmpname, format=self.format, encoding="iso8859-1",
                            errors="utf-8")
         self.assertEqual(tar.getnames()[0], "���/" + u"�".encode("utf8"))
+        tar.close()

 class AppendTest(unittest.TestCase):
@@ -1301,6 +1323,7 @@
     def _test(self, names=["bar"], fileobj=None):
         tar = tarfile.open(self.tarname, fileobj=fileobj)
         self.assertEqual(tar.getnames(), names)
+        tar.close()

     def test_non_existing(self):
         self._add_testfile()
@@ -1319,7 +1342,9 @@

     def test_fileobj(self):
         self._create_testtar()
-        data = open(self.tarname).read()
+        f = open(self.tarname)
+        data = f.read()
+        f.close()
         fobj = StringIO.StringIO(data)
         self._add_testfile(fobj)
         fobj.seek(0)
@@ -1345,7 +1370,9 @@
     # Append mode is supposed to fail if the tarfile to append to
     # does not end with a zero block.
     def _test_error(self, data):
-        open(self.tarname, "wb").write(data)
+        f = open(self.tarname, "wb")
+        f.write(data)
+        f.close()
         self.assertRaises(tarfile.ReadError, self._add_testfile)

     def test_null(self):
diff --git a/lib-python/2.7/test/test_tempfile.py b/lib-python/2.7/test/test_tempfile.py
--- a/lib-python/2.7/test/test_tempfile.py
+++ b/lib-python/2.7/test/test_tempfile.py
@@ -23,8 +23,8 @@

 # TEST_FILES may need to be tweaked for systems depending on the maximum
 # number of files that can be opened at one time (see ulimit -n)
-if sys.platform in ('openbsd3', 'openbsd4'):
-    TEST_FILES = 48
+if sys.platform.startswith("openbsd"):
+    TEST_FILES = 64 # ulimit -n defaults to 128 for normal users
 else:
     TEST_FILES = 100
@@ -244,6 +244,7 @@
         dir = tempfile.mkdtemp()
         try:
             self.do_create(dir=dir).write("blat")
+            test_support.gc_collect()
         finally:
             os.rmdir(dir)
@@ -528,12 +529,15 @@
         self.do_create(suf="b")
         self.do_create(pre="a", suf="b")
         self.do_create(pre="aa", suf=".txt")
+        test_support.gc_collect()

     def test_many(self):
         # mktemp can choose many usable file names (stochastic)
         extant = range(TEST_FILES)
         for i in extant:
             extant[i] = self.do_create(pre="aa")
+        del extant
+        test_support.gc_collect()

 ##     def test_warning(self):
 ##         # mktemp issues a warning when used
diff --git a/lib-python/2.7/test/test_thread.py b/lib-python/2.7/test/test_thread.py
--- a/lib-python/2.7/test/test_thread.py
+++ b/lib-python/2.7/test/test_thread.py
@@ -128,6 +128,7 @@
         del task
         while not done:
             time.sleep(0.01)
+            test_support.gc_collect()
         self.assertEqual(thread._count(), orig)

diff --git a/lib-python/2.7/test/test_threading.py b/lib-python/2.7/test/test_threading.py
--- a/lib-python/2.7/test/test_threading.py
+++ b/lib-python/2.7/test/test_threading.py
@@ -161,6 +161,7 @@
     # PyThreadState_SetAsyncExc() is a CPython-only gimmick, not (currently)
     # exposed at the Python level.  This test relies on ctypes to get at it.
+    @test.test_support.cpython_only
     def test_PyThreadState_SetAsyncExc(self):
         try:
             import ctypes
@@ -266,6 +267,7 @@
         finally:
             threading._start_new_thread = _start_new_thread

+    @test.test_support.cpython_only
     def test_finalize_runnning_thread(self):
         # Issue 1402: the PyGILState_Ensure / _Release functions may be called
         # very late on python exit: on deallocation of a running thread for
@@ -383,6 +385,7 @@
         finally:
             sys.setcheckinterval(old_interval)

+    @test.test_support.cpython_only
     def test_no_refcycle_through_target(self):
         class RunSelfFunction(object):
             def __init__(self, should_raise):
@@ -425,6 +428,9 @@
             def joiningfunc(mainthread):
                 mainthread.join()
                 print 'end of thread'
+                # stdout is fully buffered because not a tty, we have to flush
+                # before exit.
+                sys.stdout.flush()
         \n""" + script

         p = subprocess.Popen([sys.executable, "-c", script],
                              stdout=subprocess.PIPE)
diff --git a/lib-python/2.7/test/test_threading_local.py b/lib-python/2.7/test/test_threading_local.py
--- a/lib-python/2.7/test/test_threading_local.py
+++ b/lib-python/2.7/test/test_threading_local.py
@@ -173,8 +173,9 @@
         obj = cls()
         obj.x = 5
         self.assertEqual(obj.__dict__, {'x': 5})
-        with self.assertRaises(AttributeError):
-            obj.__dict__ = {}
+        if test_support.check_impl_detail():
+            with self.assertRaises(AttributeError):
+                obj.__dict__ = {}
         with self.assertRaises(AttributeError):
             del obj.__dict__

diff --git a/lib-python/2.7/test/test_traceback.py b/lib-python/2.7/test/test_traceback.py
--- a/lib-python/2.7/test/test_traceback.py
+++ b/lib-python/2.7/test/test_traceback.py
@@ -5,7 +5,8 @@
 import sys
 import unittest
 from imp import reload
-from test.test_support import run_unittest, is_jython, Error
+from test.test_support import run_unittest, Error
+from test.test_support import impl_detail, check_impl_detail

 import traceback

@@ -49,10 +50,8 @@
         self.assertTrue(err[2].count('\n') == 1)   # and no additional newline
         self.assertTrue(err[1].find("+") == err[2].find("^"))  # in the right place

+    @impl_detail("other implementations may add a caret (why shouldn't they?)")
     def test_nocaret(self):
-        if is_jython:
-            # jython adds a caret in this case (why shouldn't it?)
-            return
         err = self.get_exception_format(self.syntax_error_without_caret,
                                         SyntaxError)
         self.assertTrue(len(err) == 3)
@@ -63,8 +62,11 @@
                                         IndentationError)
         self.assertTrue(len(err) == 4)
         self.assertTrue(err[1].strip() == "print 2")
-        self.assertIn("^", err[2])
-        self.assertTrue(err[1].find("2") == err[2].find("^"))
+        if check_impl_detail():
+            # on CPython, there is a "^" at the end of the line
+            # on PyPy, there is a "^" too, but at the start, more logically
+            self.assertIn("^", err[2])
+            self.assertTrue(err[1].find("2") == err[2].find("^"))

     def test_bug737473(self):
         import os, tempfile, time
@@ -74,7 +76,8 @@
         try:
             sys.path.insert(0, testdir)
             testfile = os.path.join(testdir, 'test_bug737473.py')
-            print >> open(testfile, 'w'), """
+            with open(testfile, 'w') as f:
+                print >> f, """
 def test():
     raise ValueError"""

@@ -96,7 +99,8 @@
             # three seconds are needed for this test to pass reliably :-(
             time.sleep(4)

-            print >> open(testfile, 'w'), """
+            with open(testfile, 'w') as f:
+                print >> f, """
 def test():
     raise NotImplementedError"""
             reload(test_bug737473)
diff --git a/lib-python/2.7/test/test_types.py b/lib-python/2.7/test/test_types.py
--- a/lib-python/2.7/test/test_types.py
+++ b/lib-python/2.7/test/test_types.py
@@ -1,7 +1,8 @@
 # Python test set -- part 6, built-in types

 from test.test_support import run_unittest, have_unicode, run_with_locale, \
-                              check_py3k_warnings
+                              check_py3k_warnings, \
+                              impl_detail, check_impl_detail

 import unittest
 import sys
 import locale
@@ -289,9 +290,14 @@
         # array.array() returns an object that does not implement a char buffer,
         # something which int() uses for conversion.
         import array
-        try: int(buffer(array.array('c')))
+        try: int(buffer(array.array('c', '5')))
         except TypeError: pass
-        else: self.fail("char buffer (at C level) not working")
+        else:
+            if check_impl_detail():
+                self.fail("char buffer (at C level) not working")
+            #else:
+            #   it works on PyPy, which does not have the distinction
+            #   between char buffer and binary buffer.  XXX fine enough?

     def test_int__format__(self):
         def test(i, format_spec, result):
@@ -741,6 +747,7 @@
         for code in 'xXobns':
             self.assertRaises(ValueError, format, 0, ',' + code)

+    @impl_detail("the types' internal size attributes are CPython-only")
     def test_internal_sizes(self):
         self.assertGreater(object.__basicsize__, 0)
         self.assertGreater(tuple.__itemsize__, 0)
diff --git a/lib-python/2.7/test/test_unicode.py b/lib-python/2.7/test/test_unicode.py
--- a/lib-python/2.7/test/test_unicode.py
+++ b/lib-python/2.7/test/test_unicode.py
@@ -448,10 +448,11 @@
             meth('\xff')
             with self.assertRaises(TypeError) as cm:
                 meth(['f'])
-            exc = str(cm.exception)
-            self.assertIn('unicode', exc)
-            self.assertIn('str', exc)
-            self.assertIn('tuple', exc)
+            if test_support.check_impl_detail():
+                exc = str(cm.exception)
+                self.assertIn('unicode', exc)
+                self.assertIn('str', exc)
+                self.assertIn('tuple', exc)

     @test_support.run_with_locale('LC_ALL', 'de_DE', 'fr_FR')
     def test_format_float(self):
@@ -1062,7 +1063,8 @@
         # to take a 64-bit long, this test should apply to all platforms.
         if sys.maxint > (1 << 32) or struct.calcsize('P') != 4:
             return
-        self.assertRaises(OverflowError, u't\tt\t'.expandtabs, sys.maxint)
+        self.assertRaises((OverflowError, MemoryError),
+                          u't\tt\t'.expandtabs, sys.maxint)

     def test__format__(self):
         def test(value, format, expected):
diff --git a/lib-python/2.7/test/test_unicodedata.py b/lib-python/2.7/test/test_unicodedata.py
--- a/lib-python/2.7/test/test_unicodedata.py
+++ b/lib-python/2.7/test/test_unicodedata.py
@@ -233,10 +233,12 @@
         # been loaded in this process.
         popen = subprocess.Popen(args, stderr=subprocess.PIPE)
         popen.wait()
-        self.assertEqual(popen.returncode, 1)
-        error = "SyntaxError: (unicode error) \N escapes not supported " \
-            "(can't load unicodedata module)"
-        self.assertIn(error, popen.stderr.read())
+        self.assertIn(popen.returncode, [0, 1]) # at least it did not segfault
+        if test.test_support.check_impl_detail():
+            self.assertEqual(popen.returncode, 1)
+            error = "SyntaxError: (unicode error) \N escapes not supported " \
+                "(can't load unicodedata module)"
+            self.assertIn(error, popen.stderr.read())

     def test_decimal_numeric_consistent(self):
         # Test that decimal and numeric are consistent,
diff --git a/lib-python/2.7/test/test_unpack.py b/lib-python/2.7/test/test_unpack.py
--- a/lib-python/2.7/test/test_unpack.py
+++ b/lib-python/2.7/test/test_unpack.py
@@ -62,14 +62,14 @@

     >>> a, b = t
     Traceback (most recent call last):
       ...
-    ValueError: too many values to unpack
+    ValueError: expected length 2, got 3

 Unpacking tuple of wrong size

     >>> a, b = l
     Traceback (most recent call last):
       ...
-    ValueError: too many values to unpack
+    ValueError: expected length 2, got 3

 Unpacking sequence too short
diff --git a/lib-python/2.7/test/test_urllib2.py b/lib-python/2.7/test/test_urllib2.py
--- a/lib-python/2.7/test/test_urllib2.py
+++ b/lib-python/2.7/test/test_urllib2.py
@@ -307,6 +307,9 @@
     def getresponse(self):
         return MockHTTPResponse(MockFile(), {}, 200, "OK")

+    def close(self):
+        pass
+
 class MockHandler:
     # useful for testing handler machinery
     # see add_ordered_mock_handlers() docstring
diff --git a/lib-python/2.7/test/test_warnings.py b/lib-python/2.7/test/test_warnings.py
--- a/lib-python/2.7/test/test_warnings.py
+++ b/lib-python/2.7/test/test_warnings.py
@@ -355,7 +355,8 @@
     # test_support.import_fresh_module utility function
     def test_accelerated(self):
         self.assertFalse(original_warnings is self.module)
-        self.assertFalse(hasattr(self.module.warn, 'func_code'))
+        self.assertFalse(hasattr(self.module.warn, 'func_code') and
+                         hasattr(self.module.warn.func_code, 'co_filename'))

 class PyWarnTests(BaseTest, WarnTests):
     module = py_warnings
@@ -364,7 +365,8 @@
     # test_support.import_fresh_module utility function
     def test_pure_python(self):
         self.assertFalse(original_warnings is self.module)
-        self.assertTrue(hasattr(self.module.warn, 'func_code'))
+        self.assertTrue(hasattr(self.module.warn, 'func_code') and
+                        hasattr(self.module.warn.func_code, 'co_filename'))

 class WCmdLineTests(unittest.TestCase):
diff --git a/lib-python/2.7/test/test_weakref.py b/lib-python/2.7/test/test_weakref.py
--- a/lib-python/2.7/test/test_weakref.py
+++ b/lib-python/2.7/test/test_weakref.py
@@ -1,4 +1,3 @@
-import gc
 import sys
 import unittest
 import UserList
@@ -6,6 +5,7 @@
 import operator

 from test import test_support
+from test.test_support import gc_collect

 # Used in ReferencesTestCase.test_ref_created_during_del() .
 ref_from_del = None
@@ -70,6 +70,7 @@
         ref1 = weakref.ref(o, self.callback)
         ref2 = weakref.ref(o, self.callback)
         del o
+        gc_collect()
         self.assertTrue(ref1() is None,
                      "expected reference to be invalidated")
         self.assertTrue(ref2() is None,
@@ -101,13 +102,16 @@
         ref1 = weakref.proxy(o, self.callback)
         ref2 = weakref.proxy(o, self.callback)
         del o
+        gc_collect()

         def check(proxy):
             proxy.bar

         self.assertRaises(weakref.ReferenceError, check, ref1)
         self.assertRaises(weakref.ReferenceError, check, ref2)
-        self.assertRaises(weakref.ReferenceError, bool, weakref.proxy(C()))
+        ref3 = weakref.proxy(C())
+        gc_collect()
+        self.assertRaises(weakref.ReferenceError, bool, ref3)
         self.assertTrue(self.cbcalled == 2)

     def check_basic_ref(self, factory):
@@ -124,6 +128,7 @@
         o = factory()
         ref = weakref.ref(o, self.callback)
         del o
+        gc_collect()
         self.assertTrue(self.cbcalled == 1,
                      "callback did not properly set 'cbcalled'")
         self.assertTrue(ref() is None,
@@ -148,6 +153,7 @@
         self.assertTrue(weakref.getweakrefcount(o) == 2,
                      "wrong weak ref count for object")
         del proxy
+        gc_collect()
         self.assertTrue(weakref.getweakrefcount(o) == 1,
                      "wrong weak ref count for object after deleting proxy")
@@ -325,6 +331,7 @@
                      "got wrong number of weak reference objects")

         del ref1, ref2, proxy1, proxy2
+        gc_collect()
         self.assertTrue(weakref.getweakrefcount(o) == 0,
                      "weak reference objects not unlinked from"
                      " referent when discarded.")
@@ -338,6 +345,7 @@
         ref1 = weakref.ref(o, self.callback)
         ref2 = weakref.ref(o, self.callback)
         del ref1
+        gc_collect()
         self.assertTrue(weakref.getweakrefs(o) == [ref2],
                      "list of refs does not match")
@@ -345,10 +353,12 @@
         ref1 = weakref.ref(o, self.callback)
         ref2 = weakref.ref(o, self.callback)
         del ref2
+        gc_collect()
         self.assertTrue(weakref.getweakrefs(o) == [ref1],
                      "list of refs does not match")

         del ref1
+        gc_collect()
         self.assertTrue(weakref.getweakrefs(o) == [],
                      "list of refs not cleared")
@@ -400,13 +410,11 @@
         # when the second attempt to remove the instance from the "list
         # of all objects" occurs.

-        import gc
-
         class C(object):
             pass

         c = C()
-        wr = weakref.ref(c, lambda ignore: gc.collect())
+        wr = weakref.ref(c, lambda ignore: gc_collect())
         del c

         # There endeth the first part.  It gets worse.
@@ -414,7 +422,7 @@

         c1 = C()
         c1.i = C()
-        wr = weakref.ref(c1.i, lambda ignore: gc.collect())
+        wr = weakref.ref(c1.i, lambda ignore: gc_collect())

         c2 = C()
         c2.c1 = c1
@@ -430,8 +438,6 @@
         del c2

     def test_callback_in_cycle_1(self):
-        import gc
-
         class J(object):
             pass
@@ -467,11 +473,9 @@
         # search II.__mro__, but that's NULL.  The result was a segfault in
         # a release build, and an assert failure in a debug build.
         del I, J, II
-        gc.collect()
+        gc_collect()

     def test_callback_in_cycle_2(self):
-        import gc
-
         # This is just like test_callback_in_cycle_1, except that II is an
         # old-style class.  The symptom is different then:  an instance of an
         # old-style class looks in its own __dict__ first.  'J' happens to
@@ -496,11 +500,9 @@
         I.wr = weakref.ref(J, I.acallback)

         del I, J, II
-        gc.collect()
+        gc_collect()

     def test_callback_in_cycle_3(self):
-        import gc
-
         # This one broke the first patch that fixed the last two.  In this
         # case, the objects reachable from the callback aren't also reachable
         # from the object (c1) *triggering* the callback:  you can get to
@@ -520,11 +522,9 @@
         c2.wr = weakref.ref(c1, c2.cb)

         del c1, c2
-        gc.collect()
+        gc_collect()

     def test_callback_in_cycle_4(self):
-        import gc
-
         # Like test_callback_in_cycle_3, except c2 and c1 have different
         # classes.  c2's class (C) isn't reachable from c1 then, so protecting
         # objects reachable from the dying object (c1) isn't enough to stop
@@ -548,11 +548,9 @@
         c2.wr = weakref.ref(c1, c2.cb)

         del c1, c2, C, D
-        gc.collect()
+        gc_collect()

     def test_callback_in_cycle_resurrection(self):
-        import gc
-
         # Do something nasty in a weakref callback:  resurrect objects
         # from dead cycles.  For this to be attempted, the weakref and
         # its callback must also be part of the cyclic trash (else the
@@ -583,7 +581,7 @@
         del c1, c2, C   # make them all trash
         self.assertEqual(alist, [])  # del isn't enough to reclaim anything

-        gc.collect()
+        gc_collect()
         # c1.wr and c2.wr were part of the cyclic trash, so should have
         # been cleared without their callbacks executing.  OTOH, the weakref
         # to C is bound to a function local (wr), and wasn't trash, so that
@@ -593,12 +591,10 @@
         self.assertEqual(wr(), None)

         del alist[:]
-        gc.collect()
+        gc_collect()
         self.assertEqual(alist, [])

     def test_callbacks_on_callback(self):
-        import gc
-
         # Set up weakref callbacks *on* weakref callbacks.
         alist = []
         def safe_callback(ignore):
@@ -626,12 +622,12 @@
         del callback, c, d, C
         self.assertEqual(alist, [])  # del isn't enough to clean up cycles
-        gc.collect()
+        gc_collect()
         self.assertEqual(alist, ["safe_callback called"])
         self.assertEqual(external_wr(), None)

         del alist[:]
-        gc.collect()
+        gc_collect()
         self.assertEqual(alist, [])

     def test_gc_during_ref_creation(self):
@@ -641,9 +637,11 @@
         self.check_gc_during_creation(weakref.proxy)

     def check_gc_during_creation(self, makeref):
-        thresholds = gc.get_threshold()
-        gc.set_threshold(1, 1, 1)
-        gc.collect()
+        if test_support.check_impl_detail():
+            import gc
+            thresholds = gc.get_threshold()
+            gc.set_threshold(1, 1, 1)
+        gc_collect()
         class A:
             pass
@@ -663,7 +661,8 @@
                 weakref.ref(referenced, callback)

         finally:
-            gc.set_threshold(*thresholds)
+            if test_support.check_impl_detail():
+                gc.set_threshold(*thresholds)

     def test_ref_created_during_del(self):
         # Bug #1377858
@@ -683,7 +682,7 @@
         r = weakref.ref(Exception)
         self.assertRaises(TypeError, r.__init__, 0, 0, 0, 0, 0)
         # No exception should be raised here
-        gc.collect()
+        gc_collect()

     def test_classes(self):
         # Check that both old-style classes and new-style classes
@@ -696,12 +695,12 @@
         weakref.ref(int)
         a = weakref.ref(A, l.append)
         A = None
-        gc.collect()
+        gc_collect()
         self.assertEqual(a(), None)
         self.assertEqual(l, [a])

         b = weakref.ref(B, l.append)
         B = None
-        gc.collect()
+        gc_collect()
         self.assertEqual(b(), None)
         self.assertEqual(l, [a, b])
@@ -722,6 +721,7 @@
         self.assertTrue(mr.called)
         self.assertEqual(mr.value, 24)
         del o
+        gc_collect()
         self.assertTrue(mr() is None)
         self.assertTrue(mr.called)
@@ -738,9 +738,11 @@
         self.assertEqual(weakref.getweakrefcount(o), 3)
         refs = weakref.getweakrefs(o)
         self.assertEqual(len(refs), 3)
-        self.assertTrue(r2 is refs[0])
-        self.assertIn(r1, refs[1:])
-        self.assertIn(r3, refs[1:])
+        assert set(refs) == set((r1, r2, r3))
+        if test_support.check_impl_detail():
+            self.assertTrue(r2 is refs[0])
+            self.assertIn(r1, refs[1:])
+            self.assertIn(r3, refs[1:])

     def test_subclass_refs_dont_conflate_callbacks(self):
         class MyRef(weakref.ref):
@@ -839,15 +841,18 @@
         del items1, items2
         self.assertTrue(len(dict) == self.COUNT)
         del objects[0]
+        gc_collect()
         self.assertTrue(len(dict) == (self.COUNT - 1),
                      "deleting object did not cause dictionary update")
         del objects, o
+        gc_collect()
         self.assertTrue(len(dict) == 0,
                      "deleting the values did not clear the dictionary")
         # regression on SF bug #447152:
         dict = weakref.WeakValueDictionary()
         self.assertRaises(KeyError, dict.__getitem__, 1)
         dict[2] = C()
+        gc_collect()
         self.assertRaises(KeyError, dict.__getitem__, 2)

     def test_weak_keys(self):
@@ -868,9 +873,11 @@
         del items1, items2
         self.assertTrue(len(dict) == self.COUNT)
         del objects[0]
+        gc_collect()
         self.assertTrue(len(dict) == (self.COUNT - 1),
                      "deleting object did not cause dictionary update")
         del objects, o
+        gc_collect()
         self.assertTrue(len(dict) == 0,
                      "deleting the keys did not clear the dictionary")
         o = Object(42)
@@ -986,13 +993,13 @@
         self.assertTrue(len(weakdict) == 2)
         k, v = weakdict.popitem()
         self.assertTrue(len(weakdict) == 1)
-        if k is key1:
+        if k == key1:
             self.assertTrue(v is value1)
         else:
             self.assertTrue(v is value2)
         k, v = weakdict.popitem()
         self.assertTrue(len(weakdict) == 0)
-        if k is key1:
+        if k == key1:
             self.assertTrue(v is value1)
         else:
             self.assertTrue(v is value2)
@@ -1137,6 +1144,7 @@
         for o in objs:
             count += 1
             del d[o]
+        gc_collect()
         self.assertEqual(len(d), 0)
         self.assertEqual(count, 2)

@@ -1177,6 +1185,7 @@
 >>> o is o2
 True
 >>> del o, o2
+>>> gc_collect()
 >>> print r()
 None
@@ -1229,6 +1238,7 @@
 >>> id2obj(a_id) is a
 True
 >>> del a
+>>> gc_collect()
 >>> try:
 ...     id2obj(a_id)
 ... except KeyError:
diff --git a/lib-python/2.7/test/test_weakset.py b/lib-python/2.7/test/test_weakset.py
--- a/lib-python/2.7/test/test_weakset.py
+++ b/lib-python/2.7/test/test_weakset.py
@@ -57,6 +57,7 @@
         self.assertEqual(len(self.s), len(self.d))
         self.assertEqual(len(self.fs), 1)
         del self.obj
+        test_support.gc_collect()
         self.assertEqual(len(self.fs), 0)

     def test_contains(self):
@@ -66,6 +67,7 @@
         self.assertNotIn(1, self.s)
         self.assertIn(self.obj, self.fs)
         del self.obj
+        test_support.gc_collect()
         self.assertNotIn(SomeClass('F'), self.fs)

     def test_union(self):
@@ -204,6 +206,7 @@
         self.assertEqual(self.s, dup)
         self.assertRaises(TypeError, self.s.add, [])
         self.fs.add(Foo())
+        test_support.gc_collect()
         self.assertTrue(len(self.fs) == 1)
         self.fs.add(self.obj)
         self.assertTrue(len(self.fs) == 1)
@@ -330,10 +333,11 @@
         next(it)             # Trigger internal iteration
         # Destroy an item
         del items[-1]
-        gc.collect()    # just in case
+        test_support.gc_collect()
         # We have removed either the first consumed items, or another one
         self.assertIn(len(list(it)), [len(items), len(items) - 1])
         del it
+        test_support.gc_collect()
         # The removal has been committed
         self.assertEqual(len(s), len(items))
diff --git a/lib-python/2.7/test/test_xml_etree.py b/lib-python/2.7/test/test_xml_etree.py
--- a/lib-python/2.7/test/test_xml_etree.py
+++ b/lib-python/2.7/test/test_xml_etree.py
@@ -1633,10 +1633,10 @@

     Check reference leak.
     >>> xmltoolkit63()
-    >>> count = sys.getrefcount(None)
+    >>> count = sys.getrefcount(None) #doctest: +SKIP
     >>> for i in range(1000):
     ...     xmltoolkit63()
-    >>> sys.getrefcount(None) - count
+    >>> sys.getrefcount(None) - count #doctest: +SKIP
     0
     """
diff --git a/lib-python/2.7/test/test_xmlrpc.py b/lib-python/2.7/test/test_xmlrpc.py
--- a/lib-python/2.7/test/test_xmlrpc.py
+++ b/lib-python/2.7/test/test_xmlrpc.py
@@ -308,7 +308,7 @@
     global ADDR, PORT, URL
     ADDR, PORT = serv.socket.getsockname()
     #connect to IP address directly.  This avoids socket.create_connection()
-    #trying to connect to "localhost" using all address families, which
+    #trying to connect to to "localhost" using all address families, which
    #causes slowdown e.g. on vista which supports AF_INET6.  The server listens
     #on AF_INET only.
     URL = "http://%s:%d"%(ADDR, PORT)
@@ -367,7 +367,7 @@
     global ADDR, PORT, URL
     ADDR, PORT = serv.socket.getsockname()
     #connect to IP address directly.  This avoids socket.create_connection()
-    #trying to connect to "localhost" using all address families, which
+    #trying to connect to to "localhost" using all address families, which
     #causes slowdown e.g. on vista which supports AF_INET6.  The server listens
     #on AF_INET only.
     URL = "http://%s:%d"%(ADDR, PORT)
@@ -435,6 +435,7 @@

     def tearDown(self):
         # wait on the server thread to terminate
+        test_support.gc_collect()   # to close the active connections
         self.evt.wait(10)

         # disable traceback reporting
@@ -472,9 +473,6 @@
             # protocol error; provide additional information in test output
             self.fail("%s\n%s" % (e, getattr(e, "headers", "")))

-    def test_unicode_host(self):
-        server = xmlrpclib.ServerProxy(u"http://%s:%d/RPC2"%(ADDR, PORT))
-        self.assertEqual(server.add("a", u"\xe9"), u"a\xe9")

     # [ch] The test 404 is causing lots of false alarms.
     def XXXtest_404(self):
@@ -589,12 +587,6 @@
         # This avoids waiting for the socket timeout.
         self.test_simple1()

-    def test_partial_post(self):
-        # Check that a partial POST doesn't make the server loop: issue #14001.
-        conn = httplib.HTTPConnection(ADDR, PORT)
-        conn.request('POST', '/RPC2 HTTP/1.0\r\nContent-Length: 100\r\n\r\nbye')
-        conn.close()
-
 class MultiPathServerTestCase(BaseServerTestCase):
     threadFunc = staticmethod(http_multi_server)
     request_count = 2
diff --git a/lib-python/2.7/test/test_zipfile.py b/lib-python/2.7/test/test_zipfile.py
--- a/lib-python/2.7/test/test_zipfile.py
+++ b/lib-python/2.7/test/test_zipfile.py
@@ -234,8 +234,9 @@

         # Read the ZIP archive
         with zipfile.ZipFile(f, "r") as zipfp:
-            for line, zipline in zip(self.line_gen, zipfp.open(TESTFN)):
-                self.assertEqual(zipline, line + '\n')
+            with zipfp.open(TESTFN) as f:
+                for line, zipline in zip(self.line_gen, f):
+                    self.assertEqual(zipline, line + '\n')

     def test_readline_read_stored(self):
         # Issue #7610: calls to readline() interleaved with calls to read().
@@ -340,7 +341,8 @@
         produces the expected result."""
         with zipfile.ZipFile(TESTFN2, "w") as zipfp:
             zipfp.write(TESTFN)
-            self.assertEqual(zipfp.read(TESTFN), open(TESTFN).read())
+            with open(TESTFN) as f:
+                self.assertEqual(zipfp.read(TESTFN), f.read())

     @skipUnless(zlib, "requires zlib")
     def test_per_file_compression(self):
@@ -382,7 +384,8 @@
                 self.assertEqual(writtenfile, correctfile)

                 # make sure correct data is in correct file
-                self.assertEqual(fdata, open(writtenfile, "rb").read())
+                with open(writtenfile, "rb") as fid:
+                    self.assertEqual(fdata, fid.read())
                 os.remove(writtenfile)

         # remove the test file subdirectories
@@ -401,24 +404,25 @@
             else:
                 outfile = os.path.join(os.getcwd(), fpath)

-            self.assertEqual(fdata, open(outfile, "rb").read())
+            with open(outfile, "rb") as fid:
+                self.assertEqual(fdata, fid.read())
             os.remove(outfile)

         # remove the test file subdirectories
         shutil.rmtree(os.path.join(os.getcwd(), 'ziptest2dir'))

     def test_writestr_compression(self):
-        zipfp = zipfile.ZipFile(TESTFN2, "w")
-        zipfp.writestr("a.txt", "hello world", compress_type=zipfile.ZIP_STORED)
-        if zlib:
-            zipfp.writestr("b.txt", "hello world", compress_type=zipfile.ZIP_DEFLATED)
+        with zipfile.ZipFile(TESTFN2, "w") as zipfp:
+            zipfp.writestr("a.txt", "hello world", compress_type=zipfile.ZIP_STORED)
+            if zlib:
+                zipfp.writestr("b.txt", "hello world", compress_type=zipfile.ZIP_DEFLATED)

-        info = zipfp.getinfo('a.txt')
-        self.assertEqual(info.compress_type, zipfile.ZIP_STORED)
+            info = zipfp.getinfo('a.txt')
+            self.assertEqual(info.compress_type, zipfile.ZIP_STORED)

-        if zlib:
-            info = zipfp.getinfo('b.txt')
-            self.assertEqual(info.compress_type, zipfile.ZIP_DEFLATED)
+            if zlib:
+                info = zipfp.getinfo('b.txt')
+                self.assertEqual(info.compress_type, zipfile.ZIP_DEFLATED)

     def zip_test_writestr_permissions(self, f, compression):
@@ -646,7 +650,8 @@

     def test_write_non_pyfile(self):
         with zipfile.PyZipFile(TemporaryFile(), "w") as zipfp:
-            open(TESTFN, 'w').write('most definitely not a python file')
+            with open(TESTFN, 'w') as f:
+                f.write('most definitely not a python file')
             self.assertRaises(RuntimeError, zipfp.writepy, TESTFN)
             os.remove(TESTFN)
@@ -795,7 +800,8 @@
         self.assertRaises(RuntimeError, zipf.open, "foo.txt")
         self.assertRaises(RuntimeError, zipf.testzip)
         self.assertRaises(RuntimeError, zipf.writestr, "bogus.txt", "bogus")
-        open(TESTFN, 'w').write('zipfile test data')
+        with open(TESTFN, 'w') as fp:
+            fp.write('zipfile test data')
         self.assertRaises(RuntimeError, zipf.write, TESTFN)

     def test_bad_constructor_mode(self):
@@ -851,7 +857,6 @@

     def test_comments(self):
         """Check that comments on the archive are handled properly."""
-
         # check default comment is empty
         with zipfile.ZipFile(TESTFN, mode="w") as zipf:
             self.assertEqual(zipf.comment, '')
@@ -953,14 +958,16 @@
         with zipfile.ZipFile(TESTFN, mode="w") as zipf:
             pass
         try:
-            zipf = zipfile.ZipFile(TESTFN, mode="r")
+            with zipfile.ZipFile(TESTFN, mode="r") as zipf:
+                pass
         except zipfile.BadZipfile:
             self.fail("Unable to create empty ZIP file in 'w' mode")

         with zipfile.ZipFile(TESTFN, mode="a") as zipf:
             pass
         try:
-            zipf = zipfile.ZipFile(TESTFN, mode="r")
+
with zipfile.ZipFile(TESTFN, mode="r") as zipf: + pass except: self.fail("Unable to create empty ZIP file in 'a' mode") @@ -1160,6 +1167,8 @@ data1 += zopen1.read(500) data2 += zopen2.read(500) self.assertEqual(data1, data2) + zopen1.close() + zopen2.close() def test_different_file(self): # Verify that (when the ZipFile is in control of creating file objects) @@ -1207,9 +1216,9 @@ def test_store_dir(self): os.mkdir(os.path.join(TESTFN2, "x")) - zipf = zipfile.ZipFile(TESTFN, "w") - zipf.write(os.path.join(TESTFN2, "x"), "x") - self.assertTrue(zipf.filelist[0].filename.endswith("x/")) + with zipfile.ZipFile(TESTFN, "w") as zipf: + zipf.write(os.path.join(TESTFN2, "x"), "x") + self.assertTrue(zipf.filelist[0].filename.endswith("x/")) def tearDown(self): shutil.rmtree(TESTFN2) @@ -1226,7 +1235,8 @@ for n, s in enumerate(self.seps): self.arcdata[s] = s.join(self.line_gen) + s self.arcfiles[s] = '%s-%d' % (TESTFN, n) - open(self.arcfiles[s], "wb").write(self.arcdata[s]) + with open(self.arcfiles[s], "wb") as f: + f.write(self.arcdata[s]) def make_test_archive(self, f, compression): # Create the ZIP archive @@ -1295,8 +1305,9 @@ # Read the ZIP archive with zipfile.ZipFile(f, "r") as zipfp: for sep, fn in self.arcfiles.items(): - for line, zipline in zip(self.line_gen, zipfp.open(fn, "rU")): - self.assertEqual(zipline, line + '\n') + with zipfp.open(fn, "rU") as f: + for line, zipline in zip(self.line_gen, f): + self.assertEqual(zipline, line + '\n') def test_read_stored(self): for f in (TESTFN2, TemporaryFile(), StringIO()): diff --git a/lib-python/2.7/test/test_zlib.py b/lib-python/2.7/test/test_zlib.py --- a/lib-python/2.7/test/test_zlib.py +++ b/lib-python/2.7/test/test_zlib.py @@ -1,6 +1,7 @@ import unittest from test.test_support import TESTFN, run_unittest, import_module, unlink, requires import binascii +import os import random from test.test_support import precisionbigmemtest, _1G, _4G import sys @@ -99,14 +100,7 @@ class BaseCompressTestCase(object): def 
check_big_compress_buffer(self, size, compress_func): - _1M = 1024 * 1024 - fmt = "%%0%dx" % (2 * _1M) - # Generate 10MB worth of random, and expand it by repeating it. - # The assumption is that zlib's memory is not big enough to exploit - # such spread out redundancy. - data = ''.join([binascii.a2b_hex(fmt % random.getrandbits(8 * _1M)) - for i in range(10)]) - data = data * (size // len(data) + 1) + data = os.urandom(size) try: compress_func(data) finally: diff --git a/lib-python/2.7/trace.py b/lib-python/2.7/trace.py --- a/lib-python/2.7/trace.py +++ b/lib-python/2.7/trace.py @@ -559,6 +559,10 @@ if len(funcs) == 1: dicts = [d for d in gc.get_referrers(funcs[0]) if isinstance(d, dict)] + if len(dicts) == 0: + # PyPy may store functions directly on the class + # (more exactly: the container is not a Python object) + dicts = funcs if len(dicts) == 1: classes = [c for c in gc.get_referrers(dicts[0]) if hasattr(c, "__bases__")] diff --git a/lib-python/2.7/urllib2.py b/lib-python/2.7/urllib2.py --- a/lib-python/2.7/urllib2.py +++ b/lib-python/2.7/urllib2.py @@ -1171,6 +1171,7 @@ except TypeError: #buffering kw not supported r = h.getresponse() except socket.error, err: # XXX what error? 
+ h.close() raise URLError(err) # Pick apart the HTTPResponse object to get the addinfourl diff --git a/lib-python/2.7/uuid.py b/lib-python/2.7/uuid.py --- a/lib-python/2.7/uuid.py +++ b/lib-python/2.7/uuid.py @@ -406,8 +406,12 @@ continue if hasattr(lib, 'uuid_generate_random'): _uuid_generate_random = lib.uuid_generate_random + _uuid_generate_random.argtypes = [ctypes.c_char * 16] + _uuid_generate_random.restype = None if hasattr(lib, 'uuid_generate_time'): _uuid_generate_time = lib.uuid_generate_time + _uuid_generate_time.argtypes = [ctypes.c_char * 16] + _uuid_generate_time.restype = None # The uuid_generate_* functions are broken on MacOS X 10.5, as noted # in issue #8621 the function generates the same sequence of values @@ -436,6 +440,9 @@ lib = None _UuidCreate = getattr(lib, 'UuidCreateSequential', getattr(lib, 'UuidCreate', None)) + if _UuidCreate is not None: + _UuidCreate.argtypes = [ctypes.c_char * 16] + _UuidCreate.restype = ctypes.c_int except: pass diff --git a/lib-python/2.7/zipfile.py b/lib-python/2.7/zipfile.py --- a/lib-python/2.7/zipfile.py +++ b/lib-python/2.7/zipfile.py @@ -648,6 +648,10 @@ return data +class ZipExtFileWithClose(ZipExtFile): + def close(self): + self._fileobj.close() + class ZipFile: """ Class with methods to open, read, write, close, list zip files. @@ -843,9 +847,9 @@ try: # Read by chunks, to avoid an OverflowError or a # MemoryError with very large embedded files. 
- f = self.open(zinfo.filename, "r") - while f.read(chunk_size): # Check CRC-32 - pass + with self.open(zinfo.filename, "r") as f: + while f.read(chunk_size): # Check CRC-32 + pass except BadZipfile: return zinfo.filename @@ -864,7 +868,9 @@ def read(self, name, pwd=None): """Return file bytes (as a string) for name.""" - return self.open(name, "r", pwd).read() + with self.open(name, "r", pwd) as f: + retval = f.read() + return retval def open(self, name, mode="r", pwd=None): """Return file-like object for 'name'.""" @@ -881,59 +887,66 @@ else: zef_file = open(self.filename, 'rb') - # Make sure we have an info object - if isinstance(name, ZipInfo): - # 'name' is already an info object - zinfo = name + try: + # Make sure we have an info object + if isinstance(name, ZipInfo): + # 'name' is already an info object + zinfo = name + else: + # Get info object for name + zinfo = self.getinfo(name) + + zef_file.seek(zinfo.header_offset, 0) + + # Skip the file header: + fheader = zef_file.read(sizeFileHeader) + if fheader[0:4] != stringFileHeader: + raise BadZipfile, "Bad magic number for file header" + + fheader = struct.unpack(structFileHeader, fheader) + fname = zef_file.read(fheader[_FH_FILENAME_LENGTH]) + if fheader[_FH_EXTRA_FIELD_LENGTH]: + zef_file.read(fheader[_FH_EXTRA_FIELD_LENGTH]) + + if fname != zinfo.orig_filename: + raise BadZipfile, \ + 'File name in directory "%s" and header "%s" differ.' % ( + zinfo.orig_filename, fname) + + # check for encrypted flag & handle password + is_encrypted = zinfo.flag_bits & 0x1 + zd = None + if is_encrypted: + if not pwd: + pwd = self.pwd + if not pwd: + raise RuntimeError, "File %s is encrypted, " \ + "password required for extraction" % name + + zd = _ZipDecrypter(pwd) + # The first 12 bytes in the cypher stream is an encryption header + # used to strengthen the algorithm. 
The first 11 bytes are + # completely random, while the 12th contains the MSB of the CRC, + # or the MSB of the file time depending on the header type + # and is used to check the correctness of the password. + bytes = zef_file.read(12) + h = map(zd, bytes[0:12]) + if zinfo.flag_bits & 0x8: + # compare against the file type from extended local headers + check_byte = (zinfo._raw_time >> 8) & 0xff + else: + # compare against the CRC otherwise + check_byte = (zinfo.CRC >> 24) & 0xff + if ord(h[11]) != check_byte: + raise RuntimeError("Bad password for file", name) + except: + if not self._filePassed: + zef_file.close() + raise + if self._filePassed: + return ZipExtFile(zef_file, mode, zinfo, zd) else: - # Get info object for name - zinfo = self.getinfo(name) - - zef_file.seek(zinfo.header_offset, 0) - - # Skip the file header: - fheader = zef_file.read(sizeFileHeader) - if fheader[0:4] != stringFileHeader: - raise BadZipfile, "Bad magic number for file header" - - fheader = struct.unpack(structFileHeader, fheader) - fname = zef_file.read(fheader[_FH_FILENAME_LENGTH]) - if fheader[_FH_EXTRA_FIELD_LENGTH]: - zef_file.read(fheader[_FH_EXTRA_FIELD_LENGTH]) - - if fname != zinfo.orig_filename: - raise BadZipfile, \ - 'File name in directory "%s" and header "%s" differ.' % ( - zinfo.orig_filename, fname) - - # check for encrypted flag & handle password - is_encrypted = zinfo.flag_bits & 0x1 - zd = None - if is_encrypted: - if not pwd: - pwd = self.pwd - if not pwd: - raise RuntimeError, "File %s is encrypted, " \ - "password required for extraction" % name - - zd = _ZipDecrypter(pwd) - # The first 12 bytes in the cypher stream is an encryption header - # used to strengthen the algorithm. The first 11 bytes are - # completely random, while the 12th contains the MSB of the CRC, - # or the MSB of the file time depending on the header type - # and is used to check the correctness of the password. 
- bytes = zef_file.read(12) - h = map(zd, bytes[0:12]) - if zinfo.flag_bits & 0x8: - # compare against the file type from extended local headers - check_byte = (zinfo._raw_time >> 8) & 0xff - else: - # compare against the CRC otherwise - check_byte = (zinfo.CRC >> 24) & 0xff - if ord(h[11]) != check_byte: - raise RuntimeError("Bad password for file", name) - - return ZipExtFile(zef_file, mode, zinfo, zd) + return ZipExtFileWithClose(zef_file, mode, zinfo, zd) def extract(self, member, path=None, pwd=None): """Extract a member from the archive to the current working directory, diff --git a/lib-python/3.2/__future__.py b/lib-python/3.2/__future__.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/__future__.py @@ -0,0 +1,134 @@ +"""Record of phased-in incompatible language changes. + +Each line is of the form: + + FeatureName = "_Feature(" OptionalRelease "," MandatoryRelease "," + CompilerFlag ")" + +where, normally, OptionalRelease < MandatoryRelease, and both are 5-tuples +of the same form as sys.version_info: + + (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int + PY_MINOR_VERSION, # the 1; an int + PY_MICRO_VERSION, # the 0; an int + PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string + PY_RELEASE_SERIAL # the 3; an int + ) + +OptionalRelease records the first release in which + + from __future__ import FeatureName + +was accepted. + +In the case of MandatoryReleases that have not yet occurred, +MandatoryRelease predicts the release in which the feature will become part +of the language. + +Else MandatoryRelease records when the feature became part of the language; +in releases at or after that, modules no longer need + + from __future__ import FeatureName + +to use the feature in question, but may continue to use such imports. + +MandatoryRelease may also be None, meaning that a planned feature got +dropped. + +Instances of class _Feature have two corresponding methods, +.getOptionalRelease() and .getMandatoryRelease(). 
+ +CompilerFlag is the (bitfield) flag that should be passed in the fourth +argument to the builtin function compile() to enable the feature in +dynamically compiled code. This flag is stored in the .compiler_flag +attribute on _Future instances. These values must match the appropriate +#defines of CO_xxx flags in Include/compile.h. + +No feature line is ever to be deleted from this file. +""" + +all_feature_names = [ + "nested_scopes", + "generators", + "division", + "absolute_import", + "with_statement", + "print_function", + "unicode_literals", + "barry_as_FLUFL", +] + +__all__ = ["all_feature_names"] + all_feature_names + +# The CO_xxx symbols are defined here under the same names used by +# compile.h, so that an editor search will find them here. However, +# they're not exported in __all__, because they don't really belong to +# this module. +CO_NESTED = 0x0010 # nested_scopes +CO_GENERATOR_ALLOWED = 0 # generators (obsolete, was 0x1000) +CO_FUTURE_DIVISION = 0x2000 # division +CO_FUTURE_ABSOLUTE_IMPORT = 0x4000 # perform absolute imports by default +CO_FUTURE_WITH_STATEMENT = 0x8000 # with statement +CO_FUTURE_PRINT_FUNCTION = 0x10000 # print function +CO_FUTURE_UNICODE_LITERALS = 0x20000 # unicode string literals +CO_FUTURE_BARRY_AS_BDFL = 0x40000 + +class _Feature: + def __init__(self, optionalRelease, mandatoryRelease, compiler_flag): + self.optional = optionalRelease + self.mandatory = mandatoryRelease + self.compiler_flag = compiler_flag + + def getOptionalRelease(self): + """Return first release in which this feature was recognized. + + This is a 5-tuple, of the same form as sys.version_info. + """ + + return self.optional + + def getMandatoryRelease(self): + """Return release in which this feature will become mandatory. + + This is a 5-tuple, of the same form as sys.version_info, or, if + the feature was dropped, is None. 
+ """ + + return self.mandatory + + def __repr__(self): + return "_Feature" + repr((self.optional, + self.mandatory, + self.compiler_flag)) + +nested_scopes = _Feature((2, 1, 0, "beta", 1), + (2, 2, 0, "alpha", 0), + CO_NESTED) + +generators = _Feature((2, 2, 0, "alpha", 1), + (2, 3, 0, "final", 0), + CO_GENERATOR_ALLOWED) + +division = _Feature((2, 2, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_DIVISION) + +absolute_import = _Feature((2, 5, 0, "alpha", 1), + (2, 7, 0, "alpha", 0), + CO_FUTURE_ABSOLUTE_IMPORT) + +with_statement = _Feature((2, 5, 0, "alpha", 1), + (2, 6, 0, "alpha", 0), + CO_FUTURE_WITH_STATEMENT) + +print_function = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_PRINT_FUNCTION) + +unicode_literals = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_UNICODE_LITERALS) + +barry_as_FLUFL = _Feature((3, 1, 0, "alpha", 2), + (3, 9, 0, "alpha", 0), + CO_FUTURE_BARRY_AS_BDFL) diff --git a/lib-python/3.2/__phello__.foo.py b/lib-python/3.2/__phello__.foo.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/__phello__.foo.py @@ -0,0 +1,1 @@ +# This file exists as a helper for the test.test_frozen module. diff --git a/lib-python/3.2/_abcoll.py b/lib-python/3.2/_abcoll.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_abcoll.py @@ -0,0 +1,623 @@ +# Copyright 2007 Google, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +"""Abstract Base Classes (ABCs) for collections, according to PEP 3119. + +DON'T USE THIS MODULE DIRECTLY! The classes here should be imported +via collections; they are defined here only to alleviate certain +bootstrapping issues. Unit tests are in test_collections. 
+""" + +from abc import ABCMeta, abstractmethod +import sys + +__all__ = ["Hashable", "Iterable", "Iterator", + "Sized", "Container", "Callable", + "Set", "MutableSet", + "Mapping", "MutableMapping", + "MappingView", "KeysView", "ItemsView", "ValuesView", + "Sequence", "MutableSequence", + "ByteString", + ] + + +### collection related types which are not exposed through builtin ### +## iterators ## +bytes_iterator = type(iter(b'')) +bytearray_iterator = type(iter(bytearray())) +#callable_iterator = ??? +dict_keyiterator = type(iter({}.keys())) +dict_valueiterator = type(iter({}.values())) +dict_itemiterator = type(iter({}.items())) +list_iterator = type(iter([])) +list_reverseiterator = type(iter(reversed([]))) +range_iterator = type(iter(range(0))) +set_iterator = type(iter(set())) +str_iterator = type(iter("")) +tuple_iterator = type(iter(())) +zip_iterator = type(iter(zip())) +## views ## +dict_keys = type({}.keys()) +dict_values = type({}.values()) +dict_items = type({}.items()) +## misc ## +dict_proxy = type(type.__dict__) + + +### ONE-TRICK PONIES ### + +class Hashable(metaclass=ABCMeta): + + @abstractmethod + def __hash__(self): + return 0 + + @classmethod + def __subclasshook__(cls, C): + if cls is Hashable: + for B in C.__mro__: + if "__hash__" in B.__dict__: + if B.__dict__["__hash__"]: + return True + break + return NotImplemented + + +class Iterable(metaclass=ABCMeta): + + @abstractmethod + def __iter__(self): + while False: + yield None + + @classmethod + def __subclasshook__(cls, C): + if cls is Iterable: + if any("__iter__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Iterator(Iterable): + + @abstractmethod + def __next__(self): + raise StopIteration + + def __iter__(self): + return self + + @classmethod + def __subclasshook__(cls, C): + if cls is Iterator: + if (any("__next__" in B.__dict__ for B in C.__mro__) and + any("__iter__" in B.__dict__ for B in C.__mro__)): + return True + return NotImplemented + 
+Iterator.register(bytes_iterator) +Iterator.register(bytearray_iterator) +#Iterator.register(callable_iterator) +Iterator.register(dict_keyiterator) +Iterator.register(dict_valueiterator) +Iterator.register(dict_itemiterator) +Iterator.register(list_iterator) +Iterator.register(list_reverseiterator) +Iterator.register(range_iterator) +Iterator.register(set_iterator) +Iterator.register(str_iterator) +Iterator.register(tuple_iterator) +Iterator.register(zip_iterator) + +class Sized(metaclass=ABCMeta): + + @abstractmethod + def __len__(self): + return 0 + + @classmethod + def __subclasshook__(cls, C): + if cls is Sized: + if any("__len__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Container(metaclass=ABCMeta): + + @abstractmethod + def __contains__(self, x): + return False + + @classmethod + def __subclasshook__(cls, C): + if cls is Container: + if any("__contains__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Callable(metaclass=ABCMeta): + + @abstractmethod + def __call__(self, *args, **kwds): + return False + + @classmethod + def __subclasshook__(cls, C): + if cls is Callable: + if any("__call__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +### SETS ### + + +class Set(Sized, Iterable, Container): + + """A set is a finite, iterable container. + + This class provides concrete generic implementations of all + methods except for __contains__, __iter__ and __len__. + + To override the comparisons (presumably for speed, as the + semantics are fixed), all you have to do is redefine __le__ and + then the other operations will automatically follow suit. 
+ """ + + def __le__(self, other): + if not isinstance(other, Set): + return NotImplemented + if len(self) > len(other): + return False + for elem in self: + if elem not in other: + return False + return True + + def __lt__(self, other): + if not isinstance(other, Set): + return NotImplemented + return len(self) < len(other) and self.__le__(other) + + def __gt__(self, other): + if not isinstance(other, Set): + return NotImplemented + return other < self + + def __ge__(self, other): + if not isinstance(other, Set): + return NotImplemented + return other <= self + + def __eq__(self, other): + if not isinstance(other, Set): + return NotImplemented + return len(self) == len(other) and self.__le__(other) + + def __ne__(self, other): + return not (self == other) + + @classmethod + def _from_iterable(cls, it): + '''Construct an instance of the class from any iterable input. + + Must override this method if the class constructor signature + does not accept an iterable for an input. + ''' + return cls(it) + + def __and__(self, other): + if not isinstance(other, Iterable): + return NotImplemented + return self._from_iterable(value for value in other if value in self) + + def isdisjoint(self, other): + for value in other: + if value in self: + return False + return True + + def __or__(self, other): + if not isinstance(other, Iterable): + return NotImplemented + chain = (e for s in (self, other) for e in s) + return self._from_iterable(chain) + + def __sub__(self, other): + if not isinstance(other, Set): + if not isinstance(other, Iterable): + return NotImplemented + other = self._from_iterable(other) + return self._from_iterable(value for value in self + if value not in other) + + def __xor__(self, other): + if not isinstance(other, Set): + if not isinstance(other, Iterable): + return NotImplemented + other = self._from_iterable(other) + return (self - other) | (other - self) + + def _hash(self): + """Compute the hash value of a set. 
+ + Note that we don't define __hash__: not all sets are hashable. + But if you define a hashable set type, its __hash__ should + call this function. + + This must be compatible __eq__. + + All sets ought to compare equal if they contain the same + elements, regardless of how they are implemented, and + regardless of the order of the elements; so there's not much + freedom for __eq__ or __hash__. We match the algorithm used + by the built-in frozenset type. + """ + MAX = sys.maxsize + MASK = 2 * MAX + 1 + n = len(self) + h = 1927868237 * (n + 1) + h &= MASK + for x in self: + hx = hash(x) + h ^= (hx ^ (hx << 16) ^ 89869747) * 3644798167 + h &= MASK + h = h * 69069 + 907133923 + h &= MASK + if h > MAX: + h -= MASK + 1 + if h == -1: + h = 590923713 + return h + +Set.register(frozenset) + + +class MutableSet(Set): + + @abstractmethod + def add(self, value): + """Add an element.""" + raise NotImplementedError + + @abstractmethod + def discard(self, value): + """Remove an element. Do not raise an exception if absent.""" + raise NotImplementedError + + def remove(self, value): + """Remove an element. If not a member, raise a KeyError.""" + if value not in self: + raise KeyError(value) + self.discard(value) + + def pop(self): + """Return the popped value. Raise KeyError if empty.""" + it = iter(self) + try: + value = next(it) + except StopIteration: + raise KeyError + self.discard(value) + return value + + def clear(self): + """This is slow (creates N new iterators!) 
but effective.""" + try: + while True: + self.pop() + except KeyError: + pass + + def __ior__(self, it): + for value in it: + self.add(value) + return self + + def __iand__(self, it): + for value in (self - it): + self.discard(value) + return self + + def __ixor__(self, it): + if it is self: + self.clear() + else: + if not isinstance(it, Set): + it = self._from_iterable(it) + for value in it: + if value in self: + self.discard(value) + else: + self.add(value) + return self + + def __isub__(self, it): + if it is self: + self.clear() + else: + for value in it: + self.discard(value) + return self + +MutableSet.register(set) + + +### MAPPINGS ### + + +class Mapping(Sized, Iterable, Container): + + @abstractmethod + def __getitem__(self, key): + raise KeyError + + def get(self, key, default=None): + try: + return self[key] + except KeyError: + return default + + def __contains__(self, key): + try: + self[key] + except KeyError: + return False + else: + return True + + def keys(self): + return KeysView(self) + + def items(self): + return ItemsView(self) + + def values(self): + return ValuesView(self) + + def __eq__(self, other): + if not isinstance(other, Mapping): + return NotImplemented + return dict(self.items()) == dict(other.items()) + + def __ne__(self, other): + return not (self == other) + + +class MappingView(Sized): + + def __init__(self, mapping): + self._mapping = mapping + + def __len__(self): + return len(self._mapping) + + def __repr__(self): + return '{0.__class__.__name__}({0._mapping!r})'.format(self) + + +class KeysView(MappingView, Set): + + @classmethod + def _from_iterable(self, it): + return set(it) + + def __contains__(self, key): + return key in self._mapping + + def __iter__(self): + for key in self._mapping: + yield key + +KeysView.register(dict_keys) + + +class ItemsView(MappingView, Set): + + @classmethod + def _from_iterable(self, it): + return set(it) + + def __contains__(self, item): + key, value = item + try: + v = self._mapping[key] + 
except KeyError: + return False + else: + return v == value + + def __iter__(self): + for key in self._mapping: + yield (key, self._mapping[key]) + +ItemsView.register(dict_items) + + +class ValuesView(MappingView): + + def __contains__(self, value): + for key in self._mapping: + if value == self._mapping[key]: + return True + return False + + def __iter__(self): + for key in self._mapping: + yield self._mapping[key] + +ValuesView.register(dict_values) + + +class MutableMapping(Mapping): + + @abstractmethod + def __setitem__(self, key, value): + raise KeyError + + @abstractmethod + def __delitem__(self, key): + raise KeyError + + __marker = object() + + def pop(self, key, default=__marker): + try: + value = self[key] + except KeyError: + if default is self.__marker: + raise + return default + else: + del self[key] + return value + + def popitem(self): + try: + key = next(iter(self)) + except StopIteration: + raise KeyError + value = self[key] + del self[key] + return key, value + + def clear(self): + try: + while True: + self.popitem() + except KeyError: + pass + + def update(*args, **kwds): + if len(args) > 2: + raise TypeError("update() takes at most 2 positional " + "arguments ({} given)".format(len(args))) + elif not args: + raise TypeError("update() takes at least 1 argument (0 given)") + self = args[0] + other = args[1] if len(args) >= 2 else () + + if isinstance(other, Mapping): + for key in other: + self[key] = other[key] + elif hasattr(other, "keys"): + for key in other.keys(): + self[key] = other[key] + else: + for key, value in other: + self[key] = value + for key, value in kwds.items(): + self[key] = value + + def setdefault(self, key, default=None): + try: + return self[key] + except KeyError: + self[key] = default + return default + +MutableMapping.register(dict) + + +### SEQUENCES ### + + +class Sequence(Sized, Iterable, Container): + + """All the operations on a read-only sequence. 
+ + Concrete subclasses must override __new__ or __init__, + __getitem__, and __len__. + """ + + @abstractmethod + def __getitem__(self, index): + raise IndexError + + def __iter__(self): + i = 0 + try: + while True: + v = self[i] + yield v + i += 1 + except IndexError: + return + + def __contains__(self, value): + for v in self: + if v == value: + return True + return False + + def __reversed__(self): + for i in reversed(range(len(self))): + yield self[i] + + def index(self, value): + for i, v in enumerate(self): + if v == value: + return i + raise ValueError + + def count(self, value): + return sum(1 for v in self if v == value) + +Sequence.register(tuple) +Sequence.register(str) +Sequence.register(range) + + +class ByteString(Sequence): + + """This unifies bytes and bytearray. + + XXX Should add all their methods. + """ + +ByteString.register(bytes) +ByteString.register(bytearray) + + +class MutableSequence(Sequence): + + @abstractmethod + def __setitem__(self, index, value): + raise IndexError + + @abstractmethod + def __delitem__(self, index): + raise IndexError + + @abstractmethod + def insert(self, index, value): + raise IndexError + + def append(self, value): + self.insert(len(self), value) + + def reverse(self): + n = len(self) + for i in range(n//2): + self[i], self[n-i-1] = self[n-i-1], self[i] + + def extend(self, values): + for v in values: + self.append(v) + + def pop(self, index=-1): + v = self[index] + del self[index] + return v + + def remove(self, value): + del self[self.index(value)] + + def __iadd__(self, values): + self.extend(values) + return self + +MutableSequence.register(list) +MutableSequence.register(bytearray) # Multiply inheriting, see ByteString diff --git a/lib-python/3.2/_compat_pickle.py b/lib-python/3.2/_compat_pickle.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_compat_pickle.py @@ -0,0 +1,81 @@ +# This module is used to map the old Python 2 names to the new names used in +# Python 3 for the pickle module. 
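[editor's note] The Sequence mixin above derives iteration, membership testing, index() and count() from just __getitem__ and __len__. A quick illustration using the installed counterpart of _abcoll (exposed as collections.abc in later Pythons; Squares is a made-up example class):

```python
from collections.abc import Sequence

class Squares(Sequence):
    """A read-only sequence defining only the two abstract methods."""
    def __init__(self, n):
        self.n = n
    def __getitem__(self, i):
        if not 0 <= i < self.n:
            raise IndexError(i)
        return i * i
    def __len__(self):
        return self.n

s = Squares(5)
print(list(s))      # [0, 1, 4, 9, 16] -- __iter__ comes from the mixin
print(9 in s)       # True             -- so does __contains__
print(s.index(16))  # 4
print(s.count(4))   # 1
```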
This needed to make pickle streams +# generated with Python 2 loadable by Python 3. + +# This is a copy of lib2to3.fixes.fix_imports.MAPPING. We cannot import +# lib2to3 and use the mapping defined there, because lib2to3 uses pickle. +# Thus, this could cause the module to be imported recursively. +IMPORT_MAPPING = { + 'StringIO': 'io', + 'cStringIO': 'io', + 'cPickle': 'pickle', + '__builtin__' : 'builtins', + 'copy_reg': 'copyreg', + 'Queue': 'queue', + 'SocketServer': 'socketserver', + 'ConfigParser': 'configparser', + 'repr': 'reprlib', + 'FileDialog': 'tkinter.filedialog', + 'tkFileDialog': 'tkinter.filedialog', + 'SimpleDialog': 'tkinter.simpledialog', + 'tkSimpleDialog': 'tkinter.simpledialog', + 'tkColorChooser': 'tkinter.colorchooser', + 'tkCommonDialog': 'tkinter.commondialog', + 'Dialog': 'tkinter.dialog', + 'Tkdnd': 'tkinter.dnd', + 'tkFont': 'tkinter.font', + 'tkMessageBox': 'tkinter.messagebox', + 'ScrolledText': 'tkinter.scrolledtext', + 'Tkconstants': 'tkinter.constants', + 'Tix': 'tkinter.tix', + 'ttk': 'tkinter.ttk', + 'Tkinter': 'tkinter', + 'markupbase': '_markupbase', + '_winreg': 'winreg', + 'thread': '_thread', + 'dummy_thread': '_dummy_thread', + 'dbhash': 'dbm.bsd', + 'dumbdbm': 'dbm.dumb', + 'dbm': 'dbm.ndbm', + 'gdbm': 'dbm.gnu', + 'xmlrpclib': 'xmlrpc.client', + 'DocXMLRPCServer': 'xmlrpc.server', + 'SimpleXMLRPCServer': 'xmlrpc.server', + 'httplib': 'http.client', + 'htmlentitydefs' : 'html.entities', + 'HTMLParser' : 'html.parser', + 'Cookie': 'http.cookies', + 'cookielib': 'http.cookiejar', + 'BaseHTTPServer': 'http.server', + 'SimpleHTTPServer': 'http.server', + 'CGIHTTPServer': 'http.server', + 'test.test_support': 'test.support', + 'commands': 'subprocess', + 'UserString' : 'collections', + 'UserList' : 'collections', + 'urlparse' : 'urllib.parse', + 'robotparser' : 'urllib.robotparser', + 'whichdb': 'dbm', + 'anydbm': 'dbm' +} + + +# This contains rename rules that are easy to handle. We ignore the more +# complex stuff (e.g. 
mapping the names in the urllib and types modules). +# These rules should be run before import names are fixed. +NAME_MAPPING = { + ('__builtin__', 'xrange'): ('builtins', 'range'), + ('__builtin__', 'reduce'): ('functools', 'reduce'), + ('__builtin__', 'intern'): ('sys', 'intern'), + ('__builtin__', 'unichr'): ('builtins', 'chr'), + ('__builtin__', 'basestring'): ('builtins', 'str'), + ('__builtin__', 'long'): ('builtins', 'int'), + ('itertools', 'izip'): ('builtins', 'zip'), + ('itertools', 'imap'): ('builtins', 'map'), + ('itertools', 'ifilter'): ('builtins', 'filter'), + ('itertools', 'ifilterfalse'): ('itertools', 'filterfalse'), +} + +# Same, but for 3.x to 2.x +REVERSE_IMPORT_MAPPING = dict((v, k) for (k, v) in IMPORT_MAPPING.items()) +REVERSE_NAME_MAPPING = dict((v, k) for (k, v) in NAME_MAPPING.items()) diff --git a/lib-python/3.2/_dummy_thread.py b/lib-python/3.2/_dummy_thread.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_dummy_thread.py @@ -0,0 +1,155 @@ +"""Drop-in replacement for the thread module. + +Meant to be used as a brain-dead substitute so that threaded code does +not need to be rewritten for when the thread module is not present. + +Suggested usage is:: + + try: + import _thread + except ImportError: + import _dummy_thread as _thread + +""" +# Exports only things specified by thread documentation; +# skipping obsolete synonyms allocate(), start_new(), exit_thread(). +__all__ = ['error', 'start_new_thread', 'exit', 'get_ident', 'allocate_lock', + 'interrupt_main', 'LockType'] + +# A dummy value +TIMEOUT_MAX = 2**31 + +# NOTE: this module can be imported early in the extension building process, +# and so top level imports of other modules should be avoided. Instead, all +# imports are done when needed on a function-by-function basis. Since threads +# are disabled, the import lock should not be an issue anyway (??). 
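`REVERSE_IMPORT_MAPPING` above is built by inverting the dict with a plain comprehension, so when two Python 2 names map to the same Python 3 module (both `StringIO` and `cStringIO` map to `io`), only one key survives the inversion. A small sketch of that behavior, using a trimmed copy of the mapping:

```python
IMPORT_MAPPING = {
    'StringIO': 'io',
    'cStringIO': 'io',
    'copy_reg': 'copyreg',
}

# Inverting a many-to-one mapping silently keeps only one of the
# colliding keys (whichever is seen last during iteration).
REVERSE_IMPORT_MAPPING = dict((v, k) for (k, v) in IMPORT_MAPPING.items())

assert REVERSE_IMPORT_MAPPING['copyreg'] == 'copy_reg'
# 'io' maps back to exactly one of its two Python 2 spellings:
assert REVERSE_IMPORT_MAPPING['io'] in ('StringIO', 'cStringIO')
```

This loss of information is acceptable here because the 3.x-to-2.x direction only needs *a* valid old name, not every one.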
+ +class error(Exception): + """Dummy implementation of _thread.error.""" + + def __init__(self, *args): + self.args = args + +def start_new_thread(function, args, kwargs={}): + """Dummy implementation of _thread.start_new_thread(). + + Compatibility is maintained by making sure that ``args`` is a + tuple and ``kwargs`` is a dictionary. If an exception is raised + and it is SystemExit (which can be done by _thread.exit()) it is + caught and nothing is done; all other exceptions are printed out + by using traceback.print_exc(). + + If the executed function calls interrupt_main the KeyboardInterrupt will be + raised when the function returns. + + """ + if type(args) != type(tuple()): + raise TypeError("2nd arg must be a tuple") + if type(kwargs) != type(dict()): + raise TypeError("3rd arg must be a dict") + global _main + _main = False + try: + function(*args, **kwargs) + except SystemExit: + pass + except: + import traceback + traceback.print_exc() + _main = True + global _interrupt + if _interrupt: + _interrupt = False + raise KeyboardInterrupt + +def exit(): + """Dummy implementation of _thread.exit().""" + raise SystemExit + +def get_ident(): + """Dummy implementation of _thread.get_ident(). + + Since this module should only be used when _threadmodule is not + available, it is safe to assume that the current process is the + only thread. Thus a constant can be safely returned. + """ + return -1 + +def allocate_lock(): + """Dummy implementation of _thread.allocate_lock().""" + return LockType() + +def stack_size(size=None): + """Dummy implementation of _thread.stack_size().""" + if size is not None: + raise error("setting thread stack size not supported") + return 0 + +class LockType(object): + """Class implementing dummy implementation of _thread.LockType. + + Compatibility is maintained by maintaining self.locked_status + which is a boolean that stores the state of the lock. 
Pickling of + the lock, though, should not be done since if the _thread module is + then used with an unpickled ``lock()`` from here problems could + occur from this class not having atomic methods. + + """ + + def __init__(self): + self.locked_status = False + + def acquire(self, waitflag=None, timeout=-1): + """Dummy implementation of acquire(). + + For blocking calls, self.locked_status is automatically set to + True and returned appropriately based on value of + ``waitflag``. If it is non-blocking, then the value is + actually checked and not set if it is already acquired. This + is all done so that threading.Condition's assert statements + aren't triggered and throw a little fit. + + """ + if waitflag is None or waitflag: + self.locked_status = True + return True + else: + if not self.locked_status: + self.locked_status = True + return True + else: + if timeout > 0: + import time + time.sleep(timeout) + return False + + __enter__ = acquire + + def __exit__(self, typ, val, tb): + self.release() + + def release(self): + """Release the dummy lock.""" + # XXX Perhaps shouldn't actually bother to test? Could lead + # to problems for complex, threaded code. + if not self.locked_status: + raise error + self.locked_status = False + return True + + def locked(self): + return self.locked_status + +# Used to signal that interrupt_main was called in a "thread" +_interrupt = False +# True when not executing in a "thread" +_main = True + +def interrupt_main(): + """Set _interrupt flag to True to have start_new_thread raise + KeyboardInterrupt upon exiting.""" + if _main: + raise KeyboardInterrupt + else: + global _interrupt + _interrupt = True diff --git a/lib-python/3.2/_markupbase.py b/lib-python/3.2/_markupbase.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_markupbase.py @@ -0,0 +1,395 @@ +"""Shared support for scanning document type declarations in HTML and XHTML. + +This module is used as a foundation for the html.parser module. 
It has no +documented public API and should not be used directly. + +""" + +import re + +_declname_match = re.compile(r'[a-zA-Z][-_.a-zA-Z0-9]*\s*').match +_declstringlit_match = re.compile(r'(\'[^\']*\'|"[^"]*")\s*').match +_commentclose = re.compile(r'--\s*>') +_markedsectionclose = re.compile(r']\s*]\s*>') + +# An analysis of the MS-Word extensions is available at +# http://www.planetpublish.com/xmlarena/xap/Thursday/WordtoXML.pdf + +_msmarkedsectionclose = re.compile(r']\s*>') + +del re + + +class ParserBase: + """Parser base class which provides some common support methods used + by the SGML/HTML and XHTML parsers.""" + + def __init__(self): + if self.__class__ is ParserBase: + raise RuntimeError( + "_markupbase.ParserBase must be subclassed") + + def error(self, message): + raise NotImplementedError( + "subclasses of ParserBase must override error()") + + def reset(self): + self.lineno = 1 + self.offset = 0 + + def getpos(self): + """Return current line number and offset.""" + return self.lineno, self.offset + + # Internal -- update line number and offset. This should be + # called for each piece of data exactly once, in order -- in other + # words the concatenation of all the input strings to this + # function should be exactly the entire input. + def updatepos(self, i, j): + if i >= j: + return j + rawdata = self.rawdata + nlines = rawdata.count("\n", i, j) + if nlines: + self.lineno = self.lineno + nlines + pos = rawdata.rindex("\n", i, j) # Should not fail + self.offset = j-(pos+1) + else: + self.offset = self.offset + j-i + return j + + _decl_otherchars = '' + + # Internal -- parse declaration (for use by subclasses). + def parse_declaration(self, i): + # This is some sort of declaration; in "HTML as + # deployed," this should only be the document type + # declaration (""). 
+ # ISO 8879:1986, however, has more complex + # declaration syntax for elements in , including: + # --comment-- + # [marked section] + # name in the following list: ENTITY, DOCTYPE, ELEMENT, + # ATTLIST, NOTATION, SHORTREF, USEMAP, + # LINKTYPE, LINK, IDLINK, USELINK, SYSTEM + rawdata = self.rawdata + j = i + 2 + assert rawdata[i:j] == "": + # the empty comment + return j + 1 + if rawdata[j:j+1] in ("-", ""): + # Start of comment followed by buffer boundary, + # or just a buffer boundary. + return -1 + # A simple, practical version could look like: ((name|stringlit) S*) + '>' + n = len(rawdata) + if rawdata[j:j+2] == '--': #comment + # Locate --.*-- as the body of the comment + return self.parse_comment(i) + elif rawdata[j] == '[': #marked section + # Locate [statusWord [...arbitrary SGML...]] as the body of the marked section + # Where statusWord is one of TEMP, CDATA, IGNORE, INCLUDE, RCDATA + # Note that this is extended by Microsoft Office "Save as Web" function + # to include [if...] and [endif]. + return self.parse_marked_section(i) + else: #all other declaration elements + decltype, j = self._scan_name(j, i) + if j < 0: + return j + if decltype == "doctype": + self._decl_otherchars = '' + while j < n: + c = rawdata[j] + if c == ">": + # end of declaration syntax + data = rawdata[i+2:j] + if decltype == "doctype": + self.handle_decl(data) + else: + # According to the HTML5 specs sections "8.2.4.44 Bogus + # comment state" and "8.2.4.45 Markup declaration open + # state", a comment token should be emitted. + # Calling unknown_decl provides more flexibility though. 
+ self.unknown_decl(data) + return j + 1 + if c in "\"'": + m = _declstringlit_match(rawdata, j) + if not m: + return -1 # incomplete + j = m.end() + elif c in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ": + name, j = self._scan_name(j, i) + elif c in self._decl_otherchars: + j = j + 1 + elif c == "[": + # this could be handled in a separate doctype parser + if decltype == "doctype": + j = self._parse_doctype_subset(j + 1, i) + elif decltype in {"attlist", "linktype", "link", "element"}: + # must tolerate []'d groups in a content model in an element declaration + # also in data attribute specifications of attlist declaration + # also link type declaration subsets in linktype declarations + # also link attribute specification lists in link declarations + self.error("unsupported '[' char in %s declaration" % decltype) + else: + self.error("unexpected '[' char in declaration") + else: + self.error( + "unexpected %r char in declaration" % rawdata[j]) + if j < 0: + return j + return -1 # incomplete + + # Internal -- parse a marked section + # Override this to handle MS-word extension syntax content + def parse_marked_section(self, i, report=1): + rawdata= self.rawdata + assert rawdata[i:i+3] == ' ending + match= _markedsectionclose.search(rawdata, i+3) + elif sectName in {"if", "else", "endif"}: + # look for MS Office ]> ending + match= _msmarkedsectionclose.search(rawdata, i+3) + else: + self.error('unknown status keyword %r in marked section' % rawdata[i+3:j]) + if not match: + return -1 + if report: + j = match.start(0) + self.unknown_decl(rawdata[i+3: j]) + return match.end(0) + + # Internal -- parse comment, return length or -1 if not terminated + def parse_comment(self, i, report=1): + rawdata = self.rawdata + if rawdata[i:i+4] != ' LOAD_CONST None def f(x): - None + y = None return x asm = disassemble(f) for elem in ('LOAD_GLOBAL',): @@ -67,10 +67,13 @@ self.assertIn(elem, asm) def test_pack_unpack(self): + # On PyPy, "a, b = ..." 
is even more optimized, by removing + # the ROT_TWO. But the ROT_TWO is not removed if assigning + # to more complex expressions, so check that. for line, elem in ( ('a, = a,', 'LOAD_CONST',), - ('a, b = a, b', 'ROT_TWO',), - ('a, b, c = a, b, c', 'ROT_THREE',), + ('a[1], b = a, b', 'ROT_TWO',), + ('a, b[2], c = a, b, c', 'ROT_THREE',), ): asm = dis_single(line) self.assertIn(elem, asm) @@ -78,6 +81,8 @@ self.assertNotIn('UNPACK_TUPLE', asm) def test_folding_of_tuples_of_constants(self): + # On CPython, "a,b,c=1,2,3" turns into "a,b,c=" + # but on PyPy, it turns into "a=1;b=2;c=3". for line, elem in ( ('a = 1,2,3', '((1, 2, 3))'), ('("a","b","c")', "(('a', 'b', 'c'))"), @@ -86,7 +91,8 @@ ('((1, 2), 3, 4)', '(((1, 2), 3, 4))'), ): asm = dis_single(line) - self.assertIn(elem, asm) + self.assert_(elem in asm or ( + line == 'a,b,c = 1,2,3' and 'UNPACK_TUPLE' not in asm)) self.assertNotIn('BUILD_TUPLE', asm) # Bug 1053819: Tuple of constants misidentified when presented with: @@ -139,12 +145,15 @@ def test_binary_subscr_on_unicode(self): # valid code get optimized - asm = dis_single('u"foo"[0]') - self.assertIn("(u'f')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) - asm = dis_single('u"\u0061\uffff"[1]') - self.assertIn("(u'\\uffff')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) + # XXX for now we always disable this optimization + # XXX see CPython's issue5057 + if 0: + asm = dis_single('u"foo"[0]') + self.assertIn("(u'f')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + asm = dis_single('u"\u0061\uffff"[1]') + self.assertIn("(u'\\uffff')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) # invalid code doesn't get optimized # out of range diff --git a/lib-python/2.7/test/test_pprint.py b/lib-python/2.7/test/test_pprint.py --- a/lib-python/2.7/test/test_pprint.py +++ b/lib-python/2.7/test/test_pprint.py @@ -233,7 +233,16 @@ frozenset([0, 2]), frozenset([0, 1])])}""" cube = test.test_set.cube(3) - self.assertEqual(pprint.pformat(cube), cube_repr_tgt) + # XXX issues of 
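The comments added to `test_pack_unpack` note that CPython compiles `a, b = b, a` with a `ROT_TWO` while PyPy's compiler removes it for plain-name targets. The bytecode can be inspected with `dis`; exact opcode names are implementation- and version-dependent (CPython 3.11 replaced `ROT_TWO` with `SWAP`), so this sketch only asserts on the stores that every version emits:

```python
import dis
import io

def disassemble(src):
    """Return the disassembly of a source string as text."""
    buf = io.StringIO()
    dis.dis(compile(src, '<peephole>', 'exec'), file=buf)
    return buf.getvalue()

asm = disassemble('a, b = b, a')
# Name stores are always present; the rotation/swap opcode between the
# loads and the stores is what the peephole optimizers disagree about.
assert 'STORE_NAME' in asm
print(asm)
```

Running this on different interpreters makes the implementation-detail nature of the test concrete: the stores are stable, the stack-shuffling opcodes are not.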
dictionary order, and for the case below, + # order of items in the frozenset([...]) representation. + # Whether we get precisely cube_repr_tgt or not is open + # to implementation-dependent choices (this test probably + # fails horribly in CPython if we tweak the dict order too). + got = pprint.pformat(cube) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cube_repr_tgt) + else: + self.assertEqual(eval(got), cube) cubo_repr_tgt = """\ {frozenset([frozenset([0, 2]), frozenset([0])]): frozenset([frozenset([frozenset([0, 2]), @@ -393,7 +402,11 @@ 2])])])}""" cubo = test.test_set.linegraph(cube) - self.assertEqual(pprint.pformat(cubo), cubo_repr_tgt) + got = pprint.pformat(cubo) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cubo_repr_tgt) + else: + self.assertEqual(eval(got), cubo) def test_depth(self): nested_tuple = (1, (2, (3, (4, (5, 6))))) diff --git a/lib-python/2.7/test/test_pydoc.py b/lib-python/2.7/test/test_pydoc.py --- a/lib-python/2.7/test/test_pydoc.py +++ b/lib-python/2.7/test/test_pydoc.py @@ -267,8 +267,8 @@ testpairs = ( ('i_am_not_here', 'i_am_not_here'), ('test.i_am_not_here_either', 'i_am_not_here_either'), - ('test.i_am_not_here.neither_am_i', 'i_am_not_here.neither_am_i'), - ('i_am_not_here.{}'.format(modname), 'i_am_not_here.{}'.format(modname)), + ('test.i_am_not_here.neither_am_i', 'i_am_not_here'), + ('i_am_not_here.{}'.format(modname), 'i_am_not_here'), ('test.{}'.format(modname), modname), ) @@ -292,8 +292,8 @@ result = run_pydoc(modname) finally: forget(modname) - expected = badimport_pattern % (modname, expectedinmsg) - self.assertEqual(expected, result) + expected = badimport_pattern % (modname, '(.+\\.)?' 
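The `test_pprint` change replaces an exact-string comparison with a round-trip check, which is the portable way to test `pprint` output when element order is an implementation detail. The same idea in isolation:

```python
import pprint

# Iteration order of sets and frozensets is implementation-dependent, so
# compare by value after eval() instead of comparing the exact string.
cube = {frozenset({0}), frozenset({1}), frozenset({0, 1})}
got = pprint.pformat(cube)
assert eval(got) == cube
```

Comparing `eval(got)` to the original accepts any valid ordering while still catching genuinely wrong output.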
+ expectedinmsg + '(\\..+)?$') + self.assertTrue(re.match(expected, result)) def test_input_strip(self): missing_module = " test.i_am_not_here " diff --git a/lib-python/2.7/test/test_pyexpat.py b/lib-python/2.7/test/test_pyexpat.py --- a/lib-python/2.7/test/test_pyexpat.py +++ b/lib-python/2.7/test/test_pyexpat.py @@ -570,6 +570,9 @@ self.assertEqual(self.n, 4) class MalformedInputText(unittest.TestCase): + # CPython seems to ship its own version of expat, they fixed it on this commit : + # http://svn.python.org/view?revision=74429&view=revision + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test1(self): xml = "\0\r\n" parser = expat.ParserCreate() @@ -579,6 +582,7 @@ except expat.ExpatError as e: self.assertEqual(str(e), 'unclosed token: line 2, column 0') + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test2(self): xml = "\r\n" parser = expat.ParserCreate() diff --git a/lib-python/2.7/test/test_repr.py b/lib-python/2.7/test/test_repr.py --- a/lib-python/2.7/test/test_repr.py +++ b/lib-python/2.7/test/test_repr.py @@ -9,6 +9,7 @@ import unittest from test.test_support import run_unittest, check_py3k_warnings +from test.test_support import check_impl_detail from repr import repr as r # Don't shadow builtin repr from repr import Repr @@ -145,8 +146,11 @@ # Functions eq(repr(hash), '') # Methods - self.assertTrue(repr(''.split).startswith( - '") def test_xrange(self): eq = self.assertEqual @@ -185,7 +189,10 @@ def test_descriptors(self): eq = self.assertEqual # method descriptors - eq(repr(dict.items), "") + if check_impl_detail(cpython=True): + eq(repr(dict.items), "") + elif check_impl_detail(pypy=True): + eq(repr(dict.items), "") # XXX member descriptors # XXX attribute descriptors # XXX slot descriptors @@ -247,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from 
areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_runpy.py b/lib-python/2.7/test/test_runpy.py --- a/lib-python/2.7/test/test_runpy.py +++ b/lib-python/2.7/test/test_runpy.py @@ -5,10 +5,15 @@ import sys import re import tempfile -from test.test_support import verbose, run_unittest, forget +from test.test_support import verbose, run_unittest, forget, check_impl_detail from test.script_helper import (temp_dir, make_script, compile_script, make_pkg, make_zip_script, make_zip_pkg) +if check_impl_detail(pypy=True): + no_lone_pyc_file = True +else: + no_lone_pyc_file = False + from runpy import _run_code, _run_module_code, run_module, run_path # Note: This module can't safely test _run_module_as_main as it @@ -168,13 +173,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 
1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + d2 = run_module(mod_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -190,13 +196,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", pkg_name - d2 = run_module(pkg_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", pkg_name + d2 = run_module(pkg_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, pkg_name) if verbose: print "Package executed successfully" @@ -244,15 +251,17 @@ self.assertIn("sibling", d1) self.assertIn("nephew", d1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name, run_name=run_name) # Read from bytecode - self.assertIn("__package__", d2) - self.assertTrue(d2["__package__"] == pkg_name) - self.assertIn("sibling", d2) - self.assertIn("nephew", d2) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + # Read from bytecode + d2 = run_module(mod_name, run_name=run_name) + self.assertIn("__package__", d2) + 
self.assertTrue(d2["__package__"] == pkg_name) + self.assertIn("sibling", d2) + self.assertIn("nephew", d2) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -345,6 +354,8 @@ script_dir, '') def test_directory_compiled(self): + if no_lone_pyc_file: + return with temp_dir() as script_dir: mod_name = '__main__' script_name = self._make_test_script(script_dir, mod_name) diff --git a/lib-python/2.7/test/test_scope.py b/lib-python/2.7/test/test_scope.py --- a/lib-python/2.7/test/test_scope.py +++ b/lib-python/2.7/test/test_scope.py @@ -1,6 +1,6 @@ import unittest from test.test_support import check_syntax_error, check_py3k_warnings, \ - check_warnings, run_unittest + check_warnings, run_unittest, gc_collect class ScopeTests(unittest.TestCase): @@ -432,6 +432,7 @@ for i in range(100): f1() + gc_collect() self.assertEqual(Foo.count, 0) diff --git a/lib-python/2.7/test/test_set.py b/lib-python/2.7/test/test_set.py --- a/lib-python/2.7/test/test_set.py +++ b/lib-python/2.7/test/test_set.py @@ -309,6 +309,7 @@ fo.close() test_support.unlink(test_support.TESTFN) + @test_support.impl_detail(pypy=False) def test_do_not_rehash_dict_keys(self): n = 10 d = dict.fromkeys(map(HashCountingInt, xrange(n))) @@ -559,6 +560,7 @@ p = weakref.proxy(s) self.assertEqual(str(p), str(s)) s = None + test_support.gc_collect() self.assertRaises(ReferenceError, str, p) # C API test only available in a debug build @@ -590,6 +592,7 @@ s.__init__(self.otherword) self.assertEqual(s, set(self.word)) + @test_support.impl_detail() def test_singleton_empty_frozenset(self): f = frozenset() efs = [frozenset(), frozenset([]), frozenset(()), frozenset(''), @@ -770,9 +773,10 @@ for v in self.set: self.assertIn(v, self.values) setiter = iter(self.set) - # note: __length_hint__ is an internal undocumented API, - # don't rely on it in your own programs - self.assertEqual(setiter.__length_hint__(), 
len(self.set)) + if test_support.check_impl_detail(): + # note: __length_hint__ is an internal undocumented API, + # don't rely on it in your own programs + self.assertEqual(setiter.__length_hint__(), len(self.set)) def test_pickling(self): p = pickle.dumps(self.set) @@ -1564,7 +1568,7 @@ for meth in (s.union, s.intersection, s.difference, s.symmetric_difference, s.isdisjoint): for g in (G, I, Ig, L, R): expected = meth(data) - actual = meth(G(data)) + actual = meth(g(data)) if isinstance(expected, bool): self.assertEqual(actual, expected) else: diff --git a/lib-python/2.7/test/test_sets.py b/lib-python/2.7/test/test_sets.py --- a/lib-python/2.7/test/test_sets.py +++ b/lib-python/2.7/test/test_sets.py @@ -686,7 +686,9 @@ set_list = sorted(self.set) self.assertEqual(len(dup_list), len(set_list)) for i, el in enumerate(dup_list): - self.assertIs(el, set_list[i]) + # Object identity is not guarnteed for immutable objects, so we + # can't use assertIs here. + self.assertEqual(el, set_list[i]) def test_deep_copy(self): dup = copy.deepcopy(self.set) diff --git a/lib-python/2.7/test/test_site.py b/lib-python/2.7/test/test_site.py --- a/lib-python/2.7/test/test_site.py +++ b/lib-python/2.7/test/test_site.py @@ -226,6 +226,10 @@ self.assertEqual(len(dirs), 1) wanted = os.path.join('xoxo', 'Lib', 'site-packages') self.assertEqual(dirs[0], wanted) + elif '__pypy__' in sys.builtin_module_names: + self.assertEquals(len(dirs), 1) + wanted = os.path.join('xoxo', 'site-packages') + self.assertEquals(dirs[0], wanted) elif os.sep == '/': self.assertEqual(len(dirs), 2) wanted = os.path.join('xoxo', 'lib', 'python' + sys.version[:3], diff --git a/lib-python/2.7/test/test_socket.py b/lib-python/2.7/test/test_socket.py --- a/lib-python/2.7/test/test_socket.py +++ b/lib-python/2.7/test/test_socket.py @@ -252,6 +252,7 @@ self.assertEqual(p.fileno(), s.fileno()) s.close() s = None + test_support.gc_collect() try: p.fileno() except ReferenceError: @@ -285,32 +286,34 @@ s.sendto(u'\u2620', 
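`__length_hint__` is an undocumented internal API, which is why the patched test only checks it under `check_impl_detail()`. Since Python 3.4 the supported way to query it is `operator.length_hint`, which falls back to a default when no hint is available:

```python
import operator

it = iter({1, 2, 3})
# On CPython the set iterator exposes a hint of the remaining length;
# other implementations may legitimately report no hint (0).
assert operator.length_hint(it) in (0, 3)

next(it)
assert operator.length_hint(it) in (0, 2)

# A plain generator has no hint at all:
assert operator.length_hint(x for x in range(10)) == 0
```

Test code written against `operator.length_hint` with a tolerant assertion, as above, stays valid across interpreters.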
sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None) - self.assertIn('not NoneType', str(cm.exception)) + self.assertIn('NoneType', str(cm.exception)) # 3 args with self.assertRaises(UnicodeEncodeError): s.sendto(u'\u2620', 0, sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, 0, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, None) - self.assertIn('not NoneType', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('not NoneType', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 'bar', sockname) - self.assertIn('an integer is required', str(cm.exception)) + self.assertIn('integer', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None, None) - self.assertIn('an integer is required', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('an integer is required', str(cm.exception)) # wrong number of args with self.assertRaises(TypeError) as cm: s.sendto('foo') - self.assertIn('(1 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, sockname, 4) - self.assertIn('(4 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) def testCrucialConstants(self): @@ -385,10 +388,10 @@ socket.htonl(k) socket.htons(k) for k in bad_values: - self.assertRaises(OverflowError, socket.ntohl, k) - self.assertRaises(OverflowError, socket.ntohs, k) - self.assertRaises(OverflowError, socket.htonl, k) - self.assertRaises(OverflowError, socket.htons, k) + self.assertRaises((OverflowError, ValueError), socket.ntohl, k) + self.assertRaises((OverflowError, ValueError), socket.ntohs, k) + 
self.assertRaises((OverflowError, ValueError), socket.htonl, k) + self.assertRaises((OverflowError, ValueError), socket.htons, k) def testGetServBy(self): eq = self.assertEqual @@ -428,8 +431,8 @@ if udpport is not None: eq(socket.getservbyport(udpport, 'udp'), service) # Make sure getservbyport does not accept out of range ports. - self.assertRaises(OverflowError, socket.getservbyport, -1) - self.assertRaises(OverflowError, socket.getservbyport, 65536) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, -1) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, 65536) def testDefaultTimeout(self): # Testing default timeout @@ -608,8 +611,8 @@ neg_port = port - 65536 sock = socket.socket() try: - self.assertRaises(OverflowError, sock.bind, (host, big_port)) - self.assertRaises(OverflowError, sock.bind, (host, neg_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, big_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, neg_port)) sock.bind((host, port)) finally: sock.close() @@ -1309,6 +1312,7 @@ closed = False def flush(self): pass def close(self): self.closed = True + def _decref_socketios(self): pass # must not close unless we request it: the original use of _fileobject # by module socket requires that the underlying socket not be closed until diff --git a/lib-python/2.7/test/test_sort.py b/lib-python/2.7/test/test_sort.py --- a/lib-python/2.7/test/test_sort.py +++ b/lib-python/2.7/test/test_sort.py @@ -140,7 +140,10 @@ return random.random() < 0.5 L = [C() for i in range(50)] - self.assertRaises(ValueError, L.sort) + try: + L.sort() + except ValueError: + pass def test_cmpNone(self): # Testing None as a comparison function. @@ -150,8 +153,10 @@ L.sort(None) self.assertEqual(L, range(50)) + @test_support.impl_detail(pypy=False) def test_undetected_mutation(self): # Python 2.4a1 did not always detect mutation + # So does pypy... 
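The pattern above, catching `(OverflowError, ValueError)`, keeps the byte-order and port-range tests valid on both interpreters: CPython raises `OverflowError` for out-of-range values while PyPy historically raised `ValueError`. A standalone sketch (the specific bad values are chosen to error on all Python 3 versions):

```python
import socket

# Round-tripping through network byte order is the identity for in-range values.
assert socket.ntohs(socket.htons(0x1234)) == 0x1234

# Negative or absurdly large values raise OverflowError on CPython and
# ValueError on some other implementations -- catch both for portability.
for bad in (-1, 1 << 70):
    try:
        socket.htons(bad)
    except (OverflowError, ValueError):
        pass
    else:
        raise AssertionError("expected an error for %r" % (bad,))
```

Tests that pin down one exact exception type for such boundary cases are testing an implementation detail, which is exactly what this part of the diff relaxes.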
memorywaster = [] for i in range(20): def mutating_cmp(x, y): @@ -226,7 +231,10 @@ def __del__(self): del data[:] data[:] = range(20) - self.assertRaises(ValueError, data.sort, key=SortKiller) + try: + data.sort(key=SortKiller) + except ValueError: + pass def test_key_with_mutating_del_and_exception(self): data = range(10) diff --git a/lib-python/2.7/test/test_ssl.py b/lib-python/2.7/test/test_ssl.py --- a/lib-python/2.7/test/test_ssl.py +++ b/lib-python/2.7/test/test_ssl.py @@ -881,6 +881,8 @@ c = socket.socket() c.connect((HOST, port)) listener_gone.wait() + # XXX why is it necessary? + test_support.gc_collect() try: ssl_sock = ssl.wrap_socket(c) except IOError: @@ -1330,10 +1332,8 @@ def test_main(verbose=False): global CERTFILE, SVN_PYTHON_ORG_ROOT_CERT - CERTFILE = os.path.join(os.path.dirname(__file__) or os.curdir, - "keycert.pem") - SVN_PYTHON_ORG_ROOT_CERT = os.path.join( - os.path.dirname(__file__) or os.curdir, + CERTFILE = test_support.findfile("keycert.pem") + SVN_PYTHON_ORG_ROOT_CERT = test_support.findfile( "https_svn_python_org_root.pem") if (not os.path.exists(CERTFILE) or diff --git a/lib-python/2.7/test/test_str.py b/lib-python/2.7/test/test_str.py --- a/lib-python/2.7/test/test_str.py +++ b/lib-python/2.7/test/test_str.py @@ -422,10 +422,11 @@ for meth in ('foo'.startswith, 'foo'.endswith): with self.assertRaises(TypeError) as cm: meth(['f']) - exc = str(cm.exception) - self.assertIn('unicode', exc) - self.assertIn('str', exc) - self.assertIn('tuple', exc) + if test_support.check_impl_detail(): + exc = str(cm.exception) + self.assertIn('unicode', exc) + self.assertIn('str', exc) + self.assertIn('tuple', exc) def test_main(): test_support.run_unittest(StrTest) diff --git a/lib-python/2.7/test/test_struct.py b/lib-python/2.7/test/test_struct.py --- a/lib-python/2.7/test/test_struct.py +++ b/lib-python/2.7/test/test_struct.py @@ -535,7 +535,8 @@ @unittest.skipUnless(IS32BIT, "Specific to 32bit machines") def test_crasher(self): - 
self.assertRaises(MemoryError, struct.pack, "357913941c", "a") + self.assertRaises((MemoryError, struct.error), struct.pack, + "357913941c", "a") def test_count_overflow(self): hugecount = '{}b'.format(sys.maxsize+1) diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/lib-python/2.7/test/test_support.py b/lib-python/2.7/test/test_support.py --- a/lib-python/2.7/test/test_support.py +++ b/lib-python/2.7/test/test_support.py @@ -431,16 +431,20 @@ rmtree(name) -def findfile(file, here=__file__, subdir=None): +def findfile(file, here=None, subdir=None): """Try to find a file on sys.path and the working directory. 
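The `test_subprocess` hunk swaps the `msvcrt`-based `SETBINARY` prelude for the interpreter's `-u` flag, which makes the child's standard streams unbuffered (and, on Python 2 Windows, binary) so the raw `\r` survives to the pipe before `universal_newlines` translates it on read. A minimal reproduction of the pattern:

```python
import subprocess
import sys

# -u makes the child's stdout unbuffered so the raw "\r" reaches the pipe;
# universal_newlines=True then translates it to "\n" when we read it back.
p = subprocess.Popen(
    [sys.executable, "-u", "-c",
     'import sys; sys.stdout.write("line1\\nline2\\r")'],
    stdout=subprocess.PIPE, universal_newlines=True)
out, _ = p.communicate()
assert out == "line1\nline2\n"
```

The trailing `\r` coming back as `\n` is the universal-newlines translation that the original test is exercising.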
If it is not found the argument passed to the function is returned (this does not necessarily signal failure; could still be the legitimate path).""" + import test if os.path.isabs(file): return file if subdir is not None: file = os.path.join(subdir, file) path = sys.path - path = [os.path.dirname(here)] + path + if here is None: + path = test.__path__ + path + else: + path = [os.path.dirname(here)] + path for dn in path: fn = os.path.join(dn, file) if os.path.exists(fn): return fn @@ -1050,15 +1054,33 @@ guards, default = _parse_guards(guards) return guards.get(platform.python_implementation().lower(), default) +# ---------------------------------- +# PyPy extension: you can run:: +# python ..../test_foo.py --pdb +# to get a pdb prompt in case of exceptions +ResultClass = unittest.TextTestRunner.resultclass + +class TestResultWithPdb(ResultClass): + + def addError(self, testcase, exc_info): + ResultClass.addError(self, testcase, exc_info) + if '--pdb' in sys.argv: + import pdb, traceback + traceback.print_tb(exc_info[2]) + pdb.post_mortem(exc_info[2]) + +# ---------------------------------- def _run_suite(suite): """Run tests from a unittest.TestSuite-derived class.""" if verbose: - runner = unittest.TextTestRunner(sys.stdout, verbosity=2) + runner = unittest.TextTestRunner(sys.stdout, verbosity=2, + resultclass=TestResultWithPdb) else: runner = BasicTestRunner() + result = runner.run(suite) if not result.wasSuccessful(): if len(result.errors) == 1 and not result.failures: @@ -1071,6 +1093,34 @@ err += "; run in verbose mode for details" raise TestFailed(err) +# ---------------------------------- +# PyPy extension: you can run:: +# python ..../test_foo.py --filter bar +# to run only the test cases whose name contains bar + +def filter_maybe(suite): + try: + i = sys.argv.index('--filter') + filter = sys.argv[i+1] + except (ValueError, IndexError): + return suite + tests = [] + for test in linearize_suite(suite): + if filter in test._testMethodName: + 
tests.append(test) + return unittest.TestSuite(tests) + +def linearize_suite(suite_or_test): + try: + it = iter(suite_or_test) + except TypeError: + yield suite_or_test + return + for subsuite in it: + for item in linearize_suite(subsuite): + yield item + +# ---------------------------------- def run_unittest(*classes): """Run tests from unittest.TestCase-derived classes.""" @@ -1086,6 +1136,7 @@ suite.addTest(cls) else: suite.addTest(unittest.makeSuite(cls)) + suite = filter_maybe(suite) _run_suite(suite) diff --git a/lib-python/2.7/test/test_syntax.py b/lib-python/2.7/test/test_syntax.py --- a/lib-python/2.7/test/test_syntax.py +++ b/lib-python/2.7/test/test_syntax.py @@ -5,7 +5,8 @@ >>> def f(x): ... global x Traceback (most recent call last): -SyntaxError: name 'x' is local and global (, line 1) + File "", line 1 +SyntaxError: name 'x' is local and global The tests are all raise SyntaxErrors. They were created by checking each C call that raises SyntaxError. There are several modules that @@ -375,7 +376,7 @@ In 2.5 there was a missing exception and an assert was triggered in a debug build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514 - >>> while 1: + >>> while 1: # doctest:+SKIP ... while 2: ... while 3: ... while 4: diff --git a/lib-python/2.7/test/test_sys.py b/lib-python/2.7/test/test_sys.py --- a/lib-python/2.7/test/test_sys.py +++ b/lib-python/2.7/test/test_sys.py @@ -264,6 +264,7 @@ self.assertEqual(sys.getdlopenflags(), oldflags+1) sys.setdlopenflags(oldflags) + @test.test_support.impl_detail("reference counting") def test_refcount(self): # n here must be a global in order for this test to pass while # tracing with a python function. Tracing calls PyFrame_FastToLocals @@ -287,7 +288,7 @@ is sys._getframe().f_code ) - # sys._current_frames() is a CPython-only gimmick. 
+    @test.test_support.impl_detail("current_frames")
     def test_current_frames(self):
         have_threads = True
         try:
@@ -383,7 +384,10 @@
         self.assertEqual(len(sys.float_info), 11)
         self.assertEqual(sys.float_info.radix, 2)
         self.assertEqual(len(sys.long_info), 2)
-        self.assertTrue(sys.long_info.bits_per_digit % 5 == 0)
+        if test.test_support.check_impl_detail(cpython=True):
+            self.assertTrue(sys.long_info.bits_per_digit % 5 == 0)
+        else:
+            self.assertTrue(sys.long_info.bits_per_digit >= 1)
         self.assertTrue(sys.long_info.sizeof_digit >= 1)
         self.assertEqual(type(sys.long_info.bits_per_digit), int)
         self.assertEqual(type(sys.long_info.sizeof_digit), int)
@@ -432,6 +436,7 @@
             self.assertEqual(type(getattr(sys.flags, attr)), int, attr)
         self.assertTrue(repr(sys.flags))
 
+    @test.test_support.impl_detail("sys._clear_type_cache")
     def test_clear_type_cache(self):
         sys._clear_type_cache()
 
@@ -473,6 +478,7 @@
         p.wait()
         self.assertIn(executable, ["''", repr(sys.executable)])
 
+@unittest.skipUnless(test.test_support.check_impl_detail(), "sys.getsizeof()")
 class SizeofTest(unittest.TestCase):
 
     TPFLAGS_HAVE_GC = 1<<14
diff --git a/lib-python/2.7/test/test_sys_settrace.py b/lib-python/2.7/test/test_sys_settrace.py
--- a/lib-python/2.7/test/test_sys_settrace.py
+++ b/lib-python/2.7/test/test_sys_settrace.py
@@ -213,12 +213,16 @@
     "finally"
 
 def generator_example():
     # any() will leave the generator before its end
-    x = any(generator_function())
+    x = any(generator_function()); gc.collect()
 
     # the following lines were not traced
     for x in range(10):
         y = x
 
+# On CPython, when the generator is decref'ed to zero, we see the trace
+# for the "finally:" portion.  On PyPy, we don't see it before the next
+# garbage collection.  That's why we put gc.collect() on the same line above.
+
 generator_example.events = ([(0, 'call'),
                              (2, 'line'),
                              (-6, 'call'),
@@ -282,11 +286,11 @@
         self.compare_events(func.func_code.co_firstlineno,
                             tracer.events, func.events)
 
-    def set_and_retrieve_none(self):
+    def test_set_and_retrieve_none(self):
         sys.settrace(None)
         assert sys.gettrace() is None
 
-    def set_and_retrieve_func(self):
+    def test_set_and_retrieve_func(self):
         def fn(*args):
             pass
 
@@ -323,17 +327,24 @@
         self.run_test(tighterloop_example)
 
     def test_13_genexp(self):
-        self.run_test(generator_example)
-        # issue1265: if the trace function contains a generator,
-        # and if the traced function contains another generator
-        # that is not completely exhausted, the trace stopped.
-        # Worse: the 'finally' clause was not invoked.
-        tracer = Tracer()
-        sys.settrace(tracer.traceWithGenexp)
-        generator_example()
-        sys.settrace(None)
-        self.compare_events(generator_example.__code__.co_firstlineno,
-                            tracer.events, generator_example.events)
+        if self.using_gc:
+            test_support.gc_collect()
+            gc.enable()
+        try:
+            self.run_test(generator_example)
+            # issue1265: if the trace function contains a generator,
+            # and if the traced function contains another generator
+            # that is not completely exhausted, the trace stopped.
+            # Worse: the 'finally' clause was not invoked.
+            tracer = Tracer()
+            sys.settrace(tracer.traceWithGenexp)
+            generator_example()
+            sys.settrace(None)
+            self.compare_events(generator_example.__code__.co_firstlineno,
+                                tracer.events, generator_example.events)
+        finally:
+            if self.using_gc:
+                gc.disable()
 
     def test_14_onliner_if(self):
         def onliners():
diff --git a/lib-python/2.7/test/test_sysconfig.py b/lib-python/2.7/test/test_sysconfig.py
--- a/lib-python/2.7/test/test_sysconfig.py
+++ b/lib-python/2.7/test/test_sysconfig.py
@@ -209,13 +209,22 @@
 
         self.assertEqual(get_platform(), 'macosx-10.4-fat64')
 
-        for arch in ('ppc', 'i386', 'x86_64', 'ppc64'):
+        for arch in ('ppc', 'i386', 'ppc64', 'x86_64'):
             get_config_vars()['CFLAGS'] = ('-arch %s -isysroot '
                                            '/Developer/SDKs/MacOSX10.4u.sdk  '
                                            '-fno-strict-aliasing -fno-common '
                                            '-dynamic -DNDEBUG -g -O3'%(arch,))
 
             self.assertEqual(get_platform(), 'macosx-10.4-%s'%(arch,))
+
+        # macosx with ARCHFLAGS set and empty _CONFIG_VARS
+        os.environ['ARCHFLAGS'] = '-arch i386'
+        sysconfig._CONFIG_VARS = None
+
+        # this will attempt to recreate the _CONFIG_VARS based on environment
+        # variables; used to check a problem with the PyPy's _init_posix
+        # implementation; see: issue 705
+        get_config_vars()
 
         # linux debian sarge
         os.name = 'posix'
@@ -235,7 +244,7 @@
 
     def test_get_scheme_names(self):
         wanted = ('nt', 'nt_user', 'os2', 'os2_home', 'osx_framework_user',
-                  'posix_home', 'posix_prefix', 'posix_user')
+                  'posix_home', 'posix_prefix', 'posix_user', 'pypy')
         self.assertEqual(get_scheme_names(), wanted)
 
     def test_symlink(self):
diff --git a/lib-python/2.7/test/test_tarfile.py b/lib-python/2.7/test/test_tarfile.py
--- a/lib-python/2.7/test/test_tarfile.py
+++ b/lib-python/2.7/test/test_tarfile.py
@@ -169,6 +169,7 @@
         except tarfile.ReadError:
             self.fail("tarfile.open() failed on empty archive")
         self.assertListEqual(tar.getmembers(), [])
+        tar.close()
 
     def test_null_tarfile(self):
         # Test for issue6123: Allow opening empty archives.
@@ -207,16 +208,21 @@
         fobj = open(self.tarname, "rb")
         tar = tarfile.open(fileobj=fobj, mode=self.mode)
         self.assertEqual(tar.name, os.path.abspath(fobj.name))
+        tar.close()
 
     def test_no_name_attribute(self):
-        data = open(self.tarname, "rb").read()
+        f = open(self.tarname, "rb")
+        data = f.read()
+        f.close()
         fobj = StringIO.StringIO(data)
         self.assertRaises(AttributeError, getattr, fobj, "name")
         tar = tarfile.open(fileobj=fobj, mode=self.mode)
         self.assertEqual(tar.name, None)
 
     def test_empty_name_attribute(self):
-        data = open(self.tarname, "rb").read()
+        f = open(self.tarname, "rb")
+        data = f.read()
+        f.close()
         fobj = StringIO.StringIO(data)
         fobj.name = ""
         tar = tarfile.open(fileobj=fobj, mode=self.mode)
@@ -515,6 +521,7 @@
         self.tar = tarfile.open(self.tarname, mode=self.mode, encoding="iso8859-1")
         tarinfo = self.tar.getmember("pax/umlauts-�������")
         self._test_member(tarinfo, size=7011, chksum=md5_regtype)
+        self.tar.close()
 
 class LongnameTest(ReadTest):
@@ -675,6 +682,7 @@
             tar = tarfile.open(tmpname, self.mode)
             tarinfo = tar.gettarinfo(path)
             self.assertEqual(tarinfo.size, 0)
+            tar.close()
         finally:
             os.rmdir(path)
@@ -692,6 +700,7 @@
             tar.gettarinfo(target)
             tarinfo = tar.gettarinfo(link)
             self.assertEqual(tarinfo.size, 0)
+            tar.close()
         finally:
             os.remove(target)
             os.remove(link)
@@ -704,6 +713,7 @@
             tar = tarfile.open(tmpname, self.mode)
             tarinfo = tar.gettarinfo(path)
             self.assertEqual(tarinfo.size, 0)
+            tar.close()
         finally:
             os.remove(path)
@@ -722,6 +732,7 @@
         tar.add(dstname)
         os.chdir(cwd)
         self.assertTrue(tar.getnames() == [], "added the archive to itself")
+        tar.close()
 
     def test_exclude(self):
         tempdir = os.path.join(TEMPDIR, "exclude")
@@ -742,6 +753,7 @@
             tar = tarfile.open(tmpname, "r")
             self.assertEqual(len(tar.getmembers()), 1)
             self.assertEqual(tar.getnames()[0], "empty_dir")
+            tar.close()
         finally:
             shutil.rmtree(tempdir)
@@ -947,7 +959,9 @@
             fobj.close()
         elif self.mode.endswith("bz2"):
             dec = bz2.BZ2Decompressor()
-            data = open(tmpname, "rb").read()
+            f = open(tmpname, "rb")
+            data = f.read()
+            f.close()
             data = dec.decompress(data)
             self.assertTrue(len(dec.unused_data) == 0,
                 "found trailing data")
@@ -1026,6 +1040,7 @@
                     "unable to read longname member")
             self.assertEqual(tarinfo.linkname, member.linkname,
                     "unable to read longname member")
+        tar.close()
 
     def test_longname_1023(self):
         self._test(("longnam/" * 127) + "longnam")
@@ -1118,6 +1133,7 @@
         else:
             n = tar.getmembers()[0].name
             self.assertTrue(name == n, "PAX longname creation failed")
+        tar.close()
 
     def test_pax_global_header(self):
         pax_headers = {
@@ -1146,6 +1162,7 @@
                 tarfile.PAX_NUMBER_FIELDS[key](val)
             except (TypeError, ValueError):
                 self.fail("unable to convert pax header field")
+        tar.close()
 
     def test_pax_extended_header(self):
         # The fields from the pax header have priority over the
@@ -1165,6 +1182,7 @@
         self.assertEqual(t.pax_headers, pax_headers)
         self.assertEqual(t.name, "foo")
         self.assertEqual(t.uid, 123)
+        tar.close()
 
 class UstarUnicodeTest(unittest.TestCase):
@@ -1208,6 +1226,7 @@
         tarinfo.name = "foo"
         tarinfo.uname = u"���"
         self.assertRaises(UnicodeError, tar.addfile, tarinfo)
+        tar.close()
 
     def test_unicode_argument(self):
         tar = tarfile.open(tarname, "r", encoding="iso8859-1", errors="strict")
@@ -1262,6 +1281,7 @@
             tar = tarfile.open(tmpname, format=self.format, encoding="ascii",
                                errors=handler)
             self.assertEqual(tar.getnames()[0], name)
+            tar.close()
 
         self.assertRaises(UnicodeError, tarfile.open, tmpname,
                           encoding="ascii", errors="strict")
@@ -1274,6 +1294,7 @@
         tar = tarfile.open(tmpname, format=self.format, encoding="iso8859-1",
                            errors="utf-8")
         self.assertEqual(tar.getnames()[0], "���/" + u"�".encode("utf8"))
+        tar.close()
 
 class AppendTest(unittest.TestCase):
@@ -1301,6 +1322,7 @@
     def _test(self, names=["bar"], fileobj=None):
         tar = tarfile.open(self.tarname, fileobj=fileobj)
         self.assertEqual(tar.getnames(), names)
+        tar.close()
 
     def test_non_existing(self):
         self._add_testfile()
@@ -1319,7 +1341,9 @@
 
     def test_fileobj(self):
         self._create_testtar()
-        data = open(self.tarname).read()
+        f = open(self.tarname)
+        data = f.read()
+        f.close()
         fobj = StringIO.StringIO(data)
         self._add_testfile(fobj)
         fobj.seek(0)
@@ -1345,7 +1369,9 @@
     # Append mode is supposed to fail if the tarfile to append to
     # does not end with a zero block.
     def _test_error(self, data):
-        open(self.tarname, "wb").write(data)
+        f = open(self.tarname, "wb")
+        f.write(data)
+        f.close()
         self.assertRaises(tarfile.ReadError, self._add_testfile)
 
     def test_null(self):
diff --git a/lib-python/2.7/test/test_tempfile.py b/lib-python/2.7/test/test_tempfile.py
--- a/lib-python/2.7/test/test_tempfile.py
+++ b/lib-python/2.7/test/test_tempfile.py
@@ -23,8 +23,8 @@
 
 # TEST_FILES may need to be tweaked for systems depending on the maximum
 # number of files that can be opened at one time (see ulimit -n)
-if sys.platform in ('openbsd3', 'openbsd4'):
-    TEST_FILES = 48
+if sys.platform.startswith("openbsd"):
+    TEST_FILES = 64   # ulimit -n defaults to 128 for normal users
 else:
     TEST_FILES = 100
@@ -244,6 +244,7 @@
         dir = tempfile.mkdtemp()
         try:
             self.do_create(dir=dir).write("blat")
+            test_support.gc_collect()
         finally:
             os.rmdir(dir)
@@ -528,12 +529,15 @@
         self.do_create(suf="b")
         self.do_create(pre="a", suf="b")
         self.do_create(pre="aa", suf=".txt")
+        test_support.gc_collect()
 
     def test_many(self):
         # mktemp can choose many usable file names (stochastic)
         extant = range(TEST_FILES)
         for i in extant:
             extant[i] = self.do_create(pre="aa")
+        del extant
+        test_support.gc_collect()
 
 ##     def test_warning(self):
 ##         # mktemp issues a warning when used
diff --git a/lib-python/2.7/test/test_thread.py b/lib-python/2.7/test/test_thread.py
--- a/lib-python/2.7/test/test_thread.py
+++ b/lib-python/2.7/test/test_thread.py
@@ -128,6 +128,7 @@
         del task
         while not done:
             time.sleep(0.01)
+        test_support.gc_collect()
         self.assertEqual(thread._count(), orig)
 
diff --git a/lib-python/2.7/test/test_threading.py b/lib-python/2.7/test/test_threading.py
--- a/lib-python/2.7/test/test_threading.py
+++ b/lib-python/2.7/test/test_threading.py
@@ -161,6 +161,7 @@
 
     # PyThreadState_SetAsyncExc() is a CPython-only gimmick, not (currently)
     # exposed at the Python level.  This test relies on ctypes to get at it.
+    @test.test_support.cpython_only
     def test_PyThreadState_SetAsyncExc(self):
         try:
             import ctypes
@@ -266,6 +267,7 @@
         finally:
             threading._start_new_thread = _start_new_thread
 
+    @test.test_support.cpython_only
     def test_finalize_runnning_thread(self):
         # Issue 1402: the PyGILState_Ensure / _Release functions may be called
         # very late on python exit: on deallocation of a running thread for
@@ -383,6 +385,7 @@
         finally:
             sys.setcheckinterval(old_interval)
 
+    @test.test_support.cpython_only
     def test_no_refcycle_through_target(self):
         class RunSelfFunction(object):
             def __init__(self, should_raise):
@@ -425,6 +428,9 @@
             def joiningfunc(mainthread):
                 mainthread.join()
                 print 'end of thread'
+                # stdout is fully buffered because not a tty, we have to flush
+                # before exit.
+                sys.stdout.flush()
         \n""" + script
 
         p = subprocess.Popen([sys.executable, "-c", script],
                              stdout=subprocess.PIPE)
diff --git a/lib-python/2.7/test/test_threading_local.py b/lib-python/2.7/test/test_threading_local.py
--- a/lib-python/2.7/test/test_threading_local.py
+++ b/lib-python/2.7/test/test_threading_local.py
@@ -173,8 +173,9 @@
         obj = cls()
         obj.x = 5
         self.assertEqual(obj.__dict__, {'x': 5})
-        with self.assertRaises(AttributeError):
-            obj.__dict__ = {}
+        if test_support.check_impl_detail():
+            with self.assertRaises(AttributeError):
+                obj.__dict__ = {}
         with self.assertRaises(AttributeError):
             del obj.__dict__
 
diff --git a/lib-python/2.7/test/test_traceback.py b/lib-python/2.7/test/test_traceback.py
--- a/lib-python/2.7/test/test_traceback.py
+++ b/lib-python/2.7/test/test_traceback.py
@@ -5,7 +5,8 @@
 import sys
 import unittest
 from imp import reload
-from test.test_support import run_unittest, is_jython, Error
+from test.test_support import run_unittest, Error
+from test.test_support import impl_detail, check_impl_detail
 
 import traceback
 
@@ -49,10 +50,8 @@
         self.assertTrue(err[2].count('\n') == 1)   # and no additional newline
         self.assertTrue(err[1].find("+") == err[2].find("^"))  # in the right place
 
+    @impl_detail("other implementations may add a caret (why shouldn't they?)")
     def test_nocaret(self):
-        if is_jython:
-            # jython adds a caret in this case (why shouldn't it?)
-            return
         err = self.get_exception_format(self.syntax_error_without_caret,
                                         SyntaxError)
         self.assertTrue(len(err) == 3)
@@ -63,8 +62,11 @@
                                         IndentationError)
         self.assertTrue(len(err) == 4)
         self.assertTrue(err[1].strip() == "print 2")
-        self.assertIn("^", err[2])
-        self.assertTrue(err[1].find("2") == err[2].find("^"))
+        if check_impl_detail():
+            # on CPython, there is a "^" at the end of the line
+            # on PyPy, there is a "^" too, but at the start, more logically
+            self.assertIn("^", err[2])
+            self.assertTrue(err[1].find("2") == err[2].find("^"))
 
     def test_bug737473(self):
         import os, tempfile, time
@@ -74,7 +76,8 @@
         try:
             sys.path.insert(0, testdir)
             testfile = os.path.join(testdir, 'test_bug737473.py')
-            print >> open(testfile, 'w'), """
+            with open(testfile, 'w') as f:
+                print >> f, """
 def test():
     raise ValueError"""
 
@@ -96,7 +99,8 @@
             # three seconds are needed for this test to pass reliably :-(
             time.sleep(4)
-            print >> open(testfile, 'w'), """
+            with open(testfile, 'w') as f:
+                print >> f, """
 def test():
     raise NotImplementedError"""
             reload(test_bug737473)
diff --git a/lib-python/2.7/test/test_types.py b/lib-python/2.7/test/test_types.py
--- a/lib-python/2.7/test/test_types.py
+++ b/lib-python/2.7/test/test_types.py
@@ -1,7 +1,8 @@
 # Python test set -- part 6, built-in types
 
 from test.test_support import run_unittest, have_unicode, run_with_locale, \
-                              check_py3k_warnings
+                              check_py3k_warnings, \
+                              impl_detail, check_impl_detail
 
 import unittest
 import sys
 import locale
@@ -289,9 +290,14 @@
         # array.array() returns an object that does not implement a char buffer,
         # something which int() uses for conversion.
         import array
-        try: int(buffer(array.array('c')))
+        try: int(buffer(array.array('c', '5')))
         except TypeError: pass
-        else: self.fail("char buffer (at C level) not working")
+        else:
+            if check_impl_detail():
+                self.fail("char buffer (at C level) not working")
+            #else:
+            #   it works on PyPy, which does not have the distinction
+            #   between char buffer and binary buffer.  XXX fine enough?
 
     def test_int__format__(self):
         def test(i, format_spec, result):
@@ -741,6 +747,7 @@
         for code in 'xXobns':
             self.assertRaises(ValueError, format, 0, ',' + code)
 
+    @impl_detail("the types' internal size attributes are CPython-only")
     def test_internal_sizes(self):
         self.assertGreater(object.__basicsize__, 0)
         self.assertGreater(tuple.__itemsize__, 0)
diff --git a/lib-python/2.7/test/test_unicode.py b/lib-python/2.7/test/test_unicode.py
--- a/lib-python/2.7/test/test_unicode.py
+++ b/lib-python/2.7/test/test_unicode.py
@@ -448,10 +448,11 @@
                 meth('\xff')
             with self.assertRaises(TypeError) as cm:
                 meth(['f'])
-            exc = str(cm.exception)
-            self.assertIn('unicode', exc)
-            self.assertIn('str', exc)
-            self.assertIn('tuple', exc)
+            if test_support.check_impl_detail():
+                exc = str(cm.exception)
+                self.assertIn('unicode', exc)
+                self.assertIn('str', exc)
+                self.assertIn('tuple', exc)
 
     @test_support.run_with_locale('LC_ALL', 'de_DE', 'fr_FR')
     def test_format_float(self):
@@ -1062,7 +1063,8 @@
         # to take a 64-bit long, this test should apply to all platforms.
         if sys.maxint > (1 << 32) or struct.calcsize('P') != 4:
             return
-        self.assertRaises(OverflowError, u't\tt\t'.expandtabs, sys.maxint)
+        self.assertRaises((OverflowError, MemoryError),
+                          u't\tt\t'.expandtabs, sys.maxint)
 
     def test__format__(self):
         def test(value, format, expected):
diff --git a/lib-python/2.7/test/test_unicodedata.py b/lib-python/2.7/test/test_unicodedata.py
--- a/lib-python/2.7/test/test_unicodedata.py
+++ b/lib-python/2.7/test/test_unicodedata.py
@@ -233,10 +233,12 @@
         # been loaded in this process.
         popen = subprocess.Popen(args, stderr=subprocess.PIPE)
         popen.wait()
-        self.assertEqual(popen.returncode, 1)
-        error = "SyntaxError: (unicode error) \N escapes not supported " \
-            "(can't load unicodedata module)"
-        self.assertIn(error, popen.stderr.read())
+        self.assertIn(popen.returncode, [0, 1])  # at least it did not segfault
+        if test.test_support.check_impl_detail():
+            self.assertEqual(popen.returncode, 1)
+            error = "SyntaxError: (unicode error) \N escapes not supported " \
+                "(can't load unicodedata module)"
+            self.assertIn(error, popen.stderr.read())
 
     def test_decimal_numeric_consistent(self):
         # Test that decimal and numeric are consistent,
diff --git a/lib-python/2.7/test/test_unpack.py b/lib-python/2.7/test/test_unpack.py
--- a/lib-python/2.7/test/test_unpack.py
+++ b/lib-python/2.7/test/test_unpack.py
@@ -62,14 +62,14 @@
     >>> a, b = t
     Traceback (most recent call last):
       ...
-    ValueError: too many values to unpack
+    ValueError: expected length 2, got 3
 
 Unpacking tuple of wrong size
 
     >>> a, b = l
     Traceback (most recent call last):
       ...
-    ValueError: too many values to unpack
+    ValueError: expected length 2, got 3
 
 Unpacking sequence too short
 
diff --git a/lib-python/2.7/test/test_urllib2.py b/lib-python/2.7/test/test_urllib2.py
--- a/lib-python/2.7/test/test_urllib2.py
+++ b/lib-python/2.7/test/test_urllib2.py
@@ -307,6 +307,9 @@
     def getresponse(self):
         return MockHTTPResponse(MockFile(), {}, 200, "OK")
 
+    def close(self):
+        pass
+
 class MockHandler:
     # useful for testing handler machinery
     # see add_ordered_mock_handlers() docstring
diff --git a/lib-python/2.7/test/test_warnings.py b/lib-python/2.7/test/test_warnings.py
--- a/lib-python/2.7/test/test_warnings.py
+++ b/lib-python/2.7/test/test_warnings.py
@@ -355,7 +355,8 @@
     # test_support.import_fresh_module utility function
     def test_accelerated(self):
         self.assertFalse(original_warnings is self.module)
-        self.assertFalse(hasattr(self.module.warn, 'func_code'))
+        self.assertFalse(hasattr(self.module.warn, 'func_code') and
+                         hasattr(self.module.warn.func_code, 'co_filename'))
 
 class PyWarnTests(BaseTest, WarnTests):
     module = py_warnings
@@ -364,7 +365,8 @@
     # test_support.import_fresh_module utility function
     def test_pure_python(self):
         self.assertFalse(original_warnings is self.module)
-        self.assertTrue(hasattr(self.module.warn, 'func_code'))
+        self.assertTrue(hasattr(self.module.warn, 'func_code') and
+                        hasattr(self.module.warn.func_code, 'co_filename'))
 
 class WCmdLineTests(unittest.TestCase):
 
diff --git a/lib-python/2.7/test/test_weakref.py b/lib-python/2.7/test/test_weakref.py
--- a/lib-python/2.7/test/test_weakref.py
+++ b/lib-python/2.7/test/test_weakref.py
@@ -1,4 +1,3 @@
-import gc
 import sys
 import unittest
 import UserList
@@ -6,6 +5,7 @@
 import operator
 
 from test import test_support
+from test.test_support import gc_collect
 
 # Used in ReferencesTestCase.test_ref_created_during_del() .
 ref_from_del = None
@@ -70,6 +70,7 @@
         ref1 = weakref.ref(o, self.callback)
         ref2 = weakref.ref(o, self.callback)
         del o
+        gc_collect()
         self.assertTrue(ref1() is None,
                      "expected reference to be invalidated")
         self.assertTrue(ref2() is None,
@@ -101,13 +102,16 @@
         ref1 = weakref.proxy(o, self.callback)
         ref2 = weakref.proxy(o, self.callback)
         del o
+        gc_collect()
 
         def check(proxy):
             proxy.bar
 
         self.assertRaises(weakref.ReferenceError, check, ref1)
         self.assertRaises(weakref.ReferenceError, check, ref2)
-        self.assertRaises(weakref.ReferenceError, bool, weakref.proxy(C()))
+        ref3 = weakref.proxy(C())
+        gc_collect()
+        self.assertRaises(weakref.ReferenceError, bool, ref3)
         self.assertTrue(self.cbcalled == 2)
 
     def check_basic_ref(self, factory):
@@ -124,6 +128,7 @@
         o = factory()
         ref = weakref.ref(o, self.callback)
         del o
+        gc_collect()
         self.assertTrue(self.cbcalled == 1,
                      "callback did not properly set 'cbcalled'")
         self.assertTrue(ref() is None,
@@ -148,6 +153,7 @@
         self.assertTrue(weakref.getweakrefcount(o) == 2,
                      "wrong weak ref count for object")
         del proxy
+        gc_collect()
         self.assertTrue(weakref.getweakrefcount(o) == 1,
                      "wrong weak ref count for object after deleting proxy")
@@ -325,6 +331,7 @@
                      "got wrong number of weak reference objects")
 
         del ref1, ref2, proxy1, proxy2
+        gc_collect()
         self.assertTrue(weakref.getweakrefcount(o) == 0,
                      "weak reference objects not unlinked from"
                      " referent when discarded.")
@@ -338,6 +345,7 @@
         ref1 = weakref.ref(o, self.callback)
         ref2 = weakref.ref(o, self.callback)
         del ref1
+        gc_collect()
         self.assertTrue(weakref.getweakrefs(o) == [ref2],
                      "list of refs does not match")
@@ -345,10 +353,12 @@
         ref1 = weakref.ref(o, self.callback)
         ref2 = weakref.ref(o, self.callback)
         del ref2
+        gc_collect()
         self.assertTrue(weakref.getweakrefs(o) == [ref1],
                      "list of refs does not match")
 
         del ref1
+        gc_collect()
         self.assertTrue(weakref.getweakrefs(o) == [],
                      "list of refs not cleared")
@@ -400,13 +410,11 @@
         # when the second attempt to remove the instance from the "list
         # of all objects" occurs.
 
-        import gc
-
         class C(object):
             pass
 
         c = C()
-        wr = weakref.ref(c, lambda ignore: gc.collect())
+        wr = weakref.ref(c, lambda ignore: gc_collect())
         del c
 
         # There endeth the first part.  It gets worse.
@@ -414,7 +422,7 @@
 
         c1 = C()
         c1.i = C()
-        wr = weakref.ref(c1.i, lambda ignore: gc.collect())
+        wr = weakref.ref(c1.i, lambda ignore: gc_collect())
 
         c2 = C()
         c2.c1 = c1
@@ -430,8 +438,6 @@
         del c2
 
     def test_callback_in_cycle_1(self):
-        import gc
-
         class J(object):
             pass
 
@@ -467,11 +473,9 @@
         # search II.__mro__, but that's NULL.  The result was a segfault in
         # a release build, and an assert failure in a debug build.
         del I, J, II
-        gc.collect()
+        gc_collect()
 
     def test_callback_in_cycle_2(self):
-        import gc
-
         # This is just like test_callback_in_cycle_1, except that II is an
         # old-style class.  The symptom is different then:  an instance of an
         # old-style class looks in its own __dict__ first.  'J' happens to
@@ -496,11 +500,9 @@
         I.wr = weakref.ref(J, I.acallback)
 
         del I, J, II
-        gc.collect()
+        gc_collect()
 
     def test_callback_in_cycle_3(self):
-        import gc
-
         # This one broke the first patch that fixed the last two.  In this
         # case, the objects reachable from the callback aren't also reachable
         # from the object (c1) *triggering* the callback:  you can get to
@@ -520,11 +522,9 @@
         c2.wr = weakref.ref(c1, c2.cb)
 
         del c1, c2
-        gc.collect()
+        gc_collect()
 
     def test_callback_in_cycle_4(self):
-        import gc
-
         # Like test_callback_in_cycle_3, except c2 and c1 have different
         # classes.  c2's class (C) isn't reachable from c1 then, so protecting
         # objects reachable from the dying object (c1) isn't enough to stop
@@ -548,11 +548,9 @@
         c2.wr = weakref.ref(c1, c2.cb)
 
         del c1, c2, C, D
-        gc.collect()
+        gc_collect()
 
     def test_callback_in_cycle_resurrection(self):
-        import gc
-
         # Do something nasty in a weakref callback:  resurrect objects
        # from dead cycles.  For this to be attempted, the weakref and
         # its callback must also be part of the cyclic trash (else the
@@ -583,7 +581,7 @@
         del c1, c2, C   # make them all trash
         self.assertEqual(alist, [])  # del isn't enough to reclaim anything
 
-        gc.collect()
+        gc_collect()
         # c1.wr and c2.wr were part of the cyclic trash, so should have
         # been cleared without their callbacks executing.  OTOH, the weakref
         # to C is bound to a function local (wr), and wasn't trash, so that
@@ -593,12 +591,10 @@
         self.assertEqual(wr(), None)
 
         del alist[:]
-        gc.collect()
+        gc_collect()
         self.assertEqual(alist, [])
 
     def test_callbacks_on_callback(self):
-        import gc
-
         # Set up weakref callbacks *on* weakref callbacks.
         alist = []
         def safe_callback(ignore):
@@ -626,12 +622,12 @@
         del callback, c, d, C
         self.assertEqual(alist, [])  # del isn't enough to clean up cycles
-        gc.collect()
+        gc_collect()
         self.assertEqual(alist, ["safe_callback called"])
         self.assertEqual(external_wr(), None)
 
         del alist[:]
-        gc.collect()
+        gc_collect()
         self.assertEqual(alist, [])
 
     def test_gc_during_ref_creation(self):
@@ -641,9 +637,11 @@
         self.check_gc_during_creation(weakref.proxy)
 
     def check_gc_during_creation(self, makeref):
-        thresholds = gc.get_threshold()
-        gc.set_threshold(1, 1, 1)
-        gc.collect()
+        if test_support.check_impl_detail():
+            import gc
+            thresholds = gc.get_threshold()
+            gc.set_threshold(1, 1, 1)
+        gc_collect()
         class A:
             pass
@@ -663,7 +661,8 @@
             weakref.ref(referenced, callback)
 
         finally:
-            gc.set_threshold(*thresholds)
+            if test_support.check_impl_detail():
+                gc.set_threshold(*thresholds)
 
     def test_ref_created_during_del(self):
         # Bug #1377858
@@ -683,7 +682,7 @@
         r = weakref.ref(Exception)
         self.assertRaises(TypeError, r.__init__, 0, 0, 0, 0, 0)
         # No exception should be raised here
-        gc.collect()
+        gc_collect()
 
     def test_classes(self):
         # Check that both old-style classes and new-style classes
@@ -696,12 +695,12 @@
         weakref.ref(int)
         a = weakref.ref(A, l.append)
         A = None
-        gc.collect()
+        gc_collect()
         self.assertEqual(a(), None)
         self.assertEqual(l, [a])
 
         b = weakref.ref(B, l.append)
         B = None
-        gc.collect()
+        gc_collect()
         self.assertEqual(b(), None)
         self.assertEqual(l, [a, b])
@@ -722,6 +721,7 @@
         self.assertTrue(mr.called)
         self.assertEqual(mr.value, 24)
         del o
+        gc_collect()
         self.assertTrue(mr() is None)
         self.assertTrue(mr.called)
@@ -738,9 +738,11 @@
         self.assertEqual(weakref.getweakrefcount(o), 3)
         refs = weakref.getweakrefs(o)
         self.assertEqual(len(refs), 3)
-        self.assertTrue(r2 is refs[0])
-        self.assertIn(r1, refs[1:])
-        self.assertIn(r3, refs[1:])
+        assert set(refs) == set((r1, r2, r3))
+        if test_support.check_impl_detail():
+            self.assertTrue(r2 is refs[0])
+            self.assertIn(r1, refs[1:])
+            self.assertIn(r3, refs[1:])
 
     def test_subclass_refs_dont_conflate_callbacks(self):
         class MyRef(weakref.ref):
@@ -839,15 +841,18 @@
         del items1, items2
         self.assertTrue(len(dict) == self.COUNT)
         del objects[0]
+        gc_collect()
         self.assertTrue(len(dict) == (self.COUNT - 1),
                      "deleting object did not cause dictionary update")
         del objects, o
+        gc_collect()
         self.assertTrue(len(dict) == 0,
                      "deleting the values did not clear the dictionary")
         # regression on SF bug #447152:
         dict = weakref.WeakValueDictionary()
         self.assertRaises(KeyError, dict.__getitem__, 1)
         dict[2] = C()
+        gc_collect()
         self.assertRaises(KeyError, dict.__getitem__, 2)
 
     def test_weak_keys(self):
@@ -868,9 +873,11 @@
         del items1, items2
         self.assertTrue(len(dict) == self.COUNT)
         del objects[0]
+        gc_collect()
         self.assertTrue(len(dict) == (self.COUNT - 1),
                      "deleting object did not cause dictionary update")
         del objects, o
+        gc_collect()
         self.assertTrue(len(dict) == 0,
                      "deleting the keys did not clear the dictionary")
         o = Object(42)
@@ -986,13 +993,13 @@
         self.assertTrue(len(weakdict) == 2)
         k, v = weakdict.popitem()
         self.assertTrue(len(weakdict) == 1)
-        if k is key1:
+        if k == key1:
             self.assertTrue(v is value1)
         else:
             self.assertTrue(v is value2)
         k, v = weakdict.popitem()
         self.assertTrue(len(weakdict) == 0)
-        if k is key1:
+        if k == key1:
             self.assertTrue(v is value1)
         else:
             self.assertTrue(v is value2)
@@ -1137,6 +1144,7 @@
         for o in objs:
             count += 1
             del d[o]
+        gc_collect()
         self.assertEqual(len(d), 0)
         self.assertEqual(count, 2)
 
@@ -1177,6 +1185,7 @@
 >>> o is o2
 True
 >>> del o, o2
+>>> gc_collect()
 >>> print r()
 None
 
@@ -1229,6 +1238,7 @@
 >>> id2obj(a_id) is a
 True
 >>> del a
+>>> gc_collect()
 >>> try:
 ...     id2obj(a_id)
 ... except KeyError:
diff --git a/lib-python/2.7/test/test_weakset.py b/lib-python/2.7/test/test_weakset.py
--- a/lib-python/2.7/test/test_weakset.py
+++ b/lib-python/2.7/test/test_weakset.py
@@ -57,6 +57,7 @@
         self.assertEqual(len(self.s), len(self.d))
         self.assertEqual(len(self.fs), 1)
         del self.obj
+        test_support.gc_collect()
         self.assertEqual(len(self.fs), 0)
 
     def test_contains(self):
@@ -66,6 +67,7 @@
         self.assertNotIn(1, self.s)
         self.assertIn(self.obj, self.fs)
         del self.obj
+        test_support.gc_collect()
         self.assertNotIn(SomeClass('F'), self.fs)
 
     def test_union(self):
@@ -204,6 +206,7 @@
         self.assertEqual(self.s, dup)
         self.assertRaises(TypeError, self.s.add, [])
         self.fs.add(Foo())
+        test_support.gc_collect()
         self.assertTrue(len(self.fs) == 1)
         self.fs.add(self.obj)
         self.assertTrue(len(self.fs) == 1)
@@ -330,10 +333,11 @@
         next(it)             # Trigger internal iteration
         # Destroy an item
         del items[-1]
-        gc.collect()    # just in case
+        test_support.gc_collect()
         # We have removed either the first consumed items, or another one
         self.assertIn(len(list(it)), [len(items), len(items) - 1])
         del it
+        test_support.gc_collect()
         # The removal has been committed
         self.assertEqual(len(s), len(items))
 
diff --git a/lib-python/2.7/test/test_xml_etree.py b/lib-python/2.7/test/test_xml_etree.py
--- a/lib-python/2.7/test/test_xml_etree.py
+++ b/lib-python/2.7/test/test_xml_etree.py
@@ -1633,10 +1633,10 @@
 
     Check reference leak.
     >>> xmltoolkit63()
-    >>> count = sys.getrefcount(None)
+    >>> count = sys.getrefcount(None) #doctest: +SKIP
    >>> for i in range(1000):
     ...     xmltoolkit63()
-    >>> sys.getrefcount(None) - count
+    >>> sys.getrefcount(None) - count #doctest: +SKIP
     0
 
     """
diff --git a/lib-python/2.7/test/test_xmlrpc.py b/lib-python/2.7/test/test_xmlrpc.py
--- a/lib-python/2.7/test/test_xmlrpc.py
+++ b/lib-python/2.7/test/test_xmlrpc.py
@@ -308,7 +308,7 @@
     global ADDR, PORT, URL
     ADDR, PORT = serv.socket.getsockname()
     #connect to IP address directly.  This avoids socket.create_connection()
-    #trying to connect to "localhost" using all address families, which
+    #trying to connect to to "localhost" using all address families, which
     #causes slowdown e.g. on vista which supports AF_INET6.  The server listens
     #on AF_INET only.
     URL = "http://%s:%d"%(ADDR, PORT)
@@ -367,7 +367,7 @@
     global ADDR, PORT, URL
     ADDR, PORT = serv.socket.getsockname()
     #connect to IP address directly.  This avoids socket.create_connection()
-    #trying to connect to "localhost" using all address families, which
+    #trying to connect to to "localhost" using all address families, which
     #causes slowdown e.g. on vista which supports AF_INET6.  The server listens
     #on AF_INET only.
     URL = "http://%s:%d"%(ADDR, PORT)
@@ -435,6 +435,7 @@
 
     def tearDown(self):
         # wait on the server thread to terminate
+        test_support.gc_collect()   # to close the active connections
         self.evt.wait(10)
 
         # disable traceback reporting
@@ -472,9 +473,6 @@
             # protocol error; provide additional information in test output
             self.fail("%s\n%s" % (e, getattr(e, "headers", "")))
 
-    def test_unicode_host(self):
-        server = xmlrpclib.ServerProxy(u"http://%s:%d/RPC2"%(ADDR, PORT))
-        self.assertEqual(server.add("a", u"\xe9"), u"a\xe9")
 
     # [ch] The test 404 is causing lots of false alarms.
     def XXXtest_404(self):
@@ -589,12 +587,6 @@
         # This avoids waiting for the socket timeout.
         self.test_simple1()
 
-    def test_partial_post(self):
-        # Check that a partial POST doesn't make the server loop: issue #14001.
- conn = httplib.HTTPConnection(ADDR, PORT) - conn.request('POST', '/RPC2 HTTP/1.0\r\nContent-Length: 100\r\n\r\nbye') - conn.close() - class MultiPathServerTestCase(BaseServerTestCase): threadFunc = staticmethod(http_multi_server) request_count = 2 diff --git a/lib-python/2.7/test/test_zlib.py b/lib-python/2.7/test/test_zlib.py --- a/lib-python/2.7/test/test_zlib.py +++ b/lib-python/2.7/test/test_zlib.py @@ -1,6 +1,7 @@ import unittest from test.test_support import TESTFN, run_unittest, import_module, unlink, requires import binascii +import os import random from test.test_support import precisionbigmemtest, _1G, _4G import sys @@ -99,14 +100,7 @@ class BaseCompressTestCase(object): def check_big_compress_buffer(self, size, compress_func): - _1M = 1024 * 1024 - fmt = "%%0%dx" % (2 * _1M) - # Generate 10MB worth of random, and expand it by repeating it. - # The assumption is that zlib's memory is not big enough to exploit - # such spread out redundancy. - data = ''.join([binascii.a2b_hex(fmt % random.getrandbits(8 * _1M)) - for i in range(10)]) - data = data * (size // len(data) + 1) + data = os.urandom(size) try: compress_func(data) finally: diff --git a/lib-python/2.7/trace.py b/lib-python/2.7/trace.py --- a/lib-python/2.7/trace.py +++ b/lib-python/2.7/trace.py @@ -559,6 +559,10 @@ if len(funcs) == 1: dicts = [d for d in gc.get_referrers(funcs[0]) if isinstance(d, dict)] + if len(dicts) == 0: + # PyPy may store functions directly on the class + # (more exactly: the container is not a Python object) + dicts = funcs if len(dicts) == 1: classes = [c for c in gc.get_referrers(dicts[0]) if hasattr(c, "__bases__")] diff --git a/lib-python/2.7/urllib2.py b/lib-python/2.7/urllib2.py --- a/lib-python/2.7/urllib2.py +++ b/lib-python/2.7/urllib2.py @@ -1171,6 +1171,7 @@ except TypeError: #buffering kw not supported r = h.getresponse() except socket.error, err: # XXX what error? 
+ h.close() raise URLError(err) # Pick apart the HTTPResponse object to get the addinfourl diff --git a/lib-python/2.7/uuid.py b/lib-python/2.7/uuid.py --- a/lib-python/2.7/uuid.py +++ b/lib-python/2.7/uuid.py @@ -406,8 +406,12 @@ continue if hasattr(lib, 'uuid_generate_random'): _uuid_generate_random = lib.uuid_generate_random + _uuid_generate_random.argtypes = [ctypes.c_char * 16] + _uuid_generate_random.restype = None if hasattr(lib, 'uuid_generate_time'): _uuid_generate_time = lib.uuid_generate_time + _uuid_generate_time.argtypes = [ctypes.c_char * 16] + _uuid_generate_time.restype = None # The uuid_generate_* functions are broken on MacOS X 10.5, as noted # in issue #8621 the function generates the same sequence of values @@ -436,6 +440,9 @@ lib = None _UuidCreate = getattr(lib, 'UuidCreateSequential', getattr(lib, 'UuidCreate', None)) + if _UuidCreate is not None: + _UuidCreate.argtypes = [ctypes.c_char * 16] + _UuidCreate.restype = ctypes.c_int except: pass diff --git a/lib-python/3.2/__future__.py b/lib-python/3.2/__future__.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/__future__.py @@ -0,0 +1,134 @@ +"""Record of phased-in incompatible language changes. + +Each line is of the form: + + FeatureName = "_Feature(" OptionalRelease "," MandatoryRelease "," + CompilerFlag ")" + +where, normally, OptionalRelease < MandatoryRelease, and both are 5-tuples +of the same form as sys.version_info: + + (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int + PY_MINOR_VERSION, # the 1; an int + PY_MICRO_VERSION, # the 0; an int + PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string + PY_RELEASE_SERIAL # the 3; an int + ) + +OptionalRelease records the first release in which + + from __future__ import FeatureName + +was accepted. + +In the case of MandatoryReleases that have not yet occurred, +MandatoryRelease predicts the release in which the feature will become part +of the language. 
+ +Else MandatoryRelease records when the feature became part of the language; +in releases at or after that, modules no longer need + + from __future__ import FeatureName + +to use the feature in question, but may continue to use such imports. + +MandatoryRelease may also be None, meaning that a planned feature got +dropped. + +Instances of class _Feature have two corresponding methods, +.getOptionalRelease() and .getMandatoryRelease(). + +CompilerFlag is the (bitfield) flag that should be passed in the fourth +argument to the builtin function compile() to enable the feature in +dynamically compiled code. This flag is stored in the .compiler_flag +attribute on _Future instances. These values must match the appropriate +#defines of CO_xxx flags in Include/compile.h. + +No feature line is ever to be deleted from this file. +""" + +all_feature_names = [ + "nested_scopes", + "generators", + "division", + "absolute_import", + "with_statement", + "print_function", + "unicode_literals", + "barry_as_FLUFL", +] + +__all__ = ["all_feature_names"] + all_feature_names + +# The CO_xxx symbols are defined here under the same names used by +# compile.h, so that an editor search will find them here. However, +# they're not exported in __all__, because they don't really belong to +# this module. 
+CO_NESTED = 0x0010 # nested_scopes +CO_GENERATOR_ALLOWED = 0 # generators (obsolete, was 0x1000) +CO_FUTURE_DIVISION = 0x2000 # division +CO_FUTURE_ABSOLUTE_IMPORT = 0x4000 # perform absolute imports by default +CO_FUTURE_WITH_STATEMENT = 0x8000 # with statement +CO_FUTURE_PRINT_FUNCTION = 0x10000 # print function +CO_FUTURE_UNICODE_LITERALS = 0x20000 # unicode string literals +CO_FUTURE_BARRY_AS_BDFL = 0x40000 + +class _Feature: + def __init__(self, optionalRelease, mandatoryRelease, compiler_flag): + self.optional = optionalRelease + self.mandatory = mandatoryRelease + self.compiler_flag = compiler_flag + + def getOptionalRelease(self): + """Return first release in which this feature was recognized. + + This is a 5-tuple, of the same form as sys.version_info. + """ + + return self.optional + + def getMandatoryRelease(self): + """Return release in which this feature will become mandatory. + + This is a 5-tuple, of the same form as sys.version_info, or, if + the feature was dropped, is None. 
+ """ + + return self.mandatory + + def __repr__(self): + return "_Feature" + repr((self.optional, + self.mandatory, + self.compiler_flag)) + +nested_scopes = _Feature((2, 1, 0, "beta", 1), + (2, 2, 0, "alpha", 0), + CO_NESTED) + +generators = _Feature((2, 2, 0, "alpha", 1), + (2, 3, 0, "final", 0), + CO_GENERATOR_ALLOWED) + +division = _Feature((2, 2, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_DIVISION) + +absolute_import = _Feature((2, 5, 0, "alpha", 1), + (2, 7, 0, "alpha", 0), + CO_FUTURE_ABSOLUTE_IMPORT) + +with_statement = _Feature((2, 5, 0, "alpha", 1), + (2, 6, 0, "alpha", 0), + CO_FUTURE_WITH_STATEMENT) + +print_function = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_PRINT_FUNCTION) + +unicode_literals = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_UNICODE_LITERALS) + +barry_as_FLUFL = _Feature((3, 1, 0, "alpha", 2), + (3, 9, 0, "alpha", 0), + CO_FUTURE_BARRY_AS_BDFL) diff --git a/lib-python/3.2/__phello__.foo.py b/lib-python/3.2/__phello__.foo.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/__phello__.foo.py @@ -0,0 +1,1 @@ +# This file exists as a helper for the test.test_frozen module. diff --git a/lib-python/3.2/_abcoll.py b/lib-python/3.2/_abcoll.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_abcoll.py @@ -0,0 +1,623 @@ +# Copyright 2007 Google, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +"""Abstract Base Classes (ABCs) for collections, according to PEP 3119. + +DON'T USE THIS MODULE DIRECTLY! The classes here should be imported +via collections; they are defined here only to alleviate certain +bootstrapping issues. Unit tests are in test_collections. 
+""" + +from abc import ABCMeta, abstractmethod +import sys + +__all__ = ["Hashable", "Iterable", "Iterator", + "Sized", "Container", "Callable", + "Set", "MutableSet", + "Mapping", "MutableMapping", + "MappingView", "KeysView", "ItemsView", "ValuesView", + "Sequence", "MutableSequence", + "ByteString", + ] + + +### collection related types which are not exposed through builtin ### +## iterators ## +bytes_iterator = type(iter(b'')) +bytearray_iterator = type(iter(bytearray())) +#callable_iterator = ??? +dict_keyiterator = type(iter({}.keys())) +dict_valueiterator = type(iter({}.values())) +dict_itemiterator = type(iter({}.items())) +list_iterator = type(iter([])) +list_reverseiterator = type(iter(reversed([]))) +range_iterator = type(iter(range(0))) +set_iterator = type(iter(set())) +str_iterator = type(iter("")) +tuple_iterator = type(iter(())) +zip_iterator = type(iter(zip())) +## views ## +dict_keys = type({}.keys()) +dict_values = type({}.values()) +dict_items = type({}.items()) +## misc ## +dict_proxy = type(type.__dict__) + + +### ONE-TRICK PONIES ### + +class Hashable(metaclass=ABCMeta): + + @abstractmethod + def __hash__(self): + return 0 + + @classmethod + def __subclasshook__(cls, C): + if cls is Hashable: + for B in C.__mro__: + if "__hash__" in B.__dict__: + if B.__dict__["__hash__"]: + return True + break + return NotImplemented + + +class Iterable(metaclass=ABCMeta): + + @abstractmethod + def __iter__(self): + while False: + yield None + + @classmethod + def __subclasshook__(cls, C): + if cls is Iterable: + if any("__iter__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Iterator(Iterable): + + @abstractmethod + def __next__(self): + raise StopIteration + + def __iter__(self): + return self + + @classmethod + def __subclasshook__(cls, C): + if cls is Iterator: + if (any("__next__" in B.__dict__ for B in C.__mro__) and + any("__iter__" in B.__dict__ for B in C.__mro__)): + return True + return NotImplemented + 
+Iterator.register(bytes_iterator) +Iterator.register(bytearray_iterator) +#Iterator.register(callable_iterator) +Iterator.register(dict_keyiterator) +Iterator.register(dict_valueiterator) +Iterator.register(dict_itemiterator) +Iterator.register(list_iterator) +Iterator.register(list_reverseiterator) +Iterator.register(range_iterator) +Iterator.register(set_iterator) +Iterator.register(str_iterator) +Iterator.register(tuple_iterator) +Iterator.register(zip_iterator) + +class Sized(metaclass=ABCMeta): + + @abstractmethod + def __len__(self): + return 0 + + @classmethod + def __subclasshook__(cls, C): + if cls is Sized: + if any("__len__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Container(metaclass=ABCMeta): + + @abstractmethod + def __contains__(self, x): + return False + + @classmethod + def __subclasshook__(cls, C): + if cls is Container: + if any("__contains__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Callable(metaclass=ABCMeta): + + @abstractmethod + def __call__(self, *args, **kwds): + return False + + @classmethod + def __subclasshook__(cls, C): + if cls is Callable: + if any("__call__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +### SETS ### + + +class Set(Sized, Iterable, Container): + + """A set is a finite, iterable container. + + This class provides concrete generic implementations of all + methods except for __contains__, __iter__ and __len__. + + To override the comparisons (presumably for speed, as the + semantics are fixed), all you have to do is redefine __le__ and + then the other operations will automatically follow suit. 
+ """ + + def __le__(self, other): + if not isinstance(other, Set): + return NotImplemented + if len(self) > len(other): + return False + for elem in self: + if elem not in other: + return False + return True + + def __lt__(self, other): + if not isinstance(other, Set): + return NotImplemented + return len(self) < len(other) and self.__le__(other) + + def __gt__(self, other): + if not isinstance(other, Set): + return NotImplemented + return other < self + + def __ge__(self, other): + if not isinstance(other, Set): + return NotImplemented + return other <= self + + def __eq__(self, other): + if not isinstance(other, Set): + return NotImplemented + return len(self) == len(other) and self.__le__(other) + + def __ne__(self, other): + return not (self == other) + + @classmethod + def _from_iterable(cls, it): + '''Construct an instance of the class from any iterable input. + + Must override this method if the class constructor signature + does not accept an iterable for an input. + ''' + return cls(it) + + def __and__(self, other): + if not isinstance(other, Iterable): + return NotImplemented + return self._from_iterable(value for value in other if value in self) + + def isdisjoint(self, other): + for value in other: + if value in self: + return False + return True + + def __or__(self, other): + if not isinstance(other, Iterable): + return NotImplemented + chain = (e for s in (self, other) for e in s) + return self._from_iterable(chain) + + def __sub__(self, other): + if not isinstance(other, Set): + if not isinstance(other, Iterable): + return NotImplemented + other = self._from_iterable(other) + return self._from_iterable(value for value in self + if value not in other) + + def __xor__(self, other): + if not isinstance(other, Set): + if not isinstance(other, Iterable): + return NotImplemented + other = self._from_iterable(other) + return (self - other) | (other - self) + + def _hash(self): + """Compute the hash value of a set. 
+ + Note that we don't define __hash__: not all sets are hashable. + But if you define a hashable set type, its __hash__ should + call this function. + + This must be compatible __eq__. + + All sets ought to compare equal if they contain the same + elements, regardless of how they are implemented, and + regardless of the order of the elements; so there's not much + freedom for __eq__ or __hash__. We match the algorithm used + by the built-in frozenset type. + """ + MAX = sys.maxsize + MASK = 2 * MAX + 1 + n = len(self) + h = 1927868237 * (n + 1) + h &= MASK + for x in self: + hx = hash(x) + h ^= (hx ^ (hx << 16) ^ 89869747) * 3644798167 + h &= MASK + h = h * 69069 + 907133923 + h &= MASK + if h > MAX: + h -= MASK + 1 + if h == -1: + h = 590923713 + return h + +Set.register(frozenset) + + +class MutableSet(Set): + + @abstractmethod + def add(self, value): + """Add an element.""" + raise NotImplementedError + + @abstractmethod + def discard(self, value): + """Remove an element. Do not raise an exception if absent.""" + raise NotImplementedError + + def remove(self, value): + """Remove an element. If not a member, raise a KeyError.""" + if value not in self: + raise KeyError(value) + self.discard(value) + + def pop(self): + """Return the popped value. Raise KeyError if empty.""" + it = iter(self) + try: + value = next(it) + except StopIteration: + raise KeyError + self.discard(value) + return value + + def clear(self): + """This is slow (creates N new iterators!) 
but effective.""" + try: + while True: + self.pop() + except KeyError: + pass + + def __ior__(self, it): + for value in it: + self.add(value) + return self + + def __iand__(self, it): + for value in (self - it): + self.discard(value) + return self + + def __ixor__(self, it): + if it is self: + self.clear() + else: + if not isinstance(it, Set): + it = self._from_iterable(it) + for value in it: + if value in self: + self.discard(value) + else: + self.add(value) + return self + + def __isub__(self, it): + if it is self: + self.clear() + else: + for value in it: + self.discard(value) + return self + +MutableSet.register(set) + + +### MAPPINGS ### + + +class Mapping(Sized, Iterable, Container): + + @abstractmethod + def __getitem__(self, key): + raise KeyError + + def get(self, key, default=None): + try: + return self[key] + except KeyError: + return default + + def __contains__(self, key): + try: + self[key] + except KeyError: + return False + else: + return True + + def keys(self): + return KeysView(self) + + def items(self): + return ItemsView(self) + + def values(self): + return ValuesView(self) + + def __eq__(self, other): + if not isinstance(other, Mapping): + return NotImplemented + return dict(self.items()) == dict(other.items()) + + def __ne__(self, other): + return not (self == other) + + +class MappingView(Sized): + + def __init__(self, mapping): + self._mapping = mapping + + def __len__(self): + return len(self._mapping) + + def __repr__(self): + return '{0.__class__.__name__}({0._mapping!r})'.format(self) + + +class KeysView(MappingView, Set): + + @classmethod + def _from_iterable(self, it): + return set(it) + + def __contains__(self, key): + return key in self._mapping + + def __iter__(self): + for key in self._mapping: + yield key + +KeysView.register(dict_keys) + + +class ItemsView(MappingView, Set): + + @classmethod + def _from_iterable(self, it): + return set(it) + + def __contains__(self, item): + key, value = item + try: + v = self._mapping[key] + 
except KeyError: + return False + else: + return v == value + + def __iter__(self): + for key in self._mapping: + yield (key, self._mapping[key]) + +ItemsView.register(dict_items) + + +class ValuesView(MappingView): + + def __contains__(self, value): + for key in self._mapping: + if value == self._mapping[key]: + return True + return False + + def __iter__(self): + for key in self._mapping: + yield self._mapping[key] + +ValuesView.register(dict_values) + + +class MutableMapping(Mapping): + + @abstractmethod + def __setitem__(self, key, value): + raise KeyError + + @abstractmethod + def __delitem__(self, key): + raise KeyError + + __marker = object() + + def pop(self, key, default=__marker): + try: + value = self[key] + except KeyError: + if default is self.__marker: + raise + return default + else: + del self[key] + return value + + def popitem(self): + try: + key = next(iter(self)) + except StopIteration: + raise KeyError + value = self[key] + del self[key] + return key, value + + def clear(self): + try: + while True: + self.popitem() + except KeyError: + pass + + def update(*args, **kwds): + if len(args) > 2: + raise TypeError("update() takes at most 2 positional " + "arguments ({} given)".format(len(args))) + elif not args: + raise TypeError("update() takes at least 1 argument (0 given)") + self = args[0] + other = args[1] if len(args) >= 2 else () + + if isinstance(other, Mapping): + for key in other: + self[key] = other[key] + elif hasattr(other, "keys"): + for key in other.keys(): + self[key] = other[key] + else: + for key, value in other: + self[key] = value + for key, value in kwds.items(): + self[key] = value + + def setdefault(self, key, default=None): + try: + return self[key] + except KeyError: + self[key] = default + return default + +MutableMapping.register(dict) + + +### SEQUENCES ### + + +class Sequence(Sized, Iterable, Container): + + """All the operations on a read-only sequence. 
+ + Concrete subclasses must override __new__ or __init__, + __getitem__, and __len__. + """ + + @abstractmethod + def __getitem__(self, index): + raise IndexError + + def __iter__(self): + i = 0 + try: + while True: + v = self[i] + yield v + i += 1 + except IndexError: + return + + def __contains__(self, value): + for v in self: + if v == value: + return True + return False + + def __reversed__(self): + for i in reversed(range(len(self))): + yield self[i] + + def index(self, value): + for i, v in enumerate(self): + if v == value: + return i + raise ValueError + + def count(self, value): + return sum(1 for v in self if v == value) + +Sequence.register(tuple) +Sequence.register(str) +Sequence.register(range) + + +class ByteString(Sequence): + + """This unifies bytes and bytearray. + + XXX Should add all their methods. + """ + +ByteString.register(bytes) +ByteString.register(bytearray) + + +class MutableSequence(Sequence): + + @abstractmethod + def __setitem__(self, index, value): + raise IndexError + + @abstractmethod + def __delitem__(self, index): + raise IndexError + + @abstractmethod + def insert(self, index, value): + raise IndexError + + def append(self, value): + self.insert(len(self), value) + + def reverse(self): + n = len(self) + for i in range(n//2): + self[i], self[n-i-1] = self[n-i-1], self[i] + + def extend(self, values): + for v in values: + self.append(v) + + def pop(self, index=-1): + v = self[index] + del self[index] + return v + + def remove(self, value): + del self[self.index(value)] + + def __iadd__(self, values): + self.extend(values) + return self + +MutableSequence.register(list) +MutableSequence.register(bytearray) # Multiply inheriting, see ByteString diff --git a/lib-python/3.2/_compat_pickle.py b/lib-python/3.2/_compat_pickle.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_compat_pickle.py @@ -0,0 +1,81 @@ +# This module is used to map the old Python 2 names to the new names used in +# Python 3 for the pickle module. 
This needed to make pickle streams +# generated with Python 2 loadable by Python 3. + +# This is a copy of lib2to3.fixes.fix_imports.MAPPING. We cannot import +# lib2to3 and use the mapping defined there, because lib2to3 uses pickle. +# Thus, this could cause the module to be imported recursively. +IMPORT_MAPPING = { + 'StringIO': 'io', + 'cStringIO': 'io', + 'cPickle': 'pickle', + '__builtin__' : 'builtins', + 'copy_reg': 'copyreg', + 'Queue': 'queue', + 'SocketServer': 'socketserver', + 'ConfigParser': 'configparser', + 'repr': 'reprlib', + 'FileDialog': 'tkinter.filedialog', + 'tkFileDialog': 'tkinter.filedialog', + 'SimpleDialog': 'tkinter.simpledialog', + 'tkSimpleDialog': 'tkinter.simpledialog', + 'tkColorChooser': 'tkinter.colorchooser', + 'tkCommonDialog': 'tkinter.commondialog', + 'Dialog': 'tkinter.dialog', + 'Tkdnd': 'tkinter.dnd', + 'tkFont': 'tkinter.font', + 'tkMessageBox': 'tkinter.messagebox', + 'ScrolledText': 'tkinter.scrolledtext', + 'Tkconstants': 'tkinter.constants', + 'Tix': 'tkinter.tix', + 'ttk': 'tkinter.ttk', + 'Tkinter': 'tkinter', + 'markupbase': '_markupbase', + '_winreg': 'winreg', + 'thread': '_thread', + 'dummy_thread': '_dummy_thread', + 'dbhash': 'dbm.bsd', + 'dumbdbm': 'dbm.dumb', + 'dbm': 'dbm.ndbm', + 'gdbm': 'dbm.gnu', + 'xmlrpclib': 'xmlrpc.client', + 'DocXMLRPCServer': 'xmlrpc.server', + 'SimpleXMLRPCServer': 'xmlrpc.server', + 'httplib': 'http.client', + 'htmlentitydefs' : 'html.entities', + 'HTMLParser' : 'html.parser', + 'Cookie': 'http.cookies', + 'cookielib': 'http.cookiejar', + 'BaseHTTPServer': 'http.server', + 'SimpleHTTPServer': 'http.server', + 'CGIHTTPServer': 'http.server', + 'test.test_support': 'test.support', + 'commands': 'subprocess', + 'UserString' : 'collections', + 'UserList' : 'collections', + 'urlparse' : 'urllib.parse', + 'robotparser' : 'urllib.robotparser', + 'whichdb': 'dbm', + 'anydbm': 'dbm' +} + + +# This contains rename rules that are easy to handle. We ignore the more +# complex stuff (e.g. 
mapping the names in the urllib and types modules). +# These rules should be run before import names are fixed. +NAME_MAPPING = { + ('__builtin__', 'xrange'): ('builtins', 'range'), + ('__builtin__', 'reduce'): ('functools', 'reduce'), + ('__builtin__', 'intern'): ('sys', 'intern'), + ('__builtin__', 'unichr'): ('builtins', 'chr'), + ('__builtin__', 'basestring'): ('builtins', 'str'), + ('__builtin__', 'long'): ('builtins', 'int'), + ('itertools', 'izip'): ('builtins', 'zip'), + ('itertools', 'imap'): ('builtins', 'map'), + ('itertools', 'ifilter'): ('builtins', 'filter'), + ('itertools', 'ifilterfalse'): ('itertools', 'filterfalse'), +} + +# Same, but for 3.x to 2.x +REVERSE_IMPORT_MAPPING = dict((v, k) for (k, v) in IMPORT_MAPPING.items()) +REVERSE_NAME_MAPPING = dict((v, k) for (k, v) in NAME_MAPPING.items()) diff --git a/lib-python/3.2/_dummy_thread.py b/lib-python/3.2/_dummy_thread.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_dummy_thread.py @@ -0,0 +1,155 @@ +"""Drop-in replacement for the thread module. + +Meant to be used as a brain-dead substitute so that threaded code does +not need to be rewritten for when the thread module is not present. + +Suggested usage is:: + + try: + import _thread + except ImportError: + import _dummy_thread as _thread + +""" +# Exports only things specified by thread documentation; +# skipping obsolete synonyms allocate(), start_new(), exit_thread(). +__all__ = ['error', 'start_new_thread', 'exit', 'get_ident', 'allocate_lock', + 'interrupt_main', 'LockType'] + +# A dummy value +TIMEOUT_MAX = 2**31 + +# NOTE: this module can be imported early in the extension building process, +# and so top level imports of other modules should be avoided. Instead, all +# imports are done when needed on a function-by-function basis. Since threads +# are disabled, the import lock should not be an issue anyway (??). 
+ +class error(Exception): + """Dummy implementation of _thread.error.""" + + def __init__(self, *args): + self.args = args + +def start_new_thread(function, args, kwargs={}): + """Dummy implementation of _thread.start_new_thread(). + + Compatibility is maintained by making sure that ``args`` is a + tuple and ``kwargs`` is a dictionary. If an exception is raised + and it is SystemExit (which can be done by _thread.exit()) it is + caught and nothing is done; all other exceptions are printed out + by using traceback.print_exc(). + + If the executed function calls interrupt_main the KeyboardInterrupt will be + raised when the function returns. + + """ + if type(args) != type(tuple()): + raise TypeError("2nd arg must be a tuple") + if type(kwargs) != type(dict()): + raise TypeError("3rd arg must be a dict") + global _main + _main = False + try: + function(*args, **kwargs) + except SystemExit: + pass + except: + import traceback + traceback.print_exc() + _main = True + global _interrupt + if _interrupt: + _interrupt = False + raise KeyboardInterrupt + +def exit(): + """Dummy implementation of _thread.exit().""" + raise SystemExit + +def get_ident(): + """Dummy implementation of _thread.get_ident(). + + Since this module should only be used when _threadmodule is not + available, it is safe to assume that the current process is the + only thread. Thus a constant can be safely returned. + """ + return -1 + +def allocate_lock(): + """Dummy implementation of _thread.allocate_lock().""" + return LockType() + +def stack_size(size=None): + """Dummy implementation of _thread.stack_size().""" + if size is not None: + raise error("setting thread stack size not supported") + return 0 + +class LockType(object): + """Class implementing dummy implementation of _thread.LockType. + + Compatibility is maintained by maintaining self.locked_status + which is a boolean that stores the state of the lock. 
Pickling of + the lock, though, should not be done since if the _thread module is + then used with an unpickled ``lock()`` from here problems could + occur from this class not having atomic methods. + + """ + + def __init__(self): + self.locked_status = False + + def acquire(self, waitflag=None, timeout=-1): + """Dummy implementation of acquire(). + + For blocking calls, self.locked_status is automatically set to + True and returned appropriately based on value of + ``waitflag``. If it is non-blocking, then the value is + actually checked and not set if it is already acquired. This + is all done so that threading.Condition's assert statements + aren't triggered and throw a little fit. + + """ + if waitflag is None or waitflag: + self.locked_status = True + return True + else: + if not self.locked_status: + self.locked_status = True + return True + else: + if timeout > 0: + import time + time.sleep(timeout) + return False + + __enter__ = acquire + + def __exit__(self, typ, val, tb): + self.release() + + def release(self): + """Release the dummy lock.""" + # XXX Perhaps shouldn't actually bother to test? Could lead + # to problems for complex, threaded code. + if not self.locked_status: + raise error + self.locked_status = False + return True + + def locked(self): + return self.locked_status + +# Used to signal that interrupt_main was called in a "thread" +_interrupt = False +# True when not executing in a "thread" +_main = True + +def interrupt_main(): + """Set _interrupt flag to True to have start_new_thread raise + KeyboardInterrupt upon exiting.""" + if _main: + raise KeyboardInterrupt + else: + global _interrupt + _interrupt = True diff --git a/lib-python/3.2/_markupbase.py b/lib-python/3.2/_markupbase.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_markupbase.py @@ -0,0 +1,395 @@ +"""Shared support for scanning document type declarations in HTML and XHTML. + +This module is used as a foundation for the html.parser module. 
It has no +documented public API and should not be used directly. + +""" + +import re + +_declname_match = re.compile(r'[a-zA-Z][-_.a-zA-Z0-9]*\s*').match +_declstringlit_match = re.compile(r'(\'[^\']*\'|"[^"]*")\s*').match +_commentclose = re.compile(r'--\s*>') +_markedsectionclose = re.compile(r']\s*]\s*>') + +# An analysis of the MS-Word extensions is available at +# http://www.planetpublish.com/xmlarena/xap/Thursday/WordtoXML.pdf + +_msmarkedsectionclose = re.compile(r']\s*>') + +del re + + +class ParserBase: + """Parser base class which provides some common support methods used + by the SGML/HTML and XHTML parsers.""" + + def __init__(self): + if self.__class__ is ParserBase: + raise RuntimeError( + "_markupbase.ParserBase must be subclassed") + + def error(self, message): + raise NotImplementedError( + "subclasses of ParserBase must override error()") + + def reset(self): + self.lineno = 1 + self.offset = 0 + + def getpos(self): + """Return current line number and offset.""" + return self.lineno, self.offset + + # Internal -- update line number and offset. This should be + # called for each piece of data exactly once, in order -- in other + # words the concatenation of all the input strings to this + # function should be exactly the entire input. + def updatepos(self, i, j): + if i >= j: + return j + rawdata = self.rawdata + nlines = rawdata.count("\n", i, j) + if nlines: + self.lineno = self.lineno + nlines + pos = rawdata.rindex("\n", i, j) # Should not fail + self.offset = j-(pos+1) + else: + self.offset = self.offset + j-i + return j + + _decl_otherchars = '' + + # Internal -- parse declaration (for use by subclasses). + def parse_declaration(self, i): + # This is some sort of declaration; in "HTML as + # deployed," this should only be the document type + # declaration (""). 
+ # ISO 8879:1986, however, has more complex + # declaration syntax for elements in <!...>, including: + # --comment-- + # [marked section] + # name in the following list: ENTITY, DOCTYPE, ELEMENT, + # ATTLIST, NOTATION, SHORTREF, USEMAP, + # LINKTYPE, LINK, IDLINK, USELINK, SYSTEM + rawdata = self.rawdata + j = i + 2 + assert rawdata[i:j] == "<!", "unexpected call to parse_declaration" + if rawdata[j:j+1] == ">": + # the empty comment <!> + return j + 1 + if rawdata[j:j+1] in ("-", ""): + # Start of comment followed by buffer boundary, + # or just a buffer boundary. + return -1 + # A simple, practical version could look like: ((name|stringlit) S*) + '>' + n = len(rawdata) + if rawdata[j:j+2] == '--': #comment + # Locate --.*-- as the body of the comment + return self.parse_comment(i) + elif rawdata[j] == '[': #marked section + # Locate [statusWord [...arbitrary SGML...]] as the body of the marked section + # Where statusWord is one of TEMP, CDATA, IGNORE, INCLUDE, RCDATA + # Note that this is extended by Microsoft Office "Save as Web" function + # to include [if...] and [endif]. + return self.parse_marked_section(i) + else: #all other declaration elements + decltype, j = self._scan_name(j, i) + if j < 0: + return j + if decltype == "doctype": + self._decl_otherchars = '' + while j < n: + c = rawdata[j] + if c == ">": + # end of declaration syntax + data = rawdata[i+2:j] + if decltype == "doctype": + self.handle_decl(data) + else: + # According to the HTML5 specs sections "8.2.4.44 Bogus + # comment state" and "8.2.4.45 Markup declaration open + # state", a comment token should be emitted. + # Calling unknown_decl provides more flexibility though.
+ self.unknown_decl(data) + return j + 1 + if c in "\"'": + m = _declstringlit_match(rawdata, j) + if not m: + return -1 # incomplete + j = m.end() + elif c in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ": + name, j = self._scan_name(j, i) + elif c in self._decl_otherchars: + j = j + 1 + elif c == "[": + # this could be handled in a separate doctype parser + if decltype == "doctype": + j = self._parse_doctype_subset(j + 1, i) + elif decltype in {"attlist", "linktype", "link", "element"}: + # must tolerate []'d groups in a content model in an element declaration + # also in data attribute specifications of attlist declaration + # also link type declaration subsets in linktype declarations + # also link attribute specification lists in link declarations + self.error("unsupported '[' char in %s declaration" % decltype) + else: + self.error("unexpected '[' char in declaration") + else: + self.error( + "unexpected %r char in declaration" % rawdata[j]) + if j < 0: + return j + return -1 # incomplete + + # Internal -- parse a marked section + # Override this to handle MS-word extension syntax content + def parse_marked_section(self, i, report=1): + rawdata= self.rawdata + assert rawdata[i:i+3] == '<![', "unexpected call to parse_marked_section()" + sectName, j = self._scan_name( i+3, i ) + if j < 0: + return j + if sectName in {"temp", "cdata", "ignore", "include", "rcdata"}: + # look for standard ]]> ending + match= _markedsectionclose.search(rawdata, i+3) + elif sectName in {"if", "else", "endif"}: + # look for MS Office ]> ending + match= _msmarkedsectionclose.search(rawdata, i+3) + else: + self.error('unknown status keyword %r in marked section' % rawdata[i+3:j]) + if not match: + return -1 + if report: + j = match.start(0) + self.unknown_decl(rawdata[i+3: j]) + return match.end(0) + + # Internal -- parse comment, return length or -1 if not terminated + def parse_comment(self, i, report=1): + rawdata = self.rawdata + if rawdata[i:i+4] != ' LOAD_CONST None def f(x): - None + y = None return x asm = disassemble(f) for elem in ('LOAD_GLOBAL',): @@ -67,10 +67,13 @@ self.assertIn(elem, asm) def test_pack_unpack(self): + # On PyPy, "a, b = ..."
is even more optimized, by removing + # the ROT_TWO. But the ROT_TWO is not removed if assigning + # to more complex expressions, so check that. for line, elem in ( ('a, = a,', 'LOAD_CONST',), - ('a, b = a, b', 'ROT_TWO',), - ('a, b, c = a, b, c', 'ROT_THREE',), + ('a[1], b = a, b', 'ROT_TWO',), + ('a, b[2], c = a, b, c', 'ROT_THREE',), ): asm = dis_single(line) self.assertIn(elem, asm) @@ -78,6 +81,8 @@ self.assertNotIn('UNPACK_TUPLE', asm) def test_folding_of_tuples_of_constants(self): + # On CPython, "a,b,c=1,2,3" turns into "a,b,c=" + # but on PyPy, it turns into "a=1;b=2;c=3". for line, elem in ( ('a = 1,2,3', '((1, 2, 3))'), ('("a","b","c")', "(('a', 'b', 'c'))"), @@ -86,7 +91,8 @@ ('((1, 2), 3, 4)', '(((1, 2), 3, 4))'), ): asm = dis_single(line) - self.assertIn(elem, asm) + self.assert_(elem in asm or ( + line == 'a,b,c = 1,2,3' and 'UNPACK_TUPLE' not in asm)) self.assertNotIn('BUILD_TUPLE', asm) # Bug 1053819: Tuple of constants misidentified when presented with: @@ -139,12 +145,15 @@ def test_binary_subscr_on_unicode(self): # valid code get optimized - asm = dis_single('u"foo"[0]') - self.assertIn("(u'f')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) - asm = dis_single('u"\u0061\uffff"[1]') - self.assertIn("(u'\\uffff')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) + # XXX for now we always disable this optimization + # XXX see CPython's issue5057 + if 0: + asm = dis_single('u"foo"[0]') + self.assertIn("(u'f')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + asm = dis_single('u"\u0061\uffff"[1]') + self.assertIn("(u'\\uffff')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) # invalid code doesn't get optimized # out of range diff --git a/lib-python/2.7/test/test_pprint.py b/lib-python/2.7/test/test_pprint.py --- a/lib-python/2.7/test/test_pprint.py +++ b/lib-python/2.7/test/test_pprint.py @@ -233,7 +233,16 @@ frozenset([0, 2]), frozenset([0, 1])])}""" cube = test.test_set.cube(3) - self.assertEqual(pprint.pformat(cube), cube_repr_tgt) + # XXX issues of 
dictionary order, and for the case below, + # order of items in the frozenset([...]) representation. + # Whether we get precisely cube_repr_tgt or not is open + # to implementation-dependent choices (this test probably + # fails horribly in CPython if we tweak the dict order too). + got = pprint.pformat(cube) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cube_repr_tgt) + else: + self.assertEqual(eval(got), cube) cubo_repr_tgt = """\ {frozenset([frozenset([0, 2]), frozenset([0])]): frozenset([frozenset([frozenset([0, 2]), @@ -393,7 +402,11 @@ 2])])])}""" cubo = test.test_set.linegraph(cube) - self.assertEqual(pprint.pformat(cubo), cubo_repr_tgt) + got = pprint.pformat(cubo) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cubo_repr_tgt) + else: + self.assertEqual(eval(got), cubo) def test_depth(self): nested_tuple = (1, (2, (3, (4, (5, 6))))) diff --git a/lib-python/2.7/test/test_pydoc.py b/lib-python/2.7/test/test_pydoc.py --- a/lib-python/2.7/test/test_pydoc.py +++ b/lib-python/2.7/test/test_pydoc.py @@ -267,8 +267,8 @@ testpairs = ( ('i_am_not_here', 'i_am_not_here'), ('test.i_am_not_here_either', 'i_am_not_here_either'), - ('test.i_am_not_here.neither_am_i', 'i_am_not_here.neither_am_i'), - ('i_am_not_here.{}'.format(modname), 'i_am_not_here.{}'.format(modname)), + ('test.i_am_not_here.neither_am_i', 'i_am_not_here'), + ('i_am_not_here.{}'.format(modname), 'i_am_not_here'), ('test.{}'.format(modname), modname), ) @@ -292,8 +292,8 @@ result = run_pydoc(modname) finally: forget(modname) - expected = badimport_pattern % (modname, expectedinmsg) - self.assertEqual(expected, result) + expected = badimport_pattern % (modname, '(.+\\.)?' 
+ expectedinmsg + '(\\..+)?$') + self.assertTrue(re.match(expected, result)) def test_input_strip(self): missing_module = " test.i_am_not_here " diff --git a/lib-python/2.7/test/test_pyexpat.py b/lib-python/2.7/test/test_pyexpat.py --- a/lib-python/2.7/test/test_pyexpat.py +++ b/lib-python/2.7/test/test_pyexpat.py @@ -570,6 +570,9 @@ self.assertEqual(self.n, 4) class MalformedInputText(unittest.TestCase): + # CPython seems to ship its own version of expat, they fixed it on this commit : + # http://svn.python.org/view?revision=74429&view=revision + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test1(self): xml = "\0\r\n" parser = expat.ParserCreate() @@ -579,6 +582,7 @@ except expat.ExpatError as e: self.assertEqual(str(e), 'unclosed token: line 2, column 0') + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test2(self): xml = "\r\n" parser = expat.ParserCreate() diff --git a/lib-python/2.7/test/test_repr.py b/lib-python/2.7/test/test_repr.py --- a/lib-python/2.7/test/test_repr.py +++ b/lib-python/2.7/test/test_repr.py @@ -9,6 +9,7 @@ import unittest from test.test_support import run_unittest, check_py3k_warnings +from test.test_support import check_impl_detail from repr import repr as r # Don't shadow builtin repr from repr import Repr @@ -145,8 +146,11 @@ # Functions eq(repr(hash), '') # Methods - self.assertTrue(repr(''.split).startswith( - '") def test_xrange(self): eq = self.assertEqual @@ -185,7 +189,10 @@ def test_descriptors(self): eq = self.assertEqual # method descriptors - eq(repr(dict.items), "") + if check_impl_detail(cpython=True): + eq(repr(dict.items), "") + elif check_impl_detail(pypy=True): + eq(repr(dict.items), "") # XXX member descriptors # XXX attribute descriptors # XXX slot descriptors @@ -247,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from 
areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_runpy.py b/lib-python/2.7/test/test_runpy.py --- a/lib-python/2.7/test/test_runpy.py +++ b/lib-python/2.7/test/test_runpy.py @@ -5,10 +5,15 @@ import sys import re import tempfile -from test.test_support import verbose, run_unittest, forget +from test.test_support import verbose, run_unittest, forget, check_impl_detail from test.script_helper import (temp_dir, make_script, compile_script, make_pkg, make_zip_script, make_zip_pkg) +if check_impl_detail(pypy=True): + no_lone_pyc_file = True +else: + no_lone_pyc_file = False + from runpy import _run_code, _run_module_code, run_module, run_path # Note: This module can't safely test _run_module_as_main as it @@ -168,13 +173,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 
1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + d2 = run_module(mod_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -190,13 +196,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", pkg_name - d2 = run_module(pkg_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", pkg_name + d2 = run_module(pkg_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, pkg_name) if verbose: print "Package executed successfully" @@ -244,15 +251,17 @@ self.assertIn("sibling", d1) self.assertIn("nephew", d1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name, run_name=run_name) # Read from bytecode - self.assertIn("__package__", d2) - self.assertTrue(d2["__package__"] == pkg_name) - self.assertIn("sibling", d2) - self.assertIn("nephew", d2) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + # Read from bytecode + d2 = run_module(mod_name, run_name=run_name) + self.assertIn("__package__", d2) + 
self.assertTrue(d2["__package__"] == pkg_name) + self.assertIn("sibling", d2) + self.assertIn("nephew", d2) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -345,6 +354,8 @@ script_dir, '') def test_directory_compiled(self): + if no_lone_pyc_file: + return with temp_dir() as script_dir: mod_name = '__main__' script_name = self._make_test_script(script_dir, mod_name) diff --git a/lib-python/2.7/test/test_scope.py b/lib-python/2.7/test/test_scope.py --- a/lib-python/2.7/test/test_scope.py +++ b/lib-python/2.7/test/test_scope.py @@ -1,6 +1,6 @@ import unittest from test.test_support import check_syntax_error, check_py3k_warnings, \ - check_warnings, run_unittest + check_warnings, run_unittest, gc_collect class ScopeTests(unittest.TestCase): @@ -432,6 +432,7 @@ for i in range(100): f1() + gc_collect() self.assertEqual(Foo.count, 0) diff --git a/lib-python/2.7/test/test_set.py b/lib-python/2.7/test/test_set.py --- a/lib-python/2.7/test/test_set.py +++ b/lib-python/2.7/test/test_set.py @@ -309,6 +309,7 @@ fo.close() test_support.unlink(test_support.TESTFN) + @test_support.impl_detail(pypy=False) def test_do_not_rehash_dict_keys(self): n = 10 d = dict.fromkeys(map(HashCountingInt, xrange(n))) @@ -559,6 +560,7 @@ p = weakref.proxy(s) self.assertEqual(str(p), str(s)) s = None + test_support.gc_collect() self.assertRaises(ReferenceError, str, p) # C API test only available in a debug build @@ -590,6 +592,7 @@ s.__init__(self.otherword) self.assertEqual(s, set(self.word)) + @test_support.impl_detail() def test_singleton_empty_frozenset(self): f = frozenset() efs = [frozenset(), frozenset([]), frozenset(()), frozenset(''), @@ -770,9 +773,10 @@ for v in self.set: self.assertIn(v, self.values) setiter = iter(self.set) - # note: __length_hint__ is an internal undocumented API, - # don't rely on it in your own programs - self.assertEqual(setiter.__length_hint__(), 
len(self.set)) + if test_support.check_impl_detail(): + # note: __length_hint__ is an internal undocumented API, + # don't rely on it in your own programs + self.assertEqual(setiter.__length_hint__(), len(self.set)) def test_pickling(self): p = pickle.dumps(self.set) @@ -1564,7 +1568,7 @@ for meth in (s.union, s.intersection, s.difference, s.symmetric_difference, s.isdisjoint): for g in (G, I, Ig, L, R): expected = meth(data) - actual = meth(G(data)) + actual = meth(g(data)) if isinstance(expected, bool): self.assertEqual(actual, expected) else: diff --git a/lib-python/2.7/test/test_sets.py b/lib-python/2.7/test/test_sets.py --- a/lib-python/2.7/test/test_sets.py +++ b/lib-python/2.7/test/test_sets.py @@ -686,7 +686,9 @@ set_list = sorted(self.set) self.assertEqual(len(dup_list), len(set_list)) for i, el in enumerate(dup_list): - self.assertIs(el, set_list[i]) + # Object identity is not guaranteed for immutable objects, so we + # can't use assertIs here. + self.assertEqual(el, set_list[i]) def test_deep_copy(self): dup = copy.deepcopy(self.set) diff --git a/lib-python/2.7/test/test_site.py b/lib-python/2.7/test/test_site.py --- a/lib-python/2.7/test/test_site.py +++ b/lib-python/2.7/test/test_site.py @@ -226,6 +226,10 @@ self.assertEqual(len(dirs), 1) wanted = os.path.join('xoxo', 'Lib', 'site-packages') self.assertEqual(dirs[0], wanted) + elif '__pypy__' in sys.builtin_module_names: + self.assertEquals(len(dirs), 1) + wanted = os.path.join('xoxo', 'site-packages') + self.assertEquals(dirs[0], wanted) elif os.sep == '/': self.assertEqual(len(dirs), 2) wanted = os.path.join('xoxo', 'lib', 'python' + sys.version[:3], diff --git a/lib-python/2.7/test/test_socket.py b/lib-python/2.7/test/test_socket.py --- a/lib-python/2.7/test/test_socket.py +++ b/lib-python/2.7/test/test_socket.py @@ -252,6 +252,7 @@ self.assertEqual(p.fileno(), s.fileno()) s.close() s = None + test_support.gc_collect() try: p.fileno() except ReferenceError: @@ -285,32 +286,34 @@ s.sendto(u'\u2620',
sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None) - self.assertIn('not NoneType', str(cm.exception)) + self.assertIn('NoneType', str(cm.exception)) # 3 args with self.assertRaises(UnicodeEncodeError): s.sendto(u'\u2620', 0, sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, 0, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, None) - self.assertIn('not NoneType', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('not NoneType', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 'bar', sockname) - self.assertIn('an integer is required', str(cm.exception)) + self.assertIn('integer', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None, None) - self.assertIn('an integer is required', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('an integer is required', str(cm.exception)) # wrong number of args with self.assertRaises(TypeError) as cm: s.sendto('foo') - self.assertIn('(1 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, sockname, 4) - self.assertIn('(4 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) def testCrucialConstants(self): @@ -385,10 +388,10 @@ socket.htonl(k) socket.htons(k) for k in bad_values: - self.assertRaises(OverflowError, socket.ntohl, k) - self.assertRaises(OverflowError, socket.ntohs, k) - self.assertRaises(OverflowError, socket.htonl, k) - self.assertRaises(OverflowError, socket.htons, k) + self.assertRaises((OverflowError, ValueError), socket.ntohl, k) + self.assertRaises((OverflowError, ValueError), socket.ntohs, k) + 
self.assertRaises((OverflowError, ValueError), socket.htonl, k) + self.assertRaises((OverflowError, ValueError), socket.htons, k) def testGetServBy(self): eq = self.assertEqual @@ -428,8 +431,8 @@ if udpport is not None: eq(socket.getservbyport(udpport, 'udp'), service) # Make sure getservbyport does not accept out of range ports. - self.assertRaises(OverflowError, socket.getservbyport, -1) - self.assertRaises(OverflowError, socket.getservbyport, 65536) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, -1) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, 65536) def testDefaultTimeout(self): # Testing default timeout @@ -608,8 +611,8 @@ neg_port = port - 65536 sock = socket.socket() try: - self.assertRaises(OverflowError, sock.bind, (host, big_port)) - self.assertRaises(OverflowError, sock.bind, (host, neg_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, big_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, neg_port)) sock.bind((host, port)) finally: sock.close() @@ -1309,6 +1312,7 @@ closed = False def flush(self): pass def close(self): self.closed = True + def _decref_socketios(self): pass # must not close unless we request it: the original use of _fileobject # by module socket requires that the underlying socket not be closed until diff --git a/lib-python/2.7/test/test_sort.py b/lib-python/2.7/test/test_sort.py --- a/lib-python/2.7/test/test_sort.py +++ b/lib-python/2.7/test/test_sort.py @@ -140,7 +140,10 @@ return random.random() < 0.5 L = [C() for i in range(50)] - self.assertRaises(ValueError, L.sort) + try: + L.sort() + except ValueError: + pass def test_cmpNone(self): # Testing None as a comparison function. @@ -150,8 +153,10 @@ L.sort(None) self.assertEqual(L, range(50)) + @test_support.impl_detail(pypy=False) def test_undetected_mutation(self): # Python 2.4a1 did not always detect mutation + # So does pypy... 
memorywaster = [] for i in range(20): def mutating_cmp(x, y): @@ -226,7 +231,10 @@ def __del__(self): del data[:] data[:] = range(20) - self.assertRaises(ValueError, data.sort, key=SortKiller) + try: + data.sort(key=SortKiller) + except ValueError: + pass def test_key_with_mutating_del_and_exception(self): data = range(10) diff --git a/lib-python/2.7/test/test_ssl.py b/lib-python/2.7/test/test_ssl.py --- a/lib-python/2.7/test/test_ssl.py +++ b/lib-python/2.7/test/test_ssl.py @@ -881,6 +881,8 @@ c = socket.socket() c.connect((HOST, port)) listener_gone.wait() + # XXX why is it necessary? + test_support.gc_collect() try: ssl_sock = ssl.wrap_socket(c) except IOError: @@ -1330,10 +1332,8 @@ def test_main(verbose=False): global CERTFILE, SVN_PYTHON_ORG_ROOT_CERT - CERTFILE = os.path.join(os.path.dirname(__file__) or os.curdir, - "keycert.pem") - SVN_PYTHON_ORG_ROOT_CERT = os.path.join( - os.path.dirname(__file__) or os.curdir, + CERTFILE = test_support.findfile("keycert.pem") + SVN_PYTHON_ORG_ROOT_CERT = test_support.findfile( "https_svn_python_org_root.pem") if (not os.path.exists(CERTFILE) or diff --git a/lib-python/2.7/test/test_str.py b/lib-python/2.7/test/test_str.py --- a/lib-python/2.7/test/test_str.py +++ b/lib-python/2.7/test/test_str.py @@ -422,10 +422,11 @@ for meth in ('foo'.startswith, 'foo'.endswith): with self.assertRaises(TypeError) as cm: meth(['f']) - exc = str(cm.exception) - self.assertIn('unicode', exc) - self.assertIn('str', exc) - self.assertIn('tuple', exc) + if test_support.check_impl_detail(): + exc = str(cm.exception) + self.assertIn('unicode', exc) + self.assertIn('str', exc) + self.assertIn('tuple', exc) def test_main(): test_support.run_unittest(StrTest) diff --git a/lib-python/2.7/test/test_struct.py b/lib-python/2.7/test/test_struct.py --- a/lib-python/2.7/test/test_struct.py +++ b/lib-python/2.7/test/test_struct.py @@ -535,7 +535,8 @@ @unittest.skipUnless(IS32BIT, "Specific to 32bit machines") def test_crasher(self): - 
self.assertRaises(MemoryError, struct.pack, "357913941c", "a") + self.assertRaises((MemoryError, struct.error), struct.pack, + "357913941c", "a") def test_count_overflow(self): hugecount = '{}b'.format(sys.maxsize+1) diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/lib-python/2.7/test/test_support.py b/lib-python/2.7/test/test_support.py --- a/lib-python/2.7/test/test_support.py +++ b/lib-python/2.7/test/test_support.py @@ -431,16 +431,20 @@ rmtree(name) -def findfile(file, here=__file__, subdir=None): +def findfile(file, here=None, subdir=None): """Try to find a file on sys.path and the working directory. 
If it is not found the argument passed to the function is returned (this does not necessarily signal failure; could still be the legitimate path).""" + import test if os.path.isabs(file): return file if subdir is not None: file = os.path.join(subdir, file) path = sys.path - path = [os.path.dirname(here)] + path + if here is None: + path = test.__path__ + path + else: + path = [os.path.dirname(here)] + path for dn in path: fn = os.path.join(dn, file) if os.path.exists(fn): return fn @@ -1050,15 +1054,33 @@ guards, default = _parse_guards(guards) return guards.get(platform.python_implementation().lower(), default) +# ---------------------------------- +# PyPy extension: you can run:: +# python ..../test_foo.py --pdb +# to get a pdb prompt in case of exceptions +ResultClass = unittest.TextTestRunner.resultclass + +class TestResultWithPdb(ResultClass): + + def addError(self, testcase, exc_info): + ResultClass.addError(self, testcase, exc_info) + if '--pdb' in sys.argv: + import pdb, traceback + traceback.print_tb(exc_info[2]) + pdb.post_mortem(exc_info[2]) + +# ---------------------------------- def _run_suite(suite): """Run tests from a unittest.TestSuite-derived class.""" if verbose: - runner = unittest.TextTestRunner(sys.stdout, verbosity=2) + runner = unittest.TextTestRunner(sys.stdout, verbosity=2, + resultclass=TestResultWithPdb) else: runner = BasicTestRunner() + result = runner.run(suite) if not result.wasSuccessful(): if len(result.errors) == 1 and not result.failures: @@ -1071,6 +1093,34 @@ err += "; run in verbose mode for details" raise TestFailed(err) +# ---------------------------------- +# PyPy extension: you can run:: +# python ..../test_foo.py --filter bar +# to run only the test cases whose name contains bar + +def filter_maybe(suite): + try: + i = sys.argv.index('--filter') + filter = sys.argv[i+1] + except (ValueError, IndexError): + return suite + tests = [] + for test in linearize_suite(suite): + if filter in test._testMethodName: + 
tests.append(test) + return unittest.TestSuite(tests) + +def linearize_suite(suite_or_test): + try: + it = iter(suite_or_test) + except TypeError: + yield suite_or_test + return + for subsuite in it: + for item in linearize_suite(subsuite): + yield item + +# ---------------------------------- def run_unittest(*classes): """Run tests from unittest.TestCase-derived classes.""" @@ -1086,6 +1136,7 @@ suite.addTest(cls) else: suite.addTest(unittest.makeSuite(cls)) + suite = filter_maybe(suite) _run_suite(suite) diff --git a/lib-python/2.7/test/test_syntax.py b/lib-python/2.7/test/test_syntax.py --- a/lib-python/2.7/test/test_syntax.py +++ b/lib-python/2.7/test/test_syntax.py @@ -5,7 +5,8 @@ >>> def f(x): ... global x Traceback (most recent call last): -SyntaxError: name 'x' is local and global (, line 1) + File "", line 1 +SyntaxError: name 'x' is local and global The tests are all raise SyntaxErrors. They were created by checking each C call that raises SyntaxError. There are several modules that @@ -375,7 +376,7 @@ In 2.5 there was a missing exception and an assert was triggered in a debug build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514 - >>> while 1: + >>> while 1: # doctest:+SKIP ... while 2: ... while 3: ... while 4: diff --git a/lib-python/2.7/test/test_sys.py b/lib-python/2.7/test/test_sys.py --- a/lib-python/2.7/test/test_sys.py +++ b/lib-python/2.7/test/test_sys.py @@ -264,6 +264,7 @@ self.assertEqual(sys.getdlopenflags(), oldflags+1) sys.setdlopenflags(oldflags) + @test.test_support.impl_detail("reference counting") def test_refcount(self): # n here must be a global in order for this test to pass while # tracing with a python function. Tracing calls PyFrame_FastToLocals @@ -287,7 +288,7 @@ is sys._getframe().f_code ) - # sys._current_frames() is a CPython-only gimmick. 
+ @test.test_support.impl_detail("current_frames") def test_current_frames(self): have_threads = True try: @@ -383,7 +384,10 @@ self.assertEqual(len(sys.float_info), 11) self.assertEqual(sys.float_info.radix, 2) self.assertEqual(len(sys.long_info), 2) - self.assertTrue(sys.long_info.bits_per_digit % 5 == 0) + if test.test_support.check_impl_detail(cpython=True): + self.assertTrue(sys.long_info.bits_per_digit % 5 == 0) + else: + self.assertTrue(sys.long_info.bits_per_digit >= 1) self.assertTrue(sys.long_info.sizeof_digit >= 1) self.assertEqual(type(sys.long_info.bits_per_digit), int) self.assertEqual(type(sys.long_info.sizeof_digit), int) @@ -432,6 +436,7 @@ self.assertEqual(type(getattr(sys.flags, attr)), int, attr) self.assertTrue(repr(sys.flags)) + @test.test_support.impl_detail("sys._clear_type_cache") def test_clear_type_cache(self): sys._clear_type_cache() @@ -473,6 +478,7 @@ p.wait() self.assertIn(executable, ["''", repr(sys.executable)]) +@unittest.skipUnless(test.test_support.check_impl_detail(), "sys.getsizeof()") class SizeofTest(unittest.TestCase): TPFLAGS_HAVE_GC = 1<<14 diff --git a/lib-python/2.7/test/test_sys_settrace.py b/lib-python/2.7/test/test_sys_settrace.py --- a/lib-python/2.7/test/test_sys_settrace.py +++ b/lib-python/2.7/test/test_sys_settrace.py @@ -213,12 +213,16 @@ "finally" def generator_example(): # any() will leave the generator before its end - x = any(generator_function()) + x = any(generator_function()); gc.collect() # the following lines were not traced for x in range(10): y = x +# On CPython, when the generator is decref'ed to zero, we see the trace +# for the "finally:" portion. On PyPy, we don't see it before the next +# garbage collection. That's why we put gc.collect() on the same line above.
+ generator_example.events = ([(0, 'call'), (2, 'line'), (-6, 'call'), @@ -282,11 +286,11 @@ self.compare_events(func.func_code.co_firstlineno, tracer.events, func.events) - def set_and_retrieve_none(self): + def test_set_and_retrieve_none(self): sys.settrace(None) assert sys.gettrace() is None - def set_and_retrieve_func(self): + def test_set_and_retrieve_func(self): def fn(*args): pass @@ -323,17 +327,24 @@ self.run_test(tighterloop_example) def test_13_genexp(self): - self.run_test(generator_example) - # issue1265: if the trace function contains a generator, - # and if the traced function contains another generator - # that is not completely exhausted, the trace stopped. - # Worse: the 'finally' clause was not invoked. - tracer = Tracer() - sys.settrace(tracer.traceWithGenexp) - generator_example() - sys.settrace(None) - self.compare_events(generator_example.__code__.co_firstlineno, - tracer.events, generator_example.events) + if self.using_gc: + test_support.gc_collect() + gc.enable() + try: + self.run_test(generator_example) + # issue1265: if the trace function contains a generator, + # and if the traced function contains another generator + # that is not completely exhausted, the trace stopped. + # Worse: the 'finally' clause was not invoked. 
+ tracer = Tracer() + sys.settrace(tracer.traceWithGenexp) + generator_example() + sys.settrace(None) + self.compare_events(generator_example.__code__.co_firstlineno, + tracer.events, generator_example.events) + finally: + if self.using_gc: + gc.disable() def test_14_onliner_if(self): def onliners(): diff --git a/lib-python/2.7/test/test_sysconfig.py b/lib-python/2.7/test/test_sysconfig.py --- a/lib-python/2.7/test/test_sysconfig.py +++ b/lib-python/2.7/test/test_sysconfig.py @@ -209,13 +209,22 @@ self.assertEqual(get_platform(), 'macosx-10.4-fat64') - for arch in ('ppc', 'i386', 'x86_64', 'ppc64'): + for arch in ('ppc', 'i386', 'ppc64', 'x86_64'): get_config_vars()['CFLAGS'] = ('-arch %s -isysroot ' '/Developer/SDKs/MacOSX10.4u.sdk ' '-fno-strict-aliasing -fno-common ' '-dynamic -DNDEBUG -g -O3'%(arch,)) self.assertEqual(get_platform(), 'macosx-10.4-%s'%(arch,)) + + # macosx with ARCHFLAGS set and empty _CONFIG_VARS + os.environ['ARCHFLAGS'] = '-arch i386' + sysconfig._CONFIG_VARS = None + + # this will attempt to recreate the _CONFIG_VARS based on environment + # variables; used to check a problem with the PyPy's _init_posix + # implementation; see: issue 705 + get_config_vars() # linux debian sarge os.name = 'posix' @@ -235,7 +244,7 @@ def test_get_scheme_names(self): wanted = ('nt', 'nt_user', 'os2', 'os2_home', 'osx_framework_user', - 'posix_home', 'posix_prefix', 'posix_user') + 'posix_home', 'posix_prefix', 'posix_user', 'pypy') self.assertEqual(get_scheme_names(), wanted) def test_symlink(self): diff --git a/lib-python/2.7/test/test_tarfile.py b/lib-python/2.7/test/test_tarfile.py --- a/lib-python/2.7/test/test_tarfile.py +++ b/lib-python/2.7/test/test_tarfile.py @@ -169,6 +169,7 @@ except tarfile.ReadError: self.fail("tarfile.open() failed on empty archive") self.assertListEqual(tar.getmembers(), []) + tar.close() def test_null_tarfile(self): # Test for issue6123: Allow opening empty archives. 
@@ -207,16 +208,21 @@ fobj = open(self.tarname, "rb") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, os.path.abspath(fobj.name)) + tar.close() def test_no_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) self.assertRaises(AttributeError, getattr, fobj, "name") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, None) def test_empty_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) fobj.name = "" tar = tarfile.open(fileobj=fobj, mode=self.mode) @@ -515,6 +521,7 @@ self.tar = tarfile.open(self.tarname, mode=self.mode, encoding="iso8859-1") tarinfo = self.tar.getmember("pax/umlauts-�������") self._test_member(tarinfo, size=7011, chksum=md5_regtype) + self.tar.close() class LongnameTest(ReadTest): @@ -675,6 +682,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.rmdir(path) @@ -692,6 +700,7 @@ tar.gettarinfo(target) tarinfo = tar.gettarinfo(link) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(target) os.remove(link) @@ -704,6 +713,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(path) @@ -722,6 +732,7 @@ tar.add(dstname) os.chdir(cwd) self.assertTrue(tar.getnames() == [], "added the archive to itself") + tar.close() def test_exclude(self): tempdir = os.path.join(TEMPDIR, "exclude") @@ -742,6 +753,7 @@ tar = tarfile.open(tmpname, "r") self.assertEqual(len(tar.getmembers()), 1) self.assertEqual(tar.getnames()[0], "empty_dir") + tar.close() finally: shutil.rmtree(tempdir) @@ -947,7 +959,9 @@ fobj.close() elif self.mode.endswith("bz2"): dec = bz2.BZ2Decompressor() - data = open(tmpname, "rb").read() + f = open(tmpname, 
"rb") + data = f.read() + f.close() data = dec.decompress(data) self.assertTrue(len(dec.unused_data) == 0, "found trailing data") @@ -1026,6 +1040,7 @@ "unable to read longname member") self.assertEqual(tarinfo.linkname, member.linkname, "unable to read longname member") + tar.close() def test_longname_1023(self): self._test(("longnam/" * 127) + "longnam") @@ -1118,6 +1133,7 @@ else: n = tar.getmembers()[0].name self.assertTrue(name == n, "PAX longname creation failed") + tar.close() def test_pax_global_header(self): pax_headers = { @@ -1146,6 +1162,7 @@ tarfile.PAX_NUMBER_FIELDS[key](val) except (TypeError, ValueError): self.fail("unable to convert pax header field") + tar.close() def test_pax_extended_header(self): # The fields from the pax header have priority over the @@ -1165,6 +1182,7 @@ self.assertEqual(t.pax_headers, pax_headers) self.assertEqual(t.name, "foo") self.assertEqual(t.uid, 123) + tar.close() class UstarUnicodeTest(unittest.TestCase): @@ -1208,6 +1226,7 @@ tarinfo.name = "foo" tarinfo.uname = u"���" self.assertRaises(UnicodeError, tar.addfile, tarinfo) + tar.close() def test_unicode_argument(self): tar = tarfile.open(tarname, "r", encoding="iso8859-1", errors="strict") @@ -1262,6 +1281,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="ascii", errors=handler) self.assertEqual(tar.getnames()[0], name) + tar.close() self.assertRaises(UnicodeError, tarfile.open, tmpname, encoding="ascii", errors="strict") @@ -1274,6 +1294,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="iso8859-1", errors="utf-8") self.assertEqual(tar.getnames()[0], "���/" + u"�".encode("utf8")) + tar.close() class AppendTest(unittest.TestCase): @@ -1301,6 +1322,7 @@ def _test(self, names=["bar"], fileobj=None): tar = tarfile.open(self.tarname, fileobj=fileobj) self.assertEqual(tar.getnames(), names) + tar.close() def test_non_existing(self): self._add_testfile() @@ -1319,7 +1341,9 @@ def test_fileobj(self): self._create_testtar() - data = 
open(self.tarname).read() + f = open(self.tarname) + data = f.read() + f.close() fobj = StringIO.StringIO(data) self._add_testfile(fobj) fobj.seek(0) @@ -1345,7 +1369,9 @@ # Append mode is supposed to fail if the tarfile to append to # does not end with a zero block. def _test_error(self, data): - open(self.tarname, "wb").write(data) + f = open(self.tarname, "wb") + f.write(data) + f.close() self.assertRaises(tarfile.ReadError, self._add_testfile) def test_null(self): diff --git a/lib-python/2.7/test/test_tempfile.py b/lib-python/2.7/test/test_tempfile.py --- a/lib-python/2.7/test/test_tempfile.py +++ b/lib-python/2.7/test/test_tempfile.py @@ -23,8 +23,8 @@ # TEST_FILES may need to be tweaked for systems depending on the maximum # number of files that can be opened at one time (see ulimit -n) -if sys.platform in ('openbsd3', 'openbsd4'): - TEST_FILES = 48 +if sys.platform.startswith("openbsd"): + TEST_FILES = 64 # ulimit -n defaults to 128 for normal users else: TEST_FILES = 100 @@ -244,6 +244,7 @@ dir = tempfile.mkdtemp() try: self.do_create(dir=dir).write("blat") + test_support.gc_collect() finally: os.rmdir(dir) @@ -528,12 +529,15 @@ self.do_create(suf="b") self.do_create(pre="a", suf="b") self.do_create(pre="aa", suf=".txt") + test_support.gc_collect() def test_many(self): # mktemp can choose many usable file names (stochastic) extant = range(TEST_FILES) for i in extant: extant[i] = self.do_create(pre="aa") + del extant + test_support.gc_collect() ## def test_warning(self): ## # mktemp issues a warning when used diff --git a/lib-python/2.7/test/test_thread.py b/lib-python/2.7/test/test_thread.py --- a/lib-python/2.7/test/test_thread.py +++ b/lib-python/2.7/test/test_thread.py @@ -128,6 +128,7 @@ del task while not done: time.sleep(0.01) + test_support.gc_collect() self.assertEqual(thread._count(), orig) diff --git a/lib-python/2.7/test/test_threading.py b/lib-python/2.7/test/test_threading.py --- a/lib-python/2.7/test/test_threading.py +++ 
b/lib-python/2.7/test/test_threading.py @@ -161,6 +161,7 @@ # PyThreadState_SetAsyncExc() is a CPython-only gimmick, not (currently) # exposed at the Python level. This test relies on ctypes to get at it. + @test.test_support.cpython_only def test_PyThreadState_SetAsyncExc(self): try: import ctypes @@ -266,6 +267,7 @@ finally: threading._start_new_thread = _start_new_thread + @test.test_support.cpython_only def test_finalize_runnning_thread(self): # Issue 1402: the PyGILState_Ensure / _Release functions may be called # very late on python exit: on deallocation of a running thread for @@ -383,6 +385,7 @@ finally: sys.setcheckinterval(old_interval) + @test.test_support.cpython_only def test_no_refcycle_through_target(self): class RunSelfFunction(object): def __init__(self, should_raise): @@ -425,6 +428,9 @@ def joiningfunc(mainthread): mainthread.join() print 'end of thread' + # stdout is fully buffered because not a tty, we have to flush + # before exit. + sys.stdout.flush() \n""" + script p = subprocess.Popen([sys.executable, "-c", script], stdout=subprocess.PIPE) diff --git a/lib-python/2.7/test/test_threading_local.py b/lib-python/2.7/test/test_threading_local.py --- a/lib-python/2.7/test/test_threading_local.py +++ b/lib-python/2.7/test/test_threading_local.py @@ -173,8 +173,9 @@ obj = cls() obj.x = 5 self.assertEqual(obj.__dict__, {'x': 5}) - with self.assertRaises(AttributeError): - obj.__dict__ = {} + if test_support.check_impl_detail(): + with self.assertRaises(AttributeError): + obj.__dict__ = {} with self.assertRaises(AttributeError): del obj.__dict__ diff --git a/lib-python/2.7/test/test_traceback.py b/lib-python/2.7/test/test_traceback.py --- a/lib-python/2.7/test/test_traceback.py +++ b/lib-python/2.7/test/test_traceback.py @@ -5,7 +5,8 @@ import sys import unittest from imp import reload -from test.test_support import run_unittest, is_jython, Error +from test.test_support import run_unittest, Error +from test.test_support import impl_detail, 
check_impl_detail import traceback @@ -49,10 +50,8 @@ self.assertTrue(err[2].count('\n') == 1) # and no additional newline self.assertTrue(err[1].find("+") == err[2].find("^")) # in the right place + @impl_detail("other implementations may add a caret (why shouldn't they?)") def test_nocaret(self): - if is_jython: - # jython adds a caret in this case (why shouldn't it?) - return err = self.get_exception_format(self.syntax_error_without_caret, SyntaxError) self.assertTrue(len(err) == 3) @@ -63,8 +62,11 @@ IndentationError) self.assertTrue(len(err) == 4) self.assertTrue(err[1].strip() == "print 2") - self.assertIn("^", err[2]) - self.assertTrue(err[1].find("2") == err[2].find("^")) + if check_impl_detail(): + # on CPython, there is a "^" at the end of the line + # on PyPy, there is a "^" too, but at the start, more logically + self.assertIn("^", err[2]) + self.assertTrue(err[1].find("2") == err[2].find("^")) def test_bug737473(self): import os, tempfile, time @@ -74,7 +76,8 @@ try: sys.path.insert(0, testdir) testfile = os.path.join(testdir, 'test_bug737473.py') - print >> open(testfile, 'w'), """ + with open(testfile, 'w') as f: + print >> f, """ def test(): raise ValueError""" @@ -96,7 +99,8 @@ # three seconds are needed for this test to pass reliably :-( time.sleep(4) - print >> open(testfile, 'w'), """ + with open(testfile, 'w') as f: + print >> f, """ def test(): raise NotImplementedError""" reload(test_bug737473) diff --git a/lib-python/2.7/test/test_types.py b/lib-python/2.7/test/test_types.py --- a/lib-python/2.7/test/test_types.py +++ b/lib-python/2.7/test/test_types.py @@ -1,7 +1,8 @@ # Python test set -- part 6, built-in types from test.test_support import run_unittest, have_unicode, run_with_locale, \ - check_py3k_warnings + check_py3k_warnings, \ + impl_detail, check_impl_detail import unittest import sys import locale @@ -289,9 +290,14 @@ # array.array() returns an object that does not implement a char buffer, # something which int() uses for 
conversion. import array - try: int(buffer(array.array('c'))) + try: int(buffer(array.array('c', '5'))) except TypeError: pass - else: self.fail("char buffer (at C level) not working") + else: + if check_impl_detail(): + self.fail("char buffer (at C level) not working") + #else: + # it works on PyPy, which does not have the distinction + # between char buffer and binary buffer. XXX fine enough? def test_int__format__(self): def test(i, format_spec, result): @@ -741,6 +747,7 @@ for code in 'xXobns': self.assertRaises(ValueError, format, 0, ',' + code) + @impl_detail("the types' internal size attributes are CPython-only") def test_internal_sizes(self): self.assertGreater(object.__basicsize__, 0) self.assertGreater(tuple.__itemsize__, 0) diff --git a/lib-python/2.7/test/test_unicode.py b/lib-python/2.7/test/test_unicode.py --- a/lib-python/2.7/test/test_unicode.py +++ b/lib-python/2.7/test/test_unicode.py @@ -448,10 +448,11 @@ meth('\xff') with self.assertRaises(TypeError) as cm: meth(['f']) - exc = str(cm.exception) - self.assertIn('unicode', exc) - self.assertIn('str', exc) - self.assertIn('tuple', exc) + if test_support.check_impl_detail(): + exc = str(cm.exception) + self.assertIn('unicode', exc) + self.assertIn('str', exc) + self.assertIn('tuple', exc) @test_support.run_with_locale('LC_ALL', 'de_DE', 'fr_FR') def test_format_float(self): @@ -1062,7 +1063,8 @@ # to take a 64-bit long, this test should apply to all platforms. if sys.maxint > (1 << 32) or struct.calcsize('P') != 4: return - self.assertRaises(OverflowError, u't\tt\t'.expandtabs, sys.maxint) + self.assertRaises((OverflowError, MemoryError), + u't\tt\t'.expandtabs, sys.maxint) def test__format__(self): def test(value, format, expected): diff --git a/lib-python/2.7/test/test_unicodedata.py b/lib-python/2.7/test/test_unicodedata.py --- a/lib-python/2.7/test/test_unicodedata.py +++ b/lib-python/2.7/test/test_unicodedata.py @@ -233,10 +233,12 @@ # been loaded in this process. 
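Several hunks above hide CPython-specific assertions (exact exception wording, `__basicsize__`-style internals) behind `test_support.check_impl_detail()`. A self-contained approximation of that guard — `platform.python_implementation()` stands in for the real helper, which the tests import from `test_support`:

```python
import platform

def check_impl_detail(cpython=True):
    # True only when running on the implementation whose internals the
    # guarded assertion relies on. This is an approximation: the real
    # test_support helper also knows about Jython and IronPython.
    return (platform.python_implementation() == 'CPython') == cpython

message = None
try:
    int([])   # raises TypeError on every implementation
except TypeError as exc:
    if check_impl_detail():
        # Only CPython promises exact message wording, so only record
        # (and later check) it there.
        message = str(exc)

# The guard is exclusive: exactly one of these is true per interpreter.
assert check_impl_detail(cpython=True) != check_impl_detail(cpython=False)
```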
popen = subprocess.Popen(args, stderr=subprocess.PIPE) popen.wait() - self.assertEqual(popen.returncode, 1) - error = "SyntaxError: (unicode error) \N escapes not supported " \ - "(can't load unicodedata module)" - self.assertIn(error, popen.stderr.read()) + self.assertIn(popen.returncode, [0, 1]) # at least it did not segfault + if test.test_support.check_impl_detail(): + self.assertEqual(popen.returncode, 1) + error = "SyntaxError: (unicode error) \N escapes not supported " \ + "(can't load unicodedata module)" + self.assertIn(error, popen.stderr.read()) def test_decimal_numeric_consistent(self): # Test that decimal and numeric are consistent, diff --git a/lib-python/2.7/test/test_unpack.py b/lib-python/2.7/test/test_unpack.py --- a/lib-python/2.7/test/test_unpack.py +++ b/lib-python/2.7/test/test_unpack.py @@ -62,14 +62,14 @@ >>> a, b = t Traceback (most recent call last): ... - ValueError: too many values to unpack + ValueError: expected length 2, got 3 Unpacking tuple of wrong size >>> a, b = l Traceback (most recent call last): ... 
- ValueError: too many values to unpack + ValueError: expected length 2, got 3 Unpacking sequence too short diff --git a/lib-python/2.7/test/test_urllib2.py b/lib-python/2.7/test/test_urllib2.py --- a/lib-python/2.7/test/test_urllib2.py +++ b/lib-python/2.7/test/test_urllib2.py @@ -307,6 +307,9 @@ def getresponse(self): return MockHTTPResponse(MockFile(), {}, 200, "OK") + def close(self): + pass + class MockHandler: # useful for testing handler machinery # see add_ordered_mock_handlers() docstring diff --git a/lib-python/2.7/test/test_warnings.py b/lib-python/2.7/test/test_warnings.py --- a/lib-python/2.7/test/test_warnings.py +++ b/lib-python/2.7/test/test_warnings.py @@ -355,7 +355,8 @@ # test_support.import_fresh_module utility function def test_accelerated(self): self.assertFalse(original_warnings is self.module) - self.assertFalse(hasattr(self.module.warn, 'func_code')) + self.assertFalse(hasattr(self.module.warn, 'func_code') and + hasattr(self.module.warn.func_code, 'co_filename')) class PyWarnTests(BaseTest, WarnTests): module = py_warnings @@ -364,7 +365,8 @@ # test_support.import_fresh_module utility function def test_pure_python(self): self.assertFalse(original_warnings is self.module) - self.assertTrue(hasattr(self.module.warn, 'func_code')) + self.assertTrue(hasattr(self.module.warn, 'func_code') and + hasattr(self.module.warn.func_code, 'co_filename')) class WCmdLineTests(unittest.TestCase): diff --git a/lib-python/2.7/test/test_weakref.py b/lib-python/2.7/test/test_weakref.py --- a/lib-python/2.7/test/test_weakref.py +++ b/lib-python/2.7/test/test_weakref.py @@ -1,4 +1,3 @@ -import gc import sys import unittest import UserList @@ -6,6 +5,7 @@ import operator from test import test_support +from test.test_support import gc_collect # Used in ReferencesTestCase.test_ref_created_during_del() . 
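The `test_warnings` hunks loosen the probe for the C-accelerated module: on PyPy, functions that CPython implements in C can still expose `func_code`, so the patched test additionally requires `co_filename`. The idea in a Python 3 sketch (`__code__` is the modern spelling of `func_code`; `is_pure_python` is a made-up helper):

```python
import math

def is_pure_python(func):
    # A function written in Python carries a code object with a source
    # filename; a C builtin (on CPython) carries no code object at all.
    code = getattr(func, '__code__', None)
    return code is not None and hasattr(code, 'co_filename')

def local_fn():
    pass

assert is_pure_python(local_fn)
assert not is_pure_python(math.sqrt)   # C builtin on CPython: no __code__
```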
ref_from_del = None @@ -70,6 +70,7 @@ ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del o + gc_collect() self.assertTrue(ref1() is None, "expected reference to be invalidated") self.assertTrue(ref2() is None, @@ -101,13 +102,16 @@ ref1 = weakref.proxy(o, self.callback) ref2 = weakref.proxy(o, self.callback) del o + gc_collect() def check(proxy): proxy.bar self.assertRaises(weakref.ReferenceError, check, ref1) self.assertRaises(weakref.ReferenceError, check, ref2) - self.assertRaises(weakref.ReferenceError, bool, weakref.proxy(C())) + ref3 = weakref.proxy(C()) + gc_collect() + self.assertRaises(weakref.ReferenceError, bool, ref3) self.assertTrue(self.cbcalled == 2) def check_basic_ref(self, factory): @@ -124,6 +128,7 @@ o = factory() ref = weakref.ref(o, self.callback) del o + gc_collect() self.assertTrue(self.cbcalled == 1, "callback did not properly set 'cbcalled'") self.assertTrue(ref() is None, @@ -148,6 +153,7 @@ self.assertTrue(weakref.getweakrefcount(o) == 2, "wrong weak ref count for object") del proxy + gc_collect() self.assertTrue(weakref.getweakrefcount(o) == 1, "wrong weak ref count for object after deleting proxy") @@ -325,6 +331,7 @@ "got wrong number of weak reference objects") del ref1, ref2, proxy1, proxy2 + gc_collect() self.assertTrue(weakref.getweakrefcount(o) == 0, "weak reference objects not unlinked from" " referent when discarded.") @@ -338,6 +345,7 @@ ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del ref1 + gc_collect() self.assertTrue(weakref.getweakrefs(o) == [ref2], "list of refs does not match") @@ -345,10 +353,12 @@ ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del ref2 + gc_collect() self.assertTrue(weakref.getweakrefs(o) == [ref1], "list of refs does not match") del ref1 + gc_collect() self.assertTrue(weakref.getweakrefs(o) == [], "list of refs not cleared") @@ -400,13 +410,11 @@ # when the second attempt to remove the instance from the "list # of all 
objects" occurs. - import gc - class C(object): pass c = C() - wr = weakref.ref(c, lambda ignore: gc.collect()) + wr = weakref.ref(c, lambda ignore: gc_collect()) del c # There endeth the first part. It gets worse. @@ -414,7 +422,7 @@ c1 = C() c1.i = C() - wr = weakref.ref(c1.i, lambda ignore: gc.collect()) + wr = weakref.ref(c1.i, lambda ignore: gc_collect()) c2 = C() c2.c1 = c1 @@ -430,8 +438,6 @@ del c2 def test_callback_in_cycle_1(self): - import gc - class J(object): pass @@ -467,11 +473,9 @@ # search II.__mro__, but that's NULL. The result was a segfault in # a release build, and an assert failure in a debug build. del I, J, II - gc.collect() + gc_collect() def test_callback_in_cycle_2(self): - import gc - # This is just like test_callback_in_cycle_1, except that II is an # old-style class. The symptom is different then: an instance of an # old-style class looks in its own __dict__ first. 'J' happens to @@ -496,11 +500,9 @@ I.wr = weakref.ref(J, I.acallback) del I, J, II - gc.collect() + gc_collect() def test_callback_in_cycle_3(self): - import gc - # This one broke the first patch that fixed the last two. In this # case, the objects reachable from the callback aren't also reachable # from the object (c1) *triggering* the callback: you can get to @@ -520,11 +522,9 @@ c2.wr = weakref.ref(c1, c2.cb) del c1, c2 - gc.collect() + gc_collect() def test_callback_in_cycle_4(self): - import gc - # Like test_callback_in_cycle_3, except c2 and c1 have different # classes. c2's class (C) isn't reachable from c1 then, so protecting # objects reachable from the dying object (c1) isn't enough to stop @@ -548,11 +548,9 @@ c2.wr = weakref.ref(c1, c2.cb) del c1, c2, C, D - gc.collect() + gc_collect() def test_callback_in_cycle_resurrection(self): - import gc - # Do something nasty in a weakref callback: resurrect objects # from dead cycles. 
For this to be attempted, the weakref and # its callback must also be part of the cyclic trash (else the @@ -583,7 +581,7 @@ del c1, c2, C # make them all trash self.assertEqual(alist, []) # del isn't enough to reclaim anything - gc.collect() + gc_collect() # c1.wr and c2.wr were part of the cyclic trash, so should have # been cleared without their callbacks executing. OTOH, the weakref # to C is bound to a function local (wr), and wasn't trash, so that @@ -593,12 +591,10 @@ self.assertEqual(wr(), None) del alist[:] - gc.collect() + gc_collect() self.assertEqual(alist, []) def test_callbacks_on_callback(self): - import gc - # Set up weakref callbacks *on* weakref callbacks. alist = [] def safe_callback(ignore): @@ -626,12 +622,12 @@ del callback, c, d, C self.assertEqual(alist, []) # del isn't enough to clean up cycles - gc.collect() + gc_collect() self.assertEqual(alist, ["safe_callback called"]) self.assertEqual(external_wr(), None) del alist[:] - gc.collect() + gc_collect() self.assertEqual(alist, []) def test_gc_during_ref_creation(self): @@ -641,9 +637,11 @@ self.check_gc_during_creation(weakref.proxy) def check_gc_during_creation(self, makeref): - thresholds = gc.get_threshold() - gc.set_threshold(1, 1, 1) - gc.collect() + if test_support.check_impl_detail(): + import gc + thresholds = gc.get_threshold() + gc.set_threshold(1, 1, 1) + gc_collect() class A: pass @@ -663,7 +661,8 @@ weakref.ref(referenced, callback) finally: - gc.set_threshold(*thresholds) + if test_support.check_impl_detail(): + gc.set_threshold(*thresholds) def test_ref_created_during_del(self): # Bug #1377858 @@ -683,7 +682,7 @@ r = weakref.ref(Exception) self.assertRaises(TypeError, r.__init__, 0, 0, 0, 0, 0) # No exception should be raised here - gc.collect() + gc_collect() def test_classes(self): # Check that both old-style classes and new-style classes @@ -696,12 +695,12 @@ weakref.ref(int) a = weakref.ref(A, l.append) A = None - gc.collect() + gc_collect() self.assertEqual(a(), None) 
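The recurring edit in `test_weakref` — a `gc_collect()` after each `del` — exists because PyPy has no reference counting: a weakref is only cleared once the garbage collector actually runs. The pattern, self-contained (this `gc_collect` is a stand-in for the `test_support` helper, which likewise loops a few collections):

```python
import gc
import weakref

def gc_collect():
    # Stand-in for test_support.gc_collect(): several passes, because a
    # single collection is not guaranteed to reach every dead object on
    # all implementations.
    for _ in range(3):
        gc.collect()

class C(object):
    pass

obj = C()
ref = weakref.ref(obj)
assert ref() is obj
del obj
gc_collect()   # effectively a no-op on CPython here; required on PyPy
assert ref() is None
```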
self.assertEqual(l, [a]) b = weakref.ref(B, l.append) B = None - gc.collect() + gc_collect() self.assertEqual(b(), None) self.assertEqual(l, [a, b]) @@ -722,6 +721,7 @@ self.assertTrue(mr.called) self.assertEqual(mr.value, 24) del o + gc_collect() self.assertTrue(mr() is None) self.assertTrue(mr.called) @@ -738,9 +738,11 @@ self.assertEqual(weakref.getweakrefcount(o), 3) refs = weakref.getweakrefs(o) self.assertEqual(len(refs), 3) - self.assertTrue(r2 is refs[0]) - self.assertIn(r1, refs[1:]) - self.assertIn(r3, refs[1:]) + assert set(refs) == set((r1, r2, r3)) + if test_support.check_impl_detail(): + self.assertTrue(r2 is refs[0]) + self.assertIn(r1, refs[1:]) + self.assertIn(r3, refs[1:]) def test_subclass_refs_dont_conflate_callbacks(self): class MyRef(weakref.ref): @@ -839,15 +841,18 @@ del items1, items2 self.assertTrue(len(dict) == self.COUNT) del objects[0] + gc_collect() self.assertTrue(len(dict) == (self.COUNT - 1), "deleting object did not cause dictionary update") del objects, o + gc_collect() self.assertTrue(len(dict) == 0, "deleting the values did not clear the dictionary") # regression on SF bug #447152: dict = weakref.WeakValueDictionary() self.assertRaises(KeyError, dict.__getitem__, 1) dict[2] = C() + gc_collect() self.assertRaises(KeyError, dict.__getitem__, 2) def test_weak_keys(self): @@ -868,9 +873,11 @@ del items1, items2 self.assertTrue(len(dict) == self.COUNT) del objects[0] + gc_collect() self.assertTrue(len(dict) == (self.COUNT - 1), "deleting object did not cause dictionary update") del objects, o + gc_collect() self.assertTrue(len(dict) == 0, "deleting the keys did not clear the dictionary") o = Object(42) @@ -986,13 +993,13 @@ self.assertTrue(len(weakdict) == 2) k, v = weakdict.popitem() self.assertTrue(len(weakdict) == 1) - if k is key1: + if k == key1: self.assertTrue(v is value1) else: self.assertTrue(v is value2) k, v = weakdict.popitem() self.assertTrue(len(weakdict) == 0) - if k is key1: + if k == key1: self.assertTrue(v is 
value1) else: self.assertTrue(v is value2) @@ -1137,6 +1144,7 @@ for o in objs: count += 1 del d[o] + gc_collect() self.assertEqual(len(d), 0) self.assertEqual(count, 2) @@ -1177,6 +1185,7 @@ >>> o is o2 True >>> del o, o2 +>>> gc_collect() >>> print r() None @@ -1229,6 +1238,7 @@ >>> id2obj(a_id) is a True >>> del a +>>> gc_collect() >>> try: ... id2obj(a_id) ... except KeyError: diff --git a/lib-python/2.7/test/test_weakset.py b/lib-python/2.7/test/test_weakset.py --- a/lib-python/2.7/test/test_weakset.py +++ b/lib-python/2.7/test/test_weakset.py @@ -57,6 +57,7 @@ self.assertEqual(len(self.s), len(self.d)) self.assertEqual(len(self.fs), 1) del self.obj + test_support.gc_collect() self.assertEqual(len(self.fs), 0) def test_contains(self): @@ -66,6 +67,7 @@ self.assertNotIn(1, self.s) self.assertIn(self.obj, self.fs) del self.obj + test_support.gc_collect() self.assertNotIn(SomeClass('F'), self.fs) def test_union(self): @@ -204,6 +206,7 @@ self.assertEqual(self.s, dup) self.assertRaises(TypeError, self.s.add, []) self.fs.add(Foo()) + test_support.gc_collect() self.assertTrue(len(self.fs) == 1) self.fs.add(self.obj) self.assertTrue(len(self.fs) == 1) @@ -330,10 +333,11 @@ next(it) # Trigger internal iteration # Destroy an item del items[-1] - gc.collect() # just in case + test_support.gc_collect() # We have removed either the first consumed items, or another one self.assertIn(len(list(it)), [len(items), len(items) - 1]) del it + test_support.gc_collect() # The removal has been committed self.assertEqual(len(s), len(items)) diff --git a/lib-python/2.7/test/test_xml_etree.py b/lib-python/2.7/test/test_xml_etree.py --- a/lib-python/2.7/test/test_xml_etree.py +++ b/lib-python/2.7/test/test_xml_etree.py @@ -1633,10 +1633,10 @@ Check reference leak. >>> xmltoolkit63() - >>> count = sys.getrefcount(None) + >>> count = sys.getrefcount(None) #doctest: +SKIP >>> for i in range(1000): ... 
xmltoolkit63() - >>> sys.getrefcount(None) - count + >>> sys.getrefcount(None) - count #doctest: +SKIP 0 """ diff --git a/lib-python/2.7/test/test_xmlrpc.py b/lib-python/2.7/test/test_xmlrpc.py --- a/lib-python/2.7/test/test_xmlrpc.py +++ b/lib-python/2.7/test/test_xmlrpc.py @@ -308,7 +308,7 @@ global ADDR, PORT, URL ADDR, PORT = serv.socket.getsockname() #connect to IP address directly. This avoids socket.create_connection() - #trying to connect to "localhost" using all address families, which + #trying to connect to to "localhost" using all address families, which #causes slowdown e.g. on vista which supports AF_INET6. The server listens #on AF_INET only. URL = "http://%s:%d"%(ADDR, PORT) @@ -367,7 +367,7 @@ global ADDR, PORT, URL ADDR, PORT = serv.socket.getsockname() #connect to IP address directly. This avoids socket.create_connection() - #trying to connect to "localhost" using all address families, which + #trying to connect to to "localhost" using all address families, which #causes slowdown e.g. on vista which supports AF_INET6. The server listens #on AF_INET only. URL = "http://%s:%d"%(ADDR, PORT) @@ -435,6 +435,7 @@ def tearDown(self): # wait on the server thread to terminate + test_support.gc_collect() # to close the active connections self.evt.wait(10) # disable traceback reporting @@ -472,9 +473,6 @@ # protocol error; provide additional information in test output self.fail("%s\n%s" % (e, getattr(e, "headers", ""))) - def test_unicode_host(self): - server = xmlrpclib.ServerProxy(u"http://%s:%d/RPC2"%(ADDR, PORT)) - self.assertEqual(server.add("a", u"\xe9"), u"a\xe9") # [ch] The test 404 is causing lots of false alarms. def XXXtest_404(self): @@ -589,12 +587,6 @@ # This avoids waiting for the socket timeout. self.test_simple1() - def test_partial_post(self): - # Check that a partial POST doesn't make the server loop: issue #14001. 
- conn = httplib.HTTPConnection(ADDR, PORT) - conn.request('POST', '/RPC2 HTTP/1.0\r\nContent-Length: 100\r\n\r\nbye') - conn.close() - class MultiPathServerTestCase(BaseServerTestCase): threadFunc = staticmethod(http_multi_server) request_count = 2 diff --git a/lib-python/2.7/test/test_zlib.py b/lib-python/2.7/test/test_zlib.py --- a/lib-python/2.7/test/test_zlib.py +++ b/lib-python/2.7/test/test_zlib.py @@ -1,6 +1,7 @@ import unittest from test.test_support import TESTFN, run_unittest, import_module, unlink, requires import binascii +import os import random from test.test_support import precisionbigmemtest, _1G, _4G import sys @@ -99,14 +100,7 @@ class BaseCompressTestCase(object): def check_big_compress_buffer(self, size, compress_func): - _1M = 1024 * 1024 - fmt = "%%0%dx" % (2 * _1M) - # Generate 10MB worth of random, and expand it by repeating it. - # The assumption is that zlib's memory is not big enough to exploit - # such spread out redundancy. - data = ''.join([binascii.a2b_hex(fmt % random.getrandbits(8 * _1M)) - for i in range(10)]) - data = data * (size // len(data) + 1) + data = os.urandom(size) try: compress_func(data) finally: diff --git a/lib-python/2.7/trace.py b/lib-python/2.7/trace.py --- a/lib-python/2.7/trace.py +++ b/lib-python/2.7/trace.py @@ -559,6 +559,10 @@ if len(funcs) == 1: dicts = [d for d in gc.get_referrers(funcs[0]) if isinstance(d, dict)] + if len(dicts) == 0: + # PyPy may store functions directly on the class + # (more exactly: the container is not a Python object) + dicts = funcs if len(dicts) == 1: classes = [c for c in gc.get_referrers(dicts[0]) if hasattr(c, "__bases__")] diff --git a/lib-python/2.7/urllib2.py b/lib-python/2.7/urllib2.py --- a/lib-python/2.7/urllib2.py +++ b/lib-python/2.7/urllib2.py @@ -1171,6 +1171,7 @@ except TypeError: #buffering kw not supported r = h.getresponse() except socket.error, err: # XXX what error? 
+ h.close() raise URLError(err) # Pick apart the HTTPResponse object to get the addinfourl diff --git a/lib-python/2.7/uuid.py b/lib-python/2.7/uuid.py --- a/lib-python/2.7/uuid.py +++ b/lib-python/2.7/uuid.py @@ -406,8 +406,12 @@ continue if hasattr(lib, 'uuid_generate_random'): _uuid_generate_random = lib.uuid_generate_random + _uuid_generate_random.argtypes = [ctypes.c_char * 16] + _uuid_generate_random.restype = None if hasattr(lib, 'uuid_generate_time'): _uuid_generate_time = lib.uuid_generate_time + _uuid_generate_time.argtypes = [ctypes.c_char * 16] + _uuid_generate_time.restype = None # The uuid_generate_* functions are broken on MacOS X 10.5, as noted # in issue #8621 the function generates the same sequence of values @@ -436,6 +440,9 @@ lib = None _UuidCreate = getattr(lib, 'UuidCreateSequential', getattr(lib, 'UuidCreate', None)) + if _UuidCreate is not None: + _UuidCreate.argtypes = [ctypes.c_char * 16] + _UuidCreate.restype = ctypes.c_int except: pass diff --git a/lib-python/3.2/__future__.py b/lib-python/3.2/__future__.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/__future__.py @@ -0,0 +1,134 @@ +"""Record of phased-in incompatible language changes. + +Each line is of the form: + + FeatureName = "_Feature(" OptionalRelease "," MandatoryRelease "," + CompilerFlag ")" + +where, normally, OptionalRelease < MandatoryRelease, and both are 5-tuples +of the same form as sys.version_info: + + (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int + PY_MINOR_VERSION, # the 1; an int + PY_MICRO_VERSION, # the 0; an int + PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string + PY_RELEASE_SERIAL # the 3; an int + ) + +OptionalRelease records the first release in which + + from __future__ import FeatureName + +was accepted. + +In the case of MandatoryReleases that have not yet occurred, +MandatoryRelease predicts the release in which the feature will become part +of the language. 
+ +Else MandatoryRelease records when the feature became part of the language; +in releases at or after that, modules no longer need + + from __future__ import FeatureName + +to use the feature in question, but may continue to use such imports. + +MandatoryRelease may also be None, meaning that a planned feature got +dropped. + +Instances of class _Feature have two corresponding methods, +.getOptionalRelease() and .getMandatoryRelease(). + +CompilerFlag is the (bitfield) flag that should be passed in the fourth +argument to the builtin function compile() to enable the feature in +dynamically compiled code. This flag is stored in the .compiler_flag +attribute on _Future instances. These values must match the appropriate +#defines of CO_xxx flags in Include/compile.h. + +No feature line is ever to be deleted from this file. +""" + +all_feature_names = [ + "nested_scopes", + "generators", + "division", + "absolute_import", + "with_statement", + "print_function", + "unicode_literals", + "barry_as_FLUFL", +] + +__all__ = ["all_feature_names"] + all_feature_names + +# The CO_xxx symbols are defined here under the same names used by +# compile.h, so that an editor search will find them here. However, +# they're not exported in __all__, because they don't really belong to +# this module. 
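The `CO_*` values above are not just documentation: passed as the `flags` argument of `compile()`, they switch the corresponding feature on for a single code object. A small demonstration with the division flag (value copied from the module text; the result is the same whether or not the hosting interpreter already defaults to true division):

```python
CO_FUTURE_DIVISION = 0x2000   # same value as in the module above

# Compiling with the flag makes `/` mean true division inside this code
# object, exactly as if its source began with
# `from __future__ import division`.
code = compile("result = 7 / 2", "<future-demo>", "exec", CO_FUTURE_DIVISION)
namespace = {}
exec(code, namespace)
assert namespace["result"] == 3.5
```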
+CO_NESTED = 0x0010 # nested_scopes +CO_GENERATOR_ALLOWED = 0 # generators (obsolete, was 0x1000) +CO_FUTURE_DIVISION = 0x2000 # division +CO_FUTURE_ABSOLUTE_IMPORT = 0x4000 # perform absolute imports by default +CO_FUTURE_WITH_STATEMENT = 0x8000 # with statement +CO_FUTURE_PRINT_FUNCTION = 0x10000 # print function +CO_FUTURE_UNICODE_LITERALS = 0x20000 # unicode string literals +CO_FUTURE_BARRY_AS_BDFL = 0x40000 + +class _Feature: + def __init__(self, optionalRelease, mandatoryRelease, compiler_flag): + self.optional = optionalRelease + self.mandatory = mandatoryRelease + self.compiler_flag = compiler_flag + + def getOptionalRelease(self): + """Return first release in which this feature was recognized. + + This is a 5-tuple, of the same form as sys.version_info. + """ + + return self.optional + + def getMandatoryRelease(self): + """Return release in which this feature will become mandatory. + + This is a 5-tuple, of the same form as sys.version_info, or, if + the feature was dropped, is None. 
+ """ + + return self.mandatory + + def __repr__(self): + return "_Feature" + repr((self.optional, + self.mandatory, + self.compiler_flag)) + +nested_scopes = _Feature((2, 1, 0, "beta", 1), + (2, 2, 0, "alpha", 0), + CO_NESTED) + +generators = _Feature((2, 2, 0, "alpha", 1), + (2, 3, 0, "final", 0), + CO_GENERATOR_ALLOWED) + +division = _Feature((2, 2, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_DIVISION) + +absolute_import = _Feature((2, 5, 0, "alpha", 1), + (2, 7, 0, "alpha", 0), + CO_FUTURE_ABSOLUTE_IMPORT) + +with_statement = _Feature((2, 5, 0, "alpha", 1), + (2, 6, 0, "alpha", 0), + CO_FUTURE_WITH_STATEMENT) + +print_function = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_PRINT_FUNCTION) + +unicode_literals = _Feature((2, 6, 0, "alpha", 2), + (3, 0, 0, "alpha", 0), + CO_FUTURE_UNICODE_LITERALS) + +barry_as_FLUFL = _Feature((3, 1, 0, "alpha", 2), + (3, 9, 0, "alpha", 0), + CO_FUTURE_BARRY_AS_BDFL) diff --git a/lib-python/3.2/__phello__.foo.py b/lib-python/3.2/__phello__.foo.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/__phello__.foo.py @@ -0,0 +1,1 @@ +# This file exists as a helper for the test.test_frozen module. diff --git a/lib-python/3.2/_abcoll.py b/lib-python/3.2/_abcoll.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_abcoll.py @@ -0,0 +1,623 @@ +# Copyright 2007 Google, Inc. All Rights Reserved. +# Licensed to PSF under a Contributor Agreement. + +"""Abstract Base Classes (ABCs) for collections, according to PEP 3119. + +DON'T USE THIS MODULE DIRECTLY! The classes here should be imported +via collections; they are defined here only to alleviate certain +bootstrapping issues. Unit tests are in test_collections. 
+""" + +from abc import ABCMeta, abstractmethod +import sys + +__all__ = ["Hashable", "Iterable", "Iterator", + "Sized", "Container", "Callable", + "Set", "MutableSet", + "Mapping", "MutableMapping", + "MappingView", "KeysView", "ItemsView", "ValuesView", + "Sequence", "MutableSequence", + "ByteString", + ] + + +### collection related types which are not exposed through builtin ### +## iterators ## +bytes_iterator = type(iter(b'')) +bytearray_iterator = type(iter(bytearray())) +#callable_iterator = ??? +dict_keyiterator = type(iter({}.keys())) +dict_valueiterator = type(iter({}.values())) +dict_itemiterator = type(iter({}.items())) +list_iterator = type(iter([])) +list_reverseiterator = type(iter(reversed([]))) +range_iterator = type(iter(range(0))) +set_iterator = type(iter(set())) +str_iterator = type(iter("")) +tuple_iterator = type(iter(())) +zip_iterator = type(iter(zip())) +## views ## +dict_keys = type({}.keys()) +dict_values = type({}.values()) +dict_items = type({}.items()) +## misc ## +dict_proxy = type(type.__dict__) + + +### ONE-TRICK PONIES ### + +class Hashable(metaclass=ABCMeta): + + @abstractmethod + def __hash__(self): + return 0 + + @classmethod + def __subclasshook__(cls, C): + if cls is Hashable: + for B in C.__mro__: + if "__hash__" in B.__dict__: + if B.__dict__["__hash__"]: + return True + break + return NotImplemented + + +class Iterable(metaclass=ABCMeta): + + @abstractmethod + def __iter__(self): + while False: + yield None + + @classmethod + def __subclasshook__(cls, C): + if cls is Iterable: + if any("__iter__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Iterator(Iterable): + + @abstractmethod + def __next__(self): + raise StopIteration + + def __iter__(self): + return self + + @classmethod + def __subclasshook__(cls, C): + if cls is Iterator: + if (any("__next__" in B.__dict__ for B in C.__mro__) and + any("__iter__" in B.__dict__ for B in C.__mro__)): + return True + return NotImplemented + 
+Iterator.register(bytes_iterator) +Iterator.register(bytearray_iterator) +#Iterator.register(callable_iterator) +Iterator.register(dict_keyiterator) +Iterator.register(dict_valueiterator) +Iterator.register(dict_itemiterator) +Iterator.register(list_iterator) +Iterator.register(list_reverseiterator) +Iterator.register(range_iterator) +Iterator.register(set_iterator) +Iterator.register(str_iterator) +Iterator.register(tuple_iterator) +Iterator.register(zip_iterator) + +class Sized(metaclass=ABCMeta): + + @abstractmethod + def __len__(self): + return 0 + + @classmethod + def __subclasshook__(cls, C): + if cls is Sized: + if any("__len__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Container(metaclass=ABCMeta): + + @abstractmethod + def __contains__(self, x): + return False + + @classmethod + def __subclasshook__(cls, C): + if cls is Container: + if any("__contains__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +class Callable(metaclass=ABCMeta): + + @abstractmethod + def __call__(self, *args, **kwds): + return False + + @classmethod + def __subclasshook__(cls, C): + if cls is Callable: + if any("__call__" in B.__dict__ for B in C.__mro__): + return True + return NotImplemented + + +### SETS ### + + +class Set(Sized, Iterable, Container): + + """A set is a finite, iterable container. + + This class provides concrete generic implementations of all + methods except for __contains__, __iter__ and __len__. + + To override the comparisons (presumably for speed, as the + semantics are fixed), all you have to do is redefine __le__ and + then the other operations will automatically follow suit. 
+ """ + + def __le__(self, other): + if not isinstance(other, Set): + return NotImplemented + if len(self) > len(other): + return False + for elem in self: + if elem not in other: + return False + return True + + def __lt__(self, other): + if not isinstance(other, Set): + return NotImplemented + return len(self) < len(other) and self.__le__(other) + + def __gt__(self, other): + if not isinstance(other, Set): + return NotImplemented + return other < self + + def __ge__(self, other): + if not isinstance(other, Set): + return NotImplemented + return other <= self + + def __eq__(self, other): + if not isinstance(other, Set): + return NotImplemented + return len(self) == len(other) and self.__le__(other) + + def __ne__(self, other): + return not (self == other) + + @classmethod + def _from_iterable(cls, it): + '''Construct an instance of the class from any iterable input. + + Must override this method if the class constructor signature + does not accept an iterable for an input. + ''' + return cls(it) + + def __and__(self, other): + if not isinstance(other, Iterable): + return NotImplemented + return self._from_iterable(value for value in other if value in self) + + def isdisjoint(self, other): + for value in other: + if value in self: + return False + return True + + def __or__(self, other): + if not isinstance(other, Iterable): + return NotImplemented + chain = (e for s in (self, other) for e in s) + return self._from_iterable(chain) + + def __sub__(self, other): + if not isinstance(other, Set): + if not isinstance(other, Iterable): + return NotImplemented + other = self._from_iterable(other) + return self._from_iterable(value for value in self + if value not in other) + + def __xor__(self, other): + if not isinstance(other, Set): + if not isinstance(other, Iterable): + return NotImplemented + other = self._from_iterable(other) + return (self - other) | (other - self) + + def _hash(self): + """Compute the hash value of a set. 
+ + Note that we don't define __hash__: not all sets are hashable. + But if you define a hashable set type, its __hash__ should + call this function. + + This must be compatible __eq__. + + All sets ought to compare equal if they contain the same + elements, regardless of how they are implemented, and + regardless of the order of the elements; so there's not much + freedom for __eq__ or __hash__. We match the algorithm used + by the built-in frozenset type. + """ + MAX = sys.maxsize + MASK = 2 * MAX + 1 + n = len(self) + h = 1927868237 * (n + 1) + h &= MASK + for x in self: + hx = hash(x) + h ^= (hx ^ (hx << 16) ^ 89869747) * 3644798167 + h &= MASK + h = h * 69069 + 907133923 + h &= MASK + if h > MAX: + h -= MASK + 1 + if h == -1: + h = 590923713 + return h + +Set.register(frozenset) + + +class MutableSet(Set): + + @abstractmethod + def add(self, value): + """Add an element.""" + raise NotImplementedError + + @abstractmethod + def discard(self, value): + """Remove an element. Do not raise an exception if absent.""" + raise NotImplementedError + + def remove(self, value): + """Remove an element. If not a member, raise a KeyError.""" + if value not in self: + raise KeyError(value) + self.discard(value) + + def pop(self): + """Return the popped value. Raise KeyError if empty.""" + it = iter(self) + try: + value = next(it) + except StopIteration: + raise KeyError + self.discard(value) + return value + + def clear(self): + """This is slow (creates N new iterators!) 
but effective.""" + try: + while True: + self.pop() + except KeyError: + pass + + def __ior__(self, it): + for value in it: + self.add(value) + return self + + def __iand__(self, it): + for value in (self - it): + self.discard(value) + return self + + def __ixor__(self, it): + if it is self: + self.clear() + else: + if not isinstance(it, Set): + it = self._from_iterable(it) + for value in it: + if value in self: + self.discard(value) + else: + self.add(value) + return self + + def __isub__(self, it): + if it is self: + self.clear() + else: + for value in it: + self.discard(value) + return self + +MutableSet.register(set) + + +### MAPPINGS ### + + +class Mapping(Sized, Iterable, Container): + + @abstractmethod + def __getitem__(self, key): + raise KeyError + + def get(self, key, default=None): + try: + return self[key] + except KeyError: + return default + + def __contains__(self, key): + try: + self[key] + except KeyError: + return False + else: + return True + + def keys(self): + return KeysView(self) + + def items(self): + return ItemsView(self) + + def values(self): + return ValuesView(self) + + def __eq__(self, other): + if not isinstance(other, Mapping): + return NotImplemented + return dict(self.items()) == dict(other.items()) + + def __ne__(self, other): + return not (self == other) + + +class MappingView(Sized): + + def __init__(self, mapping): + self._mapping = mapping + + def __len__(self): + return len(self._mapping) + + def __repr__(self): + return '{0.__class__.__name__}({0._mapping!r})'.format(self) + + +class KeysView(MappingView, Set): + + @classmethod + def _from_iterable(self, it): + return set(it) + + def __contains__(self, key): + return key in self._mapping + + def __iter__(self): + for key in self._mapping: + yield key + +KeysView.register(dict_keys) + + +class ItemsView(MappingView, Set): + + @classmethod + def _from_iterable(self, it): + return set(it) + + def __contains__(self, item): + key, value = item + try: + v = self._mapping[key] + 
except KeyError: + return False + else: + return v == value + + def __iter__(self): + for key in self._mapping: + yield (key, self._mapping[key]) + +ItemsView.register(dict_items) + + +class ValuesView(MappingView): + + def __contains__(self, value): + for key in self._mapping: + if value == self._mapping[key]: + return True + return False + + def __iter__(self): + for key in self._mapping: + yield self._mapping[key] + +ValuesView.register(dict_values) + + +class MutableMapping(Mapping): + + @abstractmethod + def __setitem__(self, key, value): + raise KeyError + + @abstractmethod + def __delitem__(self, key): + raise KeyError + + __marker = object() + + def pop(self, key, default=__marker): + try: + value = self[key] + except KeyError: + if default is self.__marker: + raise + return default + else: + del self[key] + return value + + def popitem(self): + try: + key = next(iter(self)) + except StopIteration: + raise KeyError + value = self[key] + del self[key] + return key, value + + def clear(self): + try: + while True: + self.popitem() + except KeyError: + pass + + def update(*args, **kwds): + if len(args) > 2: + raise TypeError("update() takes at most 2 positional " + "arguments ({} given)".format(len(args))) + elif not args: + raise TypeError("update() takes at least 1 argument (0 given)") + self = args[0] + other = args[1] if len(args) >= 2 else () + + if isinstance(other, Mapping): + for key in other: + self[key] = other[key] + elif hasattr(other, "keys"): + for key in other.keys(): + self[key] = other[key] + else: + for key, value in other: + self[key] = value + for key, value in kwds.items(): + self[key] = value + + def setdefault(self, key, default=None): + try: + return self[key] + except KeyError: + self[key] = default + return default + +MutableMapping.register(dict) + + +### SEQUENCES ### + + +class Sequence(Sized, Iterable, Container): + + """All the operations on a read-only sequence. 
+ + Concrete subclasses must override __new__ or __init__, + __getitem__, and __len__. + """ + + @abstractmethod + def __getitem__(self, index): + raise IndexError + + def __iter__(self): + i = 0 + try: + while True: + v = self[i] + yield v + i += 1 + except IndexError: + return + + def __contains__(self, value): + for v in self: + if v == value: + return True + return False + + def __reversed__(self): + for i in reversed(range(len(self))): + yield self[i] + + def index(self, value): + for i, v in enumerate(self): + if v == value: + return i + raise ValueError + + def count(self, value): + return sum(1 for v in self if v == value) + +Sequence.register(tuple) +Sequence.register(str) +Sequence.register(range) + + +class ByteString(Sequence): + + """This unifies bytes and bytearray. + + XXX Should add all their methods. + """ + +ByteString.register(bytes) +ByteString.register(bytearray) + + +class MutableSequence(Sequence): + + @abstractmethod + def __setitem__(self, index, value): + raise IndexError + + @abstractmethod + def __delitem__(self, index): + raise IndexError + + @abstractmethod + def insert(self, index, value): + raise IndexError + + def append(self, value): + self.insert(len(self), value) + + def reverse(self): + n = len(self) + for i in range(n//2): + self[i], self[n-i-1] = self[n-i-1], self[i] + + def extend(self, values): + for v in values: + self.append(v) + + def pop(self, index=-1): + v = self[index] + del self[index] + return v + + def remove(self, value): + del self[self.index(value)] + + def __iadd__(self, values): + self.extend(values) + return self + +MutableSequence.register(list) +MutableSequence.register(bytearray) # Multiply inheriting, see ByteString diff --git a/lib-python/3.2/_compat_pickle.py b/lib-python/3.2/_compat_pickle.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_compat_pickle.py @@ -0,0 +1,81 @@ +# This module is used to map the old Python 2 names to the new names used in +# Python 3 for the pickle module. 
This needed to make pickle streams +# generated with Python 2 loadable by Python 3. + +# This is a copy of lib2to3.fixes.fix_imports.MAPPING. We cannot import +# lib2to3 and use the mapping defined there, because lib2to3 uses pickle. +# Thus, this could cause the module to be imported recursively. +IMPORT_MAPPING = { + 'StringIO': 'io', + 'cStringIO': 'io', + 'cPickle': 'pickle', + '__builtin__' : 'builtins', + 'copy_reg': 'copyreg', + 'Queue': 'queue', + 'SocketServer': 'socketserver', + 'ConfigParser': 'configparser', + 'repr': 'reprlib', + 'FileDialog': 'tkinter.filedialog', + 'tkFileDialog': 'tkinter.filedialog', + 'SimpleDialog': 'tkinter.simpledialog', + 'tkSimpleDialog': 'tkinter.simpledialog', + 'tkColorChooser': 'tkinter.colorchooser', + 'tkCommonDialog': 'tkinter.commondialog', + 'Dialog': 'tkinter.dialog', + 'Tkdnd': 'tkinter.dnd', + 'tkFont': 'tkinter.font', + 'tkMessageBox': 'tkinter.messagebox', + 'ScrolledText': 'tkinter.scrolledtext', + 'Tkconstants': 'tkinter.constants', + 'Tix': 'tkinter.tix', + 'ttk': 'tkinter.ttk', + 'Tkinter': 'tkinter', + 'markupbase': '_markupbase', + '_winreg': 'winreg', + 'thread': '_thread', + 'dummy_thread': '_dummy_thread', + 'dbhash': 'dbm.bsd', + 'dumbdbm': 'dbm.dumb', + 'dbm': 'dbm.ndbm', + 'gdbm': 'dbm.gnu', + 'xmlrpclib': 'xmlrpc.client', + 'DocXMLRPCServer': 'xmlrpc.server', + 'SimpleXMLRPCServer': 'xmlrpc.server', + 'httplib': 'http.client', + 'htmlentitydefs' : 'html.entities', + 'HTMLParser' : 'html.parser', + 'Cookie': 'http.cookies', + 'cookielib': 'http.cookiejar', + 'BaseHTTPServer': 'http.server', + 'SimpleHTTPServer': 'http.server', + 'CGIHTTPServer': 'http.server', + 'test.test_support': 'test.support', + 'commands': 'subprocess', + 'UserString' : 'collections', + 'UserList' : 'collections', + 'urlparse' : 'urllib.parse', + 'robotparser' : 'urllib.robotparser', + 'whichdb': 'dbm', + 'anydbm': 'dbm' +} + + +# This contains rename rules that are easy to handle. We ignore the more +# complex stuff (e.g. 
mapping the names in the urllib and types modules). +# These rules should be run before import names are fixed. +NAME_MAPPING = { + ('__builtin__', 'xrange'): ('builtins', 'range'), + ('__builtin__', 'reduce'): ('functools', 'reduce'), + ('__builtin__', 'intern'): ('sys', 'intern'), + ('__builtin__', 'unichr'): ('builtins', 'chr'), + ('__builtin__', 'basestring'): ('builtins', 'str'), + ('__builtin__', 'long'): ('builtins', 'int'), + ('itertools', 'izip'): ('builtins', 'zip'), + ('itertools', 'imap'): ('builtins', 'map'), + ('itertools', 'ifilter'): ('builtins', 'filter'), + ('itertools', 'ifilterfalse'): ('itertools', 'filterfalse'), +} + +# Same, but for 3.x to 2.x +REVERSE_IMPORT_MAPPING = dict((v, k) for (k, v) in IMPORT_MAPPING.items()) +REVERSE_NAME_MAPPING = dict((v, k) for (k, v) in NAME_MAPPING.items()) diff --git a/lib-python/3.2/_dummy_thread.py b/lib-python/3.2/_dummy_thread.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_dummy_thread.py @@ -0,0 +1,155 @@ +"""Drop-in replacement for the thread module. + +Meant to be used as a brain-dead substitute so that threaded code does +not need to be rewritten for when the thread module is not present. + +Suggested usage is:: + + try: + import _thread + except ImportError: + import _dummy_thread as _thread + +""" +# Exports only things specified by thread documentation; +# skipping obsolete synonyms allocate(), start_new(), exit_thread(). +__all__ = ['error', 'start_new_thread', 'exit', 'get_ident', 'allocate_lock', + 'interrupt_main', 'LockType'] + +# A dummy value +TIMEOUT_MAX = 2**31 + +# NOTE: this module can be imported early in the extension building process, +# and so top level imports of other modules should be avoided. Instead, all +# imports are done when needed on a function-by-function basis. Since threads +# are disabled, the import lock should not be an issue anyway (??). 
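The fallback recipe suggested in the docstring above can be exercised as sketched here. Note this is illustrative: on any interpreter with threading support the `try` branch succeeds and binds the real `_thread` module, and `_dummy_thread` itself was removed in Python 3.9, so the `except` branch applies only to old no-thread builds.

```python
# Fall back to the dummy module when real threads are unavailable.
try:
    import _thread
except ImportError:
    import _dummy_thread as _thread  # only on old builds without threads

# Both implementations expose the same minimal lock API:
lock = _thread.allocate_lock()
assert lock.acquire()       # blocking acquire; the dummy lock always succeeds
assert lock.locked()
lock.release()
assert not lock.locked()

# get_ident() is also common to both; the dummy version returns a
# constant, since only one "thread" can ever exist.
assert isinstance(_thread.get_ident(), int)
```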
+ +class error(Exception): + """Dummy implementation of _thread.error.""" + + def __init__(self, *args): + self.args = args + +def start_new_thread(function, args, kwargs={}): + """Dummy implementation of _thread.start_new_thread(). + + Compatibility is maintained by making sure that ``args`` is a + tuple and ``kwargs`` is a dictionary. If an exception is raised + and it is SystemExit (which can be done by _thread.exit()) it is + caught and nothing is done; all other exceptions are printed out + by using traceback.print_exc(). + + If the executed function calls interrupt_main the KeyboardInterrupt will be + raised when the function returns. + + """ + if type(args) != type(tuple()): + raise TypeError("2nd arg must be a tuple") + if type(kwargs) != type(dict()): + raise TypeError("3rd arg must be a dict") + global _main + _main = False + try: + function(*args, **kwargs) + except SystemExit: + pass + except: + import traceback + traceback.print_exc() + _main = True + global _interrupt + if _interrupt: + _interrupt = False + raise KeyboardInterrupt + +def exit(): + """Dummy implementation of _thread.exit().""" + raise SystemExit + +def get_ident(): + """Dummy implementation of _thread.get_ident(). + + Since this module should only be used when _threadmodule is not + available, it is safe to assume that the current process is the + only thread. Thus a constant can be safely returned. + """ + return -1 + +def allocate_lock(): + """Dummy implementation of _thread.allocate_lock().""" + return LockType() + +def stack_size(size=None): + """Dummy implementation of _thread.stack_size().""" + if size is not None: + raise error("setting thread stack size not supported") + return 0 + +class LockType(object): + """Class implementing dummy implementation of _thread.LockType. + + Compatibility is maintained by maintaining self.locked_status + which is a boolean that stores the state of the lock. 
Pickling of + the lock, though, should not be done since if the _thread module is + then used with an unpickled ``lock()`` from here problems could + occur from this class not having atomic methods. + + """ + + def __init__(self): + self.locked_status = False + + def acquire(self, waitflag=None, timeout=-1): + """Dummy implementation of acquire(). + + For blocking calls, self.locked_status is automatically set to + True and returned appropriately based on value of + ``waitflag``. If it is non-blocking, then the value is + actually checked and not set if it is already acquired. This + is all done so that threading.Condition's assert statements + aren't triggered and throw a little fit. + + """ + if waitflag is None or waitflag: + self.locked_status = True + return True + else: + if not self.locked_status: + self.locked_status = True + return True + else: + if timeout > 0: + import time + time.sleep(timeout) + return False + + __enter__ = acquire + + def __exit__(self, typ, val, tb): + self.release() + + def release(self): + """Release the dummy lock.""" + # XXX Perhaps shouldn't actually bother to test? Could lead + # to problems for complex, threaded code. + if not self.locked_status: + raise error + self.locked_status = False + return True + + def locked(self): + return self.locked_status + +# Used to signal that interrupt_main was called in a "thread" +_interrupt = False +# True when not executing in a "thread" +_main = True + +def interrupt_main(): + """Set _interrupt flag to True to have start_new_thread raise + KeyboardInterrupt upon exiting.""" + if _main: + raise KeyboardInterrupt + else: + global _interrupt + _interrupt = True diff --git a/lib-python/3.2/_markupbase.py b/lib-python/3.2/_markupbase.py new file mode 100644 --- /dev/null +++ b/lib-python/3.2/_markupbase.py @@ -0,0 +1,395 @@ +"""Shared support for scanning document type declarations in HTML and XHTML. + +This module is used as a foundation for the html.parser module. 
It has no +documented public API and should not be used directly. + +""" + +import re + +_declname_match = re.compile(r'[a-zA-Z][-_.a-zA-Z0-9]*\s*').match +_declstringlit_match = re.compile(r'(\'[^\']*\'|"[^"]*")\s*').match +_commentclose = re.compile(r'--\s*>') +_markedsectionclose = re.compile(r']\s*]\s*>') + +# An analysis of the MS-Word extensions is available at +# http://www.planetpublish.com/xmlarena/xap/Thursday/WordtoXML.pdf + +_msmarkedsectionclose = re.compile(r']\s*>') + +del re + + +class ParserBase: + """Parser base class which provides some common support methods used + by the SGML/HTML and XHTML parsers.""" + + def __init__(self): + if self.__class__ is ParserBase: + raise RuntimeError( + "_markupbase.ParserBase must be subclassed") + + def error(self, message): + raise NotImplementedError( + "subclasses of ParserBase must override error()") + + def reset(self): + self.lineno = 1 + self.offset = 0 + + def getpos(self): + """Return current line number and offset.""" + return self.lineno, self.offset + + # Internal -- update line number and offset. This should be + # called for each piece of data exactly once, in order -- in other + # words the concatenation of all the input strings to this + # function should be exactly the entire input. + def updatepos(self, i, j): + if i >= j: + return j + rawdata = self.rawdata + nlines = rawdata.count("\n", i, j) + if nlines: + self.lineno = self.lineno + nlines + pos = rawdata.rindex("\n", i, j) # Should not fail + self.offset = j-(pos+1) + else: + self.offset = self.offset + j-i + return j + + _decl_otherchars = '' + + # Internal -- parse declaration (for use by subclasses). + def parse_declaration(self, i): + # This is some sort of declaration; in "HTML as + # deployed," this should only be the document type + # declaration (""). 
+ # ISO 8879:1986, however, has more complex + # declaration syntax for elements in <!...>, including: + # --comment-- + # [marked section] + # name in the following list: ENTITY, DOCTYPE, ELEMENT, + # ATTLIST, NOTATION, SHORTREF, USEMAP, + # LINKTYPE, LINK, IDLINK, USELINK, SYSTEM + rawdata = self.rawdata + j = i + 2 + assert rawdata[i:j] == "<!", "unexpected call to parse_declaration" + if rawdata[j:j+1] == ">": + # the empty comment <!> + return j + 1 + if rawdata[j:j+1] in ("-", ""): + # Start of comment followed by buffer boundary, + # or just a buffer boundary. + return -1 + # A simple, practical version could look like: ((name|stringlit) S*) + '>' + n = len(rawdata) + if rawdata[j:j+2] == '--': #comment + # Locate --.*-- as the body of the comment + return self.parse_comment(i) + elif rawdata[j] == '[': #marked section + # Locate [statusWord [...arbitrary SGML...]] as the body of the marked section + # Where statusWord is one of TEMP, CDATA, IGNORE, INCLUDE, RCDATA + # Note that this is extended by Microsoft Office "Save as Web" function + # to include [if...] and [endif]. + return self.parse_marked_section(i) + else: #all other declaration elements + decltype, j = self._scan_name(j, i) + if j < 0: + return j + if decltype == "doctype": + self._decl_otherchars = '' + while j < n: + c = rawdata[j] + if c == ">": + # end of declaration syntax + data = rawdata[i+2:j] + if decltype == "doctype": + self.handle_decl(data) + else: + # According to the HTML5 specs sections "8.2.4.44 Bogus + # comment state" and "8.2.4.45 Markup declaration open + # state", a comment token should be emitted. + # Calling unknown_decl provides more flexibility though. 
+ self.unknown_decl(data) + return j + 1 + if c in "\"'": + m = _declstringlit_match(rawdata, j) + if not m: + return -1 # incomplete + j = m.end() + elif c in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ": + name, j = self._scan_name(j, i) + elif c in self._decl_otherchars: + j = j + 1 + elif c == "[": + # this could be handled in a separate doctype parser + if decltype == "doctype": + j = self._parse_doctype_subset(j + 1, i) + elif decltype in {"attlist", "linktype", "link", "element"}: + # must tolerate []'d groups in a content model in an element declaration + # also in data attribute specifications of attlist declaration + # also link type declaration subsets in linktype declarations + # also link attribute specification lists in link declarations + self.error("unsupported '[' char in %s declaration" % decltype) + else: + self.error("unexpected '[' char in declaration") + else: + self.error( + "unexpected %r char in declaration" % rawdata[j]) + if j < 0: + return j + return -1 # incomplete + + # Internal -- parse a marked section + # Override this to handle MS-word extension syntax content + def parse_marked_section(self, i, report=1): + rawdata= self.rawdata + assert rawdata[i:i+3] == '<![', "unexpected call to parse_marked_section()" + sectName, j = self._scan_name( i+3, i ) + if j < 0: + return j + if sectName in {"temp", "cdata", "ignore", "include", "rcdata"}: + # look for standard ]]> ending + match= _markedsectionclose.search(rawdata, i+3) + elif sectName in {"if", "else", "endif"}: + # look for MS Office ]> ending + match= _msmarkedsectionclose.search(rawdata, i+3) + else: + self.error('unknown status keyword %r in marked section' % rawdata[i+3:j]) + if not match: + return -1 + if report: + j = match.start(0) + self.unknown_decl(rawdata[i+3: j]) + return match.end(0) + + # Internal -- parse comment, return length or -1 if not terminated + def parse_comment(self, i, report=1): + rawdata = self.rawdata + if rawdata[i:i+4] != '<!--': LOAD_CONST None def f(x): - None + y = None return x asm = disassemble(f) for elem in ('LOAD_GLOBAL',): @@ -67,10 +67,13 @@ self.assertIn(elem, asm) def test_pack_unpack(self): + # On PyPy, "a, b = ..." 
is even more optimized, by removing + # the ROT_TWO. But the ROT_TWO is not removed if assigning + # to more complex expressions, so check that. for line, elem in ( ('a, = a,', 'LOAD_CONST',), - ('a, b = a, b', 'ROT_TWO',), - ('a, b, c = a, b, c', 'ROT_THREE',), + ('a[1], b = a, b', 'ROT_TWO',), + ('a, b[2], c = a, b, c', 'ROT_THREE',), ): asm = dis_single(line) self.assertIn(elem, asm) @@ -78,6 +81,8 @@ self.assertNotIn('UNPACK_TUPLE', asm) def test_folding_of_tuples_of_constants(self): + # On CPython, "a,b,c=1,2,3" turns into "a,b,c=" + # but on PyPy, it turns into "a=1;b=2;c=3". for line, elem in ( ('a = 1,2,3', '((1, 2, 3))'), ('("a","b","c")', "(('a', 'b', 'c'))"), @@ -86,7 +91,8 @@ ('((1, 2), 3, 4)', '(((1, 2), 3, 4))'), ): asm = dis_single(line) - self.assertIn(elem, asm) + self.assert_(elem in asm or ( + line == 'a,b,c = 1,2,3' and 'UNPACK_TUPLE' not in asm)) self.assertNotIn('BUILD_TUPLE', asm) # Bug 1053819: Tuple of constants misidentified when presented with: @@ -139,12 +145,15 @@ def test_binary_subscr_on_unicode(self): # valid code get optimized - asm = dis_single('u"foo"[0]') - self.assertIn("(u'f')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) - asm = dis_single('u"\u0061\uffff"[1]') - self.assertIn("(u'\\uffff')", asm) - self.assertNotIn('BINARY_SUBSCR', asm) + # XXX for now we always disable this optimization + # XXX see CPython's issue5057 + if 0: + asm = dis_single('u"foo"[0]') + self.assertIn("(u'f')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) + asm = dis_single('u"\u0061\uffff"[1]') + self.assertIn("(u'\\uffff')", asm) + self.assertNotIn('BINARY_SUBSCR', asm) # invalid code doesn't get optimized # out of range diff --git a/lib-python/2.7/test/test_pprint.py b/lib-python/2.7/test/test_pprint.py --- a/lib-python/2.7/test/test_pprint.py +++ b/lib-python/2.7/test/test_pprint.py @@ -233,7 +233,16 @@ frozenset([0, 2]), frozenset([0, 1])])}""" cube = test.test_set.cube(3) - self.assertEqual(pprint.pformat(cube), cube_repr_tgt) + # XXX issues of 
dictionary order, and for the case below, + # order of items in the frozenset([...]) representation. + # Whether we get precisely cube_repr_tgt or not is open + # to implementation-dependent choices (this test probably + # fails horribly in CPython if we tweak the dict order too). + got = pprint.pformat(cube) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cube_repr_tgt) + else: + self.assertEqual(eval(got), cube) cubo_repr_tgt = """\ {frozenset([frozenset([0, 2]), frozenset([0])]): frozenset([frozenset([frozenset([0, 2]), @@ -393,7 +402,11 @@ 2])])])}""" cubo = test.test_set.linegraph(cube) - self.assertEqual(pprint.pformat(cubo), cubo_repr_tgt) + got = pprint.pformat(cubo) + if test.test_support.check_impl_detail(cpython=True): + self.assertEqual(got, cubo_repr_tgt) + else: + self.assertEqual(eval(got), cubo) def test_depth(self): nested_tuple = (1, (2, (3, (4, (5, 6))))) diff --git a/lib-python/2.7/test/test_pydoc.py b/lib-python/2.7/test/test_pydoc.py --- a/lib-python/2.7/test/test_pydoc.py +++ b/lib-python/2.7/test/test_pydoc.py @@ -267,8 +267,8 @@ testpairs = ( ('i_am_not_here', 'i_am_not_here'), ('test.i_am_not_here_either', 'i_am_not_here_either'), - ('test.i_am_not_here.neither_am_i', 'i_am_not_here.neither_am_i'), - ('i_am_not_here.{}'.format(modname), 'i_am_not_here.{}'.format(modname)), + ('test.i_am_not_here.neither_am_i', 'i_am_not_here'), + ('i_am_not_here.{}'.format(modname), 'i_am_not_here'), ('test.{}'.format(modname), modname), ) @@ -292,8 +292,8 @@ result = run_pydoc(modname) finally: forget(modname) - expected = badimport_pattern % (modname, expectedinmsg) - self.assertEqual(expected, result) + expected = badimport_pattern % (modname, '(.+\\.)?' 
+ expectedinmsg + '(\\..+)?$') + self.assertTrue(re.match(expected, result)) def test_input_strip(self): missing_module = " test.i_am_not_here " diff --git a/lib-python/2.7/test/test_pyexpat.py b/lib-python/2.7/test/test_pyexpat.py --- a/lib-python/2.7/test/test_pyexpat.py +++ b/lib-python/2.7/test/test_pyexpat.py @@ -570,6 +570,9 @@ self.assertEqual(self.n, 4) class MalformedInputText(unittest.TestCase): + # CPython seems to ship its own version of expat, they fixed it on this commit : + # http://svn.python.org/view?revision=74429&view=revision + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test1(self): xml = "\0\r\n" parser = expat.ParserCreate() @@ -579,6 +582,7 @@ except expat.ExpatError as e: self.assertEqual(str(e), 'unclosed token: line 2, column 0') + @unittest.skipIf(sys.platform == "darwin", "Expat is broken on Mac OS X 10.6.6") def test2(self): xml = "\r\n" parser = expat.ParserCreate() diff --git a/lib-python/2.7/test/test_repr.py b/lib-python/2.7/test/test_repr.py --- a/lib-python/2.7/test/test_repr.py +++ b/lib-python/2.7/test/test_repr.py @@ -9,6 +9,7 @@ import unittest from test.test_support import run_unittest, check_py3k_warnings +from test.test_support import check_impl_detail from repr import repr as r # Don't shadow builtin repr from repr import Repr @@ -145,8 +146,11 @@ # Functions eq(repr(hash), '') # Methods - self.assertTrue(repr(''.split).startswith( - '") def test_xrange(self): eq = self.assertEqual @@ -185,7 +189,10 @@ def test_descriptors(self): eq = self.assertEqual # method descriptors - eq(repr(dict.items), "") + if check_impl_detail(cpython=True): + eq(repr(dict.items), "") + elif check_impl_detail(pypy=True): + eq(repr(dict.items), "") # XXX member descriptors # XXX attribute descriptors # XXX slot descriptors @@ -247,8 +254,14 @@ eq = self.assertEqual touch(os.path.join(self.subpkgname, self.pkgname + os.extsep + 'py')) from 
areallylongpackageandmodulenametotestreprtruncation.areallylongpackageandmodulenametotestreprtruncation import areallylongpackageandmodulenametotestreprtruncation - eq(repr(areallylongpackageandmodulenametotestreprtruncation), - "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + # On PyPy, we use %r to format the file name; on CPython it is done + # with '%s'. It seems to me that %r is safer . + if '__pypy__' in sys.builtin_module_names: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) + else: + eq(repr(areallylongpackageandmodulenametotestreprtruncation), + "" % (areallylongpackageandmodulenametotestreprtruncation.__name__, areallylongpackageandmodulenametotestreprtruncation.__file__)) eq(repr(sys), "") def test_type(self): diff --git a/lib-python/2.7/test/test_runpy.py b/lib-python/2.7/test/test_runpy.py --- a/lib-python/2.7/test/test_runpy.py +++ b/lib-python/2.7/test/test_runpy.py @@ -5,10 +5,15 @@ import sys import re import tempfile -from test.test_support import verbose, run_unittest, forget +from test.test_support import verbose, run_unittest, forget, check_impl_detail from test.script_helper import (temp_dir, make_script, compile_script, make_pkg, make_zip_script, make_zip_pkg) +if check_impl_detail(pypy=True): + no_lone_pyc_file = True +else: + no_lone_pyc_file = False + from runpy import _run_code, _run_module_code, run_module, run_path # Note: This module can't safely test _run_module_as_main as it @@ -168,13 +173,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 
1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + d2 = run_module(mod_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -190,13 +196,14 @@ self.assertIn("x", d1) self.assertTrue(d1["x"] == 1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", pkg_name - d2 = run_module(pkg_name) # Read from bytecode - self.assertIn("x", d2) - self.assertTrue(d2["x"] == 1) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", pkg_name + d2 = run_module(pkg_name) # Read from bytecode + self.assertIn("x", d2) + self.assertTrue(d2["x"] == 1) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, pkg_name) if verbose: print "Package executed successfully" @@ -244,15 +251,17 @@ self.assertIn("sibling", d1) self.assertIn("nephew", d1) del d1 # Ensure __loader__ entry doesn't keep file open - __import__(mod_name) - os.remove(mod_fname) - if verbose: print "Running from compiled:", mod_name - d2 = run_module(mod_name, run_name=run_name) # Read from bytecode - self.assertIn("__package__", d2) - self.assertTrue(d2["__package__"] == pkg_name) - self.assertIn("sibling", d2) - self.assertIn("nephew", d2) - del d2 # Ensure __loader__ entry doesn't keep file open + if not no_lone_pyc_file: + __import__(mod_name) + os.remove(mod_fname) + if verbose: print "Running from compiled:", mod_name + # Read from bytecode + d2 = run_module(mod_name, run_name=run_name) + self.assertIn("__package__", d2) + 
self.assertTrue(d2["__package__"] == pkg_name) + self.assertIn("sibling", d2) + self.assertIn("nephew", d2) + del d2 # Ensure __loader__ entry doesn't keep file open finally: self._del_pkg(pkg_dir, depth, mod_name) if verbose: print "Module executed successfully" @@ -345,6 +354,8 @@ script_dir, '') def test_directory_compiled(self): + if no_lone_pyc_file: + return with temp_dir() as script_dir: mod_name = '__main__' script_name = self._make_test_script(script_dir, mod_name) diff --git a/lib-python/2.7/test/test_scope.py b/lib-python/2.7/test/test_scope.py --- a/lib-python/2.7/test/test_scope.py +++ b/lib-python/2.7/test/test_scope.py @@ -1,6 +1,6 @@ import unittest from test.test_support import check_syntax_error, check_py3k_warnings, \ - check_warnings, run_unittest + check_warnings, run_unittest, gc_collect class ScopeTests(unittest.TestCase): @@ -432,6 +432,7 @@ for i in range(100): f1() + gc_collect() self.assertEqual(Foo.count, 0) diff --git a/lib-python/2.7/test/test_set.py b/lib-python/2.7/test/test_set.py --- a/lib-python/2.7/test/test_set.py +++ b/lib-python/2.7/test/test_set.py @@ -309,6 +309,7 @@ fo.close() test_support.unlink(test_support.TESTFN) + @test_support.impl_detail(pypy=False) def test_do_not_rehash_dict_keys(self): n = 10 d = dict.fromkeys(map(HashCountingInt, xrange(n))) @@ -559,6 +560,7 @@ p = weakref.proxy(s) self.assertEqual(str(p), str(s)) s = None + test_support.gc_collect() self.assertRaises(ReferenceError, str, p) # C API test only available in a debug build @@ -590,6 +592,7 @@ s.__init__(self.otherword) self.assertEqual(s, set(self.word)) + @test_support.impl_detail() def test_singleton_empty_frozenset(self): f = frozenset() efs = [frozenset(), frozenset([]), frozenset(()), frozenset(''), @@ -770,9 +773,10 @@ for v in self.set: self.assertIn(v, self.values) setiter = iter(self.set) - # note: __length_hint__ is an internal undocumented API, - # don't rely on it in your own programs - self.assertEqual(setiter.__length_hint__(), 
len(self.set)) + if test_support.check_impl_detail(): + # note: __length_hint__ is an internal undocumented API, + # don't rely on it in your own programs + self.assertEqual(setiter.__length_hint__(), len(self.set)) def test_pickling(self): p = pickle.dumps(self.set) @@ -1564,7 +1568,7 @@ for meth in (s.union, s.intersection, s.difference, s.symmetric_difference, s.isdisjoint): for g in (G, I, Ig, L, R): expected = meth(data) - actual = meth(G(data)) + actual = meth(g(data)) if isinstance(expected, bool): self.assertEqual(actual, expected) else: diff --git a/lib-python/2.7/test/test_sets.py b/lib-python/2.7/test/test_sets.py --- a/lib-python/2.7/test/test_sets.py +++ b/lib-python/2.7/test/test_sets.py @@ -686,7 +686,9 @@ set_list = sorted(self.set) self.assertEqual(len(dup_list), len(set_list)) for i, el in enumerate(dup_list): - self.assertIs(el, set_list[i]) + # Object identity is not guarnteed for immutable objects, so we + # can't use assertIs here. + self.assertEqual(el, set_list[i]) def test_deep_copy(self): dup = copy.deepcopy(self.set) diff --git a/lib-python/2.7/test/test_site.py b/lib-python/2.7/test/test_site.py --- a/lib-python/2.7/test/test_site.py +++ b/lib-python/2.7/test/test_site.py @@ -226,6 +226,10 @@ self.assertEqual(len(dirs), 1) wanted = os.path.join('xoxo', 'Lib', 'site-packages') self.assertEqual(dirs[0], wanted) + elif '__pypy__' in sys.builtin_module_names: + self.assertEquals(len(dirs), 1) + wanted = os.path.join('xoxo', 'site-packages') + self.assertEquals(dirs[0], wanted) elif os.sep == '/': self.assertEqual(len(dirs), 2) wanted = os.path.join('xoxo', 'lib', 'python' + sys.version[:3], diff --git a/lib-python/2.7/test/test_socket.py b/lib-python/2.7/test/test_socket.py --- a/lib-python/2.7/test/test_socket.py +++ b/lib-python/2.7/test/test_socket.py @@ -252,6 +252,7 @@ self.assertEqual(p.fileno(), s.fileno()) s.close() s = None + test_support.gc_collect() try: p.fileno() except ReferenceError: @@ -285,32 +286,34 @@ s.sendto(u'\u2620', 
sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None) - self.assertIn('not NoneType', str(cm.exception)) + self.assertIn('NoneType', str(cm.exception)) # 3 args with self.assertRaises(UnicodeEncodeError): s.sendto(u'\u2620', 0, sockname) with self.assertRaises(TypeError) as cm: s.sendto(5j, 0, sockname) - self.assertIn('not complex', str(cm.exception)) + self.assertIn('complex', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, None) - self.assertIn('not NoneType', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('not NoneType', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 'bar', sockname) - self.assertIn('an integer is required', str(cm.exception)) + self.assertIn('integer', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', None, None) - self.assertIn('an integer is required', str(cm.exception)) + if test_support.check_impl_detail(): + self.assertIn('an integer is required', str(cm.exception)) # wrong number of args with self.assertRaises(TypeError) as cm: s.sendto('foo') - self.assertIn('(1 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto('foo', 0, sockname, 4) - self.assertIn('(4 given)', str(cm.exception)) + self.assertIn(' given)', str(cm.exception)) def testCrucialConstants(self): @@ -385,10 +388,10 @@ socket.htonl(k) socket.htons(k) for k in bad_values: - self.assertRaises(OverflowError, socket.ntohl, k) - self.assertRaises(OverflowError, socket.ntohs, k) - self.assertRaises(OverflowError, socket.htonl, k) - self.assertRaises(OverflowError, socket.htons, k) + self.assertRaises((OverflowError, ValueError), socket.ntohl, k) + self.assertRaises((OverflowError, ValueError), socket.ntohs, k) + 
self.assertRaises((OverflowError, ValueError), socket.htonl, k) + self.assertRaises((OverflowError, ValueError), socket.htons, k) def testGetServBy(self): eq = self.assertEqual @@ -428,8 +431,8 @@ if udpport is not None: eq(socket.getservbyport(udpport, 'udp'), service) # Make sure getservbyport does not accept out of range ports. - self.assertRaises(OverflowError, socket.getservbyport, -1) - self.assertRaises(OverflowError, socket.getservbyport, 65536) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, -1) + self.assertRaises((OverflowError, ValueError), socket.getservbyport, 65536) def testDefaultTimeout(self): # Testing default timeout @@ -608,8 +611,8 @@ neg_port = port - 65536 sock = socket.socket() try: - self.assertRaises(OverflowError, sock.bind, (host, big_port)) - self.assertRaises(OverflowError, sock.bind, (host, neg_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, big_port)) + self.assertRaises((OverflowError, ValueError), sock.bind, (host, neg_port)) sock.bind((host, port)) finally: sock.close() @@ -1309,6 +1312,7 @@ closed = False def flush(self): pass def close(self): self.closed = True + def _decref_socketios(self): pass # must not close unless we request it: the original use of _fileobject # by module socket requires that the underlying socket not be closed until diff --git a/lib-python/2.7/test/test_sort.py b/lib-python/2.7/test/test_sort.py --- a/lib-python/2.7/test/test_sort.py +++ b/lib-python/2.7/test/test_sort.py @@ -140,7 +140,10 @@ return random.random() < 0.5 L = [C() for i in range(50)] - self.assertRaises(ValueError, L.sort) + try: + L.sort() + except ValueError: + pass def test_cmpNone(self): # Testing None as a comparison function. @@ -150,8 +153,10 @@ L.sort(None) self.assertEqual(L, range(50)) + @test_support.impl_detail(pypy=False) def test_undetected_mutation(self): # Python 2.4a1 did not always detect mutation + # So does pypy... 
memorywaster = [] for i in range(20): def mutating_cmp(x, y): @@ -226,7 +231,10 @@ def __del__(self): del data[:] data[:] = range(20) - self.assertRaises(ValueError, data.sort, key=SortKiller) + try: + data.sort(key=SortKiller) + except ValueError: + pass def test_key_with_mutating_del_and_exception(self): data = range(10) diff --git a/lib-python/2.7/test/test_ssl.py b/lib-python/2.7/test/test_ssl.py --- a/lib-python/2.7/test/test_ssl.py +++ b/lib-python/2.7/test/test_ssl.py @@ -881,6 +881,8 @@ c = socket.socket() c.connect((HOST, port)) listener_gone.wait() + # XXX why is it necessary? + test_support.gc_collect() try: ssl_sock = ssl.wrap_socket(c) except IOError: @@ -1330,10 +1332,8 @@ def test_main(verbose=False): global CERTFILE, SVN_PYTHON_ORG_ROOT_CERT - CERTFILE = os.path.join(os.path.dirname(__file__) or os.curdir, - "keycert.pem") - SVN_PYTHON_ORG_ROOT_CERT = os.path.join( - os.path.dirname(__file__) or os.curdir, + CERTFILE = test_support.findfile("keycert.pem") + SVN_PYTHON_ORG_ROOT_CERT = test_support.findfile( "https_svn_python_org_root.pem") if (not os.path.exists(CERTFILE) or diff --git a/lib-python/2.7/test/test_str.py b/lib-python/2.7/test/test_str.py --- a/lib-python/2.7/test/test_str.py +++ b/lib-python/2.7/test/test_str.py @@ -422,10 +422,11 @@ for meth in ('foo'.startswith, 'foo'.endswith): with self.assertRaises(TypeError) as cm: meth(['f']) - exc = str(cm.exception) - self.assertIn('unicode', exc) - self.assertIn('str', exc) - self.assertIn('tuple', exc) + if test_support.check_impl_detail(): + exc = str(cm.exception) + self.assertIn('unicode', exc) + self.assertIn('str', exc) + self.assertIn('tuple', exc) def test_main(): test_support.run_unittest(StrTest) diff --git a/lib-python/2.7/test/test_struct.py b/lib-python/2.7/test/test_struct.py --- a/lib-python/2.7/test/test_struct.py +++ b/lib-python/2.7/test/test_struct.py @@ -535,7 +535,8 @@ @unittest.skipUnless(IS32BIT, "Specific to 32bit machines") def test_crasher(self): - 
self.assertRaises(MemoryError, struct.pack, "357913941c", "a") + self.assertRaises((MemoryError, struct.error), struct.pack, + "357913941c", "a") def test_count_overflow(self): hugecount = '{}b'.format(sys.maxsize+1) diff --git a/lib-python/2.7/test/test_subprocess.py b/lib-python/2.7/test/test_subprocess.py --- a/lib-python/2.7/test/test_subprocess.py +++ b/lib-python/2.7/test/test_subprocess.py @@ -16,11 +16,11 @@ # Depends on the following external programs: Python # -if mswindows: - SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' - 'os.O_BINARY);') -else: - SETBINARY = '' +#if mswindows: +# SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' +# 'os.O_BINARY);') +#else: +# SETBINARY = '' try: @@ -420,8 +420,9 @@ self.assertStderrEqual(stderr, "") def test_universal_newlines(self): - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' @@ -448,8 +449,9 @@ def test_universal_newlines_communicate(self): # universal newlines through communicate() - p = subprocess.Popen([sys.executable, "-c", - 'import sys,os;' + SETBINARY + + # NB. replaced SETBINARY with the -u flag + p = subprocess.Popen([sys.executable, "-u", "-c", + 'import sys,os;' + #SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' diff --git a/lib-python/2.7/test/test_support.py b/lib-python/2.7/test/test_support.py --- a/lib-python/2.7/test/test_support.py +++ b/lib-python/2.7/test/test_support.py @@ -431,16 +431,20 @@ rmtree(name) -def findfile(file, here=__file__, subdir=None): +def findfile(file, here=None, subdir=None): """Try to find a file on sys.path and the working directory. 
If it is not found the argument passed to the function is returned (this does not necessarily signal failure; could still be the legitimate path).""" + import test if os.path.isabs(file): return file if subdir is not None: file = os.path.join(subdir, file) path = sys.path - path = [os.path.dirname(here)] + path + if here is None: + path = test.__path__ + path + else: + path = [os.path.dirname(here)] + path for dn in path: fn = os.path.join(dn, file) if os.path.exists(fn): return fn @@ -1050,15 +1054,33 @@ guards, default = _parse_guards(guards) return guards.get(platform.python_implementation().lower(), default) +# ---------------------------------- +# PyPy extension: you can run:: +# python ..../test_foo.py --pdb +# to get a pdb prompt in case of exceptions +ResultClass = unittest.TextTestRunner.resultclass + +class TestResultWithPdb(ResultClass): + + def addError(self, testcase, exc_info): + ResultClass.addError(self, testcase, exc_info) + if '--pdb' in sys.argv: + import pdb, traceback + traceback.print_tb(exc_info[2]) + pdb.post_mortem(exc_info[2]) + +# ---------------------------------- def _run_suite(suite): """Run tests from a unittest.TestSuite-derived class.""" if verbose: - runner = unittest.TextTestRunner(sys.stdout, verbosity=2) + runner = unittest.TextTestRunner(sys.stdout, verbosity=2, + resultclass=TestResultWithPdb) else: runner = BasicTestRunner() + result = runner.run(suite) if not result.wasSuccessful(): if len(result.errors) == 1 and not result.failures: @@ -1071,6 +1093,34 @@ err += "; run in verbose mode for details" raise TestFailed(err) +# ---------------------------------- +# PyPy extension: you can run:: +# python ..../test_foo.py --filter bar +# to run only the test cases whose name contains bar + +def filter_maybe(suite): + try: + i = sys.argv.index('--filter') + filter = sys.argv[i+1] + except (ValueError, IndexError): + return suite + tests = [] + for test in linearize_suite(suite): + if filter in test._testMethodName: + 
tests.append(test) + return unittest.TestSuite(tests) + +def linearize_suite(suite_or_test): + try: + it = iter(suite_or_test) + except TypeError: + yield suite_or_test + return + for subsuite in it: + for item in linearize_suite(subsuite): + yield item + +# ---------------------------------- def run_unittest(*classes): """Run tests from unittest.TestCase-derived classes.""" @@ -1086,6 +1136,7 @@ suite.addTest(cls) else: suite.addTest(unittest.makeSuite(cls)) + suite = filter_maybe(suite) _run_suite(suite) diff --git a/lib-python/2.7/test/test_syntax.py b/lib-python/2.7/test/test_syntax.py --- a/lib-python/2.7/test/test_syntax.py +++ b/lib-python/2.7/test/test_syntax.py @@ -5,7 +5,8 @@ >>> def f(x): ... global x Traceback (most recent call last): -SyntaxError: name 'x' is local and global (, line 1) + File "", line 1 +SyntaxError: name 'x' is local and global The tests are all raise SyntaxErrors. They were created by checking each C call that raises SyntaxError. There are several modules that @@ -375,7 +376,7 @@ In 2.5 there was a missing exception and an assert was triggered in a debug build. The number of blocks must be greater than CO_MAXBLOCKS. SF #1565514 - >>> while 1: + >>> while 1: # doctest:+SKIP ... while 2: ... while 3: ... while 4: diff --git a/lib-python/2.7/test/test_sys.py b/lib-python/2.7/test/test_sys.py --- a/lib-python/2.7/test/test_sys.py +++ b/lib-python/2.7/test/test_sys.py @@ -264,6 +264,7 @@ self.assertEqual(sys.getdlopenflags(), oldflags+1) sys.setdlopenflags(oldflags) + @test.test_support.impl_detail("reference counting") def test_refcount(self): # n here must be a global in order for this test to pass while # tracing with a python function. Tracing calls PyFrame_FastToLocals @@ -287,7 +288,7 @@ is sys._getframe().f_code ) - # sys._current_frames() is a CPython-only gimmick. 
+ @test.test_support.impl_detail("current_frames") def test_current_frames(self): have_threads = True try: @@ -383,7 +384,10 @@ self.assertEqual(len(sys.float_info), 11) self.assertEqual(sys.float_info.radix, 2) self.assertEqual(len(sys.long_info), 2) - self.assertTrue(sys.long_info.bits_per_digit % 5 == 0) + if test.test_support.check_impl_detail(cpython=True): + self.assertTrue(sys.long_info.bits_per_digit % 5 == 0) + else: + self.assertTrue(sys.long_info.bits_per_digit >= 1) self.assertTrue(sys.long_info.sizeof_digit >= 1) self.assertEqual(type(sys.long_info.bits_per_digit), int) self.assertEqual(type(sys.long_info.sizeof_digit), int) @@ -432,6 +436,7 @@ self.assertEqual(type(getattr(sys.flags, attr)), int, attr) self.assertTrue(repr(sys.flags)) + @test.test_support.impl_detail("sys._clear_type_cache") def test_clear_type_cache(self): sys._clear_type_cache() @@ -473,6 +478,7 @@ p.wait() self.assertIn(executable, ["''", repr(sys.executable)]) + at unittest.skipUnless(test.test_support.check_impl_detail(), "sys.getsizeof()") class SizeofTest(unittest.TestCase): TPFLAGS_HAVE_GC = 1<<14 diff --git a/lib-python/2.7/test/test_sys_settrace.py b/lib-python/2.7/test/test_sys_settrace.py --- a/lib-python/2.7/test/test_sys_settrace.py +++ b/lib-python/2.7/test/test_sys_settrace.py @@ -213,12 +213,16 @@ "finally" def generator_example(): # any() will leave the generator before its end - x = any(generator_function()) + x = any(generator_function()); gc.collect() # the following lines were not traced for x in range(10): y = x +# On CPython, when the generator is decref'ed to zero, we see the trace +# for the "finally:" portion. On PyPy, we don't see it before the next +# garbage collection. That's why we put gc.collect() on the same line above. 
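[editor's note, not part of the patch] The comment above describes why `gc.collect()` is placed on the same line as the `any()` call: with reference counting the abandoned generator's `finally:` block runs as soon as the last reference dies, while on PyPy it only runs at the next collection. A minimal stand-alone illustration of that timing difference (hypothetical names, not from the patch):

```python
import gc

events = []

def gen():
    try:
        yield 1
    finally:
        events.append("finally")

g = gen()
next(g)       # start the generator; it is now suspended inside the try block
del g         # with refcounting (CPython) the finally clause runs right here...
gc.collect()  # ...on a tracing GC (PyPy) it may only run at the next collection
assert events == ["finally"]
```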
+ generator_example.events = ([(0, 'call'), (2, 'line'), (-6, 'call'), @@ -282,11 +286,11 @@ self.compare_events(func.func_code.co_firstlineno, tracer.events, func.events) - def set_and_retrieve_none(self): + def test_set_and_retrieve_none(self): sys.settrace(None) assert sys.gettrace() is None - def set_and_retrieve_func(self): + def test_set_and_retrieve_func(self): def fn(*args): pass @@ -323,17 +327,24 @@ self.run_test(tighterloop_example) def test_13_genexp(self): - self.run_test(generator_example) - # issue1265: if the trace function contains a generator, - # and if the traced function contains another generator - # that is not completely exhausted, the trace stopped. - # Worse: the 'finally' clause was not invoked. - tracer = Tracer() - sys.settrace(tracer.traceWithGenexp) - generator_example() - sys.settrace(None) - self.compare_events(generator_example.__code__.co_firstlineno, - tracer.events, generator_example.events) + if self.using_gc: + test_support.gc_collect() + gc.enable() + try: + self.run_test(generator_example) + # issue1265: if the trace function contains a generator, + # and if the traced function contains another generator + # that is not completely exhausted, the trace stopped. + # Worse: the 'finally' clause was not invoked. 
+ tracer = Tracer() + sys.settrace(tracer.traceWithGenexp) + generator_example() + sys.settrace(None) + self.compare_events(generator_example.__code__.co_firstlineno, + tracer.events, generator_example.events) + finally: + if self.using_gc: + gc.disable() def test_14_onliner_if(self): def onliners(): diff --git a/lib-python/2.7/test/test_sysconfig.py b/lib-python/2.7/test/test_sysconfig.py --- a/lib-python/2.7/test/test_sysconfig.py +++ b/lib-python/2.7/test/test_sysconfig.py @@ -209,13 +209,22 @@ self.assertEqual(get_platform(), 'macosx-10.4-fat64') - for arch in ('ppc', 'i386', 'x86_64', 'ppc64'): + for arch in ('ppc', 'i386', 'ppc64', 'x86_64'): get_config_vars()['CFLAGS'] = ('-arch %s -isysroot ' '/Developer/SDKs/MacOSX10.4u.sdk ' '-fno-strict-aliasing -fno-common ' '-dynamic -DNDEBUG -g -O3'%(arch,)) self.assertEqual(get_platform(), 'macosx-10.4-%s'%(arch,)) + + # macosx with ARCHFLAGS set and empty _CONFIG_VARS + os.environ['ARCHFLAGS'] = '-arch i386' + sysconfig._CONFIG_VARS = None + + # this will attempt to recreate the _CONFIG_VARS based on environment + # variables; used to check a problem with the PyPy's _init_posix + # implementation; see: issue 705 + get_config_vars() # linux debian sarge os.name = 'posix' @@ -235,7 +244,7 @@ def test_get_scheme_names(self): wanted = ('nt', 'nt_user', 'os2', 'os2_home', 'osx_framework_user', - 'posix_home', 'posix_prefix', 'posix_user') + 'posix_home', 'posix_prefix', 'posix_user', 'pypy') self.assertEqual(get_scheme_names(), wanted) def test_symlink(self): diff --git a/lib-python/2.7/test/test_tarfile.py b/lib-python/2.7/test/test_tarfile.py --- a/lib-python/2.7/test/test_tarfile.py +++ b/lib-python/2.7/test/test_tarfile.py @@ -169,6 +169,7 @@ except tarfile.ReadError: self.fail("tarfile.open() failed on empty archive") self.assertListEqual(tar.getmembers(), []) + tar.close() def test_null_tarfile(self): # Test for issue6123: Allow opening empty archives. 
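[editor's note, not part of the patch] Many of the `test_tarfile.py` edits below replace the `open(name).read()` idiom with an explicit open/read/close sequence, and add missing `tar.close()` calls. On CPython the anonymous file object is reclaimed immediately by reference counting, but on a tracing GC the descriptor stays open until the next collection. A sketch of the portable pattern (paths and data here are illustrative only):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"payload")
os.close(fd)

# open(path, "rb").read() relies on refcounting to close the descriptor;
# on a GC without reference counts the file stays open until collected.
# Closing explicitly (or using a with statement) is portable:
f = open(path, "rb")
try:
    data = f.read()
finally:
    f.close()
assert data == b"payload"

os.remove(path)
```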
@@ -207,16 +208,21 @@ fobj = open(self.tarname, "rb") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, os.path.abspath(fobj.name)) + tar.close() def test_no_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) self.assertRaises(AttributeError, getattr, fobj, "name") tar = tarfile.open(fileobj=fobj, mode=self.mode) self.assertEqual(tar.name, None) def test_empty_name_attribute(self): - data = open(self.tarname, "rb").read() + f = open(self.tarname, "rb") + data = f.read() + f.close() fobj = StringIO.StringIO(data) fobj.name = "" tar = tarfile.open(fileobj=fobj, mode=self.mode) @@ -515,6 +521,7 @@ self.tar = tarfile.open(self.tarname, mode=self.mode, encoding="iso8859-1") tarinfo = self.tar.getmember("pax/umlauts-�������") self._test_member(tarinfo, size=7011, chksum=md5_regtype) + self.tar.close() class LongnameTest(ReadTest): @@ -675,6 +682,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.rmdir(path) @@ -692,6 +700,7 @@ tar.gettarinfo(target) tarinfo = tar.gettarinfo(link) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(target) os.remove(link) @@ -704,6 +713,7 @@ tar = tarfile.open(tmpname, self.mode) tarinfo = tar.gettarinfo(path) self.assertEqual(tarinfo.size, 0) + tar.close() finally: os.remove(path) @@ -722,6 +732,7 @@ tar.add(dstname) os.chdir(cwd) self.assertTrue(tar.getnames() == [], "added the archive to itself") + tar.close() def test_exclude(self): tempdir = os.path.join(TEMPDIR, "exclude") @@ -742,6 +753,7 @@ tar = tarfile.open(tmpname, "r") self.assertEqual(len(tar.getmembers()), 1) self.assertEqual(tar.getnames()[0], "empty_dir") + tar.close() finally: shutil.rmtree(tempdir) @@ -947,7 +959,9 @@ fobj.close() elif self.mode.endswith("bz2"): dec = bz2.BZ2Decompressor() - data = open(tmpname, "rb").read() + f = open(tmpname, 
"rb") + data = f.read() + f.close() data = dec.decompress(data) self.assertTrue(len(dec.unused_data) == 0, "found trailing data") @@ -1026,6 +1040,7 @@ "unable to read longname member") self.assertEqual(tarinfo.linkname, member.linkname, "unable to read longname member") + tar.close() def test_longname_1023(self): self._test(("longnam/" * 127) + "longnam") @@ -1118,6 +1133,7 @@ else: n = tar.getmembers()[0].name self.assertTrue(name == n, "PAX longname creation failed") + tar.close() def test_pax_global_header(self): pax_headers = { @@ -1146,6 +1162,7 @@ tarfile.PAX_NUMBER_FIELDS[key](val) except (TypeError, ValueError): self.fail("unable to convert pax header field") + tar.close() def test_pax_extended_header(self): # The fields from the pax header have priority over the @@ -1165,6 +1182,7 @@ self.assertEqual(t.pax_headers, pax_headers) self.assertEqual(t.name, "foo") self.assertEqual(t.uid, 123) + tar.close() class UstarUnicodeTest(unittest.TestCase): @@ -1208,6 +1226,7 @@ tarinfo.name = "foo" tarinfo.uname = u"���" self.assertRaises(UnicodeError, tar.addfile, tarinfo) + tar.close() def test_unicode_argument(self): tar = tarfile.open(tarname, "r", encoding="iso8859-1", errors="strict") @@ -1262,6 +1281,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="ascii", errors=handler) self.assertEqual(tar.getnames()[0], name) + tar.close() self.assertRaises(UnicodeError, tarfile.open, tmpname, encoding="ascii", errors="strict") @@ -1274,6 +1294,7 @@ tar = tarfile.open(tmpname, format=self.format, encoding="iso8859-1", errors="utf-8") self.assertEqual(tar.getnames()[0], "���/" + u"�".encode("utf8")) + tar.close() class AppendTest(unittest.TestCase): @@ -1301,6 +1322,7 @@ def _test(self, names=["bar"], fileobj=None): tar = tarfile.open(self.tarname, fileobj=fileobj) self.assertEqual(tar.getnames(), names) + tar.close() def test_non_existing(self): self._add_testfile() @@ -1319,7 +1341,9 @@ def test_fileobj(self): self._create_testtar() - data = 
open(self.tarname).read() + f = open(self.tarname) + data = f.read() + f.close() fobj = StringIO.StringIO(data) self._add_testfile(fobj) fobj.seek(0) @@ -1345,7 +1369,9 @@ # Append mode is supposed to fail if the tarfile to append to # does not end with a zero block. def _test_error(self, data): - open(self.tarname, "wb").write(data) + f = open(self.tarname, "wb") + f.write(data) + f.close() self.assertRaises(tarfile.ReadError, self._add_testfile) def test_null(self): diff --git a/lib-python/2.7/test/test_tempfile.py b/lib-python/2.7/test/test_tempfile.py From noreply at buildbot.pypy.org Mon May 14 21:46:11 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 14 May 2012 21:46:11 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: add a test to check that we can take pointers to incomplete structures Message-ID: <20120514194611.17AAF82253@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55090:7db2e26caf2f Date: 2012-05-14 21:38 +0200 http://bitbucket.org/pypy/pypy/changeset/7db2e26caf2f/ Log: add a test to check that we can take pointers to incomplete structures diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -245,6 +245,22 @@ assert repr(descr.ffitype) == '' assert descr.ffitype.sizeof() == longsize*2 raises(ValueError, "descr.define_fields(fields)") + + def test_pointer_to_incomplete_struct(self): + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + fields = [ + Field('x', types.slong), + Field('y', types.slong), + ] + descr = _StructDescr('foo') + foo_ffitype = descr.ffitype + foo_p = types.Pointer(descr.ffitype) + assert foo_p.deref_pointer() is foo_ffitype + descr.define_fields(fields) + assert descr.ffitype is foo_ffitype + assert foo_p.deref_pointer() is foo_ffitype + assert types.Pointer(descr.ffitype) is foo_p def test_compute_shape(self): From noreply at 
buildbot.pypy.org Mon May 14 21:46:12 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Mon, 14 May 2012 21:46:12 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: typo Message-ID: <20120514194612.7D5AB82253@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55091:61a2b32337cb Date: 2012-05-14 21:45 +0200 http://bitbucket.org/pypy/pypy/changeset/61a2b32337cb/ Log: typo diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -90,7 +90,7 @@ def descr_new_structdescr(space, w_type, name, w_fields=None): descr = W__StructDescr(space, name) if w_fields is not space.w_None: - descr.define_fields(w_fields) + descr.define_fields(space, w_fields) return descr def round_up(size, alignment): From noreply at buildbot.pypy.org Tue May 15 08:53:48 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 15 May 2012 08:53:48 +0200 (CEST) Subject: [pypy-commit] pypy default: Test and fix: uintptr_t is actually unsigned. Message-ID: <20120515065348.20A8E82253@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r55092:9112e81329f7 Date: 2012-05-15 08:53 +0200 http://bitbucket.org/pypy/pypy/changeset/9112e81329f7/ Log: Test and fix: uintptr_t is actually unsigned. 
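[editor's note, not part of the patch] The fix below adds `uintptr_t` to the set of C integer types treated as unsigned in rffi, so casting a value like `sys.maxint * 2` through it stays positive instead of wrapping negative. The same unsigned wraparound can be seen at app level with a pointer-sized unsigned ctypes type standing in for `uintptr_t` (an illustration, not the rffi code itself):

```python
import ctypes

# uintptr_t is unsigned: a negative value cast through a pointer-sized
# unsigned type wraps around to a large positive number, which is what the
# new test asserts with rffi.cast(rffi.UINTPTR_T, value).
# ctypes.c_size_t is used here as a stand-in of the same width on common
# platforms.
bits = ctypes.sizeof(ctypes.c_size_t) * 8
assert ctypes.c_size_t(-1).value == 2 ** bits - 1
```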
diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -74,3 +74,20 @@ space = gettestobjspace(withstrbuf=True) cls = space._get_interplevel_cls(space.w_str) assert cls is W_AbstractStringObject + + def test_wrap_various_unsigned_types(self): + import sys + from pypy.rpython.lltypesystem import lltype, rffi + space = self.space + value = sys.maxint * 2 + x = rffi.cast(lltype.Unsigned, value) + assert space.eq_w(space.wrap(value), space.wrap(x)) + x = rffi.cast(rffi.UINTPTR_T, value) + assert x > 0 + assert space.eq_w(space.wrap(value), space.wrap(x)) + value = 60000 + x = rffi.cast(rffi.USHORT, value) + assert space.eq_w(space.wrap(value), space.wrap(x)) + value = 200 + x = rffi.cast(rffi.UCHAR, value) + assert space.eq_w(space.wrap(value), space.wrap(x)) diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -436,6 +436,7 @@ 'long long', 'unsigned long long', 'size_t', 'time_t', 'wchar_t', 'uintptr_t', 'intptr_t'] +_TYPES_ARE_UNSIGNED = set(['size_t', 'uintptr_t']) # plus "unsigned *" if os.name != 'nt': TYPES.append('mode_t') TYPES.append('pid_t') @@ -454,7 +455,7 @@ name = 'u' + name[9:] signed = False else: - signed = (name != 'size_t') + signed = (name not in _TYPES_ARE_UNSIGNED) name = name.replace(' ', '') names.append(name) populatelist.append((name.upper(), c_name, signed)) From noreply at buildbot.pypy.org Tue May 15 11:01:23 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 15 May 2012 11:01:23 +0200 (CEST) Subject: [pypy-commit] pypy default: enforce nonnegative ints here Message-ID: <20120515090123.7FD9882253@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r55093:063cfdcec5f0 Date: 2012-05-15 11:01 +0200 http://bitbucket.org/pypy/pypy/changeset/063cfdcec5f0/ Log: 
enforce nonnegative ints here diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -810,13 +810,14 @@ b.append(cp[i]) i += 1 return assert_str0(b.build()) + charp2strn._annenforceargs_ = [None, annmodel.SomeInteger(nonneg=True)] # char* and size -> str (which can contain null bytes) def charpsize2str(cp, size): b = builder_class(size) b.append_charpsize(cp, size) return b.build() - charpsize2str._annenforceargs_ = [None, int] + charpsize2str._annenforceargs_ = [None, annmodel.SomeInteger(nonneg=True)] return (str2charp, free_charp, charp2str, get_nonmovingbuffer, free_nonmovingbuffer, From noreply at buildbot.pypy.org Tue May 15 11:06:39 2012 From: noreply at buildbot.pypy.org (fijal) Date: Tue, 15 May 2012 11:06:39 +0200 (CEST) Subject: [pypy-commit] pypy default: Backed out changeset 063cfdcec5f0 Message-ID: <20120515090639.2015D82253@wyvern.cs.uni-duesseldorf.de> Author: Maciej Fijalkowski Branch: Changeset: r55094:7e4a5a9c477c Date: 2012-05-15 11:06 +0200 http://bitbucket.org/pypy/pypy/changeset/7e4a5a9c477c/ Log: Backed out changeset 063cfdcec5f0 diff --git a/pypy/rpython/lltypesystem/rffi.py b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -810,14 +810,13 @@ b.append(cp[i]) i += 1 return assert_str0(b.build()) - charp2strn._annenforceargs_ = [None, annmodel.SomeInteger(nonneg=True)] # char* and size -> str (which can contain null bytes) def charpsize2str(cp, size): b = builder_class(size) b.append_charpsize(cp, size) return b.build() - charpsize2str._annenforceargs_ = [None, annmodel.SomeInteger(nonneg=True)] + charpsize2str._annenforceargs_ = [None, int] return (str2charp, free_charp, charp2str, get_nonmovingbuffer, free_nonmovingbuffer, From noreply at buildbot.pypy.org Tue May 15 11:10:44 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 15 May 2012 11:10:44 +0200 
(CEST) Subject: [pypy-commit] pypy default: Rewrite select.select() to go directly from module/select to Message-ID: <20120515091044.9E2C382253@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r55095:36de2e50b7b1 Date: 2012-05-15 11:00 +0200 http://bitbucket.org/pypy/pypy/changeset/36de2e50b7b1/ Log: Rewrite select.select() to go directly from module/select to the rffi interface without stopping at rlib/rpoll. The code is not much longer, and I think incredibly more efficient (no dicts in particular). diff --git a/pypy/module/select/interp_select.py b/pypy/module/select/interp_select.py --- a/pypy/module/select/interp_select.py +++ b/pypy/module/select/interp_select.py @@ -74,6 +74,32 @@ pollmethods[methodname] = interp2app(getattr(Poll, methodname)) Poll.typedef = TypeDef('select.poll', **pollmethods) +# ____________________________________________________________ + + +from pypy.rlib import _rsocket_rffi as _c +from pypy.rpython.lltypesystem import lltype, rffi + + +def _build_fd_set(space, list_w, ll_list, nfds): + _c.FD_ZERO(ll_list) + fdlist = [] + for w_f in list_w: + fd = space.c_filedescriptor_w(w_f) + if fd > nfds: + nfds = fd + _c.FD_SET(fd, ll_list) + fdlist.append(fd) + return fdlist, nfds +_build_fd_set._always_inline_ = True # get rid of the tuple result + +def _unbuild_fd_set(space, list_w, fdlist, ll_list, reslist_w): + for i in range(len(fdlist)): + fd = fdlist[i] + if _c.FD_ISSET(fd, ll_list): + reslist_w.append(list_w[i]) + + def select(space, w_iwtd, w_owtd, w_ewtd, w_timeout=None): """Wait until one or more file descriptors are ready for some kind of I/O. 
The first three arguments are sequences of file descriptors to be waited for: @@ -99,29 +125,62 @@ iwtd_w = space.listview(w_iwtd) owtd_w = space.listview(w_owtd) ewtd_w = space.listview(w_ewtd) - iwtd = [space.c_filedescriptor_w(w_f) for w_f in iwtd_w] - owtd = [space.c_filedescriptor_w(w_f) for w_f in owtd_w] - ewtd = [space.c_filedescriptor_w(w_f) for w_f in ewtd_w] - iwtd_d = {} - owtd_d = {} - ewtd_d = {} - for i in range(len(iwtd)): - iwtd_d[iwtd[i]] = iwtd_w[i] - for i in range(len(owtd)): - owtd_d[owtd[i]] = owtd_w[i] - for i in range(len(ewtd)): - ewtd_d[ewtd[i]] = ewtd_w[i] + + ll_inl = lltype.nullptr(_c.fd_set.TO) + ll_outl = lltype.nullptr(_c.fd_set.TO) + ll_errl = lltype.nullptr(_c.fd_set.TO) + ll_timeval = lltype.nullptr(_c.timeval) + try: + fdlistin = None + fdlistout = None + fdlisterr = None + nfds = -1 + if len(iwtd_w) > 0: + ll_inl = lltype.malloc(_c.fd_set.TO, flavor='raw') + fdlistin, nfds = _build_fd_set(space, iwtd_w, ll_inl, nfds) + if len(owtd_w) > 0: + ll_outl = lltype.malloc(_c.fd_set.TO, flavor='raw') + fdlistout, nfds = _build_fd_set(space, owtd_w, ll_outl, nfds) + if len(ewtd_w) > 0: + ll_errl = lltype.malloc(_c.fd_set.TO, flavor='raw') + fdlisterr, nfds = _build_fd_set(space, ewtd_w, ll_errl, nfds) + if space.is_w(w_timeout, space.w_None): - iwtd, owtd, ewtd = rpoll.select(iwtd, owtd, ewtd) + timeout = -1.0 else: - iwtd, owtd, ewtd = rpoll.select(iwtd, owtd, ewtd, space.float_w(w_timeout)) - except rpoll.SelectError, s: - w_errortype = space.fromcache(Cache).w_error - raise OperationError(w_errortype, space.newtuple([ - space.wrap(s.errno), space.wrap(s.get_msg())])) + timeout = space.float_w(w_timeout) + if timeout >= 0.0: + ll_timeval = rffi.make(_c.timeval) + i = int(timeout) + rffi.setintfield(ll_timeval, 'c_tv_sec', i) + rffi.setintfield(ll_timeval, 'c_tv_usec', int((timeout-i)*1000000)) - return space.newtuple([ - space.newlist([iwtd_d[i] for i in iwtd]), - space.newlist([owtd_d[i] for i in owtd]), - space.newlist([ewtd_d[i] for 
i in ewtd])]) + res = _c.select(nfds + 1, ll_inl, ll_outl, ll_errl, ll_timeval) + + if res < 0: + errno = _c.geterrno() + msg = _c.socket_strerror_str(errno) + w_errortype = space.fromcache(Cache).w_error + raise OperationError(w_errortype, space.newtuple([ + space.wrap(errno), space.wrap(msg)])) + + resin_w = [] + resout_w = [] + reserr_w = [] + if res > 0: + if fdlistin is not None: + _unbuild_fd_set(space, iwtd_w, fdlistin, ll_inl, resin_w) + if fdlistout is not None: + _unbuild_fd_set(space, owtd_w, fdlistout, ll_outl, resout_w) + if fdlisterr is not None: + _unbuild_fd_set(space, ewtd_w, fdlisterr, ll_errl, reserr_w) + finally: + if ll_timeval: lltype.free(ll_timeval, flavor='raw') + if ll_errl: lltype.free(ll_errl, flavor='raw') + if ll_outl: lltype.free(ll_outl, flavor='raw') + if ll_inl: lltype.free(ll_inl, flavor='raw') + + return space.newtuple([space.newlist(resin_w), + space.newlist(resout_w), + space.newlist(reserr_w)]) From noreply at buildbot.pypy.org Tue May 15 11:10:45 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 15 May 2012 11:10:45 +0200 (CEST) Subject: [pypy-commit] pypy default: merge heads Message-ID: <20120515091045.D511A82253@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r55096:56ab50a13380 Date: 2012-05-15 11:10 +0200 http://bitbucket.org/pypy/pypy/changeset/56ab50a13380/ Log: merge heads From noreply at buildbot.pypy.org Tue May 15 11:24:06 2012 From: noreply at buildbot.pypy.org (arigo) Date: Tue, 15 May 2012 11:24:06 +0200 (CEST) Subject: [pypy-commit] pypy default: Python 2.5 compat Message-ID: <20120515092406.0E14782253@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r55097:0aa5df985aea Date: 2012-05-15 11:23 +0200 http://bitbucket.org/pypy/pypy/changeset/0aa5df985aea/ Log: Python 2.5 compat diff --git a/pypy/module/select/test/test_kqueue.py b/pypy/module/select/test/test_kqueue.py --- a/pypy/module/select/test/test_kqueue.py +++ b/pypy/module/select/test/test_kqueue.py @@ -100,7 
+100,7 @@ client.setblocking(False) try: client.connect(("127.0.0.1", server_socket.getsockname()[1])) - except socket.error as e: + except socket.error, e: if 'bsd' in sys.platform: assert e.args[0] == errno.ENOENT else: From noreply at buildbot.pypy.org Tue May 15 14:44:58 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 15 May 2012 14:44:58 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: a failing test Message-ID: <20120515124458.0322782253@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55098:1ee9230bdbad Date: 2012-05-15 10:09 +0200 http://bitbucket.org/pypy/pypy/changeset/1ee9230bdbad/ Log: a failing test diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -261,7 +261,31 @@ assert descr.ffitype is foo_ffitype assert foo_p.deref_pointer() is foo_ffitype assert types.Pointer(descr.ffitype) is foo_p - + + def test_nested_structure(self): + skip('in-progress') + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + foo_fields = [ + Field('x', types.slong), + Field('y', types.slong), + ] + foo_descr = _StructDescr('foo', foo_fields) + # + bar_fields = [ + Field('x', types.slong), + Field('foo', foo_descr.ffitype), + ] + bar_descr = _StructDescr('bar', bar_fields) + assert bar_descr.ffitype.sizeof() == longsize*3 + # + struct = bar_descr.allocate() + struct.setfield('x', 40) + struct_foo = struct.getfield('foo') + struct_foo.setfield('x', 41) + struct_foo.setfield('y', 42) + mem = self.read_raw_mem(struct.getaddr(), 'c_long', 3) + assert mem == [40, 41, 42] def test_compute_shape(self): from _ffi import Structure, Field, types From noreply at buildbot.pypy.org Tue May 15 14:45:04 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 15 May 2012 14:45:04 +0200 (CEST) Subject: [pypy-commit] pypy default: add a test which automatically checks that branches are 
documented as soon as they are merged. For now, whatsnew-1.9.rst contains empty documentation for all the branches merged since 1.8: everyone please briefly document what the branch did Message-ID: <20120515124504.2AD2A82B37@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r55103:b9d226fcf4d9 Date: 2012-05-15 14:44 +0200 http://bitbucket.org/pypy/pypy/changeset/b9d226fcf4d9/ Log: add a test which automatically checks that branches are documented as soon as they are merged. For now, whatsnew-1.9.rst contains empty documentation for all the branches merged since 1.8: everyone please briefly document what the branch did diff --git a/pypy/doc/test/test_whatsnew.py b/pypy/doc/test/test_whatsnew.py new file mode 100644 --- /dev/null +++ b/pypy/doc/test/test_whatsnew.py @@ -0,0 +1,80 @@ +import py +import pypy +from commands import getoutput +ROOT = py.path.local(pypy.__file__).dirpath().dirpath() + + +def parse_doc(s): + startrev = None + branches = set() + def parseline(line): + _, value = line.split(':', 1) + return value.strip() + # + for line in s.splitlines(): + if line.startswith('.. startrev:'): + startrev = parseline(line) + elif line.startswith('.. branch:'): + branches.add(parseline(line)) + return startrev, branches + +def get_merged_branches(path, startrev, endrev): + # X = take all the merges which are descendants of startrev and are on default + # revset = all the parents of X which are not on default + # ===> + # revset contains all the branches which have been merged to default since + # startrev + revset = 'parents(%s::%s and \ + merge() and \ + branch(default)) and \ + not branch(default)' % (startrev, endrev) + cmd = r"hg log -R '%s' -r '%s' --template '{branches}\n'" % (path, revset) + out = getoutput(cmd) + branches = set(map(str.strip, out.splitlines())) + return branches + + +def test_parse_doc(): + s = """ +===== +Title +===== + +.. startrev: 12345 + +bla bla bla bla + +.. branch: foobar + +xxx yyy zzz + +.. 
branch: hello + +qqq www ttt +""" + startrev, branches = parse_doc(s) + assert startrev == '12345' + assert branches == set(['foobar', 'hello']) + +def test_get_merged_branches(): + branches = get_merged_branches(ROOT, 'f34f0c11299f', '79770e0c2f93') + assert branches == set(['numpy-indexing-by-arrays-bool', + 'better-jit-hooks-2', + 'numpypy-ufuncs']) + +def test_whatsnew(): + doc = ROOT.join('pypy', 'doc') + whatsnew_list = doc.listdir('whatsnew-*.rst') + whatsnew_list.sort() + last_whatsnew = whatsnew_list[-1].read() + startrev, documented = parse_doc(last_whatsnew) + merged = get_merged_branches(ROOT, startrev, '') + not_documented = merged.difference(documented) + not_merged = documented.difference(merged) + print 'Branches merged but not documented:' + print '\n'.join(not_documented) + print + print 'Branches documented but not merged:' + print '\n'.join(not_merged) + print + assert not not_documented and not not_merged diff --git a/pypy/doc/whatsnew-1.9.rst b/pypy/doc/whatsnew-1.9.rst new file mode 100644 --- /dev/null +++ b/pypy/doc/whatsnew-1.9.rst @@ -0,0 +1,49 @@ +====================== +What's new in PyPy 1.9 +====================== + +.. this is the revision just after the creation of the release-1.8.x branch +.. startrev: a4261375b359 + +.. branch: array_equal +.. branch: better-jit-hooks-2 +.. branch: exception-cannot-occur +.. branch: faster-heapcache +.. branch: faster-str-decode-escape +.. branch: float-bytes +.. branch: float-bytes-2 +.. branch: jit-frame-counter +.. branch: kill-geninterp +.. branch: kqueue +.. branch: kwargsdict-strategy +.. branch: matrixmath-dot +.. branch: merge-2.7.2 +.. branch: ndmin +.. branch: newindex +.. branch: non-null-threadstate +.. branch: numppy-flatitter +.. branch: numpy-back-to-applevel +.. branch: numpy-concatenate +.. branch: numpy-indexing-by-arrays-bool +.. branch: numpy-record-dtypes +.. branch: numpy-single-jitdriver +.. branch: numpy-ufuncs2 +.. branch: numpy-ufuncs3 +.. branch: numpypy-issue1137 +.. 
branch: numpypy-out +.. branch: numpypy-shape-bug +.. branch: numpypy-ufuncs +.. branch: pytest +.. branch: revive-dlltool +.. branch: safe-getargs-freelist +.. branch: sanitize-finally-stack +.. branch: set-strategies +.. branch: speedup-list-comprehension +.. branch: stdlib-unification +.. branch: step-one-xrange +.. branch: string-NUL +.. branch: win32-cleanup +.. branch: win32-cleanup2 +.. branch: win32-cleanup_2 +.. branch: win64-stage1 +.. branch: zlib-mem-pressure From noreply at buildbot.pypy.org Tue May 15 14:44:59 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 15 May 2012 14:44:59 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: kill support for _rawffi structures and add support for _ffi structures. Some tests fail because we leak W__StructDescr.ffistruct, in-progress Message-ID: <20120515124459.4211682255@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55099:58e8161a5deb Date: 2012-05-15 11:25 +0200 http://bitbucket.org/pypy/pypy/changeset/58e8161a5deb/ Log: kill support for _rawffi structures and add support for _ffi structures. 
Some tests fail because we leak W__StructDescr.ffistruct, in-progress diff --git a/pypy/module/_ffi/interp_ffitype.py b/pypy/module/_ffi/interp_ffitype.py --- a/pypy/module/_ffi/interp_ffitype.py +++ b/pypy/module/_ffi/interp_ffitype.py @@ -1,4 +1,4 @@ -from pypy.rlib import libffi +from pypy.rlib import libffi, clibffi from pypy.rlib.rarithmetic import intmask from pypy.rlib import jit from pypy.interpreter.baseobjspace import Wrappable @@ -12,12 +12,10 @@ def __init__(self, name, ffitype, w_datashape=None, w_pointer_to=None): self.name = name - self._ffitype = ffitype + self._ffitype = clibffi.FFI_TYPE_NULL self.w_datashape = w_datashape self.w_pointer_to = w_pointer_to - ## XXX: re-enable this check when the ffistruct branch is done - ## if self.is_struct(): - ## assert w_datashape is not None + self.set_ffitype(ffitype) @jit.elidable def get_ffitype(self): @@ -29,6 +27,8 @@ if self._ffitype: raise ValueError("The _ffitype is already set") self._ffitype = ffitype + if ffitype and self.is_struct(): + assert self.w_datashape is not None def descr_deref_pointer(self, space): if self.w_pointer_to is None: diff --git a/pypy/module/_ffi/interp_funcptr.py b/pypy/module/_ffi/interp_funcptr.py --- a/pypy/module/_ffi/interp_funcptr.py +++ b/pypy/module/_ffi/interp_funcptr.py @@ -125,7 +125,7 @@ self.argchain.arg(singlefloatval) def handle_struct(self, w_ffitype, w_structinstance): - ptrval = w_structinstance.ll_buffer + ptrval = w_structinstance.rawmem self.argchain.arg_raw(ptrval) @@ -204,7 +204,7 @@ return self.func.call(self.argchain, rffi.FLOAT) def get_struct(self, w_datashape): - return self.func.call(self.argchain, rffi.ULONG, is_struct=True) + return self.func.call(self.argchain, rffi.LONG, is_struct=True) def get_void(self, w_ffitype): return self.func.call(self.argchain, lltype.Void) diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -43,7 +43,8 @@ 
def __init__(self, space, name): self.space = space - self.w_ffitype = W_FFIType('struct %s' % name, clibffi.FFI_TYPE_NULL, None) + self.w_ffitype = W_FFIType('struct %s' % name, clibffi.FFI_TYPE_NULL, + w_datashape=self) self.fields_w = None self.name2w_field = {} @@ -65,12 +66,21 @@ self.ffistruct = clibffi.make_struct_ffitype_e(size, alignment, field_types) self.w_ffitype.set_ffitype(self.ffistruct.ffistruct) - def allocate(self, space): + def check_complete(self): if self.fields_w is None: raise operationerrfmt(space.w_ValueError, "%s has an incomplete type", self.w_ffitype.name) + + def allocate(self, space): + self.check_complete() return W__StructInstance(self) + @unwrap_spec(addr=int) + def fromaddress(self, space, addr): + self.check_complete() + rawmem = rffi.cast(rffi.VOIDP, addr) + return W__StructInstance(self, allocate=False, autofree=True, rawmem=rawmem) + @jit.elidable_promote('0') def get_type_and_offset_for_field(self, name): try: @@ -121,24 +131,33 @@ ffitype = interp_attrproperty('w_ffitype', W__StructDescr), define_fields = interp2app(W__StructDescr.define_fields), allocate = interp2app(W__StructDescr.allocate), + fromaddress = interp2app(W__StructDescr.fromaddress), ) # ============================================================================== +NULL = lltype.nullptr(rffi.VOIDP.TO) + class W__StructInstance(Wrappable): _immutable_fields_ = ['structdescr', 'rawmem'] - def __init__(self, structdescr): + def __init__(self, structdescr, allocate=True, autofree=True, rawmem=NULL): self.structdescr = structdescr - size = structdescr.w_ffitype.sizeof() - self.rawmem = lltype.malloc(rffi.VOIDP.TO, size, flavor='raw', - zero=True, add_memory_pressure=True) + self.autofree = autofree + if allocate: + assert not rawmem + assert autofree + size = structdescr.w_ffitype.sizeof() + self.rawmem = lltype.malloc(rffi.VOIDP.TO, size, flavor='raw', + zero=True, add_memory_pressure=True) + else: + self.rawmem = rawmem @must_be_light_finalizer def __del__(self): 
- if self.rawmem: + if self.autofree and self.rawmem: lltype.free(self.rawmem, flavor='raw') self.rawmem = lltype.nullptr(rffi.VOIDP.TO) diff --git a/pypy/module/_ffi/test/test_funcptr.py b/pypy/module/_ffi/test/test_funcptr.py --- a/pypy/module/_ffi/test/test_funcptr.py +++ b/pypy/module/_ffi/test/test_funcptr.py @@ -452,19 +452,19 @@ return p.x + p.y; } """ - import _rawffi - from _ffi import CDLL, types - POINT = _rawffi.Structure([('x', 'l'), ('y', 'l')]) - ffi_point = POINT.get_ffi_type() + from _ffi import CDLL, types, _StructDescr, Field + Point = _StructDescr('Point', [ + Field('x', types.slong), + Field('y', types.slong), + ]) libfoo = CDLL(self.libfoo_name) - sum_point = libfoo.getfunc('sum_point', [ffi_point], types.slong) + sum_point = libfoo.getfunc('sum_point', [Point.ffitype], types.slong) # - p = POINT() - p.x = 30 - p.y = 12 + p = Point.allocate() + p.setfield('x', 30) + p.setfield('y', 12) res = sum_point(p) assert res == 42 - p.free() def test_byval_result(self): """ @@ -475,17 +475,18 @@ return p; } """ - import _rawffi - from _ffi import CDLL, types - POINT = _rawffi.Structure([('x', 'l'), ('y', 'l')]) - ffi_point = POINT.get_ffi_type() + from _ffi import CDLL, types, _StructDescr, Field + Point = _StructDescr('Point', [ + Field('x', types.slong), + Field('y', types.slong), + ]) libfoo = CDLL(self.libfoo_name) - make_point = libfoo.getfunc('make_point', [types.slong, types.slong], ffi_point) + make_point = libfoo.getfunc('make_point', [types.slong, types.slong], + Point.ffitype) # p = make_point(12, 34) - assert p.x == 12 - assert p.y == 34 - p.free() + assert p.getfield('x') == 12 + assert p.getfield('y') == 34 def test_TypeError_numargs(self): from _ffi import CDLL, types diff --git a/pypy/module/_ffi/type_converter.py b/pypy/module/_ffi/type_converter.py --- a/pypy/module/_ffi/type_converter.py +++ b/pypy/module/_ffi/type_converter.py @@ -3,7 +3,6 @@ from pypy.rlib.rarithmetic import intmask, r_uint from pypy.rpython.lltypesystem import rffi 
from pypy.interpreter.error import operationerrfmt -from pypy.module._rawffi.structure import W_StructureInstance, W_Structure from pypy.module._ffi.interp_ffitype import app_types class FromAppLevelConverter(object): @@ -18,6 +17,7 @@ self.space = space def unwrap_and_do(self, w_ffitype, w_obj): + from pypy.module._ffi.interp_struct import W__StructInstance space = self.space if w_ffitype.is_longlong(): # note that we must check for longlong first, because either @@ -50,7 +50,7 @@ self._singlefloat(w_ffitype, w_obj) elif w_ffitype.is_struct(): # arg_raw directly takes value to put inside ll_args - w_obj = space.interp_w(W_StructureInstance, w_obj) + w_obj = space.interp_w(W__StructInstance, w_obj) self.handle_struct(w_ffitype, w_obj) else: self.error(w_ffitype, w_obj) @@ -183,6 +183,7 @@ self.space = space def do_and_wrap(self, w_ffitype): + from pypy.module._ffi.interp_struct import W__StructDescr space = self.space if w_ffitype.is_longlong(): # note that we must check for longlong first, because either @@ -222,9 +223,9 @@ return self._singlefloat(w_ffitype) elif w_ffitype.is_struct(): w_datashape = w_ffitype.w_datashape - assert isinstance(w_datashape, W_Structure) - uintval = self.get_struct(w_datashape) # this is the ptr to the struct - return w_datashape.fromaddress(space, uintval) + assert isinstance(w_datashape, W__StructDescr) + addr = self.get_struct(w_datashape) # this is the ptr to the struct + return w_datashape.fromaddress(space, addr) elif w_ffitype.is_void(): voidval = self.get_void(w_ffitype) assert voidval is None From noreply at buildbot.pypy.org Tue May 15 14:45:00 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 15 May 2012 14:45:00 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: resolve the ffistruct memory leak by delegating its ownership to an object which is outside the cycle Message-ID: <20120515124500.7829882256@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55100:959395d9c0f9 Date: 
2012-05-15 12:09 +0200 http://bitbucket.org/pypy/pypy/changeset/959395d9c0f9/ Log: resolve the ffistruct memory leak by delegating its ownership to an object which is outside the cycle diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -39,6 +39,21 @@ # ============================================================================== +class FFIStructOwner(object): + """ + The only job of this class is to stay outside of the reference cycle + W__StructDescr -> W_FFIType -> W__StructDescr and free the ffistruct + """ + + def __init__(self, ffistruct): + self.ffistruct = ffistruct + + @must_be_light_finalizer + def __del__(self): + if self.ffistruct: + lltype.free(self.ffistruct, flavor='raw') + + class W__StructDescr(Wrappable): def __init__(self, space, name): @@ -47,6 +62,7 @@ w_datashape=self) self.fields_w = None self.name2w_field = {} + self._ffistruct_owner = None def define_fields(self, space, w_fields): if self.fields_w is not None: @@ -63,8 +79,9 @@ for w_field in fields_w: field_types.append(w_field.w_ffitype.get_ffitype()) self.name2w_field[w_field.name] = w_field - self.ffistruct = clibffi.make_struct_ffitype_e(size, alignment, field_types) - self.w_ffitype.set_ffitype(self.ffistruct.ffistruct) + ffistruct = clibffi.make_struct_ffitype_e(size, alignment, field_types) + self.w_ffitype.set_ffitype(ffistruct.ffistruct) + self._ffistruct_owner = FFIStructOwner(ffistruct) def check_complete(self): if self.fields_w is None: @@ -90,10 +107,6 @@ return w_field.w_ffitype, w_field.offset - @must_be_light_finalizer - def __del__(self): - if self.ffistruct: - lltype.free(self.ffistruct, flavor='raw') @unwrap_spec(name=str) From noreply at buildbot.pypy.org Tue May 15 14:45:01 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 15 May 2012 14:45:01 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: fix a typo, and initialize one more field for 
this dummy type Message-ID: <20120515124501.B232782369@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55101:334f1c6af541 Date: 2012-05-15 12:17 +0200 http://bitbucket.org/pypy/pypy/changeset/334f1c6af541/ Log: fix a typo, and initialize one more field for this dummy type diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -83,18 +83,18 @@ self.w_ffitype.set_ffitype(ffistruct.ffistruct) self._ffistruct_owner = FFIStructOwner(ffistruct) - def check_complete(self): + def check_complete(self, space): if self.fields_w is None: raise operationerrfmt(space.w_ValueError, "%s has an incomplete type", self.w_ffitype.name) def allocate(self, space): - self.check_complete() + self.check_complete(space) return W__StructInstance(self) @unwrap_spec(addr=int) def fromaddress(self, space, addr): - self.check_complete() + self.check_complete(space) rawmem = rffi.cast(rffi.VOIDP, addr) return W__StructInstance(self, allocate=False, autofree=True, rawmem=rawmem) diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -60,6 +60,7 @@ dummy_type = lltype.malloc(clibffi.FFI_TYPE_P.TO, flavor='raw') dummy_type.c_size = r_uint(123) dummy_type.c_alignment = rffi.cast(rffi.USHORT, 0) + dummy_type.c_type = rffi.cast(rffi.USHORT, 0) cls.w_dummy_type = W_FFIType('dummy', dummy_type) def test__StructDescr(self): From noreply at buildbot.pypy.org Tue May 15 14:45:02 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 15 May 2012 14:45:02 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: don't track the allocation of ffistruct, on cpython it might go to gc.garbage Message-ID: <20120515124502.E8AFE82B36@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55102:b88f00a63c19 Date: 2012-05-15 
12:22 +0200 http://bitbucket.org/pypy/pypy/changeset/b88f00a63c19/ Log: don't track the allocation of ffistruct, on cpython it might go to gc.garbage diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -51,7 +51,7 @@ @must_be_light_finalizer def __del__(self): if self.ffistruct: - lltype.free(self.ffistruct, flavor='raw') + lltype.free(self.ffistruct, flavor='raw', track_allocation=True) class W__StructDescr(Wrappable): @@ -79,7 +79,12 @@ for w_field in fields_w: field_types.append(w_field.w_ffitype.get_ffitype()) self.name2w_field[w_field.name] = w_field - ffistruct = clibffi.make_struct_ffitype_e(size, alignment, field_types) + # + # on CPython, the FFIStructOwner might go into gc.garbage and thus the + # __del__ never be called. Thus, we don't track the allocation of the + # malloc done inside this function, else the leakfinder might complain + ffistruct = clibffi.make_struct_ffitype_e(size, alignment, field_types, + track_allocation=False) self.w_ffitype.set_ffitype(ffistruct.ffistruct) self._ffistruct_owner = FFIStructOwner(ffistruct) diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -347,11 +347,12 @@ ('ffistruct', FFI_TYPE_P.TO), ('members', lltype.Array(FFI_TYPE_P)))) -def make_struct_ffitype_e(size, aligment, field_types): +def make_struct_ffitype_e(size, aligment, field_types, track_allocation=True): """Compute the type of a structure. Returns a FFI_STRUCT_P out of which the 'ffistruct' member is a regular FFI_TYPE. 
""" - tpe = lltype.malloc(FFI_STRUCT_P.TO, len(field_types)+1, flavor='raw') + tpe = lltype.malloc(FFI_STRUCT_P.TO, len(field_types)+1, flavor='raw', + track_allocation=track_allocation) tpe.ffistruct.c_type = rffi.cast(rffi.USHORT, FFI_TYPE_STRUCT) tpe.ffistruct.c_size = rffi.cast(rffi.SIZE_T, size) tpe.ffistruct.c_alignment = rffi.cast(rffi.USHORT, aligment) From noreply at buildbot.pypy.org Tue May 15 15:51:55 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 15 May 2012 15:51:55 +0200 (CEST) Subject: [pypy-commit] pypy default: mark this branch as 'uninteresting' Message-ID: <20120515135155.4B75982253@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r55104:7f954ee4f39b Date: 2012-05-15 15:51 +0200 http://bitbucket.org/pypy/pypy/changeset/7f954ee4f39b/ Log: mark this branch as 'uninteresting' diff --git a/pypy/doc/whatsnew-1.9.rst b/pypy/doc/whatsnew-1.9.rst --- a/pypy/doc/whatsnew-1.9.rst +++ b/pypy/doc/whatsnew-1.9.rst @@ -36,7 +36,6 @@ .. branch: pytest .. branch: revive-dlltool .. branch: safe-getargs-freelist -.. branch: sanitize-finally-stack .. branch: set-strategies .. branch: speedup-list-comprehension .. branch: stdlib-unification @@ -47,3 +46,7 @@ .. branch: win32-cleanup_2 .. branch: win64-stage1 .. branch: zlib-mem-pressure + + +.. "uninteresting" branches that we should just ignore for the whatsnew: +.. 
branch: sanitize-finally-stack From noreply at buildbot.pypy.org Tue May 15 17:00:52 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 15 May 2012 17:00:52 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: add support for reading nested structures Message-ID: <20120515150052.1868982253@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55105:691b846e85b6 Date: 2012-05-15 16:24 +0200 http://bitbucket.org/pypy/pypy/changeset/691b846e85b6/ Log: add support for reading nested structures diff --git a/pypy/module/_ffi/interp_funcptr.py b/pypy/module/_ffi/interp_funcptr.py --- a/pypy/module/_ffi/interp_funcptr.py +++ b/pypy/module/_ffi/interp_funcptr.py @@ -203,8 +203,9 @@ def get_singlefloat(self, w_ffitype): return self.func.call(self.argchain, rffi.FLOAT) - def get_struct(self, w_datashape): - return self.func.call(self.argchain, rffi.LONG, is_struct=True) + def get_struct(self, w_ffitype, w_datashape): + addr = self.func.call(self.argchain, rffi.LONG, is_struct=True) + return w_datashape.fromaddress(self.space, addr) def get_void(self, w_ffitype): return self.func.call(self.argchain, lltype.Void) diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -245,8 +245,13 @@ return libffi.struct_getfield_singlefloat(w_ffitype.get_ffitype(), self.rawmem, self.offset) - ## def get_struct(self, w_datashape): - ## ... + def get_struct(self, w_ffitype, w_datashape): + assert isinstance(w_datashape, W__StructDescr) + innermem = rffi.ptradd(self.rawmem, self.offset) + # we return a reference to the inner struct, not a copy + # autofree=False because it's still owned by the parent struct + return W__StructInstance(w_datashape, allocate=False, autofree=False, + rawmem=innermem) ## def get_void(self, w_ffitype): ## ... 
diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -264,7 +264,6 @@ assert types.Pointer(descr.ffitype) is foo_p def test_nested_structure(self): - skip('in-progress') from _ffi import _StructDescr, Field, types longsize = types.slong.sizeof() foo_fields = [ diff --git a/pypy/module/_ffi/type_converter.py b/pypy/module/_ffi/type_converter.py --- a/pypy/module/_ffi/type_converter.py +++ b/pypy/module/_ffi/type_converter.py @@ -224,8 +224,7 @@ elif w_ffitype.is_struct(): w_datashape = w_ffitype.w_datashape assert isinstance(w_datashape, W__StructDescr) - addr = self.get_struct(w_datashape) # this is the ptr to the struct - return w_datashape.fromaddress(space, addr) + return self.get_struct(w_ffitype, w_datashape) elif w_ffitype.is_void(): voidval = self.get_void(w_ffitype) assert voidval is None @@ -325,7 +324,7 @@ """ self.error(w_ffitype) - def get_struct(self, w_datashape): + def get_struct(self, w_ffitype, w_datashape): """ Return type: lltype.Unsigned (the address of the structure) From noreply at buildbot.pypy.org Tue May 15 17:00:53 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 15 May 2012 17:00:53 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: s/w_datashape/w_structdescr Message-ID: <20120515150053.9619782253@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55106:d0ad95d6f4d8 Date: 2012-05-15 16:26 +0200 http://bitbucket.org/pypy/pypy/changeset/d0ad95d6f4d8/ Log: s/w_datashape/w_structdescr diff --git a/pypy/module/_ffi/interp_ffitype.py b/pypy/module/_ffi/interp_ffitype.py --- a/pypy/module/_ffi/interp_ffitype.py +++ b/pypy/module/_ffi/interp_ffitype.py @@ -8,12 +8,12 @@ class W_FFIType(Wrappable): - _immutable_fields_ = ['name', 'w_datashape', 'w_pointer_to'] + _immutable_fields_ = ['name', 'w_structdescr', 'w_pointer_to'] - def __init__(self, name, ffitype, w_datashape=None, 
w_pointer_to=None): + def __init__(self, name, ffitype, w_structdescr=None, w_pointer_to=None): self.name = name self._ffitype = clibffi.FFI_TYPE_NULL - self.w_datashape = w_datashape + self.w_structdescr = w_structdescr self.w_pointer_to = w_pointer_to self.set_ffitype(ffitype) @@ -28,7 +28,7 @@ raise ValueError("The _ffitype is already set") self._ffitype = ffitype if ffitype and self.is_struct(): - assert self.w_datashape is not None + assert self.w_structdescr is not None def descr_deref_pointer(self, space): if self.w_pointer_to is None: diff --git a/pypy/module/_ffi/interp_funcptr.py b/pypy/module/_ffi/interp_funcptr.py --- a/pypy/module/_ffi/interp_funcptr.py +++ b/pypy/module/_ffi/interp_funcptr.py @@ -203,9 +203,9 @@ def get_singlefloat(self, w_ffitype): return self.func.call(self.argchain, rffi.FLOAT) - def get_struct(self, w_ffitype, w_datashape): + def get_struct(self, w_ffitype, w_structdescr): addr = self.func.call(self.argchain, rffi.LONG, is_struct=True) - return w_datashape.fromaddress(self.space, addr) + return w_structdescr.fromaddress(self.space, addr) def get_void(self, w_ffitype): return self.func.call(self.argchain, lltype.Void) diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -59,7 +59,7 @@ def __init__(self, space, name): self.space = space self.w_ffitype = W_FFIType('struct %s' % name, clibffi.FFI_TYPE_NULL, - w_datashape=self) + w_structdescr=self) self.fields_w = None self.name2w_field = {} self._ffistruct_owner = None @@ -245,12 +245,12 @@ return libffi.struct_getfield_singlefloat(w_ffitype.get_ffitype(), self.rawmem, self.offset) - def get_struct(self, w_ffitype, w_datashape): - assert isinstance(w_datashape, W__StructDescr) + def get_struct(self, w_ffitype, w_structdescr): + assert isinstance(w_structdescr, W__StructDescr) innermem = rffi.ptradd(self.rawmem, self.offset) # we return a reference to the inner struct, not 
a copy # autofree=False because it's still owned by the parent struct - return W__StructInstance(w_datashape, allocate=False, autofree=False, + return W__StructInstance(w_structdescr, allocate=False, autofree=False, rawmem=innermem) ## def get_void(self, w_ffitype): diff --git a/pypy/module/_ffi/type_converter.py b/pypy/module/_ffi/type_converter.py --- a/pypy/module/_ffi/type_converter.py +++ b/pypy/module/_ffi/type_converter.py @@ -222,9 +222,9 @@ elif w_ffitype.is_singlefloat(): return self._singlefloat(w_ffitype) elif w_ffitype.is_struct(): - w_datashape = w_ffitype.w_datashape - assert isinstance(w_datashape, W__StructDescr) - return self.get_struct(w_ffitype, w_datashape) + w_structdescr = w_ffitype.w_structdescr + assert isinstance(w_structdescr, W__StructDescr) + return self.get_struct(w_ffitype, w_structdescr) elif w_ffitype.is_void(): voidval = self.get_void(w_ffitype) assert voidval is None @@ -324,7 +324,7 @@ """ self.error(w_ffitype) - def get_struct(self, w_ffitype, w_datashape): + def get_struct(self, w_ffitype, w_structdescr): """ Return type: lltype.Unsigned (the address of the structure) From noreply at buildbot.pypy.org Tue May 15 17:00:54 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 15 May 2012 17:00:54 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: add support to write nested structures Message-ID: <20120515150054.CC89282253@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55107:bc08b45940f9 Date: 2012-05-15 16:59 +0200 http://bitbucket.org/pypy/pypy/changeset/bc08b45940f9/ Log: add support to write nested structures diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -292,8 +292,11 @@ libffi.struct_setfield_singlefloat(w_ffitype.get_ffitype(), self.rawmem, self.offset, singlefloatval) - ## def handle_struct(self, w_ffitype, w_structinstance): - ## ... 
+ def handle_struct(self, w_ffitype, w_structinstance): + dst = rffi.ptradd(self.rawmem, self.offset) + src = w_structinstance.rawmem + length = w_ffitype.sizeof() + rffi.c_memcpy(dst, src, length) ## def handle_char_p(self, w_ffitype, w_obj, strval): ## ... diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py --- a/pypy/module/_ffi/test/test_struct.py +++ b/pypy/module/_ffi/test/test_struct.py @@ -281,11 +281,26 @@ # struct = bar_descr.allocate() struct.setfield('x', 40) + # reading a nested structure yields a reference to it struct_foo = struct.getfield('foo') struct_foo.setfield('x', 41) struct_foo.setfield('y', 42) mem = self.read_raw_mem(struct.getaddr(), 'c_long', 3) assert mem == [40, 41, 42] + # + struct_foo2 = foo_descr.allocate() + struct_foo2.setfield('x', 141) + struct_foo2.setfield('y', 142) + # writing a nested structure copies its memory into the target + struct.setfield('foo', struct_foo2) + struct_foo2.setfield('x', 241) + struct_foo2.setfield('y', 242) + mem = self.read_raw_mem(struct.getaddr(), 'c_long', 3) + assert mem == [40, 141, 142] + mem = self.read_raw_mem(struct_foo2.getaddr(), 'c_long', 2) + assert mem == [241, 242] + + def test_compute_shape(self): from _ffi import Structure, Field, types From noreply at buildbot.pypy.org Tue May 15 17:00:56 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Tue, 15 May 2012 17:00:56 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: move this comment to the proper place Message-ID: <20120515150056.0DEC982253@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55108:2088dfbea64c Date: 2012-05-15 17:00 +0200 http://bitbucket.org/pypy/pypy/changeset/2088dfbea64c/ Log: move this comment to the proper place diff --git a/pypy/module/_ffi/interp_funcptr.py b/pypy/module/_ffi/interp_funcptr.py --- a/pypy/module/_ffi/interp_funcptr.py +++ b/pypy/module/_ffi/interp_funcptr.py @@ -125,6 +125,7 @@ self.argchain.arg(singlefloatval) def 
handle_struct(self, w_ffitype, w_structinstance): + # arg_raw directly takes value to put inside ll_args ptrval = w_structinstance.rawmem self.argchain.arg_raw(ptrval) From noreply at buildbot.pypy.org Tue May 15 22:55:11 2012 From: noreply at buildbot.pypy.org (mattip) Date: Tue, 15 May 2012 22:55:11 +0200 (CEST) Subject: [pypy-commit] pypy default: make sure CTRL_* signals exist in windows, they were masked by SIG_DFL and SIG_IGN Message-ID: <20120515205511.1E79282253@wyvern.cs.uni-duesseldorf.de> Author: Matti Picus Branch: Changeset: r55109:edf33ec590d2 Date: 2012-05-15 23:53 +0300 http://bitbucket.org/pypy/pypy/changeset/edf33ec590d2/ Log: make sure CTRL_* signals exist in windows, they were masked by SIG_DFL and SIG_IGN diff --git a/pypy/module/signal/interp_signal.py b/pypy/module/signal/interp_signal.py --- a/pypy/module/signal/interp_signal.py +++ b/pypy/module/signal/interp_signal.py @@ -16,7 +16,8 @@ def setup(): for key, value in cpy_signal.__dict__.items(): if (key.startswith('SIG') or key.startswith('CTRL_')) and \ - is_valid_int(value): + is_valid_int(value) and \ + key != 'SIG_DFL' and key != 'SIG_IGN': globals()[key] = value yield key diff --git a/pypy/module/signal/test/test_signal.py b/pypy/module/signal/test/test_signal.py --- a/pypy/module/signal/test/test_signal.py +++ b/pypy/module/signal/test/test_signal.py @@ -43,7 +43,11 @@ cls.w_signal = space.appexec([], "(): import signal; return signal") def test_exported_names(self): + import os self.signal.__dict__ # crashes if the interpleveldefs are invalid + if os.name == 'nt': + self.signal.CTRL_BREAK_EVENT + self.signal.CTRL_C_EVENT def test_basics(self): import types, os From noreply at buildbot.pypy.org Tue May 15 23:08:49 2012 From: noreply at buildbot.pypy.org (mattip) Date: Tue, 15 May 2012 23:08:49 +0200 (CEST) Subject: [pypy-commit] pypy default: document some branches Message-ID: <20120515210849.55B2782253@wyvern.cs.uni-duesseldorf.de> Author: Matti Picus Branch: Changeset:
r55110:c8508951c824 Date: 2012-05-16 00:08 +0300 http://bitbucket.org/pypy/pypy/changeset/c8508951c824/ Log: document some branches diff --git a/pypy/doc/whatsnew-1.9.rst b/pypy/doc/whatsnew-1.9.rst --- a/pypy/doc/whatsnew-1.9.rst +++ b/pypy/doc/whatsnew-1.9.rst @@ -17,7 +17,9 @@ .. branch: kqueue .. branch: kwargsdict-strategy .. branch: matrixmath-dot +numpypy can now handle matrix multiplication. .. branch: merge-2.7.2 +The stdlib was updated to version 2.7.2 .. branch: ndmin .. branch: newindex .. branch: non-null-threadstate @@ -31,6 +33,7 @@ .. branch: numpy-ufuncs3 .. branch: numpypy-issue1137 .. branch: numpypy-out +The "out" argument was added to most of the numpypy functions. .. branch: numpypy-shape-bug .. branch: numpypy-ufuncs .. branch: pytest @@ -44,6 +47,9 @@ .. branch: win32-cleanup .. branch: win32-cleanup2 .. branch: win32-cleanup_2 +Many bugs were corrected for windows 32 bit. New functionality was added to +test validity of file descriptors, leading to the removal of the global +_invalid_parameter_handler .. branch: win64-stage1 .. branch: zlib-mem-pressure From noreply at buildbot.pypy.org Tue May 15 23:36:19 2012 From: noreply at buildbot.pypy.org (amauryfa) Date: Tue, 15 May 2012 23:36:19 +0200 (CEST) Subject: [pypy-commit] pypy default: My comments for closed branches Message-ID: <20120515213619.4516F82253@wyvern.cs.uni-duesseldorf.de> Author: Amaury Forgeot d'Arc Branch: Changeset: r55111:3dc1f9111e1e Date: 2012-05-15 22:35 +0100 http://bitbucket.org/pypy/pypy/changeset/3dc1f9111e1e/ Log: My comments for closed branches diff --git a/pypy/doc/whatsnew-1.9.rst b/pypy/doc/whatsnew-1.9.rst --- a/pypy/doc/whatsnew-1.9.rst +++ b/pypy/doc/whatsnew-1.9.rst @@ -23,6 +23,8 @@ .. branch: ndmin .. branch: newindex .. branch: non-null-threadstate +cpyext: Better support for PyEval_SaveThread and other PyThreadState_* +functions. .. branch: numppy-flatitter .. branch: numpy-back-to-applevel .. branch: numpy-concatenate @@ -37,13 +39,17 @@ ..
branch: numpypy-shape-bug .. branch: numpypy-ufuncs .. branch: pytest -.. branch: revive-dlltool .. branch: safe-getargs-freelist .. branch: set-strategies .. branch: speedup-list-comprehension .. branch: stdlib-unification +The directory "lib-python/modified-2.7" has been removed, and its +content merged into "lib-python/2.7". .. branch: step-one-xrange .. branch: string-NUL +PyPy refuses filenames with chr(0) characters. This is implemented in +RPython which can enforce no-NUL correctness and propagation, similar +to const-correctness in C++. .. branch: win32-cleanup .. branch: win32-cleanup2 .. branch: win32-cleanup_2 @@ -56,3 +62,4 @@ .. "uninteresting" branches that we should just ignore for the whatsnew: .. branch: sanitize-finally-stack +.. branch: revive-dlltool (preliminary work for sepcomp) From noreply at buildbot.pypy.org Tue May 15 23:39:26 2012 From: noreply at buildbot.pypy.org (alex_gaynor) Date: Tue, 15 May 2012 23:39:26 +0200 (CEST) Subject: [pypy-commit] pypy default: document my branches Message-ID: <20120515213926.0548C82253@wyvern.cs.uni-duesseldorf.de> Author: Alex Gaynor Branch: Changeset: r55112:18e6d6d81497 Date: 2012-05-15 17:39 -0400 http://bitbucket.org/pypy/pypy/changeset/18e6d6d81497/ Log: document my branches diff --git a/pypy/doc/whatsnew-1.9.rst b/pypy/doc/whatsnew-1.9.rst --- a/pypy/doc/whatsnew-1.9.rst +++ b/pypy/doc/whatsnew-1.9.rst @@ -11,10 +11,14 @@ .. branch: faster-heapcache .. branch: faster-str-decode-escape .. branch: float-bytes +Added some primitives for dealing with floats as raw bytes. .. branch: float-bytes-2 +Added more float byte primitives. .. branch: jit-frame-counter +Put more debug info into resops. .. branch: kill-geninterp .. branch: kqueue +Finished select.kqueue. .. branch: kwargsdict-strategy .. branch: matrixmath-dot numpypy can now handle matrix multiplication.
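Among the branches documented in the whatsnew entries above, float-bytes adds interpreter-level primitives for dealing with floats as raw bytes. At application level, the core idea — a lossless round-trip between a double and its eight IEEE-754 bytes — can be sketched with the stdlib struct module; this is only an illustration, the branch's actual primitives live in the RPython toolchain:

```python
import struct

# pack a double into its 8 raw little-endian IEEE-754 bytes and back
raw = struct.pack('<d', 2.5)
assert raw == b'\x00\x00\x00\x00\x00\x00\x04@'
value, = struct.unpack('<d', raw)
assert value == 2.5
```

The '<d' format fixes both the width (8 bytes) and the byte order, so the round-trip is exact for any finite double.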
From noreply at buildbot.pypy.org Wed May 16 08:23:30 2012 From: noreply at buildbot.pypy.org (hakanardo) Date: Wed, 16 May 2012 08:23:30 +0200 (CEST) Subject: [pypy-commit] pypy default: Explain the step-one-xrange branch Message-ID: <20120516062330.4DE0D82253@wyvern.cs.uni-duesseldorf.de> Author: Hakan Ardo Branch: Changeset: r55113:c541bb30bb6b Date: 2012-05-16 08:22 +0200 http://bitbucket.org/pypy/pypy/changeset/c541bb30bb6b/ Log: Explain the step-one-xrange branch diff --git a/pypy/doc/whatsnew-1.9.rst b/pypy/doc/whatsnew-1.9.rst --- a/pypy/doc/whatsnew-1.9.rst +++ b/pypy/doc/whatsnew-1.9.rst @@ -50,6 +50,9 @@ The directory "lib-python/modified-2.7" has been removed, and its content merged into "lib-python/2.7". .. branch: step-one-xrange +The common case of an xrange iterator with no step argument specified +was somewhat optimized. The tightest loop involving it, +sum(xrange(n)), is now 18% faster on average. .. branch: string-NUL PyPy refuses filenames with chr(0) characters. This is implemented in RPython which can enforce no-NUL correctness and propagation, similar to const-correctness in C++. From noreply at buildbot.pypy.org Wed May 16 18:14:33 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 16 May 2012 18:14:33 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: rpython fixes Message-ID: <20120516161433.7977782D59@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55114:11246454eb46 Date: 2012-05-16 13:44 +0200 http://bitbucket.org/pypy/pypy/changeset/11246454eb46/ Log: rpython fixes diff --git a/pypy/module/_ffi/type_converter.py b/pypy/module/_ffi/type_converter.py --- a/pypy/module/_ffi/type_converter.py +++ b/pypy/module/_ffi/type_converter.py @@ -86,6 +86,7 @@ unicodeval = self.space.unicode_w(w_obj) self.handle_unichar_p(w_ffitype, w_obj, unicodeval) return True + return False def convert_pointer_arg_maybe(self, w_arg, w_argtype): """ diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++
b/pypy/rlib/clibffi.py @@ -11,6 +11,7 @@ from pypy.rlib.rdynload import dlopen, dlclose, dlsym, dlsym_byordinal from pypy.rlib.rdynload import DLOpenError, DLLHANDLE from pypy.rlib import jit +from pypy.rlib.objectmodel import specialize from pypy.tool.autopath import pypydir from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.translator.platform import platform @@ -347,6 +348,7 @@ ('ffistruct', FFI_TYPE_P.TO), ('members', lltype.Array(FFI_TYPE_P)))) + at specialize.arg(3) def make_struct_ffitype_e(size, aligment, field_types, track_allocation=True): """Compute the type of a structure. Returns a FFI_STRUCT_P out of which the 'ffistruct' member is a regular FFI_TYPE. From noreply at buildbot.pypy.org Wed May 16 18:14:34 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 16 May 2012 18:14:34 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: more rpython fixes Message-ID: <20120516161434.B35B082D5A@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55115:20e0e8443d50 Date: 2012-05-16 15:26 +0200 http://bitbucket.org/pypy/pypy/changeset/20e0e8443d50/ Log: more rpython fixes diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -230,12 +230,14 @@ get_pointer = get_unsigned def get_char(self, w_ffitype): - return libffi.struct_getfield_int(w_ffitype.get_ffitype(), - self.rawmem, self.offset) + intval = libffi.struct_getfield_int(w_ffitype.get_ffitype(), + self.rawmem, self.offset) + return rffi.cast(rffi.UCHAR, intval) def get_unichar(self, w_ffitype): - return libffi.struct_getfield_int(w_ffitype.get_ffitype(), - self.rawmem, self.offset) + intval = libffi.struct_getfield_int(w_ffitype.get_ffitype(), + self.rawmem, self.offset) + return rffi.cast(rffi.WCHAR_T, intval) def get_float(self, w_ffitype): return libffi.struct_getfield_float(w_ffitype.get_ffitype(), @@ -247,7 +249,8 @@ def 
get_struct(self, w_ffitype, w_structdescr): assert isinstance(w_structdescr, W__StructDescr) - innermem = rffi.ptradd(self.rawmem, self.offset) + rawmem = rffi.cast(rffi.CCHARP, self.rawmem) + innermem = rffi.cast(rffi.VOIDP, rffi.ptradd(rawmem, self.offset)) # we return a reference to the inner struct, not a copy # autofree=False because it's still owned by the parent struct return W__StructInstance(w_structdescr, allocate=False, autofree=False, @@ -293,7 +296,8 @@ self.rawmem, self.offset, singlefloatval) def handle_struct(self, w_ffitype, w_structinstance): - dst = rffi.ptradd(self.rawmem, self.offset) + rawmem = rffi.cast(rffi.CCHARP, self.rawmem) + dst = rffi.cast(rffi.VOIDP, rffi.ptradd(rawmem, self.offset)) src = w_structinstance.rawmem length = w_ffitype.sizeof() rffi.c_memcpy(dst, src, length) From noreply at buildbot.pypy.org Wed May 16 18:14:35 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Wed, 16 May 2012 18:14:35 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: kill duplicate imports, and rpython fix Message-ID: <20120516161435.EC7AE82D5B@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55116:a747c671a834 Date: 2012-05-16 18:11 +0200 http://bitbucket.org/pypy/pypy/changeset/a747c671a834/ Log: kill duplicate imports, and rpython fix diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -9,13 +9,7 @@ from pypy.rpython.annlowlevel import cast_base_ptr_to_instance from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.libffi import Func -from pypy.rlib.debug import debug_print -from pypy.rlib import libffi, clibffi -from pypy.jit.codewriter.effectinfo import EffectInfo -from pypy.jit.metainterp.resoperation import rop, ResOperation -from pypy.jit.metainterp.optimizeopt.util import 
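The handle_struct() hunk just above gives nested-structure writes copy semantics: the source structure's bytes are memcpy'd into the target buffer at the field offset, so later changes to the source are not reflected in the target. ctypes assigns structure-typed fields the same way, so the behaviour can be sketched as follows — the Foo/Bar layouts are invented for illustration, echoing the values used in test_nested_structure:

```python
import ctypes

# Hypothetical layouts for illustration only.
class Foo(ctypes.Structure):
    _fields_ = [('x', ctypes.c_long), ('y', ctypes.c_long)]

class Bar(ctypes.Structure):
    _fields_ = [('foo', Foo)]

bar = Bar()
src = Foo(141, 142)
bar.foo = src            # memcpy-like: src's bytes are copied into bar's buffer
src.x = 241              # mutating src afterwards leaves bar untouched
assert (bar.foo.x, bar.foo.y) == (141, 142)
assert src.x == 241
```

Reads alias, writes copy: the asymmetry matches the get_struct()/handle_struct() pair in the diffs above.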
make_dispatcher_method -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.rlib.rarithmetic import intmask class FuncInfo(object): @@ -242,7 +236,7 @@ else: assert False, "unsupported ffitype or kind" # - fieldsize = ffitype.c_size + fieldsize = intmask(ffitype.c_size) return self.optimizer.cpu.fielddescrof_dynamic(offset, fieldsize, is_pointer, is_float, is_signed) From noreply at buildbot.pypy.org Wed May 16 22:38:21 2012 From: noreply at buildbot.pypy.org (mattip) Date: Wed, 16 May 2012 22:38:21 +0200 (CEST) Subject: [pypy-commit] pypy default: restore missing signals on windows Message-ID: <20120516203821.C28A482D59@wyvern.cs.uni-duesseldorf.de> Author: Matti Picus Branch: Changeset: r55117:7ad8c65b2edd Date: 2012-05-16 23:35 +0300 http://bitbucket.org/pypy/pypy/changeset/7ad8c65b2edd/ Log: restore missing signals on windows diff --git a/pypy/module/signal/interp_signal.py b/pypy/module/signal/interp_signal.py --- a/pypy/module/signal/interp_signal.py +++ b/pypy/module/signal/interp_signal.py @@ -25,11 +25,18 @@ SIG_DFL = cpy_signal.SIG_DFL SIG_IGN = cpy_signal.SIG_IGN signal_names = list(setup()) -signal_values = [globals()[key] for key in signal_names] signal_values = {} for key in signal_names: signal_values[globals()[key]] = None - +if sys.platform == 'win32' and not hasattr(cpy_signal,'CTRL_C_EVENT'): + # XXX Hack to revive values that went missing, + # Remove this once we are sure the host cpy module has them. 
+ signal_values[0] = None + signal_values[1] = None + signal_names.append('CTRL_C_EVENT') + signal_names.append('CTRL_BREAK_EVENT') + CTRL_C_EVENT = 0 + CTRL_BREAK_EVENT = 1 includes = ['stdlib.h', 'src/signals.h'] if sys.platform != 'win32': includes.append('sys/time.h') diff --git a/pypy/module/signal/test/test_signal.py b/pypy/module/signal/test/test_signal.py --- a/pypy/module/signal/test/test_signal.py +++ b/pypy/module/signal/test/test_signal.py @@ -46,8 +46,8 @@ import os self.signal.__dict__ # crashes if the interpleveldefs are invalid if os.name == 'nt': - self.signal.CTRL_BREAK_EVENT - self.signal.CTRL_C_EVENT + assert self.signal.CTRL_BREAK_EVENT == 1 + assert self.signal.CTRL_C_EVENT == 0 def test_basics(self): import types, os From noreply at buildbot.pypy.org Thu May 17 09:26:48 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 17 May 2012 09:26:48 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: kill outdated comment Message-ID: <20120517072648.1B43A82D45@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55118:1789dd23d999 Date: 2012-05-16 10:53 +0200 http://bitbucket.org/pypy/pypy/changeset/1789dd23d999/ Log: kill outdated comment diff --git a/pypy/module/_ffi/type_converter.py b/pypy/module/_ffi/type_converter.py --- a/pypy/module/_ffi/type_converter.py +++ b/pypy/module/_ffi/type_converter.py @@ -49,7 +49,6 @@ elif w_ffitype.is_singlefloat(): self._singlefloat(w_ffitype, w_obj) elif w_ffitype.is_struct(): - # arg_raw directly takes value to put inside ll_args w_obj = space.interp_w(W__StructInstance, w_obj) self.handle_struct(w_ffitype, w_obj) else: From noreply at buildbot.pypy.org Thu May 17 09:26:49 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 17 May 2012 09:26:49 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: temporarily re-add support for _rawffi structures. This way, it should be possible to merge the branch without breaking ctypes.
In the long term, the plan is to implement ctypes.Structure on top of _ffi._StructDescr Message-ID: <20120517072649.9DFBB82D46@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55119:15f1a0ac245f Date: 2012-05-17 09:24 +0200 http://bitbucket.org/pypy/pypy/changeset/15f1a0ac245f/ Log: temporarily re-add support for _rawffi structures. This way, it should be possible to merge the branch without breaking ctypes. In the long term, the plan is to implement ctypes.Structure on top of _ffi._StructDescr diff --git a/pypy/module/_ffi/interp_funcptr.py b/pypy/module/_ffi/interp_funcptr.py --- a/pypy/module/_ffi/interp_funcptr.py +++ b/pypy/module/_ffi/interp_funcptr.py @@ -129,6 +129,11 @@ ptrval = w_structinstance.rawmem self.argchain.arg_raw(ptrval) + def handle_struct_rawffi(self, w_ffitype, w_structinstance): + # arg_raw directly takes value to put inside ll_args + ptrval = w_structinstance.ll_buffer + self.argchain.arg_raw(ptrval) + class CallFunctionConverter(ToAppLevelConverter): """ @@ -208,6 +213,10 @@ addr = self.func.call(self.argchain, rffi.LONG, is_struct=True) return w_structdescr.fromaddress(self.space, addr) + def get_struct_rawffi(self, w_ffitype, w_structdescr): + uintval = self.func.call(self.argchain, rffi.ULONG, is_struct=True) + return w_structdescr.fromaddress(self.space, uintval) + def get_void(self, w_ffitype): return self.func.call(self.argchain, lltype.Void) diff --git a/pypy/module/_ffi/test/test_funcptr.py b/pypy/module/_ffi/test/test_funcptr.py --- a/pypy/module/_ffi/test/test_funcptr.py +++ b/pypy/module/_ffi/test/test_funcptr.py @@ -488,6 +488,47 @@ assert p.getfield('x') == 12 assert p.getfield('y') == 34 + # XXX: support for _rawffi structures should be killed as soon as we + # implement ctypes.Structure on top of _ffi.
In the meantime, we support + # both + def test_byval_argument__rawffi(self): + """ + // defined above + struct Point; + DLLEXPORT long sum_point(struct Point p); + """ + import _rawffi + from _ffi import CDLL, types + POINT = _rawffi.Structure([('x', 'l'), ('y', 'l')]) + ffi_point = POINT.get_ffi_type() + libfoo = CDLL(self.libfoo_name) + sum_point = libfoo.getfunc('sum_point', [ffi_point], types.slong) + # + p = POINT() + p.x = 30 + p.y = 12 + res = sum_point(p) + assert res == 42 + p.free() + + def test_byval_result__rawffi(self): + """ + // defined above + DLLEXPORT struct Point make_point(long x, long y); + """ + import _rawffi + from _ffi import CDLL, types + POINT = _rawffi.Structure([('x', 'l'), ('y', 'l')]) + ffi_point = POINT.get_ffi_type() + libfoo = CDLL(self.libfoo_name) + make_point = libfoo.getfunc('make_point', [types.slong, types.slong], ffi_point) + # + p = make_point(12, 34) + assert p.x == 12 + assert p.y == 34 + p.free() + + def test_TypeError_numargs(self): from _ffi import CDLL, types libfoo = CDLL(self.libfoo_name) diff --git a/pypy/module/_ffi/type_converter.py b/pypy/module/_ffi/type_converter.py --- a/pypy/module/_ffi/type_converter.py +++ b/pypy/module/_ffi/type_converter.py @@ -3,6 +3,7 @@ from pypy.rlib.rarithmetic import intmask, r_uint from pypy.rpython.lltypesystem import rffi from pypy.interpreter.error import operationerrfmt +from pypy.module._rawffi.structure import W_StructureInstance, W_Structure from pypy.module._ffi.interp_ffitype import app_types class FromAppLevelConverter(object): @@ -49,8 +50,11 @@ elif w_ffitype.is_singlefloat(): self._singlefloat(w_ffitype, w_obj) elif w_ffitype.is_struct(): - w_obj = space.interp_w(W__StructInstance, w_obj) - self.handle_struct(w_ffitype, w_obj) + if isinstance(w_obj, W_StructureInstance): + self.handle_struct_rawffi(w_ffitype, w_obj) + else: + w_obj = space.interp_w(W__StructInstance, w_obj) + self.handle_struct(w_ffitype, w_obj) else: self.error(w_ffitype, w_obj) @@ -168,6 +172,14 @@ 
""" self.error(w_ffitype, w_structinstance) + def handle_struct_rawffi(self, w_ffitype, w_structinstance): + """ + This method should be killed as soon as we remove support for _rawffi structures + + w_structinstance: W_StructureInstance + """ + self.error(w_ffitype, w_structinstance) + class ToAppLevelConverter(object): @@ -222,8 +234,12 @@ return self._singlefloat(w_ffitype) elif w_ffitype.is_struct(): w_structdescr = w_ffitype.w_structdescr - assert isinstance(w_structdescr, W__StructDescr) - return self.get_struct(w_ffitype, w_structdescr) + if isinstance(w_structdescr, W__StructDescr): + return self.get_struct(w_ffitype, w_structdescr) + elif isinstance(w_structdescr, W_Structure): + return self.get_struct_rawffi(w_ffitype, w_structdescr) + else: + raise OperationError(self.space.w_TypeError, "Unsupported struct shape") elif w_ffitype.is_void(): voidval = self.get_void(w_ffitype) assert voidval is None @@ -325,6 +341,15 @@ def get_struct(self, w_ffitype, w_structdescr): """ + Return type: lltype.Signed + (the address of the structure) + """ + self.error(w_ffitype) + + def get_struct_rawffi(self, w_ffitype, w_structdescr): + """ + This should be killed as soon as we kill support for _rawffi structures + Return type: lltype.Unsigned (the address of the structure) """ From noreply at buildbot.pypy.org Thu May 17 09:26:51 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 17 May 2012 09:26:51 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: merge heads Message-ID: <20120517072651.236CB82D45@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55120:cd0a99a4c058 Date: 2012-05-17 09:26 +0200 http://bitbucket.org/pypy/pypy/changeset/cd0a99a4c058/ Log: merge heads diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -9,13 +9,7 @@ from pypy.rpython.annlowlevel import 
cast_base_ptr_to_instance from pypy.rpython.lltypesystem import lltype, llmemory, rffi from pypy.rlib.objectmodel import we_are_translated -from pypy.rlib.libffi import Func -from pypy.rlib.debug import debug_print -from pypy.rlib import libffi, clibffi -from pypy.jit.codewriter.effectinfo import EffectInfo -from pypy.jit.metainterp.resoperation import rop, ResOperation -from pypy.jit.metainterp.optimizeopt.util import make_dispatcher_method -from pypy.jit.metainterp.optimizeopt.optimizer import Optimization +from pypy.rlib.rarithmetic import intmask class FuncInfo(object): @@ -242,7 +236,7 @@ else: assert False, "unsupported ffitype or kind" # - fieldsize = ffitype.c_size + fieldsize = intmask(ffitype.c_size) return self.optimizer.cpu.fielddescrof_dynamic(offset, fieldsize, is_pointer, is_float, is_signed) diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py --- a/pypy/module/_ffi/interp_struct.py +++ b/pypy/module/_ffi/interp_struct.py @@ -230,12 +230,14 @@ get_pointer = get_unsigned def get_char(self, w_ffitype): - return libffi.struct_getfield_int(w_ffitype.get_ffitype(), - self.rawmem, self.offset) + intval = libffi.struct_getfield_int(w_ffitype.get_ffitype(), + self.rawmem, self.offset) + return rffi.cast(rffi.UCHAR, intval) def get_unichar(self, w_ffitype): - return libffi.struct_getfield_int(w_ffitype.get_ffitype(), - self.rawmem, self.offset) + intval = libffi.struct_getfield_int(w_ffitype.get_ffitype(), + self.rawmem, self.offset) + return rffi.cast(rffi.WCHAR_T, intval) def get_float(self, w_ffitype): return libffi.struct_getfield_float(w_ffitype.get_ffitype(), @@ -247,7 +249,8 @@ def get_struct(self, w_ffitype, w_structdescr): assert isinstance(w_structdescr, W__StructDescr) - innermem = rffi.ptradd(self.rawmem, self.offset) + rawmem = rffi.cast(rffi.CCHARP, self.rawmem) + innermem = rffi.cast(rffi.VOIDP, rffi.ptradd(rawmem, self.offset)) # we return a reference to the inner struct, not a copy # autofree=False because it's 
still owned by the parent struct return W__StructInstance(w_structdescr, allocate=False, autofree=False, @@ -293,7 +296,8 @@ self.rawmem, self.offset, singlefloatval) def handle_struct(self, w_ffitype, w_structinstance): - dst = rffi.ptradd(self.rawmem, self.offset) + rawmem = rffi.cast(rffi.CCHARP, self.rawmem) + dst = rffi.cast(rffi.VOIDP, rffi.ptradd(rawmem, self.offset)) src = w_structinstance.rawmem length = w_ffitype.sizeof() rffi.c_memcpy(dst, src, length) diff --git a/pypy/module/_ffi/type_converter.py b/pypy/module/_ffi/type_converter.py --- a/pypy/module/_ffi/type_converter.py +++ b/pypy/module/_ffi/type_converter.py @@ -89,6 +89,7 @@ unicodeval = self.space.unicode_w(w_obj) self.handle_unichar_p(w_ffitype, w_obj, unicodeval) return True + return False def convert_pointer_arg_maybe(self, w_arg, w_argtype): """ diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -11,6 +11,7 @@ from pypy.rlib.rdynload import dlopen, dlclose, dlsym, dlsym_byordinal from pypy.rlib.rdynload import DLOpenError, DLLHANDLE from pypy.rlib import jit +from pypy.rlib.objectmodel import specialize from pypy.tool.autopath import pypydir from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.translator.platform import platform @@ -347,6 +348,7 @@ ('ffistruct', FFI_TYPE_P.TO), ('members', lltype.Array(FFI_TYPE_P)))) + at specialize.arg(3) def make_struct_ffitype_e(size, aligment, field_types, track_allocation=True): """Compute the type of a structure. Returns a FFI_STRUCT_P out of which the 'ffistruct' member is a regular FFI_TYPE. 
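One detail worth noting in the merged fficall.py hunk: ffitype.c_size is an unsigned C value, and wrapping it in intmask() narrows it to a signed machine-sized integer that the rest of the JIT code can handle. The helper below is only a plain-Python sketch of what pypy.rlib.rarithmetic.intmask does, not the real implementation:

```python
def intmask_sketch(value, bits=64):
    # keep only the low 'bits' bits of the value ...
    value &= (1 << bits) - 1
    # ... and reinterpret the top bit as a sign bit
    if value >= 1 << (bits - 1):
        value -= 1 << bits
    return value

assert intmask_sketch(42) == 42
assert intmask_sketch(2 ** 64 - 1) == -1   # an all-ones word reads as -1
```

Small non-negative values such as a struct field size pass through unchanged; the wrap-around only matters when the unsigned value has the top bit set.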
From noreply at buildbot.pypy.org Thu May 17 10:03:23 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Thu, 17 May 2012 10:03:23 +0200 (CEST)
Subject: [pypy-commit] pypy ffistruct: bah, missing import and space.wrap()
Message-ID: <20120517080323.9302E82D45@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: ffistruct
Changeset: r55121:e4072dd3977e
Date: 2012-05-17 09:59 +0200
http://bitbucket.org/pypy/pypy/changeset/e4072dd3977e/

Log:	bah, missing import and space.wrap()

diff --git a/pypy/module/_ffi/type_converter.py b/pypy/module/_ffi/type_converter.py
--- a/pypy/module/_ffi/type_converter.py
+++ b/pypy/module/_ffi/type_converter.py
@@ -2,7 +2,7 @@
 from pypy.rlib import jit
 from pypy.rlib.rarithmetic import intmask, r_uint
 from pypy.rpython.lltypesystem import rffi
-from pypy.interpreter.error import operationerrfmt
+from pypy.interpreter.error import operationerrfmt, OperationError
 from pypy.module._rawffi.structure import W_StructureInstance, W_Structure
 from pypy.module._ffi.interp_ffitype import app_types
@@ -240,7 +240,8 @@
             elif isinstance(w_structdescr, W_Structure):
                 return self.get_struct_rawffi(w_ffitype, w_structdescr)
             else:
-                raise OperationError(self.space.w_TypeError, "Unsupported struct shape")
+                raise OperationError(self.space.w_TypeError,
+                                     self.space.wrap("Unsupported struct shape"))
         elif w_ffitype.is_void():
             voidval = self.get_void(w_ffitype)
             assert voidval is None

From noreply at buildbot.pypy.org Thu May 17 11:45:40 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Thu, 17 May 2012 11:45:40 +0200 (CEST)
Subject: [pypy-commit] pypy ffistruct: hg merge default
Message-ID: <20120517094540.2CE7B82D45@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: ffistruct
Changeset: r55122:587cb9c86612
Date: 2012-05-17 10:58 +0200
http://bitbucket.org/pypy/pypy/changeset/587cb9c86612/

Log:	hg merge default

diff --git a/pypy/doc/test/test_whatsnew.py b/pypy/doc/test/test_whatsnew.py
new file mode 100644
--- /dev/null
+++ b/pypy/doc/test/test_whatsnew.py
@@ -0,0 +1,80 @@
+import py
+import pypy
+from commands import getoutput
+ROOT = py.path.local(pypy.__file__).dirpath().dirpath()
+
+
+def parse_doc(s):
+    startrev = None
+    branches = set()
+    def parseline(line):
+        _, value = line.split(':', 1)
+        return value.strip()
+    #
+    for line in s.splitlines():
+        if line.startswith('.. startrev:'):
+            startrev = parseline(line)
+        elif line.startswith('.. branch:'):
+            branches.add(parseline(line))
+    return startrev, branches
+
+def get_merged_branches(path, startrev, endrev):
+    # X = take all the merges which are descendants of startrev and are on default
+    # revset = all the parents of X which are not on default
+    # ===>
+    # revset contains all the branches which have been merged to default since
+    # startrev
+    revset = 'parents(%s::%s and \
+                      merge() and \
+                      branch(default)) and \
+              not branch(default)' % (startrev, endrev)
+    cmd = r"hg log -R '%s' -r '%s' --template '{branches}\n'" % (path, revset)
+    out = getoutput(cmd)
+    branches = set(map(str.strip, out.splitlines()))
+    return branches
+
+
+def test_parse_doc():
+    s = """
+=====
+Title
+=====
+
+.. startrev: 12345
+
+bla bla bla bla
+
+.. branch: foobar
+
+xxx yyy zzz
+
+.. branch: hello
+
+qqq www ttt
+"""
+    startrev, branches = parse_doc(s)
+    assert startrev == '12345'
+    assert branches == set(['foobar', 'hello'])
+
+def test_get_merged_branches():
+    branches = get_merged_branches(ROOT, 'f34f0c11299f', '79770e0c2f93')
+    assert branches == set(['numpy-indexing-by-arrays-bool',
+                            'better-jit-hooks-2',
+                            'numpypy-ufuncs'])
+
+def test_whatsnew():
+    doc = ROOT.join('pypy', 'doc')
+    whatsnew_list = doc.listdir('whatsnew-*.rst')
+    whatsnew_list.sort()
+    last_whatsnew = whatsnew_list[-1].read()
+    startrev, documented = parse_doc(last_whatsnew)
+    merged = get_merged_branches(ROOT, startrev, '')
+    not_documented = merged.difference(documented)
+    not_merged = documented.difference(merged)
+    print 'Branches merged but not documented:'
+    print '\n'.join(not_documented)
+    print
+    print 'Branches documented but not merged:'
+    print '\n'.join(not_merged)
+    print
+    assert not not_documented and not not_merged
diff --git a/pypy/doc/whatsnew-1.9.rst b/pypy/doc/whatsnew-1.9.rst
new file mode 100644
--- /dev/null
+++ b/pypy/doc/whatsnew-1.9.rst
@@ -0,0 +1,72 @@
+======================
+What's new in PyPy 1.9
+======================
+
+.. this is the revision just after the creation of the release-1.8.x branch
+.. startrev: a4261375b359
+
+.. branch: array_equal
+.. branch: better-jit-hooks-2
+.. branch: exception-cannot-occur
+.. branch: faster-heapcache
+.. branch: faster-str-decode-escape
+.. branch: float-bytes
+Added some primitives for dealing with floats as raw bytes.
+.. branch: float-bytes-2
+Added more float byte primitives.
+.. branch: jit-frame-counter
+Put more debug info into resops.
+.. branch: kill-geninterp
+.. branch: kqueue
+Finished select.kqueue.
+.. branch: kwargsdict-strategy
+.. branch: matrixmath-dot
+numpypy can now handle matrix multiplication.
+.. branch: merge-2.7.2
+The stdlib was updated to version 2.7.2
+.. branch: ndmin
+.. branch: newindex
+.. branch: non-null-threadstate
+cpyext: Better support for PyEval_SaveThread and other PyThreadState_*
+functions.
+.. branch: numppy-flatitter
+.. branch: numpy-back-to-applevel
+.. branch: numpy-concatenate
+.. branch: numpy-indexing-by-arrays-bool
+.. branch: numpy-record-dtypes
+.. branch: numpy-single-jitdriver
+.. branch: numpy-ufuncs2
+.. branch: numpy-ufuncs3
+.. branch: numpypy-issue1137
+.. branch: numpypy-out
+The "out" argument was added to most of the numpypy functions.
+.. branch: numpypy-shape-bug
+.. branch: numpypy-ufuncs
+.. branch: pytest
+.. branch: safe-getargs-freelist
+.. branch: set-strategies
+.. branch: speedup-list-comprehension
+.. branch: stdlib-unification
+The directory "lib-python/modified-2.7" has been removed, and its
+content merged into "lib-python/2.7".
+.. branch: step-one-xrange
+The common case of a xrange iterator with no step argument specified
+was somewhat optimized. The tightest loop involving it,
+sum(xrange(n)), is now 18% faster on average.
+.. branch: string-NUL
+PyPy refuses filenames with chr(0) characters. This is implemented in
+RPython which can enforce no-NUL correctness and propagation, similar
+to const-correctness in C++.
+.. branch: win32-cleanup
+.. branch: win32-cleanup2
+.. branch: win32-cleanup_2
+Many bugs were corrected for windows 32 bit. New functionality was added to
+test validity of file descriptors, leading to the removal of the global
+_invalid_parameter_handler
+.. branch: win64-stage1
+.. branch: zlib-mem-pressure
+
+
+.. "uninteresting" branches that we should just ignore for the whatsnew:
+.. branch: sanitize-finally-stack
+..
branch: revive-dlltool (preliminary work for sepcomp) diff --git a/pypy/module/select/interp_kqueue.py b/pypy/module/select/interp_kqueue.py --- a/pypy/module/select/interp_kqueue.py +++ b/pypy/module/select/interp_kqueue.py @@ -3,6 +3,7 @@ from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef, generic_new_descr, GetSetProperty from pypy.rlib._rsocket_rffi import socketclose +from pypy.rlib.rarithmetic import r_uint from pypy.rpython.lltypesystem import rffi, lltype from pypy.rpython.tool import rffi_platform from pypy.translator.tool.cbuild import ExternalCompilationInfo @@ -226,9 +227,12 @@ if self.event: lltype.free(self.event, flavor="raw") - @unwrap_spec(filter=int, flags='c_uint', fflags='c_uint', data=int, udata='c_uint') - def descr__init__(self, space, w_ident, filter=KQ_FILTER_READ, flags=KQ_EV_ADD, fflags=0, data=0, udata=0): - ident = space.c_filedescriptor_w(w_ident) + @unwrap_spec(filter=int, flags='c_uint', fflags='c_uint', data=int, udata=r_uint) + def descr__init__(self, space, w_ident, filter=KQ_FILTER_READ, flags=KQ_EV_ADD, fflags=0, data=0, udata=r_uint(0)): + if space.isinstance_w(w_ident, space.w_long): + ident = space.uint_w(w_ident) + else: + ident = r_uint(space.c_filedescriptor_w(w_ident)) self.event = lltype.malloc(kevent, flavor="raw") rffi.setintfield(self.event, "c_ident", ident) @@ -320,7 +324,7 @@ return space.wrap(self.event.c_data) def descr_get_udata(self, space): - return space.wrap(rffi.cast(rffi.SIZE_T, self.event.c_udata)) + return space.wrap(rffi.cast(rffi.UINTPTR_T, self.event.c_udata)) W_Kevent.typedef = TypeDef("select.kevent", diff --git a/pypy/module/select/interp_select.py b/pypy/module/select/interp_select.py --- a/pypy/module/select/interp_select.py +++ b/pypy/module/select/interp_select.py @@ -74,6 +74,32 @@ pollmethods[methodname] = interp2app(getattr(Poll, methodname)) Poll.typedef = TypeDef('select.poll', **pollmethods) +# 
____________________________________________________________ + + +from pypy.rlib import _rsocket_rffi as _c +from pypy.rpython.lltypesystem import lltype, rffi + + +def _build_fd_set(space, list_w, ll_list, nfds): + _c.FD_ZERO(ll_list) + fdlist = [] + for w_f in list_w: + fd = space.c_filedescriptor_w(w_f) + if fd > nfds: + nfds = fd + _c.FD_SET(fd, ll_list) + fdlist.append(fd) + return fdlist, nfds +_build_fd_set._always_inline_ = True # get rid of the tuple result + +def _unbuild_fd_set(space, list_w, fdlist, ll_list, reslist_w): + for i in range(len(fdlist)): + fd = fdlist[i] + if _c.FD_ISSET(fd, ll_list): + reslist_w.append(list_w[i]) + + def select(space, w_iwtd, w_owtd, w_ewtd, w_timeout=None): """Wait until one or more file descriptors are ready for some kind of I/O. The first three arguments are sequences of file descriptors to be waited for: @@ -99,29 +125,62 @@ iwtd_w = space.listview(w_iwtd) owtd_w = space.listview(w_owtd) ewtd_w = space.listview(w_ewtd) - iwtd = [space.c_filedescriptor_w(w_f) for w_f in iwtd_w] - owtd = [space.c_filedescriptor_w(w_f) for w_f in owtd_w] - ewtd = [space.c_filedescriptor_w(w_f) for w_f in ewtd_w] - iwtd_d = {} - owtd_d = {} - ewtd_d = {} - for i in range(len(iwtd)): - iwtd_d[iwtd[i]] = iwtd_w[i] - for i in range(len(owtd)): - owtd_d[owtd[i]] = owtd_w[i] - for i in range(len(ewtd)): - ewtd_d[ewtd[i]] = ewtd_w[i] + + ll_inl = lltype.nullptr(_c.fd_set.TO) + ll_outl = lltype.nullptr(_c.fd_set.TO) + ll_errl = lltype.nullptr(_c.fd_set.TO) + ll_timeval = lltype.nullptr(_c.timeval) + try: + fdlistin = None + fdlistout = None + fdlisterr = None + nfds = -1 + if len(iwtd_w) > 0: + ll_inl = lltype.malloc(_c.fd_set.TO, flavor='raw') + fdlistin, nfds = _build_fd_set(space, iwtd_w, ll_inl, nfds) + if len(owtd_w) > 0: + ll_outl = lltype.malloc(_c.fd_set.TO, flavor='raw') + fdlistout, nfds = _build_fd_set(space, owtd_w, ll_outl, nfds) + if len(ewtd_w) > 0: + ll_errl = lltype.malloc(_c.fd_set.TO, flavor='raw') + fdlisterr, nfds = 
_build_fd_set(space, ewtd_w, ll_errl, nfds) + if space.is_w(w_timeout, space.w_None): - iwtd, owtd, ewtd = rpoll.select(iwtd, owtd, ewtd) + timeout = -1.0 else: - iwtd, owtd, ewtd = rpoll.select(iwtd, owtd, ewtd, space.float_w(w_timeout)) - except rpoll.SelectError, s: - w_errortype = space.fromcache(Cache).w_error - raise OperationError(w_errortype, space.newtuple([ - space.wrap(s.errno), space.wrap(s.get_msg())])) + timeout = space.float_w(w_timeout) + if timeout >= 0.0: + ll_timeval = rffi.make(_c.timeval) + i = int(timeout) + rffi.setintfield(ll_timeval, 'c_tv_sec', i) + rffi.setintfield(ll_timeval, 'c_tv_usec', int((timeout-i)*1000000)) - return space.newtuple([ - space.newlist([iwtd_d[i] for i in iwtd]), - space.newlist([owtd_d[i] for i in owtd]), - space.newlist([ewtd_d[i] for i in ewtd])]) + res = _c.select(nfds + 1, ll_inl, ll_outl, ll_errl, ll_timeval) + + if res < 0: + errno = _c.geterrno() + msg = _c.socket_strerror_str(errno) + w_errortype = space.fromcache(Cache).w_error + raise OperationError(w_errortype, space.newtuple([ + space.wrap(errno), space.wrap(msg)])) + + resin_w = [] + resout_w = [] + reserr_w = [] + if res > 0: + if fdlistin is not None: + _unbuild_fd_set(space, iwtd_w, fdlistin, ll_inl, resin_w) + if fdlistout is not None: + _unbuild_fd_set(space, owtd_w, fdlistout, ll_outl, resout_w) + if fdlisterr is not None: + _unbuild_fd_set(space, ewtd_w, fdlisterr, ll_errl, reserr_w) + finally: + if ll_timeval: lltype.free(ll_timeval, flavor='raw') + if ll_errl: lltype.free(ll_errl, flavor='raw') + if ll_outl: lltype.free(ll_outl, flavor='raw') + if ll_inl: lltype.free(ll_inl, flavor='raw') + + return space.newtuple([space.newlist(resin_w), + space.newlist(resout_w), + space.newlist(reserr_w)]) diff --git a/pypy/module/select/test/test_kqueue.py b/pypy/module/select/test/test_kqueue.py --- a/pypy/module/select/test/test_kqueue.py +++ b/pypy/module/select/test/test_kqueue.py @@ -100,7 +100,7 @@ client.setblocking(False) try: 
client.connect(("127.0.0.1", server_socket.getsockname()[1])) - except socket.error as e: + except socket.error, e: if 'bsd' in sys.platform: assert e.args[0] == errno.ENOENT else: diff --git a/pypy/module/signal/interp_signal.py b/pypy/module/signal/interp_signal.py --- a/pypy/module/signal/interp_signal.py +++ b/pypy/module/signal/interp_signal.py @@ -16,7 +16,8 @@ def setup(): for key, value in cpy_signal.__dict__.items(): if (key.startswith('SIG') or key.startswith('CTRL_')) and \ - is_valid_int(value): + is_valid_int(value) and \ + key != 'SIG_DFL' and key != 'SIG_IGN': globals()[key] = value yield key @@ -24,11 +25,18 @@ SIG_DFL = cpy_signal.SIG_DFL SIG_IGN = cpy_signal.SIG_IGN signal_names = list(setup()) -signal_values = [globals()[key] for key in signal_names] signal_values = {} for key in signal_names: signal_values[globals()[key]] = None - +if sys.platform == 'win32' and not hasattr(cpy_signal,'CTRL_C_EVENT'): + # XXX Hack to revive values that went missing, + # Remove this once we are sure the host cpy module has them. 
+ signal_values[0] = None + signal_values[1] = None + signal_names.append('CTRL_C_EVENT') + signal_names.append('CTRL_BREAK_EVENT') + CTRL_C_EVENT = 0 + CTRL_BREAK_EVENT = 1 includes = ['stdlib.h', 'src/signals.h'] if sys.platform != 'win32': includes.append('sys/time.h') diff --git a/pypy/module/signal/test/test_signal.py b/pypy/module/signal/test/test_signal.py --- a/pypy/module/signal/test/test_signal.py +++ b/pypy/module/signal/test/test_signal.py @@ -43,7 +43,11 @@ cls.w_signal = space.appexec([], "(): import signal; return signal") def test_exported_names(self): + import os self.signal.__dict__ # crashes if the interpleveldefs are invalid + if os.name == 'nt': + assert self.signal.CTRL_BREAK_EVENT == 1 + assert self.signal.CTRL_C_EVENT == 0 def test_basics(self): import types, os diff --git a/pypy/objspace/fake/objspace.py b/pypy/objspace/fake/objspace.py --- a/pypy/objspace/fake/objspace.py +++ b/pypy/objspace/fake/objspace.py @@ -55,6 +55,9 @@ from pypy.rlib.rbigint import rbigint return rbigint.fromint(NonConstant(42)) +class W_MyType(W_MyObject): + def __init__(self): + self.mro_w = [w_some_obj(), w_some_obj()] def w_some_obj(): if NonConstant(False): @@ -66,6 +69,9 @@ return None return w_some_obj() +def w_some_type(): + return W_MyType() + def is_root(w_obj): assert isinstance(w_obj, W_Root) is_root.expecting = W_Root @@ -220,6 +226,9 @@ assert typedef is not None return self.fromcache(TypeCache).getorbuild(typedef) + def type(self, w_obj): + return w_some_type() + def unpackiterable(self, w_iterable, expected_length=-1): is_root(w_iterable) if expected_length < 0: @@ -287,10 +296,13 @@ ObjSpace.ExceptionTable + ['int', 'str', 'float', 'long', 'tuple', 'list', 'dict', 'unicode', 'complex', 'slice', 'bool', - 'type', 'basestring', 'object']): + 'basestring', 'object']): setattr(FakeObjSpace, 'w_' + name, w_some_obj()) + FakeObjSpace.w_type = w_some_type() # for (name, _, arity, _) in ObjSpace.MethodTable: + if name == 'type': + continue args = ['w_%d' % 
i for i in range(arity)] params = args[:] d = {'is_root': is_root, diff --git a/pypy/objspace/fake/test/test_checkmodule.py b/pypy/objspace/fake/test/test_checkmodule.py --- a/pypy/objspace/fake/test/test_checkmodule.py +++ b/pypy/objspace/fake/test/test_checkmodule.py @@ -1,9 +1,9 @@ -import py + from pypy.objspace.fake.objspace import FakeObjSpace, is_root from pypy.interpreter.baseobjspace import Wrappable from pypy.interpreter.typedef import TypeDef, GetSetProperty from pypy.interpreter.gateway import interp2app, W_Root, ObjSpace - +from pypy.rpython.test.test_llinterp import interpret def make_checker(): check = [] @@ -61,3 +61,18 @@ assert not check space.translates() assert check + +def test_gettype_mro_untranslated(): + space = FakeObjSpace() + w_type = space.type(space.wrap(1)) + assert len(w_type.mro_w) == 2 + +def test_gettype_mro(): + space = FakeObjSpace() + + def f(i): + w_x = space.wrap(i) + w_type = space.type(w_x) + return len(w_type.mro_w) + + assert interpret(f, [1]) == 2 diff --git a/pypy/objspace/std/test/test_stdobjspace.py b/pypy/objspace/std/test/test_stdobjspace.py --- a/pypy/objspace/std/test/test_stdobjspace.py +++ b/pypy/objspace/std/test/test_stdobjspace.py @@ -74,3 +74,20 @@ space = gettestobjspace(withstrbuf=True) cls = space._get_interplevel_cls(space.w_str) assert cls is W_AbstractStringObject + + def test_wrap_various_unsigned_types(self): + import sys + from pypy.rpython.lltypesystem import lltype, rffi + space = self.space + value = sys.maxint * 2 + x = rffi.cast(lltype.Unsigned, value) + assert space.eq_w(space.wrap(value), space.wrap(x)) + x = rffi.cast(rffi.UINTPTR_T, value) + assert x > 0 + assert space.eq_w(space.wrap(value), space.wrap(x)) + value = 60000 + x = rffi.cast(rffi.USHORT, value) + assert space.eq_w(space.wrap(value), space.wrap(x)) + value = 200 + x = rffi.cast(rffi.UCHAR, value) + assert space.eq_w(space.wrap(value), space.wrap(x)) diff --git a/pypy/rpython/lltypesystem/rffi.py 
b/pypy/rpython/lltypesystem/rffi.py --- a/pypy/rpython/lltypesystem/rffi.py +++ b/pypy/rpython/lltypesystem/rffi.py @@ -436,6 +436,7 @@ 'long long', 'unsigned long long', 'size_t', 'time_t', 'wchar_t', 'uintptr_t', 'intptr_t'] +_TYPES_ARE_UNSIGNED = set(['size_t', 'uintptr_t']) # plus "unsigned *" if os.name != 'nt': TYPES.append('mode_t') TYPES.append('pid_t') @@ -454,7 +455,7 @@ name = 'u' + name[9:] signed = False else: - signed = (name != 'size_t') + signed = (name not in _TYPES_ARE_UNSIGNED) name = name.replace(' ', '') names.append(name) populatelist.append((name.upper(), c_name, signed)) From noreply at buildbot.pypy.org Thu May 17 11:45:41 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 17 May 2012 11:45:41 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: delay the lookup of to_free until it's needed. This should save a getfield_gc in the trace in the common case Message-ID: <20120517094541.81D6A82D46@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55123:b7e461c340c7 Date: 2012-05-17 11:45 +0200 http://bitbucket.org/pypy/pypy/changeset/b7e461c340c7/ Log: delay the lookup of to_free until it's needed. This should save a getfield_gc in the trace in the common case diff --git a/pypy/module/_ffi/interp_funcptr.py b/pypy/module/_ffi/interp_funcptr.py --- a/pypy/module/_ffi/interp_funcptr.py +++ b/pypy/module/_ffi/interp_funcptr.py @@ -48,7 +48,7 @@ self.func.name, expected, arg, given) # argchain = libffi.ArgChain() - argpusher = PushArgumentConverter(space, argchain, self.to_free) + argpusher = PushArgumentConverter(space, argchain, self) for i in range(expected): w_argtype = self.argtypes_w[i] w_arg = args_w[i] @@ -83,10 +83,10 @@ low-level types and push them to the argchain. 
""" - def __init__(self, space, argchain, to_free): + def __init__(self, space, argchain, w_func): FromAppLevelConverter.__init__(self, space) self.argchain = argchain - self.to_free = to_free + self.w_func = w_func def handle_signed(self, w_ffitype, w_obj, intval): self.argchain.arg(intval) @@ -108,13 +108,13 @@ def handle_char_p(self, w_ffitype, w_obj, strval): buf = rffi.str2charp(strval) - self.to_free.append(rffi.cast(rffi.VOIDP, buf)) + self.w_func.to_free.append(rffi.cast(rffi.VOIDP, buf)) addr = rffi.cast(rffi.ULONG, buf) self.argchain.arg(addr) def handle_unichar_p(self, w_ffitype, w_obj, unicodeval): buf = rffi.unicode2wcharp(unicodeval) - self.to_free.append(rffi.cast(rffi.VOIDP, buf)) + self.w_func.to_free.append(rffi.cast(rffi.VOIDP, buf)) addr = rffi.cast(rffi.ULONG, buf) self.argchain.arg(addr) From noreply at buildbot.pypy.org Thu May 17 12:16:46 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 17 May 2012 12:16:46 +0200 (CEST) Subject: [pypy-commit] pypy ffistruct: add a test_pypy_c test for _ffi structs Message-ID: <20120517101646.37C7382D45@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: ffistruct Changeset: r55124:ed4bfa7ee214 Date: 2012-05-17 12:16 +0200 http://bitbucket.org/pypy/pypy/changeset/ed4bfa7ee214/ Log: add a test_pypy_c test for _ffi structs diff --git a/pypy/module/pypyjit/test_pypy_c/test__ffi.py b/pypy/module/pypyjit/test_pypy_c/test__ffi.py --- a/pypy/module/pypyjit/test_pypy_c/test__ffi.py +++ b/pypy/module/pypyjit/test_pypy_c/test__ffi.py @@ -134,3 +134,31 @@ call = ops[idx] assert (call.args[0] == 'ConstClass(fabs)' or # e.g. 
OS/X int(call.args[0]) == fabs_addr) + + + def test__ffi_struct(self): + def main(): + from _ffi import _StructDescr, Field, types + fields = [ + Field('x', types.slong), + ] + descr = _StructDescr('foo', fields) + struct = descr.allocate() + i = 0 + while i < 300: + x = struct.getfield('x') # ID: getfield + x = x+1 + struct.setfield('x', x) # ID: setfield + i += 1 + return struct.getfield('x') + # + log = self.run(main, []) + loop, = log.loops_by_filename(self.filepath) + assert loop.match_by_id('getfield', """ + guard_not_invalidated(descr=...) + i57 = getfield_raw(i46, descr=) + """) + assert loop.match_by_id('setfield', """ + setfield_raw(i44, i57, descr=) + """) + From noreply at buildbot.pypy.org Thu May 17 13:58:53 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 17 May 2012 13:58:53 +0200 (CEST) Subject: [pypy-commit] pyrepl default: kill uniqify for sorted(set(...)) Message-ID: <20120517115853.0A8D382D45@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r195:ae89de421288 Date: 2012-05-03 18:16 +0200 http://bitbucket.org/pypy/pyrepl/changeset/ae89de421288/ Log: kill uniqify for sorted(set(...)) diff --git a/pyrepl/cmdrepl.py b/pyrepl/cmdrepl.py --- a/pyrepl/cmdrepl.py +++ b/pyrepl/cmdrepl.py @@ -53,8 +53,8 @@ def get_completions(self, stem): if len(stem) != self.pos: return [] - return cr.uniqify([s for s in self.completions - if s.startswith(stem)]) + return sorted(set(s for s in self.completions + if s.startswith(stem))) def replize(klass, history_across_invocations=1): diff --git a/pyrepl/completing_reader.py b/pyrepl/completing_reader.py --- a/pyrepl/completing_reader.py +++ b/pyrepl/completing_reader.py @@ -21,13 +21,6 @@ from pyrepl import commands, reader from pyrepl.reader import Reader -def uniqify(l): - d = {} - for i in l: - d[i] = 1 - r = d.keys() - r.sort() - return r def prefix(wordlist, j = 0): d = {} diff --git a/pyrepl/module_lister.py b/pyrepl/module_lister.py --- a/pyrepl/module_lister.py 
+++ b/pyrepl/module_lister.py @@ -17,7 +17,6 @@ # CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN # CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. -from pyrepl.completing_reader import uniqify import os, sys # for the completion support. @@ -38,9 +37,7 @@ l.append( prefix + fname ) _packages[prefix + fname] = _make_module_list_dir( file, suffs, prefix + fname + '.' ) - l = uniqify(l) - l.sort() - return l + return sorted(set(l)) def _make_module_list(): import imp diff --git a/pyrepl/python_reader.py b/pyrepl/python_reader.py --- a/pyrepl/python_reader.py +++ b/pyrepl/python_reader.py @@ -155,7 +155,7 @@ return [x[len(mod) + 1:] for x in l if x.startswith(mod + '.' + name)] try: - l = completing_reader.uniqify(self.completer.complete(stem)) + l = sorted(set(self.completer.complete(stem))) return l except (NameError, AttributeError): return [] From noreply at buildbot.pypy.org Thu May 17 13:58:54 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 17 May 2012 13:58:54 +0200 (CEST) Subject: [pypy-commit] pyrepl default: add some data tests Message-ID: <20120517115854.AF1E482D45@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r196:62f2256014af Date: 2012-05-03 18:18 +0200 http://bitbucket.org/pypy/pyrepl/changeset/62f2256014af/ Log: add some data tests diff --git a/testing/test_unix_reader.py b/testing/test_unix_reader.py --- a/testing/test_unix_reader.py +++ b/testing/test_unix_reader.py @@ -14,4 +14,6 @@ event = q.get() assert q.get() is None + assert event.data == a + assert event.raw == b From noreply at buildbot.pypy.org Thu May 17 13:58:56 2012 From: noreply at buildbot.pypy.org (RonnyPfannschmidt) Date: Thu, 17 May 2012 13:58:56 +0200 (CEST) Subject: [pypy-commit] pyrepl default: Added tag v0.8.4 for changeset 62f2256014af Message-ID: <20120517115856.91A9182D45@wyvern.cs.uni-duesseldorf.de> Author: Ronny Pfannschmidt Branch: Changeset: r197:9982325d3628 Date: 2012-05-17 13:58 +0200 
http://bitbucket.org/pypy/pyrepl/changeset/9982325d3628/

Log:	Added tag v0.8.4 for changeset 62f2256014af

diff --git a/.hgtags b/.hgtags
--- a/.hgtags
+++ b/.hgtags
@@ -1,3 +1,4 @@
 9e6ce97035736092e9eb7815816b36bee5e92cb3 pyrepl080
 5f47f9f65ff7127668bdeda102e675c23224d321 pyrepl081
 61de8bdd14264ab8af5b336be854cb9bfa720542 pyrepl082
+62f2256014af7b74b97c00827f1a7789e00dd814 v0.8.4

From noreply at buildbot.pypy.org Thu May 17 16:14:00 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Thu, 17 May 2012 16:14:00 +0200 (CEST)
Subject: [pypy-commit] pypy ffistruct: skip these tests on cli/jvm
Message-ID: <20120517141400.5F0F982D49@wyvern.cs.uni-duesseldorf.de>

Author: Antonio Cuni
Branch: ffistruct
Changeset: r55125:809248c4489d
Date: 2012-05-17 16:09 +0200
http://bitbucket.org/pypy/pypy/changeset/809248c4489d/

Log:	skip these tests on cli/jvm

diff --git a/pypy/translator/cli/test/test_builtin.py b/pypy/translator/cli/test/test_builtin.py
--- a/pypy/translator/cli/test/test_builtin.py
+++ b/pypy/translator/cli/test/test_builtin.py
@@ -16,7 +16,10 @@
     test_os_isdir = skip_os
     test_os_dup_oo = skip_os
    test_os_access = skip_os
-
+
+    def test_longlongmask(self):
+        py.test.skip("fix me")
+
     def test_builtin_math_frexp(self):
         self._skip_powerpc("Mono math floating point problem")
         BaseTestBuiltin.test_builtin_math_frexp(self)
diff --git a/pypy/translator/jvm/test/test_builtin.py b/pypy/translator/jvm/test/test_builtin.py
--- a/pypy/translator/jvm/test/test_builtin.py
+++ b/pypy/translator/jvm/test/test_builtin.py
@@ -47,6 +47,9 @@
         res = self.interpret(fn, [])
         assert stat.S_ISREG(res)

+    def test_longlongmask(self):
+        py.test.skip("fix me")
+
 class TestJvmTime(JvmTest, BaseTestTime):
     pass

From noreply at buildbot.pypy.org Thu May 17 16:14:01 2012
From: noreply at buildbot.pypy.org (antocuni)
Date: Thu, 17 May 2012 16:14:01 +0200 (CEST)
Subject: [pypy-commit] pypy ffistruct: close to-be-merged branch
Message-ID: <20120517141401.9CDFC82D49@wyvern.cs.uni-duesseldorf.de>

Author:
Antonio Cuni Branch: ffistruct Changeset: r55126:5b042d056556 Date: 2012-05-17 16:12 +0200 http://bitbucket.org/pypy/pypy/changeset/5b042d056556/ Log: close to-be-merged branch From noreply at buildbot.pypy.org Thu May 17 16:14:04 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 17 May 2012 16:14:04 +0200 (CEST) Subject: [pypy-commit] pypy default: merge the ffistruct branch: it adds a very low level way to express C structures with _ffi in a very JIT-friendly way Message-ID: <20120517141404.D295E82D49@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r55127:9c77e2bd53fe Date: 2012-05-17 16:13 +0200 http://bitbucket.org/pypy/pypy/changeset/9c77e2bd53fe/ Log: merge the ffistruct branch: it adds a very low level way to express C structures with _ffi in a very JIT-friendly way diff --git a/pypy/annotation/builtin.py b/pypy/annotation/builtin.py --- a/pypy/annotation/builtin.py +++ b/pypy/annotation/builtin.py @@ -298,6 +298,9 @@ def rarith_intmask(s_obj): return SomeInteger() +def rarith_longlongmask(s_obj): + return SomeInteger(knowntype=pypy.rlib.rarithmetic.r_longlong) + def robjmodel_instantiate(s_clspbc): assert isinstance(s_clspbc, SomePBC) clsdef = None @@ -376,6 +379,7 @@ BUILTIN_ANALYZERS[original] = value BUILTIN_ANALYZERS[pypy.rlib.rarithmetic.intmask] = rarith_intmask +BUILTIN_ANALYZERS[pypy.rlib.rarithmetic.longlongmask] = rarith_longlongmask BUILTIN_ANALYZERS[pypy.rlib.objectmodel.instantiate] = robjmodel_instantiate BUILTIN_ANALYZERS[pypy.rlib.objectmodel.r_dict] = robjmodel_r_dict BUILTIN_ANALYZERS[pypy.rlib.objectmodel.hlinvoke] = robjmodel_hlinvoke diff --git a/pypy/interpreter/baseobjspace.py b/pypy/interpreter/baseobjspace.py --- a/pypy/interpreter/baseobjspace.py +++ b/pypy/interpreter/baseobjspace.py @@ -1450,7 +1450,7 @@ self.wrap("expected a 32-bit integer")) return value - def truncatedint(self, w_obj): + def truncatedint_w(self, w_obj): # Like space.gateway_int_w(), but return the integer truncated # instead 
of raising OverflowError. For obscure cases only. try: @@ -1461,6 +1461,17 @@ from pypy.rlib.rarithmetic import intmask return intmask(self.bigint_w(w_obj).uintmask()) + def truncatedlonglong_w(self, w_obj): + # Like space.gateway_r_longlong_w(), but return the integer truncated + # instead of raising OverflowError. + try: + return self.r_longlong_w(w_obj) + except OperationError, e: + if not e.match(self, self.w_OverflowError): + raise + from pypy.rlib.rarithmetic import longlongmask + return longlongmask(self.bigint_w(w_obj).ulonglongmask()) + def c_filedescriptor_w(self, w_fd): # This is only used sometimes in CPython, e.g. for os.fsync() but # not os.close(). It's likely designed for 'select'. It's irregular diff --git a/pypy/interpreter/gateway.py b/pypy/interpreter/gateway.py --- a/pypy/interpreter/gateway.py +++ b/pypy/interpreter/gateway.py @@ -145,7 +145,7 @@ def visit_c_nonnegint(self, el, app_sig): self.checked_space_method(el, app_sig) - def visit_truncatedint(self, el, app_sig): + def visit_truncatedint_w(self, el, app_sig): self.checked_space_method(el, app_sig) def visit__Wrappable(self, el, app_sig): @@ -268,8 +268,8 @@ def visit_c_nonnegint(self, typ): self.run_args.append("space.c_nonnegint_w(%s)" % (self.scopenext(),)) - def visit_truncatedint(self, typ): - self.run_args.append("space.truncatedint(%s)" % (self.scopenext(),)) + def visit_truncatedint_w(self, typ): + self.run_args.append("space.truncatedint_w(%s)" % (self.scopenext(),)) def _make_unwrap_activation_class(self, unwrap_spec, cache={}): try: @@ -404,8 +404,8 @@ def visit_c_nonnegint(self, typ): self.unwrap.append("space.c_nonnegint_w(%s)" % (self.nextarg(),)) - def visit_truncatedint(self, typ): - self.unwrap.append("space.truncatedint(%s)" % (self.nextarg(),)) + def visit_truncatedint_w(self, typ): + self.unwrap.append("space.truncatedint_w(%s)" % (self.nextarg(),)) def make_fastfunc(unwrap_spec, func): unwrap_info = UnwrapSpec_FastFunc_Unwrap() diff --git 
a/pypy/interpreter/test/test_objspace.py b/pypy/interpreter/test/test_objspace.py --- a/pypy/interpreter/test/test_objspace.py +++ b/pypy/interpreter/test/test_objspace.py @@ -241,6 +241,37 @@ w_obj = space.wrap(-12) space.raises_w(space.w_ValueError, space.r_ulonglong_w, w_obj) + def test_truncatedint_w(self): + space = self.space + assert space.truncatedint_w(space.wrap(42)) == 42 + assert space.truncatedint_w(space.wrap(sys.maxint)) == sys.maxint + assert space.truncatedint_w(space.wrap(sys.maxint+1)) == -sys.maxint-1 + assert space.truncatedint_w(space.wrap(-1)) == -1 + assert space.truncatedint_w(space.wrap(-sys.maxint-2)) == sys.maxint + + def test_truncatedlonglong_w(self): + space = self.space + w_value = space.wrap(12) + res = space.truncatedlonglong_w(w_value) + assert res == 12 + assert type(res) is r_longlong + # + w_value = space.wrap(r_ulonglong(9223372036854775808)) + res = space.truncatedlonglong_w(w_value) + assert res == -9223372036854775808 + assert type(res) is r_longlong + # + w_value = space.wrap(r_ulonglong(18446744073709551615)) + res = space.truncatedlonglong_w(w_value) + assert res == -1 + assert type(res) is r_longlong + # + w_value = space.wrap(r_ulonglong(18446744073709551616)) + res = space.truncatedlonglong_w(w_value) + assert res == 0 + assert type(res) is r_longlong + + def test_call_obj_args(self): from pypy.interpreter.argument import Arguments diff --git a/pypy/jit/backend/llgraph/llimpl.py b/pypy/jit/backend/llgraph/llimpl.py --- a/pypy/jit/backend/llgraph/llimpl.py +++ b/pypy/jit/backend/llgraph/llimpl.py @@ -823,7 +823,9 @@ op_getfield_gc_pure = op_getfield_gc def op_getfield_raw(self, fielddescr, struct): - if fielddescr.typeinfo == REF: + if fielddescr.arg_types == 'dynamic': # abuse of .arg_types + return do_getfield_raw_dynamic(struct, fielddescr) + elif fielddescr.typeinfo == REF: return do_getfield_raw_ptr(struct, fielddescr.ofs) elif fielddescr.typeinfo == INT: return do_getfield_raw_int(struct, fielddescr.ofs) @@ 
-919,7 +921,9 @@ raise NotImplementedError def op_setfield_raw(self, fielddescr, struct, newvalue): - if fielddescr.typeinfo == REF: + if fielddescr.arg_types == 'dynamic': # abuse of .arg_types + do_setfield_raw_dynamic(struct, fielddescr, newvalue) + elif fielddescr.typeinfo == REF: do_setfield_raw_ptr(struct, fielddescr.ofs, newvalue) elif fielddescr.typeinfo == INT: do_setfield_raw_int(struct, fielddescr.ofs, newvalue) @@ -1500,6 +1504,17 @@ def do_getfield_raw_ptr(struct, fieldnum): return cast_to_ptr(_getfield_raw(struct, fieldnum)) +def do_getfield_raw_dynamic(struct, fielddescr): + from pypy.rlib import libffi + addr = cast_from_int(rffi.VOIDP, struct) + ofs = fielddescr.ofs + if fielddescr.is_pointer_field(): + assert False, 'fixme' + elif fielddescr.is_float_field(): + assert False, 'fixme' + else: + return libffi._struct_getfield(lltype.Signed, addr, ofs) + def do_new(size): TYPE = symbolic.Size2Type[size] x = lltype.malloc(TYPE, zero=True) @@ -1597,6 +1612,17 @@ newvalue = cast_from_ptr(FIELDTYPE, newvalue) setattr(ptr, fieldname, newvalue) +def do_setfield_raw_dynamic(struct, fielddescr, newvalue): + from pypy.rlib import libffi + addr = cast_from_int(rffi.VOIDP, struct) + ofs = fielddescr.ofs + if fielddescr.is_pointer_field(): + assert False, 'fixme' + elif fielddescr.is_float_field(): + assert False, 'fixme' + else: + libffi._struct_setfield(lltype.Signed, addr, ofs, newvalue) + def do_newstr(length): x = rstr.mallocstr(length) return cast_to_ptr(x) diff --git a/pypy/jit/backend/llgraph/runner.py b/pypy/jit/backend/llgraph/runner.py --- a/pypy/jit/backend/llgraph/runner.py +++ b/pypy/jit/backend/llgraph/runner.py @@ -334,6 +334,16 @@ token = history.getkind(getattr(S, fieldname)) return self.getdescr(ofs, token[0], name=fieldname) + def fielddescrof_dynamic(self, offset, fieldsize, is_pointer, is_float, is_signed): + if is_pointer: + typeinfo = REF + elif is_float: + typeinfo = FLOAT + else: + typeinfo = INT + # we abuse the arg_types field to 
distinguish dynamic and static descrs + return self.getdescr(offset, typeinfo, arg_types='dynamic', name='') + def interiorfielddescrof(self, A, fieldname): S = A.OF width = symbolic.get_size(A) diff --git a/pypy/jit/backend/llsupport/descr.py b/pypy/jit/backend/llsupport/descr.py --- a/pypy/jit/backend/llsupport/descr.py +++ b/pypy/jit/backend/llsupport/descr.py @@ -237,18 +237,25 @@ cache[(ARRAY, name)] = descr return descr +def compute_flag(is_pointer, is_float, is_signed): + if is_pointer: + assert not is_float + return FLAG_POINTER + elif is_float: + return FLAG_FLOAT + elif is_signed: + return FLAG_SIGNED + else: + return FLAG_UNSIGNED + +def get_dynamic_field_descr(offset, fieldsize, is_pointer, is_float, is_signed): + flag = compute_flag(is_pointer, is_float, is_signed) + return FieldDescr('dynamic', offset, fieldsize, flag) + def get_dynamic_interiorfield_descr(gc_ll_descr, offset, width, fieldsize, is_pointer, is_float, is_signed): arraydescr = ArrayDescr(0, width, None, FLAG_STRUCT) - if is_pointer: - assert not is_float - flag = FLAG_POINTER - elif is_float: - flag = FLAG_FLOAT - elif is_signed: - flag = FLAG_SIGNED - else: - flag = FLAG_UNSIGNED + flag = compute_flag(is_pointer, is_float, is_signed) fielddescr = FieldDescr('dynamic', offset, fieldsize, flag) return InteriorFieldDescr(arraydescr, fielddescr) diff --git a/pypy/jit/backend/llsupport/llmodel.py b/pypy/jit/backend/llsupport/llmodel.py --- a/pypy/jit/backend/llsupport/llmodel.py +++ b/pypy/jit/backend/llsupport/llmodel.py @@ -11,7 +11,7 @@ from pypy.jit.backend.llsupport.descr import ( get_size_descr, get_field_descr, get_array_descr, get_call_descr, get_interiorfield_descr, get_dynamic_interiorfield_descr, - FieldDescr, ArrayDescr, CallDescr, InteriorFieldDescr) + FieldDescr, ArrayDescr, CallDescr, InteriorFieldDescr, get_dynamic_field_descr) from pypy.jit.backend.llsupport.asmmemmgr import AsmMemoryManager @@ -245,6 +245,9 @@ def fielddescrof(self, STRUCT, fieldname): return 
get_field_descr(self.gc_ll_descr, STRUCT, fieldname) + def fielddescrof_dynamic(self, offset, fieldsize, is_pointer, is_float, is_signed): + return get_dynamic_field_descr(offset, fieldsize, is_pointer, is_float, is_signed) + def unpack_fielddescr(self, fielddescr): assert isinstance(fielddescr, FieldDescr) return fielddescr.offset diff --git a/pypy/jit/backend/test/runner_test.py b/pypy/jit/backend/test/runner_test.py --- a/pypy/jit/backend/test/runner_test.py +++ b/pypy/jit/backend/test/runner_test.py @@ -1660,20 +1660,37 @@ assert s.x == chr(190) assert s.y == chr(150) - def test_field_raw_pure(self): - # This is really testing the same thing as test_field_basic but can't - # hurt... - S = lltype.Struct('S', ('x', lltype.Signed)) + def test_fielddescrof_dynamic(self): + S = lltype.Struct('S', + ('x', lltype.Signed), + ('y', lltype.Signed), + ) + longsize = rffi.sizeof(lltype.Signed) + y_ofs = longsize s = lltype.malloc(S, flavor='raw') sa = llmemory.cast_ptr_to_adr(s) s_box = BoxInt(heaptracker.adr2int(sa)) + # + field = self.cpu.fielddescrof(S, 'y') + field_dyn = self.cpu.fielddescrof_dynamic(offset=y_ofs, + fieldsize=longsize, + is_pointer=False, + is_float=False, + is_signed=True) + assert field.is_pointer_field() == field_dyn.is_pointer_field() + assert field.is_float_field() == field_dyn.is_float_field() + if 'llgraph' not in str(self.cpu): + assert field.is_field_signed() == field_dyn.is_field_signed() + + # for get_op, set_op in ((rop.GETFIELD_RAW, rop.SETFIELD_RAW), (rop.GETFIELD_RAW_PURE, rop.SETFIELD_RAW)): - fd = self.cpu.fielddescrof(S, 'x') - self.execute_operation(set_op, [s_box, BoxInt(32)], 'void', - descr=fd) - res = self.execute_operation(get_op, [s_box], 'int', descr=fd) - assert res.getint() == 32 + for descr in (field, field_dyn): + self.execute_operation(set_op, [s_box, BoxInt(32)], 'void', + descr=descr) + res = self.execute_operation(get_op, [s_box], 'int', descr=descr) + assert res.getint() == 32 + lltype.free(s, flavor='raw') def 
test_new_with_vtable(self): diff --git a/pypy/jit/codewriter/effectinfo.py b/pypy/jit/codewriter/effectinfo.py --- a/pypy/jit/codewriter/effectinfo.py +++ b/pypy/jit/codewriter/effectinfo.py @@ -48,8 +48,10 @@ OS_LIBFFI_PREPARE = 60 OS_LIBFFI_PUSH_ARG = 61 OS_LIBFFI_CALL = 62 - OS_LIBFFI_GETARRAYITEM = 63 - OS_LIBFFI_SETARRAYITEM = 64 + OS_LIBFFI_STRUCT_GETFIELD = 63 + OS_LIBFFI_STRUCT_SETFIELD = 64 + OS_LIBFFI_GETARRAYITEM = 65 + OS_LIBFFI_SETARRAYITEM = 66 # OS_LLONG_INVERT = 69 OS_LLONG_ADD = 70 diff --git a/pypy/jit/codewriter/jtransform.py b/pypy/jit/codewriter/jtransform.py --- a/pypy/jit/codewriter/jtransform.py +++ b/pypy/jit/codewriter/jtransform.py @@ -1675,6 +1675,12 @@ elif oopspec_name.startswith('libffi_call_'): oopspecindex = EffectInfo.OS_LIBFFI_CALL extraeffect = EffectInfo.EF_RANDOM_EFFECTS + elif oopspec_name == 'libffi_struct_getfield': + oopspecindex = EffectInfo.OS_LIBFFI_STRUCT_GETFIELD + extraeffect = EffectInfo.EF_CANNOT_RAISE + elif oopspec_name == 'libffi_struct_setfield': + oopspecindex = EffectInfo.OS_LIBFFI_STRUCT_SETFIELD + extraeffect = EffectInfo.EF_CANNOT_RAISE elif oopspec_name == 'libffi_array_getitem': oopspecindex = EffectInfo.OS_LIBFFI_GETARRAYITEM extraeffect = EffectInfo.EF_CANNOT_RAISE diff --git a/pypy/jit/codewriter/support.py b/pypy/jit/codewriter/support.py --- a/pypy/jit/codewriter/support.py +++ b/pypy/jit/codewriter/support.py @@ -456,7 +456,6 @@ def _ll_3_libffi_call_void(llfunc, funcsym, ll_args): return func(llfunc)._do_call(funcsym, ll_args, lltype.Void) - # in the following calls to builtins, the JIT is allowed to look inside: inline_calls_to = [ ('int_floordiv_ovf_zer', [lltype.Signed, lltype.Signed], lltype.Signed), diff --git a/pypy/jit/metainterp/optimizeopt/fficall.py b/pypy/jit/metainterp/optimizeopt/fficall.py --- a/pypy/jit/metainterp/optimizeopt/fficall.py +++ b/pypy/jit/metainterp/optimizeopt/fficall.py @@ -7,7 +7,9 @@ from pypy.rlib.libffi import Func from pypy.rlib.objectmodel import 
we_are_translated from pypy.rpython.annlowlevel import cast_base_ptr_to_instance -from pypy.rpython.lltypesystem import llmemory, rffi +from pypy.rpython.lltypesystem import lltype, llmemory, rffi +from pypy.rlib.objectmodel import we_are_translated +from pypy.rlib.rarithmetic import intmask class FuncInfo(object): @@ -118,6 +120,9 @@ ops = self.do_push_arg(op) elif oopspec == EffectInfo.OS_LIBFFI_CALL: ops = self.do_call(op) + elif (oopspec == EffectInfo.OS_LIBFFI_STRUCT_GETFIELD or + oopspec == EffectInfo.OS_LIBFFI_STRUCT_SETFIELD): + ops = self.do_struct_getsetfield(op, oopspec) elif (oopspec == EffectInfo.OS_LIBFFI_GETARRAYITEM or oopspec == EffectInfo.OS_LIBFFI_SETARRAYITEM): ops = self.do_getsetarrayitem(op, oopspec) @@ -195,6 +200,46 @@ ops.append(newop) return ops + def do_struct_getsetfield(self, op, oopspec): + ffitypeval = self.getvalue(op.getarg(1)) + addrval = self.getvalue(op.getarg(2)) + offsetval = self.getvalue(op.getarg(3)) + if not ffitypeval.is_constant() or not offsetval.is_constant(): + return [op] + # + ffitypeaddr = ffitypeval.box.getaddr() + ffitype = llmemory.cast_adr_to_ptr(ffitypeaddr, clibffi.FFI_TYPE_P) + offset = offsetval.box.getint() + descr = self._get_field_descr(ffitype, offset) + # + arglist = [addrval.force_box(self.optimizer)] + if oopspec == EffectInfo.OS_LIBFFI_STRUCT_GETFIELD: + opnum = rop.GETFIELD_RAW + else: + opnum = rop.SETFIELD_RAW + newval = self.getvalue(op.getarg(4)) + arglist.append(newval.force_box(self.optimizer)) + # + newop = ResOperation(opnum, arglist, op.result, descr=descr) + return [newop] + + def _get_field_descr(self, ffitype, offset): + kind = libffi.types.getkind(ffitype) + is_pointer = is_float = is_signed = False + if ffitype is libffi.types.pointer: + is_pointer = True + elif kind == 'i': + is_signed = True + elif kind == 'f' or kind == 'I' or kind == 'U': + # longlongs are treated as floats, see e.g. 
llsupport/descr.py:getDescrClass + is_float = True + else: + assert False, "unsupported ffitype or kind" + # + fieldsize = intmask(ffitype.c_size) + return self.optimizer.cpu.fielddescrof_dynamic(offset, fieldsize, + is_pointer, is_float, is_signed) + def do_getsetarrayitem(self, op, oopspec): ffitypeval = self.getvalue(op.getarg(1)) widthval = self.getvalue(op.getarg(2)) @@ -245,6 +290,7 @@ offset, width, fieldsize, is_pointer, is_float, is_signed ) + def propagate_forward(self, op): if self.logops is not None: debug_print(self.logops.repr_of_resop(op)) diff --git a/pypy/jit/metainterp/optimizeopt/test/test_optimizefficall.py b/pypy/jit/metainterp/optimizeopt/test/test_optimizefficall.py --- a/pypy/jit/metainterp/optimizeopt/test/test_optimizefficall.py +++ b/pypy/jit/metainterp/optimizeopt/test/test_optimizefficall.py @@ -56,6 +56,13 @@ restype=types.sint, flags=43) # + ffi_slong = types.slong + dyn_123_field = cpu.fielddescrof_dynamic(offset=123, + fieldsize=types.slong.c_size, + is_pointer=False, + is_float=False, + is_signed=True) + # def calldescr(cpu, FUNC, oopspecindex, extraeffect=None): if extraeffect == EffectInfo.EF_RANDOM_EFFECTS: f = None # means "can force all" really @@ -69,6 +76,8 @@ libffi_push_arg = calldescr(cpu, FUNC, EffectInfo.OS_LIBFFI_PUSH_ARG) libffi_call = calldescr(cpu, FUNC, EffectInfo.OS_LIBFFI_CALL, EffectInfo.EF_RANDOM_EFFECTS) + libffi_struct_getfield = calldescr(cpu, FUNC, EffectInfo.OS_LIBFFI_STRUCT_GETFIELD) + libffi_struct_setfield = calldescr(cpu, FUNC, EffectInfo.OS_LIBFFI_STRUCT_SETFIELD) namespace = namespace.__dict__ @@ -277,3 +286,30 @@ jump(i3, f1, p2) """ loop = self.optimize_loop(ops, expected) + + def test_ffi_struct_fields(self): + ops = """ + [i0] + i1 = call(0, ConstClass(ffi_slong), i0, 123, descr=libffi_struct_getfield) + i2 = int_add(i1, 1) + call(0, ConstClass(ffi_slong), i0, 123, i2, descr=libffi_struct_setfield) + jump(i1) + """ + expected = """ + [i0] + i1 = getfield_raw(i0, descr=dyn_123_field) + i2 = 
int_add(i1, 1) + setfield_raw(i0, i2, descr=dyn_123_field) + jump(i1) + """ + loop = self.optimize_loop(ops, expected) + + def test_ffi_struct_fields_nonconst(self): + ops = """ + [i0, i1] + i2 = call(0, ConstClass(ffi_slong), i0, i1, descr=libffi_struct_getfield) + i3 = call(0, i1 , i0, 123, descr=libffi_struct_getfield) + jump(i1) + """ + expected = ops + loop = self.optimize_loop(ops, expected) diff --git a/pypy/jit/metainterp/test/test_fficall.py b/pypy/jit/metainterp/test/test_fficall.py --- a/pypy/jit/metainterp/test/test_fficall.py +++ b/pypy/jit/metainterp/test/test_fficall.py @@ -4,7 +4,7 @@ from pypy.jit.metainterp.test.support import LLJitMixin from pypy.rlib.jit import JitDriver, promote, dont_look_inside from pypy.rlib.libffi import (ArgChain, IS_32_BIT, array_getitem, array_setitem, - types) + types, struct_setfield_int, struct_getfield_int) from pypy.rlib.objectmodel import specialize from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong from pypy.rlib.test.test_libffi import TestLibffiCall as _TestLibffiCall @@ -187,5 +187,24 @@ class TestFfiCallSupportAll(FfiCallTests, LLJitMixin): supports_all = True # supports_{floats,longlong,singlefloats} + def test_struct_getfield(self): + myjitdriver = JitDriver(greens = [], reds = ['n', 'i', 'addr']) + + def f(n): + i = 0 + addr = lltype.malloc(rffi.VOIDP.TO, 10, flavor='raw') + while i < n: + myjitdriver.jit_merge_point(n=n, i=i, addr=addr) + struct_setfield_int(types.slong, addr, 0, 1) + i += struct_getfield_int(types.slong, addr, 0) + lltype.free(addr, flavor='raw') + return i + assert self.meta_interp(f, [20]) == f(20) + self.check_resops( + setfield_raw=2, + getfield_raw=2, + call=0) + + class TestFfiLookup(FfiLookupTests, LLJitMixin): pass diff --git a/pypy/module/_ffi/__init__.py b/pypy/module/_ffi/__init__.py --- a/pypy/module/_ffi/__init__.py +++ b/pypy/module/_ffi/__init__.py @@ -1,13 +1,16 @@ from pypy.interpreter.mixedmodule import MixedModule -from pypy.module._ffi import 
interp_ffi class Module(MixedModule): interpleveldefs = { - 'CDLL': 'interp_ffi.W_CDLL', - 'types': 'interp_ffi.W_types', - 'FuncPtr': 'interp_ffi.W_FuncPtr', - 'get_libc':'interp_ffi.get_libc', + 'types': 'interp_ffitype.W_types', + 'CDLL': 'interp_funcptr.W_CDLL', + 'FuncPtr': 'interp_funcptr.W_FuncPtr', + 'get_libc':'interp_funcptr.get_libc', + '_StructDescr': 'interp_struct.W__StructDescr', + 'Field': 'interp_struct.W_Field', } - appleveldefs = {} + appleveldefs = { + 'Structure': 'app_struct.Structure', + } diff --git a/pypy/module/_ffi/app_struct.py b/pypy/module/_ffi/app_struct.py new file mode 100644 --- /dev/null +++ b/pypy/module/_ffi/app_struct.py @@ -0,0 +1,21 @@ +import _ffi + +class MetaStructure(type): + + def __new__(cls, name, bases, dic): + cls._compute_shape(name, dic) + return type.__new__(cls, name, bases, dic) + + @classmethod + def _compute_shape(cls, name, dic): + fields = dic.get('_fields_') + if fields is None: + return + struct_descr = _ffi._StructDescr(name, fields) + for field in fields: + dic[field.name] = field + dic['_struct_'] = struct_descr + + +class Structure(object): + __metaclass__ = MetaStructure diff --git a/pypy/module/_ffi/interp_ffitype.py b/pypy/module/_ffi/interp_ffitype.py new file mode 100644 --- /dev/null +++ b/pypy/module/_ffi/interp_ffitype.py @@ -0,0 +1,181 @@ +from pypy.rlib import libffi, clibffi +from pypy.rlib.rarithmetic import intmask +from pypy.rlib import jit +from pypy.interpreter.baseobjspace import Wrappable +from pypy.interpreter.typedef import TypeDef, interp_attrproperty +from pypy.interpreter.gateway import interp2app +from pypy.interpreter.error import OperationError + +class W_FFIType(Wrappable): + + _immutable_fields_ = ['name', 'w_structdescr', 'w_pointer_to'] + + def __init__(self, name, ffitype, w_structdescr=None, w_pointer_to=None): + self.name = name + self._ffitype = clibffi.FFI_TYPE_NULL + self.w_structdescr = w_structdescr + self.w_pointer_to = w_pointer_to + self.set_ffitype(ffitype) + + 
@jit.elidable + def get_ffitype(self): + if not self._ffitype: + raise ValueError("Operation not permitted on an incomplete type") + return self._ffitype + + def set_ffitype(self, ffitype): + if self._ffitype: + raise ValueError("The _ffitype is already set") + self._ffitype = ffitype + if ffitype and self.is_struct(): + assert self.w_structdescr is not None + + def descr_deref_pointer(self, space): + if self.w_pointer_to is None: + return space.w_None + return self.w_pointer_to + + def descr_sizeof(self, space): + try: + return space.wrap(self.sizeof()) + except ValueError: + msg = "Operation not permitted on an incomplete type" + raise OperationError(space.w_ValueError, space.wrap(msg)) + + def sizeof(self): + return intmask(self.get_ffitype().c_size) + + def get_alignment(self): + return intmask(self.get_ffitype().c_alignment) + + def repr(self, space): + return space.wrap(self.__repr__()) + + def __repr__(self): + name = self.name + if not self._ffitype: + name += ' (incomplete)' + return "" % name + + def is_signed(self): + return (self is app_types.slong or + self is app_types.sint or + self is app_types.sshort or + self is app_types.sbyte or + self is app_types.slonglong) + + def is_unsigned(self): + return (self is app_types.ulong or + self is app_types.uint or + self is app_types.ushort or + self is app_types.ubyte or + self is app_types.ulonglong) + + def is_pointer(self): + return self.get_ffitype() is libffi.types.pointer + + def is_char(self): + return self is app_types.char + + def is_unichar(self): + return self is app_types.unichar + + def is_longlong(self): + return libffi.IS_32_BIT and (self is app_types.slonglong or + self is app_types.ulonglong) + + def is_double(self): + return self is app_types.double + + def is_singlefloat(self): + return self is app_types.float + + def is_void(self): + return self is app_types.void + + def is_struct(self): + return libffi.types.is_struct(self.get_ffitype()) + + def is_char_p(self): + return self is 
app_types.char_p + + def is_unichar_p(self): + return self is app_types.unichar_p + + +W_FFIType.typedef = TypeDef( + 'FFIType', + name = interp_attrproperty('name', W_FFIType), + __repr__ = interp2app(W_FFIType.repr), + deref_pointer = interp2app(W_FFIType.descr_deref_pointer), + sizeof = interp2app(W_FFIType.descr_sizeof), + ) + + +def build_ffi_types(): + types = [ + # note: most of the type names come directly from the C equivalent, + # with the exception of bytes: in C, ubyte and char are equivalent, + # but for _ffi the first expects a number while the second expects a + # 1-length string + W_FFIType('slong', libffi.types.slong), + W_FFIType('sint', libffi.types.sint), + W_FFIType('sshort', libffi.types.sshort), + W_FFIType('sbyte', libffi.types.schar), + W_FFIType('slonglong', libffi.types.slonglong), + # + W_FFIType('ulong', libffi.types.ulong), + W_FFIType('uint', libffi.types.uint), + W_FFIType('ushort', libffi.types.ushort), + W_FFIType('ubyte', libffi.types.uchar), + W_FFIType('ulonglong', libffi.types.ulonglong), + # + W_FFIType('char', libffi.types.uchar), + W_FFIType('unichar', libffi.types.wchar_t), + # + W_FFIType('double', libffi.types.double), + W_FFIType('float', libffi.types.float), + W_FFIType('void', libffi.types.void), + W_FFIType('void_p', libffi.types.pointer), + # + # missing types: + + ## 's' : ffi_type_pointer, + ## 'z' : ffi_type_pointer, + ## 'O' : ffi_type_pointer, + ## 'Z' : ffi_type_pointer, + + ] + d = dict([(t.name, t) for t in types]) + w_char = d['char'] + w_unichar = d['unichar'] + d['char_p'] = W_FFIType('char_p', libffi.types.pointer, w_pointer_to = w_char) + d['unichar_p'] = W_FFIType('unichar_p', libffi.types.pointer, w_pointer_to = w_unichar) + return d + +class app_types: + pass +app_types.__dict__ = build_ffi_types() + +def descr_new_pointer(space, w_cls, w_pointer_to): + try: + return descr_new_pointer.cache[w_pointer_to] + except KeyError: + if w_pointer_to is app_types.char: + w_result = app_types.char_p + elif w_pointer_to 
is app_types.unichar: + w_result = app_types.unichar_p + else: + w_pointer_to = space.interp_w(W_FFIType, w_pointer_to) + name = '(pointer to %s)' % w_pointer_to.name + w_result = W_FFIType(name, libffi.types.pointer, w_pointer_to = w_pointer_to) + descr_new_pointer.cache[w_pointer_to] = w_result + return w_result +descr_new_pointer.cache = {} + +class W_types(Wrappable): + pass +W_types.typedef = TypeDef( + 'types', + Pointer = interp2app(descr_new_pointer, as_classmethod=True), + **app_types.__dict__) diff --git a/pypy/module/_ffi/interp_ffi.py b/pypy/module/_ffi/interp_funcptr.py rename from pypy/module/_ffi/interp_ffi.py rename to pypy/module/_ffi/interp_funcptr.py --- a/pypy/module/_ffi/interp_ffi.py +++ b/pypy/module/_ffi/interp_funcptr.py @@ -3,7 +3,7 @@ operationerrfmt from pypy.interpreter.gateway import interp2app, unwrap_spec from pypy.interpreter.typedef import TypeDef -from pypy.module._rawffi.structure import W_StructureInstance, W_Structure +from pypy.module._ffi.interp_ffitype import W_FFIType # from pypy.rpython.lltypesystem import lltype, rffi # @@ -12,165 +12,16 @@ from pypy.rlib.rdynload import DLOpenError from pypy.rlib.rarithmetic import intmask, r_uint from pypy.rlib.objectmodel import we_are_translated - -class W_FFIType(Wrappable): - - _immutable_fields_ = ['name', 'ffitype', 'w_datashape', 'w_pointer_to'] - - def __init__(self, name, ffitype, w_datashape=None, w_pointer_to=None): - self.name = name - self.ffitype = ffitype - self.w_datashape = w_datashape - self.w_pointer_to = w_pointer_to - if self.is_struct(): - assert w_datashape is not None - - def descr_deref_pointer(self, space): - if self.w_pointer_to is None: - return space.w_None - return self.w_pointer_to - - def repr(self, space): - return space.wrap(self.__repr__()) - - def __repr__(self): - return "" % self.name - - def is_signed(self): - return (self is app_types.slong or - self is app_types.sint or - self is app_types.sshort or - self is app_types.sbyte or - self is 
app_types.slonglong) - - def is_unsigned(self): - return (self is app_types.ulong or - self is app_types.uint or - self is app_types.ushort or - self is app_types.ubyte or - self is app_types.ulonglong) - - def is_pointer(self): - return self.ffitype is libffi.types.pointer - - def is_char(self): - return self is app_types.char - - def is_unichar(self): - return self is app_types.unichar - - def is_longlong(self): - return libffi.IS_32_BIT and (self is app_types.slonglong or - self is app_types.ulonglong) - - def is_double(self): - return self is app_types.double - - def is_singlefloat(self): - return self is app_types.float - - def is_void(self): - return self is app_types.void - - def is_struct(self): - return libffi.types.is_struct(self.ffitype) - - def is_char_p(self): - return self is app_types.char_p - - def is_unichar_p(self): - return self is app_types.unichar_p - - -W_FFIType.typedef = TypeDef( - 'FFIType', - __repr__ = interp2app(W_FFIType.repr), - deref_pointer = interp2app(W_FFIType.descr_deref_pointer), - ) - - -def build_ffi_types(): - types = [ - # note: most of the type name directly come from the C equivalent, - # with the exception of bytes: in C, ubyte and char are equivalent, - # but for _ffi the first expects a number while the second a 1-length - # string - W_FFIType('slong', libffi.types.slong), - W_FFIType('sint', libffi.types.sint), - W_FFIType('sshort', libffi.types.sshort), - W_FFIType('sbyte', libffi.types.schar), - W_FFIType('slonglong', libffi.types.slonglong), - # - W_FFIType('ulong', libffi.types.ulong), - W_FFIType('uint', libffi.types.uint), - W_FFIType('ushort', libffi.types.ushort), - W_FFIType('ubyte', libffi.types.uchar), - W_FFIType('ulonglong', libffi.types.ulonglong), - # - W_FFIType('char', libffi.types.uchar), - W_FFIType('unichar', libffi.types.wchar_t), - # - W_FFIType('double', libffi.types.double), - W_FFIType('float', libffi.types.float), - W_FFIType('void', libffi.types.void), - W_FFIType('void_p', 
libffi.types.pointer), - # - # missing types: - - ## 's' : ffi_type_pointer, - ## 'z' : ffi_type_pointer, - ## 'O' : ffi_type_pointer, - ## 'Z' : ffi_type_pointer, - - ] - d = dict([(t.name, t) for t in types]) - w_char = d['char'] - w_unichar = d['unichar'] - d['char_p'] = W_FFIType('char_p', libffi.types.pointer, w_pointer_to = w_char) - d['unichar_p'] = W_FFIType('unichar_p', libffi.types.pointer, w_pointer_to = w_unichar) - return d - -class app_types: - pass -app_types.__dict__ = build_ffi_types() - -def descr_new_pointer(space, w_cls, w_pointer_to): - try: - return descr_new_pointer.cache[w_pointer_to] - except KeyError: - if w_pointer_to is app_types.char: - w_result = app_types.char_p - elif w_pointer_to is app_types.unichar: - w_result = app_types.unichar_p - else: - w_pointer_to = space.interp_w(W_FFIType, w_pointer_to) - name = '(pointer to %s)' % w_pointer_to.name - w_result = W_FFIType(name, libffi.types.pointer, w_pointer_to = w_pointer_to) - descr_new_pointer.cache[w_pointer_to] = w_result - return w_result -descr_new_pointer.cache = {} - -class W_types(Wrappable): - pass -W_types.typedef = TypeDef( - 'types', - Pointer = interp2app(descr_new_pointer, as_classmethod=True), - **app_types.__dict__) +from pypy.module._ffi.type_converter import FromAppLevelConverter, ToAppLevelConverter def unwrap_ffitype(space, w_argtype, allow_void=False): - res = w_argtype.ffitype + res = w_argtype.get_ffitype() if res is libffi.types.void and not allow_void: msg = 'void is not a valid argument type' raise OperationError(space.w_TypeError, space.wrap(msg)) return res -def unwrap_truncate_int(TP, space, w_arg): - if space.is_true(space.isinstance(w_arg, space.w_int)): - return rffi.cast(TP, space.int_w(w_arg)) - else: - return rffi.cast(TP, space.bigint_w(w_arg).ulonglongmask()) -unwrap_truncate_int._annspecialcase_ = 'specialize:arg(0)' # ======================================================================== @@ -197,101 +48,19 @@ self.func.name, expected, arg, 
given) # argchain = libffi.ArgChain() + argpusher = PushArgumentConverter(space, argchain, self) for i in range(expected): w_argtype = self.argtypes_w[i] w_arg = args_w[i] - if w_argtype.is_longlong(): - # note that we must check for longlong first, because either - # is_signed or is_unsigned returns true anyway - assert libffi.IS_32_BIT - self.arg_longlong(space, argchain, w_arg) - elif w_argtype.is_signed(): - argchain.arg(unwrap_truncate_int(rffi.LONG, space, w_arg)) - elif self.add_char_p_maybe(space, argchain, w_arg, w_argtype): - # the argument is added to the argchain direcly by the method above - pass - elif w_argtype.is_pointer(): - w_arg = self.convert_pointer_arg_maybe(space, w_arg, w_argtype) - argchain.arg(intmask(space.uint_w(w_arg))) - elif w_argtype.is_unsigned(): - argchain.arg(unwrap_truncate_int(rffi.ULONG, space, w_arg)) - elif w_argtype.is_char(): - w_arg = space.ord(w_arg) - argchain.arg(space.int_w(w_arg)) - elif w_argtype.is_unichar(): - w_arg = space.ord(w_arg) - argchain.arg(space.int_w(w_arg)) - elif w_argtype.is_double(): - self.arg_float(space, argchain, w_arg) - elif w_argtype.is_singlefloat(): - self.arg_singlefloat(space, argchain, w_arg) - elif w_argtype.is_struct(): - # arg_raw directly takes value to put inside ll_args - w_arg = space.interp_w(W_StructureInstance, w_arg) - ptrval = w_arg.ll_buffer - argchain.arg_raw(ptrval) - else: - assert False, "Argument shape '%s' not supported" % w_argtype + argpusher.unwrap_and_do(w_argtype, w_arg) return argchain - def add_char_p_maybe(self, space, argchain, w_arg, w_argtype): - """ - Automatic conversion from string to char_p. The allocated buffer will - be automatically freed after the call. 
- """ - w_type = jit.promote(space.type(w_arg)) - if w_argtype.is_char_p() and w_type is space.w_str: - strval = space.str_w(w_arg) - buf = rffi.str2charp(strval) - self.to_free.append(rffi.cast(rffi.VOIDP, buf)) - addr = rffi.cast(rffi.ULONG, buf) - argchain.arg(addr) - return True - elif w_argtype.is_unichar_p() and (w_type is space.w_str or - w_type is space.w_unicode): - unicodeval = space.unicode_w(w_arg) - buf = rffi.unicode2wcharp(unicodeval) - self.to_free.append(rffi.cast(rffi.VOIDP, buf)) - addr = rffi.cast(rffi.ULONG, buf) - argchain.arg(addr) - return True - return False - - def convert_pointer_arg_maybe(self, space, w_arg, w_argtype): - """ - Try to convert the argument by calling _as_ffi_pointer_() - """ - meth = space.lookup(w_arg, '_as_ffi_pointer_') # this also promotes the type - if meth: - return space.call_function(meth, w_arg, w_argtype) - else: - return w_arg - - def arg_float(self, space, argchain, w_arg): - # a separate function, which can be seen by the jit or not, - # depending on whether floats are supported - argchain.arg(space.float_w(w_arg)) - - def arg_longlong(self, space, argchain, w_arg): - # a separate function, which can be seen by the jit or not, - # depending on whether longlongs are supported - bigarg = space.bigint_w(w_arg) - ullval = bigarg.ulonglongmask() - llval = rffi.cast(rffi.LONGLONG, ullval) - argchain.arg(llval) - - def arg_singlefloat(self, space, argchain, w_arg): - # a separate function, which can be seen by the jit or not, - # depending on whether singlefloats are supported - from pypy.rlib.rarithmetic import r_singlefloat - fval = space.float_w(w_arg) - sfval = r_singlefloat(fval) - argchain.arg(sfval) - def call(self, space, args_w): self = jit.promote(self) argchain = self.build_argchain(space, args_w) - return self._do_call(space, argchain) + func_caller = CallFunctionConverter(space, self.func, argchain) + return func_caller.do_and_wrap(self.w_restype) + #return self._do_call(space, argchain) def 
free_temp_buffers(self, space): for buf in self.to_free: @@ -301,40 +70,89 @@ lltype.free(buf, flavor='raw') self.to_free = [] - def _do_call(self, space, argchain): - w_restype = self.w_restype - if w_restype.is_longlong(): - # note that we must check for longlong first, because either - # is_signed or is_unsigned returns true anyway - assert libffi.IS_32_BIT - return self._call_longlong(space, argchain) - elif w_restype.is_signed(): - return self._call_int(space, argchain) - elif w_restype.is_unsigned() or w_restype.is_pointer(): - return self._call_uint(space, argchain) - elif w_restype.is_char(): - intres = self.func.call(argchain, rffi.UCHAR) - return space.wrap(chr(intres)) - elif w_restype.is_unichar(): - intres = self.func.call(argchain, rffi.WCHAR_T) - return space.wrap(unichr(intres)) - elif w_restype.is_double(): - return self._call_float(space, argchain) - elif w_restype.is_singlefloat(): - return self._call_singlefloat(space, argchain) - elif w_restype.is_struct(): - w_datashape = w_restype.w_datashape - assert isinstance(w_datashape, W_Structure) - ptrval = self.func.call(argchain, rffi.ULONG, is_struct=True) - return w_datashape.fromaddress(space, ptrval) - elif w_restype.is_void(): - voidres = self.func.call(argchain, lltype.Void) - assert voidres is None - return space.w_None - else: - assert False, "Return value shape '%s' not supported" % w_restype + def getaddr(self, space): + """ + Return the physical address in memory of the function + """ + return space.wrap(rffi.cast(rffi.LONG, self.func.funcsym)) - def _call_int(self, space, argchain): + +class PushArgumentConverter(FromAppLevelConverter): + """ + A converter used by W_FuncPtr to unwrap the app-level objects into + low-level types and push them to the argchain. 
+ def __init__(self, space, argchain, w_func): + FromAppLevelConverter.__init__(self, space) + self.argchain = argchain + self.w_func = w_func + + def handle_signed(self, w_ffitype, w_obj, intval): + self.argchain.arg(intval) + + def handle_unsigned(self, w_ffitype, w_obj, uintval): + self.argchain.arg(uintval) + + def handle_pointer(self, w_ffitype, w_obj, intval): + self.argchain.arg(intval) + + def handle_char(self, w_ffitype, w_obj, intval): + self.argchain.arg(intval) + + def handle_unichar(self, w_ffitype, w_obj, intval): + self.argchain.arg(intval) + + def handle_longlong(self, w_ffitype, w_obj, longlongval): + self.argchain.arg(longlongval) + + def handle_char_p(self, w_ffitype, w_obj, strval): + buf = rffi.str2charp(strval) + self.w_func.to_free.append(rffi.cast(rffi.VOIDP, buf)) + addr = rffi.cast(rffi.ULONG, buf) + self.argchain.arg(addr) + + def handle_unichar_p(self, w_ffitype, w_obj, unicodeval): + buf = rffi.unicode2wcharp(unicodeval) + self.w_func.to_free.append(rffi.cast(rffi.VOIDP, buf)) + addr = rffi.cast(rffi.ULONG, buf) + self.argchain.arg(addr) + + def handle_float(self, w_ffitype, w_obj, floatval): + self.argchain.arg(floatval) + + def handle_singlefloat(self, w_ffitype, w_obj, singlefloatval): + self.argchain.arg(singlefloatval) + + def handle_struct(self, w_ffitype, w_structinstance): + # arg_raw directly takes the value to put inside ll_args + ptrval = w_structinstance.rawmem + self.argchain.arg_raw(ptrval) + + def handle_struct_rawffi(self, w_ffitype, w_structinstance): + # arg_raw directly takes the value to put inside ll_args + ptrval = w_structinstance.ll_buffer + self.argchain.arg_raw(ptrval) + + +class CallFunctionConverter(ToAppLevelConverter): + """ + A converter used by W_FuncPtr to call the function, expecting the + result to be of the correct low-level type, and to wrap it to the + corresponding app-level type + """ + + def __init__(self, space, func, argchain): + ToAppLevelConverter.__init__(self, space) + self.func = func + self.argchain = 
argchain + + def get_longlong(self, w_ffitype): + return self.func.call(self.argchain, rffi.LONGLONG) + + def get_ulonglong(self, w_ffitype): + return self.func.call(self.argchain, rffi.ULONGLONG) + + def get_signed(self, w_ffitype): # if the declared return type of the function is smaller than LONG, # the result buffer may contains garbage in its higher bits. To get # the correct value, and to be sure to handle the signed/unsigned case @@ -342,88 +160,66 @@ # that, we cast it back to LONG, because this is what we want to pass # to space.wrap in order to get a nice applevel . # - restype = self.func.restype + restype = w_ffitype.get_ffitype() call = self.func.call if restype is libffi.types.slong: - intres = call(argchain, rffi.LONG) + return call(self.argchain, rffi.LONG) elif restype is libffi.types.sint: - intres = rffi.cast(rffi.LONG, call(argchain, rffi.INT)) + return rffi.cast(rffi.LONG, call(self.argchain, rffi.INT)) elif restype is libffi.types.sshort: - intres = rffi.cast(rffi.LONG, call(argchain, rffi.SHORT)) + return rffi.cast(rffi.LONG, call(self.argchain, rffi.SHORT)) elif restype is libffi.types.schar: - intres = rffi.cast(rffi.LONG, call(argchain, rffi.SIGNEDCHAR)) + return rffi.cast(rffi.LONG, call(self.argchain, rffi.SIGNEDCHAR)) else: - raise OperationError(space.w_ValueError, - space.wrap('Unsupported restype')) - return space.wrap(intres) + self.error(w_ffitype) + + def get_unsigned(self, w_ffitype): + return self.func.call(self.argchain, rffi.ULONG) - def _call_uint(self, space, argchain): - # the same comment as above apply. Moreover, we need to be careful - # when the return type is ULONG, because the value might not fit into - # a signed LONG: this is the only case in which we cast the result to - # something different than LONG; as a result, the applevel value will - # be a . 
- # - # Note that we check for ULONG before UINT: this is needed on 32bit - # machines, where they are they same: if we checked for UINT before - # ULONG, we would cast to the wrong type. Note that this also means - # that on 32bit the UINT case will never be entered (because it is - # handled by the ULONG case). - restype = self.func.restype + def get_unsigned_which_fits_into_a_signed(self, w_ffitype): + # the same comment as get_signed apply + restype = w_ffitype.get_ffitype() call = self.func.call - if restype is libffi.types.ulong: - # special case - uintres = call(argchain, rffi.ULONG) - return space.wrap(uintres) - elif restype is libffi.types.pointer: - ptrres = call(argchain, rffi.VOIDP) - uintres = rffi.cast(rffi.ULONG, ptrres) - return space.wrap(uintres) - elif restype is libffi.types.uint: - intres = rffi.cast(rffi.LONG, call(argchain, rffi.UINT)) + if restype is libffi.types.uint: + assert not libffi.IS_32_BIT + # on 32bit machines, we should never get here, because it's a case + # which has already been handled by get_unsigned above. 
+ return rffi.cast(rffi.LONG, call(self.argchain, rffi.UINT)) elif restype is libffi.types.ushort: - intres = rffi.cast(rffi.LONG, call(argchain, rffi.USHORT)) + return rffi.cast(rffi.LONG, call(self.argchain, rffi.USHORT)) elif restype is libffi.types.uchar: - intres = rffi.cast(rffi.LONG, call(argchain, rffi.UCHAR)) + return rffi.cast(rffi.LONG, call(self.argchain, rffi.UCHAR)) else: - raise OperationError(space.w_ValueError, - space.wrap('Unsupported restype')) - return space.wrap(intres) + self.error(w_ffitype) - def _call_float(self, space, argchain): - # a separate function, which can be seen by the jit or not, - # depending on whether floats are supported - floatres = self.func.call(argchain, rffi.DOUBLE) - return space.wrap(floatres) - def _call_longlong(self, space, argchain): - # a separate function, which can be seen by the jit or not, - # depending on whether longlongs are supported - restype = self.func.restype - call = self.func.call - if restype is libffi.types.slonglong: - llres = call(argchain, rffi.LONGLONG) - return space.wrap(llres) - elif restype is libffi.types.ulonglong: - ullres = call(argchain, rffi.ULONGLONG) - return space.wrap(ullres) - else: - raise OperationError(space.w_ValueError, - space.wrap('Unsupported longlong restype')) + def get_pointer(self, w_ffitype): + ptrres = self.func.call(self.argchain, rffi.VOIDP) + return rffi.cast(rffi.ULONG, ptrres) - def _call_singlefloat(self, space, argchain): - # a separate function, which can be seen by the jit or not, - # depending on whether singlefloats are supported - sfres = self.func.call(argchain, rffi.FLOAT) - return space.wrap(float(sfres)) + def get_char(self, w_ffitype): + return self.func.call(self.argchain, rffi.UCHAR) - def getaddr(self, space): - """ - Return the physical address in memory of the function - """ - return space.wrap(rffi.cast(rffi.LONG, self.func.funcsym)) + def get_unichar(self, w_ffitype): + return self.func.call(self.argchain, rffi.WCHAR_T) + def 
get_float(self, w_ffitype):
+        return self.func.call(self.argchain, rffi.DOUBLE)
+    def get_singlefloat(self, w_ffitype):
+        return self.func.call(self.argchain, rffi.FLOAT)
+
+    def get_struct(self, w_ffitype, w_structdescr):
+        addr = self.func.call(self.argchain, rffi.LONG, is_struct=True)
+        return w_structdescr.fromaddress(self.space, addr)
+
+    def get_struct_rawffi(self, w_ffitype, w_structdescr):
+        uintval = self.func.call(self.argchain, rffi.ULONG, is_struct=True)
+        return w_structdescr.fromaddress(self.space, uintval)
+
+    def get_void(self, w_ffitype):
+        return self.func.call(self.argchain, lltype.Void)
+
 def unpack_argtypes(space, w_argtypes, w_restype):
     argtypes_w = [space.interp_w(W_FFIType, w_argtype)
@@ -512,3 +308,4 @@
         return space.wrap(W_CDLL(space, get_libc_name(), -1))
     except OSError, e:
         raise wrap_oserror(space, e)
+
diff --git a/pypy/module/_ffi/interp_struct.py b/pypy/module/_ffi/interp_struct.py
new file mode 100644
--- /dev/null
+++ b/pypy/module/_ffi/interp_struct.py
@@ -0,0 +1,319 @@
+from pypy.rpython.lltypesystem import lltype, rffi
+from pypy.rlib import clibffi
+from pypy.rlib import libffi
+from pypy.rlib import jit
+from pypy.rlib.rgc import must_be_light_finalizer
+from pypy.rlib.rarithmetic import r_uint, r_ulonglong, r_singlefloat, intmask
+from pypy.interpreter.baseobjspace import Wrappable
+from pypy.interpreter.typedef import TypeDef, interp_attrproperty
+from pypy.interpreter.gateway import interp2app, unwrap_spec
+from pypy.interpreter.error import operationerrfmt
+from pypy.objspace.std.typetype import type_typedef
+from pypy.module._ffi.interp_ffitype import W_FFIType, app_types
+from pypy.module._ffi.type_converter import FromAppLevelConverter, ToAppLevelConverter
+
+
+class W_Field(Wrappable):
+
+    def __init__(self, name, w_ffitype):
+        self.name = name
+        self.w_ffitype = w_ffitype
+        self.offset = -1
+
+    def __repr__(self):
+        return '' % (self.name, self.w_ffitype.name)
+
+@unwrap_spec(name=str)
+def descr_new_field(space, 
w_type, name, w_ffitype): + w_ffitype = space.interp_w(W_FFIType, w_ffitype) + return W_Field(name, w_ffitype) + +W_Field.typedef = TypeDef( + 'Field', + __new__ = interp2app(descr_new_field), + name = interp_attrproperty('name', W_Field), + ffitype = interp_attrproperty('w_ffitype', W_Field), + offset = interp_attrproperty('offset', W_Field), + ) + + +# ============================================================================== + +class FFIStructOwner(object): + """ + The only job of this class is to stay outside of the reference cycle + W__StructDescr -> W_FFIType -> W__StructDescr and free the ffistruct + """ + + def __init__(self, ffistruct): + self.ffistruct = ffistruct + + @must_be_light_finalizer + def __del__(self): + if self.ffistruct: + lltype.free(self.ffistruct, flavor='raw', track_allocation=True) + + +class W__StructDescr(Wrappable): + + def __init__(self, space, name): + self.space = space + self.w_ffitype = W_FFIType('struct %s' % name, clibffi.FFI_TYPE_NULL, + w_structdescr=self) + self.fields_w = None + self.name2w_field = {} + self._ffistruct_owner = None + + def define_fields(self, space, w_fields): + if self.fields_w is not None: + raise operationerrfmt(space.w_ValueError, + "%s's fields has already been defined", + self.w_ffitype.name) + space = self.space + fields_w = space.fixedview(w_fields) + # note that the fields_w returned by compute_size_and_alignement has a + # different annotation than the original: list(W_Root) vs list(W_Field) + size, alignment, fields_w = compute_size_and_alignement(space, fields_w) + self.fields_w = fields_w + field_types = [] # clibffi's types + for w_field in fields_w: + field_types.append(w_field.w_ffitype.get_ffitype()) + self.name2w_field[w_field.name] = w_field + # + # on CPython, the FFIStructOwner might go into gc.garbage and thus the + # __del__ never be called. 
Thus, we don't track the allocation of the
+        # malloc done inside this function, else the leakfinder might complain
+        ffistruct = clibffi.make_struct_ffitype_e(size, alignment, field_types,
+                                                  track_allocation=False)
+        self.w_ffitype.set_ffitype(ffistruct.ffistruct)
+        self._ffistruct_owner = FFIStructOwner(ffistruct)
+
+    def check_complete(self, space):
+        if self.fields_w is None:
+            raise operationerrfmt(space.w_ValueError, "%s has an incomplete type",
+                                  self.w_ffitype.name)
+
+    def allocate(self, space):
+        self.check_complete(space)
+        return W__StructInstance(self)
+
+    @unwrap_spec(addr=int)
+    def fromaddress(self, space, addr):
+        self.check_complete(space)
+        rawmem = rffi.cast(rffi.VOIDP, addr)
+        return W__StructInstance(self, allocate=False, autofree=True, rawmem=rawmem)
+
+    @jit.elidable_promote('0')
+    def get_type_and_offset_for_field(self, name):
+        try:
+            w_field = self.name2w_field[name]
+        except KeyError:
+            raise operationerrfmt(self.space.w_AttributeError, '%s', name)
+
+        return w_field.w_ffitype, w_field.offset
+
+
+
+@unwrap_spec(name=str)
+def descr_new_structdescr(space, w_type, name, w_fields=None):
+    descr = W__StructDescr(space, name)
+    if w_fields is not space.w_None:
+        descr.define_fields(space, w_fields)
+    return descr
+
+def round_up(size, alignment):
+    return (size + alignment - 1) & -alignment
+
+def compute_size_and_alignement(space, fields_w):
+    size = 0
+    alignment = 1
+    fields_w2 = []
+    for w_field in fields_w:
+        w_field = space.interp_w(W_Field, w_field)
+        fieldsize = w_field.w_ffitype.sizeof()
+        fieldalignment = w_field.w_ffitype.get_alignment()
+        alignment = max(alignment, fieldalignment)
+        size = round_up(size, fieldalignment)
+        w_field.offset = size
+        size += fieldsize
+        fields_w2.append(w_field)
+    #
+    size = round_up(size, alignment)
+    return size, alignment, fields_w2
+
+
+
+W__StructDescr.typedef = TypeDef(
+    '_StructDescr',
+    __new__ = interp2app(descr_new_structdescr),
+    ffitype = interp_attrproperty('w_ffitype', 
W__StructDescr), + define_fields = interp2app(W__StructDescr.define_fields), + allocate = interp2app(W__StructDescr.allocate), + fromaddress = interp2app(W__StructDescr.fromaddress), + ) + + +# ============================================================================== + +NULL = lltype.nullptr(rffi.VOIDP.TO) + +class W__StructInstance(Wrappable): + + _immutable_fields_ = ['structdescr', 'rawmem'] + + def __init__(self, structdescr, allocate=True, autofree=True, rawmem=NULL): + self.structdescr = structdescr + self.autofree = autofree + if allocate: + assert not rawmem + assert autofree + size = structdescr.w_ffitype.sizeof() + self.rawmem = lltype.malloc(rffi.VOIDP.TO, size, flavor='raw', + zero=True, add_memory_pressure=True) + else: + self.rawmem = rawmem + + @must_be_light_finalizer + def __del__(self): + if self.autofree and self.rawmem: + lltype.free(self.rawmem, flavor='raw') + self.rawmem = lltype.nullptr(rffi.VOIDP.TO) + + def getaddr(self, space): + addr = rffi.cast(rffi.ULONG, self.rawmem) + return space.wrap(addr) + + @unwrap_spec(name=str) + def getfield(self, space, name): + w_ffitype, offset = self.structdescr.get_type_and_offset_for_field(name) + field_getter = GetFieldConverter(space, self.rawmem, offset) + return field_getter.do_and_wrap(w_ffitype) + + @unwrap_spec(name=str) + def setfield(self, space, name, w_value): + w_ffitype, offset = self.structdescr.get_type_and_offset_for_field(name) + field_setter = SetFieldConverter(space, self.rawmem, offset) + field_setter.unwrap_and_do(w_ffitype, w_value) + + +class GetFieldConverter(ToAppLevelConverter): + """ + A converter used by W__StructInstance to get a field from the struct and + wrap it to the correct app-level type. 
+ """ + + def __init__(self, space, rawmem, offset): + self.space = space + self.rawmem = rawmem + self.offset = offset + + def get_longlong(self, w_ffitype): + return libffi.struct_getfield_longlong(libffi.types.slonglong, + self.rawmem, self.offset) + + def get_ulonglong(self, w_ffitype): + longlongval = libffi.struct_getfield_longlong(libffi.types.ulonglong, + self.rawmem, self.offset) + return r_ulonglong(longlongval) + + + def get_signed(self, w_ffitype): + return libffi.struct_getfield_int(w_ffitype.get_ffitype(), + self.rawmem, self.offset) + + def get_unsigned(self, w_ffitype): + value = libffi.struct_getfield_int(w_ffitype.get_ffitype(), + self.rawmem, self.offset) + return r_uint(value) + + get_unsigned_which_fits_into_a_signed = get_signed + get_pointer = get_unsigned + + def get_char(self, w_ffitype): + intval = libffi.struct_getfield_int(w_ffitype.get_ffitype(), + self.rawmem, self.offset) + return rffi.cast(rffi.UCHAR, intval) + + def get_unichar(self, w_ffitype): + intval = libffi.struct_getfield_int(w_ffitype.get_ffitype(), + self.rawmem, self.offset) + return rffi.cast(rffi.WCHAR_T, intval) + + def get_float(self, w_ffitype): + return libffi.struct_getfield_float(w_ffitype.get_ffitype(), + self.rawmem, self.offset) + + def get_singlefloat(self, w_ffitype): + return libffi.struct_getfield_singlefloat(w_ffitype.get_ffitype(), + self.rawmem, self.offset) + + def get_struct(self, w_ffitype, w_structdescr): + assert isinstance(w_structdescr, W__StructDescr) + rawmem = rffi.cast(rffi.CCHARP, self.rawmem) + innermem = rffi.cast(rffi.VOIDP, rffi.ptradd(rawmem, self.offset)) + # we return a reference to the inner struct, not a copy + # autofree=False because it's still owned by the parent struct + return W__StructInstance(w_structdescr, allocate=False, autofree=False, + rawmem=innermem) + + ## def get_void(self, w_ffitype): + ## ... 
+ + +class SetFieldConverter(FromAppLevelConverter): + """ + A converter used by W__StructInstance to convert an app-level object to + the corresponding low-level value and set the field of a structure. + """ + + def __init__(self, space, rawmem, offset): + self.space = space + self.rawmem = rawmem + self.offset = offset + + def handle_signed(self, w_ffitype, w_obj, intval): + libffi.struct_setfield_int(w_ffitype.get_ffitype(), self.rawmem, self.offset, + intval) + + def handle_unsigned(self, w_ffitype, w_obj, uintval): + libffi.struct_setfield_int(w_ffitype.get_ffitype(), self.rawmem, self.offset, + intmask(uintval)) + + handle_pointer = handle_signed + handle_char = handle_signed + handle_unichar = handle_signed + + def handle_longlong(self, w_ffitype, w_obj, longlongval): + libffi.struct_setfield_longlong(w_ffitype.get_ffitype(), + self.rawmem, self.offset, longlongval) + + def handle_float(self, w_ffitype, w_obj, floatval): + libffi.struct_setfield_float(w_ffitype.get_ffitype(), + self.rawmem, self.offset, floatval) + + def handle_singlefloat(self, w_ffitype, w_obj, singlefloatval): + libffi.struct_setfield_singlefloat(w_ffitype.get_ffitype(), + self.rawmem, self.offset, singlefloatval) + + def handle_struct(self, w_ffitype, w_structinstance): + rawmem = rffi.cast(rffi.CCHARP, self.rawmem) + dst = rffi.cast(rffi.VOIDP, rffi.ptradd(rawmem, self.offset)) + src = w_structinstance.rawmem + length = w_ffitype.sizeof() + rffi.c_memcpy(dst, src, length) + + ## def handle_char_p(self, w_ffitype, w_obj, strval): + ## ... + + ## def handle_unichar_p(self, w_ffitype, w_obj, unicodeval): + ## ... 
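(Aside on the layout rule implemented by compute_size_and_alignement earlier in this file: each field offset is rounded up to the field's own alignment, and the total size is rounded up to the largest field alignment, as in C. A minimal standalone sketch; layout is a hypothetical helper, with sizes and alignments passed in explicitly rather than taken from ffi types:)

```python
def round_up(size, alignment):
    # round size up to the next multiple of alignment (a power of two)
    return (size + alignment - 1) & -alignment

def layout(fields):
    # fields: list of (name, size, alignment) tuples; returns the total
    # struct size, the struct alignment, and a name -> offset mapping
    size, alignment, offsets = 0, 1, {}
    for name, fsize, falign in fields:
        alignment = max(alignment, falign)
        size = round_up(size, falign)   # align this field's offset
        offsets[name] = size
        size += fsize
    return round_up(size, alignment), alignment, offsets

# struct { char x; long y; } on a typical 64-bit ABI:
assert layout([('x', 1, 1), ('y', 8, 8)]) == (16, 8, {'x': 0, 'y': 8})
```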
+ + + + +W__StructInstance.typedef = TypeDef( + '_StructInstance', + getaddr = interp2app(W__StructInstance.getaddr), + getfield = interp2app(W__StructInstance.getfield), + setfield = interp2app(W__StructInstance.setfield), + ) diff --git a/pypy/module/_ffi/test/test_ffitype.py b/pypy/module/_ffi/test/test_ffitype.py new file mode 100644 --- /dev/null +++ b/pypy/module/_ffi/test/test_ffitype.py @@ -0,0 +1,39 @@ +from pypy.module._ffi.test.test_funcptr import BaseAppTestFFI + +class AppTestFFIType(BaseAppTestFFI): + + def test_simple_types(self): + from _ffi import types + assert str(types.sint) == "" + assert str(types.uint) == "" + assert types.sint.name == 'sint' + assert types.uint.name == 'uint' + + def test_sizeof(self): + from _ffi import types + assert types.sbyte.sizeof() == 1 + assert types.sint.sizeof() == 4 + + def test_typed_pointer(self): + from _ffi import types + intptr = types.Pointer(types.sint) # create a typed pointer to sint + assert intptr.deref_pointer() is types.sint + assert str(intptr) == '' + assert types.sint.deref_pointer() is None + raises(TypeError, "types.Pointer(42)") + + def test_pointer_identity(self): + from _ffi import types + x = types.Pointer(types.slong) + y = types.Pointer(types.slong) + z = types.Pointer(types.char) + assert x is y + assert x is not z + + def test_char_p_cached(self): + from _ffi import types + x = types.Pointer(types.char) + assert x is types.char_p + x = types.Pointer(types.unichar) + assert x is types.unichar_p + diff --git a/pypy/module/_ffi/test/test__ffi.py b/pypy/module/_ffi/test/test_funcptr.py rename from pypy/module/_ffi/test/test__ffi.py rename to pypy/module/_ffi/test/test_funcptr.py --- a/pypy/module/_ffi/test/test__ffi.py +++ b/pypy/module/_ffi/test/test_funcptr.py @@ -7,7 +7,7 @@ import os, sys, py -class AppTestFfi: +class BaseAppTestFFI(object): @classmethod def prepare_c_example(cls): @@ -36,7 +36,6 @@ eci = ExternalCompilationInfo(export_symbols=[]) return str(platform.compile([c_file], 
eci, 'x', standalone=False)) - def setup_class(cls): from pypy.rpython.lltypesystem import rffi from pypy.rlib.libffi import get_libc_name, CDLL, types @@ -52,7 +51,12 @@ pow = libm.getpointer('pow', [], types.void) pow_addr = rffi.cast(rffi.LONG, pow.funcsym) cls.w_pow_addr = space.wrap(pow_addr) - # + +class AppTestFFI(BaseAppTestFFI): + + def setup_class(cls): + BaseAppTestFFI.setup_class.im_func(cls) + space = cls.space # these are needed for test_single_float_args from ctypes import c_float f_12_34 = c_float(12.34).value @@ -78,11 +82,6 @@ res = dll.getfunc('Py_IsInitialized', [], types.slong)() assert res == 1 - def test_simple_types(self): - from _ffi import types - assert str(types.sint) == "" - assert str(types.uint) == "" - def test_callfunc(self): from _ffi import CDLL, types libm = CDLL(self.libm_name) @@ -263,29 +262,6 @@ assert list(array) == list('foobar\00') do_nothing.free_temp_buffers() - def test_typed_pointer(self): - from _ffi import types - intptr = types.Pointer(types.sint) # create a typed pointer to sint - assert intptr.deref_pointer() is types.sint - assert str(intptr) == '' - assert types.sint.deref_pointer() is None - raises(TypeError, "types.Pointer(42)") - - def test_pointer_identity(self): - from _ffi import types - x = types.Pointer(types.slong) - y = types.Pointer(types.slong) - z = types.Pointer(types.char) - assert x is y - assert x is not z - - def test_char_p_cached(self): - from _ffi import types - x = types.Pointer(types.char) - assert x is types.char_p - x = types.Pointer(types.unichar) - assert x is types.unichar_p - def test_typed_pointer_args(self): """ extern int dummy; // defined in test_void_result @@ -476,6 +452,51 @@ return p.x + p.y; } """ + from _ffi import CDLL, types, _StructDescr, Field + Point = _StructDescr('Point', [ + Field('x', types.slong), + Field('y', types.slong), + ]) + libfoo = CDLL(self.libfoo_name) + sum_point = libfoo.getfunc('sum_point', [Point.ffitype], types.slong) + # + p = Point.allocate() + 
p.setfield('x', 30) + p.setfield('y', 12) + res = sum_point(p) + assert res == 42 + + def test_byval_result(self): + """ + DLLEXPORT struct Point make_point(long x, long y) { + struct Point p; + p.x = x; + p.y = y; + return p; + } + """ + from _ffi import CDLL, types, _StructDescr, Field + Point = _StructDescr('Point', [ + Field('x', types.slong), + Field('y', types.slong), + ]) + libfoo = CDLL(self.libfoo_name) + make_point = libfoo.getfunc('make_point', [types.slong, types.slong], + Point.ffitype) + # + p = make_point(12, 34) + assert p.getfield('x') == 12 + assert p.getfield('y') == 34 + + # XXX: support for _rawffi structures should be killed as soon as we + # implement ctypes.Structure on top of _ffi. In the meantime, we support + # both + def test_byval_argument__rawffi(self): + """ + // defined above + struct Point; + DLLEXPORT long sum_point(struct Point p); + """ import _rawffi from _ffi import CDLL, types POINT = _rawffi.Structure([('x', 'l'), ('y', 'l')]) @@ -490,14 +511,10 @@ assert res == 42 p.free() - def test_byval_result(self): + def test_byval_result__rawffi(self): """ - DLLEXPORT struct Point make_point(long x, long y) { - struct Point p; - p.x = x; - p.y = y; - return p; - } + // defined above + DLLEXPORT struct Point make_point(long x, long y); """ import _rawffi from _ffi import CDLL, types @@ -511,6 +528,7 @@ assert p.y == 34 p.free() + def test_TypeError_numargs(self): from _ffi import CDLL, types libfoo = CDLL(self.libfoo_name) diff --git a/pypy/module/_ffi/test/test_struct.py b/pypy/module/_ffi/test/test_struct.py new file mode 100644 --- /dev/null +++ b/pypy/module/_ffi/test/test_struct.py @@ -0,0 +1,320 @@ +import sys +from pypy.conftest import gettestobjspace +from pypy.module._ffi.test.test_funcptr import BaseAppTestFFI +from pypy.module._ffi.interp_struct import compute_size_and_alignement, W_Field +from pypy.module._ffi.interp_ffitype import app_types, W_FFIType + + +class TestStruct(object): + + class FakeSpace(object): + def 
interp_w(self, cls, obj): + return obj + + def compute(self, ffitypes_w): + fields_w = [W_Field('', w_ffitype) for + w_ffitype in ffitypes_w] + return compute_size_and_alignement(self.FakeSpace(), fields_w) + + def sizeof(self, ffitypes_w): + size, aligned, fields_w = self.compute(ffitypes_w) + return size + + def test_compute_size(self): + T = app_types + byte_size = app_types.sbyte.sizeof() + long_size = app_types.slong.sizeof() + llong_size = app_types.slonglong.sizeof() + llong_align = app_types.slonglong.get_alignment() + # + assert llong_align >= 4 + assert self.sizeof([T.sbyte, T.slong]) == 2*long_size + assert self.sizeof([T.sbyte, T.slonglong]) == llong_align + llong_size + assert self.sizeof([T.sbyte, T.sbyte, T.slonglong]) == llong_align + llong_size + assert self.sizeof([T.sbyte, T.sbyte, T.sbyte, T.slonglong]) == llong_align + llong_size + assert self.sizeof([T.sbyte, T.sbyte, T.sbyte, T.sbyte, T.slonglong]) == llong_align + llong_size + assert self.sizeof([T.slonglong, T.sbyte]) == llong_size + llong_align + assert self.sizeof([T.slonglong, T.sbyte, T.sbyte]) == llong_size + llong_align + assert self.sizeof([T.slonglong, T.sbyte, T.sbyte, T.sbyte]) == llong_size + llong_align + assert self.sizeof([T.slonglong, T.sbyte, T.sbyte, T.sbyte, T.sbyte]) == llong_size + llong_align + +class AppTestStruct(BaseAppTestFFI): + + def setup_class(cls): + BaseAppTestFFI.setup_class.im_func(cls) + # + def read_raw_mem(self, addr, typename, length): + import ctypes + addr = ctypes.cast(addr, ctypes.c_void_p) + c_type = getattr(ctypes, typename) + array_type = ctypes.POINTER(c_type * length) + ptr_array = ctypes.cast(addr, array_type) + array = ptr_array[0] + lst = [array[i] for i in range(length)] + return lst + cls.w_read_raw_mem = cls.space.wrap(read_raw_mem) + # + from pypy.rlib import clibffi + from pypy.rlib.rarithmetic import r_uint + from pypy.rpython.lltypesystem import lltype, rffi + dummy_type = lltype.malloc(clibffi.FFI_TYPE_P.TO, flavor='raw') + 
dummy_type.c_size = r_uint(123) + dummy_type.c_alignment = rffi.cast(rffi.USHORT, 0) + dummy_type.c_type = rffi.cast(rffi.USHORT, 0) + cls.w_dummy_type = W_FFIType('dummy', dummy_type) + + def test__StructDescr(self): + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + fields = [ + Field('x', types.slong), + Field('y', types.slong), + ] + descr = _StructDescr('foo', fields) + assert descr.ffitype.sizeof() == longsize*2 + assert descr.ffitype.name == 'struct foo' + + def test_alignment(self): + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + fields = [ + Field('x', types.sbyte), + Field('y', types.slong), + ] + descr = _StructDescr('foo', fields) + assert descr.ffitype.sizeof() == longsize*2 + assert fields[0].offset == 0 + assert fields[1].offset == longsize # aligned to WORD + + def test_missing_field(self): + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + fields = [ + Field('x', types.slong), + Field('y', types.slong), + ] + descr = _StructDescr('foo', fields) + struct = descr.allocate() + raises(AttributeError, "struct.getfield('missing')") + raises(AttributeError, "struct.setfield('missing', 42)") + + def test_unknown_type(self): + from _ffi import _StructDescr, Field + fields = [ + Field('x', self.dummy_type), + ] + descr = _StructDescr('foo', fields) + struct = descr.allocate() + raises(TypeError, "struct.getfield('x')") + raises(TypeError, "struct.setfield('x', 42)") + + def test_getfield_setfield(self): + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + fields = [ + Field('x', types.slong), + Field('y', types.slong), + ] + descr = _StructDescr('foo', fields) + struct = descr.allocate() + struct.setfield('x', 42) + struct.setfield('y', 43) + assert struct.getfield('x') == 42 + assert struct.getfield('y') == 43 + mem = self.read_raw_mem(struct.getaddr(), 'c_long', 2) + assert mem == [42, 43] + + def 
test_getfield_setfield_signed_types(self): + import sys + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + fields = [ + Field('sbyte', types.sbyte), + Field('sshort', types.sshort), + Field('sint', types.sint), + Field('slong', types.slong), + ] + descr = _StructDescr('foo', fields) + struct = descr.allocate() + struct.setfield('sbyte', 128) + assert struct.getfield('sbyte') == -128 + struct.setfield('sshort', 32768) + assert struct.getfield('sshort') == -32768 + struct.setfield('sint', 43) + assert struct.getfield('sint') == 43 + struct.setfield('slong', sys.maxint+1) + assert struct.getfield('slong') == -sys.maxint-1 + struct.setfield('slong', sys.maxint*3) + assert struct.getfield('slong') == sys.maxint-2 + + def test_getfield_setfield_unsigned_types(self): + import sys + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + fields = [ + Field('ubyte', types.ubyte), + Field('ushort', types.ushort), + Field('uint', types.uint), + Field('ulong', types.ulong), + Field('char', types.char), + Field('unichar', types.unichar), + Field('ptr', types.void_p), + ] + descr = _StructDescr('foo', fields) + struct = descr.allocate() + struct.setfield('ubyte', -1) + assert struct.getfield('ubyte') == 255 + struct.setfield('ushort', -1) + assert struct.getfield('ushort') == 65535 + struct.setfield('uint', 43) + assert struct.getfield('uint') == 43 + struct.setfield('ulong', -1) + assert struct.getfield('ulong') == sys.maxint*2 + 1 + struct.setfield('ulong', sys.maxint*2 + 2) + assert struct.getfield('ulong') == 0 + struct.setfield('char', 'a') + assert struct.getfield('char') == 'a' + struct.setfield('unichar', u'\u1234') + assert struct.getfield('unichar') == u'\u1234' + struct.setfield('ptr', -1) + assert struct.getfield('ptr') == sys.maxint*2 + 1 + + def test_getfield_setfield_longlong(self): + import sys + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + fields = [ + Field('slonglong', 
types.slonglong),
+            Field('ulonglong', types.ulonglong),
+            ]
+        descr = _StructDescr('foo', fields)
+        struct = descr.allocate()
+        struct.setfield('slonglong', 9223372036854775808)
+        assert struct.getfield('slonglong') == -9223372036854775808
+        struct.setfield('ulonglong', -1)
+        assert struct.getfield('ulonglong') == 18446744073709551615
+        mem = self.read_raw_mem(struct.getaddr(), 'c_longlong', 2)
+        assert mem == [-9223372036854775808, -1]
+
+    def test_getfield_setfield_float(self):
+        import sys
+        from _ffi import _StructDescr, Field, types
+        longsize = types.slong.sizeof()
+        fields = [
+            Field('x', types.double),
+            ]
+        descr = _StructDescr('foo', fields)
+        struct = descr.allocate()
+        struct.setfield('x', 123.4)
+        assert struct.getfield('x') == 123.4
+        mem = self.read_raw_mem(struct.getaddr(), 'c_double', 1)
+        assert mem == [123.4]
+
+    def test_getfield_setfield_singlefloat(self):
+        import sys
+        from _ffi import _StructDescr, Field, types
+        longsize = types.slong.sizeof()
+        fields = [
+            Field('x', types.float),
+            ]
+        descr = _StructDescr('foo', fields)
+        struct = descr.allocate()
+        struct.setfield('x', 123.4) # this is a value which DOES lose
+                                    # precision in a single float
+        assert 0 < abs(struct.getfield('x') - 123.4) < 0.0001
+        #
+        struct.setfield('x', 123.5) # this is a value which does not lose
+                                    # precision in a single float
+        assert struct.getfield('x') == 123.5
+        mem = self.read_raw_mem(struct.getaddr(), 'c_float', 1)
+        assert mem == [123.5]
+
+    def test_define_fields(self):
+        from _ffi import _StructDescr, Field, types
+        longsize = types.slong.sizeof()
+        fields = [
+            Field('x', types.slong),
+            Field('y', types.slong),
+            ]
+        descr = _StructDescr('foo')
+        assert descr.ffitype.name == 'struct foo'
+        assert repr(descr.ffitype) == ''
+        raises(ValueError, "descr.ffitype.sizeof()")
+        raises(ValueError, "descr.allocate()")
+        #
+        descr.define_fields(fields)
+        assert repr(descr.ffitype) == ''
+        assert descr.ffitype.sizeof() == longsize*2
+        raises(ValueError, 
"descr.define_fields(fields)") + + def test_pointer_to_incomplete_struct(self): + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + fields = [ + Field('x', types.slong), + Field('y', types.slong), + ] + descr = _StructDescr('foo') + foo_ffitype = descr.ffitype + foo_p = types.Pointer(descr.ffitype) + assert foo_p.deref_pointer() is foo_ffitype + descr.define_fields(fields) + assert descr.ffitype is foo_ffitype + assert foo_p.deref_pointer() is foo_ffitype + assert types.Pointer(descr.ffitype) is foo_p + + def test_nested_structure(self): + from _ffi import _StructDescr, Field, types + longsize = types.slong.sizeof() + foo_fields = [ + Field('x', types.slong), + Field('y', types.slong), + ] + foo_descr = _StructDescr('foo', foo_fields) + # + bar_fields = [ + Field('x', types.slong), + Field('foo', foo_descr.ffitype), + ] + bar_descr = _StructDescr('bar', bar_fields) + assert bar_descr.ffitype.sizeof() == longsize*3 + # + struct = bar_descr.allocate() + struct.setfield('x', 40) + # reading a nested structure yields a reference to it + struct_foo = struct.getfield('foo') + struct_foo.setfield('x', 41) + struct_foo.setfield('y', 42) + mem = self.read_raw_mem(struct.getaddr(), 'c_long', 3) + assert mem == [40, 41, 42] + # + struct_foo2 = foo_descr.allocate() + struct_foo2.setfield('x', 141) + struct_foo2.setfield('y', 142) + # writing a nested structure copies its memory into the target + struct.setfield('foo', struct_foo2) + struct_foo2.setfield('x', 241) + struct_foo2.setfield('y', 242) + mem = self.read_raw_mem(struct.getaddr(), 'c_long', 3) + assert mem == [40, 141, 142] + mem = self.read_raw_mem(struct_foo2.getaddr(), 'c_long', 2) + assert mem == [241, 242] + + + + def test_compute_shape(self): + from _ffi import Structure, Field, types + class Point(Structure): + _fields_ = [ + Field('x', types.slong), + Field('y', types.slong), + ] + + longsize = types.slong.sizeof() + assert isinstance(Point.x, Field) + assert isinstance(Point.y, 
Field) + assert Point.x.offset == 0 + assert Point.y.offset == longsize + assert Point._struct_.ffitype.sizeof() == longsize*2 + assert Point._struct_.ffitype.name == 'struct Point' + diff --git a/pypy/module/_ffi/test/test_type_converter.py b/pypy/module/_ffi/test/test_type_converter.py new file mode 100644 --- /dev/null +++ b/pypy/module/_ffi/test/test_type_converter.py @@ -0,0 +1,128 @@ +import sys +from pypy.conftest import gettestobjspace +from pypy.rlib.rarithmetic import r_uint, r_singlefloat, r_longlong, r_ulonglong +from pypy.rlib.libffi import IS_32_BIT +from pypy.module._ffi.interp_ffitype import app_types, descr_new_pointer +from pypy.module._ffi.type_converter import FromAppLevelConverter, ToAppLevelConverter + +class DummyFromAppLevelConverter(FromAppLevelConverter): + + def handle_all(self, w_ffitype, w_obj, val): + self.lastval = val + + handle_signed = handle_all + handle_unsigned = handle_all + handle_pointer = handle_all + handle_char = handle_all + handle_unichar = handle_all + handle_longlong = handle_all + handle_char_p = handle_all + handle_unichar_p = handle_all + handle_float = handle_all + handle_singlefloat = handle_all + + def handle_struct(self, w_ffitype, w_structinstance): + self.lastval = w_structinstance + + def convert(self, w_ffitype, w_obj): + self.unwrap_and_do(w_ffitype, w_obj) + return self.lastval + + +class TestFromAppLevel(object): + + def setup_class(cls): + cls.space = gettestobjspace(usemodules=('_ffi',)) + converter = DummyFromAppLevelConverter(cls.space) + cls.from_app_level = staticmethod(converter.convert) + + def check(self, w_ffitype, w_obj, expected): + v = self.from_app_level(w_ffitype, w_obj) + assert v == expected + assert type(v) is type(expected) + + def test_int(self): + self.check(app_types.sint, self.space.wrap(42), 42) + self.check(app_types.sint, self.space.wrap(sys.maxint+1), -sys.maxint-1) + self.check(app_types.sint, self.space.wrap(sys.maxint*2), -2) + + def test_unsigned(self): + space = self.space 
+ self.check(app_types.uint, space.wrap(42), r_uint(42)) + self.check(app_types.uint, space.wrap(-1), r_uint(sys.maxint*2 +1)) + self.check(app_types.uint, space.wrap(sys.maxint*3), + r_uint(sys.maxint - 2)) + self.check(app_types.ulong, space.wrap(sys.maxint+12), + r_uint(sys.maxint+12)) + self.check(app_types.ulong, space.wrap(sys.maxint*2+3), r_uint(1)) + + def test_char(self): + space = self.space + self.check(app_types.char, space.wrap('a'), ord('a')) + self.check(app_types.unichar, space.wrap(u'\u1234'), 0x1234) + + def test_signed_longlong(self): + space = self.space + maxint32 = 2147483647 # we cannot really go above maxint on 64 bits + # (and we would not test anything, as there long + # is the same as long long) + expected = maxint32+1 + if IS_32_BIT: + expected = r_longlong(expected) + self.check(app_types.slonglong, space.wrap(maxint32+1), expected) + + def test_unsigned_longlong(self): + space = self.space + maxint64 = 9223372036854775807 # maxint64+1 does not fit into a + # longlong, but it does into a + # ulonglong + if IS_32_BIT: + # internally, the type converter always casts to signed longlongs + expected = r_longlong(-maxint64-1) + else: + # on 64 bit, ulonglong == uint (i.e., unsigned long in C terms) + expected = r_uint(maxint64+1) + self.check(app_types.ulonglong, space.wrap(maxint64+1), expected) + + def test_float_and_double(self): + space = self.space + self.check(app_types.float, space.wrap(12.34), r_singlefloat(12.34)) + self.check(app_types.double, space.wrap(12.34), 12.34) + + def test_pointer(self): + # pointers are "unsigned" at applevel, but signed at interp-level (for + # no good reason, at interp-level Signed or Unsigned makes no + # difference for passing bits around) + space = self.space + self.check(app_types.void_p, space.wrap(42), 42) + self.check(app_types.void_p, space.wrap(sys.maxint+1), -sys.maxint-1) + # + # typed pointers + w_ptr_sint = descr_new_pointer(space, None, app_types.sint) + self.check(w_ptr_sint, 
space.wrap(sys.maxint+1), -sys.maxint-1) + + + def test__as_ffi_pointer_(self): + space = self.space + w_MyPointerWrapper = space.appexec([], """(): + import _ffi + class MyPointerWrapper(object): + def __init__(self, value): + self.value = value + def _as_ffi_pointer_(self, ffitype): + assert ffitype is _ffi.types.void_p + return self.value + + return MyPointerWrapper + """) + w_obj = space.call_function(w_MyPointerWrapper, space.wrap(42)) + self.check(app_types.void_p, w_obj, 42) + + def test_strings(self): + # first, try automatic conversion from applevel + self.check(app_types.char_p, self.space.wrap('foo'), 'foo') + self.check(app_types.unichar_p, self.space.wrap(u'foo\u1234'), u'foo\u1234') + self.check(app_types.unichar_p, self.space.wrap('foo'), u'foo') + # then, try to pass explicit pointers + self.check(app_types.char_p, self.space.wrap(42), 42) + self.check(app_types.unichar_p, self.space.wrap(42), 42) diff --git a/pypy/module/_ffi/type_converter.py b/pypy/module/_ffi/type_converter.py new file mode 100644 --- /dev/null +++ b/pypy/module/_ffi/type_converter.py @@ -0,0 +1,364 @@ +from pypy.rlib import libffi +from pypy.rlib import jit +from pypy.rlib.rarithmetic import intmask, r_uint +from pypy.rpython.lltypesystem import rffi +from pypy.interpreter.error import operationerrfmt, OperationError +from pypy.module._rawffi.structure import W_StructureInstance, W_Structure +from pypy.module._ffi.interp_ffitype import app_types + +class FromAppLevelConverter(object): + """ + Unwrap an app-level object to the corresponding low-level type, following + the conversion rules which apply to the specified w_ffitype. Once + unwrapped, the value is passed to the corresponding handle_* method. + Subclasses should override the desired ones. 
+ """ + + def __init__(self, space): + self.space = space + + def unwrap_and_do(self, w_ffitype, w_obj): + from pypy.module._ffi.interp_struct import W__StructInstance + space = self.space + if w_ffitype.is_longlong(): + # note that we must check for longlong first, because either + # is_signed or is_unsigned returns true anyway + assert libffi.IS_32_BIT + self._longlong(w_ffitype, w_obj) + elif w_ffitype.is_signed(): + intval = space.truncatedint_w(w_obj) + self.handle_signed(w_ffitype, w_obj, intval) + elif self.maybe_handle_char_or_unichar_p(w_ffitype, w_obj): + # the object was already handled from within + # maybe_handle_char_or_unichar_p + pass + elif w_ffitype.is_pointer(): + w_obj = self.convert_pointer_arg_maybe(w_obj, w_ffitype) + intval = space.truncatedint_w(w_obj) + self.handle_pointer(w_ffitype, w_obj, intval) + elif w_ffitype.is_unsigned(): + uintval = r_uint(space.truncatedint_w(w_obj)) + self.handle_unsigned(w_ffitype, w_obj, uintval) + elif w_ffitype.is_char(): + intval = space.int_w(space.ord(w_obj)) + self.handle_char(w_ffitype, w_obj, intval) + elif w_ffitype.is_unichar(): + intval = space.int_w(space.ord(w_obj)) + self.handle_unichar(w_ffitype, w_obj, intval) + elif w_ffitype.is_double(): + self._float(w_ffitype, w_obj) + elif w_ffitype.is_singlefloat(): + self._singlefloat(w_ffitype, w_obj) + elif w_ffitype.is_struct(): + if isinstance(w_obj, W_StructureInstance): + self.handle_struct_rawffi(w_ffitype, w_obj) + else: + w_obj = space.interp_w(W__StructInstance, w_obj) + self.handle_struct(w_ffitype, w_obj) + else: + self.error(w_ffitype, w_obj) + + def _longlong(self, w_ffitype, w_obj): + # a separate function, which can be seen by the jit or not, + # depending on whether longlongs are supported + longlongval = self.space.truncatedlonglong_w(w_obj) + self.handle_longlong(w_ffitype, w_obj, longlongval) + + def _float(self, w_ffitype, w_obj): + # a separate function, which can be seen by the jit or not, + # depending on whether floats are 
supported + floatval = self.space.float_w(w_obj) + self.handle_float(w_ffitype, w_obj, floatval) + + def _singlefloat(self, w_ffitype, w_obj): + # a separate function, which can be seen by the jit or not, + # depending on whether singlefloats are supported + from pypy.rlib.rarithmetic import r_singlefloat + floatval = self.space.float_w(w_obj) + singlefloatval = r_singlefloat(floatval) + self.handle_singlefloat(w_ffitype, w_obj, singlefloatval) + + def maybe_handle_char_or_unichar_p(self, w_ffitype, w_obj): + w_type = jit.promote(self.space.type(w_obj)) + if w_ffitype.is_char_p() and w_type is self.space.w_str: + strval = self.space.str_w(w_obj) + self.handle_char_p(w_ffitype, w_obj, strval) + return True + elif w_ffitype.is_unichar_p() and (w_type is self.space.w_str or + w_type is self.space.w_unicode): + unicodeval = self.space.unicode_w(w_obj) + self.handle_unichar_p(w_ffitype, w_obj, unicodeval) + return True + return False + + def convert_pointer_arg_maybe(self, w_arg, w_argtype): + """ + Try to convert the argument by calling _as_ffi_pointer_() + """ + space = self.space + meth = space.lookup(w_arg, '_as_ffi_pointer_') # this also promotes the type + if meth: + return space.call_function(meth, w_arg, w_argtype) + else: + return w_arg + + def error(self, w_ffitype, w_obj): + raise operationerrfmt(self.space.w_TypeError, + 'Unsupported ffi type to convert: %s', + w_ffitype.name) + + def handle_signed(self, w_ffitype, w_obj, intval): + """ + intval: lltype.Signed + """ + self.error(w_ffitype, w_obj) + + def handle_unsigned(self, w_ffitype, w_obj, uintval): + """ + uintval: lltype.Unsigned + """ + self.error(w_ffitype, w_obj) + + def handle_pointer(self, w_ffitype, w_obj, intval): + """ + intval: lltype.Signed + """ + self.error(w_ffitype, w_obj) + + def handle_char(self, w_ffitype, w_obj, intval): + """ + intval: lltype.Signed + """ + self.error(w_ffitype, w_obj) + + def handle_unichar(self, w_ffitype, w_obj, intval): + """ + intval: lltype.Signed + """ + 
self.error(w_ffitype, w_obj) + + def handle_longlong(self, w_ffitype, w_obj, longlongval): + """ + longlongval: lltype.SignedLongLong + """ + self.error(w_ffitype, w_obj) + + def handle_char_p(self, w_ffitype, w_obj, strval): + """ + strval: interp-level str + """ + self.error(w_ffitype, w_obj) + + def handle_unichar_p(self, w_ffitype, w_obj, unicodeval): + """ + unicodeval: interp-level unicode + """ + self.error(w_ffitype, w_obj) + + def handle_float(self, w_ffitype, w_obj, floatval): + """ + floatval: lltype.Float + """ + self.error(w_ffitype, w_obj) + + def handle_singlefloat(self, w_ffitype, w_obj, singlefloatval): + """ + singlefloatval: lltype.SingleFloat + """ + self.error(w_ffitype, w_obj) + + def handle_struct(self, w_ffitype, w_structinstance): + """ + w_structinstance: W_StructureInstance + """ + self.error(w_ffitype, w_structinstance) + + def handle_struct_rawffi(self, w_ffitype, w_structinstance): + """ + This method should be killed as soon as we remove support for _rawffi structures + + w_structinstance: W_StructureInstance + """ + self.error(w_ffitype, w_structinstance) + + + +class ToAppLevelConverter(object): + """ + Wrap a low-level value to an app-level object, following the conversion + rules which apply to the specified w_ffitype. The value is got by calling + the get_* method corresponding to the w_ffitype. Subclasses should + override the desired ones. 
+ """ + + def __init__(self, space): + self.space = space + + def do_and_wrap(self, w_ffitype): + from pypy.module._ffi.interp_struct import W__StructDescr + space = self.space + if w_ffitype.is_longlong(): + # note that we must check for longlong first, because either + # is_signed or is_unsigned returns true anyway + assert libffi.IS_32_BIT + return self._longlong(w_ffitype) + elif w_ffitype.is_signed(): + intval = self.get_signed(w_ffitype) + return space.wrap(intval) + elif w_ffitype is app_types.ulong or w_ffitype is app_types.ulonglong: + # Note that the second check (for ulonglong) is meaningful only + # on 64 bit, because on 32 bit the ulonglong case would have been + # handled by the is_longlong() branch above. On 64 bit, ulonglong + # is essentially the same as ulong. + # + # We need to be careful when the return type is ULONG, because the + # value might not fit into a signed LONG, and thus might require + # an app-level long. This is why we need to treat it separately + # from the other unsigned types. 
+ uintval = self.get_unsigned(w_ffitype) + return space.wrap(uintval) + elif w_ffitype.is_unsigned(): # note that ulong is handled just before + intval = self.get_unsigned_which_fits_into_a_signed(w_ffitype) + return space.wrap(intval) + elif w_ffitype.is_pointer(): + uintval = self.get_pointer(w_ffitype) + return space.wrap(uintval) + elif w_ffitype.is_char(): + ucharval = self.get_char(w_ffitype) + return space.wrap(chr(ucharval)) + elif w_ffitype.is_unichar(): + wcharval = self.get_unichar(w_ffitype) + return space.wrap(unichr(wcharval)) + elif w_ffitype.is_double(): + return self._float(w_ffitype) + elif w_ffitype.is_singlefloat(): + return self._singlefloat(w_ffitype) + elif w_ffitype.is_struct(): + w_structdescr = w_ffitype.w_structdescr + if isinstance(w_structdescr, W__StructDescr): + return self.get_struct(w_ffitype, w_structdescr) + elif isinstance(w_structdescr, W_Structure): + return self.get_struct_rawffi(w_ffitype, w_structdescr) + else: + raise OperationError(self.space.w_TypeError, + self.space.wrap("Unsupported struct shape")) + elif w_ffitype.is_void(): + voidval = self.get_void(w_ffitype) + assert voidval is None + return space.w_None + else: + self.error(w_ffitype) + + def _longlong(self, w_ffitype): + # a separate function, which can be seen by the jit or not, + # depending on whether longlongs are supported + if w_ffitype is app_types.slonglong: + longlongval = self.get_longlong(w_ffitype) + return self.space.wrap(longlongval) + elif w_ffitype is app_types.ulonglong: + ulonglongval = self.get_ulonglong(w_ffitype) + return self.space.wrap(ulonglongval) + else: + self.error(w_ffitype) + + def _float(self, w_ffitype): + # a separate function, which can be seen by the jit or not, + # depending on whether floats are supported + floatval = self.get_float(w_ffitype) + return self.space.wrap(floatval) + + def _singlefloat(self, w_ffitype): + # a separate function, which can be seen by the jit or not, + # depending on whether singlefloats are supported 
+ singlefloatval = self.get_singlefloat(w_ffitype) + return self.space.wrap(float(singlefloatval)) + + def error(self, w_ffitype): + raise operationerrfmt(self.space.w_TypeError, + 'Unsupported ffi type to convert: %s', + w_ffitype.name) + + def get_longlong(self, w_ffitype): + """ + Return type: lltype.SignedLongLong + """ + self.error(w_ffitype) + + def get_ulonglong(self, w_ffitype): + """ + Return type: lltype.UnsignedLongLong + """ + self.error(w_ffitype) + + def get_signed(self, w_ffitype): + """ + Return type: lltype.Signed + """ + self.error(w_ffitype) + + def get_unsigned(self, w_ffitype): + """ + Return type: lltype.Unsigned + """ + self.error(w_ffitype) + + def get_unsigned_which_fits_into_a_signed(self, w_ffitype): + """ + Return type: lltype.Signed. + + We return Signed even if the input type is unsigned, because this way + we get an app-level int instead of a long. + """ + self.error(w_ffitype) + + def get_pointer(self, w_ffitype): + """ + Return type: lltype.Unsigned + """ + self.error(w_ffitype) + + def get_char(self, w_ffitype): + """ + Return type: rffi.UCHAR + """ + self.error(w_ffitype) + + def get_unichar(self, w_ffitype): + """ + Return type: rffi.WCHAR_T + """ + self.error(w_ffitype) + + def get_float(self, w_ffitype): + """ + Return type: lltype.Float + """ + self.error(w_ffitype) + + def get_singlefloat(self, w_ffitype): + """ + Return type: lltype.SingleFloat + """ + self.error(w_ffitype) + + def get_struct(self, w_ffitype, w_structdescr): + """ + Return type: lltype.Signed + (the address of the structure) + """ + self.error(w_ffitype) + + def get_struct_rawffi(self, w_ffitype, w_structdescr): + """ + This should be killed as soon as we kill support for _rawffi structures + + Return type: lltype.Unsigned + (the address of the structure) + """ + self.error(w_ffitype) + + def get_void(self, w_ffitype): + """ + Return type: None + """ + self.error(w_ffitype) diff --git a/pypy/module/_rawffi/interp_rawffi.py b/pypy/module/_rawffi/interp_rawffi.py --- 
a/pypy/module/_rawffi/interp_rawffi.py +++ b/pypy/module/_rawffi/interp_rawffi.py @@ -253,7 +253,7 @@ # XXX: this assumes that you have the _ffi module enabled. In the long # term, probably we will move the code for build structures and arrays # from _rawffi to _ffi - from pypy.module._ffi.interp_ffi import W_FFIType + from pypy.module._ffi.interp_ffitype import W_FFIType return W_FFIType('', self.get_basic_ffi_type(), self) @unwrap_spec(n=int) diff --git a/pypy/module/binascii/interp_crc32.py b/pypy/module/binascii/interp_crc32.py --- a/pypy/module/binascii/interp_crc32.py +++ b/pypy/module/binascii/interp_crc32.py @@ -61,7 +61,7 @@ crc_32_tab = map(r_uint, crc_32_tab) - at unwrap_spec(data='bufferstr', oldcrc='truncatedint') + at unwrap_spec(data='bufferstr', oldcrc='truncatedint_w') def crc32(space, data, oldcrc=0): "Compute the CRC-32 incrementally." diff --git a/pypy/module/pypyjit/test_pypy_c/test__ffi.py b/pypy/module/pypyjit/test_pypy_c/test__ffi.py --- a/pypy/module/pypyjit/test_pypy_c/test__ffi.py +++ b/pypy/module/pypyjit/test_pypy_c/test__ffi.py @@ -134,3 +134,31 @@ call = ops[idx] assert (call.args[0] == 'ConstClass(fabs)' or # e.g. OS/X int(call.args[0]) == fabs_addr) + + + def test__ffi_struct(self): + def main(): + from _ffi import _StructDescr, Field, types + fields = [ + Field('x', types.slong), + ] + descr = _StructDescr('foo', fields) + struct = descr.allocate() + i = 0 + while i < 300: + x = struct.getfield('x') # ID: getfield + x = x+1 + struct.setfield('x', x) # ID: setfield + i += 1 + return struct.getfield('x') + # + log = self.run(main, []) + loop, = log.loops_by_filename(self.filepath) + assert loop.match_by_id('getfield', """ + guard_not_invalidated(descr=...) 
+ i57 = getfield_raw(i46, descr=) + """) + assert loop.match_by_id('setfield', """ + setfield_raw(i44, i57, descr=) + """) + diff --git a/pypy/module/zlib/interp_zlib.py b/pypy/module/zlib/interp_zlib.py --- a/pypy/module/zlib/interp_zlib.py +++ b/pypy/module/zlib/interp_zlib.py @@ -20,7 +20,7 @@ return intmask((x ^ SIGN_EXTEND2) - SIGN_EXTEND2) - at unwrap_spec(string='bufferstr', start='truncatedint') + at unwrap_spec(string='bufferstr', start='truncatedint_w') def crc32(space, string, start = rzlib.CRC32_DEFAULT_START): """ crc32(string[, start]) -- Compute a CRC-32 checksum of string. @@ -41,7 +41,7 @@ return space.wrap(checksum) - at unwrap_spec(string='bufferstr', start='truncatedint') + at unwrap_spec(string='bufferstr', start='truncatedint_w') def adler32(space, string, start=rzlib.ADLER32_DEFAULT_START): """ adler32(string[, start]) -- Compute an Adler-32 checksum of string. diff --git a/pypy/rlib/clibffi.py b/pypy/rlib/clibffi.py --- a/pypy/rlib/clibffi.py +++ b/pypy/rlib/clibffi.py @@ -11,6 +11,7 @@ from pypy.rlib.rdynload import dlopen, dlclose, dlsym, dlsym_byordinal from pypy.rlib.rdynload import DLOpenError, DLLHANDLE from pypy.rlib import jit +from pypy.rlib.objectmodel import specialize from pypy.tool.autopath import pypydir from pypy.translator.tool.cbuild import ExternalCompilationInfo from pypy.translator.platform import platform @@ -141,6 +142,7 @@ FFI_TYPE_P = lltype.Ptr(lltype.ForwardReference()) FFI_TYPE_PP = rffi.CArrayPtr(FFI_TYPE_P) +FFI_TYPE_NULL = lltype.nullptr(FFI_TYPE_P.TO) class CConfig: _compilation_info_ = eci @@ -346,11 +348,13 @@ ('ffistruct', FFI_TYPE_P.TO), ('members', lltype.Array(FFI_TYPE_P)))) -def make_struct_ffitype_e(size, aligment, field_types): + at specialize.arg(3) +def make_struct_ffitype_e(size, aligment, field_types, track_allocation=True): """Compute the type of a structure. Returns a FFI_STRUCT_P out of which the 'ffistruct' member is a regular FFI_TYPE. 
""" - tpe = lltype.malloc(FFI_STRUCT_P.TO, len(field_types)+1, flavor='raw') + tpe = lltype.malloc(FFI_STRUCT_P.TO, len(field_types)+1, flavor='raw', + track_allocation=track_allocation) tpe.ffistruct.c_type = rffi.cast(rffi.USHORT, FFI_TYPE_STRUCT) tpe.ffistruct.c_size = rffi.cast(rffi.SIZE_T, size) tpe.ffistruct.c_alignment = rffi.cast(rffi.USHORT, aligment) diff --git a/pypy/rlib/libffi.py b/pypy/rlib/libffi.py --- a/pypy/rlib/libffi.py +++ b/pypy/rlib/libffi.py @@ -210,7 +210,7 @@ _immutable_fields_ = ['funcsym'] argtypes = [] - restype = lltype.nullptr(clibffi.FFI_TYPE_P.TO) + restype = clibffi.FFI_TYPE_NULL flags = 0 funcsym = lltype.nullptr(rffi.VOIDP.TO) @@ -416,6 +416,96 @@ def getaddressindll(self, name): return dlsym(self.lib, name) +# ====================================================================== + + at jit.oopspec('libffi_struct_getfield(ffitype, addr, offset)') +def struct_getfield_int(ffitype, addr, offset): + """ + Return the field of type ``ffitype`` at ``addr+offset``, widened to + lltype.Signed. + """ + for TYPE, ffitype2 in clibffi.ffitype_map_int_or_ptr: + if ffitype is ffitype2: + value = _struct_getfield(TYPE, addr, offset) + return rffi.cast(lltype.Signed, value) + assert False, "cannot find the given ffitype" + + + at jit.oopspec('libffi_struct_setfield(ffitype, addr, offset, value)') +def struct_setfield_int(ffitype, addr, offset, value): + """ + Set the field of type ``ffitype`` at ``addr+offset``. ``value`` is of + type lltype.Signed, and it's automatically converted to the right type. + """ + for TYPE, ffitype2 in clibffi.ffitype_map_int_or_ptr: + if ffitype is ffitype2: + value = rffi.cast(TYPE, value) + _struct_setfield(TYPE, addr, offset, value) + return + assert False, "cannot find the given ffitype" + + + at jit.oopspec('libffi_struct_getfield(ffitype, addr, offset)') +def struct_getfield_longlong(ffitype, addr, offset): + """ + Return the field of type ``ffitype`` at ``addr+offset``, casted to + lltype.LongLong. 
+ """ + value = _struct_getfield(lltype.SignedLongLong, addr, offset) + return value + + at jit.oopspec('libffi_struct_setfield(ffitype, addr, offset, value)') +def struct_setfield_longlong(ffitype, addr, offset, value): + """ + Set the field of type ``ffitype`` at ``addr+offset``. ``value`` is of + type lltype.LongLong + """ + _struct_setfield(lltype.SignedLongLong, addr, offset, value) + + + at jit.oopspec('libffi_struct_getfield(ffitype, addr, offset)') +def struct_getfield_float(ffitype, addr, offset): + value = _struct_getfield(lltype.Float, addr, offset) + return value + + at jit.oopspec('libffi_struct_setfield(ffitype, addr, offset, value)') +def struct_setfield_float(ffitype, addr, offset, value): + _struct_setfield(lltype.Float, addr, offset, value) + + + at jit.oopspec('libffi_struct_getfield(ffitype, addr, offset)') +def struct_getfield_singlefloat(ffitype, addr, offset): + value = _struct_getfield(lltype.SingleFloat, addr, offset) + return value + + at jit.oopspec('libffi_struct_setfield(ffitype, addr, offset, value)') +def struct_setfield_singlefloat(ffitype, addr, offset, value): + _struct_setfield(lltype.SingleFloat, addr, offset, value) + + + at specialize.arg(0) +def _struct_getfield(TYPE, addr, offset): + """ + Read the field of type TYPE at addr+offset. + addr is of type rffi.VOIDP, offset is an int. + """ + addr = rffi.ptradd(addr, offset) + PTR_FIELD = lltype.Ptr(rffi.CArray(TYPE)) + return rffi.cast(PTR_FIELD, addr)[0] + + + at specialize.arg(0) +def _struct_setfield(TYPE, addr, offset, value): + """ + Write the field of type TYPE at addr+offset. + addr is of type rffi.VOIDP, offset is an int. 
+ """ + addr = rffi.ptradd(addr, offset) + PTR_FIELD = lltype.Ptr(rffi.CArray(TYPE)) + rffi.cast(PTR_FIELD, addr)[0] = value + +# ====================================================================== + # These specialize.call_location's should really be specialize.arg(0), however # you can't hash a pointer obj, which the specialize machinery wants to do. # Given the present usage of these functions, it's good enough. diff --git a/pypy/rlib/rarithmetic.py b/pypy/rlib/rarithmetic.py --- a/pypy/rlib/rarithmetic.py +++ b/pypy/rlib/rarithmetic.py @@ -97,6 +97,9 @@ # XXX TODO: replace all int(n) by long(n) and fix everything that breaks. # XXX Then relax it and replace int(n) by n. def intmask(n): + """ + NOT_RPYTHON + """ if isinstance(n, objectmodel.Symbolic): return n # assume Symbolics don't overflow assert not isinstance(n, float) @@ -109,6 +112,9 @@ return int(n) def longlongmask(n): + """ + NOT_RPYTHON + """ assert isinstance(n, (int, long)) n = long(n) n &= LONGLONG_MASK diff --git a/pypy/rlib/test/test_libffi.py b/pypy/rlib/test/test_libffi.py --- a/pypy/rlib/test/test_libffi.py +++ b/pypy/rlib/test/test_libffi.py @@ -2,12 +2,16 @@ import py -from pypy.rlib.libffi import (CDLL, Func, get_libc_name, ArgChain, types, - IS_32_BIT, array_getitem, array_setitem) from pypy.rlib.rarithmetic import r_singlefloat, r_longlong, r_ulonglong from pypy.rlib.test.test_clibffi import BaseFfiTest, get_libm_name, make_struct_ffitype_e from pypy.rpython.lltypesystem import rffi, lltype from pypy.rpython.lltypesystem.ll2ctypes import ALLOCATED +from pypy.rlib.libffi import (CDLL, Func, get_libc_name, ArgChain, types, + IS_32_BIT, array_getitem, array_setitem) +from pypy.rlib.libffi import (struct_getfield_int, struct_setfield_int, + struct_getfield_longlong, struct_setfield_longlong, + struct_getfield_float, struct_setfield_float, + struct_getfield_singlefloat, struct_setfield_singlefloat) class TestLibffiMisc(BaseFfiTest): @@ -54,6 +58,33 @@ del lib assert not ALLOCATED + def 
test_struct_fields(self): + longsize = 4 if IS_32_BIT else 8 + POINT = lltype.Struct('POINT', + ('x', rffi.LONG), + ('y', rffi.SHORT), + ('z', rffi.VOIDP), + ) + y_ofs = longsize + z_ofs = longsize*2 + p = lltype.malloc(POINT, flavor='raw') + p.x = 42 + p.y = rffi.cast(rffi.SHORT, -1) + p.z = rffi.cast(rffi.VOIDP, 0x1234) + addr = rffi.cast(rffi.VOIDP, p) + assert struct_getfield_int(types.slong, addr, 0) == 42 + assert struct_getfield_int(types.sshort, addr, y_ofs) == -1 + assert struct_getfield_int(types.pointer, addr, z_ofs) == 0x1234 + # + struct_setfield_int(types.slong, addr, 0, 43) + struct_setfield_int(types.sshort, addr, y_ofs, 0x1234FFFE) # 0x1234 is masked out + struct_setfield_int(types.pointer, addr, z_ofs, 0x4321) + assert p.x == 43 + assert p.y == -2 + assert rffi.cast(rffi.LONG, p.z) == 0x4321 + # + lltype.free(p, flavor='raw') + def test_array_fields(self): POINT = lltype.Struct("POINT", ("x", lltype.Float), @@ -69,18 +100,81 @@ assert array_getitem(types.double, 16, points, 0, 8) == 2.0 assert array_getitem(types.double, 16, points, 1, 0) == 3.0 assert array_getitem(types.double, 16, points, 1, 8) == 4.0 - + # array_setitem(types.double, 16, points, 0, 0, 10.0) array_setitem(types.double, 16, points, 0, 8, 20.0) array_setitem(types.double, 16, points, 1, 0, 30.0) array_setitem(types.double, 16, points, 1, 8, 40.0) - + # assert array_getitem(types.double, 16, points, 0, 0) == 10.0 assert array_getitem(types.double, 16, points, 0, 8) == 20.0 assert array_getitem(types.double, 16, points, 1, 0) == 30.0 assert array_getitem(types.double, 16, points, 1, 8) == 40.0 + # + lltype.free(points, flavor="raw") - lltype.free(points, flavor="raw") + + def test_struct_fields_longlong(self): + POINT = lltype.Struct('POINT', + ('x', rffi.LONGLONG), + ('y', rffi.ULONGLONG) + ) + y_ofs = 8 + p = lltype.malloc(POINT, flavor='raw') + p.x = r_longlong(123) + p.y = r_ulonglong(456) + addr = rffi.cast(rffi.VOIDP, p) + assert struct_getfield_longlong(types.slonglong, 
addr, 0) == 123 + assert struct_getfield_longlong(types.ulonglong, addr, y_ofs) == 456 + # + v = rffi.cast(lltype.SignedLongLong, r_ulonglong(9223372036854775808)) + struct_setfield_longlong(types.slonglong, addr, 0, v) + struct_setfield_longlong(types.ulonglong, addr, y_ofs, r_longlong(-1)) + assert p.x == -9223372036854775808 + assert rffi.cast(lltype.UnsignedLongLong, p.y) == 18446744073709551615 + # + lltype.free(p, flavor='raw') + + def test_struct_fields_float(self): + POINT = lltype.Struct('POINT', + ('x', rffi.DOUBLE), + ('y', rffi.DOUBLE) + ) + y_ofs = 8 + p = lltype.malloc(POINT, flavor='raw') + p.x = 123.4 + p.y = 567.8 + addr = rffi.cast(rffi.VOIDP, p) + assert struct_getfield_float(types.double, addr, 0) == 123.4 + assert struct_getfield_float(types.double, addr, y_ofs) == 567.8 + # + struct_setfield_float(types.double, addr, 0, 321.0) + struct_setfield_float(types.double, addr, y_ofs, 876.5) + assert p.x == 321.0 + assert p.y == 876.5 + # + lltype.free(p, flavor='raw') + + def test_struct_fields_singlefloat(self): + POINT = lltype.Struct('POINT', + ('x', rffi.FLOAT), + ('y', rffi.FLOAT) + ) + y_ofs = 4 + p = lltype.malloc(POINT, flavor='raw') + p.x = r_singlefloat(123.4) + p.y = r_singlefloat(567.8) + addr = rffi.cast(rffi.VOIDP, p) + assert struct_getfield_singlefloat(types.double, addr, 0) == r_singlefloat(123.4) + assert struct_getfield_singlefloat(types.double, addr, y_ofs) == r_singlefloat(567.8) + # + struct_setfield_singlefloat(types.double, addr, 0, r_singlefloat(321.0)) + struct_setfield_singlefloat(types.double, addr, y_ofs, r_singlefloat(876.5)) + assert p.x == r_singlefloat(321.0) + assert p.y == r_singlefloat(876.5) + # + lltype.free(p, flavor='raw') + class TestLibffiCall(BaseFfiTest): """ diff --git a/pypy/rpython/rbuiltin.py b/pypy/rpython/rbuiltin.py --- a/pypy/rpython/rbuiltin.py +++ b/pypy/rpython/rbuiltin.py @@ -247,6 +247,11 @@ vlist = hop.inputargs(lltype.Signed) return vlist[0] +def rtype_longlongmask(hop): + 
hop.exception_cannot_occur() + vlist = hop.inputargs(lltype.SignedLongLong) + return vlist[0] + def rtype_builtin_min(hop): v1, v2 = hop.inputargs(hop.r_result, hop.r_result) hop.exception_cannot_occur() @@ -564,6 +569,7 @@ BUILTIN_TYPER[lltype.Ptr] = rtype_const_result BUILTIN_TYPER[lltype.runtime_type_info] = rtype_runtime_type_info BUILTIN_TYPER[rarithmetic.intmask] = rtype_intmask +BUILTIN_TYPER[rarithmetic.longlongmask] = rtype_longlongmask BUILTIN_TYPER[objectmodel.hlinvoke] = rtype_hlinvoke diff --git a/pypy/rpython/test/test_rbuiltin.py b/pypy/rpython/test/test_rbuiltin.py --- a/pypy/rpython/test/test_rbuiltin.py +++ b/pypy/rpython/test/test_rbuiltin.py @@ -5,7 +5,7 @@ from pypy.rlib.debug import llinterpcall from pypy.rpython.lltypesystem import lltype from pypy.tool import udir -from pypy.rlib.rarithmetic import intmask, is_valid_int +from pypy.rlib.rarithmetic import intmask, longlongmask, r_int64, is_valid_int from pypy.rlib.rarithmetic import r_int, r_uint, r_longlong, r_ulonglong from pypy.annotation.builtin import * from pypy.rpython.test.tool import BaseRtypingTest, LLRtypeMixin, OORtypeMixin @@ -79,6 +79,16 @@ res = self.interpret(f, [r_uint(5)]) assert type(res) is int and res == 5 + def test_longlongmask(self): + def f(x=r_ulonglong): + try: + return longlongmask(x) + except ValueError: + return 0 + + res = self.interpret(f, [r_ulonglong(5)]) + assert type(res) is r_int64 and res == 5 + def test_rbuiltin_list(self): def f(): l=list((1,2,3)) diff --git a/pypy/translator/backendopt/test/test_finalizer.py b/pypy/translator/backendopt/test/test_finalizer.py --- a/pypy/translator/backendopt/test/test_finalizer.py +++ b/pypy/translator/backendopt/test/test_finalizer.py @@ -84,8 +84,8 @@ def __del__(self): if self.x: + lltype.free(self.x, flavor='raw') self.x = lltype.nullptr(S) - lltype.free(self.x, flavor='raw') def f(): return A() diff --git a/pypy/translator/c/test/test_typed.py b/pypy/translator/c/test/test_typed.py --- 
a/pypy/translator/c/test/test_typed.py +++ b/pypy/translator/c/test/test_typed.py @@ -885,3 +885,13 @@ assert res == 'acquire, hello, raised, release' res = f(2) assert res == 'acquire, hello, raised, release' + + def test_longlongmask(self): + from pypy.rlib.rarithmetic import longlongmask, r_ulonglong + def func(n): + m = r_ulonglong(n) + m *= 100000 + return longlongmask(m) + f = self.getcompiled(func, [int]) + res = f(-2000000000) + assert res == -200000000000000 diff --git a/pypy/translator/cli/test/test_builtin.py b/pypy/translator/cli/test/test_builtin.py --- a/pypy/translator/cli/test/test_builtin.py +++ b/pypy/translator/cli/test/test_builtin.py @@ -16,7 +16,10 @@ test_os_isdir = skip_os test_os_dup_oo = skip_os test_os_access = skip_os - + + def test_longlongmask(self): + py.test.skip("fix me") + def test_builtin_math_frexp(self): self._skip_powerpc("Mono math floating point problem") BaseTestBuiltin.test_builtin_math_frexp(self) diff --git a/pypy/translator/jvm/test/test_builtin.py b/pypy/translator/jvm/test/test_builtin.py --- a/pypy/translator/jvm/test/test_builtin.py +++ b/pypy/translator/jvm/test/test_builtin.py @@ -47,6 +47,9 @@ res = self.interpret(fn, []) assert stat.S_ISREG(res) + def test_longlongmask(self): + py.test.skip("fix me") + class TestJvmTime(JvmTest, BaseTestTime): pass From noreply at buildbot.pypy.org Thu May 17 16:35:13 2012 From: noreply at buildbot.pypy.org (antocuni) Date: Thu, 17 May 2012 16:35:13 +0200 (CEST) Subject: [pypy-commit] pypy default: update whatsnew Message-ID: <20120517143513.64A8E82D49@wyvern.cs.uni-duesseldorf.de> Author: Antonio Cuni Branch: Changeset: r55128:475b246431f9 Date: 2012-05-17 16:34 +0200 http://bitbucket.org/pypy/pypy/changeset/475b246431f9/ Log: update whatsnew diff --git a/pypy/doc/whatsnew-1.9.rst b/pypy/doc/whatsnew-1.9.rst --- a/pypy/doc/whatsnew-1.9.rst +++ b/pypy/doc/whatsnew-1.9.rst @@ -66,7 +66,13 @@ .. branch: win64-stage1 .. branch: zlib-mem-pressure +.. 
branch: ffistruct +The ``ffistruct`` branch adds a very low level way to express C structures +with _ffi in a very JIT-friendly way + + .. "uninteresting" branches that we should just ignore for the whatsnew: .. branch: sanitize-finally-stack -.. branch: revive-dlltool (preliminary work for sepcomp) +.. branch: revive-dlltool + (preliminary work for sepcomp) From noreply at buildbot.pypy.org Thu May 17 16:40:06 2012 From: noreply at buildbot.pypy.org (arigo) Date: Thu, 17 May 2012 16:40:06 +0200 (CEST) Subject: [pypy-commit] pypy default: Typo. Mark one of the few branches I did and merged as non-interesting. Message-ID: <20120517144006.BA5B182D49@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r55129:6378d049ae6a Date: 2012-05-17 16:39 +0200 http://bitbucket.org/pypy/pypy/changeset/6378d049ae6a/ Log: Typo. Mark one of the few branches I did and merged as non- interesting. diff --git a/pypy/doc/whatsnew-1.9.rst b/pypy/doc/whatsnew-1.9.rst --- a/pypy/doc/whatsnew-1.9.rst +++ b/pypy/doc/whatsnew-1.9.rst @@ -7,11 +7,10 @@ .. branch: array_equal .. branch: better-jit-hooks-2 -.. branch: exception-cannot-occur .. branch: faster-heapcache .. branch: faster-str-decode-escape .. branch: float-bytes -Added some pritives for dealing with floats as raw bytes. +Added some primitives for dealing with floats as raw bytes. .. branch: float-bytes-2 Added more float byte primitives. .. branch: jit-frame-counter @@ -73,6 +72,7 @@ .. "uninteresting" branches that we should just ignore for the whatsnew: +.. branch: exception-cannot-occur .. branch: sanitize-finally-stack .. branch: revive-dlltool (preliminary work for sepcomp) From noreply at buildbot.pypy.org Fri May 18 10:38:17 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 18 May 2012 10:38:17 +0200 (CEST) Subject: [pypy-commit] pypy default: Run doc/tool/makeref.py. 
Message-ID: <20120518083817.2D84A8203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: Changeset: r55130:1ebc9c1ec862 Date: 2012-05-18 10:36 +0200 http://bitbucket.org/pypy/pypy/changeset/1ebc9c1ec862/ Log: Run doc/tool/makeref.py. diff --git a/pypy/doc/_ref.txt b/pypy/doc/_ref.txt --- a/pypy/doc/_ref.txt +++ b/pypy/doc/_ref.txt @@ -84,7 +84,6 @@ .. _`pypy/rlib/rbigint.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/rlib/rbigint.py .. _`pypy/rlib/rrandom.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/rlib/rrandom.py .. _`pypy/rlib/rsocket.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/rlib/rsocket.py -.. _`pypy/rlib/rstack.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/rlib/rstack.py .. _`pypy/rlib/streamio.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/rlib/streamio.py .. _`pypy/rlib/test`: https://bitbucket.org/pypy/pypy/src/default/pypy/rlib/test/ .. _`pypy/rlib/unroll.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/rlib/unroll.py @@ -97,7 +96,6 @@ .. _`pypy/rpython/memory/gc/hybrid.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/rpython/memory/gc/hybrid.py .. _`pypy/rpython/memory/gc/markcompact.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/rpython/memory/gc/markcompact.py .. _`pypy/rpython/memory/gc/marksweep.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/rpython/memory/gc/marksweep.py -.. _`pypy/rpython/memory/gc/minimark.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/rpython/memory/gc/minimark.py .. _`pypy/rpython/memory/gc/minimarkpage.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/rpython/memory/gc/minimarkpage.py .. _`pypy/rpython/memory/gc/semispace.py`: https://bitbucket.org/pypy/pypy/src/default/pypy/rpython/memory/gc/semispace.py .. _`pypy/rpython/ootypesystem/`: https://bitbucket.org/pypy/pypy/src/default/pypy/rpython/ootypesystem/ @@ -116,8 +114,8 @@ .. _`pypy/translator/backendopt/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/backendopt/ .. 
_`pypy/translator/c/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/c/ .. _`pypy/translator/c/src/stacklet/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/c/src/stacklet/ +.. _`pypy/translator/c/src/stacklet/stacklet.h`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/c/src/stacklet/stacklet.h .. _`pypy/translator/cli/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/cli/ .. _`pypy/translator/goal/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/goal/ .. _`pypy/translator/jvm/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/jvm/ -.. _`pypy/translator/stackless/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/stackless/ .. _`pypy/translator/tool/`: https://bitbucket.org/pypy/pypy/src/default/pypy/translator/tool/ From noreply at buildbot.pypy.org Fri May 18 10:38:36 2012 From: noreply at buildbot.pypy.org (arigo) Date: Fri, 18 May 2012 10:38:36 +0200 (CEST) Subject: [pypy-commit] pypy stm-thread: hg merge default Message-ID: <20120518083836.7E1158203C@wyvern.cs.uni-duesseldorf.de> Author: Armin Rigo Branch: stm-thread Changeset: r55131:eb0c39e72133 Date: 2012-05-18 10:37 +0200 http://bitbucket.org/pypy/pypy/changeset/eb0c39e72133/ Log: hg merge default diff too long, truncating to 10000 out of 562348 lines diff --git a/lib-python/3.2/__future__.py b/lib-python/3.2/__future__.py deleted file mode 100644 --- a/lib-python/3.2/__future__.py +++ /dev/null @@ -1,134 +0,0 @@ -"""Record of phased-in incompatible language changes. 
- -Each line is of the form: - - FeatureName = "_Feature(" OptionalRelease "," MandatoryRelease "," - CompilerFlag ")" - -where, normally, OptionalRelease < MandatoryRelease, and both are 5-tuples -of the same form as sys.version_info: - - (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int - PY_MINOR_VERSION, # the 1; an int - PY_MICRO_VERSION, # the 0; an int - PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string - PY_RELEASE_SERIAL # the 3; an int - ) - -OptionalRelease records the first release in which - - from __future__ import FeatureName - -was accepted. - -In the case of MandatoryReleases that have not yet occurred, -MandatoryRelease predicts the release in which the feature will become part -of the language. - -Else MandatoryRelease records when the feature became part of the language; -in releases at or after that, modules no longer need - - from __future__ import FeatureName - -to use the feature in question, but may continue to use such imports. - -MandatoryRelease may also be None, meaning that a planned feature got -dropped. - -Instances of class _Feature have two corresponding methods, -.getOptionalRelease() and .getMandatoryRelease(). - -CompilerFlag is the (bitfield) flag that should be passed in the fourth -argument to the builtin function compile() to enable the feature in -dynamically compiled code. This flag is stored in the .compiler_flag -attribute on _Future instances. These values must match the appropriate -#defines of CO_xxx flags in Include/compile.h. - -No feature line is ever to be deleted from this file. -""" - -all_feature_names = [ - "nested_scopes", - "generators", - "division", - "absolute_import", - "with_statement", - "print_function", - "unicode_literals", - "barry_as_FLUFL", -] - -__all__ = ["all_feature_names"] + all_feature_names - -# The CO_xxx symbols are defined here under the same names used by -# compile.h, so that an editor search will find them here. 
However, -# they're not exported in __all__, because they don't really belong to -# this module. -CO_NESTED = 0x0010 # nested_scopes -CO_GENERATOR_ALLOWED = 0 # generators (obsolete, was 0x1000) -CO_FUTURE_DIVISION = 0x2000 # division -CO_FUTURE_ABSOLUTE_IMPORT = 0x4000 # perform absolute imports by default -CO_FUTURE_WITH_STATEMENT = 0x8000 # with statement -CO_FUTURE_PRINT_FUNCTION = 0x10000 # print function -CO_FUTURE_UNICODE_LITERALS = 0x20000 # unicode string literals -CO_FUTURE_BARRY_AS_BDFL = 0x40000 - -class _Feature: - def __init__(self, optionalRelease, mandatoryRelease, compiler_flag): - self.optional = optionalRelease - self.mandatory = mandatoryRelease - self.compiler_flag = compiler_flag - - def getOptionalRelease(self): - """Return first release in which this feature was recognized. - - This is a 5-tuple, of the same form as sys.version_info. - """ - - return self.optional - - def getMandatoryRelease(self): - """Return release in which this feature will become mandatory. - - This is a 5-tuple, of the same form as sys.version_info, or, if - the feature was dropped, is None. 
- """ - - return self.mandatory - - def __repr__(self): - return "_Feature" + repr((self.optional, - self.mandatory, - self.compiler_flag)) - -nested_scopes = _Feature((2, 1, 0, "beta", 1), - (2, 2, 0, "alpha", 0), - CO_NESTED) - -generators = _Feature((2, 2, 0, "alpha", 1), - (2, 3, 0, "final", 0), - CO_GENERATOR_ALLOWED) - -division = _Feature((2, 2, 0, "alpha", 2), - (3, 0, 0, "alpha", 0), - CO_FUTURE_DIVISION) - -absolute_import = _Feature((2, 5, 0, "alpha", 1), - (2, 7, 0, "alpha", 0), - CO_FUTURE_ABSOLUTE_IMPORT) - -with_statement = _Feature((2, 5, 0, "alpha", 1), - (2, 6, 0, "alpha", 0), - CO_FUTURE_WITH_STATEMENT) - -print_function = _Feature((2, 6, 0, "alpha", 2), - (3, 0, 0, "alpha", 0), - CO_FUTURE_PRINT_FUNCTION) - -unicode_literals = _Feature((2, 6, 0, "alpha", 2), - (3, 0, 0, "alpha", 0), - CO_FUTURE_UNICODE_LITERALS) - -barry_as_FLUFL = _Feature((3, 1, 0, "alpha", 2), - (3, 9, 0, "alpha", 0), - CO_FUTURE_BARRY_AS_BDFL) diff --git a/lib-python/3.2/__phello__.foo.py b/lib-python/3.2/__phello__.foo.py deleted file mode 100644 --- a/lib-python/3.2/__phello__.foo.py +++ /dev/null @@ -1,1 +0,0 @@ -# This file exists as a helper for the test.test_frozen module. diff --git a/lib-python/3.2/_abcoll.py b/lib-python/3.2/_abcoll.py deleted file mode 100644 --- a/lib-python/3.2/_abcoll.py +++ /dev/null @@ -1,623 +0,0 @@ -# Copyright 2007 Google, Inc. All Rights Reserved. -# Licensed to PSF under a Contributor Agreement. - -"""Abstract Base Classes (ABCs) for collections, according to PEP 3119. - -DON'T USE THIS MODULE DIRECTLY! The classes here should be imported -via collections; they are defined here only to alleviate certain -bootstrapping issues. Unit tests are in test_collections. 
-""" - -from abc import ABCMeta, abstractmethod -import sys - -__all__ = ["Hashable", "Iterable", "Iterator", - "Sized", "Container", "Callable", - "Set", "MutableSet", - "Mapping", "MutableMapping", - "MappingView", "KeysView", "ItemsView", "ValuesView", - "Sequence", "MutableSequence", - "ByteString", - ] - - -### collection related types which are not exposed through builtin ### -## iterators ## -bytes_iterator = type(iter(b'')) -bytearray_iterator = type(iter(bytearray())) -#callable_iterator = ??? -dict_keyiterator = type(iter({}.keys())) -dict_valueiterator = type(iter({}.values())) -dict_itemiterator = type(iter({}.items())) -list_iterator = type(iter([])) -list_reverseiterator = type(iter(reversed([]))) -range_iterator = type(iter(range(0))) -set_iterator = type(iter(set())) -str_iterator = type(iter("")) -tuple_iterator = type(iter(())) -zip_iterator = type(iter(zip())) -## views ## -dict_keys = type({}.keys()) -dict_values = type({}.values()) -dict_items = type({}.items()) -## misc ## -dict_proxy = type(type.__dict__) - - -### ONE-TRICK PONIES ### - -class Hashable(metaclass=ABCMeta): - - @abstractmethod - def __hash__(self): - return 0 - - @classmethod - def __subclasshook__(cls, C): - if cls is Hashable: - for B in C.__mro__: - if "__hash__" in B.__dict__: - if B.__dict__["__hash__"]: - return True - break - return NotImplemented - - -class Iterable(metaclass=ABCMeta): - - @abstractmethod - def __iter__(self): - while False: - yield None - - @classmethod - def __subclasshook__(cls, C): - if cls is Iterable: - if any("__iter__" in B.__dict__ for B in C.__mro__): - return True - return NotImplemented - - -class Iterator(Iterable): - - @abstractmethod - def __next__(self): - raise StopIteration - - def __iter__(self): - return self - - @classmethod - def __subclasshook__(cls, C): - if cls is Iterator: - if (any("__next__" in B.__dict__ for B in C.__mro__) and - any("__iter__" in B.__dict__ for B in C.__mro__)): - return True - return NotImplemented - 
-Iterator.register(bytes_iterator) -Iterator.register(bytearray_iterator) -#Iterator.register(callable_iterator) -Iterator.register(dict_keyiterator) -Iterator.register(dict_valueiterator) -Iterator.register(dict_itemiterator) -Iterator.register(list_iterator) -Iterator.register(list_reverseiterator) -Iterator.register(range_iterator) -Iterator.register(set_iterator) -Iterator.register(str_iterator) -Iterator.register(tuple_iterator) -Iterator.register(zip_iterator) - -class Sized(metaclass=ABCMeta): - - @abstractmethod - def __len__(self): - return 0 - - @classmethod - def __subclasshook__(cls, C): - if cls is Sized: - if any("__len__" in B.__dict__ for B in C.__mro__): - return True - return NotImplemented - - -class Container(metaclass=ABCMeta): - - @abstractmethod - def __contains__(self, x): - return False - - @classmethod - def __subclasshook__(cls, C): - if cls is Container: - if any("__contains__" in B.__dict__ for B in C.__mro__): - return True - return NotImplemented - - -class Callable(metaclass=ABCMeta): - - @abstractmethod - def __call__(self, *args, **kwds): - return False - - @classmethod - def __subclasshook__(cls, C): - if cls is Callable: - if any("__call__" in B.__dict__ for B in C.__mro__): - return True - return NotImplemented - - -### SETS ### - - -class Set(Sized, Iterable, Container): - - """A set is a finite, iterable container. - - This class provides concrete generic implementations of all - methods except for __contains__, __iter__ and __len__. - - To override the comparisons (presumably for speed, as the - semantics are fixed), all you have to do is redefine __le__ and - then the other operations will automatically follow suit. 
- """ - - def __le__(self, other): - if not isinstance(other, Set): - return NotImplemented - if len(self) > len(other): - return False - for elem in self: - if elem not in other: - return False - return True - - def __lt__(self, other): - if not isinstance(other, Set): - return NotImplemented - return len(self) < len(other) and self.__le__(other) - - def __gt__(self, other): - if not isinstance(other, Set): - return NotImplemented - return other < self - - def __ge__(self, other): - if not isinstance(other, Set): - return NotImplemented - return other <= self - - def __eq__(self, other): - if not isinstance(other, Set): - return NotImplemented - return len(self) == len(other) and self.__le__(other) - - def __ne__(self, other): - return not (self == other) - - @classmethod - def _from_iterable(cls, it): - '''Construct an instance of the class from any iterable input. - - Must override this method if the class constructor signature - does not accept an iterable for an input. - ''' - return cls(it) - - def __and__(self, other): - if not isinstance(other, Iterable): - return NotImplemented - return self._from_iterable(value for value in other if value in self) - - def isdisjoint(self, other): - for value in other: - if value in self: - return False - return True - - def __or__(self, other): - if not isinstance(other, Iterable): - return NotImplemented - chain = (e for s in (self, other) for e in s) - return self._from_iterable(chain) - - def __sub__(self, other): - if not isinstance(other, Set): - if not isinstance(other, Iterable): - return NotImplemented - other = self._from_iterable(other) - return self._from_iterable(value for value in self - if value not in other) - - def __xor__(self, other): - if not isinstance(other, Set): - if not isinstance(other, Iterable): - return NotImplemented - other = self._from_iterable(other) - return (self - other) | (other - self) - - def _hash(self): - """Compute the hash value of a set. 
- - Note that we don't define __hash__: not all sets are hashable. - But if you define a hashable set type, its __hash__ should - call this function. - - This must be compatible __eq__. - - All sets ought to compare equal if they contain the same - elements, regardless of how they are implemented, and - regardless of the order of the elements; so there's not much - freedom for __eq__ or __hash__. We match the algorithm used - by the built-in frozenset type. - """ - MAX = sys.maxsize - MASK = 2 * MAX + 1 - n = len(self) - h = 1927868237 * (n + 1) - h &= MASK - for x in self: - hx = hash(x) - h ^= (hx ^ (hx << 16) ^ 89869747) * 3644798167 - h &= MASK - h = h * 69069 + 907133923 - h &= MASK - if h > MAX: - h -= MASK + 1 - if h == -1: - h = 590923713 - return h - -Set.register(frozenset) - - -class MutableSet(Set): - - @abstractmethod - def add(self, value): - """Add an element.""" - raise NotImplementedError - - @abstractmethod - def discard(self, value): - """Remove an element. Do not raise an exception if absent.""" - raise NotImplementedError - - def remove(self, value): - """Remove an element. If not a member, raise a KeyError.""" - if value not in self: - raise KeyError(value) - self.discard(value) - - def pop(self): - """Return the popped value. Raise KeyError if empty.""" - it = iter(self) - try: - value = next(it) - except StopIteration: - raise KeyError - self.discard(value) - return value - - def clear(self): - """This is slow (creates N new iterators!) 
but effective.""" - try: - while True: - self.pop() - except KeyError: - pass - - def __ior__(self, it): - for value in it: - self.add(value) - return self - - def __iand__(self, it): - for value in (self - it): - self.discard(value) - return self - - def __ixor__(self, it): - if it is self: - self.clear() - else: - if not isinstance(it, Set): - it = self._from_iterable(it) - for value in it: - if value in self: - self.discard(value) - else: - self.add(value) - return self - - def __isub__(self, it): - if it is self: - self.clear() - else: - for value in it: - self.discard(value) - return self - -MutableSet.register(set) - - -### MAPPINGS ### - - -class Mapping(Sized, Iterable, Container): - - @abstractmethod - def __getitem__(self, key): - raise KeyError - - def get(self, key, default=None): - try: - return self[key] - except KeyError: - return default - - def __contains__(self, key): - try: - self[key] - except KeyError: - return False - else: - return True - - def keys(self): - return KeysView(self) - - def items(self): - return ItemsView(self) - - def values(self): - return ValuesView(self) - - def __eq__(self, other): - if not isinstance(other, Mapping): - return NotImplemented - return dict(self.items()) == dict(other.items()) - - def __ne__(self, other): - return not (self == other) - - -class MappingView(Sized): - - def __init__(self, mapping): - self._mapping = mapping - - def __len__(self): - return len(self._mapping) - - def __repr__(self): - return '{0.__class__.__name__}({0._mapping!r})'.format(self) - - -class KeysView(MappingView, Set): - - @classmethod - def _from_iterable(self, it): - return set(it) - - def __contains__(self, key): - return key in self._mapping - - def __iter__(self): - for key in self._mapping: - yield key - -KeysView.register(dict_keys) - - -class ItemsView(MappingView, Set): - - @classmethod - def _from_iterable(self, it): - return set(it) - - def __contains__(self, item): - key, value = item - try: - v = self._mapping[key] - 
except KeyError: - return False - else: - return v == value - - def __iter__(self): - for key in self._mapping: - yield (key, self._mapping[key]) - -ItemsView.register(dict_items) - - -class ValuesView(MappingView): - - def __contains__(self, value): - for key in self._mapping: - if value == self._mapping[key]: - return True - return False - - def __iter__(self): - for key in self._mapping: - yield self._mapping[key] - -ValuesView.register(dict_values) - - -class MutableMapping(Mapping): - - @abstractmethod - def __setitem__(self, key, value): - raise KeyError - - @abstractmethod - def __delitem__(self, key): - raise KeyError - - __marker = object() - - def pop(self, key, default=__marker): - try: - value = self[key] - except KeyError: - if default is self.__marker: - raise - return default - else: - del self[key] - return value - - def popitem(self): - try: - key = next(iter(self)) - except StopIteration: - raise KeyError - value = self[key] - del self[key] - return key, value - - def clear(self): - try: - while True: - self.popitem() - except KeyError: - pass - - def update(*args, **kwds): - if len(args) > 2: - raise TypeError("update() takes at most 2 positional " - "arguments ({} given)".format(len(args))) - elif not args: - raise TypeError("update() takes at least 1 argument (0 given)") - self = args[0] - other = args[1] if len(args) >= 2 else () - - if isinstance(other, Mapping): - for key in other: - self[key] = other[key] - elif hasattr(other, "keys"): - for key in other.keys(): - self[key] = other[key] - else: - for key, value in other: - self[key] = value - for key, value in kwds.items(): - self[key] = value - - def setdefault(self, key, default=None): - try: - return self[key] - except KeyError: - self[key] = default - return default - -MutableMapping.register(dict) - - -### SEQUENCES ### - - -class Sequence(Sized, Iterable, Container): - - """All the operations on a read-only sequence. 
- - Concrete subclasses must override __new__ or __init__, - __getitem__, and __len__. - """ - - @abstractmethod - def __getitem__(self, index): - raise IndexError - - def __iter__(self): - i = 0 - try: - while True: - v = self[i] - yield v - i += 1 - except IndexError: - return - - def __contains__(self, value): - for v in self: - if v == value: - return True - return False - - def __reversed__(self): - for i in reversed(range(len(self))): - yield self[i] - - def index(self, value): - for i, v in enumerate(self): - if v == value: - return i - raise ValueError - - def count(self, value): - return sum(1 for v in self if v == value) - -Sequence.register(tuple) -Sequence.register(str) -Sequence.register(range) - - -class ByteString(Sequence): - - """This unifies bytes and bytearray. - - XXX Should add all their methods. - """ - -ByteString.register(bytes) -ByteString.register(bytearray) - - -class MutableSequence(Sequence): - - @abstractmethod - def __setitem__(self, index, value): - raise IndexError - - @abstractmethod - def __delitem__(self, index): - raise IndexError - - @abstractmethod - def insert(self, index, value): - raise IndexError - - def append(self, value): - self.insert(len(self), value) - - def reverse(self): - n = len(self) - for i in range(n//2): - self[i], self[n-i-1] = self[n-i-1], self[i] - - def extend(self, values): - for v in values: - self.append(v) - - def pop(self, index=-1): - v = self[index] - del self[index] - return v - - def remove(self, value): - del self[self.index(value)] - - def __iadd__(self, values): - self.extend(values) - return self - -MutableSequence.register(list) -MutableSequence.register(bytearray) # Multiply inheriting, see ByteString diff --git a/lib-python/3.2/_compat_pickle.py b/lib-python/3.2/_compat_pickle.py deleted file mode 100644 --- a/lib-python/3.2/_compat_pickle.py +++ /dev/null @@ -1,81 +0,0 @@ -# This module is used to map the old Python 2 names to the new names used in -# Python 3 for the pickle module. 
This needed to make pickle streams -# generated with Python 2 loadable by Python 3. - -# This is a copy of lib2to3.fixes.fix_imports.MAPPING. We cannot import -# lib2to3 and use the mapping defined there, because lib2to3 uses pickle. -# Thus, this could cause the module to be imported recursively. -IMPORT_MAPPING = { - 'StringIO': 'io', - 'cStringIO': 'io', - 'cPickle': 'pickle', - '__builtin__' : 'builtins', - 'copy_reg': 'copyreg', - 'Queue': 'queue', - 'SocketServer': 'socketserver', - 'ConfigParser': 'configparser', - 'repr': 'reprlib', - 'FileDialog': 'tkinter.filedialog', - 'tkFileDialog': 'tkinter.filedialog', - 'SimpleDialog': 'tkinter.simpledialog', - 'tkSimpleDialog': 'tkinter.simpledialog', - 'tkColorChooser': 'tkinter.colorchooser', - 'tkCommonDialog': 'tkinter.commondialog', - 'Dialog': 'tkinter.dialog', - 'Tkdnd': 'tkinter.dnd', - 'tkFont': 'tkinter.font', - 'tkMessageBox': 'tkinter.messagebox', - 'ScrolledText': 'tkinter.scrolledtext', - 'Tkconstants': 'tkinter.constants', - 'Tix': 'tkinter.tix', - 'ttk': 'tkinter.ttk', - 'Tkinter': 'tkinter', - 'markupbase': '_markupbase', - '_winreg': 'winreg', - 'thread': '_thread', - 'dummy_thread': '_dummy_thread', - 'dbhash': 'dbm.bsd', - 'dumbdbm': 'dbm.dumb', - 'dbm': 'dbm.ndbm', - 'gdbm': 'dbm.gnu', - 'xmlrpclib': 'xmlrpc.client', - 'DocXMLRPCServer': 'xmlrpc.server', - 'SimpleXMLRPCServer': 'xmlrpc.server', - 'httplib': 'http.client', - 'htmlentitydefs' : 'html.entities', - 'HTMLParser' : 'html.parser', - 'Cookie': 'http.cookies', - 'cookielib': 'http.cookiejar', - 'BaseHTTPServer': 'http.server', - 'SimpleHTTPServer': 'http.server', - 'CGIHTTPServer': 'http.server', - 'test.test_support': 'test.support', - 'commands': 'subprocess', - 'UserString' : 'collections', - 'UserList' : 'collections', - 'urlparse' : 'urllib.parse', - 'robotparser' : 'urllib.robotparser', - 'whichdb': 'dbm', - 'anydbm': 'dbm' -} - - -# This contains rename rules that are easy to handle. We ignore the more -# complex stuff (e.g. 
mapping the names in the urllib and types modules). -# These rules should be run before import names are fixed. -NAME_MAPPING = { - ('__builtin__', 'xrange'): ('builtins', 'range'), - ('__builtin__', 'reduce'): ('functools', 'reduce'), - ('__builtin__', 'intern'): ('sys', 'intern'), - ('__builtin__', 'unichr'): ('builtins', 'chr'), - ('__builtin__', 'basestring'): ('builtins', 'str'), - ('__builtin__', 'long'): ('builtins', 'int'), - ('itertools', 'izip'): ('builtins', 'zip'), - ('itertools', 'imap'): ('builtins', 'map'), - ('itertools', 'ifilter'): ('builtins', 'filter'), - ('itertools', 'ifilterfalse'): ('itertools', 'filterfalse'), -} - -# Same, but for 3.x to 2.x -REVERSE_IMPORT_MAPPING = dict((v, k) for (k, v) in IMPORT_MAPPING.items()) -REVERSE_NAME_MAPPING = dict((v, k) for (k, v) in NAME_MAPPING.items()) diff --git a/lib-python/3.2/_dummy_thread.py b/lib-python/3.2/_dummy_thread.py deleted file mode 100644 --- a/lib-python/3.2/_dummy_thread.py +++ /dev/null @@ -1,155 +0,0 @@ -"""Drop-in replacement for the thread module. - -Meant to be used as a brain-dead substitute so that threaded code does -not need to be rewritten for when the thread module is not present. - -Suggested usage is:: - - try: - import _thread - except ImportError: - import _dummy_thread as _thread - -""" -# Exports only things specified by thread documentation; -# skipping obsolete synonyms allocate(), start_new(), exit_thread(). -__all__ = ['error', 'start_new_thread', 'exit', 'get_ident', 'allocate_lock', - 'interrupt_main', 'LockType'] - -# A dummy value -TIMEOUT_MAX = 2**31 - -# NOTE: this module can be imported early in the extension building process, -# and so top level imports of other modules should be avoided. Instead, all -# imports are done when needed on a function-by-function basis. Since threads -# are disabled, the import lock should not be an issue anyway (??). 
- -class error(Exception): - """Dummy implementation of _thread.error.""" - - def __init__(self, *args): - self.args = args - -def start_new_thread(function, args, kwargs={}): - """Dummy implementation of _thread.start_new_thread(). - - Compatibility is maintained by making sure that ``args`` is a - tuple and ``kwargs`` is a dictionary. If an exception is raised - and it is SystemExit (which can be done by _thread.exit()) it is - caught and nothing is done; all other exceptions are printed out - by using traceback.print_exc(). - - If the executed function calls interrupt_main the KeyboardInterrupt will be - raised when the function returns. - - """ - if type(args) != type(tuple()): - raise TypeError("2nd arg must be a tuple") - if type(kwargs) != type(dict()): - raise TypeError("3rd arg must be a dict") - global _main - _main = False - try: - function(*args, **kwargs) - except SystemExit: - pass - except: - import traceback - traceback.print_exc() - _main = True - global _interrupt - if _interrupt: - _interrupt = False - raise KeyboardInterrupt - -def exit(): - """Dummy implementation of _thread.exit().""" - raise SystemExit - -def get_ident(): - """Dummy implementation of _thread.get_ident(). - - Since this module should only be used when _threadmodule is not - available, it is safe to assume that the current process is the - only thread. Thus a constant can be safely returned. - """ - return -1 - -def allocate_lock(): - """Dummy implementation of _thread.allocate_lock().""" - return LockType() - -def stack_size(size=None): - """Dummy implementation of _thread.stack_size().""" - if size is not None: - raise error("setting thread stack size not supported") - return 0 - -class LockType(object): - """Class implementing dummy implementation of _thread.LockType. - - Compatibility is maintained by maintaining self.locked_status - which is a boolean that stores the state of the lock. 
Pickling of - the lock, though, should not be done since if the _thread module is - then used with an unpickled ``lock()`` from here problems could - occur from this class not having atomic methods. - - """ - - def __init__(self): - self.locked_status = False - - def acquire(self, waitflag=None, timeout=-1): - """Dummy implementation of acquire(). - - For blocking calls, self.locked_status is automatically set to - True and returned appropriately based on value of - ``waitflag``. If it is non-blocking, then the value is - actually checked and not set if it is already acquired. This - is all done so that threading.Condition's assert statements - aren't triggered and throw a little fit. - - """ - if waitflag is None or waitflag: - self.locked_status = True - return True - else: - if not self.locked_status: - self.locked_status = True - return True - else: - if timeout > 0: - import time - time.sleep(timeout) - return False - - __enter__ = acquire - - def __exit__(self, typ, val, tb): - self.release() - - def release(self): - """Release the dummy lock.""" - # XXX Perhaps shouldn't actually bother to test? Could lead - # to problems for complex, threaded code. - if not self.locked_status: - raise error - self.locked_status = False - return True - - def locked(self): - return self.locked_status - -# Used to signal that interrupt_main was called in a "thread" -_interrupt = False -# True when not executing in a "thread" -_main = True - -def interrupt_main(): - """Set _interrupt flag to True to have start_new_thread raise - KeyboardInterrupt upon exiting.""" - if _main: - raise KeyboardInterrupt - else: - global _interrupt - _interrupt = True diff --git a/lib-python/3.2/_markupbase.py b/lib-python/3.2/_markupbase.py deleted file mode 100644 --- a/lib-python/3.2/_markupbase.py +++ /dev/null @@ -1,395 +0,0 @@ -"""Shared support for scanning document type declarations in HTML and XHTML. - -This module is used as a foundation for the html.parser module. 
It has no -documented public API and should not be used directly. - -""" - -import re - -_declname_match = re.compile(r'[a-zA-Z][-_.a-zA-Z0-9]*\s*').match -_declstringlit_match = re.compile(r'(\'[^\']*\'|"[^"]*")\s*').match -_commentclose = re.compile(r'--\s*>') -_markedsectionclose = re.compile(r']\s*]\s*>') - -# An analysis of the MS-Word extensions is available at -# http://www.planetpublish.com/xmlarena/xap/Thursday/WordtoXML.pdf - -_msmarkedsectionclose = re.compile(r']\s*>') - -del re - - -class ParserBase: - """Parser base class which provides some common support methods used - by the SGML/HTML and XHTML parsers.""" - - def __init__(self): - if self.__class__ is ParserBase: - raise RuntimeError( - "_markupbase.ParserBase must be subclassed") - - def error(self, message): - raise NotImplementedError( - "subclasses of ParserBase must override error()") - - def reset(self): - self.lineno = 1 - self.offset = 0 - - def getpos(self): - """Return current line number and offset.""" - return self.lineno, self.offset - - # Internal -- update line number and offset. This should be - # called for each piece of data exactly once, in order -- in other - # words the concatenation of all the input strings to this - # function should be exactly the entire input. - def updatepos(self, i, j): - if i >= j: - return j - rawdata = self.rawdata - nlines = rawdata.count("\n", i, j) - if nlines: - self.lineno = self.lineno + nlines - pos = rawdata.rindex("\n", i, j) # Should not fail - self.offset = j-(pos+1) - else: - self.offset = self.offset + j-i - return j - - _decl_otherchars = '' - - # Internal -- parse declaration (for use by subclasses). - def parse_declaration(self, i): - # This is some sort of declaration; in "HTML as - # deployed," this should only be the document type - # declaration (""). 
- # ISO 8879:1986, however, has more complex - # declaration syntax for elements in , including: - # --comment-- - # [marked section] - # name in the following list: ENTITY, DOCTYPE, ELEMENT, - # ATTLIST, NOTATION, SHORTREF, USEMAP, - # LINKTYPE, LINK, IDLINK, USELINK, SYSTEM - rawdata = self.rawdata - j = i + 2 - assert rawdata[i:j] == "": - # the empty comment - return j + 1 - if rawdata[j:j+1] in ("-", ""): - # Start of comment followed by buffer boundary, - # or just a buffer boundary. - return -1 - # A simple, practical version could look like: ((name|stringlit) S*) + '>' - n = len(rawdata) - if rawdata[j:j+2] == '--': #comment - # Locate --.*-- as the body of the comment - return self.parse_comment(i) - elif rawdata[j] == '[': #marked section - # Locate [statusWord [...arbitrary SGML...]] as the body of the marked section - # Where statusWord is one of TEMP, CDATA, IGNORE, INCLUDE, RCDATA - # Note that this is extended by Microsoft Office "Save as Web" function - # to include [if...] and [endif]. - return self.parse_marked_section(i) - else: #all other declaration elements - decltype, j = self._scan_name(j, i) - if j < 0: - return j - if decltype == "doctype": - self._decl_otherchars = '' - while j < n: - c = rawdata[j] - if c == ">": - # end of declaration syntax - data = rawdata[i+2:j] - if decltype == "doctype": - self.handle_decl(data) - else: - # According to the HTML5 specs sections "8.2.4.44 Bogus - # comment state" and "8.2.4.45 Markup declaration open - # state", a comment token should be emitted. - # Calling unknown_decl provides more flexibility though. 
- self.unknown_decl(data) - return j + 1 - if c in "\"'": - m = _declstringlit_match(rawdata, j) - if not m: - return -1 # incomplete - j = m.end() - elif c in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ": - name, j = self._scan_name(j, i) - elif c in self._decl_otherchars: - j = j + 1 - elif c == "[": - # this could be handled in a separate doctype parser - if decltype == "doctype": - j = self._parse_doctype_subset(j + 1, i) - elif decltype in {"attlist", "linktype", "link", "element"}: - # must tolerate []'d groups in a content model in an element declaration - # also in data attribute specifications of attlist declaration - # also link type declaration subsets in linktype declarations - # also link attribute specification lists in link declarations - self.error("unsupported '[' char in %s declaration" % decltype) - else: - self.error("unexpected '[' char in declaration") - else: - self.error( - "unexpected %r char in declaration" % rawdata[j]) - if j < 0: - return j - return -1 # incomplete - - # Internal -- parse a marked section - # Override this to handle MS-word extension syntax content - def parse_marked_section(self, i, report=1): - rawdata= self.rawdata - assert rawdata[i:i+3] == ' ending - match= _markedsectionclose.search(rawdata, i+3) - elif sectName in {"if", "else", "endif"}: - # look for MS Office ]> ending - match= _msmarkedsectionclose.search(rawdata, i+3) - else: - self.error('unknown status keyword %r in marked section' % rawdata[i+3:j]) - if not match: - return -1 - if report: - j = match.start(0) - self.unknown_decl(rawdata[i+3: j]) - return match.end(0) - - # Internal -- parse comment, return length or -1 if not terminated - def parse_comment(self, i, report=1): - rawdata = self.rawdata - if rawdata[i:i+4] != '